Yesterday I was testing Vertex AI fine-tuning with the text-bison model, using a 3 KB .jsonl file (10 examples total). The first attempt took about 3 hours and failed at the very last step, while creating an endpoint.
I tried again with the Compute Engine API service account; that run took around 2-3 hours and succeeded. This was with the same 3 KB .jsonl file.
When I checked this morning, I noticed I had been charged $254 for this.
I would like to know why. How is this possible?
Considering my original dataset will have 14,000+ examples, I would like to understand this pricing, because it does not seem to be stated anywhere.
I just found out that the following GPU was used, and that is where the majority of the cost is coming from.
I never chose it, and it is still ridiculous for a 10-example, 3 KB .jsonl file.
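For a rough sanity check, a back-of-the-envelope estimate lines up with a charge of this magnitude. The GPU count, hourly rate, and run durations below are assumptions for illustration, not billed figures:

```python
# Back-of-the-envelope cost estimate for the two tuning runs.
# ALL numbers here are ASSUMPTIONS, not actual billed rates.
gpu_count = 8               # assumed: tuning pipelines often run on multi-GPU machines
price_per_gpu_hour = 6.00   # assumed approximate on-demand USD rate for an A100 80GB
hours = 3 + 2.5             # failed run (~3 h) + successful run (~2.5 h)

estimated_cost = gpu_count * price_per_gpu_hour * hours
print(f"Estimated accelerator cost: ${estimated_cost:.2f}")  # -> $264.00
```

Note that the accelerators bill for the whole pipeline run regardless of dataset size, which is why a 3 KB file and a 14,000-example file can cost a similar amount per hour.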
OK, here is the problem... When you set up fine-tuning with the text-bison model, the accelerator type is chosen automatically, as shown below.
Here are the two accelerator definitions (a sketch of setting the type explicitly via the SDK follows this list):
NVIDIA A100 80GB GPU
TPU 64 Core v3 Pod
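If you would rather control this than rely on the default, recent versions of the Vertex AI SDK for Python expose an accelerator_type argument on tune_model. Treat the exact parameter name and accepted values as assumptions and verify them against the SDK version you have installed; the project ID and GCS path below are hypothetical. A minimal sketch:

```python
# Minimal sketch: tuning text-bison while pinning the accelerator type.
# ASSUMPTION: your installed vertexai SDK supports accelerator_type
# ("TPU" or "GPU"); check your version's docs before relying on it.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-project", location="us-central1")  # hypothetical project

model = TextGenerationModel.from_pretrained("text-bison@001")
model.tune_model(
    training_data="gs://my-bucket/train.jsonl",  # hypothetical GCS path to the .jsonl
    train_steps=100,
    tuning_job_location="europe-west4",   # region where the tuning pipeline runs
    tuned_model_location="us-central1",
    accelerator_type="TPU",               # assumed parameter; avoids the GPU default
)
```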