I started using batch prediction with the text-bison model. I am currently running into a problem where results are sometimes cut off in the middle of the text before the max_output_tokens limit is reached (it is set to 1024, but the response is cut off after roughly 300 characters).
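
For context, here is a minimal sketch of the kind of batch prediction call involved, using the Vertex AI Python SDK's `batch_predict` method (the project ID and GCS paths below are placeholders, not my actual values):

```python
import vertexai
from vertexai.language_models import TextGenerationModel

# Placeholder project and location.
vertexai.init(project="my-project", location="us-central1")

model = TextGenerationModel.from_pretrained("text-bison")

# Input is a JSONL file in GCS where each line is {"prompt": "..."}.
batch_job = model.batch_predict(
    dataset=["gs://my-bucket/prompts.jsonl"],
    destination_uri_prefix="gs://my-bucket/batch-output",
    model_parameters={
        "maxOutputTokens": 1024,  # the limit that is seemingly never reached
        "temperature": 0.2,
        "topP": 0.95,
        "topK": 40,
    },
)

print(batch_job.display_name)
print(batch_job.state)
```

The truncated outputs appear in the prediction results written under the destination prefix, well before anything close to 1024 tokens has been generated.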