Truncated responses from the PaLM 2 model

When we send text whose length is near the input token limit, the output is truncated.
The same thing happens in Vertex AI Studio.
We expected the input and output token limits to be independent. What could be the possible reasons for this behaviour?
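
For context, here is a minimal sketch of the kind of call we make via the Vertex AI Python SDK. The project ID, model name, file name, and parameter values are placeholders rather than our exact setup; the point is that we set `max_output_tokens` explicitly and check the prompt size before sending:

```python
import vertexai
from vertexai.language_models import TextGenerationModel

# Placeholder project and region -- substitute your own.
vertexai.init(project="my-project", location="us-central1")

model = TextGenerationModel.from_pretrained("text-bison")

# Hypothetical input that is close to the model's input token limit.
long_prompt = "Summarize the following document:\n" + open("doc.txt").read()

# count_tokens (available in recent SDK versions) confirms how close
# the prompt is to the input limit before the request is sent.
print(model.count_tokens([long_prompt]).total_tokens)

response = model.predict(
    long_prompt,
    max_output_tokens=1024,  # explicit output cap, set well above what we need
    temperature=0.2,
)
print(response.text)
```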


Hi. The same thing happens to me, and it doesn't seem to happen with all documents.
Did anyone else face this issue? If so, how did you solve it?
I'm interested in hearing more about this.