Gemini 1.5 Pro API blocking prompt help

Brotis
New Member

Hey everyone! I'm a Data Scientist for my day job, but I'm a writer as a hobby. I wanted to use 1.5 Pro's incredible context length to help me edit my novel. I gave it the novel in an initial prompt via the API in my Python environment and asked it for feedback; this worked great and the feedback was awesome.
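Roughly what I'm doing, simplified (the model name, key, and file handling here are placeholders for my actual script):

    import google.generativeai as genai

    genai.configure(api_key="MY_API_KEY")  # placeholder
    model = genai.GenerativeModel("gemini-1.5-pro")

    # Read the whole manuscript and send it in a single prompt,
    # relying on the long context window.
    with open("novel.txt", encoding="utf-8") as f:
        novel_text = f.read()

    response = model.generate_content(
        ["Please give me editorial feedback on this novel:", novel_text]
    )
    print(response.text)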

But when I tried to run the prompt again with different wording, it was blocked, and now everything I send through the Gemini API is blocked. It won't let me prompt with anything; even just saying "Hello" is blocked now.

I created a new API key, and that worked for a moment, but when I tried to edit the novel again it was blocked once more. How is this supposed to be useful if it won't edit fictional content that contains any violence? I've tried editing the safety settings and turning them all off, but I'm still blocked. There's nothing to help me understand which part of my story is triggering the block or what I can do about it. I'd just cut that part of the novel and try again, but I have no idea what is actually tripping the filter, and nothing in my novel seems "bad" except for a basic fight sequence. It would be rated PG in a movie.

I've checked response.prompt_feedback and candidate.safety_ratings. All I get is a block reason of OTHER (which sometimes prints as its enum value, 2). It's literally just a fight scene in one of my chapters; there is no blood or even cursing. I can't believe this is what's causing the block. How could anyone use this model for anything at all if it's that sensitive?
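This is roughly how I'm checking it (simplified; the chapter text is a placeholder here):

    import google.generativeai as genai

    genai.configure(api_key="MY_API_KEY")  # placeholder
    model = genai.GenerativeModel("gemini-1.5-pro")

    chapter_text = "..."  # the chapter with the fight scene
    response = model.generate_content(chapter_text)

    # When the prompt itself is blocked there are no usable candidates and
    # response.text raises, so the prompt feedback is all I have to go on.
    print(response.prompt_feedback)  # shows block_reason: OTHER (enum value 2)
    for candidate in response.candidates:
        print(candidate.finish_reason)
        for rating in candidate.safety_ratings:
            print(rating.category, rating.probability)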

 

2 REPLIES

Same lol, I literally turned every block to None, yet still got blocked. What the hell. What's even the point of this setting? No wonder the trend is open-weight models now.

Hi @Brotis

Thank you for joining our community.

I hear your frustration with Gemini 1.5 Pro's blocking mechanism. It sounds like you're right about your fight sequence triggering the blocks. This aligns with the safety guidelines outlined in the Generative AI on Vertex AI documentation, particularly the sections on probability and severity scores.

Content can have a low probability score and a high severity score, or a high probability score and a low severity score. For example, consider the following two sentences:

  1. The robot punched me.
  2. The robot slashed me up.

The first sentence might result in a higher probability of being unsafe, while the second might have a higher severity in terms of violence. Because of this, it's important to carefully test and consider the appropriate level of blocking required to support your key use cases while also minimizing harm to end users.
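If you are calling the model through Vertex AI, a minimal sketch of reading both scores might look like this (this assumes the vertexai Python SDK and an already configured project; the project ID and region are placeholders):

    import vertexai
    from vertexai.generative_models import GenerativeModel

    vertexai.init(project="my-project", location="us-central1")  # placeholders
    model = GenerativeModel("gemini-1.5-pro")

    response = model.generate_content("The robot slashed me up.")

    # Each safety rating carries both a probability score and a severity score,
    # which you can compare against your own thresholds.
    for rating in response.candidates[0].safety_ratings:
        print(rating.category, rating.probability_score, rating.severity_score)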


The good news is that the documentation says you can adjust safety settings for each request (there's a short code sketch of this at the end of this reply). However, the "BLOCK_NONE" setting requires either allowlist approval or switching to monthly invoiced billing.

The "BLOCK_NONE" safety setting removes automated response blocking (for the safety attributes described under Safety Settings) and allow you to configure your own safety guidelines with the scores that are returned. In order to access the "BLOCK_NONE" setting, you have two options:

(1) You can apply for the allowlist through the Gemini safety filter allowlist form, or

(2) You can switch your account type to monthly invoiced billing (see the GCP invoiced billing reference).
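
Once you have access, a per-request safety configuration might look like the sketch below. This assumes the google-generativeai Python SDK, since you mentioned using an API key; the Vertex AI SDK has an equivalent SafetySetting type.

    import google.generativeai as genai
    from google.generativeai.types import HarmBlockThreshold, HarmCategory

    genai.configure(api_key="MY_API_KEY")  # placeholder
    model = genai.GenerativeModel("gemini-1.5-pro")

    # Relax blocking for each harm category on this request only.
    safety_settings = {
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
    }

    response = model.generate_content(
        "...your chapter text...",
        safety_settings=safety_settings,
    )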


I hope I was able to provide you with useful insights.