A model which outputs things that OpenAI deems unsafe. Try getting text-davinci-003 to complete instructions about building Molotov cocktails and compare that with davinci-002.
Running it on the text that text-davinci-003 generated didn't get flagged either, though the violence score went up to '"violence": 0.01034669'.
Note that they will be removing access [1] to text-davinci-003. They want use cases on text-davinci-003 to move to either gpt-3.5-turbo-instruct or davinci-002, both of which have trouble with unsafe inputs.