The LLM would never have access to any API keys to send to the attacker. You send text to the LLM along with the prompt, and it sends back JSON. You then pass that JSON to your traditionally coded API. It’s not as if your API has a returnAPIKeys() function.
As far as the LLM call goes, you are just sending your user’s text to another function that calls the LLM and reading the response back. If it didn’t produce the JSON you expected, your traditionally coded API is simply going to fail.
I keep wondering how developers are using LLMs in production without this simple design pattern.
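A minimal sketch of the pattern described above, assuming a hypothetical call_llm stub in place of a real model call (all names here are illustrative, not from any specific framework): the LLM only ever sees text, and the traditional code validates the returned JSON before acting on it, so malformed or unexpected output fails fast instead of reaching anything sensitive.

```python
import json

ALLOWED_INTENTS = {"lookup", "create"}

def call_llm(user_text: str) -> str:
    # Stub standing in for the real LLM API call. In production this
    # would send the user's text plus a prompt and return the model's
    # raw response; the model never sees keys or internal functions.
    return json.dumps({"intent": "lookup", "query": user_text})

def handle_request(user_text: str) -> dict:
    raw = call_llm(user_text)
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        # Not valid JSON at all: the traditionally coded layer fails here.
        raise ValueError("LLM did not return valid JSON")
    # Validate against the exact shape the downstream API expects;
    # anything unexpected is rejected before touching real logic.
    if set(data) != {"intent", "query"} or data["intent"] not in ALLOWED_INTENTS:
        raise ValueError("LLM returned unexpected JSON shape")
    return data
```

The key design choice is that the LLM’s output is treated as untrusted input, exactly like user input, and is checked by ordinary deterministic code before anything acts on it.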