
The LLM would never have access to any API keys to send to the attacker. You send text to the LLM along with the prompt and it sends back JSON. You then send the JSON to your traditionally coded API. It’s not like your API has a function “returnAPIKeys()”.

As for the LLM call, you are just sending your user's text to a function that calls the LLM and reading back the response.

If it doesn't produce the JSON you expect, your traditionally coded API call is going to fail.

I keep wondering how developers are using LLMs in production without this simple design pattern.
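A minimal sketch of the pattern described above, assuming a hypothetical expected schema (`action`/`quantity` fields are made up for illustration): the LLM only ever sees user text and returns JSON; the traditionally coded side (which holds the API keys) validates that JSON before acting on it, so malformed or malicious output is simply rejected.

```python
import json

# Hypothetical schema the traditionally coded API expects back from the LLM.
EXPECTED_FIELDS = {"action": str, "quantity": int}

def validate_llm_json(raw: str) -> dict:
    """Parse the LLM's reply and reject anything that is off-schema."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"LLM did not return JSON: {exc}") from exc
    if set(data) != set(EXPECTED_FIELDS):
        raise ValueError(f"unexpected fields: {sorted(data)}")
    for field, typ in EXPECTED_FIELDS.items():
        if not isinstance(data[field], typ):
            raise ValueError(f"{field!r} is not a {typ.__name__}")
    return data

# Simulated LLM replies -- in production these would come back from the model.
ok = validate_llm_json('{"action": "reorder", "quantity": 3}')
print(ok["action"])  # reorder

try:
    # Extra/unexpected fields are rejected before reaching the real API.
    validate_llm_json('{"action": "reorder", "apiKey": "anything"}')
except ValueError as exc:
    print("rejected:", exc)
```

Only after `validate_llm_json` succeeds would the validated dict be handed to the real API, which never exposes its credentials to the LLM in the first place.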




Oh man, this made me do a quick search on GitHub. Looks like I picked the wrong week to stop quoting Zucker brothers films.



