What doesn’t help the community is that terms like “hallucinate” and “cite sources” still don’t capture what the LLM is actually doing. LLMs are pre-trained to do one thing, trained to do another, and perhaps fine-tuned for yet another. Do they hallucinate? From our perspective they do, because we can tell true from false; but from the tool’s perspective, it’s “just interpolating the text crammed inside of it”.