That's the whole point of LlamaIndex? I can connect my LLM to any node or context I want, sync it to a real-time data flow like an API, and it can learn...? How is that different from a human?
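To make that concrete, here's a toy sketch of the retrieve-then-augment loop a framework like LlamaIndex automates. This is not LlamaIndex's actual API; real pipelines use vector embeddings for scoring, and plain word overlap stands in for them here. All function names are made up for illustration.

```python
def score(query: str, chunk: str) -> int:
    """Count shared words between query and chunk (crude stand-in for embedding similarity)."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the top-k stored chunks by overlap score."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    """Inject retrieved context ahead of the question, which is the core RAG move."""
    context = "\n".join(retrieve(query, chunks))
    return f"Context:\n{context}\n\nQuestion: {query}"

# The chunk store could be refreshed from a live API feed on a schedule;
# that's the "syncing to real-time data" part of the argument above.
chunks = [
    "Optimus is a humanoid robot program.",
    "LlamaIndex connects LLMs to external data.",
    "Context windows limit how much a model sees at once.",
]
print(build_prompt("How does LlamaIndex connect an LLM to data?", chunks))
```

Note the model itself never updates its weights here: "learning" in this setup just means the prompt gets fresher context, which is part of why it isn't equivalent to human memory.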
Once Optimus is up and working by the 100k+, the spatial problems will be solved. We just don't have enough spatial-awareness data, or a way for the LLM to learn about the physical world.
A true equivalent to human memory would require something like a multimodal trillion-token context window.
RAG is just not going to cut it, and if anything it will exacerbate problems with hallucinations.