Was there any concern about giving the LLM access to this return data? Reading your article, I wondered whether there could be an approach that limits the LLM to running the function calls without ever seeing the full output, e.g., seeing only the start of a JSON string with a status like “success” or “not found”. But I guess it would be complicated to have a continuous conversation that way.
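Something like this rough Ruby sketch is what I had in mind; the data, schema, and helper names (`lookup_contact`, `redacted_tool_result`) are made up for illustration, not taken from the article:

```ruby
require "json"

# Made-up contact data for the example (not from the article).
CONTACTS = [
  { name: "Ada Lovelace", email: "ada@example.com" },
  { name: "Alan Turing",  email: "alan@example.com" }
].freeze

# The real tool call runs app-side; its full result stays out of the prompt.
def lookup_contact(name)
  contact = CONTACTS.find { |c| c[:name].downcase.include?(name.downcase) }
  contact ? { status: "success", contact: contact } : { status: "not_found" }
end

# Only a status plus the first few bytes of the JSON go back to the LLM.
def redacted_tool_result(result)
  { status: result[:status], preview: JSON.generate(result)[0, 20] }
end

full_result = lookup_contact("ada")
puts "kept app-side:   #{full_result.inspect}"
puts "sent to the LLM: #{redacted_tool_result(full_result).inspect}"
```

The app would still hand the full result to the user (or the UI); the model only learns whether the call worked, which keeps the conversation going without exposing the data itself.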
If all this does is give you the data from a contact API, why not just let the users directly interact with the API? The LLM is just extra bloat in this case.
Surely a fuzzy search by name or some other field is a much better UI for this.
- Made a RAG in ~50 lines of Ruby (practical and efficient)
- Perform authorization on chunks in 2 lines of code (!!) (rough sketch after this list)
- Offload retrieval to Algolia. Since a RAG is essentially LLM + retriever, the retriever typically ends up being most of the work, so using an existing search tool (rather than setting up a dedicated vector DB) can save a lot of time and hassle when building a RAG.
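On the second point, here is a minimal sketch of the idea, assuming each chunk carries a `visible_to` field and with the retriever stubbed out (the article uses Algolia; the schema and helper names here are guesses, not its actual code):

```ruby
# Chunk shape and retriever are assumptions made for this sketch.
Chunk = Struct.new(:text, :visible_to, keyword_init: true)

# Stand-in for the real retriever (Algolia, a vector DB, ...).
def retrieve_chunks(_query)
  [
    Chunk.new(text: "Org-wide holiday calendar", visible_to: ["*"]),
    Chunk.new(text: "Salary bands (HR only)",    visible_to: ["hr"])
  ]
end

def build_context(query, user_roles)
  chunks = retrieve_chunks(query)
  # The authorization step really is about two lines: drop any chunk the
  # current user may not see before it goes into the prompt.
  allowed = chunks.select { |c| c.visible_to.include?("*") || (c.visible_to & user_roles).any? }
  allowed.map(&:text).join("\n---\n")
end

puts build_context("time off", ["engineering"])
# => only the org-wide chunk; the HR-only chunk never reaches the LLM
```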