Retrieval-Augmented Generation

Enhance AI apps with RAG

In the age of AI, large language models (LLMs) like OpenAI's GPT let you build chatbots and other AI apps that answer questions in natural language. But these general-purpose LLMs lack the specialized context relevant to your use case. Retrieval-Augmented Generation (RAG) is a technique for going beyond what generic AI apps can do by pulling domain-, app-, or user-specific data into the generation pipeline.
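At a high level, a RAG pipeline retrieves documents relevant to a question and hands that context to the LLM alongside the question itself. The TypeScript sketch below shows that shape; the `retrieveContext` helper, the `gpt-4o-mini` model name, and the prompt wording are illustrative assumptions rather than a prescribed implementation.

```typescript
import OpenAI from "openai";

const openai = new OpenAI();

// Placeholder retrieval step: in a real app this would query a vector
// database (for example, Convex's built-in vector search) with an
// embedding of the question. Here it is just a stub.
async function retrieveContext(question: string): Promise<string[]> {
  return [];
}

export async function answerWithRag(question: string): Promise<string> {
  // 1. Retrieve domain-specific documents relevant to the question.
  const context = await retrieveContext(question);

  // 2. Augment the prompt with the retrieved context and generate.
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // assumed model name
    messages: [
      {
        role: "system",
        content: `Answer using only this context:\n${context.join("\n---\n")}`,
      },
      { role: "user", content: question },
    ],
  });

  return completion.choices[0].message.content ?? "";
}
```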

Convex supports several strategies for building tailored AI experiences that use your Convex data for RAG. Whether you use Convex's built-in vector database or popular tools like LangChain, you can build bespoke AI interfaces that augment standard LLM outputs with additional context retrieved from your Convex data, as sketched below.
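As a minimal sketch of the built-in approach, the snippets below define a table with a vector index and a Convex action that looks up the documents closest to a query embedding. The `documents` table, its field names, the `by_embedding` index, the 1536-dimension setting, and the result limit are illustrative assumptions; adapt them to your own schema and embedding model.

```typescript
// convex/schema.ts — a table of documents with a vector index
// (table, field, and index names here are illustrative assumptions).
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";

export default defineSchema({
  documents: defineTable({
    text: v.string(),
    embedding: v.array(v.float64()),
  }).vectorIndex("by_embedding", {
    vectorField: "embedding",
    dimensions: 1536, // must match the embedding model you choose
  }),
});
```

```typescript
// convex/search.ts — an action that finds documents similar to a query embedding.
import { v } from "convex/values";
import { action } from "./_generated/server";

export const similarDocuments = action({
  args: { embedding: v.array(v.float64()) },
  handler: async (ctx, args) => {
    // Vector search returns the ids and scores of the closest matches;
    // load those documents and pass their text to the LLM as context.
    const results = await ctx.vectorSearch("documents", "by_embedding", {
      vector: args.embedding,
      limit: 5,
    });
    return results;
  },
});
```

The ids returned by the vector search can then be loaded in a query and their text folded into the prompt; if you prefer the LangChain stack, its retriever abstractions support an equivalent flow over the same data.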

[Screen grab: Retrieval-Augmented Generation in action]