A high-performance document question-answering service built with FastAPI and LangChain, implementing Retrieval-Augmented Generation (RAG) for intelligent document analysis.
The service lets users submit a document by URL and ask multiple questions about its content. Using a LangChain-based RAG pipeline, it returns accurate, context-aware answers by retrieving the relevant document sections and generating a response for each question.
The hackathon challenge was to build a RAG system that processes insurance policy documents and answers questions about them; submissions were evaluated through multiple test runs, each providing a document and a set of queries in the request.
The solution is a FastAPI service that accepts a document URL and a list of questions and uses a LangChain RAG pipeline to deliver accurate answers, served as a scalable, containerized API.
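A minimal sketch of what such a pipeline can look like with LangChain: fetch the document, split it into chunks, embed and index the chunks, then answer each question from the retrieved context. The module paths, model names, and chunk settings below are assumptions (LangChain packaging varies by version), not the project's actual code.

```python
# Hedged sketch of a LangChain RAG pipeline (assumed APIs and settings,
# not the project's actual implementation).
import tempfile

import requests
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter


def answer_questions(document_url: str, questions: list[str]) -> list[str]:
    # 1. Download the document and load it page by page.
    with tempfile.NamedTemporaryFile(suffix=".pdf", delete=False) as tmp:
        tmp.write(requests.get(document_url, timeout=30).content)
    pages = PyPDFLoader(tmp.name).load()

    # 2. Split into overlapping chunks so retrieval returns focused passages.
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150)
    chunks = splitter.split_documents(pages)

    # 3. Embed the chunks and index them in an in-memory vector store.
    retriever = FAISS.from_documents(chunks, OpenAIEmbeddings()).as_retriever(
        search_kwargs={"k": 4}
    )

    # 4. Answer each question using only the retrieved context.
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
    answers = []
    for question in questions:
        context = "\n\n".join(doc.page_content for doc in retriever.invoke(question))
        prompt = (
            "Answer the question using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}"
        )
        answers.append(llm.invoke(prompt).content)
    return answers
```

In the actual service this logic sits behind the dedicated document-processing and Q&A services described in the architecture note below.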
- Dynamic Document Processing: documents are fetched from the supplied URL and processed at request time.
- Multi-Question Support: multiple questions about a single document can be answered in one request.
- RESTful API: a single POST endpoint (`/hackrx/run`) accepts the document URL and question list.
- Advanced RAG Pipeline: LangChain-based retrieval and generation over the document's content.
- Dockerized Deployment: the application is containerized for portable, reproducible deployment.
- Full Pydantic Type-Safety: request and response bodies are validated with Pydantic models.
Architecture: A clean, modular FastAPI application with dedicated services for document processing and Q&A, ensuring scalability and maintainability.
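To make the request/response contract concrete, here is a hedged sketch of the API layer. The `documents` and `questions` fields mirror the curl example below, while the response model and its `answers` field are assumptions about the schema rather than the project's exact code.

```python
# Sketch of the API layer. Request field names follow the curl example below;
# the response schema ("answers") is an assumption.
from fastapi import FastAPI
from pydantic import BaseModel, HttpUrl


class RunRequest(BaseModel):
    documents: HttpUrl      # URL of the document to analyse
    questions: list[str]    # one or more questions about that document


class RunResponse(BaseModel):
    answers: list[str]      # one answer per question, in request order


app = FastAPI(title="Document Q&A RAG Service")


@app.post("/hackrx/run", response_model=RunResponse)
def run(request: RunRequest) -> RunResponse:
    # Delegate to the RAG pipeline (see the answer_questions sketch above).
    answers = answer_questions(str(request.documents), request.questions)
    return RunResponse(answers=answers)
```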
This project was developed for the HackRx 6.0 hackathon organized by Bajaj Finserv Health Limited.
My Role: I was the sole developer responsible for designing the architecture, implementing the RAG pipeline with LangChain, building the FastAPI service, and containerizing the application with Docker.
```bash
curl -X POST "https://hackrx-rag-app.onrender.com/hackrx/run" \
  -H "Content-Type: application/json" \
  -d '{
    "documents": "https://example.com/document.pdf",
    "questions": [
      "What is the main topic?",
      "What are the key findings?"
    ]
  }'
```
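The same request can be made from Python for programmatic use; the `answers` field read from the response is the assumed schema from the API sketch above.

```python
# Calling the deployed service from Python (the "answers" field is an assumed
# response shape, matching the API sketch above).
import requests

payload = {
    "documents": "https://example.com/document.pdf",
    "questions": ["What is the main topic?", "What are the key findings?"],
}
response = requests.post(
    "https://hackrx-rag-app.onrender.com/hackrx/run", json=payload, timeout=120
)
response.raise_for_status()
for question, answer in zip(payload["questions"], response.json().get("answers", [])):
    print(f"Q: {question}\nA: {answer}\n")
```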
Visit the interactive Swagger UI at the API Docs.
I'm passionate about building intelligent systems with LLMs and RAG. Let's discuss how we can build something impactful together.