
FastAPI RAG Service

A high-performance document question-answering service built with FastAPI and LangChain, implementing Retrieval-Augmented Generation (RAG) for intelligent document analysis.

TL;DR

This service lets users submit a document by URL and ask multiple questions about its content. Using a LangChain-based RAG pipeline, it retrieves the most relevant document sections and generates accurate, context-aware answers.

Problem → Solution

Problem

The challenge was to build a RAG system that processes insurance policy documents and answers questions about them. The system was evaluated across multiple test rounds, with the document URL and queries supplied in each request.

Solution

A robust FastAPI service that accepts a document URL and a list of questions. It uses LangChain for an efficient RAG pipeline to deliver accurate answers, all served through a scalable, containerized API.
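The request/response contract can be sketched with Pydantic models matching the endpoint shown in the API Usage section. The `documents` and `questions` field names come from the curl example; the `answers` response field is an assumption about the hackathon's expected output shape.

```python
from typing import List

from pydantic import BaseModel


class QARequest(BaseModel):
    """Request body for POST /hackrx/run (field names match the curl example)."""
    documents: str        # URL of the document to analyze
    questions: List[str]  # one or more questions about the document


class QAResponse(BaseModel):
    """Hypothetical response shape; the real field names may differ."""
    answers: List[str]    # one answer per question, in order


# Example: validating an incoming payload
req = QARequest(
    documents="https://example.com/document.pdf",
    questions=["What is the main topic?"],
)
```

Declaring the schema this way gives the service free request validation and the auto-generated OpenAPI docs FastAPI is known for.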

Key Features

Dynamic Document Processing

Multi-Question Support

RESTful API

Advanced RAG Pipeline

Dockerized Deployment

Full Pydantic Type-Safety

Tech Stack

FastAPI
LangChain
Pydantic
Docker
Python

Architecture: A clean, modular FastAPI application with dedicated services for document processing and Q&A, ensuring scalability and maintainability.
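The retrieve-then-generate flow at the core of such a pipeline can be illustrated with a dependency-free sketch: split the document into chunks, score each chunk against the question, and assemble the top matches into an augmented prompt for the LLM. The real service uses LangChain's splitters, embeddings, and retrievers; the word-overlap scoring below is only a stand-in for vector similarity.

```python
from typing import List


def split_into_chunks(text: str, chunk_size: int = 200) -> List[str]:
    """Naive fixed-size splitter (LangChain's text splitters add overlap and smarter boundaries)."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), chunk_size)]


def retrieve(chunks: List[str], question: str, k: int = 2) -> List[str]:
    """Rank chunks by word overlap with the question (stand-in for embedding similarity)."""
    q_words = set(question.lower().split())
    ranked = sorted(chunks, key=lambda c: len(q_words & set(c.lower().split())), reverse=True)
    return ranked[:k]


def build_prompt(context_chunks: List[str], question: str) -> str:
    """Assemble the augmented prompt that would be sent to the LLM."""
    context = "\n---\n".join(context_chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

Grounding the answer in retrieved chunks is what keeps responses tied to the uploaded document rather than the model's general knowledge.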

Role & Credits

HackRx 6.0 Hackathon (2025)

This project was developed for the HackRx 6.0 hackathon organized by Bajaj Finserv Health Limited.

My Role: I was the sole developer responsible for designing the architecture, implementing the RAG pipeline with LangChain, building the FastAPI service, and containerizing the application with Docker.

API Usage

POST `/hackrx/run`

curl -X POST "https://hackrx-rag-app.onrender.com/hackrx/run" \
  -H "Content-Type: application/json" \
  -d '{
    "documents": "https://example.com/document.pdf",
    "questions": [
      "What is the main topic?",
      "What are the key findings?"
    ]
  }'
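The same call from Python, using only the standard library; the endpoint URL and payload are taken from the curl example above.

```python
import json
from urllib import request

API_URL = "https://hackrx-rag-app.onrender.com/hackrx/run"

payload = {
    "documents": "https://example.com/document.pdf",
    "questions": ["What is the main topic?", "What are the key findings?"],
}

req = request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# To send the request (requires network access):
# with request.urlopen(req) as resp:
#     answers = json.loads(resp.read())
```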

The interactive Swagger UI is available at the service's `/docs` endpoint.

Interested in This Work?

I'm passionate about building intelligent systems with LLMs and RAG. Let's discuss how we can build something impactful together.