
Documind
Problem
Extracting insights from large document collections requires either expensive custom engineering or off-the-shelf tools that handle mixed formats and nuanced Q&A poorly.
Solution
An end-to-end RAG (Retrieval-Augmented Generation) platform built with FastAPI and React. Users upload documents and ask natural-language questions, which are answered by OpenAI or Gemini models.
Key Features
- Intelligent document Q&A across PDF, DOCX, and TXT formats
- Multi-provider LLM support: OpenAI and Gemini, switchable per query
- 1,000+ document uploads handled with 99.9% uptime
- Automated CI/CD pipeline deployed on Render.com
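The per-query provider switch listed above could be implemented as a small dispatch layer. A minimal sketch, assuming a registry keyed by provider name (the function bodies are hypothetical placeholders; a real deployment would call the OpenAI and Gemini SDKs):

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Answer:
    provider: str
    text: str


def _openai_answer(question: str, context: str) -> str:
    # Placeholder for an OpenAI chat-completion call.
    return f"[openai] {question}"


def _gemini_answer(question: str, context: str) -> str:
    # Placeholder for a Gemini content-generation call.
    return f"[gemini] {question}"


# Registry of supported providers; selecting per query is a dict lookup.
PROVIDERS: Dict[str, Callable[[str, str], str]] = {
    "openai": _openai_answer,
    "gemini": _gemini_answer,
}


def answer_question(question: str, context: str, provider: str = "openai") -> Answer:
    """Route a question (plus retrieved context) to the chosen provider."""
    if provider not in PROVIDERS:
        raise ValueError(f"unknown provider: {provider}")
    return Answer(provider=provider, text=PROVIDERS[provider](question, context))
```

Keeping the providers behind one callable signature means the retrieval pipeline never needs to know which model answers the query.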
Tech Stack
FastAPI · React · OpenAI · Gemini · Render.com
Challenges Faced
- Balancing accuracy, latency, and cost trade-offs across multiple LLM providers
- Designing reliable RAG retrieval logic for varied document structures
- Building a microservices architecture solo without over-engineering it



