ML Engineer
2+ years across the ML lifecycle — document AI, predictive work, and production inference.
Strong engineering underneath, calm presentation on top: ML delivery, disciplined execution, and UI quality in one place.
Neutrinos · ML Engineer II · Feb 2024 – Present · Bengaluru
BCA · University of Mysore · ML at Brototype (Calicut).
Building scalable AI pipelines, agent-style workflows, and microservices that stay maintainable after launch.
I care how cutting-edge ML shows up in the real world — from RAG systems to dependable deployments on Kubernetes.
Map: Wikimedia Commons
Building more than models. Side projects and football give me the discipline and focus to keep shipping.
Mastering the craft — from training models to autonomous agent systems — is how I pursue excellence.
Orchestration for document AutoML pipelines, queue-backed FastAPI services, MongoDB, and GenAI features—PII masking, OCR, RAG, MCP—containerized on Docker and Kubernetes.
Key–value field detection and extraction with YOLOv8, OCR, and inference served through a dedicated API—field-level accuracy up to roughly 99% on structured document workloads.
Personalized arXiv recommendations in AI, ML, DL, and CV, with summarization and paper Q&A — reported user satisfaction rose from about 65% to about 85%.
Reviews
Happy clients
The GenAI guardrail work was a game-changer for our team—PII masking in production and a RAG pipeline that actually cut response times.
Rare mix of LangChain depth and FastAPI discipline. The platform scaled from dozens to thousands of daily requests without drama.
Document extraction jumped in accuracy after the YOLOv8 + OCR pass. Clean code we could own and extend.
Mentoring was practical and clear—hard ML topics landed fast for the whole cohort.
Shipped MLflow + K8s patterns we had been circling for weeks. Four days from brief to working pipeline.
Chroma + Redis caching cut semantic-search latency sharply—exactly the performance lens we needed.
A local copy of the resume is included so the portfolio stays self-contained.
Open Resume
98% precision on the classifier project, production AI delivery, and ML-focused academic coursework.
See Work
Open to AI product work, applied ML delivery, and high-impact collaboration opportunities.