I build AI systems and web tools that solve real problems and are actually pleasant to use. I'm currently completing my MS in Computer Science at New York University and looking for summer internship and fall co-op opportunities.
I'm a computer scientist working at the intersection of machine learning, human-computer interaction, and extended reality. My work tends to live where intelligent systems meet people directly, and it has ranged across domains: MRI super-resolution that could make low-cost diagnostic imaging far more accessible, free tools that teach enthusiast programmers to build audio software, and more. This site documents all of it; if you're still not satisfied, you can talk to me on my socials.
Before NYU, I completed my bachelor's in computer science and engineering at VIT Chennai. I co-authored a research paper on VR for programming education that was presented at the WEEF 2019 Conference and has since accumulated 50+ citations. I also discovered computer vision there, quickly became hooked, and turned my media-centric background toward medical imaging: I developed MedVisor, a patent-pending MRI VR visualization system designed to help surgeons plan procedures and educate patients. It became my bachelor's thesis.
I also worked as a Data Scientist at TVS Motor Company in the electric vehicle division, where I learned what it actually means to build systems that run in the real world, not just inside notebooks. I built end-to-end data pipelines that, to this day, power tens of thousands of connected vehicles across India and beyond, along with AI models that detect range anomalies, reward driver behavior, and more.
At NYU I've worked on projects spanning medical imaging, brain-computer interfaces, proactive AI systems, and synthetic media disclosure — all connected by the same goal: building intelligent tools that people genuinely find useful.
Ensemble model for 4× upsampling of low-field MRI scans, outperforming the state-of-the-art diffusion model baseline and topping the Neuroinformatics Spring classroom leaderboard on Kaggle among 36 participants.
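As a sketch of the ensembling idea (the models and weights below are toy stand-ins, not the project's actual networks), a weighted average of per-model super-resolution outputs looks like:

```python
import numpy as np

def ensemble_upsample(lr_scan, models, weights=None):
    """Weighted average of per-model 4x super-resolution outputs.
    `models` is a list of callables mapping a low-res array to a
    high-res array; the weighting scheme here is illustrative."""
    preds = [m(lr_scan) for m in models]
    weights = weights or [1.0 / len(preds)] * len(preds)
    return sum(w * p for w, p in zip(weights, preds))

# toy stand-ins for trained models: nearest-neighbour 4x upsampling
up4 = lambda x: np.repeat(np.repeat(x, 4, axis=0), 4, axis=1)
out = ensemble_upsample(np.ones((8, 8)), [up4, up4])
print(out.shape)  # (32, 32)
```

In practice each member model would be a trained network and the weights would be tuned on a validation split.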
A multi-agent, provider-agnostic LLM orchestration platform that routes source code through a sequential pipeline of specialized AI agents (syntax correction, performance optimization, test generation, and documentation) with a MoE-inspired routing approach. Streams results in real time via Server-Sent Events (SSE).
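The sequential pipeline can be sketched as plain function composition; the agents below are trivial stand-ins for what would really be LLM calls:

```python
from typing import Callable

Agent = Callable[[str], str]

def run_pipeline(source: str, agents: list[Agent]) -> str:
    # Each agent transforms the code and hands it to the next,
    # mirroring the sequential multi-agent design; a real agent
    # would prompt an LLM provider instead of string-munging.
    for agent in agents:
        source = agent(source)
    return source

# stand-in agents (illustrative only)
fix_syntax = lambda code: code.replace("prnt", "print")
add_docs   = lambda code: '"""Auto-documented."""\n' + code

result = run_pipeline("prnt('hi')", [fix_syntax, add_docs])
print(result)
```

A MoE-style router would sit in front of this, choosing which agents to run per input rather than always running all of them.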
Surface and volume rendering of DICOM MRI scan data in immersive VR/AR environments for surgical planning and medical education. Custom DICOM import pipeline, real-time 3D volume reconstruction, and interactive AR overlay. The subject of a paper in progress targeting Computers & Graphics (Elsevier).
GraphRAG-powered AI tutoring system representing course material as a knowledge graph. Where flat RAG retrieves chunks, ERICA understands how concepts connect. Built for Prof. Pantelis' Introduction to AI at NYU.
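The difference from flat RAG can be illustrated with a toy concept graph: retrieval walks conceptual links instead of returning isolated chunks. The graph contents below are made up, not actual course material:

```python
from collections import deque

# toy concept graph (edges are illustrative)
graph = {
    "search": ["BFS", "DFS", "heuristics"],
    "heuristics": ["A*"],
    "BFS": [], "DFS": [], "A*": [],
}

def related_concepts(start, depth=2):
    """Breadth-first traversal up to `depth` hops, so a query about
    one concept also surfaces the concepts connected to it."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return seen

print(sorted(related_concepts("search")))
# ['A*', 'BFS', 'DFS', 'heuristics', 'search']
```

A flat retriever asked about "search" would never pull in "A*"; the graph traversal reaches it through "heuristics".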
A stock market simulation roguelite where an LLM-powered systemic variability engine generates unique storylines and market conditions on every run. Also the research testbed for the LLM-augmented narrative paper.
Desktop file search utility with fuzzy matching, syntax highlighting and a clean native UI. Compiled to a standalone binary, released on GitHub.
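A minimal fuzzy-matching sketch using Python's stdlib difflib (the blurb doesn't specify the tool's actual matching algorithm, so this is only a stand-in):

```python
from difflib import get_close_matches

# match a misspelled query against filename stems
files = ["report_final.pdf", "notes.txt", "resume_2024.docx"]
stems = [f.rsplit(".", 1)[0] for f in files]
best = get_close_matches("reprot", stems, n=1, cutoff=0.4)
print(best)
```

Real file-search tools usually prefer subsequence scoring (fzf-style) over edit similarity, since queries are abbreviations more often than typos.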
A C++ data parsing engine exposed to Python via Pybind11, built to accelerate ingestion of large telemetry log files. Achieved an 8.25× speedup in parsing throughput with a significant reduction in CPU utilization.
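The speedup figure comes from benchmarking the compiled parser against a pure-Python baseline; a sketch of that comparison, with `fast_parse` as an illustrative stand-in for the Pybind11-exposed C++ engine:

```python
import time

def pure_python_parse(lines):
    # baseline: parse "key=value" telemetry records in Python
    return [tuple(line.split("=", 1)) for line in lines]

def timed(fn, data):
    t0 = time.perf_counter()
    fn(data)
    return time.perf_counter() - t0

# In the real project, `fast_parse` would be imported from the
# compiled extension module; here the baseline stands in for it.
fast_parse = pure_python_parse
lines = ["speed=42"] * 100_000
baseline = timed(pure_python_parse, lines)
accelerated = timed(fast_parse, lines)
print(f"speedup ~ {baseline / max(accelerated, 1e-9):.2f}x")
```

The win from a C++ extension comes from doing the tokenizing loop without per-record Python object overhead, returning results to Python in one batch.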
Suite of browser-based audio tools: Flora (polyphonic subtractive synthesizer), Flambe / Chop Deck (real-time slicing sampler with scratch simulation), and Fenetrix (seed-based kick drum generator). All built on the Web Audio API with no external libraries.
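Subtractive synthesis, the core idea behind Flora, pairs a harmonically rich oscillator with a filter that carves harmonics away. A minimal NumPy sketch of the concept (the actual tools run on the Web Audio API, not Python):

```python
import numpy as np

SR = 44100  # sample rate in Hz

def saw(freq, dur):
    # naive sawtooth oscillator: bright, harmonically rich source
    t = np.arange(int(SR * dur)) / SR
    return 2.0 * (t * freq % 1.0) - 1.0

def one_pole_lowpass(x, alpha=0.05):
    # one-pole low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = y[n - 1] + alpha * (x[n] - y[n - 1])
    return y

note = one_pole_lowpass(saw(220.0, 0.1))
print(note.shape)  # (4410,)
```

In the browser the same roles are played by an oscillator node feeding a filter node, with polyphony coming from one such chain per held key.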
Four-lesson interactive course on building audio software in the browser — synthesis fundamentals, drum synthesis, distortion and waveshaping, modulation, and a complete two-oscillator synth with piano roll.
Smart cane prototype for visually impaired users: IR and ultrasonic sensors integrated with a companion Android app that delivers real-time text-to-speech distance feedback at customizable thresholds. Validated through user testing.
Predicted next-month credit card spend for 34,000 customers using 3 months of transaction history and demographic features. Engineered behavioral spending features, evaluated 14 regression models and built an ensemble pipeline. Ranked 2nd out of 45 teams with RMSLE = 1.16.
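RMSLE, the metric behind the 1.16 score, penalizes relative rather than absolute error, which suits spend amounts spanning several orders of magnitude; a small reference implementation:

```python
import math

def rmsle(y_true, y_pred):
    """Root Mean Squared Logarithmic Error:
    sqrt(mean((log(1 + pred) - log(1 + true))^2))."""
    sq = [(math.log1p(p) - math.log1p(t)) ** 2
          for t, p in zip(y_true, y_pred)]
    return math.sqrt(sum(sq) / len(sq))

print(round(rmsle([100, 200], [100, 200]), 4))  # 0.0
```

Because of the log, over-predicting a small spender hurts the score as much as under-predicting a big one by the same ratio.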
Classification on the UCI Breast Cancer dataset. Evaluated Logistic Regression, K-Nearest Neighbors and SVM classifiers — achieving 96.5% peak accuracy in detecting malignancy from cell nuclei features.
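A toy from-scratch K-Nearest Neighbors sketch of the classification idea (the project used library classifiers on real cell-nuclei features; the points below are made up):

```python
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Minimal KNN: label a point by majority vote among its k
    nearest training points (squared Euclidean distance)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(p, x)), y)
        for p, y in zip(train, labels)
    )
    top = [y for _, y in dists[:k]]
    return Counter(top).most_common(1)[0][0]

train = [(1, 1), (1, 2), (8, 8), (9, 8)]
labels = ["benign", "benign", "malignant", "malignant"]
print(knn_predict(train, labels, (9, 9)))  # malignant
```

On the real dataset the features (radius, texture, concavity, and so on) would be standardized first, since KNN is sensitive to feature scale.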