
Jubi Taneja is a Research Engineer at Microsoft Research whose work lies at the intersection of AI, compiler optimization, and formal verification. Her research focuses on building trustworthy compiler optimizations and leveraging LLMs to advance code generation, while ensuring correctness through formal methods.
At Microsoft Research, Jubi leads efforts to make AI-assisted code generation reliable and verifiable. She built a PTX-level kernel equivalence checker based on symbolic execution that formally verifies that optimized GPU kernels, whether generated by large language models or handwritten, preserve the semantics of their original unoptimized implementations. She also created LLM-Vectorizer, a framework that prompts large language models to produce vectorized loops and formally verifies the transformation using Alive2, an SMT solver-based translation validation tool; this work was presented at CGO 2025. Additionally, she contributed to Accera, a compiler for accelerating AI models on heterogeneous hardware, targeting CPUs and GPUs.
Jubi earned her Ph.D. in Computer Science from the University of Utah, advised by Prof. John Regehr, where she applied formal semantics and solver-based techniques to improve compiler reliability and optimization effectiveness. She contributed to Souper, a program synthesis-based superoptimizer for LLVM IR, and developed techniques for verifying the soundness and precision of LLVM’s static analyses, work that earned her the Distinguished Paper and Best Student Presentation Awards at CGO 2020.
Her interest in compiler research began during her undergraduate studies, when she interned at IIT Bombay, contributing to incremental compiler construction and specification-based code generation techniques. She later joined IIT Bombay as a Junior Research Fellow, where she extended dataflow analysis techniques in GCC. She earned her Bachelor of Technology in Computer Science from the University College of Engineering, Punjabi University, where she received the University Gold Medal.
Beyond her technical contributions, Jubi is actively engaged in the programming languages and systems research community. She serves as a mentor in SIGPLAN-M, co-leads initiatives such as Women in Research and Moms in Research at Microsoft Research, and co-hosts the Women in Compilers and Tools Meetup Series. She has served on numerous program committees, including PLDI, CGO, and LCTES, and continues to champion mentorship and diversity within the research community.
Jubi envisions a future where compiler development evolves beyond manual engineering and hand-crafted optimizations to meet the demands of an increasingly diverse ecosystem of AI accelerators. As hardware innovation accelerates, traditional compiler design methodologies struggle to keep pace with new backends and rapidly changing workloads. She believes the next generation of compilers will be self-learning systems in which large language models and program synthesis techniques collaborate in feedback loops: LLMs propose optimizations, while formal verification and reinforcement learning provide correctness and performance feedback, enabling the models to self-reason and self-improve. Her research aims to pioneer this paradigm shift through neuro-symbolic compiler construction, where machine learning accelerates discovery and adaptation while rigorous formal verification ensures reliability. This long-term vision seeks to democratize high-performance compilation, making it scalable, trustworthy, and responsive to the needs of emerging AI-driven computing platforms.