Hello everyone! I am Disha Shrivastava, a Research Scientist at Google DeepMind, London. My research focuses on enhancing the coding capabilities of large-scale generative models. In particular, I am interested in improving a model's ability to iteratively refine its predictions and to effectively leverage relevant context from execution, API usage, and the local repository.

I completed my PhD in Machine Learning at Mila, working with Hugo Larochelle and Danny Tarlow. As part of my thesis, I developed methods to identify relevant contextual cues and effectively leverage the selected cues for aiding deep learning models of code. During my PhD, I also worked as a Student Researcher at Google Brain, as a Research Scientist Intern at DeepMind, and as a Visiting Researcher at ServiceNow Research.

Prior to starting my PhD, I worked at IBM Research, India as a Research Software Engineer. My work focused on unsupervised construction of knowledge graphs, metrics for computational creativity and topical coherence, and reasoning for maths question-answering. I graduated from IIT Delhi with a Master's in Computer Technology, where I developed a data- and model-parallel framework for training deep networks in Apache Spark. I hold a Bachelor's degree in Electronics and Communication Engineering from BIT Mesra.

I co-organized the Deep Learning for Code Workshops at ICLR 2022-23, the AIPLANS Workshop at NeurIPS 2021, and the Neurosymbolic Generative Models Workshop at ICLR 2023. When I am not teaching machines how to code, I like travelling to explore nature, cooking, reading books, singing, and blogging!

Publications

RepoFusion: Training Code Models to Understand Your Repository [Dataset] [Code] [Trained Checkpoints]
Disha Shrivastava, Denis Kocetkov, Harm de Vries, Dzmitry Bahdanau, Torsten Scholak
Preprint, 2023
Repository-Level Prompt Generation for Large Language Models of Code [Code] [Poster]
Disha Shrivastava, Hugo Larochelle, Daniel Tarlow
ICML, 2023
Approach Intelligent Writing Assistants Usability with Seven Stages of Action
Avinash Bhat, Disha Shrivastava, Jin L.C. Guo
CHI Workshop on Intelligent and Interactive Writing Assistants, 2023
Minimax and Neyman-Pearson Meta-Learning for Outlier Languages [Code]
Edoardo Maria Ponti*, Rahul Aralikatte*, Disha Shrivastava, Siva Reddy, Anders Søgaard
Findings of ACL, 2021
On-the-Fly Adaptation of Source Code Models [Poster]
Disha Shrivastava, Hugo Larochelle, Daniel Tarlow
NeurIPS Workshop on Computer-Assisted Programming, 2020
Transfer Learning by Modeling a Distribution over Policies [Poster]
Disha Shrivastava*, Eeshan Gunesh Dhekane*, Riashat Islam
ICML Workshop on Multi-Task and Lifelong Reinforcement Learning, 2019
A Machine Learning Approach for Evaluating Creative Artifacts [Poster]
Disha Shrivastava, Saneem Ahmed CG, Anirban Laha, Karthik Sankaranarayanan
SIGKDD Workshop on Machine Learning for Creativity, 2017
What is Deemed Computationally Creative?
Shavak Agrawal, Anush Sankaran, Anirban Laha, Saneem Ahmed CG, Disha Shrivastava, Karthik Sankaranarayanan
IBM Journal of Research and Development, 2019