Konstantin Grotov

Research

2022 A Large-Scale Comparison of Python Code in Jupyter Notebooks and Scripts 🏆 ACM SIGSOFT Distinguished Paper Award 🏆 MSR'22 In recent years, Jupyter notebooks have grown in popularity in several domains of software engineering, such as data science, machine learning, and computer science education. Their popularity stems from their rich features for presenting and visualizing data; however, recent studies show that notebooks also come with a number of drawbacks: a high number of code clones, low reproducibility, etc. In this work, we carry out a comparison between Python code written in Jupyter notebooks and in traditional Python scripts. We compare the code from two perspectives: structural and stylistic. In the first part of the analysis, we report the differences in the number of lines and the usage of functions, as well as in various complexity metrics. In the second part, we show the difference in the number of stylistic issues and provide an extensive overview of the 15 most frequent stylistic issues in the studied media. Overall, we demonstrate that notebooks are characterized by lower code complexity, although their code could be perceived as more entangled than in scripts. As for style, notebooks tend to have 1.4 times more stylistic issues, but some of them stem from notebook-specific coding practices and should be considered false positives. With this research, we want to pave the way for studying the specific problems of notebooks that should be addressed by notebook-specific tools, and to provide insights that can be useful in this regard. [Link]
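As a rough illustration of this kind of comparison (a minimal sketch, not the paper's actual pipeline), the snippet below concatenates a notebook's code cells and reports a few structural and stylistic metrics for it and for a plain script. The `nbformat`, `radon`, and `pycodestyle` packages, the chosen metrics, and the file paths are my assumptions here.

```python
# A minimal sketch of comparing notebook code with script code, not the paper's pipeline.
# Assumes the nbformat, radon, and pycodestyle packages; file paths are placeholders.
import tempfile

import nbformat
import pycodestyle
from radon.complexity import cc_visit


def notebook_to_source(path: str) -> str:
    """Concatenate a notebook's code cells into one Python source string,
    dropping IPython magics and shell commands so the result parses as plain Python."""
    nb = nbformat.read(path, as_version=4)
    chunks = []
    for cell in nb.cells:
        if cell.cell_type != "code":
            continue
        lines = [line for line in cell.source.splitlines()
                 if not line.lstrip().startswith(("%", "!"))]
        chunks.append("\n".join(lines))
    return "\n\n".join(chunks)


def structural_and_stylistic_metrics(source: str) -> dict:
    """Report size, cyclomatic complexity, and the number of style issues for a piece of code."""
    blocks = cc_visit(source)  # cyclomatic complexity per function/method/class
    mean_cc = sum(b.complexity for b in blocks) / len(blocks) if blocks else 0.0

    # pycodestyle checks files, so write the source to a temporary file first.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tmp:
        tmp.write(source)
        tmp_path = tmp.name
    style_report = pycodestyle.StyleGuide(quiet=True).check_files([tmp_path])

    return {
        "loc": len(source.splitlines()),
        "code_blocks": len(blocks),
        "mean_cyclomatic_complexity": mean_cc,
        "style_issues": style_report.total_errors,
    }


# Compare a notebook against a plain script (paths are illustrative).
print(structural_and_stylistic_metrics(notebook_to_source("analysis.ipynb")))
print(structural_and_stylistic_metrics(open("analysis.py").read()))
```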
2024 Debug Smarter, Not Harder: AI Agents for Error Resolution in Computational Notebooks EMNLP'24 Computational notebooks have become indispensable tools for research-related development, offering unprecedented interactivity and flexibility in the development process. However, these benefits come at the cost of reproducibility and an increased potential for bugs. With the rise of code-fluent Large Language Models empowered with agentic techniques, smart bug-fixing tools with a high level of autonomy have emerged. However, these tools are tuned for classical script programming and still struggle with non-linear computational notebooks. In this paper, we present an AI agent designed specifically for error resolution in a computational notebook. We developed an agentic system capable of exploring a notebook environment by interacting with it -- much like a user would -- and integrated it into Datalore, the JetBrains service for collaborative data science. We evaluate our approach against the pre-existing single-action solution by comparing costs and conducting a user study. Users rate the error resolution capabilities of the agentic system higher, but experience difficulties with the UI. We share the results of the study and consider them valuable for further improving user-agent collaboration. [Link]
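To give a flavour of what such an agent does, here is a minimal sketch of an error-resolution loop over notebook cells. `NotebookEnv` and `llm_propose_fix` are illustrative stand-ins, not the actual Datalore agent or its tools.

```python
# A minimal sketch of an agentic error-resolution loop for a notebook environment.
# NotebookEnv and llm_propose_fix are illustrative stand-ins, not the Datalore system.
from dataclasses import dataclass, field


@dataclass
class NotebookEnv:
    """Toy notebook: a list of code cells executed in a shared namespace."""
    cells: list[str]
    namespace: dict = field(default_factory=dict)

    def run_cell(self, index: int) -> str | None:
        """Execute one cell; return the error text on failure, None on success."""
        try:
            exec(self.cells[index], self.namespace)
            return None
        except Exception as exc:  # keep the raw error so the agent can reason about it
            return f"{type(exc).__name__}: {exc}"


def llm_propose_fix(cell_source: str, error: str, context: list[str]) -> str:
    """Placeholder for an LLM call that rewrites a failing cell,
    given its error and the other cells as context."""
    raise NotImplementedError("plug in an LLM client here")


def resolve_errors(env: NotebookEnv, max_attempts: int = 3) -> None:
    """Run cells in order; when one fails, iteratively ask the LLM for a fix,
    apply it, and re-execute -- roughly how an agent explores the notebook."""
    for i in range(len(env.cells)):
        for _ in range(max_attempts):
            error = env.run_cell(i)
            if error is None:
                break
            env.cells[i] = llm_propose_fix(env.cells[i], error, env.cells)
```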
2024 Untangling Knots: Leveraging LLM for Error Resolution in Computational Notebooks CHI'24, The 1st Workshop on Human-Notebook Interactions Computational notebooks have become indispensable tools for research-related development, offering unprecedented interactivity and flexibility in the development process. However, these benefits come at the cost of reproducibility and an increased potential for bugs. Many bug-fixing tools exist, but they generally target classical linear code. With the rise of code-fluent Large Language Models, a new stream of smart bug-fixing tools has emerged; however, applying these tools to non-linear computational notebooks remains problematic. In this paper, we propose a potential solution for resolving errors in computational notebooks via an iterative LLM-based agent. We discuss the questions raised by this approach and share a novel dataset of computational notebooks containing bugs to facilitate research on the proposed approach. [Link]
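As a hedged sketch of how notebooks containing bugs could be mined for such a dataset, the snippet below scans notebooks for cells whose saved outputs are error tracebacks. The directory path and the selection criterion are illustrative assumptions, not the paper's actual collection procedure.

```python
# A sketch of mining notebooks whose stored outputs contain errors -- one plausible
# source of examples for a bug dataset; not the paper's actual collection procedure.
from pathlib import Path

import nbformat


def error_cells(path: Path) -> list[tuple[int, str]]:
    """Return (cell index, exception name) for every code cell whose saved
    output is an error traceback."""
    nb = nbformat.read(str(path), as_version=4)
    found = []
    for i, cell in enumerate(nb.cells):
        if cell.cell_type != "code":
            continue
        for out in cell.get("outputs", []):
            if out.get("output_type") == "error":
                found.append((i, out.get("ename", "UnknownError")))
    return found


# Scan a directory of notebooks (path is illustrative) and keep those with at least one error.
buggy = {
    nb_path: cells
    for nb_path in Path("notebooks").glob("**/*.ipynb")
    if (cells := error_cells(nb_path))
}
print(f"{len(buggy)} notebooks with stored errors")
```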
2024 Hidden Gems in the Rough: Computational Notebooks as an Uncharted Oasis for IDEs ICSE'24, The 1st IDE Workshop In this paper, we outline potential ways for the further development of computational notebooks in Integrated Development Environments (IDEs). We discuss notebook integration with IDEs, focusing on three main areas: facilitating experimentation, adding collaborative features, and improving code comprehension. We propose that better support for notebooks will not only benefit the notebooks themselves, but also enhance IDEs by supporting new development processes native to notebooks. In conclusion, we suggest that adapting IDEs to the more experimentation-oriented notebook workflows will prepare them for the future of AI-powered programming. [Link]
2023 Optimizing Duplicate Size Thresholds in IDEs MSR'23 In this paper, we present an approach for transferring an optimal lower size threshold for clone detection from one language to another by analyzing their clone distributions. We showcase this method by transferring the threshold from regular Python scripts to Jupyter notebooks for use in two JetBrains IDEs, Datalore and DataSpell. [Link]
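One plausible way to transfer a threshold between two clone-size distributions is sketched below: pick the target-medium size that cuts off the same share of clones as the source threshold does in the source medium. The quantile-matching rule and the synthetic size samples are illustrative assumptions, not the exact procedure from the paper.

```python
# A hedged sketch of transferring a lower clone-size threshold between mediums by
# matching clone-size distributions; the quantile-matching rule is an assumption,
# not the exact procedure described in the paper.
import numpy as np


def transfer_threshold(source_sizes: np.ndarray,
                       target_sizes: np.ndarray,
                       source_threshold: int) -> int:
    """Find the target-medium threshold that cuts off the same share of clones
    as source_threshold does in the source medium."""
    # Fraction of source clones smaller than the chosen threshold...
    quantile = np.mean(source_sizes < source_threshold)
    # ...and the target clone size at that same quantile.
    return int(np.quantile(target_sizes, quantile))


# Illustrative clone-size samples (in lines of code) for scripts and notebooks.
rng = np.random.default_rng(0)
script_clone_sizes = rng.lognormal(mean=2.0, sigma=0.6, size=10_000).astype(int) + 1
notebook_clone_sizes = rng.lognormal(mean=1.6, sigma=0.7, size=10_000).astype(int) + 1

print(transfer_threshold(script_clone_sizes, notebook_clone_sizes, source_threshold=8))
```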