MSR 2024
Mon 15 - Tue 16 April 2024, Lisbon, Portugal
co-located with ICSE 2024

Mon 15 Apr

Displayed time zone: Lisbon

09:00 - 10:30
Day 1: Opening - Technical Papers / MSR Awards / Social Events / Tutorials / Data and Tool Showcase Track / Mining Challenge / Registered Reports / Industry Track / MIP Award / Vision and Reflection / Keynotes at Grande Auditório
Chair(s): Diomidis Spinellis Athens University of Economics and Business & Delft University of Technology
09:00
30m
Day opening
Opening Session & Award Announcements
MSR Awards

09:30
30m
Awards
MSR 2024 Foundational Contribution Award talk
MSR Awards
Margaret-Anne Storey University of Victoria
10:00
30m
Talk
Most Influential Paper Award talk
MIP Award
10:30 - 11:00
Coffee for MSR newcomers - Social Events at Open Space (reserved area)
Chair(s): Federica Sarro University College London, Alexander Serebrenik Eindhoven University of Technology
10:30
30m
Coffee break
Coffee for MSR newcomers
Social Events
Federica Sarro University College London, Alexander Serebrenik Eindhoven University of Technology
11:00 - 12:30
Ecosystems, Reuse and APIs & Tutorials - Data and Tool Showcase Track / Technical Papers / Tutorials at Almada Negreiros
Chair(s): Mahmoud Alfadel University of Waterloo, Ayushi Rastogi University of Groningen, The Netherlands
11:00
12m
Talk
Thirty-Three Years of Mathematicians and Software Engineers: A Case Study of Domain Expertise and Participation in Proof Assistant Ecosystems
Technical Papers
Gwenyth Lincroft Northeastern University, Minsung Cho Northeastern University, Mahsa Bazzaz Northeastern University, Katherine Hough Northeastern University, Jonathan Bell Northeastern University
Pre-print Media Attached
11:12
12m
Talk
Boosting API Misuse Detection via Integrating API Constraints from Multiple Sources
Technical Papers
Can Li Nanjing University of Aeronautics and Astronautics, Jingxuan Zhang Nanjing University of Aeronautics and Astronautics, Yixuan Tang Nanjing University of Aeronautics and Astronautics, Zhuhang Li Nanjing University of Aeronautics and Astronautics, Tianyue Sun Nanjing University of Aeronautics and Astronautics
11:24
6m
Talk
Availability and Usage of Platform-Specific APIs: A First Empirical Study
Technical Papers
Pre-print File Attached
11:30
4m
Talk
AndroLibZoo: A Reliable Dataset of Libraries Based on Software Dependency Analysis
Data and Tool Showcase Track
Jordan Samhi CISPA Helmholtz Center for Information Security, Tegawendé F. Bissyandé University of Luxembourg, Jacques Klein University of Luxembourg
11:34
4m
Talk
Goblin: A Framework for Enriching and Querying the Maven Central Dependency Graph
Data and Tool Showcase Track
Damien Jaime Sorbonne Université - Lip6 - SAP, Joyce El Haddad Paris Dauphine-PSL Université, CNRS, LAMSADE, Pascal Poizat Université Paris Nanterre & LIP6
Pre-print File Attached
11:38
4m
Talk
Dataset: Copy-based Reuse in Open Source Software
Data and Tool Showcase Track
Mahmoud Jahanshahi Research Assistant, University of Tennessee Knoxville, Audris Mockus The University of Tennessee & Vilnius University
Pre-print
11:45
45m
Talk
Mining Our Way Back to Incremental Builds for DevOps Pipelines
Tutorials
Shane McIntosh University of Waterloo
Pre-print
11:00 - 12:30
11:00
12m
Talk
Enhancing Performance Bug Prediction Using Performance Code Metrics
Technical Papers
Guoliang Zhao Computer Science of Queen's University, Stefanos Georgio, Safwat Hassan University of Toronto, Canada, Ying Zou Queen's University, Kingston, Ontario, Derek Truong IBM Canada, Toby Corbin IBM UK
11:12
12m
Talk
CrashJS: A NodeJS Benchmark for Automated Crash Reproduction
Technical Papers
Philip Oliver Victoria University of Wellington, Jens Dietrich Victoria University of Wellington, Craig Anslow Victoria University of Wellington, Michael Homer Victoria University of Wellington
11:24
12m
Talk
An Empirical Study on Just-in-time Conformal Defect Prediction
Technical Papers
Xhulja Shahini paluno - University of Duisburg-Essen, Andreas Metzger University of Duisburg-Essen, Klaus Pohl
11:36
12m
Talk
Fine-Grained Just-In-Time Defect Prediction at the Block Level in Infrastructure-as-Code (IaC)
Technical Papers
Mahi Begoug, Moataz Chouchen ETS, Ali Ouni ETS Montreal, University of Quebec, Eman Abdullah AlOmar Stevens Institute of Technology, Mohamed Wiem Mkaouer University of Michigan - Flint
11:48
4m
Talk
TrickyBugs: A Dataset of Corner-case Bugs in Plausible Programs
Data and Tool Showcase Track
Kaibo Liu Peking University, Yudong Han Peking University, Yiyang Liu Peking University, Zhenpeng Chen Nanyang Technological University, Jie M. Zhang King's College London, Federica Sarro University College London, Gang Huang Peking University, Yun Ma Peking University
11:52
4m
Talk
GitBugs-Java: A Reproducible Java Benchmark of Recent Bugs
Data and Tool Showcase Track
André Silva KTH Royal Institute of Technology, Nuno Saavedra INESC-ID and IST, University of Lisbon, Martin Monperrus KTH Royal Institute of Technology
11:56
4m
Talk
A Dataset of Partial Program Fixes
Data and Tool Showcase Track
Dirk Beyer LMU Munich, Lars Grunske Humboldt-Universität zu Berlin, Matthias Kettl LMU Munich, Marian Lingsch-Rosenfeld LMU Munich, Moeketsi Raselimo Humboldt-Universität zu Berlin
12:00
4m
Talk
BugsPHP: A dataset for Automated Program Repair in PHP
Data and Tool Showcase Track
K.D. Pramod University of Moratuwa, Sri Lanka, W.T.N. De Silva University of Moratuwa, Sri Lanka, W.U.K. Thabrew University of Moratuwa, Sri Lanka, Ridwan Salihin Shariffdeen National University of Singapore, Sandareka Wickramanayake University of Moratuwa, Sri Lanka
Pre-print
12:04
4m
Talk
AW4C: A Commit-Aware C Dataset for Actionable Warning Identification
Data and Tool Showcase Track
Zhipeng Liu, Meng Yan Chongqing University, Zhipeng Gao Shanghai Institute for Advanced Study - Zhejiang University, Dong Li, Xiaohong Zhang Chongqing University, Dan Yang Chongqing University
12:08
5m
Talk
Predicting the Impact of Crashes Across Release Channels
Industry Track
Suhaib Mujahid Mozilla, Diego Costa Concordia University, Canada, Marco Castelluccio Mozilla
12:13
5m
Talk
Zero Shot Learning based Alternatives for Class Imbalanced Learning Problem in Enterprise Software Defect Analysis
Industry Track
Sangameshwar Patil Dept. of CSE, IIT Madras and TRDDC, TCS, B Ravindran IITM
14:00 - 15:30
Mining Challenge at Almada Negreiros
Chair(s): Preetha Chatterjee Drexel University, USA, Fabio Palomba University of Salerno
14:00
5m
Talk
ChatGPT Chats Decoded: Uncovering Prompt Patterns for Superior Solutions in Software Development Lifecycle
Mining Challenge
Liangxuan Wu Huazhong University of Science and Technology, Yanjie Zhao Huazhong University of Science and Technology, Xinyi Hou Huazhong University of Science and Technology, Tianming Liu Monash University, Haoyu Wang Huazhong University of Science and Technology
14:05
5m
Talk
Write me this Code: An Analysis of ChatGPT Quality for Producing Source Code
Mining Challenge
Konstantinos Moratis Electrical and Computer Engineering Dept., Aristotle University of Thessaloniki, Themistoklis Diamantopoulos Electrical and Computer Engineering Dept, Aristotle University of Thessaloniki, Dimitrios-Nikitas Nastos Electrical and Computer Engineering Dept., Aristotle University of Thessaloniki, Andreas Symeonidis Aristotle University of Thessaloniki
Pre-print
14:10
5m
Talk
Quality Assessment of ChatGPT Generated Code and their Use by Developers
Mining Challenge
Mohammed Latif Siddiq University of Notre Dame, Lindsay Roney University of Notre Dame, Jiahao Zhang, Joanna C. S. Santos University of Notre Dame
Pre-print Media Attached File Attached
14:15
5m
Talk
Analyzing Developer Use of ChatGPT Generated Code in Open Source GitHub Projects
Mining Challenge
Balreet Grewal University of Alberta, Wentao Lu University of Alberta, Sarah Nadi University of Alberta, Cor-Paul Bezemer University of Alberta
Pre-print
14:20
5m
Talk
How I Learned to Stop Worrying and Love ChatGPT
Mining Challenge
Piotr Przymus Nicolaus Copernicus University in Toruń, Poland, Mikołaj Fejzer Nicolaus Copernicus University in Toruń, Jakub Narębski Nicolaus Copernicus University in Toruń, Krzysztof Stencel University of Warsaw
Pre-print
14:25
5m
Talk
Can ChatGPT Support Developers? An Empirical Evaluation of Large Language Models for Code Generation.
Mining Challenge
Kailun Jin York University, Chung-Yu Wang York University, Hung Viet Pham York University, Hadi Hemmati York University
Pre-print
14:30
5m
Talk
The role of library versions in Developer-ChatGPT conversations
Mining Challenge
Rachna Raj Concordia University, Diego Costa Concordia University, Canada
Pre-print
14:35
5m
Talk
AI Writes, We Analyze: The ChatGPT Python Code Saga
Mining Challenge
Md Fazle Rabbi Idaho State University, Arifa Islam Champa Idaho State University, Minhaz F. Zibran Idaho State University, Md Rakibul Islam Lamar University
DOI Pre-print
14:40
5m
Talk
ChatGPT in Action: Analyzing Its Use in Software Development
Mining Challenge
Arifa Islam Champa Idaho State University, Md Fazle Rabbi Idaho State University, Costain Nachuma Idaho State University, Minhaz F. Zibran Idaho State University
DOI Pre-print
14:45
5m
Talk
Chatting with AI: Deciphering Developer Conversations with ChatGPT
Mining Challenge
Suad Mohamed Belmont University, Abdullah Parvin Belmont University, Esteban Parra Belmont University
14:50
5m
Talk
Does Generative AI Generate Smells Related to Container Orchestration?: An Exploratory Study with Kubernetes Manifests
Mining Challenge
Yue Zhang Auburn University, Rachel Meredith Auburn University, Wilson Reaves Auburn University, Julia Coriolano Federal University of Pernambuco, Muhammad Ali Babar School of Computer Science, The University of Adelaide, Akond Rahman Auburn University
Pre-print
14:55
5m
Talk
On the Taxonomy of Developers' Discussion Topics with ChatGPT
Mining Challenge
Ertugrul Sagdic Lamar University, Arda Bayram Lamar University, Md Rakibul Islam Lamar University
15:00
5m
Talk
How to refactor this code? An exploratory study on developer-ChatGPT refactoring conversations
Mining Challenge
Eman Abdullah AlOmar Stevens Institute of Technology, AnushKrishna Venkatakrishnan Rochester Institute of Technology, USA, Mohamed Wiem Mkaouer University of Michigan - Flint, Christian Newman, Ali Ouni ETS Montreal, University of Quebec
15:05
5m
Talk
Analyzing Developer-ChatGPT Conversations for Software Refactoring: An Exploratory Study
Mining Challenge
Omkar Sandip Chavan Rochester Institute of Technology, Divya Dilip Hinge Rochester Institute of Technology, Soham Sanjay Deo Rochester Institute of Technology, Yaxuan (Olivia) Wang Rochester Institute of Technology, Mohamed Wiem Mkaouer University of Michigan - Flint
15:10
5m
Talk
How Do Software Developers Use ChatGPT? An Exploratory Study on GitHub Pull Requests
Mining Challenge
Moataz Chouchen ETS, Narjes Bessghaier ETS Montreal, University of Quebec, Mahi Begoug, Ali Ouni ETS Montreal, University of Quebec, Eman Abdullah AlOmar Stevens Institute of Technology, Mohamed Wiem Mkaouer University of Michigan - Flint
15:15
5m
Talk
Investigating the Utility of ChatGPT in the Issue Tracking System: An Exploratory Study
Mining Challenge
Joy Krishan Das University of Saskatchewan, Saikat Mondal University of Saskatchewan, Chanchal K. Roy University of Saskatchewan, Canada
Pre-print
15:20
5m
Talk
Enhancing User Interaction in ChatGPT: Characterizing and Consolidating Multiple Prompts for Issue Resolution
Mining Challenge
Saikat Mondal University of Saskatchewan, Suborno Deb Bappon Department of Computer Science, University of Saskatchewan, Canada, Chanchal K. Roy University of Saskatchewan, Canada
Pre-print
14:00 - 15:30
Software Quality - Technical Papers / Registered Reports / Data and Tool Showcase Track at Grande Auditório
Chair(s): Gopi Krishnan Rajbahadur Centre for Software Excellence, Huawei, Canada
14:00
12m
Talk
Not all Dockerfile Smells are the Same: An Empirical Evaluation of Hadolint Writing Practices by Experts
Technical Papers
Giovanni Rosa University of Molise, Simone Scalabrino University of Molise, Gregorio Robles Universidad Rey Juan Carlos, Rocco Oliveto University of Molise
14:12
12m
Talk
Supporting High-Level to Low-Level Requirements Coverage Reviewing with Large Language Models
Technical Papers
Anamaria-Roberta Preda Johannes Kepler University Linz, Christoph Mayr-Dorn Johannes Kepler University Linz, Atif Mashkoor Johannes Kepler University Linz, Alexander Egyed Johannes Kepler University Linz
DOI Pre-print
14:24
12m
Talk
On the Executability of R Markdown Files
Technical Papers
Md Anaytul Islam Lakehead University, Muhammad Asaduzzaman University of Windsor, Shaowei Wang Department of Computer Science, University of Manitoba, Canada
14:36
12m
Talk
APIstic: A Large Collection of OpenAPI Metrics
Technical Papers
Souhaila Serbout Software Institute @ USI, Cesare Pautasso Software Institute, Faculty of Informatics, USI Lugano
14:48
6m
Talk
Improving Automated Code Reviews: Learning From Experience
Technical Papers
Hong Yi Lin The University of Melbourne, Patanamon Thongtanunam University of Melbourne, Christoph Treude Singapore Management University, Wachiraphan (Ping) Charoenwet The University of Melbourne
14:55
4m
Talk
Multi-faceted Code Smell Detection at Scale using DesigniteJava 2.0
Data and Tool Showcase Track
Tushar Sharma Dalhousie University
Pre-print
14:59
4m
Talk
SATDAUG - A Balanced and Augmented Dataset for Detecting Self-Admitted Technical Debt
Data and Tool Showcase Track
Edi Sutoyo Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, University of Groningen, Andrea Capiluppi University of Groningen
15:03
4m
Talk
Curated Email-Based Code Reviews Datasets
Data and Tool Showcase Track
Mingzhao Liang The University of Melbourne, Wachiraphan (Ping) Charoenwet The University of Melbourne, Patanamon Thongtanunam University of Melbourne
15:07
4m
Talk
TestDossier: A Dataset of Tested Values Automatically Extracted from Test Execution
Data and Tool Showcase Track
Pre-print
15:11
4m
Talk
Greenlight: Highlighting TensorFlow APIs Energy Footprint
Data and Tool Showcase Track
Saurabhsingh Rajput Dalhousie University, Maria Kechagia University College London, Federica Sarro University College London, Tushar Sharma Dalhousie University
Pre-print
15:15
5m
Talk
When Code Smells Meet ML: On the Lifecycle of ML-specific Code Smells in ML-enabled Systems
Registered Reports
Gilberto Recupito University of Salerno, Giammaria Giordano University of Salerno, Filomena Ferrucci University of Salerno, Dario Di Nucci University of Salerno, Fabio Palomba University of Salerno
15:20
5m
Talk
Comparison of Static Analysis Architecture Recovery Tools for Microservice Applications
Registered Reports
Simon Schneider Hamburg University of Technology, Alexander Bakhtin University of Oulu, Xiaozhou Li University of Oulu, Jacopo Soldani University of Pisa, Italy, Antonio Brogi Università di Pisa, Tomas Cerny University of Arizona, Riccardo Scandariato Hamburg University of Technology, Davide Taibi University of Oulu and Tampere University
16:00 - 17:30
Mobile Apps - Data and Tool Showcase Track / Technical Papers at Almada Negreiros
Chair(s): Dario Di Nucci University of Salerno
16:00
12m
Talk
Automating GUI-based Test Oracles for Mobile Apps
Technical Papers
Kesina Baral CQSE America, Jack Johnson, Junayed Mahmud George Mason University, Sabiha Salma George Mason University, Mattia Fazzini University of Minnesota, Julia Rubin University of British Columbia, Jeff Offutt George Mason University, Kevin Moran University of Central Florida
16:12
12m
Talk
Global Prosperity or Local Monopoly? Understanding the Geography of App Popularity
Technical Papers
Liu Wang Beijing University of Posts and Telecommunications, Conghui Zheng Beijing University of Posts and Telecommunications, Haoyu Wang Huazhong University of Science and Technology, Xiapu Luo The Hong Kong Polytechnic University, Gareth Tyson Queen Mary University of London, Yi Wang, Shangguang Wang Beijing University of Posts and Telecommunications
16:24
12m
Talk
GuiEvo: Automated Evolution of Mobile App UIs
Technical Papers
Sabiha Salma George Mason University, S M Hasan Mansur George Mason University, Yule Zhang George Mason University, Kevin Moran University of Central Florida
16:36
12m
Talk
Comparing Apples to Androids: Discovery, Retrieval, and Matching of iOS and Android Apps for Cross-Platform Analyses
Technical Papers
Magdalena Steinböck TU Wien, Jakob Bleier TU Wien, Mikka Rainer CISPA Helmholtz Center for Information Security, Tobias Urban Institute for Internet Security & secunet Security Networks AG, Christine Utz CISPA Helmholtz Center for Information Security, Martina Lindorfer TU Wien
16:48
12m
Talk
Keep Me Updated: An Empirical Study on Embedded Javascript Engines in Android Apps
Technical Papers
Elliott Wen The University of Auckland, Jiaxiang Liu The Hong Kong Polytechnic University, Xiapu Luo The Hong Kong Polytechnic University, Giovanni Russello University of Auckland, Jens Dietrich Victoria University of Wellington
17:00
12m
Talk
Large Language Model vs. Stack Overflow in Addressing Android Permission Related Challenges
Technical Papers
Sahrima Jannat Oishwee University of Saskatchewan, Natalia Stakhanova University of Saskatchewan, Zadia Codabux University of Saskatchewan, Canada
17:12
4m
Talk
DATAR: A Dataset for Tracking App Releases
Data and Tool Showcase Track
Yasaman Abedini Sharif University of Technology, Mohammad Hadi Hajihosseini Sharif University of Technology, Abbas Heydarnoori Bowling Green State University
17:16
4m
Talk
AndroZoo: A Retrospective with a Glimpse into the Future
Data and Tool Showcase Track
Marco Alecci University of Luxembourg, Pedro Jesús Ruiz Jiménez University of Luxembourg, Kevin Allix Independent Researcher, Tegawendé F. Bissyandé University of Luxembourg, Jacques Klein University of Luxembourg
16:00 - 17:30
Machine learning for Software Engineering - Technical Papers at Grande Auditório
Chair(s): Diego Costa Concordia University, Canada
16:00
12m
Talk
Whodunit: Classifying Code as Human Authored or GPT-4 Generated - A case study on CodeChef problems
Technical Papers
Oseremen Joy Idialu University of Waterloo, Noble Saji Mathews University of Waterloo, Canada, Rungroj Maipradit University of Waterloo, Joanne M. Atlee University of Waterloo, Mei Nagappan University of Waterloo
DOI Pre-print
16:12
12m
Talk
GIRT-Model: Automated Generation of Issue Report Templates
Technical Papers
Nafiseh Nikehgbal Sharif University of Technology, Amir Hossein Kargaran LMU Munich, Abbas Heydarnoori Bowling Green State University
DOI Pre-print
16:24
12m
Talk
MicroRec: Leveraging Large Language Models for Microservice Recommendation
Technical Papers
Ahmed Saeed Alsayed University of Wollongong, Hoa Khanh Dam University of Wollongong, Chau Nguyen University of Wollongong
16:36
12m
Talk
PeaTMOSS: A Dataset and Initial Analysis of Pre-Trained Models in Open-Source Software
Technical Papers
Wenxin Jiang Purdue University, Jerin Yasmin Queen's University, Canada, Jason Jones Purdue University, Nicholas Synovic Loyola University Chicago, Jiashen Kuo Purdue University, Nathaniel Bielanski Purdue University, Yuan Tian Queen's University, Kingston, Ontario, George K. Thiruvathukal Loyola University Chicago and Argonne National Laboratory, James C. Davis Purdue University
DOI Pre-print
16:48
12m
Talk
Data Augmentation for Supervised Code Translation Learning
Technical Papers
Binger Chen Technische Universität Berlin, Jacek Golebiowski Amazon AWS, Ziawasch Abedjan Leibniz Universität Hannover
17:00
12m
Talk
On the Effectiveness of Machine Learning-based Call-Graph Pruning: An Empirical Study
Technical Papers
Amir Mir Delft University of Technology, Mehdi Keshani Delft University of Technology, Sebastian Proksch Delft University of Technology
Pre-print
17:12
12m
Talk
Leveraging GPT-like LLMs to Automate Issue Labeling
Technical Papers
Giuseppe Colavito University of Bari, Italy, Filippo Lanubile University of Bari, Nicole Novielli University of Bari, Luigi Quaranta University of Bari, Italy
Pre-print

Tue 16 Apr

Displayed time zone: Lisbon

09:00 - 10:30
Development: practices and humans - Data and Tool Showcase Track / Technical Papers at Almada Negreiros
Chair(s): Gema Rodríguez-Pérez University of British Columbia (UBC)
09:50
6m
Talk
Exploring the Effect of Multiple Natural Languages on Code Suggestion Using GitHub Copilot
Technical Papers
Kei Koyanagi Kyushu University, Dong Wang Kyushu University, Japan, Kotaro Noguchi Kyushu University, Masanari Kondo Kyushu University, Alexander Serebrenik Eindhoven University of Technology, Yasutaka Kamei Kyushu University, Naoyasu Ubayashi Kyushu University
Pre-print
09:56
4m
Talk
A Four-Dimension Gold Standard Dataset for Opinion Mining in Software Engineering
Data and Tool Showcase Track
Md Rakibul Islam Lamar University, Md Fazle Rabbi Idaho State University, Jo Youngeun Lamar University, Arifa Islam Champa Idaho State University, Ethan J Young Lamar University, Camden M Wilson Lamar University, Gavin J Scott Lamar University, Minhaz F. Zibran Idaho State University
10:00
4m
Talk
Opening the Valve on Pure-Data: Usage Patterns and Programming Practices of a Data-Flow Based Visual Programming Language
Data and Tool Showcase Track
Anisha Islam Department of Computing Science, University of Alberta, Kalvin Eng University of Alberta, Abram Hindle University of Alberta
10:04
4m
Talk
The PIPr Dataset of Public Infrastructure as Code Programs
Data and Tool Showcase Track
Daniel Sokolowski University of St. Gallen, David Spielmann University of St. Gallen, Guido Salvaneschi University of St. Gallen
Link to publication DOI Pre-print
10:08
4m
Talk
A Dataset of Microservices-based Open-Source Projects
Data and Tool Showcase Track
Dario Amoroso d'Aragona Tampere University, Alexander Bakhtin University of Oulu, Xiaozhou Li University of Oulu, Ruoyu Su University of Oulu, Lauren Adams Baylor University, Ernesto Aponte Universidad del Sagrado Corazón, Francis Boyle Baylor University, Patrick Boyle Baylor University, Rachel Koerner Baylor University, Joseph Lee University of Richmond, Fangchao Tian University of Oulu, Yuqing Wang University of Oulu, Jesse Nyyssölä University of Helsinki, Ernesto Quevedo Baylor University, Shahidur Md Rahaman Baylor University, Amr Elsayed Baylor University, Mika Mäntylä University of Helsinki and University of Oulu, Tomas Cerny University of Arizona, Davide Taibi University of Oulu and Tampere University
10:12
4m
Talk
SensoDat: Simulation-based Sensor Dataset of Self-driving Cars
Data and Tool Showcase Track
Christian Birchler Zurich University of Applied Sciences & University of Bern, Cyrill Rohrbach University of Bern, Switzerland, Timo Kehrer University of Bern, Sebastiano Panichella Zurich University of Applied Sciences
10:16
4m
Talk
Incivility in Open Source Projects: A Comprehensive Annotated Dataset of Locked GitHub Issue Threads
Data and Tool Showcase Track
Ramtin Ehsani Drexel University, Mia Mohammad Imran Virginia Commonwealth University, Robert Zita Elmhurst University, Kostadin Damevski Virginia Commonwealth University, Preetha Chatterjee Drexel University, USA
10:20
4m
Talk
A Dataset of Atoms of Confusion in the Android Open Source Project
Data and Tool Showcase Track
Davi Batista Tabosa Federal University of Ceará, Oton Pinheiro Federal University of Ceará, Lincoln Souza Rocha Federal University of Ceará, Windson Viana Federal University of Ceará
10:24
4m
Talk
PlayMyData: a curated dataset of multi-platform video games
Data and Tool Showcase Track
Andrea D'Angelo University of L'Aquila, Claudio Di Sipio University of L'Aquila, Cristiano Politowski DIRO, University of Montreal, Riccardo Rubei University of L'Aquila
09:00 - 10:30
Keynote and Tutorial - Tutorials / Keynotes at Grande Auditório
Chair(s): Romain Robbes
09:00
45m
Keynote
Questioning the questions we ask about the impact of AI on software engineering
Keynotes
Margaret-Anne Storey University of Victoria
09:45
45m
Talk
Open Source Software Digital Sociology: Quantifying and Managing Complex Open Source Software Ecosystem
Tutorials
Minghui Zhou Peking University, Yuxia Zhang Beijing Institute of Technology, Xin Tan Beihang University
11:00 - 12:30
Process automation & DevOps and Tutorial I - Technical Papers / Tutorials at Almada Negreiros
Chair(s): Tom Mens University of Mons, Ayushi Rastogi University of Groningen, The Netherlands
11:00
12m
Talk
Learning to Predict and Improve Build Successes in Package Ecosystems
Technical Papers
Harshitha Menon Lawrence Livermore National Lab, Daniel Nichols University of Maryland, College Park, Abhinav Bhatele University of Maryland, College Park, Todd Gamblin Lawrence Livermore National Laboratory
11:12
12m
Talk
The Impact of Code Ownership of DevOps Artefacts on the Outcome of DevOps CI Builds
Technical Papers
Ajiromola Kola-Olawuyi University of Waterloo, Nimmi Rashinika Weeraddana University of Waterloo, Mei Nagappan University of Waterloo
11:24
12m
Talk
A Mutation-Guided Assessment of Acceleration Approaches for Continuous Integration: An Empirical Study of YourBase
Technical Papers
Zhili Zeng University of Waterloo, Tao Xiao Nara Institute of Science and Technology, Maxime Lamothe Polytechnique Montreal, Hideaki Hata Shinshu University, Shane McIntosh University of Waterloo
Pre-print
11:45
45m
Talk
Cohort Studies for Mining Software Repositories
Tutorials
Nyyti Saarimäki Tampere University, Sira Vegas Universidad Politecnica de Madrid, Valentina Lenarduzzi University of Oulu, Davide Taibi University of Oulu and Tampere University, Mikel Robredo University of Oulu
11:00 - 12:30
Software Evolution & Analysis - Technical Papers / Data and Tool Showcase Track / Industry Track at Grande Auditório
Chair(s): Vladimir Kovalenko JetBrains Research
11:00
12m
Talk
Unveiling ChatGPT's Usage in Open Source Projects: A Mining-based Study
Technical Papers
Rosalia Tufano Università della Svizzera Italiana, Antonio Mastropaolo Università della Svizzera italiana, Federica Pepe University of Sannio, Ozren Dabic Software Institute, Università della Svizzera italiana (USI), Switzerland, Massimiliano Di Penta University of Sannio, Italy, Gabriele Bavota Software Institute @ Università della Svizzera Italiana
11:12
12m
Talk
DRMiner: A Tool For Identifying And Analyzing Refactorings In Dockerfile
Technical Papers
Emna Ksontini University of Michigan - Dearborn, Aycha Abid Oakland University, Rania Khalsi University of Michigan - Flint, Marouane Kessentini University of Michigan - Flint
11:24
12m
Talk
A Large-Scale Empirical Study of Open Source License Usage: Practices and Challenges
Technical Papers
Jiaqi Wu Zhejiang University, Lingfeng Bao Zhejiang University, Xiaohu Yang Zhejiang University, Xin Xia Huawei Technologies, Xing Hu Zhejiang University
11:36
12m
Talk
Analyzing the Evolution and Maintenance of ML Models on Hugging Face
Technical Papers
Joel Castaño Fernández Universitat Politècnica de Catalunya, Silverio Martínez-Fernández UPC-BarcelonaTech, Xavier Franch Universitat Politècnica de Catalunya, Justus Bogner Vrije Universiteit Amsterdam
Link to publication Pre-print
11:48
12m
Talk
On the Anatomy of Real-World R Code for Static Analysis
Technical Papers
Florian Sihler Ulm University, Lukas Pietzschmann Ulm University, Raphael Straub Ulm University, Matthias Tichy Ulm University, Germany, Andor Diera Ulm University, Abdelhalim Dahou GESIS Leibniz Institute for the Social Sciences
Pre-print File Attached
12:00
6m
Talk
Encoding Version History Context for Better Code Representation
Technical Papers
Huy Nguyen The University of Melbourne, Christoph Treude Singapore Management University, Patanamon Thongtanunam University of Melbourne
Pre-print
12:06
4m
Talk
CodeLL: A Lifelong Learning Dataset to Support the Co-Evolution of Data and Language Models of Code
Data and Tool Showcase Track
Martin Weyssow DIRO, Université de Montréal, Claudio Di Sipio University of L'Aquila, Davide Di Ruscio University of L'Aquila, Houari Sahraoui DIRO, Université de Montréal
12:10
4m
Talk
Bidirectional Paper-Repository Tracing in Software Engineering
Data and Tool Showcase Track
Daniel Garijo, Miguel Arroyo Universidad Politécnica de Madrid, Esteban González Guardia Universidad Politécnica de Madrid, Christoph Treude Singapore Management University, Nicola Tarocco CERN
12:14
4m
Talk
DistilKaggle: A Distilled Dataset of Kaggle Jupyter Notebooks
Data and Tool Showcase Track
Mojtaba Mostafavi Department of Computer Engineering of Sharif University of Technology, Arash Asgari Department of Computer Engineering of Sharif University of Technology, Mohammad Abolnejadian Department of Computer Engineering of Sharif University of Technology, Abbas Heydarnoori Bowling Green State University
12:18
5m
Talk
Estimating Usage of Open Source Projects
Industry Track
Sophia Vargas Google LLC, Georg Link Bitergia, JaYoung Lee Google
14:00 - 15:30
Process automation & DevOps II - Technical Papers / Data and Tool Showcase Track at Almada Negreiros
Chair(s): Shane McIntosh University of Waterloo
14:00
12m
Talk
Options Matter: Documenting and Fixing Non-Reproducible Builds in Highly-Configurable Systems
Technical Papers
Georges Aaron Randrianaina Université de Rennes 1, IRISA, Djamel Eddine Khelladi CNRS, IRISA, University of Rennes, Olivier Zendra Inria, Mathieu Acher University of Rennes, France / Inria, France / CNRS, France / IRISA, France
14:12
12m
Talk
How do Machine Learning Projects use Continuous Integration Practices? An Empirical Study on GitHub Actions
Technical Papers
João Helis Bernardo Federal Institute of Education, Science and Technology of Rio Grande do Norte, Daniel Alencar Da Costa University of Otago, Sergio Queiroz de Medeiros Universidade Federal do Rio Grande do Norte, Uirá Kulesza Federal University of Rio Grande do Norte
DOI Pre-print
14:24
4m
Talk
A dataset of GitHub Actions workflow histories
Data and Tool Showcase Track
Guillaume Cardoen University of Mons, Tom Mens University of Mons, Alexandre Decan University of Mons; F.R.S.-FNRS
14:28
4m
Talk
gawd: A Differencing Tool for GitHub Actions Workflows
Data and Tool Showcase Track
Pooya Rostami Mazrae University of Mons, Alexandre Decan University of Mons; F.R.S.-FNRS, Tom Mens University of Mons
14:32
4m
Talk
RABBIT: A tool for identifying bot accounts based on their recent GitHub event history
Data and Tool Showcase Track
Natarajan Chidambaram University of Mons, Tom Mens University of Mons, Alexandre Decan University of Mons; F.R.S.-FNRS
14:36
12m
Talk
An Investigation of Patch Porting Practices of the Linux Kernel Ecosystem
Technical Papers
Xingyu Li UC Riverside, Zheng Zhang UC Riverside, Zhiyun Qian University of California at Riverside, USA, Trent Jaeger UC Riverside, Chengyu Song University of California at Riverside, USA
14:48
4m
Talk
BugsPHP: A dataset for Automated Program Repair in PHP
Data and Tool Showcase Track
K.D. Pramod University of Moratuwa, Sri Lanka, W.T.N. De Silva University of Moratuwa, Sri Lanka, W.U.K. Thabrew University of Moratuwa, Sri Lanka, Ridwan Salihin Shariffdeen National University of Singapore, Sandareka Wickramanayake University of Moratuwa, Sri Lanka
Pre-print
14:00 - 15:30
Security and Vision & Reflection - Data and Tool Showcase Track / Technical Papers / Registered Reports / Vision and Reflection at Grande Auditório
Chair(s): Tim Menzies North Carolina State University
14:00
12m
Talk
Quantifying Security Issues in Reusable JavaScript Actions in GitHub Workflows
Technical Papers
Hassan Onsori Delicheh University of Mons, Belgium, Alexandre Decan University of Mons; F.R.S.-FNRS, Tom Mens University of Mons
Pre-print
14:12
12m
Talk
What Can Self-Admitted Technical Debt Tell Us About Security? A Mixed-Methods Study
Technical Papers
Nicolás E. Díaz Ferreyra Hamburg University of Technology, Mojtaba Shahin RMIT University, Mansooreh Zahedi The Univeristy of Melbourne, Sodiq Quadri Hamburg University of Technology, Riccardo Scandariato Hamburg University of Technology
Pre-print
14:24
12m
Talk
Are Latent Vulnerabilities Hidden Gems for Software Vulnerability Prediction? An Empirical Study
Technical Papers
Triet Le Huynh Minh The University of Adelaide, Xiaoning Du Monash University, Australia, Muhammad Ali Babar School of Computer Science, The University of Adelaide
14:36
4m
Talk
MalwareBench: Malware samples are not enough
Data and Tool Showcase Track
Nusrat Zahan North Carolina State University, Philipp Burckhardt Socket, Inc, Mikola Lysenko Socket, Inc, Feross Aboukhadijeh Socket, Inc, Laurie Williams North Carolina State University
14:40
4m
Talk
Hash4Patch: A Lightweight Low False Positive Tool for Finding Vulnerability Patch Commits
Data and Tool Showcase Track
Simone Scalco University of Trento, Ranindya Paramitha University of Trento
14:44
4m
Talk
MegaVul: A C/C++ Vulnerability Dataset with Comprehensive Code Representations
Data and Tool Showcase Track
Chao Ni School of Software Technology, Zhejiang University, Liyu Shen Zhejiang University, Xiaohu Yang Zhejiang University, Yan Zhu Zhejiang University, Shaohua Wang Central University of Finance and Economics
Pre-print
14:48
5m
Talk
Analyzing and Mitigating (with LLMs) the Security Misconfigurations of Helm Charts from Artifact Hub
Registered Reports
Francesco Minna Vrije Universiteit Amsterdam, Fabio Massacci University of Trento; Vrije Universiteit Amsterdam, Katja Tuma Vrije Universiteit Amsterdam
14:53
5m
Talk
Fixing Smart Contract Vulnerabilities: A Comparative Analysis of Literature and Developer's Practices
Registered Reports
Francesco Salzano University of Molise, Simone Scalabrino University of Molise, Rocco Oliveto University of Molise, Remo Pareschi University of Molise
15:00
30m
Talk
Then, Now, and Next: Constants in Changing MSR Research Landscape
Vision and Reflection
Ayushi Rastogi University of Groningen, The Netherlands
16:00 - 17:30
Day 2: Closing - MSR Awards / Vision and Reflection at Grande Auditório
Chair(s): Alberto Bacchelli University of Zurich
16:00
30m
Talk
MSR in the age of LLMs
Vision and Reflection
Christoph Treude Singapore Management University
16:30
30m
Talk
Idealists and Pragmatists—An Only Somewhat Self-Indulgent Reflection on the Development of an MSR Paper (and Researcher)
Vision and Reflection
Shane McIntosh University of Waterloo
17:00
30m
Day closing
Closing session
MSR Awards
Diomidis Spinellis Athens University of Economics and Business & Delft University of Technology, Olga Baysal

Accepted Papers

AI Writes, We Analyze: The ChatGPT Python Code Saga
Mining Challenge
DOI Pre-print
Analyzing Developer-ChatGPT Conversations for Software Refactoring: An Exploratory Study
Mining Challenge
Analyzing Developer Use of ChatGPT Generated Code in Open Source GitHub Projects
Mining Challenge
Pre-print
Can ChatGPT Support Developers? An Empirical Evaluation of Large Language Models for Code Generation.
Mining Challenge
Pre-print
ChatGPT Chats Decoded: Uncovering Prompt Patterns for Superior Solutions in Software Development Lifecycle
Mining Challenge
ChatGPT in Action: Analyzing Its Use in Software Development
Mining Challenge
DOI Pre-print
Chatting with AI: Deciphering Developer Conversations with ChatGPT
Mining Challenge
Does Generative AI Generate Smells Related to Container Orchestration?: An Exploratory Study with Kubernetes Manifests
Mining Challenge
Pre-print
Enhancing User Interaction in ChatGPT: Characterizing and Consolidating Multiple Prompts for Issue Resolution
Mining Challenge
Pre-print
How Do Software Developers Use ChatGPT? An Exploratory Study on GitHub Pull Requests
Mining Challenge
How I Learned to Stop Worrying and Love ChatGPT
Mining Challenge
Pre-print
How to refactor this code? An exploratory study on developer-ChatGPT refactoring conversations
Mining Challenge
Investigating the Utility of ChatGPT in the Issue Tracking System: An Exploratory Study
Mining Challenge
Pre-print
On the Taxonomy of Developers' Discussion Topics with ChatGPT
Mining Challenge
Quality Assessment of ChatGPT Generated Code and their Use by Developers
Mining Challenge
Pre-print Media Attached File Attached
The role of library versions in Developer-ChatGPT conversations
Mining Challenge
Pre-print
Write me this Code: An Analysis of ChatGPT Quality for Producing Source Code
Mining Challenge
Pre-print

Call for Mining Challenge Papers

Mining Challenge Presentation: https://github.com/NAIST-SE/DevGPT/files/12923358/2024.MSR.Challenge.pdf

Mining Challenge Video: https://youtu.be/0EhskEg7NxA?si=4DfrnxjT90mWnjVx

The emergence of large language models (LLMs) such as ChatGPT has disrupted the landscape of software development. Many studies are investigating the quality of responses generated by ChatGPT, the efficacy of various prompting techniques, and its comparative performance in programming contests, to name a few examples. Yet, we know very little about how ChatGPT is actually used by software developers.

This year, the mining challenge focuses on DevGPT, a curated dataset of developer-ChatGPT conversations comprising prompts and ChatGPT's responses, including code snippets. Each conversation is paired with corresponding software development artifacts, which range from source code, commits, issues, and pull requests to discussions and Hacker News threads. The purpose of DevGPT is to enable a comprehensive analysis of the context and implications of developer interactions with ChatGPT.

To create DevGPT, we leveraged a feature introduced by OpenAI in late May 2023, which allows users to share their interactions with ChatGPT through dedicated links. DevGPT is updated weekly by tracking mentions of ChatGPT sharing links on GitHub and Hacker News, starting from July 27, 2023. The snapshot 20230831 contains 2,891 shared ChatGPT links, sourced from 2,237 GitHub or Hacker News references.
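
For orientation, here is a minimal Python sketch of how one might load a DevGPT snapshot and tally its contents. The directory path and the JSON field names used below (Sources, ChatgptSharing, Conversations, ListOfCode) are illustrative assumptions rather than the official schema; consult the DevGPT documentation for the exact layout of the snapshot you use.

import json
from pathlib import Path

# Illustrative sketch only: the field names below are assumptions about the
# snapshot's JSON layout; check the DevGPT README for the exact schema.
def summarize_snapshot(snapshot_dir: str) -> None:
    n_links, n_turns, n_snippets = 0, 0, 0
    for json_file in Path(snapshot_dir).glob("*.json"):
        data = json.loads(json_file.read_text(encoding="utf-8"))
        for source in data.get("Sources", []):
            for sharing in source.get("ChatgptSharing", []):
                n_links += 1
                for turn in sharing.get("Conversations", []):
                    n_turns += 1
                    n_snippets += len(turn.get("ListOfCode", []))
    print(f"shared links: {n_links}, prompt/answer turns: {n_turns}, "
          f"code snippets: {n_snippets}")

if __name__ == "__main__":
    summarize_snapshot("DevGPT/snapshot_20230831")  # hypothetical path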

Challenge

The challenge is open-ended: participants can choose the research questions that they find most interesting. Our suggestions include:

  1. What types of issues (bugs, feature requests, theoretical questions, etc.) do developers most commonly present to ChatGPT?
  2. Can we identify patterns in the prompts developers use when interacting with ChatGPT, and do these patterns correlate with the success of issue resolution?
  3. What is the typical structure of conversations between developers and ChatGPT? How many turns does it take on average to reach a conclusion?
  4. In instances where developers have incorporated the code provided by ChatGPT into their projects, to what extent do they modify this code prior to use, and what are the common types of modifications made?
  5. How does the code generated by ChatGPT for a given query compare to code that could be found for the same query on the internet (e.g., on Stack Overflow)?
  6. What types of quality issues (for example, as identified by linters) are common in the code generated by ChatGPT? (See the sketch after this list for a minimal example of one such check.)
  7. How accurately can we predict the length of a conversation with ChatGPT based on the initial prompt and context provided?
  8. Can we reliably predict whether a developer’s issue will be resolved based on the initial conversation with ChatGPT?
  9. If developers were to rerun their prompts with ChatGPT now and/or with different settings, would they obtain the same results?
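
As a concrete starting point for question 6 above, the following sketch checks whether the Python snippets returned by ChatGPT are at least syntactically valid, a crude quality proxy. As in the earlier sketch, the path and field names (Sources, ChatgptSharing, Conversations, ListOfCode, Type, Content) are assumptions for illustration, not the official schema.

import ast
import json
from pathlib import Path

# Illustrative sketch: counts how many Python snippets parse without syntax
# errors. Field names are assumed; verify them against the dataset README.
def python_syntax_validity(snapshot_dir: str) -> tuple:
    valid, total = 0, 0
    for json_file in Path(snapshot_dir).glob("*.json"):
        data = json.loads(json_file.read_text(encoding="utf-8"))
        for source in data.get("Sources", []):
            for sharing in source.get("ChatgptSharing", []):
                for turn in sharing.get("Conversations", []):
                    for snippet in turn.get("ListOfCode", []):
                        if (snippet.get("Type") or "").lower() != "python":
                            continue
                        total += 1
                        try:
                            ast.parse(snippet.get("Content", ""))
                            valid += 1
                        except SyntaxError:
                            pass
    return valid, total

if __name__ == "__main__":
    ok, n = python_syntax_validity("DevGPT/snapshot_20230831")  # hypothetical path
    print(f"{ok}/{n} Python snippets parse without syntax errors")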

Participants may combine the DevGPT data with mentions of ChatGPT sharing links on other platforms or websites, and are encouraged to “bring their own data” (BYOD) by integrating DevGPT with information from other public, readily available sources. We urge participants to thoroughly consider the ethical implications of using the DevGPT data in conjunction with other data sources. Sharing or using personally identifiable information is strictly prohibited.

How to Participate in the Challenge

First, familiarize yourself with the DevGPT infrastructure.

Use the dataset to answer your research questions, and report your findings in a four-page challenge paper that you submit to our challenge. If your paper is accepted, present your results at MSR 2024 in Lisbon, Portugal!

You can also join the DevGPT community to get support and find others to collaborate with.

Submission

A challenge paper should describe the results of your work by providing an introduction to the problem you address and why it is worth studying, the version of the dataset you used, the approach and tools you used, your results and their implications, and conclusions. Make sure your report highlights the contributions and the importance of your work. See also our open science policy regarding the publication of software and additional data you used for the challenge.

To ensure clarity and consistency in research submissions:

  • When detailing methodologies or presenting findings, authors should specify which snapshot/version of the DevGPT dataset was utilized.
  • Given the continuous updates to the dataset, authors are reminded to be precise in their dataset references. This will help maintain transparency and ensure consistent replication of results.

All authors should use the official “ACM Primary Article Template”, which can be obtained from the ACM Proceedings Template page. LaTeX users should use the sigconf option, as well as the review (to produce line numbers for easy reference by the reviewers) and anonymous (omitting author names) options. To that end, the following LaTeX code can be placed at the start of the LaTeX document:

\documentclass[sigconf,review,anonymous]{acmart}
\acmConference[MSR 2024]{MSR '24: Proceedings of the 21st International Conference on Mining Software Repositories}{April 15–16, 2024}{Lisbon, Portugal}

Submissions to the Challenge Track can be made via the submission site by the submission deadline. We encourage authors to upload their paper info early (the PDF can be submitted later) to properly enter conflicts for anonymous reviewing. All submissions must adhere to the following requirements:

  • Submissions must not exceed the page limit (4 pages plus 1 additional page of references). The page limit is strict, and it will not be possible to purchase additional pages at any point in the process (including after acceptance).
  • Submissions must strictly conform to the ACM formatting instructions. Alterations of spacing, font size, and other changes that deviate from the instructions may result in desk rejection without further review.
  • Submissions must not reveal the authors’ identities. The authors must make every effort to honor the double-anonymous review process. In particular, the authors’ names must be omitted from the submission and references to their prior work should be in the third person. Further advice, guidance, and explanation about the double-anonymous review process can be found in the Q&A page for ICSE 2024.
  • Submissions should consider the ethical implications of the research conducted within a separate section before the conclusion.
  • The official publication date is the date the proceedings are made available in the ACM or IEEE Digital Libraries. This date may be up to two weeks prior to the first day of ICSE 2024. The official publication date affects the deadline for any patent filings related to published work.
  • Purchases of additional pages in the proceedings are not allowed.

Any submission that does not comply with these requirements is likely to be desk rejected by the PC Chairs without further review. In addition, by submitting to the MSR Challenge Track, the authors acknowledge that they are aware of and agree to be bound by the following policies:

  • The ACM Policy and Procedures on Plagiarism and the IEEE Plagiarism FAQ. In particular, papers submitted to MSR 2024 must not have been published elsewhere and must not be under review or submitted for review elsewhere whilst under consideration for MSR 2024. Contravention of this concurrent submission policy will be deemed a serious breach of scientific ethics, and appropriate action will be taken in all such cases (including immediate rejection and reporting of the incident to ACM/IEEE). To check for double submission and plagiarism issues, the chairs reserve the right to (1) share the list of submissions with the PC Chairs of other conferences with overlapping review periods and (2) use external plagiarism detection software, under contract to the ACM or IEEE, to detect violations of these policies.
  • The authorship policy of the ACM and the authorship policy of the IEEE.

Upon notification of acceptance, all authors of accepted papers will be asked to fill a copyright form and will receive further instructions for preparing the camera-ready version of their papers. At least one author of each paper is expected to register and present the paper at the MSR 2024 conference. All accepted contributions will be published in the electronic proceedings of the conference.

This year’s mining challenge and the data can be cited as:

@inproceedings{devgpt2024,
  title     = {DevGPT: Studying Developer-ChatGPT Conversations},
  author    = {Xiao, Tao and Treude, Christoph and Hata, Hideaki and Matsumoto, Kenichi},
  year      = {2024},
  booktitle = {Proceedings of the International Conference on Mining Software Repositories (MSR 2024)},
}

A preprint is available online.

Submission Site

Papers must be submitted through HotCRP: https://msr2024-challenge.hotcrp.com/

Important Dates

  • Live tutorial and Kick-off session: September 2023
  • Abstract Deadline: Dec 7, 2023
  • Paper Deadline: Dec 11, 2023
  • Author Notification: Jan 19, 2024
  • Camera Ready Deadline: Jan 28, 2024

Open Science Policy

Openness in science is key to fostering progress via transparency, reproducibility and replicability. Our steering principle is that all research output should be accessible to the public and that empirical studies should be reproducible. In particular, we actively support the adoption of open data and open source principles. To increase reproducibility and replicability, we encourage all contributing authors to disclose:

  • the source code of the software they used to retrieve and analyze the data
  • the (anonymized and curated) empirical data they retrieved in addition to the DevGPT dataset
  • a document with instructions for other researchers describing how to reproduce or replicate the results

Authors can privately share their anonymized data and software on archives such as Zenodo or Figshare as early as submission time (tutorial available online). Zenodo accepts up to 50GB per dataset (more upon request). There is no need to use Dropbox or Google Drive. After acceptance, data and software should be made public so that they receive a DOI and become citable. Zenodo and Figshare accounts can easily be linked with GitHub repositories to automatically archive software releases. In the unlikely case that authors need to upload terabytes of data, Archive.org may be used.

We recognise that anonymizing artifacts such as source code is more difficult than preserving anonymity in a paper. We ask authors to take a best-effort approach to not revealing their identities. We will also ask reviewers to avoid trying to identify authors by looking at commit histories and other such information that is not easily anonymized. Authors wanting to share GitHub repositories may want to look into using https://anonymous.4open.science/, an open source tool that helps you quickly double-blind your repository.

We encourage authors to self-archive pre- and postprints of their papers in open, preserved repositories such as arXiv.org. This is legal and allowed by all major publishers including ACM and IEEE, and it lets anybody in the world reach your paper. Note that you are usually not allowed to self-archive the PDF of the published article (that is, the publisher proof or the Digital Library version). Please note that the success of the open science initiative depends on the willingness (and ability) of authors to disclose their data, and that all submissions will undergo the same review process independent of whether or not they disclose their analysis code or data. We encourage authors who cannot disclose industrial or otherwise non-public data, for instance due to non-disclosure agreements, to provide an explicit (short) statement in the paper.

Best Mining Challenge Paper Award

As mentioned above, all submissions will undergo the same review process independent of whether or not they disclose their analysis code or data. However, only accepted papers for which code and data are available on preserved archives, as described in the open science policy, will be considered by the program committee for the best mining challenge paper award.

Best Student Presentation Award

As in previous years, there will be a public vote during the conference to select the best mining challenge presentation. This award often goes to authors of compelling work who present an engaging story to the audience. Only students can compete for this award.

Call for Mining Challenge Proposals


Update: The MSR 2024 Mining Challenge paper is “DevGPT: Studying Developer-ChatGPT Conversations” by Tao Xiao, Christoph Treude, Hideaki Hata, and Kenichi Matsumoto!

DevGPT is a curated dataset which encompasses 16,129 prompts and ChatGPT's responses, including 9,785 code snippets, coupled with the corresponding software development artifacts (source code, commits, issues, pull requests, discussions, and Hacker News threads) to enable the analysis of the context and implications of these developer interactions with ChatGPT.


The International Conference on Mining Software Repositories (MSR) has hosted a mining challenge since 2006. With this challenge, we call upon everyone interested to apply their tools to a common dataset. The challenge dares researchers and practitioners to put their mining tools and approaches to the test.

One of the secret ingredients behind the success of the International Conference on Mining Software Repositories (MSR) is its annual Mining Challenge, in which MSR participants can showcase their techniques, tools, and creativity on a common data set. In true MSR fashion, this data set is a real data set contributed by researchers in the community and solicited through an open call. There are many benefits to sharing a data set for the MSR Mining Challenge. The selected challenge proposal explaining the data set will appear in the MSR 2024 proceedings, and challenge papers using the data set will be required to cite the challenge proposal or an existing paper by the data set's authors. Furthermore, the authors of the data set will join the MSR 2024 organizing committee as Mining Challenge (co-)chair(s), who will manage the reviewing process (e.g., recruiting a Challenge PC, managing submissions and review assignments). Finally, it is not uncommon for challenge data sets to feature in MSR and other publications well after the edition of the conference in which they appear!

If you would like to submit your data set for consideration for the 2024 MSR Mining Challenge, please submit a short proposal (1-2 pages plus appendices, if needed) at https://msr-mc24.hotcrp.com/, containing the following information:

  1. Title of data set.
  2. High-level overview:
    • Short description, including what types of artifacts the data set contains.
    • Summary statistics (how many artifacts of different types).
  3. Internal structure:
    • How are the data structured and organized?
    • (Link to) Schema, if applicable
  4. How to access:
    • How can the data set be obtained?
    • What are recommended ways to access it? Include examples of specific tools, shell commands, etc., if applicable.
    • What skills, infrastructure, and/or credentials would challenge participants need to effectively work with the data set?
  5. What kinds of research questions do you expect challenge participants could answer?
  6. A link to a (sub)sample of the data for the organizing committee to pursue (e.g., via GitHub, Zenodo, Figshare).

Each submission must conform to the IEEE Conference Proceedings Formatting Guidelines (title in 24pt font and full text in 10pt type; LaTeX users must use \documentclass[10pt,conference]{IEEEtran} without including the compsoc or compsocconf options). For more information, see https://www.ieee.org/conferences/publishing/templates.html

The first task of the authors of the selected proposal will be to prepare the Call for Challenge Papers, which outlines the expected content and structure of submissions, as well as the technical details of how to access and analyze the data set. This call will be published on the MSR website on September 1st. By making the challenge data set available by late summer, we hope that many students will be able to use it for their graduate class projects in the Fall semester.

Important dates:

  • Deadline for proposals: August 15, 2023
  • Notification: August 24, 2023
  • Call for Challenge Papers Published: September 1, 2023

Expected deadlines for Mining Challenge Papers:

  • Live tutorial and Kick-off session: September 2023
  • Abstract Deadline: Dec 7, 2023
  • Paper Deadline: Dec 11, 2023
  • Author Notification: Jan 19, 2024
  • Camera Ready Deadline: Jan 28, 2024