Whodunit: Classifying Code as Human Authored or GPT-4 Generated - A case study on CodeChef problems
Artificial intelligence (AI) assistants such as GitHub Copilot and ChatGPT, built on large language models like GPT-4, are revolutionizing how programming tasks are performed, raising questions about whether code is authored by generative AI models. Such questions are of particular interest to educators, who worry that these tools enable a new form of academic dishonesty, in which students submit AI-generated code as their own work. Our research explores the viability of using code stylometry and machine learning to distinguish between GPT-4 generated and human-authored code. Our dataset comprises human-authored solutions from CodeChef and AI-authored solutions generated by GPT-4. Our classifier outperforms baselines, with an F1-score and AUC-ROC score of 0.91. A variant of our classifier that excludes gameable features (e.g., empty lines, whitespace) still performs well, with an F1-score and AUC-ROC score of 0.89. We also evaluated our classifier with respect to the difficulty of the programming problem and found that there was almost no difference between easier and intermediate problems, and the classifier performed only slightly worse on harder problems. Our study shows that code stylometry is a promising approach for distinguishing between GPT-4 generated code and human-authored code.
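The abstract does not spell out the feature set or model, but the general idea it describes (layout and lexical stylometry features, including gameable ones such as empty lines and whitespace, fed to an off-the-shelf classifier) can be sketched roughly as follows. This is a minimal illustration under assumptions: the feature definitions, the toy snippets, and the use of scikit-learn's RandomForestClassifier are hypothetical choices for the sketch, not the authors' implementation.

```python
# Hypothetical sketch of stylometry-based authorship classification.
# Features and data below are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def stylometry_features(source: str) -> list[float]:
    """Extract a few simple layout/lexical features from a code snippet."""
    lines = source.splitlines() or [""]
    n = len(lines)
    empty = sum(1 for ln in lines if not ln.strip())           # gameable feature
    indented = sum(1 for ln in lines if ln.startswith((" ", "\t")))
    comments = sum(1 for ln in lines if ln.lstrip().startswith(("#", "//")))
    avg_len = sum(len(ln) for ln in lines) / n
    ws_ratio = sum(ch.isspace() for ch in source) / max(len(source), 1)
    return [empty / n, indented / n, comments / n, avg_len, ws_ratio]

# Placeholder corpora: in a real pipeline these would be CodeChef submissions
# and GPT-4 generations for the same problems.
human_sources = ["for i in range(10):\n    print(i)\n", "a=input()\nprint(a)\n"]
gpt_sources = ["# Read input\nn = int(input())\n\n# Print result\nprint(n * 2)\n"] * 2

X = np.array([stylometry_features(s) for s in human_sources + gpt_sources])
y = np.array([0] * len(human_sources) + [1] * len(gpt_sources))  # 0 = human, 1 = GPT-4

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X))  # sanity check on the toy data
```

In practice the classifier would be evaluated on held-out problems with F1 and AUC-ROC, and a "non-gameable" variant would simply drop features such as the empty-line and whitespace ratios before training.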
Mon 15 Apr (times shown in the Lisbon time zone)
16:00 - 17:30 | Machine learning for Software Engineering | Technical Papers at Grande Auditório | Chair(s): Diego Costa (Concordia University, Canada)
16:00 | 12m | Talk | Whodunit: Classifying Code as Human Authored or GPT-4 Generated - A case study on CodeChef problems | Technical Papers | Oseremen Joy Idialu (University of Waterloo), Noble Saji Mathews (University of Waterloo, Canada), Rungroj Maipradit (University of Waterloo), Joanne M. Atlee (University of Waterloo), Mei Nagappan (University of Waterloo) | DOI, Pre-print
16:12 | 12m | Talk | GIRT-Model: Automated Generation of Issue Report Templates | Technical Papers | Nafiseh Nikehgbal (Sharif University of Technology), Amir Hossein Kargaran (LMU Munich), Abbas Heydarnoori (Bowling Green State University) | DOI, Pre-print
16:24 | 12m | Talk | MicroRec: Leveraging Large Language Models for Microservice Recommendation | Technical Papers | Ahmed Saeed Alsayed (University of Wollongong), Hoa Khanh Dam (University of Wollongong), Chau Nguyen (University of Wollongong)
16:36 | 12m | Talk | PeaTMOSS: A Dataset and Initial Analysis of Pre-Trained Models in Open-Source Software | Technical Papers | Wenxin Jiang (Purdue University), Jerin Yasmin (Queen's University, Canada), Jason Jones (Purdue University), Nicholas Synovic (Loyola University Chicago), Jiashen Kuo (Purdue University), Nathaniel Bielanski (Purdue University), Yuan Tian (Queen's University, Kingston, Ontario), George K. Thiruvathukal (Loyola University Chicago and Argonne National Laboratory), James C. Davis (Purdue University) | DOI, Pre-print
16:48 | 12m | Talk | Data Augmentation for Supervised Code Translation Learning | Technical Papers | Binger Chen (Technische Universität Berlin), Jacek Golebiowski (Amazon AWS), Ziawasch Abedjan (Leibniz Universität Hannover)
17:00 | 12m | Talk | On the Effectiveness of Machine Learning-based Call-Graph Pruning: An Empirical Study | Technical Papers | Amir Mir (Delft University of Technology), Mehdi Keshani (Delft University of Technology), Sebastian Proksch (Delft University of Technology) | Pre-print
17:12 | 12m | Talk | Leveraging GPT-like LLMs to Automate Issue Labeling | Technical Papers | Giuseppe Colavito (University of Bari, Italy), Filippo Lanubile (University of Bari), Nicole Novielli (University of Bari), Luigi Quaranta (University of Bari, Italy) | Pre-print