
Large Language Models (LLMs) have demonstrated remarkable capabilities across various domains, but their effectiveness in coding workflows, particularly in machine learning (ML), requires deeper evaluation. This paper investigates the coding proficiency of LLMs such as GPT and Gemini by benchmarking their performance on three ML problems: Titanic, MNIST, and Steel Defect. These problems were chosen to encompass a range of challenges, including handling missing data, feature engineering, deep learning architectures, and multi-label classification. Using systematic prompts, we evaluated the LLMs’ abilities in data preprocessing, hyperparameter tuning, and classifier generation, comparing their outputs with those of a human developer and AutoML frameworks. Experimental results indicate that the human developer outperformed untuned LLMs in data preprocessing, maintaining a 3–5% accuracy advantage across datasets. However, GPT’s hyperparameter tuning improved model performance by up to 6.3% on Titanic and 3.33% on Steel Defect, surpassing human-tuned models in some cases. In contrast, Gemini exhibited only marginal tuning improvements (0.19–1.78%) and failed to compensate for its preprocessing inefficiencies. These findings show that while LLMs can assist with ML coding tasks, their effectiveness varies with task complexity and preprocessing requirements. GPT demonstrated superior hyperparameter tuning capabilities, whereas both LLMs struggled with intuitive data preprocessing, particularly in feature selection and transformation. This study provides practical insights into the strengths and limitations of LLMs in ML workflows, offering guidance for their effective integration into real-world applications.
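To make the benchmarked workflow concrete, the sketch below illustrates a Titanic pipeline of the kind the study evaluates: imputing missing values, encoding categorical features, and cross-validated hyperparameter tuning. This is a minimal sketch assuming the standard Kaggle Titanic train.csv; the random-forest model and parameter grid are illustrative assumptions, not the paper’s exact configuration or any LLM’s generated code.

```python
# Illustrative Titanic workflow: missing-data handling, simple feature
# encoding, and hyperparameter tuning. Column names follow the standard
# Kaggle Titanic train.csv (an assumption); the model and grid are
# placeholders, not the paper's exact setup.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("train.csv")
X = df[["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked"]]
y = df["Survived"]

# Preprocessing step: median-impute numeric gaps (Age, Fare);
# most-frequent impute and one-hot encode the categoricals.
numeric = ["Age", "SibSp", "Parch", "Fare"]
categorical = ["Pclass", "Sex", "Embarked"]
preprocess = ColumnTransformer([
    ("num", SimpleImputer(strategy="median"), numeric),
    ("cat", Pipeline([
        ("impute", SimpleImputer(strategy="most_frequent")),
        ("onehot", OneHotEncoder(handle_unknown="ignore")),
    ]), categorical),
])

# Tuning step: 5-fold cross-validated grid search over classifier
# hyperparameters, the stage at which the paper reports GPT's
# suggestions improving accuracy by up to 6.3% on Titanic.
pipeline = Pipeline([
    ("prep", preprocess),
    ("clf", RandomForestClassifier(random_state=42)),
])
grid = GridSearchCV(
    pipeline,
    param_grid={
        "clf__n_estimators": [100, 300],
        "clf__max_depth": [4, 8, None],
        "clf__min_samples_leaf": [1, 3],
    },
    cv=5,
    scoring="accuracy",
)
grid.fit(X, y)
print(f"best CV accuracy: {grid.best_score_:.3f}")
print(f"best params: {grid.best_params_}")
```

In the study’s terms, the ColumnTransformer stage corresponds to the preprocessing work where the human developer held a 3–5% accuracy edge, while the grid search corresponds to the tuning step where prompted LLM suggestions can substitute for manual search.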
Artificial intelligence, machine learning, large language model (LLM), generative pre-trained transformer (GPT), code generation
