IEEE Access
Article . 2025 . Peer-reviewed
License: CC BY-NC-ND
Data sources: Crossref; DOAJ

Evaluating Coding Proficiency of Large Language Models: An Investigation Through Machine Learning Problems

Authors: Eunbi Ko; Pilsung Kang


Abstract

Large Language Models (LLMs) have demonstrated remarkable capabilities across various domains, but their effectiveness in coding workflows, particularly in machine learning (ML), requires deeper evaluation. This paper investigates the coding proficiency of LLMs such as GPT and Gemini by benchmarking their performance on three ML problems: Titanic, MNIST, and Steel Defect. These problems were chosen to encompass a range of challenges, including handling missing data, feature engineering, deep learning architectures, and multi-label classification. Using systematic prompts, we evaluated the LLMs’ abilities in data preprocessing, hyperparameter tuning, and classifier generation, comparing their outputs with those of human developers and AutoML frameworks. Experimental results indicate that the human developer outperformed untuned LLMs in data preprocessing, maintaining a 3–5% accuracy advantage across datasets. However, GPT’s hyperparameter tuning improved model performance by up to 6.3% in Titanic and 3.33% in Steel Defect, surpassing human-tuned models in some cases. In contrast, Gemini exhibited only marginal tuning improvements (0.19–1.78%) and failed to compensate for preprocessing inefficiencies. These findings show that while LLMs can assist with ML coding tasks, they exhibit varying levels of efficiency depending on task complexity and preprocessing requirements. GPT demonstrated superior hyperparameter tuning capabilities, whereas both LLMs struggled with intuitive data preprocessing, particularly in feature selection and transformation. This study provides practical insights into the strengths and limitations of LLMs in ML workflows, offering guidance for their effective integration into real-world applications.
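The abstract describes an evaluation loop in which systematic prompts ask an LLM to produce preprocessing and tuning code, and the resulting accuracy is compared against a human-written baseline. As a rough illustration of that comparison loop only (not the authors' actual prompts, models, or evaluation code), the Python sketch below trains the same classifier on human-written versus LLM-generated Titanic preprocessing; the file path, feature choices, and the llm_preprocess placeholder are illustrative assumptions.

```python
# Minimal sketch (not the paper's protocol) of comparing human-written
# preprocessing against LLM-generated preprocessing on the Titanic dataset.
# The CSV path, selected features, and llm_preprocess stub are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def human_preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """Hand-written baseline: impute missing values, encode, select features."""
    out = df.copy()
    out["Age"] = out["Age"].fillna(out["Age"].median())
    out["Embarked"] = out["Embarked"].fillna(out["Embarked"].mode()[0])
    out["Sex"] = (out["Sex"] == "male").astype(int)
    out = pd.get_dummies(out, columns=["Embarked"], drop_first=True)
    keep = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare"]
    keep += [c for c in out.columns if c.startswith("Embarked_")]
    return out[keep]


def llm_preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """Placeholder for code an LLM (e.g., GPT or Gemini) would generate from a
    systematic prompt. In a real run the generated function body would be
    executed in a sandbox; here the baseline is reused so the sketch runs."""
    return human_preprocess(df)


def evaluate(preprocess, df: pd.DataFrame) -> float:
    """Train one fixed classifier on the given preprocessing and report accuracy."""
    X = preprocess(df.drop(columns=["Survived"]))
    y = df["Survived"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
    clf = RandomForestClassifier(n_estimators=200, random_state=42)
    clf.fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))


if __name__ == "__main__":
    titanic = pd.read_csv("titanic.csv")  # assumed local copy of the Kaggle Titanic training data
    print("human baseline accuracy:", evaluate(human_preprocess, titanic))
    print("LLM-generated accuracy:", evaluate(llm_preprocess, titanic))
```

In a full evaluation of this kind, the generated preprocessing and hyperparameter-tuning code would be executed and scored per dataset, and the accuracy gap against the human baseline (the paper reports a 3–5% human advantage over untuned LLMs) recorded for each model.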

Keywords

Artificial intelligence; machine learning; large language model (LLM); generative pre-trained transformer (GPT); code generation; Electrical engineering. Electronics. Nuclear engineering (TK1-9971)
