
Abstract: Background: Learning programming is often difficult for beginners, primarily because of the challenge of providing timely and personalized feedback in large educational environments. While automated assessment systems have improved the efficiency of grading and feedback, they typically focus on correctness and often lack personalized guidance concerning code quality, readability, and maintainability. Objective: This study investigates whether integrating static code analysis into automated assessment systems to provide personalized feedback can effectively enhance students' code quality, learning process, and engagement in programming courses. Methods: We designed a personalized feedback system built on static analysis tools (Cppcheck and Clang-format) and deployed it within an existing automated assessment platform used by undergraduate programming students. The system was evaluated in a controlled experiment involving 60 students randomly divided into control and treatment groups. The effectiveness of personalized feedback was measured through quantitative metrics (style violations, potential bugs, and design issues), qualitative surveys, and submission behaviour over multiple assignments. Results: Students receiving personalized feedback improved their code quality, reducing style violations by 76%, potential bugs by 52%, and structural issues by 32% compared to the control group. They also expressed higher satisfaction, increased motivation, and greater willingness to iteratively refine their code based on the feedback. Conclusion: Integrating static code analysis for personalized feedback not only enhances code quality but also fosters a deeper understanding of good programming practices among students.
Future research should focus on making feedback systems more adaptive, incorporating intelligent tutoring techniques, and exploring long-term impacts on programming habits and skill retention.
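The kind of feedback pipeline described in the Methods can be illustrated with a minimal sketch: raw analyzer findings are grouped by severity and turned into short, student-facing notes. The line format below mirrors Cppcheck's default "file:line: severity: message" output; the function name, tip texts, and sample report are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: convert static-analysis output into personalized
# feedback notes. Input lines follow Cppcheck's default message template
# ("file:line: severity: message"); all names here are illustrative.
from collections import Counter

def summarize_findings(report_lines):
    """Group analyzer lines by severity and build short feedback notes."""
    counts = Counter()
    for line in report_lines:
        # Split into at most three fields: location, severity, message.
        parts = line.split(": ", 2)
        if len(parts) == 3:
            counts[parts[1]] += 1
    # Illustrative per-severity guidance a feedback system might attach.
    tips = {
        "style": "Review naming and formatting conventions.",
        "warning": "Check these spots for potential bugs.",
        "error": "These issues are likely to break your program.",
    }
    return [f"{sev}: {n} finding(s). {tips.get(sev, '')}".strip()
            for sev, n in sorted(counts.items())]

# Example run on a fabricated Cppcheck-style report.
report = [
    "main.c:12: style: The scope of the variable 'tmp' can be reduced.",
    "main.c:30: warning: Uninitialized variable: total",
    "main.c:34: style: Variable 'total' is assigned a value that is never used.",
]
for note in summarize_findings(report):
    print(note)
```

In a real deployment, the report lines would come from running the analyzers on each submission (for example, `cppcheck --enable=style,warning` and `clang-format --dry-run`) rather than from a hard-coded list.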
Keywords: Personalized Feedback, Static Code Analysis, Student Engagement, Code Quality Improvement, Learning Analytics, Student-Centered Feedback, Clean Code Principles, Code Quality Metrics, Interactive Feedback
