# Code Smell Detection Repository

This repository contains multiple projects related to code smell detection; each project focuses on a different aspect of the problem.

## Calibrating Deep Learning-based Code Smell Detection using Human Feedback

### Abstract

Code smells are inherently subjective: software developers may hold different opinions and perspectives on smelly code. While many attempts have been made to use deep learning-based models for code smell detection, they fail to consider each developer's subjective perspective while detecting smells. Ignoring this aspect defeats the purpose of using deep learning-based smell detection methods, because the models are not customized to the developer's context. This paper proposes a method that incorporates human feedback to account for such subjectivity. Towards this, we created a plugin for IntelliJ IDEA and developed a container-based web server to offer the services of our baseline deep learning model. The setup allowed developers to see code smells within the IDE and provide feedback. Using this setup, we conducted a controlled experiment with 14 participants divided into experimental and control groups. In the first round of our experiment, we show code smells predicted by the baseline deep learning model and collect feedback from the participants. In the second round, we fine-tune the model on the experimental group's feedback and evaluate its performance before and after the adjustment. Our results show that such calibration improves the performance of the smell detection model by 15.49% in F1 score on average across the participants of the experimental group. Our work carries implications for both researchers and practitioners. Practitioners can apply our approach to enhance the quality of their code in day-to-day development activities, aligning it with their own code smell definitions.
Furthermore, software engineering researchers can leverage this study to adopt analogous approaches for addressing similar issues, such as code review.

## Projects

- **Deployable Server:** The server uses machine learning techniques to automate code smell detection, making the process faster and more efficient.
- **Experiment Data:** Contains all the raw data collected from the experiment.
- **Research Scripts:** Houses all the research scripts used to generate the results for each research question.
- **Deep Learning Models:** Implements and compares five deep learning approaches for code smell detection: CodeBERT, CodeT5, Autoencoder with Logistic Regression (AutoLR), Autoencoder with LSTM (AutoLSTM), and Variational Autoencoder (VAE). Each model has its own Jupyter notebook with detailed instructions for running it.
- **User Feedback Integration:** Explores user-specific subjective analysis and aims to customize smell detection based on the user's perspective or context. It incorporates user feedback into the deep learning models, fine-tunes the models on that feedback, and compares the performance of the fine-tuned models with the original models.

## Prerequisites

- Dataset (follow the instructions at to download the dataset)
- Python 3.x
- Jupyter Notebook
- PyTorch
- Transformers library (for CodeBERT and CodeT5)
- TensorFlow (for VAE)
- Other dependencies as listed in the individual project README files

## Usage

1. Clone or download this repository to your local machine.
2. Navigate to the specific project directory you are interested in.
3. Follow the instructions in that project's README file to set up and run the code.

## Notes

For added convenience, all the Python libraries required by the subprojects can be installed from the requirements.txt file.
Use the command below to install all the dependencies with pip:

```shell
pip install -r requirements.txt
```

A PDF copy of the accepted paper can be found at https://github.com/SMART-Dal/DLFeedback/blob/5be8a89f5e7fd69c690b9f386a09c7e88473bf73/SCAM23_DL_Feedback.pdf
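To illustrate the calibration idea behind the User Feedback Integration project, the toy sketch below fine-tunes a classifier on a developer's accept/reject feedback so that its decisions drift toward that developer's notion of a smell. This is a minimal, hypothetical stand-in: the repository fine-tunes deep models (CodeBERT, CodeT5, etc.), whereas here a two-feature logistic classifier plays the role of the baseline model, and all features, weights, and feedback data are invented for illustration.

```python
import math

def predict(w, b, x):
    """Probability that a code fragment with feature vector x is smelly."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def fine_tune(w, b, feedback, lr=0.5, epochs=200):
    """Calibrate the baseline weights with SGD on log-loss.

    feedback: list of (features, label) pairs, where label 1 means the
    developer confirmed the predicted smell and 0 means they rejected it.
    """
    w = list(w)  # do not mutate the caller's baseline weights
    for _ in range(epochs):
        for x, y in feedback:
            err = predict(w, b, x) - y  # gradient of log-loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Hypothetical baseline weights, then calibration on one developer's feedback.
w0, b0 = [0.1, -0.2], 0.0
feedback = [([1.0, 0.0], 1),   # developer confirmed the smell
            ([0.0, 1.0], 0),   # developer rejected the prediction
            ([1.0, 1.0], 1)]   # developer confirmed the smell
w1, b1 = fine_tune(w0, b0, feedback)
```

After fine-tuning, the model assigns much higher smell probability to fragments resembling the confirmed examples and lower probability to the rejected one, mirroring (in miniature) how the paper's per-participant fine-tuning shifts the baseline model toward each developer's feedback.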