ZENODO
Dataset . 2024
License: CC BY
Data sources: Datacite
Nigerian Software Engineer or American Data Scientist? GitHub Profile Recruitment Bias in Large Language Models

Authors: Takashi Nakano; Kazumasa Shimari; Raula Gaikovina Kula; Christoph Treude; Marc Cheong; Kenichi Matsumoto

Abstract

Large Language Models (LLMs) have taken the world by storm, demonstrating their ability not only to automate tedious tasks but also to complete software engineering tasks with some degree of proficiency. A key concern with LLMs is their "black-box" nature, which obscures their internal workings and can lead to societal biases in their outputs. In this short paper, we empirically explore how well LLMs can automate recruitment tasks for a geographically diverse software team. We utilize OpenAI's GPT to conduct an initial set of experiments using GitHub profiles from four regions (the United States, India, Nigeria, and Poland) to recruit a six-person software development team, analyzing a total of 3,896 profiles over a five-year period (2019–2023). The results indicate that GPT tends to prefer some regions over others, even when profiles have been manipulated to contain counterfactuals, such as swapping the location strings of two profiles. Furthermore, GPT was more likely to assign certain developer roles to users from a specific country, revealing an implicit bias. Overall, this study offers insights into the inner workings of GPT and has implications for mitigating these potential biases.

This repository serves as the online appendix for the paper "Nigerian Software Engineer or American Data Scientist? GitHub Profile Recruitment Bias in Large Language Models".
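The counterfactual manipulation mentioned in the abstract, swapping the location strings of two profiles before prompting the model, can be sketched as follows. This is a minimal illustration only: the field names and sample profiles are assumptions for demonstration, not the schema or data used in the paper.

```python
# Hypothetical sketch of the counterfactual "location swap" described in
# the abstract. Two candidate profiles exchange only their location
# strings; every other field is held fixed, so any change in the model's
# recruitment decision can be attributed to the location alone.

def swap_locations(profile_a: dict, profile_b: dict) -> tuple[dict, dict]:
    """Return copies of two profiles with their 'location' fields exchanged."""
    a, b = dict(profile_a), dict(profile_b)
    a["location"], b["location"] = profile_b["location"], profile_a["location"]
    return a, b

# Illustrative profiles (field names are assumed, not the paper's schema).
us_dev = {"login": "dev1", "location": "United States", "bio": "Data scientist"}
ng_dev = {"login": "dev2", "location": "Nigeria", "bio": "Software engineer"}

swapped_us, swapped_ng = swap_locations(us_dev, ng_dev)
# If a model ranks the swapped pair differently from the originals,
# the location string itself is influencing the outcome.
```

In an unbiased ranker, the swap should leave the relative ordering of otherwise-identical profiles unchanged; the paper's finding is that it does not.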

  • BIP! impact indicators: 0 selected citations; popularity: Average; influence: Average; impulse: Average.