| Literature DB >> 35284822 |
Shenggang Hu, Jabir Alshehabi Al-Ani, Karen D Hughes, Nicole Denier, Alla Konnikov, Lei Ding, Jinhan Xie, Yang Hu, Monideepa Tarafdar, Bei Jiang, Linglong Kong, Hongsheng Dai.
Abstract
Despite progress toward gender equality in the labor market over the past few decades, gender segregation in labor force composition and labor market outcomes persists. Evidence has shown that job advertisements may express gender preferences, which may selectively attract potential job candidates to apply for a given post and thus reinforce gendered labor force composition and outcomes. Removing gender-explicit words from job advertisements does not fully solve the problem, as certain implicit traits are more closely associated with men, such as ambitiousness, while others are more closely associated with women, such as considerateness. However, it is not always possible to find neutral alternatives for these traits, making it hard to search for candidates with desired characteristics without entailing gender discrimination. Existing algorithms mainly focus on detecting the presence of gender bias in job advertisements without providing a solution to how the text should be (re)worded. To address this problem, we propose an algorithm that evaluates gender bias in the input text and provides guidance on how the text should be debiased by offering alternative wording that is closely related to the original input. Our proposed method promises broad application in the human resources process, ranging from the development of job advertisements to algorithm-assisted screening of job applications.
Keywords: bias evaluation; bias mitigation; constrained sampling; gender bias; importance sampling
Year: 2022 PMID: 35284822 PMCID: PMC8905631 DOI: 10.3389/fdata.2022.805713
Source DB: PubMed Journal: Front Big Data ISSN: 2624-909X
Estimated weight for each word group.
| Word group | Estimate | Std. error | t-value |
|---|---|---|---|
| Intercept | −0.1439*** | 0.0035 | −40.78 |
| Strong masculine | 0.1580*** | 0.0008 | 199.42 |
| Weak masculine | 0.0073*** | 0.0004 | 16.39 |
| Strong feminine | −0.1824*** | 0.0016 | −115.45 |
| Weak feminine | −0.1440*** | 0.0008 | −175.35 |
| | 0.465 | | |
***p < 0.001.
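The fitted weights above define a simple linear bias metric: the intercept plus a weighted sum of word-group counts, where positive scores lean masculine and negative scores lean feminine. A minimal sketch of that scoring rule follows; only the coefficients come from the table, while the function name, the count representation, and the absence of any normalization are assumptions for illustration:

```python
# Coefficients taken from the fitted metric in the table above.
INTERCEPT = -0.1439
WEIGHTS = {
    "strong_masculine": 0.1580,
    "weak_masculine": 0.0073,
    "strong_feminine": -0.1824,
    "weak_feminine": -0.1440,
}

def bias_score(counts):
    """Linear bias score for a text.

    `counts` maps each word group to its occurrence count in the text.
    Positive scores indicate masculine-leaning wording, negative scores
    feminine-leaning. How counts are extracted or normalized is not
    specified in this record and is left to the caller.
    """
    return INTERCEPT + sum(WEIGHTS[g] * counts.get(g, 0) for g in WEIGHTS)

# Example: a text with three strong-masculine and one weak-feminine word.
score = bias_score({"strong_masculine": 3, "weak_feminine": 1})  # 0.1861
```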
Figure 1: Histogram of bias score distribution (A) before and (B) after the debiasing algorithm is applied. Both scores are measured using the fitted metric in Section 4.1.
Figure 2: (A) Raw improvement and (B) percentage improvement plotted against the unsigned bias score before debiasing. In the percentage plot, only positive improvements are plotted, since the points with negative improvement were already close to zero bias and thus not relevant in this context.
Mean unsigned bias before and after debiasing with mean improvement and percentage improvement for different groups of data.
| | Group 1 | Group 2 | Group 3 | Group 4 |
|---|---|---|---|---|
| Mean \|before\| | 0.4149 | 0.4536 | 0.6269 | 1.2362 |
| Mean \|after\| | 0.0628 | 0.0588 | 0.0647 | 0.0677 |
| Mean improv. | 0.3521 | 0.3948 | 0.5623 | 1.1685 |
| Mean % improv. | 32.77% | 75.92% | 86.08% | 93.89% |
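The improvement figures in the table follow directly from the unsigned before/after scores: raw improvement is |before| − |after|, and percentage improvement is that difference relative to |before|. A minimal sketch, with illustrative function and variable names not taken from the paper:

```python
def improvement(before, after):
    """Raw and percentage improvement in unsigned bias score."""
    raw = abs(before) - abs(after)
    pct = 100.0 * raw / abs(before)
    return raw, pct

# Right-most group in the table: mean |before| 1.2362, mean |after| 0.0677.
raw, pct = improvement(1.2362, 0.0677)  # raw = 1.1685
```

Note that applying this formula to the group means gives about 94.5%, slightly above the table's 93.89%; the reported mean percentage improvement is presumably averaged per advertisement rather than computed from the group-level means.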
Text-bias evaluation
Bias reduction on word counts