2022
Liu, Haochen; Thekinen, J; Mollaoglu, S; Tang, D; Yang, J; Cheng, Y; Liu, H; Tang, J
Toward Annotator Group Bias in Crowdsourcing Conference Forthcoming
60th Annual Meeting of the Association for Computational Linguistics (ACL), Dublin, Forthcoming.
@conference{liu2022toward,
title = {Toward Annotator Group Bias in Crowdsourcing},
author = {Liu, Haochen and Thekinen, J and Mollaoglu, S and Tang, D and Yang, J and Cheng, Y and Liu, H and Tang, J},
url = {https://josephdthekinen.com/wp-content/uploads/2022/03/Toward_Annotator_Group_Bias_in_Crowdsourcing____ACL_camera_ready.pdf},
year = {2022},
date = {2022-05-23},
urldate = {2022-05-23},
booktitle = {60th Annual Meeting of the Association for Computational Linguistics (ACL)},
address = {Dublin},
abstract = {Crowdsourcing has emerged as a popular approach for collecting annotated data to train supervised machine learning models. However, annotator bias can lead to defective annotations. Though there are a few works investigating individual annotator bias, the group effects in annotators are largely overlooked. In this work, we reveal that annotators within the same demographic group tend to show consistent group bias in annotation tasks and thus we conduct an initial study on annotator group bias. We first empirically verify the existence of annotator group bias in various real-world crowdsourcing datasets. Then, we develop a novel probabilistic graphical framework GroupAnno to capture annotator group bias with an extended Expectation Maximization (EM) algorithm. We conduct experiments on both synthetic and real-world datasets. Experimental results demonstrate the effectiveness of our model in modeling annotator group bias in label aggregation and model learning over competitive baselines.},
keywords = {Information Systems, Machine Learning},
pubstate = {forthcoming},
tppubtype = {conference}
}
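The abstract above describes aggregating crowdsourced labels with an extended EM algorithm that models group-level annotator bias. As an illustration only, and not the authors' GroupAnno implementation, the following is a minimal Dawid-Skene-style EM sketch in which all annotators from the same demographic group share one confusion matrix (the function name, data layout, and hyperparameters are hypothetical):

```python
import numpy as np

def em_group_aggregate(labels, n_items, n_classes, n_groups, n_iter=50):
    """Minimal EM label aggregation with per-group confusion matrices.

    labels: list of (item, group, observed_label) triples, where item is
    in [0, n_items), group in [0, n_groups), observed_label in [0, n_classes).
    Returns (aggregated_labels, pi) where pi[g, t, y] estimates the
    probability that an annotator from group g reports y when the true
    label is t.
    """
    # Initialize the posterior over true labels from per-item vote counts.
    counts = np.zeros((n_items, n_classes))
    for i, _, y in labels:
        counts[i, y] += 1
    post = counts + 1e-6
    post /= post.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # M-step: class prior and group-level confusion matrices.
        prior = post.mean(axis=0)
        pi = np.full((n_groups, n_classes, n_classes), 1e-6)  # smoothing
        for i, g, y in labels:
            pi[g, :, y] += post[i]
        pi /= pi.sum(axis=2, keepdims=True)

        # E-step: recompute the posterior over each item's true label.
        log_post = np.tile(np.log(prior), (n_items, 1))
        for i, g, y in labels:
            log_post[i] += np.log(pi[g, :, y])
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)

    return post.argmax(axis=1), pi
```

Because the biased group's errors are absorbed into its own confusion matrix, the EM estimate can down-weight that group's votes in a way plain majority voting cannot; the actual GroupAnno model is richer (it decomposes bias across multiple demographic attributes), so this sketch only conveys the general mechanism.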
2021
Liu, Haochen; Thekinen, J; Mollaoglu, S; Tang, D; Yang, J; Cheng, Y; Liu, H; Tang, J
Toward Annotator Group Bias in Crowdsourcing Online
2021, visited: 14.10.2021, (Cite as arXiv:2110.08038).
@online{liu2021toward,
title = {Toward Annotator Group Bias in Crowdsourcing},
author = {Liu, Haochen and Thekinen, J and Mollaoglu, S and Tang, D and Yang, J and Cheng, Y and Liu, H and Tang, J},
url = {https://josephdthekinen.com/wp-content/uploads/2021/11/2021_Haochen_Toward-Annotator-Group-Bias-in-Crowdsourcing.pdf},
year = {2021},
date = {2021-10-14},
urldate = {2021-10-14},
abstract = {Crowdsourcing has emerged as a popular approach for collecting annotated data to train supervised machine learning models. However, annotator bias can lead to defective annotations. Though there are a few works investigating individual annotator bias, the group effects in annotators are largely overlooked. In this work, we reveal that annotators within the same demographic group tend to show consistent group bias in annotation tasks and thus we conduct an initial study on annotator group bias. We first empirically verify the existence of annotator group bias in various real-world crowdsourcing datasets. Then, we develop a novel probabilistic graphical framework GroupAnno to capture annotator group bias with a new extended Expectation Maximization (EM) training algorithm. We conduct experiments on both synthetic and real-world datasets. Experimental results demonstrate the effectiveness of our model in modeling annotator group bias in label aggregation and model learning over competitive baselines.},
note = {Cite as arXiv:2110.08038},
keywords = {Machine Learning},
pubstate = {published},
tppubtype = {online}
}