Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.11851/11276
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Rahman, Md Mustafizur
dc.contributor.author: Balakrishnan, Dinesh
dc.contributor.author: Murthy, Dhiraj
dc.contributor.author: Kutlu, Mücahid
dc.contributor.author: Lease, Matthew
dc.date.accessioned: 2024-04-06T08:09:49Z
dc.date.available: 2024-04-06T08:09:49Z
dc.date.issued: 2021
dc.identifier.isbn: 9781713871095
dc.identifier.uri: https://datasets-benchmarks-proceedings.neurips.cc/paper_files/paper/2021/hash/e00da03b685a0dd18fb6a08af0923de0-Abstract-round2.html
dc.identifier.uri: https://hdl.handle.net/20.500.11851/11276
dc.description.abstract: Building a benchmark dataset for hate speech detection presents various challenges. Firstly, because hate speech is relatively rare, random sampling of tweets to annotate is very inefficient in finding hate speech. To address this, prior datasets often include only tweets matching known "hate words". However, restricting data to a pre-defined vocabulary may exclude portions of the real-world phenomenon we seek to model. A second challenge is that definitions of hate speech tend to be highly varying and subjective. Annotators having diverse prior notions of hate speech may not only disagree with one another but also struggle to conform to specified labeling guidelines. Our key insight is that the rarity and subjectivity of hate speech are akin to that of relevance in information retrieval (IR). This connection suggests that well-established methodologies for creating IR test collections can be usefully applied to create better benchmark datasets for hate speech. To intelligently and efficiently select which tweets to annotate, we apply standard IR techniques of pooling and active learning. To improve both consistency and value of annotations, we apply task decomposition and annotator rationale techniques. We share a new benchmark dataset for hate speech detection on Twitter that provides broader coverage of hate than prior datasets. We also show a dramatic drop in accuracy of existing detection models when tested on these broader forms of hate. Annotator rationales we collect not only justify labeling decisions but also enable future work opportunities for dual-supervision and/or explanation generation in modeling. Further details of our approach can be found in the supplementary materials. [en_US]
dc.language.iso: en [en_US]
dc.relation.ispartof: Neural Information Processing Systems Track on Datasets and Benchmarks 1 (NeurIPS Datasets and Benchmarks 2021) [en_US]
dc.rights: info:eu-repo/semantics/openAccess [en_US]
dc.title: An Information Retrieval Approach To Building Datasets for Hate Speech Detection [en_US]
dc.type: Conference Object [en_US]
dc.department: TOBB ETU Computer Engineering [en_US]
dc.identifier.startpage: 1 [en_US]
dc.identifier.endpage: 15 [en_US]
dc.authorid: 0000-0002-5660-4992
dc.institutionauthor: Kutlu, Mücahid
dc.relation.publicationcategory: Conference Item - International - Institutional Faculty Member [en_US]
item.openairetype: Conference Object
item.languageiso639-1: en
item.grantfulltext: none
item.fulltext: No Fulltext
item.openairecristype: http://purl.org/coar/resource_type/c_18cf
item.cerifentitytype: Publications
crisitem.author.dept: 02.3. Department of Computer Engineering
Appears in Collections: Bilgisayar Mühendisliği Bölümü / Department of Computer Engineering
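
The abstract above mentions selecting which tweets to annotate using the standard IR techniques of pooling and active learning. Below is a minimal, hypothetical Python sketch of one active-learning selection round via uncertainty sampling; it is not the authors' actual pipeline, and the toy texts, labels, classifier, and annotation budget are illustrative placeholders.

    # Minimal, hypothetical sketch of uncertainty-based active learning for
    # choosing which tweets to send to annotators. Illustrative only: the toy
    # texts, labels, classifier, and budget below are placeholders, not the
    # dataset or pipeline described in the abstract.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Tiny labeled seed set (1 = hate speech, 0 = not) and an unlabeled pool.
    seed_texts = ["example hateful tweet ...", "example benign tweet ..."]
    seed_labels = [1, 0]
    pool = [
        "unlabeled tweet one ...",
        "unlabeled tweet two ...",
        "unlabeled tweet three ...",
        "unlabeled tweet four ...",
    ]

    # Featurize with TF-IDF and fit a simple classifier on the seed data.
    vectorizer = TfidfVectorizer()
    X_seed = vectorizer.fit_transform(seed_texts)
    X_pool = vectorizer.transform(pool)
    clf = LogisticRegression().fit(X_seed, seed_labels)

    # Uncertainty sampling: prefer pool tweets whose predicted probability of
    # the positive (hate) class is closest to 0.5, i.e. where the model is
    # least certain, rather than sampling the pool at random.
    probs = clf.predict_proba(X_pool)[:, 1]
    uncertainty = np.abs(probs - 0.5)
    budget = 2  # placeholder per-round annotation budget
    to_annotate = np.argsort(uncertainty)[:budget]

    for i in to_annotate:
        print(f"annotate: {pool[i]} (p_hate={probs[i]:.2f})")

This sketch covers only the active-learning piece of the selection strategy the abstract names; pooling, task decomposition, and annotator rationales are described in the paper and its supplementary materials.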