Please use this identifier to cite or link to this item:
https://hdl.handle.net/20.500.11851/11801
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Demirok, B. | - |
dc.contributor.author | Mergen, S. | - |
dc.contributor.author | Oz, B. | - |
dc.contributor.author | Kutlu, M. | - |
dc.date.accessioned | 2024-09-22T13:30:58Z | - |
dc.date.available | 2024-09-22T13:30:58Z | - |
dc.date.issued | 2024 | - |
dc.identifier.issn | 1613-0073 | - |
dc.identifier.uri | https://hdl.handle.net/20.500.11851/11801 | - |
dc.description | 25th Working Notes of the Conference and Labs of the Evaluation Forum, CLEF 2024 -- 9 September 2024 through 12 September 2024 -- Grenoble -- 201493 | en_US |
dc.description.abstract | As we increasingly integrate artificial intelligence into our daily tasks, it is crucial to ensure that these systems are reliable and robust against adversarial attacks. In this paper, we present our participation in Task 6 of the CLEF CheckThat! 2024 lab. In our work, we explore several methods, which can be grouped into two categories. The first group focuses on using a genetic algorithm to detect words and change them via several operations, such as adding/deleting words and using homoglyphs. In the second group of methods, we use large language models to generate adversarial attacks. Based on our comprehensive experiments, we pick the genetic algorithm-based model, which utilizes a combination of word splitting and homoglyphs as its text manipulation method, as our primary model. We are ranked third based on both the BODEGA metric and manual evaluation. © 2024 Copyright for this paper by its authors. (A minimal illustrative sketch of the genetic-algorithm perturbations described here follows the metadata table below.) | en_US |
dc.language.iso | en | en_US |
dc.publisher | CEUR-WS | en_US |
dc.relation.ispartof | CEUR Workshop Proceedings | en_US |
dc.rights | info:eu-repo/semantics/closedAccess | en_US |
dc.subject | Adversarial Examples | en_US |
dc.subject | Credibility Assessment | en_US |
dc.subject | Natural Language Processing | en_US |
dc.subject | Robustness | en_US |
dc.subject | Generative adversarial networks | en_US |
dc.subject | Genetic algorithms | en_US |
dc.subject | Credibility assessment | en_US |
dc.subject | Daily tasks | en_US |
dc.subject | Language model | en_US |
dc.subject | Language processing | en_US |
dc.subject | Natural language processing | en_US |
dc.subject | Natural languages | en_US |
dc.subject | Second group | en_US |
dc.subject | Splittings | en_US |
dc.subject | Text manipulation | en_US |
dc.subject | Adversarial machine learning | en_US |
dc.title | Turquaz at Checkthat! 2024: Creating Adversarial Examples Using Genetic Algorithm | en_US |
dc.type | Conference Object | en_US |
dc.department | TOBB ETÜ | en_US |
dc.identifier.volume | 3740 | en_US |
dc.identifier.startpage | 396 | en_US |
dc.identifier.endpage | 404 | en_US |
dc.identifier.scopus | 2-s2.0-85201618529 | en_US |
dc.institutionauthor | … | - |
dc.authorscopusid | 59280903200 | - |
dc.authorscopusid | 59278835400 | - |
dc.authorscopusid | 59279864500 | - |
dc.authorscopusid | 35299304300 | - |
dc.relation.publicationcategory | Konferans Öğesi - Uluslararası - Kurum Öğretim Elemanı (Conference Item - International - Faculty Member of the Institution) | en_US |
item.openairetype | Conference Object | - |
item.languageiso639-1 | en | - |
item.grantfulltext | none | - |
item.fulltext | No Fulltext | - |
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | - |
item.cerifentitytype | Publications | - |
Appears in Collections: | Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection |
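The abstract above describes a genetic-algorithm attack whose mutation operators are homoglyph substitution and word splitting. The sketch below is a minimal, hypothetical illustration of that idea only: the `victim_score` placeholder, the `HOMOGLYPHS` mapping, the fitness definition (minimising the victim's confidence), and all parameters are assumptions made for exposition, not the authors' CLEF CheckThat! 2024 implementation or the BODEGA evaluation.

```python
import random

# Illustrative Latin -> look-alike (Cyrillic) homoglyph map; a real attack
# would use a much larger confusables table.
HOMOGLYPHS = {"a": "а", "e": "е", "o": "о", "c": "с", "p": "р", "x": "х"}


def victim_score(text: str) -> float:
    """Placeholder for the victim classifier's confidence in the original label.
    Hypothetical stand-in: a real attack would query the credibility model."""
    rng = random.Random(hash(text) & 0xFFFFFFFF)
    return rng.random()


def mutate_homoglyph(words):
    """Swap one character of a random word for a visually similar homoglyph."""
    words = list(words)
    idx = random.randrange(len(words))
    chars = list(words[idx])
    spots = [i for i, ch in enumerate(chars) if ch.lower() in HOMOGLYPHS]
    if spots:
        i = random.choice(spots)
        chars[i] = HOMOGLYPHS[chars[i].lower()]
        words[idx] = "".join(chars)
    return words


def mutate_split(words):
    """Split one random word in two, e.g. 'misleading' -> 'mislea ding'."""
    words = list(words)
    idx = random.randrange(len(words))
    w = words[idx]
    if len(w) > 3:
        cut = random.randrange(2, len(w) - 1)
        words[idx:idx + 1] = [w[:cut], w[cut:]]
    return words


def attack(text, pop_size=20, generations=30):
    """Evolve word-level perturbations that minimise the victim's confidence."""
    base = text.split()
    population = [mutate_homoglyph(base) for _ in range(pop_size)]
    for _ in range(generations):
        # Lower victim confidence == fitter (more adversarial) candidate.
        population.sort(key=lambda w: victim_score(" ".join(w)))
        parents = population[: pop_size // 2]
        children = [
            random.choice((mutate_homoglyph, mutate_split))(random.choice(parents))
            for _ in range(pop_size - len(parents))
        ]
        population = parents + children
    return " ".join(min(population, key=lambda w: victim_score(" ".join(w))))


if __name__ == "__main__":
    print(attack("this article makes a credible and well sourced claim"))
```

In the actual shared task, the fitness would query the credibility-assessment victim model and would typically trade attack success against semantic and character-level similarity to the original text, which is what the BODEGA metric mentioned in the abstract measures.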
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.