Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.11851/11868
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Aydogdu, M. Fatih
dc.contributor.author: Demirci, M. Fatih
dc.date.accessioned: 2024-11-10T14:56:04Z
dc.date.available: 2024-11-10T14:56:04Z
dc.date.issued: 2024
dc.identifier.issn: 2169-3536
dc.identifier.uri: https://doi.org/10.1109/ACCESS.2024.3476238
dc.identifier.uri: https://hdl.handle.net/20.500.11851/11868
dc.description.abstract: To address the challenge of relative camera pose estimation, many permutation-invariant neural networks have been developed to process sparse correspondences with constant latency. These networks typically utilize an n-to-n framework, where n putative correspondences from the same image pair are placed in distinct batch instances without any specific order. This uncorrelated set-type input structure does not sufficiently facilitate the extraction of contextual information for the correspondences. In this paper, we introduce a novel one-to-one framework designed to maximize context interaction within the network. Our framework prioritizes providing specialized context for each correspondence and enhancing the interaction between context data and correspondence data through a carefully designed input structure and network architecture schema. We conducted a series of experiments using various architectures within the one-to-one framework. Our results demonstrate that one-to-one networks not only match but often surpass the performance of traditional n-to-n networks, highlighting the one-to-one framework's significant potential and efficacy. To ensure a fair comparison, all one-to-one and n-to-n networks were trained on Google's Tensor Processing Units (TPUs). Notably, the memory capacity of a single TPUv4 device is sufficient to train the presented one-to-one networks without forming TPU pods from multiple devices. © 2013 IEEE. [en_US]
dc.language.iso: en [en_US]
dc.publisher: Institute of Electrical and Electronics Engineers Inc. [en_US]
dc.relation.ispartof: IEEE Access [en_US]
dc.rights: info:eu-repo/semantics/openAccess [en_US]
dc.subject: Convolutional neural networks [en_US]
dc.subject: deep learning [en_US]
dc.subject: essential matrix [en_US]
dc.subject: feature extraction [en_US]
dc.subject: RANSAC [en_US]
dc.subject: relative camera pose estimation [en_US]
dc.subject: stereo images [en_US]
dc.subject: tensor processing units [en_US]
dc.title: A Novel One-To-One Framework for Relative Camera Pose Estimation [en_US]
dc.type: Article [en_US]
dc.department: TOBB ETU Graduate School of Engineering and Science [en_US]
dc.identifier.wos: WOS:001349727700001
dc.identifier.scopus: 2-s2.0-85207281733
dc.institutionauthor:
dc.identifier.doi: 10.1109/ACCESS.2024.3476238
dc.authorscopusid: 59380365300
dc.authorscopusid: 14041575400
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Administrative Staff and Student (Makale - Uluslararası Hakemli Dergi - İdari Personel ve Öğrenci) [en_US]
dc.identifier.scopusquality: Q1
dc.identifier.wosquality: Q2
item.fulltext: No Fulltext
item.languageiso639-1: en
item.openairecristype: http://purl.org/coar/resource_type/c_18cf
item.cerifentitytype: Publications
item.openairetype: Article
item.grantfulltext: none
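
The abstract above contrasts the conventional n-to-n input framing with the proposed one-to-one framing. The short sketch below illustrates that contrast purely in terms of input tensor shapes; the four-coordinate correspondence encoding, the correspondence count, and the per-correspondence context vector are assumptions made for illustration and are not taken from the paper.

import numpy as np

# Illustrative sketch only: the shapes, the 4-value correspondence encoding,
# and the context construction below are assumptions, not the paper's schema.

n = 2000        # assumed number of putative correspondences for one image pair
feat = 4        # assumed encoding: (x1, y1, x2, y2) of one correspondence

# n-to-n framing: the n correspondences of an image pair enter the network
# together as an unordered set and are mapped to n outputs.
n_to_n_batch = np.random.randn(1, n, feat)            # (pairs, n, feat)

# one-to-one framing as sketched here: each correspondence is paired with its
# own specialized context vector, so the network maps one enriched
# correspondence to one output.
context_dim = 128                                      # hypothetical context size
context = np.random.randn(n, context_dim)              # one context vector per correspondence
one_to_one_batch = np.concatenate([n_to_n_batch[0], context], axis=-1)

print(n_to_n_batch.shape)      # (1, 2000, 4)
print(one_to_one_batch.shape)  # (2000, 132): n independent instances of size feat + context_dim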
Appears in Collections:Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collection
Öğrenci Yayınları / Students' Publications
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.