Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.11851/11702
Full metadata record
DC Field | Value | Language
dc.contributor.author | Xue, Wenqian | -
dc.contributor.author | Lian, Bosen | -
dc.contributor.author | Kartal, Yusuf | -
dc.contributor.author | Fan, Jialu | -
dc.contributor.author | Chai, Tianyou | -
dc.contributor.author | Lewis, Frank L. | -
dc.date.accessioned | 2024-08-18T17:23:05Z | -
dc.date.available | 2024-08-18T17:23:05Z | -
dc.date.issued | 2024 | -
dc.identifier.issn | 1545-5955 | -
dc.identifier.issn | 1558-3783 | -
dc.identifier.uri | https://doi.org/10.1109/TASE.2024.3427657 | -
dc.identifier.uri | https://hdl.handle.net/20.500.11851/11702 | -
dc.description.abstract | This paper proposes a data-driven, model-free inverse reinforcement learning (IRL) algorithm tailored to an inverse H-infinity control problem. In this problem, both an expert and a learner perform H-infinity control to reject disturbances, and the learner's objective is to imitate the expert's behavior by reconstructing the expert's performance function through IRL techniques. Introducing zero-sum game principles, we first formulate a model-based single-loop IRL policy iteration algorithm with three key steps: updating the policy, the action, and the performance function using a new correction formula together with standard inverse optimal control principles. Building on the model-based approach, we then propose a model-free single-loop off-policy IRL algorithm that eliminates the need for initial stabilizing policies and for prior knowledge of the expert and learner dynamics. We also provide rigorous proofs of convergence, stability, and Nash optimality to guarantee the effectiveness and reliability of the proposed algorithms. Furthermore, we showcase the efficiency of our algorithm through simulations and experiments, highlighting its advantages over existing methods. | en_US
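The zero-sum game setting named in the abstract can be illustrated with a toy sketch. The code below is not the paper's data-driven, model-free IRL algorithm: it assumes a fully known scalar linear model and simply runs value iteration on the game algebraic Riccati equation to which the linear-quadratic zero-sum (H-infinity) problem reduces, yielding the saddle-point control and disturbance gains. All dynamics and weights (`a`, `b`, `d`, `q`, `r`, `gamma`) are invented for illustration.

```python
# Toy zero-sum LQ game: x+ = a x + b u + d w, stage cost q x^2 + r u^2 - gamma^2 w^2.
# Value iteration on the scalar game algebraic Riccati equation (GARE):
#   P <- Q + A'PA - A'P [B D] (Rt + [B D]'P[B D])^{-1} [B D]'P A,  Rt = diag(r, -gamma^2).

def gare_value_iteration(a, b, d, q, r, gamma, iters=200):
    def backup(p):
        m11 = r + b * b * p            # control block of Rt + Bt' P Bt
        m12 = b * d * p                # control/disturbance cross term
        m22 = d * d * p - gamma ** 2   # disturbance block (negative when gamma is large enough)
        det = m11 * m22 - m12 * m12
        s1, s2 = b * p * a, d * p * a  # Bt' P A
        # quadratic form s' M^{-1} s via the 2x2 adjugate of M
        p_next = q + a * a * p - (m22 * s1 * s1 - 2 * m12 * s1 * s2 + m11 * s2 * s2) / det
        # saddle-point policies: u* = -k x (minimizer), w* = -l x (maximizer)
        k = (m22 * s1 - m12 * s2) / det
        l = (-m12 * s1 + m11 * s2) / det
        return p_next, k, l

    p = 0.0
    for _ in range(iters):
        p_next, k, l = backup(p)
        residual, p = abs(p_next - p), p_next
    return p, k, l, residual

p, k, l, residual = gare_value_iteration(a=0.9, b=1.0, d=0.2, q=1.0, r=1.0, gamma=5.0)
print(p, k, l, residual)
```

With the attenuation level `gamma` above its critical value, the iteration converges to the positive fixed point of the GARE and the closed loop `x+ = (a - b*k - d*l) x` under both saddle-point policies is stable; the paper's contribution is recovering such a performance function from expert data without the model, which this sketch does not attempt.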
dc.description.sponsorship | NSFC [61991404, 62394342, U22A2049]; Liaoning Revitalization Talents Program [XLYC2007135]; Science and Technology Major Project of Liaoning Province [2020JH1/10100008]; Key Research and Development Program of Liaoning Province [2023JH26/10200011]; Research Program of the Liaoning Liaohe Laboratory [LLL23ZZ-05-01] | en_US
dc.description.sponsorship | This work was supported in part by NSFC under Grant 61991404, Grant 62394342, and Grant U22A2049; in part by Liaoning Revitalization Talents Program under Grant XLYC2007135; in part by the 2020 Science and Technology Major Project of Liaoning Province under Grant 2020JH1/10100008; in part by the Key Research and Development Program of Liaoning Province under Grant 2023JH26/10200011; and in part by the Research Program of the Liaoning Liaohe Laboratory under Grant LLL23ZZ-05-01. | en_US
dc.language.iso | en | en_US
dc.publisher | IEEE-Inst Electrical Electronics Engineers Inc | en_US
dc.relation.ispartof | IEEE Transactions on Automation Science and Engineering | en_US
dc.rights | info:eu-repo/semantics/closedAccess | en_US
dc.subject | Game theory | en_US
dc.subject | Games | en_US
dc.subject | Trajectory | en_US
dc.subject | Cost function | en_US
dc.subject | Mathematical models | en_US
dc.subject | Reinforcement learning | en_US
dc.subject | Optimal control | en_US
dc.subject | Inverse reinforcement learning | en_US
dc.subject | inverse H-infinity control | en_US
dc.subject | reinforcement learning | en_US
dc.subject | zero-sum games | en_US
dc.subject | imitation learning | en_US
dc.title | Model-Free Inverse H-Infinity Control for Imitation Learning | en_US
dc.type | Article | en_US
dc.type | Article; Early Access | en_US
dc.department | TOBB ETÜ | en_US
dc.identifier.wos | WOS:001279014600001 | en_US
dc.institutionauthor | | -
dc.identifier.doi | 10.1109/TASE.2024.3427657 | -
dc.relation.publicationcategory | Article - International Refereed Journal - Institutional Faculty Member (Makale - Uluslararası Hakemli Dergi - Kurum Öğretim Elemanı) | en_US
item.openairetype | Article | -
item.openairetype | Article; Early Access | -
item.languageiso639-1 | en | -
item.grantfulltext | none | -
item.fulltext | No Fulltext | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
item.cerifentitytype | Publications | -
Appears in Collections:WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collection
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.