Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.11851/7751
Full metadata record
DC Field | Value | Language
dc.contributor.author | Ateş, Görkem Can | -
dc.contributor.author | Görgülüarslan, Recep Muhammet | -
dc.date.accessioned | 2021-09-11T15:59:27Z | -
dc.date.available | 2021-09-11T15:59:27Z | -
dc.date.issued | 2021 | en_US
dc.identifier.issn | 1615-147X | -
dc.identifier.issn | 1615-1488 | -
dc.identifier.uri | https://doi.org/10.1007/s00158-020-02788-w | -
dc.identifier.uri | https://hdl.handle.net/20.500.11851/7751 | -
dc.description.abstract | A vital requirement when employing state-of-the-art deep neural networks (DNNs) for topology optimization is to predict near-optimal structures while satisfying the pre-defined optimization constraints and objective function. Existing studies, however, suffer from structural disconnections, which cause unexpected errors in the objective and constraints. In this study, a two-stage model built from convolutional encoder-decoder networks is proposed, incorporating new loss functions to reduce the number of structural disconnection cases and the pixel-wise error, thereby enhancing the predictive performance of DNNs for topology optimization without any iteration. In the first stage, a single DNN architecture is used in two parallel networks, each trained with a different loss function: the mean square error (MSE) and the mean absolute error (MAE). The a priori information generated by the first stage is then fed into the second stage, which acts as a rectifier network over the a priori predictions and is trained with the binary cross-entropy (BCE) loss to produce the final predictions. The proposed two-stage network with the proposed loss functions is applied to both two-dimensional (2D) and three-dimensional (3D) topology optimization datasets to assess its generalization ability. The validation results show that the proposed two-stage framework improves prediction ability compared to a single network while significantly reducing compliance and volume fraction errors. | en_US
dc.language.iso | en | en_US
dc.publisher | Springer | en_US
dc.relation.ispartof | Structural and Multidisciplinary Optimization | en_US
dc.rights | info:eu-repo/semantics/closedAccess | en_US
dc.subject | Deep learning | en_US
dc.subject | Neural network | en_US
dc.subject | Topology optimization | en_US
dc.subject | Convolutional neural network | en_US
dc.subject | Encoder and decoder network | en_US
dc.title | Two-Stage Convolutional Encoder-Decoder Network To Improve the Performance and Reliability of Deep Learning Models for Topology Optimization | en_US
dc.type | Article | en_US
dc.department | Faculties, Faculty of Engineering, Department of Mechanical Engineering | en_US
dc.department | Fakülteler, Mühendislik Fakültesi, Makine Mühendisliği Bölümü | tr_TR
dc.identifier.volume | 63 | en_US
dc.identifier.issue | 4 | en_US
dc.identifier.startpage | 1927 | en_US
dc.identifier.endpage | 1950 | en_US
dc.authorid | 0000-0002-0550-8335 | -
dc.identifier.wos | WOS:000605553100003 | en_US
dc.identifier.scopus | 2-s2.0-85099019663 | en_US
dc.institutionauthor | Görgülüarslan, Recep Muhammet | -
dc.identifier.doi | 10.1007/s00158-020-02788-w | -
dc.relation.publicationcategory | Makale - Uluslararası Hakemli Dergi - Kurum Öğretim Elemanı (Article - International Peer-Reviewed Journal - Institutional Faculty Member) | en_US
dc.identifier.scopusquality | Q1 | -
item.openairetype | Article | -
item.languageiso639-1 | en | -
item.grantfulltext | none | -
item.fulltext | No Fulltext | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
item.cerifentitytype | Publications | -
crisitem.author.dept | 02.7. Department of Mechanical Engineering | -
Appears in Collections:Makine Mühendisliği Bölümü / Department of Mechanical Engineering
Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collection
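
The abstract above describes a two-stage pipeline: two parallel convolutional encoder-decoder networks trained with MSE and MAE losses, whose a priori predictions are concatenated and rectified by a second encoder-decoder trained with binary cross-entropy. The following is a minimal sketch of that data flow, not the authors' code, assuming PyTorch; the network depth, channel counts, input size, and the EncoderDecoder and training_step names are illustrative assumptions, and the paper's additional loss terms for suppressing structural disconnections are not reproduced here.

# Minimal sketch (illustrative only) of a two-stage convolutional
# encoder-decoder pipeline as described in the abstract, assuming PyTorch.
import torch
import torch.nn as nn


class EncoderDecoder(nn.Module):
    """A small 2D convolutional encoder-decoder (hypothetical layout)."""

    def __init__(self, in_ch=1, out_ch=1, base=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(base, out_ch, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


# Stage 1: two parallel networks with the same architecture but different losses.
stage1_mse = EncoderDecoder()
stage1_mae = EncoderDecoder()
# Stage 2: rectifier network that takes both stage-1 predictions as input channels.
stage2 = EncoderDecoder(in_ch=2)

mse_loss, mae_loss = nn.MSELoss(), nn.L1Loss()
bce_loss = nn.BCEWithLogitsLoss()  # BCE on the final (near-binary) density map


def training_step(x, target, optimizers):
    """One illustrative update: stage-1 networks first, then the stage-2 rectifier."""
    opt1_mse, opt1_mae, opt2 = optimizers

    # Stage 1: parallel MSE- and MAE-trained predictions of the optimal topology.
    pred_mse = stage1_mse(x)
    pred_mae = stage1_mae(x)
    opt1_mse.zero_grad(); mse_loss(pred_mse, target).backward(); opt1_mse.step()
    opt1_mae.zero_grad(); mae_loss(pred_mae, target).backward(); opt1_mae.step()

    # Stage 2: rectify the a priori predictions with a BCE-trained network.
    priors = torch.cat([pred_mse.detach(), pred_mae.detach()], dim=1)
    logits = stage2(priors)
    opt2.zero_grad(); bce_loss(logits, target).backward(); opt2.step()
    return torch.sigmoid(logits)  # final topology prediction in [0, 1]


if __name__ == "__main__":
    # Toy usage on random 64x64 fields (purely illustrative, not the paper's dataset).
    x = torch.rand(4, 1, 64, 64)
    target = (torch.rand(4, 1, 64, 64) > 0.5).float()
    opts = [torch.optim.Adam(m.parameters(), lr=1e-3)
            for m in (stage1_mse, stage1_mae, stage2)]
    out = training_step(x, target, opts)
    print(out.shape)  # torch.Size([4, 1, 64, 64])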
Scopus citations: 3 (checked on Dec 21, 2024)
Web of Science citations: 43 (checked on Dec 21, 2024)
Page view(s): 192 (checked on Dec 16, 2024)

Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.