Show simple item record

dc.rights.license: Visos teisės saugomos / All rights reserved
dc.contributor.author: Yushchenko, Artur
dc.contributor.author: Smelyakov, Kirill
dc.contributor.author: Chupryna, Anastasiya
dc.date.accessioned: 2026-01-09T11:32:31Z
dc.date.available: 2026-01-09T11:32:31Z
dc.date.issued: 2025
dc.identifier.isbn: 9798331598747
dc.identifier.issn: 2831-5634
dc.identifier.uri: https://etalpykla.vilniustech.lt/handle/123456789/159709
dc.description.abstract: This paper explores three prominent deep learning architectures — Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Vision Transformers (ViT) — for emotion recognition, examining their potential strengths and weaknesses under various conditions. It discusses how each approach may capture critical spatial, temporal, or global features in emotional data, highlighting differences in feature extraction, representational capacity, and scalability. Additionally, new solutions are proposed to enhance accuracy and adaptability, integrating design principles that address recognized challenges in real-world implementations. Novel insights are offered on aligning model selection with specific application demands, such as the nature of input signals, available computational resources, and desired real-time performance. While the comparative analysis remains broad to accommodate diverse use cases, it underscores the importance of carefully balancing accuracy and efficiency. Conclusions drawn from the investigation include recommendations on when each architecture may be most advantageous, providing a flexible framework for researchers and practitioners to navigate the trade-offs. These findings have implications for developing adaptive emotion recognition systems that leverage state-of-the-art deep learning techniques across multiple contexts.
dc.format.extent: 6 p.
dc.format.medium: Tekstas / Text
dc.language.iso: en
dc.relation.uri: https://etalpykla.vilniustech.lt/handle/123456789/159405
dc.source.uri: https://ieeexplore.ieee.org/document/11016864
dc.subject: emotion recognition
dc.subject: deep learning
dc.subject: convolutional neural networks
dc.subject: recurrent neural networks
dc.subject: vision transformers
dc.subject: facial expression analysis
dc.title: Evaluating CNN, RNN, and Vision Transformer for Emotion Recognition: Strengths and Weaknesses
dc.type: Konferencijos publikacija / Conference paper
dcterms.accrualMethod: Rankinis pateikimas / Manual submission
dcterms.issued: 2025-06-02
dcterms.references: 21
dc.description.version: Taip / Yes
dc.contributor.institution: Kharkiv National University of Radio Electronics
dcterms.sourcetitle: 2025 IEEE Open Conference of Electrical, Electronic and Information Sciences (eStream), April 24, 2025, Vilnius, Lithuania
dc.identifier.eisbn: 9798331598730
dc.identifier.eissn: 2690-8506
dc.publisher.name: IEEE
dc.publisher.country: United States of America
dc.publisher.city: New York
dc.identifier.doi: https://doi.org/10.1109/eStream66938.2025.11016864


Files in this item

There are no files associated with this item.
