Simple item record

dc.rights.license: Visos teisės saugomos / All rights reserved
dc.contributor.author: Pradeep Reddy, G.
dc.contributor.author: Pavan Kumar, Y. V.
dc.contributor.author: Purna Prakash, K.
dc.date.accessioned: 2026-01-02T07:54:43Z
dc.date.available: 2026-01-02T07:54:43Z
dc.date.issued: 2024
dc.identifier.isbn: 9798350352429
dc.identifier.issn: 2831-5634
dc.identifier.uri: https://etalpykla.vilniustech.lt/handle/123456789/159645
dc.description.abstract: Recent advances in neural network architectures, particularly transformers, have played a crucial role in the rapid progress of Large Language Models (LLMs). LLMs contain a very large number of parameters; by training these parameters on vast amounts of text data, the models learn to generate responses to a wide variety of prompts. They have enabled machines to produce new, human-like content, driving significant developments in Natural Language Processing (NLP). Despite this impressive performance, LLMs occasionally generate hallucinatory responses containing nonsensical or inaccurate information. In simple terms, a hallucination occurs when the model produces output that sounds believable but is actually wrong: it fabricates details or goes beyond what it has learned from the training data, yielding inaccurate output. Such responses appear authentic but lack grounding in reality, and may include fabricated facts, events, or statements unsupported by real-world data. Addressing this issue is important for the reliability of AI-generated content, and hallucinations pose a significant challenge in critical applications such as healthcare and law. In this view, this paper examines the phenomenon of hallucinations in the context of LLMs. The objective is to understand their causes, explore their implications, and discuss potential mitigation strategies.
dc.format.extent: 6 p.
dc.format.medium: Tekstas / Text
dc.language.iso: en
dc.relation.uri: https://etalpykla.vilniustech.lt/handle/123456789/159404
dc.source.uri: https://ieeexplore.ieee.org/document/10542617
dc.subject: hallucinations
dc.subject: large language models (LLMs)
dc.subject: large language model operations (LLMOps)
dc.subject: nonsensical
dc.subject: natural language processing (NLP)
dc.subject: transformers
dc.title: Hallucinations in Large Language Models (LLMs)
dc.type: Konferencijos publikacija / Conference paper
dcterms.accrualMethod: Rankinis pateikimas / Manual submission
dcterms.issued: 2024-06-05
dcterms.references: 21
dc.description.version: Taip / Yes
dc.contributor.institution: Kookmin University
dc.contributor.institution: VIT-AP University
dc.contributor.institution: Koneru Lakshmaiah Education Foundation
dcterms.sourcetitle: 2024 IEEE Open Conference of Electrical, Electronic and Information Sciences (eStream), April 25, 2024, Vilnius, Lithuania
dc.identifier.eisbn: 9798350352412
dc.identifier.eissn: 2690-8506
dc.publisher.name: IEEE
dc.publisher.country: United States of America
dc.publisher.city: New York
dc.identifier.doi: https://doi.org/10.1109/eStream61684.2024.10542617
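
The abstract above mentions potential strategies for mitigating hallucinations. As a minimal, illustrative sketch only (not the method described in the paper), one common grounding idea is to compare a model's answer against retrieved reference text and flag sentences with little lexical support. All function names and the overlap threshold below are hypothetical.

    # Illustrative hallucination check: flag answer sentences that share little
    # vocabulary with retrieved reference text. The threshold and all names are
    # hypothetical examples, not taken from the paper.

    def token_overlap(claim: str, reference: str) -> float:
        """Fraction of the claim's longer words that also appear in the reference."""
        claim_words = {w.lower().strip(".,") for w in claim.split() if len(w) > 3}
        ref_words = {w.lower().strip(".,") for w in reference.split()}
        if not claim_words:
            return 1.0
        return len(claim_words & ref_words) / len(claim_words)

    def flag_possible_hallucinations(answer: str, references: list[str],
                                     threshold: float = 0.5) -> list[str]:
        """Return answer sentences that no reference supports above the threshold."""
        flagged = []
        for sentence in (s.strip() for s in answer.split(".") if s.strip()):
            support = max((token_overlap(sentence, ref) for ref in references),
                          default=0.0)
            if support < threshold:
                flagged.append(sentence)
        return flagged

    if __name__ == "__main__":
        refs = ["The eStream 2024 conference was held on April 25, 2024 in Vilnius, Lithuania."]
        answer = "The conference took place in Vilnius. It attracted two million attendees."
        # Only the unsupported claim about attendance is flagged.
        print(flag_possible_hallucinations(answer, refs))

This lexical check is only a toy stand-in for grounding; mitigation approaches discussed in the literature include retrieval-augmented generation, self-consistency checks, and human feedback.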


Files in this item


There are no files associated with this item.
