
Hallucinations in Large Language Models (LLMs)

Date
2024
Author
Pradeep Reddy, G.
Pavan Kumar, Y. V.
Purna Prakash, K.
Abstract
The recent advancements in neural network architectures, particularly transformers, have played a crucial role in the rapid progress of Large Language Models (LLMs). By training their many parameters on vast amounts of text data, LLMs learn to generate responses to a wide variety of prompts. These models have enabled machines to produce new, human-like content, driving significant developments in Natural Language Processing (NLP). Despite their impressive performance, LLMs occasionally generate hallucinatory responses containing nonsensical or inaccurate information. In simple terms, a hallucination occurs when the model generates information that sounds believable but is actually wrong: the model may fabricate details or go beyond what it has learned from the training data, resulting in inaccurate output. Such hallucinatory responses appear authentic but lack grounding in reality, and can include fabricated facts, events, or statements with no support in real-world data. Addressing this issue is important for improving the reliability of AI-generated content, and hallucinations pose a significant challenge in critical applications such as healthcare and law. In this view, this paper delves into the phenomenon of hallucinations in the context of LLMs. The objective is to understand their causes, explore their implications, and discuss potential strategies for mitigation.
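
The abstract mentions mitigation strategies without detailing them on this record page. As an illustration only, and not a method taken from the paper, the sketch below shows one commonly discussed signal for spotting possible hallucinations: sample several answers to the same prompt and flag the prompt when the answers disagree (a simple self-consistency check). The `generate_response` callable is a hypothetical placeholder for whatever LLM interface is available, and the similarity measure and threshold are assumptions chosen purely for demonstration.

```python
# Minimal self-consistency sketch (illustrative, not from the paper).
# Idea: if repeated samples for the same prompt diverge strongly, the
# answer is more likely to be fabricated and should be reviewed.

from difflib import SequenceMatcher
from typing import Callable, List


def pairwise_agreement(answers: List[str]) -> float:
    """Average string similarity across all pairs of sampled answers."""
    if len(answers) < 2:
        return 1.0
    scores = []
    for i in range(len(answers)):
        for j in range(i + 1, len(answers)):
            scores.append(SequenceMatcher(None, answers[i], answers[j]).ratio())
    return sum(scores) / len(scores)


def flag_possible_hallucination(
    prompt: str,
    generate_response: Callable[[str], str],  # placeholder LLM call (assumption)
    n_samples: int = 5,
    agreement_threshold: float = 0.6,  # assumed cutoff, tune per application
) -> bool:
    """Return True when sampled answers diverge enough to warrant human review."""
    answers = [generate_response(prompt) for _ in range(n_samples)]
    return pairwise_agreement(answers) < agreement_threshold
```

A string-similarity measure is only a crude proxy for semantic agreement; published work in this area typically replaces it with an entailment or embedding-based comparison, but the overall sample-and-compare structure is the same.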
URI
https://etalpykla.vilniustech.lt/handle/123456789/159645
Collections
  • 2024 International Conference "Electrical, Electronic and Information Sciences" (eStream) [41]