View item

Hallucinations in Large Language Models (LLMs)

Date
2024
Authors
Pradeep Reddy, G.
Pavan Kumar, Y. V.
Purna Prakash, K.
Abstract
Recent advancements in neural network architectures, particularly transformers, have played a crucial role in the rapid progress of Large Language Models (LLMs). By training their vast numbers of parameters on enormous amounts of text data, LLMs learn to generate responses to a wide variety of prompts. These models have enabled machines to produce new, human-like content, driving significant developments in Natural Language Processing (NLP). Despite their impressive performance, LLMs occasionally generate hallucinatory responses containing nonsensical or inaccurate information. In simple terms, a hallucination occurs when the model produces information that sounds plausible but is actually wrong: it may fabricate details or go beyond what it has learned from the training data, resulting in inaccurate output. Such responses appear authentic but lack grounding in reality, and the fabrications can include facts, events, or statements unsupported by real-world data. Addressing this issue is important for improving the reliability of AI-generated content, as hallucinations pose a significant challenge in critical applications such as healthcare and law. In this view, this paper delves into the phenomenon of hallucinations in the context of LLMs. The objective is to understand the causes, explore the implications, and discuss potential strategies for mitigation.
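
The abstract refers to mitigation strategies without detailing them. As a purely illustrative aside, and not a method taken from this paper, the sketch below shows one common idea from the wider literature: sampling the model several times and treating low self-consistency between answers as a possible hallucination signal. The generate callable, the sample count, and the agreement threshold are all assumptions made for the example.

# Hypothetical self-consistency check: low agreement across repeated
# samples is treated as a weak hallucination signal. Illustrative only.
from collections import Counter
from typing import Callable

def consistency_check(generate: Callable[[str], str],
                      prompt: str,
                      n_samples: int = 5,
                      agreement_threshold: float = 0.6) -> tuple[str, bool]:
    """Sample the model n_samples times; flag the majority answer as a
    suspected hallucination when agreement falls below the threshold."""
    answers = [generate(prompt).strip().lower() for _ in range(n_samples)]
    answer, count = Counter(answers).most_common(1)[0]
    agreement = count / n_samples
    return answer, agreement < agreement_threshold

if __name__ == "__main__":
    # Dummy stand-in for an LLM call, used only to make the sketch runnable.
    dummy = lambda prompt: "Vilnius is the capital of Lithuania."
    answer, flagged = consistency_check(dummy, "What is the capital of Lithuania?")
    print(answer, "| suspected hallucination:", flagged)

This is only one of several mitigation directions (others include retrieval grounding and post-hoc fact checking); the paper itself should be consulted for the strategies it actually discusses.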
Publication date (year)
2024
Author
Pradeep Reddy, G.
URI
https://etalpykla.vilniustech.lt/handle/123456789/159645
Collections
  • 2024 International Conference "Electrical, Electronic and Information Sciences" (eStream) [41]