Recent research from Google and LaminiAI highlights both the advances and the challenges in reducing hallucinations in large language models (LLMs). A new study from Google Research, led by Zorik Gekhman, finds that LLMs acquire genuinely new facts through fine-tuning more slowly than facts consistent with their pre-training, and that learning this new knowledge increases the risk of generating incorrect facts. In contrast, LaminiAI has introduced an approach called Memory Tuning, which it claims achieves 95% accuracy and reduces hallucinations tenfold. The method trains millions of LoRAs (Low-Rank Adapters), each encoding specific facts, and retrieves the most relevant ones at inference time. LaminiAI has released the Lamini-1 model weights on Huggingface and backs its claims with a Fortune 500 customer case study.
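The mechanism described above lends itself to a short sketch: one small low-rank adapter per fact, an index used to find the adapters most relevant to a query, and the retrieved adapters merged into the base model before generating. The Python below is purely illustrative and assumes made-up names, shapes, and a toy embedding (FactAdapter, retrieve_top_adapters, embed); it is not Lamini's actual implementation or API.

```python
# Hypothetical sketch of the many-adapter idea: each fact gets its own tiny
# low-rank (LoRA-style) adapter, an index maps a query to the most relevant
# adapters, and the selected adapters are merged into the base model's weights
# at inference time. All names and shapes here are illustrative.
import numpy as np

RANK, DIM = 4, 512  # made-up adapter rank and hidden dimension


def embed(text: str) -> np.ndarray:
    """Toy embedding; a real system would use a sentence encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(DIM)
    return v / np.linalg.norm(v)


class FactAdapter:
    """One low-rank weight update trained to store a single fact."""

    def __init__(self, fact_text: str):
        self.fact_text = fact_text
        self.A = np.random.randn(DIM, RANK) * 0.01  # stand-in for trained weights
        self.B = np.random.randn(RANK, DIM) * 0.01
        self.embedding = embed(fact_text)  # used for retrieval

    def delta(self) -> np.ndarray:
        return self.A @ self.B  # low-rank update to a base weight matrix


def retrieve_top_adapters(query: str, adapters: list[FactAdapter], n: int = 3):
    """Pick the n adapters whose stored facts are most similar to the query."""
    q = embed(query)
    ranked = sorted(adapters, key=lambda a: float(q @ a.embedding), reverse=True)
    return ranked[:n]


def merged_update(query: str, adapters: list[FactAdapter], n: int = 3) -> np.ndarray:
    """Combine the retrieved adapters into one weight delta for the base model."""
    return sum(a.delta() for a in retrieve_top_adapters(query, adapters, n))


# Example: two memorized facts, one query routed to the single best adapter.
adapters = [FactAdapter("The Lamini-1 model weights are hosted on Huggingface."),
            FactAdapter("Lamini reports 95% accuracy with Memory Tuning.")]
update = merged_update("Where are the Lamini-1 weights hosted?", adapters, n=1)
```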
Huge if true! Basically, the way they significantly reduce hallucination is by tuning millions of expert adapters (e.g., LoRAs) to learn exact facts and retrieve them from an index at inference time. https://t.co/rDyNpky5VT
A buzzy process called retrieval augmented generation, or RAG, is taking hold in Silicon Valley and improving the outputs from large language models. How does it work? I asked the experts: https://t.co/XYiaVl8K7E
In addition... Lamini, a new way to embed facts into LLMs that improves factual accuracy and reduces hallucinations, claims 95% accuracy compared to 50% with other approaches and hallucinations reduced from 50% to 5% https://t.co/L9MTyn9sAn https://t.co/CVWGrpetLN
RAG combines real-time data retrieval with LLMs to reduce AI hallucinations by ensuring context-specific accuracy and is being refined for better scalability and efficiency. https://t.co/E2y3es3JwP
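For contrast with Memory Tuning, the basic RAG loop these posts describe can be sketched in a few lines: retrieve passages relevant to the question, then hand them to the model as context so the answer is grounded in retrieved text rather than parametric memory alone. Everything in this sketch is a placeholder (the two-line corpus, the word-overlap retriever, and the generate function); a real system would use a vector store and an LLM client.

```python
# Minimal retrieval-augmented generation loop: fetch passages relevant to the
# question, then ground the model's answer in them. The corpus, the word-overlap
# retriever, and generate() are placeholders for a vector store and an LLM client.
CORPUS = [
    "Lamini Memory Tuning trains many small adapters on specific facts.",
    "Retrieval augmented generation supplies source documents at query time.",
]


def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank passages by word overlap with the question."""
    q_words = set(question.lower().split())
    return sorted(corpus,
                  key=lambda p: len(q_words & set(p.lower().split())),
                  reverse=True)[:k]


def generate(prompt: str) -> str:
    """Placeholder for a call to any LLM API."""
    return f"[model response grounded in]\n{prompt}"


def answer(question: str) -> str:
    context = "\n".join(retrieve(question, CORPUS))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)


print(answer("How does retrieval augmented generation work?"))
```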
🤖🇺🇸 Fight AI Hallucinations with This Clever Software Hack! Developers are turning to Retrieval Augmented Generation (RAG), a technique that could make generative AI tools far more reliable. Here's how it's done! https://t.co/JGJVz1MYMo
Tech enthusiasts, buckle up! 🚀 The rise of RAG (retrieval augmented generation) in Silicon Valley is revolutionizing large language models. 🤖✨ #AI #RAG #TechInnovation https://t.co/JOAu606mSE
How Retrieval Augmented Generation (RAG) makes LLMs smarter than before https://t.co/Vx1NyACnCX #bigdata, #datascience, #ds, inoreader
Introducing Lamini Memory Tuning: 95% LLM Accuracy, 10x Fewer Hallucinations https://t.co/FqfPp8xxri
https://t.co/ieRsvT8eHP introduces Memory Tuning: 95% LLM Accuracy, 10x Fewer Hallucinations https://t.co/l9L814Kr9g
Something new and different: "Lamini Memory Tuning": - Train millions of LoRAs for specific facts - Retrieve top N for a query at inference time - Combine and run. This apparently significantly reduces hallucinations. Great for limited/scoped domains. https://t.co/LNxJOcd3qY https://t.co/R77MXPKPpR
Tried an open LLM lately? They're not so great at the facts. We could use RAG, or fine-tuning, but is there another way? @LaminiAI thinks so, with a new Memory Tuning strategy. By creating a 1-million-way-expert tuned model, hallucinations are reduced by 10x https://t.co/vcfYrczyCw https://t.co/iNlOIbnppo
RAG is not enterprise ready for LLM/Generative AI applications. This is not surprising for those of us who work with it. Yet, the hype machine goes on. Key findings, critical reflections, and suggestions based on this work 🧵⤵️ 1/4 https://t.co/7BTgaCmZXk
really interesting novel approach to reduce LLM hallucinations without fine tuning and loss of generality: train a mixture of a million experts, each LoRA’d with a fact that you want to memorize. https://t.co/mrVIftv0lA
New AI research on removing hallucinations from LLMs: "Banishing LLM Hallucinations Requires Rethinking Generalization" by @LaminiAI Full paper: https://t.co/oCNJCenYRN Lamini-1 LLM weights: https://t.co/fYgW8UZnK9 tl;dr: https://t.co/Dwc4o3fv32 https://t.co/644je5Ac04
Excited to announce @LaminiAI Memory Tuning, a new research breakthrough! 🎉 ◽95%+ accuracy, cutting hallucinations by 10x ◽Turns any open LLM into a 1M-way adapter MoE (paper & Lamini-1 model weights on @Huggingface) ◽Fortune 500 customer case study on how they memory-tuned…
Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations? ◼ New study reveals large language models struggle to learn new facts via fine-tuning, acquiring them more slowly than knowledge already present from pre-training. Learning this new knowledge also increases the risk of generating incorrect facts. Findings suggest… https://t.co/HHx9OIkwkV
New paper 📢 Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations? by Zorik Gekhman et al at Google Research 🧵 https://t.co/PDwt6mb5ei
A great deal of effort is going into reducing “hallucinations” for LLMs. This is mostly a mistake. How sensitive a use case is to “hallucinations” varies inversely with how good a fit it is for LLMs.
Hallucination in Large Language Models (LLMs) and Its Causes https://t.co/hFMFMPmGhC