AI – Next RAG Frontier: Self-Improving Retrieval via Unifying Explicit and Implicit User Signals

Authors

  • Sarthak BHATT Cyquent Inc., Rockville, MD, USA
  • Atif Farid MOHAMMAD Capitol Technology University, Laurel, MD, USA

Abstract

This paper introduces FeedbackRAG, a model-agnostic framework that improves Retrieval-Augmented Generation (RAG) quality through continuous user feedback. The system unifies explicit signals, such as helpfulness ratings and hallucination flags, with implicit sentiment-based cues derived from chat interactions. A three-loop mechanism drives improvement: Loop A applies real-time bias updates to retrieved chunks using a decay-weighted confidence model; Loop B aggregates feedback to train a reranker and fine-tune embeddings through contrastive learning; and Loop C governs the generator by tightening prompts or abstaining when hallucination risk is detected. The framework supports any embedding–LLM combination; our experiments use All-MiniLM for retrieval and Claude for generation. Results show that unified explicit-implicit feedback significantly improves retrieval relevance, citation precision, and factual accuracy, establishing FeedbackRAG as a scalable approach to self-improving, human-aligned RAG systems.
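To make the decay-weighted confidence idea behind Loop A concrete, the sketch below shows one plausible reading: each retrieved chunk accumulates feedback events (positive for helpfulness ratings, negative for hallucination flags, fractional for implicit sentiment), and a bias score is computed as an exponentially decayed average so that recent feedback outweighs stale feedback. The class name, half-life parameterization, and exact formula are illustrative assumptions, not the paper's specification.

```python
import time


class ChunkBias:
    """Illustrative sketch (not the paper's implementation) of a
    decay-weighted confidence model for per-chunk feedback bias."""

    def __init__(self, half_life_s=86_400.0):
        # Assumed: feedback influence halves every `half_life_s` seconds.
        self.half_life_s = half_life_s
        self.events = {}  # chunk_id -> list of (timestamp, signal in [-1, 1])

    def record(self, chunk_id, signal, ts=None):
        """Record a feedback event: +1 helpful, -1 hallucination flag,
        or a fractional value for an implicit sentiment cue."""
        t = ts if ts is not None else time.time()
        self.events.setdefault(chunk_id, []).append((t, signal))

    def bias(self, chunk_id, now=None):
        """Exponentially decayed average of a chunk's feedback signals;
        0.0 for chunks with no feedback yet."""
        now = now if now is not None else time.time()
        evts = self.events.get(chunk_id, [])
        if not evts:
            return 0.0
        num = den = 0.0
        for ts, signal in evts:
            weight = 0.5 ** ((now - ts) / self.half_life_s)
            num += weight * signal
            den += weight
        return num / den
```

A retriever could then add (or multiply in) `bias(chunk_id)` when scoring candidate chunks, so flagged chunks are demoted in real time while the slower Loops B and C retrain the reranker and govern the generator.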

Published

2025-11-25