Researchers in China developed a hallucination correction engine for AI models
The USTC/Tencent team developed a tool called "Woodpecker" that they claim is capable of correcting hallucinations in multi-modal large language models (MLLMs). According to the team's pre-print research paper, Woodpecker uses three separate AI models, apart from the MLLM being corrected for hallucinations, to perform hallucination correction. Together, these models act as critics to identify hallucinations and instruct the model being corrected to regenerate its output in line with its data.
A team of scientists from the University of Science and Technology of China and Tencent's YouTu Lab has developed a tool to combat "hallucination" by artificial intelligence (AI) models.

Hallucination is the tendency of an AI model to generate outputs with a high level of confidence that are not grounded in the information present in its training data. The problem pervades large language model (LLM) research, and its effects can be seen in models such as OpenAI's ChatGPT and Anthropic's Claude.

The USTC/Tencent team developed a tool called "Woodpecker" that they claim is capable of correcting hallucinations in multi-modal large language models (MLLMs). This subset of AI involves models such as GPT-4 (particularly its visual variant, GPT-4V) and other systems that roll vision and/or other processing into the generative AI modality alongside text-based language modelling.

According to the team's pre-print research paper, Woodpecker uses three separate AI models, apart from the MLLM being corrected for hallucinations, to perform hallucination correction. These include GPT-3.5 Turbo, Grounding DINO and BLIP-2-FlanT5. Together, these models act as critics to identify hallucinations and instruct the model being corrected to regenerate its output in line with its data.

In each of the above examples, an LLM hallucinates an incorrect answer (green background) to the prompt (blue background). The corrected "Woodpecker" responses are shown with a red background. (Image source: Yin, et al., 2023)

To correct hallucinations, the AI models powering Woodpecker use a five-stage process that involves "key concept extraction, question formulation, visual knowledge validation, visual claim generation, and hallucination correction."

The researchers claim these techniques provide additional transparency and "a 30.66%/24.33% improvement in accuracy over the baseline MiniGPT-4/mPLUG-Owl." They evaluated numerous "off-the-shelf" MLLMs using their method and concluded that Woodpecker could be "easily integrated into other MLLMs."

Related: Humans and AI often prefer sycophantic chatbot answers to the truth — Study

An evaluation version of Woodpecker is available on Gradio Live, where anyone curious can check out the tool in action.
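For readers curious how such a critic-driven loop might hang together, below is a minimal, self-contained Python sketch of the five stages. It is not the authors' code: the calls to GPT-3.5 Turbo, Grounding DINO and BLIP-2-FlanT5 are replaced with toy stand-ins, and every function name and data structure here is hypothetical, so only the control flow mirrors the pipeline described in the paper.

```python
"""Toy sketch of a Woodpecker-style post-hoc correction loop.

Assumption: the real pipeline delegates each stage to GPT-3.5 Turbo,
Grounding DINO and BLIP-2-FlanT5; here those calls are replaced with
simple stand-ins so the five-stage control flow is easy to follow.
"""

from typing import Dict, List


def extract_key_concepts(answer: str, vocabulary: List[str]) -> List[str]:
    # Stage 1: key concept extraction -- find the objects the answer mentions.
    return [w for w in vocabulary if w in answer.lower()]


def formulate_questions(concepts: List[str]) -> List[str]:
    # Stage 2: question formulation -- one verification question per concept.
    return [f"Is there a {c} in the image?" for c in concepts]


def validate_visual_knowledge(questions: List[str],
                              detections: Dict[str, int]) -> Dict[str, bool]:
    # Stage 3: visual knowledge validation -- stand-in for the detector/VQA models.
    results = {}
    for q in questions:
        concept = q.removeprefix("Is there a ").removesuffix(" in the image?")
        results[concept] = detections.get(concept, 0) > 0
    return results


def generate_visual_claims(validated: Dict[str, bool]) -> List[str]:
    # Stage 4: visual claim generation -- keep only statements grounded in the image.
    return [f"The image contains a {c}." for c, present in validated.items() if present]


def correct_hallucinations(answer: str, validated: Dict[str, bool]) -> str:
    # Stage 5: hallucination correction -- flag concepts the image does not support.
    unsupported = [c for c, present in validated.items() if not present]
    if not unsupported:
        return answer
    return answer + " [Correction: no " + ", ".join(unsupported) + " is visible in the image.]"


if __name__ == "__main__":
    vocabulary = ["dog", "frisbee", "cat"]
    detections = {"dog": 1}  # pretend the detector only found a dog
    answer = "A dog is catching a frisbee in the park."

    concepts = extract_key_concepts(answer, vocabulary)
    questions = formulate_questions(concepts)
    validated = validate_visual_knowledge(questions, detections)
    claims = generate_visual_claims(validated)
    print(correct_hallucinations(answer, validated))
```

Running the example prints the original sentence with a correction appended for the unsupported "frisbee" claim, illustrating how critic models can force a regenerated, grounded answer without retraining the MLLM itself.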