
Can Chainlink Solve the AI Hallucination Problem?

Chainlink has introduced a new method for tackling the problem of AI hallucinations, in which large language models (LLMs) generate incorrect or misleading information.

Laurence Moroney, Chainlink Advisor and former AI Lead at Google, explained how Chainlink reduces AI errors by using multiple AI models instead of relying on just one.

Chainlink’s approach to tackling AI hallucinations. Source: X

Chainlink needed AI to analyze corporate actions and convert them into a structured, machine-readable format: JSON.

Instead of trusting a single AI model’s response, the team used multiple large language models (LLMs) and gave them different prompts to process the same information. For this, Chainlink uses AI models from providers such as OpenAI, Google, and Anthropic.

The AI models generated different responses, which were then compared. If all or most of the models produced the same result, it was considered more reliable. This process reduces the risk of relying on a single, potentially flawed AI-generated response.
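The article does not publish Chainlink’s actual implementation, but the majority-vote idea it describes can be sketched in a few lines. The function name, threshold, and simulated model outputs below are illustrative assumptions, not Chainlink’s code:

```python
import json
from collections import Counter

def consensus(responses, threshold=2):
    """Return the JSON result that at least `threshold` model responses
    agree on, or None if no quorum is reached."""
    # Canonicalize each response so key ordering and whitespace
    # differences don't break the comparison.
    canonical = [json.dumps(json.loads(r), sort_keys=True) for r in responses]
    winner, votes = Counter(canonical).most_common(1)[0]
    return json.loads(winner) if votes >= threshold else None

# Hypothetical outputs from three differently prompted models
# asked to structure the same corporate action as JSON.
responses = [
    '{"action": "dividend", "amount": 0.25, "currency": "USD"}',
    '{"currency": "USD", "action": "dividend", "amount": 0.25}',
    '{"action": "dividend", "amount": 2.5, "currency": "USD"}',  # outlier
]

result = consensus(responses)  # two of three models agree
```

In this sketch, two of the three simulated responses match once canonicalized, so their shared JSON wins the vote; raising the threshold to require unanimity would instead flag the disagreement.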

Once a consensus is reached, the validated information is recorded on the blockchain, ensuring transparency, security, and immutability.

This approach was successfully tested in a collaborative project with UBS, Franklin Templeton, Wellington Management, Vontobel, Sygnum Bank, and other financial institutions, demonstrating its potential to reduce errors in financial data processing.

By combining AI with blockchain, Chainlink’s strategy enhances the reliability of AI-generated information in finance, setting a precedent for improving data accuracy in other industries as well.

Also Read: Aptos Adopts Chainlink Standard for Secure Data Feeds
