Chatbots can confidently deliver meaningless answers, warns Prabhakar Raghavan, senior vice president of Google and head of its search unit. In an interview with the German publication Welt am Sonntag, he said that chatbots like ChatGPT can be potentially dangerous for users.

He is referring to far-fetched answers that artificial intelligence presents with unwarranted authority. Raghavan equates such behavior with what is known as "hallucination." Minimizing these manifestations, he argues, is the developers' task: they should test the technology at scale so that chatbots return the most accurate answers possible.

Users, in turn, should remain vigilant when relying on information received from an AI and take responsibility for how they use it. This is the only way to maintain society's trust, he concludes.

To recap, Judge Juan Manuel Padilla of Colombia admitted that he used artificial intelligence during a trial. The case concerned insurance coverage for a child with autism spectrum disorder.

Commentary