TECHNOLOGY

A new report alleges a change in Google’s policies might make Gemini less accurate

A new report alleges that some of the internal evaluation policies for Google’s generative AI chatbot, Gemini, could lead to less accurate responses. Google is allegedly making contractors (the people who evaluate the model’s output) rate Gemini’s responses on topics they are not qualified in.

Training an AI chatbot is a complex process. It’s not simply a matter of feeding data into the model: the data has to meet certain requirements, such as being properly structured, for the AI to be able to learn from it. On top of that, hundreds or even thousands of people evaluate the quality of the generated responses to keep wrong answers to a minimum.
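For illustration, here’s a minimal sketch of what a single human-rating record in an evaluation pipeline like this might look like. Everything in it (field names, the rating scale, the structure) is a hypothetical assumption; Google’s actual internal tooling is not public.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record a rater might submit for one model response.
# All field names and the 1-5 scale are illustrative assumptions.
@dataclass
class RatingRecord:
    prompt: str                  # the user prompt shown to the rater
    response: str                # the model's generated answer
    score: int                   # e.g. 1 (poor) to 5 (excellent)
    rater_has_expertise: bool    # does the rater know this domain?
    note: Optional[str] = None   # free-text note, e.g. "outside my expertise"

# Example: a rater scoring a response on a topic outside their expertise.
record = RatingRecord(
    prompt="What is a safe dosage of medication X?",
    response="(model-generated answer)",
    score=3,
    rater_has_expertise=False,
    note="I don't have expertise in this area.",
)
```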

However, a report from TechCrunch alleges that Google’s guidelines for rating Gemini responses fall short. Previously, contractors reportedly had the option to skip an answer if they were unqualified to verify its accuracy. Now, Google reportedly no longer lets them skip such answers, even when they lack the knowledge needed to verify them. Instead, contractors are required to rate the parts of the prompt they do understand, even if the prompt as a whole is outside their competence, and to leave a note stating that they don’t have sufficient expertise in the area. There are reportedly still two exceptions where contractors may skip a response: when key information is missing and the response is incomprehensible, and when the response contains potentially harmful content.
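To make the reported change concrete, here’s a rough sketch of the skip logic as the report describes it. This is purely an illustration of the described rules, not Google’s actual guidelines.

```python
# Hypothetical sketch of the reported old vs. new skip rules.

def could_skip_before(rater_has_expertise: bool) -> bool:
    # Previously, raters could reportedly skip any response
    # they weren't qualified to verify.
    return not rater_has_expertise

def can_skip_now(key_info_missing: bool, potentially_harmful: bool) -> bool:
    # Under the reported new policy, skipping is allowed only when
    # the response is incomprehensible because key info is missing,
    # or when it contains potentially harmful content. Lacking
    # expertise is no longer a valid reason; the rater must score
    # the parts they understand and leave a note instead.
    return key_info_missing or potentially_harmful
```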
 
Of course, the alleged new policies raise concerns about Gemini’s accuracy. That’s especially worrying when people turn to Gemini for advice on their health.


At this point, Google has not commented on the matter. It’s always possible the company has tweaked other policies to ensure accuracy.

Personally, I think generative AI has a lot more evolving to do before I’d trust it with health advice. I’ve used several models so far, including ChatGPT and Microsoft’s Copilot, and although I love the tech, I still wouldn’t trust it 100%, especially when it comes to important things like health questions.


