Behind the generative artificial intelligence responses that Google gives users is an army of people who rate the responses for accuracy. However, a new policy change is raising concerns that Google’s Gemini AI may become more prone to giving inaccurate answers.
It’s potentially a major concern, particularly as Gemini gives automated answers to searches related to healthcare.
TechCrunch reported that Google changed the rules that contractors from the firm GlobalLogic follow when rating AI-generated responses.
Previously, GlobalLogic contractors could skip rating a response for accuracy if they didn’t know much about the subject.
Now, however, Google no longer allows contractors to skip rating a response, even when the subject falls outside their expertise.
TechCrunch saw emails from concerned contractors.
One contractor asked, “I thought the point of skipping was to increase accuracy by giving it to someone better?”
AI models can occasionally "hallucinate," a catch-all term for the plausible-sounding but false information they generate.
For instance, one lawyer was fired after using AI to draft legal briefs that cited court cases that did not exist.
Additionally, an early version of Google's AI-generated search answers responded to a query about keeping cheese from sliding off pizza by suggesting adding glue to the sauce.
“Raters perform a wide range of tasks across many different Google products and platforms,” Google spokeswoman Shira McNamara told TechCrunch. “They do not solely review answers for content, they also provide valuable feedback on style, format, and other factors. The ratings they provide do not directly impact our algorithms.”