Google’s ChatGPT rival is trained by workers who are under pressure to audit AI answers in as little as 3 minutes, documents show

Google’s Bard is reportedly trained by thousands of contractors under pressure to review answers generated by the AI chatbot in as little as three minutes.

The accuracy of Google’s rival to OpenAI’s ChatGPT depends on contractors at companies such as Appen and Accenture, who are given minimal training and earn as little as $14 an hour, Bloomberg reported, citing several contractors. The workers requested anonymity, the publication added.

Google first announced Bard in February, after the launch of ChatGPT put the company on high alert. The OpenAI chatbot accumulated 100 million users within two months and posed a direct threat to Google’s search business as Microsoft poured billions into OpenAI.

Chatbots like Bard and ChatGPT rely on underlying large language models to generate responses, but humans also review those responses to ensure they’re reliable and accurate.

However, the workload of the humans reviewing Bard’s responses has grown larger and more complex, Bloomberg reported, citing internal documents and six contractors.

The instruction documents were published by Bloomberg and reviewed by Insider.

“As it stands right now, people are scared, stressed, underpaid, don’t know what’s going on,” one contractor told Bloomberg. “And that culture of fear is not conducive to getting the quality and the teamwork that you want out of all of us.”

The report shines a light on how seriously Google is taking the threat from OpenAI, as an AI arms race accelerates between the two companies, each hoping to take the lead in rolling out AI to the world.

The contractors’ tasks often include rating responses on their “helpfulness,” on a scale from “not at all helpful” to “extremely helpful,” weighing factors such as how up-to-date a response is.

In a statement, a Google spokesperson told Insider: “Connecting people to high-quality information is core to our mission. We undertake extensive work to build our AI products responsibly, including rigorous testing, training, and feedback processes we’ve honed for years to emphasize factuality and reduce biases.

“Human evaluation – from individuals internal and external to Google – is one of many approaches we use to improve our products.”

The spokesperson added that “ratings don’t directly impact the output of our models and they are by no means the only way we promote accuracy.”

“Teams across Google with specialized skill sets – from engineering, to user experience, to trust and safety experts – use a range of techniques to build these products and continuously improve their quality and accuracy,” the spokesperson said.

Appen and Accenture did not immediately respond to Insider’s request for comment.
