Google is rushing to take part in the sudden fervor for conversational AI, driven by the pervasive success of rival OpenAI’s ChatGPT. Bard, the company’s new AI experiment, aims to “combine the breadth of the world’s knowledge with the power, intelligence, and creativity of our large language models.” Not short on ambition, Google!
The model, or service, or AI chatbot, however you wish to describe it, was announced in a blog post by CEO Sundar Pichai. He pointedly notes that Google reoriented itself around AI some years back, and that the field's most influential concept, the Transformer, came out of the company's own research in 2017.
“It’s a really exciting time to be working on these technologies as we translate deep research and breakthroughs into products that truly help people,” Pichai writes. It’s hard not to wonder, reading this, how Google managed to get leapfrogged so decisively by OpenAI, which is now synonymous with technologies Google itself pioneered.
The short explanation is that tech moves fast and big companies move slow, and while Google released paper after paper and tried to figure out how to fit AI into its existing business strategies, OpenAI has focused on making the best models and let people figure out their own applications.
Bard shows Google taking a page from that playbook, releasing a “lightweight” version of the model for testing purposes. The model uses Google’s own LaMDA (Language Model for Dialogue Applications) to power a conversational AI that can also draw on information from the web. How exactly it does that is not clear from the blog post, but it appears to stay at least roughly current.
Per the post, Bard can “help explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills.”
Google, of course, maintains the most up-to-date record of web content on Earth, and no doubt Bard will put that information to use, but exactly how it processes and packages it for you and your 9-year-old will only be clear once people start using it.
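Google hasn’t said how Bard bolts web data onto LaMDA, but a common way to keep a language model’s answers current is to retrieve relevant documents at question time and fold them into the prompt. The sketch below shows that general retrieve-then-generate shape only; the function names (`web_search`, `lamda_generate`) are hypothetical stand-ins, not any real Google API, and nothing here should be read as Bard’s actual design.

```python
# A minimal sketch of a retrieve-then-generate loop. The two helpers are
# hypothetical stand-ins: LaMDA has no public API, and the search backend
# here is whatever index you have access to.

def web_search(query: str, top_k: int = 3) -> list[str]:
    """Hypothetical: return the top_k most relevant text snippets for a query."""
    raise NotImplementedError("stand-in for a search backend")

def lamda_generate(prompt: str) -> str:
    """Hypothetical: return a dialogue model's completion for a prompt."""
    raise NotImplementedError("stand-in for a dialogue model")

def answer_with_web_context(question: str) -> str:
    # 1. Pull fresh snippets from the index, so the answer isn't limited
    #    to whatever was in the model's training data.
    snippets = web_search(question)

    # 2. Fold the snippets into the prompt and let the model synthesize
    #    a conversational answer grounded in them.
    context = "\n".join(f"- {s}" for s in snippets)
    prompt = (
        "Answer the question in plain language using the sources below.\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return lamda_generate(prompt)
```

Whether Bard does anything like this, or grounds its answers some other way entirely, is exactly the kind of detail the blog post leaves out.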
The post notes you can also use Bard to “plan a friend’s baby shower,” “compare two Oscar-nominated movies” and “plan a trip to Ecuador.” One can picture how an AI model might do any of these things using the various search results and data firehoses Google has access to, but this experiment will likely be limited to telling you things rather than offering deep integrations with, say, your calendar or airline bookings.
Of course every conversational AI must face the inevitable (these days, almost instant) attempts to bait it into saying something hateful, foolish or embarrassing. Google will surely be recording conversations with users “to make sure Bard’s responses meet a high bar for quality, safety, and groundedness in real-world information.” That last one is clearly a shot across OpenAI’s bow, as well as Meta’s, since the former’s models don’t cite their sources and the latter’s short-lived Galactica famously invented them.
(Update: “In light of recent announcements,” Microsoft has now made public a previously confidential event being held tomorrow in Redmond. The topic has not been officially declared, but it is widely expected to be a Bing-OpenAI tie-up that brings a next-generation language model to Microsoft’s perennially beleaguered search engine. An early version of the features was reportedly spotted and shared by student Owen Lin, but we have been unable to confirm anything from that post.)
AI will be coming to Google Search more directly in the form of several new features “which can help synthesize insights for questions where there is no one right answer. Soon, you’ll see these AI-powered features in Search that distill complex information and multiple viewpoints into easy-to-digest formats, so you can quickly understand the big picture and learn more from the web,” the company said in a separate email. Nuance, but in bullet-point format. Got it.
While no doubt people will ask it variations on the Trolley Problem, the example provided is someone asking “Is piano or guitar easier to learn and how much practice does each need?”
Not an ethically charged query (for most), but also not necessarily one with a simple answer. If, across a hundred articles comparing how quickly people pick up different instruments, there is some consensus about difficulty, along with commonly repeated caveats and tips, Google can simply absorb those and surface them at the top of the search results.
Questions abound: isn’t that just plagiarism? Will sponsored results go above or below, and will they be included and/or promoted within the AI framework? What qualifies as a question with no right answer? Can users customize the results or crawling process?
We may very well learn the answers to these questions at Google’s Search and AI event on Wednesday morning, an event that strangely goes unmentioned in Pichai’s post. You can watch the livestream right here at 6:30 AM Pacific time, or check the front page for more info then.