
Google built a musical instrument that uses A.I. — and released the plans so you can make your own

Google on Tuesday revealed a synthesizer that uses artificial intelligence to generate new sounds from recordings of real instruments. But there’s a catch: You won’t be able to buy it.

Despite Google’s recent hardware push under executive Rick Osterloh, this thing isn’t like the Pixel phone or the Home speaker, which bring in revenue for Google. It’s a research project. It came about because some people at Google wanted to see what would happen if they built a dedicated hardware version of a software synthesizer they’d previously come up with. The researchers are publishing the hardware designs and the underlying software on GitHub, so people can assemble their own versions.

This isn’t even a product meant to signal that Google intends to compete in the music gear business with companies like Korg and Roland. Really, Google is just showing what it can do with the AI software it has developed.

It’s a demonstration of what’s technically possible, the kind of envelope-pushing other companies aren’t attempting. It also shows that AI isn’t just a frightening, out-of-control technology that steals jobs; it can also aid the human creative process.

“In the ’60s your thing might have been having a soldering iron, and now we’re saying that we can do something with machine learning that’s equally hacky,” Google senior staff research scientist Doug Eck told CNBC during a demonstration of the project at Google headquarters last week.

Part of Google Brain’s AI research

At the heart of the software synthesizer is an AI system called NSynth (the N stands for neural, as in neural network), which is trained on hundreds of thousands of short recordings of single notes played on different instruments. That training data makes it possible for the software to generate the sounds of other notes, longer notes, and even notes that blend the sounds of multiple instruments.
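Google has released NSynth’s code and models as open source through its Magenta project, so that blending can be tried outside Google. The sketch below is a rough illustration only: the checkpoint path, the input files flute.wav and organ.wav, and the exact function names and arguments are assumptions recalled from Magenta’s published examples and may differ between versions.

```python
# Sketch: blending two instrument notes with the open-source NSynth model.
# The checkpoint path, sample length and input files are placeholders, and
# the fastgen calls follow Magenta's published examples, which may differ
# between releases.
from magenta.models.nsynth import utils
from magenta.models.nsynth.wavenet import fastgen

CKPT = 'wavenet-ckpt/model.ckpt-200000'  # pretrained NSynth WaveNet checkpoint
SAMPLE_LENGTH = 64000                    # 4 seconds of 16 kHz audio

# Load one note from each instrument and encode it into NSynth's latent space.
flute = utils.load_audio('flute.wav', sample_length=SAMPLE_LENGTH, sr=16000)
organ = utils.load_audio('organ.wav', sample_length=SAMPLE_LENGTH, sr=16000)
enc_flute = fastgen.encode(flute, CKPT, SAMPLE_LENGTH)
enc_organ = fastgen.encode(organ, CKPT, SAMPLE_LENGTH)

# An even mix of the two latent codes; nudging the weight toward 0 or 1
# plays the same role as the slider in the software synth described below.
mix = 0.5 * enc_flute + 0.5 * enc_organ

# Decode the blended code back into audio and write it to disk.
fastgen.synthesize(mix, save_paths=['flute_organ_blend.wav'],
                   checkpoint_path=CKPT, samples_per_save=SAMPLE_LENGTH)
```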

NSynth came out last spring. It’s one of the foundational technologies from Magenta, an effort from the Google Brain AI research group to push the envelope in the area of creating art and music with AI. Other Magenta projects include AI Duet, which lets you play piano alongside a computer that riffs on what you play, and sketch-rnn, an AI model for drawing pictures based on human drawings.

In developing NSynth, Google Brain worked together with DeepMind, an AI research firm under Google’s parent company, Alphabet. The researchers even released a data set of musical notes and an open-source version of the NSynth software for others to work with.

With the virtual synth, you can choose a pair of instruments and then move a slider toward one or the other to create a blend of the two. Then, with the keys on your computer keyboard, you can play notes with that unusual combination acting as a filter.

It’s interesting, but it’s limited in its powers.

The hardware synth prototype, which goes by the name NSynth Super, provides several physical knobs to turn and a slick display to drag your finger on, making it more accommodating for live performers who are used to tweaking hardware boxes on the fly. There are controls for adjusting how much of a note you want to play, along with qualities known as attack, decay, sustain and release. And it lets you play sounds with a combination of four instruments at once, not two. It pushes the limits of what’s possible with NSynth.
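For anyone unfamiliar with those terms, attack, decay, sustain and release describe the volume envelope of a note: how quickly it ramps up, how it settles, the level it holds at and how it fades out. The snippet below is a generic ADSR envelope written in Python with NumPy; it is a minimal illustration of what those knobs control, not code from the NSynth Super project.

```python
import numpy as np

def adsr_envelope(n_samples, sr=16000, attack=0.05, decay=0.1,
                  sustain=0.7, release=0.2):
    """Generic ADSR amplitude envelope (illustrative, not Google's code).

    attack/decay/release are durations in seconds; sustain is a level 0-1.
    """
    a = int(attack * sr)               # ramp from silence up to full volume
    d = int(decay * sr)                # fall from full volume to the sustain level
    r = int(release * sr)              # fade back to silence at the end
    s = max(n_samples - a - d - r, 0)  # hold the sustain level in between
    env = np.concatenate([
        np.linspace(0.0, 1.0, a, endpoint=False),
        np.linspace(1.0, sustain, d, endpoint=False),
        np.full(s, sustain),
        np.linspace(sustain, 0.0, r),
    ])
    return env[:n_samples]

# Shape a one-second 440 Hz tone with the envelope.
sr = 16000
t = np.arange(sr) / sr
note = np.sin(2 * np.pi * 440 * t) * adsr_envelope(sr, sr)
```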

“We wanted to make an Etch A Sketch for sound,” said Joao Wilbert of Google’s London Creative Lab, one of the people who worked on the hardware project.

The classic red drawing toy has just two knobs. In all seriousness, though, NSynth Super lets you express yourself with far more precision.

You turn each of the four knobs in the corners to choose instruments. Then you place your finger somewhere on the display in the middle. The outside corners produce the most recognizable sound of each instrument, but as you slide your finger from one of the corners toward the middle, or anywhere else, you start to hear more of a blend with the other instruments. The middle is where the AI lets you make genuinely bizarre noises.
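One way to picture that mapping is as a weighted average of the four corner instruments, with the finger position setting the weights. The snippet below sketches that bilinear blend using random NumPy arrays as stand-ins for real NSynth encodings; it illustrates the idea rather than reproducing the NSynth Super firmware.

```python
import numpy as np

def blend_four_corners(x, y, enc_tl, enc_tr, enc_bl, enc_br):
    """Bilinearly mix four corner encodings from a touch position.

    x, y are normalized touch coordinates in [0, 1]; each enc_* stands in
    for the latent code of the instrument assigned to that corner.
    """
    top = (1 - x) * enc_tl + x * enc_tr        # blend along the top edge
    bottom = (1 - x) * enc_bl + x * enc_br     # blend along the bottom edge
    return (1 - y) * top + y * bottom          # blend between the two edges

# Placeholder "encodings": random arrays standing in for real NSynth codes.
rng = np.random.default_rng(0)
corners = [rng.standard_normal((125, 16)) for _ in range(4)]

pure_corner = blend_four_corners(0.0, 0.0, *corners)   # just one instrument
weird_middle = blend_four_corners(0.5, 0.5, *corners)  # equal mix of all four
```

In the real device the blended code is what gets turned back into audio; the point here is only how a single touch position becomes four mixing weights.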

If you have a computer hooked up, you can play a backing rhythm, and maybe a melody too, then adjust the knobs and slide your finger across the display.

You can certainly go further. If you connect a piano keyboard and use the NSynth Super to make just the right filter, you can play your own melody through that filter. And if you’re really ambitious, you could add your own sounds into the system.
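For the do-it-yourself crowd, driving a setup like this from a piano keyboard generally means reading MIDI note messages. The snippet below uses the third-party mido library (with a backend such as python-rtmidi installed) as one generic way to do that on a laptop or Raspberry Pi; it is an assumption about tooling, not part of Google’s published NSynth Super code.

```python
# Minimal MIDI note reader using the third-party mido library.
# Generic example, not code from the NSynth Super project.
import mido

def midi_to_hz(note):
    """Convert a MIDI note number to a frequency in hertz (A4 = 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

with mido.open_input() as port:  # opens the default connected MIDI keyboard
    for msg in port:
        if msg.type == 'note_on' and msg.velocity > 0:
            print(f'note {msg.note} ({midi_to_hz(msg.note):.1f} Hz), '
                  f'velocity {msg.velocity}')
        elif msg.type in ('note_off', 'note_on'):  # note_on at velocity 0 = off
            print(f'note {msg.note} released')
```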

Whether you play your own melody or not, the results can be simultaneously fun and surprising, even if you’re not a skilled musician.

Google’s NSynth Super is sleek, with beautiful metal housing. Google called on an internal manufacturing plant to help with production, said Eck, who leads the Magenta group.

But if you find a company to build you one using the designs Google is posting online, it probably won’t look as svelte as Google’s version. It could be made out of wood or even plastic (yes, you can 3-D print your own), and it can run on a simple Raspberry Pi miniature computer. And that’s kind of the point: it will cost less, so more people will be able to afford it.

“We’re not going to fab 20 billion of these things,” Eck said.
