Computers Can Now Generate Sounds To Fool Human Listeners

Computers can now generate sounds that fool human listeners. Think of all the sounds in a movie: drumming hoofbeats, zooming cars, booming thunder, heels clacking down a hall.

Can machines come up with plausible sound effects for video? It sounds strange, but researchers at MIT's Computer Science and Artificial Intelligence Lab (CSAIL) recently created a sort of Turing test for sound, fooling people into thinking that machine-generated audio was a real recording.

In a paper to be presented this month at the Computer Vision and Pattern Recognition conference, researchers at MIT's Computer Science and Artificial Intelligence Lab describe a deep-learning algorithm that can watch a silent video and create sounds that go along with the motion on screen. It is good enough to fool people into thinking they were actual sounds recorded from the environment.

The researchers used a drumstick (chosen for consistency, and because it doesn't obscure the video) to hit various objects, including railings, bushes, and metal gratings. The algorithm was fed 978 videos containing 46,620 actions, helping it recognize patterns in the audiovisual signal. After training, the computer was able to generate appropriate sound effects for a new silent video, fooling people who were asked to watch and judge whether the sounds they heard were real or computer-generated.

The AI uses deep learning to figure out how sounds relate to video, meaning it finds the patterns on its own, without intervention from the scientists. Then, when it is shown a new silent video, "the algorithm looks at the properties of each frame of that video, and matches them to the most similar sounds in the database," says lead author Andrew Owens. As shown in the video (above), it can simulate the differences between someone tapping rocks, leaves, or a couch cushion.
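The matching step Owens describes can be sketched in a few lines. This is a minimal illustration, not the authors' code: it assumes each video clip has already been reduced to a feature vector (here, random 16-dimensional vectors stand in for learned features), and picks the database sound whose features are closest to the query.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sound database: one feature vector per clip, plus the
# sound file it came from. Real features would come from the learned model.
db_features = rng.normal(size=(100, 16))          # 100 clips, 16-dim features
db_sounds = [f"sound_{i}.wav" for i in range(100)]

def match_sound(frame_features, db_features, db_sounds):
    """Return the database sound whose features are nearest (Euclidean)."""
    dists = np.linalg.norm(db_features - frame_features, axis=1)
    return db_sounds[int(np.argmin(dists))]

# A query slightly perturbed from clip 42 should retrieve clip 42's sound.
query = db_features[42] + rng.normal(scale=0.01, size=16)
print(match_sound(query, db_features, db_sounds))  # → sound_42.wav
```

The real system works per frame and over learned audio representations, but the core idea is the same: nearest-neighbor retrieval in a feature space shared by video and sound.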

“A robot could look at a sidewalk and instinctively know that the cement is hard and the grass is soft, and therefore know what would happen if they stepped on either of them,” Andrew Owens, lead author of the paper said. “Being able to predict sound is an important first step toward being able to predict the consequences of physical interactions with the world.”

I hope you enjoyed this blog on how computers can now generate sounds to fool human listeners.