Google’s first AI-generated song: a piano melody composed from just four starting notes
Researchers have been attempting to make robots and artificial intelligence more creative over the past few months – from drawing pictures to writing quasi-dystopian poetry. Today we get another piece of work from a Google machine: a 90-second piano melody created by a trained neural network that was given just four notes up front. The drums and orchestration weren’t generated by the algorithm; they were added for emphasis after the fact.
The melody is the first tangible product of Google’s Project Magenta, which aims to put Google’s machine learning systems to work creating music and art, and to bridge the communities around those interests – artists on one side, coders and researchers on the other. Magenta is built on top of Google’s TensorFlow system, which is already open source, and the team plans to publish Magenta’s own code on GitHub as well. “We believe this area is in its infancy, and expect to see fast progress here,” the announcement says.
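To make the priming idea concrete, here is a minimal, self-contained sketch of the “seed with a few notes, then sample a continuation” loop that a trained melody model performs. This is not Magenta’s code: the transition weights below are a toy stand-in for a neural network’s learned next-note distribution, and the pitch values are purely illustrative.

```python
# Conceptual sketch only -- not Magenta's actual code. It illustrates priming a
# melody generator with four notes and sampling the rest of the sequence.
import numpy as np

rng = np.random.default_rng(0)

PRIMER = [60, 62, 64, 65]          # four MIDI pitches supplied up front (C, D, E, F)
PITCH_RANGE = list(range(55, 72))  # candidate next pitches the toy "model" can emit

def next_note_probs(history):
    """Toy stand-in for a trained model: favor small melodic steps from the last note."""
    last = history[-1]
    weights = np.array([1.0 / (1 + abs(p - last)) for p in PITCH_RANGE])
    return weights / weights.sum()

def generate(primer, length=32):
    """Start from the primer and repeatedly sample the next note until `length` notes exist."""
    melody = list(primer)
    while len(melody) < length:
        melody.append(int(rng.choice(PITCH_RANGE, p=next_note_probs(melody))))
    return melody

print(generate(PRIMER))
```

In a real system the hand-written probability function is replaced by a recurrent neural network trained on a corpus of melodies, but the sampling loop – condition on everything generated so far, draw the next note, repeat – is the same basic shape.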
The team says the challenge is not just to get Google’s machines to create art, but to get them to tell stories with it. After all, that’s what artists do with their craft: they weave a narrative into their work and then share it with the world.
“The design of models that learn to construct long narrative arcs is important not only for music and art generation, but also areas like language modeling, where it remains a challenge to carry meaning even across a long paragraph, much less whole stories,” the team wrote. “Attention models like the one in Show, Attend and Tell point to one promising direction, but this remains a very challenging task.”
Along with the melody, Google published a new blog post delving into Magenta’s goals, offering the most detail yet on the company’s artistic ambitions. In the long term, Magenta wants to advance the state of machine-generated art and build a community of artists around it – but in the short term, that means building generative systems that plug into the tools artists are already working with. “We’ll start with audio and video support, tools for working with formats like MIDI, and platforms that help artists connect to machine learning models,” the team wrote in the announcement. “We want to make it super simple to play music along with a Magenta performance model.”
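As a small illustration of the kind of MIDI plumbing the team describes, the sketch below writes a four-note primer to a standard MIDI file that could then be handed to a generative model or loaded into an artist’s existing tools. It uses the third-party pretty_midi library purely as an example; Magenta’s own MIDI utilities may look different, and the note choices here are illustrative.

```python
# Illustrative only: build a four-note primer and save it as a standard MIDI file.
# Uses the pretty_midi library as an example; it is not part of Magenta itself.
import pretty_midi

primer = pretty_midi.PrettyMIDI()
piano = pretty_midi.Instrument(program=0)  # program 0 = Acoustic Grand Piano

# Four quarter notes (C4, D4, E4, F4), half a second each.
for i, pitch in enumerate([60, 62, 64, 65]):
    start = i * 0.5
    piano.notes.append(pretty_midi.Note(velocity=100, pitch=pitch,
                                        start=start, end=start + 0.5))

primer.instruments.append(piano)
primer.write('primer.mid')  # a file any DAW or model tooling can read
```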
It’s not the first time Google has experimented with machine-generated art. The company’s DeepDream algorithm — initially developed to visualize the actions of neural networks — has become a popular image tool in its own right and the basis for a gallery show earlier this year. Google also developed the Artists and Machine Intelligence program to sponsor further collaborations along the same lines.
Sources:
http://www.theverge.com/2016/6/1/11829678/google-magenta-melody-art-generative-artificial-intelligence
http://thenextweb.com/google/2016/06/01/lets-talk-song-google-ai-made/#gref