
Google Wants to Upend the World of Music


The world of music is not immune to generative AI. In May, Google announced MusicLM, a model that generates high-fidelity music from detailed text descriptions and can also be conditioned on a whistled melody. Recently, Google researchers went a step further: working with scientists from Osaka University, they showed that MusicLM could turn brain activity, as measured by fMRI, into music that broadly resembles the piece being heard.

Functional magnetic resonance imaging (fMRI) captures changes in blood flow in the brain and has long been used to study how music affects brain activity. The new study, however, may be the first to go the other way and reconstruct the music being heard from brain activity. Five volunteers listened to 540 short music clips spanning 10 genres while being scanned. Their fMRI responses were mapped to music embeddings, which were then fed to MusicLM to generate music. The generated clips turned out to resemble the originals in genre, instrumentation, and mood. The approach is similar to another system we reported on earlier that uses brain scans to predict the stories a person is hearing.
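To make that pipeline concrete, here is a minimal sketch of the kind of decoding step described above: a per-subject linear model maps fMRI responses to music-embedding vectors, which could then condition an embedding-aware generator such as MusicLM. The array shapes, the random placeholder data, and the final generation call are illustrative assumptions, not the authors' actual code.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Training data (placeholder values): one fMRI response vector per music clip
# the subject heard, paired with that clip's audio embedding
# (e.g., a 128-dimensional MuLan-style vector).
fmri_responses = np.random.randn(540, 20000)   # 540 clips x ~20k voxels
music_embeddings = np.random.randn(540, 128)   # 540 clips x 128-d embedding

# Fit a per-subject decoder. Regularized linear regression is a common choice
# for fMRI decoding because voxels vastly outnumber training clips.
decoder = Ridge(alpha=1.0)
decoder.fit(fmri_responses, music_embeddings)

# At test time, decode an embedding from the brain response to an unseen clip...
new_scan = np.random.randn(1, 20000)
predicted_embedding = decoder.predict(new_scan)
print(predicted_embedding.shape)  # (1, 128)

# ...then condition a music generator on it. MusicLM is not publicly callable
# this way, so this last step stays a hypothetical placeholder:
# audio = generate_music_from_embedding(predicted_embedding)
```

The key point is that the decoder is fit separately for each listener, which is also why, as noted below, it does not transfer from one person to another.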

However, the approach has notable limitations. First, because brain anatomy differs from person to person, a system trained on one individual cannot be applied directly to another. Second, fMRI data carry only limited information, so the music being heard cannot be fully reconstructed. For now, the study has no commercial application.

Yet Google is doing much more than lab research. The Financial Times reported on August 8 that Google and Universal Music Group, which is home to Taylor Swift and Drake and holds about one third of the global music market, were negotiating a deal to license musicians' melodies and voices for AI-generated songs. The goal is to let fans legally create music that mimics the voices and lyrics of their favorite artists while paying the copyright owners. Artists could choose whether to participate. The music industry will not be the same.

Further Possibilities

1. Use hit songs to train AI to produce high-quality music

This approach involves leveraging the vast catalog of existing hit songs to teach AI systems how to generate high-quality, appealing compositions.

2. Self-driving music player based on wearable inputs

This would be a music player that intelligently adapts its playlist and playback to real-time data collected from wearables (a toy sketch follows this list).

3. Personalized music generation

Develop a platform that lets individuals create personalized music from their brain activity patterns or the physiological responses recorded by wearables. By combining this body data with advanced AI algorithms, users could generate music that resonates with their emotions, preferences, and mood.

4. Collaborative music creation

Create an online platform where musicians and listeners can collaboratively generate music using physiological data. Musicians could use the physiological patterns of their audience to co-create music that resonates with their listeners on a profound level.

5. Neural time capsules

Capture wearable data during significant life events, milestones, or moments of inspiration. Convert these data patterns into musical compositions that serve as unique audio time capsules, preserving and sharing personal memories in a novel way.
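To illustrate idea 2 above, here is a toy sketch of a wearable-driven player: it simply picks the next track whose tempo best matches the listener's current heart rate. The track library, tempo values, and heart-rate input are all made-up examples, not a real product design.

```python
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    bpm: int  # track tempo in beats per minute

# A tiny made-up library spanning calm to energetic tempos.
LIBRARY = [
    Track("Calm Piano", 70),
    Track("Mid-tempo Pop", 105),
    Track("Uptempo Dance", 128),
]

def next_track(heart_rate_bpm: float) -> Track:
    """Pick the library track whose tempo is closest to the current heart rate."""
    return min(LIBRARY, key=lambda t: abs(t.bpm - heart_rate_bpm))

# A resting heart rate from the wearable steers the player toward calmer music.
print(next_track(heart_rate_bpm=68).title)  # -> Calm Piano
```

A real player would use richer signals (activity, sleep, skin conductance) and smarter selection or generation, but the core loop is the same: read a body signal, map it to a musical target, and choose or generate audio to match.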

