A few weeks ago, The Beach Boys sold all their intellectual property for an undisclosed amount – business speak for loads. The deal explicitly mentioned the rights to use technologies such as virtual reality, augmented reality, and natural language processing (the technology behind computer-generated speech), clearly planning for a future where the band would be digitally recreated some sixty years after their heyday. That’s right; soon, you could pay to attend a Beach Boys concert with no crowd, no stage, no band, and no beach.
The act you will be paying to see will essentially be a ‘deepfake’ – the term for fake videos of real people created by combining AI with CGI and NLP – an inevitable feature of our rapidly approaching future. In her recent book of the same name, Nina Schick estimates that the CGI currently available only to Hollywood studios will, within a few years, be accessible to anyone with a smartphone. Soon you will be able to rewrite your favourite film endings on a bus; finally, we’ll hear Jack asking Rose to scooch.
But hang on. Why would people want to experience a performance that they know is completely fabricated? An act so fake, it makes lip-syncing boybands seem one step from the pinnacle of authenticity. The obvious answer is that even a simulation of the gig environment can enhance the sound of the music. This may seem mundane on the surface, but stopping to understand why it is true at the neurological level can help you maximise the appeal of your music to your audience and reveal a hidden ability you have as a creator.
Like the computer you are reading this on, your brain is a gigantic store of information. But unlike the data on your hard drive, when one part of the brain is stimulated (accessed), connected parts receive some stimulation too. Think of a smell bringing you back to a time and place. That memory is part of the same neural network as the smell. The simulated gig works the same way. It fires up the same neural network as the music, concentrating the stimulation, resulting in a stronger and enhanced feeling of pleasure that you attribute to the music.
Ok, fine. But what if you don’t perform or have a CGI stage double to perform for you? Well, this type of enhanced listening is always possible. In fact, listening is always under the influence of other types of neural stimulation, for better or worse. Explicitly: there is space to rent inside a brain during any listening experience, and if you have music you want an audience to connect with, it is up to you to fill it wisely. Oh, and you don’t need to be Derren Brown.
You may have never realised you have the ability to control minds, but actually, everyone does. Try telling a friend you once DJ’d naked at Burning Man, and you will instantly move their mind to a different place than it was. As an artist, the mind control you exert over your audience is achieved through all the thoughts they freely associate with you: your external identity.
Whether you like it or not, the most reliable thing that will enter an individual’s mind when listening to a particular song is the artist. If this conjures up thought processes congruent with the style and emotional character of the music, then the listening experience will be enhanced. The opposite is also true, which is why rappers don’t tend to rap in tweed jackets and polo necks.
The inescapable truth for producers is that there are many things external to their music that influence its reception – it is just the way a human mind works. Anything that involves sensation will be influenced by what we think prior to and during the experience. Our eyes and ears just take the calls; our brain is the opinion executive, and it has a lot more on its plate than photons and sine waves.
Specifically, other than avoiding dodgy clothing, what can you do with this knowledge as a creator? The answer is too broad to cover here fully, but all action should centre around the understanding that the scope of your creative output extends way outside your 24-bit master. The universal format every artist uses is the neurocanvas: you write directly to people’s brains.
So think about why your audience is listening to the sound you are a part of and what emotions they are trying to engage with. If all contact points between you and an observer convey that essence, you are doing it right. Our AI future will be forever changing the ways people receive music, but it won’t change the receivers: human beings. Understand these mad creatures, and you will always be one step ahead of the robots.
The post The deepfakes are coming, but how can new talent compete? appeared first on MusicTech.