The way we consume music was drawn up in the days of the gramophone and has been the same ever since:
- Artist writes song and performs hundreds of individual, unique performances
- Artist records one particular performance on one particular day
- We listen to that one performance over and over again
In an age in which cars can drive themselves and entire meals can come out of a printer, don't we think we can do better than this?
The idea of there being one definitive version of a piece of music is a pretty new one. Before sound recording was invented, music was different every time you heard it. The same thrill we get now when we go to a gig and hear a performance that we know is completely unique - back in the nineteenth century, they got that every time they heard music.
The same goes for the jazz age. For Miles Davis and his ilk, it wasn't about laying down the one true version of a tune. Every time they performed, they made up new melodies, tried new turns of phrase, right there in front of the audience. Their music was always based on the specific circumstances in which it was played - the venue, the crowd, and the mood of the performers would all affect what you heard if you went along.
So why not use modern technology to get back some of the magic that comes from hearing something unique, something that you know is tailored to the specific situation you're in?
The way to do this is using what's traditionally known as generative music - software that writes its own music. A lot of musical composition is about repetition and variation, things computers are actually very suited to. And if we can teach computers to write music, we can use them to create music that's different every time you play it.
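The idea that composition leans heavily on repetition and variation can be sketched in a few lines of code. This is a hypothetical illustration, not Jukedeck's actual system: the motif, the MIDI note numbers, and the transformations are all invented for the example.

```python
import random

# A minimal sketch of "repetition and variation": take a short
# motif, repeat it, and apply small random transformations so no
# two playbacks are identical. Notes are MIDI pitch numbers; the
# specific transformations are made up purely for illustration.
MOTIF = [60, 62, 64, 62]  # C D E D

def vary(motif, rng):
    """Return a varied copy: maybe transpose, maybe reverse."""
    notes = list(motif)
    if rng.random() < 0.5:
        notes = [n + rng.choice([-2, 2, 5]) for n in notes]  # transpose
    if rng.random() < 0.3:
        notes.reverse()
    return notes

def generate(phrases=4, seed=None):
    """Alternate the original motif with fresh variations."""
    rng = random.Random(seed)
    return [MOTIF if i % 2 == 0 else vary(MOTIF, rng)
            for i in range(phrases)]
```

Even something this crude produces a piece that is recognisably "the same tune" yet different on every run, which is exactly the property a computer is well suited to exploiting.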
Generative music isn't a new idea. As early as the 18th century, composers were creating musical dice games, in which people would roll dice to decide the order in which different bars of music should be played. Even Mozart reportedly tried his hand at them.
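The dice-game mechanism is simple enough to sketch directly. In a sketch like the one below (the bar contents are invented note names, not from any real dice game), each position in the piece has several pre-written bars, and a roll picks which one is played:

```python
import random

# A minimal sketch of an 18th-century musical dice game: each
# position in the piece offers several pre-written one-bar options,
# and a roll of the dice decides which option gets played. The bar
# contents here are made-up chord spellings purely for illustration.
BAR_OPTIONS = [
    ["C E G", "C F A", "C E A"],   # choices for bar 1
    ["G B D", "G C E", "G B E"],   # choices for bar 2
    ["F A C", "F B D", "F A D"],   # choices for bar 3
    ["C E G", "G B D", "E G C"],   # choices for bar 4
]

def roll_a_piece(seed=None):
    """Roll once per bar to assemble one unique four-bar piece."""
    rng = random.Random(seed)
    return [rng.choice(options) for options in BAR_OPTIONS]
```

With only three options per bar this tiny game already yields 3^4 = 81 distinct pieces; Mozart's attributed dice game, with eleven options for each of sixteen bars, yields vastly more.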
There have been a few attempts over the years at software that composes music, but they've all been limited either to imitating a particular style or to producing ambient music (music with no real tune). There's been nothing that can actually write musical tunes, in a range of styles, on its own. That's what my startup, Jukedeck, is working on, because we think that creating decent tunes is the key to bringing generative music into the mainstream.
I think calling this kind of music 'generative' misses the point, though. It focuses on the means, whereas what we should really be interested in is the end - what this technology will enable. It could soon give us personalised music, written in real time, that responds to our surroundings and our mood: the soundtrack to your life. So 'responsive music' seems like a better term.
People aren't without their reservations about putting musical composition in the hands of computers. I was sitting next to a friend's mum at dinner recently, and I told her what we were doing. "Well," she said, hardly batting an eyelid, "I'm not entirely sure I want you to succeed." Her reason? "I like to think there's a layer of music - five percent, right at the top - that can't be touched by computers."
She's got a point.
There's definitely a part of music that is currently, and quite probably will always be, untouchable by computers. The impact that emotion and memory have on the way we perceive music is massive, and these are things that aren't likely to be replicable in code any time soon. Music places a never-ending series of filters over the lens through which we view our lives, and its human element is something we should continue to celebrate.
But responsive music isn't out to replace that. We're not trying to pit computers against humans in some X Factor-style songwriting showdown - we're trying to work towards a world in which music can be completely personalised. To get there, we have to teach computers the basics, so that we can build tools humans can use to craft the next generation of musical experiences.
It's early days, but computers are increasingly able to write music on their own. This is something that's going to get more and more normal, and, as it does, people are going to come up with amazing new ways of listening to music. Hopefully we can work towards a future in which music can be truly personalised - in which you have a soundtrack to your life.