
A.I. – The Phonograph of New Ages?
Where Will Music Composition and A.I. Ultimately Entwine?
D. H. Kang
IT SEEMS THAT ORWELL’S HAUNTING PREDICTIONS OF A DYSTOPIAN FUTURE MAY YET COME TRUE. In his novel “1984”, Big Brother placates the masses with songs churned out by a machine called the “versificator”, designed to compose lyrics and popular tunes without any human hand. And now, in our own age, we have A.I. creating the music itself.
A career in music was long considered future-proof: creative endeavours, the thinking went, demand a uniquely human ingenuity, something that comes from the soul. Yet society increasingly depends on A.I. algorithms to generate the tunes and beats of our songs. The question is: will human inspiration slowly die out, replaced by machines that press exactly the right buttons in our minds to make hit songs, just as Orwell foretold?
In August 2023, Meta released the source code for “AudioCraft”, a collection of “generative” models built on A.I.’s capacity to learn. The power of A.I., generally speaking, lies in its superhuman ability to process information, and music is no exception. One of the “AudioCraft” models analysed patterns in around 400,000 recordings, encoding what it learned in roughly 3.3 billion “parameters” — numerical weights from which new soundtracks can be formed in response to a user’s request.
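To make the idea of “learning parameters from recordings” concrete, here is a deliberately toy sketch in Python: a first-order Markov chain whose transition counts play the role of parameters, learned from a few example melodies and then used to generate a new one. The note names, melodies, and function names are invented for illustration; AudioCraft’s models are deep neural networks with billions of weights and work nothing like this simple counter.

```python
import random
from collections import defaultdict

def learn_transitions(melodies):
    """Count how often each note follows another across the training melodies.
    These counts are this toy model's entire set of 'parameters'."""
    counts = defaultdict(lambda: defaultdict(int))
    for melody in melodies:
        for prev, nxt in zip(melody, melody[1:]):
            counts[prev][nxt] += 1
    return counts

def generate(counts, start, length, seed=None):
    """Sample a new melody by repeatedly following the learned transition counts."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = counts.get(melody[-1])
        if not options:  # no known continuation from this note
            break
        notes, weights = zip(*options.items())
        melody.append(rng.choices(notes, weights=weights)[0])
    return melody

# Two tiny hypothetical "training recordings"
training = [
    ["C", "D", "E", "C", "G", "E", "C"],
    ["C", "E", "G", "E", "D", "C"],
]
params = learn_transitions(training)
print(generate(params, "C", 8, seed=1))
```

The same principle scales up: a real generative model replaces note-to-note counts with billions of learned weights over raw audio, but both are statistics distilled from examples and replayed to produce something new.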
These music “generators” are also becoming increasingly user-friendly, much as the internet and computer interfaces have over the last couple of decades. A model named “Stable Audio,” developed by the London firm Stability A.I., lets users take an audio clip — say, a melody they played on the piano — and morph it into a rendition by a full orchestra or jazz band.
As with anything related to A.I., there remains considerable scepticism about its potential. Even with its seemingly endless database of riffs and sounds, who is to say it will ever convey genuine human love and sadness, or pull at the heartstrings of all who hear it? Perhaps that day will come, in a future far off or not so far, when A.I. first awakens to consciousness and begins to feel emotions of its own. Only time will tell.
There are also practical concerns. Many have raised the issue of copyright and the royalty fees that may drive prices upwards for consumers. And just as ChatGPT sometimes returns strange replies, today’s music models undoubtedly produce plenty of discordant, unpleasant output. The development of these “generative” models, however, is still in its early phases.
Over time, A.I.’s influence on music will only continue to grow, and it will be interesting to see how this most human of industries coexists with machines. Will the other arts be next?