Novelty bands aside, here’s how musicians can really use AI
One of the latest developments in AI is that it can now generate songs.
You can prompt it to create a track about a monkey that doesn’t like bananas, played in a reggae blues style, for example (follow the link to listen to “my creation”).
As remarkable as it is to hear this come back in under a minute as a seemingly fully produced piece, I’ve yet to meet anyone who listens to it more than a few times—unless they’re showing it off, proudly declaring that it’s all over for musicians: “Look what I made!”
Or take what Velvet Sundown did—an entire Americana-style album, complete with AI-generated band images.
Again, we’re left with a hot topic: is this the end of music as we know it?
If you prefer Velvet Sundown to Kings of Leon, who am I to say you’re wrong?
What interests me more is how I can use AI as a tool—to assist with my work, not replace it.
There are plugins now that remove noise, which is genuinely helpful when mixing.
I see mixing as an art form I’m not willing to give up. It’s part of the process. I might use tools within it, but I’m not sending my stems to an AI mixing engineer—at least not this month.
Mastering, however, I’m open to. By then, the creative process is complete, and asking the AI to produce ten different master files can be helpful, even if only for inspiration.
All of this is still a developing field and, to me, best discussed over a pint in the pub.
But one area where AI is already incredibly useful—and something I use daily—is stem separation.
Stem separation – not just for DJs
This lets you remove vocals from a track, creating a karaoke version that actually sounds like the original, not some dreadful MIDI file.
If you perform with backing tracks, you should learn to DJ using Serato immediately. You can remove the vocal at any point—say, when transitioning from a break into a live performance.
If you’ve got a sax player with you, you can loop sections for them to solo over.
This feature alone is an instant upgrade for solo singers performing with backing tracks.
For musicians, the software is a game-changer when it comes to transcription.
Not only can you loop sections with ease, you can isolate or remove vocals, drums, bass—or just listen to the chords. Equally, you can remove the chords and keep everything else to practise your part in context.
I do this daily. As I transcribe, I print versions of each song: with and without vocals, with and without chords, and at the right pitch so I don’t have to retune my guitar.
These are then paired with chord charts and used as backing tracks for Spytunes users.
My transcriptions can now be recorded over the original track. That’s right—I no longer need to hire musicians to create backing tracks for your practice sessions. Does that mean it’s all over? Probably not. More likely, it’s just another step in the ongoing evolution of the music industry.
I’ve started this process; here’s an example: Watermelon Sugar (Harry Styles) – Backing track, chart, chords, loops, lyrics, and TAB.
But you don’t need to wait for me to finish this project (there’s another 500 to do!).
You could, and should, get Serato and Tidal today. Start transcribing. Start removing stems. It might just be the best thing you’ve ever done for your development as a musician.
And the best news?
You can finally delete your Spotify account.
Not only does it sound worse than Tidal and pay creators less, its founder is also investing in AI-driven weapons of war.
Daniel Ek may have ruined the music industry—let’s stop him before he goes for the world.