Generative music, so far, hasn’t had the same impact as generative art in web3 culture. Weav is trying to change that, making it easy for anyone to create a truly unique song. Clovis McEvoy speaks to Weav’s Keit Kollo and Mansimran Singh about adaptive music in web3, making tools accessible, and their plans for the metaverse.
One of the biggest drawcards in web3 is adaptive content. NFTs that generate, rearrange, or reform with the click of a button — typically in unpredictable ways — make up some of the most exciting projects in the space. So far, this approach has been heavily tilted towards the visual domain; most music NFTs derive their rarity from their accompanying artwork, and only a handful of web3 musicians possess the know-how to bring dynamic audio to their fans.
Weav Music is looking to change that. Founded by Google Maps co-founder Lars Rasmussen, who now serves as an advisor to the company, Weav made a name for itself with Weav Run — an adaptive exercise app that takes songs by artists like Lizzo and Cardi B and dynamically shifts their mix, tempo, and arrangement to synchronise with your running speed. Now their attention has turned to web3, with a goal to bring adaptive audio NFTs to the world.
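The tempo-matching behaviour described above can be pictured as a simple mapping from running cadence to a playback rate. The sketch below is purely illustrative — Weav’s actual engine and its parameters are not public — and assumes a song with a known native BPM plus arbitrary clamping bounds.

```python
# Illustrative sketch only: syncing a song's tempo to running cadence.
# The song's native BPM and the rate bounds are assumed values, not Weav's.

def playback_rate(cadence_spm: float, song_bpm: float = 120.0,
                  min_rate: float = 0.75, max_rate: float = 1.5) -> float:
    """Return a time-stretch ratio so beats land on footfalls."""
    rate = cadence_spm / song_bpm
    # Clamp the ratio so the song stays musically usable at extreme cadences.
    return max(min_rate, min(max_rate, rate))

print(playback_rate(150))  # 150 steps/min over a 120 BPM song -> 1.25
```

In a real engine the time-stretch would be applied per stem with pitch preserved; here only the control logic is sketched.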
“We’ve been looking at the NFT market for a while now,” says Weav Music’s CEO, Keit Kollo. “We believe people are interested in something that is unique for the ears as well as the eyes, but a lot of the audio in music NFTs is basically the same as what you can buy on iTunes or stream on Spotify. So, we want to upgrade that experience and bring something new to the table.”
That upgraded experience rests on the company’s own .weav audio format and the Weav Mixer, a free-to-use digital audio workstation (DAW) designed specifically for adaptive audio production. Using their mixer, Keit says users “can create a song and render it out in almost unlimited combinations, all unique, but all derivatives of the same underlying song.”
The Weav Mixer works with a song's ‘stems’ — such as the drums, bass, and vocals. These components, isolated as individual audio files, serve as the ‘composable primitives’ for dynamic music. In practice, this means that producers working in the Weav Mixer can use a data input to shift between instrument layers, change the sound mix, apply different audio effects, and vary the BPM.
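As a rough mental model of how a single data input could drive those stem layers, consider the hedged sketch below. The stem names, the 0-to-1 “intensity” input, and the gain curves are all invented for illustration; they are not Weav Mixer APIs.

```python
# Hypothetical sketch of stem-based adaptive mixing. The input is a single
# "intensity" value in [0, 1]; stem names and curves are illustrative only.

def adaptive_mix(intensity: float) -> dict:
    """Map one data input to per-stem gain levels (0.0 = muted, 1.0 = full)."""
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must be in [0, 1]")
    return {
        "drums":  min(1.0, intensity * 2.0),        # drums fade in first
        "bass":   intensity,                        # bass tracks intensity linearly
        "vocals": 1.0,                              # vocals always present
        "synths": max(0.0, intensity * 2.0 - 1.0),  # synths only at high intensity
    }

print(adaptive_mix(0.25))
```

The point of the sketch is the architecture: because each stem is an isolated file, one scalar input can recombine them into many distinct-sounding renders of the same underlying song.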
Of course, other DAWs already exist with similar capabilities, and this hasn’t, to date, translated into a surge of adaptive music NFTs. But that’s partly because writing adaptive music is hard — much of the software to make it happen was pioneered by, and for, the video game industry, and composers generally need to learn specialised skills and techniques to use it. Breaking down this barrier to entry is an essential part of making adaptive music accessible to a larger group of artists — which is why Weav Music are shaping their tools not for composers and sound designers, but for bedroom producers. “Using our music engine,” says Mansimran Singh, the company’s CTO, “with a bit of tagging and production work you can turn any song into an adaptive song.”
We’ll soon get a practical demonstration of all this with Weav Music’s forthcoming NFT collection. The release will focus on single songs, and all the different variations that can be spun out using the Weav Mixer. Tentatively scheduled to drop early next year, the collection is paired with an interactive music player that lets NFT holders dig into the different adaptive possibilities and see for themselves how the magic is made.
On the heels of this collection, Mansimran says the company is looking at ways to bring NFT functionality to the Weav Mixer. “There are so many things you can do when you have access to stems, effects, and mixes,” he says. “We're currently experimenting with a few different NFT mechanics. We potentially won't get to a place where there’s a single ‘mint’ button built in, but we will add more support, to a point where it’s like ‘here’s this generic format, and these are all the different mechanics you can build into your NFT’.”
Keit and Mansimran believe that their format will not only be relevant to music NFTs as they exist today, but will also be a foundational element of future metaverse experiences — where 3D engines, video game mechanics, musicians, artists, and fans all coalesce. These multi-sensory spaces demand audio content that is not static, but can flow and adapt to match the surrounding environment and the actions of users within it. “We believe our music engine can create truly interactive NFT experiences,” says Mansimran. “Over time, this may broaden into a full metaverse where all music is shared, tokenised, traded, and listened to.”
“There are so many things you can do when you have access to stems, effects, and mixes.”
— Mansimran Singh, CTO, Weav
“There's no question that music will be part of the culture of metaverse worlds — just like it is a part of every culture in this world,” adds Keit. “Accordingly, we are educating artists about the short and long-term value of NFTs, in particular the value of interactive music collections.”
And it’s blockchain technology that enables attribution to original and subsequent creators, removing a key obstacle to the spread of generative music across the space. “Today, those collections can offer artists new revenue streams while introducing their audiences to the world of blockchain and interactive music. In the coming months, songs in the .weav format will be portable into all kinds of metaverse worlds, offering artists opportunities to reach new audiences.”
“So much remains yet to be imagined, let alone built.”
— Keit Kollo, CEO Weav
Empowering musicians and producers to easily take existing, linear songs and remake them in a dynamic format — rather than composing adaptive music from scratch — would go a long way towards clearing one of the creative bottlenecks that have kept adaptive music NFTs from keeping pace with their visual cousins.
Looking ahead to communities where fans collect, compare, and discuss different versions of their favourite songs, where the line between ‘original’ and ‘remix’ can shift in real time, and where a single song can adapt to suit transient, user-created virtual spaces, it’s hard not to get inspired. As Keit puts it: “We’ve only just begun to see the potential of the metaverse. So much remains yet to be imagined, let alone built.”
Clovis is a New Zealand born writer, journalist, and educator working at the meeting point between music and technological innovation. He is also an active composer and sound artist, and his virtual reality and live-electronic works have been shown in over fifteen countries around the world.