The Art of AI
Season 7: Episode 5
Show Notes
From Hollywood to hip hop, artists are negotiating new boundaries of consent for the use of AI in the creative industries. Bridget Todd speaks to artists who are pushing back.
It's not the first time artists have been squeezed, but generative AI presents new dilemmas. In this episode: a member of the AI working group of the Hollywood writers' union; a singer who licenses the use of her voice to others; an emcee and professor of Black music; and an AI music company charting a different path.
Van Robichaux is a comedy writer in Los Angeles who helped craft the Writers Guild of America’s proposals on managing AI in the entertainment industry.
Holly Herndon is a Berlin-based artist and computer scientist who has developed "Holly+", a series of deepfake music tools for making music with her voice.
Enongo Lumumba-Kasongo creates video games and studies the intersection between AI and Hip Hop at Brown University. Her alias as a rapper is Sammus.
Rory Kenny is co-founder and CEO of Loudly, an AI music generator platform that employs musicians to train their AI instead of scraping music from the internet.
Thank you to Sammus for sharing her track ‘1080p.’ Visit Sammus’ Bandcamp page to hear the full track and check out more of her songs.
Transcript
Van Robichaux: Interior. Office. Night. Van sits at his computer. He’s reading about GPT2. He emails the CEO of OpenAI. Soon after he realizes this might be a problem.
Bridget Todd: That’s Van Robichaux. He’s a comedy and animation writer in Los Angeles. He’s dreaming up an imaginary screenplay about his experience going on strike as a member of the AI working group of the Hollywood writers’ union.
Van Robichaux: I would say this is like a dystopian sci-fi and "Based on a true story" comes up right at the beginning.
Bridget Todd: I’m Bridget Todd and this is IRL, an original podcast from Mozilla, the nonprofit behind Firefox. In this episode, we talk to creators of music, movies and video games with new ideas about how AI should and shouldn’t be used. I mean, for starters, could studios just replace screenwriters with generative AI? Van Robichaux gets this question a lot.
Van Robichaux: The AI Working Group brought together a variety of writers from different kinds of writing. And the thing that tied most of them together was futurism in their writing or scheming in their writing. We had writers who wrote about con men and con jobs. We had writers from Star Trek. We had a writer from Quantum Leap.
Bridget Todd: The Writers Guild of America went on strike for five months in 2023 at the same time as Hollywood actors. Dozens of productions were interrupted. That cost studios hundreds of millions of dollars. In the end, they reached an agreement that included several points about AI.
Van Robichaux: Because of AI having its big headline moment around the time that our negotiations started and around the time that our strike was starting, it became a headline issue for the writers. It doesn’t mean it’s the most important issue but it’s hard to say if any one issue is the most important issue, because the issues all intertwine.
Bridget Todd: Yup, there were dozens of issues related to labor rights and compensation. Union members staged daily pickets on the streets in front of studios like Netflix and Warner Brothers. Van’s post was at CBS Television City. He spoke to a lot of writers here who were worried that AI would literally replace them.
Van Robichaux: I let them know, like, that’s not the big fear. The big concern is something like the studio saying they’re using AI, but really they just hire someone at a lower rate that’s non union. But there were also people on the lines who were saying things like, “We have to ban AI. We’ve got to get them to promise to never use this technology.” But my point of view on that is, Microsoft Word, by January, will have AI built into it by default. I don’t think we want a contract that says we’re not allowed to use Microsoft Word.
Bridget Todd: Van’s background is in engineering. Back in 2020, he got curious about GPT2 and GPT3. So he cold emailed an executive at OpenAI and got early access to try it out.
Van Robichaux: And that’s when I learned about the datasets and the internet web scraping. Something that happens for screenwriters already is that people post copies of their screenplay illegally on the internet. And it becomes a nightmare to try and stop that. And so if you’re scraping the entire internet, and you’re using that to train an AI, you’re already scraping and grabbing copies of screenplays. And that’s when I realized, this is going to eventually intersect with what the union does.
Bridget Todd: Van started talking about AI at union meetings. In 2022, the board set up a working group to study AI and prepare union leaders for contract meetings with major studios. These days, big tech companies like Amazon and Apple are studios too.
Van Robichaux: The meetings were a lot like a writer’s room. It was people throwing together ideas, throwing things up against the wall, learning from each other, and it was very collaborative.
Bridget Todd: In weekly Zoom meetings, the writers put themselves in the shoes of their employers to imagine what they would want to do with AI. What ways would they find to undercut writers? The union’s contract negotiating committee took their recommendations to the companies and were mostly successful.
Van Robichaux: We got added to the contract that: one, a writer is a human being. Previously our contract said a writer is a person. We wanted to just clarify that that doesn’t mean a corporate person, that doesn’t mean, if it’s later declared that AI has personhood — human being. But more importantly, literary material, which is the contractual term for what a script is, cannot be written or rewritten by an AI.
Bridget Todd: Union writers are paid different rates depending on whether they do rewrites of screenplays by others, or create original material. So they wanted to close any loopholes the companies might take advantage of.
Van Robichaux: So if the company were to generate something with AI, bring it to a writer and say, we want you to rewrite this, we want you to write a movie based on this, we can do it. That’s fine, but we have to be paid and treated as though it’s a completely original project.
Bridget Todd: During the strike, the US copyright office signaled that material generated by AI alone probably cannot be copyrighted. That bolstered the union’s case.
Van Robichaux: If AI generated content needs a human involved to get it copyright protection, well guess what? You have 10,000 humans that already do that for you.
Bridget Todd: The 3-year contract says writers can’t be forced to use AI in their writing. On the flip side, writers who do use AI have to ask studios for permission.
Van Robichaux: I do think there’s concern on the studio side about writers using AI without telling them, and then that causing them a problem down the line with the copyright chain of title. And so I think that is a protection they wanted from us.
Bridget Todd: The studios say it’s fair if they use their own existing material as training data for generative AI. That data could come from decades of scripts, for shows like Law & Order or Star Trek. But writers want to have a say too. The issue has yet to be hashed out. Van says studios should at least side with writers and actors to keep tech companies from using the material to compete with them all. Instead, OpenAI, Meta, and major tech industry groups, have joined forces with the studios on AI and copyright. These questions of who generative AI is benefiting and who it’s replacing are cascading through every industry, not just the arts community.
Van Robichaux: There are a lot of people writing software who are very concerned about AI. It’s being trained on the code they’ve written. But they don’t have a union. And so, they can’t control what the companies they work for do, in the same way that the unionized Hollywood workers can. What I hope other concerned workers, in other industries, take away from this is: this is on the table, this can be bargained for. We have the power to ask for AI to be used to lift all of us up, and not only a few of us.
Bridget Todd: There are tons of new AI tools that make it easy to create audio with the cloned voices of recording artists. I think about this a lot as a podcaster. Spotify recently started cloning the voices of some of their top podcasters and translating their shows into other languages. I worry about opting into something like this, and losing control over how my voice can or cannot be used.
Bridget Todd: So wait, how do I know I'm talking to the real Holly and not Holly Plus?
Holly Herndon: Yeah, good question. Well, Holly Plus is usually singing, so we haven’t released a speech voice model. So if I’m speaking, pretty sure that it’s the real me.
Bridget Todd: That's Holly Herndon. She's an artist and a computer scientist in Berlin. For the last few years, Holly and her collaborators have been working on "Holly Plus", a collection of deepfake tools for making music with Holly's voice.
[music plays]
Bridget Todd: How does it make you feel when you hear your own voice singing, you know, this beautiful hymn composed by Mendelssohn 200 or so years ago?
Holly Herndon: Honestly, it's incredibly liberating because I never really thought of myself as a vocalist. I always just used my voice as a kind of data input or a kind of controller for my laptop. It was like, I wanted to play around with the voice, and I was the cheapest, most available voice I had. So I just used my own voice. So with Holly Plus, you know, I can, through the sound of my voice, perform music that I don't even have the training to be able to perform.
[music plays]
Bridget Todd: Wow. So how exactly does all of this work? Like, how do you feed Holly Plus music?
Holly Herndon: So with the score-reading instrument, essentially, I can feed the software any score, and it can read the notes and the lyrics in multiple languages and then reproduce that in my voice.
Bridget Todd: To train the models, Holly spent many hours singing and speaking words at different pitches. There’s also a system that can convert the voice of a person live on stage in real time. And there’s a more basic version of Holly Plus online that anyone can use. It doesn’t say words, but converts audio into abstract Holly sounds.
Holly Herndon: I wasn't quite ready to have people, you know, typing in whatever text they wanted to. I think I will be ready for that eventually, but we deliberately chose a version that was a bit more abstract. The idea was, we wanted to make it as user friendly as possible.
Bridget Todd: So I’m curious, do you see ways where this can be kind of mutually beneficial, beneficial to someone who wants to play around with your voice, and beneficial to you as an artist?
Holly Herndon: Yeah, absolutely. I mean, I don’t think that my approach will work for everyone. So I’m certainly not prescriptive. But for me, I really wanted to take a permissive approach to IP. So thinking about intellectual property rather as a kind of identity play. What would it mean if I opened up my identity for other people to perform through? What ideas would other people come up with that I wouldn’t come up with myself? That was something that I was excited about.
Bridget Todd: Holly challenges us to rethink power and consent over creative content. It’s why she’s also on Time Magazine’s list of 100 Most Influential People in AI this year.
Holly Herndon: So our approach to this was we formed a DAO, which is just a fancy way of saying basically like a collaborative or a co-op of people who are interested in this voice. And we vote on different ideas around the voice and what we want to do with it. People submitted works to the DAO, and then we sold those as NFTs, and then the profits from that sale went to further build more tools for the Holly Plus system.
Bridget Todd: Holly Plus isn't a big money maker, but generative AI probably will be. Think about how popular ChatGPT and Midjourney are. But a lot of artists don't like that you can instantly generate images or audio in the "style" of their work. Holly and her partner, Matt Dryhurst, lead a startup called Spawning that gives artists technical tools to opt out. They run haveibeentrained.com, a website where artists can search for their work in popular datasets, and a system that lets AI companies automatically identify non-consenting data in their training materials. Spawning says it has helped more than ten thousand artists opt out of training datasets. They also built a plugin for websites called Kudurru that acts as a barrier against web scrapers.
Bridget Todd: How do you think consent plays into these conversations about how creative industries do or do not put people over profit?
Holly Herndon: I think consent is the biggest question when it comes to machine learning facing us today. And I think that if we can have a system of consent, then I think we can have a lot of fun and things can get really weird and really creative. If we kind of remove the consent layer altogether, then I think that’s when people start to feel exploited and things can become really unfair. I mean, that’s part of the work that we’re doing at Spawning, is we’re really interested in trying to build the infrastructure to have a consent layer and I think there’s a way of doing that that doesn’t stifle creativity or innovation.
[music plays]
Bridget Todd: So if AI builders can’t just take big bites out of the internet to train their systems, how can they do it? Let’s say I need some music for a podcast. I’m not a musician, but this tune you’re hearing, I made it using Loudly, a new AI music generator. You click on a genre, some instruments, and a few other settings. And that’s it. Five seconds later you have three new, royalty-free songs for your project.
Rory Kenny: I made a very conscious decision that we would never, never illegally or illegitimately or unethically scrape the internet for music that already exists that doesn’t belong to us as a company or where we don’t have the right to do so.
Bridget Todd: That’s Rory Kenny, the CEO and co-founder of Loudly. He’s based in Berlin.
Rory Kenny: The music that we actually generate is using human-produced audio sounds. Like, they're literally being made down the hallway from me in music studios, in professional-grade environments where the sound quality is second to none.
Bridget Todd: So this is an alternative approach. Loudly employs full time musicians who create and catalog sounds in a way that’s especially suited for AI.
Rory Kenny: I'm also a musician, you know, in my own history. And I would be shocked to learn that some company had used my material to train an AI that they would then create a business from. So we took a very strong position on that a couple of years ago and we stand by it. And I think we, you know, we want to be on the right side of history here. And there's no need to plagiarize anything, because there should be value for all in this new AI technology evolution.
Bridget Todd: Rory sees a future where users could upload tunes directly to streaming platforms and revenues could be split. He says Loudly’s policy is to hang on to the copyright of the music, even though you can use it pretty freely online. If you download a lot, you pay a subscription.
Rory Kenny: So you know, from our side, we have, of course, huge consent, transparent consent from our own music producers, who I speak to every day. They're helping us and helping me build this whole system out. And they're also redefining how they make their own music to make it better fit into the Loudly AI music system. They're literally reinventing, or inventing, a new kind of musical format that works so beautifully with our system. So they're active participants. They're active contributors to the music creation platform that we have.
Bridget Todd: Loudly has an additional source for training material based on user generated content. They operate a beatmaker app called Music Maker Jam that has been downloaded by millions of people. The tunes people made with the app, which doesn’t use AI, are incorporated in training datasets for Loudly’s next new AI models. They explain this in their terms and conditions. But will users really notice this, tucked away in the fine print? Rory claims they’re still more transparent than others.
Rory Kenny: So when I think about AI, I think that this is like a big new technology shift, and there are many voices screaming against it, how it's bad, how it's going to destroy people's livelihoods and be the end of creativity. But I see it quite the opposite. To be creative is something quite intrinsic to our nature. So that's quite affirming. And I think AI will play a great role in expanding the creativity options for many people out there who don't necessarily have the skill or the time to learn how to become an incredibly good musician. It's really quite a hard thing to do.
Bridget Todd: AI isn’t the first technology to upend music production and copyright. The internet did this in a big way too. So did drum machines. And auto tune. For all the futuristic promises, AI can also amplify the past.
Enongo Lumumba-Kasongo: I’ve tried this experiment where I’ve written into ChatGPT, write me a verse in the style of Sammus and, you know, it’ll give me some kind of generic, pretty awful set of rhymes.
Bridget Todd: Enongo Lumumba-Kasongo is a producer and emcee who goes by the name Sammus. She also studies the intersection between AI and hip hop at Brown University.
Enongo Lumumba-Kasongo: It’s interesting because the material that the verse reflects is actually pretty reflective of what I’m interested in as an artist.
[music plays]
Enongo Lumumba-Kasongo: You know, it'll often talk about being a geek, which is something I do talk a lot about. It'll talk about, you know, academia, navigating school, navigating being a Black woman, and it's almost like a shadow version of my material. And so it's made me think a little bit more about how I could produce art that's illegible to a system like this. Because there's something about being known or categorized in this way that feels really constraining.
Bridget Todd: As an artist and a researcher, as well as a video game producer, Enongo has thought a lot about how reductive today's AI tools can be when it comes to artistry and the lived experience that gives it meaning. She worries that the music industry's exploitation of Black artists will be echoed in AI. It's about unfair contracts and residuals, but it's also about how Black and Brown artists have been sidelined, over and over, so white people can perform their music.
Enongo Lumumba-Kasongo: A lot of the music that we’re hearing coming out of this space, this intersection around vocal likenesses and filters, is actually pretty catchy and pretty convincing. And this has definitely created some new possibilities around monetization.
Bridget Todd: There was an AI Drake song that went viral earlier this year called “Heart on My Sleeve”. Enongo says hearing it was an eye opening moment.
Enongo Lumumba-Kasongo: I think that was the first moment where my heart sort of sunk into my stomach. And I realized that we have entered into a new kind of space and it is still wildly unregulated. So when we have this new tool that enables folks to take on the personhood of Black artists and performers, and then monetize that, of course, you know, my alarm bells are going to go off.
Bridget Todd: It's a fad on TikTok and other platforms for white people to lip sync or dance to music clips of Black performers. Enongo says it opens up a whole new set of questions around cultural appropriation, now that anyone can also embody the voice of a Black artist using voice filters, like those for Drake or Jay-Z.
Enongo Lumumba-Kasongo: Often the phrase digital blackface has kind of emerged in relation to these conversations. And I see this as an extension of the anxieties that so many of us have about what it means when someone can step into Black personhood, right, without having any kind of relationship with what it means to be Black in this world. And often what that means is having limited opportunities to generate wealth or generate income from our own cultural products. It feels like an echo of what has happened in the past, but also kind of supercharged because of how easy it is for any person to step into that role.
Bridget Todd: If you wanted to generate a rap song right now, you’d have your pick between dozens of AI rap generators online. There’s a technical reason, too, for why there are so many.
Enongo Lumumba-Kasongo: Hip hop itself is vulnerable in certain ways to AI because each line of hip hop has more kind of speech data per line than like a line of, say, a pop song, right? Like there’s so much information that’s packed into each line of verse. And so a hip hop artist, just by virtue of their catalog, will have offered so much more speech data often than artists in other sort of genres.
Bridget Todd: But some of these generators lean heavily on violent or misogynistic stereotypes. Enongo says those are choices their developers make. As the audio director of Glow Up Games, she helped develop a mobile game for the HBO show "Insecure," starring Issa Rae. The team built a custom rap generator and even added the sound of Enongo's own voice. But their values are to celebrate people of color.
Enongo Lumumba-Kasongo: We ran into so many challenges as we were just trying to scope out what this thing could be. And so many of those challenges were around the voice.
Bridget Todd: In the game, players create a rap verse — but the words and phrases in the game are curated to empower women and queer folks. And they use a version of Enongo’s voice that is more transferable to different identities.
Enongo Lumumba-Kasongo: What we settled on was this kind of rap gibberish language that we jokingly called raplish.
[music plays]
Bridget Todd: Enongo sees the creative potential for AI. But she’s critical of how and why it’s used. She thinks about how AI can be used to make people more visible to each other, instead of erasing them from the process. How can AI make it easier for artists to collaborate with each other? Or how could a musician use AI to show their work in new ways? Like next generation album liner notes where fans could toggle between different drafts of an artist’s piece.
Enongo Lumumba-Kasongo: Like, there's real thinking and work that goes into making that beautiful song be that thing that you love. And we often think about creative work as, like, descending from the sky, or like the muses just blessing us. You know, artists think that way, and I think audiences do too. But tools that can kind of bring alive or open up the creative process are, I think, one way that generative AI can actually have a really meaningful impact.
The conversation about what's happening in AI is really important. And it's part of a continuum, I think, in regards to what's happening with artists across the board. If we fight for better social conditions for artists and makers at all levels of industry, then we can start to have these AI questions be constrained more by aesthetic concerns, and not take up so much space as it relates to who's being fed and who isn't, because we'll have opportunities to live with dignity regardless of our relationship to these AI systems.
Bridget Todd: As we’ve heard throughout this season, it’s people who decide what AI should do. We can use these tools to take creativity to new and unknown places, but we should ground our hopes in ideas that put people ahead of profits. If something sounds off key, maybe it’s time to change it. I’m Bridget Todd. Thanks for being with us for this seventh season of IRL: Online Life is Real Life, an original podcast from Mozilla, the non-profit behind Firefox. For more about our guests, check out our show notes or visit irlpodcast.org. Mozilla. Reclaim the internet.