Decoding the Planet: From Whales to Whistleblowers

Show Notes

AI may be able to talk to animals, but at what cost to the planet? Who is making those decisions, and why does it matter? From decoding whale language to protecting our oceans from unchecked offshore drilling, Bridget Todd talks to visionaries seeking to preserve our beautiful, fragile world.

Holly Alpine left her job with Microsoft over the company’s role in providing fossil fuel companies with AI tools that could accelerate their production of oil and gas.

As the AI and climate lead at Hugging Face, a platform for sharing open-source AI models, Sasha Luccioni calls for more transparency from tech companies about how much energy it takes to power AI.

Aza Raskin, co-founder and president of the Earth Species Project, explains why using AI to decode animal communication could be the key to protecting our planet.

Transcript

Bridget Todd: Hey everyone, I’m Bridget Todd and this is IRL — the award-winning podcast brought to you by Mozilla Foundation with PRX. And you know something, I’m kind of getting a little worried about the climate impacts of AI. Listen up.

Aza Raskin: The Hello in Humpback Whale - it’s called a whoop call. Sounds a little bit like, Pbbbbt. When you get really good, apparently you can also say your name while saying your hello, but all I’ve got is Pbbbt.

Bridget Todd: That’s pbbbbt - better known as Aza Raskin, co-founder of the Earth Species Project. He’s using AI to understand how animals communicate. More from Aza in a bit.

Holly Alpine: My name is Holly Alpine. I am ex Microsoft and currently building a coalition about the intersection of big tech and big oil.

Bridget Todd: You may have heard of Holly Alpine. She used to work for Microsoft. She was the Head of Datacenter Community Environmental Sustainability and Employee Engagement. It was a big title. And a big job. But last year she quit. Then she went public with her ethical concerns about Microsoft’s support for the oil and gas industry. Holly has been called a whistleblower and an activist. And it’s not that Holly’s down on technology. She actually loves it. You can really hear that when she talks about moving through Microsoft’s ranks to become a leader within the company.

Holly Alpine: I had a fantastic time working for Microsoft for the vast majority of my time there. I have such respect for the company. So I was an energy analyst for a couple of years and that was fantastic to see this huge growth in the data center space and kind of how Microsoft was managing energy to try to power these massive, massive data centers.

Bridget Todd: Holly worked with communities around some of Microsoft’s data centers.

Holly Alpine: I proposed a new program, which I ended up founding and developing over the next six years, to invest in sustainability projects in those communities. I led that for many years as my day job. And absolutely loved it.

Bridget Todd: Two areas in particular were a big focus.

Holly Alpine: So urban forestry, that’s supporting planting and protection of existing trees in urban environments. Trees are so important for local communities in so many different ways. And then of course, ecological restoration as well.

Bridget Todd: It’s funny that you say this. I care quite a bit about the tree canopy in my community and all the things you said, right? It’s an equity issue. It’s an environmental issue. It’s a crime issue because when there’s less of a tree canopy, crime is, in those areas, crime is worse - It’s a whole thing with me, so I’m happy to hear you say this. But I say that to say that your passion and concern and care around sustainability and the environment is palpable, like I can hear it in your voice when you’re speaking about it. And I wonder, was there a moment where within Microsoft, your perspective shifted on AI and Microsoft’s climate commitments, where you were like, something needs to change. I’m not feeling good about these commitments and what they’re saying versus what they’re doing.

Holly Alpine: Absolutely. There’s a LinkedIn article that was published by a Microsoft employee talking about how with Azure, the future of oil and gas exploration and production is brighter than ever. That really resonated with me because I did not want to be part of a company that was helping the oil and gas industry have a future that is brighter than ever.

Bridget Todd: So can you help us understand how exactly generative AI is used? You know, it’s not, we’re not talking about oil executives having conversations with chatbots, right?

Holly Alpine: Correct. AI can, or advanced technology can be used in a few different ways, but a lot of it is around these large, large data sets that really cannot be used by a human effectively. And so this can be fed into an artificial intelligence to be able to analyze vast amounts of data. And so some of the things it’s being used for would be to find new hydrocarbon reserves quicker. Mapping the entire ocean floor is a great example of how this technology is used. That’s not something that a human being would generally be able to analyze. Um, but with these advanced technologies, they can map the entire ocean floor to find these previously inaccessible reserves. They can find more wells. They can enhance the recovery of their existing reservoirs. They also have insights into new revenue models so they can increase profits. So these are things that these advanced technologies are able to do.

Bridget Todd: Just to help me get a sense of the scope of what we’re talking about, like, do you have some of your favorite facts or figures about the climate impact that can help folks understand, like, the scale and the magnitude that we’re talking about?

Holly Alpine: Yeah, I mean, one of the big ones is the $1.4 billion potential revenue increase for ExxonMobil, and this is from Microsoft’s AI tools.

Bridget Todd: In an article in The Atlantic last September, tech journalist Karen Hao wrote that she had accessed a slide deck from 2022 with an analysis that showed how Microsoft’s tools could allow ExxonMobil to increase its annual revenue by $1.4 billion. Karen also said the analysis showed that $600 million of that revenue could come from optimizing ExxonMobil’s drilling. It’s hard to wrap your mind around those numbers.

Holly Alpine: It’s really hard to get your head around. And what a lot of people gravitate to and what we’re really seeing now is when we think about AI and sustainability and, you know, Earth impacts, a lot of people gravitate towards the impacts of the data centers themselves and the energy use and the water use. Definitely important. This is a separate topic though on what these advanced technologies are being used to do and that’s what our focus is and we believe it’s extremely important and it’s somehow being skipped over in a lot of these discussions on AI.

Bridget Todd: At first, Holly tried pushing for change from within Microsoft. She co-founded an employee group promoting sustainability. And it grew to over 10,000 employees, all of whom wanted to be more engaged in climate issues and solutions.

Holly Alpine: So within Microsoft, we wrote a memo highlighting our concerns. We were very explicit with some recommendations that were well thought out and Microsoft actually really largely agreed with but they made some promises in that initial meeting that also never came to fruition. We had many discussions with the whole employee community. We had hundreds of people on the calls where we facilitated these discussions and brought up employee concerns and that’s where we got a lot of those commitments. And I think it just took a few years of that and we really tried as hard as we felt like we could internally before really realizing that internal only was not going to be enough.

Bridget Todd: So after 10 years at Microsoft, Holly faced a moment of truth about what she could accomplish from within. Her final decision: leave her dream job.

Holly Alpine: It was definitely tough. I loved my team and I loved the work I was doing, but you know, deep down I always had a bit of concerns about how at the end of the day the purpose of a corporation is to generate value for shareholders.

Sasha Luccioni: There are lots of people that care about sustainability, that care about ethics inside big tech companies, but it’s really hard to operationalize these intentions when decisions are made because they are driven by the profit model of the company.

Bridget Todd: That’s Sasha Luccioni. She’s the AI and Climate Lead at Hugging Face, a platform for building, training, and deploying machine learning models. And while Holly Alpine focuses on the output of AI and its impact on the environment, a big part of Sasha’s focus is on the energy input. Just how much energy does it take to power AI?

The fact is, we don’t really know.

Sasha Luccioni: So, for example, we don’t have any numbers for ChatGPT, we don’t have any numbers for Claude. We’re operating with a bunch of kind of hearsay. People say that this is this kind of model. People say that ChatGPT is this energy intensive, but we don’t know. And so for me, it’s like, as long as we’re operating in a structure, right, where we don’t have any details about these models, we can’t essentially be sustainably minded. But once, for example, regulators or, or users themselves say, Hey, we’re not using this technology unless you give us this information, like each one of my queries on average, how much energy does it use or how much CO2 does it emit - like, I think that that’s the information that we need in order to make the choices, because otherwise we’re just comparing, I don’t know, like apples and giraffes. Like we don’t even know what orders of magnitude. And also the problem is that like most of these systems are not single models anymore. They’re kind of pipelines of different components. And so we don’t even know what we’re using. We don’t know what the safety mechanisms are. We don’t know if a model changed or not. We don’t know if there’s some kind of filters, we don’t know any, any of it. And so the first step would be more transparency. And then you can talk about sustainability.
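To make the kind of per-query disclosure Sasha is calling for concrete, here is a minimal back-of-the-envelope sketch in Python. The energy-per-query and grid-intensity figures are placeholder assumptions, not measured numbers for ChatGPT, Claude, or any real model; the point is only that, once figures are disclosed, the arithmetic users and regulators would need is simple.

```python
# Back-of-the-envelope sketch: if providers disclosed energy per query, users
# could estimate the emissions behind their own usage. The figures below are
# placeholder assumptions, not measured values for any real model or grid.

ENERGY_PER_QUERY_WH = 3.0          # assumed watt-hours per chatbot query (illustrative only)
GRID_INTENSITY_G_PER_KWH = 400.0   # assumed grams of CO2 emitted per kWh on a given grid

def co2_grams_per_query(energy_wh: float = ENERGY_PER_QUERY_WH,
                        grid_g_per_kwh: float = GRID_INTENSITY_G_PER_KWH) -> float:
    """Convert a disclosed per-query energy figure into grams of CO2 for a given grid."""
    return (energy_wh / 1000.0) * grid_g_per_kwh

if __name__ == "__main__":
    queries_per_day = 20
    per_query = co2_grams_per_query()
    print(f"~{per_query:.2f} g CO2 per query, "
          f"~{per_query * queries_per_day:.1f} g per day at {queries_per_day} queries")
```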

Bridget Todd: So that’s the business side of the equation, but what about the users? Sasha says we need to think about how we’re contributing to the energy consumption of AI.

Sasha Luccioni: What I’m hearing more and more is that people have ChatGPT or Claude or whatever other LLM open on their desktop all the time. And so they just go to it for any kind of tasks, for example, calculating some math problems. And so in that case, it really doesn’t make sense because it’s orders of magnitude. It’s like 10,000 times more energy, if I had to put a number on it, than kind of a solar powered small handheld calculator, and that just doesn’t make any sense. But if you’re doing something that’s, you know, writing a haiku about the beauty of Canadian winters, like maybe that’s something that you can’t do with any other technology but I think it’s worth kind of having this reflection, like, do you need to use Siri for a grocery list? Or can you have a paper and pen or, you know, something put up on your fridge or even on your phone locally as a notepad thing? So, I mean, it’s all about tasks and trade-offs. And I think the more that people start having these reflections, the less we’re going to see kind of these superfluous uses of AI.

Bridget Todd: Reflecting on tasks and trade-offs. For Sasha - that means forget the grocery lists. Save the AI power for more important things. Like solving big problems using as little energy as possible. For example, at an AI summit in Paris a few months ago, attendees were challenged to use AI to come up with a way to detect forest fires.

Sasha Luccioni: Yeah, so one of the tasks was using images gathered by CCTV cameras of essentially forests in France. And forest fires are a huge problem in France, like in many places. And the goal is to use these kind of low energy CCTV cameras with on-device AI systems in order to detect not wildfires, but the plumes of smoke before the wildfires happen. And the goal is that they’re going to be kind of scanning forests in real time. And then once they detect that initial kind of smoke, then firefighters will get dispatched before it becomes a fire and before it gets out of hand. And of course, AI is well suited for that.
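As an illustration of the detection loop Sasha describes, here is a hedged Python sketch of how low-power CCTV cameras with on-device models might flag smoke plumes and dispatch firefighters early. The camera source, the vision model, and the alerting hook are all hypothetical placeholders, not the actual system built for the summit challenge.

```python
# A hedged sketch of the idea described above: low-power cameras run a small
# on-device model over frames and alert dispatchers when a smoke plume is likely.
# The camera source, the vision model, and the alert hook are placeholders --
# this is not the actual system built for the summit challenge.

import time
from typing import Iterator

def camera_frames(camera_id: str) -> Iterator[bytes]:
    """Placeholder: yield one frame per minute from a forest-facing CCTV camera."""
    while True:
        yield b""          # in a real system, read an image from the camera here
        time.sleep(60)     # scan slowly to stay within a low-power budget

def smoke_probability(frame: bytes) -> float:
    """Placeholder for a small on-device vision model scoring smoke plumes, 0.0 to 1.0."""
    return 0.0

def alert_dispatch(camera_id: str, score: float) -> None:
    """Placeholder: notify firefighters before the smoke becomes a fire."""
    print(f"[ALERT] camera {camera_id}: possible smoke plume (score={score:.2f})")

def monitor(camera_id: str, threshold: float = 0.8) -> None:
    """Flag any frame whose smoke score crosses the threshold."""
    for frame in camera_frames(camera_id):
        if (score := smoke_probability(frame)) >= threshold:
            alert_dispatch(camera_id, score)
```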

Bridget Todd: More after a quick break. And we’re back. Shouldn’t the fundamental point of AI be to solve big problems like wildfires? And find ways to help the planet? That lofty goal is something Aza Raskin holds close to his heart.

Bridget Todd: Aza, thank you so much for being here. I’m so excited to talk to you.

Aza Raskin: Oh, thank you so much. I’m so excited for this conversation.

Bridget Todd: Aza Raskin, inventor of infinite scroll, is a writer, entrepreneur, and interface designer. And as co-founder of the Earth Species Project, he wants AI to help us do something extraordinary for the planet: understand and learn from animals.

Bridget Todd: So Aza Raskin, how did this project come to be in the first place?

Aza Raskin: Uh, well, many, many years ago, uh, I heard an NPR story about gelada monkeys, and I didn’t know anything about them. They live in the Ethiopian highlands, and they have these large family groups of 1,000 to 2,000. And what’s interesting is they sound sort of like women and children babbling when they speak, and in fact, the researchers sort of like swear that the geladas talk about them behind their backs, which, you know, it’s probably true.

Bridget Todd: They’re gossips?

Aza Raskin: They’re gossips, exactly. And what I found so interesting was that this researcher said they have a very large vocabulary, one of the largest in the primate community. But we don’t know what they’re saying. And she was out there with a hand recorder and a hand transcriber trying to figure out what they’re saying. And the thought in my mind was, well, shouldn’t we be using, you know, machine learning and AI and large scale microphone arrays to figure this out? But back then, this was, you know, 2013. AI couldn’t do something that human beings couldn’t already do. It couldn’t translate a language without a Rosetta Stone. And every year, I would check back in. Can we translate a language even if we don’t have any examples of translation? And every year, the answer was no, no, no, until 2017. That year, the AI community figured out how to translate between human languages without any examples, without any Rosetta Stone, just by matching the shape of language. And I can describe a little bit of what that means. But that was the moment - it was like, it’s time to start. And so that’s when I started Earth Species with, uh, Britt Selvitelle, who’s my original co-founder, eventually Katie Zacarian, who’s now our CEO, joined as co-founder. We spent the first, really, three years just listening and learning and going out to the Congo rainforest or up to Alaska to work with biologists and ethologists because their work is hard. Being out in the field is hard. It’s not like with humans where you can just get on the internet and scrape lots of data. Like, it, it is blood, sweat, and tears to go capture and work with these animals. And it was then in really 2020 after we had done this sort of listening tour, they were like, all right, it’s time to start hiring AI researchers and get going.

Bridget Todd: Aza’s team shared a couple of recordings of animals with us - we cued one up for this interview and full disclosure - I didn’t know what kind of animal it was when we played the clip. Okay. I’m going to guess that was some kind of an exotic bird having a conversation.

Aza Raskin: Hmm. Uh, it is a bird. Um, do you want to know what kind of bird? It’s not exotic.

Bridget Todd: Oh, what is it?

Aza Raskin: It’s a crow. It’s a carrion crow. And isn’t that, isn’t that amazing? Like we think we know what animals sound like, but in fact, we don’t really. Um, the diversity and the range of their communication. This is actually one of the projects I’m most excited about, is, we are working with these crows in Spain and they have a unique culture. So normally crows raise their chicks in pairs. So like a father and a mother will raise their, their chick, but these crows have figured out a different social structure where they come together and communally raise their children, sort of like in a village, it’s almost like a commune or a kibbutz. And to do that, they have their own unique vocabulary and their own unique dialect, and they’ll take outside adults in, teach them their vocabulary and dialect, and then they will start participating in the communal child rearing, which is wild. And so we’re working with these researchers that have little backpacks on the birds that record how the birds move, what they say. And we’re starting to get towards decoding using our latest models. And to give just one example, what the models have been able to pick up is that there is a particular sound that the crows make when they’re coming back to feed their babies. And they will land in the nest, make this sound, and it seems sort of like, I’m home, get ready to be fed, and they’ll feed their babies.

Bridget Todd: So if you were to play the sound of that crow into the translator for crow sounds, would you be able to tell me what it was saying?

Aza Raskin: To some degree, but not the full degree. So, we have a model we call NatureLM. We released it at the end of last year. And think of this as sort of like the GPT-2 or GPT-3 of animal communication. Uh, it’s not up to a GPT-4 yet. And so you can put in the sound or many sounds from many different animals because we’ve trained this model on human speech, on human music, and then like huge swaths of publicly accessible animal communication, as well as data from our partners. And it can actually answer you in English. So you would upload that sound and you’d be like, ‘What kind of animal?’ And it would say, ‘crow’. And then you say, ‘Is it an adult or is it a juvenile?’ And it might say, juvenile or adult. Um, and you could say, ‘What kind of call?’ And it might say, this is an alarm call or this is a feeding call, depending on what we already know. And so we’re starting to get to this place where you can have a full interactive session, drop in an entire audio clip, and say how many animals, of what kind, how many individuals are in here, and do we know anything about their behavior.

Bridget Todd: Yeah, I’m curious. How was this model built? And like, what data sets are being used?

Aza Raskin: So, we’re actually starting now with human models. So, you know, you take one of the large scale open source language models trained on human speech. We then will add in human music, and then we start adding all of the datasets that are publicly available online and in research papers, and then we work with, you know, 40, 50, 60 plus biologists and institutions, and they also give us access to their data of animal communication, animal behavior, the 3D tracks of how animals move, and that’s what trains these models. What’s really interesting is the fact that training it on human language and human music first turns out really helps the models work to classify, detect, do all of the tasks that we and biologists need for animal communication. And that tells you something interesting. That means domain transfer is working. That the patterns that AI is picking up in humans is transferring and is teaching us, teaching the AI, something about how animals communicate. And if there was nothing that was the same between the two communication systems, you wouldn’t see that result. That’s fascinating, that’s starting to give you hints that actually there really is a similar underlying structure which we’re starting to be able to understand.
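As a rough illustration of the staged recipe Aza describes, the sketch below shows a training curriculum that keeps reusing the same weights: first human speech, then human music, then animal recordings. Everything here is a toy placeholder in PyTorch; it is not Earth Species Project code or the actual NatureLM training pipeline, only a way to picture why transfer from the earlier stages can help the later ones.

```python
# A toy sketch of staged training: start from a model trained on human audio,
# then continue training the same weights on music and on animal recordings.
# The backbone, datasets, and labels are placeholders, not NatureLM or ESP code.

import torch
from torch import nn
from torch.utils.data import DataLoader, Dataset

class PlaceholderAudioDataset(Dataset):
    """Stand-in for a labeled audio corpus (speech, music, or animal calls)."""
    def __init__(self, n: int = 32):
        self.n = n
    def __len__(self) -> int:
        return self.n
    def __getitem__(self, i: int):
        return torch.randn(16000), torch.randint(0, 10, ()).item()  # 1 s of fake audio, fake label

class TinyAudioClassifier(nn.Module):
    """Stand-in for a pretrained audio backbone plus a classification head."""
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(16000, 256), nn.ReLU())
        self.head = nn.Linear(256, n_classes)
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))

def finetune(model: nn.Module, data: Dataset, epochs: int = 1) -> None:
    """Continue training the same weights on the next stage of the curriculum."""
    loader = DataLoader(data, batch_size=8, shuffle=True)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for audio, label in loader:
            opt.zero_grad()
            loss = loss_fn(model(audio), label)
            loss.backward()
            opt.step()

# Staged curriculum: human speech, then human music, then animal communication,
# reusing the same weights so patterns learned in earlier stages can carry over.
model = TinyAudioClassifier()
for stage in [PlaceholderAudioDataset(), PlaceholderAudioDataset(), PlaceholderAudioDataset()]:
    finetune(model, stage)
```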

Bridget Todd: How close do you think we might be to actually being able to have a two way conversation and communicate with animals?

Aza Raskin: Hmm. Well, here it depends a little bit on what you mean. If you mean that we will be able to have an AI and an animal communicate back and forth, but without human beings knowing what they’re saying, that is right now. That’s this year. That’s possible or nearly possible. If it’s, and human beings understand what we’re saying, that’s still a little ways off.

Bridget Todd: I cannot wait until we’re getting, like, animal podcasts, like, what, like, what’s on their mind?

Aza Raskin: Right? Um, we’re gonna have a whole new type of reality television show where it’s just like all of their internal dynamics. But also, the thing that blows my mind, right, is - actually, we’re working with, with, um, with belugas as one of the species, in collaboration with Valeria Vergara, who’s one of the leading experts on beluga communication, and they have dialects. You can actually tell which clan a beluga comes from based on just how they speak, just their dialect. And dialects over time can drift so far apart that they become mutually unintelligible languages. So orcas, for instance, have both dialects and languages. Dialects, they can communicate between themselves. Languages, they cannot. So just like English to French. And when you trace those back, you know, as I said, often those can go back, in dolphins and whales, for 34 million years. And imagine what wisdom there must be, in something that has survived, 34 million years of cultural evolution. Like, whatever it is, almost certainly it is outside the scope of human imagination. And whatever it is that is the solution to humanity’s problems, I argue it’s not in our imagination because if it was, we’d be doing it, but we’re not. So we are looking for things that are outside the sphere of human imagination. And that’s what gets me so excited - not the ways that we’re going to be able to translate animal language to and from. That I think will be fascinating. But even more so are the things we cannot possibly imagine.

Bridget Todd: You can probably tell, this was blowing my mind. It is mind-blowing stuff. And a real creative use of AI. If we really learn to decode animal communication, it could fundamentally change our relationship with them. But the question is: what will we as humans do with this powerful new tool?

Aza Raskin: What happens when AI starts to be able to speak back to animals? So in the same way that ChatGPT doesn’t really understand what you’re saying but spits out statistically likely text at a very high level that your brain then interprets as meaningful, so too can we make models that mimic the statistics of how animals speak, and then speak back. And in fact, you know, we are working right now to see if we can pass the Zebra Finch Turing test. That is, can we have a Zebra Finch, which is a type of songbird, talking to another Zebra Finch, and then you swap out the animal for an AI, and can the Zebra Finch tell the difference? Um, and we’re starting to get hints that maybe actually we can pass that test, um, in some very preliminary studies, which is, which is fascinating. But what do we know then? Well, we know that humpback whale song goes viral. So for whatever reason, humpback whales off the coast of Australia are like the K-Pop singers. Whatever they sing, because they travel halfway around the world and sometimes their communication can go across an ocean basin, the entire world population, or much of the world population, can start to sing the songs of the Australian humpbacks. So if we just create a synthetic whale that sings and place it in the ocean, that might create a kind of viral meme, a whale QAnon, that disrupts a culture which has lasted for 34 million years, as far as we can tell, right? That’s 85 times longer than the human species has existed. So obviously there’s a deep set of responsibility and ethics that’s required once you start to create the ability to communicate back, especially before you know what you’re saying. And that means it’s really important to think about what open sourcing means if there aren’t protections against the abuse of the technology. And so we’re doing a lot of work now to think about, what is the, you know, Geneva Convention for Cross-Species Communication or Declaration for Universal Communication Across Species? Like, how do we create the norms and the rules and the laws so that when the technology is deployed - and, you know, we now see it as inevitable that this stuff gets made - it’s used by not just us but the world to, you know, enhance the natural world and not, you know, create viral memes and disrupt these ancient cultures?

Bridget Todd: Well I want to get into that because I know that intersects with a lot of your different bodies of work. You’re somebody who’s been really critical of how technology is used inhumanely in a ton of different contexts. So I wonder, like, I mean, it sounds weird to think about, but it sounds like it is absolutely necessary and time to be having these conversations. How do you prevent harms like this? How do we prevent bad actors from tricking whales into swimming into traps, people who want to hunt them or something? How do we actually make sure that this technology is being used in ways that are humane and ethical?

Aza Raskin: It’s a great question, and if I said I knew all the answers, I’d be lying. It’s very complex. So bad actor use is going to be, you know, uh, poachers and ecotourists sort of like, ‘Hey, Hey, animal, like, Hey, Rhino, come over here, come over here. There’s food over here.’ And for that, we’re going to need a set of like, you know, rules, norms, enforcement, um, geofencing, uh, any app that might be able to, like, speak back, things like that. And then, you know, I think factory farms are going to have an incentive to use these kinds of technologies for greater control. And before there’s an economic incentive, before the technology is real, I think creators of technology have an obligation to pre-think through how their technology will be used and then work to create the appropriate international and national protections so that it doesn’t get used that way.

Bridget Todd: So Aza, how has this work changed the way that you personally think about and relate to the natural world?

Aza Raskin: I think of it a little bit like that moment, in 1995, when the Hubble Telescope was pointed at an empty patch of sky, and what they discovered was not nothing. What they discovered was more galaxies than anyone had ever seen in a particular spot. That is to say, they pointed the telescope at nothing and what they discovered was everything. And I think that’s what is about to happen with our new scientific tools. That we will point our new scientific tools at an area where, as Western science, we don’t think there really is anything, and what we’re going to discover is everything. And that, to me, as an individual human, it just cracks my heart wide open.

Bridget Todd: Imagine if we could understand what whales are saying to each other. If we could learn from their millions of years of evolutionary experience. Or if AI could help prevent wildfires from burning our towns and cities. If we think along those lines, we have a real chance of using AI to better the world. And I think the whales would agree with that. Thank you all so much for listening to Season 8 of IRL. A big shoutout to all of our guests, who so generously shared their time and their stories. A special thank you to Erin Pettit, along with Solana Larsen, Eeva Moore, J. Bob Alotta, Toni Carlson, Caitlin Faulds, Rebecca Seidel, and the rest of the team at PRX. For more about our guests, check out our show notes, or visit IRLpodcast dot org. This fall, Mozilla Foundation is launching an exciting new digital magazine all about how to live online. It’s called Nothing Personal. It’ll champion the best of the internet, the people, the ideas, and the possibilities. So stay tuned.