Bot Or Not

Season 2: Episode 1

From politics to poetry, bots are playing an increasingly visible role in culture. Veronica Belmont investigates the rise of social media bots with Lauren Kunze and Jenn Schiffer. Butter.ai’s Jack Hirsch talks about what happens when your profile is stolen by a political bot. Lisa-Maria Neudert measures how bots influence politics. Ben Nimmo teaches us how to spot and take down bot armies. And Tim Hwang explores how bots can connect us in surprising, and meaningful, new ways.

Published: January 8, 2018

Show Notes

Bots, they’re just like you and me. Except easier to find, especially on Twitter. :) Here’s a handy guide to spotting bots in social media, plus the answers to the bot-or-not quiz you heard on the episode.

Transcript

Jack Hirsch: My name is Jack Hirsch. I’m the CEO of Butter.ai.

Veronica Belmont: This is my friend, Jack. He supports net neutrality. Except, there’s one problem.

Jack Hirsch: My name was used to submit a fake comment to the FCC in favor of repealing net neutrality.

Veronica Belmont: The FCC is the Federal Communications Commission in America. In December, they voted to kill rules that were in place to protect a free and open internet. In the months leading up to the vote the public logged more than 22 million comments on the FCC’s website. They were overwhelmingly in favor of keeping those rules intact, but buried within those legitimate comments were millions of fraudulent ones. Comments from dead people, from made-up email addresses. I even saw one from Luke Skywalker. And they come from people like Jack, whose real identities were taken to attack net neutrality with fake messages supporting repeal. Jack only learned about this when the Wall Street Journal contacted him with a very targeted survey.

Jack Hirsch: I said, “Yeah. Sure, I’ll take the survey,” and they slowly walked me down a more and more horrific path where it was like, “Is your name Jack Hirsch?” “Yes, it is.” “Are you the CEO of this company?” “Yes, I am.” “Is this your home address?” I was like, “Okay, Wall Street Journal, you’re getting a bit creepy.” Then they showed me in the survey a comment that said something along the lines of, “I want to … I am in favor of repealing this Obama-era overreach of federal government oversight of our telecommunications industry.” “Did you submit this comment?” I said, “Absolutely not.” It was exactly the opposite.

Veronica Belmont: But who did this? Or, more to the point, what did this?

Jack Hirsch: I have heard the theory that there were foreign state actors using bots to submit comments opposing or in favor of certain bits of legislation.

Veronica Belmont: Bots, little bits of software built to do a programmer’s bidding, used in this case to spam the FCC website.

Jack Hirsch: As I was going through the process, my first feeling was sort of disbelief, and then that led quickly to, you know, sort of just confusion and disgust.

Veronica Belmont: Jack supports net neutrality, but here he was on the record saying just the opposite, all thanks to a bot. Now he wonders what else is being influenced by a network of bots. Can we trust that our institutions are hearing from and listening to the actual voice of the people?

Jack Hirsch: If we are meant to be having public discourse and we want to use the internet as a tool to enable that, how do we make that happen in a safe way?

Veronica Belmont: Personally, I love bots. I actually work for a company that makes a bot. When I think of bots, I think of bots that help people suffering from mental health issues. I think of business bots that make for better customer service. Good bots also helped organizations mobilize and gather real submissions from real people during the FCC’s net neutrality debate. But the bad bots created a scandal. A story like Jack’s makes it easy to hate bots. They tamper with our politics. Russian bots attempted to do just that during the 2016 US federal election. Bots will spam us. Bots will use fake identities and pretend to be us. Bots can go from being friendly minions to evil gremlins with a flick of a virtual switch. So what do we do about our bot problem? I’m Veronica Belmont and welcome to season two of IRL: Online Life is Real Life. An original podcast from Mozilla. Like me, Lauren Kunze works with bots. She’s the CEO of Pandorabots and they’re the world’s largest developer of Chatbots. They’ve built more than 300,000 of them. We happen to know each other. It’s a pretty small community.

Lauren Kunze: A bot very simply put is a software program that runs automated tasks usually on the internet.

Veronica Belmont: It’s not necessarily a robot in a chair somewhere spamming me over Twitter?

Lauren Kunze: That’s right. Most bots do not have a corporeal presence.

Veronica Belmont: I like that. That’s how I’m going to describe it from now on. “This bot does not necessarily have a corporeal presence.”

Lauren Kunze: Definitely, but I should say that all bots do have a human behind them who programmed it. Once the program has been written it can be fully automated, so that doesn’t mean that there’s a human supervising everything, or people at Amazon reading in the middle of the night everything that you’re saying to Alexa.
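Lauren’s point, that a bot is just a program which runs unattended once a human has written it, can be made concrete with a tiny sketch. Everything below is hypothetical: `fetch_mentions` and `make_reply` stand in for whatever platform API and canned rules a real bot would use.

```python
# Hypothetical sketch of a bot: the names and "API" below are invented
# for illustration; a real bot would call a real platform's API.

def fetch_mentions():
    """Stand-in for polling a social platform for new mentions."""
    return ["What's the weather like?", "Are you a bot?"]

def make_reply(mention):
    # The bot's "intelligence" is just a rule its programmer wrote in advance.
    return f"Thanks! You said: {mention}"

def run_bot():
    """Once written, this loop needs no human supervision."""
    return [make_reply(m) for m in fetch_mentions()]

print(run_bot())
```

Point a loop like this at a live API on a schedule and it runs indefinitely with nobody reading along, which is Lauren’s point about Alexa.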

Veronica Belmont: Why does it feel like we’re hearing so much more about bots these days than previously?

Lauren Kunze: I think it’s because we’re at this tipping point with social media in particular, where there have been a lot of automated accounts that are actually having a serious real-world impact, that are affecting policies, and politics.

Veronica Belmont: Yeah, sometimes it does feel like I’m speaking to bots and not to humans. How much of an issue is this for Twitter?

Lauren Kunze: Bots have always been a tricky grey area for Twitter in particular and the reason that I say it’s a grey area is because some of the bots were really delightful. There’s a poetry bot, and Shakespeare bots, and bots that engage with people that are very clever, but there are also bots that are malicious. This goes back to what we were saying earlier, which is that behind every bot there’s a human.

Veronica Belmont: What would you say the odds are the average person online has had a conversation with a bot without even realizing it?

Lauren Kunze: The chances that you’ve encountered a bot on the internet in your routine internet activity are probably 100%. Whether it’s some kind of spam email that you’ve been sent that was automated, or something retweeting you on Twitter, or a Facebook friend request from a profile that doesn’t look real. So this is a very, very pervasive technology.

Veronica Belmont: How good can a bot be at manipulating people’s opinions, or political leanings, public policy? Can bots actually influence politics?

Lauren Kunze: I think so, definitely. You actually don’t need something that sophisticated to reinforce somebody’s opinion. Simply repeating the same thing back to them and making them think that there are a lot of people and forces, because you’ve automated the spewing out of this rhetoric, can be tremendously powerful.

Veronica Belmont: Lauren Kunze is with Pandorabots. Political bots are just one of many, many kinds of bots you’ll find online. Like Lauren mentioned, you’re probably interacting with ordinary helpful bots all the time and you might not even know it. Like, have you tried Apple’s Siri, or the Amazon Echo’s voice assistant? Those are bots. Hundreds of bots help maintain Wikipedia pages.

Jenn Schiffer: I just know that Wikipedia has a lot of them in action. I know that there are bots that allow them to patrol edits. Let’s say a celebrity had passed away, they have a bot that will track that page and flag it as potential for vandalism and revert anything that does happen.

Veronica Belmont: Jenn Schiffer helps build something they call handy bots over at Glitch.com.

Jenn Schiffer: Choosing my favorite bot’s kind of like asking me what my favorite musician is. It changes all the time because there are so many bots for different moods. There are ones that take TV screen caps and add fake captions to them. There’s Soft Landscapes, which just shows these digital landscapes of pastel colors. There are a lot of bots that remind you to, “Oh, take a break.” Like, “Drink some water.” I also see bots that people are making to change behaviors in their, like, workspace. Bots in Slack that, when you say, “Hey guys,” remind you, “Maybe don’t use guys, it’s not inclusive speech,” which I think is pretty wild. As we are generating more and more data, we want to democratize that so it can be used for good, and bots allow us to automate tasks that we otherwise would be spending precious time doing, instead of democratizing that data, or telling stories with that data. So I think in that sense bots are extremely valuable, especially as we gain more and more data and need to use it to solve societal problems.

Veronica Belmont: Like I said, I love bots. We need our bots, but because they’re so fundamental to our online lives, we have to watch out for how they’re subtly influencing us. After the 2016 US election, execs from Twitter were called to the US Congress to discuss their bot problem. It’s been suggested that as much as 15% of Twitter users are bots and some of them were trying to game our election. Here’s Congressman Jim Himes questioning Sean Edgett from Twitter.

Jim Himes: So I guess my question is, should political content created on the one hand by algorithms, by bots, or by any other form of artificial intelligence, should that be labeled as such, and if that political content is generated by a foreign person, should it be labeled as such?

Sean Edgett: We don’t try to label it, we try to remove it. So when we’re seeing automated accounts engaged in the activity that we’re talking about today, the mass retweets, the mass replies, the mass liking of other tweets, we’re removing those actors from the platform and because of the information we have behind the scenes, we can actually connect those accounts often times. So we’re not just removing the one, we’re removing the collective.

Jim Himes: What do you think your estimated rate of success in removing bots is?

Sean Edgett: We’re getting better. We think we’ve gotten twice as good in the last year. We’re challenging four million accounts a week, 450,000 a day.

Jim Himes: Give me a …

Veronica Belmont: The challenge for Twitter is that they actually want some bots on their platform. Bots increase site traffic and that’s obviously good for them, but when you leave the door open to some bots, the bad ones can creep in too. Have you ever been fooled by a bot? Telling the difference between a bot or not can be harder than you think. We quizzed a few people around the office to see how they’d do.

David: So the game is called bot or not. Basically, I just read you a tweet and then you tell me if you think it was written by a bot or an actual human. First one off the top here, why are delis considered to be restaurants? Bot or not?

Speaker 2: Ahh…Not.

Speaker 3: Bot.

Speaker 4: Not.

David: That was a bot. I am the most mature out of everyone in my imaginary circle of friends. Bot or not?

Speaker 4: Not.

Speaker 3: Not.

Speaker 2: Not.

David: That’s a bot.

Speaker 2: (laughing) I’m not very good at this game.

David: Rudeness towards and harassment of bots should not be tolerated because it can normalize such behavior in interactions with humans. Bot or not?

Speaker 3: Bot.

Speaker 4: Bot.

Speaker 2: That’s a bot defending him, or her, or itself.

David: Not a bot.

Speaker 3: No.

Speaker 4: Dang it.

David: Okay, one more here. None of my Gmail ads are relevant these days. It’s like Google doesn’t even know me anymore. Bot or not?

Speaker 2: Not.

Speaker 3: Not.

Speaker 4: Bot.

David: No, that is actually from Veronica Belmont. That’s a Veronica Belmont tweet.

Speaker 4: I’m sorry Veronica.

David: Veronica, you are not a bot. You’re a human. I can attest to that.

Veronica Belmont: Thanks for the vote of confidence, David. Though, I find it oddly flattering that I passed for a bot. How about you at home, were you playing along and trying to guess? How about I toss a few more tweets at you during the episode and you can keep score? Here’s one to get us started. Ready? All right, here we go. A Congressman is standing perfectly still in a bolt of lightning. Well, bot or not? Keep track of your guesses and I’ll give you the answers at the end of the episode. Aside from spreading propaganda and sowing chaos in online discussions, sometimes a bot army will zero in on individuals. Ben Nimmo knows firsthand what happens when you make a bot angry. He’s a Senior Fellow at the Atlantic Council’s Digital Forensic Research Lab. DFR Lab for short. His job is to stop bots and anything else from spreading lies. So, of course, he became a target.

Ben Nimmo: I was being attacked because I had co-authored a couple of posts on Russian disinformation in America and I had exposed pro-Russian political botnets and manipulation in America. So what had happened was somebody had taken the profile page of one of my colleagues and created an exact copy of it, and then they’d used that account to tweet that I had died that morning. Then they used another botnet to retweet that about 13,000 times. I didn’t feel too threatened, but what really got to me was that I started getting messages from my colleagues and from my friends asking if I was okay. They had got scared, and that got me increasingly angry and increasingly determined to do something about this. What then happened was the Atlantic Council, of which my team, the DFR Lab, is a part, tweeted its own tweets advertising our story on the Russian botnet that we’d identified. They started getting the same treatment from the botnet. They got massive volumes of retweets, and very, very quickly. In their case, by the end of the day they had had 106,000 retweets on a single post.

Veronica Belmont: At first blush, that might sound like a good thing, an army of bots promoting your content on social, but that’s not what’s happening here.

Ben Nimmo: They were being used to retweet stuff that me and my colleagues were tweeting, so that me and my colleagues would see our notifications effectively melting down. It’s insane to watch and if you’re not aware of what’s going on, it’s very spooky and it can be quite alarming.

Veronica Belmont: As overwhelming as that might have felt, Ben wasn’t about to let these bots get away with harassing him and his team. He fought back.

Ben Nimmo: And so I suspected that they had been programmed to retweet anything which mentioned my Twitter handle, the DFR Lab Twitter handle, and the words, “Bot attack.” So I wrote a tweet tagged to Twitter Support saying, “Hey Twitter Support, have you noticed what happens when you tweet about DFR Lab, Bot Attack.” So what I was trying to do was build a trap. When I got up in the morning 86,000 bots had taken the bait and had pinged themselves to Twitter Support, but progressively during the day the number of retweets dropped down to about 10,000, which I took to mean that about 76,000 bots had been taken offline. I think we killed a botnet.
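Ben’s trap worked because the botnet apparently matched tweets against fixed trigger phrases with no context check. A speculative reconstruction of that kind of naive logic (the actual botnet code is unknown, and the handles below are illustrative guesses) shows why it was exploitable: the bot cannot tell an ordinary mention from deliberate bait.

```python
# Speculative reconstruction of the trigger logic Ben describes; the botnet's
# real code is unknown, and the handles here are illustrative guesses.
TRIGGERS = ["@benimmo", "@dfrlab", "bot attack"]

def should_retweet(tweet):
    """Retweet anything containing every trigger phrase, with no context check."""
    text = tweet.lower()
    return all(trigger in text for trigger in TRIGGERS)

# An ordinary mention fires the trigger...
print(should_retweet("Read @benimmo and @DFRLab on the latest bot attack"))  # True
# ...but so does bait tagged to Twitter Support, which is the whole trap.
print(should_retweet("Hey @TwitterSupport, see @benimmo @DFRLab bot attack"))  # True
```

Every bot that took the bait retweeted itself straight into Twitter Support’s notifications, flagging its own account for review.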

Veronica Belmont: Ben Nimmo won a battle, but the war that politicized bots wage against us is much, much larger. Bots have been spotted in all kinds of political places. They were deployed to slander France’s then presidential candidate, Emmanuel Macron. Other bots were launched to increase tensions between Qatar and its neighbors in the Gulf. The Islamic State built an app that automatically spreads jihadist messages. The infamous Mirai botnet was capable of overloading sites like Reddit and Twitter. They could hold entire platforms ransom, killing conversation altogether. Every day political bots nudge our conversations in whatever direction their programmers desire. Lisa-Maria Neudert works at Oxford University’s Computational Propaganda Project. She says there are three types of people who make political bots with bad intentions.

Lisa-Maria Neudert: So the first group is probably the least intimidating one. It’s just … It’s a couple of hackers. It’s techies that enjoy making bots. They just want to play around with them a little bit. Then the second group is the political economy surrounding bots. For example, our research has found a, for lack of a better word, bot farm in Poland, and at this bot farm were 40 people, fully employed. They’re doing that for political parties. They’re doing it for the oil industry. They’re doing it for the farming industry. Then the third group is the one that probably scares me the most, and that is the deep ideological group that is doing it purposefully to just sow propaganda. Oftentimes state-sponsored agencies, often clandestine agencies, that are doing it purposefully to intervene in elections, to influence referendums, and also to change the way that we’re having conversations on social media.

Veronica Belmont: On average, how successful have bots been?

Lisa-Maria Neudert: It’s difficult to measure success in bots, but what I can say is that right now, for almost any political issue, for almost any political hashtag that we see on Twitter, on Facebook, we have bots that are somehow involved in the conversation and that are somehow manipulating the climate of opinion.

Veronica Belmont: What do we do about political bots?

Lisa-Maria Neudert: There’s a couple of measures that are being suggested and that are being discussed right now. I myself, I’m German, and right after the US elections, Angela Merkel actually said, “Guys, watch out for those elections. They are going to be different than anything that we have ever seen. There’s going to be propaganda. There’s going to be algorithms. There’s going to be bots.”

Veronica Belmont: Yeah.

Lisa-Maria Neudert: What happened then is that regulators really perceived this as something that they should intervene in. Germany right now is the first western democracy that has actually regulated in response to bots and in response to fake news. It is something that is called the Netzwerkdurchsetzungsgesetz. That, by the way, is one word in German. It holds social media companies liable for the content that is being posted there. So if a bot is posting a piece of fake news content, a piece of defamatory content, they are now obliged to take that content down, and if they fail to do so within 24 hours upon notice, they face fines of up to 50 million euros.

Veronica Belmont: Wow.

Lisa-Maria Neudert: Which even for a Facebook is a lot of money. Yeah, and the problem with those kinds of countermeasures is that they’re really fearful, and really ill-advised. Because I think if Facebook has the incentive to take down whatever kind of content, that is problematic, because otherwise they are facing huge sums that they would have to pay. Then this is going to change the way the political conversation is happening over social media in Germany. That might have an impact on freedom of speech, and that might also have an impact on the general openness of conversations.

Veronica Belmont: Lisa-Maria Neudert is a researcher at the Computational Propaganda Project, which sounds badass. Okay, so are you feeling confident in your bot-spotting skills? Try this tweet out. “Knock, knock.” “Who’s there?” “Consequences.” “Consequences, who?” “Consequences of global warming.” So who’s responsible for this joke, a bot or not? I just want to know who’s responsible, because this joke is terrible. It’s a terrible joke. Politicized bots are threatening activists, trying to swing votes, and are even getting retweeted by President Trump. But what if we gave them less devious tasks? They could be put to work spreading real information on highly charged topics. Bots can make us more informed, instead of more confused.

Tim Hwang: So one of the projects that we played around with for a period of time was using bots to sort of fight disinformation, or misinformation online, and specifically the topic we were working on was sort of anti-vaccine activism. The notion was these bots would kind of float around online and have normal behavior, but as soon as someone said, “Hey, have you seen this link, ‘Vaccines might cause autism?’” They would sort of execute countermeasures.

Veronica Belmont: Tim Hwang has been running a series of experiments to see how bots can connect us to each other in more meaningful ways.

Tim Hwang: It was trying to bring in pro-vaccine activists and then point them to people who might just be learning about sort of anti-vaccine conspiracy theories and so the bot would say, “Hey Tim, you should really talk to Veronica. She feels differently about this issue and also you have this friend in common, or these three friends in common, or this person is a friend of a friend.” I think what we were really surprised by was the extent to which people really had these sort of deep conversations online that would really go back and forth.

Veronica Belmont: Do you have any stats or data from your own research and experiments about how bots have made new positive connections between people?

Tim Hwang: Yeah, definitely. We were able to find that actually they were able to increase connections at a much higher rate than before, on the order of about 50 to 60% more than they were otherwise connecting, and that the sentiment of those was overwhelmingly positive. That was a really interesting kind of result. Now, I think I don’t want to over-sell it, right? I think we’ve had our real big failures, right? Where the bots have gone in and really I think that there’s no real statistically significant effect, so the jury is still out and I think we’re really trying to still figure out what are the factors that make it work and what are the factors that don’t make it work.

Veronica Belmont: Tim Hwang was dubbed the Busiest Man on the Internet by Forbes, which they probably meant as a good thing. He’s also Director of the Ethics and Governance of AI Initiative at the MIT Media Lab. All right, friends. Here’s one last bot or not challenge, a final tweet for you to decide about. Here we go. Be sure to change your face every 90 days. The longer the better. Made your guess? Okay, you’ve been very patient and now it’s time to finally get some answers. So let’s recap. A Congressman is standing perfectly still in a bolt of lightning. Yeah, that was a bot. A drunk bot, maybe, but a bot nonetheless. And the second one, the awful knock-knock joke. Yeah, a bot. Actually, it’s a bot that only tells knock-knock jokes using Google auto-complete. And finally, be sure to change your face every 90 days. Bot or not? Human. In fact, it was a tweet from Anil Dash, who you may remember from our free speech episode last season. Now that you’ve completed three rounds of bot or not, you’ve probably realized that telling the difference between humans and scraps of software is no easy task. You need more clues, more evidence. There are actually bots on Twitter designed to sniff out other bots. Check out Probabot, for example. It’s a project from the Quartz online news site. It scores political tweets to rate the likelihood that they were written by bots. And Ben Nimmo, the destroyer of evil botnets, who you heard from earlier, has some tips on how to spot a bot.

Ben Nimmo: The easiest thing to do is to start off by going to the profile page of an account that you suspect might be a bot. And you look at when it was created and you look at how many tweets it’s posted since it was created. And normally, for a human user you might find them tweeting maybe 10 or 15 times a day. There are bot accounts out there, which I know, which post over 2,000 times a day. Then you can look at the anonymity. You look at the profile page and you ask, “Does it have a real human name, or does it just have a scramble of letters and numbers? Has it got a profile picture of a human being, or has it got a tree, or a lake, or a mountain?” Then look at the amplification and if all you see is an endless slew of retweets, or of shares from websites, then you’ve got an account which is behaving like a bot. It’s not making its own voice heard. It’s making other people’s voices heard. So the top three indicators are activity, anonymity, and amplification.
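Ben’s three A’s translate naturally into a crude heuristic, sketched below. The thresholds, field names, and example account are all assumptions made up for illustration, not published cutoffs; real bot detection weighs many more signals than three.

```python
import re
from datetime import date

def bot_score(created, today, tweet_count, handle, has_human_photo, retweet_ratio):
    """Score an account 0-3 on Ben Nimmo's three A's.
    All thresholds are illustrative guesses, not published cutoffs."""
    score = 0
    # Activity: a human might tweet 10-15 times a day; some bots top 2,000.
    days = max((today - created).days, 1)
    if tweet_count / days > 72:  # far beyond plausible human output
        score += 1
    # Anonymity: a scramble of letters and numbers, or no human profile photo.
    if re.fullmatch(r"[A-Za-z]+\d{4,}", handle) or not has_human_photo:
        score += 1
    # Amplification: an endless slew of retweets, no voice of its own.
    if retweet_ratio > 0.9:
        score += 1
    return score

# A seven-month-old account with 90,000 posts, a scrambled handle,
# a landscape avatar, and almost nothing but retweets scores 3 of 3.
print(bot_score(date(2017, 6, 1), date(2018, 1, 8), 90_000, "jdh83749262", False, 0.97))
```

A score of 3 doesn’t prove anything by itself, which is why Ben’s full checklist runs to twelve points rather than three.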

Veronica Belmont: Ben actually has a full 12-point how-to-spot-a-bot checklist to read online. There’s a link to it in the show notes. Check it out. Whether you’re hunting down malicious propaganda bots, or building a better bot to save the world, at the end of the day these marvelous bits of software are just that, software, pieces of code. Bots didn’t invent propaganda, or dirty political tricks. Their ability to scale, to scale quickly, and to scale big, that’s the risk undermining how we talk to each other. Before you dive into your next Twitter feud, ask yourself, “Am I about to fight a bot?” And if so, is this really a fight worth having? When we go online, those questions about what’s real and who is real can feel overwhelming. It’s not just about bots impersonating angry mobs either. It’s the whole online experience. When we transform into avatars, when we express ourselves with emojis and star ratings, our identities seem to shift, get reimagined. As we dive into season two, that theme, identity, is what will carry us through every episode. Because the internet doesn’t just transform society at large, it transforms you and me. All season long, the identity crisis that looms at the heart of our online lives. If you’ve been with me since season one, nice to have you back. If you’re here for the very first time, don’t forget to subscribe. You can find us anywhere you get your shows. If you like the show, let us know. The best way is to leave a rating and a review on Apple Podcasts. IRL is an original podcast from Mozilla, the nonprofit behind the all-new Firefox browser. I’m Veronica Belmont, I’ll see you online. Until we catch up again, IRL.

Veronica Belmont: Either a bot wrote it and it’s bad, because it doesn’t make sense, because bots don’t have a sense of humor, or it’s actually good and I’m just not smart enough to get it, or a person wrote it and they’re not funny. Either way, it’s terrible. It’s really terrible. It’s truly, truly terrible. This is what I … You’re getting real Veronica right now. Getting the essence of me. Don’t you want that? Don’t you want my essence? Why does that sound so gross?