Misinfo, Bots, and Trolls - Some Key Concepts for Misinformation
Episode: S1 E6
Podcast published date:
SUMMARY KEYWORDS
bots, misinformation, people, account, troll, tweet, sock puppet account, disinformation, conversation, create, platforms, botnet, online, person, boost, trolling, information, disinformation campaign, new york times, post
SPEAKERS - Shawn Walker, Michael Simeone
Michael Simeone: This is Misinfo Weekly, a somewhat weekly program about misinformation in our time. Misinfo Weekly is made by the Unit for Data Science and Analytics at Arizona State University Library. This week we do something we should have done a while back: we go over some basic terms and concepts for misinformation studies, starting with basic definitions of terms that people have heard before, and then moving on to slightly more complicated things. There's a lot more to talk about after today, but we're aiming to get everybody generally oriented. This week, we'll aim to get a lay of the land on some common terms that people use when they talk about mis- and disinformation. So Shawn, why don't we start out with something simple: misinformation. This is a word that we've used a lot. Define misinformation.
Shawn Walker: So, misinformation is basically information that's false, but not necessarily intentionally false.
Michael Simeone: So I'm trying to think of an example. I have an idea. I think that teens on TikTok were the reason that Trump had really reduced rally attendance in Tulsa, and I share that on Facebook, but it might not be 100% true. Is that misinformation?
Shawn Walker: So that could be a good example of misinformation, because your intent wasn't to deceive. The difference between mis- and, we'll talk about disinformation in a second, is often intent. So am I trying to deceive, versus am I just circulating potentially incorrect information?
Michael Simeone: Okay. So, and this is a conversation that we've had offline a little bit, the claim that that particular rally had its attendance reduced by TikTok is a little dubious, and there are plenty of other factors that go into it. So if it's partially untrue, and then I share it without knowing that, is that also misinformation?
Shawn Walker: So yes. The difference is that with disinformation, there's an intention to deceive. With misinformation, you don't necessarily mean to deceive, so you're only sharing part of the story. But it's not that you know the entire story and you're only sharing a specific aspect. It's that you're sharing only the piece that you know, and you don't know that there's more to the story.
Michael Simeone: Okay, so you're kind of hooked by an appealing fact, or an appealing fiction, and there might be some truth peppered in there somewhere. And we've mentioned this before: one technique of misinformation is to treat the communication like cement, right, where there are chunks in there that are true, and then there are chunks in there that make it false. But those two kinds of substances together kind of harden and create misinformation.
Shawn Walker: Yeah, or kind of like jello. There's a bit of wiggle room in there.
Michael Simeone: Yeah, I tried to think of some more metaphors we might be able to use to think about misinfo, but it almost feels like one of the things that makes it so good, or one of the reasons why mixing truth with fiction is such an effective technique, is that most of the time, when someone shares it or believes it, they don't validate every fact. They tend to gravitate towards the facts that they already think are true.
Shawn Walker: Yes. So going back to the Tulsa rally TikTok example, that's a really appealing story, especially for those that are anti-Trump: to think that a group of teens worked together to become activists and thwart, you know, a huge political campaign. That's a really appealing story that has legs, and so people hear it and they want to share it. But that's not the entire story of what happened at the rally, right? There's a combination of coronavirus plus TikTok, and also TikTok is not just full of teens; there are actually a lot of adults and senior citizens on TikTok too. So there are multiple layers of misinformation in the story that you just told.
Michael Simeone: Okay, but thinking back to intent, because this intent thing is a really interesting side of it. So we tracked the one news outlet, NYTimesPost.com, which has its domain registered in Turkey; all the ownership information beyond that is just obscured. We don't know who actually owns and runs that site. It's obviously supposed to look like a New York Times publication of some kind, but it's not. You know, we talked about other colleagues who posted stories from NYTimesPost.com. So the people who are circulating these stories, which are a mix of truth and fiction, that is misinformation. What about the people curating the content on NYTimesPost.com? Is that misinformation or disinformation?
Shawn Walker: So we have to make some assumptions to determine which camp we fall into here. Again, disinformation is false or misleading information that is intentionally created or intentionally spread for a particular purpose. That could be financial gain, that could be political gain, some sort of objective. So using that definition, I would say the folks who are creating and curating content for the NY Times Post, which, as a side note, even has a logo stolen from the New York Times...
Michael Simeone: It is such a good fake newspaper. I was shocked, right, because the letterhead looks the same as the New York Times's. It even sounds kind of like a quick-hits news subsidiary of the New York Times. They did a good job.
Shawn Walker: And the name, too. You kind of look at the name, you look at the domain, and it looks so similar that unless it's something you're looking for, as we talked about in our first episode, it looks...
Michael Simeone: Just like a news website. Unfortunately, you can buy a news website template for very little money and achieve what appears at first glance to be a comparable level of polish.
Shawn Walker: So if we put those pieces together, the close resemblance to the New York Times, the types of stories they're circulating, there's a lot of effort put into the design of this misinformation. That turns it into disinformation: they're intentionally creating, intentionally curating, and intentionally spreading this information. So that means it's disinformation.
Michael Simeone: Okay. Okay, so I've got disinformation as being purposeful. So if the people who are aggregating this content were just, say, looking at Google search trends and trying to collect news stories and recirculate them, it's possible that they weren't intentionally deceiving anybody. They were just being kind of opportunistic in their news selection, and it just so happened that this story was a misinformation item. The reason I'm talking about it in this way is to try to understand where disinformation ends and misinformation begins, and vice versa.
Shawn Walker: So it's the intent of the actor. We can actually switch back and forth with this example of this website, the NY Times...
Michael Simeone: NY Times post.
Shawn Walker: Yes, NY Times post.
Michael Simeone: They're getting more popular, unfortunately.
Shawn Walker: So this example of the NY Times Post is intentional. And then say, Michael, you see an article from this site. It looks really close to the actual New York Times; the name just feels familiar, even though it's slightly off. But you're in a rush, it's right before you're going out to get coffee. So this was maybe six months ago now.
Michael Simeone: Right? I wouldn't be going out to get coffee right now.
Shawn Walker: Yes, at the beginning of July.
Michael Simeone: So sometimes I just make it outside on my camping stove, just to feel like I'm going somewhere.
Shawn Walker: But not this weekend, as it's 117.
Michael Simeone: Not this weekend. But you know, sometimes you really got to feel like you're going places.
Shawn Walker: This is disinformation. But then say, you know, you're seeing this site, you think the story about these TikTokers might be legitimate, so you share that information. Now we've switched from disinformation to misinformation, because your intent was not to deceive; your intent was to share this information because you think it might be true.
Michael Simeone: Okay. Okay. So I think, you know, from my own perspective, my own habit of thinking is to see it all as misinformation, because it feels really difficult to tell. But there's also some use in distinguishing between mis- and disinformation, right?
Shawn Walker: Yes. And we also normally talk about the distinction between mis- and disinformation in terms of information operations: they're intending to use misinformation as a tactic to spread chaos or to cause problems. We often talk about disinformation as an information operation because it's an intentionally designed, you know, process that folks are going through to cause chaos. In all the other cases, we often just use the label misinformation.
Michael Simeone: Great. Okay, so let's think about a disinformation operation. A disinformation operation could be something like: we decide we're going to start a newspaper that has complete clickbait news, that is completely fake, but all we're looking for is the ad revenue, because we have outrageous titles and outrageous content. That is a disinformation operation.
Shawn Walker: Right. So say we just take pictures of celebrities and accuse them of things that are outrageous, and then, you know, folks want to come there to see what celebrities are doing. That's a disinformation campaign, because we're intentionally putting out incorrect information in order to profit.
Michael Simeone: Okay, so what's an example of another disinformation campaign? I tried to choose not the most innocuous one, but one that had the least kind of strategic interests in mind.
Shawn Walker: Another example of a disinformation campaign would be foreign interference in an election. So a country might pay a journalist to write misinformed stories, incorrect stories, about certain political actors, and then that spreads during an election to cause problems for certain political actors that the other country is not a fan of. That's an intentional campaign to cause chaos for one or another of the political candidates.
Michael Simeone: So is this the intelligence service of a foreign government, or some other actors? What are we talking about here?
Shawn Walker: It could be the intelligence service of another government. Of course, it could also be corporate actors. Companies have a lot of interest in who's in power, due to regulation. So they can fund certain campaigns to bring out information, some of which may be true, but then they put enough of a twist on it to make it, you know, deadly for someone who's running for office.
Michael Simeone: Okay. Okay. And so, trying to think of other disinformation campaigns. Well, you know, another example could be much more innocuous than this. We talked about, like, a predatory news organization, but just kind of defaming somebody for personal gain would be another reason, right? That's different from defaming a celebrity, that's different from duping people into clicking on your news story, but really trying to attack someone's reputation or a business's reputation. A misinformation campaign, or sorry, a disinformation campaign.
Shawn Walker: So we could think of this example: a dubious competitor might release information about a competitor's product saying that there are these problems, or they might go on different review sites and post incorrect reviews saying the product doesn't work, or it's inferior or broken, or they refuse to fix it, all of that incorrect. That's a disinformation campaign to take a competitor down so that they have a competitive advantage.
Michael Simeone: Or boost somebody up. I feel like reading Amazon reviews might sometimes be helpfully construed as a series of competing disinformation campaigns about products, where people can actually buy product reviews and really operationalize a positive review for your product. I know Amazon is constantly fighting this kind of behavior, but saying something really nice about yourself or your product could also be considered a disinformation campaign. I feel like it's kind of a gray area if maybe the product is actually okay; it's not necessarily deception, but masquerading as an independent person who evaluated this product of their own volition, that is disinforming. And so it seems like Amazon reviews, through this perspective, could be considered a battlefield for disinformation.
Shawn Walker: I imagine a lot of folks have encountered ads for free products that you order from Amazon. The company checks your order to make sure that you've actually placed a legitimate order, and then, when you give them a five-star review for that product, they offer you a refund. So you have a free product to test, but only in exchange for a five-star review. In many ways we'd consider that a disinformation campaign, because they're paying folks to buy a product and then submit reviews, some of which may be true, but some of which may not be, because there's a financial incentive in those reviews.
Michael Simeone: Okay, okay. So we're almost automating the people to churn out the good review content for us. It sounds like any rock we turn over online, we might see some disinformation. But it also sounds like misinformation and disinformation here are connected, and that disinformation can precipitate misinformation.
Shawn Walker: Yes. And it could also be the reverse. We could have someone spreading some misinformation online, and then someone else could pick that up and say, oh, I know this is not true, but if I continue to spread this information and I hype this line, then I can harm someone. Politicians might do this often. I think in the mis- and disinformation conversation, this distinction can be really helpful in certain instances. But I think the impact is even more important than whether something is misinformation or disinformation.
Michael Simeone: Yeah, I agree. I feel like sometimes it can be helpful to put your disinformation glasses on, because you're trying to understand motivation, tactics, and strategies, or even sources and mechanisms. But the misinformation lens, or, you know, it seems like the direction you're pushing the conversation right now, is that maybe being agnostic to intent can be very helpful sometimes.
Shawn Walker: Yes. Sometimes we get caught up in, is this mis-, is this disinformation? And other times I think it's more helpful to look at what actually happened. What did this cause? What harm has emerged, whether or not the harm was intended?
Michael Simeone: Yeah. Yeah. Okay, great. So, thinking through misinformation, disinformation, and disinformation operations, let's talk a little bit about what people can do with accounts. In the early days of internet studies, I think we can both recall that one of the interesting fascinations was people's ability to become, or to represent themselves as, something else, right? There was this, back in the early internet, with things like Second Life or other kinds of virtual environments. Everyone's gonna have to look up Second Life. I don't know, maybe some people have heard of Second Life before. Shawn, did you ever get into Second Life? This would have been in the early to mid-2000s.
Shawn Walker: The virtual world, one of the most popular virtual worlds. And Second Life is still running?
Michael Simeone: Yeah, I don't know. It's not as popular. But the reason I'm talking about Second Life right now, or even some of the really early days of the massive multiplayer online games, is this idea that you could be someone else online, and there was this kind of liberating potential in that. And this is especially important because it means that you get to leave behind some aspects of your meatspace identity, to use the cyberpunk term, and explore the potential of a virtual identity. We would be remiss in talking about this utopian imagination of the early internet without mentioning that it really never got to that point, right?
Shawn Walker: Yeah, there was an early belief that if we just kind of sprinkled the internet over various places in society, so in political discussions and democracy and commerce and medical care, then magically all of the problems and all of the ills in the world would be fixed. Basically, take one internet pill in the morning, and the next day all of our harms would be gone.
Michael Simeone: Yeah. Or people who, you know, would find it liberating to be anyone they wanted online, because of the various kinds of intersectional biases and persecution that they face in their lives. It turns out all of those kinds of things existed on the internet, and were brought to bear in kind of really brutal ways on people's lives.
Shawn Walker: Who would have thought that we bring our biases and our issues online, that we don't leave those at the computer and become a completely different person?
Michael Simeone: Yeah, or mutate them and kind of supercharge them into doing some of these other kinds of behaviors that we're talking about today.
Shawn Walker: And oftentimes we talk about that as the internet being mediated communication, meaning that the computer or cell phone or some other device sits between us and the other person that we're having discussions with, or the other person we're talking about. And so behaviors that we would never exhibit in person, things that we would never say to someone's face, like hate speech, because there's a screen in front of us that's protecting us from seeing the other person, we can actually do some pretty bad things.
Michael Simeone: Yeah, it's like the malevolent version of "there are some conversations that people are more likely to have over text," only this is far more malicious and far more harmful.
Shawn Walker: That reminds me of one of the more famous early internet cartoons, from The New Yorker. Picture it: there's a dog sitting at a computer, and the caption under it says, on the internet, nobody knows you're a dog.
Michael Simeone: But now, with social media platforms, being anyone you want to be has a very different kind of context. A point of exploitation of these platforms is that the account system is not so robust that it validates your identity. You can still be anyone you want to be online, to some extent, until a troll doxxes you. Or you can represent a virtual account, an account that is not directly controlled by you, as a person, too, right? So that flexibility in identification has now tipped in a non-utopian direction.
Shawn Walker: Well, but it also depends upon the platform that you're using to express yourself. Some platforms have more stringent identity requirements than other platforms. But I want to go back; you mentioned a couple of terms. You said a troll, and doxxing. So can you talk a little bit about what a troll would be?
Michael Simeone: Yeah, so a troll account is generally one that is interested in kind of disingenuous conversation, where a troll is more interested in the effect of the conversation, or of the comments, rather than in an actual dialogue. Of all the different terms we're talking about today, this is the one that people probably have some functional understanding of, right? Trolling is used in regular speech, like, "Oh, I'm just trolling," meaning "I was just kind of joking and being sarcastic." Which is an interesting use of the term, considering that genuine trolling means you want to really dismantle an entire conversation, or bring something to a halt, just through completely outrageous comments, with no investment in anybody there or in any outcome of the conversation at all.
Shawn Walker: I think some of the favorite places of trolls are online newspaper comments, or Reddit, or other places where, oftentimes, trolls at first seem like they want to engage in a genuine discussion, and then you realize they're just there to sort of stir things up. It's almost like their hobby is to create discord in online discussion.
Michael Simeone: Yeah, exactly. And for the troll, there is a sadism kind of built into that mode of conversation. You want to outrage people, you want to upset them; it brings you happiness to make all those things happen. So, in general, that's trolling.
Shawn Walker: So I guess, for an example: I'm involved in a lot of dog communities, since I have some dogs and I volunteer at the shelter. And in the dog training communities, there are a couple of trolls who come up anytime anyone discusses any type of training. They're always there to talk about how that training is abusive and incorrect and not helpful, no matter what the training type is. So now, when that person pops up, you'll see other people quickly, within a few minutes, say, oh, ignore that person, they're always here trolling.
Michael Simeone: Yeah.
Shawn Walker: So in that way, it's almost like trolls create sort of a deliberate target and then welcome everyone into this tar pit, so we all just get mired down in whatever their technique and tactic is.
Michael Simeone: Concern, outrage. Exactly.
Shawn Walker: And then we're just kind of stuck there until the conversation just dies like the dinosaurs.
Michael Simeone: Of all of our hilariously awkward metaphors that we use to talk about things, I think that one is definitely first class, as far as being the most effective. But doxxing, doxxing was the other thing I mentioned. Doxxing is basically disclosing someone's personal information and details online. So we talked about that utopian splendor of being able to be anyone you want, and having this protection of anonymity, which is exploited by trolls, because you can't really find out who they are. They get to hide behind that virtual identity and harm other people or harm other conversations. But doxxing is kind of overkill in the other direction: you decide that you're going to find somebody, and you're going to basically disclose all their personal information, address, contacts, so that people can harass them in real life, rather than just on whatever platform they're working on.
Shawn Walker: And sort of the original example of doxxing, even offline, would be in the early anti-abortion protests. Anti-abortion protesters would often release the personal information of abortion doctors so that folks could go to their homes and harass them. That technique has now been brought online. We see this on all sides: you might see videos of someone harassing someone about a mask, and then 20 minutes later we have their phone number, their address, their workplace, and then 30 minutes after that they've lost their job.
Michael Simeone: Yep. Yeah, or swatting, which is a subset, I feel, or a cousin of doxxing: calling the SWAT team on someone's residence as a form of harassment.
Shawn Walker: That's a particular favorite in gaming communities. Sometimes that will be done as a joke: say someone takes out your team in the game, so then you might call the SWAT team on them. And there are multiple cases of people dying as a result of this, or of someone having incorrect address information and sending the SWAT team to the house across the street, for example, instead of the actual gamer's house.
Michael Simeone: Yeah, and the reason we're talking about doxxing and swatting in a larger conversation about key concepts in misinformation is to indicate that there are points that are pretty well known, like doxxing and swatting, and then other points that aren't as well known, where the virtual space and the non-virtual space interact in very important ways, right? It's easy to pay attention to doxxing and swatting because it's normally a pretty dramatic event. But misinformation existing online affects the way that people think and act in real life all the time. So even a troll account can affect quote-unquote real life, even though the troll account only exists in virtual space. An online troll can do things like dox other people and coerce them and be threatening. A troll can spread disinformation, can spread misinformation. So trolling is kind of like a modality, right, of how you conduct yourself online. And trolls can be fonts of mis- and disinformation and other kinds of coercive things that, in general, make it harder to reason through an information environment. They're kind of antithetical to the idea of the internet as a place where you get information, get informed, and connect with people.
Shawn Walker: And doxxing is a coercive method that can often be used to prevent someone from engaging in a conversation around whether something is mis- or disinformation.
Michael Simeone: It totally can silence you. Doxxing and swatting could just completely silence you.
Shawn Walker: And once that's happened, you have a chilling effect: people no longer want to engage with that community. Even if they're seeing something that might be incorrect, they're now afraid to engage with that community because of the threat to their physical and mental person.
Michael Simeone: I think trolling is one way to manipulate an online account to create either misinformation or disinformation, or to create an environment where that kind of thing can flourish. But other kinds of account manipulations exist as well. Whereas with trolling you can hide behind the anonymity of your account to feel more comfortable trolling, with something like a sock puppet, how would you describe a sock puppet? What's the best way to understand a sock puppet account? This is a similar manipulation of the affordances of an account on an online platform.
Shawn Walker: So a sock puppet would be an account that you create that's separate from your primary online identity. Reddit is a great example of this, where sock puppets are in pervasive use. You have your main Reddit account, and then you might create a temporary throwaway handle, a sock puppet account that's not connected at all to your main handle. That allows you to do things anonymously, away from your primary handle.
Michael Simeone: Okay, and so what's the advantage of using something like that?
Shawn Walker: Well, you're not tainting your primary account. So, for example, you could talk about issues that you might be uncomfortable discussing. You could talk about political issues, relationship issues, other things, or you could tell a really funny, embarrassing story that happened to you in a subreddit, but you don't want that connected to your primary account. So you can go off in this new private account that you've created with a different identity, and you can share information that you don't want anyone to connect to your primary account identity.
Michael Simeone: So it's another way of hiding but kind of literally one degree removed.
Shawn Walker: Yes, and it has both positive and negative connotations, right? A sock puppet, you could then use to troll. That means you could be an upstanding member of the community using your primary account, and then, when you want to totally trash someone, you log out, log into your sock puppet account that's disconnected from your upstanding-online-citizen account, and you just cause chaos. Or you could use your sock puppet account to discuss something embarrassing or sensitive, so that it doesn't get connected to your primary account.
Michael Simeone: So is this like when Mitt Romney, do you remember the name of Mitt Romney's Twitter? He had a second Twitter account that wasn't connected to him at all, but he just used it to speak his mind and follow other people for whom he would normally be shunned in Republican politics. I think his name was like Pierre. Did I just invent some kind of hilarious Twitter handle for Mitt Romney just now?
Shawn Walker: No, he did have his Pierre. Yeah, it was Pierre. His Twitter account was...
Michael Simeone: Pierre Delecto! That's what it is.
Shawn Walker: Yes. So it was Pierre Delecto. That way, he could tweet when he was really...
Michael Simeone: So is that a sock? That's a sock puppet. Right?
Shawn Walker: That's an example of a sock puppet, but an unmasked sock puppet. The purpose of the sock puppet is exactly so you can go do these things without anyone knowing who you really are. It's like a, you know, nom de plume. But oftentimes the danger is that it's easy to connect the two together.
Michael Simeone: It's not the same level of protection as other forms of exploiting the account system, but it maintains a little bit more human control, in that, with a sock puppet account, typically there's a person posting to it. An actual human being has to sit down at a keyboard and type out the stuff for it to appear in a sock puppet account.
Shawn Walker: Yeah, so you would log out and log back in; it's just a disconnected account. Or you could use some automated control, and that would be what we'd call a bot.
Michael Simeone: Yeah, because I think that's where I draw the line between a bot and a sock puppet: once a human being doesn't have to type out every message that you see, it's a bot. With a sock puppet, a human being had to type that thing out; there's a one-to-one relationship between the user and the message that you see.
Shawn Walker: So often sock puppets and trolls are these human-controlled accounts, where there's actually a human driving the computer and typing in the content.
Michael Simeone: Okay, so it's easier to troll from a sock puppet account is what you're saying?
Shawn Walker: Well, yes, because oftentimes when you're trolling, you get banned, or your account is deleted or removed or kicked out, basically. So you want an account that you can throw away, an account you don't care about. Then you can create another one and go back. If the account disappears, no harm; I just create another one in the system so that I can keep doing my trolling.
Michael Simeone: So the Pierre Delecto version is really the most innocuous version of the sock puppet account that we can think of.
Shawn Walker: Yes.
Michael Simeone: Or like, you know, when parents start an Instagram account and follow their children. That's sock puppeting, right?
Shawn Walker: I mean, it could be
Michael Simeone: You've heard of this trend, right? Where parents will invent a fake Instagram account of someone their child's age, like when they're a teenager or something, and then follow their teenager on Instagram. The teenager doesn't realize they're being surveilled by their parents, but because of the sock puppet account, the parent is able to keep tabs on them.
Shawn Walker: I've heard of that. But often the joke is on the parents, because the public Instagram account is not the actual content. You know, you have your Insta, and then you have your Finsta, and that's a private account that the parent has no clue exists. So in the end, even though parents think they're being kind of cool by creating a sock puppet account to follow their children, that's not their real persona. It's actually the private account that the parents have no clue is around; that's the content that parents really want to see, and they'll never see it.
Michael Simeone: Yeah, the internet could be a miraculous technology for sharing information, or just a special kind of hell for parents. Probably both.
Shawn Walker: Or the reverse, it could be a special hell for teens, because now, you know, parents are creating Facebook accounts for their young children and continually posting pictures, basically broadcasting their children's adolescence. And when the kids grow up, they see, oh, I have this account where the whole world has been following me and seeing pictures of me. What do I do with this now?
Michael Simeone: So, bringing it back to this conversation: does this mean that kids can claim that their parents created a sock puppet account that was representing them, or is this far beyond that? Do we need a new term for when parents invent an account for their children that the children have no control over?
Shawn Walker: I think we need somewhat of a new term, but in a way the parent account is a form of sock puppet, because it's not the actual child operating the account, it's the parents. I mean, I have a sock puppet account for my dog, because, you know, he's deaf and blind, he can't really type and post pictures on Instagram.
Michael Simeone: He's really bad at computers.
Shawn Walker: I say that he can, you know; people think it's funny. So technically, my dog has a sock puppet account.
Michael Simeone: Okay, well, because it's in the voice of your dog, which is different. Okay. Okay. So we need a different name for that, like a helicopter account or something. Well, moving on. Let's think about what happens when you don't have a one-to-one relationship with that account, when it's not a human being posting this stuff, or when we start to automate some of the content that gets put out by some of these accounts. This is when we start talking about bots.
Shawn Walker: So a bot account is basically an account that is automated. There's not a human directly behind it; I mean, a human wrote the code to control the bot, but it's not an actual human typing in the tweet and pressing the button or choosing to follow users. The account has some automated activity. And in a lot of the public discussion of bots, in the news media and other places, they sort of accuse any problematic account of being a bot, and it's used as a synonym for a disingenuous account. But there are actually a lot of bots that are really useful and really helpful, and that we love every day.
Michael Simeone: Bots have a bad reputation, yeah. In general, well, we can get into the nitty-gritty about how people tend to use bots for automated content posting. But let's talk about the different kinds of bots, right? Some kinds of bots are meant to spew content that gets generated from a source. And what I mean by that is, Shawn, you've made Twitter bots before, right?
Shawn Walker: Yes
Michael Simeone: So the idea is that you have this kind of source of training language, and this is a bot that's going to post automated content, I should say. One way of going about this, if you want it to post similar kinds of text a lot, is to have some training text and train the bot on it. Then it will spew out recombined language, based on that training text, that approximates the training text in the first place, right?
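That recombining step is easy to picture in code. Here is a minimal sketch in Python, a simple Markov chain that learns which words follow which and then stitches them back together; the training string is an invented stand-in for the corpus a real bot of this kind would be trained on:

```python
import random
from collections import defaultdict

def build_markov_model(text, order=2):
    """Map each run of `order` words to the words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, order=2, length=25):
    """Walk the chain from a random start, recombining the training language."""
    state = random.choice(list(model.keys()))
    output = list(state)
    for _ in range(length):
        followers = model.get(tuple(output[-order:]))
        if not followers:
            break
        output.append(random.choice(followers))
    return " ".join(output)

# Invented stand-in corpus; a real bot would be trained on a large pile
# of posts from the conversation it wants to blend into.
training_text = (
    "the rally was empty because teens on TikTok reserved all the tickets "
    "and the rally was a flop because teens on TikTok pranked the campaign"
)
print(generate(build_markov_model(training_text)))
```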
Shawn Walker: So, a couple of, I think, really interesting examples. There's a pollution monitor at the US Embassy in Beijing, and there's a Twitter account that tweets out the daily, well, it's no longer active, but when it was active, it would tweet out the daily pollution levels in Beijing. It's an automated process, and these pollution levels would differ from the official Chinese government pollution levels for Beijing every day.
Michael Simeone: Okay, so the automated task, rather than generating some text from your training text to create a partisan environment, was instead to report data periodically from a source.
Shawn Walker: Yes. And you could also argue, in a way, that the pollution bot at the US Embassy in China was trolling the Chinese government a little bit, so we can combine all of these together: there's a bot circulating information that's maybe making China uncomfortable. Other examples are mainstream media. If you follow the New York Times, or Fox News, or NPR, those are bots that automatically tweet out the stories as they're posted to, you know, the Fox News or the NPR website.
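The mechanics behind both the pollution account and those headline feeds are the same: poll a source on a schedule and post whatever is new. A minimal sketch, where the feed URL and the posting helper are hypothetical stand-ins for a real sensor feed and an authorized platform API:

```python
import time
import requests

# Hypothetical sensor feed; a real reporting bot polls whatever public
# source it mirrors (an air quality monitor, an RSS feed of headlines).
FEED_URL = "https://example.org/air-quality/latest.json"

def post_update(text):
    """Stand-in for an authorized platform API call that publishes a post."""
    print(f"POSTED: {text}")

def run_reporter(poll_seconds=3600):
    last_seen = None
    while True:
        reading = requests.get(FEED_URL, timeout=10).json()
        if reading.get("timestamp") != last_seen:  # post only new readings
            post_update(f"{reading['timestamp']}: PM2.5 is {reading['pm25']}")
            last_seen = reading["timestamp"]
        time.sleep(poll_seconds)
```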
Michael Simeone: Yeah, yeah. And there's even a kind of mix of those things, where some bots will actually monitor communications online and then reach out to people whose communications fit a particular profile. A more benevolent version of this was developed at ASU a few years ago, where they had trained the bot to recognize the kinds of tweets that people send out when they're at risk for suicide or self-harm. That bot would intervene by tweeting resources to them, kind of treatment and mitigation. That's an example of a bot that might detect certain kinds of tweets and then respond with something that, you know, we think has good intentions, and it's very different from other kinds of bots that might, say, try to detect certain kinds of political speech and then boost that political speech.
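A toy sketch of that monitor-and-respond pattern; a system like the one described would use a trained classifier, so the keyword score, the terms, and the helper names here are all invented for illustration:

```python
# Toy monitor-and-respond loop. A real system would use a trained
# classifier; this keyword score and these helpers are invented stand-ins.
RISK_TERMS = {"hopeless", "can't go on", "no way out"}
RESOURCE_MESSAGE = "You're not alone. Resources: https://example.org/help"

def matches_profile(text, threshold=1):
    """Crude stand-in for a classifier: count risk phrases in the post."""
    return sum(term in text.lower() for term in RISK_TERMS) >= threshold

def monitor(stream, reply):
    """`stream` yields (author, text) pairs; `reply` messages an author."""
    for author, text in stream:
        if matches_profile(text):
            reply(author, RESOURCE_MESSAGE)

posts = [("@a", "great game last night"), ("@b", "feeling hopeless lately")]
monitor(iter(posts), reply=lambda who, msg: print(f"-> {who}: {msg}"))
```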
Shawn Walker: Or other bots that monitor Twitter for discussions of earthquakes. And they then connect that with monitoring stations throughout the world.
Michael Simeone: Yeah, so bots encompass a wide range of non-human information actors. In the context of misinformation, I think a lot of times we're talking about a couple of different kinds of bots, right? There's the troll bot account, a bot that's trained on a certain kind of speech, that will troll people or create a pitched and very partisan environment. What other kinds of bots do we encounter, specifically in misinformation conversations?
Shawn Walker: There are bots that will specifically, you know, retweet certain political accounts. Or, for example, there's a bot that takes President Trump's tweets and then retweets them as a press release.
Michael Simeone: Yes. So there are ways that an individual bot can make interventions, like spewing out a bunch of material or boosting really harmful links, right? This is where we can start working bots together with disinformation: if I've got a story that I wrote up, and it's specifically meant to deceive or victimize certain people, then my Twitter bot is just going to boost that story over and over and over again, so it has a higher profile online.
Shawn Walker: Or I could pay a company, for example, to boost certain types of content or boost certain followers. When these bots work in concert, when you have multiple accounts coordinating their actions, we call that a botnet, because the bots are connected. There are companies you can pay that have hosts of Twitter accounts, and you can say, I'd like you to boost this information. So they might retweet it, they might use certain hashtags, they might mention certain accounts. And what they're often trying to do is connect with trending topics on Twitter, and then that moves to other platforms, because once you're in the trending topics, that's actually a really powerful marketing tool, or political marketing tool.
Michael Simeone: Yeah, so a single bot is only so powerful. For instance, if I tweet so many links in an hour, Twitter's just gonna shut me down for a little while, right, because that looks suspicious. I could mix it up a little bit: I could tweet links for a while and then tweet text for a while. I could get even more sophisticated and only tweet within the hours that a normal person might tweet in that timezone. But in the end, a lone bot is only so effective. If you really want to move the needle with your bot, you need a botnet, which is, as you were saying, a whole bunch of bots that are connected together. The interesting thing to me about the bot scene is that you need a bunch of accounts, right, to set up a botnet, and sometimes you can buy accounts,
Shawn Walker: You can buy access to those accounts, because the other problem is you can't just create them. So say we want to boost this podcast, for example, because we want more than just our colleagues in our departments to listen to us.
Michael Simeone: Right, sometimes not, but yeah, generally, yeah.
Shawn Walker: We can't just create 50 Twitter accounts and then have those 50 Twitter accounts follow us and start tweeting about our podcast, because that looks really suspicious. And that's a pretty low bar for a platform like Twitter, for example, to just shut those accounts off as disingenuous activity.
Michael Simeone: They're all registered from the same IP address,
Shawn Walker: Right, they're on a US campus; you know, we'd get shut down by the university and shut down by Twitter.
Michael Simeone: So what do I do instead?
Shawn Walker: Well, you go to a company that has sort of sleeper accounts, so to speak, that they created a long time ago and that have a profile of acting like a human. Then, all of a sudden, they start tweeting about, maybe, a political campaign, or in our case, they might start tweeting about our podcast and how awesome it is. That's harder to detect, because those accounts have a history of not being problematic.
Michael Simeone: So you say go to a company. But when you go to a company, like, I'm going to go to Starbucks, that kind of company? Or another kind of company?
Shawn Walker: I would say that's another kind of company.
Michael Simeone: Okay. Like, please tell the world... a "company" with a Z.
Shawn Walker: Something like that. Chances are they're offshore or something, and you're sending them money via Bitcoin, not writing them a check or giving them your credit card.
Michael Simeone: Right, because the other move here is to mask your IP address and bulk-register a bunch of different accounts. I think certain bot herder applications will allow you to mass-register accounts. But that brings us to the idea of a bot herder, right, which is a software application that allows you to manage a whole bunch of bots at once. A bot herder can help you fulfill some of those core functionalities of a botnet. So Shawn, you had mentioned the idea of boosting a particular person's signal. Through the bot herder, you could coordinate your bots to do that kind of thing?
Shawn Walker: You're calling this a bot herder, but you could also think of it as the conductor of an orchestra. The conductor is telling, you know, the violins or the oboe or the drums, I'm not in music, as you can tell, but the conductor is controlling when folks come in and when folks leave. So we're strategically having these accounts post on social media at various moments in time, to make it look like it's not coordinated when it is coordinated.
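A toy sketch of that conductor idea: one script staggering slightly varied posts across several accounts so the activity lands spread out rather than all at once. The handles, messages, and time window are all invented for illustration:

```python
import heapq
import random

ACCOUNTS = ["@account_a", "@account_b", "@account_c", "@account_d"]  # invented
VARIANTS = [
    "Have you seen this? https://example.org/story",
    "Everyone is talking about this: https://example.org/story",
    "Can't believe this story: https://example.org/story",
]

def schedule(window_minutes=120):
    """Give each account a random offset inside the window, so the posts
    land staggered and slightly varied instead of all at once."""
    queue = []
    for handle in ACCOUNTS:
        when = random.uniform(0, window_minutes)
        heapq.heappush(queue, (when, handle, random.choice(VARIANTS)))
    while queue:
        when, handle, text = heapq.heappop(queue)
        print(f"t+{when:5.1f}min {handle}: {text}")  # stand-in for a post call

schedule()
```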
Michael Simeone: Yeah, yeah. And, you know, flooding a particular hashtag is something you can do, sending at or retweeting a particular person is something you can do, or spreading a particular link or set of messages is something you can do, so it's not like a botnet is always up to the same thing. If anybody has ever tried to look at bot activity online, Bot Sentinel is a great example: Bot Sentinel is an online service that allows you to track bot activity, and one of its front pages shows which hashtags have the most bot activity on them. What we're talking about now is kind of behind the curtain there: somebody is controlling a lot. It's not like a whole bunch of individual people have one bot apiece and are doing this kind of stuff. Someone is systematically controlling many bots to operate on a single hashtag, because they're trying to influence a conversation.
Shawn Walker: And the art of determining that is very difficult. Some bot detection tools use better metrics than others. Many bot detection tools use very basic metrics: they try to figure out how many times an hour someone can really post, how frequently they can post, what time of day someone should be awake, what time of day they should be asleep, when they should be at work, what types of activity we expect from a normal account. But a lot of accounts actually fall outside of that. We have a lot of accounts where people schedule posts; you might write a tweet and then schedule it to go out on a Monday morning instead. Or we have super-activists. A colleague of ours that we've worked with, Marco Bastos at University College Dublin, wrote a paper about these super-activists who participate in hundreds of social movements every year, and they tweet more than the average human. A lot of these tools would classify them as bots, but they're not. And then there are also cases where there are bots and people: there might be some automated activity on an account, and there might also be a human on the account at the same time.
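A minimal sketch of those basic metrics, scoring an account on posting rate and overnight activity; the thresholds are invented for illustration, and, as Shawn notes, schedulers and super-activists routinely defeat rules this simple:

```python
from datetime import datetime

def bot_score(timestamps):
    """Two crude heuristics: sustained posting rate and steady overnight
    activity. Both thresholds are invented, and real human accounts
    (schedulers, super-activists) can easily trip or evade them."""
    if len(timestamps) < 2:
        return 0.0
    timestamps = sorted(timestamps)
    hours = (timestamps[-1] - timestamps[0]).total_seconds() / 3600 or 1.0
    rate = len(timestamps) / hours
    overnight = sum(1 for t in timestamps if 2 <= t.hour < 6) / len(timestamps)
    return (0.5 if rate > 10 else 0.0) + (0.5 if overnight > 0.1 else 0.0)

# An account posting every five minutes, around the clock:
posts = [datetime(2020, 7, 6, h, m) for h in range(24) for m in range(0, 60, 5)]
print(bot_score(posts))  # 1.0: high rate plus routine 2-6am activity
```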
Michael Simeone: Yeah. And this idea of humans behaving like bots, we saw this with the Plandemic film, where the retweet rate or the share rate, the time interval between receiving it and sharing it, was so short that it looked a lot like bot activity. But it wasn't. They were people, but it was bot-like behavior coming out of people. So with something like a botnet, there are different ways to discover bots and detect them, and you can get web browser plugins that can help. But no bot detector software is going to be able to, A, tell you definitively whether someone is a bot or not. And, B, sometimes that's completely immaterial, because human beings are perfectly capable of behaving like bots in a lot of different contexts, right? They can reflexively tweet out material without reviewing anything they see, and they can tweet on a particular hashtag over and over and over again, because they're just on one particular kind of idea. So there are some ways to tell the difference between bots and non-bots, but it's very difficult to know for sure. And it's also very difficult, especially if all you're doing is scrolling through Twitter: you might suspect someone's a bot, but it's really hard to tell what other bots they're connected to, or who is ultimately controlling those bots.
Shawn Walker: And the whole purpose of a botnet is actually to get the content outside of the botnet. You use the botnet to move content around so it looks like it's active, like multiple sources are interested in this, and then that moves from the botnet to the wider population.
Michael Simeone: Right, it's exploiting the way that these platforms, like, rank different topics or trends.
Shawn Walker: Right, because if the information stays within the botnet and it doesn't leave, then it's ineffective. We've wasted our money if it doesn't leave the botnet.
Michael Simeone: Yeah, we're trying to hype up some ideas or a person, you know. So if you're looking at an edge list, right, if you download data from Twitter and you're trying to look at all the different tweets and retweets in a certain time period, one kind of bot is easy to spot, because they at-mention so many people. They're constantly trying to signal to people and get them to click on a link or spread something. This hailing people constantly is another thing that bots can do, that we didn't quite cover in some of our earlier conversations on bots. And that's a way to do exactly what you talked about: tie other people to this conversation, get them to spread it to their circles, and get their audiences, the people who trust them, to pick it up instead.
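A small sketch of what spotting that pattern in an edge list might look like: count distinct @-mention targets per account and flag the accounts hailing an unusually large number of people. The edge data and cutoff are invented, and this is a crude signal, not a definitive test:

```python
from collections import defaultdict

# Invented edge list of (source, mentioned-account) pairs, the kind of
# thing you might extract from downloaded tweet data.
edges = [
    ("@boosterbot", "@user1"), ("@boosterbot", "@user2"),
    ("@boosterbot", "@user3"), ("@boosterbot", "@user4"),
    ("@alice", "@bob"), ("@bob", "@alice"),
]

def flag_hailers(edges, min_targets=3):
    """Flag accounts that @-mention many distinct users: the crude
    'hailing everyone' signal described above."""
    targets = defaultdict(set)
    for source, mentioned in edges:
        targets[source].add(mentioned)
    return {acct for acct, seen in targets.items() if len(seen) >= min_targets}

print(flag_hailers(edges))  # {'@boosterbot'}
```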
Shawn Walker: So think about the types of currency on social media platforms: these can be hashtags, or the volume of tweets mentioning a certain user. These bot networks are trying to plug into those types of currency, suck folks into their conversation, and then have them take those conversations outside of the botnet.
Michael Simeone: Yeah, exactly. So if you have somebody who is really trusted, and not like a celebrity, right, people who don't rise to the celebrity level but have a pretty steady following on social media, that is a really important target for a bot. Obviously, if a celebrity tweets out their material, that's awesome too. But there are people who are medium-sized fish on social media who are great targets for bots. The whole object of tweeting at those people and circulating links to them is to get them to circulate that content somewhere else. Now, you may also see the opposite, right, where the bot will try to boost that person's signal or make them seem more important than they actually are. Take Dr. Judy Mikovits, right, when she's talking about Plandemic: if you actually scope her Twitter account and look at how many suspected bot accounts are retweeting her activity, there's a far greater proportion than most people have. Most people don't have bots tweeting their material or engaging them; Dr. Judy Mikovits has an awful lot. And again, the idea is to boost her signal. Why would someone want to boost her signal? That goes back to this concept of disinformation: the motives behind the person operating the botnet.
Shawn Walker: And also the techniques you're using to boost. What can we do to add legitimacy to a claim? We go back to a lot of our information literacy training: are multiple people discussing this? Are there multiple seemingly legitimate news sites? Are there multiple links? That all combines; bots create an environment that's fertile for making something look or seem legitimate. And once that job has been done, everybody carries on that conversation outside of the bot network, and the bot network's work is done.
Michael Simeone: Yeah, so we've created a virtual environment for the untruth, hoping that it lives outside of that virtual environment somewhere else. So a way for people to be skeptical of other people online is to say, you know, half your followers are bots, or half the people on this hashtag are bots, and people always respond really negatively to that kind of allegation. It feels like accusing everybody of being a bot is itself not a good adaptation for addressing bot behavior online.
Shawn Walker: Well, that's borderline trolling behavior, depending upon the context. So that can be a way to de legitimize someone's conversation. Well, that can't be right. Because there are a lot of bots sending that information out.
Michael Simeone: Yeah, it can be particularly awkward when you actually look at the conversation and there really are a lot of bots there. The activity of a botnet is really interesting, because not only does it do all the things we've been talking about, but it also brings everybody in that conversation to a total impasse. If one side says, hey, I think the other side has really been boosted by bots, it leaves the other side without a lot of outs. Is everyone supposed to say, oh, well, it looks like bots are boosting my position, I need to do a lot of self-reflection about what's been going on on my phone for the last two hours? That's a really hard thing to do. Most people don't really do that.
Shawn Walker: And because it's difficult to detect bots, and the tools that do detect bots are like, well, maybe it's a bot, maybe it's not, how do you actually refute someone who stands up to say, oh, well, actually there are a lot of bots sending out this information, so your argument's wrong? How do you disprove that? It's pretty difficult to prove whether there are bots. So then you just have this wide-open valley for people to come in and spread misinformation.
Michael Simeone: Yeah, the number one readers of text in the future are machines, and it's okay to anticipate that the authors and promoters of text in the future are machines too. So going in with some base level of skepticism seems like a healthy thing to do.
Shawn Walker: Yeah, and this puts platforms in a difficult position, because some bots are really good and really helpful, and we love them and we use them. Other bots are really harmful. But either kind is really difficult to detect. It's not like there's a little bot icon next to, you know, your Twitter handle or your Facebook account that says, FYI, this is a bot, just so you know. So platforms have to use all these metrics to try to figure out when something is a harmful bot, and they're not really successful at that. But also, bots stir up chaos, and chaos is good for platforms in many ways, because it increases traffic and discussion and engagement with the platform, so then they can sell more ads.
Michael Simeone: Yeah, it's almost as if platforms are simultaneously incentivized and disincentivized to abide bots. Business is good, right, when bots are active.
Shawn Walker: Yeah, I mean, until the humans leave. If it's just bots, that can be problematic.
Michael Simeone: Yeah, perhaps that was an oversimplification.
Shawn Walker: But the humans haven't left. The humans have threatened to leave a plethora of times, but no one leaves. It's similar to, you know, we talked about Parler: a lot of the high-profile right-wing figures that are active on Parler are still active on Twitter, as long as their account hasn't been deleted. They're still active on Facebook, they're still active on YouTube. So while they're sort of shaming these practices or shaming these sites, they're still really active on these sites, and they use them to the full extent possible.
Michael Simeone: Yeah, my Parler account is completely ruined at this point, because they've collapsed notifications and timeline together, so that any time anyone I follow on Parler sends a message at all, it shows up as a notification. I've literally lost track of any conversation I was a part of during my earlier time with the app, so something changed there. I wonder if there will be bots on Parler, but maybe that's a discussion for another day.
Shawn Walker: I mean, there might already be bots on Parler. Who knows; there's just no way to look into that platform to figure out which posts are automated and which aren't. And even though platforms like Twitter and Facebook do provide researchers with some data, that data is still really difficult to use to figure out whether something is a bot or not, because the bots are trying to mimic human behavior.
Michael Simeone: Yeah, and I feel like this is maybe a decent place to wind up, in that we often think about misinformation, bots, botnets, trolling, and it's very easy for us to think about all this at the consumer level, right? A whole lot of information literacy training is all about the consumer level. And no one here is going to say that the information consumer doesn't have a role to play. But what the platforms do, and how the platforms cooperate with researchers and others, is a really important ingredient in making sure that this stuff isn't a permanent and harmful part of the information and social media landscape.
Shawn Walker: And I think it's important for the public to understand these terms, because often in the media these terms are thrown around pretty loosely: oh, well, this thing is disinformation, when, well, someone just made a mistake and said something that was incorrect. Their purpose wasn't to deceive, so no, that's not disinformation, that's misinformation. Or they might claim, you know, that bots are doing this thing online when there's not actually a lot of quantitative evidence that shows that's happening. So it's important for us as consumers of news to be educated and to make a mental note to check, when these terms are used, are they actually being used correctly?
Michael Simeone: Yeah, so the kind of battle against these different things is constant and really time-consuming.
Shawn Walker: Right, both sides are using them, you know, just like your example: someone says, oh, well, all of your information is being tweeted out by bots, so your argument is not legitimate. That's happening offline as well as online.
Michael Simeone: Yeah, and I think it makes sense: if someone has gone through all the work to register a bunch of fake accounts, to create or work with a piece of software that's going to coordinate those bots, to train those bots up on a particular talking point, to surveil social media to figure out where to intervene, to have this complex, maybe not complex, but somewhat nuanced, vocabulary of actions, to boost a hashtag, spam a hashtag, send out a bunch of links, try to move or inject certain bits of conversation or other content into other people's social media circles, all that sounds really smart and time-consuming. And it seems unreasonable to expect that somebody just scrolling through their phone casually is going to be able to thwart that by being a smarty-pants. It's just not going to happen. There has to be some kind of not completely symmetrical, but partially reciprocal, time investment here on the part of the consumer.
Shawn Walker: As well as on the part of journalists and analysts that are doing the same thing.
Michael Simeone: Right, right. All those things have to add up to be an effective kind of mitigation. All right, any final thoughts?
Shawn Walker: Yeah, so I think it's really important just to understand these base terms, so that we can navigate when they're used in everyday discourse, or when they're used as part of an argument to delegitimize something, because mis- and disinformation is kind of at its hottest point right now in our public discourse across the world.
Michael Simeone: Yeah, I think that's an awesome place to wrap up. So thanks for joining us this week, and we will see you in the next one.
For questions or comments, use the email address datascience@asu.edu. And to check out more about what we're doing, try library.asu.edu/data.