Victims of stalking, harassment, hate, election interference, and other abuses have for years argued that we need to rethink the way social media functions. But a consensus has been growing in recent weeks among people tired of the way social media works today, with advocates for reform ranging from civil rights groups to antitrust regulators to Prince Harry.
The work of University of California, Berkeley associate professor Niloufar Salehi might very well play a role in such a process. Salehi was recently awarded a National Science Foundation (NSF) grant to consider what it would be like to apply principles of restorative justice to conflicts that occur on social media platforms.
A human-computer interaction researcher, Salehi has studied the personas of YouTube’s recommendation algorithm and was recently awarded a Facebook Research grant to study how Muslim Americans create counter-narratives to combat anti-Muslim hate speech online. Earlier in her career, she gained recognition for her work on Dynamo, a platform made for Amazon Mechanical Turk workers to organize and communicate. Dynamo debuted in 2015 after a year of consultations with workers who use Mechanical Turk to complete a range of small tasks, like labeling data used to train machine learning models.
Salehi spoke with VentureBeat about the challenges of applying restorative justice principles to social media, how platforms and the people who use them can rethink the role of content moderators, and ways that social media platforms like Facebook can better deal with online harms.
This interview was edited for brevity and clarity.
VentureBeat: So how did your research into restorative justice in social media start?
Salehi: This work came out of a research project that I was doing with a group called #BuyTwitter, and the idea was “What if users bought Twitter and ran it as a collectively owned co-op?” And one of the big questions that came up was “How would you actually run it in a sort of democratic way?” Because there weren’t any models for doing that. So basically the group reached out to me as someone who does social system design and understands online social systems, and one of the first things we did was consider the problems that exist with the current model.
We had these workshops, and the thing that kept coming up was online harm, especially harassment of different kinds, all these things that people felt weren’t being addressed in any kind of meaningful way. So we started from that point and tried to think about how else you could address online harm. That brought us to restorative justice, which is a framework coming out of the prison abolition movement and some indigenous ways of dealing with harm in communities. After harm has happened, it asks, “Who has been harmed? What are their needs? Whose obligation is it to meet those needs?” And it sort of thinks about every member: the person who’s done the harm, the person who’s been harmed, and other members of the community.
The way it’s used right now in schools and neighborhoods is usually that it’s within a community [where] everyone knows each other. And after some instance of harm happens — say, someone steals from someone else — instead of going to the police, they might have a neighborhood conference. There’s usually a mediator. They talk about the harm that’s happened and they basically come up with a plan of action for how they’re going to address this. A lot of it is just to have that conversation so the person who’s done the harm starts to understand what harm they’ve caused and starts to strive to repair it.
So the big problem with trying that model online is that not everyone knows each other, so that makes it really hard to have these kinds of conversations. Basically, what we’re doing in this research is taking that model’s values and processes and thinking about what happens if you take that and apply it at a higher level to problems online.
As part of that, we’re doing participatory design workshops with restorative justice practitioners, as well as moderators of online spaces who know the ins and outs of what can go wrong online. Part of what we’re doing is giving moderators online harm scenarios — say, revenge porn — then having them think through how you might address that differently. One of the things that happens is that thinking about the problem of online harm as just a problem of content moderation is actually extremely limiting, and that’s one of the things that we’re trying to push back on.
So the end goal for this work is to explore the options available for adding features to these [social media] platforms that incorporate elements of restorative justice.
VentureBeat: Will you be working directly with Facebook or Twitter or large social media companies?
Salehi: There are people at those companies that I’ve talked to at conferences and things like that, and there’s certainly interest, but I’m not working directly with any of those [companies] right now. Maybe down the line.
VentureBeat: What role do platforms play in restorative justice?
Salehi: I mentioned how in the restorative justice process you’re supposed to take each actor and ask “What are their needs and what are their obligations?” In this sense, we’re treating the platform as one of the actors and asking: What are the obligations of the platform? The platform is both enabling the harm to happen and benefiting from it, so it has some obligations, although we don’t necessarily mean that the platform has to step in and be a mediator in a full-on restorative justice circle. I personally don’t think that’s a good idea. I think the platform should create the infrastructure needed so that community moderators or community members can do that, and that can mean training community moderators in how to approach harm. It can mean setting obligations.
For instance, with regard to sexual harm, there’s been some work around how to actually build this into the infrastructure. Some models that people have come up with say every organization or group needs to have two point people to whom instances of sexual harm are reported, and it has to have protocols. So one simple thing could be that, say, Facebook requires that every Facebook group above a certain size has those protocols. Then there’s something you can do if sexual harm happens, and it’s also something that can be reviewed, and you can step in and change things if they’re running amok.
But yeah, it’s sort of thinking about: What are the platform’s obligations? What are people’s obligations? And also what are some institutions that we don’t have right now that we need?
VentureBeat: Something that comes to mind when talking or thinking about restorative justice and social media is the set of adjacent issues. At Facebook, for example, civil rights leaders say algorithmic bias review should be made a companywide policy, the company has been criticized for a lack of diversity among its employees, and apparently the majority of extremist group members join because the Facebook recommendation algorithm suggested they do so. That’s a very long way of asking, “Will your research make recommendations or offer guidance to social media companies?”
Salehi: Yeah, I definitely think that is a type of harm. And to take this framework and apply it there would be to go to these civil rights groups who keep telling us that this is a problem and Facebook keeps ignoring [them], and [instead do] the opposite of ignoring them, which is to listen to them and ask what their needs are. Part of that is the platform’s obligation, and part of that is fixing the algorithms. And part of why I’m really pushing this work is that it really bothers me how bottled up we get in the problem of content moderation.
I’ve been reading these reports and things that these civil rights groups have been putting out after talking with Mark Zuckerberg and Sheryl Sandberg, and the problem is still framed [in such a limited way] as a problem of content moderation. Another one is the recommendation algorithm, and it bothers me because I feel like that is the language the platform speaks in and wants us to speak in too, and it’s such a limiting language that it limits what we’re able to push the platform to do. [Where] I’m pushing back is in trying to create these alternatives so that we can point at them and say “Why aren’t you doing this thing?”
VentureBeat: “Will the final work have policy recommendations?” is another way to put that question.
Salehi: Yeah, I hope so. I don’t want to overpromise. We’re taking one framework, restorative justice, but there are multiple frameworks to look at. So we’re thinking about this in terms of obligations: you have the platform’s obligations and the public’s obligations, and I think those public obligations are what get translated into policy. So [as] I was saying, maybe we need some resources for this, maybe we need the digital equivalent of a library. Then you would say, “Well, who’s going to fund that? How can we get resources directed to that? What needs do people have, especially marginalized people, that could be resolved with more information or guidance, and can we get some public funding for that?”
I think a lot about libraries, institutions built to fulfill a public need for access to information. We created these buildings that host that information in books, and we created a whole profession for librarians. And I think there’s a big gap here: if I am harmed online, who do I go to and what do I do? I do think that’s also a public need for information and support. So I’m thinking about what the online version of a library for online harm would look like. What kind of support could it offer people, as well as communities, to deal with their own harms?
VentureBeat: So the library would be involved with getting redress when something happens?
Salehi: It could just be providing information to people who have been harmed online or helping them figure out what their options are. I mean, a lot of in-person libraries have a corner where they put information about things that are stigmatized, like sexual assault, that people can go and read and understand.
What I’m basically trying to do is take a step back and understand what the needs and obligations are: What are the obligations of the platform, and what are our obligations as a public, as people who harm one another? What are the public options? And then you can think about individual people’s obligations. So it’s trying to take a holistic view of harm.
VentureBeat: Will this work incorporate any of the previous work you did with Dynamo?
Salehi: Yeah. Part of what we were trying to do with Dynamo was create a space where people could talk about issues that they shared. I did a whole year of digital ethnography with these communities, and when I started doing that work, some of what I found was that it was so hard for them to find things they could agree on and act on together. They actually had past animosity with each other, very similar to a lot of the online harms that I’m finding again now.
When harm happens on the internet, we basically have zero ways to deal with it, and so they quickly ended up in these flame wars, with people attacking each other, and that resulted in multiple fractured communities that basically hated each other and wouldn’t talk to each other.
So what we’re trying to figure out with the restorative justice work is: when harm happens, what can we do to deal with it? For instance, one of my Ph.D. students in this work is working with a lot of gaming communities, people who play multiplayer games, and a lot of them are quite young. A lot of them are actually under 18, and we can’t even interview them. But harm happens a lot, and they use a lot of slurs and are misogynistic and racist. There’s basically no mechanism to stop it, so they learn it, and it’s normalized, and it’s sort of what you’re supposed to do until you … go too far and you get reported, which happened in one of the harm cases that we’re looking at.
Someone recorded a video of this kid using all sorts of slurs and being super racist and misogynistic and put it on Twitter, and people went after this person. He was under 18, quite young, and he basically lost a lot of friends and got kicked out of his gaming communities. And we’re sort of trying to figure out, “Why did this happen?” This doesn’t help anyone. These kids are learning all of these harmful behaviors and there’s no correction for it. They’re not learning what’s wrong here; they either never learn, or they learn in a way that harms them and just removes them from their communities. So it’s a lot like the prison industrial complex: of course not at that scale or level of harm, but a microcosm of the same dynamic. So we’re trying to think about what other approaches could work here. Who needs training to do this? What tools could be helpful?
VentureBeat: I know the focus is on restorative justice, but what are some different forms of AI systems that might be considered as part of that process?
Salehi: I’m a little bit resistant to that question, partly because I feel like a lot of what has gotten us to this point where … everyone’s approach to harm is so horrible is that we’ve just pushed toward these minimal cost options. And you had Mark Zuckerberg going to Congress [in 2018] — he was asked questions about misinformation, about election tampering, about all sorts of harm that’s happened on his platform, and he said “AI” like 30 times. It sort of became this catch-all, like “Just leave me alone, and at some undisclosed time in the future AI will solve these problems.”
One thing that also happens, because of the funding infrastructure and the amount of hope we’ve put into AI, is that we take AI and go looking for problems for it to solve, and that’s one of the things I’m resistant to. That doesn’t mean it can’t be helpful. I’m trained as a computer scientist, and I actually think it could be, but I’m trying to push back on that question for now and say “Let’s not worry about the scale. Let’s not worry about the technology. Let’s first figure out what the problem is and what we’re trying to do here.”
Maybe sometime in the future we find that one of the obligations of the platform is to detect whenever images are used, and then not just detect and remove them but do something that helps meet people’s needs, and there we might say AI will be helpful.