Being called upon to strengthen content moderation has proven to be a plus for online communities, because it can boost user retention and encourage growth. To learn more about the changing landscape of online conversations, and how to get it right, don’t miss this VB Live event featuring veteran analyst and author Brian Solis.
It’s becoming increasingly clear that user-generated content on social platforms is open to all kinds of abuse, from livestreamed violence to cyberbullying and toxic user behavior. Companies need to start asking themselves how to balance the benefits of engaging users on social platforms or in communities against the risk of losing users by overstepping, says Brian Solis, principal analyst at Altimeter, a Prophet company, and author of a new book on mastering digital distractions, Lifescale: How to Live a More Productive, Creative and Happy Life.
“This whole idea is new,” Solis says. “Human interactions are complex and nuanced, so this isn’t an easy task. We’re still navigating what this looks like and what the best management practices are.”
Different generations are operating by different standards. Stakeholder and shareholder pressure to monetize at all costs is pervasive. At the same time, there’s a huge disconnect between experts, parents, teachers, and society as it all evolves. This is creating new norms and behaviors that bring out the best and the worst in us on each platform.
The challenge is that not all groups see things the same way or agree on what’s dangerous or toxic. For some, incidents and catastrophic events arrive so fast that it’s impossible to stay empathetic for more than a minute, because something else is always around the corner.
“Leading platforms are also either not reacting until something happens, or they’re paying lip service to the issues, or they’re facilitating dangerous activity because it’s good for business,” Solis says. “Hate speech, abuse, and violence should not be accepted as the price we pay to be on the internet. We need to bring humanity back into the conversation.”
In a perfect world, this wouldn’t even be a discussion, Solis says. Though it is a nuanced concept and constantly evolving, this kind of technology and practice is good for business. Platforms won’t have to fear losing advertisers or users because of harmful content and behavior. Brands and advertisers will look to platforms that have positive, lasting engagement. There’s great power in being an early adopter in the technology space.
“We’re also at a point where doing the right thing is also good for business,” Solis says. “Whether it’s CSR or #MutingRKelly, platforms are keenly aware of the reputation they build. Implementing this kind of technology and building a healthy community by doing so will help brands’ reputations, goals, and bottom lines.”
Before implementing this kind of practice on your platform, you need to identify and understand what kind of community you want to foster. Each platform has its own distinct behaviors and personality. For example, do you want a community closer to that of Medium, positive and inviting, one that embraces personal expression, or do you want to drift toward the 8chan end of the spectrum, where anything goes and the environment turns toxic?
“It’s not an easy question to ask yourself, but it’s necessary,” he says. “I think once you can establish your vision for the community you plan to build, you can then set very clear boundaries on what’s acceptable and what is not.”
While technology and AI are invaluable for comment moderation and community monitoring, they can only shoulder so much of the responsibility.
“This is a problem of human interaction and behavior, James Bond-level villainy, intentionally unethical intent, an incredible absence of consequences, and emboldened behaviors as a result,” says Solis. “So we need humans, AI, and more to help fix it.”
The best practice is a mix of AI and human intelligence (HI) working together to proactively prevent and remove unacceptable content.
“Humans are using technology for evil, certainly,” he says. “But we can also use technology as a solution. This kind of content moderation doesn’t hinder the ability to express; it protects our expression. It allows us to continue to post online, but with some reassurance that we’re in a welcoming environment.”
AI is like a toddler: it can identify things and flag them for you, and it’s much faster at processing and surfacing sensitive material. But it’s still learning context and nuance. That’s where humans come in. Humans can take the information from AI and make the tough calls, far more efficiently. We can let AI bear the burden of processing most of the toxic material, then bring humans in when needed to make the final decision.
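To make that division of labor concrete, here is a minimal, hypothetical sketch of an AI + HI triage loop in Python. The thresholds, the score_toxicity callable, and the toy scorer are illustrative assumptions, not a description of any vendor’s actual system: the model auto-handles clear-cut cases and escalates only the ambiguous middle band to human reviewers.

```python
# Hypothetical sketch of an AI + HI moderation pipeline: the model handles
# clear-cut cases automatically and routes only ambiguous content to humans.
# The scoring function is a stand-in; any toxicity model could supply the score.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Decision:
    text: str
    action: str    # "approve", "remove", or "escalate"
    score: float   # model's estimated probability that the content is toxic

def moderate(
    messages: List[str],
    score_toxicity: Callable[[str], float],
    remove_above: float = 0.95,
    approve_below: float = 0.10,
) -> List[Decision]:
    """Let AI absorb the bulk of the toxic material; humans see only the
    uncertain middle band where context and nuance matter."""
    decisions = []
    for text in messages:
        score = score_toxicity(text)
        if score >= remove_above:
            action = "remove"      # high-confidence violation, no human needed
        elif score <= approve_below:
            action = "approve"     # high-confidence safe content
        else:
            action = "escalate"    # ambiguous: queue for human review
        decisions.append(Decision(text, action, score))
    return decisions

if __name__ == "__main__":
    # Toy scorer purely for illustration; a real system would call a trained
    # model or a moderation API instead.
    def fake_score(text: str) -> float:
        lowered = text.lower()
        if "kill" in lowered:
            return 0.99
        if "stupid" in lowered:
            return 0.50
        return 0.02

    samples = ["Have a great day!", "You're so stupid", "I'll kill you"]
    for d in moderate(samples, fake_score):
        print(f"{d.action:>8}  ({d.score:.2f})  {d.text}")
```

In this sketch, tightening or loosening the two thresholds controls how much material reaches human moderators, which is the practical lever for keeping reviewers focused on the calls that genuinely need context.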
“Raising the bar means raising our standards,” he says. “Demanding that online communities foster healthy environments, protect their users from toxic behavior, and be unapologetic in doing so.”
To learn more about the role of content moderation in creating healthy communities, plus a look at the tools, strategies, and techniques for balanced moderation that keeps communities engaging and growing, register now for this VB Live event.
Don’t miss out!
You’ll learn:
- How to start a dialogue in your organization around protecting your audience without impinging on free speech
- The business benefits of joining the growing movement to “raise the bar”
- Practical tips and content moderation strategies from industry veterans
- Why Two Hat’s blend of AI+HI (artificial intelligence + human intelligence) is the first step towards solving today’s content moderation challenges
Speakers:
- Brian Solis, Principal Analyst at Altimeter, a Prophet company, and author of Lifescale
- Chris Priebe, CEO & founder of Two Hat Security
Sponsored by Two Hat Security