Hate speech has been around for a long time, but the connected world has amplified it. Hateful and threatening comments on social media and in comment sections sometimes feel like run-of-the-mill daily events. Sadly, Twitter, an awesome social media communications platform — one that I and many educators use and adore — has offered one of the easiest pathways for hate speech amplification. Twitter makes it easy to be “sort-of” anonymous.
For a good overview of Twitter’s online hate problems, take a few minutes to read Jim Rutenberg’s New York Times article, On Twitter, Hate Speech Bounded Only by a Character Limit. Rutenberg shares some of the hateful accusations he’s received and discusses the challenges that Twitter faces with so much hateful, accusatory, and threatening speech. He notes that Twitter, which is no longer growing its subscriber base, is now for sale, and he speculates on who might purchase it. “You have to wonder,” he writes, “whether the cap on Twitter’s growth is tied more to that basic — and base — of human emotions: hatred.”
It’s fairly common to hear about the hateful and threatening comments that people receive, especially public figures, journalists, and performers. And most often these days the missives arrive via Twitter. Several weeks ago I saw an example that Washington Post columnist Ruth Marcus shared — a disgusting, ominous, and anti-Semitic Tweet and photo. I was shocked when I saw it, but when I went back in my email to re-examine what Marcus had received, I found the account had been suspended. The problem is that the folks with suspended accounts simply sign up again under a different pseudonymous account and go at it again.
Twitter is not the only social media tool where hate is expressed anonymously and vigorously, but it is the only one that has not been able to figure out how to provide consistent oversight. Interestingly, many of the apps that began life as anonymous communication tools are rapidly changing the way they do business because of the huge difficulties that arise when people can sign up and operate with no oversight. Learn more in Caitlin Dewey’s Washington Post article, These Apps Were Made to Share, and That’s Why They Couldn’t Last, which appeared on October 4, 2016. And, of course, one only has to look at a few comments at the end of an article to see individuals spewing hateful responses.
These days our excitement about the freedom, access, and ease of digital communication can turn in a moment to dread when we observe hateful comments and threats. Our commitment to free speech is sorely tested when products — social media platforms and apps — enable so many people to say things that shouldn’t be said. Why, we might ask, do these products, with no way to control hateful rhetoric or behavior, keep appearing in our lives and in the lives of 21st Century children?
Perhaps the most worrisome aspect of all this public hatred is how it affects children, and how base and emotional expressions of hate come to seem almost normal to young people — as if it’s always been this way and happens every day. The 2016 election cycle is one example: children observed and heard things that only a few years ago adults would have considered inappropriate. No matter how many conversations we have about digital citizenship and civics, there can be no doubt that the World Wide Web and social media have made it difficult for adults and educators to counteract what kids just happen to see and hear in settings where the influence of parents and educators is scant.
A Few Past Posts that Relate to This Subject