Removing Racist and Hateful Comments: A Simple Relevancy Test
Posted by Marti Weston on March 16, 2012
After the jury announced its verdict in New Jersey, I watched an Associated Press video of the statement read by Tyler Clementi’s father. Sad and clearly heavy-hearted, he nevertheless looked to the future in a way that most of us could not have done had we lost a child the way he lost Tyler. Then I glanced down at the YouTube comments — just about every one included a gay slur or offensive language, and I was disgusted. The comments were not relevant.
Racist and hateful online comments demean writers, video-makers, and people who thoughtfully share digital content. It’s becoming tiresome. Masquerading as run-of-the-mill responses at the end of articles and videos, they are actually cyber-bullies’ remarks, left here and there with the goal of offending and hurting others. The time has long since passed for comment and blog editors everywhere — but especially at Google’s YouTube — to set up and enforce guidelines.
I know that the United States Constitution guarantees freedom of speech; however, what we are observing is not freedom of speech but freedom to run off at the mouth and bully others in ways that are not relevant to the content. As a result, we are teaching all sorts of silent lessons — the kind we don’t really intend to teach to young people as they grow up.
Perhaps more than half of the sites I visit, and most YouTube videos, include comment sections with obscene and hateful remarks. The words themselves don’t particularly bother me — it’s the venom and hate they convey, and how these responses are becoming standard. We seem to have accepted the notion that free speech means it’s OK to say whatever you like anywhere you want to say it, and maybe it is. But why must we provide a publicity vehicle for cyber-bullies who promote this type of hate? It’s misbehavior, not free speech.
I am not making a case for censorship here, but for relevancy. I know how to teach people, young and old, to evaluate and judge digital content. But how can my lessons or the lessons of diligent parents overcome a virtual world that casually sprinkles the world’s compelling content with hate?
YouTube might learn from the Providence Journal (ProJo) in Rhode Island. Last fall, when Ruth Simmons, the President of Brown University, decided to retire, I headed right over to ProJo to read what the local Rhode Island press had to say. Simmons has been a remarkable university president, and the article described her achievements in detail. The comments? Appalling racist words, jokes, and slurs. The other day, when I went back to that article, the comments were gone, because ProJo had adopted a comment editing and relevancy policy.
If we are to achieve real progress with our national cyberbullying problem, we will need to adopt relevancy content policies, like ProJo’s, at every website and blog, so we can edit out hate-filled rants. As long as casual cyber-bullies wend their way through the websites in our lives, even when we filter those sites fairly well for our kids, we will have difficulty convincing pre-adolescents, adolescents, and young adults to take these issues seriously. Too many times in a digital day, they see hate-filled vitriol looking just as normal, and apparently just as acceptable, as any other content.
We reward cyber-bullies and hate mongers when we allow their unedited and irrelevant comments to remain on a site just because we believe we are supporting free speech.