Ever since Facebook launched in 2004, social media networks have become a major part of society. Selfies, memes, debates, and discussions are now a daily routine for a large part of the population. What used to happen between a couple of friends after a pint or two (the good, the bad, and the ugly!) now plays out in public every day.
As the major content moderators of the free world, social media platforms hold enormous power. They can be compared to the propaganda machines of the 21st century, and they have a real say in deciding the fate of important events. Case in point: the 2016 US presidential election, when Russian operatives spread misinformation to influence the outcome. It is one of many cases that underline the security measures and policies social media companies need to maintain. Effective policing of social media content is not an easy task, and that comes down to many factors.
Social media – the last bastion of free speech. Or are they?
The rise of social media has raised a whole lot of questions about free speech and censorship, and they have come from people of all political backgrounds. The advent of the internet enabled fast transfer of information, but there were still restrictions on who could publish content, and publishing required some technical know-how. Online chat rooms existed, but compared to the current number of social media users they were very limited. The arrival of Facebook brought whole new dynamics to the way people interact. The exchange of ideas became rapid, and everyone was exposed to more ideas (and flower selfies). It enabled more people to express their views and created a sort of lawless Wild West situation. Alliances were formed over long distances and around common ideas, while the anonymity offered by fake profiles exposed some disturbing aspects of human nature.
Social media companies have long been hard-pressed to remove content deemed unfit for consumption. Content from outright dangerous fringe groups is easy to identify, but in many situations decisions to allow or remove certain content have created controversy and led to long debates over what free speech does and does not cover. Many argue that free speech means full freedom to say whatever you want, yet hate speech remains contentious. Even what qualifies as hate speech is debated: some argue that jokes get a free pass, while others argue that jokes mocking a particular community should not be allowed. From what I've noticed, people across the political spectrum have both supported and opposed free speech on a case-by-case basis.
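To get a feel for why automated moderation struggles with this ambiguity, here is a deliberately naive, purely hypothetical keyword filter. The blocked terms and function names are placeholders I made up, not any platform's real system; the point is only that context-free matching produces both false positives and false negatives.

```python
import re

# A deliberately naive keyword filter, purely illustrative.
# The blocked terms are placeholders; no real platform works from a list this simple.
BLOCKED_TERMS = {"badword", "slur"}

def naive_moderate(post: str) -> str:
    """Flag a post if any blocked term appears, ignoring all context."""
    words = set(re.findall(r"[a-z]+", post.lower()))
    return "flagged" if words & BLOCKED_TERMS else "allowed"

# False positive: reporting on or quoting a slur gets flagged anyway.
print(naive_moderate("The column explains why slur is such a harmful word"))
# False negative: coded or implied hate speech sails straight through.
print(naive_moderate("You know exactly who I mean... those people"))
```

Real systems layer machine learning and human review on top of rules like this, but the underlying problem is the same: the meaning of a post depends on context the filter never sees.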
And since social media platforms are designed to keep people on the platform, their algorithms are tuned to show each user content that engages them and to suppress content they don't want to see. This often results in the formation of "filter bubbles". Essentially, it creates a "frog in the well" kind of situation: because you rarely see anything that conflicts with your ideas, you come to believe that whatever you say or believe must be right. This produces intellectual echo chambers and kind of goes against the very idea of social media. As you can imagine, it can lead more and more people to believe in conspiracy theories, and often in hate and prejudice against people of other communities. Fringe groups that never grew simply because they couldn't find enough people to join them can now reach the entire world. You don't need me to spell out the irony of conspiracy theorists calling everyone else sheeple.
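Here is a minimal sketch of how that feedback loop can arise, assuming a simple engagement-based ranker. The data structures, weighting, and function names are my own illustration, not any platform's actual algorithm.

```python
from collections import defaultdict

# Toy engagement-optimised feed ranker: an illustration of the feedback loop
# described above, not a real recommendation system.

def record_engagement(interest_profile, post):
    """Liking or sharing a post makes its topics score higher next time."""
    for topic in post["topics"]:
        interest_profile[topic] += 1

def rank_feed(posts, interest_profile):
    """Order candidate posts by how well they match past engagement."""
    return sorted(
        posts,
        key=lambda post: sum(interest_profile[t] for t in post["topics"]),
        reverse=True,
    )

profile = defaultdict(int)
candidates = [
    {"id": 1, "topics": ["conspiracy"]},
    {"id": 2, "topics": ["fact-check"]},
    {"id": 3, "topics": ["conspiracy", "memes"]},
]

# One click on a conspiracy post is enough to push similar posts to the top,
# and the more they surface, the more they get clicked: the bubble tightens.
record_engagement(profile, candidates[0])
print([p["id"] for p in rank_feed(candidates, profile)])  # -> [1, 3, 2]
```

Nothing in this loop checks whether the content is true; it only checks whether you engaged with something like it before.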
Who do you want on your platform?
An important consideration when building a social media site is the audience: the kind of people you want on your platform. Some platforms, like Reddit, are organised into dedicated subreddits run by volunteer moderators, and as such welcome people of all demographics and interests. But as those subreddits show, each caters to a very specific interest and is carefully moderated to remove content that doesn't fit that subreddit. On Facebook, by contrast, people from all backgrounds across the entire world are mixed into one soup; groups exist, but conversations aren't limited to them. Social media platforms earn most of their revenue from advertising, and to be really successful they want to bring in as many people as possible. That includes everyone from your standard-issue Karen to a neo-Nazi. It's no wonder that Facebook's content moderation policies appear so arbitrary.
Alternative facts
Globally, content platforms are being called out every day for failing to fight fake news. With free-to-use platforms such as Facebook and Instagram hosting millions of new pictures and posts daily, fake news spreads at an alarming rate. Fake news has real-world consequences (beyond the uncle who posts about the fruit juice with AIDS): it leads people to fall for scams and may even change the outcome of elections.
With the recent outbreak of the coronavirus, Google and Facebook have been working overtime to fight fake news. They really can't afford a fake-news crisis during a global pandemic, especially as more and more countries are planning reforms to curb big tech.
But a recently announced Facebook policy shows the ethical compromises tech companies often make to increase their revenue. Facebook made it clear that it will not remove political ads containing fake news, which essentially means that if I am a politician, I can lie about my opponent in a Facebook ad. While Facebook claims this is in the interest of free speech and that it lets users decide, Twitter has banned political ads outright and Google has announced that it will restrict the targeting of political ads.
The (mis)management of user data is arguably a key concern for all social media companies. Indeed, profiling individuals based on the content they view and the searches they make is one of the main revenue generators for social media companies and search engines. It shows exactly how "if you're not paying for the product, you are the product".
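As a rough, hypothetical illustration of what such profiling involves: every view, like, and search feeds an interest profile, which ads are then matched against. The event types, weights, and function names below are assumptions for illustration, not a description of any real ad pipeline.

```python
from collections import Counter

# Sketch of interest profiling for ad targeting. Weights are assumed, purely
# illustrative values.
ACTION_WEIGHTS = {"search": 3, "like": 2, "view": 1}

def build_profile(events):
    """events: (action, topic) pairs, e.g. ('search', 'running shoes')."""
    profile = Counter()
    for action, topic in events:
        profile[topic] += ACTION_WEIGHTS.get(action, 1)
    return profile

def rank_ads(profile, ads):
    """Rank ads by how strongly their topic matches the user's profile."""
    return sorted(ads, key=lambda ad: profile[ad["topic"]], reverse=True)

events = [("search", "running shoes"), ("view", "marathon training"),
          ("view", "running shoes"), ("like", "fitness")]
ads = [{"brand": "SodaCo", "topic": "soft drinks"},
       {"brand": "ShoeCo", "topic": "running shoes"}]

profile = build_profile(events)
print(profile.most_common(2))                          # running shoes dominates
print([ad["brand"] for ad in rank_ads(profile, ads)])  # ShoeCo targeted first
```

The user never pays a cent; the profile built from their behaviour is what gets sold to advertisers.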
This is the third part of a series on the ethical dilemmas created by modern-day tech. Click here to read the previous part, and here to read the first part.
Stay tuned for the next part.