Ethical dilemmas of modern-day tech: Part one

As for-profit ventures, it is hard to see how companies can be made to uphold ethical standards. Because technology moves at such a rapid pace, it is hard to build a legal framework that guarantees the rights of end-users. Companies with huge resources behind them are often able to find legal loopholes and ways to circumvent the rules meant to hold them accountable. Sometimes products designed with good intentions have harmful unintended consequences. In the rush to create tech that is more appealing, and therefore more profitable, the ethical implications of a service or a product are often the last thing the creator, or even the consumer, thinks about. 

While everyone is pondering how to rein in Big Tech, here’s a four-part discussion of the major ethical grey areas that the tech industry has faced, is facing, and will face. 

AI and the ethical conundrums associated with it.

Conversations about AI have always been full of debates. They range from questions of a technical nature (is strong AI possible? the Chinese room argument), to questions of a philosophical and ethical nature (the trolley problem), to the more political but definitely relevant question: are the machines going to steal our jobs? I call that a political problem because I think it is going to replace the “Are immigrants stealing our jobs?” question. Advancements in AI have raised some new ethical questions as well. 

As AI systems become more widespread, a curious pattern has emerged: supposedly objective systems mirror the discrimination seen in society. AI systems are trained on data and then apply that training to analyse new data of a similar nature. If the data used in training is bad, the resulting system is also bad. An interesting example is the case of IBM Watson. Researchers fed Watson the entire Urban Dictionary and soon had to delete everything it had learned from it, because the machine started swearing. While the incident was somewhat funny, it underlined the “garbage in, garbage out” principle of developing AI systems. 

This has troubling consequences for AI systems involved in decision-making, because of implicit gender and racial bias in the data. A recent study showed that under a widely used algorithm for guiding medical care, black patients assigned the same risk score as white patients were in fact considerably sicker: the algorithm used past healthcare spending as a proxy for medical need, and since less money has historically been spent on black patients, it underestimated how ill they were. Because systemic racism and gender discrimination are prevalent in the world, the existing data on how people are judged, whether for medical treatment or for granting loans, is likely to carry that discrimination and bias, and is likely to produce systems that are racist or sexist. The result is that systems designed to eliminate human bias often end up amplifying it. 
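
To make the mechanism concrete, here is a minimal, hypothetical simulation of how training against a biased proxy label (spending rather than actual need) reproduces the kind of disparity described above. All numbers, group labels, and the 30% care gap are illustrative assumptions, not figures from the study.

# Minimal sketch: a "risk" score built on a biased proxy label.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B (hypothetical groups)
need = rng.normal(50, 10, n)         # true illness burden, identical for both groups

# Proxy label: past healthcare spending. Assume group B historically
# receives ~30% less care for the same level of need (illustrative gap).
spending = need * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 5, n)

# A system that ranks patients by spending flags the "top 20%" for extra care.
threshold = np.percentile(spending, 80)
flagged = spending >= threshold

for g in (0, 1):
    in_group = group == g
    frac_flagged = flagged[in_group].mean()
    need_of_flagged = need[in_group & flagged].mean()
    print(f"group {g}: {frac_flagged:.1%} flagged, "
          f"mean true need of flagged patients = {need_of_flagged:.1f}")

In this toy setup, group B patients are flagged for extra care far less often, and those who are flagged are sicker than their group A counterparts at the same cut-off, which is exactly the pattern the study reported.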

To see for yourself how gender bias shows up in an AI system, follow the link and try entering different professions; you will quickly see how the system follows traditional gender roles. Or do a simple Google image search for professions that are associated with a gender. Almost all the images show traditional gender roles; an image search for “gynaecologists”, for example, shows almost exclusively female doctors. There is also an absence of representation in the data: people of colour are nearly nonexistent in image searches for most professions. Now remember that Google is one of the largest databases in existence, indexed by a powerful search engine. Most other databases will be similarly biased, if not more so, and one can easily picture what AI systems trained on them will produce. 

Making AI systems unbiased is not just a political or moral move. To build AI systems that actually work, the gender and racial diversity of the training data has to improve. When face detection first arrived in commercial point-and-shoot cameras, there were cases of the software flagging Asian faces as blinking even when they were not. Non-inclusive training data results in AI systems filled with bugs. 
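
One place where this bias can be measured directly is in pre-trained word embeddings, which are learned from large text corpora and sit underneath many AI systems. The sketch below is a minimal probe using the open-source gensim library and the publicly available Google News word2vec vectors (a large one-time download); the exact words and scores returned will vary, but occupation analogies tend to track traditional gender roles.

# Probing pre-trained word embeddings for gender associations.
# Requires: pip install gensim (the vectors are a large one-time download).
import gensim.downloader as api

# Word vectors trained on the Google News corpus.
model = api.load("word2vec-google-news-300")

# Analogy probe: "man is to doctor as woman is to ...?"
print(model.most_similar(positive=["woman", "doctor"], negative=["man"], topn=5))

# Compare how strongly occupation words associate with "he" versus "she".
for job in ["nurse", "engineer", "receptionist", "programmer"]:
    print(job, round(model.similarity(job, "he"), 3), round(model.similarity(job, "she"), 3))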

Alexa, do you perpetuate gender stereotypes?

Chatbots are one area where gender biases and stereotypes in AI systems are easy to notice. Almost all chatbots and digital assistants are given a female persona. Since the role of these digital assistants is to serve the customer or client, a female chatbot may perpetuate stereotypes about the role of women in society and in the workplace. And since the choice of persona is based on market research, the fact that most AI assistants are female may itself be a reflection of society. Steve Worswick, the developer of Mitsuku, an AI chatbot that has won the Loebner Prize Turing Test, has discussed the implications of his chatbot having a female persona. He has also mentioned that many users perceive the chatbot as a living human being. As AI digital assistants and chatbots become more common and more human-like, female AI assistants can perpetuate and amplify the gender stereotypes prevalent in society. 

Chatbots often serve as a reflection of humanity and offer a sneak peek into a future in which human-machine interactions resemble interactions between two humans. Take, for example, Tay, a Twitter bot developed by Microsoft. Twitter trolls tweeted profanity and racist and sexist statements at Tay, and the bot learned from them and started making offensive tweets of its own. Users regularly harass bots, swearing at them and sexually harassing them. While some of this may be intended as a joke, it reveals an aspect of human nature, the willingness to take pleasure in hurting a defenceless creature or system, that will have to be considered while developing similar systems. 

The application of AI is another ethical grey area. AI and machine-learning algorithms are able to make sense of vast troves of seemingly unrelated data. They can spot patterns that humans cannot, and a machine doesn’t get tired or need sleep. It can learn from an amount of data that a human simply doesn’t have the time to go through in a lifetime. Just take the case of self-driving cars. Waymo’s vehicles have logged tens of millions of miles on public roads and upwards of 10 billion miles in simulation, and the numbers keep growing. Compare that to the distance driven by an average human driver in a lifetime. Sure, a human driver is far better at applying what they have learned to new situations, but as algorithms improve, it is a matter of when rather than if self-driving cars will be better than human drivers. 
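
For a rough sense of scale, here is a back-of-the-envelope comparison. The per-year mileage and driving-lifetime figures are illustrative assumptions, not numbers from any Waymo report.

# Back-of-the-envelope comparison of driving experience.
AVG_MILES_PER_YEAR = 13_500          # assumed average annual mileage for one driver
DRIVING_YEARS = 60                   # assumed driving lifetime, roughly ages 18 to 78

human_lifetime_miles = AVG_MILES_PER_YEAR * DRIVING_YEARS   # 810,000 miles
simulated_fleet_miles = 10_000_000_000                      # simulation mileage cited above

print(f"One human lifetime: {human_lifetime_miles:,} miles")
print(f"Simulated fleet:    {simulated_fleet_miles:,} miles")
print(f"Ratio: roughly {simulated_fleet_miles // human_lifetime_miles:,} lifetimes of driving")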

With technology capable of so much, it is rather worrisome how it may be applied. Researchers recently developed an AI system that claims to tell from a photograph whether a person is gay or straight. While it was done with good intentions (and it did suggest that being gay is not a choice), it showed how easily such a system could be abused. People or governments with misguided moral compasses may use systems like this for unethical purposes, and there isn’t much to stop them. It showed the potential for AI systems to be used to profile individuals (think of Dr Zola’s algorithm in Captain America: The Winter Soldier). Harold Finch certainly wasn’t being paranoid when he feared the artificial superintelligence he himself had built. 

A popular application of AI in science-fiction movies is its use in warfare. From Terminator to Ultron, I have always been fascinated by AI in movies. Even though we are ages away from killer androids, AI warfare systems already exist and more weapons are being developed. 

AI in warfare

Billions of dollars have been spent developing targeting systems that lock on to enemy targets and guide munitions to destroy them. While many systems are capable of this, they still require a human operator to first designate the target, whether an enemy aircraft or troops on the ground. AI technology is currently being explored to develop weapons capable of identifying enemy troops, distinguishing them from friendly troops, and destroying them. Research is also underway on fully autonomous drones and drone swarms. While current systems are said to have safeguards, requiring a human operator to press a button before a human life is taken, there is no guarantee that future systems will. Current systems include Sea Hunter, a US warship capable of fully autonomous operation, and Israel’s Harpy, a drone launched by ground troops that then flies over enemy defences and destroys radar equipment. 

Project Maven, an artificial intelligence project by the Pentagon involving major tech firms such as Microsoft, Amazon, and Google, has raised an outcry among the public and among the companies’ own employees. The project was aimed at developing AI vision systems for unmanned aerial vehicles (drones), a step towards autonomous drones. The system would be capable of identifying, tagging, and tracking individuals or enemy troops, but would not allow firing without human oversight; essentially an AI spotter. It was one of the reasons for the recent turmoil inside Google. 

The use of AI in warfare is only going to increase and may result in future wars being fought between artificial intelligence systems of rival nations. We may all wake up one day to see a suit of armour around the world. One can only hope it won’t be so cold. 

This is the first part of a series. Click here to read the second part.
