This essay on Machine Ethics was written as an assessment for the elective course Business Ethics, part of my bachelor's program International Business Innovation Studies.
With Siri, Google and self-driving cars, we have all heard about the dawn of AI and the employment of machines by now. Artificial intelligence is the field of building computer systems that understand and learn from observations without the need to be explicitly programmed, as defined by Nathan Benaich. As distant as it may still seem, it is undeniable that this force will take on a crucial role in our society in the very near future and influence it in ways we can hardly begin to imagine. It will rapidly open up a plethora of opportunities and at the same time pose a new range of dangers. It will replace humans in a large majority of jobs, create creative works, assist us in making choices, help us solve problems and even implement its solutions, and we will build up relationships with these systems. Healthcare, education, business, engineering; it is hard to think of an area that it wouldn't touch.
As the Future of Life Institute's president Max Tegmark puts it, "everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before, as long as we manage to keep the technology beneficial."
The more data these systems have, the better they will function, and the better they perform, the more autonomy we will be willing to give them. Once AI systems and potentially their robotic physical extensions are able to make and carry out decisions autonomously, there is no telling what they will be up to. Utopian and dystopian views alike, it is practically impossible to foresee the profound impact this development will have. Yet if we want to harness its power for a positive impact and ultimately ensure this technology doesn't become an existential threat to humankind, we have no choice but to at least anticipate what is coming.
It is not a coincidence that modern-day masterminds like Stephen Hawking, Bill Gates and Tesla's Elon Musk are cautious and are already investing themselves and their fortunes in keeping AI beneficial for humanity and in protecting people from the potential misuse and unintended consequences of this new superpower.
People working in the field also urge precaution. AI researcher Stuart Russell, for example, notes that "AI methods are progressing much faster than expected, which makes the question of the long-term outcome more urgent," adding that "in order to ensure that increasingly powerful AI systems remain completely under human control… there is a lot of work to do." (Mariëtte Le Roux, "Rise of the Machines: Keep an eye on AI, experts warn", 12 March 2016)
Although there are a handful of different primary concerns in AI safety, in this essay I'd like to focus on the ethical issues and the need for a way to ensure moral behavior in artificially intelligent beings; an area of study that is generally referred to as machine ethics.
I'd like to begin with a thought experiment called "the Paperclip Maximizer" that was originally described by Swedish philosopher Nick Bostrom (one of the leading voices in the field) for the very purpose of illustrating the need for machine ethics. It presents the following scenario:
Imagine an advanced artificial intelligence tasked only with the goal to make as many paper clips as possible. If such a machine were not programmed to value human life, then given enough power its optimized goal would be to turn all matter in the universe, including human beings, into either paperclips or machines which manufacture paper clips. (Nick Bostrom, “Ethical Issues in Advanced Artificial Intelligence”, 2003)
Although this is a somewhat silly example, it points out the potential danger of letting an AI carry out poorly defined and even seemingly innocent goals without constraining it. Or as Nick Bostrom says in his TED talk "What happens when our computers get smarter than we are?": if you create a really powerful optimization process to maximize for objective x, you had better make sure that your definition of x incorporates everything you care about.
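To make that point about objective x a bit more concrete, here is a minimal sketch in Python. Everything in it is made up for illustration (the actions, the numbers, the penalty weight); the only point is that an optimizer pursues exactly what its objective scores, and nothing else.

# A hypothetical planner that simply picks the action with the highest score.
actions = {
    # action: (paperclips produced, harm to things we care about)
    "run_factory_normally": (1_000, 0),
    "convert_all_steel_on_earth": (10**9, 100),
    "convert_all_matter_incl_humans": (10**15, 10**6),
}

def naive_objective(outcome):
    clips, harm = outcome
    return clips                      # only paperclips count

def value_aware_objective(outcome):
    clips, harm = outcome
    return clips - 10**12 * harm      # harm to what we care about dominates

print(max(actions, key=lambda a: naive_objective(actions[a])))
# -> convert_all_matter_incl_humans
print(max(actions, key=lambda a: value_aware_objective(actions[a])))
# -> run_factory_normally

The second objective only behaves because someone wrote the harm term into it; leave anything out of x, and the optimizer will happily trade it away.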
Bostrom offers more examples in the same talk. Suppose we give an AI the goal to make humans smile. When the AI is weak, it performs useful or amusing actions that cause its user to smile. When the AI becomes superintelligent, it realizes that there is a more effective way to achieve this goal: take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins. Or suppose we give an AI the goal to solve a difficult mathematical problem. When it becomes superintelligent, it realizes that the most effective way to get the solution is to transform the planet into a giant computer, so as to increase its thinking capacity. Notice that this gives the AI an instrumental reason to do things to us that we might not approve of: human beings in this model are threats, since we could prevent the mathematical problem from being solved.
These basic examples show the colossal danger we could be exposed to if we unleash AI into the world without proper precautions. One such precaution, I think, would be to instill some sense of ethics: the distinction between right and wrong, and the ability to apply it. We need the machine to learn in some way about human values. But before we can even start thinking about a way to implement this, we need to decide on what those values are; on a universal moral framework.
The challenge, of course, is that there is no such thing. People have debated what is right and wrong since the beginning of time, supported by their culture, religion and personal convictions. Sure, there are some situations that are generally agreed on, but there are plenty more dilemmas. We know murder or rape is wrong, but how about euthanasia and abortion, lying and stealing? Hence there are several moral frameworks, the main ones being the consequentialist framework, the duty framework and the virtue framework. (Brown University, "Making Choices: Ethical Decisions at the Frontier of Global Science" seminar, spring 2011)
Many of these differences are also rooted in culture, and since AI is likely to operate internationally, this raises another question. Should it apply the same values everywhere, or adapt to its cultural context? Considering the discrimination and legal issues such adaptations would bring along, would it be better to stick to one framework? That approach demands compromises, though; how can we decide on those? And how can we ensure a truly global approach when AI is mostly developed in first-world countries?
As if finding a universal moral framework wasn't challenging enough, the next mission ahead of us is finding a way to quantify our beliefs, or seeking out a solid way to let the AI learn about ethics. We need to find a way to program or teach the AI ethics so that it can be a moral agent: a being capable of acting with reference to right and wrong.
Ten years ago we would have had to build the algorithm by hand; nowadays machines are self-learning. This self-learning capacity, once a fantasy, is slowly but surely becoming technologically feasible.
A recent example of this is AlphaGo. AlphaGo beat the world champion of the Chinese board game Go in four out of five games; something people deemed impossible without human intuition. As amazing an achievement as this is, I couldn't help but feel a little concerned when I read that its developers themselves have very little understanding of how it works and how it was able to become so good. To learn the game, the system largely played against itself countless times until it reached this level.
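To give a feel for what "playing itself countless times" means in practice, here is a deliberately tiny self-play sketch in Python. It has nothing to do with DeepMind's actual code or with Go; it teaches itself the trivial game of Nim (five stones, take one or two, whoever takes the last stone wins) purely from its own games, and no human ever spells out the strategy it ends up with.

import random
from collections import defaultdict

values = defaultdict(float)   # (stones left, move) -> estimated value for the player moving

def choose(stones, explore=0.1):
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)           # occasionally try something new
    return max(moves, key=lambda m: values[(stones, m)])

def self_play_episode():
    stones, history, player = 5, [], 0
    while stones > 0:
        move = choose(stones)
        history.append((player, stones, move))
        stones -= move
        winner = player if stones == 0 else None
        player = 1 - player
    for p, s, m in history:                   # reward the winner's moves, punish the loser's
        reward = 1.0 if p == winner else -1.0
        values[(s, m)] += 0.1 * (reward - values[(s, m)])

for _ in range(20_000):
    self_play_episode()

print(max((1, 2), key=lambda m: values[(5, m)]))   # typically 2, the winning opening move

Even in this toy, the "strategy" lives in a table of numbers that nobody wrote down by hand; scale that up by many orders of magnitude and the black-box worry of the next paragraph becomes easy to appreciate.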
This means even those who created this machine do not know exactly how it learns, and this "black box" leaves very little room to correct its learning along the way. How can we build in mechanisms that show us how the machine learns and what conclusions it has drawn, and then re-evaluate them if they don't align with human values? Regardless, the possibilities created by self-learning machines are amazing.
The learning approach implies that the machine would learn from carefully observing human behavior and sources, much like a young child does. The problem with that is that we humans don't always act on our values and sometimes even contradict them. Since only the actions are observable to the AI and the inner thought process is not, how can we make the AI understand the difference and only copy the good parts? Much like a child that grows up in a hostile environment, it will consider whatever it is exposed to as normal.
Surely, given enough data, this will lead to a realistic representation of human values as they are actually expressed, but wouldn't we want our AI to have a stronger sense of morality, mirroring kindness rather than hatred?
This has happened before, when Microsoft let its chatbot Tay loose on Twitter for 24 hours with the instruction to converse with and learn from users' tweets, only for it to return with a strongly racist, Hitler-promoting and feminist-hating perspective.
So if we choose the learning approach, how can we make sure it copies from virtuous people and actions and not from vile ones? Would it solve this problem to have it learn only from a database of moral dilemmas and people's responses to hypothetical scenarios, rather than from the real world? After all, people tend to make much more ethical choices when they are not emotionally involved or personally at stake. More generally, how can we teach the AI at all which statements are true or false? And how is it supposed to act on debatable topics? Does the majority always win? How can the AI uphold morality even in cases where the people or the data say differently?
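As a purely illustrative sketch of that dilemma-database idea, consider the toy Python snippet below. The dilemmas and the vote counts are invented; the point is only to show how quickly the question "does the majority always win?" becomes concrete once the data is in front of you.

# Hypothetical survey data: each dilemma with people's responses to it.
hypothetical_responses = {
    "lie to protect a friend's feelings": ["acceptable"] * 70 + ["unacceptable"] * 30,
    "steal medicine to save a life":      ["acceptable"] * 55 + ["unacceptable"] * 45,
    "read a user's private messages":     ["acceptable"] * 5  + ["unacceptable"] * 95,
}

def majority_judgement(dilemma):
    votes = hypothetical_responses[dilemma]
    return max(set(votes), key=votes.count)   # the simplest possible rule: majority wins

for dilemma in hypothetical_responses:
    print(dilemma, "->", majority_judgement(dilemma))

A 95/5 split feels like a value worth encoding; a 55/45 split is exactly the kind of case where handing the decision to a vote count seems far too crude, which is the heart of the questions above.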
As daunting a task as all of this seems, there are some very brave and bright people who stand up to it and work on this every single day. Although this overview is nowhere near complete, I'd like to point out a few of these initiatives.
The first is a company called GoodAI, located in Prague. They have made it their mission to 'develop general artificial intelligence as fast as possible, be helpful to humanity, and understand the universe'. Besides working on developing general, strong AI itself, they also spend a lot of resources on ensuring that it remains safe.
The second is actually a conglomerate of research projects funded through grants from the Future of Life Institute, which is in turn financially supported by the likes of Elon Musk. These projects are focused on addressing ethical and other potential future risks from AI. Some examples are How to Build Ethics into Robust Artificial Intelligence, Inferring Human Values: Learning 'Ought', not 'Is', Aligning Superintelligence With Human Interests and Teaching AI Systems Human Values Through Human-Like Concept Learning.
There’s also the Machine Intelligence Research Institute and the Future of Humanity Institute, which are both non-profits actively pursuing and making progress towards these goals by initiating and encouraging extensive research.
Finally, there is Google's Safety and Ethics Advisory Board. When Google acquired the prominent and promising AI start-up DeepMind, the deal reportedly went through only on the condition that Google would put together a group of its most visionary people to research and come up with solutions to the potential dangers. Although practically nothing is known about the board, it's a comforting thought that a tech giant like Google is at least examining this.
However difficult, I believe this invention and its impact are so profound that they cannot be ignored. More than anything, by writing this I hope to open up the conversation and invite all bright minds to think along.
We should not be frightened by the thought of its possible threat, but rather be motivated to prevent it. As we speak, AI is still firmly within human control, well below human intelligence and only applied within limited and controlled environments. We need to make the most of this time to develop, test and implement solutions to the many challenges machine intelligence brings with it, so that once we release it into the world it will not sabotage, but enrich, the human condition and our day-to-day lives.