Ethics in Artificial Intelligence: what the future holds – speaking to Inge de Waard

Dr Inge de Waard works as a strategic instructional designer for InnoEnergy Europe and is an avid enthusiast for open science. Her focus and knowledge span many areas of the digital learning sphere: from MOOCs and self-directed learning to individual online learning types and questions of ethics in Artificial Intelligence. At OEB MidSummit (June 8 – 9) Inge will discuss the risk of leaving ethics out of machine intelligence development in her interactive session “Society reclaims Ethics for Education”. She has already given us a taste of what she thinks about the use of Artificial Intelligence in education, from its potential to its risks.

 

You promote using AI in education, but not blindly or at any cost. What are your concerns? Is there a price learners or educators may unknowingly be paying as they use AI?

 

I would not say that I promote using Artificial Intelligence (AI) in education; rather, I accept the fact that it is used, and ever more so. However, I have some critical thoughts accompanying this evolution.
An algorithm is just a set of rules to be followed: a self-contained set of actions. AI is made up of a complex network of algorithms that aims to mimic human thought. The algorithms are rules translated into programming code so they can be embedded into larger pieces of software, which in turn can communicate with other large software components that also embed algorithms, together making up AI.

 

Taking the above into account, my concern with AI is twofold. First, the brain – to me – is not reverse-engineerable. It is something bigger than its parts, just like all emergent complex systems. This also means it is multi-faceted and does not always follow one set of rules linearly or logically. Secondly, only a niche group of people are constructing the algorithms that make up AI. Their cultural assumptions are inevitably translated into the rules that make up the algorithms.
This so-called effect of the programmer was first seen in filter bubbles. Eli Pariser coined the term filter bubble and highlighted the unintentional lock-in that can happen when algorithms filter what they are programmed to filter for you. Filter bubbles are a result of web personalization, programmed on the basis of prior preferences, use, location, interests, and so on. A well-known example compares search results for “professional” versus “unprofessional” hairstyles and reveals a racial bias (https://www.theguardian.com/technology/2016/apr/08/does-google-unprofessional-hair-results-prove-algorithms-racist-).
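To make the lock-in concrete, here is a minimal, hypothetical sketch of preference-based filtering (not any real recommender; the articles, topics and click history are invented for the example). Items matching topics the user already clicked get ranked first, so the user keeps seeing more of the same.

```python
# Minimal sketch of preference-based filtering (hypothetical data, not a real system).
# Articles matching topics the user already clicked are ranked first, so over time
# the user is shown ever more of what they already preferred: a filter bubble.

from collections import Counter

ARTICLES = [
    {"title": "New MOOC platforms compared", "topic": "edtech"},
    {"title": "Local election results", "topic": "politics"},
    {"title": "AI tutors in the classroom", "topic": "edtech"},
    {"title": "Climate policy explained", "topic": "politics"},
    {"title": "Gadget review roundup", "topic": "tech"},
]

def personalize(articles, click_history):
    """Rank articles by how often the user clicked the same topic before."""
    topic_counts = Counter(click["topic"] for click in click_history)
    return sorted(articles, key=lambda a: topic_counts[a["topic"]], reverse=True)

# A user who has mostly clicked edtech stories...
history = [{"topic": "edtech"}, {"topic": "edtech"}, {"topic": "tech"}]

for article in personalize(ARTICLES, history):
    print(article["title"])
# ...now sees edtech first and politics last, and the imbalance grows with every click.
```

Nothing in the sketch is malicious; the bias is simply the programmer's choice of ranking rule, which is the point being made about cultural assumptions baked into algorithms.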

 

In a way, we are building an external set of logic rules outside of our brains. However, the crux is that those rules come out of our human brains. To be precise, they come from a select number of brains, which do not necessarily represent the complexity of human thought out there. For example, which activities are difficult to translate into logical thought? I’d say creativity and art. Modelling the universe is easier than modelling what makes art. To me this means we have a clear gap. Although AI is promoted as the new utopia, there seem to be only a handful of people willing to make AIs that resemble Mahatma Gandhi, Sei Shōnagon, Cindy Sherman … So, which brain is AI interested in? The use of AI is currently more about production, efficiency and constructing what some think the brain should be like than about actually providing an additional benefit to the whole of society.

 

Could a focus on a code of ethics for the use of AI offer a solution to address possible anxieties? Which issues need to be taken into consideration, especially in the education sector?

 

Looking at AI from an educational perspective reveals potential benefits. AI can help us to understand what works and why. For example, we can analyse big data from learners to find out more about learning efficiency. AI also makes it easier to personalize learning.
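As a concrete illustration of that kind of learner analytics, here is a minimal, hypothetical sketch (the event-log fields, modules and numbers are invented for the example): aggregating activity records per module to spot where learners spend a long time but rarely succeed.

```python
# Minimal sketch of learner analytics on hypothetical event logs.
# Each record: which learner, which module, minutes spent, and whether they passed.

from collections import defaultdict

events = [
    {"learner": "a", "module": "statistics", "minutes": 95, "passed": False},
    {"learner": "b", "module": "statistics", "minutes": 80, "passed": False},
    {"learner": "c", "module": "statistics", "minutes": 70, "passed": True},
    {"learner": "a", "module": "python-basics", "minutes": 30, "passed": True},
    {"learner": "b", "module": "python-basics", "minutes": 25, "passed": True},
]

per_module = defaultdict(lambda: {"minutes": 0, "passed": 0, "attempts": 0})
for e in events:
    stats = per_module[e["module"]]
    stats["minutes"] += e["minutes"]
    stats["attempts"] += 1
    stats["passed"] += e["passed"]

for module, s in per_module.items():
    avg_minutes = s["minutes"] / s["attempts"]
    pass_rate = s["passed"] / s["attempts"]
    flag = "  <- learners struggle here" if pass_rate < 0.5 else ""
    print(f"{module}: {avg_minutes:.0f} min on average, {pass_rate:.0%} pass rate{flag}")
```

Real adaptive-learning systems go much further, but even this simple aggregation shows how learner data can point to where teaching effort is best spent.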

 

However, at the same time AI will increase automation, raising the risk of job elimination, especially for less complex jobs. This is something that might scare specific groups of learners. I think offering some sort of ethical framework is a great idea. The immediate downside of any ethical framework is that it is built upon cultural norms. This means it is very difficult to construct an ethical framework that appeals to multiple societies, since what decreases anxiety for one group of people can raise it for other groups. Nevertheless, if we – as a society – consider well-being to be at the core of life, a code of ethics related to AI might decrease anxiety by offering predictions on the effect of AI on our overall well-being. Not every student is helped by ‘improving learning’, and certainly not by a loss of income due to job loss. A code of ethics might also include financial, emotional, and social factors that society wants to uphold, or even – ultimately – achieve.

 

The tech giants are already working to build safeguards into their AI technologies. Which bodies, institutions or associations should be involved in coming up with an ethical framework for the development of machine intelligence?

 

Both big and small companies are using algorithms to support their (software-based) applications. With the increasingly pervasive way in which AI interacts with our lives, ethics becomes more important. Looking only at the Internet of Things and its expansion into different segments of both our professional and personal lives, we must acknowledge the impact of those algorithms on various aspects of our lives, including education. An important term used frequently is ‘smart’: smart buildings, smart technologies… But as we have seen with the unprofessional/professional hairstyles, algorithms can produce unexpected outcomes. Therefore, I’m not sure how good an idea it is that Facebook responds to fake news by creating new algorithms (http://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-fake-news-feature-uk-election-2017-a7720506.html).
As algorithms affect all of us, this inevitably means that all layers of society should be involved in creating early indicators that address the effects of AI. This includes, for instance, civil society, e.g. Fairness, Accountability, and Transparency in Machine Learning (http://www.fatml.org/) or the Electronic Privacy Information Center (https://epic.org/privacy/consumer/), as well as governments and industry. All of these, and other organisations, can support policy standards that make the monitoring of the effects of AI possible.

 

How do you assess the chances of a valid and globally applicable code of ethics for AI?

 

Personally, I think implementing an ethical layer is possible, but it will take an enormous amount of interdisciplinary research (including the arts) and effort. We can see this in simple – though admittedly negative – examples such as internet censorship (e.g. location-based restriction of access to online resources).
But there are multiple difficulties to overcome in order to get to a global ethics code, if this is possible at all. The DeepMind ethics board gives an indication of how difficult it is to achieve transparency when it comes to ethics and AI: many AI companies have ethics boards, yet very little is shared (https://www.theguardian.com/technology/2017/jan/26/google-deepmind-ai-ethics-board).
So, I think it is possible to create an ethical layer, but it takes more than an industry-led initiative; it involves opening up AI to broader society and its options. If we look at management structures, or at institutions like the United Nations, we can see that adding a top layer that looks at ‘vision and future’ might be possible – at least as a theoretical idea. And while working on the implementation of this theoretical idea, debates can be organized to fine-tune the code.

 

What is your view on taking some decisions aimed at making the world more “moral” out of human hands? Can machines help humankind in this effort?

 

Morality is a personal compass for right and wrong; it is therefore an internal, individual process, unlike ethics, which can be expressed as a set of external rules. Looking at whether machines can help humankind to be more ethical… for the sake of argument, I would say yes.

 

This might expose me as a believer (in constructed, technology-based thinking) or a fatalist – if you consider the low degree of confidence I have in humanity. My belief is based on the ability to construct complex societal models out of complex, recalibrated algorithms which can estimate the outcome of certain innovations or actions and show their effect on, for example, the earth’s sustainable ecosystem or on mental health.

 

Do you think we also need to consider establishing guidelines for “rights” to be granted to AI?

 

Writing such a set of rules still seems far from possible, as even explainable AI (XAI) has not yet been realized. One of the first steps towards understanding why an AI comes up with a specific result is turning AI into XAI, so that it can explain its own functioning. So, I think the first hurdle on the way to a set of AI rights is, at present, to look for a code of ethics for the people coding the current algorithms, as chances are that these sets of rules will get written into the AI anyway.
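To illustrate what "explaining its own functioning" can mean in the very simplest case, here is a minimal, hypothetical sketch: a hand-written linear scoring model that reports how much each input feature contributed to its decision. Real XAI for complex models is far harder; the feature names, weights and threshold below are invented for the example.

```python
# Minimal sketch of a self-explaining (linear) model: alongside its prediction,
# it reports the contribution of each feature. All weights and features are invented.

WEIGHTS = {"hours_studied": 0.6, "prior_score": 0.3, "forum_posts": 0.1}
THRESHOLD = 5.0  # arbitrary decision boundary for "likely to pass"

def predict_with_explanation(features):
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    decision = "likely to pass" if score >= THRESHOLD else "at risk"
    return decision, score, contributions

decision, score, contributions = predict_with_explanation(
    {"hours_studied": 4, "prior_score": 6, "forum_posts": 2}
)

print(f"Decision: {decision} (score {score:.1f})")
for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name} contributed {value:+.1f}")
```

For a linear model the explanation falls out of the arithmetic; the open research problem is getting comparable, trustworthy explanations out of deep networks.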
This idea is of course a long-standing sci-fi challenge. I think the most well-known example is the three laws of robotics listed by Asimov in the process of providing ‘rights’ to AI. But the definitions in these three laws allow an array of interpretations. Essentially, based on these laws, it would be perfectly normal to put humanity in one specific location (a zoo), keep humanity sedated while offering augmented reality (think of the pod scene from The Matrix), and thus keep humanity safe from harm. It might prove to be a conundrum.

 

What are you looking forward to at the MidSummit conference?

 

I’m definitely looking forward to meeting the network of leaders involved in the online learning and training field. In this quickly evolving line of work, you can no longer wait until a reference book comes out or a set of best practices is established. In a way, we as professionals live in a constant beta world. Change is constant, rapid, and impactful. As a professional you need to be adaptive, which means you need to keep on top of what is happening, who is doing what, to what effect and why. This is why I’m looking forward to the MidSummit. I’m certain it will provide me with additional, sometimes conflicting ideas that will ignite new knowledge.
