Etienne Oosthuysen

Ethics in artificial intelligence (AI)

Updated: Feb 5, 2023

Thank you to Rory Tarnow-Mordi, who contributed and advised.



In this article we will explore some important concepts, including:


  • What AI actually is and why it's important.

  • The different types of AI systems.

  • Some examples of AI ethics issues already here, and some potential consequences of AI for society.

  • What are the cloud vendors doing?

  • What can you do?


But before we can decide what is wrong and what is right, we need to understand what AI actually is.


What is Artificial Intelligence (AI)?



I won't go into loads of detail describing deep learning and machine learning as subsets of AI. It's enough to understand that these layers are the mechanisms at our disposal when we create AI solutions, so the ethics discussed here must be considered whenever we create such solutions.



Why are AI and its ethics important?


The boom in data and the maturity of cloud computing (innovative technologies as well as raw computing power) are driving increasing adoption of AI. A PwC Global CEO Survey projects a potential contribution to the global economy from AI in excess of $15 trillion by 2030. A contribution to economic growth this significant means we need to be deliberate about how we think and talk about AI and data.


As AI becomes a larger part of our lives, increasingly it will replace functions that humans perform. This is true by definition: AI is designed to mimic human behaviour. However, while AI may replace human behaviours, should it replace the humanity in these behaviours? This is the question that we all must contend with. Do we inevitably lose nuance, fairness, empathy and understanding when we lose the human? This is where AI ethics comes in, and consideration and continued development of ethics in AI is how we ensure that our brave new world is not Brave New World.


AI Ethics is concerned with the principles around the development and use of AI that ensure that:

  • The use of AI improves the lives of individuals and societies

  • AI systems are fair and respect the dignity and autonomy of people throughout the systems' lifecycles

  • Decisions made and actions taken by or due to an AI system are reasonable, able to be understood and challenged, and have an accountable party

When we talk about AI ethics, it is worth understanding more about AI and its interface with ethics. Starting with AI, we often classify it into narrow and general intelligence:


Artificial Narrow Intelligence (ANI)


(Also called weak AI) is focused on performing a specific task and represents most of the AI that surrounds us today. ANI is not conscious, it has no sentience, and it is not driven by emotion. It operates within a pre-determined, pre-defined range of behaviours and cannot operate beyond those parameters. A recent example of ANI is the Azure Purview data catalog engine, which classifies data, but only according to the patterns and expressions available to it; for example, 16-character numeric fields are likely credit card numbers.

But ANI is not 'intelligent'. For example, the data catalog engine will never ponder the meaning of life (unless it's contained within a credit card number), and it certainly won't participate in the complicated realm of philosophy. But within the domain of identifying and classifying data types, Azure Purview appears intelligent. Likewise, asking a virtual assistant what the time in New York is right now will produce an accurate and maybe even conversational answer, because the virtual assistant is an ANI within the domain of simple conversation. There is an appearance of intelligence, but only because the parameters of the question are clear, calculable and unambiguous; we've all experienced how a chatbot behaves when we stray outside the conversations it is trained to have.
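To make the "pattern matching, not understanding" point concrete, here is a minimal sketch of how pattern-based classification of this kind might work. The regex-plus-Luhn-checksum approach below is an illustrative assumption, not Purview's actual implementation:

```python
import re

# Illustrative pattern: exactly 16 consecutive digits.
CARD_PATTERN = re.compile(r"^\d{16}$")

def luhn_checksum_ok(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum,
    a standard validity check for payment card numbers."""
    digits = [int(d) for d in reversed(number)]
    total = sum(digits[0::2])      # every other digit, from the right
    for d in digits[1::2]:         # the remaining digits are doubled
        d *= 2
        total += d - 9 if d > 9 else d
    return total % 10 == 0

def classify_field(value: str) -> str:
    """Classify a single field value using pattern-plus-checksum rules.
    The engine 'recognises' credit card numbers without understanding
    anything about credit, cards or numbers."""
    if CARD_PATTERN.match(value) and luhn_checksum_ok(value):
        return "possible credit card number"
    return "unclassified"

print(classify_field("4539578763621486"))  # Luhn-valid test number -> flagged
print(classify_field("1234567890123456"))  # 16 digits, fails checksum -> not flagged
```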


ANI can therefore only do what it is designed to do. For example, Purview can process my request, enter it into its own index catalog, and return the location of credit card numbers in my datasets. It is not truly intelligent, even though it can definitely be very sophisticated. VERY sophisticated, in fact: think of IBM's Deep Blue, which beat chess grandmaster Garry Kasparov in 1997, or Google DeepMind's AlphaZero, which can teach itself board games to a superhuman level in a matter of hours. Even so, ANI is restricted to the space of problems it is designed to produce solutions to.


Sophisticated ANI has made our lives easier by relieving us of mundane, repetitive, frustrating or unsafe tasks. Look at Spot, a complicated AI system (and a robot dog), which will be able to inspect powerlines for South Australia Power Networks.



But some would argue that even though ANI merely replaces the need for humans to perform repetitive tasks, it still poses a threat. ANI can power robots, like Spot, to replicate tasks on an assembly line or to inspect powerlines, but what does that mean for the livelihoods of the thousands of workers who used to perform those tasks? This creates an ethical dilemma.


Artificial General Intelligence (AGI)


(Also called strong AI, and closely associated with the idea of the singularity) is focused on performing, and adapting to, general intelligent tasks, much like a human can. As humans we can think abstractly, we can strategise, we can tap into our consciousness and memories when deciding on a course of action, and we can acquire new skills ourselves. AGI will require systems that can comprehend and learn in a general way (not restricted to board games, for example) when deciding on a course of action. This type of AI has not been realised yet. However, the race to AGI has led to many machine learning innovations that underpin current AI, including deep learning algorithms - https://fortune.com/2021/09/07/deepmind-agi-eye-on-ai/.


Still, these complex algorithms only know what they've been shown. For example, it takes thousands of labelled photos to train an image recognition model. So there is still a long way to go before we see real AGI.
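A tiny experiment makes this concrete. This sketch (assuming scikit-learn and its bundled digits dataset) trains a classifier only on the digits 0 to 4; when shown a 5 or a 9, it can only ever answer with a label it has already seen:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)

seen = y < 5                              # train only on digits 0-4
model = LogisticRegression(max_iter=2000)
model.fit(X[seen], y[seen])

# Ask the model about digits 5-9, which it has never been shown.
unseen_predictions = model.predict(X[~seen])
print(set(unseen_predictions))            # only labels 0-4 ever appear
```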



Others argue that AGI is not possible at all, as we don't even fully understand our own brains. Diego Klabjan, a professor at Northwestern University, for example states: “currently, computers can handle a little more than 10,000 words. So, a few million neurons. But human brains have billions of neurons that are connected in a very intriguing and complex way, and the current state-of-the-art [technology] is just straightforward connections following very easy patterns. So going from a few million neurons to billions of neurons with current hardware and software technologies — I don’t see that happening” - https://www.analyticsinsight.net/experts-foresee-future-ai-agi/


Some view AI through the prism of the threats it poses. Stephen Hawking warned that AI could wipe us out once it becomes too clever: “If AI itself begins designing better AI than human programmers, the result could be machines whose intelligence exceeds ours by more than ours exceeds that of snails.” And Elon Musk has stated that “AGI is humanity’s biggest existential threat”. Efforts to bring it about, he has said, are like “summoning the demon.”


Whether you believe AGI is possible or not, or even believe it's going to wipe us out, this article will not provide a running commentary on the validity of the opposing arguments. The focus here is rather on systems that mimic human behaviour, and AGI is not required for this. ANI can mimic human behaviour in a limited way, and that alone is enough to warrant considering the ethical implications. This is especially true for "AGI-like" systems such as virtual assistants, which are intended to appear generally intelligent but are in fact just extremely sophisticated combinations of multiple ANI systems working in synergy. These are the tools we call Siri, Google Assistant or Cortana, and they've entered our homes and talk to our children.


However, virtual assistants are not the only examples of current complex applications of AI systems.


Optimised, personalised healthcare treatment recommendations


Even though there are many opportunities for AI in healthcare - think of the scenario where AI can diagnose skin cancer more accurately, faster and more efficiently than a dermatologist can (https://www.nature.com/articles/nature21056) - a World Health Organisation (WHO) report raises concerns about AI in healthcare, including “unethical collection and use of health data, biases encoded in algorithms, and risks to patient safety, cybersecurity, and the environment”. The report also raises concerns regarding the subordination of the rights and interests of patients to the commercial interests of powerful companies - https://www.healthcareitnews.com/news/emea/who-warns-about-risks-ai-healthcare


Even though AI has advanced medical care in leaps and bounds, healthcare policy and ethical guidelines are lagging behind that progress. One major theme to be addressed is how to balance the benefits of AI against the risks of the technology; a second is how to interpret and deal with the legal conflicts that arise from the use of AI in health care (for example, can a machine be liable for medical malpractice?) - https://journalofethics.ama-assn.org/article/ethical-dimensions-using-artificial-intelligence-health-care/2019-02


Driverless delivery trucks, drones and autonomous cars


Another example involves scenarios where human judgement can lead us to act illegally, because that judgement is informed not only by what is legal and what is not: the decision is also wrapped in morals and sentiment, sprinkled with experience, and lathered with emotion.


Imagine a busy road with what appears to be a box lying in it. At this point the road has a solid no-overtaking line, and there is no oncoming traffic. To avoid the obstructing box we’d simply drift a little into the opposite lane and drive around it. But an automated car might come to a full stop, as it dutifully observes traffic laws that prohibit crossing a solid line. This unexpected move would avoid driving over the box, but would likely cause a crash with the human drivers behind it.


Pushing this example a little further, if the automated car were forced into a scenario where it has to make difficult decisions, say between colliding with different pedestrians, how should it resolve this? Do AI agents need to have an ethical (or even moral) framework built into their behaviour?
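To make the dilemma concrete, here is a deliberately naive, hypothetical rule-following planner for the box-in-the-road scenario above. Every rule and name is invented for illustration; real autonomous driving systems are vastly more complex:

```python
from dataclasses import dataclass

@dataclass
class Scene:
    obstacle_ahead: bool     # e.g. a cardboard box in our lane
    oncoming_traffic: bool   # anything approaching in the opposite lane?
    solid_centre_line: bool  # is overtaking legally prohibited here?

def rule_following_plan(scene: Scene) -> str:
    """Strict law compliance: never cross a solid line, for any reason."""
    if scene.obstacle_ahead:
        if not scene.solid_centre_line and not scene.oncoming_traffic:
            return "drift around obstacle"
        return "full stop"   # legal, but may surprise the drivers behind
    return "continue"

def human_like_plan(scene: Scene) -> str:
    """Human judgment: a brief technical infringement may be safer overall."""
    if scene.obstacle_ahead and not scene.oncoming_traffic:
        return "drift around obstacle"   # crossing the line, briefly
    return "full stop" if scene.obstacle_ahead else "continue"

scene = Scene(obstacle_ahead=True, oncoming_traffic=False, solid_centre_line=True)
print(rule_following_plan(scene), "vs", human_like_plan(scene))
# -> full stop vs drift around obstacle
```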

Algorithmic bias


AI is already being used in healthcare, but it's not yet hugely pervasive, and autonomous cars are still in their infancy. But data scientists all over the world have been creating machine learning algorithms for predictions and decision support for years. So when it comes to ethics, this is, in my humble opinion, the area in most urgent need of ethical consideration.

Algorithms are created and trained by humans, and as humans we are biased, knowingly or unknowingly, which may be reflected in the algorithms. Or rather, the data passed through the algorithms may itself contain bias due to the way in which it was collected, processed or stored.


Amazon, for example, scrapped a recruitment algorithm a few years back. It was trained on CVs submitted by applicants over a 10-year period, and because most of those applicants were men, the data itself contained bias. This was reflected in the results, which were in essence discriminatory towards women applicants: built on data accumulated mostly from men's CVs, the system did not rate candidates in a gender-neutral way - https://www.bbc.com/news/technology-45809919
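A small synthetic experiment shows how this kind of historical bias flows straight through training into predictions. The dataset and column names below are invented for illustration; this is not Amazon's system:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.choice(["male", "female"], size=n, p=[0.8, 0.2])  # skewed applicant pool
skill = rng.normal(0, 1, size=n)                               # ability equal across groups

# Historical hiring favoured men regardless of skill: the labels are biased.
hired = (skill + (gender == "male") * 1.0 + rng.normal(0, 0.5, n)) > 0.8

X = pd.get_dummies(pd.DataFrame({"skill": skill, "gender": gender}))
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

# The model faithfully reproduces the bias it was shown.
for g in ["male", "female"]:
    print(f"predicted hire rate, {g}: {pred[gender == g].mean():.0%}")
```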


And in 2020, Microsoft's AI news aggregator, MSN, came under fire for pairing a photo of Leigh-Anne Pinnock with an article about her Little Mix bandmate, Jade Thirlwall. Both women are mixed race. Twitter, similarly, had to remove its automatic image cropping algorithm because it appeared to favour certain races - https://www.bbc.com/news/technology-57192898.


Some research is starting to reveal issues with decision making that relies heavily on algorithms as these algorithms often replicate, or in some cases even amplify, human biases, particularly those affecting protected groups.


These examples (and there are many more), the explosion of data, and the increasing adoption of machine learning and AI show why it is important to start thinking about frameworks for ethics in your AI sooner rather than later. But let's first look at what the main vendors are doing in this space: Microsoft, Google, Amazon and IBM.

What are the main cloud data and AI platform vendors doing about ethics in AI?


A few years back IBM launched a tool that can scan for signs of bias and recommend adjustments to the data or algorithm. The open source nature of the tool, and the ability to see via a visual dashboard how algorithms are making decisions and which factors are used in making the final recommendations, also means that transparency is improved.


Similarly, Google launched the What-If Tool to analyse ML models, letting you manually edit examples from a dataset and see the effect of those changes. It can detect misclassifications, unfairness in binary classification models, and differences in model performance across subgroups.


Microsoft, too, launched the Fairlearn SDK, which can detect and alert people to AI algorithms that may be treating them differently based on race, gender or other attributes.
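As a sketch of the kind of check Fairlearn enables (assuming the open source fairlearn package and its MetricFrame API), comparing selection rates across a sensitive feature looks something like this:

```python
from fairlearn.metrics import MetricFrame, selection_rate

# Toy predictions from some hiring model, with gender as the sensitive feature.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
gender = ["f", "f", "m", "f", "m", "m", "m", "f"]

mf = MetricFrame(metrics=selection_rate,
                 y_true=y_true, y_pred=y_pred,
                 sensitive_features=gender)
print(mf.by_group)      # selection rate per group
print(mf.difference())  # gap between the best- and worst-treated groups
```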


But beyond tools, what about supporting frameworks?


Microsoft (Azure) and Google Cloud Platform (GCP) are tackling this explicitly through frameworks and principles:


Microsoft, for example, state that they are “committed to the advancement of AI driven by ethical principles that put people first” and have created principles and toolkits to help with the ethical implementation of AI - https://www.microsoft.com/en-us/ai/responsible-ai?activetab=pivot1%3aprimaryr6

Google state that “AI is transforming industries and solving important, real-world challenges at scale. This vast opportunity carries with it a deep responsibility to build AI that works for everyone”, and publish their own principles and resources for responsible AI - https://cloud.google.com/responsible-ai


While its rivals Microsoft and Google have published frameworks for ethical AI that guide their work, AWS instead puts the onus on its customers to use the technologies appropriately: “AWS offers some best practice advice relating to its customers' use of data, but has stopped short of laying out its own guiding principles. It is up to clients to decide whether their use of AWS tools is ethical, said the company's head of solution architecture in ANZ, Dr Peter Stanski.” - https://www2.computerworld.com.au/article/661203/aws-ethical-about-ai-we-just-don-t-talk-about-it-say-apac-execs/. But they are not complacent by any means: in 2020 they announced a moratorium on police use of their facial recognition software until there are stronger regulations to govern the ethical use of face recognition technology - https://www.aboutamazon.com/news/policy-news-views/we-are-implementing-a-one-year-moratorium-on-police-use-of-rekognition


So what should you do?


Depending on your AI platform of choice, there may be tools and technologies in place to help you analyse your models, highlighting issues you may not be aware of. You must also develop and embed your own official policy and operating framework regarding the ethical use and creation of AI, and there may be toolkits and other resources you can use to inform such a framework (i.e. leverage the legwork and research already done by others as a starting point - for example, the AI Ethics Framework and principles developed by the Australian Department of Industry, Science, Energy and Resources). Such an operating framework must be embedded into your organisation and its data workers (people).


Operational framework - create a framework that changes the paradigm from simply creating great AI solutions towards creating great AI solutions in which ethics is a core consideration for their creation and management:

  • Determine the challenges that may be unique to your organisation.

  • Create an official policy and commitment towards ethics in AI.

  • Develop procedures on how you will manage the initiation, creation, testing and ongoing management of AI solutions - including those provided by technology.

People - ensure your AI-related data workers are familiar with the ethical challenges of AI broadly, as well as those that could be unique to your organisation. Ensure your operational framework is embedded into your organisation via the appropriate change management.

  • Make research available for all to consume.

  • Ensure your data workers are across the approaches and tools available on your platform of choice.

  • Embed your operational framework into your data workers' workloads (similar to how you would embed data security policies and regimes into your organisation).

  • Embrace collaboration from your data workers and use their experiences, as well as new information in the broader data domain to review and refresh your operational framework on a regular basis.

Technology - use the technologies at your disposal to augment your operational framework.

  • Use technology to highlight issues with AI and your models (for example the Google What-If Tool or the Microsoft Fairlearn SDK).

  • Understand and deploy the appropriate toolkits to augment technology and processes.

Data - ethically collect, store and use data:

  • Collect and store data with the consent of the party from which the data is being collected.

  • Consider analysing data during collection to eliminate potential sources of bias (see the sketch after this list).

  • Use good data governance, ensuring appropriate controls exist for confidentiality, privacy, security and quality.
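As a minimal sketch of such a pre-training data audit (the file and column names here are hypothetical), you might profile group representation and historical outcomes before any model is trained:

```python
import pandas as pd

df = pd.read_csv("applicants.csv")  # hypothetical collected dataset

# Representation: is any group heavily under-sampled?
print(df["gender"].value_counts(normalize=True))

# Label balance: do historical outcomes already differ by group?
print(df.groupby("gender")["hired"].mean())
```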


Conclusion


AI will surely become more present in all of our lives as we approach the possible advent of AGI, but even as the technical capability to deliver AGI is developed, we all have a duty to consider the ethical impacts that AI has. In particular, we need to consider the ethical aspects of creating and using AI, and the ongoing ethical ramifications of biased AI. To serve this, the tools and frameworks we develop and use must include ethics.


In many ways the ethical aspects of AI are more fiendish problems than the purely technical aspects of AI. The developments in this space from Microsoft, Google, Amazon, IBM and others are an important investment in a fair, humane future, but the onus is on us all to develop operating frameworks that ensure AI benefits us all.



Disclaimer

The views expressed on this post are mine and do not necessarily reflect the views of any organisation I am associated with.
