It was striking to hear the record-breaking announcement the other day of the new Stephen A. Schwarzman Centre for the Humanities in our local city of Oxford, UK. Made possible by a landmark £150 million gift from the philanthropist and businessman, the centre, when it opens in 2022, aims to explore “the essential role of the humanities in helping society confront and answer fundamental questions of the 21st century.”

It’s inspiring that it will bring together a range of disciplines to co-exist and collaborate in the new space, including English, History, Linguistics, Medieval and Modern Languages, Music, Philosophy and Theology, while also providing new concert and exhibition areas where the public can engage with them.

But what I found most interesting about the announcement was the note that the centre will also be home to a new Institute for Ethics in AI, which will lead the study of the ethical implications of artificial intelligence and other new computing technologies.

Combining Artificial Intelligence with a truly multi-disciplinary understanding of its impact on language, ethics and philosophy has the potential to deliver something really fascinating. In the words of Sir Tim Berners-Lee (inventor of the World Wide Web):

It is essential that philosophy and ethics engages with those disciplines developing and using AI. If AI is to benefit humanity we must understand its moral and ethical implications. Oxford with its rich history in humanities and philosophy is ideally placed to do this.

Ethics and big tech

As recent commentary demonstrates, ethics is becoming an area of real and persistent concern to big tech firms. Companies that are heavily investing in AI research take vastly different stances on ethics. Google, for example, has stated that it won’t sell its facial recognition technologies to governments, following widespread outcry amongst its engineers over its participation in Project Maven, a programme to improve how drones recognise and select their targets. By contrast, both Amazon and Microsoft currently continue to work with the US Government on AI-enabled technology.

Take, for example, Microsoft’s contract win at the start of 2019 with the US Department of Defense (DoD) to supply services to the value of $1.76bn over five years. The win comes at a time when the DoD is also assessing proposals for its $10bn, 10-year cloud contract, known as JEDI or Joint Enterprise Defense Infrastructure, which is rumoured to be heading to Amazon. As Microsoft wrote in its blog last year:

We believe in the strong defense of the United States and we want the people who defend it to have access to the nation’s best technology, including from Microsoft.

Contrast Microsoft’s position with Google’s statement to the Pentagon that it would refuse to provide artificial intelligence products that could build more accurate drones or compete with China on next-generation weapons. As the New York Times wrote last year:

The divergent paths underscore concerns inside the American defense and intelligence establishments about how the United States will take on a rising China. In the past two years, the Chinese government has set goals for dominance in the next decade in artificial intelligence, quantum computing and other technologies that it believes will allow its military and intelligence agencies to surpass those of the United States.

So, in this context, companies with a stated interest and ongoing development programmes in Artificial Intelligence are aligned with the US Government in strikingly different ways to develop technologies that might power the future of warfare. What should the role of tech companies be in furthering such capability?

AI and bias

As various commentators have written, the ethical debate around AI tends to come down to bias. The people developing the machine learning, data analytics and algorithms that drive AI do not represent all of us, and therefore cannot consider all of our unique needs and wants. If it is scientists and engineers who are furthering AI, how can they be expected to incorporate all of our ethnic, cultural, gender, age, geographic or economic diversity?
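To make that concrete, here is a minimal, purely illustrative sketch in Python (using scikit-learn and synthetic data invented for the purpose; the “groups” are hypothetical, not real demographics) of how a group that is under-represented in training data can quietly end up with worse model performance than the majority group:

```python
# Illustrative sketch only: synthetic data, hypothetical groups.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two-feature synthetic data; the label rule differs slightly per group,
    # so one global model cannot fit both groups equally well.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] + shift > 0).astype(int)
    return X, y

# Group A dominates the training set; group B is under-represented.
X_a, y_a = make_group(5000, shift=0.0)
X_b, y_b = make_group(200, shift=1.5)

model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_a, X_b]), np.concatenate([y_a, y_b])
)

# Evaluate on fresh samples from each group separately.
X_a_test, y_a_test = make_group(1000, shift=0.0)
X_b_test, y_b_test = make_group(1000, shift=1.5)
print("accuracy, group A:", accuracy_score(y_a_test, model.predict(X_a_test)))
print("accuracy, group B:", accuracy_score(y_b_test, model.predict(X_b_test)))
```

In this toy setup the single model serves the majority group well and mis-serves the minority one, and nothing in the headline accuracy figure would tell you so unless you thought to measure the groups separately. That silent skew is exactly what the bias debate is about.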

There is a risk that we become further dependent on systems that don’t represent us and therefore make different ethical choices than we would. The oft-cited example of this is the decision an autonomous vehicle would have to take when choosing the lesser of two evils: colliding with five pedestrians, say, or fatally harming its own passengers. As an aside, take a look at this moral test from MIT to find out your own biases.

Various technology companies have started to self-regulate their approach to the ethics of AI. For example, DeepMind’s Ethics & Society division works in partnership with academia and other research bodies to develop papers on topics of concern.

Google’s Advanced Technology External Advisory Council (ATEAC), dedicated to “the responsible development of AI”, was shut down less than a week after its launch, once more than 2,000 Google workers had signed a petition criticising the inclusion of a rightwing thinktank leader. Microsoft, Google, IBM and others have also been busy developing and publishing their own lists of AI ethics principles.

But the challenge remains that the ways in which technology companies approach ethical issues vary according to their individual slants and interests, and often lack transparency or true external governance. As AI Now (a non-profit) writes:

Ethical approaches in industry implicitly ask that the public simply take corporations at their word when they say they will guide their conduct in ethical ways. This does not allow insight into decision making, or the power to reverse or guide such a decision.

So if there is a chance that the establishment of a new, independent centre of study at Oxford can help to move the dial on such complex ethical and societal issues, that can only be a good thing, although it is a vast and challenging topic whose importance is only now starting to emerge.