New challenges for legislators and regulators around AI and information technology

 

Why AI raises particular challenges

 

1. Speed of change and unforeseen consequences - it's only 11 years since the first Apple iPhone was launched, and nobody foresaw how smartphones would change our world and our behaviour. An infamous Silicon Valley slogan is "Move fast and break things"; the naïve assumption behind it is that disruption is always positive.

2. The emerging technologies intersect and interact, leading to new and unforeseen capabilities and dangers:

  • increasing computing power

  • global mobile high-bandwidth internet connection

  • 'big data' - massive databases and computer power situated in the cloud

  • concentration of the best minds on developing new technology and algorithms

  • new technology enabling 'the internet of things' and invasive surveillance of everyday life

  • success of the advertising and information mining models for disseminating technology - 'if you are not paying, then you are the product'

  • AI can increasingly be used to defeat the anonymization of big data - individuals can be 're-identified' by linking datasets together (see the sketch after this list)
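
To illustrate the last point, the short Python sketch below shows a classic 'linkage attack': quasi-identifiers (postcode, birth date, gender) in a supposedly anonymized health dataset are matched against a public register that carries names. The datasets, names and field names are entirely hypothetical; modern AI systems extend the same idea by matching on far noisier signals such as location traces or writing style.

# A minimal sketch (plain Python, hypothetical data) of a linkage attack:
# 'anonymized' records are re-identified by joining their quasi-identifiers
# against a public register that carries names.

anonymized_health_records = [
    {"postcode": "N1 9GU", "birth_date": "1954-07-31", "gender": "F", "diagnosis": "diabetes"},
    {"postcode": "SW1A 1AA", "birth_date": "1948-06-14", "gender": "M", "diagnosis": "depression"},
]

public_register = [  # e.g. an electoral roll or marketing database
    {"name": "Jane Example", "postcode": "N1 9GU", "birth_date": "1954-07-31", "gender": "F"},
    {"name": "John Sample", "postcode": "SW1A 1AA", "birth_date": "1948-06-14", "gender": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_date", "gender")

def re_identify(records, register):
    """Return (name, record) pairs wherever the quasi-identifiers match uniquely."""
    matches = []
    for record in records:
        key = tuple(record[q] for q in QUASI_IDENTIFIERS)
        candidates = [p for p in register
                      if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]
        if len(candidates) == 1:  # a unique match defeats the 'anonymization'
            matches.append((candidates[0]["name"], record))
    return matches

for name, record in re_identify(anonymized_health_records, public_register):
    print(f"{name} -> {record['diagnosis']}")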

3. There is a concentration of enormous power in a small number of commercial companies and shareholders. Commercial companies are buying up the best talent in AI around the world by offering 'football player' salaries, leading to a dangerous monopoly of expertise. This is the operation of old-fashioned laissez-faire capitalism: the technology is very new, but the power dynamics are very old and very well-defended.

4. Surveillance - individual human beings can now be continuously monitored, processed and analysed in an extraordinarily invasive way. A new low-cost AI chip that can be incorporated into cameras is capable of continuously assessing and measuring 10 variables: face detection, face recognition, gender estimation, age estimation, expression estimation, facial pose estimation, gaze estimation, blink estimation, hand detection and human body detection. The software is able to quantify in detail the emotional state of every individual in real time. Imagine if miniature cameras in every street, behind every advertising billboard and in every home were fitted with these chips, communicating with databases in the cloud.
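
As a purely hypothetical illustration of what such continuous monitoring could produce, the Python sketch below models the kind of per-face record a camera chip of this sort might stream to a cloud database many times per second. The field names are illustrative assumptions and do not correspond to any vendor's actual interface.

# A hypothetical sketch of a per-face observation record. Field names are
# illustrative only; they mirror the ten detection variables listed above.

from dataclasses import dataclass

@dataclass
class FaceObservation:
    camera_id: str          # which street / billboard / home camera
    timestamp: float        # seconds since epoch
    face_id: str            # identity assigned by face recognition, if matched
    estimated_age: int
    estimated_gender: str
    expression: dict        # e.g. {"happiness": 0.7, "anger": 0.1, ...}
    gaze_direction: tuple   # where the person is looking (degrees)
    head_pose: tuple        # facial pose as (yaw, pitch, roll)
    blink_degree: float     # eye-openness estimate
    body_detected: bool     # hand / body detection flags collapsed for brevity

observation = FaceObservation(
    camera_id="billboard-0042",
    timestamp=1550000000.0,
    face_id="person-83125",
    estimated_age=34,
    estimated_gender="F",
    expression={"happiness": 0.72, "surprise": 0.05, "anger": 0.02},
    gaze_direction=(12.0, -3.0),
    head_pose=(5.0, -2.0, 0.5),
    blink_degree=0.1,
    body_detected=True,
)
# Aggregated across millions of cameras, records like this would allow an
# individual's movements and emotional state to be tracked continuously.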

5. The technology raises genuinely new ethical, moral, personal and theological questions. The ability of machines to simulate most, if not all, aspects of human behaviour will become astonishingly effective, including the simulation of friendship, compassion, love, wisdom, creativity, humour and so on. If you cannot tell whether you are interacting with a real human or a machine, does it matter? The power of simulation carries great potential for good in many areas, but it also carries obvious potential for manipulation, deception, coercion and abuse.

What specific insights and concerns do Christians bring to the debate?

 

A. An understanding of the human person as made in the image of God. A mechanistic understanding of human beings is dominant amongst tech leaders: 'The brain is a computer made out of meat'*. This mechanised understanding is linked to ever more effective surveillance and monitoring of human behaviour and to the promotion of 'simulated personhood' and simulated relationships. The Christian understanding of the unique and profound value and significance of each human person, together with the importance of individual freedom and protection from manipulation, is an essential counterbalance to the increasing mechanisation of humanity and the promotion of simulation.

B. An understanding of the reality of evil - that human beings are fallen and we live in a fallen world. There is therefore an almost unlimited potential for the abuse of good things, including powerful technology, by fallen human beings. Datasets used for training AI systems enshrine human prejudices and biases, which lead to discrimination, injustice and evil consequences. Zuckerberg naively thought that connecting human beings around the planet would be a good thing; yet despite many good outcomes, Facebook has also directly contributed to an astonishing outpouring of unexpected evil around the world. Christians understand the need to predict and restrain evil by the force of law.

 

C. A concern for the protection and defence of the most marginalised and vulnerable in society. Power is becoming concentrated in a very small number of hands. Christians have an understanding of the need to protect the most vulnerable in society - children, the elderly, people with mental health problems or dementia, and so on. We see the need to restrain the concentration of power and, where necessary, to break up monopolies by regulation and state intervention.

 

D. A vision of what human flourishing and healthy relationships might look like in the future. Do we wish to live in a science-fiction utopia where beneficent machines instantly take care of every aspect of our needs and wishes, and we are free to live lives of idleness and pleasure? Christians understand the importance of a sense of purpose and meaning, and the intrinsic value of serving others, building flourishing communities and relationships, and serving the common good.

*Marvin Minsky

 

Four models of AI/robotics industry and regulation

United States model

  • Driven by 'laissez-faire' capitalism and maximization of shareholder value

  • Adoption of an advertising model to offer services to consumers for free

  • Entrepreneurial - 'Move fast and break things'

  • Libertarian

  • Opposed to restrictive top-down government regulation

  • Overcomes public fears of a dystopian future through sophisticated marketing and 'free' access to technology

 
 

Chinese model

  • Driven and financially supported by state-controlled industries with political oversight

  • Focus on meeting citizens' needs whilst ensuring surveillance, application of social reward systems and behaviour modification for social control

  • Regulation is controlled by the state and subservient to political goals and social control

 
 

Japanese model

  • Financially underpinned by government with strong political support.

  • Focussed on social robotics and support of ageing population.

  • Positive social attitudes towards companion robots.

  • Government regulation focussed on safety and social cohesion rather than individual rights.

 
 

European model

  • Strong emphasis on innovation and entrepreneurial start-ups, but much more direct government investment in training, education and data ethics.

  • Societal concerns about adverse effects and unintended consequences of new technology, including privacy concerns, abuse of personal data, adverse effects on vulnerable individuals and cybercrime.

  • Positive attitudes towards the development of new national and international regulations to protect vulnerable individuals and enhance the common good.

 

Do we need new concepts and new human rights to protect our inner mental lives?

'Towards new human rights in the age of neuroscience and neurotechnology'
Ienca and Andorno, Life Sciences, Society and Policy, 2017, 13:5

The right to cognitive liberty - we are free to think whatever we like.

The right to mental privacy - no-one has the right to know what we are thinking unless we agree.

The right to mental integrity - no-one has the right to directly change or manipulate our thoughts without our agreement.

The right to psychological continuity - no-one has the right to manipulate our personality using technology.


Possible regulatory approaches

  1. Enforce greater transparency for consumers when AI systems are operating - e.g. dynamic pricing systems, AI screening of job applications, use of credit scores, feedback on why individuals have been rejected by AI systems, targeted advertising etc. (Article 22 of the GDPR provides safeguards around automated decision-making, often described as a 'right to explanation'; see also the UK Data Protection Act 2018.) A sketch of what such machine-readable feedback might look like follows this list.

  2. Enforce the 'intelligibility' of AI systems in critical areas - healthcare, legal, government, military etc

  3. Enforce transparency when human/machine confusion is possible. "I have to tell you that I'm only a machine and not a human person …" (similar to warnings about CCTV or audio recordings)

  4. Enforce transparency about how much commercial value an individual's data has to Google, Facebook and others, and enforce the option of paid-for, entirely confidential services and products of equal quality to the free, data-mined alternatives.

  5. Regulate the availability and use of sophisticated companion and sex robots. For example, should there be a law against the enactment of abuse, rape or torture on a highly realistic child robot?

  6. Anticipate massive disruption in established employment structures and promote new ways to protect individuals and communities. The connection between 'paid employment' and 'meaningful work' may be increasingly severed. How can 'community service' and 'pro-bono work' be promoted and recognized?
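
The Python sketch below (referred to in item 1 above) illustrates the kind of machine-readable explanation a regulator might require an automated screening system to return alongside every adverse decision. The structure, field names and reason codes are hypothetical assumptions, not drawn from any existing standard.

# A minimal sketch of an explanation record attached to an automated decision.
# Field names and reason codes are hypothetical.

from dataclasses import dataclass
from typing import List

@dataclass
class DecisionExplanation:
    decision: str               # e.g. "rejected"
    automated: bool             # whether any human was involved in the decision
    main_factors: List[str]     # plain-language reasons, most important first
    data_sources: List[str]     # where the input data came from
    appeal_contact: str         # how to request human review (GDPR Article 22)

explanation = DecisionExplanation(
    decision="rejected",
    automated=True,
    main_factors=[
        "credit score below threshold",
        "length of employment under 12 months",
    ],
    data_sources=["application form", "credit reference agency"],
    appeal_contact="reviews@example-lender.com",
)
print(explanation)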

References

AI in the UK: ready, willing and able? House of Lords Report, published April 2018

Work and Automation, Jeremy Kidwell, Theos/Bible Society briefing paper.


© John Wyatt February 2019

profjwyatt@gmail.com

johnwyatt.com