MITIGATING THE UNINTENDED CONSEQUENCES OF TECHNOLOGY

· DRONES, INNOVATION, AUTONOMOUS, DEEP LEARNING, MACHINE LEARNING

You may have seen the fictitious video trending on Facebook of a sea of swarming drones entirely capable of effective urban warfare. Like any mobile phone, each drone carries cameras and sensors, and like the social media apps on your phone, it performs facial recognition. The only problem is that it is loaded with three grams of shaped explosive. Its stochastic motion is an anti-sniper feature. When it detonates against the skull, the blast is enough to kill. The idea that a swarm of these could envelop a small city is beyond horrifying. These autonomous weapons are small, fast, accurate, and unstoppable, and they are just the beginning. The cost? For only $20M, a swarm could take out a small city. The video, produced by the Future of Life Institute, brings this hypothetical scenario frighteningly to life.

It is, in fact, a call to action to AI and robotics researchers all over the world. UC Berkeley Professor Stuart Russell issued this call to action, encouraging all AI researchers to sign an open letter against the use of autonomous weapons. The open letter was announced on July 28, 2015, at the opening of the IJCAI 2015 conference, and it has since been signed by leading AI and robotics professors from research institutions all over the world.

"Smart swarms. Adaptive adversarial learning. On-the-field robust target prioritisation. AI-powered tactics and strategy a Go master would envy. This is the future of " peacekeeping". 

Every day, my colleagues and I work on the research and development of design tools used in the field of artificial intelligence and robotics. One of our tools compresses neural networks by up to 40x so that machine learning models can fit on the smallest possible piece of hardware while delivering higher performance, consuming less energy, and costing less to manufacture (a generic illustration of the idea follows below). What is to stop a designer from using the tools we develop for such purposes?
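To give a non-specialist a feel for what this kind of compression involves, here is a minimal, generic sketch using post-training dynamic quantization in PyTorch. This is an illustration only, not our tool: the network below is a hypothetical toy model, and quantization is just one of several techniques such tools combine.

```python
# Illustrative sketch only: generic post-training dynamic quantization
# in PyTorch. The toy network below is hypothetical, not a real product.
import os
import torch
import torch.nn as nn

# A small network of the kind that might run on constrained edge hardware.
model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

# Convert the Linear layers' weights from 32-bit floats to 8-bit integers.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_on_disk(m: nn.Module) -> int:
    """Serialize the model's weights and report their size in bytes."""
    torch.save(m.state_dict(), "tmp.pt")
    size = os.path.getsize("tmp.pt")
    os.remove("tmp.pt")
    return size

print(f"float32 model: {size_on_disk(model):,} bytes")
print(f"int8 model:    {size_on_disk(quantized):,} bytes")
```

Quantization alone yields roughly a 4x reduction in weight storage; compression ratios in the tens, like the 40x figure above, are typically achieved by combining quantization with pruning, weight sharing, and similar techniques.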

At Inspirit IoT, our senior leadership team and key members of our R&D team have also signed this open letter calling for a ban on autonomous weapons.

We also have a critical-thinking commitment to the L.E.A.D.S. principle (Mike Love, 2015). Here we ask ourselves:

  • Is it Legal (both by the letter and the spirit of the law)?
  • Is it Ethical (both locally and against international standards)?
  • Is it Acceptable (by those stakeholders who matter most)?
  • Is it Defendable (if it is published in the public domain)?
  • Is it Sensible (even if it passes the criteria of L.E.A.D.)?

We then apply our globally responsible leadership questions and ask ourselves:

  • Does this decision change who we are as a company?
  • Does this decision change who I am as a person?

As globally responsible leaders committed to developing frontier technologies that make a valuable contribution to society, we have both a fiduciary duty to our investors and an ethical mandate to consider the environmental, social, and governance factors behind our economic performance.

We are also committed to deploying basic critical-thinking skills and regularly ask ourselves the 7 So What's. This works to unveil not only the intended consequences of our technologies, but also the unintended ones (usually discovered only after the 5th So What).

There has already been much-publicised drama over the past year with Facebook and the Cambridge Analytica scandal, and only last week Google contractors and employees, in a public memo, challenged the ethical implications of Project Dragonfly for its contribution to “aiding the powerful in oppressing the vulnerable.” IBM Watson, Tesla, and others have also been caught in the crossfire.

AI Now, my alma mater NYU's interdisciplinary research institute dedicated to understanding the social implications of AI technologies, has just published its 2018 report, which examines “the accountability gap” as artificial intelligence is integrated “across core social domains.”

The report cites a number of high-profile scandals involving the world's leading tech platforms and their AI and algorithmic systems, and these are just the beginning.

AI Now has proposed ten recommendations, including calls for:

  1. Enhanced government regulation and oversight of AI technologies;
  2. New regulation of facial recognition and affect recognition to protect the public interest (NB: Microsoft president Brad Smith has also advocated for this);
  3. New approaches to governance to ensure accountability for AI systems; and
  4. Enhanced whistleblower protections that include conscientious objectors and employee organising.

In 2018, numerous ethical principles and guidelines for the creation and deployment of AI technologies have been proposed, many in response to growing concerns about AI’s social implications. But as AI Now rightly points out, "Ethical codes can only help close the AI accountability gap if they are truly built into the software development practices and processes of AI development and are backed by enforcement, oversight, or consequences for deviation."

How do we, as inventors and innovators, protect against the unintended consequences of our technological inventions?

In a distributed reputation economy, it is not only about what we do, i.e. the products and services we sell; it is also about who we are as a company and how we operate. Leaders forget that we hold not only a legal and regulatory license to operate, but also a social license from our stakeholders to grow and innovate. When we lose our social license to operate, the result can be consumer boycotts, parliamentary enquiries, mass protests, and NGO activist action. The torrent of negative media coverage only serves to further accelerate the damage to the corporate brand and related employee brands and reputations.

In summary, we live in an interconnected world. No one today can operate in a silo. As globally responsible leaders, with a commitment to our fiduciary obligations, we can understand both the intended and unintended consequences of our actions by:

  1. Taking responsible collective global action by making sure we are solving the root cause of the problem and not just the symptoms (the 5 Why's);
  2. Remembering our L.E.A.D.S. and our two globally responsible leadership questions; and
  3. Asking ourselves the 7 So What's (noting that the unintended consequences may only emerge after the 5th So What).

--

I appreciate that you are reading my post. Here, at LinkedIn, I write about board-related issues: corporate strategy, human capital, reputation risk, technology, corporate governance, and risk management trends.

If you enjoyed reading this post, please click the thumbs up icon above and let me know.

If you would like to read my regular posts, then please click 'Follow' (at the top of the page). If we have met, do send me a LinkedIn invite. And, of course, feel free to also connect via Twitter and Facebook.

About Leesa Soulodre

Leesa Soulodre is the General Partner of R3i Ventures, an Adjunct Faculty Member of Singapore Management University, and an Expert for the European Research Agency on SME ICT Disruption. She is a Board member of RunwayHQ and Rubens Technologies, and a Board Advisor to a portfolio of the world's leading companies.

As a serial en/intrapreneur, Leesa has worked for more than 20 years on the cutting edge of strategy and technology. She has advised more than 400 multinationals and their start-ups across 19 sectors in Europe, Asia Pacific, and the Americas, and has led companies with turnovers from US$4M to US$14B into new markets. She has shared the exhilaration of one IPO, numerous exits, and the hard knocks of lessons learned.