"Never before in history have such a small number of designers – a handful of young, mostly male engineers, living in the Bay Area of California, working at a handful of tech companies – had such a large influence on two billion people's thoughts and choices."
Those are the words of Tristan Harris, former design ethicist at Google and founder of Time Well Spent, a not-for-profit initiative to help educate businesses, users, and designers about morally acceptable technology design choices.
Tristan is spot on. However, he and the entire high-tech industry may not be going far enough, or fast enough.
Take, for instance, Google, Tristan's former employer. Recently, at their annual I/O conference, Google CEO Sundar Pichai demonstrated the company's "Duplex" technology nested within the Google Assistant project. Google describes it as "a new technology for conducting natural conversations to carry out 'real world' tasks over the phone."
In the demonstration, Pichai asked Assistant to book a haircut appointment and, in a second exchange, a restaurant reservation. In both cases, Google Assistant acted as a human might, delivering "umms" and "ahhs" in its speech while carrying on a conversation that sounded as natural as one between two people. Neither human on the other end of the phone had any idea they were interacting with a machine.
It was both compelling and terrifying.
Compelling in that artificial intelligence has progressed to the point where most of us are utterly astonished such a conversation can now play out between man and machine. Although it was just a demo, Google indicated it plans to begin testing Duplex within Google Assistant this summer.
Duplex is equally terrifying because, well, there are plenty of reasons to be terrified.
There is one issue in particular, however, that organizations need to start doing something about: the need for a Chief Ethics Officer role and, more broadly, an in-house ethics office.
When you search (yes, in Google) for "Google Chief Ethics Officer," the first few results highlight Andy Hinton, Google's "Vice-President and Chief Compliance Officer." Most companies have such a role. However, there are rarely any Chief Ethics Officers. Why?
Microsoft is also in on the act. It recently announced that all developers at the company would become "AI developers." There is at least some caution in the wind. Satya Nadella, the Microsoft CEO, said, "These [AI] advancements create incredible developer opportunity and come with a responsibility to ensure the technology we build is trusted and benefits all."
Whether it is Google, Microsoft, or any other high-tech company in Silicon Valley or elsewhere, it is time they created a separate role and office (outside of compliance, regulatory affairs, and the lawyers) to make ethical recommendations on whether a particular technology ought to come to market.
We require teams of differing minds debating the pros and cons of whether a technology is good for society. If Silicon Valley has turned itself into one massive case study of groupthink, swimming in sinkholes of cognitive biases, who is standing up for those of us in society who may not want such advancements? Who becomes the judge of society's ethics?
There is an example to look up to in these confusing times. The medical community.
As Patrick Lin, Associate Philosophy Professor and Director of the Ethics and Emerging Sciences Group at California Polytechnic State University, and Evan Selinger, Associate Professor of Philosophy at Rochester Institute of Technology, wrote in Forbes four years ago, the medical community has been at the forefront of ethics for years. They write:
"In-house ethics committees have been a mainstay in medicine for the last 30 years when a US Presidential commission recommended it in 1983. Those committees are composed of lawyers too, but also doctors, nurses, bioethicists, theologians, and philosophers – a much more capable approach than mere risk-avoidance to tackle controversial procedures, such as ending life support and amputating healthy limbs."
In Canada, the Canadian Medical Association first produced its Code of Ethics in 1868, and the Code is considered the Association's most important document. It is updated every five years, with input from a wide range of representatives, and focuses on areas that include "decision-making, consent, privacy, confidentiality, research and physician responsibilities."
It is from the medical community that the high-tech community may learn its greatest lesson.
Create a Chief Ethics Officer role and an in-house ethics team made up not only of lawyers but also of educators, philosophers, doctors, psychologists, sociologists, and artists.
Furthermore, as universities such as Carnegie Mellon University begin introducing undergraduate degrees in artificial intelligence, ensure those programs have a strong ethics component throughout the entire curriculum.
Only then, when ethics sits outside of the compliance department and is woven into academic pedagogy, will society be in a better place to stem the tide of potentially unwanted technological advances.
I am all for technological advancement. I have even started to use Siri on occasion. But when I visit my doctor, I trust that the ethics of her decision-making and use of technology have already been vetted by a mixed group of professionals weighing the pros and cons.
Now more than ever, our technology companies (and faculties) need to employ the same type of thinking.