Recognition of the importance of ethics in tech, and of the need for new approaches to steer innovation in the right direction, has been rapidly gathering steam, and not just among regulators.
For example, last year saw the formation of the Center for Humane Technology, a coalition of ‘deeply concerned tech insiders’ aiming to redirect the course of technology away from extracting our finite attention and towards better alignment with humanity. And in April, the European Union released its guidelines for achieving “trustworthy” artificial intelligence (AI), a milestone in putting ethical guardrails around the development of technology.
Approaching ethics in tech
In March, Stanford University, the birthplace of the term ‘artificial intelligence,’ launched the Stanford Institute for Human-Centered Artificial Intelligence (HAI), a sprawling think tank whose mission is to “advance AI research, education, policy, and practice to improve the human condition.” Industry behemoths have also started to take action. Both Google and Microsoft, for instance, released ethical principles for the development and use of AI in the past year.
A reassessment of the industry’s status quo is in order, judging from consumer sentiment as well. Consumers’ attitudes towards technology seem to have reached an inflection point. As pointed out by my colleague Kathy Sheehan in her recent blog post, concerns about data privacy and tech addiction have soared amidst high-profile data misuse, privacy breach scandals and mounting evidence of the effect of technology on mental health (including the World Health Organization’s classification of gaming addiction as a mental health disorder last year). As AI grows ever more powerful and increasingly extends its reach into our lives, consumers also increasingly recognize the risks it poses to humanity: according to a survey conducted last year, a majority (59%) of Americans feel that AI has the potential to be good but comes with some inherent risks.
View Entire Article on GFK.com