In 1942, author Isaac Asimov laid out a set of rules intended to govern the choices of robots and to avert the potentially catastrophic consequences of artificial intelligence proliferation.1 Numerous arguments have since been put forth as to why the Three Laws of Robotics are inadequate, and Asimov’s own subsequent short stories intentionally depicted robots in situations where applying the laws led to counterintuitive outcomes.
While artificial intelligence has become more embedded in our daily lives since then, we remain some distance from the robot apocalypse. Nonetheless, its increasing adoption adds impetus for us to explicitly codify acceptable behaviour so that we do not end up with unintended outcomes. Values and norms that are implicit in human decision-making cannot be assumed to carry over when the reins are handed to a machine. The need to connect values to tangible metrics presents an opportunity to reflect on which metrics we might leverage to identify good “culture”.