A: People didn’t initially think of AI as a trust question in the sense I mean here. They saw it more narrowly — as a controls or regulatory problem to govern the development and use of AI. That’s where governments and the academy landed when they began engaging on the issue, in part at the urging of the AI companies themselves.

But if you think too narrowly, you don’t end up in a good place. I tell people, “Think of the TikTok or Huawei problem — on steroids.” If the only way you’ve got to deal with things you don’t trust is to ban them, then you’re basically on a path to autarky and isolationism. That tends to breed more and more instability. It all goes in the wrong direction.
