The intensely competitive race to produce ever more powerful generative AI models has created challenges for even the world's most sophisticated technology companies, which must balance fast delivery of extraordinary technological breakthroughs against safety and trust concerns. The latest to stumble over this challenge is Google, with the disastrous launch of its Gemini model last month.
The Gemini model was a significant upgrade over Google's earlier Bard offering and was meant as an answer to OpenAI's ChatGPT, Anthropic's Claude, and Microsoft's Copilot. As part of the launch, Google introduced a new image-generation capability. Within hours, however, users discovered historical inaccuracies and offensive content that went viral online, and the company pulled the feature shortly afterward. As of this writing, Google is still working on a fix.