
KEY POINTS
Responsible AI expert Sophia Banton highlights the global misalignment in AI responsibility, emphasizing the need for organizational commitment and education.
Banton stresses that AI governance should be seen as a strategic priority, not a cost center, to ensure safe and effective AI deployment.
Building AI responsibility in from the start is crucial to avoid bias and ensure sustainable solutions, especially in regulated industries.
"I think what's happening is organizations view AI governance as a cost center. Many people do not see governance as a strategy, but rather something they have to keep up with that is not tied to growth or business economics. And for that reason, people deprioritize it."
In the relatively brief history of modern enterprise software, global concerns were often limited to simple challenges such as solving localization or effective marketing based on the needs of a specific geographical region. But the age of AI has spurred an entirely new geopolitical discussion around not only AI sovereignty, but also the fundamental roadmap for how to safely and effectively develop and deploy it.
When geopolitics comes into play, speed is all that matters, because the stakes outweigh any tradeoff that might mean falling behind on the global stage. Although clearly less dramatic and dire, the race for global AI dominance recalls other historical milestones like the space race and the atomic pursuits surrounding WWII. Those historical efforts, however, did not meaningfully cross over with business needs the way AI does. Nearly every business is in the process of deeply integrating what is undeniably still a work in progress. Leaders guiding adoption are therefore the ones responsible for safe use in the absence of firm governance and guardrails. They must implement the infrastructural resources and tools available today in order to prepare for whatever macro conditions arise beyond their control.
We spoke with Sophia Banton, a veteran AI leader who was most recently the Associate Director of Digital AI Solutions & Responsible AI at global biopharma giant UCB. Banton discusses some of the global misalignment in a recent LinkedIn post, as well as the disconnects around AI responsibility at the company level and how it trickles down to the user level.
Banton explains the disparate discourse happening between the United States, China, and the European Union. President Trump has prioritized innovation, leaders in China are publicly urging collaboration, and the EU is writing its own rules for AI.
U.S. leads adoption: "I understand the value of leading with innovation. In the real world, when you're building AI systems, you want to get it to work first," says Banton. She highlights that the United States has the highest rates of AI adoption as well as the fact that the primary driver of tech, Silicon Valley, is also in the U.S. "It's very imbalanced to think that somehow we can have a global consortium. I don't see that happening."
The global disconnect is mirrored inside organizations, and Banton stresses that the way you communicate risk within an organization makes all the difference:
The semantics of security: "I don't like to use the term 'AI ethics'. Instead, I like to say 'AI responsibility' because it gets more people to listen to you," she explains. "When you lead with the word 'ethics', people often think, 'Here comes the compliance police!'," jokes Banton, "whereas when you use the term 'responsible AI', you don't get pushed out of the conversation."
Governance is strategy, not a cost center: Banton stresses that the way data is collected combined with the way AI is implemented to leverage that data should be top of mind for every leader. While safe data practices are well-understood within companies, AI is still being sorted out. "I think what's happening is organizations view AI governance as a cost center. Many people do not see governance as a strategy, but rather something they have to keep up with that is not tied to growth or business economics. And for that reason, people deprioritize it."
"I understand the value of leading with innovation. In the real world, when you're building AI systems, you want to get it to work first."
Banton proposes a roadmap that starts with organizational commitment through funding, a dedicated lead with executive sponsorship, and a mandate to train people. But the real work is tackling the human feeling of exclusion. The solution is education that connects individual actions to more foundational usage issues, like teaching users about proper prompting for different models.
With every complex concept comes the tendency to over-simplify it for easier human consumption. Ever since ChatGPT thrust generative AI into the mainstream consciousness in late 2022, people have been anthropomorphizing LLMs with pedestrian terms like "hallucination" and "thinking" to describe their inference capabilities. But AI is simply technology, and technology has historically been treated as contained and controllable. As we increasingly humanize tech, the conversation around using it effectively and responsibly must shift to proper governance, especially in highly regulated industries.
Tech is just tech: "When we talk about biases in AI, many people think AI is like a boogeyman," says Banton. "But it's simply technology, and technology is inherently neutral. It's code and hardware." To prove her point, she points to the distinct "personalities" of today's frontier models. "When you interact with Claude, ChatGPT, and Gemini, you see you're getting three completely different personalities," she explains, pointing to the fact that the nuances of how LLMs operate are choices consciously made by companies at the design level.
The stakes for getting it right are immense, particularly when it comes to bias. The danger, Banton warns, lies in the "subtle bias that creeps into these systems," which tends to amplify whatever already exists in society. For implementations in healthcare specifically, missteps are largely unacceptable. Banton says the only sustainable solution is to break down the organizational silos separating legal, IT, and business teams and build responsibility in from the start.
Build it in: "Don't build it and then do an audit at the end," she urges. "Build responsibly upfront. Have the right people at the table to make those decisions from the start." In the long run, this governance does not equal hindrance. It's simply smart business that protects the brand and prevents projects from stalling at the finish line.