On the CAIO Connect Podcast, Amanda Brock argues that AI security is not a question of open vs. closed, but of transparency, licensing, and global collaboration.
WASHINGTON, DC, UNITED STATES, February 26, 2026 /EINPresswire.com/ — “Is open source AI less secure — or simply more transparent?”
That was the central question explored when Amanda Brock, CEO of OpenUK, joined host Sanjay Puri live at the India AI Impact Summit on the CAIO Connect Podcast. In a wide-ranging and candid discussion, Amanda Brock challenged some of the biggest assumptions policymakers and Fortune 500 executives make about AI, security, and openness.
Her message was clear: the security issues we face in AI are not about open source. They are about software — and they are universal.
Open vs. Closed: The Transparency Divide
Amanda Brock pushed back on the common claim that open source introduces more security risk. In her view, both open and proprietary systems contain vulnerabilities. The difference lies in visibility.
In open-source environments, she explained, “We wash our dirty linen in public.” When there’s a flaw, everyone can see it. But that transparency triggers a powerful response—communities rally to fix issues quickly. Many eyes make bugs shallow.
Closed systems, on the other hand, operate like black boxes. Users often have limited insight into how models function internally or how quickly vulnerabilities are disclosed. Notifying customers of flaws can be slower and less transparent.
The takeaway from Amanda Brock’s appearance on the CAIO Connect Podcast? Security is not a function of openness. It is a function of accountability.
The Llama 2 Moment: A Shift in AI History
Amanda Brock also reflected on why Meta’s release of Llama 2 in July 2023 marked a turning point. OpenUK partnered on the launch because it was framed as “open innovation,” though not fully open source.
Llama 2 opened model weights but not training data, and its license imposed certain restrictions. Even so, it represented a significant shift. For the first time, the broader developer community had meaningful access to a powerful large language model.
According to Amanda Brock, that release — along with DeepSeek’s launch in January 2025 — may be remembered as two defining milestones in AI openness.
The reason? Open access accelerates iteration. When developers can inspect, test, and adapt models, innovation moves faster.
What Does “Open” Really Mean?
One of Amanda Brock’s most important clarifications was about terminology. Rather than using the phrase “open source AI,” she prefers “AI openness.”
Why? Because AI is not a single component. It includes model weights, training data, algorithms, and licenses. Each element can be open, partially open, or closed.
Equally critical is how something is licensed. A model may appear open, but restrictive commercial terms can significantly limit real-world use. Policymakers and business leaders must examine both what is open and how it is governed.
DeepSeek, China, and Lean Innovation
The conversation also touched on China’s open-source trajectory. DeepSeek’s open-weight model, released under an MIT license, demonstrated how community-driven iteration can rapidly enhance performance. Within days, developers retrained versions using alternative datasets.
Amanda Brock noted that Chinese engineers tend to build lightweight, compute-efficient systems — an approach that could benefit emerging markets and the Global South, where access to large-scale compute is limited.
Smaller, edge-based models may ultimately prove more sustainable and more accessible worldwide.
A Message to World Leaders
Closing the conversation, Amanda Brock delivered a simple but powerful message to policymakers: if you want to shape AI’s future responsibly, engage directly with open-source communities.
Openness is not a slogan. It is a practice grounded in licensing, governance, and collaboration. Decisions about AI should not be made in closed rooms with only a handful of CEOs. They should include the builders, contributors, and global communities who sustain the ecosystem.
As Amanda Brock made clear on the CAIO Connect Podcast, the real question is not whether AI will be open or closed. It is whether it will be transparent enough to earn trust.
Upasana Das
Knowledge Networks
email us here
Visit us on social media:
LinkedIn
Instagram
Facebook
YouTube
X
Legal Disclaimer:
EIN Presswire provides this news content “as is” without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.