The Supermicro Open Storage Summit is one of those gatherings where the big questions about enterprise infrastructure get real. This year I joined two panels, one on data lakes and lakehouses for AI and the other on modernizing enterprise applications, alongside experts from AMD, MinIO, and Lightbits Labs.
Different sessions, but both kept circling back to the same question: how do enterprises make AI work in production without losing control of their data, their costs, or their future?
The answer came down to four practical moves that every enterprise can act on today:
1. Build on open standards. Postgres, Iceberg, Kubernetes/OpenShift.
2. Modernize with production in mind. High availability, observability, lifecycle support.
3. Design storage for AI. Tiering, object storage, real-time lakehouse access.
4. Anchor in sovereignty. Control is how you manage cost, energy efficiency, and compliance.
These takeaways apply to any technology decision, and they connect directly to the choices we made building the EDB Postgres® AI (EDB PG AI) Sovereign Data and AI Factory: a modular, storage-aware, and open system that enterprises can run on their own terms.
Get any one of these moves wrong, and AI becomes a science project; get them right, and you set the foundation for scale, speed, and sovereignty.
Open source as competitive advantage
The AI landscape moves fast; what feels like the “must-have” tool today might be irrelevant in 18 months. If your architecture is closed, every shift means pain—refactoring, costs, or outright lock-in.
That’s why I argue for open standards as the first principle:
- Postgres as the universal data store for transactions, analytics, and AI workloads.
- Apache Iceberg for open lakehouse tables.
- Kubernetes and OpenShift as the common control plane for cloud-native operations.
At EDB, we are the world’s #1 contributor to the Postgres database. We bet on Postgres more than a decade ago, before it was ranked as the world’s most popular database. The choices we made back then helped Postgres climb the rankings and delivered enormous returns for our customers as it surged in popularity and gained new features.
That history is why Postgres anchors our view of open standards. It’s living proof that when you build on open technology, you don’t just protect against lock-in; you create long-term resilience and advantage.
Why modernization can’t wait
You can’t strap AI onto a 20-year-old monolith and hope it scales. AI workloads today demand low latency, elastic performance, and always-on availability.
That means cloud-native patterns, microservices where they fit, and a control plane that automates resilience. For our customers, Postgres is at the center of their modernization journey—it’s extensible, proven, and supported by a massive ecosystem. Paired with Kubernetes or OpenShift, it creates a path to replatform apps so they’re AI-ready.
But prototypes are easy; production is the bar. That means high availability across fault domains, rolling upgrades, and—critically—observability across the data estate. If you can’t see what’s happening across every cluster, you can’t run AI at scale or with confidence.
Storage: where sovereignty, efficiency, and scale intersect
In both sessions, storage emerged as the layer that determines whether AI workloads are fast, affordable, and sustainable. It’s also where energy and carbon impacts are managed, something every CIO I meet is now measured against.
Sovereignty lives here, too. If you don’t control where your data is stored, how it moves across tiers, and what formats it’s in, you don’t really control your AI. Storage is the starting point where enterprises decide not just cost and performance, but ownership, governance, and compliance.
The modernization playbook looks like this:
- Tier hot and cold data so operational databases stay fast while older data moves automatically into object storage.
- Stream data into columnar formats in a lakehouse so analytics and forecasting run on current data, not yesterday’s snapshot.
- Use open formats like Iceberg so any engine can query the same data using your choice of tools.
This is how enterprises control cost, carbon, and compliance, while keeping models grounded in the freshest possible data.
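To make the first step of that playbook concrete, here is a minimal sketch of a policy-driven tiering decision: given a set of monthly partitions, decide which stay hot in the operational database and which are eligible to move to object storage. The partition names, the 90-day hot window, and the `tier_partitions` helper are all illustrative assumptions, not any specific product’s API.

```python
from datetime import date, timedelta

# Hypothetical tiering policy: partitions whose newest row is older
# than the hot window become candidates for object-storage archival.
HOT_WINDOW_DAYS = 90

def tier_partitions(partitions, today, hot_window_days=HOT_WINDOW_DAYS):
    """Split monthly partitions into hot (keep in the operational
    database) and cold (archive to object storage, e.g. as open
    lakehouse tables)."""
    cutoff = today - timedelta(days=hot_window_days)
    hot, cold = [], []
    for name, newest_row in partitions:
        (hot if newest_row >= cutoff else cold).append(name)
    return hot, cold

# Example: three monthly partitions evaluated on 2025-10-01.
hot, cold = tier_partitions(
    [("orders_2025_09", date(2025, 9, 30)),
     ("orders_2025_01", date(2025, 1, 31)),
     ("orders_2024_06", date(2024, 6, 30))],
    today=date(2025, 10, 1),
)
print(hot)   # partitions that stay in the operational store
print(cold)  # partitions eligible for object-storage archival
```

In a real estate this decision runs continuously and automatically; the point of the sketch is only that the policy itself is simple once the cutoff is explicit.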
Inside the Sovereign Data and AI Factory
All of this is why we built the EDB PG AI Sovereign Data and AI Factory with our ecosystem partners Supermicro, NVIDIA, and Red Hat. That name isn’t a metaphor; the Sovereign Data and AI Factory is an engineered, curated system you can stand up in your own data center, with everything tested, validated, and optimized for AI-ready Postgres.
What it delivers:
- Clustered Supermicro Hyper servers, validated with EDB for Postgres performance and reliability.
- NVIDIA accelerated compute with NeMo Retriever and NIM microservices, enabling enterprises to build RAG and generative AI pipelines on their own data.
- Red Hat OpenShift as the orchestration layer for portability and enterprise-grade operations.
- EDB PG AI Hybrid Management and AI Factory software to unify observability, automation, and policy-driven tiering across the entire Postgres estate.
- Liquid-cooled, power-aware design to reduce energy use and carbon footprint at scale.
- Lifecycle support across the stack—hardware, Kubernetes/OpenShift, Postgres, and AI services—all backed by expert configuration for your workload needs.
The point is to make the production path the default path. Deploy a highly available Postgres cluster in a few clicks. Move data across tiers automatically. Spin up a chatbot or workflow that uses live lakehouse data. Keep it observable and supportable for the long haul. Above all: keep it simple.
Sovereignty is the north star
This isn’t just our hunch. In our 2025 global study of 2,200 executives across 13 economies—representing $48 trillion in GDP—67% said they plan to have sovereign AI and data strategies by 2027.
Data and AI sovereignty doesn’t mean isolation. It means control over where your data lives, how your models run, and how you manage cost, compliance, and carbon.
And finance is a big part of that. Renting eight GPUs in the cloud can run over $60 an hour—more than half a million dollars a year for one system—without guaranteeing performance at production scale. That’s why the EDB PG AI Factory model is modular: start small, scale as you grow, and finance in a way that matches adoption. With Supermicro, we even design growth plans that let enterprises expand capacity without locking into a footprint they’ll regret in a year.
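The cloud-rental math above is easy to verify. A quick back-of-envelope sketch, using the roughly $60/hour figure cited above and assuming the system runs around the clock all year:

```python
# Back-of-envelope annual cost of renting one 8-GPU cloud system.
# The ~$60/hour rate comes from the text; 24x365 utilization is an
# assumption for illustration.
hourly_rate = 60           # USD per hour for an 8-GPU system
hours_per_year = 24 * 365  # 8,760 hours

annual_cost = hourly_rate * hours_per_year
print(f"${annual_cost:,}/year")  # $525,600/year
```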
Watch the sessions
Both of my Open Storage Summit panels dig deeper into these topics alongside industry partners. You can watch them here:
AI is moving at lightspeed; eighteen months feels like a decade at this point. But with openness, modernization, and sovereignty as your guideposts, you can move quickly without painting yourself into a corner.
That was the thread across both of my Summit sessions, and it’s the same advice I give every enterprise leader I meet today: don’t just experiment; build factories.