The Future of AI Data Centers: Notes From the (Virtual) Panel

Evan Forsberg
15 Dec 2025 · 7 min read

AI data centers aren’t just physical infrastructure anymore. They’ve become the nervous system behind intelligence itself. They power how we create, train, and operate the systems that move us closer to AGI.

Getting there takes more than racks and cooling. We’re talking high-density compute, liquid-first thermal designs, terabit-class interconnects, and clean, consistent power. And the storage layer can’t act like an archive. It has to function as a data metabolism, constantly ingesting, processing, and feeding models so they can learn, reason, and improve continuously.

I recently participated in a panel with AI leaders to discuss the future of AI data centers: what’s needed, what’s next, and how this propels the technological advances we all seek to achieve. We discussed everything from multimodal clusters and modular construction to geothermal campuses, global data planes, and the shift from static storage to continuous learning loops. The through line was clear: every advance in AI depends on the tempo and intelligence of the systems we build beneath it.

Why infrastructure now decides AI outcomes

At Turing, we think about AGI in four layers: multimodality, reasoning, tool use, and coding. Every one of those layers is constrained or accelerated by the data center it runs on. You can build better algorithms all day, but without the right physical substrate (think power, cooling, interconnects, and storage) you’re not scaling cognition, you’re throttling it.

When we talk about Artificial Superintelligence (ASI), we’re talking about multi-trillion-parameter models that continuously train and reason across text, vision, audio, and structured data. That only works when your data centers can sustain full-power operation, deliver massive bandwidth across thousands of nodes, and do it without introducing bottlenecks or thermal inefficiencies.

Bottom line: The road to smarter models runs straight through smarter data centers. The more capable the infrastructure, the faster we can move from training intelligence to deploying it at scale.

A quick who’s who of the panel

Before we get to takeaways, a quick overview of who spoke and what they covered:

  • KJ (Hitachi): Macro infrastructure shifts, cooling, power, and construction
  • Rajiv (Pure Storage): “AI factories,” enterprise data platforms, and data readiness
  • Me (Turing): How next-gen infrastructure unlocks multimodality, reasoning, tool use, and coding at enterprise scale

What I shared: Infrastructure as a throttle on cognition

When I think about where AI infrastructure is headed, the story isn’t about bigger data centers, it’s about smarter ones. During the panel, I walked through what that really means when you’re operating at frontier scale. 

My key takeaways:

  • Multimodality at scale starts with density. Training across text, vision, audio, and structured data takes GPU clusters running at 80–150 kW per rack, connected by terabit-class interconnects like silicon photonics. Without that bandwidth, your models start choking on their own tensors. The next generation of intelligence depends on how efficiently those signals move across thousands of nodes.
  • Long-horizon reasoning—the ability for models to “think longer”—demands a different kind of architecture altogether. We’re moving toward quantum-ready, grid-aware campuses designed to sustain continuous training cycles and preserve deep context over time. If you want models to reason across days, not milliseconds, your power and cooling strategy has to evolve with that ambition.
  • Then there’s tool use: the agentic loop that connects perception to action. Tool-using agents need low-latency, closed feedback systems that constantly feed experience back into the model: streaming telemetry, rapid retraining, and fresh inference. The faster those cycles close, the faster the model learns from its own outcomes. (A minimal sketch of that loop follows this list.)
  • Finally, I called out a shift that’s happening quietly but fundamentally: from data storage to data metabolism. The question isn’t “where do we keep the data?” anymore, it’s “how do we ingest it, process it, and turn it into better decisions?” The companies that treat data like a living system, not a static archive, will be the ones that actually make intelligence scalable.
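
To make the shape of that tool-use loop concrete, here’s a minimal Python sketch. It’s illustrative only: call_model, call_tool, and the event fields are hypothetical stand-ins for whatever inference endpoint and tool runtime you actually operate, not a reference to any specific framework.

```python
import time
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class InteractionEvent:
    """One agent step: what the model saw, what it did, and how it turned out."""
    prompt: str
    tool_name: str
    tool_result: str
    success: bool
    latency_ms: float

@dataclass
class TelemetryBuffer:
    """Collects events until the retraining pipeline drains them."""
    events: list = field(default_factory=list)

    def record(self, event: InteractionEvent) -> None:
        self.events.append(event)

    def drain(self) -> list:
        batch, self.events = self.events, []
        return batch

def agent_step(prompt: str, call_model: Callable, call_tool: Callable,
               buffer: TelemetryBuffer) -> str:
    """Perception -> action -> outcome, with the whole step logged as training signal."""
    start = time.monotonic()
    tool_name, tool_args = call_model(prompt)          # model decides which tool to call
    result, success = call_tool(tool_name, tool_args)  # the action runs in the environment
    buffer.record(InteractionEvent(                    # experience flows back for retraining
        prompt=prompt,
        tool_name=tool_name,
        tool_result=result,
        success=success,
        latency_ms=(time.monotonic() - start) * 1000,
    ))
    return result
```

The code matters less than the shape: every step emits an event, and the buffer is what the retraining pipeline drains. That is the low-latency closed loop the tool-use bullet above describes.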

In short, the future of intelligence depends as much on the systems it runs on as on the models themselves.

KJ’s view: From IT asset → real estate → national asset

KJ dove into the physical realities of building for AI at scale, and the scope is staggering. Three points in particular stuck with me.

  • First: scale is exploding. We’re moving into the era of gigawatt-scale AI “gigafactories.” These aren’t tucked inside urban tech corridors, they’re sprawling across Tier-2 and Tier-3 regions, often on 1,500–3,000-acre sites. Tier-1 metros simply don’t have the power, space, or grid headroom. The frontier of AI infrastructure is literally being built where the land and electrons are.
  • Second: we have to build differently. To keep pace with deployment timelines, operators are shifting toward prefabricated, modular construction. Electrical rooms, cooling systems, and even substation components are being factory-built, trucked in, and assembled onsite. That approach is shaving months off delivery schedules, sometimes cutting build time by 50%, while reducing dependence on a limited pool of skilled trades.
  • Third: water, heat, and power have become strategic variables. Cooling is going liquid-first: direct-to-chip, in-row, and full immersion setups are becoming standard, but that heat still needs efficient rejection at the primary loop. Water use is under heavy scrutiny; at this scale, careless design can mean millions of gallons a day. That’s why the best operators are investing in waterless cooling, heat reuse for district systems or agriculture, and closed-loop recovery.

On the energy front, clean portfolios are diversifying fast. We’re seeing combinations of solar and battery storage, geothermal projects, and even early exploration of small modular reactors to guarantee 24/7 clean power and predictable carbon baselines.

The takeaway: reliability, density, and sustainability aren’t separate conversations anymore. They’re the same design challenge.

Rajiv’s view: From data lakes to AI factories

Rajiv took a step back from infrastructure specifics to map out how enterprises actually mature in their AI adoption. He outlined five stages, from simply consuming AI services to eventually building cloud-scale models that run as part of their core systems. Two points in particular hit home.

  • First: the rise of a global data plane. Rajiv argued that enterprises need to stop thinking in terms of app silos and start designing around a governed, global data pool that spans both on-prem and cloud. That unified layer is what enables rapid experimentation, fine-tuning, and production without the friction of data handoffs or duplicated environments. It’s the connective tissue for real proprietary intelligence, not just scattered AI pilots.
  • Second: data readiness is the real bottleneck. According to Rajiv, most stalled AI initiatives aren’t compute-starved, they’re data-starved. The issue isn’t lack of GPUs; it’s messy pipelines, slow ETL, and inconsistent governance. The companies pulling ahead are the ones automating data readiness, wiring their storage for real-time retraining, and enabling low-latency inference loops. The infrastructure that feeds the model is now just as critical as the model itself.

He also pointed to a clear industry shift toward AI-certified platforms—hardware and systems designed from the ground up for modern AI stacks. Enterprises are standardizing around NVIDIA reference architectures and building deeper partnerships with server and network vendors to guarantee predictable performance as they scale.

Rajiv’s bottom line: the enterprise AI journey isn’t about buying more compute, it’s about unifying data, automating readiness, and designing the foundation that makes intelligence reproducible.

Where it converges: Continuous reinforcement learning = real ROI

For AI agents to take on high-value work (think Tier-1 or Tier-2 support with over 90% first-call resolution), you need continuous reinforcement learning. That means:

  • Streaming every interaction, tool call, and outcome
  • Storing petabytes of fast-changing data without letting it become the bottleneck
  • Retraining and fine-tuning in hours or days, not quarters
  • Deploying models with low-latency inference inside tight feedback loops
  • Governing for drift, safety, and adversarial robustness every single day

That’s what I mean by data metabolism. When storage, networking, and compute are orchestrated for metabolism—not archival—you feel the ROI in weeks.
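
What does that orchestration look like in practice? Here’s a rough sketch of the stream → retrain → deploy cycle. The function names (retrain, evaluate, deploy), the batch size, and the quality gate are hypothetical placeholders for your own training pipeline and serving platform, not a prescription.

```python
from collections import deque
from typing import Callable, Iterable, Tuple

def metabolize(event_stream: Iterable[Tuple[dict, float]],
               retrain: Callable,
               evaluate: Callable,
               deploy: Callable,
               batch_size: int = 1_000,
               quality_gate: float = 0.90) -> None:
    """Continuously turn fresh interaction data into deployed model updates.

    event_stream yields (interaction, outcome) pairs, e.g. support transcripts
    with resolution labels; retrain, evaluate, and deploy are hooks into your
    own training and serving stack (all hypothetical here).
    """
    buffer: deque = deque(maxlen=batch_size)
    for interaction, outcome in event_stream:
        buffer.append((interaction, outcome))  # 1. stream every interaction, tool call, outcome
        if len(buffer) < batch_size:
            continue
        candidate = retrain(list(buffer))      # 2. fine-tune in hours or days, not quarters
        score = evaluate(candidate)            # 3. gate on task metrics, e.g. first-call resolution
        if score >= quality_gate:
            deploy(candidate)                  # 4. promote into the low-latency inference loop
        buffer.clear()                         # 5. keep metabolizing new data
```

In production each of these steps is its own platform (streaming, training, evaluation, serving), but the cadence of the loop is the point: the faster it closes, the sooner the ROI shows up.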

A practical checklist for CIOs and Heads of AI

Building for intelligence at scale means designing for evolution, not just capacity. The foundations you set today have to support continuous learning, safety, and sustainability tomorrow. Here’s what that looks like in practice:

  • Density: Design for 80–150 kW per rack now, with a clear path higher. Go liquid-first on cooling.
  • Interconnect: Budget for silicon photonics and terabit-class fabrics. Multimodal training dies at the bottleneck.
  • Power: Engineer a 24/7 energy mix—grid power plus battery storage (BESS), geothermal, and pilot small modular reactors (SMRs) where viable.
  • Water & heat: Evaluate waterless cooling and plan for heat reuse early with local municipalities.
  • Modularity: Use prefabricated electrical and cooling blocks to compress delivery timelines and reduce labor risk.
  • Data plane: Unify storage into a governed global pool with automated data readiness.
  • Learning loop: Instrument everything for stream → retrain → deploy cycles. Measure improvements in reasoning, not just tokens served.
  • Safety & reliability: Build in drift monitoring, red-teaming, and rollback pathways from day one.
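
On that last item, here’s what a minimal drift gate with a rollback path can look like. The metric, the threshold, and the promote/rollback hooks are all hypothetical, standing in for whatever evaluation and serving controls you already run.

```python
import statistics
from typing import Callable, Sequence

def drift_gate(baseline_scores: Sequence[float],
               live_scores: Sequence[float],
               promote: Callable[[], None],
               rollback: Callable[[], None],
               max_drop: float = 0.05) -> bool:
    """Keep the newer model only if live quality hasn't drifted below baseline.

    Scores are per-interaction quality metrics (e.g. resolution rate or
    safety-filter pass rate) for the previous and the current model version;
    promote and rollback are hooks into your serving platform (hypothetical).
    """
    baseline = statistics.mean(baseline_scores)
    live = statistics.mean(live_scores)
    if baseline - live > max_drop:  # quality regressed past tolerance
        rollback()                  # restore the last known-good model
        return False
    promote()                       # keep the newer version serving traffic
    return True
```

Red-team results can feed the same gate: any check that produces a score can block or roll back a rollout.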

These principles provide a blueprint for scaling intelligence responsibly. The organizations that embrace them now will be the ones defining how AI infrastructure evolves over the next decade.

Closing thoughts

Every advance we want—richer multimodality, deeper reasoning, more capable tool-using agents—depends on the tempo at which our infrastructure can metabolize data into better behavior. Build for that tempo, and the rest of the roadmap starts to click.

That’s exactly where Turing focuses: helping enterprises move from general AI tools to proprietary intelligence—systems that know their data, follow their workflows, and improve continuously. If you’re ready to align your infrastructure with that kind of intelligence, Talk to a Turing Strategist and explore how we’re helping global teams design, fine-tune, and scale next-generation AI systems. 

Evan Forsberg

Evan Forsberg is the Vice President of AI Strategy and Innovation at Turing and a technology executive with over 20 years of experience leading digital transformation and AI adoption across the Telco and High Tech sectors. At Turing, he drives enterprise AI strategy, model innovation, and deployment frameworks that enable global organizations to operationalize intelligence at scale. A recognized leader in AI, Quantum Computing, and emerging technologies, Evan has built and led high-performing teams that deliver measurable impact across cloud, data, and network transformation initiatives. He holds a Master’s in AI Strategy and Innovation from Wake Forest University, along with postgraduate credentials from UC Berkeley, UT Austin, and MIT.
