The Data Center Frontier Show

Welcome to The Data Center Frontier Show podcast, telling the story of the data center industry and its future. Our podcast is hosted by the editors of Data Center Frontier, who are your guide to the ongoing digital transformation, explaining how next-generation technologies are changing our world, and the critical role the data center industry plays in creating this extraordinary future.

We’re taking a closer look at a topic that’s no longer optional for data‑center leaders: sustainability with measurable accountability. As carbon regulations tighten, especially around Scope 3 emissions, owners and operators are rethinking how they specify and source every component in the power chain. At the same time, supply‑chain pressures, copper constraints, and new state‑level requirements like on‑premises power for large sites are introducing new complexities into design, procurement, and long‑term planning. Joel Wynn, VP of Data Center Sales at Southwire, brings a unique end‑to‑end perspective, spanning mining practices, material traceability, advanced conductor engineering, Environmental Product Declarations, and the real‑world challenges hyperscalers and colos face when trying to reduce embodied carbon. Hear a conversation about how reduced‑carbon copper, transparent supply chains, and next‑generation power infrastructure can meaningfully move the needle on sustainability, and how data‑center developers can prepare for the regulatory, technical, and community‑driven expectations coming next. Where does power innovation come into play in the context of sustainability? We are already seeing shifts in the industry and the move to on‑premises power. Southwire is focused on bringing innovation to the industry from the mining companies to the data center, all while identifying opportunities to upgrade existing cable for greater efficiency.
As AI data center campuses scale toward gigawatt capacity, the industry is confronting a new kind of bottleneck. Not just how to generate power, but how to move it efficiently across increasingly complex environments. In this episode of the Data Center Frontier Show Podcast, MetOx CEO Bud Vos outlines why traditional copper-based power distribution may be approaching its limits, and how high-temperature superconducting (HTS) wire could offer a fundamentally different path forward. “When you start looking at gigawatt-type campuses, you find three fundamental constraints—the grid interconnect, campus distribution, and delivery inside the data hall,” Vos explains. At each layer, scaling with copper drives exponential increases in materials, infrastructure, and complexity. HTS technology changes that equation. By delivering roughly 10x the power density of copper, superconducting cables can dramatically reduce the physical footprint of power infrastructure, replacing dozens of conventional cables with just a few, while also cutting material use and simplifying system design. The technology also reverses a key trend in data center power architecture. Instead of pushing voltage higher to compensate for copper limitations, superconductors enable higher current at lower voltage, potentially simplifying electrical systems across the facility. Just as importantly, superconductors are effectively lossless. “They don’t generate heat as part of the power delivery infrastructure,” Vos notes, a property that could reshape how operators think about thermal management in high-density AI environments. While HTS systems require cooling with liquid nitrogen, that requirement may align with the industry’s broader shift toward liquid cooling. Beyond engineering, HTS could also play a role in easing permitting and community opposition by reducing the physical footprint of power infrastructure. 
Narrower rights-of-way and fewer materials translate into less visible impact—an increasingly important factor as data center development faces growing scrutiny. Crucially, superconducting systems are not theoretical. They have already been deployed in utility environments, providing a track record of reliability that may help accelerate adoption in the data center sector. As onsite and behind-the-meter generation become more common, HTS is particularly well-suited to moving large amounts of power across multi-building campuses and into high-density data halls. At the same time, the technology offers a potential alternative to strained supply chains for copper and traditional electrical equipment. Looking further ahead, superconductivity’s role may extend even deeper, with HTS materials also serving as a foundation for emerging fusion energy systems, hinting at a future where power generation and data center infrastructure are more tightly linked. For now, Vos sees the industry at the beginning of an adoption cycle. “We’re deploying, testing, and then innovating on top of that,” he says. As AI infrastructure enters its execution phase, superconductivity may move from a niche technology to a core component of how the next generation of data centers is powered.
A look at the major trends shaping the data center and HVAC industries in 2026. Key topics include the growing role of high-voltage DC for improved power quality, the rise of liquid cooling, and how air-cooling technologies continue to play a critical part across the data center ecosystem. Industry discussions also touch on innovation momentum coming out of recent events, shifting demand toward high-growth markets, and the increasing importance of localized manufacturing to reduce lead times, navigate tariffs, and strengthen supply chain resilience—especially as AI-driven data center expansion accelerates. Themes such as energy efficiency, grid capacity limitations, hybrid cooling approaches, and system-level optimization frame a broader question for operators and suppliers alike: Where do you fit within the data center system, and how are you preparing for what comes next?
Subzero Engineering is pleased to announce the acquisition of the Dissolvable Air Barrier (DAB) Panels product line from Cambridge R&D, further expanding Subzero’s portfolio of data center containment solutions and reinforcing its commitment to safety, performance, and turnkey system delivery.  DAB Panels are a unique overhead containment solution designed to provide effective airflow separation during normal data center operation while dissolving within seconds when exposed to water during sprinkler activation. This dissolvable design helps eliminate falling panel hazards and supports safer fire suppression outcomes—addressing a critical challenge found in traditional rigid overhead containment systems.  “With this acquisition, we’re strengthening our ability to deliver truly integrated, safety-driven containment solutions,” said Shane Kilfoil, President of Subzero Engineering. “DAB Panels complement our existing containment portfolio and give our customers another proven option to address airflow management and fire safety without compromise.”  DAB Panels are engineered for both hot aisle and cold aisle containment applications and offer a combination of airflow performance, safety, and installation flexibility. Made from EPA-certified, plant-based cellulose materials, the panels achieve Class A fire and smoke performance, producing low heat and minimal smoke while maintaining visibility for emergency personnel.  Despite their dissolvable design, DAB Panels remain durable during normal operation—withstanding high static air pressure and maintaining airflow separation where it matters most. Panels can be easily modified in the field to accommodate varying cabinet heights and existing infrastructure, eliminating the need to relocate sprinkler heads and reducing installation time and cost.  DAB Panels integrate seamlessly across Subzero’s full portfolio of data center containment products, including aisle frames, doors, roofs, and airflow management systems. 
This unified approach enables Subzero to deliver turnkey containment solutions engineered for performance, safety, and long-term scalability—backed by a single partner and a coordinated system designed to work together.
In this episode of the Data Center Frontier Show, DCF Editor-in-Chief Matt Vincent speaks with Michael Siteman, President of Prodigious Proclivities and a long-time leader and board member within 7x24 Exchange International, about how data center development is being reshaped by AI, power scarcity, network strategy, and community resistance. Siteman explains how site selection has evolved from a traditional real estate exercise into a far more complex infrastructure challenge. “The business used to be a pure real estate play,” Siteman says. “Now it’s a systems engineering problem. It’s power, network topology, the real estate itself, and political risk.” The conversation explores the growing dominance of power in development strategy, including the rapid rise of behind-the-meter generation as utilities struggle to keep pace with demand. Siteman notes that attitudes toward onsite generation have shifted dramatically in just the past few months. “Six months ago, people would say, ‘If you don’t have grid interconnection, we’re not interested,’” he says. “In the last 30 days, it’s completely different.” Vincent and Siteman also discuss the balance between network access and power access, the risks of pre-leasing capacity before buildings are completed, and the growing importance of local politics and government relations in getting projects approved. The episode closes with a look at the widening gap between traditional hyperscale facilities and AI factories, the question of whether AI infrastructure is heading toward a bubble, and the industry’s urgent workforce shortage. “Data centers don’t run themselves,” Siteman says. “We simply don’t have enough people to build and operate the infrastructure that’s coming.” This is a grounded, field-level conversation about what is really driving data center development in the AI era, and what the industry will need to solve next.
The AI infrastructure boom is rapidly reshaping how the data center industry thinks about power. What was once a relatively straightforward utility procurement exercise is evolving into a complex strategy spanning onsite generation, fuel logistics, financing, and system architecture. That reality framed a recent special edition of The Data Center Frontier Show Podcast, which recast and updated a pivotal DCF Trends Summit 2025 session: From Grid to Onsite Powering: Optimizing Energy Behind the Meter for Data Centers.  Moderated by Fengrong Li, Senior Managing Director at FTI Consulting, the panel explored how operators are responding as interconnection timelines stretch and AI workloads surge. Li’s framing emphasized a core shift: onsite power is moving from contingency planning to critical-path infrastructure. From the OEM perspective, David Blank of Siemens Energy noted that behind-the-meter deployments have accelerated sharply over the past year as developers confront multi-year waits for firm utility capacity. “Everyone would prefer grid power,” Blank said. “But in many cases, reliable access isn’t available for five, ten, even ten-plus years.” Panelists agreed that AI’s scale and speed are driving a structural rethink. Brian Gitt of Oklo described the moment as a return to industrial roots, with large loads once again building dedicated generation to meet growth timelines. At the same time, new technical pressures are emerging. AI clusters can produce sharp load swings, forcing developers to deploy fast-response buffering technologies such as batteries, flywheels, and supercapacitors to maintain stability. Despite differing technology paths—including gas turbines, hydrogen fuel cells, and advanced nuclear—the panel aligned on one common theme: modularity. Phased power blocks increasingly mirror how AI campuses are actually built and financed. The discussion also highlighted the growing importance of contract structures. 
Long-term offtake commitments, capacity reservations, and credit support are increasingly required to unlock equipment queues and fuel supply. Other panelists included Marty Trivette of AlphaStruxure and Yuval Bachar of ECL. The event was hosted by Data Center Frontier’s Matt Vincent. The takeaway was clear: in the AI era, energy strategy has moved to the critical path—and for many operators, that path now runs behind the meter.
The data center industry is racing into the AI era with bigger campuses, tighter timelines, and unprecedented infrastructure complexity. But in this episode of The Data Center Frontier Show Podcast, 7x24 Exchange International founding member and Mission Critical Global Alliance (MCGA) board member Dennis Cronin argues the industry’s biggest constraint may be the one it talks about least: people. Cronin’s message is direct: the “talent cliff” isn’t coming; it’s already here. Based on recent research into open roles, he estimates 467,000 to 498,000 openings in core data center positions (facilities and ops leadership, electrical, generator/UPS, HVAC, controls), plus another ~514,000 emerging roles tied to AI infrastructure, sustainability, and cyber-physical security—bringing the total to roughly one million jobs the industry needs to fill. A major driver is what Cronin calls the “five-year experience trap”: employers require five years of experience even for entry-level roles, but newcomers can’t get experience without being hired. The result is widespread talent poaching, involving workers jumping from site to site for 10–20% raises, without expanding the overall labor pool. Cronin also highlights a frequently missed reality in public policy debates: the job multiplier effect. While data centers may have lean direct staffing, they support a much larger ecosystem of contractors, service providers, and manufacturers, from generator and UPS technicians to security integrators and the electrical/mechanical supply chain, many of whom are already scrambling to hire. On training, Cronin explains why company-run programs and commercial training aren’t enough on their own. Internal academies often produce siloed specialists trained for a single operator’s environment, while commercial courses, often ~$1,000 per day per person, are typically designed to upskill people already in the industry, not onboard new entrants. 
MCGA’s strategy focuses on community colleges as the most scalable on-ramp: affordable programs, scholarships, and hands-on labs that can produce strong technicians in two-year degrees. Cronin cites programs at Cleveland Community College (NC), Northern Virginia Community College, and Southside Community College (VA), noting that dozens of schools are exploring data center curricula but funding remains a barrier. Cronin’s proposed solution is a true workforce ecosystem: outreach, standardized curriculum, certification labs, structured apprenticeships, and employer commitments. He also advocates replacing the “five years” requirement with an entry-level certification that proves foundational knowledge, i.e. acronyms and language, reading one-lines, SOPs/MOPs, and crucially, safety and situational awareness in electrical and mechanical environments. Finally, Cronin tackles the money question. With $60B in data centers announced this year, he says the industry needs a major, shared investment across operators, vendors, contractors, and manufacturers to fund training and scholarships at scale. The stakes are operational: in an era of gigawatt AI facilities and shrinking margins for error, workforce readiness is now a mission-critical issue.
In the latest episode of The DCF Show Podcast, Data Center Frontier founder Rich Miller joins DCF Editor-in-Chief Matt Vincent and Senior Editor David Chernicoff to examine where the data center industry stands as AI infrastructure moves from announcement to execution. Miller also discusses his new Data Center Richness podcast and Substack project, which explores how data center professionals consume content and learn about the rapidly evolving industry. With information overload now a reality, Miller’s goal is to distill the most important signals shaping infrastructure decisions. The conversation then turns to what defines 2026 for data centers: execution. After a year filled with megaproject announcements, the industry now faces the harder task of actually delivering campuses at AI scale—often under severe power constraints. With utilities struggling to keep pace, on-site generation is shifting from temporary solution to long-term strategy, as developers seek reliable ways to power projects while easing community concerns about grid impacts. Public resistance has also become a major factor. Miller notes that community opposition is now delaying or halting billions of dollars in projects, forcing operators to rethink how they engage with local stakeholders. Issues like power pricing and water usage are increasingly central to project approval. On the technology front, Nvidia’s roadmap continues to reshape infrastructure planning, with rack densities rising sharply, liquid cooling becoming standard, and new power distribution models emerging to support AI factories. At the same time, Miller expects the market to stratify, with some operators specializing in AI factories while others serve cloud and enterprise demand. The discussion also touches on nuclear power’s future role, with data centers positioning themselves as anchor customers, though meaningful SMR deployment remains years away.
Ultimately, Miller argues that the industry is moving faster than ever, and 2026 will reveal how well today’s massive investments translate into real deployments. As he concludes: the next phase belongs to those who can deliver.
In this installment of Nomads at the Frontier, Data Center Frontier Editor-in-Chief Matt Vincent checks in with Nomad Futurist founders Nabeel Mahmood and Phillip Koblence for on-the-ground reflections from PTC 2026 in Hawaii, and a clear signal that the digital infrastructure market is shifting from hype to delivery. Mahmood says PTC 2026 reaffirmed the move toward integrated digital infrastructure, with attendance continuing to grow and conversations increasingly translating into real progress. But the defining theme across AI, investment, and deployments was power. As Koblence puts it, “all of those questions are power”—and unlike prior years, the tone has moved from speculative site talk to “show me the money, show me the power,” with real timelines and secured capacity. The episode digs into the industry’s evolving stance on behind-the-meter generation, which is increasingly treated as the most viable medium-term path to getting online as grid bureaucracy and interconnection delays become the “long pole in the tent.” The discussion also tackles the sustainability tension in that shift: why the industry often kicks the can down the road, what alternative options (fuel cells, hydrogen) may offer, and why nuclear timelines don’t solve the near-term gap. Mahmood and Koblence also emphasize that the buildout isn’t just a power story; it’s a people and community story. Workforce shortages remain structural and long-lived, and community acceptance is now central to the industry’s “license to build.” Nomad Futurist’s mission, they argue, is becoming a bridge between digital infrastructure and the public, demystifying what the industry is, why it matters, and how the next generation can enter it. 
Finally, the conversation pressure-tests the AI boom: Mahmood predicts the “mega-scale AI factory” bubble will burst within three to five years, with growth shifting toward inferencing closer to users, but he still expects the sector to normalize into sustained double-digit expansion. And on Nvidia’s roadmap, both founders call for realism: megawatt racks may be coming, but as Koblence notes, “there are zero facilities” today that can support a 1–1.5 MW rack at scale.
In the latest episode of the Data Center Frontier Show Podcast, Editor in Chief Matt Vincent speaks with Sailesh Krishnamurthy, VP of Engineering for Databases at Google Cloud, about the real challenge facing enterprise AI: connecting powerful models to real-world operational data. While large language models continue to advance rapidly, many organizations still struggle to combine unstructured data (e.g., documents, images, and logs) with structured operational systems like customer databases and transaction platforms. Krishnamurthy explains how vector search and hybrid database approaches are helping bridge this gap, allowing enterprises to query structured and unstructured data together without creating new silos. The conversation highlights a growing shift in mindset: modern data teams must think more like search engineers, optimizing for relevance and usefulness rather than simply returning exact matches. At the same time, governance and trust are becoming foundational requirements, ensuring AI systems access accurate data while respecting strict security controls. Operating at Google scale also reinforces the need for reliability, low latency, and correctness, pushing infrastructure toward unified storage layers rather than fragmented systems that add complexity and delay. Looking toward 2026, Krishnamurthy argues the top priority for CIOs and data leaders is organizing and governing data effectively, because AI systems are only as strong as the data foundations supporting them. The takeaway: AI success depends not just on smarter models, but on smarter data infrastructure. 🎧 Listen to the full episode to explore how enterprises can operationalize AI at scale.
The data center industry is changing faster than ever. Artificial intelligence, cloud expansion, and high-density workloads are driving record-breaking energy and cooling demands. But behind every megawatt of compute capacity lies an equally critical resource: water. As data halls evolve from static infrastructure to dynamic, service-driven ecosystems, cooling has emerged as one of the most powerful levers for efficiency, reliability, and sustainability. In this episode, Ecolab explores how Cooling as a Service (CaaS) is transforming data center operations, shifting cooling from a capital expense to a measurable, performance-based service that drives uptime, reliability, and environmental stewardship. Tune in to hear experts discuss how data centers can future-proof their operations through a smarter, service-oriented approach to thermal management. From proactive analytics to commissioning best practices, this conversation dives into the technologies, partnerships, and business models redefining how cooling is managed and measured across the world’s most advanced digital infrastructure.
Applied Digital CEO Wes Cummins joins Data Center Frontier Editor-in-Chief Matt Vincent to break down what it takes to build AI data centers that can keep pace with Nvidia-era infrastructure demands and actually deliver on schedule. Cummins explains Applied Digital’s “maximum flexibility” design philosophy, including higher-voltage delivery, mixed density options, and even more floor space to future-proof facilities as power and cooling requirements evolve. The conversation digs into the execution reality behind the AI boom: long-lead power gear, utility timelines, and the tight MEP supply chain that will cause many projects to slip in 2026–2027. Cummins outlines how Applied Digital locked in key components 18–24 months ago and scaled from a single 100 MW “field of dreams” building to roughly 700 MW under construction, using fourth-generation designs and extensive off-site MEP assembly—“LEGO brick” skids—to boost speed and reduce on-site labor risk. On cooling, Cummins pulls back the curtain on operating direct-to-chip liquid cooling at scale in Ellendale, North Dakota, including the extra redundancy layers—pumps, chillers, dual loops, and thermal storage—required to protect GPUs and hit five-nines reliability. He also discusses aligning infrastructure with Nvidia’s roadmap (from 415V toward 800V and eventually DC), the customer demand surge pushing capacity planning into 2028, and partnerships with ABB and Corintis aimed at next-gen power distribution and liquid cooling performance.
In this episode of the Data Center Frontier Show, Matt Vincent is joined by Liam Weld, Head of Data Centers for Meter, to discuss why connectivity is so often overlooked in data center planning.
AI data centers are no longer just buildings full of racks. They are tightly coupled systems where power, cooling, IT, and operations all depend on each other, and where bad assumptions get expensive fast. On the latest episode of The Data Center Frontier Show, Editor-in-Chief Matt Vincent talks with Sherman Ikemoto of Cadence about what it now takes to design an “AI factory” that actually works. Ikemoto explains that data center design has always been fragmented. Servers, cooling, and power are designed by different suppliers, and only at the end does the operator try to integrate everything into one system. That final integration phase has long relied on basic tools and rules of thumb, which is risky in today’s GPU-dense world. Cadence is addressing this with what it calls “DC elements”:  digitally validated building blocks that represent real systems, such as NVIDIA’s DGX SuperPOD with GB200 GPUs. These are not just drawings; they model how systems really behave in terms of power, heat, airflow, and liquid cooling. Operators can assemble these elements in a digital twin and see how an AI factory will actually perform before it is built. A key shift is designing directly to service-level agreements. Traditional uncertainty forced engineers to add large safety margins, driving up cost and wasting power. With more accurate simulation, designers can shrink those margins while still hitting uptime and performance targets, critical as rack densities move from 10–20 kW to 50–100 kW and beyond. Cadence validates its digital elements using a star system. The highest level, five stars, requires deep validation and supplier sign-off. The GB200 DGX SuperPOD model reached that level through close collaboration with NVIDIA. Ikemoto says the biggest bottleneck in AI data center buildouts is not just utilities or equipment; it is knowledge. The industry is moving too fast for old design habits. 
Physical prototyping is slow and expensive, so virtual prototyping through simulation is becoming essential, much like in aerospace and automotive design. Cadence’s Reality Digital Twin platform uses a custom CFD engine built specifically for data centers, capable of modeling both air and liquid cooling and how they interact. It supports “extreme co-design,” where power, cooling, IT layout, and operations are designed together rather than in silos. Integration with NVIDIA Omniverse is aimed at letting multiple design tools share data and catch conflicts early. Digital twins also extend beyond commissioning. Many operators now use them in live operations, connected to monitoring systems. They test upgrades, maintenance, and layout changes in the twin before touching the real facility. Over time, the digital twin becomes the operating platform for the data center. Running real AI and machine-learning workloads through these models reveals surprises. Some applications create short, sharp power spikes in specific areas. To be safe, facilities often over-provision power by 20–30%, leaving valuable capacity unused most of the time. By linking application behavior to hardware and facility power systems, simulation can reduce that waste, crucial in an era where power is the main bottleneck. The episode also looks at Cadence’s new billion-cycle power analysis tools, which allow massive chip designs to be profiled with near-real accuracy, feeding better system- and facility-level models. Cadence and NVIDIA have worked together for decades at the chip level. Now that collaboration has expanded to servers, racks, and entire AI factories. As Ikemoto puts it, the data center is the ultimate system—where everything finally comes together—and it now needs to be designed with the same rigor as the silicon inside it.
AI is reshaping the data center industry faster than any prior wave of demand. Power needs are rising, communities are paying closer attention, and grid timelines are stretching. On the latest episode of The Data Center Frontier Show, Page Haun of Cologix explains what sustainability really looks like in the AI era, and why it has become a core design requirement, not a side initiative. Haun describes today’s moment as a “perfect storm,” where AI-driven growth meets grid constraints, community scrutiny, and regulatory pressure. The industry is responding through closer collaboration among operators, utilities, and governments, sharing long-term load forecasts and infrastructure plans. But one challenge remains: communication. Data centers still struggle to explain their essential role in the digital economy, from healthcare and education to entertainment and AI services. Cologix’s Montreal 8 facility, which recently achieved LEED Gold certification, shows how sustainable design is becoming standard practice. The project focused on energy efficiency, water conservation, responsible materials, and reduced waste, lowering both environmental impact and operating costs. Those lessons now shape how Cologix approaches future builds. High-density AI changes everything inside the building. Liquid cooling is becoming central because it delivers tighter thermal control with better efficiency, but flexibility is the real priority. Facilities must support multiple cooling approaches so they don’t become obsolete as hardware evolves. Water stewardship is just as critical. Cologix uses closed-loop systems that dramatically reduce consumption, achieving an average WUE of 0.203, far below the industry norm. Sustainability also starts with where you build. In Canada, Cologix leverages hydropower in Montreal and deep lake water cooling in Toronto. In California, natural air cooling cuts energy use. Where geography doesn’t help, partnerships do. 
In Ohio, Cologix is deploying onsite fuel cells to operate while new transmission lines are built, covering the full cost so other utility customers aren’t burdened. Community relationships now shape whether projects move forward. Cologix treats communities as long-term partners, not transactions, by holding town meetings, working with local leaders, and supporting programs like STEM education, food drives, and disaster relief. Transparency ties it all together. In its 2024 ESG report, Cologix reported 65% carbon-free energy use, strong PUE and WUE performance, and expanded environmental certifications. As AI scales, openness about impact is becoming a competitive advantage. Haun closed with three non-negotiables for AI-era data centers: flexible power and cooling design, holistic resource management, and a real plan for renewable energy, backed by strong community engagement. In the age of AI, sustainability isn’t a differentiator anymore. It’s the baseline.
In this episode of the Data Center Frontier Show, Matt Vincent, Editor-in-Chief of Data Center Frontier, talks to Axel Bokiba, General Manager of Data Center Cooling for MOOG, about what it takes to deliver liquid cooling reliably at hyperscale.
In this episode of The Data Center Frontier Show, DCF Editor in Chief Matt Vincent speaks with Kevin Ooley, CFO of DataBank, about how the operator is structuring capital to support disciplined growth amid accelerating AI and enterprise demand. Ooley explains the rationale behind DataBank’s expansion of its development credit facility from $725 million to $1.6 billion, describing it as a strong signal of lender confidence in data centers as long-duration, mission-critical real estate assets. Central to that strategy is DataBank’s “Devco facility,” a pooled, revolving financing vehicle designed to support multiple projects at different stages of development, from land and site work through construction, leasing, and commissioning. The conversation explores how DataBank translates capital into concrete expansion across priority U.S. markets, including Northern Virginia, Dallas, and Atlanta, with nearly 20 projects underway through 2025 and 2026. Ooley details how recent deployments, including fully pre-leased capacity, feed a development pipeline supported by both debt and roughly $2 billion in equity raised in late 2024. Vincent and Ooley also dig into how DataBank balances rapid growth with prudent leverage, managing interest-rate volatility through hedging and refinancing stabilized assets into fixed-rate securitizations. In the AI era, Ooley emphasizes DataBank’s focus on “NFL cities,” serving enterprise and hyperscale customers that need proximity, reliability, and scale, while DataBank delivers power, buildings, and uptime, and customers source their own GPUs. The episode closes with a look at DataBank’s long-term sponsorship by DigitalBridge, its deep banking relationships, and the market signals—pricing, absorption, and customer demand—that will ultimately dictate the pace of growth.
DCF Trends Summit 2025 Session Recap As the data center industry accelerates into an AI-driven expansion cycle, the fundamentals of site selection and investment are being rewritten. In this session from the Data Center Frontier Trends Summit 2025, Ed Socia of datacenterHawk moderated a discussion with Denitza Arguirova of Provident Data Centers, Karen Petersburg of PowerHouse Data Centers, Brian Winterhalter of DLA Piper, Phill Lawson-Shanks of Aligned Data Centers, and Fred Bayles of Cologix on how power scarcity, entitlement complexity, and community scrutiny are reshaping where—and how—data centers get built. A central theme of the conversation was that power, not land, now drives site selection. Panelists described how traditional assumptions around transmission timelines and flat electricity pricing no longer apply, pushing developers toward Tier 2 and Tier 3 markets, power-first strategies, and closer partnerships with utilities. On-site generation, particularly natural gas, was discussed as a short-term bridge rather than a permanent substitute for grid interconnection. The group also explored how entitlement processes in mature markets have become more demanding. Economic development benefits alone are no longer sufficient; jurisdictions increasingly expect higher-quality design, sensitivity to surrounding communities, and tangible off-site investments. Panelists emphasized that credibility—earned through experience, transparency, and demonstrated follow-through—has become essential to securing approvals. Sustainability and ESG considerations remain critical, but the discussion took a pragmatic view of scale. Meeting projected data center demand will require a mix of energy sources, with renewables complemented by transitional solutions and evolving PPA structures. Community engagement was highlighted as equally important, extending beyond environmental metrics to include workforce development, education, and long-term social investment. 
Artificial intelligence added another layer of complexity. While large AI training workloads can operate in remote locations, monetized AI applications increasingly demand proximity to users. Rapid hardware cycles, megawatt-scale racks, and liquid-cooling requirements are driving more modular, adaptable designs—often within existing data center portfolios. The session closed with a look at regional opportunity and investor expectations, with markets such as Pennsylvania, Alabama, Ohio, and Oklahoma cited for their utility relationships and development readiness. The overarching conclusion was clear: the traditional data center blueprint still matters—but power strategy, flexibility, and authentic community integration now define success.
As the data center industry enters the AI era in earnest, incremental upgrades are no longer enough. That was the central message of the Data Center Frontier Trends Summit 2025 session “AI Is the New Normal: Building the AI Factory for Power, Profit, and Scale,” where operators and infrastructure leaders made the case that AI is no longer a specialty workload; it is redefining the data center itself. Panelists described the AI factory as a new infrastructure archetype: purpose-built, power-intensive, liquid-cooled, and designed for constant change. Rack densities that once hovered in the low teens have now surged past 50 kilowatts and, in some cases, toward megawatt-scale configurations. Facilities designed for yesterday’s assumptions simply cannot keep up. Ken Patchett of Lambda framed AI factories as inherently multi-density environments, capable of supporting everything from traditional enterprise racks to extreme GPU deployments within the same campus. These facilities are not replacements for conventional data centers, he noted, but essential additions; and they must be designed for rapid iteration as chip architectures evolve every few months. Wes Cummins of Applied Digital extended the conversation to campus scale and geography. AI demand is pushing developers toward tertiary markets where power is abundant but historically underutilized. Training and inference workloads now require hundreds of megawatts at single sites, delivered in timelines that have shrunk from years to little more than a year. Cost efficiency, ultra-low PUE, and flexible shells are becoming decisive competitive advantages. Liquid cooling emerged as a foundational requirement rather than an optimization. Patrick Pedroso of Equus Compute Solutions compared the shift to the automotive industry’s move away from air-cooled engines. 
From rear-door heat exchangers to direct-to-chip and immersion systems, cooling strategies must now accommodate fluctuating AI workloads while enabling energy recovery—even at the edge. For Kenneth Moreano of Scott Data Center, the AI factory is as much a service model as a physical asset. By abstracting infrastructure complexity and controlling the full stack in-house, his company enables enterprise customers to move from AI experimentation to production at scale, without managing the underlying technical detail. Across the discussion, panelists agreed that the industry’s traditional design and financing playbook is obsolete. AI infrastructure cannot be treated as a 25-year depreciable asset when hardware cycles move in months. Instead, data centers must be built as adaptable, elemental systems: capable of evolving as power, cooling, and compute requirements continue to shift. The session concluded with one obvious takeaway: AI is not a future state to prepare for. It is already shaping how data centers are built, where they are located, and how they generate value. The AI factory is no longer theoretical—and the industry is racing to build it fast enough.
As AI workloads push data center infrastructure in both centralized and distributed directions, the industry is rethinking where compute lives, how data moves, and who controls the networks in between. This episode captures highlights from The Distributed Data Frontier: Edge, Interconnection, and the Future of Digital Infrastructure, a panel discussion from the 2025 Data Center Frontier Trends Summit. Moderated by Scott Bergs of Dark Fiber and Infrastructure, the panel brought together leaders from DartPoints, 1623 Farnam, Duos Edge AI, ValorC3 Data Centers, and 365 Data Centers to examine how edge facilities, interconnection hubs, and regional data centers are adapting to rising power densities, AI inference workloads, and mounting connectivity constraints. Panelists discussed the rapid shift from legacy 4–6 kW rack designs to environments supporting 20–60 kW and beyond, while noting that many AI inference applications can be deployed effectively at moderate densities when paired with the right connectivity. Hospitals, regional enterprises, and public-sector use cases are emerging as key drivers of distributed AI infrastructure, particularly in tier 3 and tier 4 markets. The conversation also highlighted connectivity as a defining bottleneck. Permitting delays, middle-mile fiber constraints, and the need for early carrier engagement are increasingly shaping site selection and time-to-market outcomes. As data centers evolve into network-centric platforms, operators are balancing neutrality, fiber ownership, and long-term upgradability to ensure today’s builds remain relevant in a rapidly changing AI landscape.
In this episode of the Data Center Frontier Show, DCF Editor in Chief Matt Vincent speaks with Uptime Institute research analyst Max Smolaks about the infrastructure forces reshaping AI data centers, from power and racks to cooling, economics, and the question of whether the boom is sustainable. Smolaks unpacks a surprising on-ramp to today’s AI buildout: former cryptocurrency mining operators that “discovered” underutilized pockets of power in nontraditional locations—and are now pivoting into AI campuses as GPU demand strains conventional markets. The conversation then turns to what OCP 2025 revealed about rack-scale AI: heavier, taller, more specialized racks; disaggregated “compute/power/network” rack groupings; and a white space that increasingly looks purpose-built for extreme density. From there, Vincent and Smolaks explore why liquid cooling is both inevitable and still resisted by many operators—along with the software, digital twins, CFD modeling, and new commissioning approaches emerging to manage the added complexity. On the power side, they discuss the industry’s growing alignment around 800V DC distribution and what it signals about Nvidia’s outsized influence on next-gen data center design. Finally, the conversation widens into load volatility and the economics of AI infrastructure: why “spiky” AI power profiles are driving changes in UPS systems and rack-level smoothing, and why long-term growth may hinge less on demand (which remains strong) than on whether AI profits broaden beyond a few major buyers—especially as GPU hardware depreciates far faster than the long-lived fiber built during past tech booms. A sharp, grounded look at the AI factory era—and the engineering and business realities behind the headlines.
In this Data Center Frontier Trends Summit 2025 session—moderated by Stu Dyer (CBRE) with panelists Aad den Elzen (Solar Turbines/Caterpillar), Creede Williams (Exigent Energy Partners), and Adam Michaelis (PointOne Data Centers)—the conversation centered on a hard truth of the AI buildout: power is now the limiting factor, and the grid isn’t keeping pace. Dyer framed how quickly the market has escalated, from “big” 48MW campuses a decade ago to today’s expectations of 500MW-to-gigawatt-scale capacity. With utility timelines stretched and interconnection uncertainty rising, the panel argued that natural gas has moved from taboo to toolkit—often the fastest route to firm power at meaningful scale. Williams, speaking from the IPP perspective, emphasized that speed-to-power requires firm fuel and financeable infrastructure, warning that “interruptible” gas or unclear supply economics can undermine both reliability and underwriting. Den Elzen noted that gas is already a proven solution across data center deployments, and in many cases is evolving from a “bridge” to a durable complement to the grid—especially when modular approaches improve resiliency and enable phased buildouts. Michaelis described how operators are building internal “power plant literacy,” hiring specialists and partnering with experienced power developers because data center teams can’t assume they can self-perform generation projects. The panel also “de-mystified” key technology choices—reciprocating engines vs. turbines—as tradeoffs among lead time, footprint, ramp speed, fuel flexibility, efficiency, staffing, and long-term futureproofing. On AI-era operations, the group underscored that extreme load swings can’t be handled by rotating generation alone, requiring system-level design with controls, batteries, capacitors, and close coordination with tenant load profiles. 
Audience questions pushed into public policy and perception: rate impacts, permitting, and the long-term mix of gas, grid, and emerging options like SMRs. The panel’s consensus: behind-the-meter generation can help shield ratepayers from grid-upgrade costs, but permitting remains locally driven and politically sensitive—making industry communication and advocacy increasingly important. Bottom line: in the new data center reality, natural gas is here—often not as a perfect answer, but as the one that matches the industry’s near-term demands for speed, scale, and firm power.
In this episode, we crack open the world of ILA (In-Line Amplifier) huts, the unassuming shelters quietly powering fiber connectivity. Like mini utility substations of the fiber world, these small, secure, and distributed facilities keep internet, voice, and data networks running reliably, especially over long distances or in developing areas. From the analog roots of signal amplification to today’s digital optical technologies, this conversation explores how ILAs are redefining long-haul fiber transport. We’ll discuss how these compact, often rural, mini data centers are engineered and built to boost light signals across vast distances. But it’s not just about the tech. There are real-world challenges to deploying ILAs: from acquiring land in varied environments to coordinating civil construction at remote, isolated sites. You’ll learn why site selection is as much about geology and permitting as it is about signal loss, and what factors can make or break an ILA deployment. We also explore the growing role of hyperscalers and colocation providers in driving ILA expansion, adjacent revenue opportunities, and what ILA facilities can mean for the future of rural connectivity. Tune in to find out how the pulse of long-haul fiber is beating louder than ever.
In this panel session from the 2025 Data Center Frontier Trends Summit (Aug. 26-28) in Reston, Va., JLL’s Sean Farney moderates a high-energy panel on how the industry is fast-tracking AI capacity in a world of power constraints, grid delays, and record-low vacancy. Under the banner “Scaling AI: The Role of Adaptive Reuse and Power-Rich Sites in GPU Deployment,” the discussion dives into why U.S. colocation vacancy is hovering near 2%, how power has become the ultimate limiter on AI revenue, and what it really takes to stand up GPU-heavy infrastructure at speed. Schneider Electric’s Lovisa Tedestedt, Aligned Data Centers’ Phill Lawson-Shanks, and Sapphire Gas Solutions’ Scott Johns unpack the real-world strategies they’re deploying today—from adaptive reuse of industrial sites and factory-built modular systems, to behind-the-fence natural gas, microgrids, and emerging hydrogen and RNG pathways. Along the way, they explore the coming “AI inference edge,” the rebirth of the enterprise data center, and how AI is already being used to optimize data center design and operations.
During this talk, you’ll learn:
* Why record-low vacancy and long interconnection queues are reshaping AI deployment strategy.
* How adaptive reuse of legacy industrial and commercial real estate can unlock gigawatt-scale capacity and community benefits.
* The growing role of liquid cooling, modular skids, and grid-to-chip efficiency in getting more power to GPUs.
* How behind-the-meter gas, virtual pipelines, and microgrids are bridging multi-year grid delays.
* Why many experts expect a renaissance of enterprise data centers for AI inference at the edge.
Moderator: Sean Farney, VP, Data Centers, Jones Lang LaSalle (JLL)
Panelists:
Tony Grayson, General Manager, Northstar
Lovisa Tedestedt, Strategic Account Executive – Cloud & Service Providers, Schneider Electric
Phill Lawson-Shanks, Chief Innovation Officer, Aligned Data Centers
Scott Johns, Chief Commercial Officer, Sapphire Gas Solutions
Recorded live at the 2025 Data Center Frontier Trends Summit in Reston, VA, this panel brings together leading voices from the utility, IPP, and data center worlds to tackle one of the defining issues of the AI era: power. Moderated by Buddy Rizer, Executive Director of Economic Development for Loudoun County, the session features:
Jeff Barber, VP Global Data Centers, Bloom Energy
Bob Kinscherf, VP National Accounts, Constellation
Stan Blackwell, Director, Data Center Practice, Dominion Energy
Joel Jansen, SVP Regulated Commercial Operations, American Electric Power
David McCall, VP of Innovation, QTS Data Centers
Together they explore how hyperscale and AI workloads are stressing today’s grid, why transmission has become the critical bottleneck, and how on-site and behind-the-meter solutions are evolving from “bridge power” into strategic infrastructure. The panel dives into the role of gas-fired generation and fuel cells, emerging options like SMRs and geothermal, the realities of demand response and curtailment, and what it will take to recruit the next generation of engineers into this rapidly changing ecosystem. If you want a grounded, candid look at how energy providers and data center operators are working together to unlock new capacity for AI campuses, this conversation is a must-listen.
Live from the Data Center Frontier Trends Summit 2025 – Reston, VA
In this episode, we bring you a featured panel from the Data Center Frontier Trends Summit 2025 (Aug. 26-28), sponsored by Schneider Electric. DCF Editor in Chief Matt Vincent moderates a fast-paced, highly practical conversation on what “AI for good” really looks like inside the modern data center—both in how we build for AI workloads and how we use AI to run facilities more intelligently.
Expert panelists included:
Steve Carlini, VP, Innovation and Data Center Energy Management Business, Schneider Electric
Sudhir Kalra, Chief Data Center Operations Officer, Compass Datacenters
Andrew Whitmore, VP of Sales, Motivair
Together they unpack:
* How AI is driving unprecedented scale—from megawatt data halls to gigawatt AI “factories” and 100–600 kW rack roadmaps
* What Schneider and NVIDIA are learning from real-world testing of Blackwell and NVL72-class reference designs
* Why liquid cooling is no longer optional for high-density AI, and how to retrofit thousands of brownfield, air-cooled sites
* How Compass is using AI, predictive analytics, and condition-based maintenance to cut manual interventions and OPEX
* The shift from “constructing” to assembling data centers via modular, prefab approaches
* The role of AI in grid-aware operations, energy storage, and more sustainable build and operations practices
* Where power architectures, 800V DC, and industry standards will take us over the next five years
If you want a grounded, operator-level view into how AI is reshaping data center design, cooling, power, and operations—beyond the hype—this DCF Trends Summit session is a must-listen.
On this episode of The Data Center Frontier Show, Editor in Chief Matt Vincent sits down with Rob Campbell, President of Flex Communications, Enterprise & Cloud, and Chris Butler, President of Flex Power, to unpack Flex’s bold new integrated data center platform as unveiled at the 2025 OCP Global Summit. Flex says the AI era has broken traditional data center models, pushing power, cooling, and compute to the point where they can no longer be engineered separately. Their answer is a globally manufactured, pre-engineered platform that unifies these components into modular pods and skids, designed to cut deployment timelines by up to 30 percent and support gigawatt-scale AI campuses. Rob and Chris explain how Flex is blending JetCool’s chip-level liquid cooling with scalable rack-level CDUs; how higher-voltage DC architectures (400V today, 800V next) will reshape power delivery; and why Flex’s 110-site global manufacturing footprint gives it a unique advantage in speed and resilience. They also explore Flex’s lifecycle intelligence strategy, the company’s circular-economy approach to modular design, and their view of the “data center of 2030”—a landscape defined by converged power and IT, liquid cooling as default, and modular units capable of being deployed in 30–60 days. It’s a deep look at how one of the world’s largest manufacturers plans to redefine AI-scale infrastructure.
Artificial intelligence is completely changing how data centers are built and operated. What used to be relatively stable IT environments are now turning into massive power ecosystems. The main reason is simple — AI workloads need far more computing power, and that means far more energy. We’re already seeing a sharp rise in total power consumption across the industry, but what’s even more striking is how much power is packed into each rack. Not long ago, most racks were designed for 5 to 15 kilowatts. Today, AI-heavy setups are hitting 50 to 70 kW, and the next generation could reach up to 1 megawatt per rack. That’s a huge jump — and it’s forcing everyone in the industry to rethink power delivery, cooling, and overall site design. At those levels, traditional AC power distribution starts to reach its limits. That’s why many experts are already discussing a move toward high-voltage DC systems, possibly around 800 volts. DC systems can reduce conversion losses and handle higher densities more efficiently, which makes them a serious option for the future. But with all this growth comes a big question: how do we stay responsible? Data centers are quickly becoming some of the largest power users on the planet. Society is starting to pay attention, and communities near these sites are asking fair questions — where will all this power come from, and how will it affect the grid or the environment? Building ever-bigger data centers isn’t enough; we need to make sure they’re sustainable and accepted by the public. The next challenge is feasibility. Supplying hundreds of megawatts to a single facility is no small task. In many regions, grid capacity is already stretched, and new connections take years to approve. Add the unpredictable nature of AI power spikes, and you’ve got a real engineering and planning problem on your hands. 
The only realistic path forward is to make data centers more flexible — to let them pull energy from different sources, balance loads dynamically, and even generate some of their own power on-site. That’s where ComAp’s systems come in. We help data center operators manage this complexity by making it simple to connect and control multiple energy sources — from renewables like solar or wind, to backup generators, to grid-scale connections. Our control systems allow operators to build hybrid setups that can adapt in real time, reduce emissions, and still keep reliability at 100%. Just as importantly, ComAp helps with the grid integration side. When a single data center can draw as much power as a small city, it’s no longer just a “consumer” — it becomes part of the grid ecosystem. Our technology helps make that relationship smoother, allowing these large sites to interact intelligently with utilities and maintain overall grid stability. And while today’s discussion is mostly around AC power, ComAp is already prepared for the DC future. The same principles and reliability that have powered AC systems for decades will carry over to DC-based data centers. We’ve built our solutions to be flexible enough for that transition — so operators don’t have to wait for the technology to catch up. In short, AI is driving a complete rethink of how data centers are powered. The demand and density will keep rising, and the pressure to stay responsible and sustainable will only grow stronger. The operators who succeed will be those who find smart ways to integrate different energy sources, keep efficiency high, and plan for the next generation of infrastructure. That’s the space where ComAp is making a real difference.
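The efficiency argument for high-voltage DC made in this episode comes down to fewer conversion stages between the grid and the chip. A rough, illustrative sketch (all stage efficiencies below are assumed round numbers for demonstration, not measured or vendor data) shows how chained conversions compound:

```python
# Illustrative only: assumed per-stage conversion efficiencies, not vendor data.

def chain_efficiency(stage_efficiencies):
    """Cumulative efficiency of power-conversion stages in series."""
    eff = 1.0
    for e in stage_efficiencies:
        eff *= e
    return eff

# Hypothetical traditional AC chain: UPS rectifier, inverter,
# PDU transformer, rack power supply.
ac_chain = [0.97, 0.96, 0.98, 0.94]
# Hypothetical 800 V DC chain: one front-end rectifier,
# one rack-level DC/DC stage.
dc_chain = [0.98, 0.97]

ac = chain_efficiency(ac_chain)
dc = chain_efficiency(dc_chain)
print(f"AC chain efficiency: {ac:.1%}")
print(f"DC chain efficiency: {dc:.1%}")

# For a 1 MW IT load, compare the input power each chain would draw.
load_mw = 1.0
extra_kw = load_mw * (1 / ac - 1 / dc) * 1000
print(f"Extra input power needed by the AC chain at 1 MW: {extra_kw:.0f} kW")
```

Even with these generous placeholder numbers, removing two conversion stages saves on the order of a hundred kilowatts per megawatt of IT load, which is the kind of margin driving the 800 V DC discussion.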
In this episode of the DCF Show podcast, Data Center Frontier Editor in Chief Matt Vincent sits down with Bill Severn, CEO of 1623 Farnam, to explore how the Omaha carrier hotel is becoming a critical aggregation hub for AI, cloud, and regional edge growth. A featured speaker on The Distributed Data Frontier panel at the 2025 DCF Trends Summit, Severn frames the edge not as a location but as the convergence of eyeballs, network density, and content—a definition that underpins Farnam’s strategy and rise in the Midwest. Since acquiring the facility in 2018, 1623 Farnam has transformed an underappreciated office tower on the 41st parallel into a thriving interconnection nexus with more than 40 broadband providers, 60+ carriers, and growing hyperscale presence. The AI era is accelerating that momentum: over 5,000 new fiber strands are being added into the building, with another 5,000 strands expanding Meet-Me Room capacity in 2025 alone. Severn remains bullish on interconnection for the next several years as hyperscalers plan deployments out to 2029 and beyond. The conversation also dives into multi-cloud routing needs across the region—where enterprises increasingly rely on Farnam for direct access to Google Central, Microsoft ExpressRoute, and global application-specific cloud regions. Energy efficiency has become a meaningful differentiator as well, with the facility operating below a 1.5 PUE, thanks to renewable chilled water, closed-loop cooling, and extensive free cooling cycles. Severn highlights a growing emphasis on strategic content partnerships that help CDNs and providers justify regional expansion, pointing to past co-investments that rapidly scaled traffic from 100G to more than 600 Gbps. Meanwhile, AI deployments are already arriving at pace, requiring collaborative engineering to fit cabinet weight, elevator limitations, and 40–50 kW rack densities within a non–purpose-built structure. 
As AI adoption accelerates and interconnection demand surges across the heartland, 1623 Farnam is positioning itself as one of the Midwest’s most important digital crossroads—linking hyperscale backbones, cloud onramps, and emerging AI inference clusters into a cohesive regional edge.
In this episode, Matt Vincent, Editor in Chief at Data Center Frontier, is joined by Rob Macchi, Vice President of Data Center Solutions at Wesco, to explore how companies can stay ahead of the curve with smarter, more resilient construction strategies. From site selection to integrating emerging technologies, Wesco helps organizations build data centers that are not only efficient but future-ready. Listen now to learn more!
In this episode of the Data Center Frontier Show, we sit down with Ryan Mallory, the newly appointed CEO of Flexential, following a coordinated October leadership transition from Chris Downie. Mallory outlines Flexential's strategic focus on the AI-driven future, positioning the company at the critical "inference edge" where enterprise CPU meets AI GPU. He breaks down the AI infrastructure boom into a clear three-stage build cycle and explains why the enterprise "killer app"—Agentic AI—plays directly into Flexential's strengths in interconnection and multi-tenant solutions.
We also dive into:
Power Strategy: How Flexential's modular, 36-72 MW build strategy avoids community strain and wins utility favor.
Product Roadmap: The evolution to Gen 5 and Gen 6 data centers, blending air and liquid cooling for mixed-density AI workloads.
The Bold Bet: Mallory's vision for the next 2-3 years, which involves "bending the physics curve" with geospatial energy and transmission to overcome terrestrial limits.
Tune in for an insightful conversation on power, planning, and the future of data center infrastructure.
On this episode of the Data Center Frontier Show, DartPoints CEO Scott Willis joins Editor in Chief Matt Vincent to discuss why regional data centers are becoming central to the future of AI and digital infrastructure. Fresh off his appearance on the Distributed Edge panel at the 2025 DCF Trends Summit, Willis breaks down how DartPoints is positioning itself in non-tier-one markets across the Midwest, Southeast, and South Central regions—locations he believes will play an increasingly critical role as AI workloads move closer to users. Willis explains that DartPoints’ strategy hinges on a deeply interconnected regional footprint built around carrier-rich facilities and strong fiber connectivity. This fabric is already supporting latency-sensitive workloads such as AI inference and specialized healthcare applications, and Willis expects that demand to accelerate as enterprises seek performance closer to population centers. Following a recent recapitalization with NOVA Infrastructure and Orion Infrastructure Capital, DartPoints has launched four new expansion sites designed from the ground up for higher-density, AI-oriented workloads. These facilities target rack densities from 30 kW to 120 kW and are sized in the 10–50 MW range—large enough for meaningful HPC and AI deployments but nimble enough to move faster than hyperscale builds constrained by long power queues. Speed to market is a defining advantage for DartPoints. Willis emphasizes the company’s focus on brownfield opportunities where utility infrastructure already exists, reducing deployment timelines dramatically. For cooling, DartPoints is designing flexible environments that leverage advanced air systems for 30–40 kW racks and liquid cooling for higher densities, ensuring the ability to support the full spectrum of enterprise, HPC, and edge-adjacent AI needs. Willis also highlights the importance of community partnership. 
DartPoints’ facilities have smaller footprints and lower power impact than hyperscale campuses, allowing the company to serve as a local economic catalyst while minimizing noise and aesthetic concerns. Looking ahead to 2026, Willis sees the industry entering a phase where AI demand becomes broader and more distributed, making regional markets indispensable. DartPoints plans to continue expanding through organic growth and targeted M&A while maintaining its focus on interconnection, high-density readiness, and rapid, community-aligned deployment. Tune in to hear how DartPoints is shaping the next chapter of distributed digital infrastructure—and why the market is finally moving toward the regional edge model Willis has championed.
In this episode of the Data Center Frontier Show, DCF Editor-in-Chief Matt Vincent speaks with Ed Nichols, President and CEO of Expanse Energy / RRPT Hydro, and Gregory Tarver, Chief Electrical Engineer, about a new kind of hydropower built for the AI era. RRPT Hydro’s piston-driven gravity and buoyancy system generates electricity without dams or flowing rivers—using the downward pull of gravity and the upward lift of buoyancy in sealed cylinders. Once started, the system runs self-sufficiently, producing predictable, zero-emission power. Designed for modular, scalable deployment—from 15 kW to 1 GW—the technology can be installed underground or above ground, enabling data centers to power themselves behind the meter while reducing grid strain and even selling excess energy back to communities. At an estimated Levelized Cost of Energy of $3.50/MWh, RRPT Hydro could dramatically undercut traditional renewables and fossil power. The company is advancing toward commercial readiness (TRL 7–9) and aims to build a 1 MW pilot plant within 12–15 months. Nichols and Tarver describe this moonshot innovation, introduced at the 2025 DCF Trends Summit, as a “Wright Brothers moment” for hydropower—one that could redefine sustainable baseload energy for data centers and beyond. Listen now to explore how RRPT Hydro’s patented piston-driven system could reshape the physics, economics, and deployment model of clean energy.
At this year’s Data Center Frontier Trends Summit, Honghai Song, founder of Canyon Magnet Energy, presented his company’s breakthrough superconducting magnet technology during the “6 Moonshot Trends for the 2026 Data Center Frontier” panel—showcasing how high-temperature superconductors (HTS) could reshape both fusion energy and AI data-center power systems. In this episode of the Data Center Frontier Show, Editor in Chief Matt Vincent speaks with Song about how Canyon Magnet Energy—founded in 2023, based in New Jersey, and rooted in Stony Brook University—is bridging fusion research and AI infrastructure through next-generation magnet and energy-storage technology. Song explains how HTS magnets, made from REBCO (Rare Earth Barium Copper Oxide), operate at 77 Kelvin with zero electrical resistance, opening the door to new kinds of super-efficient power transmission, storage, and distribution. The company’s SMASH (Superconducting Magnetic Storage Hybrid) system is designed to deliver instant bursts of energy—within milliseconds—to stabilize GPU-driven AI workloads that traditional batteries and grids can’t respond to fast enough. Canyon Magnet Energy is currently developing small-scale demonstration projects pairing SMES systems with AI racks, exploring integration with DC power architectures and liquid-cooling infrastructure. The long-term roadmap envisions multi-mile superconducting DC lines connecting renewables to data centers—and ultimately, fusion power plants providing virtually unlimited clean energy. Supported by an NG Accelerate grant from New Jersey, the company is now seeking data-center partners and investors to bring these technologies from the lab into the field.
Who is Packet Power?

Since 2008, Packet Power has been at the forefront of energy and environmental monitoring, pioneering wireless solutions that helped define the modern Internet of Things (IoT). Built on the belief that energy is the new cost frontier of computation, Packet Power enables organizations to understand exactly where, when, and how energy is used—and at what cost. As AI-driven workloads push energy demand to record levels, Packet Power’s mission of complete energy traceability has never been more critical. Their systems are trusted worldwide for providing secure, out-of-band monitoring that remains fully independent of operational data networks.

Introducing the All-New High-Density Power Monitor

Packet Power’s newest innovation, the High-Density Power Monitor, is redefining what’s possible in energy monitoring. At just under 6 cubic inches, it’s the smallest and most scalable multi-circuit power monitoring system on the market, capable of tracking 120 circuits in a space smaller than the inside of a standard light switch. The High-Density Power Monitor eliminates bulky hardware, complex wiring, and lengthy installations. It’s plug-and-play simple, seamlessly integrates with Packet Power’s EMX software or any third-party monitoring platform, and supports both wired and wireless connectivity—including secure, air-gapped environments.

Solving the Challenges of Modern Power Monitoring

The High-Density Power Monitor is engineered for the next generation of high-performance systems and facilities. It tackles five key challenges:

- Power Density: Monitors high-load environments with unmatched precision.
- Circuit Density: Tracks more circuits per module than any competitor.
- Physical Density: Fits anywhere, from PDUs to sub-panels to embedded devices.
- Installation Simplicity: Snaps into place—no tools, no complexity.
- Connection Flexibility: Wireless, wired, LAN, cloud, or cellular—mix and match freely.
Whether managing a single rack or thousands of devices, Packet Power ensures monitoring 1 device is as easy as monitoring 1,000.

Why It Matters Now

Today’s computing environments are experiencing an energy density arms race—with systems consuming megawatts of power in a single cabinet. New cooling methods, extreme power densities, and evolving form factors demand monitoring solutions that can keep up. Packet Power’s new High-Density Power Monitor meets that challenge head-on, offering the scalability, adaptability, and visibility needed to manage energy use in the AI era.

Perfect for Any Application

This solution is ideal for:

- High-density servers and compute cabinets
- Distribution panels, PDUs, and busway components
- Embedded monitoring in OEM systems
- Large-scale deployments requiring fleet-level simplicity
- And more!

Whether for new installations or retrofits of existing buildings, Packet Power systems deliver vendor-agnostic integration and proven scalability, with unmatched turn times and products Made in the USA for BABA compliance.

Learn More!

Discover the true meaning of small & mighty:
👉 Visit PacketPower.com/high-density-power-monitor
📧 Contact sales@packetpower.com
In this episode of The Data Center Frontier Show, DCF Editor-in-Chief Matt Vincent talks with Yuval Boger, Chief Commercial Officer at QuEra Computing, about the fast-evolving intersection of quantum and AI-accelerated supercomputing. QuEra, a Boston-based pioneer in neutral-atom quantum computers, recently expanded its $230 million funding round with new investment from NVentures (NVIDIA’s venture arm) and announced a Nature-published breakthrough in algorithmic fault tolerance that dramatically cuts runtime overhead for error-corrected quantum algorithms. Boger explains how QuEra’s systems, operating at room temperature and using identical rubidium atoms as qubits, offer scalable, power-efficient performance for HPC and cloud environments. He details the company's collaborations with NVIDIA, AWS, and global supercomputing centers integrating quantum processors alongside GPUs, and outlines why neutral-atom architectures could soon deliver practical, fault-tolerant quantum advantage. Listen as Boger discusses QuEra’s technology roadmap, market position, and the coming inflection point where hybrid quantum-classical systems move from the lab into the data center mainstream.
Matt Vincent, Editor-in-Chief of Data Center Frontier, sits down with Angela Capon, Vice President of Marketing at EdgeConneX, to discuss the groundbreaking collaboration between EdgeConneX and the Duke of Edinburgh's International Award Program.
Charting the Future of AI Storage Infrastructure

In this episode, Solidigm Director of Strategic Planning Brian Jacobosky guides listeners through a tech-forward conversation on how storage infrastructure is helping redefine the AI-era data center. The discussion frames storage as more than just a cost factor; it is also a strategic building block for performance, efficiency, and savings.

Storage Moves to the Center of AI Data Infrastructure

Jacobosky explains how, in the AI-driven era, storage is being elevated from an afterthought measured in “dollars per gigabyte” to a core priority: maximizing GPU utilization, managing soaring power draw, and unlocking space savings. He illustrates how every watt and every square inch counts. As GPU compute scales dramatically, storage efficiency is being engineered to enable maximum density and throughput.

High-Capacity SSDs as a Game-Changer

Jacobosky spotlights Solidigm D5-P5336 122TB SSDs as emblematic of the shift. Rather than a simple technical refresh, these drives represent a tectonic realignment in how data centers are being designed for huge capacity and optimized performance. With all-flash deployments offering up to nine times the space savings compared to hybrid architectures, Jacobosky underscores how SSD density can enable more GPU scale within fixed power and space budgets. This could even put a 1-petabyte SSD within reach by the end of the decade.

Embedded Efficiency

The episode brings environmental considerations to the forefront. Jacobosky shares how an all-SSD strategy can dramatically shrink physical footprints as well as energy consumption. From data center buildout through end-of-lifecycle drive retirement, efficiency is driving both operational cost savings and ESG benefits — helping reduce concrete and steel usage, power draw, and e-waste.
Pioneering Storage Architectures and Cooling Innovation

Listeners learn how AI-first innovators like neocloud-style providers and sovereign AI operators lead the charge in deploying next-generation storage. Jacobosky also previews the Solidigm PS-1010 E1.S form factor, a solution for fanless NVIDIA servers that integrates direct-to-chip, cold-plate-cooled SSDs into GPU servers. He predicts that this systems-level integration will become a standard for high-density AI infrastructure.

Storage as a Strategic Investment

Solidigm challenges the notion that high-capacity storage is cost-prohibitive. Within the framework of the AI token economy, Jacobosky explains, the true measures become cost per token and time to first token. When storage is optimized for performance, capacity, and efficiency, he argues, the total cost of ownership (TCO) will often prove favorable after the first evaluation.

Looking Ahead: Memory Wall, Inference Workloads, Liquid Cooling

Jacobosky ends with a look at where storage innovation will lead in the next five years. As AI models grow in size and complexity, he argues, storage is increasingly acting as an extension of memory, breaking through the “memory wall” for large inference workloads. Companies will design infrastructure from the ground up with liquid cooling, future-scalable storage, and support for massive model deployments without compromising latency.

This episode is essential listening for data center architects, AI infrastructure strategists, and sustainability leaders looking to understand how storage is fast becoming a defining factor in the AI-ready data centers of the future.
Florida is emerging as one of the most promising new frontiers for data center growth — combining power availability, policy alignment, and strategic geography in ways that mirror the early success of Northern Virginia. In this episode of The Data Center Frontier Show, Editor-in-Chief Matt Vincent sits down with Buddy Rizer, Executive Director of Loudoun County Economic Development, and Lila Jaber, Founder of Florida’s Women in Energy Leadership Forum and former Chair of the Florida Public Service Commission. Together, they explore how Florida is building the foundation for large-scale digital infrastructure and AI data center investment.

Episode Highlights:

- Energy Advantage: While Loudoun County faces a 600-megawatt deficit and rising demand, Florida enjoys excess generation capacity, proactive utilities, and growing renewable integration. Utilities like FPL and Duke Energy are preparing for hyperscale and AI-driven loads with new tariff structures and grid-hardening investments.
- Tax Incentives & Workforce: Florida’s extended data center sales tax exemption through 2037 and its raised 100-megawatt IT load threshold signal a commitment to hyperscale development. The state’s universities and workforce programs are aligned with this tech growth, producing top talent in engineering and applied sciences.
- Strategic Location: As a digital gateway to Latin America and the Caribbean, Florida’s connectivity advantage—especially around Miami—is attracting hyperscale and AI operators looking to expand globally.
- Market Outlook: Industry insiders predict that within the next year, a major data center player will establish a significant footprint in Florida. Multiple campuses are expected to follow, driven by the state’s power resilience, policy stability, and collaborative approach between utilities, developers, and government leaders.
Why It Matters: Florida’s combination of energy abundance, policy foresight, and strategic geography positions it as the next great growth market for digital infrastructure and AI-ready data centers in North America.
This podcast explores the rapidly evolving thermal and water challenges facing today’s data centers as AI workloads push rack densities to unprecedented levels. The discussion highlights the risks and opportunities tied to liquid cooling—from pre-commissioning practices and real-time monitoring to system integration and water stewardship. The episode shows how Ecolab’s innovative approaches to thermal management can not only solve operational constraints but also deliver competitive advantage by improving efficiency, reducing resource consumption, and strengthening sustainability commitments.
Join Bill Tierney of The Data Center Construction Alliance as he discusses some of the emerging challenges facing data center development today. Topics include how increasing collaboration between OEMs, owners, contractors, and subcontractors is leading to exciting and innovative solutions in the design and construction of data centers. He also shares examples of how collaboration has led to new ideas and methodologies in the field.
AI networks are driving dramatic changes in data center design, especially around power, cooling, and connectivity. Modern GPU-powered AI data centers require far more energy and generate much more heat than traditional CPU-based setups, pushing cabinets to new power densities and necessitating advanced cooling solutions like liquid direct-to-chip cooling. These environments also demand significantly more fiber cabling to handle increased data flows, with deeper cabinets and complex layouts that make traditional rear-access cabling impractical.
In this DCF Trends-Nomads at the Summit Podcast episode, the hosts from Data Center Frontier and Nomad Futurist sit down with Adrienne Pierce, CEO of New Sun Road, to explore the emerging frontier of sovereign and renewable energy solutions for modular data center deployment. With over 1,500 microgrids under management via the company’s Stellar platform, Pierce brings a field-tested perspective on how flexible, AI-driven energy controls can empower edge and sub-10 MW data center systems—especially in regions where traditional grid infrastructure can’t keep up with AI-era demands. This discussion dives into the real-world opportunities for modular, microgrid-powered data centers to unlock new markets, reduce energy costs, and create more resilient and autonomous compute infrastructure at the edge and beyond. Expect sharp insights into what it means to decouple data center growth from utility bottlenecks—and how the right energy intelligence can accelerate both sustainability and scalability.
In this DCF Trends-Nomads at the Summit Podcast episode, the hosts of Data Center Frontier and Nomad Futurist sit down with UVA Darden MBA candidates Tosin Fashola and Albert Odum for an energizing conversation about next-generation data infrastructure—and why they believe Africa is poised to be its future epicenter. With professional backgrounds spanning data center strategy at KPMG and government-led implementations in Ghana, Tosin and Albert bring fresh, globally-minded perspectives on AI infrastructure, regional power strategy, and the role of connectivity in economic transformation. Expect a wide-ranging dialogue on the untapped potential of African markets, the roadmap to building sovereign cloud capacity and IXPs, and how a new generation of leaders is preparing to close the global digital divide—one hyperscale project at a time.
In this DCF Trends-Nomads at the Summit Podcast episode, Data Center Frontier editors and Nomad Futurist hosts sit down with Greg Stover, Vertiv’s Global Director of Hi-Tech Development. The discussion delves into Stover’s work at the intersection of advanced cooling technologies, hyperscale growth, and AI-driven infrastructure design. Drawing on his experience guiding Vertiv’s strategy for high-density deployments, liquid cooling adoption, and close collaboration with hyperscalers and chipmakers, Stover offers a forward-looking perspective on how evolving compute architectures, thermal management innovations, and market forces are redefining the competitive edge in the data center industry.
In this DCF Trends-Nomads at the Summit Podcast episode, the ever-curious, future-focused podcast hosts from Data Center Frontier and Nomad Futurist reunite with Infrastructure Masons CEO Santiago Suinaga for a timely, in-depth follow-up to his impactful debut on the DCF Show. With AI infrastructure growth hitting warp speed, the conversation digs deeper into Suinaga’s vision for how the digital infrastructure community can scale responsibly—without losing sight of net zero goals, workforce development, or supply chain accountability. Expect a candid, high-level exchange on emerging regulatory pressures, the embodied carbon challenge, and why flexible cooling and modular design must be table stakes for the AI-powered data center of the future. Suinaga also shares the latest on iMasons’ Climate Accord, job-matching platform, and new cross-sector partnerships—all aimed at fostering sustainability, equity, and innovation in an industry racing to keep pace with exponential demand.
In this DCF Trends-Nomads at the Summit Podcast episode, Chris James, CEO of NoesisAI, delivers a sweeping, insight-rich overview of how different classes of AI models—from LLMs and RAG to vision AI and scientific workloads—are driving a new wave of infrastructure decisions across the data center landscape. With a sharp focus on the diverging needs of training vs. inference, James breaks down what it takes to support today’s AI—from GPU-intensive clusters with high-speed interconnects and liquid cooling to inference-optimized, edge-deployed accelerators. He also explores the rapidly shifting hardware ecosystem, including the rise of custom silicon, heterogeneous computing, and where the battle between NVIDIA, AMD, Intel, and hyperscaler-designed chips is headed. Whether you're designing for scalability, sustainability, or the bleeding edge, this conversation offers a field guide to the infrastructure behind intelligent computing.
In this DCF Trends-Nomads at the Summit Podcast episode, the Data Center Frontier and Nomad Futurist hosts engage in a dynamic, behind-the-scenes conversation with two of the most influential voices shaping digital infrastructure communications: Illisa Miller, founder of iMiller PR, and Adam Waitkunas, founder of Milldam PR. With decades of experience guiding some of the industry's most prominent brands through launches, crises, and rebranding efforts, Miller and Waitkunas offer an unfiltered look at what it really takes to cut through the noise in a crowded, technically complex market. From telling the right story about AI and sustainability, to building trust across hyperscalers, investors, and public stakeholders, this episode explores the evolving narrative demands of the data center space—and why strategic communications is now mission-critical to business success. Expect honest reflections, practical PR wisdom, and a few war stories from the front lines of digital infrastructure storytelling.
In this DCF Trends-Nomads at the Summit Podcast episode, the editors of Data Center Frontier and the hosts of Nomad Futurist sit down with Lovisa Tedestedt, Sales Executive at Schneider Electric, where she focuses on colocation accounts. With more than 25 years of leadership roles in international sales management and business development, Lovisa has built a career defined by strong client relationships, bold growth strategies, and a passion for delivering excellence. From Sweden to the U.S. to China, Lovisa brings a truly global perspective to the data center industry. In this conversation, she shares insights on strategic planning, high-stakes negotiations, and the importance of adaptability in today’s fast-changing market. Beyond her career, Lovisa talks about life outside of work as an avid hockey mom, now based in Des Moines, Iowa with her husband and two adult children in college. Join us for a conversation that blends global business lessons, sales leadership, and the personal side of a career in the digital infrastructure world.
In this DCF Trends-Nomads at the Summit Podcast episode, the editors of Data Center Frontier and the hosts of Nomad Futurist sit down with Doug Recker, a telecommunications veteran and edge data center pioneer with more than 30 years of industry leadership. Today, Recker leads Duos Edge AI, driving initiatives to bring multi-access edge data centers (EDCs) to underserved communities, including schools and health facilities across the U.S. From founding Edge Presence (acquired by Ubiquity in 2023) and Colo5 Data Centers (later acquired by Cologix in 2014), to deploying more than 40 TM2500 units worldwide, Recker has consistently been at the forefront of building scalable, resilient infrastructure. His career is marked by multiple honors, including Northeast Florida’s Ultimate CEO Award and recognition among Inc. 500’s fastest-growing companies. In this conversation, Recker shares insights on the evolution of edge computing, lessons learned from decades in telecom and data centers, and how his time in the U.S. Marine Corps shaped his leadership philosophy. Tune in for a wide-ranging discussion on innovation, resilience, and the future of edge AI.
In this DCF Trends-Nomads at the Summit Podcast episode, Matt Grandbois, Vice President at AirJoule, introduces a game-changing approach to one of the data center industry’s most pressing challenges: water sustainability. As power-hungry, high-density environments collide with growing water scarcity concerns, Grandbois lays out a compelling vision for water-positive data centers—facilities that produce more water than they consume. Leveraging AirJoule’s advanced atmospheric water harvesting technology, he explains how waste heat, typically seen as a problem to mitigate, can become a valuable resource for onsite water generation. From adiabatic cooling and humidification to local water replenishment, this conversation opens up new possibilities for sustainable design, reduced PUE, and location flexibility—redefining what it means for data centers to be responsible community partners.
Speakers:

- Mike Klassen, Director of Business Development, ZincFive
- Sugam Patel, VP of Product Management, DG Matrix

In this DCF Trends-Nomads at the Summit Podcast episode, experts from ZincFive and DG Matrix unpack how medium voltage (MV) UPS architectures are redefining the way data centers power up for AI. As AI densification pushes traditional infrastructure to its limits, MV UPS solutions offer a path forward—boosting efficiency, reducing heat and losses, and reclaiming floor space for compute. The conversation delves into how higher voltage translates into smarter, more scalable designs that not only meet the demands of today's high-performance AI workloads but also future-proof facilities for what's coming next. From design frameworks to deployment strategies, Klassen and Patel provide a grounded, technical look at the UPS shift already underway.
In this episode of the DCF Trends–Nomads at the Summit podcast, we bring together two dynamic voices shaping the future of digital infrastructure: Melissa Farney, Editor at Large for Data Center Frontier and board member of the Nomad Futurist Foundation, and Bill Kleyman, Contributing Editor for Data Center Frontier and CEO of Apolo, who also serves as a member of the Nomad Futurist Foundation. Melissa and Bill join up for a candid discussion on the biggest trends transforming the data center and digital ecosystem. From AI-driven growth and sustainability challenges to the human capital needed to sustain the industry’s rapid expansion, they share a unique blend of editorial perspective and executive experience. This episode also dives into the mission of the Nomad Futurist Foundation: inspiring and equipping the next generation of leaders in the digital infrastructure space. Listeners will gain insights not just into market shifts, but also into the values and vision shaping the future of the field. Tune in for an engaging conversation at the intersection of thought leadership, industry transformation, and the mission to build a more resilient, inclusive digital future.
Speakers:

- Joseph Ford, Senior Associate – Technology, Bala Consulting Engineers
- Eric Klaiber, Data Center Design Manager, Bala Consulting Engineers

In this DCF Trends-Nomads at the Summit Podcast episode, Joseph Ford and Eric Klaiber of Bala Consulting Engineers offer a consultant engineer’s hard-won perspective on the complex realities of designing infrastructure for hyperscale, MTDC, and wholesale data centers. Drawing on years of field experience, they dig into the nuanced choreography required to align incoming duct banks, meet-me room layouts, and overlapping network systems—all while staying within the spatial constraints driven by power and cooling demands. This candid conversation highlights what it really takes to create design harmony across client expectations, design teams, and contractors, with insights into space planning, coordination strategy, and the delicate balance of infrastructure coexistence that underpins modern high-performance facilities.
In this DCF Trends-Nomads at the Summit Podcast episode, Data Center Frontier and Nomad Futurist hosts sit down with Bob Cassiliano, Chairman & CEO of 7x24 Exchange International, for a wide-ranging conversation on the state of mission-critical infrastructure and the evolving challenges facing the data center industry. As the leader of one of the most influential organizations in the space, Cassiliano offers a national perspective on power constraints, workforce development, sustainability pressures, and the cultural shifts reshaping operations and leadership across the digital infrastructure landscape. The discussion also highlights how 7x24 Exchange continues to serve as a vital convening force for collaboration, education, and resilience in an industry tasked with powering the AI era. With decades of insight and a pulse on what’s next, Cassiliano shares where the data center sector must go to meet the moment.
AI has pushed liquid cooling from a niche technology to a critical requirement for high-density data centers. In this episode, Pat McGinn, COO and President of CoolIT Systems, shares why AI is driving liquid cooling from optional to essential. He explains how CoolIT helps customers deliver AI systems at speed and scale through proven capacity, modular solutions, and dedicated engineering support. Listeners will gain insight into the trends shaping adoption, examples of customer success, and what the future holds for high-performance and sustainable cooling.
In this episode, we’re joined by Justin Loritz, Product Manager for Large Diesel at Rehlko, to explore how the company is redefining the role of a manufacturer in today’s dynamic data center landscape. Rehlko isn’t just delivering equipment; it’s delivering answers. As Justin shares, Rehlko’s philosophy centers on being a true solutions provider: collaborating early, working through complexity, and staying flexible to meet each customer’s unique challenges. Whether it’s identifying alternative components, navigating supply constraints, or designing systems that meet aggressive density and uptime requirements, Rehlko’s engineers partner closely with customers to ensure no detail is overlooked. Their process is driven by a deep understanding of the application, operational goals, and broader market context, allowing them to fine-tune specifications and avoid missteps that could compromise performance or timelines.

Justin also discusses how this proactive, collaborative mindset extends beyond the customer relationship. By engaging with industry organizations like iMasons and contributing to shared challenges, like power availability and infrastructure strain, Rehlko helps move the entire ecosystem forward.

Key discussion points include:

- What it means to be a solutions provider in a high-demand, high-stakes environment
- How Rehlko engineers collaborate to solve challenges before they impact project delivery
- Why deep application knowledge is essential to right-sizing designs and avoiding over- or under-specification
- How industry collaboration is key to unlocking new energy strategies, sourcing approaches, and long-term resilience

For data center leaders navigating rising demand and tighter constraints, this episode highlights how Rehlko’s engineering-first, collaboration-driven approach is helping customers stay ahead, delivering smarter, more resilient infrastructure for the AI-powered future.
As artificial intelligence (AI) reshapes the data center landscape, power resiliency is being tested like never before. With enormous new facilities coming online and operators exploring alternatives to diesel, the backup power market is at an inflection point. In this episode of the Data Center Frontier Show, we sit down with Ricardo Navarro, Vice President of Global Solutions at Generac Power Systems, to discuss how the company is positioning itself as a major player in the data center ecosystem.

Diesel Still Reigns — For Now

Navarro begins by addressing the foundational question: why diesel remains the primary backup power choice for hyperscale and AI workloads. The answer, he explains, comes down to density, responsiveness, and reliability. Diesel engines respond instantly to the fluctuating loads that are common in AI training clusters, and fuel can be stored directly on-site. While natural gas is gaining traction as a bridging and utility-support solution, true redundancy requires dual pipelines — a level of infrastructure not yet common in data center deployments.

That said, Navarro is clear that the story doesn’t end with diesel. He sees a future where natural gas, paired with batteries, becomes a cost-effective and environmentally attractive option. Hybrid systems, combined with demand response and grid participation programs, could give operators new tools for balancing reliability and sustainability. “Natural gas might not be the right solution right now, but definitely it will be in the future,” Navarro notes.

Scaling Fast to Meet Hyperscaler Demands

The conversation also explores how hyperscalers are shaping requirements. With campuses needing hundreds of generators, customers are asking not just about product performance, but about scale, lead times, and support. Generac is addressing that demand by delivering open sets in as little as 30 to 35 weeks — about a third of the wait time from traditional OEMs.
That speed-to-deployment advantage has driven significant new interest in Generac across the hyperscale sector.

From Generators to Energy Technology

Equally important is Generac’s shift toward digital tools and predictive services. Over the past decade, the company has invested in acquisitions such as Deep Sea Electronics, Blue Pillar, and Off Grid Energy, expanding its expertise in controls, telemetry, and microgrid integration. Today, Generac is layering advanced sensors, machine learning, and AI-driven analytics onto its equipment fleet, enabling predictive failure detection, condition-based maintenance, and smarter load orchestration. This evolution, Navarro explains, represents Generac’s transformation “from being just a generator manufacturer to being an energy technology company.”

What’s Next for Generac

Looking ahead, the company is putting real capital behind its ambitions. Generac recently completed a $130 million facility in Beaver Dam, Wisconsin, designed to expand production capacity and meet surging demand from data center customers. With firm domestic and international orders already in place, Navarro says the company is determined “to be in the driver’s seat” as AI accelerates the need for scalable, resilient, and flexible backup power.

For data center leaders, this episode provides a clear look into how backup power strategies are evolving — and how one of the industry’s largest players is preparing for the next wave of energy and infrastructure challenges.
Columbus Hosts First Nvidia HGX B200 AI Cluster, Scaling AI at the Aggregated Edge

In this episode of the Data Center Frontier Show, Matt Vincent sits down with Bill Bentley (Cologix) and Ken Patchett (Lambda) to discuss Columbus, Ohio’s first Nvidia HGX B200 AI cluster deployment. The conversation dives into:

- Why Columbus is emerging as a strategic hub for AI workloads in the Midwest.
- How Lambda’s one-click clusters and Cologix’s interconnection-rich campus enable rapid provisioning, low-latency inference, and scalable enterprise AI.
- Flexible GPU consumption models that lower entry barriers for startups and allow enterprises to scale efficiently.
- Innovations in energy efficiency, cooling, and sustainability as data centers evolve to meet the demands of modern AI.
- The impact on regional industries like healthcare, manufacturing, and logistics—and why this deployment is a repeatable playbook for future AI clusters.

Join us to hear how AI is being brought closer to the point of need, transforming the Midwest into a next-generation AI infrastructure hub.
Artificial intelligence is changing the data center industry faster than anyone anticipated. Every new wave of AI hardware pushes power, density, and cooling requirements to levels once thought impossible — and operators are scrambling to keep pace. In this episode of the Data Center Frontier Show, Schneider Electric’s Steven Carlini joins us to unpack what it really means to build infrastructure for the AI era.

Carlini explains how the conversation around density has shifted in just a year: “Last year, everyone was talking about the one-megawatt rack. Now densities are approaching 1.5 megawatts. It’s moving that fast, and the infrastructure has to keep up.” These rapid leaps in scale aren’t just about racks and GPUs. They represent a fundamental change in how data centers are designed, cooled, and powered.

The discussion dives into the new imperatives for AI-ready facilities:

- Power planning that anticipates explosive growth in compute demand.
- Liquid and hybrid cooling systems capable of handling extreme densities.
- Modularity and prefabrication to shorten build times and adapt to shifting hardware generations.
- Sustainability and responsible design that balance innovation with environmental impact.

Carlini emphasizes that operators can’t treat these as optional upgrades. Flexibility, efficiency, and sustainability are now prerequisites for competitiveness in the AI era. Looking beyond hardware, Carlini highlights the diversity of AI workloads — from generative models to autonomous agents — that will drive future requirements. Each class of workload comes with different power and latency demands, and data center operators will need to build adaptable platforms to accommodate them.

At the Data Center Frontier Trends Summit last week, Carlini expanded further on these themes, offering insights into how the industry can harness AI “for good” — designing infrastructure that supports innovation while aligning with global sustainability goals.
His message was clear: the choices operators make now will shape not just business outcomes, but the broader environmental and social impact of the AI revolution. This episode offers listeners a rare inside look at the technical, operational, and strategic forces shaping tomorrow’s data centers. Whether it’s retrofitting legacy facilities, deploying modular edge sites, or planning new greenfield campuses, the challenge is the same: prepare for a future where compute density and power requirements continue to skyrocket. If you want to understand how the world’s digital infrastructure is evolving to meet the demands of AI, this conversation with Steven Carlini is essential listening.
Are you facing challenges with Edge Computing in your organization? Join us as we explore how Penguin Solutions’ Stratus ztC Edge platform, combined with Kubernetes management, creates a powerful, low-maintenance Edge Computing solution. Learn how to: Leverage Kubernetes for scalable, resilient Edge Computing. Simplify edge management with automated tools. Implement robust security strategies. Integrate Kubernetes with legacy operations. This podcast is ideal for IT leaders and engineers looking to optimize their Edge Computing infrastructure with cutting-edge tools and practices.
In this episode of the Data Center Frontier Show podcast, we sit down with Martin Renkis, Executive Director of Global Alliances for Sustainable Infrastructure at Johnson Controls, to explore how Data Center Cooling as a Service (DCCaaS) is changing the way operators think about risk, capital, and sustainability. Johnson Controls has delivered guaranteed infrastructure services for over 40 years, shifting cooling from a CAPEX burden to an OPEX model. The company designs, builds, operates, and maintains systems under long-term agreements that transfer performance risk away from the operator. Key to the model is AI-driven optimization through platforms like OpenBlue, paired with financial guarantees tied directly to customer-defined KPIs. A joint venture with Apollo Group (Ionic Blue) also provides flexible financing, freeing up capital for land or expansion. With rising rack densities and unpredictable AI factory demands, Renkis says cooling-as-a-service offers “a financially guaranteed safety net” that adapts to change while advancing sustainability goals. Listen now to learn how Johnson Controls is redefining cooling for the AI era.
As AI workloads reshape the data center landscape, speed to power has overtaken sustainability as the top customer demand. On this episode of the Data Center Frontier Show, Editor-in-Chief Matt Vincent talks with Brian Melka, CEO of Rehlko (formerly Kohler Energy), about how the century-old power company is helping operators scale fast, stay reliable, and meet evolving energy challenges. Melka shares how Rehlko is quadrupling production, expanding its in-house EPC capabilities, and rolling out modular power blocks through its Wilmott/Wiltech acquisition to accelerate deployments and system integration. The discussion also covers the balance between diesel reliability and greener alternatives like HVO fuel, hybrid power systems that combine batteries and engines, and strategies for managing noise, emissions, and footprint in urban sites. From rooftop generator farms in Paris to 100MW hyperscale builds, Rehlko positions itself as a technology-agnostic partner for the AI era. Listen now to learn how the company is helping the data center industry move faster, smarter, and more sustainably.
Smarter Security Starts with Key & Equipment Management In data centers, physical access control is just as critical as cybersecurity. Intelligent key and equipment management solutions help safeguard infrastructure, reduce risk, and improve efficiency — all while supporting compliance. Key Benefits: Enhanced Security – Restrict access to authorized personnel only Audit Trails – Track every access event for full accountability Operational Efficiency – Eliminate manual tracking and delays Risk Reduction – Prevent loss, misuse, or unauthorized access System Integration – Connect with access, video, and visitor tools Regulatory Support – Comply with ISO 27001, SOC 2, HIPAA & more A smart solution for a high-stakes environment — because in the data center world, every detail matters.
New DCF Podcast Episode Breaks Down the Real Work Behind Energy and Emissions Metrics In the latest episode of the Data Center Frontier Podcast, Editor-in-Chief Matt Vincent sits down with Jay Dietrich, Research Director of Sustainability at Uptime Institute, to examine what real sustainability looks like inside the data center — and why popular narratives around net zero, offsets, and carbon neutrality often obscure more than they reveal. Over the course of a 36-minute conversation, Dietrich walks listeners through Uptime’s expanding role in guiding data center operators toward measurable sustainability outcomes — not just certifications, but operational performance improvements at the facility level.
In this episode of the Data Center Frontier Show, Editor-in-Chief Matt Vincent speaks with LiquidStack CEO Joe Capes about the company’s breakthrough GigaModular platform — the industry’s first scalable, modular Coolant Distribution Unit (CDU) purpose-built for direct-to-chip liquid cooling. With rack densities accelerating beyond 120 kW and headed toward 600 kW, LiquidStack is targeting the real-world requirements of AI data centers while streamlining complexity and future-proofing thermal design. “AI will keep pushing thermal output to new extremes,” Capes tells DCF. “Data centers need cooling systems that can be easily deployed, managed, and scaled to match heat rejection demands as they rise.” LiquidStack's new GigaModular CDU, unveiled at the 2025 Datacloud Global Congress in Cannes, delivers up to 10 MW of scalable cooling capacity. It's designed to support single-phase direct-to-chip liquid cooling — a shift from the company’s earlier two-phase immersion roots — via a skidded modular design with a pay-as-you-grow approach. The platform’s flexibility enables deployments at N, N+1, or N+2 resiliency. “We designed it to be the only CDU our customers will ever need,” Capes says. Tune in to listen to the whole discussion, which goes on to explore why edge infrastructure and EV adoption will drive the next wave of sector innovation.
Every second an AI-enabled data center operates, it produces massive amounts of heat. Cooling has often been treated separately from the heat it removes, and for years that is how systems were built. In most facilities, waste heat is managed, properly expelled, and then forgotten. The data center itself may not need that heat, but the question arises: where else could this energy be put to use? What if data centers, and the systems and institutions around them, viewed energy use differently? Rather than focusing only on a data center’s enormous power demands, let’s recognize that data centers are part of a larger energy network, capable of giving back through the recovery and redistribution of thermal waste. The pursuit of heat reuse drives technological advancement in data center cooling and energy management systems. Recovering waste heat isn’t just a matter of technology and hardware: systems need to run smoothly, and uptime is critical. Done well, heat reuse can yield more efficient and sustainable technologies that benefit not only data centers but the communities they operate within, creating a symbiotic relationship. Join Trane® expert Esti Tierney as she explores critical considerations for enabling heat reuse as part of the circular economy. Esti will discuss high-performance computing’s growing impact on heat production, the importance of a holistic view of thermal management, and why it matters to collaborate with community stakeholders and plan a heat redistribution strategy early. Heat reuse in data centers is a crucial aspect of modern energy management and sustainability practice, offering benefits that extend beyond immediate operational efficiencies. Designing for optimized energy efficiency and recovering waste heat isn’t just about saving money: the ability to reduce demand on the grid will be critical for everyone, today and into the future.
As server densities increase and next-generation chips push power demands ever higher, waste heat is no longer a byproduct to manage — it's power waiting to be harnessed.
As AI reshapes the digital infrastructure landscape, data center design is evolving at every level. In this episode of the Data Center Frontier Show, we sit down with JP Buzzell, Eaton’s VP and Data Center Chief Architect, and Doug Kilgariff, Strategic Accounts Manager, to explore the key shifts driving the next generation of compute environments. Topics include: Purpose-built vs. retrofit approaches to AI infrastructure. Liquid cooling requirements for GPU clusters. Modular power design and construction. Behind-the-meter energy strategies. Data center workforce shortages. Eaton’s evolving role and insights from its Data Center Vision event. From rethinking site selection to solving for stranded assets and building talent pipelines, Buzzell and Kilgariff provide a practical, forward-looking view on the forces shaping AI-era data centers. Listen now to get the inside track on powering the future of AI infrastructure.
In this wide-ranging conversation, EdgeCore Digital Infrastructure CEO Lee Kestler joins the Data Center Frontier Show to discuss how the company is navigating the AI-fueled demand wave with a focused, disciplined strategy. From designing water-free campuses in the Arizona desert to long-term utility partnerships and a sober view on nuclear and behind-the-meter power, Kestler lays out EdgeCore’s pragmatic path through today’s high-pressure data center environment. He also shares insights on the misunderstood public perception of data centers, and why EdgeCore is investing not just in infrastructure, but in the communities where it builds.
In this episode of the Data Center Frontier Show, we explore CoreSite’s strategic acquisition of the Denver Gas and Electric Building, widely regarded as the most network-dense facility in the Rocky Mountain region. Now the sole owner and operator of the DE1 data center housed within the historic building, CoreSite is doubling down on its interconnection strategy and reshaping the future of Denver’s cloud and network ecosystem. Podcast guests Yvonne Ng, CoreSite’s Central Region General Manager, and Adam Post, SVP of Finance and Corporate Development, discuss how the acquisition enables CoreSite to simplify access to the Google Cloud Platform onramp and supercharge the Any2Denver peering exchange. The deal also adds over 100 interconnection-rich customers to CoreSite’s portfolio and sets the stage for a broader Denver campus strategy, including the under-construction DE3 facility built for AI-scale workloads. The conversation explores key themes around modernizing legacy carrier hotels for high-density computing, integrating newly acquired customers, and how CoreSite, backed by parent company American Tower, is evaluating similar interconnection-focused acquisitions in other metro markets. This is a timely deep dive into how legacy infrastructure is being reimagined to meet AI, multicloud, and edge computing demands. Denver is now positioned as a cloud peering hotspot, and CoreSite is at the center of the story.
The digital geography of America is shifting, and in Wichita, Kansas, that shift just became tangible. In a groundbreaking ceremony this spring, Connected Nation and Wichita State University launched construction on the state’s first carrier-neutral Internet Exchange Point (IXP), a modular facility designed to serve as the heart of regional interconnection. When completed, the site will create the lowest-latency, highest-resilience internet hub in Kansas, a future-forward interconnection point positioned to drive down costs, enhance performance, and unlock critical capabilities for cloud and AI services across the Midwest. In this episode of The Data Center Frontier Show podcast, I sat down with two of the leaders behind this transformative project: Tom Ferree, Chairman and CEO of Connected Nation (CN), and Hunter Newby, co-founder of CNIXP and a veteran pioneer of neutral interconnection infrastructure. Together, they outlined how this facility in Wichita is more than a local improvement: it’s a national proof-of-concept. “This is a foundation,” Ferree said. “We are literally bringing the internet to Wichita, and that has profound implications for performance, equity, and future participation in the digital economy.” A Marriage of Mission and Know-How The Wichita IXP is being developed by Connected Nation Internet Exchange Points, LLC (CNIXP), a joint venture between the nonprofit Connected Nation and Hunter Newby’s Newby Ventures. The project is supported by a $5 million state grant from Governor Laura Kelly’s broadband infrastructure package, with Wichita State providing a 40-year ground lease adjacent to its Innovation Campus. For Ferree, this partnership represents a synthesis of purpose. “Connected Nation has always been about closing the digital divide in all its forms, geographic, economic, and educational,” he explained.
“What Hunter brings is two decades of experience in building and owning carrier-neutral interconnection facilities, from New York to Atlanta and beyond. Together, we’ve formed something that’s not only technically rigorous, but mission-aligned.” “This isn’t just a building,” Ferree added. “It’s a gateway to economic empowerment for communities that have historically been left behind.” Closing the Infrastructure Gap Newby, who’s built and acquired more than two dozen interconnection facilities over the years, including 60 Hudson Street in New York and 56 Marietta Street in Atlanta, said Wichita represents a different kind of challenge: starting from scratch in a region with no existing IXP. “There are still 14 states in the U.S. without an in-state Internet exchange,” he said. “Kansas was one of them. And Wichita, despite being the state’s largest city, had no neutral meetpoint. All their IP traffic was backhauled out to Kansas City, Missouri. That’s an architectural flaw, and it adds cost and latency.” Newby described how his discovery process, poring over long-haul fiber maps, researching where neutral infrastructure did not exist, ultimately led him to connect with Ferree and the Connected Nation team. “What Connected Nation was missing was neutral real estate for networks to meet,” he said. “What I was looking for was a way to apply what I know to rural and underserved areas. That’s how we came together.” The AI Imperative: Localizing Latency While IXPs have long played a key role in optimizing traffic exchange, their relevance has surged in the age of AI, particularly AI inference workloads, which require sub–3 millisecond round-trip delays to operate in real time. Newby illustrated this with a high-stakes use case: fraud detection at major banks using AI models running on Nvidia Blackwell chips. “These systems need to validate a transaction at the keystroke. If the latency is too high, if you’re routing traffic out of state to validate it, it doesn’t work. 
The fraud gets through. You can’t protect people.” “It’s not just about faster Netflix anymore,” he said. “It’s about whether or not next-gen applications even function in a given place.” In this light, the IXP becomes not just a cost-saver, but an enabler, a prerequisite for AI, cloud, telehealth, autonomous systems, and countless other latency-sensitive services to operate effectively in smaller markets. From Terminology to Technology: What an IXP Is Part of Newby’s mission has been helping communities, policymakers, and enterprise leaders understand what an IXP actually is. Too often, the industry’s terminology (“data center,” “meet-me room,” “carrier hotel”) obscures more than it clarifies. “Outside major cities, if you say ‘carrier hotel,’ people think you’re in the dating business,” Newby quipped. He broke it down simply: An Internet Exchange (IX) is the Ethernet switch that allows IP networks to directly peer via VLANs. An Internet Exchange Point (IXP) is the physical, neutral facility that houses the IX switch, along with all the supporting power, fiber, and cooling infrastructure needed to enable interconnection. The Wichita facility will be modular, storm-hardened, and future-proofed. It will include a secured meet-me area for fiber patching, a UPS-backed power room, hot/cold aisle containment, and a neutral conference and staging space. And at its core will sit a DE-CIX Ethernet switch, linking Wichita into the world’s largest ecosystem of neutral exchanges. “DE-CIX is the fourth partner in this,” said Newby. “Their reputation, their technical capacity, their customer base, it’s what elevates this IXP from a regional build-out to a globally connected platform.” Public Dollars, Private Leverage The Wichita IXP was made possible by public investment, but Ferree is quick to note that it’s the kind of public investment that unlocks private capital and ongoing economic impact.
“This is the Eisenhower moment for digital infrastructure,” he said, referencing both the interstate highway system and the Rural Electrification Act. “Without government’s catalytic role, these markets don’t emerge. But once the neutral facility is there, it invites networks, it invites cloud, it invites jobs.” As states begin to activate federal funds from the $42.5 billion BEAD (Broadband Equity, Access, and Deployment) program, Ferree believes more will follow Kansas’s lead, and they should. “This isn’t just about broadband access,” he said. “It’s about building a digital economy in places that would otherwise be excluded from it. And that’s an existential issue for rural America.” From Wichita to the Nation Ferree closed the podcast with a forward-looking perspective: the Wichita IXP is just the beginning. “We have 125 of these locations mapped across the U.S.,” he said. “And our partnerships with land-grant universities, state governments, and private operators are key to unlocking them.” By pairing national mission with technical rigor, and public funding with local opportunity, the Wichita IXP is blazing a trail for other states and regions to follow.
As artificial intelligence surges across the digital infrastructure landscape, its impacts are increasingly physical. Higher densities, hotter chips, and exponentially rising energy demands are pressuring data center operators to rethink the fundamentals, and especially cooling. That’s where Shumate Engineering steps in, with a patent-pending system called Hybrid Dry Adiabatic Cooling (HDAC) that reimagines how chilled water loops are deployed in high-density environments. In this episode of The Data Center Frontier Show, Shumate founder Daren Shumate and Director of Mission Critical Services Stephen Spinazzola detailed the journey behind HDAC, from conceptual spark to real-world validation, and laid out why this system could become a cornerstone for sustainable AI infrastructure. “Shumate Engineering is really my project to design the kind of firm I always wanted to work for: where engineers take responsibility early and are empowered to innovate,” said Shumate. “HDAC was born from that mindset.” Two Temperatures, One Loop: Rethinking the Cooling Stack The challenge HDAC aims to solve is deceptively simple: how do you cool legacy air-cooled equipment and next-gen liquid-cooled racks, simultaneously and efficiently? Shumate’s answer is a closed-loop system with two distinct temperature taps: 68°F water for traditional air-cooled systems. 90°F water for direct-to-chip liquid cooling. Both flows draw from a single loop fed by a hybrid adiabatic cooler, a dry cooler with “trim” evaporative functionality when conditions demand it. During cooler months or off-peak hours, the system economizes fully; during warmer conditions, it modulates to maintain optimal output. “This isn’t magic; it’s just applying known products in a smarter sequence,” said Spinazzola. “One loop, two outputs, no waste.” The system is fully modular, relies on conventional chillers and pumps, and is compatible with heat exchangers for immersion or CDU-style deployment. 
And according to Spinazzola, “we can make 90°F water just about anywhere” as long as the local wet bulb temperature stays below 83°F, a threshold met in most of North America.
The future of AI isn’t coming; it’s already here. With NVIDIA’s recent announcement of forthcoming 600kW+ racks, alongside the skyrocketing power costs of inference-based AI workloads, now’s the time to assess whether your data center is equipped to meet these demands. Fortunately, two-phase direct-to-chip liquid cooling is prepared to empower today’s AI boom—and accommodate the next few generations of high-powered CPUs and GPUs. Join Accelsius CEO Josh Claman and CTO Dr. Richard Bonner as they walk through the ways in which their NeuCool™ 2P D2C technology can safely and sustainably cool your data center. During the webinar, Accelsius leadership will illustrate how NeuCool can reduce energy use by up to 50% vs. traditional air cooling, drastically slash operational overhead vs. single-phase direct-to-chip, and protect your critical infrastructure from any leak-related risks. While other popular liquid cooling methods require constant oversight or designer fluids to maintain peak performance, two-phase direct-to-chip technologies require less maintenance and lower flow rates to achieve better results. Beyond a thorough overview of NeuCool, viewers will take away these critical insights: The deployment of Accelsius’ Co-Innovation Labs—global hubs enabling data center leaders to witness NeuCool’s thermal performance capabilities in real-world settings Our recent testing at 4500W of heat capture—the industry record for direct-to-chip liquid cooling How Accelsius has prioritized resilience and stability in the midst of global supply chain uncertainty Our upcoming launch of a multi-rack solution able to cool 250kW across up to four racks Be sure to join us to discover how two-phase direct-to-chip cooling is enabling the next era of AI.
During the 14-minute interview, Walsh discusses MOOG’s legacy in designing and manufacturing high-performance motion control products and how the company’s experience with mission critical solutions translates into the data center space. He outlines how intelligent cooling controls and maintenance services contribute to overall data center sustainability and explains what sets MOOG’s purpose-built data center products apart from the competition. Walsh also discusses recent advancements in motion control and cooling systems for data centers, including a new ultrasonic sensor that measures cavitation in liquid cooling fluids. During the interview, Walsh shares his thoughts on the rise of liquid cooling across the data center industry and the role MOOG plans to play in this transformation.
Join us for an insightful conversation with Jenny Zhan, the newly appointed EdgeConneX Chief Transformation Officer, as she shares her unique perspective on leading organizational change in today’s fast-paced, competitive environment. Transitioning from her previous role as Chief Accounting Officer to spearheading digital transformation efforts, Zhan brings a wealth of expertise and a fresh approach to the role.
In this episode of the Data Center Frontier Show, we sit down with Kevin Cochrane, Chief Marketing Officer of Vultr, to explore how the company is positioning itself at the forefront of AI-native cloud infrastructure, and why they’re all-in on AMD’s GPUs, open-source software, and a globally distributed strategy for the future of inference. Cochrane begins by outlining the evolution of the GPU market, moving from a scarcity-driven, centralized training era to a new chapter focused on global inference workloads. With enterprises now seeking to embed AI across every application and workflow, Vultr is preparing for what Cochrane calls a “10-year rebuild cycle” of enterprise infrastructure—one that will layer GPUs alongside CPUs across every corner of the cloud. Vultr’s recent partnership with AMD plays a critical role in that strategy. The company is deploying both the MI300X and MI325X GPUs across its 32 data center regions, offering customers optimized options for inference workloads. Cochrane explains the advantages of AMD’s chips, such as higher VRAM and power efficiency, which allow large models to run with fewer GPUs—boosting both performance and cost-effectiveness. These deployments are backed by Vultr’s close integration with Supermicro, which delivers the rack-scale servers needed to bring new GPU capacity online quickly and reliably. Another key focus of the episode is ROCm (Radeon Open Compute), AMD’s open-source software ecosystem for AI and HPC workloads. Cochrane emphasizes that Vultr is not just deploying AMD hardware; it’s fully aligned with the open-source movement underpinning it. He highlights Vultr’s ongoing global ROCm hackathons and points to zero-day ROCm support on platforms like Hugging Face as proof of how open standards can catalyze rapid innovation and developer adoption. “Open source and open standards always win in the long run,” Cochrane says. 
“The future of AI infrastructure depends on a global, community-driven ecosystem, just like the early days of cloud.” The conversation wraps with a look at Vultr’s growth strategy following its $3.5 billion valuation and recent funding round. Cochrane envisions a world where inference workloads become ubiquitous and deeply embedded into everyday life—from transportation to customer service to enterprise operations. That, he says, will require a global fabric of low-latency, GPU-powered infrastructure. “The world is going to become one giant inference engine,” Cochrane concludes. “And we’re building the foundation for that today.” Tune in to hear how Vultr’s bold moves in open-source AI infrastructure and its partnership with AMD may shape the next decade of cloud computing, one GPU cluster at a time.
Explore the critical intersection of Data Center Infrastructure Management (DCIM), Common Data Center Security issues and Zero Trust Architecture (ZTA) with a special focus on how our innovative OpenData solution can help. As data centers face increasing security threats and regulatory pressures, understanding how to effectively integrate DCIM into a Zero Trust framework is essential for safeguarding operations and ensuring compliance.
As the digital economy accelerates on the back of AI and hyperscale infrastructure, the question of who will build and run tomorrow’s data centers has never been more urgent. Since its inception in 2015, International Data Center Day (IDCD), organized by 7x24 Exchange International, has steadily grown into a global campaign to answer that question—by inspiring the next generation of mission-critical talent. This year’s IDCD, observed in March but increasingly seen as a year-round initiative, was the subject of a recent Data Center Frontier Show podcast conversation with 7x24 Exchange International Chairman and CEO Bob Cassiliano and Aheli Purkayastha, Chief Product Officer of Purkay Labs and President of the New England Chapter. The two industry leaders outlined how 7x24 Exchange is advancing the mission of IDCD through grassroots engagement, structured resources, and a growing constellation of strategic partnerships. A Response to the Talent Shortage The origin of IDCD traces back to 7x24 Exchange’s recognition—at a 2015 leadership event—that there was not only a lack of awareness of data center careers among students, but also a vacuum of visibility in the educational system. In response, the organization launched IDCD to build a long-term pipeline by introducing the industry to students early, consistently, and accessibly. Today, that mission is more critical than ever. As generative AI workloads surge and new builds stretch power and land capacity, the need for skilled, motivated professionals to support design, operations, and innovation across the lifecycle of data centers has intensified. Turning Awareness Into Action In 2025, IDCD expanded its reach through a broad range of local chapter events and partner activations. These included data center tours, educational presentations, interactive demos, 5K runs, and a hackathon hosted by the New England Chapter. 
The hackathon stood out as a model for applied learning, pairing 50 high school students with industry professionals in a challenge to design a data center in space—all in just five hours. The result: heightened student interest, deeper industry engagement, and a clear illustration of the educational value these events can offer. While university students remain a key audience, organizers have recognized the need to reach even younger learners. Initiatives are increasingly targeting elementary and middle school students through age-appropriate programming, with a special emphasis on encouraging young women to consider careers in mission-critical infrastructure. Resources, Reach, and Real Outcomes The IDCD campaign is more than a collection of events—it is supported by a robust infrastructure of tools, templates, and thought leadership. At the core is InternationalDataCenterDay.org, a centralized hub offering educational content tailored to different age groups, a career path “tree,” and a library of interviews with professionals across the ecosystem. These resources empower volunteers, educators, and sponsors to create consistent, high-impact programming. The outcomes speak for themselves. IDCD has helped catalyze the development of data center curricula at both the secondary and postsecondary levels. The Carolinas Chapter, for instance, played a key role in helping Cleveland Community College secure a $23 million grant to develop a full-fledged data center program. Elsewhere, scholarships are on the rise, and growing numbers of students and faculty are attending industry conferences. Supporting these gains are complementary 7x24 Exchange programs such as WIMCO (Women in Mission Critical Operations), STEM mentoring, and Data Center 101 sessions—designed to provide clear entry points for newcomers while reinforcing the industry's inclusive, interdisciplinary nature.
The data center industry is undergoing rapid transformation, driven by technological advancements, sustainability concerns, and evolving market demands. This conversation with JLL data center expert Sean Farney explores the world of data center project management, offering insights into current challenges and opportunities. One of the most significant trends in the industry is the growing need for liquid cooling retrofits. With only 4.6% of global data center critical load currently supporting liquid cooling, there's a substantial opportunity for upgrading existing facilities to meet the demands of high-density computing. This shift is driven by rapid advancements in chip technology, forcing data centers to adapt quickly to maintain efficiency and performance. Adaptive reuse has emerged as another key strategy in the data center sector. This approach involves converting non-traditional spaces into data centers or updating existing facilities for new technologies. Beyond addressing capacity demands, adaptive reuse offers significant sustainability benefits, aligning with the industry's growing focus on environmental responsibility. Energy efficiency and sustainability are critical considerations in modern data center design and operations. Often driven by cost savings, these initiatives are reshaping the industry. For instance, some estimates suggest that liquid cooling can reduce carbon impact by up to 40% in new facilities, highlighting the potential for both operational and environmental improvements. The global nature of data center operations presents unique challenges for project managers. Navigating complex regulatory environments across different markets requires a deep understanding of local codes and standards while meeting global corporate objectives. This complexity underscores the need for project management teams with both global reach and local expertise. 
As the industry grapples with a significant talent shortage, innovative approaches to attracting, training, and retaining skilled professionals are crucial. Comprehensive training programs and strategies for bridging the skills gap are becoming increasingly important in this rapidly evolving field. Emerging technologies continue to shape the future of data center project management. The integration of AI and machine learning in facility management is becoming more common, while the potential impact of quantum computing looms on the horizon. Project managers must stay ahead of these technological shifts to deliver future-ready facilities. As the data center industry continues to evolve, project management will play a crucial role in delivering cost-effective, efficient, and future-ready facilities. By addressing key challenges such as energy efficiency, technological adaptation, global operations, and talent management, project managers can help transform data center portfolios into strategic assets that support critical business objectives.
In today’s podcast, Matt Vincent, Editor in Chief of Data Center Frontier is joined by Bala Naidu, Vice President – Energy Transition Solutions at Bloom Energy to discuss how the exponential growth of data centers in the United States is putting immense pressure on the power infrastructure. With traditional power sources struggling to keep up, data centers are facing a critical challenge: how to secure timely access to affordable power while adhering to sustainability and permitting regulations.
The data center industry is experiencing substantial growth, placing increasing pressure on the power grid to meet the rising demand. These facilities necessitate continuous power supply with zero interruptions and demand highly reliable backup power to minimize downtime. The expansion of data centers is contributing to a disparity between the demand for power and the capacity of the grid to supply it, which may result in gaps ranging from several months to multiple years. Consequently, numerous developers are exploring alternative power supply options to address these challenges. Solutions that act as a bridge to grid power, commonly referred to as bridge power, are becoming increasingly essential. Reliable bridge power solutions are critical for enabling stakeholders to expedite revenue generation and enhance the resilience of these mission-critical developments. Users may also decide to forgo the utility entirely and procure a self-generated, behind-the-meter permanent solution. When considering a bridge power or self-generation behind-the-meter solution, one of the first factors to examine is the length of time from power need to utility availability. A key question arises: when can we expect the utility power to be available? Accurately assessing the length of time for which the bridge solution is required is vital in determining various other components of the power system. A bridge power solution acts as a temporary or permanent on-site power plant for a data center, providing not only immediate energy needs but also the potential for long-term flexibility and scalability. This adaptability in both duration and equipment selection significantly accelerates the ability to respond to market demands, ensuring that the data center capacity can continue to expand to meet data storage needs.
The next critical consideration in the development of bridge or behind-the-meter power energy solutions is fuel, as it represents one of the most significant ongoing expenses for projects that operate continuously, 24/7. Natural-gas-fueled reciprocating engine generators have been proven to be highly effective in distributed generation applications. They offer reliable power supply, straightforward maintenance procedures, and low life-cycle costs, making them an attractive option for many operators. Additionally, natural gas is widely available across most regions in the country, and its comparatively low market prices in various areas enhance the appeal of reciprocating engines, making them a cost-effective solution. As projects extend into longer timeframes, the option to incorporate gas turbines becomes increasingly relevant. These turbines are particularly well-suited for long-term applications and can be effectively combined with reciprocating engines to optimize capacity and ensure an uninterrupted power supply. This combination allows operators to leverage the strengths of both technologies, ensuring efficiency and reliability in energy production. In situations where natural gas is not accessible, but the project's duration justifies the use of natural gas solutions, a virtual pipeline system can be deployed. A virtual pipeline consists of a modular approach utilizing either Compressed Natural Gas (CNG) or Liquefied Natural Gas (LNG). These gases can be transported through various modes, effectively bridging the gap in areas lacking direct natural gas infrastructure. The flexibility of virtual pipelines enables efficient delivery of fuel to remote sites well before a conventional pipeline is constructed. A bridge or behind-the-meter power solution represents a substantial investment, and like any significant financial commitment, it comes with various inherent risks to the project.
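Because fuel dominates the ongoing cost of a plant that runs 24/7, a quick back-of-envelope estimate is a useful sanity check early in planning. The sketch below is illustrative only: the plant size, heat rate, and gas price are assumed round numbers, not figures from the episode.

```python
# Back-of-envelope annual fuel cost for a continuously running,
# gas-fueled generation plant. All constants are illustrative assumptions.

PLANT_MW = 50                  # assumed plant output
HEAT_RATE_BTU_PER_KWH = 8500   # assumed heat rate for reciprocating engines
GAS_PRICE_PER_MMBTU = 3.50     # assumed delivered natural gas price, USD
HOURS_PER_YEAR = 8760          # 24/7 operation

annual_kwh = PLANT_MW * 1000 * HOURS_PER_YEAR
annual_mmbtu = annual_kwh * HEAT_RATE_BTU_PER_KWH / 1_000_000
annual_fuel_cost = annual_mmbtu * GAS_PRICE_PER_MMBTU

print(f"Energy produced: {annual_kwh / 1e6:,.0f} GWh")
print(f"Fuel consumed:   {annual_mmbtu:,.0f} MMBtu")
print(f"Fuel cost:       ${annual_fuel_cost / 1e6:,.1f}M per year")
```

Even with favorable gas prices, fuel spend for a mid-sized plant lands in the tens of millions of dollars per year, which is why the fuel strategy (pipeline gas, CNG/LNG virtual pipeline, or a turbine/engine mix) deserves attention as early as the siting decision.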
These risks can be categorized into several areas including: technology risks, environmental permitting risks, construction risks, and financial risks.  To streamline the complexities of the project, it is advisable to collaborate with an experienced partner specializing in bridge and permanent power solutions.  The ideal partner should demonstrate a robust track record of installing and servicing comprehensive power solutions and employ a network of service technicians. These experts can offer a wide range of support, from basic planned maintenance and overhauls to detailed long-term service agreements that ensure sustained performance.  Moreover, the partner should ideally manage the entire project lifecycle, handling engineering, procurement, and construction (EPC) while supplying all necessary components, including engines, generators, transformers, switchgear, fuel treatment systems, and other essential ancillary equipment.  Another crucial aspect is the partner's diverse financing capability.  This includes the ability to finance the entire infrastructure rather than just the generation equipment and to provide flexible financing programs tailored to meet unique project needs. To address the surging demand for power, companies are actively exploring alternative generation solutions such as permanent self-generation, bridge power, and enhanced load flexibility. This examination of options underscores the urgent need for technology-agnostic strategies, highlighting the effectiveness of a holistic solutions approach. In an industry striving to expand data center capacity to meet insatiable demand, adopting these innovative solutions is essential for long-term success and competitive advantage.
For this episode of the Data Center Frontier Show podcast, DCF Editor-in-Chief Matt Vincent and Senior Editor David Chernicoff sat down with Tony Grayson, President and General Manager of Northstar Technology Group's Enterprise and Defense unit, to unpack a strategic acquisition that’s shaking up the edge and modular data center space. The conversation centered on Northstar’s acquisition of Compass Quantum, a company known for its rapidly deployable, composite-based modular infrastructure tailored for both enterprise and defense applications.

From Compass to Northstar: A Strategic Realignment

“We were developing a modular brand at Compass,” said Grayson. “Where Compass was building the gigawatt-scale campuses, I was building the smaller campuses using building blocks of modules—versus, you know, kind of a stick build.” That smaller-scale focus gained traction with enterprise clients, including several Fortune 50 companies, but new opportunities in the defense sector introduced regulatory friction. “Compass is Canadian-owned, and that goes against some of the rules that the U.S. government has,” Grayson explained. “Chris Crosby was a huge supporter… he wanted to sell us so he wouldn’t hinder us from growing the company or servicing U.S. defense needs.” Enter Northstar Technology Group, which brings a strategic partnership with Owens Corning—the manufacturer and IP holder behind Compass Quantum’s composite materials. With engineering, manufacturing, and construction capabilities now integrated under one roof, Grayson sees the acquisition as a natural fit. “Everything is now in-house instead of trying to go outside to other consultants,” he said.

AI-Ready Modulars in 5MW Increments

As hyperscale demands evolve, Grayson noted growing customer appetite for 5 megawatt modular units—mirroring the scale at which Nvidia and others are now building AI infrastructure.
“You’ve seen Wade Vinson talk about it at Data Center World, and you see Jensen [Huang] talking about 5 megawatts being the line where you cross between the L2 and L3 network,” he said. “We can build in 5 megawatt increments and drop that stuff in parking lots—either as an operating lease or as a sale.” That flexibility extends to Northstar’s channel partners, who are increasingly seeking a variety of procurement models. “Some want sales, not just leases. It gives us more freedom to do that kind of stuff,” said Grayson. “Sometimes it’s better to be lucky than good, and I feel like the timing of this couldn’t have been better for where the industry’s at right now.”

Veteran-Led Advisory Team Strengthens Defense Strategy

In addition to the materials and platform innovations, Northstar’s defense ambitions are underpinned by what Grayson describes as a “dream team” of senior military advisors. “We basically have every outgoing ‘six’—the people in charge of IT and comms for the Air Force, Marine Corps, Army, and Navy—as advisors,” he said. “Some will be coming on full time.” These high-level advisors, many of whom retired as three-star generals, are instrumental in helping Northstar align its solutions with evolving defense requirements, particularly in distributed compute and real-time data processing. “We’re making huge progress on the enterprise side, but the defense side is where we need to catch up,” Grayson added. “Defense globally needs distributed compute… they’re ahead of enterprise when it comes to inference platforms.” He also highlighted Northstar’s engagement with the Navy, particularly around airborne systems. “That’s why we have the old air boss, Admiral Weitzel. He helps us with aircraft systems.
These planes generate so much data, and we need advice on how best to internalize and analyze it.”

Material Advantage: Why FRP Composites Are a Game-Changer: Durability, Customization—and No Tariffs

A key differentiator for Northstar’s modular approach is its use of fiber-reinforced polymer (FRP) composites instead of traditional steel or concrete enclosures. As Grayson explained, “There’s no tariffs involved in any of this stuff. It’s all locally sourced and rather easy to get from Owens Corning.” This material advantage extends far beyond sourcing. FRP composites allow Northstar to customize modules for specific use cases, including:

Fire resistance: Two-hour fire ratings.
Extreme weather: Withstanding 250 mph winds—Category 5 hurricanes and F5 tornadoes.
Military resilience: Ballistic protection up to 7.62mm and .50 caliber rounds.

And despite their strength, these modules are extremely lightweight—“30% lighter than aluminum,” said Grayson. “I don’t know if you’ve ever seen the picture of me holding the 15-foot I-beam. I’m a sub guy, not Army tough. I definitely couldn’t hold that up if it were steel.”
Global demand for data center capacity is expected to grow between 19 and 22 percent annually through 2030, according to McKinsey & Company. As data center capacity expands, so does the challenge of managing the heat generated by high-performance chips. This includes heat at the chip, as well as external heat rejection and room cooling. LG, a global HVAC technology leader, discusses the evolving landscape and the latest technology to ensure efficient, reliable cooling for data centers. This includes the full suite of data center cooling solutions that LG debuted at Data Center World 2025. The cutting-edge cooling technologies, including direct-to-chip, room, and chiller plant cooling capabilities, are intended to meet the challenge of increasing data center capacity head-on, helping provide reliable, energy-efficient solutions.
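It is worth pausing on what 19-22 percent annual growth compounds to. A quick sketch of the five-year multiplier (the five-year horizon is an assumption for illustration, roughly 2025 to 2030):

```python
# Compound-growth multiplier implied by the McKinsey forecast cited above.
# The five-year window is an illustrative assumption.

for rate in (0.19, 0.22):
    multiplier = (1 + rate) ** 5
    print(f"{rate:.0%}/yr for 5 years -> {multiplier:.2f}x capacity")
```

In other words, the forecast implies global capacity roughly 2.4x to 2.7x today's level by 2030, and the heat that has to be rejected scales right along with it.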
This episode will explore how Tecto Data Centers is shaping the future of digital infrastructure in Latin America through its operations in Fortaleza, Brazil. André Busnardo, Head of Data Center Sales at Tecto, discusses why the region is considered one of the most important connectivity hubs in LATAM and how the company’s investment strategy is helping address the growing demand for reliable, neutral, and scalable infrastructure.
Global power deficit and solutions

The discussion will address the power deficit we are experiencing and how new demands for power are navigated across different regions.
WASHINGTON, D.C.— At this year’s Data Center World 2025, held earlier this month at the Walter E. Washington Convention Center, the halls were buzzing with what could only be described as industry sensory overload. As hyperscalers, hardware vendors, and infrastructure specialists converged on D.C., the sheer density of innovation underscored a central truth: the data center sector is in the midst of rapid, almost disorienting, expansion. That made it the perfect setting for the latest episode in our ongoing podcast miniseries with Nomad Futurist, aptly titled Nomads at the Frontier. This time, I sat down in person with Nabeel Mahmood, co-founder and board director of the Nomad Futurist Foundation—a rare face-to-face meeting after years of remote collaboration. “Lovely seeing you in person,” Mahmood said. “It’s brilliant to get to spend some quality time at an event that’s really started to hit its stride—especially in terms of content.” Mahmood noted a welcome evolution in conference programming: a shift away from vendor-heavy pitches and toward deeper, mission-driven dialogue about the sector’s true challenges and future trajectory. “Events like these were getting overloaded by vendor speak,” he said. “We need to talk about core challenges, advancements, and what we’re doing to improve and move forward.” A standout example of this renewed focus was a panel on disruptive sustainability, in which Mahmood joined representatives from Microsoft, AWS, and a former longtime lieutenant of Elon Musk’s sustainability operations. “It’s not just about e-cycling or carbon,” Mahmood emphasized. “We have to build muscle memory. We’ve got to do things for the right reasons—and start early.” That starting point, he argued, is education—but not in the traditional sense. Instead, Mahmood called for a multi-layered approach that spans K–12, higher education, and workforce reskilling. “We’ve come out from behind the Wizard of Oz curtain,” he said. “Now we’re in the boardroom. 
We need to teach people not just how technology works, but why we use it—and how to design platforms with real intention.” Mahmood’s remarks highlighted a growing consensus among forward-thinking leaders: data is no longer a support function. It is foundational. “There is no business, no government, no economy that can operate today—or in the future—without data,” he said. “So let’s measure what we do. That’s the KPI. That’s the minimum threshold.” Drawing a memorable parallel, Mahmood compared this kind of education to swimming lessons. “Sure, you might not swim for 20 years,” he said. “But if you learned as a kid, you’ll still be able to make it back to shore.”

Inside-Out Sustainability and Building the Data Center Workforce of Tomorrow

As our conversation continued, we circled back to Mahmood’s earlier analogy of swimming as a foundational skill—like technology fluency, it stays with you for life. I joked that I could relate, recalling long-forgotten golf lessons from middle school. “I'm a terrible golfer,” I said. “But I still go out and do it. It’s muscle memory.” “Exactly,” Mahmood replied. “There’s a social element. You’re able to enjoy it. But you still know your handicap—and that’s part of it too. You know your limits.” Limits and possibilities are central to today’s discourse around sustainability, especially as the industry’s most powerful players—the hyperscalers—increasingly self-regulate in the absence of comprehensive mandates. I asked Mahmood whether sustainability had truly become “chapter and verse” for major cloud operators, or if it remained largely aspirational, despite high-profile initiatives. His answer was candid. “Yes and no,” he said. “No one's following a perfect process. There are some who use it for market optics—buying carbon credits and doing carbon accounting to claim carbon neutrality.
But there are others genuinely trying to meet their own internal expectations.” The real challenge, Mahmood noted, lies in the absence of uniform metrics and definitions around terms like “circularity” or “carbon neutrality.” In his view, too much of today’s sustainability push is “still monetarily driven… keeping shareholders happy and share value rising.” He laid out two possible futures. “One is that the government forces us to comply—and that could create friction, because the mandates may come from people who don’t understand what our industry really needs. The other is that we educate from within, define our own standards, and eventually shape compliance bodies from the inside out.” Among the more promising developments Mahmood cited was the work of Rob Lawson-Shanks, whose innovations in automated disassembly and robotic circularity are setting a high bar for operational sustainability. “What Rob is doing is amazing,” Mahmood said. “His interest is to give back. But we need thousands of Robs—people who understand how it works and can repurpose that knowledge back into the tech ecosystem.” That call for deeper education led us to the second major theme of our conversation: preparing the next generation of data center professionals. With its hands-on community initiatives, Nomad Futurist is making significant strides in that direction. Mahmood described his foundation as “connective tissue” between industry stakeholders and emerging talent, partnering with organizations like Open Compute, Infrastructure Masons, and the iMasons Climate Accord. Earlier this year, Nomad Futurist launched an online Academy that now features five training modules, with over 200 hours of content development in the pipeline. Just as importantly, the foundation has built a community collaboration platform—native to the Academy itself—that allows learners to directly engage with content creators. 
“If a student has a question and the instructor was me or someone like you, they can just ask it directly within the platform,” Mahmood explained. “It creates comfort and accessibility.” In parallel, the foundation has beta launched a job board, in partnership with Infrastructure Masons, and is developing a career pathways platform. The goal: to create clear entry points into the data center industry for people of all backgrounds and education levels—and to help them grow once they’re in. “Those old jobs, like the town whisperer, they don’t exist anymore,” Mahmood quipped. “Now it’s Facebook, Twitter, social media. That’s how people get jobs. So we’re adapting to that.” By providing tools for upskilling, career matching, and community-building, Mahmood sees Nomad Futurist playing a key role in preparing the sector for the inevitable generational shift ahead. “As we start aging out of this industry over the next 10 to 20 years,” he said, “we need to give people a foundation—and a reason—to take it forward.”
As the data center industry continues to expand, two powerful forces are reshaping the search for next-generation power solutions. First, the rapid expansion of AI, IoT, and digital transformation is significantly increasing global power demand, placing increased pressure on traditional grid systems. The International Energy Agency forecasts that electricity consumption by data centers and AI could double by 2026, adding an amount equal to the entire current electricity usage of Japan. The second force is the urgent need for a smaller environmental footprint. As energy consumption rises, the drive for decarbonization becomes more critical, making it harder for data centers to balance environmental sustainability with performance reliability. In response to these challenges, data center leaders are looking beyond conventional solutions and exploring innovative alternatives that can meet the demands of a rapidly evolving industry. This podcast will focus on hydrogen fuel cell technology as a potential power source. This emerging technology has the potential to transform how data centers power their operations, providing a sustainable solution that not only helps reduce carbon emissions but also ensures reliable and scalable energy for the future. Hydrogen fuel cells present an opportunity for data centers. Unlike traditional fossil fuel-based systems, hydrogen fuel cells generate power through an electrochemical reaction between hydrogen and oxygen, with water and heat as the only byproducts. This makes them a virtually emission-free, environmentally friendly power solution. Moreover, hydrogen fuel cells can reduce data center emissions by up to 99%, providing one of the most effective means of decarbonizing the industry. The environmental benefits are matched by their impressive efficiency, as fuel cells operate with fewer energy losses compared to traditional combustion-based systems.
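To give the electrochemistry a rough sense of scale, the sketch below estimates the hydrogen mass flow needed to sustain a given electrical load. The lower heating value of hydrogen (~33.3 kWh/kg) is a standard figure, but the 55% electrical efficiency and the helper function are illustrative assumptions, not specifications from the episode.

```python
# Rough sizing of hydrogen consumption for a fuel-cell-backed load.
# LHV of H2 is a standard constant; the efficiency figure is an
# illustrative assumption, not vendor data.

LHV_H2_KWH_PER_KG = 33.3   # lower heating value of hydrogen
CELL_EFFICIENCY = 0.55     # assumed electrical efficiency

def hydrogen_kg_per_hour(load_kw: float) -> float:
    """Hydrogen mass flow (kg/h) needed to sustain a given electrical load."""
    return load_kw / (LHV_H2_KWH_PER_KG * CELL_EFFICIENCY)

# A 100 kW class system, the scale discussed in this episode:
print(f"{hydrogen_kg_per_hour(100):.1f} kg H2/hour for a 100 kW load")
```

Numbers like this make clear why hydrogen storage and distribution logistics, covered later in the episode, are as central to adoption as the fuel cell stack itself.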
In this episode, Ben Rapp, Strategic Product Development Manager at Rehlko, will explore the science behind hydrogen fuel cells, offering an overview of the key components that make them a viable power solution for data centers. He will also highlight the practical advantages of hydrogen fuel cells, particularly their ability to deliver reliable, on-demand power with minimal disruption. This episode also addresses the challenges of adopting hydrogen fuel cells, including infrastructure development, cost, and the need for a robust hydrogen distribution network. Additionally, we talked to Ben about Rehlko’s hydrogen fuel cell project and the partnerships involved. As part of this initiative, Rehlko has collaborated with companies like Toyota to develop a 100-kilowatt hydrogen fuel cell solution aimed at reducing the carbon footprint of data centers. We’ll go over the progress of this partnership and the practical steps being taken to make hydrogen fuel cells a viable and scalable power solution. Finally, Ben will talk about his perspective on the future role of hydrogen fuel cells in data centers worldwide. With the industry facing increasing pressure to meet sustainability targets while ensuring performance reliability, hydrogen fuel cells are poised to play a critical role in the evolution of data center power systems. They offer both environmental and operational benefits that are essential for the industry’s future. Whether used as a primary power source, backup system, or for grid stabilization, hydrogen fuel cells are poised to become a key player in the future of data center energy management.
In this episode of the Data Center Frontier Show podcast, we explore how Packet Power is transforming data center monitoring. As the demand for energy efficiency and operational transparency grows, organizations need solutions that provide real-time insights without adding complexity. Packet Power’s wireless, scalable, and secure technology offers an easy, streamlined approach to power and environmental monitoring.

Monitoring Made Easy®

Traditional monitoring solutions can be difficult to install, configure, and scale. Packet Power’s wireless, out-of-band technology removes these hurdles, offering a plug-and-play system that allows organizations to start with a few monitoring nodes and expand as needed. With built-in fleet management, remote diagnostics, and broad compatibility with existing systems, Packet Power helps data centers gain visibility into their power and environmental conditions with minimal effort.

Fast, Flexible Deployment

Deploying monitoring solutions can be time-consuming and resource-intensive, particularly in large-scale facilities. Many systems require extensive cabling, specialized personnel, and lengthy configuration processes. Packet Power eliminates these roadblocks by offering a vendor-agnostic, rapidly deployable system that works seamlessly with existing infrastructure. Designed and manufactured in the USA, Packet Power products ship in just 2-3 weeks, avoiding the delays often associated with global supply chain issues and ensuring data centers can implement monitoring solutions without unnecessary downtime.

Security Built from the Ground Up

Security is a critical concern in mission-critical environments. Unlike traditional monitoring solutions that focus primarily on encryption, Packet Power integrates security at every level—from hardware to networking and software. Their read-only architecture ensures that failed hardware won’t disrupt power delivery, while out-of-band monitoring prevents exposure to network vulnerabilities.
One-way communication protocols and optional physical data isolation further enhance security, ensuring that critical infrastructure remains protected from cyber threats and misconfigurations.

Adapting to Industry Changes

The data center landscape is rapidly evolving, with increasing demands for efficiency, flexibility, and sustainability. Packet Power’s solutions are designed to keep pace with these changes, offering a non-intrusive way to enhance monitoring capabilities without modifying existing infrastructure. Their technology is easily embedded into power and cooling systems, enabling organizations to implement real-time monitoring across a wide range of devices while maintaining operational agility.

Why Wireless Wins

Traditional wired monitoring solutions often require extensive installation efforts and ongoing maintenance, while common consumer wireless options—such as WiFi, Bluetooth, and Zigbee—are not designed for industrial environments. These protocols pose security risks and struggle in settings with high electromagnetic interference. Packet Power’s proprietary wireless system is optimized for reliability in data centers, eliminating IP-based vulnerabilities while supporting secure, large-scale deployments.

Cost Savings & Efficiency

Monitoring solutions should provide a return on investment, not create additional overhead. Packet Power reduces costs by minimizing IT infrastructure needs, eliminating the expense of network switches, dedicated cabling, and IP address management. Their wireless monitoring approach streamlines deployment, allowing organizations to instantly gain actionable insights into their energy usage and environmental conditions. This improves cost allocation, supports sustainability initiatives, and enhances operational efficiency.

Versatile Applications

Energy monitoring is crucial across multiple aspects of data center management.
Packet Power’s solutions support a wide range of applications, including tracking energy use in busways, HVAC systems, generators, switchgear, tenant submeters, and selective circuits. Organizations use their data for billing, cost allocation, efficiency optimization, and failure detection. By providing real-time insights into power consumption and environmental conditions, Packet Power helps data centers maintain reliability, compliance, and cost-effectiveness.

The Power of EMX Software & 3D Visualization

Collecting data is only part of the equation—turning that data into actionable insights is equally important. Packet Power’s EMX Software integrates seamlessly with existing DCIM and BMS platforms, offering real-time alerts, custom reporting, and a brand new 3D Layout Viewer for enhanced visualization. These tools help facility managers and operators make informed decisions, ensuring optimal performance and risk mitigation.

Conclusion

In an industry where efficiency, security, and flexibility are paramount, Packet Power provides a modern approach to data center monitoring. Their wireless, scalable, and vendor-agnostic solutions simplify installation, reduce costs, and deliver real-time insights into critical infrastructure. As data centers continue to evolve, Packet Power’s innovative technology ensures organizations can adapt quickly and operate more effectively without the burden of complex monitoring systems. To learn more, visit PacketPower.com or email sales@packetpower.com for a free consultation.
The AI revolution is charging ahead—but powering it shouldn't cost us the planet. That tension lies at the heart of Vaire Computing’s bold proposition: rethinking the very logic that underpins silicon to make chips radically more energy efficient. Speaking on the Data Center Frontier Show podcast, Vaire CEO Rodolfo Rossini laid out a compelling case for why the next era of compute won't just be about scaling transistors—but reinventing the way they work. “Moore's Law is coming to an end, at least for classical CMOS,” Rossini said. “There are a number of potential architectures out there—quantum and photonics are the most well known. Our bet is that the future will look a lot like existing CMOS, but the logic will look very, very, very different.” That bet is reversible computing—a largely untapped architecture that promises major gains in energy efficiency by recovering energy lost during computation.

Product, Not IP

Unlike some chip startups focused on licensing intellectual property, Vaire is playing to win with full-stack product development. “Right now we’re not really planning to license. We really want to build product,” Rossini emphasized. “It’s very important today, especially from the point of view of the customer. It’s not just the hardware—it’s the hardware and software.” Rossini points to Nvidia’s CUDA ecosystem as the gold standard for integrated hardware/software development. “The reason why Nvidia is so great is because they spent a decade perfecting their CUDA stack,” he said. “You can’t really think of a chip company being purely a hardware company anymore. Better hardware is the ticket to the ball—and the software is how you get to dance.” A great metaphor for a company aiming to rewrite the playbook on compute logic.
The Long Game: Reimagining Chips Without Breaking the System In an industry where even incremental change can take years to implement, Vaire Computing is taking a pragmatic approach to a deeply ambitious goal: reimagining chip architecture through reversible computing — but without forcing the rest of the computing stack to start over. “We call it the Near-Zero Energy Chip,” said Rossini. “And by that we mean a chip that operates at the lowest possible energy point compared to classical chips—one that dissipates the least amount of energy, and where you can reuse the software and the manufacturing supply chain.” That last point is crucial. Vaire isn’t trying to uproot the hyperscale data center ecosystem — it's aiming to integrate into it. The company’s XPU architecture is designed to deliver breakthrough efficiency while remaining compatible with existing tooling, manufacturing processes, and software paradigms.
In this episode of the Data Center Frontier Show podcast, Matt Vincent, Editor-in-Chief of Data Center Frontier, talks to Tony DeSpirito, vice president of enterprise sales at Vertiv, about AI densification and how data centers can prepare for ever-growing rack power demands. They also explore cooling and physical infrastructure conundrums, and Vertiv’s AI roadshow.
Last month's Data Center Dynamics event in New York City (DCD Connect NY 2025), marking the event's 25th anniversary, brought record-breaking attendance, underscoring the accelerating pace of change in the digital infrastructure sector. At the heart of the discussions were evolving AI workloads, power and cooling challenges, and the crucial role of workforce development. Welcoming Data Center Frontier at their show booth were Phill Lawson-Shanks of Aligned Data Centers and Phillip Koblence of NYI, who are respectively managing director and co-founder of the Nomad Futurist Foundation. Our conversation spanned the pressing issues shaping the industry, from the feasibility of AI factories to the importance of community-driven talent pipelines.
For this episode of the DCF Show podcast, host Matt Vincent, Editor in Chief of Data Center Frontier, is joined by Santiago Suinaga, CEO of Infrastructure Masons (iMasons), to explore the urgent challenges of scaling data center construction while maintaining sustainability commitments, among other pertinent industry topics.

The AI Race and Responsible Construction

"Balancing scale and sustainability is key because the AI race is real," Suinaga emphasizes. "Forecasted capacities have skyrocketed to meet AI demand. Hyperscale end users and data center developers are deploying high volumes to secure capacity in an increasingly constrained global market." This surge in demand pressures the industry to build faster than ever before. Yet, as Suinaga notes, speed and sustainability must go hand in hand. "The industry must embrace a build fast, build smart mentality. Leveraging digital twin technology, AI-driven design optimization, and circular economy principles is critical." Sustainability, he argues, should be embedded at every stage of new builds, from integrating low-carbon materials to optimizing energy efficiency from the outset. "We can't afford to compromise sustainability for speed. Instead, we must integrate renewable energy sources and partner with local governments, utilities, and energy providers to accelerate responsible construction." A key example of this thinking is peak shaving—using redundant infrastructure and idle capacities to power the grid when data center demand is low. "99.99% of the time, this excess capacity can support local communities, while ensuring the data center retains prioritized energy supply when needed."

Addressing Embodied Carbon and Supply Chain Accountability

Decarbonization is a cornerstone of iMasons' efforts, particularly through the iMasons Climate Accord. Suinaga highlights the importance of tackling embodied carbon—the emissions embedded in data center construction materials and IT hardware.
"We need standardized reporting metrics and supplier accountability to drive meaningful change," he says. "Greater transparency across the supply chain can be achieved through carbon labeling of materials and stricter procurement policies." To mitigate embodied emissions, companies should prioritize suppliers with validated Environmental Product Declarations (EPDs) and invest in low-carbon alternatives like green concrete and recycled steel. "Collaboration across the industry will be essential to drive policy incentives for greener supply chains," Suinaga asserts.

The Role of Modular and Prefabricated Builds

As the industry seeks more efficient construction methods, modular and prefabricated builds are emerging as game changers. "They significantly reduce construction waste, improve quality control, and shorten deployment times," Suinaga explains. "By shifting a large portion of the build process to controlled environments, we can improve worker safety and optimize material usage. Companies leveraging prefabrication will gain a competitive edge in both cost savings and sustainability." Modular construction also presents financial advantages. "It allows for deferred CapEx investments, creating attractive internal rates of return (IRRs) for investors while reducing the risk of oversupply by aligning capacity with demand," Suinaga notes. However, he acknowledges that the approach has challenges, including potential supply chain constraints and quick time-to-market pressures during demand spikes. "Maintaining a recurrent production cycle and closely monitoring market conditions are key to ensuring capacity planning aligns with real-time needs."

Innovation in Cooling and Water Use

With AI workloads driving increasing power densities, the industry is rapidly shifting toward liquid cooling, immersion cooling, and heat reuse strategies.
"We’re seeing innovations in direct-to-chip cooling and closed-loop water systems that significantly reduce water consumption," Suinaga says. "Some data centers are capturing and repurposing waste heat to provide energy to nearby facilities—an approach that needs to be scaled." Immersion cooling, he adds, offers the potential to shrink data center footprints and dramatically improve Power Usage Effectiveness (PUE). "A hybrid approach combining air and liquid cooling is key," Suinaga explains. "There’s still uncertainty around the right mix of technologies, as hyperscalers need to support not just AI but also continued cloud growth. Flexibility in cooling design is now essential to accommodate a diverse range of workloads."

Regulatory Pressures and the Future of Sustainability Standards

Regulatory frameworks such as the SEC’s climate disclosure rules and Europe’s Corporate Sustainability Reporting Directive (CSRD) are pushing data center operators toward greater transparency. Suinaga believes these measures will enforce more accurate sustainability reporting and drive greener investment decisions. "This will push data center operators to adopt more energy-efficient designs early in the planning phase and, in the long term, standardize carbon reporting and create incentives for sustainable practices," he explains. He also highlights the role of investors and publicly traded companies in enforcing stricter climate reporting requirements across their portfolios. "At iMasons, we are refining existing reporting benchmarks and frameworks to provide the industry with a holistic view of best practices. This is an area where we aim to support data center operators with an analytical approach."

The Road to Net Zero: Overcoming Challenges

Despite ambitious net zero goals, execution remains a significant challenge. "The biggest roadblock to net zero is the availability of truly carbon-free energy and materials at scale," Suinaga states.
Achieving net zero requires substantial investment in renewable infrastructure, grid connectivity improvements, and energy storage innovation. To accelerate progress, he emphasizes the importance of adopting circular economy practices, advocating for renewable energy policy support, and investing in next-generation cooling and power technologies. "The demand from AI is outpacing current power infrastructure and renewable options. While some net zero commitments may be delayed, investing in new technologies and clean energy solutions will ultimately put us back on the path to net zero."

Workforce Development and Addressing the Talent Shortage

The digital infrastructure industry has long faced a talent shortage, which has only become more urgent as demand increases. To help address this challenge, iMasons has launched a new job-matching platform. "It’s designed to bridge the talent gap by connecting skilled professionals with opportunities in digital infrastructure," Suinaga explains. "For job seekers, it’s free to use, providing a streamlined way to match with job listings based on skills, experience, and location." For employers, iMasons partners gain access to the platform to find vetted candidates efficiently. "At the pace this industry is growing, the current workforce isn’t enough—we need to bring in talent from other industries and create new career pathways. Digital infrastructure is recession-proof and offers tremendous opportunities for growth."

Industry Partnerships Driving Innovation

iMasons has been expanding its partnerships, adding 15 new partners in recent months. "We've welcomed companies from various backgrounds, including AI-driven construction management firms, energy-related companies, and cooling solution providers," Suinaga shares. "iMasons is a hub for industry collaboration, helping to drive innovation across the entire digital infrastructure ecosystem. Our mission is simple: to ensure the industry thrives."
Looking Ahead

As AI accelerates the demand for digital infrastructure, the industry must embrace innovative, responsible strategies to balance scale with sustainability. iMasons, alongside major players in the sector, is committed to ensuring the next generation of data centers is not just fast to deploy but also environmentally responsible.
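Suinaga's peak-shaving example from earlier in the episode comes down to simple headroom arithmetic: a site with redundant capacity can export whatever is left after serving its own IT load and keeping a safety reserve. A minimal sketch with hypothetical capacity figures (none of these numbers come from the episode):

```python
# Illustrative peak-shaving sketch: a data center with redundant capacity
# exports idle headroom to the local grid when its own load is low.
# All figures are hypothetical, not from the episode.

def exportable_mw(installed_mw: float, it_load_mw: float, reserve_mw: float) -> float:
    """Capacity available to the grid after serving IT load and keeping a reserve."""
    return max(0.0, installed_mw - it_load_mw - reserve_mw)

# A 100 MW installed plant serving 60 MW of IT load, keeping 10 MW in reserve,
# could offer 30 MW back to the community during off-peak hours.
print(exportable_mw(100, 60, 10))   # 30.0
# When load is near capacity, the site keeps everything for itself.
print(exportable_mw(100, 95, 10))   # 0.0
```

The `max(0.0, ...)` clamp captures Suinaga's point that the data center "retains prioritized energy supply when needed": export only ever happens out of genuine surplus.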
In this episode of the Data Center Frontier Show podcast, Matt Vincent, Editor-in-Chief of Data Center Frontier, talks to Craig Compiano, CEO of Modius, about how data centers are evolving to meet modern demands, specifically in terms of scalability, security and intelligence. They also discuss Modius’s commitment to enabling the next generation of data centers with scalable and secure solutions.
The modular data center industry is undergoing a seismic shift in the age of AI, and few are as deeply embedded in this transformation as Andrew Lindsey, Co-Founder and CEO of Flexnode. In a recent episode of the Data Center Frontier Show podcast, Lindsey joined DCF Editor-in-Chief Matt Vincent and Senior Editor David Chernicoff to discuss the evolution of modular and edge data centers, the growing demand for high-density liquid-cooled solutions, and the industry factors driving this momentum.

A Background Rooted in Innovation

Lindsey’s career has been defined by the intersection of technology and the built environment. Prior to launching Flexnode, he worked at Alpha Corporation, a top 100 engineering and construction management firm founded by his father in 1979. His early career involved spearheading technology adoption within the firm, with a focus on high-security infrastructure for both government and private clients. Recognizing a massive opportunity in the data center space, Lindsey saw a need for an innovative approach to infrastructure deployment. "The construction industry is relatively uninnovative," he explained, citing a McKinsey study that ranked construction as the second least-digitized industry—just above fishing and wildlife, which remains deliberately undigitized. Given the billions of square feet of data center infrastructure required in a relatively short timeframe, Lindsey set out to streamline and modernize the process. Founded four years ago, Flexnode delivers modular data centers with a fully integrated approach, handling everything from site selection to design, engineering, manufacturing, deployment, operations, and even end-of-life decommissioning. Their core mission is to provide an "easy button" for high-density computing solutions, including cloud and dedicated GPU infrastructure, allowing faster and more efficient deployment of modular data centers.
The Rising Momentum for Modular Data Centers

As Vincent noted, Data Center Frontier has closely tracked the increasing traction of modular infrastructure. Lindsey has been at the forefront of this shift, witnessing the market evolve significantly over the last five years. "Five years ago, we were looking at a graveyard of modular data center companies that leaned heavily on the edge," Lindsey recalled. Many early modular providers focused on latency-sensitive, interconnected solutions—such as base stations at 5G tower sites. However, the market proved premature, hindered by high costs and the scale of deployment within the telecommunications industry. Now, macroeconomic and technological factors have driven a fundamental shift toward modular data centers. One of the most significant drivers is the rapid evolution of chip design. "A traditional data center design cycle can take a year or 18 months," Lindsey explained. "But if we see radical Nvidia chip advancements every 12 months, your design could be obsolete before you even break ground." The need for embedded flexibility within data center design has made modular solutions an ideal fit.

Labor Scarcity and the Need for Efficiency

Another factor accelerating the adoption of modular infrastructure is the labor shortage in construction. "There just aren’t enough people today to build the scale of infrastructure needed for data centers," Lindsey noted. Compounding the issue is an aging workforce, with many skilled professionals nearing retirement. "When they leave, they take decades of institutional knowledge with them." Modular construction mitigates this problem by shifting labor-intensive processes to manufacturing environments where technical expertise is concentrated. By centralizing production, modular providers can reduce reliance on dispersed construction labor while maintaining high precision and efficiency.
Liquid Cooling and the Future of High-Density Deployments

Flexnode is also a leader in the adoption of high-density liquid-cooled infrastructure. Lindsey attended the CoolerChips event last year and has been vocal about the advantages of liquid cooling for modern workloads. "More recently, modular is everywhere," he said. "We at Flexnode are seeing demand hand over fist for high-density liquid-cooled systems that integrate seamlessly with broader building designs." This demand underscores the shift from the speculative modular edge deployments of five years ago to today’s high-performance, flexible data center solutions. "Modular is no longer just a niche," Lindsey concluded. "It’s a critical strategy for meeting the growing demand for scalable, high-efficiency data center capacity." The realization that liquid cooling would become a building-wide challenge, rather than just an IT challenge, was a pivotal moment for Flexnode. "Four years ago, we recognized that liquid cooling, which had been around for 10 to 15 years in government and research, was now commercially viable. But very few data centers were truly equipped to deploy it to its full potential," Lindsey explained. Flexnode identified an opportunity to deliver integrated liquid-cooled modules that connect IT systems to building infrastructure through a fully embedded design. Rather than developing proprietary liquid cooling technology, Flexnode focuses on being "liquid neutral." "The liquid cooling market is advancing well on its own," Lindsey said. "We want to enable OEM-driven solutions like JetCool, Motivair, Isotope, and ZutaCore, ensuring they perform optimally in an environment designed to support them." Flexnode operates at the building scale, working on innovative heat management strategies that eliminate the need for external heat rejection. "We integrate heat rejection into the panelized construction of our modular data centers," Lindsey explained.
This approach pushes forward a broader, integrated building design suited for liquid cooling.

The Shift Toward Hybrid and Two-Phase Liquid Cooling

David Chernicoff asked Lindsey whether Flexnode leans toward specific liquid cooling methodologies, such as waterless, multi-phase, or single-phase solutions. Lindsey responded that their focus aligns with OEM and ODM preferences. "Right now, we're primarily working with direct-to-chip water-based single-phase cooling," Lindsey said. "But as part of our work with the Cooler Chips program, we’re also developing a hybrid immersion approach with Isotope." This hybrid method integrates both direct-to-chip and immersion cooling. The industry is currently debating whether to move to a single-phase hybrid approach or leapfrog directly to two-phase cooling. "The big challenge with two-phase is the environmental impact of certain chemicals used in the process," Lindsey noted. While companies are actively working to address these concerns, two-phase cooling remains a complex consideration. Even Nvidia is leaning toward a two-phase future. "From what we've heard at CoolerChips, Nvidia sees the next generation as being two-phase oriented," Lindsey said. "But they can speak better to that." With liquid cooling now firmly part of the mainstream conversation, the challenge is not just about advancing the technology but also ensuring that the surrounding infrastructure evolves to support it. Flexnode’s approach—integrating liquid cooling at the building level—positions them at the leading edge of this shift.

Customer Demands Drive Cooling Technology Choices

As the industry evolves, cooling technology decisions are increasingly shaped by customer preferences. "Right now, it's very much customer-driven for us," Lindsey explained. "We're working with sophisticated customers—hyperscalers and GPU-as-a-service providers—who already know what they want to deploy."
While some enterprises may still be evaluating their liquid cooling options, hyperscalers are looking beyond traditional single-phase approaches, including both dielectric and water-based cooling. However, Lindsey emphasized that many of these developments remain in the R&D phase. "We don’t typically recommend one technology over another unless there’s a clear drawback," he said. One challenge with direct-to-chip cooling, for example, is achieving full heat absorption into the liquid. "That’s where hybrid approaches come in," Lindsey noted. He described hybrid designs that integrate both two-phase direct-to-chip cooling and immersion cooling, as seen in the CoolerChips program. "In some cases, direct-to-chip is single-phase, in others, it’s two-phase. We’re working as a category B provider, helping integrate these technologies at the building level." Lindsey also touched on sustainability concerns, particularly around immersion cooling. "Immersion is seen as the most sustainable in terms of energy efficiency, but there are still questions about how immersion fluids impact server longevity over time," he said. Factors like glue degradation and cable insulation breakdown raise questions about immersion cooling’s long-term sustainability profile. Two-phase cooling also presents challenges. "There’s an ongoing discussion about PFAS and finding non-toxic, non-carcinogenic alternatives," Lindsey explained. "Beyond that, two-phase cooling can create cavitational forces that affect motherboard and chip integrity over time. That’s why many in the industry—including Nvidia—are still weighing the trade-offs." With liquid cooling now firmly in the mainstream, the industry’s next challenge is integrating these technologies seamlessly into modular data centers. "It’s not just about cooling IT gear anymore; it’s about designing buildings that fully support liquid cooling at scale," Lindsey concluded. 
Flexnode’s modular approach positions them at the forefront of this transformation.

Modular Configurations and Integrated Power Solutions

Finally, Flexnode’s modular approach offers extreme configurability. "Our modules can be standalone data centers or integrated into powered shell facilities," Lindsey explained. "We configure everything from 2 MW to 20 MW standalone deployments, and we can scale up to 200 MW campuses." Beyond footprint flexibility, power integration is a growing focus. "On-prem generation is gaining traction, particularly with fuel-agnostic generators that can switch between natural gas, hydrogen, methane, and propane," Lindsey noted. Collaborating with partners like Hyliion, Flexnode is exploring adaptable power solutions, including fuel cells. Being behind the meter is another key driver. "Utilities are getting smarter about power allocation," Lindsey said. "In Europe, data centers are facing use-it-or-lose-it policies, and in the U.S., regions like Ohio are imposing tariffs on unused capacity." On-site power generation provides greater flexibility, helping data centers scale more efficiently and participate in curtailment programs that balance grid demand.

Looking Ahead

As modular data centers become a core part of the industry landscape, Flexnode is pushing the boundaries of what’s possible. "We see modular as a natural extension of utilities—a distributed solution that enhances flexibility," Lindsey concluded. "And we’re just getting started."
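The direct-to-chip cooling discussed throughout this episode is governed by a basic heat balance: the coolant flow required to absorb a rack's heat is the heat load divided by the coolant's specific heat times the allowed temperature rise. A back-of-envelope sketch (generic engineering, not a Flexnode design method; all figures are illustrative):

```python
# Back-of-envelope coolant flow for direct-to-chip liquid cooling,
# using the heat balance Q = m_dot * cp * delta_T. Illustrative only.

WATER_CP_KJ_PER_KG_K = 4.186  # specific heat of water, kJ/(kg*K)

def required_flow_kg_s(heat_kw: float, delta_t_k: float,
                       cp: float = WATER_CP_KJ_PER_KG_K) -> float:
    """Mass flow (kg/s) needed to absorb heat_kw with a delta_t_k coolant temperature rise."""
    return heat_kw / (cp * delta_t_k)

# A hypothetical 100 kW rack with a 10 K supply/return rise needs about
# 2.4 kg/s of water -- roughly 140 liters per minute.
flow = required_flow_kg_s(100, 10)
print(round(flow, 2))        # 2.39
print(round(flow * 60))      # ~143 L/min, treating 1 kg of water as 1 liter
```

The same arithmetic shows why single-phase water is attractive at these densities: a modest flow rate carries away heat that would take enormous volumes of air to move.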
The latest episode of the DCF Show podcast addresses one of the most critical challenges facing the data center industry: the search for sustainable, high-density power solutions. As hyperscale operators like Google and Meta face growing energy demands and resistance from utilities unable or unwilling to support their expansion, the conversation around nuclear energy has gained momentum. Both established nuclear providers and innovative startups are vying for the data center industry's future business, each offering distinct approaches. Our guest, Matt Loszak, co-founder and CEO of Aalo Atomics, shares insights into his company's fresh approach to nuclear energy. Aalo Atomics, which raised $29.5 million in Series A funding in 2024, has developed a 10-megawatt sodium-cooled reactor that eliminates the need for water cooling, offering greater siting flexibility. Inspired by the Department of Energy’s MARVEL microreactor, Aalo’s design benefits from direct expertise, as the company’s CTO was the chief architect behind MARVEL. Aalo’s vision extends beyond reactor design to full-scale modular plant production. Rather than simply building reactors, the company aims to manufacture complete nuclear plants using prefabricated, modular components that can be shipped in standard containers. These plants are designed to fit within the footprint of a data center and require no onsite water—features that make them especially attractive to hyperscale operators seeking localized, high-density power. Aalo has made significant progress, with the Department of Energy identifying Idaho National Laboratory (INL) as a potential site for its first nuclear facility. The company is on an accelerated timeline, planning to complete a non-nuclear prototype within three months and break ground on its first reactor in about a year—remarkably fast for the nuclear sector. 
Aalo’s modular nuclear power solution for data centers is designed to deliver 50 megawatts, using a sodium-cooled reactor inspired by the MARVEL microreactor at INL. “In just 30 months, MARVEL became the first reactor the DOE has ever authorized for construction,” said Loszak. Aalo has brought in key members from the MARVEL project, including its chief architect, to speed up development. During our conversation, Loszak discusses the implications of this new wave of nuclear innovation, including the shifting stance of the Trump administration on nuclear energy, the evolving economics of nuclear power (where past projects faced cost overruns and delays), and common misconceptions about nuclear safety, such as fears of reactor meltdowns and waste management.
The exponential growth of data center energy demand, particularly driven by advancements in Artificial Intelligence (AI), has emerged as one of the most pressing challenges for energy infrastructure globally. However, existing grid infrastructure is increasingly constrained, particularly in regions with concentrated data center activity. Transmission bottlenecks, aging infrastructure, and long timelines for grid upgrades present significant challenges for meeting this explosive demand. Podcast takeaways: how microgrids, powered by Distributed Energy Resources (DERs), offer a promising solution by reducing dependency on centralized grids, integrating generation from multiple fuels and storage, and providing load flexibility; the benefits of a strategy that includes and prepares for Small Modular Reactors (SMRs) when they become commercially available; the immediate and long-term benefits of this multi-year approach, illustrated through real-world data center examples in Santa Clara, California and Ashburn, Virginia; and how to optimize your energy investments, reduce OPEX costs by 60-80%, and significantly cut CO₂ emissions by using Xendee’s advanced microgrid modeling platform to design the right site-specific multi-year strategy.
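The OPEX comparison behind a DER strategy can be sketched as a toy model: annual energy cost is the load's energy consumption split between grid supply and on-site generation at their respective rates. All tariffs and supply shares below are hypothetical; real savings (such as the 60-80% figure cited above) depend on site-specific modeling of tariffs, fuel costs, and dispatch, which is what platforms like Xendee's are built for.

```python
# Toy annual energy OPEX model: grid-only vs. grid plus on-site DERs.
# All rates ($/MWh) and supply shares are illustrative assumptions.

def annual_opex(load_mw: float, hours: float, grid_rate: float,
                der_rate: float = 0.0, der_share: float = 0.0) -> float:
    """Annual energy cost ($) for a flat load, with der_share of supply from DERs."""
    mwh = load_mw * hours
    return mwh * der_share * der_rate + mwh * (1 - der_share) * grid_rate

# A hypothetical 50 MW site running year-round (8,760 hours):
grid_only = annual_opex(50, 8760, grid_rate=120)
with_ders = annual_opex(50, 8760, grid_rate=120, der_rate=70, der_share=0.6)
print(grid_only)                       # 52,560,000
print(1 - with_ders / grid_only)       # 0.25 -- a 25% saving in this toy case
```

Even this crude model shows the lever: savings scale with both the rate spread between grid and DER supply and the share of load the DERs can serve, which is why multi-year strategies that grow the DER share over time matter.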
As high-performance computing (HPC), cloud computing, blockchain, and artificial intelligence (AI) continue to expand globally, the demand for more capable data centers has surged. These next-generation data centers must manage workloads far beyond traditional capacities while addressing challenges such as finding skilled professionals and ensuring operational efficiency. By leveraging software-defined technologies, these data centers achieve better control over physical and virtual resources. Join Alan Farrimond and Andrew Jimenez, industry experts with decades of experience, as they discuss the innovations and strategies that are shaping the future of data centers, focusing on sustainability, energy efficiency, and cutting-edge technologies.
In the latest episode of the Data Center Frontier Show podcast, DCF Editor-in-Chief Matt Vincent sits down with Phill Lawson-Shanks, Chief Innovation Officer at Aligned Data Centers, for a wide-ranging discussion that touches on some of the most pressing trends and challenges shaping the future of the data center industry. From the role of nuclear energy and natural gas in addressing the sector’s growing power demands, to the rapid expansion of Aligned’s operations in Latin America (LATAM), Lawson-Shanks provides deep insight into where the industry is headed.

Scaling Sustainability: Tracking Embodied Carbon and Scope 3 Emissions

A key focus of the conversation is sustainability, where Aligned continues to push boundaries in carbon tracking and energy efficiency. Lawson-Shanks highlights the company’s commitment to monitoring embodied carbon—an effort that began four years ago and has since positioned Aligned as an industry leader. “We co-authored and helped found the Climate Accord with iMasons—taking sustainability to a whole new level,” he notes, emphasizing how Aligned is now extending its carbon traceability standards to ODATA’s facilities in LATAM. By implementing lifecycle assessments (LCAs) and tracking Scope 3 emissions, Aligned aims to provide clients with a detailed breakdown of their environmental impact. “The North American market is still behind in lifecycle assessments and environmental product declarations. Where gaps exist, we look for adjacencies and highlight them—helping move the industry forward,” Lawson-Shanks explains.

The Nuclear Moment: A Game-Changer for Data Center Power

One of the most compelling segments of the discussion revolves around the growing interest in nuclear energy—particularly small modular reactors (SMRs) and microreactors—as a viable long-term power solution for data centers.
Lawson-Shanks describes the recent industry buzz surrounding OKLO’s announcement of a 12-gigawatt deployment with Switch as a significant milestone, calling the move “inevitable.” “There are dozens of nuclear plants operating in the U.S. today, but people just don’t pay much attention to them,” he says. “Companies like OKLO are designing advanced modular reactors that are walk-away safe, reuse spent fuel, and eliminate the risks associated with traditional light-water reactors. This is the path forward.” However, he acknowledges that the widespread adoption of nuclear will take time, given the regulatory hurdles of the Nuclear Regulatory Commission (NRC) and the challenges of getting sites certified. Still, he remains optimistic: “We need this, and as an industry, we’re pre-buying energy because we see the challenges ahead.”

Bridging the Energy Gap with Natural Gas and Hydrogen

While nuclear is a long-term solution, data centers need reliable power sources today. Lawson-Shanks sees natural gas as a practical interim solution, provided emissions can be mitigated. He also points to hydrogen as an emerging technology with potential, though challenges remain. “Hydrogen is really an energy transportation methodology rather than an energy source,” he explains. “It’s highly corrosive, and the infrastructure isn’t fully in place yet, but it’s something we’re closely monitoring.” He predicts that natural gas reciprocating engines will serve as a bridge solution until nuclear modules become widely available. “Once we reach steady-state nuclear power, those gas engines could replace diesel generators, which we all want to phase out,” he says.

Explosive Growth in LATAM and the Evolution of Aligned’s Global Strategy

The conversation also covers Aligned’s expansion into Latin America following its acquisition of ODATA. Lawson-Shanks describes the region as a booming market, particularly in Brazil, where Aligned has access to renewable energy through its investment in wind farms.
“LATAM is an enormous growth market, and our waterless cooling system is ideal for places like Santiago, where water scarcity makes evaporative cooling unfeasible,” he explains. Aligned is integrating its advanced cooling technologies—such as Delta³ and DeltaFlow—into ODATA’s new facilities, ensuring that sustainability remains a core component of their LATAM operations.

Innovating Beyond Cooling: The Future of Heat Reuse

Another forward-looking topic is Aligned’s interest in heat reuse, an area where Lawson-Shanks sees significant potential for innovation. Through its partnership with QScale in Canada, Aligned is exploring methods to capture and repurpose waste heat from data centers for other applications. “Their heat reuse strategy is really interesting, and we’re looking at how we can implement similar solutions in North America,” he says, hinting at future developments to come.

Looking Ahead: A Future Shaped by Innovation and Sustainability

As the conversation wraps up, it’s clear that Lawson-Shanks sees the data center industry at an inflection point. The combination of sustainability commitments, new energy technologies, and rapid global expansion is forcing companies to rethink traditional models and embrace innovation at an unprecedented scale. “We’ve always fought against the idea that data centers have to be built the same way they were in the 1970s,” he says. “We’re constantly redesigning, rethinking how we procure energy, and pushing the industry forward.” With Aligned continuing to lead the charge in sustainability, energy innovation, and international expansion, the insights shared in this episode offer a compelling look at the challenges and opportunities ahead for the data center industry.
Recorded last December, this episode of the Data Center Frontier Show Podcast features DCF Editor in Chief Matt Vincent in conversation with Vantage Data Centers' North American President Dana Adams and Kaitlin Monaghan, the company's North American Public Policy Director. As president of Vantage Data Centers’ North America business, Dana Adams oversees market development, sales, construction and operations across the United States and Canada. With nearly 18 years of experience in the data center sector, Adams has a track record of successfully leading high-growth companies and diverse teams at scale. Prior to joining Vantage, Adams was the Chief Operating Officer for AirTrunk, the hyperscale data center giant serving the Asia-Pacific region. She was responsible for scaling operations, service delivery and customer success from one to five countries and established other critical business capabilities, including award-winning people, culture and sustainability programs, as the company grew from $3 billion to $10 billion. Earlier in her career, Adams served as vice president and general manager at Iron Mountain where she helped drive nearly $2 billion in growth through global acquisitions and development projects. In addition, she held several leadership positions at Digital Realty, including vice president of portfolio management, where she oversaw $3 billion in data center assets. Considered one of the most influential female executives in the industry, Adams was recognized by Data Economy on its power women list in 2019. She was a finalist in the 2020 and 2022 PTC awards as an outstanding female executive, an Infrastructure Masons (IM) 2022 award recipient and was recently featured by InterGlobix Magazine as an Inspiring Woman in Leadership. Adams earned a bachelor’s degree from Boston College and a Master of Business Administration from Simmons University. Kaitlin Monaghan serves as the Director of Public Policy, North America, for Vantage Data Centers.
In this role, she is responsible for leading a public policy program to support the company’s North American business. Monaghan partners with site selection, sustainability, tax, legal, energy and construction stakeholders to develop and advocate for Vantage’s position on a multitude of issues in current and future markets. Prior to joining Vantage, Monaghan held public policy roles at Rivian Automotive and the American Clean Power Association, where she managed legislative, regulatory and economic development matters at all levels of government. She also serves as Energy and Environment Co-Chair for the Data Center Coalition (DCC). A Florida native, she is a graduate of the University of Florida with a B.S. in Environmental Science and has a law degree from Florida State University College of Law with a concentration in Energy Law. Podcast: Talk on the podcast kicks off with a framing of Vantage Data Centers' recently announced $2 billion investment in a new data center campus in New Albany, Ohio, in the environs of Tier 2 industry hotspot Columbus, focusing on sustainability and efficiency. The discussion touches on how the Ohio market is becoming increasingly relevant for data centers due to strong connectivity and power availability, with most major hyperscalers already investing in the region. Along the way, we learn how Vantage's new campus in New Albany will utilize a sustainable design aimed at achieving LEED Silver certification, emphasizing low power usage effectiveness (PUE) and waterless cooling systems. The discussion also examines how partnerships with local organizations, such as the New Albany Community Foundation and Columbus State Community College Foundation, will support workforce development and community engagement. Vantage's Adams and Monaghan also speak on how continued collaboration with utilities and policymakers is essential to address power generation challenges while supporting future data center industry growth in North America.
Here's a timeline of the interview's key moments: Dana Adams shares insights on how her experience as COO of AirTrunk in Sydney informs her current role, focusing on scaling hyperscale data centers in North America. 1:36 Kaitlin Monaghan discusses her background in energy law and highlights her focus on renewable energy policy. 3:57 Investment trends in Ohio's data center market are discussed. Connectivity and power availability are identified as key factors. 7:11 The forthcoming OH1 data center campus is discussed. It will cover 70 acres and focus on sustainability. 9:57 The 200 megawatt campus will be built in phases. The first phase is set to open in late 2025. 10:37 Sustainable design principles are emphasized in the project. The design aims for low power usage effectiveness and minimal water usage. 11:31 Innovations in Ohio are discussed. The focus is on signal innovations for deployment. 13:00 Sustainable fuels integration is highlighted. Collaboration across the industry is emphasized to increase demand. 13:30 Challenges with new chip designs are addressed. Maximizing efficiency with GPUs in data centers is a key concern. 14:01 Partnerships with local organizations are discussed. Workforce development is emphasized as a key focus. 14:48 The importance of community engagement is highlighted. Vantage's long-term commitment to local hiring is noted. 15:19 Trends in workforce development within the data center industry are analyzed. The significance of workforce as a pillar of sustainability is mentioned. 16:43 Insights into Vantage Data Centers' growth are shared. Anticipation for 2025 includes a focus on infrastructure and workforce needs. 17:49 Challenges in power generation and transmission are addressed. Engagement with utilities and policymakers is emphasized for future growth. 19:54
For this episode of the recurring Data Center Frontier/Nomad Futurist field report podcast series -- aka "Nomads at the Frontier" -- DCF Editor in Chief Matt Vincent checked in for a fun yet informative discussion with Nomad Futurist Foundation Co-Founders Phillip Koblence and Nabeel Mahmood from the grounds of PTC'25, the annual telecom and data center industry conference put on by the Pacific Telecommunications Council in Hawaii, which has become one of the sector's most important live events. Podcast Series: Nomad Futurist is a 501(c)(3) non-profit organization established, per its mission statement, "to demystify the world of digital infrastructure and the related technologies that impact every aspect of our daily lives." Committed to educating youth in underprivileged communities, promoting diversity and inclusion, and opening up opportunities for growth and new career paths, the group says its "primary focus is to empower and inspire younger generations through exposure to the underlying technologies that power our digital world." Nomad Futurist is known for appointing individuals throughout the data center industry to its ranks of Ambassadors and Advisors, who work to promote the organization's ethos and goals in their professional spheres. The group's members are a pervasive presence in the data center sector, to be found in attendance and presenting at most industry events in the U.S. and abroad. The purpose of the Data Center Frontier/Nomad Futurist joint podcast series is therefore to gather valuable industry insights, expertise and commentary from Nomad Futurist leaders and ambassadors, firsthand and in the field, as they participate in these events. PTC'25: This year's PTC in Honolulu attracted over 10,000 attendees, highlighting a significant data center presence alongside telecommunications.
As revealed in the course of the podcast, key data center topics at this year's PTC included artificial intelligence, power demands, and the integration of natural gas as a bridge for energy needs. Importantly at this year's PTC, the Nomad Futurist Foundation announced the launch of the Nomad Futurist Academy and an associated job board in furtherance of its mission to enhance career pathways in the data center industry. During the course of the talk, emphasis was also placed on the value of "organic networking," with the Futurists advising on the strategic need to balance scheduled meetings with informal interactions at such industry events. Here's a timeline of the podcast's key moments: PTC'25 Event Overview - The event is noted as one of the largest in years; attendance in Honolulu is reported at over 10,000 individuals. 2:11 - A significant data center presence is highlighted at the event. The program's integration of telecommunications and data center sectors is emphasized. 2:33 - Questions about the logistics of the event are addressed. 3:39 Meeting Intensity at PTC'25 - Nomad Futurist held more than 40 meetings at this year's PTC. The meetings occurred over three days. 4:08 - High levels of physical activity were noted. Walking 10,000 to 30,000 steps a day was common, notes Mahmood. 4:36 - The five-year anniversary of the Nomad Futurist initiative was celebrated at this year's PTC. 6:01 Nomad Futurist Academy Launch - The Nomad Futurist job board highlighting data center career pathways is mentioned as upcoming news for the Foundation. 6:16 - Discussion turns to power demands in data centers, as discussed at the event. The shift in baseline power requirements from 10-20 megawatts to over 100 megawatts is highlighted. 7:01 - AI and its implications for power needs are explored. Conversation touches on large language models and their impact on efficiency ratios.
9:11 Global Networking at PTC'25  - A significant percentage of PTC attendees are from the United States. Approximately 45-46% of attendees are American, with the rest coming from around the world. 11:25 - The event in Hawaii is praised for its renowned industry networking opportunities. 13:28 - In-person interactions at conferences are emphasized as invaluable. The importance of networking and organic conversations is highlighted as crucial for setting the pace for the year. 13:47
For this episode of the Data Center Frontier Show Podcast, DCF Editor in Chief Matt Vincent and Senior Editor David Chernicoff sat down for a far-reaching discussion with data center industry luminary Ron Vokoun, a 35-year veteran of the construction industry with a primary focus on digital infrastructure.  "I got into telecom back in ’92, which led to data centers," he said. "Probably worked on my first one around ’96 or ’97, and I’ve been involved ever since." Currently the Director of National Market Development for Everus Construction Group, Vokoun has been involved in AFCOM, both regionally and nationally, for nearly two decades and is an emeritus content advisory board member for Data Center World. He has also written extensively for Data Center Dynamics. Vokoun added, "I’ve just always been curious—very much a learner. Being a construction guy, I often write about things I probably have no business writing about, which is always the challenge, but I’m just curious—a lifelong learner. Interestingly, [DCF founder] Rich Miller ... gave me my first blogging opportunity." Here's a timeline of the podcast's highlights: Introductions  - Ron Vokoun shares his extensive background. He has been in the construction industry for 35 years. 1:46 - On his role at Everus Construction Group and the company's diverse services across the nation. 2:07 - Vokoun reflects on his long-standing relationship with Rich Miller. He acknowledges Rich's influence on his blogging career. 3:05 Nuclear Energy  - A discussion about nuclear energy trends occurs. The importance of nuclear energy in data center construction is probed. 3:35 - Natural gas is highlighted as a key trend. Its role as a gateway to hydrogen is emphasized. 3:51 - The impact of recent nuclear developments is analyzed. The reopening of Three Mile Island is noted as significant. 4:55 Future Power Sources for Data Centers  - Discussion turns to the timeline for small modular reactors (SMR). 
Vokoun expresses some confidence that significant developments will occur within five years. 5:42 - Natural gas is identified as a potential primary power source. Its role as a cleaner alternative to diesel generators is acknowledged. 7:49 Natural Gas Interest   - Vokoun talks about how natural gas generators are being considered by major companies, and how much more implementation is anticipated in the near future. 9:18 - The advantages of multiple power sources are emphasized. Vokoun remarks on how natural gas plants can adjust more quickly than nuclear or coal plants. 10:53 Power Project Lawsuits and Concerns  - Concerns about the impact on residential customers are raised. The relocation of power from one vendor to another is discussed. 12:12 - The potential for increased power generation is highlighted. A net decarbonization effect is suggested due to more carbon-free power sources. 12:59 Impact of Liquid Cooling   - Discussion centers on advancements in power distribution. Insights are shared on liquid cooling infrastructure trends. 13:34 - Direct liquid cooling is noted as prevalent. Immersion cooling is mentioned as having lost traction. 16:06 Immersion Cooling Technologies  - A discussion about immersion cooling technologies occurs. The efficiency of direct to chip cooling is emphasized. 17:12 - Concerns regarding the weight of new racks are raised. The need for plumbing in liquid cooling systems is highlighted. 17:48 - The potential narrowing of the immersion cooling market is predicted. A quick market response is anticipated based on immersion cooling's market share. 19:00 Energy Storage Technologies Overview   - The advantages of various energy storage technologies are discussed. Lead acid, lithium ion, and sodium solutions are mentioned as key options. 20:00 - The shift in market share from lead acid batteries is highlighted. Sodium-based products are noted as an exciting emerging technology. 20:41 - Data centers in new locations are referenced. 
21:50 Evolving Site Selection Criteria   - The evolution of site selection for data centers is discussed. The importance of having reliable power sources is emphasized. 22:57 - The rise of data center locations in Indiana is highlighted, as an example of how previously overlooked areas are now experiencing significant development. 24:01
In this episode of the Data Center Frontier Show podcast, Matt Vincent, Editor-in-Chief of Data Center Frontier, talks to Alan Farrimond, Vice President of Wesco Data Center Solutions, about how AI, globalization and power challenges are impacting the data center industry. They also discuss some wider challenges across the industry and how Wesco is uniquely positioned to solve those challenges.
On this episode of the DCF Show Podcast, iMasons Climate Accord (ICA) Executive Director Miranda Gardiner shares insights on sustainability and emissions reduction strategies for data centers with DCF editors Matt Vincent and David Chernicoff. During the course of the talk, Gardiner explains how the iMasons Climate Accord (ICA), part of Infrastructure Masons, focuses on data center industry emissions reductions as its primary goal and counts approximately 300 member companies in the digital infrastructure space. The recent ICA flagship initiative emphasizing the value of Environmental Product Declarations (EPDs) for materials and equipment, as signed by major hyperscale players like AWS, Google, and Microsoft, is also unpacked. We also learn how the Climate Accord aims to enhance global outreach and collaboration, particularly in regions such as APAC and Latin America, to address local sustainability challenges. Gardiner also discusses how the group's future efforts will prioritize transparency and verification in sustainability claims to ensure accountability within the data center industry.
EdgeConneX's "Customers, People, and Planet" mission is the foundation for its sustainability efforts, shaping how the company designs, builds, and operates data centers worldwide. This podcast explores how this mission is implemented, embedding energy efficiency, renewable energy solutions, and local market engagement into every step of their operations. By prioritizing a balance between environmental responsibility and operational excellence, EdgeConneX demonstrates how sustainability can successfully align with business goals.
This podcast explores EdgeConneX's innovative approach to safety excellence, emphasizing its significant impact on operations and customer loyalty. Central to our discussion is the critical role of collaboration, as showcased by the "One Team, One Mission" theme from the EdgeConneX Safety Summits. EdgeConneX's commitment to safety extends beyond ensuring the well-being of its employees; it is a vital component of building trust with customers. The podcast highlights how EdgeConneX and its partners fortify collaborations to ensure data centers are designed and operated with paramount safety. This collaborative approach involves nurturing a learning culture that empowers employees to proactively identify and address potential risks, fostering an environment of continuous improvement and vigilance.
In today's episode of the Data Center Frontier Show podcast, DCF's editors speak with hyperscale data center industry veteran Yuval Bachar, founder and CEO of hydrogen data center operator EdgeCloudLink (ECL). Bachar has held data center leadership positions with Microsoft Azure, LinkedIn, Facebook, Cisco, and Juniper Networks. He was a founder of the Open19 project, which creates open hardware designs for enterprise users, and holds eight U.S. patents in data center, networking and system design. During the interview, we asked Bachar about ECL's flagship hydrogen data center projects near Houston, TX and Mountain View, CA. He went on to outline ECL's future plans for expansion and sustainability in response to growing AI demands. Within the context of Bachar's forecast outlook for hydrogen data centers, DCF's editors also inquired about natural gas as a transitional power source and the challenges of natural gas infrastructure. With the AI boom driving heavy interest in the upsides of hydrogen data centers, Bachar also took time to emphasize his company's ongoing commitments to sustainable data centers, as reflected by the industry at large. Our hydrogen production strategy discussion also touched on hyperscalers' intense needs for new energy solutions, before circling back around to sustainability in data center operations. Phased Development: During our interview, Bachar said that ECL is expanding its hydrogen data center business with a focus on Texas, aiming for 100 megawatts in the first phase of campus development there and additional phases every six months. The company plans to complete four sites in the next four to five years, contingent on hydrogen availability and supply chain capabilities. He emphasized that the urgency for data centers to meet AI demand is critical, citing estimates of the industry needing 50 to 100 gigawatts of power in the next five years, while highlighting the importance of rapid deployment and sustainable practices.
He further noted that ECL is positioned as a significant off-taker for hydrogen, influencing suppliers to invest in cleaner hydrogen production facilities.  Bachar underlined his company's sustainability bona fides by stating, "We can deliver data centers which are fully sustainable right now." He noted that ECL aims to use a blend of gray and blue hydrogen initially in its data centers, transitioning to green hydrogen as production increases.
A data center innovator with nearly three decades of experience, Chris Orlando co-founded ScaleMatrix, a leader in high-density colocation solutions, in 2011. His passion for pushing boundaries led him to co-found DDC Solutions in 2018. DDC is recognized for delivering the highest air-cooled density data center solutions with the lowest total cost of ownership. Chris's expertise has been instrumental in collaborating with industry giants like Intel, NVIDIA, and AMD to address complex data center challenges. Following a recent investment in DDC, Chris continues to shape DDC's strategic vision as Co-Founder, Chief Strategy Officer, and Board Member.
This episode will explore the evolving role of electrical and digital infrastructure in supporting AI-driven data centers, with a particular focus on the significance of cable management systems like Legrand's Cablofil. As AI technology grows and places increasing demands on data centers, it's crucial to understand how efficient infrastructure can help these centers scale, optimize energy use, and maintain reliable, high-performance environments.
For this episode of the DCF Show podcast, DCF Editor in Chief Matt Vincent spoke with Callum Morrison, Account Director at Cologix, and Wayne Lloyd, CEO of Consensus Core. A new collaboration announced in August 2024 between Consensus Core and Cologix launches the first NVIDIA-powered GPU as a Service (GPUaaS) in the Montreal market, making Cologix’s MTL10 data center the inaugural hub for high-performance AI workloads in Montreal. During the interview, we discuss: • What GPU-as-a-Service (GPUaaS) is and why it’s so valuable to businesses looking to leverage AI. • Why connectivity and interconnection are critical to support AI applications. • How Canada’s AI initiative is driving growth and adoption of AI. • The two companies' vision for the future of AI-ready data centers in Canada. Cologix is the largest data center provider in Montréal with 12 facilities and has a Canadian interconnection ecosystem of 350 networks, 200+ cloud providers, 15 public cloud onramps and three internet exchanges. Cologix, which bills itself as one of the "leading network-neutral interconnection and hyperscale edge data center" companies in North America, announced the collaboration with Consensus Core, an AI cloud service provider, to support the needs of AI technologies at its MTL10 ScalelogixSM data center in Montréal, Canada. The collaboration enables Consensus Core to launch a new, NVIDIA-powered GPU-as-a-Service (GPUaaS) in the Canadian market and transforms MTL10 into a hub for its high-performance AI workloads. “As a registered member of the NVIDIA Partner Network, Consensus Core will supercharge AI in Canada,” said Consensus Core CEO Lloyd, who is also a company co-founder. “Implementing AI in data centers with the powerful NVIDIA accelerated computing platform requires a specialized approach. We have selected Cologix to address this need. As a Canadian company, we sought a partner offering colocation services for GPUs for both Canadian and international clients.
Cologix’s hyperscale and highly interconnected data centers enable us to densify and scale our services to meet customer demands efficiently.” Unlike general cloud services that use general-purpose platforms for a wide range of applications, GPUaaS provides specialized, high-performance computing for specific AI tasks. This benefits companies that want to start running AI workloads because instead of buying and maintaining their own physical servers and hardware, they can get access to NVIDIA accelerated computing on a per-hour basis from companies like Consensus Core. This means less downtime waiting for delivery and easy-to-use tools to deliver business results faster. Background: In October 2024, Cologix announced capital raises of $1.5 billion USD to fuel its next stage of strategic growth by accelerating expansion of AI-ready data centers across key North American markets. Cologix plans to use the capital to support the ongoing build-out of large-scale campuses in its core markets, including Ashburn, Columbus and Montréal, as well as to begin new builds on recently acquired land in Columbus, Des Moines and Vancouver. Upon full build-out, all of the operator's planned data center construction can support over 650 critical megawatts (MW) of sellable capacity. The infusion of capital received strong investor demand, underscoring investor confidence in Cologix’s proven business model, growth potential and ability to execute on its strategic initiatives. The capital raises include a $1.0 billion USD revolving multi-asset development debt facility and an additional $500 million USD in equity from both new and existing investors. The debt facility is structured to provide Cologix with the flexibility to add new sites over time, offering quick access to capital to fund development projects as needed.
Both the debt and equity raises were oversubscribed, further underscoring investor confidence in Cologix’s proven business model, growth potential and ability to execute on its strategic initiatives. "This is a significant milestone for Cologix and demonstrates the continued trust of our investors, both new and existing," said Scott Schneider, CFO of Cologix. "The combination of debt and equity financing provides us with the flexibility and capital to keep pace with the growing demand for digital infrastructure, particularly as AI, hybrid cloud and interconnection requirements expand. This financing ensures we can continue to scale and deliver on our customers’ needs in a dynamic market." The $1.5 billion USD in financing announced in October followed the company’s successful $1.13 billion USD and $1.07 billion CAD asset-backed securitizations since 2021, as well as a $3.0 billion USD equity recapitalization in 2022, all of which positioned Cologix for sustained growth. Deployment: NVIDIA's H100 Tensor Core GPU-accelerated clusters will power Consensus Core’s GPUaaS operated in Cologix’s Montréal data center. NVIDIA H100 extends the NVIDIA A100 Tensor Core GPU’s global-to-shared asynchronous transfer capabilities across all address spaces and adds support for tensor memory access patterns. It enables applications to build end-to-end asynchronous pipelines that move data into and off the chip, completely overlapping and hiding data movement with computation. MTL10 is among Cologix’s largest network-neutral data centers, offering connectivity via high-count, diverse and scalable fiber with direct access to the Meet-Me-Room (MMR) in Montréal at Cologix’s MTL3 facility. The data center also offers strong interconnection capabilities to build and scale businesses with more than 100 unique network providers and low-latency connections to hyperscale cloud providers.
MTL10 is a 180,000-square-foot, purpose-built facility that is ISO 27001 certified by Schellman and HIPAA, SOC1, SOC2 and PCI compliant. “We’re thrilled to partner with Consensus Core to bring its GPUaaS offering to Canada,” said Sean Maskell, President and General Manager of Cologix Canada, in a press release. “Consensus Core’s innovative solution fills a critical gap in the market, empowering businesses of all sizes to leverage the immense power of AI and machine learning. At Cologix, we are deeply committed to supporting the growth of the Canadian technology sector, and this collaboration demonstrates our dedication to providing the essential infrastructure and services that businesses need to thrive in today’s world.” The companies contend that the collaboration positions MTL10 as the premier hub for high-performance AI in Canada, providing businesses with the infrastructure and tools required to take advantage of the full potential of AI technologies. Podcast: During the podcast, Morrison and Lloyd discuss their companies' collaboration on the new AI service and the transition of Consensus Core from specializing in crypto infrastructure to AI, with a focus on GPU as a Service. Wayne explains how this service allows companies to access AI capabilities by overcoming challenges related to chips, power, and data center capacity. Callum highlights Cologix's partnership with NVIDIA to enhance efficiency and scalability. The conversation emphasizes the growing demand for power in AI deployments and the importance of scaling deployments to achieve successful business outcomes. Specific questions for Callum and Wayne regarding the announced collaboration to empower businesses in Canada to leverage the power of AI and machine learning with NVIDIA-powered GPU-as-a-Service included: What is GPUaaS and how does it fit into the infrastructure ecosystem?
How does the collaboration between Cologix and Consensus Core deliver GPUaaS? What are the benefits for businesses? Do Cologix and Consensus Core have plans to expand their collaboration in Canada and/or other markets?
As Infrastructure Masons (iMasons) CEO Santiago Suinaga noted, the sold-out DCD Connect Virginia event in Leesburg on Nov. 6-7 was a standing-room-only affair, reflecting the region's huge interest in the data center industry, at a conference that, year over year, "does not disappoint," in the words of International Data Center Authority Chief Certification Officer Mark Gusakov. Both men are key advisors to the Nomad Futurist Foundation, the 501(c)(3) non-profit established, per its mission statement, "to demystify the world of digital infrastructure and the related technologies that impact every aspect of our daily lives." The purpose of the Data Center Frontier/Nomad Futurist: Field Report series -- aka "Nomads at the Frontier" -- is therefore to gather recurring industry insight, expertise and commentary from Nomad Futurist leaders and ambassadors, firsthand and in the field, as they participate in these events. During this impromptu podcast discussion, as recorded on-site at Leesburg's Lansdowne Resort and Conference Center, Santiago discusses key topics from the event's iMasons Member Summit, including education programs and community concerns.
He highlights challenges cited in the iMasons State of the Industry report, such as power, talent access, and sustainability planning. For his part, drawing on perspective gained from his ongoing travels around the industry, Mark emphasizes the need for standardization and correcting misconceptions about the data center industry, while urging professionals to act as ambassadors to improve public understanding as the industry grows its vital workforce and sustainability stakes. Santiago concurs with the pivotal need to increase data center awareness and bring more people into the industry. Mark concludes with some vibe check remarks, taking the temperature of Datacenter Dynamics' annual confab in the world's largest data center market.
Demand for data centers has never been higher. In our latest episode, we dive deep into the exploding world of data centers together with JLL's head of Data Center Research and Strategy for the Americas, Andrew Batson. According to JLL’s U.S. Data Center Report, the first half of 2024 shattered all records, but, what does this mean for you? We explore how these facilities have become the foundation of modern society and why securing land, power, and talent is more crucial than ever. How is the industry coping with limited supply in the face of insatiable demand? We'll reveal shocking statistics about the U.S. colocation market's growth and the unbelievably low vacancy rates. Plus, we'll uncover the massive impact of AI on the data center landscape, with investments skyrocketing into the hundreds of billions. We'll discuss the ongoing struggle to find and keep skilled workers in this rapidly expanding field. And while the U.S. power grid seems stable for now, what issues could threaten the industry's future? Join us as we unpack the complexities of the data center boom and explore what it means for the future of our digital economy.
Today our guest is Bill Tierney, Chief Sales Officer for BluePrint Supply Chain. Join us as we highlight some new research published by Data Center Frontier and BluePrint Supply Chain that addresses data center construction supply chains. This first-of-its-kind study addresses everything from purchasing and logistics to storage and site setting. Listeners will get a sneak peek at some of the compelling data the research has gathered and what it means for the current state of the industry's construction supply chains.
For this episode of the DCF Show podcast, Data Center Frontier Editor in Chief Matt Vincent and Senior Editor David Chernicoff speak with Tom Dakich, CEO of Quantum Corridor, about compute possibilities for his company's super-fast, super-secure fiber-optic network operating across Chicagoland and Northern Indiana. Almost exactly a year ago, Quantum Corridor launched what the company bills as "one of the fastest, most secure fiber-optic networks in the Western Hemisphere" with its first transmissions from the Chicago ORD 10 Data Center at 350 E. Cermak Rd. to a data center in Hammond, Indiana. Formed in 2021 as a public-private partnership with the state of Indiana, Quantum Corridor was established to enable advanced Illinois and Indiana tech innovators to exchange data nearly instantaneously, the better to achieve frontline technology breakthroughs. Funded through a $4.0 million grant from the state of Indiana’s READI grant program and with the cooperation of the Indiana Department of Transportation and Northwest Indiana Forum, Quantum Corridor's network utilizes 263 miles of new and existing fiber-optic cable beneath the Indiana Toll Road to link data centers, quantum research facilities, life sciences and genome scientists, and hyperscalers with industry-shattering speeds and throughput. Transmitting at data speeds reportedly 1,000x faster than traditional networks, on its launch in 2023 Quantum Corridor said the new network aims to enable regional businesses and institutions to achieve breakthroughs in defense, financial modeling, biotech, cybersecurity, machine learning, research and more. This optimism came on the heels of the Biden-Harris administration’s designation last October of the Chicago MSA as a U.S. Regional Technology and Innovation Hub.
With its first transmissions, Quantum Corridor achieved a latency of 0.266 milliseconds over its current 12-mile network—500 times faster than the blink of an eye, and roughly one-twelfth the latency of an average network. The combination of near-instantaneous transmission and massive throughput is expected to enable exponential breakthroughs in modeling and problem solving across myriad industries. Quantum Corridor continues to expand its mileage and connect research facilities. According to the company, the network already has the capacity to transmit nearly the entire current content load of the internet in a single transmission.
The purpose of the Data Center Frontier/Nomad Futurist: Field Report podcast series -- aka "Nomads at the Frontier" -- is to gather recurring industry insight, expertise and commentary from Nomad Futurist Foundation leaders and ambassadors, firsthand and in the field, as they participate in various industry events. Nomad Futurist is a 501(c)(3) non-profit organization established, per its mission statement, "to demystify the world of digital infrastructure and the related technologies that impact every aspect of our daily lives."  Committed to educating youth in underprivileged communities, promoting diversity and inclusion, and opening up opportunities for growth and new career paths, the group says its "primary focus is to empower and inspire younger generations through exposure to the underlying technologies that power our digital world." Nomad Futurist is known for appointing individuals throughout the data center industry to its ranks of Ambassadors and Advisors, who work to promote the organization's ethos and goals in their professional spheres. Nomad Futurist's members are a pervasive presence in the data center sector, to be found in attendance and presenting at most industry events in the U.S. and abroad.  For episode two of the Nomads at the Frontier series, DCF Editor In Chief Matt Vincent moderated a tight yet pithy discussion with Nabeel Mahmood, Co-Founder and Managing Director of Nomad Futurist, and Rob Coyle, Director of Technical Program for the Open Compute Project Foundation, about the newly announced strategic alliance between the two organizations as reflected at the 2024 OCP Global Summit (Oct. 15-17), each taking a shared role in addressing workforce and education challenges in the data center industry. In the podcast, Mahmood and Coyle highlight how the significance of the new alliance between their organizations was reflected at OCP 2024, which was attended by an amazing 7,000 people, and discuss future initiatives to foster collaboration. 
The discussion covers how this year's event answered the need for standardization in liquid cooling solutions, and how presentations reflected the growing importance of automation and robotics in response to issues ranging from increasing rack density to labor shortages, especially in hyperscale and AI-oriented data centers. The talk also addresses the alliance's joint roadmap to formalize strategic directions for the partnership, with OCP-Nomad Futurist announcements planned over the next three to six months, possibly including events such as hackathons, designathons, and other disruptive initiatives and happenings to engage both industry insiders and newcomers.
In this episode of the Data Center Frontier Show, Matt Vincent, Editor in Chief of Data Center Frontier, is joined by Waleed Zafar, Mission Critical Director at XYZ Reality to discuss using augmented reality to improve Data Center project delivery. XYZ Reality is a leading developer of augmented reality (AR) solutions for construction that give contractors and owners an accurate and objective way to manage and deliver quality projects.
With server densities on the rise, the expansion of cloud services, the rapid adoption of high-performance computing and the explosive growth of AI, data centers need more effective cooling solutions that can handle higher heat loads. Liquid cooling systems are uniquely positioned to fill that need – while also providing a significant reduction in cooling-related energy consumption. In this episode of the Data Center Frontier Show podcast, Matt Vincent, Editor-in-Chief of Data Center Frontier, talks to Pat McGinn, Chief Operations Officer of CoolIT Systems, about how the liquid cooling market has changed in the past 12 years. They also discuss the benefits of single-phase direct-to-chip liquid cooling and McGinn's predictions for the market in 2025 and beyond. Listen to this 18-minute podcast to learn more about: The benefits of liquid cooling for data centers. The liquid cooling options available for data centers. How liquid cooling can help improve data center performance and efficiency. The role of coolant distribution units (CDUs) in liquid cooling. How liquid cooling can help with energy consumption, especially with the rise of AI. Whether talk of liquid cooling capacity constraints is accurate. Why you should trust your data center to the liquid cooling experts at CoolIT.
In this episode, we delve into the complex interplay between performance and sustainability in data centers. As technology continues to advance, so too does the demand for powerful, efficient data centers. However, this growing demand also raises concerns about energy consumption and environmental impact.
For this installment of the Data Center Frontier Show podcast, we bring you the first episode in a new series with our friends from the Nomad Futurist Foundation. Nomad Futurist is a 501(c)(3) non-profit organization established, per the group's mission statement, "to demystify the world of digital infrastructure and the related technologies that impact every aspect of our daily lives."  Committed to educating youth in underprivileged communities, promoting diversity and inclusion, and opening up opportunities for growth and new career paths, the group says its "primary focus is to empower and inspire younger generations through exposure to the underlying technologies that power our digital world."  Nomad Futurist is known for appointing individuals throughout the data center industry to its ranks of Ambassadors and Advisors, who work to promote the organization's ethos and goals in their professional spheres. The organization's members are a pervasive presence in the data center sector, to be found in attendance and presenting at most industry events in the U.S. and abroad.  The purpose of the Data Center Frontier/Nomad Futurist: Field Report series -- aka "Nomads at the Frontier" -- is therefore to gather recurring industry insight, expertise and commentary from Nomad Futurist leaders and ambassadors, firsthand and in the field, as they participate in these events.  Yotta 2024 Impressions For the first installment of Nomads at the Frontier, Data Center Frontier's Editor in Chief Matt Vincent called into Las Vegas during the debut of Yotta, an event conceived and brought forth by Data Center Dynamics aimed at unifying leaders and stakeholders in the digital infrastructure industry at large.
For this interview, DCF spoke with Nomad Futurist Advisors Jodie Lin, Customer Advocate and CSR with data center infrastructure company Mirapath, Inc., and Illissa Miller, CEO of iMiller Public Relations, a firm focused on the digital infrastructure industry, for their reflections and impressions from the environs of Yotta 2024. To begin, we asked Lin and Miller for their top takeaways from the show regarding the confluence between the larger world of digital infrastructure and data centers. Next, given how attuned data centers are to the AI technology shift, we asked Nomads Lin and Miller for their perceptions, as heard at Yotta, of the level of preparation within the rest of the digital infrastructure space for facing up to AI's demands and opportunities. DCF also wondered whether, based on impressions received from Yotta, the data center industry's obsessions with power, cooling, sustainability, and managing exponential growth in the wake of AI seem to be shared equally by the larger world of digital infrastructure. Finally, owing to certain breakthroughs in the areas of regulation, funding, and planned deployment, this year has felt like a tipping point in terms of optimism for advanced nuclear energy, especially in the US data center industry. As such, we asked our Nomads to gauge whether the anticipation for "new nuclear" energy was as palpable in the larger world of digital infrastructure as encountered at Yotta.
Rehlko, formerly Kohler Energy, is setting a new standard by offering the data center industry's first generator with an Environmental Product Declaration (EPD). The company recently released a brand-new EPD in the form of a PEP ecopassport® that provides transparent, third-party verified insights into the KD Series™ generator's environmental impact across its lifecycle. Here's a link to the report that details how Rehlko is committed to transparently communicating its product's lifecycle footprint and how the process is accelerating data centers' efforts to measure Scope 3 emissions and work toward net-zero ambitions.
For this episode of the Data Center Frontier Show podcast, we sat down with liquid cooling data center partners Park Place Technologies and ZutaCore. During the podcast, DCF Editor in Chief Matt Vincent spoke with Chris Carreiro, Chief Technology Officer for Park Place Technologies, and Manfreid Chua, Vice President-Business Development, AI & Sustainability for ZutaCore, about how the companies' partnership is enhancing liquid cooling technology prospects for sustainable AI computing. In September, Park Place announced the expansion of its portfolio of IT infrastructure services to include the two major liquid cooling formats for data centers, i.e., immersion liquid cooling and direct-to-chip cooling. ZutaCore is a key developer and supplier of direct-to-chip, waterless liquid cooling technology which formally supports NVIDIA's GPUs. Direct-to-chip advanced liquid cooling technologies apply coolant directly to the server components that generate the most heat, including CPUs and GPUs. And Park Place notes that immersion cooling empowers data center operators to do more with less: less space and less energy. Using liquid cooling methods, the company contends that businesses can improve their PUE by up to 18 times, and rack density by up to 10 times. Ultimately, this level of efficiency can lead to power savings of up to 50%, which in turn leads to lower operational costs. Park Place also notes how, from an environmental perspective, liquid cooling is significantly more efficient than traditional air cooling. The company reckons that, at present, air cooling technology only captures 30% of the heat generated by the servers, compared to the 100% captured by immersion cooling, resulting in lower carbon emissions for businesses that opt for immersion cooling methods.
Park Place prides itself on providing a single-vendor outlet for the whole liquid cooling technology adoption process, from procuring the hardware and converting the servers for liquid cooling, to installation, maintenance, monitoring and management of the hardware and the cooling technology. “Our turn-key liquid cooling offerings have the potential to have a significant impact on our customers’ costs and carbon emissions, two of the key issues they face today,” said Carreiro.  “Park Place Technologies is ideally positioned to help organizations cut their data center operations costs, giving them the opportunity to re-invest in driving innovation across their businesses." In the course of our talk, Carreiro highlighted the challenges of data centers' AI sustainability conundrum, and the corresponding benefits of Park Place's warranties. For his part, ZutaCore's Manfreid Chua delved into the industry's shift from air to liquid cooling due to the demands of generative AI, and the advantages of his company partnering with Park Place for optimizing the energy efficiency footprint of data centers.  Additionally, Chua shared insights regarding the economic value of NVIDIA's AI accelerators, and the finer points of the race to sustainability and net zero for large-scale AI data centers. Chua also talked about how resources like land, energy, and water all become possible limiting factors for AI factories at scale, and how liquid cooling can help alleviate such limitations.
Join us for this podcast as we explore the dynamic landscape of data centers and how Artificial Intelligence (AI) has reshaped them. We'll delve into the shift from a 'north-south' traffic system to the sophisticated 'east-west' system that revolutionized data processing. Our guest, Dave Hessong from Corning, illustrates the crucial role of high-speed connections like 800G in meeting AI's demands. The discussion reveals how upgrading to this speed is not just beneficial, but essential in optimizing your data center. Latency, a key factor in network performance, is also a core topic of our conversation. Understanding its significance and how reducing it can enhance performance provides an edge in today's competitive market. The discussion further delves into the importance of state-of-the-art fiber optic cables, connectors, and cabling architecture in boosting a data center's performance. The complexities of AI deployment, its impact on fiber density, and the innovative solutions it necessitates are also explored. As we unveil the future of data centers, the estimated rise in AI capacity and the associated challenges are discussed. These include the increased power requirements and the need for a more organized cable and fiber infrastructure. While 800G might seem like just the beginning, the discussion elaborates on how this transition can future-proof your data centers for the next three to seven years. The extraordinary and transformative impact of AI, still in its infancy, on business and society is also a key highlight. Looking to the future, the anticipated growth in bandwidth as AI continues to evolve, and the exciting prospect of technology reaching 1.6Tbps next year, are discussed. We encourage you to tune in and engage with us as we navigate this rapidly evolving field. Regardless of your level of expertise, this conversation promises valuable insights into the future of data centers. 
Join us on this enlightening journey into the world of AI and data centers.
Prometheus Hyperscale is the new corporate entity formed this month which expands upon the footprint and the promise of the Wyoming Hyperscale White Box project, first reported on by DCF in 2022.  For this episode of the Data Center Frontier Show podcast, we spoke with Trenton Thornock, founder of Wyoming Hyperscale, who has been appointed as Chief Executive Officer of Prometheus Hyperscale; Trevor Neilson, a seasoned climate-tech CEO and energy transition investor, who joins as the company's President; and John Gross, President of J.M. Gross Engineering, who is handling the project's liquid cooling infrastructure.  The Wyoming Hyperscale White Box data center has been under construction since 2022 on 58 acres of land near Aspen Mountain in Evanston, Wyoming, and represents a blueprint for creating super-efficient data centers with low impact on the environment and benefits for the local community. With the transition, Wyoming Hyperscale has merged into Prometheus Hyperscale and been expanded from a 120 MW project to plans for a data center campus with 1 GW of IT capacity. The data center is being built on land owned by Thornock's family, which has been involved in ranching for six generations. The location benefits from ready access to renewable energy from nearby wind and solar farms. Wyoming Hyperscale has a contract with Rocky Mountain Power for 120 megawatts of power and a 138 kV substation, which is fed by the same switchgear as the renewable energy generation sites. The site sits on a major east-west fiber highway that tracks the 41st parallel, along which data center hubs have emerged in places like Ohio, Iowa, Nebraska and Utah. The Union Pacific Railroad line, which provides key rights-of-way for fiber deployment, runs through nearby Aspen Mountain. The Evanston project underscores Prometheus Hyperscale’s commitment to sustainability and innovation.
By integrating 100% renewable energy and advanced liquid cooling technology combined with heat reuse, the Evanston facility promises to be one of the most efficient and environmentally friendly data centers in the world.  Importantly, less than 10% of the project’s power development plan is grid dependent (120 MW of 1,220 MW, or 9.84%). The first facilities yielded by Phase 1 of the Evanston project are expected to come online within the next 18 months. Prometheus Hyperscale has also revealed plans to construct four other data centers across Arizona and Colorado. And as previously reported by DCF, this May saw the announcement of a 20-year power purchase agreement (PPA) by fission-based nuclear small modular reactor (SMR) specialist Oklo to deliver 100 MW of power to Prometheus, using Oklo's Aurora Powerhouse reactors for power generation. "Our partnership with Oklo not only provides us with a reliable, clean energy source but also positions us as a leader in sustainable data center operations," said Thornock. "Sam Altman’s and Jacob Dewitte’s vision for a sustainable future through advanced energy solutions aligns perfectly with our mission at Prometheus Hyperscale." During the podcast, Thornock discussed the evolution of the Wyoming hyperscale project with Prometheus, highlighting its growth to a 1 GW prospect since the groundbreaking of the Evanston project in 2022. For his part, Trevor Neilson emphasized increasing demand for Prometheus driven by advancements in computing power and the importance of sustainability in the energy transition.  Our conversation also covered the company's partnership with Oklo, focusing on the streamlined permitting process for small modular reactors in Wyoming and the strategic use of resources for data center energy generation.
Sustainability is a critical factor in data center design. The topic encompasses a series of design trade-offs including reliability, site selection, water usage, operating parameters, construction materials, and cooling efficiencies. Due to a couple of key paradigm shifts in the industry, today's data center owners and operators are looking to meet their cooling demands with air-cooled solutions. All this needs to be done in conjunction with optimizing energy efficiency, leading to a significant change in HVAC system products and design. In this conversation, Jeffrey Jerwers discusses the trends driving the need for water conservation and the associated equipment impact. He details the types of economizers available for mechanical cooling systems, their application by climate zone, and their associated design trade-offs.
Data centers are complex, high-stakes environments where downtime is not an option. The sheer volume of interconnected systems and components creates a daunting challenge for operators, and that complexity demands a new level of understanding. Digital twins—virtual models fed by real-time DCIM data—can offer a transformative solution, with one caveat: a digital twin is only as good as the accuracy of its real-time data. This continuous flow of real-time information allows operators to see the bigger picture, from power usage to equipment health. Imagine a live digital replica predicting bottlenecks, optimizing cooling, and enabling proactive maintenance. A digital twin can analyze your infrastructure, highlight potential issues, and provide highly accurate details on the impact of proposed changes, viewable as your monitored values change. Watch an ATS or PDU view with a power load that reflects your changing values and plan, or watch a power load peak during a failover with your planned changes applied to the real-time data. Power management with digital twin capabilities can simulate the failure of a device or a load change and accurately model the effects of that failure, including triggering failover to redundant partners and cascading failures. With a DCIM solution offering digital twin capabilities, you gain insightful reporting that identifies potential risk areas in your infrastructure. For instance, it can flag power distribution gear that represents a single point of failure, which could lead to equipment de-energization and impact customers and SLAs. This proactive approach to risk management is a vital advantage. Imagine a planning module built on a digital twin model: it doesn't just show simple details like additional loads but also simulates complex scenarios.
For instance, it can predict when a device will fail, reroute load in the virtual model to redundant partners, and show the effect on those devices as well. This comprehensive approach to planning is where a true digital twin adds much more value than a simple load addition or removal. By using real-time DCIM data, digital twins become intelligent partners, ensuring peak performance and a more resilient data center. While the idea of a digital twin for the data center has existed for a long time, operators can now have that digital twin fed by millions of data points per minute inside a full-fledged, powerful DCIM. The view of Modius is that anything less makes it just a "digital cousin." The company believes its Modius® OpenData® platform is the gateway to these next-gen capabilities and is using this podcast to kick off this effort.
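The failure-simulation behavior described above (failing a device, rerouting its load to a redundant partner, and cascading when the partner overloads) can be sketched as a toy model. This is a minimal illustration under assumed names and capacities, not the logic of Modius OpenData or any real DCIM product:

```python
# Toy digital-twin failover simulation. Device names, capacities, and the
# cascade rule are hypothetical, chosen only to illustrate the concept.

class Device:
    def __init__(self, name, capacity_kw, load_kw, backup=None):
        self.name = name
        self.capacity_kw = capacity_kw
        self.load_kw = load_kw
        self.backup = backup   # redundant partner that absorbs load on failure
        self.failed = False

def simulate_failure(device):
    """Fail a device and push its load onto redundant partners,
    cascading further failures when a partner is overloaded.
    Returns (reroute events, unserved load)."""
    events = []
    device.failed = True
    load, device.load_kw = device.load_kw, 0.0
    target = device.backup
    while target is not None and load > 0:
        headroom = target.capacity_kw - target.load_kw
        absorbed = min(load, headroom)
        target.load_kw += absorbed
        load -= absorbed
        events.append((device.name, target.name, absorbed))
        if load > 0:            # partner overloaded: it fails too,
            target.failed = True  # and its full load must be rehomed
            load += target.load_kw
            target.load_kw = 0.0
            device, target = target, target.backup
    return events, load         # unserved load > 0 flags an SLA risk

# PDU-A backed by PDU-B; PDU-B has no further backup, so a cascade
# leaves load unserved and the planner can flag the risk.
pdu_b = Device("PDU-B", capacity_kw=200, load_kw=120)
pdu_a = Device("PDU-A", capacity_kw=200, load_kw=90, backup=pdu_b)
events, unserved = simulate_failure(pdu_a)
```

In this sketch, failing PDU-A shifts 80 kW to PDU-B, which then exceeds capacity and fails as well, leaving the combined 210 kW unserved; a real digital twin would run this kind of what-if against live monitored values rather than static numbers.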
As everyone on the Data Center Frontier and Endeavor Business Media (EBM) teams regroups from last week's sold-out DCF Trends Summit (Sept. 4-6) conference in Reston, Virginia, for today's episode of the DCF Show Podcast we bring you something a bit different. Recorded earlier this year, EBM's Data Center, Communications, and Power Infrastructure Confluence Forum is a shared discussion among the lead editors of key brands and publications in EBM's Digital Infrastructure and Energy Groups. The discussion frames and addresses the topic of rapidly expanding stakes and implications for the data center, information and communications technology (ICT), fiber broadband, and on-site power generation infrastructure sectors in the age of advanced computing and connectivity for AI/ML, IoT, 5G LTE, all flavors of Ethernet, and other pertinent technology applications. EBM editors in order of their participation in this discussion include: 00:00 - 14:00 - Matt Vincent, Editor in Chief, Data Center Frontier 14:02 - 26:13 - Patrick McLaughlin, Editorial Director, Cabling Installation & Maintenance 26:13 - 33:51 - Joe Gilliard, Executive Editor, ISE | ICT Solutions & Education 33:52 - 42:11 - Sean Buckley, Editor in Chief, Lightwave and Broadband Pulse (podcast) 42:12 - 1:06 - Rod Walton, Chief Editor, Microgrid Knowledge The discussion winds up with a bit of cross-questioning among the editors. We at Data Center Frontier hope you'll enjoy this podcast, and will resume with our regular, data center industry-specific coverage later this month.
For this episode of the Data Center Frontier Show podcast, we welcome Mark Seymour, Distinguished Engineer with Cadence Design Systems, for a discussion of the big question on everyone’s mind right now in this industry: data center power demand and where it's going in the context of rapid digitalization and exponential growth of HPC and AI computing needs, and how that compares and contrasts, or even conflicts, with increasing environmental concerns and regulations.  The conversation also highlights the importance of digital twins for managing data center efficiency and the advantages of liquid cooling technology, and particularly immersion cooling, as a sustainable alternative to traditional methods. In the course of our interview, Seymour also emphasizes the data center industry's responsiveness to societal demands for sustainability, citing initiatives such as ubiquitous tree planting by project developers, and the need to adapt to new technological challenges. Here's a timeline of the podcast's key moments: 2:59 - Seymour explains that AI is essentially high-performance computing, which is now required in many data centers that previously did not need it. 12:05 - Addressing the challenges and potential of immersion cooling technology: Emphasizing its growing acceptance, but also the need for confidence in its operation. 17:52 - Talk turns to the importance of digital twins in ultimately managing data center efficiency, with Seymour highlighting the necessity for understanding the interrelated behaviors of IT infrastructure and cooling systems. 24:18 - Discussion circles back to immersion cooling as a sustainable option for data centers, with Seymour expounding on its advantages over traditional cooling methods. 27:44 - Seymour elaborates on the improvements in compute efficiency per watt in modern systems, arguing that the data center industry is responding and adapting to societal demands, rather than being inherently unsustainable. 
30:42 - Seymour acknowledges the industry's focus on sustainability and environmental impact, citing examples such as Cadence's tree planting initiatives and the ongoing challenge of meeting new technological demands.
Data Center Frontier opens our podcast interview catching up with CyrusOne CEO Eric Schwartz by discussing the company's recent $12 billion in announced financing, highlighted by a new $8 billion warehouse facility in the U.S. to support growth driven by demand from hyperscalers and AI technologies. In the course of the discussion, Schwartz notes CyrusOne's strong growth trajectory, new leadership, and expansion plans in Europe and Japan, while emphasizing the organization's principles of earning customers' trust and a commitment to operational excellence.  We also receive an update on the progress of the company's Intelliscale offering for build-to-suit AI data centers. Additionally, the talk covers CyrusOne's 2024 sustainability report, focusing on the company's carbon neutrality efforts, renewable energy investments, and the overall industry's commitment to reducing carbon footprints.
This May, Digital Realty (NYSE: DLR) announced a collaboration with Oracle to accelerate the growth and adoption of artificial intelligence (AI) computing among enterprises. For this episode of the Data Center Frontier Show podcast, we asked Digital Realty Chief Revenue Officer Colin McLean to expand on key points of his company's AI data center design and implementation efforts in light of the new partnership with Oracle. In their announcement, Oracle and Digital Realty said their new strategic collaboration aims to develop hybrid integrated solutions that "address data gravity challenges, expedite time to market for enterprises deploying next-generation AI services, and unlock data and AI-based business outcomes." We also asked McLean about how he's seen the trend lines for data center pricing, leasing and capacity changes over the past 5-6 years of the cloud industry, compared to roughly the past year of AI growth since he's been CRO for Digital Realty. Here's a timeline of the interview's key moments:  0:32 - Data Center Frontier asks McLean to elaborate on the salient points of AI data center design in light of Digital Realty's partnership with Oracle. McLean explains the significance of the partnership, emphasizing how it addresses the challenges of managing high-density workloads in AI and data-intensive applications. 6:19 - DCF continues asking about trends in AI data center design, particularly regarding pricing, leasing, and capacity changes over the past few years. Citing how enterprises and service providers are planning for increased capacity requirements due to AI growth, McLean highlights the need for forward-thinking capacity planning due to evolving requirements, increasing workload density, and the introduction of new programs to support higher density requirements. 8:14 - Trends in AI data center design since the industry's ChatGPT inflection point of 2023 are addressed. 
McLean emphasizes the importance of designing data centers to accommodate mixed densities globally, sustainability considerations, and the need to closely collaborate with clients and partners to meet evolving requirements. 12:20 - McLean discusses the global scale of Digital Realty, emphasizing capacity requirements, major metros, emerging markets like Frankfurt, and the growth of the platform across various regions. 15:06 - DCF Editor in Chief Matt Vincent directs the conversation toward the topic of power, highlighting its significance in the data center industry and asking McLean about aspects related to renewables, the grid, and onsite powering options. 15:45 - McLean elaborates on the importance of sustainability for Digital Realty, emphasizing the company's efforts to work with municipalities, support a greener world, and address power concerns globally, including plans for future expansion into markets such as India and Africa. 17:42 - DCF acknowledges the critical link between sustainability, AI, and power issues, prompting McLean to reiterate the company's commitment to supporting a sustainable world and navigating the balance between local needs and economic growth.
Welcome to the forefront of data center innovation! Today, we'll explore how Artificial Intelligence (AI) is driving a revolution in data center design. We'll delve into three key areas: high-performance networking, cutting-edge cooling solutions and advancements in fiber optic technology. These advancements are all essential for supporting the ever-growing demands of powerful AI systems.
For this episode of the Data Center Frontier Show podcast, we interviewed Chris Downie, Chief Executive Officer for Flexential, who as a frequent industry commentator has emphasized the transformative impact of AI on IT infrastructure and the need for thoughtful deployment in terms of responsible use and ethical standards.  In a recent LinkedIn post, Flexential's CEO wrote, "Drawing on nearly two decades in the data center industry, I've seen transformative changes, but the rise of AI marks a true paradigm shift, redefining our approach to IT infrastructure...The true test of our leadership will be how we manage the dual challenges AI presents—its potential to revolutionize and its power to disrupt. As industry leaders, we must ensure our advancements in AI are matched with advancements in ethical practices. Our legacy will hinge not just on the technologies we deploy, but on the conscientiousness with which we wield them." Additionally on social media, Flexential's Downie recently recounted various interconnection discussions he took part in at this Spring's ITW 2024 (May 14-17) conference in Maryland. We asked Downie about an ITW discussion he took part in regarding the connectivity challenges and solutions essential for supporting new-scale compute campuses, and how these demands are driving the evolution of connectivity infrastructure to meet future needs. "As AI's footprint expands, so does its energy consumption, which can rival that of entire nations," noted Downie. He added, "As cloud services, IoT, AI, and digital transformation demands escalate, the importance of robust, interconnected data centers has never been more critical. Over the last 18 months, AI GPU demand has significantly increased, highlighting the importance of robust networking both inside and between facilities. At Flexential, we're proactively addressing these challenges to ensure our network solutions keep pace with growing demands and industry needs." 
Our far-ranging podcast discussion also touched on the evolution of high-density data centers among enterprise, cloud and AI use cases, and Downie's assessment of current trends in power and cooling, innovations in liquid and air cooling, as well as sustainable practices, power generation considerations, and workforce challenges in the data center industry. Notably, Flexential CEO Chris Downie is also a member of the Editorial Advisory Board for DCF's inaugural Data Center Frontier Trends Summit, a live conference event to be held from Sept. 4-6 in Reston, Virginia. Here's a timeline of the podcast discussion's key moments: 2:53 - Flexential CEO Chris Downie highlights the transformative impact of AI on IT infrastructure, stressing the need for thoughtful deployment due to the significant pace of change and the implications for privacy, equity, and biases. 13:22 - Downie mentions the opening of new Flexential facilities in Denver and Atlanta, emphasizing the evolving significance of Denver as a destination for large-scale workloads, and the growth potential in Atlanta as a Tier One market. 17:24 - Downie elaborates on the evolution of Flexential's high-density data centers from its gen four to gen five designs, the blend of CPU and GPU infrastructure, the current state of GPU environments, and the ongoing exploration of liquid cooling solutions. 22:34 - The discussion touches on data center considerations for on-site generation, battery backup, sustainability, and alternative energy, prompting Downie to discuss the industry's exploration of new ways to manage power demands, including nuclear and natural gas options. 26:05 - Backup power sources such as hydrogen and batteries are addressed. Talk then shifts to workforce challenges and community relations in the data center industry. 
Flexential's Downie reflects on the increasing public understanding of data centers' importance and the evolving generational shift towards appreciating digital infrastructure.
The increased demands of cloud services and artificial intelligence are changing the data center landscape. The days of a 20 MW to 50 MW hyperscale data center being sufficient to meet the requirements of high performance computing are quickly fading away. Today, exascale data centers capable of providing more than 500 MW of power are increasingly taking center stage, ushering in the gigawatt era.
For this episode of the Data Center Frontier Show Podcast, ark data centers CEO Brett Lindsey explains the reasons behind the company's recent rebranding (from Involta) and its strategic direction toward data center colocation, edge, and AI disciplines. Also during the interview, Lindsey discusses the new company's planned expansions and recent entry into new markets such as Green Bay, Wisconsin; its investment strategies; and theories of customer segmentation based on colocation and cloud needs. The conversation further covers the significance of ark data centers' CMMC [Cybersecurity Maturity Model Certification] 2.0 compliance capabilities, its partnerships with government entities, and the company's unique positioning to cater to specific regulatory needs and edge demands.
For this episode of the Data Center Frontier Show Podcast, DCF Editor in Chief Matt Vincent runs down a synopsis of Data Center Frontier's top 5 most-viewed editorial stories of the Second Quarter of 2024, as ranked by pageviews. The stories cited in this episode are as follows: 1. The Gigawatt Data Center Campus is Coming 2. IEA Study Sees AI, Cryptocurrency Doubling Data Center Energy Consumption by 2026 3. Land and Expand: New Data Center Developments by Meta, T5, Prime, Ardent, Tract, Microsoft 4. Equinix Puts Down $25M In Data Center Nuclear Power Deal with Sam Altman's Oklo 5. Prologis Launches $25B Dedicated Data Center Arm Led by Compass Co-founder Chris Curtis
Sustainability is a hot topic in the data center industry as operators look to reduce emissions while meeting customers’ ever-increasing demand for power. Can data centers develop and implement renewable energy solutions that will lower emissions and still provide the reliability customers expect? In this episode of the Data Center Frontier Show podcast, Matt Vincent, Editor-in-Chief of Data Center Frontier, talks to Toyebi Adedipe, Sales Manager-Engineered Solutions, Data Centers for Kohler, about the intersection of engineering and renewable energy sources. Specifically, they discuss innovations and technologies that are currently being used and impacting the future of data centers, as well as the design, construction and maintenance of renewable backup power solutions. Listen to this 30-minute podcast to learn more about: · The importance of electrical infrastructure design in data centers, especially concerning reliability and scalability. · How to approach the integration of renewable energy sources into the electrical systems of data centers, considering both technical challenges and environmental benefits. · Key considerations when implementing power distribution and backup systems in a large-scale data center to ensure uninterrupted operation. · Emerging innovations and technologies that can help reduce power consumption and optimize energy use.
As the demand for power intensifies, the need for robust and safe electrical distribution systems in mission critical environments is paramount. Join us for an insightful podcast hosted by Starline, the leader in innovative electrical power solutions, as we delve into the critical strategies and technologies essential for enhancing the safety of electrical distribution in the face of rising power densities in data centers. In this podcast, John Berenbrok, Director of Product Management at Starline, will address the challenges posed by the increasing concentration of electrical power in mission critical environments. We will explore the latest advancements in electrical distribution that ensure not only operational efficiency but also the highest standards of safety. Key discussion points will include: Understanding Power Densities Best Practices in Electrical Distribution Enhanced Safety Solutions This webcast is ideal for data center managers, electrical engineers, and anyone involved in the design, implementation, and maintenance of mission critical electrical distribution systems.
In this AI evolution, the industry has showcased resilience and agility in addressing the new power demands and resiliency challenges, and we’re seeing data center operators quickly adapting to reshape data center design and operation strategies. In this episode of the Data Center Frontier Show podcast, DCF Editor in Chief Matt Vincent speaks with Vance Peterson, Solutions Architect for Schneider Electric, who sheds light on the industry's response to this critical need, unveiling strategies, technologies, and concepts that are revolutionizing power & cooling delivery within data centers, paving the way for unprecedented computational capabilities. Looking toward the future, we explore the imperative transformation of sustainable energy resiliency for multi-sourced critical systems, positioning data centers as prosumers in the new energy landscape. Our guest provides valuable insights into the evolving role of data centers as key players in sustainable energy ecosystems, ensuring reliability and efficiency in an era of dynamic energy demands. Furthermore, the podcast delves into the groundbreaking collaboration between Schneider Electric and NVIDIA, illuminating the optimization of data center infrastructure and advancements in edge AI and digital twin technologies. Finally, we uncover how AI and machine learning technologies are driving software efficiencies and predictive analytics in data centers, showcasing how digital services can revolutionize data center operations and drive unparalleled performance and sustainability. Tune in for a captivating discussion at the forefront of technological advancement with AI, sustainability, and data center innovation in the data center industry.
For this episode of the Data Center Frontier Show podcast, Data Center Frontier Editor in Chief Matt Vincent meets with David Mettler, Executive Vice President of Sales and Marketing for T5 Data Centers. In the course of discussion, Mettler provides an overview of T5 Data Centers' model for building data center capacity in light of current events, emphasizing the importance of power and meeting agreed-upon timelines. Talk centers on how T5 has acquired 160 acres in Grayslake, Illinois for a data center with a power capacity of up to 480 MW, to be delivered between 2027 and 2029. Mettler emphasizes T5's flexibility and how a customer-centric solutions mindset informs the company's data center model.  The discussion also touches on T5's commitment to environmental stewardship, considering various onsite data center energy options such as nuclear and hydrogen. Here's a timeline of the podcast's key moments: 2:17 - David Mettler details T5's acquisition of 160 acres in Grayslake, Illinois for data center development, emphasizing factors such as zoning, an attractive site, and power capacity up to 480 MW delivered between 2027 and 2029. 5:16 - Mettler highlights T5's presence in Chicago, previous and current projects in the area, and the region's favorable utility conditions, tax incentives, and the continuous growth of the market. 6:43 - Mettler outlines T5's model for building data center capacity, emphasizing flexibility, customer-centric solutions, and the importance of delivering on time to maintain customer trust and reputation. 8:51 - Further details on T5's 480 MW power delivery plan in Chicago are explored, involving potential for phasing up to 850 MW based on customer needs, and the attractiveness of the property due to high power demand. 
16:39 - DCF Editor in Chief Matt Vincent raises the topic of sustainability, prompting Mettler to elaborate on T5's commitment to environmental stewardship, participation in reporting frameworks, and the challenges of balancing growth with green power limitations. 18:46 - Discussion targets exploration of various energy options such as nuclear, hydrogen, and natural gas, highlighting the industry's focus on meeting energy demands responsibly. 22:06 - The discussion expresses optimism about collaborative efforts across industries to address energy needs, particularly praising innovative "new" nuclear designs and emphasizing the potential of nuclear energy for a sustainable future. 26:07 - Mettler highlights T5's unique perspective in constructing and operating data centers for both owned and client-owned facilities, emphasizing the company's expertise and ownership mentality in delivering tailored solutions, especially in the context of liquid cooling and high power demands.
For this episode of the Data Center Frontier Show Podcast, DCF Editor in Chief Matt Vincent sat down for a chat with Christopher McLean, PE, ATD, LEED AP. Specializing in the design, operations and construction of data centers, Chris is a Principal at Critical Facility Group in Boston. He previously held Director-level roles at a global engineering and construction corporation, a consulting engineering firm, as well as at a carrier hotel and colocation facility. Grounded in journeyman desktop support and hardware specification expertise, McLean's data center experience holistically encompasses all aspects of data center delivery, including elements of modular design and construction, design engineering, and facility operations. He is a frequent presenter at technical conferences, and a contributor to industry publications. We caught up with Chris shortly after his appearance presenting an AI facility design and construction case study on the seminar stage at Data Center World. Our conversation touched on the challenges posed by high-density AI designs in data centers and the overall "state of liquid cooling" for AI. Additionally, the importance of a pragmatic approach to recycling IT assets and the adoption of new battery technologies was highlighted. Also discussed was the increasing interest in nuclear small modular reactors (SMRs) for meeting the power demands of data centers in the AI era, along with the potential economic and community impact of these technologies. Talk also ranged over such subjects as data center controls, building automation, electrical power monitoring systems, and building management systems to enhance total product delivery to data center operators. Here's a timeline of the podcast's key moments: 2:31 - Discussion centers on the increasing interest and viability of nuclear energy, particularly SMRs, in meeting the rising power demands of data centers. 
5:42 - Talk turns to the diversity of SMR designs, safety features, public perception challenges, and the potential positive economic impact and innovation these technologies could bring to the industry. 10:00 - DCF leans into Chris' insight as a design engineer, leading to a discussion on the challenges posed by high-density AI designs in data centers, the need for precise load information for effective design, and the necessity of creating flexible environments to accommodate rapidly evolving technology, while avoiding overshooting or undershooting design requirements. 15:32 - DCF solicits opinion on the state of liquid cooling for AI, as the discussion goes on to specifically compare and contrast direct to chip with immersion cooling technologies and methods. 16:02 - Further exploration of the deployment of immersion cooling technology in data centers, with McLean considering the hallmarks of the mechanical engineering team and CFD models being employed at Critical Facility Group in terms of evaluation and potential implementation. 21:59 - Discussion turns to data center BMS trends and insights on the evolution of fire protection in the industry, specifically focusing on the transition from MEP firms to specialty fire protection groups. 25:10 - Thoughts on a pragmatic approach to recycling and sustainability in data centers, focusing on repurposing IT assets, particularly in the context of the AI revolution and the importance of giving obsolete components a second life. 31:04 - Talk ranges from discussion about Single-Pair Ethernet technologies, power issues, renewable energy, battery backup, and the potential future trends in the data center industry. 33:03 - McLean elaborates on the relative adoption of battery technologies including lithium-ion, nickel-zinc, and the challenges faced in replacing valve regulated lead–acid (VRLA) batteries, emphasizing the need for education and innovation in the industry.
The latest episode of the Data Center Frontier Show Podcast presents an Editors' Summit of sorts, as DCF's founder and Editor at Large Rich Miller drops by to join in the discussion with Editor in Chief Matt Vincent and Senior Editor David Chernicoff. The editors discuss the challenges of power availability in leading data center markets and the concept of gigawatt data center campuses (as reflected by Rich's latest article) as a solution, focusing on renewable energy and innovative designs. Microsoft's commitment to ten gigawatts of renewable energy, as well as the Infrastructure Masons' recommendation of clean energy parks amounting to about the same, is mentioned, along with the challenges posed by climate change and the need for innovation in renewable energy. The pricing out of small data centers due to demand from hyperscalers is also discussed, as are the burgeoning prospects for nuclear energy to power the data center industry, including the rapidly accelerating nuclear SMR frontier, of which much was heard at Data Center World (Apr. 15-18) in Washington, DC.
For this episode of the Data Center Frontier Show podcast, DCF's editors sat down with Udi Paret, Chief Business Officer of ZutaCore, and Alison Deane, ZutaCore's VP of Marketing, to discuss the company's impactful showing at the NVIDIA GTC [GPU Technology Conference] event this past March. Held at the San Jose Convention Center in the heart of Silicon Valley, the event found both ZutaCore executives intensively on hand. A Busy GTC for ZutaCore  At GTC, ZutaCore showcased its direct-to-chip, waterless liquid cooling technology, and announced support for the NVIDIA H100 and H200 Tensor Core GPUs to help maximize data centers' AI performance while delivering sustainability benefits.  "I wore them out," said Deane of her press scheduling at NVIDIA GTC for Paret and his counterpart, ZutaCore CEO Erez Freibach. Paret and Deane said that ZutaCore drew significant interest at GTC for the breadth of the company's announcements surrounding its HyperCool platform, comprised of direct-to-chip, waterless two-phase liquid cooling technology. ZutaCore's HyperCool dielectric cold plate liquid cooling system involves a direct-contact, self-regulated, pool-boiling based evaporator, simultaneously cooling all chips on demand.  Several leading server manufacturers are engaged with ZutaCore to complete certification and testing on the NVIDIA GPU platforms. Compact, easy to install, and capable of cooling 1500-watt processors and above, the platform, the company notes, is also qualified by processor manufacturers Intel and AMD, and deployed by major server manufacturers including Dell, Supermicro, ASUS, and Pegatron. Centrally during the GTC 2024 event, ZutaCore showcased its H100 and H200 waterless dielectric cold plates supporting densities up to 1500W in the booths of Boston Limited, Hyve Solutions, and Pegatron. 
Comparative Cooling Challenges During the podcast, Paret emphasized the advantages of ZutaCore's HyperCool technology, while addressing comparative challenges faced by single-phase water-based solutions. "The AI explosion is causing a market shift and positioning ZutaCore strategically," he said. With the NVIDIA H100’s ability to speed up large language models by 30x over the previous technology generation, and the H200 being touted as the world’s most efficient GPU for supercharging AI and HPC workloads, it's safe to say these are two of the highest performing chips ever designed (even leaving aside NVIDIA's much-ballyhooed Blackwell platform).  However, with each GPU consuming 700W of power, this will challenge data centers that are already struggling to control heat, energy consumption and footprint.  ZutaCore’s HyperCool direct-to-chip waterless two-phase liquid cooling technology was designed specifically to answer such demands, and has already been proven to cool processors of 1500W or more, currently at 100 kW per rack of computing power.  “Next-generation GPUs have unique cooling requirements that are most effectively solved by waterless, direct-to-chip liquid cooling technology for current GPU of 1500W while increasing rack-processing density by 300%,” said ZutaCore CEO Freibach, who is a co-founder of the company.  “Not only do hyperscalers eliminate the risk and massive expense of water leakage in the server, but they can also scale their cooling needs with little to no modifications to current real estate, power, or cooling systems. This is a game changer for the future of AI and HPC.” Meanwhile, the ZutaCore executives noted how the increasing need for sustainable AI solutions highlights the importance of sustainable practices in data centers.  
In the arena of such concerns, ZutaCore's partnership and white-label sales agreement with Mitsubishi Heavy Industries (MHI) dramatically addresses the pressing challenges faced by data centers today, including the enhancement of heat exhaust efficiency, promotion of energy conservation, and decarbonization. Here's a timeline of key points on the podcast. 1:34 - Udi Paret, CBO of ZutaCore, reflects on the recent NVIDIA GTC event, highlighting the AI explosion and a major shift in design and consumption observed during GTC. Paret notes that CRN listed their company as one of the hottest at the event. 4:09 - Alison Deane, the company's VP of Marketing, discusses ZutaCore's success at GTC in being featured by partners like Boston Limited and Pegatron and showcasing its liquid cooling technology, which she says drew significant interest. 10:50 - Udi Paret elaborates on the advantages of the HyperCool technology, emphasizing the platform's elimination of water in servers, the implementation of phase change on the chip for future-proofing, and how this approach addresses challenges faced by single-phase water-based solutions in terms of scalability, sustainability, and performance. 19:01 - Data Center Frontier inquires about the competitiveness of two-phase dielectric direct-to-chip cooling compared to immersion cooling. 22:08 - Udi Paret explains the mechanics surrounding the dissipation of heat from the ZutaCore HyperCool system and emphasizes the platform's high-quality heat reuse capabilities. 26:08 - The discussion touches on ZutaCore partner Mitsubishi Heavy Industries' involvement in data centers, and reflects on the overall industry's growth. Deane and Paret recap more experiences from NVIDIA GTC, highlighting the buzz around AI in general and ZutaCore's innovative liquid cooling solutions in particular, and their role in enabling net-zero goals. 
28:47 - Udi Paret touches again on the market shift produced by the AI technology explosion, noting vertically integrated plays across various industries which aid in ZutaCore's strategic positioning.
For this episode of the Data Center Frontier Show Podcast, DCF Editor in Chief Matt Vincent sits down for an instructive chat with Phillip Koblence, a strategic executive and ubiquitous thought leader in the data center and network space.  Koblence co-founded NYI in 1996 and has successfully navigated an ever-shifting infrastructure landscape, growing the company from a single data center in Lower Manhattan to a robust network with executional capabilities in key national and international markets.  His leadership, focus on customer experience, and ability to cut through complexity and hype have positioned NYI as an industry leader in high-touch infrastructure solutions. Koblence is also CEO of Critical Ventures, a consulting agency offering a range of services to help clients, owners and investors optimize the value of critical infrastructure assets. Koblence sits on the DE-CIX North America Advisory Board as well as on the Board of OIX (formerly Open-IX). He is co-founder of the Nomad Futurist Foundation and podcast, designed to demystify the world of critical infrastructure and inspire younger generations to join the industry.  The interview begins with a discussion of NYI's entry into 60 Hudson Street and the challenges of retrofitting legacy buildings for modern data center needs, while emphasizing the importance of connectivity and collaboration in the digital infrastructure industry and highlighting the rapid pace of technological advancements such as AI. Here's a timeline of the podcast's highlights: 2:03 - Koblence discusses NYI's entry into Manhattan's historic colocation and interconnection hub, 60 Hudson Street, emphasizing the importance of connectivity in New York City's digital infrastructure evolution. 
6:20 - Koblence elaborates on the challenges and considerations when retrofitting legacy buildings like 60 Hudson for modern data center needs, highlighting the importance of creative solutions and understanding the nuances of different deployments. 11:38 - The discussion turns to an exploration of deploying data centers in skyscrapers, the evolving criticality of digital infrastructure, and the need for redundancy and a "data center mindset" in reckoning with society's reliance on connectivity. 20:02 - Remarks on the rapid pace of technological advancements, specifically the increasing densities of GPUs such as Nvidia's H100, H200, Grace Hopper, and Blackwell chips. 20:32 - More on the exponential increase in densities within the digital infrastructure community and predictions of a future "flattening out" of density growth. 23:59 - Koblence emphasizes the continued relevance of legacy facilities such as 2 megawatt (MW) or 5 MW data centers in modern deployments, particularly in major connectivity hubs. The concept of the edge is also discussed in the context of facilitating connectivity with AI sites. 26:59 - Koblence elaborates on the importance of collaboration and creating cohesive solutions across various data center facilities, while emphasizing the role of NYI as a solutions facilitator and discussing partnerships with Hudson IX and other providers. 31:22 -  Koblence elaborates on the mission of the Nomad Futurist foundation to demystify the world of digital infrastructure, highlighting the simplicity of the industry beneath the technical complexities, and emphasizing transparency and accessibility in making connectivity and digital infrastructure understandable and available.   
As recorded on March 22, 2024, this episode of the Data Center Frontier Show Podcast featured the following participants:  • Matt Vincent, Editor in Chief and Podcast Host, Data Center Frontier  • Ali Heydari, Technical Director and Distinguished Engineer, NVIDIA  • Marcus Hopwood, Product Management Director, Equinix  • Bernie Malouin, CEO and Founder, JetCool    The podcast discussion begins with a focus on NVIDIA's latest insights, as imparted by Heydari, in the context of products, partnerships, and trend-leadership, as revealed at the recent NVIDIA GTC 2024 AI Conference (Mar. 18-21).  The conversation opens up to look at broader implications and developments within the tech and data center industries, such as Equinix's plans to enable liquid cooling at more than 100 data centers globally, and facets of their latest partnership with NVIDIA, as characterized by Hopwood.  The discussion turns to JetCool's history of providing innovative liquid cooling solutions for high-density chipsets, underlining the critical role of cooling technologies in support of the rapid growth of AI applications in data centers.  The talk also explores ways of advancing efficiency and sustainability in high-powered clusters through warm coolants and heat reuse, considering energy efficiency directives in the EU and UK. View a timeline of the podcast's highlights and read the full article about the podcast.
For this episode of the DCF Show podcast, we interview Jason Carolan, Chief Innovation Officer at data center operator Flexential. He’s a 25-year expert in the enterprise IT industry, with experience leading companies through technological evolutions like the one we’re experiencing right now.  Carolan believes there is a bigger story to uncover from the sheer dollar amount of Nvidia’s recent blockbuster valuation. In response to Nvidia’s market dominance in AI and data centers, Carolan wanted to discuss larger trends that may follow from this specific news moment.  According to Carolan:  “Nvidia's earnings results and forecasts for a continued AI boom doesn't come as too much of a surprise with the volume of businesses that are increasingly testing and utilizing the technology. Nvidia's data center business is a combination of GPU and their network technologies, which further showcases the importance of high performance architectures that can support next generation AI demands. The company is currently forecasted to ship 4-5 times more GPUs this coming year – indicating another trend line with little competition in sight.  As inference matures, we will see more diversity in chip suppliers but that is a ways off. The bottom line is that, now with accelerating AI rollouts, companies will need more compute capacity, ultra-high bandwidth and very low latency in order to succeed.”
This January, Milldam Public Relations announced the launch of its Data Center Community Relations Service, which the company's President and Founder Adam Waitkunas claims is the first community relations service exclusively serving the data center space and the digital infrastructure sector.  In addition to tailor-made communication strategies, Adam contends that data center community relations will require coalition building and garnering influence with local officials and stakeholders. He says the new service has been launched in response to the recent widespread backlash to data center development and the lack of tools to combat this within the data center industry.  Personally overseeing the new service offering, Adam is a public relations professional with nearly twenty years of data center industry experience and a background in politics and public affairs, including extensive experience in media relations, marketing strategy, business development and strategic partnerships.  Prior to founding Milldam Public Relations in 2005, Adam was the manager of Doug Stevenson's 14th Middlesex District State Representative campaign, which set a record for fundraising by a challenger in a Massachusetts State Representative race. Concord, Massachusetts-based Milldam Public Relations is a full-service public relations firm that provides competitively priced strategic communications, media relations and event management to a diverse array of clients throughout the country.  The firm has solidified its position as the go-to public relations firm for companies in the critical infrastructure space. Clients from Boston to Los Angeles include: The Association of Information Technology Professionals-Los Angeles, OpTerra Energy Services, The Critical Facilities Summit, Hurricane Electric, Instor Solutions, Inc., and RF Code. 
Under Adam's direction, Milldam has helped technology clients across the country secure articles in publications such as The Wall Street Journal, The New York Times, CFO Magazine, Data Center Knowledge, Green Tech Media, The Boston Business Journal, Mission Critical Magazine, The Silicon Valley Business Journal and Capacity Magazine, among others.  Additionally, over his career Adam has helped businesses become thought leaders in their fields and a valued resource for industry-specific media, helping them to increase sales, promote awareness and become attractive targets for M&A.    Data Center Community Relations Service The new service is premised on the reality that, for many years, the data center industry frequently operated under the radar, but it has become more visible within the last few years. Certain communities throughout North America have taken notice and have started pushing back municipally against proposed developments, most notably in Virginia and Arizona.  For example, in recent months, a number of Virginia environmental groups formed a coalition calling for more oversight of the data center industry. And in January, King George County, Virginia officials voted to renegotiate a prior agreement for a large cloud provider's $6B Virginia data center campus.  The reversal is partly due to growing local political opposition to data center development. With the launch of Milldam's Data Center Community Relations Service, Waitkunas contends that the digital infrastructure sector now has access to an offering that will equip it with the tools necessary to articulate the benefits of data centers to the local community while proactively addressing local concerns such as traffic infrastructure management and noise, helping to ensure a smoother path to success for the development.  Critical infrastructure plays a predominant role in most people's daily lives throughout North America, driving demand for data center operators. 
Waitkunas points out that strong community engagement is essential for data centers to properly communicate their value and successfully navigate the complexity of community relations.  To help data center developers achieve their goals, Milldam's community relations practice offers the following services:
•    Establishing partnerships with third-party organizations such as Chambers of Commerce.
•    Communicating the numerous benefits of data centers in the community, including economic development, infrastructure improvements, and job creation.
•    Developing and providing key talking points.
•    Ensuring that local decision-makers hear the client's messages.
•    Implementing a wide variety of grassroots campaigns and community outreach.
•    Enabling local supporters to serve as ambassadors and equipping them with the tools to communicate the benefits of proposed developments.
•    Building coalitions.
•    Gauging the pulse of public opinion.
"If the industry fails to properly engage with localities, years of industry progress will be in jeopardy," said Waitkunas. "It's imperative that developers and operators implement community relations to help ensure a seamless development process." Here's a timeline of key discussion points on the podcast: 2:35 - Adam explains that the idea for the practice came from his background in public affairs and politics, and that it involves building coalitions and partnerships with third-party organizations to help data centers overcome obstacles they face when moving into suburban areas. 4:41 - Adam discusses the importance of having individual community members form coalitions with data center developers to speak on their behalf and push issues forward. 8:09 - Adam reveals that the firm is currently working with two developers and has proposals out to other organizations since launching the practice in mid-January. 
9:16 - On the importance of timing in getting ahead of community concerns and identifying cheerleaders for data center projects. 10:37 - The PR practice wants the local community to be the main cheerleader for data center projects and will help manage the coalition. 13:01 - Adam notes there is still a lot of community education needed on data centers regarding the ins and outs of countering noise and environmental concerns. 15:10 - Adam explains how the PR practice has been doing outreach to large players in the data center industry and tailoring campaigns for each community's concerns. 23:18 - On the necessity for developers to put together community relations plans and crisis communications plans for their data center projects. Here are links to some related DCF articles: The NIMBY Challenge: A Way Forward for the Data Center Industry Rezoning for PW Digital Gateway Data Centers Approved By Virginia's Prince William County Supervisors Keeping Your Cool While Getting Your Work Done iMasons Sharpen Focus on the Community Impact of Data Centers Being a Good Neighbor Means Considering Community Impact During Site Selection Data Center Development Spurs More Debate in Prince William County
For this episode of the DCF Show podcast, Data Center Frontier's Editor in Chief Matt Vincent and Senior Editor David Chernicoff speak with Burns & McDonnell's Robert Bonar, PE, LEED AP, Vice President, Mission Critical Facilities, and Christine Wood, Vice President leading the firm's Dallas-Fort Worth Global Facilities practice.  Burns & McDonnell is a provider of engineering, architecture, construction, environmental and consulting solutions, which, as part of its mission-critical and data center practice, is brought in to help plan, design, permit, construct and manage client projects in the space. Bonar and Wood begin the podcast by providing an overview of the company and their roles there, along with their backgrounds in the industry.  An overarching theme of the discussion is how a client's selection of a data center and mission critical consultant is based on more than just an ability to meet service needs. The discussion also covers current data center industry construction trends, especially in the areas of siting and power, while probing the similarities and differences in planning data center builds for enterprise, colocation and hyperscale clients. D-FW Data Center Market Focus Cushman & Wakefield’s 2023 Dallas-Fort Worth Data Center Report stated that the Dallas-Fort Worth data center market saw record absorption of 386 megawatts in 2023 -- a nearly 7x increase since 2020 -- driven by exponential growth in demand for cloud computing and AI/machine learning applications.  Cushman & Wakefield further reported the Dallas-Fort Worth market's vacancy to be at an all-time low of 3.73% last year, with colocation rents and data center land prices there continuing to rise. The commercial real estate services company added: "Despite a robust construction pipeline – 1.4 million square feet that can provide 225 MW – the vast majority of the market’s new data center supply for 2024 and 2025 has been pre-leased. 
Cloud providers securing large campuses through pre-leasing and AI/ML companies leasing the market’s few remaining pockets of available space are the primary drivers of DFW’s record demand." DCF asked Wood and Bonar about the D-FW data center market and Burns & McDonnell's role in it, including the firm's background and present developments there, as well as the location's future roadmap regarding power, interconnectivity, and workforce factors. Here's a timeline of key discussion points on the podcast: 2:27 - After introductions and table-setting, the Burns & McDonnell experts emphasize the importance of looking at data center client needs holistically and getting ahead of what they need for a given project. 4:53 - Discussion turns to the impact of generative AI on the data center industry and the uptick in demand for first-of-a-kind designs. 8:44 - Further exploration of how the rapid pace of change in the data center industry has bred increased demand in the market for qualities such as speed-to-market and first-of-a-kind design. 9:22 - DCF inquires about planning for different types of data center builds, and the differences between enterprise, colocation, and hyperscale developments, as well as the impact of AI support, are explored. 14:34 - The discussion further illuminates challenges and changes in the data center industry, including the influence of AI technology on new designs and in future-proofing facilities. 15:04 - Burns & McDonnell's Wood discusses the D-FW data center market, highlighting its growth potential due to its central location, low real estate costs, and robust power availability. 20:25 - To conclude, DCF's editors circle back to the topic of renewables and solar consulting in relation to data centers, leading to a discussion on combining solar with battery storage for future data center needs. 
Here are links to some related DCF articles: The Current State of Power Constraints for New Data Center Construction Skybox Plans 300-Megawatt Campus South of Dallas Building Greener: Compass Seeks Sustainability in its Construction, Supply Chain Dallas Sees Record Data Center Leasing Activity in 2022 The Big City Edge: Dallas is a Hotbed for Edge Computing Power Infrastructure and Tax Incentives Drive Dallas Data Center Market
For this episode of the Data Center Frontier Show podcast, it's financial earnings call season, so Editor in Chief Matt Vincent and Senior Editor David Chernicoff take the opportunity to discuss DCF's top 5 most popular data center and cloud computing industry stories for the fourth quarter of 2023, which were as follows:  1. Dominion: Virginia’s Data Center Cluster Could Double in Size Dominion Energy says it has customer contracts that could double the amount of data center capacity in Virginia by 2028 and is planning new power lines to support this growth. Virginia is already the world’s largest market for cloud computing infrastructure. Despite the current power constraints around Ashburn, the data center market in Virginia is positioned to grow much larger. The utility says it has received customer orders that could double the amount of data center capacity in Virginia by 2028, with a projected market size of 10 gigawatts by 2035. That represents a huge increase from current data center power use, which reached 2.67 gigawatts in 2022. The utility’s projections mean that Virginia will continue to experience tensions between the growth of the Internet and the infrastructure to support it. Data Center Frontier's Founder and Editor at Large, Rich Miller, reports. 2. Microsoft Unveils Custom-Designed Data Center AI Chips, Racks and Liquid Cooling At Microsoft Ignite last November, the company unveiled two custom-designed chips and integrated systems resulting from a multi-step process for meticulously testing its homegrown silicon, the fruits of a method the company's engineers have been refining in secret for years, as revealed at its Source blog. The end goal is an Azure hardware system that offers maximum flexibility and can also be optimized for power, performance, sustainability or cost, said Rani Borkar, corporate vice president for Azure Hardware Systems and Infrastructure (AHSI). “Software is our core strength, but frankly, we are a systems company. 
At Microsoft we are co-designing and optimizing hardware and software together so that one plus one is greater than two,” Borkar said. “We have visibility into the entire stack, and silicon is just one of the ingredients.” The newly introduced Microsoft Azure Maia AI Accelerator chip is optimized for artificial intelligence (AI) tasks and generative AI. For its part, the Microsoft Azure Cobalt CPU is an Arm-based processor chip tailored to run general purpose compute workloads on the Microsoft Cloud. Microsoft said the new chips will begin to appear by early this year in its data centers, initially powering services such as Microsoft Copilot, an AI assistant, and its Azure OpenAI Service. They will join a widening range of products from the company's industry partners geared toward customers eager to take advantage of the latest cloud and AI technology breakthroughs. 3. The Eight Trends That Will Shape the Data Center Industry in 2023 Rich Miller predicted that 2023 would be a year of dueling cross currents that could constrain or accelerate business activity in the sector. DCF's Vincent and Chernicoff briefly review last year's trends, remarking on how so many of them are still in full effect for the industry right now. Scorecard: Looking Back at Data Center Frontier’s 2023 Industry Predictions 4.  Google Is Now Reducing Data Center Energy Use During Local Power Emergencies Last October, Google shared details of a system optimized to reduce the energy use of data centers when there is a local power emergency. Core functions of the system, which has the hallmarks of a universally applicable technology, include postponing low-priority workloads and moving others to less constrained regions. 
Regarding the system, Michael Terrell, Google's Senior Director for Energy and Climate, explained in a LinkedIn post how the new demand response capability can temporarily reduce power consumption from Google data centers when it’s needed, and provide flexibility to the local grids that power its data center operations. Demand response helps grid operators serve their customers reliably during times of need, such as during supply constraints or extreme weather events. Terrell's post emphasized that "demand response can be a big tool to help grids run more cost-effectively and efficiently, and it can accelerate system-wide grid decarbonization." Google’s Climate and Energy teams created the new system, which Terrell called an important development toward running the company's data centers "intelligently, efficiently and carbon-free." 5. Cloudflare Outage: There’s Plenty Of Blame To Go Around The Cloudflare outage in the first week of November drew quite a bit of attention, not only because Cloudflare’s services are extremely popular, so their failure was quickly noticed, but also because of the rapid explanation of the problem posted in the Cloudflare Blog shortly after the incident. This explanation placed a significant portion of the blame squarely on Flexential and their response to the issues with electricity provider PGE, and potential issues that PGE was having. Cloudflare was able to restore most of its services in 8 hours at its disaster recovery facility. It runs its primary services at three data centers in the Hillsboro, Oregon area, geolocated in such a way that natural disasters are unlikely to impact more than a single data center. 
DCF's David Chernicoff noted, "While almost all of the coverage of this incident starts off by focusing on the problems that might have been caused by Flexential, I find that I have to agree with the assessment of Cloudflare CEO Matthew Prince: 'To start, this never should have happened.'" Here are links to some related DCF articles: DCF Show: Data Center Frontier's Rich Miller Returns For a Visit DCF Tours: Flexential Dallas-Plano Data Center, 18 MW Colocation Facility Meta Previews New Data Center Design for an AI-Powered Future For Leading Cloud Platforms, AI Presents a Major Opportunity AI Propels Cloud Growth, Digital Infrastructure Investment to New Heights
Even in a month where Equinix very notably rolled out its fully managed private cloud service for enabling enterprises to easily acquire and manage their own NVIDIA DGX AI supercomputing infrastructure, the better to build and run custom generative AI models, there was yet another, not unrelated, announcement from the foundational provider of colocation data centers and digital transformation solutions.  It was in the context of the AI platform rollout with NVIDIA that Equinix this month also issued its annual Global Interconnection Index (GXI) 2024 Report, which uncovers digital infrastructure trends driving the decision-making of both enterprises and service providers.  The Equinix statement announcing managed services for the NVIDIA DGX AI supercomputing platform noted that the service includes the NVIDIA DGX systems, NVIDIA networking and the NVIDIA AI Enterprise software platform. For the platform offering, Equinix installs and operates each customer's privately owned NVIDIA infrastructure and can deploy services on their behalf in key locations of its International Business Exchange (IBX) data centers globally.  Equinix also emphasized that its NVIDIA DGX service offers high-speed private network access to global network service providers, enabling quick generative AI information retrieval across corporate wide area networks. In addition, the service provides private, high-bandwidth interconnections to cloud services and enterprise service providers to facilitate AI workloads while meeting data security and compliance requirements. Through its offering of NVIDIA DGX AI supercomputing infrastructure services, Equinix contends that enterprises can scale their infrastructure operations to achieve the level of AI performance needed to develop and run massive models. 
The company also revealed that early-access companies using the service have included leaders in sectors including biopharma, financial services, software, automotive and retail, many of whom are building AI Centers of Excellence to provide a strategic foundation for a broad range of rapidly developing LLM use cases. Commissioned by Equinix each year, the related GXI Report comprises a survey of global IT leaders to gather insight into what’s behind the digital economy. Based on the study's latest findings, Equinix stated its belief that the industry has hit a tipping point in resourcing decisions: buying dedicated IT hardware now puts customers at a competitive disadvantage.  For this episode of the DCF Show podcast, Data Center Frontier editors Matt Vincent and David Chernicoff met with Steve Madden, Equinix VP of Digital Transformation and Segment Marketing, to discuss some of the GXI 2024 report's more meaningful findings related to current data center trends and predictions in digital transformation, IT and spending, including the operator's nearly concurrent AI managed services offering. For instance, the GXI report found that enterprises are growing at a 39% CAGR -- 25% faster than service providers -- reaching 12,908 Tbps of total capacity. DCF asked Madden: Since the global pandemic, how much have enterprises leaned on digital providers to focus on responding to business needs, and does Equinix expect such trends to continue going forward?   Also, the GXI report found that 80% of enterprises will design and run new digital IT infrastructure using subscription-based services by 2026. We asked Madden: What does that mean for data centers? The report also found that by 2025, 85% of global companies will have expanded multicloud access across several regions. We asked: How will data centers best be able to manage such demand?  
In his remarks, Madden pointed out that Equinix has the most cloud on-ramps of any data center operator in the world, and predicted that the majority of multinational enterprises will be multi-cloud connected in multiple regions around the world in the near future. Madden noted that nowadays -- i.e. in the post-pandemic age of AI -- enterprises are looking for strategic partners, not just vendors, in composing their infrastructure, and seek to do so with a set of key providers to help them move more quickly in their digital transformations.
This month on the Data Center Frontier Show podcast, we read down site founder and Editor at Large Rich Miller's annual data center industry trends forecast. This week's article read looks at how AI is driving design updates for power and cooling, why air permitting at scale is a hot potato for the industry, and optimal site selection for Green MegaCampuses. Rich Miller has delivered his annual article containing his top data center industry forecasts, predictions and insights for the year ahead. Of chief concern among the 8 key themes forecasted to define the year is how the AI boom will ripple through the digital infrastructure sector in 2024, impacting the availability of data center space, the supply chain, and factors of pricing, cooling, power and design. Since our industry coverage at DCF throughout the year will frequently refer back to this forecast article, we've decided to enumerate all eight themes throughout several podcast episodes this month.  For this episode, we read down the article's themes 6 through 8: 6.  AI Drives Design Updates for Power and Cooling 7.  Air Permitting at Scale is a Hot Potato 8.  Site Selection Optimizes for Green MegaCampuses "Artificial intelligence is hot," writes Miller. "So hot that the AI boom is creating a resource-constrained world, driving stupendous demand for GPUs, data centers and AI expertise. All three are likely to be in short supply, but none so much as wholesale data center space. This is the trend that dominates our annual forecast." Read the full forecast: The Eight Themes That Will Shape the Data Center Industry in 2024
For this episode of the DCF Show podcast, Data Center Frontier spoke with Sam Rabinowitz, CEO of Lantana, a supplier and provider of LED luminaires for the data center industry -- especially for hyperscalers, but also for energy-efficiency retrofits in mature facilities. Key discussion points include the following: 0:15 - Lantana broke into the data center industry by working with a hyperscaler customer to design and implement rapid deployment prototypes for their initial data center builds on the interior structure, including lighting. 3:14 - Lantana's LED fixtures run cool and are energy-efficient, achieving up to 90% efficiency over nearly a decade of use. The LED lighting fixtures are UL certified for elevated ambient operating temperatures, providing operational flexibility for data centers in hot environments. 5:45 - Sam explains how Lantana's focus on energy efficiency and materials efficiency can lead to cost savings and a positive impact on the environment. 13:26 - Sam emphasizes the importance of a "micro to macro" approach in greening data centers, starting with individual components, and scaling up to entire campuses and programs. 15:46 - Data Center Frontier Editor in Chief Matt Vincent asks for takes regarding the impact of AI on the data center industry. In response, Sam discusses the need for new products and approaches to designing and engineering data centers to accommodate chip-level heat. 19:32 - Matt asks about Lantana's plans for 2024. In response, Sam describes Lantana's new products as being tailored for digital infrastructure and expansion of the hyperscalers, as well as furnishing renovations for increased energy efficiency in data centers of all sizes. 26:46 - Sam emphasizes the importance of lighting in data centers for safety and functionality, and the discussion compares it to cabling as a core, fundamental element of every data center. Visit Data Center Frontier.
This month on the Data Center Frontier Show podcast, we read down site founder and Editor at Large Rich Miller's annual data center industry trends forecast.  Since our industry coverage at DCF throughout the year will frequently refer back to this forecast, we've decided to enumerate all eight themes throughout several podcast episodes this month.  Today's read looks at how pricing for AI capacity will probably only continue to trend higher, and how data center supply chain relationships will matter more than ever in 2024. We also examine how more momentum for modular data centers' prefabricated IT ethos should take hold in the coming year. "Artificial intelligence is hot," writes Miller. "So hot that the AI boom is creating a resource-constrained world, driving stupendous demand for GPUs, data centers and AI expertise. All three are likely to be in short supply, but none so much as wholesale data center space. This is the trend that dominates our annual forecast." For this episode, we read down the article's themes 3 through 5: 3.  Pricing for AI Capacity Will Continue Higher 4.  Supply Chain: Relationships Matter More Than Ever 5.  More Momentum for Modular Read the full forecast: The Eight Themes That Will Shape the Data Center Industry in 2024
Data Center Frontier's founder and Editor at Large Rich Miller has delivered his annual article containing his top data center industry forecasts, predictions and insights for the year ahead.  Of chief concern is how the AI boom will ripple through the digital infrastructure sector in 2024, impacting the availability of data center space, the supply chain, and factors of pricing, cooling, power and design. Since our industry coverage at DCF throughout the year will frequently refer back to this forecast, we've decided to enumerate all 8 themes throughout several podcast episodes this month.  For this episode, we read down the article's first two themes: 1. The AI Boom Creates a Data Center Space Crunch 2. Rethinking Power on Every Level  Read the full forecast at Data Center Frontier: The Eight Themes That Will Shape the Data Center Industry in 2024
For this episode of the Data Center Frontier Show podcast, DCF's editors sat down with James Walker, BEng, MSc, CEng, PEng, CEO and board member of Nano Nuclear Energy Inc., and Jay Jiang Yu, Nano Nuclear Energy's founder, executive chairman and president, for a discussion regarding industry news and technology updates surrounding small modular reactor (SMR) and microreactor nuclear onsite power generation systems for data centers. James Walker is a nuclear physicist and was the project lead and manager for constructing the new Rolls-Royce Nuclear Chemical Plant; he was the UK Subject Matter Expert for the UK Nuclear Material Recovery Capabilities, and was the technical project manager for constructing the UK reactor core manufacturing facilities. Walker has extensive experience in engineering and project management, particularly within nuclear engineering, mining engineering, mechanical engineering, construction, manufacturing, engineering design, infrastructure, and safety management. He has executive experience in several public companies, as well as acquiring and re-developing the only fluorspar mine in the U.S. Jay Jiang Yu is a serial entrepreneur and has over 16 years of capital markets experience on Wall Street. He is a private investor in a multitude of companies and has advised many private and public company executives with corporate advisory services such as capital funding, mergers and acquisitions, structured financing, IPO listings, and other business development services. He is a self-taught private investor whose relentless passion for international business has helped him develop key, strategic and valuable relationships throughout the world. Yu leads the corporate structuring, capital financings, executive level recruitment, governmental relationships and international brand growth of Nano Nuclear Energy Inc. 
Previously, he worked as an analyst as part of the Corporate & Investment Banking Division at Deutsche Bank in New York City. Here's a timeline of key points discussed during the podcast: 0:22 - Nano Nuclear Energy Expert Introductions 1:38 - Topic Set-up Re: DCF Senior Editor David Chernicoff's recent data center microreactor and SMR explorations. 1:59 - How microreactors might impact the data center industry. (Can time-to-market hurdles be shrunk?) 2:20 - Chernicoff begins the interview with James and Jay. How the NuScale project difficulties in the SMR segment resulted in the DoD pulling back on preliminary microreactor contracts in Alaska due to market uncertainties directly related to NuScale.  3:23 - Perspectives on NuScale and nuclear power. 4:21 - James Walker on NuScale vs. microreactor prospects:  "They have a very good technology. They're still the only licensed company out there, and they probably will bounce back from this. It's not good optics when people are expecting product to come out of the market. And NuScale was to be the first, but market conditions and the structure of SPACs and the lack of U.S. infrastructure can all complicate what they want to do. Half the reason for them taking so long is because the infrastructure was not in place to support what they wanted to do.  But even hypothetically, even if the SMR market, as an example, was to collapse, microreactors are really targeting a very different area of market. SMRs are looking to power cities and big things like that. Microreactors, you're looking at mine sites, charging stations, free vehicles, disaster relief areas, military bases, remote habitation, where they principally fund all their energy using diesel. It's kind of hitting a different market. So even if the SMR market goes away, there's still a huge, tremendous upside, potential untapped market in the microreactor space." 
5:39 - DCF Editor in Chief Matt Vincent asks, "What's the pros and cons of the prospects for microreactors versus what we're commonly thinking about in terms of SMR for data centers?" 5:51 - Nano Nuclear's James Walker responds:  "I would start with the advantages of microreactors over SMR. It's smaller, it'll be cheaper, it'll be safer, it'll be more deployable, you'll have far more economies of scale of producing hundreds of these things. They're easier to decommission, remove, they're easier to take apart.  I mean, logistically, shipping these things around the world as if they were diesel generators is a very feasible prospect. Opex cost will be far lower. Personnel that need to be involved in the day to day physical operation will be negligible.  Where the disadvantage of a microreactor is, is that SMRs would provide a cheaper form of electricity. But as SMRs are providing for cities, microreactors are more for remote locations, remote industrial projects, remote data centers, those kind of things.  You're really competing with sort of the high costs of remote diesel.  As an example, we were speaking with some Canadian government officials and they were saying [with] some of their remote habitations, they can have a community of 800 people, but it still costs $10 million US in fuel alone, ignoring all of the logistical costs of bringing that fuel in on a daily basis, just to power those remote communities that have no possibility of being hooked up to a grid because it's too far.  And that would be the same for all sorts of things, like if you want a remote data center, remote or mining operations, remote industrial projects, oil and gas things, then microreactors aren't really competing with SMRs on cost." 7:33 - Data Center Frontier's David Chernicoff asks: "We're a data center publication, so that obviously is a lot of interest to us, and you pointed out how diesel is the primary methodology for backup power for data centers.  
I realize no one has actually shipped a microreactor yet in this form factor. But one of the advantages, for example, that comes from Project PELE from the US DoD was the decision to standardize on Tristructural Isotropic (TRISO) fuel so that for anybody building one, now, the whole issue of building infrastructure to provide the fuel is significantly simplified.  Realistically (and obviously we're asking you to make a projection here, but), when you're able to deliver microreactors at any sort of scale, will they be competitive with diesel generators in the data center space? And I would also allow for you to say, well, diesel generators also have to deal with all the emissions issues, environmental concerns, greenhouse gases, et cetera, that are not issues with a containerized nuclear power plant. So will there be a realistic model there?" 8:45 - James Walker compares the financing costs of diesel generators vs. microreactors. 9:28 - Walker offers this forecast: "With competing with diesel generators, once the infrastructure [for nuclear] is built back up, and you have deconversion facilities and enrichment facilities able to produce High-Assay Low-Enriched Uranium (HALEU) fuel, and companies are able to source this stuff very readily, the capital costs come down markedly. And that'll be the same for people like NuScale. Then there'll be an optimization period, typically, I would expect over an eight-year period of launch. So, say microreactors launch in 2030, nearing 2040, I believe the cost will be competitive with diesel by that point. Because the optimization will kick in, the infrastructure will all be in place. And the economies of scale over which these things are being produced means that, yes, you'll essentially have a nuclear battery that can compete with diesel, that can give you 15 years of clean energy, at a cheaper rate. That's what the projections show currently." 
10:31 - Discussion point clarifying that nuclear microreactors for battery backup are being positioned for replacement of diesel generation, as distinct from SMR power plant options. 12:00 - Walker explains how the power range of microreactors can vary. SMRs will give you 100 MW of power for enormous data centers and AI, but microreactors allow for data centers to be sited anywhere. If more power for a larger facility is needed, multiple microreactors can serve into the microgrid at the location. 12:50 - Nano Nuclear's Jay Jiang Yu notes, "We've been contacted by Bitcoin mining companies as well, because they want to actually power their data centers in cold environments like Alaska. We've been contacted many times, actually, and there is like a trending topic on 'Bitcoin nuclear.'"  13:28 - Regarding microreactors' being employed in conjunction with microgrids, DCF's Chernicoff asks: "Do you see this eventually being sort of a package deal -- not just for data centers (obviously data centers will be a big consumer of this) -- but for deployable microgrids where you have battery power, microreactors providing primary power sources, integrating the microgrid with the local utility grids to allow for providing power back to the grid in times of need, pull power from the grid when it's cheap, that kind of whole microgrid active partner model?" 14:19 - Walker holds forth on nuclear investment stakes, and where microreactor and microgrid technology fits in. 16:16 - On the compactness of microreactors, occupying less than an acre. 17:33 - Asking again about the US DoD's Project PELE, how microreactors were instrumental, and what the project's implications might be for data centers. 18:14 - Walker explains how Project PELE was a microreactor program developed by the US DoD to create a 1.5 megawatt electric microreactor to serve the US military in a wider capacity in remote areas, such as Iraq or Afghanistan, where it has been forced to rely entirely on diesel power generation.  
Walker adds, "Project PELE, even though it began as a military thing, is probably going to have enormous benefits for the wider microreactor market, because there's a lot of development work that can feed into and inform commercial and civil designs." 19:58 - DCF's Chernicoff notes: "I presume that one of the biggest factors that PELE brought was the standardization for the fuel, the transportability, the applications people were considering with it, and the form factor. Can I stick it into 40-foot containers and get it to my site? Once you standardize on those things, prices start to come down, and that's going to be a big part of making this acceptable to the data center industry, to replace diesel generators or to build microgrids around." 20:31 - More from Nano Nuclear's Walker on how and why the ultimate aim of microreactors is to replace diesel generators. 21:20 - DCF's Vincent asks the Nano Nuclear experts whether, beyond bitcoin mining data centers, they've fielded much interest from standard data center operators. 21:25 - In response, Walker says: "There's been some big ones. Like Microsoft, as an example, were incredibly interested in powering a lot of their remote data centers with nuclear, and so they've even put out funding opportunities to this effect. But on the smaller front, we've seen ChatGPT talk about powering their centers with nuclear in the future ... It opens up the potential for enormous amounts of expansion. It can reduce a lot of costs, especially capital costs of the startup, and I think that's the big draw here." 22:25 - DCF's Chernicoff asks, "Obviously, if I can plunk a microreactor down in the middle of my data center campus, I don't have to worry about transmitting power through the campus. Are there cost advantages in this? Is it something that the big power providers are looking at as a way to basically build a more distributed power grid?" 
23:11 - Walker explains how a large mining company Nano Nuclear worked with did just that, and how use of nuclear energy can work to eliminate energy storage and transmission costs. 24:41 - Addressing nuclear NIMBY issues and PR concerns for builders of data centers. 25:40 - On the inherent safety of microreactors. 27:51 - Down to brass tacks on timeframes for microreactors and SMRs: DCF's Chernicoff asks what the obstacles are to seeing them deployed within the next decade. 29:20 - On Idaho National Laboratory's work on nuclear reactors. 31:03 - Taking it back to current events in closing: on NuScale's travails in 2023, Microsoft's SMR job posting raising hopes for a nuclear energy tipping point in the data center industry, and more.
For this episode of the Data Center Frontier Show podcast, we sit down with Brian Kennedy, Director of Business Development and Marketing at Natron Energy. As recounted by Kennedy in the course of our talk, Colin Wessells founded Natron Energy as a Stanford PhD student in 2012. His vision in building the company, which started in a garage in Palo Alto, was to deliver ultra-safe, high-power batteries. As stated on the company's website, "After countless hours of development with an ever expanding team of scientists and engineers, Natron now operates a state of the art pilot production line for sodium-ion batteries in Santa Clara, California." The company notes that most industrial power utilizes decades-old, more environmentally hazardous battery technology such as lead-acid and lithium-ion. In contrast, Natron says its "revolutionary sodium-ion battery leverages Prussian Blue electrode materials to deliver a high power, high cycle life, completely fire-safe battery solution without toxic materials, rare earth elements, or conflict minerals." In 2020, Natron's sodium-ion battery became the world's first to achieve a UL 1973 listing, and commercial shipments to customers in the data center, forklift, and EV fast-charging markets soon began. Natron notes that its technology leverages standard, existing li-ion manufacturing techniques, allowing the company to scale quickly. With U.S. and Western-based supply chain and factory agreements in place, Natron says it saw its manufacturing capacity increase 200x in 2022. In the course of the podcast discussion, Natron's Kennedy provides an update on Natron's data center industry doings this year and into next year. 
Here's a timeline of key points discussed: :29 - 7x24 Fall Conference Memories :51 - Teeing Up Sodium Ion 1:18 - Talking Pros and Cons, Sustainability 2:15 - Handing It Over to Brian 2:30 - Background on Natron Energy and founder/CEO Colin Wessells 2:55 - Background on Sodium Ion Technology 3:11 - Perfecting a New Sodium Ion Chemistry and Manufacturing with 34 International Patents In Play 3:28 - The Prominent Feature of Sodium-Ion Technology Is Its Inherent Safety; Eliminates Risk of Thermal Runaway 3:51 - U.S. Government ARPA-E Advanced Technology Grants Have Been Pivotal Funding for Natron 4:13 - Sodium Ion Battery Technology Comparison and Value Proposition 5:28 - How Often Is a Data Center's Battery Punctured? Ever Seen a Forklift Driven Through One? 6:10 - On the Science of the Natron Cell's Extremely High Power Density, Fast Discharge and Recharge 6:55 - Comparing Sodium-Ion to Most of the Lithium Chemistries 7:25 - The Meaning of UL Tests 8:00 - Natron Has Published Unredacted UL Test Results 8:35 - On the Longevity of Sodium Ion Batteries 9:51 - "There's No Maintenance Involved." 10:18 - Natron Blue Rack: Applications 10:52 - How Natron Is In the Process of Launching Three Standard Battery Cabinets 11:20 - Performance Enhancements Will Take Standard Data Center Cabinets "Well North" of 250 kW 11:45 - Though Data Centers Are Its Largest Market, Natron Also Serves the Oil and Gas Peak Load Shaving and Industrial Spaces 12:21 - Sustainability Advantages 12:51 - ESG Is About More Than Just Direct Emissions 13:15 - The Importance of Considering the Sourcing and Mining of Battery Elements 14:09 - "The Fact That You May Be Pushing [Certain] Atrocities Up the Supply Chain Where You Can't See Them, Doesn't Make It OK" 14:34 - Notes on Supply Chain Security with Secure, U.S.-Based Manufacturing 15:45 - Wrapping Up: Global UPS Manufacturer Selects Natron Battery Cabinet; Looking Ahead to 2024. 
Here are links to some related DCF articles: -- Will Battery Storage Solutions Replace Generators? -- New NFPA Battery Standard Could Impact Data Center UPS Designs -- Microsoft Taps UPS Batteries to Help Add Wind Power to Ireland’s Grid -- Data Center of the Future: Equinix Test-Drives New Power, Cooling Solutions -- Corscale Will Use Nickel-Zinc Batteries in New Data Center Campus
In this episode of the Data Center Frontier Show podcast, Matt Vincent, Editor-in-Chief of Data Center Frontier, and Steven Carlini, Vice President of Innovation and Data Centers for Schneider Electric, break down the challenges of AI for each physical infrastructure category including power, cooling, racks, and software management.
For this episode of the Data Center Frontier Show podcast, DCF's Editor in Chief Matt Vincent chats with Brian Green, EVP Operations, Engineering and Project Management, for EdgeConneX. The discussion touches on data center operations, sustainable implementations and deployments, renewable power strategies, and ways to operationalize renewables in the data center. Under Brian’s leadership, the EdgeConneX Houston data center completed a year-long project measuring the viability of 24/7 carbon-free energy utilizing AI-enabled technology. With this approach, EdgeConneX ensured the data center is powered with 100% renewable electricity, and proved that even when the power grid operates on fossil-fueled electricity generation, carbon-free energy matching can be applied in real-time hourly increments to new and existing data centers. As a result, for every given hour, EdgeConneX and its customers can operate throughout the year without emitting any CO2, with zero reliance on fossil standby generation during dark or cloudy periods. This innovative program will be duplicated at other EdgeConneX facilities globally. Another real-world example discussed involves a facility where the local community complained about the noise of the fans. Brian's team worked to improve the noise level by changing fan speeds, and as a result, the data center and the local community realized multiple benefits: enhanced community relations from removing the noise disturbance, increased efficiency, and reduced power usage, a big cost-saver for the data center. Along the way, Brian explains how he and the EdgeConneX team are big believers in the company's motto: Together, we can innovate for good.
For this special episode of the DCF Show podcast, Data Center Frontier's founder and present Editor at Large, Rich Miller, returns for a visit. Tune in to hear Rich engage with the site's daily editors, Matt Vincent and David Chernicoff, in a discussion covering a range of current data center industry news and views. Topics include: Dominion Energy's transmission line expansion in Virginia; Aligned Data Centers' market exit in Maryland over a rejected plan for backup diesel generators; an update on issues surrounding Virginia's proposed Prince William Digital Gateway project; Rich's take on the recent Flexential/Cloudflare outages in Hillsboro, Oregon; and more. Here's a timeline of key points discussed on the podcast: :10 - For those concerned that the inmates might be running the asylum, the doctor is now in: Rich discusses his latest beat as DCF Editor at Large. 1:30 - We look at the power situation in Northern Virginia as explained by one of Rich's latest articles, vis-à-vis what's going to be required to support growth already in the pipeline, in the form of contracts that Dominion Energy has for power. "Of course, the big issue there is transmission lines," adds Miller. "That's the real constraint on data center power delivery right now. You can build local lines and even substations much more quickly than you can transmission at the regional level. That's really where the bottlenecks are right now." 3:00 - Senior Editor David Chernicoff asks for Rich's take on Aligned Data Centers' recent market exit in Maryland, related to its rejected plan for backup diesel generators. "Is this really going to be the future of how large-scale data center projects are going to have to be approached, with more focus put on dealing with permission to build?" wonders Chernicoff, adding, "And are we going to see a more structured data center lobbying effort on the local level beyond what, say, the DCC [Data Center Coalition] currently does?" 
5:19 - In the course of his response, Rich says he thinks we'll see just about every data center company realizing the importance of doing their research on the full range of permissions required to build these megascale campuses, which are only getting bigger. 6:12 - Rich adds that he thinks the situation in Maryland illustrates how important it is for data center developers to step back for a strategic discussion regarding depth of planning. "The first thing to know," he points out, "is that Maryland was eager to have the data center industry. They specifically passed incentives that would make them more competitive with Virginia. They saw that Northern Virginia was getting super crowded...and they thought, we've got lots of resources up here in Frederick County, let's see if we can bring some of these folks across the river. And based on that, the Quantum Loophole team found this site." 8:20 - Rich goes on to note how "the key element for a lot of data centers is fiber, and a key component, both strategically and from an investment perspective [in Maryland] is that Quantum Loophole needed to have a connection to the Northern Virginia data center cluster in Ashburn, in Data Center Alley - which is not that far as the crow flies, but to get fiber there, they wound up boring a tunnel underneath the Potomac River, an expensive and time-consuming project that they're in the late stages of now. That's a big investment, and all that was done with the expectation that Maryland wanted data centers." 10:26 - Rich summarizes how the final ruling for Aligned in Maryland "was, effectively, that you can have up to 70 MW but beyond that, you have to follow this other process [where] you're more like a power plant than a data center with backup energy." He adds, "I think one of the issues was [in determining], will all of this capacity ever be turned on all at once? Obviously with diesel generators, that's a lot of emissions. 
So the air quality boards are wrestling with, on the one hand, having a large company that wants to bring in a lot of investment, a lot of jobs; the flip side is, it's a lot of diesel at a time when we're starting to see the growing effects of climate change, and everybody's trying to think about how we deal with fossil fuel generation. The bottom line is, Aligned pulled out and said, this is just not working. The Governor of Maryland, understanding the issues at stake and the amount of investment that has already been brought there, says that he is working with the legislature to try to 'create some regulatory predictability' for the data center industry. Because it used to be that 70 MW was a lot of capacity, but with the way the industry is going right now, that's not so much." 12:06 - In response to David's reiterated question as to whether the data center industry will now increasingly have to rethink its whole approach to permitting prior to starting construction, Rich notes, "There's a lot of factors that go into site selection, you're looking at land, fiber, power. The regulatory environment around it, whether there's going to be local resistance, has also become part of the conversation, and rightfully so. One of the things that's definitely going to happen is that data centers have to think hard about their impact on the communities where they're locating, and try to develop sensible policies about how they, for lack of a better term, can be good neighbors, and fit into the communities where they're operating." 14:20 - Taking the discussion back across state lines, Editor in Chief Matt Vincent asks for an update on Rich's thoughts surrounding contentious plans by QTS and Compass Datacenters for a proposed new campus development, dubbed the Prince William Digital Gateway, near a Civil War historic site in Prince William County, Virginia. "This is one of the most unique proposals in the history of the data center industry," explains Miller. 
"It would be the largest data center project ever proposed. And of course, it's become an enormous political hot potato. It's the first time where we've really seen data centers on the ballot in local elections." 20:41 - After hearing some analysis of the business and political angles in Prince William County, Vincent asks whether Miller thinks the PW Digital Gateway project's future is in doubt, or if it's just that we don't know what's going to happen? 22:50 - Vincent asks Miller for his take on the recent data center outage affecting Flexential and Cloudflare, as written up for DCF by Chernicoff, particularly in the area of incident reports and their usefulness. In the course of responding to a follow-on point by David, Rich says, "I think the question for both levels of providers is, are you delivering on your promises, and what do you need to do to ensure that you can? Let's face it, stuff breaks, stuff happens. The data center industry, I think, is fascinating because people really think about failure modes and what happens, and customers need to do the same." 32:14 - To conclude, Vincent asks for Miller's thoughts on the AI implications of Microsoft's cloud-based supercomputer, running Nvidia H100 GPUs, ranking third on the Top500 list of the world's supercomputers, as highlighted at the recent SC23 show in Denver. Here are links to some related DCF articles: -- Dominion: Virginia’s Data Center Cluster Could Double in Size -- Dominion Resumes New Connections, But Loudoun Faces Lengthy Power Constraints -- DCF Show: Data Center Diesel Backup Generators In the News -- Cloudflare Outage: There’s Plenty Of Blame To Go Around -- Microsoft Unveils Custom-Designed Data Center AI Chips, Racks and Liquid Cooling
Ten years into the fourth industrial revolution, we now live in a “datacentered” world where data has become the currency of both business and personal value. In fact, the value proposition for every Fortune 500 company involves data. And now, seemingly out of nowhere, artificial intelligence has come along and is looking to be one of the most disruptive changes to digital infrastructure that we’ve ever seen. In this episode of the Data Center Frontier Show podcast, Matt Vincent, Editor-in-Chief of Data Center Frontier, talks to Sean Farney, Vice President for Data Center Strategy for JLL Americas, about how AI will impact data centers.
The Legend Energy Advisors (Legend EA) vision of energy usage is one in which all companies have real-time visibility into related processes and factors such as equipment efficiency, labor intensity, and consumption of power and other energy resources across their operations. During this episode of the Data Center Frontier Show podcast, the company's CEO and founder, Dan Crosby, and his associate, Ralph Rodriguez, RCDD, discussed the Legend Analytics platform, which offers commodity risk assessment, infrastructure services, and real-time metering for energy usage and efficiency. The firm contends that only through such "total transparency" will its clients be able to "radically impact" energy and resource consumption intensity at every stage of their businesses. "My background was in construction and energy brokerage for a number of years before founding Legend," said Crosby. "The basis of it was helping customers understand how they're using energy, and how to use it better so that they can actually interact with markets more proactively and intelligently." "That helps reduce your carbon footprint in the process," he added. "Our mantra is: it doesn't matter whether you're trying to save money or save the environment, you're going to do both of those things through efficiency -- which will also let you navigate markets more efficiently." Legend EA's technology empowers the firm's clients to integrate all interrelated energy components of their businesses, while enabling clear, coherent communication across them. This process drives transparency and accountability on “both sides of the meter,” as reckoned by the company, the better to eliminate physical and financial waste. As stated on the firm's website, "This transparency drives change from the bottom up, enabling legitimate and demonstrable changes in enterprises’ environmental and financial sustainability." 
Legend Analytics is offered as a software as a service (SaaS) platform, with consulting services tailored to the needs of individual customers, including industrial firms and data center operators, as they navigate the power market. Additionally, the Ledge device, a network interface card (NIC), was recently introduced by Legend EA as a way to securely gather energy consumption data from any system in an organization and bring it to the cloud in real time. Here's a timeline of key points discussed on the podcast: 1:15 - Crosby details the three interconnected parts of his firm's service: commodity risk assessment, infrastructure services, and the Legend Analytics platform for understanding energy usage and efficiency. 2:39 - Crosby explains how the Legend Analytics platform works in the case of data center customers, by providing capabilities such as real-time metering at various levels of a facility, as well as automated carbon reporting. 4:46 - The discussion unpacks how the platform is offered as a SaaS, and includes consulting services tailored to each customer's needs. 7:49 - Notes on how the Legend Analytics platform can gather data from disparate systems and consolidate it into one dashboard, allowing for AI analysis and identification of previously unknown issues. 10:25 - Crosby reviews the importance of accurate and real-time emissions tracking for ESG reporting, and provides examples of how the Legend Analytics platform has helped identify errors and save costs for clients. 12:23 - Crosby explains how the company's new, proprietary NIC device, dubbed the Ledge, can securely gather data from any system and bring it to their cloud in real time, lowering costs and improving efficiency. 23:54 - Crosby touches on issues including challenges with power availability; trends in building fiber to power; utilizing power capacity from industrial plants; and on-site generation for enabling stable voltage. 
Keep pace with the fast-moving world of data centers and cloud computing by connecting with Data Center Frontier on LinkedIn, following us on X/Twitter and Facebook, and signing up for our weekly newsletters.
For this episode of the Data Center Frontier Show Podcast, we sat down for a chat with Andy Pernsteiner, Field CTO of VAST Data. The VAST Data Platform embodies a revolutionary approach to data-intensive AI computing, which the company says serves as "the comprehensive software infrastructure required to capture, catalog, refine, enrich, and preserve data" through real-time deep data analysis and deep learning. In September, VAST Data announced a strategic partnership with CoreWeave, whereby CoreWeave will employ the VAST Data Platform to build a global, NVIDIA-powered accelerated computing cloud for deploying, managing and securing hundreds of petabytes of data for generative AI, high performance computing (HPC) and visual effects (VFX) workloads. That announcement followed news in August that Core42 (formerly G42 Cloud), a leading cloud provider in the UAE, and VAST Data had joined forces in an ambitious strategic partnership to build a central data foundation for a global network of AI supercomputers that will store and learn from hundreds of petabytes of data. This week, VAST Data has announced another strategic partnership with Lambda, an Infrastructure-as-a-Service and compute provider for public and private NVIDIA GPU infrastructure, that will enable a hybrid cloud dedicated to AI and deep learning workloads. The partners will build an NVIDIA GPU-powered accelerated computing platform for Generative AI across both public and private clouds. Lambda selected the VAST Data Platform to power its On-Demand GPU Cloud, providing customer GPU deployments for LLM training and inference workloads. With the Lambda, CoreWeave and Core42 announcements, three burgeoning AI cloud providers within the space of three months have chosen to standardize on VAST Data as the scalable data platform behind their respective clouds. 
Such key partnerships position VAST Data to innovate through a new category of data infrastructure that will build the next-generation public cloud, the company contends. As Field CTO at VAST Data, Andy Pernsteiner is helping the company's customers to build, deploy, and scale some of the world’s largest and most demanding computing environments. Andy spent the past 15 years focused on supporting and building large scale, high performance data platform solutions. As recounted by his biographical statement, from his humble beginnings as an escalations engineer at pre-IPO Isilon, to leading a team of technical ninjas at MapR, Andy has consistently been on the frontlines of solving some of the toughest challenges that customers face when implementing big data analytics and new-generation AI technologies. Here's a timeline of key points discussed on the podcast: 0:00 - 4:12 - Introducing the VAST Data Platform; recapping VAST Data's latest news announcements; and introducing VAST Data's Field CTO, Andy Pernsteiner. 4:45 - History of the VAST Data Platform. Observations on the growing "stratification" of AI computing practices. 5:34 - Notes on implementing the evolving VAST Data managed platform, both now and in the future. 6:32 - Andy Pernsteiner: "It won't be for everybody...but we're trying to build something that the vast majority of customers and enterprises can use for AI/ML and deep learning." 07:13 - Reading the room, when very few in it have heard of "a GPU..." or know what its purpose and role is inside AI/ML infrastructure. 07:56 - Andy Pernsteiner: "The fact that CoreWeave exists at all is proof that the market doesn't yet have a way of solving for this big gap between where we are right now, and where we need to get to in terms of generative AI and in terms of deep learning." 08:17 - How VAST started as a data storage platform, and was extended to include an ambitious database geared for large-scale AI training and inference. 
09:02 - How another aspect of VAST is consolidation, "considering what you'd have to do to stitch together a generative AI practice in the cloud." 09:57 - On how the biggest customer bottleneck now is partly the necessary infrastructure, but also partly the necessary expertise. 10:25 - "We think that AI shouldn't just be for hyperscalers to deploy" - and how CoreWeave fits that model. 11:15 - Additional classifications of VAST Data customers are reviewed. 12:02 - Andy Pernsteiner: "One of the unique things that CoreWeave does is they make it easy to get started with GPUs, but also have the breadth and scale to achieve a production state - versus deploying at scale in the public cloud." 13:15 - VAST Data sees itself as bridging the gap between on-prem and the cloud. 13:35 - Can we talk about NVIDIA for a minute? 14:13 - Notes on NVIDIA's GPU Direct Storage, which VAST Data is one of only a few vendors to enable. 15:10 - More on VAST Data's "strong, fruitful" years-long partnership with NVIDIA. 15:38 - DCF asks about the implications of recent reports that NVIDIA has asked about leasing data center space for its DGX Cloud service. 16:39 - Bottom line: NVIDIA wants to give customers an easy way to use their GPUs. 18:13 - Is VAST Data being positioned as a universally adopted AI computing platform? 19:22 - Andy Pernsteiner: "The goal was always to evolve into a company and into a product line that would allow the customer to do more than just store the data." 20:24 - Andy Pernsteiner: "I think that in the space that we're putting much of our energy into, there isn't really a competitor." 21:12 - How VAST Data is unique in its support of both structured and unstructured data. 22:08 - Andy Pernsteiner: "In many ways, what sets companies like CoreWeave apart from some of the public cloud providers is they focused on saying, we need something extremely high performance for AI and deep learning. 
The public cloud was never optimized for that - they were optimized for general purpose. We're optimized for AI and deep learning, because we started from a place where performance, cost and efficiency were the most important things." 23:03 - Andy Pernsteiner: "We're unique in this aspect: we've developed a platform from scratch that's optimized for massive scale, performance and efficiency, and it marries very well with the deep learning concept." 24:20 - DCF revisits the question of bridging the perceptible gap in industry knowledge surrounding AI infrastructure readiness. 25:01 - Comments on the necessity of VAST partnering with organizations to build out infrastructure. 26:12 - Andy Pernsteiner: "It's very fortunate that Nvidia acquired Mellanox in many ways, because it gives them the ability to be authoritative on the networking space as well. Because something that's often overlooked when building out AI and deep learning architectures is that you have GPUs and you have storage, but in order to feed it, you need a network that's very high speed and very robust, and that hasn't been the design for most data centers in the past." 27:43 - Andy Pernsteiner: "One of the unique things that we do, is we can bridge the gap between the high performance networks and the enterprise networks." 28:07 - Andy Pernsteiner: "No longer do people have to have separate silos for high performance and AI and for enterprise workloads. They can have it in one place, even if they keep the segmentation for their applications, for security and other purposes. We're the only vendor that I'm aware of that can bridge the gaps between those two worlds, and do so in a way that lets customers get the full value out of all their data." 28:58 - DCF asks: Armed with VAST Data, is a company like CoreWeave ready to go toe-to-toe with the big hyperscale clouds -  or is that not what it's about? 
30:38 - Andy Pernsteiner: "We have an engineering organization that's extremely large now that is dedicated to building lots of new applications and services. And our focus on enabling these GPU cloud providers is one of the top priorities for the company right now." 32:26 - DCF asks: Does a platform like VAST Data's address the power availability dilemma that's going to be involved with data centers' widespread uptake of AI computing? Here are links to some recent related DCF articles: -- Nvidia is Seeking to Redefine Data Center Acceleration -- Summer of AI: Hyperscale, Colocation Data Center Infrastructure Focus Tilts Slightly Away From Cloud -- AI and HPC Drive Demand for Higher Density Data Centers, New As-a-Service Offerings -- How Intel, AMD and Nvidia are Approaching the AI Arms Race -- Nvidia is All-In on Generative AI
For the latest episode of the Data Center Frontier Show Podcast, editors Matt Vincent and David Chernicoff sat down with Mike Jackson, Global Director of Product, Data Center and Distributed IT Software for Eaton. The purpose of the talk was to learn about the company's newly launched BrightLayer Data Centers suite, and how it covers the traditional DCIM use case - and a lot more. According to Eaton, the BrightLayer Data Centers suite's digital toolset enables facilities to efficiently manage an increasingly complex ecosystem of IT and OT assets, while providing full system visibility into data center white space, grey space and/or distributed infrastructure environments. "We're looking at a holistic view of the data center and understanding the concepts of space, power, cooling, network fiber," said Jackson. "It starts with the assets and capacity, and understanding: what do you have, and how is it used?" Here's a timeline of points discussed on the podcast: 0:39 - Inquiring about the BrightLayer platform and its relevance to facets of energy, sustainability, and design in data centers. 7:57 - Explaining the platform's "three legs of the stool":  Data center performance management, electrical power monitoring, and distributed IT performance management. Jackson describes how all three elements are part of one code base. 10:42 - Jackson recounts the BrightLayer Data Center suite's beta launch in June and the product's official, commercial launch in September; whereby, out of the gate, over 30 customers are already actively using the platform across different use cases. 13:02 - Jackson explains how the BrightLayer Data Center suite's focus on performance management and sustainability is meant to differentiate the platform from other DCIM systems, in attracting both existing and new Eaton customers. 
17:16 - Jackson observes that many customers are being regulated or pushed into sustainability goals, and that the first step for facilities in this situation is measuring and tracking data center consumption. He further contends that the BrightLayer tools can help reduce data center cooling challenges while optimizing workload placement for sustainability and cost savings. 20:11 - Jackson talks about the importance of integration with other software and data center processes, and the finer points of open API layers and out-of-the-box integrations. 22:26 - In terms of associated hardware, Jackson reviews the Eaton EnergyAware UPS series' ability to proactively manage a data center's power drop by handling utility and battery sources at the same time. He further notes that many customers are now expressing interest in microgrid technology and use of alternative energy sources. 27:21 - Jackson discusses the potential for multitenant data centers to use smart hardware and software to offset costs and improve efficiency, while offering new services to customers and managed service providers.
For this episode of the Data Center Frontier Show Podcast, DCF editors Matt Vincent and David Chernicoff chat with Tiffany Osias, VP of Colocation for Equinix. Osias begins by discussing the company's investment in a range of data center innovations to help its customers enter new markets and gain competitive advantages through burgeoning AI and machine learning tools. In the course of the discussion, we also learn about Equinix's deployment of closed loop liquid cooling technologies in six data centers in 2023, and where the company stands on offering increased rack densities for powering AI workloads. Osias also discusses how Equinix is helping its customers optimize their hybrid cloud and multi-cloud architectures and strategies. Data center sustainability also factors into the conversation, as Osias touches on how Equinix aims to achieve 100% renewable energy coverage by 2030. Here's a timeline of key discussion points in Data Center Frontier's podcast interview with Equinix VP of Colocation, Tiffany Osias: 1:09 - Osias explains how Equinix invests in data center innovation to help its customers enter new markets, contain costs, and gain a competitive advantage, especially as AI and machine learning become more prevalent in decision-making processes. 1:50 - The discussion turns to how Equinix enables its customers' use of AI by providing secure, reliable service and efficient cooling options, including advanced liquid cooling technologies. 4:07 - Osias remarks on how Equinix plans to deploy closed loop liquid cooling in six data centers in 2023 to meet increasing demand from customers for full liquid-to-liquid environments. 5:49 - We learn how Equinix offers high-density racks for customers running 10-50+ kW per rack, and provides a bespoke footprint for each customer based on their power consumption needs and cooling capabilities.
7:14 - Osias remarks on how liquid cooling can have a positive impact on data center sustainability by reducing physical footprint and manufacturing-related carbon emissions while improving cooling efficiency. The company's use of renewable energy is also examined. 10:19 - Osias describes how AI impacts the Equinix approach to data center infrastructure, and the importance of partnerships and interconnection strategies. 12:09 - Osias discusses how Equinix aims to achieve 100% renewable energy coverage by 2030 and has made progress towards that goal. 13:21 - Notes on how Equinix helps customers optimize their hybrid multi-cloud architecture and interconnect with cloud and storage providers. Read the full article about the podcast for interview transcript highlights, plus a recent video from Equinix regarding data center sustainability.
Data Center Frontier editors Matt Vincent and David Chernicoff recently caught up with Cyxtera's Field CTO Holland Barry on the occasion of Cyxtera and Hewlett Packard Enterprise (HPE) announcing a new collaboration to help simplify customers' hybrid IT strategies. Cyxtera is now leveraging HPE's GreenLake edge-to-cloud platform to support an enterprise bare metal platform. The podcast discussion extends to how Cyxtera is presently focused on supporting AI workloads in data centers and collaborating with HPE to offer a multi-hybrid cloud strategy. Barry revealed during the podcast that Cyxtera already supports 70 kilowatt racks in 18 markets, and is discussing expanded deployments with its customers and partners. Barry added that many current customers are considering moving to private cloud platforms, due to rising public cloud costs and unexpected fees. Barry said, "My function here at Cyxtera deals largely with the technologies that we both implement internally and also that we deploy within the data centers themselves to make sure that experience of being in the data center colo facility is seamless, feels as much like cloud as it can in terms of the provisioning of services, how we bill for things, things like that." He added, "Generally speaking, I'm a technologist at heart and I just want to make sure that what we're building is what's useful for the market to consume." Here's a list of key points discussed on the podcast: 2:01 - Barry talks about Cyxtera's vision for supporting AI workloads in data centers, including cooling technologies, network speed, power designs, and accommodating adjacencies with edge and public cloud platforms. 4:18 - DCF's Chernicoff asks if Cyxtera will offer 70 kilowatt racks a la Digital Realty. Barry explains that Cyxtera already supports this capacity in 18 of its markets, and is in active discussions with customers and partners over an expansion.
5:40 - Barry discusses how Cyxtera's collaboration with HPE addresses rising cloud costs and furthers a multi-hybrid cloud strategy, including Cyxtera's new enterprise bare metal platform and options for opex financing models. 7:53 - Use cases for customers moving to the HPE GreenLake solution via Cyxtera are discussed, including repatriating cloud workloads and tech refreshes. 15:03 - Asked about the convergence of cloud and hybrid IT strategies, Barry says that Cyxtera views itself as part of such transformations and says the provider is up front with its customers about what workloads are best suited for their platform. The trend of recalibrating workloads from the public cloud to data centers for better cost management is also discussed. 18:22 - Barry expounds on how egress fees and other unexpected costs can lead to a "death by 1000 cuts" situation for public cloud users, driving them to consider private cloud options. 19:57 - Barry observes that many customers are realizing the costs of the public cloud and considering moving to a private cloud solution, and emphasizes the importance of Cyxtera making this transition as easy as possible through technology choices and partnerships. 21:57 - Barry comments on the new Cyxtera partnership with HPE in the context of providing choices and solutions to make moving customer workloads to their venue as easy as possible, with the goal of building a multi-hybrid cloud reality in the future. Background on Cyxtera Cyxtera Technologies operates a global network of 60 data centers, supports 2,300 customers, and had $746 million in revenue in 2022. The company was formed in 2016 when Medina Capital, led by former Terremark CEO Manuel Medina, teamed with investors including BC Partners to buy the data center portfolio of CenturyLink for $2.15 billion. It was at that time one of several data center players seeking to build a colocation business atop a portfolio of data centers spun off by telecom companies.
This April, Data Center Frontier's Rich Miller reported that Cyxtera Technologies was fielding interest from suitors as it sought to reduce its debt load. Data Center Dynamics at that time shared that Cyxtera was exploring options for a sale or capital raise, citing a Bloomberg story that said private equity suitors were studying the company's operations. Shares of Cyxtera had fallen sharply in value during that timeframe and were trading at 31 cents a share at one point, giving the company a market capitalization of about $55 million, a far cry from the $3.4 billion valuation placed on the company when it went public in 2021 through a merger with Starboard Value Acquisition Corp. In May, as shares of Cyxtera fell to new lows, DCF reported lenders for the company said they would provide the colocation provider with $50 million in new funding, allowing it more time to arrange a sale or line up new capital. In June, the colocation provider filed for Chapter 11 bankruptcy. After working for months to find a buyer or reduce its debt load, the company decided it would now restructure through a pre-packaged bankruptcy. The Chapter 11 filing was part of an arrangement with its lenders, who retained the right to gain a controlling equity interest in the company under terms of a restructuring agreement. At the time of the bankruptcy filing, some of the company's lenders committed to provide $200 million in financing to enable Cyxtera to continue operating as it restructures. "Cyxtera expects to use the Chapter 11 process to strengthen the company's financial position, meaningfully deleverage its balance sheet and facilitate the business’s long-term success," the company said in a press release. More details have been made available on Cyxtera's restructuring website. Cyxtera subsidiaries in the United Kingdom, Germany and Singapore are not included in the bankruptcy case, which was filed in New Jersey.
Dgtl Infra's Mary Zhang has done significant recent reporting over the summer on the story of Cyxtera's existing lease rejections in the wake of the bankruptcy filing, as well as charting the company's timeline extension for its bankruptcy-led sale process into late September. In his June reporting on Cyxtera's bankruptcy filing, DCF's Miller noted: "Since Cyxtera leases many of its data centers, Cyxtera's Chapter 11 filing creates a potential challenge for its landlords. Cyxtera leases space in 15 facilities operated by Digital Realty, representing $61.5 million in annual revenue, or about 1.7 percent of Digital's annual revenue. It also leases space in 6 data centers owned by Digital Core REIT, a Singapore-based public company sponsored by Digital Realty. That includes two sites in Los Angeles, three in Silicon Valley and one in Frankfurt. The $16.3 million in annual rent from Cyxtera represents 22.3 percent of revenue for Digital Core REIT. A bankruptcy filing provides debtors with the opportunity to reject leases to reduce their real estate costs." In its press release, Cyxtera noted that it "is continuing to evaluate its data center footprint, consistent with its commitment to optimizing operations." An August report from Bloomberg's Reshmi Basu stated that Cyxtera had drawn interest for its assets from multiple parties, including Brookfield Infrastructure Partners and Digital Realty Trust Inc., according to people with knowledge of the situation. In a recent email to this editor regarding Cyxtera, DCF's Miller opined further: "The key questions for Cyxtera are really all about the bankruptcy outcome, and where that stands. The future could be very different for Cyxtera depending on who the winning bidder is and whether they would reject leases. For example, Digital Realty is reported to be one of the bidders. That makes sense, as Cyxtera leases 12 facilities from them and DLR has a vested interest in protecting that income.
But if Digital wins the auction, do they keep leasing space in Cyxtera’s many non-Digital sites? Or do they reject those leases and consolidate? The auction winner will guide future strategy for Cyxtera. And it could be very different if it’s a private equity firm vs. a strategic buyer like Digital Realty." On August 7, concurrent with its business update for Q2, Cyxtera announced that it had reached a key milestone in its Chapter 11 process by filing a proposed plan of reorganization with the U.S. Bankruptcy Court for the District of New Jersey, and said it had reached an agreement with its lenders to optimize the company's capital structure and reduce its pre-filing funded debt by more than $950 million. In its Q2 update, the company said it had delivered solid growth in total revenue, recurring revenue, core revenue and transaction adjusted EBITDA. Cyxtera's August 7 press release added that negotiations around the company's sale alternative remained active. According to a press release, the proposed reorganization plan is supported by certain of Cyxtera’s lenders who collectively hold over two-thirds of the company’s outstanding first lien debt, and are parties to Cyxtera’s previously announced restructuring support agreement. The company said the proposed plan provides flexibility for the company to pursue a balance sheet recapitalization or a sale of the business. Cyxtera noted that if the plan is approved and a recapitalization is consummated, the lenders have committed to support a holistic restructuring of the company’s balance sheet. Such a restructuring would eliminate more than $950 million of Cyxtera’s pre-filing debt and provide the company with enhanced financial flexibility to invest in its business for the benefit of its customers and partners. For Q2 of 2023, Cyxtera said its total revenue increased by $14.9 million, or 8.1% YoY, to $199.0 million in the second quarter of 2023. 
On a constant currency basis, the company's total revenue increased by $15.1 million, or 8.2% YoY. Recurring revenue increased by $15.8 million, or 9.1% YoY, to $190.0 million in the second quarter. Cyxtera added that its core revenue increased by $17.4 million, or 10.3% YoY, to $186.2 million in the second quarter. Finally, the company said its transaction Adjusted EBITDA increased by $6.4 million, or 10.7%, to $66.4 million and increased by $6.5 million, or 10.9% YoY, on a constant currency basis, in the second quarter. Carlos Sagasta, Cyxtera’s Chief Financial Officer, said, “We are pleased to have delivered another quarter of solid growth across the business, underscoring the strength of our offering and the value we create for our global customers. We expect to continue building on this momentum as we successfully complete the process to strengthen our financial position for the long term.” The press release added that in either a recapitalization or sale scenario, the company remains on track to emerge from the court-supervised process no later than the fall of this year. The company said it had received multiple qualified bids to date. Final bids from interested parties in the sale process were originally due on August 18, a deadline which came and went. An auction slated for August 30 was also cancelled. Nelson Fonseca, Cyxtera’s Chief Executive Officer, commented, “We continue to make important progress in our court-supervised process, while demonstrating solid performance across our business. 
Filing this plan with the support of our lenders provides us a path to emerge in a significantly stronger financial position.” Here are links to some recent DCF articles on Cyxtera: Colocation Provider Cyxtera Files for Chapter 11 Bankruptcy Cyxtera Gets $50 Million Funding, More Time to Seek a Buyer As Stock Price Slumps, Cyxtera Reportedly Mulling Capital Raise or Sale Cyxtera Goes Public as Starboard SPAC Acquisition Closes Cyxtera to Go Public Through $3.4 Billion Merger With Starboard SPAC
Recognizing how data center liquid cooling technology has taken the spotlight this year, in this episode of the Data Center Frontier Show podcast, DCF Editor in Chief Matt Vincent sits down with Mark Fenton, Sr. Product Marketing Manager for Cadence Design Systems; and Mark Seymour, Distinguished Engineer with Cadence and co-founder and CEO of Future Facilities, Ltd., a company specializing in digital twin technology for data centers, which Cadence acquired in July 2022. The discussion unpacks some of the implications of the rise of data center liquid cooling technology for data center designs in the era of AI, as proclaimed earlier this month in the pages of VentureBeat. Here's a timeline of points discussed on the podcast: 1:06 - Transitioning to the "AI Era" of Data Centers? 2:06 - The Cloud and AI Are Absolutely Symbiotic 3:40 - Liquid Cooling Customers: Traditional vs. Now 5:43 - The Beauty of Direct Liquid to Chip Technologies 7:07 - The Issue with Rack Retrofits 8:17 - Timing of Liquid Cooling Imperatives for Data Center Design 11:02 - Cost Considerations for Liquid Cooling: Is PUE a Bad Premise? 13:13 - How Data Center Design Tools are Accounting for Liquid Cooling Technologies 14:40 - Digital Twins for Air Cooled vs. Liquid Cooling Data Centers 16:30 - Liquid Cooling Doesn't Stop Inside the White Space 17:31 - How Liquid Cooling Improves Sustainability and ESG for Data Centers 18:46 - Liquid Cooling Can Potentially Produce Higher-Quality Waste Heat 20:18 - The Holistic Efficiencies of Data Center Liquid Cooling 22:29 - From Opportunities to Challenges 23:32 - Data Centers Love a Silver Bullet 25:34 - Evolution of Data Center Liquid Cooling Designs 26:21 - The Problem Is Power Densities are Rising 27:36 - Drawing Distinctions for Immersion Cooling 29:35 - Immersion Cooling Maintenance Questions Here are links to some recent DCF articles on data center liquid cooling technology: Investors Are Warming Up to Liquid Cooling Liquid Cooling Is In Your Future.
Are You Ready? How to Get Started on Your Immersion Cooling Journey Direct Liquid Cooling - The Ultimate Guide for Data Centers Liquid Cooling: Going Beyond Water Four Factors to Consider When Selecting the Right Glycol-Based Fluid for Liquid Cooling Why Liquid Cooling is Critical for Your Data Center's Future
Premised on DCF's recent article series centered on data center diesel backup generator technology, the latest episode of the Data Center Frontier Show podcast finds site editors Matt Vincent and David Chernicoff recounting how Aligned Data Centers' Quantum Loophole campus was recently called out by the State of Maryland over a permitting snag in a contentiously approved plan for construction of 168 data center diesel generators, amounting to over 500 MW of backup power generation. Data centers like Aligned's Quantum Loophole campus, which is being raised on the site of a former aluminum smelting plant, seek to do in Maryland what so many others are doing next door in Northern Virginia. Maryland does want the data center business, but won't have it without certain qualifications being met through the state's Certificate of Public Convenience and Necessity (CPCN) licensing process. As recorded by DCD, in the wake of the permitting snag, Maryland officials have wondered aloud about clean energy alternatives, even to the point of expressing incredulity that use of carbon-emitting technology is even on the table -- especially given certain outside realities, not least being Aligned's use of microgrid power in its Plano, Texas data center. Chernicoff and Vincent arrive at the conclusion that a modular, incremental technology approach lets data centers draw on the full mosaic of available backup power generation solutions, including diesel, which the overall industry currently requires. Chernicoff also notes how data center diesel power under Tier 4 standards has gotten significantly cleaner after two decades of refinement. Here’s a timeline of points discussed on the podcast: 1:05 - The Issue with Aligned Data Centers' Quantum Loophole Campus In Maryland 2:00 - Diesel and Maryland Are At Loggerheads 4:00 - If Someplace Ever Screamed Out for a Microgrid ...
5:20 - Perceptions of Diesel Power 6:00 - Cleaner Generators and Backup Power Runtime Realities 6:42 - The 3 Big Players in Data Center Diesel Generators 7:14 - Competitive Advantages of No-Load Maintenance 8:20 - Alternatives to Diesel: Microgrid, Battery Backup, SMR, and Biodiesel Technologies 9:44 - A Catch-22 Situation for Data Centers 10:41 - Bits and Pieces of Technology 10:59 - The Benefit of Building from a Clean Slate 11:29 - Building an Entire Data Center Campus, You Expect To Be There For a Decade or Three 12:00 - Could a Microgrid Ever Furnish On-Demand Gigawatt Power? 12:27 - Enclosures for Diesel Backup Power Generators 13:21 - Quality of Support a Huge Competitive Factor 14:17 - The Scoop on Supply Chain 15:15 - Diesel Generator Sizing Concerns 16:01 - Overprovisioning for Backup Power Is an Issue 17:10 - Where Diesel Power Generation Meets Sustainability 18:08 - A Stepping Stone to Other Backup Power Solutions? Here are links to some recent DCF articles on backup power for data centers: Top-Level Issues to Consider When Selecting Backup Generator Technology Sustainability Advantages of HVO Fuel for Diesel Generators Virginia Ends Effort to Shift Data Centers to Generators in Grid Alerts New Technology and Practices Improve the Environmental Performance of Diesel Generators Beyond Diesel: Sustainable Onsite Power for Data Centers Microsoft Plans to Stop Using Diesel Generators by 2030 Google Looks to Batteries as Replacement for Diesel Generators Rethinking the Data Center: Hydrogen Backup is Latest Microsoft Moonshot
According to a recent State of the Edge report, global capital expenditure on IT equipment for edge infrastructure is projected to grow to $104 billion by 2028. Moreover, recent IDC research forecasts worldwide spending on edge computing platforms to reach nearly $274 billion by 2025. As AFL executives in a related DCF 'Voices of the Industry' essay from earlier this year explained further, "Edge data centers are key to unleashing advanced use cases resulting in new user experiences and new business opportunities." As recently as last month, a market brief from JLL unpacked just why smaller data centers are taking off, as AI, 5G and hybrid work fuel an exponential expansion of edge computing footprints. As noted by the brief, "Hyperscale centers are usually located in cities and can typically house 10,000 racks with a capacity in excess of 80 MW. Edge data centers, by comparison, have a smaller capacity, between 500 kilowatts and 2 MW, and, as the name suggests, are located on the outer edge of networks. They bring computing capability geographically closer to those users situated further away from the heart of the cloud." “These assets are increasingly important to the architecture of computing networks, thanks to the continued adoption of IoT devices and now the rise of generative AI applications, and machine learning,” added Tom Glover, JLL Head of EMEA Data Center Transactions. For its part, PricewaterhouseCoopers International Limited (PwC) recently noted that "the global market for edge data centers is expected to nearly triple to $13.5 billion in 2024 from $4 billion in 2017, thanks to the potential for these smaller, locally located data centers to reduce latency, overcome intermittent connections and store and compute data close to the end user."
PwC's edge data center examination cautioned, "However, the right timing and strategy for moving data centers (and related services) to the edge will be different for each organization, depending on the conditions, environment and business opportunities in its marketplace." So even just a cursory reading of the business and technology prospects for edge data centers told DCF's editors that it was time for a podcast discussion probing the history and reach of this most evergreen (yet paradoxically sometimes elusive) technology topic for our industry. Here's a summary of points discussed by DCF editors Matt Vincent and David Chernicoff in today's podcast: 1:01 - Framing the topic with a Bill Kleyman quote. 2:06 - Comparing and contrasting the "original" or "local" edge vs. the hyperscale version. 3:33 - How a lot of edge data centers have come out of the CDN model. 4:23 - From Google and AWS to Akamai, Cloudflare and Rackspace. 6:46 - Optimizing delivery at the edge to challenge the hyperscalers for business. 7:45 - Blending edge computing and edge data centers to move data around as little as possible. 9:11 - 5G and telco: The 'red-headed stepchild' of the edge data center? 9:43 - "If you think about it, every cell tower you see has a data center attached to it." 11:32 - Many major CSPs didn't expect the kind of usage their cell towers are getting. 12:35 - On self-driving cars and autonomous vehicles as competitive edge use-cases. 14:26 - Leveraging 5G, actual connectivity, and localized data centers. 15:17 - How latency and bandwidth have become huge issues in gaining a business advantage. 15:34 - Edge permutations redux. 16:47 - "You just had to work AI into the conversation, didn't you?" 17:50 - When data center-quality analytics live in the trunk of a vehicle. 18:25 - Qualcomm: "Wherever a phone is, that's the edge." 19:21 - "Think of the issues involved. The backhaul, the latency, the security of that data moving across that much fiber."
20:15 - Latency Makes People Go Away 21:31 - "There's certainly a lot more edge-type data centers being built than giant hyperscale data centers." 22:03 - What have supply chain issues done to these smaller data center builds? 22:40 - How edge data center development may depend on what the market does. 23:12 - Engineering the industrial vs. the suburban edge in rural areas. 26:10 - Closing thoughts: "What's old is new again...The first point of contact is the edge." Here are links to some recent DCF stories on edge data centers: Akamai Bets on Bringing Cloud In Closer with 5 New Data Center Sites Getting Closer to the Edge: Data Centers Move Closer to Consumption Roundtable: Growth Seen Across Many Flavors of Edge Computing Data Center Insights: Phillip Marangella of EdgeConneX Tower Operators Step Up the Pace of Their Edge Deployments Let Form Follow Function at the Edge
The latest episode of the Data Center Frontier Show podcast begins with the site's editors Matt Vincent and David Chernicoff commemorating the "hand off" of show hosting duties from DCF founder and Editor at Large Rich Miller. As duly noted, this transition of course occurs amid another terrifying, annually recurring "hottest year on record" for planet Earth. The discussion between editors unfolds to focus on two areas wherein David has done significant reporting this year, based on a wealth of accumulated industry knowledge: data center cooling and the future of power for data centers. Tune in to hear the editors unsuccessfully attempt to bypass the topic of AI for even just three minutes... Here’s a timeline of points Matt and David discuss on the podcast: 0:00 - Podcast Hand-Off Notes: 'Oh Captain, My Captain' 1:17 - Hello to David Chernicoff in the Hottest Year on Record (Again) 2:03 - Cooling and the Future of Power (and the Impact of AI) 2:55 - "Whatever space you give it, it will fill." 3:08 - Doing the math for a tray of NVIDIA H100 processors. 4:02 - 10 kW isn't high-density anymore (and DoE's COOLERCHIPS program knows it). 5:02 - Becoming better corporate citizens of the world (or at least Northern Va.) 6:40 - Notes on CO2 Cooling and Liquid Cooling 7:21 - Replacing HFCs for Less GHGs 9:26 - Liquid Cooling: A Whole Different Ball of Wax 10:44 - Devil's Advocate: Water-based Cooling 13:14 - Incremental Cooling Processes and the Real World 14:17 - "Musk bought 10,000 H100 CPUs..." 15:04 - Cooling in the Hybrid Cloud Environment 16:02 - Data Centers, the Utility Crunch and Nuclear Power 18:01 - The Main Issue is the Grid Itself 19:21 - The Building of New Substations Has to Occur 20:31 - "Is AI the straw that breaks the camel's back?" 23:02 - Implications for Edge Data Centers 24:53 - The Next Hurdle for AI: The Speed of Interconnection
What might AI - artificial intelligence - mean for the data center industry? On this week’s Data Center Frontier Show, host Rich Miller chats with DCF Senior Editor David Chernicoff, who has been digging into all things AI, including its potential impact on cloud platforms, chip makers, hardware startups, server vendors, and colocation providers. They take a deep dive into the implications of AI for the data center industry. If you are at all interested in AI and data centers, this is the podcast for you. Here’s a timeline of topics David and Rich discuss on the podcast: 1:15 – David shares a bit about his career and "Data Center Journey" 3:45 – "An Interesting Time for Hardware:" Trends driving development of chips and servers. 6:15 – The history of rack density, and the arrival of ChatGPT and generative AI. 11:30 – How AI might be disruptive, and how data and cost factor into its business impact. 17:15 – Echoes of the early days of cloud, and what that tells us about AI's trajectory. 21:30 -- Rich and David discuss the opportunities for colocation providers, OEMs and the edge. 24:45 -- The Road Ahead: Trends to watch as generative AI evolves, including societal issues. Here are links to some of David's recent DCF stories on AI and its impact on various sectors within digital infrastructure. How Intel, AMD and Nvidia are Approaching the AI Arms Race The AI Arms Race: Startups Offer New Technology, Target Niche Markets For Leading Cloud Platforms, AI Presents A Major Opportunity Dell Technologies, HPE Pursue Multiple Paths into Enterprise AI Did you like this episode? Be sure to subscribe to the Data Center Frontier show so you get future episodes on your app.
DCF Show host Rich Miller chats with Bill Kleyman, a long-time contributor to Data Center Frontier and one of the keynote speakers at the upcoming Data Center World 2023, where he will share insights from the AFCOM State of the Data Center 2023 industry survey. Bill and Rich dive deep into all the hot topics - including the rise of AI, rising rack density, the cloud capacity crunch, supply chain issues, and nuclear-powered data centers. It's a fun and interesting discussion. Here’s a timeline of topics Bill and Rich discuss on the podcast: 1:45 – State of the Data Center: Key trends in Bill's keynote summarizing the AFCOM survey. 9:45 – Trends in rack density and cooling: Will data centers look more like HPC? 17:00 – Nuclear-powered data centers: Why we're hearing more about this, and the prospects for small modular reactors. 22:15 – Bill and Rich talk supply chain, and the ripple effects on data center delivery. 26:00 – The NIMBY Problem: Why community relations matters for data center companies. 29:15 -- Is there a data center shortage on the horizon? 31:00 -- On the front lines of the AI Boom. Bill's work with Neu.ro. "This is an absolutely critical point for our industry." 37:00 -- AI "hallucinations" and reliability. How do we assess societal impact? 43:00 - How might AI address automation and staffing challenges in the data center industry? 49:15 - The shape of the hybrid cloud: Bill's take on the balance between cloud, colo and on-premises data centers.
The cloud computing boom has been fueled by an influx of capital. One of the most prominent growth stories is NTT Global Data Centers, which has become the world's third-largest data center operator and is building across the United States. On this week’s Data Center Frontier Show, we chat with Steve Lim, the Senior VP of Marketing for NTT Global Data Centers, who has had a front row seat for the company's enormous growth, and shares his take on some of the trends and markets playing a role in NTT's data center journey (as well as his own). Here’s a timeline of topics Steve and Rich discuss on the podcast: 1:45 – NTT's progress in the U.S., from RagingWire to GDCA. 5:15 – How a capital partner like NTT can bring new scale to an operating platform. 9:00 – Northern Virginia: Demand continues in a region facing land and power constraints. 14:00 – How data center sub-markets develop, including the role of availability zones. 16:25 – Why community relations matter for data center companies. 21:00 -- Base isolation: Seismic risk and NTT's growth strategy in Santa Clara. 26:20 -- Why Hillsboro is the new hotness in the hyperscale sector. 32:15 -- Steve shares his Data Center Journey. Be sure to subscribe to the Data Center Frontier show so you get future episodes on your app. We'd love it if you "like" the DCF Show so others can enjoy it as well.
Across Silicon Valley, there are innovations underway that will change the way data centers are cooled. Greg Stover works with technology disruptors to help understand new processor designs and their implications for the design of racks and data halls. DCF Show host Rich Miller talks with Stover, the Global Director of Hi-Tech Development for Vertiv, about trends in processors and how they may accelerate the adoption of liquid cooling. Greg also discusses the evolution of Vertiv, which he describes as "a $5 billion startup," and what the company sees ahead for the data center and cloud computing industry in 2023. Here’s a timeline of topics Greg and Rich discuss on the podcast: 1:20 – About Vertiv - "We're a $5 billion startup" - and Greg's role working with tech disruptors. 3:25 – The state of the chip sector, and what it means for the data center sector. 6:30 – What AI adoption means for IT-focused businesses. 10:15 – Liquid cooling: What does the transition look like? 13:45 – Greg's outlook for the future of data center cooling. 16:00 -- The Metaverse question - what might it mean for business and infrastructure. 20:00 -- Edge use cases, and how to plan for edge computing infrastructure. 28:00 -- Greg shares his "Data Center Journey." Be sure to subscribe to the Data Center Frontier show so you get future episodes on your app. We'd love it if you "like" the DCF Show so others can enjoy it as well.
Immersion cooling, in which servers are submerged in liquid coolant, has a high "cool factor" but low adoption. On this week’s show, we talk with JD Enright about all things immersion cooling. In our wide-ranging discussion, we explore how hyperscale operators and the crypto sector are approaching the use of immersion, the potential for its use in edge computing, and how TMGcore is serving the market, including its robotic system for swapping out submerged servers. As President and CEO of TMGcore, Enright is working to enable more companies to take advantage of the benefits of immersion cooling, which supports higher power densities and also offers potential economic benefits by allowing data centers to operate servers without a raised floor, computer room air conditioning (CRAC) units or chillers. Here’s a timeline of topics JD and Rich discuss on the podcast: 1:00 – Background on TMGcore and what it does. 3:00 – Trends in data center cooling, rack density, and where immersion fits. 9:00 – The role of hyperscale providers in new technology adoption, and their interest in immersion cooling. 13:00 – Why the cryptocurrency sector has embraced immersion cooling and deployed it at scale. 17:45 – How immersion cooling can play a role in the growth of edge computing. 24:00 – TMGcore's development of a robotics system to manage servers in an immersion enclosure. 28:30 – What's ahead for TMGcore and its immersion technology. Be sure to subscribe to the Data Center Frontier Show so you get future episodes on your app.
As a large energy user, the data center industry has a key role to play in the global response to climate change. Pankaj Sharma of Schneider Electric is on the front lines of this effort, working with data center operators and suppliers on a holistic approach to making digital infrastructure more sustainable. Sharma joins DCF Editor Rich Miller for a wide-ranging discussion about the growing sense of urgency for climate action, the Schneider sustainability framework for data centers, and how the metaverse may impact how we power and cool our critical infrastructure. Here’s a timeline of topics Pankaj and Rich discuss on the podcast: 1:00 – Pankaj's role at Schneider Electric and his Data Center Journey. 3:20 – Why sustainability is so important for data centers and IT applications. 5:30 – How the Schneider Electric Sustainability Framework can help data center operators respond. 10:00 – How the industry experience with PUE provides insights on sustainability responses. 12:45 – The growing sense of urgency for climate action, in the data center industry and beyond. 14:15 – Pankaj offers his take on the recent Schneider Electric Innovation Summit and key takeaways. 20:15 – The Metaverse and what it means for data centers and mission-critical infrastructure. 23:15 - The future of cooling, and how metaverse compute tech may impact cooling. 25:10 - The data center supply chain is "a huge challenge" but is getting better. Here are some links with more about the topics we discuss: Schneider Sustainability Framework Offers Roadmap for Climate Response: The framework helps data center users identify, measure and manage their carbon impact, and is intended to spark an acceleration of the climate response from the industry. Data Center Metrics Every Data Center Operator Should Measure: Without standardized sustainability metrics, it’s difficult to ensure internal alignment between design, procurement, operations, and sustainability teams. 
Schneider Electric proposes five categories of data center sustainability metrics that can be used to report on environmental sustainability. Schneider Electric Innovation Summit: The Future Requires Smaller, Faster, Smarter, Cleaner: “There is massive pressure from investors, regarding ESG (the environmental, social and governance aims of companies)," company CEO Jean-Pascal Tricoire said. "If you want to attract good people, you need to have a plan for sustainability.” (From our sister pub Energy Tech) Did you like this episode? Be sure to subscribe to the Data Center Frontier Show so you get future episodes on your app.
On this week's Data Center Frontier Show, we talk site selection with Ernest Popescu, the Vice President for Global Site Development for Iron Mountain Data Centers. All around the globe, data center operators are facing challenges in finding enough land, power and water to support the rapid growth of cloud computing. As campuses become MegaCampuses, finding suitable real estate is becoming difficult. That's why site selection has become one of the most important skillsets in the data center business. Popescu has lengthy experience in hyperscale site development and capacity planning for Amazon Web Services and Facebook, but saw a unique opportunity in the growth of Iron Mountain Data Centers, which has been winning both enterprise and cloud deals. Popescu is also intrigued by the potential for tapping Iron Mountain's global real estate footprint of more than 1,400 document storage locations to support edge computing. Here's a timeline of topics DCF Editor Rich Miller and Ernest discuss on the podcast: 1:00 - Ernest's Data Center Journey. 2:45 - Why data center capacity planning is difficult. "The past isn't necessarily a good indicator of the future," says Ernest. 5:30 - After working so much with hyperscalers, why Iron Mountain? 10:00 - Why site selection is such a hot topic in 2022. 12:45 - How about faster networks? Do they change any of the methodology around site selection? 16:00 - Edge computing, and why this is an opportunity for Iron Mountain. 23:00 - What Ernest likes best about this job.
EdgeConneX has built one of the most distinctive growth stories in digital infrastructure. An early leader in edge computing, EdgeConneX later began building huge data centers for hyperscale operators. It's a result of following the customers, says Phillip Marangella, the Chief Marketing Officer of EdgeConneX. On today's show, Marangella speaks with host Rich Miller about how edge computing has evolved and where it is headed, how EdgeConneX is seeking to build a diverse workforce, and how the company's 2020 acquisition by EQT Infrastructure has accelerated growth. Here's a timeline of topics Phillip and I discuss on the podcast: 1:00 - The EdgeConneX story: What it does and who it helps. 7:00 - The demand tidal wave: "We're all building as fast as we can." 9:15 - Understanding the Edge: "It's the user experience ... It's not rocket science." 13:00 - Building the workforce of the future, and why diversity matters at EdgeConneX. 18:00 - Capital matters: How EQT's backing has turbo-charged growth. 21:30 - Digital infrastructure is a global story. If you enjoy this episode, be sure to subscribe to the Data Center Frontier Show.
Sabey Data Centers has deployed more than 4 million square feet of mission-critical space across the United States, including a new data center in Austin, Texas. Sabey Chief Revenue Officer Tim Mirick joins us to talk about Sabey's history in data center development, and its big 2022 expansions in Austin and Central Washington. Tim shares insights into how Sabey thinks about expansion and growth markets, the rising profile of digital infrastructure, and the new technologies driving data center growth. Here's a timeline of topics Rich and Tim discuss on the podcast: 1:00 - Sabey's history as a pioneer in data center development. 2:15 - Sabey's entry into the Austin, Texas market. 4:30 - Why Sabey likes Central Washington for data center development. 11:00 - The rising profile of data centers and digital infrastructure. 15:30 - How Sabey evaluates growth opportunities. 21:00 - Data center design innovations being introduced at the Austin campus. Be sure to subscribe to the Data Center Frontier Show for more great data center content!
In our Earth Day edition, DCF Editor Rich Miller talks with Sean Farney about the data center industry's progress on confronting climate change, and how sustainability has become a "board-level imperative" for all companies. Sean has been involved in some of the most innovative projects in digital infrastructure, including one of the first web-scale data centers deploying servers in containers, and an edge computing startup focused on converting retail stores into data centers. In a wide-ranging conversation, Sean and Rich discuss the future of backup power and the essential role of generators, the potential role of microgrids, the growing challenges of procuring power in major global markets, and whether vacant retail and office space will become part of the edge computing landscape. Here's a timeline of topics Sean and I discuss on the podcast: 7:00 - Sustainability as a "board-level imperative." 12:00 - Evolutionary steps and optimizing existing infrastructure. 17:45 - How microgrids can enable better sustainability and reliability. 21:30 - Site selection gets harder. Is "bring your own power" the future for hyperscale data centers? 25:30 - What edge infrastructure looks like, and how it may evolve. 30:00 - Can retail stores and office space be converted into digital infrastructure? Editor's Note: After we recorded this podcast, Sean began a new position with JLL.
For Kirk Offel, the bottom line of his business is simple. "We're changing lives and saving people by finding them a sense of purpose," says Offel, the CEO of Overwatch Mission Critical, which provides military veterans with expertise in mission-critical operations and management. Overwatch helps data center owners and operators manage their facilities, but its biggest impact is in the lives of the military veterans it trains, employs, and places into data center careers. In this podcast, Offel discusses his career in the military and the data center business, why he created Overwatch, and the importance of its mission in providing purpose to veterans. Kirk also shares the qualities that military veterans can bring to the mission-critical industry, including their integrity, tenacity, and selfless attitude. It's an interesting and important conversation. Here's a timeline of topics Kirk and DCF Show host Rich Miller discuss on the podcast: 2:00 - Kirk's career journey through the military and the data center industry. 5:45 - An introduction to Overwatch Mission Critical and its mission serving veterans. 22:00 - Kirk discusses Overwatch's business model and service portfolio, and how it has evolved. 32:15 - The challenges and opportunities in launching a startup at the beginning of the COVID-19 pandemic. 38:00 - The Road Ahead: The keys to filling the talent gap in the data center industry. Here are some links with more about Overwatch and its mission: Data Center Firm Provides Veterans With Tech Skills, Sense of Purpose https://datacenterfrontier.com/data-center-firm-helps-give-military-veterans-a-purpose/ Overwatch Mission Critical https://weareoverwatch.com/ Be sure to subscribe to the Data Center Frontier Show for more great data center content!
The emergence of the Omicron variant of COVID-19 is making headlines around the world. It's not yet clear how serious Omicron may become, but it is a worrisome development in a pandemic that has exacted a high toll. What might Omicron mean for the data center sector and cloud computing? That's our topic today on the Data Center Frontier Show, where we look at three areas of potential business impact: travel restrictions, supply chain disruptions, and the progress of the enterprise recovery in IT spending. Important Note: Our podcast focuses on the data center business and cloud infrastructure, so if you're seeking general healthcare information, please consult your medical professional or sources that specialize in healthcare. Here's a timeline of topics discussed on the podcast: 1:45 - What we know, and don't know, about the Omicron variant and its potential impact. 6:15 - The important role of data centers and cloud services during the COVID-19 pandemic. 6:50 - Data is global, and travel restrictions are a challenge for new deployments. 7:50 - How travel restrictions could impact trade shows, conferences and in-person events. 10:20 - The state of the supply chain, and how data center operators are managing project delivery. 13:40 - The road ahead for the enterprise IT recovery. Here are some resources discussed on the podcast: How Data Centers Are Navigating the Supply Chain Crisis: https://datacenterfrontier.com/how-data-centers-are-navigating-the-supply-chain-crisis/ Be sure to subscribe to the Data Center Frontier Show.
We're jazzed this week to have Bill Kleyman as our guest. Bill is Executive VP of Digital Solutions at Switch, as well as author of the AFCOM State of the Data Center research report and head of millennial outreach for Infrastructure Masons. Bill has been an editorial contributor for both Data Center Knowledge and DCF, and is a technology evangelist with nearly boundless enthusiasm for his work. In this podcast, Bill and DCF Editor Rich Miller dig into some of the prevailing trends in digital infrastructure (digital transformation, cloud repatriation, FinOps and, yes, ROBOTS!) as well as the big challenges facing the data center industry (staffing, sustainability and inclusion). It's a fun, wide-ranging conversation. Here's a timeline of topics Bill and I discuss on the podcast: 3:30 - The latest trends in how enterprise customers are using cloud, service providers and on-premises data centers. 7:30 - Cloud Repatriation: Is this a thing? How important is this trend? Bill digs into the details. 11:15 - FinOps is the hot new infrastructure specialty. Bill and Rich discuss the power of financial operations. 15:00 - The data center staffing challenge: Initiatives to introduce young workers to data center careers. 26:00 - How robots and autonomous systems may help maintain data centers. 35:00 - Why the growing focus on ESG is good for data centers and good for the planet. Here are links to stories we discuss on the podcast: Infrastructure Masons: https://imasons.org/ Sustainable Finance: The Next Frontier in the Data Center Climate Response: https://datacenterfrontier.com/sustainable-finance-the-next-frontier-in-data-centers-climate-response/ Oracle Unleashes Robot Dogs in Chicago: https://www.youtube.com/watch?v=HqL9mqmxh58 Be sure to subscribe for more great data center content!
The data center business has gone global. For developers, this means extending their operations across borders and oceans. For a first-hand account of this experience, Rich talks with AJ Byers of Compass Datacenters about the company's new campus in Israel, opportunities in Europe, and the quest for more clean electricity in Montreal, where crypto and cannabis firms are also seeking power to expand their operations.
The Blackstone acquisition of QTS Realty for $10 billion is the biggest deal ever for the data center industry. We highlighted the potential for these huge deals in DCF's 2021 forecast. Here are some thoughts on the Blackstone-QTS deal, what it means for data center M&A, and what comes next. Links: Blackstone to Acquire QTS in Blockbuster Data Center Deal https://datacenterfrontier.com/qts-acquired-by-blackstone-in-blockbuster-data-center-ma-deal/ Data Center M&A on the Horizon as New SPACs go Shopping https://datacenterfrontier.com/data-center-ma-on-the-horizon-as-new-spacs-go-shopping/ Pandemic Could Drive Data Center M&A, Reshaping Industry Landscape https://datacenterfrontier.com/pandemic-could-drive-data-center-ma-reshaping-industry-landscape/ For all our 2021 predictions: Eight Trends That Will Shape the Data Center in 2021. https://datacenterfrontier.com/eight-trends-that-will-shape-the-data-center-industry-in-2021/ Be sure to subscribe for more great data center content!
How do you build innovation into data center development? This week our guest is Nancy Novak, the Chief Innovation Officer at Compass Datacenters. Nancy and DCF Show host Rich Miller discuss how innovation is critical to faster and larger capacity deployments, and how it helped Compass deliver new projects during the pandemic. Nancy also shares her take on diversity in the data center industry - where there's progress, and how to meet the staffing challenges to come. Here's a timeline of topics discussed on the podcast: 3:45 - The critical importance of sustainability in data center development. 8:35 - How COVID-19 altered the data center market. 10:00 - Why hyperscale operators are leasing more data center space, and why this trend may continue. 11:30 - How data growth drives innovation. 14:45 - Why diversity matters, and how (and whether) the data center industry is making progress. 18:30 - Construction tech, and how exoskeletons can support diversity. 22:20 - The "Cool Factor" and how it can help build the data center workforce of the future. Here are some resources that tie into our discussion: How Technology Can Transform Data Center Construction https://datacenterfrontier.com/how-technology-can-transform-data-center-construction/ The People Challenge: Global Data Center Staffing Forecast, 2020-2025 https://datacenter.uptimeinstitute.com/2021-staffing-report.html Executive Insights: Nancy Novak https://datacenterfrontier.com/executive-insights-nancy-novak-compass-infrastructure-masons/ Be sure to subscribe!
Rich Miller sits down with Chris Crosby, the CEO of Compass Datacenters, for a wide-ranging conversation about the latest trends in the data center sector. Chris is an industry veteran who was in on the ground floor of the data center boom, working with CoreSite, Digital Realty and now Compass. The conversation explores demand trends from edge to cloud, where new data centers are being built, and why a data center developer has a research and development unit. Here's a timeline of topics Rich Miller and Chris Crosby discuss on the podcast: 4:00 - Demand for edge computing, and where we are on the growth curve. 6:39 - How Compass approaches client-centric data center design and construction. 9:55 - Why Compass has a Research & Development division (most developers don't), and why it matters. 20:35 - The primacy of the network. 22:45 - Data center site selection, and how Compass has been early in identifying cloud computing hubs. 25:20 - "Availability Zone Thinking" and how resiliency designs guide where data centers are built. 26:30 - Investors love the data center industry. Chris discusses the benefits of patient capital. Here are some stories at Data Center Frontier that provide additional information about items Chris and Rich discuss. Making Concrete Greener: Addressing Cement's Carbon Problem https://datacenterfrontier.com/making-concrete-greener-addressing-cements-carbon-problem/ Compass Scales Up for Growth in Top Hyperscale Markets https://datacenterfrontier.com/compass-scales-up-for-growth-in-top-hyperscale-markets Compass EdgePoint Sees Opportunity in Building Better Networks https://datacenterfrontier.com/compass-edgepoint-sees-opportunity-in-building-better-networks/ Be sure to subscribe!
Our society relies on data as never before. The world needs more data center capacity, and it needs it yesterday. The key to meeting this challenge is industrialization – the ability to add cloud capacity in new places at Internet speed. Rich Miller looks at how data center development is getting bigger, faster and more efficient. Here are links to the additional resources Rich mentions: Building Through the Pandemic: Data Centers Added 17 Million SF in 2020 https://datacenterfrontier.com/building-through-the-pandemic-data-centers-added-17-million-sf-in-2020/ At CyrusOne, More Sky for Bigger Clouds https://datacenterfrontier.com/at-cyrusone-more-sky-for-bigger-clouds/ How Technology Can Transform Data Center Construction https://datacenterfrontier.com/how-technology-can-transform-data-center-construction/ Voice of the Industry (Expert Columns) Voices: Project Buffering to Build Data Centers Faster https://datacenterfrontier.com/project-buffering-data-centers/ Voices: 5 Critical Success Factors in Data Center Construction https://datacenterfrontier.com/success-factors-data-center-construction/ Voices: New Data Center Designs for Hyperscale Cloud and the Enterprise https://datacenterfrontier.com/data-center-designs-hyperscale-cloud/ This is the third in a series of broadcasts on the Eight Trends That Will Shape the Data Center in 2021. https://datacenterfrontier.com/eight-trends-that-will-shape-the-data-center-industry-in-2021/ Be sure to subscribe so you won't miss future installments!
Cloud computing can be a catalyst for action on climate change. Customers are demanding it, and the planet needs it. Rich Miller outlines how the cloud's massive energy footprint positions the data center industry to drive a global shift to renewably-powered business. Here are links to the additional resources Rich mentions: Report: Green Data Centers & The Sustainability Imperative https://datacenterfrontier.com/white-paper/green-data-centers-iron-mountain/ Report: The Rise of the Sustainable Data Center https://datacenterfrontier.com/white-paper/rise-sustainable-data-center/ Webinar: Greening the Data Center https://event.on24.com/eventRegistration/EventLobbyServlet?target=reg20.jsp&referrer=&eventid=2966296&sessionid=1&key=03286873B0FA5C265BAB5A7E7D6C5219&regTag=&V2=false&sourcepage=register Here are two article series on the topic: The Sustainability Imperative https://datacenterfrontier.com/green-data-center-imperative/ Tackling Data Center Water Usage https://datacenterfrontier.com/data-center-water-usage/ This is the second in a series of broadcasts on the Eight Trends That Will Shape the Data Center in 2021. https://datacenterfrontier.com/eight-trends-that-will-shape-the-data-center-industry-in-2021/ Be sure to subscribe so you won't miss future installments!
Enterprise IT must adapt to a new landscape created by the COVID-19 pandemic. DCF Editor Rich Miller outlines why this will matter to data center professionals and end users. This is the first in a series of broadcasts outlining the Eight Trends That Will Shape the Data Center in 2021. In 2020, cloud technology enabled society to retool to survive the pandemic. As the world slowly defines the contours of the “next normal” in its battle with COVID-19, flexibility and resiliency are the business attributes that will matter most. Many organizations are not yet wired for this. BONUS RESOURCES on Enterprise IT trends: Data Center Roundtable: The Year Ahead in Enterprise IT Spending https://datacenterfrontier.com/roundtable-the-year-ahead-in-enterprise-it-spending/ The Enterprise Cloud Shift Accelerates in 2021 https://datacenterfrontier.com/data-bytes-idc-sees-enterprise-cloud-shift-accelerating-in-2021/ Be sure to check out our free Annual Forecast, outlining the DCF take on the most important trends to watch this year: The Eight Trends That Will Shape the Data Center Industry in 2021 https://datacenterfrontier.com/eight-trends-that-will-shape-the-data-center-industry-in-2021/ To hear about the other trends in future episodes, please subscribe.
DCF Show host Rich Miller talks with Dean Nelson, the founder of Infrastructure Masons and an industry thought leader who has helped build some of the world's largest data center networks for Sun Microsystems, eBay and Uber. Rich and Dean discuss the essential role the data center industry has played during the COVID-19 pandemic, Infrastructure Masons' mission to ensure that "every click improves the future," and Dean's work as CEO of Virtual Power Systems and its vision for software-defined power. Here are some links with additional information about Dean's recent work, the iMasons and software-defined power: With Nelson as CEO, VPS Advances Vision for Software-Defined Power https://datacenterfrontier.com/with-nelson-as-ceo-vps-advances-vision-for-software-defined-power/ Dean Nelson's Next Chapter: Family, Philanthropy, iMasons and Startups https://datacenterfrontier.com/dean-nelsons-next-chapter-family-philanthropy-masons-and-startups/ Infrastructure Masons: https://imasons.org/ Virtual Power Systems: https://virtualpowersystems.com/
The cloud is extending to the stars. DCF Show host Rich Miller talks with Doug Mohney, who writes about the intersection of space and the data center industry. You'll hear some of the use cases that combine data centers and satellites, and how large cloud computing players like Amazon Web Services and Microsoft Azure are using satellites to extend their reach beyond the atmosphere.    LINKS: Data Centers Above The Clouds: Colocation Goes to Space https://datacenterfrontier.com/data-centers-above-the-clouds-colocation-goes-to-space/ AWS Ground Station Connects the Amazon Cloud to Space Satellites https://datacenterfrontier.com/aws-ground-station-connects-the-amazon-cloud-to-space-satellites/   Azure Space Connects Microsoft Edge Modules to Satellites (Including SpaceX) https://datacenterfrontier.com/azure-space-connects-microsoft-edge-modules-to-satellites-including-spacex/   DCF Space and Satellites Channel https://datacenterfrontier.com/tag/satellite/   Doug Mohney's Space IT Bridge site https://www.spaceitbridge.com/   Follow Doug on Twitter https://twitter.com/DougonIPComm
Will your data be stored in DNA and holograms? Microsoft is working on new storage technologies to house massive amounts of data. Some of them feel like science fiction, but could disrupt how we store and manage data. DCF Editor Rich Miller explains how data can be stored in DNA and holograms. The biggest advantage: The potential to shrink Walmart-sized data centers.
Google’s Grace Hopper Cable connecting the U.S. and Europe is the latest in a series of new subsea cable projects. Data Center Frontier Show host Rich Miller looks at how cloud growth is reshaping how data travels around the globe.
Host Rich Miller talks with cooling expert Kevin Facinelli, President, Data Center Cooling at Nortek Air Solutions, about the challenges of keeping servers cool as data centers get larger and taller. Also: Will vacant office space house servers in the post-COVID economy? Rich and Kevin discuss key trends in data center cooling, including: How to adapt cooling systems for hyperscale data center designs The challenges of cooling servers in warm climates like Singapore How Nortek's StatePoint cooling solution addresses these challenges Trends in rack power density, and how to manage them. Will hotter, more powerful servers force a shift from air cooling to liquid cooling? The strategic challenges the COVID-19 pandemic poses for data centers. An opportunity in the work-from-home boom: Creating micro-data centers in vacant office space. Kevin discusses this trend, what he's seeing, and what may lie ahead. SHOW NOTES Here are some resources about Nortek: Nortek Air Solutions web site https://www.nortekair.com/products/data-center-products/ Here's the white paper Rich & Kevin discussed on the podcast: Solutions to Data Center Water & Power Availability https://datacenterfrontier.com/white-paper/solutions-data-center-water-power/ An overview of Nortek's StatePoint cooling system and its use in warm climates: https://datacenterfrontier.com/new-design-helps-facebook-keep-data-cool-in-hot-climates/ Decreasing Water & Power Will Help Data Centers Reach Sustainability Goals (Nortek Voices column on DCF) https://datacenterfrontier.com/power-water-data-centers-sustainability-goals/ Amid COVID-19, Legionella Raises Key Questions for Data Centers (Nortek Voices column on DCF) https://datacenterfrontier.com/legionella-is-the-covid-19-of-the-data-center-industry/ Connect with Kevin on LinkedIn: https://www.linkedin.com/in/ksfacinelli/ Here's some of Data Center Frontier's in-depth coverage of some of the trends discussed today: Data Center Cooling Channel https://datacenterfrontier.com/category/data-center-cooling/ Rack Density Keeps Rising at Data Centers https://datacenterfrontier.com/rack-density-keeps-rising-at-enterprise-data-centers/ As Rack Densities Rise, Liquid Cooling Specialists Begin to See Gains https://datacenterfrontier.com/as-rack-densities-rise-liquid-cooling-specialists-begin-to-see-gains/
Our 2020 podcast season kicks off with a series of shows based on DCF’s annual forecast, which we call “Eight Trends That Will Shape the Data Center in 2020.” On this podcast, we explore two of the most important trends: data tonnage, and the hardware arms race around artificial intelligence, or AI. Our number one trend is the explosive growth of data, which will be felt in 2020 like never before. Data tonnage creates challenges in both the distribution and concentration of data. Artificial intelligence plays a starring role in this data tsunami. AI is a hardware-intensive computing technology that will analyze data both near and far, from algorithm training at cloud campuses to inference engines running on smartphones. Our podcast host, Rich Miller, dives deeper into both of these subjects, which will definitely impact the data center. Links: The Eight Trends That Will Shape the Data Center Industry in 2020 https://datacenterfrontier.com/the-eight-trends-that-will-shape-the-data-center-industry-in-2020/ Scorecard: Looking Back at DCF’s 2019 Predictions https://datacenterfrontier.com/scorecard-looking-back-at-dcfs-2019-predictions/ Too Big to Deploy: How GPT-2 is Breaking Servers https://towardsdatascience.com/too-big-to-deploy-how-gpt-2-is-breaking-production-63ab29f0897c Data Gravity is Shifting the Data Center Network https://datacenterfrontier.com/data-gravity-is-shifting-the-data-center-network-but-in-which-direction/ New AI Chips Seek to Reshape Data Center Design, Cooling https://datacenterfrontier.com/new-ai-chips-seek-to-reshape-data-center-design-cooling/
While robots in the data center have been proposed and tested for years, the reality is getting closer to fruition with the introduction of a robotic system to hoist servers from a tank of cooling liquid. Is this another step toward more "lights out" mission-critical facilities? Our host, Rich Miller, explores the history and future of robots in the data center. LINKS: Inside Facebook's Blu-Ray Cold Storage Data Center https://datacenterfrontier.com/inside-facebooks-blu-ray-cold-storage-data-center/ 2013 DCK series 1. The Robot-Driven Data Center of the Future https://www.datacenterknowledge.com/archives/2013/05/22/the-data-center-of-tomorrow-totally-lights-out-within-5-years 2. The Role of Robotics in Data Center Automation https://www.datacenterknowledge.com/archives/2013/12/18/role-robotics-data-center-automation 3. I, Data Center: An Interview With a Robotics Professional https://www.datacenterknowledge.com/archives/2013/12/19/data-center-interview-robotics-professional IBM Uses Roombas to Protect its Data Centers https://www.geek.com/news/ibm-uses-custom-ever-vigilant-roombas-to-protect-its-data-centers-1557368/ Robots Now Annihilate Hard Drives in Google Data Centers https://www.datacenterknowledge.com/google-alphabet/robots-now-annihilate-hard-drives-google-data-centers DE-CIX: Meet Patchy McPatchbot - First Robot at an IX https://www.de-cix.net/en/access/the-apollon-platform/patchy-mcpatchbot Wave 2 Wave: Rome https://www.wave-2-wave.com/rome Scott Noteboom Joins Liquid Cooling Startup Submer https://datacenterfrontier.com/scott-noteboom-joins-liquid-cooling-startup-submer/ TMGcore Unveils Robot-Managed Immersion Data Centers https://datacenterfrontier.com/tmgcore-unveils-robot-managed-immersion-data-centers/
Everyone's talking about data center merger mania. In this episode, host Rich Miller talks about two deals, one that happened and one that didn't. The deal that happened is Digital Realty's acquisition of European colocation and interconnection provider Interxion for $8.4 billion. He covers the important strategic factors that make this deal a win for Digital. Then there's the deal that didn't happen, involving CyrusOne, one of the largest wholesalers in the data center business. On its earnings call, CyrusOne said it has decided to remain independent. Both of these deals have strategic implications for the data center industry.
The road to the self-driving car of the future will be one where data, hardware and data centers play a major role. Autonomous vehicles are the equivalent of supercomputers rolling down the highway, creating a mind-boggling amount of data. Rich Miller, the founder and editor of Data Center Frontier, takes listeners through the impact autonomous vehicles will have on infrastructure and data centers.
Northern Virginia is home to the largest concentration of data centers in the world, with more than 5M SF of data center space, and more on the way. Ashburn is located in Loudoun County, which is home to more than 100 data centers. Our host Rich Miller explains how Ashburn became Data Center Alley and what’s going on there today. Links mentioned: Northern Virginia Market Update https://datacenterfrontier.com/northern-virginia-less-hyper-but-still-plenty-of-scale/ In Loudoun, Neighbors Want Better Looking Data Centers https://datacenterfrontier.com/in-loudoun-neighbors-want-better-looking-data-centers/  Silicon Valley Data Center Market Report https://datacenterfrontier.com/white-paper/silicon-valley-data-center-market-2/
A look at the world of hyperscale data centers, and an introduction to the podcast, including a brief history of the data center industry, background on editor Rich Miller and Data Center Frontier. Rich also provides a preview of the season ahead. Hyperscale Data Centers: A Data Center Frontier Special Report More Insights and trends on hyperscale computing and data centers