A few weeks ago, when discussing ~Modernizing Utility Control Centers~, I emphasized that meeting tomorrow’s needs isn’t about simply “throwing tech at the problem and hoping for the best.” As customer rates rise to levels some may deem unacceptable, utilities must be deliberate about investments. Every implementation now carries the expectation that it must deliver impact beyond its cost.

Because of this, utilities have sometimes been slow to adopt new tools, but the realities are settling in: change is required, and there is no time to lose. The truth is that most utilities have been caught flat-footed by the AI explosion. AI and cloud growth are reshaping electricity demand faster than utilities can adapt. As the Wall Street Journal notes, ~nearly 400 GW of data center requests are already in U.S. interconnection queues—more than half of the Lower 48’s summer peak demand~. Like workers in many other industries, grid operators are being asked to do more with less.

The Scale of the Challenge

When I first started operating the grid in 2008, automated reclosing was the “new tech.” It was impressive, though the running joke was that by the time a scheme finished responding to a fault, feeders hundreds of miles away would be tripping as well. With today’s interconnected grid, that joke isn’t far from the truth—the butterfly effect is real.

What’s changed most, however, is the nature of the loads we’re trying to restore. The system was once built to a standard where closing to restore load usually wasn’t a concern; there was enough capacity. Those days are over. First came the inevitable overloads; then vendors responded with smarter logic (such as IntelliTeam) that could calculate load before closing and lock out if the close would result in an overload alarm. The problem? Sometimes you need to go into alarm to get the lights back on.
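
To make that trade-off concrete, here is a minimal sketch of a pre-close load check in the spirit of that logic. It is not vendor code; the ratings, currents, and alarm threshold are made-up numbers for illustration.

```python
# Minimal sketch of an overload-aware pre-close check, in the spirit of the
# load calculations described above. Ratings, currents, and the alarm
# threshold are made-up illustrations, not vendor logic.

def can_close(pickup_amps: float, existing_amps: float,
              emergency_rating_amps: float, alarm_fraction: float = 0.95) -> bool:
    """Return True only if picking up the de-energized load keeps the
    backfeed circuit below its overload-alarm threshold."""
    projected_amps = existing_amps + pickup_amps
    return projected_amps <= alarm_fraction * emergency_rating_amps

# A backfeed already carrying 520 A, asked to pick up 140 A of orphaned load
# against a 600 A emergency rating, locks out rather than close into an alarm.
print(can_close(pickup_amps=140, existing_amps=520, emergency_rating_amps=600))
# -> False
```

The tension lives in that last comparison: an experienced operator can decide that going into alarm is worth it to get customers back; the automated scheme, by design, cannot.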

It became a little game we called “hide the load”—shifting customer demand among multiple circuits to avoid emergency overloads. Sometimes you win, sometimes you lose.
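
For anyone who never played, the move looks something like this: a minimal sketch, assuming made-up feeder names, ratings, and a switchable block of load behind a tie switch.

```python
# A toy version of "hide the load": move a switchable block of load from the
# most heavily loaded feeder to whichever neighbor has the most headroom.
# Feeder names, ratings, and the block size are made up for illustration.

feeders = {"FDR-101": {"load": 580, "rating": 600},
           "FDR-102": {"load": 430, "rating": 600},
           "FDR-103": {"load": 390, "rating": 600}}
switchable_block = 60  # amps of customer load behind a mid-line tie switch

worst = max(feeders, key=lambda f: feeders[f]["load"] / feeders[f]["rating"])
best = min(feeders, key=lambda f: feeders[f]["load"] / feeders[f]["rating"])

if feeders[best]["load"] + switchable_block <= feeders[best]["rating"]:
    feeders[worst]["load"] -= switchable_block
    feeders[best]["load"] += switchable_block
    print(f"Shifted {switchable_block} A from {worst} to {best}")
else:
    print("No neighbor has headroom -- you lose this round.")
```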

This game grew harder in the mid-2010s when data centers began appearing in formerly “traditional” load sites. Historically, load followed a bell curve. If necessary, I could run a circuit in overload for an hour or two, knowing the load would soon taper off at sunset. Those days are gone. Load curves are now flat. Who is using all the power in the middle of the night, when factories, office buildings, and diners are closed? Data never sleeps.

Today, data centers consume about ~4% of U.S. electricity, with projections rising to 12% by 2028~. On top of that, their requirements are far tighter than those of residential or commercial customers. Where most utilities operate within a ±7% voltage band for residential and ±10% for commercial, data centers expect ±3%. Operators strive to hold voltage as close to nominal as possible, but heavier loading drags voltage down, and constant-power loads like servers draw more current as voltage sags, compounding the problem. All of this makes control centers and their operators busier than ever.
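
To put that tighter band in perspective, here is a quick back-of-the-envelope comparison on a 120 V nominal service. These are straight percentage bands for illustration, not the service-voltage ranges of any particular standard.

```python
# Back-of-the-envelope comparison of the voltage bands mentioned above,
# on a 120 V nominal base. Straight percentage bands for illustration only.
NOMINAL_V = 120.0

for label, band in [("residential +/-7%", 0.07),
                    ("commercial +/-10%", 0.10),
                    ("data center +/-3%", 0.03)]:
    low, high = NOMINAL_V * (1 - band), NOMINAL_V * (1 + band)
    print(f"{label:18s} {low:5.1f} V to {high:5.1f} V  (window {high - low:4.1f} V)")

# residential +/-7%  111.6 V to 128.4 V  (window 16.8 V)
# commercial +/-10%  108.0 V to 132.0 V  (window 24.0 V)
# data center +/-3%  116.4 V to 123.6 V  (window  7.2 V)
```

Less than half the residential wiggle room, on circuits that are already running hotter than they were designed for.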

What It Means for Control Centers

Demand growth complicates forecasting, dispatch, and outage management. Control centers must balance high-demand clusters like Northern Virginia’s “Data Center Alley” against regional reliability. This is no longer just a planning issue—it’s a real-time operations challenge.

Traditionally, the response to load growth was to reconfigure circuits, shifting load off the most heavily loaded feeders onto neighbors with spare capacity. That’s no longer possible when every circuit is pushed to its limit.

So control centers are leaning on technology: Dynamic Line Ratings (DLR), Volt/VAR Optimization (VVO), and Grid-Enhancing Technologies (GETs) are all being deployed. But across all of these solutions, one factor is consistent: an operator is still required—to run them, to monitor them, or to manage exceptions. Regardless of automation levels, the operator must remain at the center.
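
To give a flavor of what one of these tools does, take dynamic line ratings: the static rating of a line is essentially re-derived from current conditions. The sketch below uses a deliberately oversimplified heat-balance approximation that ignores wind, sun, and conductor details (real methods such as IEEE 738 account for them), and every number in it is an illustrative assumption.

```python
import math

# Deliberately oversimplified dynamic line rating: treat conductor temperature
# rise above ambient as proportional to current squared, so cooler weather
# buys thermal headroom. Ignores wind, solar heating, and conductor specifics
# that real methods (e.g., IEEE 738) include. All values are illustrative.

def dynamic_rating(static_rating_amps: float,
                   t_conductor_max_c: float,
                   t_ambient_now_c: float,
                   t_ambient_rated_c: float) -> float:
    """Scale a static rating by the thermal headroom available right now."""
    headroom_now = t_conductor_max_c - t_ambient_now_c
    headroom_at_rating = t_conductor_max_c - t_ambient_rated_c
    return static_rating_amps * math.sqrt(headroom_now / headroom_at_rating)

# A line rated 600 A at a 40 C design ambient gains headroom on a 15 C day.
print(round(dynamic_rating(600, t_conductor_max_c=75,
                           t_ambient_now_c=15, t_ambient_rated_c=40)))  # ~786 A
```

Even here the operator stays in the loop: someone has to decide whether to lean on that extra headroom, and what to do when the weather data feeding the rating goes stale.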

As I used to tell my operators: I couldn’t predict exactly what their job would look like in ten years, but I could guarantee they would still have one. So far, that promise has held true.

Industry Response — Early Innovations

Technology is essential, and I’m one of the strongest advocates for continuing to build out data centers. But expansion must be done responsibly—not with the shotgun approach we’ve often seen. This is not just a utility issue; it’s a community issue.

A North Carolina State study estimates ~electricity prices could rise 8% nationwide by 2030 due to AI demand growth~. Already, ~AI data centers are contributing to above-average residential bill increases~. If we’re sharing the grid, then those who use the most power should also share responsibility for maintaining its quality and availability.

We’re beginning to see that happen. ~Google is pausing or shifting AI workloads during grid stress~. Others are exploring ~co-locating data centers with generation assets to ease transmission bottlenecks~. If everyone contributes a little, no one bears the full burden.

Utilities themselves aren’t standing still. Companies like Duke Energy are experimenting with ~AI-driven predictive maintenance~ and deploying ~digital twins for grid optimization~.

When I joined my utility in the late 2000s, I entered a control room of operators who resisted change. Some even retired rather than shave their beards to be fitted for COVID masks. Operators can be many things—including, at times, unreasonable.

But with the old guard retiring, a new generation of operators is entering control rooms. This group isn’t afraid of change—they’re asking for it. It’s refreshing. This problem won’t be solved by a one-size-fits-all solution, but it will benefit greatly from an energetic, curious, and adaptable workforce.

Risks, Roadblocks, and Recommendations

There are many potential solutions to the challenges facing control centers and the grid. All will be needed to confront the hydra-headed problems of the future:

  • Infrastructure gap: Interconnection queues stretch years.

  • Regulatory tension: Who pays for grid upgrades—ratepayers or tech giants?

  • AI governance: As Business Insider notes, ~utilities are still “tiptoeing” into AI~, with limited frameworks for scaling.

  • Workforce challenge: Operators and engineers must adapt to AI-driven decision tools.

We may have been caught off guard by the rapid rise of data centers, but there’s only one way to eat an elephant: one bite at a time.

Immediate (0–12 months) – Ease cognitive load quickly

  • Pilot AI tools in outage forecasting, predictive maintenance, and load balancing to reduce manual decision pressure in real time.

  • Reassess dispatch protocols and operator training so new digital aids complement—rather than complicate—existing workflows.

Intermediate (1–3 years) – Share the load

  • Collaborate with hyperscalers on demand response and flexible workload scheduling to shift stress away from operators at peak times (a simple scheduling sketch follows this list).

  • Expand successful AI pilots into broader deployment for forecasting, visibility, and DER coordination—automating routine decisions so operators can focus on exceptions.
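
As a toy illustration of the workload flexibility mentioned above, the sketch below defers batch jobs out of an assumed evening peak window; the peak hours and job list are hypothetical.

```python
# Toy illustration of flexible workload scheduling: deferrable batch jobs are
# pushed out of an assumed evening peak window so firm load rides through
# untouched. The peak window and the job list are hypothetical.

PEAK_HOURS = range(17, 21)  # assumed 5-9 PM system peak

jobs = [
    {"name": "model-training", "requested_hour": 18, "deferrable": True},
    {"name": "billing-run",    "requested_hour": 19, "deferrable": False},
    {"name": "index-rebuild",  "requested_hour": 20, "deferrable": True},
]

for job in jobs:
    if job["deferrable"] and job["requested_hour"] in PEAK_HOURS:
        job["scheduled_hour"] = max(PEAK_HOURS) + 1  # first off-peak hour
    else:
        job["scheduled_hour"] = job["requested_hour"]
    print(f'{job["name"]:15s} requested {job["requested_hour"]}:00 '
          f'-> runs at {job["scheduled_hour"]}:00')
```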

Long-Term (3+ years) – Redesign for resilience

  • Model extreme demand growth scenarios to help operators anticipate future states, not just react to them.

  • Modernize control centers with advanced situational awareness, automation, and redesigned spaces that reduce fatigue and improve focus.

  • Institutionalize continuous workforce development so operators remain confident and adaptable as technology and load conditions evolve.

Foundational (Across All Horizons) – Non-negotiables for operator success

  • Cybersecurity first: Prevent cyber incidents that could overwhelm operators.

  • Cross-utility & regulator alignment: Reduce duplicative reporting and fragmented standards that add to operator burden.

  • Data infrastructure & governance: Ensure operators receive trusted, streamlined information rather than conflicting dashboards.

  • Workforce culture & change management: Position new tools as aids, not replacements, to build trust and adoption.

Closing / Forward Look

It is still up for debate how AI will ultimately fit into business models. What’s certain is that it will play a role—and just as certain is the fact that we will need fully qualified operators to manage and run the grid.

The AI/data center boom is not a passing trend—it’s a secular shift in electricity demand. Control centers will either be overwhelmed or become the strategic hub of resilience. Operators, and the control centers in which they work, are inherently risk-averse—and for good reason. For years, many have pushed back on this trend as too risky. Now, it’s time to address the risks and leverage the opportunities data centers present.

As I used to tell storm managers during storm duty: before we rebuild the house, we first need to put out the fire. Step one is to stabilize operations and ease the immediate burdens on control centers. Step two is to build for the long term.

The question is no longer whether AI will reshape demand, but whether control centers can seize this moment to redefine their role—from reactive firefighting to proactive resilience.
