Typical Data Center Power Infrastructure
Most data centers get their primary electricity from the wider municipal electric grid. The facility will then have one or more transformers in place to take in that energy and step it down to the appropriate voltage. (Conversion from AC to DC happens further downstream, typically inside UPS systems and server power supplies, not at the transformer.)
Some data centers supplement their energy from the wider grid, or remove the need for it entirely, with on-site electrical generation equipment - either stand-alone generators or alternative energy sources such as solar photovoltaic panels and wind turbines.
The power then gets transferred to Main Distribution Boards (MDBs). According to engineer Hans Vreeburg, these “are panels or enclosures that house fuses, circuit breakers, and ground leakage protection units, take the low-voltage electricity and distribute it to a number of endpoints, such as Uninterruptible Power Supply (UPS) systems or load banks.”
Not only does a UPS help to “clean up” the incoming electricity by ensuring that issues like surges don’t damage equipment, but each one also supplies power to a number of breakers. In a standard data center environment, no more than seven or eight servers are connected to an individual breaker, though the exact number depends on both the capacity of the breaker and the power draw of each server.
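The seven-or-eight figure falls out of simple breaker arithmetic. The sketch below illustrates this; the specific breaker rating, circuit voltage, derating factor, and per-server draw are illustrative assumptions, not figures from the article.

```python
def servers_per_breaker(breaker_amps, volts, server_watts, derate=0.8):
    """Estimate how many servers fit on one breaker.

    Continuous loads are commonly limited to 80% of a breaker's
    rated capacity, hence the default derating factor.
    """
    usable_watts = breaker_amps * volts * derate
    return int(usable_watts // server_watts)

# Assumed example: a 30 A breaker on a 208 V circuit,
# with servers drawing roughly 600 W each.
print(servers_per_breaker(30, 208, 600))  # -> 8
```

With those assumed numbers, the usable capacity is 30 A x 208 V x 0.8 = 4,992 W, which accommodates eight 600 W servers - consistent with the rule of thumb above.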
UPS systems also serve as an initial backup in case of a power outage or similar issue. A typical UPS can provide power to servers and breakers for up to five minutes, which is enough time to start a backup generator following an outage or other problem with the wider electric grid.
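That five-minute bridge time is just battery capacity divided by load. A minimal sketch, where the battery capacity, load, and inverter efficiency are all assumed example values rather than figures from the article:

```python
def ups_runtime_minutes(battery_wh, load_watts, inverter_eff=0.9):
    """Estimate UPS runtime: usable battery energy over the load.

    inverter_eff accounts for losses converting DC battery power
    back to AC for the equipment (assumed 90% here).
    """
    usable_wh = battery_wh * inverter_eff
    return usable_wh / load_watts * 60

# Assumed example: a 5 kWh battery string carrying a 54 kW load.
print(round(ups_runtime_minutes(5000, 54000), 1))  # -> 5.0 minutes
```

The takeaway: UPS batteries are sized for minutes, not hours - long enough for generators to spin up, not to ride out an extended outage.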
Backup Power in Data Centers
In order to ensure continuous uptime and minimize outages as much as possible, most data centers have a backup power source on site or nearby. More often than not, backup power supply comes from a fuel generator, itself powered by gasoline or diesel.
How Much Energy Does a Data Center Use?
Keeping data centers running continuously and without interruption requires a great deal of electricity. According to one report, the U.S. data center industry uses over 90 billion kilowatt-hours of electricity annually - the equivalent output of roughly 34 coal-fired power plants.
On a global scale, 3 percent of all electricity used in the world goes to data centers. That 416 terawatt-hours is more than the total annual electricity consumption of the United Kingdom.
There are a few reasons why energy use is so high - and growing - in data center environments. Not only do servers and other critical pieces of IT equipment require a lot of energy, so too does all of the ancillary equipment. Lights, cooling systems, monitors, humidifiers, and the like all need electricity, which can drive up energy bills considerably.
Power Usage Effectiveness (PUE)
To determine how much electricity in a data center goes towards servers versus non-IT equipment, facilities measure a Power Usage Effectiveness (PUE) score: total facility energy divided by the energy used by IT equipment. A score of 1 means that every single iota of energy in a data center goes towards servers and nothing else, while a score of 2 means that ancillary equipment uses just as much electricity as servers and other IT components.
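The ratio above can be sketched directly in code. The energy figures here are made-up illustrations chosen so the result matches the industry-average score discussed in the article:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """PUE = total facility energy / IT equipment energy.

    1.0 is the theoretical ideal (all power goes to IT);
    2.0 means overhead equals the IT load itself.
    """
    return total_facility_kwh / it_equipment_kwh

# Assumed example: 1,580 kWh total for 1,000 kWh of IT load.
print(round(pue(1580, 1000), 2))  # -> 1.58
```

Equivalently, a PUE of 1.58 means that for every watt delivered to IT equipment, about 0.58 watts go to cooling, lighting, power conversion losses, and other overhead.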
According to the latest survey from the Uptime Institute, the average PUE of a data center stands at 1.58. This figure has been declining steadily, from 2.5 in 2007 and 1.65 in 2013. The average PUE for a Google data center is 1.12, and its facility in Oklahoma posted a score of just 1.08 during the last three months of 2018.
How Much Power Does a Server Rack Consume?
At a per-rack level, the Uptime Institute’s latest survey found that around one in five respondents report a density of 30 kilowatts (kW) or higher, indicating the growing presence of high-density computing. Half said their current rack density was between 10 and 29 kW. At the individual server level, most are configured to draw at most 600 watts.
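Those two figures - rack power budget and per-server draw - bound how many servers fit in a rack, power-wise. A quick sketch using the article's 600 W per-server figure; the rack budgets are the survey's density bands, and real deployments are also limited by physical rack units, which this ignores:

```python
def max_servers_by_power(rack_kw, server_watts=600):
    """Upper bound on servers per rack from the power budget alone."""
    return int(rack_kw * 1000 // server_watts)

# A 10 kW rack vs. a 30 kW high-density rack, at 600 W per server:
print(max_servers_by_power(10))  # -> 16
print(max_servers_by_power(30))  # -> 50
```

In practice a standard 42U rack runs out of physical space well before a 30 kW budget runs out of power for 600 W servers, which is why high-density racks typically pair dense hardware with beefier cooling rather than simply more boxes.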