Abstract

Physics-based modeling aids in designing efficient data center power and cooling systems. These systems have traditionally been modeled independently under the assumption that the inherent coupling of effects between the systems has negligible impact. This study tests the assumption through uncertainty quantification of models for a typical 300 kW data center supplied through either an alternating current (AC)-based or direct current (DC)-based power distribution system. A novel calculation scheme is introduced that couples the calculations of these two systems to estimate the resultant impact on predicted power usage effectiveness (PUE), computer room air conditioning (CRAC) return temperature, total system power requirement, and system power loss values. A two-sample z-test for comparing means is used to test for statistical significance with 95% confidence. The power distribution component efficiencies are calibrated to available published and experimental data. The predictions for a typical data center with an AC-based system suggest that the coupling of system calculations results in statistically significant differences for the cooling system PUE, the overall PUE, the CRAC return air temperature, and the total electrical losses. However, none of the tested metrics show statistically significant differences for a DC-based system. The predictions also suggest that a DC-based system provides a statistically significant reduction in overall PUE and electrical losses compared to the AC-based system, but only when coupled calculations are used. These results indicate that the coupled calculations impact predicted general energy efficiency metrics and enable statistically significant conclusions when comparing different data center cooling and power distribution strategies.

1 Introduction

The growth of the data center industry calls for energy efficient practices. In 2018, global data center electricity consumption exceeded 200 TWh, which is roughly 1% of worldwide electricity use and more than the national energy consumption of some countries, including Iran [1]. Shehabi et al. [2] projected that U.S. data centers would consume approximately 73 × 10^9 kWh of electricity in 2020, while estimated consumption in 2014 was 70 × 10^9 kWh, representing around 1.8% of total U.S. electricity consumption. More recently, Masanet et al. [3] estimated that the data center computing workload increased by about 550% from 2010 to 2018, while global data center electricity consumption increased by only 6% over this period due to significant improvements in energy efficiency, yet Shehabi et al. [4] doubted that the improvements in energy efficiency would be sufficient to offset the energy demand of the rapidly expanding industry. At the same time, some recent reports predict that data centers in 2025 will consume around 20% of global electricity production [5]. The recent expansion of data-intensive technologies such as cryptocurrency, artificial intelligence, autonomous vehicles, digitized manufacturing, and energy systems further increases the demand for data processing and storage in data centers. Data centers are responsible for approximately 0.5% of U.S. greenhouse gas emissions [6] due to their large energy consumption. Therefore, improving data center energy efficiency is critical to enhancing energy security and reducing environmental burden.

The typical data center energy efficiency metric is the overall power usage effectiveness (PUE), defined as the annual total power draw divided by the information technology (IT) load [7]

$$\mathrm{PUE} = \frac{P_{tot}}{P_{IT}} \tag{1}$$

where $P_{tot}$ and $P_{IT}$ are the annual total and IT electrical power draw, respectively. Since the total power draw consists primarily of the IT load, the cooling load, and power conversion losses, cooling and electrical power system-specific PUE values may also be calculated

$$\mathrm{PUE}_C = \frac{P_{IT} + P_C}{P_{IT}} \tag{2}$$

$$\mathrm{PUE}_E = \frac{P_{IT} + P_E}{P_{IT}} \tag{3}$$

This breakdown in PUE provides data center operators more knowledge to make informed decisions on improving the energy efficiency of their facilities (e.g., if $\mathrm{PUE}_C > \mathrm{PUE}_E$ for a given facility, then the operator can focus efforts on improving the efficiency of the cooling system). The electrical power system includes all electrical loads except those drawn by the cooling system, so

$$P_{tot} = P_{IT} + P_C + P_E \tag{4}$$

where the subscripts C and E refer to the cooling and electrical power systems, respectively. Therefore

$$\mathrm{PUE} = \mathrm{PUE}_C + \mathrm{PUE}_E - 1 \tag{5}$$

Instantaneous PUE values, which are used in this study, apply the above equations with instantaneous power draws in place of annual values.
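As a quick numerical illustration of Eqs. (1)–(5), the short Python sketch below evaluates the instantaneous PUE values for power draws rounded from the uncoupled AC-based base case reported later in Table 4; the variable names are illustrative only.

```python
# Minimal sketch of the instantaneous PUE relations in Eqs. (1)-(5).
# The power values are rounded from the uncoupled AC-based base case
# in Table 4 and are in kW.
P_IT = 300.0   # IT (server) power draw
P_C = 144.0    # cooling system power draw (two CRAC units plus blowers)
P_E = 113.0    # electrical power system losses (transformers, UPS, PSUs)

P_tot = P_IT + P_C + P_E             # Eq. (4)
PUE = P_tot / P_IT                   # Eq. (1)
PUE_C = (P_IT + P_C) / P_IT          # Eq. (2)
PUE_E = (P_IT + P_E) / P_IT          # Eq. (3)

# Eq. (5): the two partial PUE values count the IT load twice, hence the -1.
assert abs(PUE - (PUE_C + PUE_E - 1.0)) < 1e-12
print(f"PUE_C = {PUE_C:.2f}, PUE_E = {PUE_E:.2f}, PUE = {PUE:.2f}")
```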

Cooling systems have been the primary focus of data center energy research since they generally dominate the non-IT power consumption in PUE calculations [8]. Tools such as the in-house flow network software Villanova Thermodynamic Analysis of Systems (VTAS) [9] or the data center computational fluid dynamics (CFD) software 6SigmaDC [10] have been developed to aid in improving data center cooling system efficiency by predicting metrics such as the cooling system PUE (i.e., PUE_C). These tools enable data center designers and managers to compare different cooling strategies and to optimize the chosen strategy.

This study uses VTAS because of its flexibility in incorporating a wide variety of cooling equipment (e.g., chillers, evaporative coolers, and cooling towers) and cooling strategies. For example, VTAS has been used to show that the second-law efficiency of cooling systems increases when a traditional air cooling strategy containing computer room air conditioning (CRAC) or computer room air handler (CRAH) units is replaced with hybrid liquid-air cooling (in-row, overhead, or rear door coolers) [11] or direct (cold plate) liquid cooling equipment [12]. VTAS has also been used to demonstrate reasonable agreement with CFD modeling for exergy destruction due to data center airflow mixing [13]. It has also been combined with CFD to show how variations in whitespace (i.e., airspace) flow patterns influence the system-wide exergy destruction [14] and to develop strategies for implementing hybrid liquid-air cooling equipment [15]. The software tool has been validated against an experimental testbed as part of a study suggesting that the combination of supervisory control and data acquisition and ON/OFF system-level controls provides low cooling system PUE values while maintaining reliability [16].

Contributions of electrical power system inefficiencies to the overall PUE should also be considered. Fan et al. [17] used modeling to discover opportunities for energy savings in clusters (i.e., thousands of servers) when studying the power provisioning for a warehouse-scale computer installation. In addition, Meisner et al. [18] proposed mechanisms to eliminate idle power waste in servers and showed a 74% average server power reduction by combining an energy conservation approach (PowerNap) and a power provisioning approach (redundant array for inexpensive load sharing, or RAILS).

Some studies have combined the effects of both cooling and power distribution inefficiencies in data centers. Pelley et al. [19] present a theoretical framework for total data center power calculations that includes the effects of power distribution from the power distribution unit (PDU) to the servers (including load distribution) and the influence of cooling equipment and airflow recirculation. However, they provide a parametric power distribution model rather than a physics-based detailed analysis of components. Tran et al. [20] developed a data center workload energy simulation tool to estimate the energy consumption of all cooling and power equipment in a data center.

These approaches perform basic data center energy consumption calculations that include contributions of both cooling and power delivery equipment, but they do not incorporate the fact that an inherent coupling exists between cooling and power systems in data centers: each piece of cooling equipment requires a power feed, and electrical inefficiencies translate into cooling loads provided that the source of the inefficiencies resides within the data hall (Fig. 1). Moreover, when the cooling system power draw is increased to handle the cooling loads produced by electrical equipment inefficiencies, the corresponding power losses also increase. As a result, the cooling system, power distribution, and overall PUE values calculated in the traditional uncoupled manner will be artificially low. This inherent coupling has not yet been explored analytically, but it could be important for accurate predictions of PUE values. The key purpose of this study is therefore to determine whether this error is statistically significant when compared to the inherent uncertainties within the model. This examination is performed on models of typical alternating current (AC)-based and direct current (DC)-based power delivery systems, enabling the determination of statistical significance in several predicted data center energy efficiency metrics (cooling system PUE, electrical system PUE, overall PUE, CRAC return temperature, total grid power requirement, and total electrical equipment losses) for uncoupled versus coupled calculations as well as for comparing the two systems.

Fig. 1: The inherent coupling of data center cooling and power systems. Adapted from Ref. [22].

This work builds upon preliminary work by the authors [21] to estimate the influence of coupling electrical and mechanical system calculations on energy efficiency metrics. The preliminary work first introduced a standalone power system calculation scheme and then described the relationship between cooling and power systems. The coupled calculations suggested a significant increase in the cooling system PUE. However, the electrical system calculation framework presented in that work was not stable for data centers beyond ten racks and did not allow for converging power flows (i.e., multiple power sources). Additionally, no system-wide validation was performed, and the power system component models showed only modest agreement in validation exercises against data from The Green Grid [22]. Also, no formal analysis was presented for statistical significance when comparing uncoupled and coupled results. Finally, the reported quantitative impact of coupling on energy efficiency metrics was overestimated due to an error later found in the software [23]. This study therefore advances the previous study in four ways:

  1. A new power system calculation framework has been developed to allow for converging power flows and with improved stability for data centers beyond ten racks. The old framework began by assuming values of the component efficiencies, calculating the current at the loads, working back to the grid, and then adjusting the currents based on updated component efficiencies and line losses using derived correlations; it is unstable because the component efficiency calculations diverge unless the initially guessed efficiencies are close to the final values. The new framework avoids this conditioning problem by using component efficiencies calibrated to experimental data.

  2. The power distribution component efficiency ranges have been calibrated to available published data and additional new data from experimental data center measurements. The new models therefore agree with experimental measurements.

  3. The energy efficiency metric calculations have been corrected.

  4. A formal statistical analysis has been performed to test for significant differences in key metrics.

In addition, these calculations have been extended to compare the efficiencies of AC versus DC-based power distribution.

This study provides significant advances in describing how (1) to model data center power delivery systems, (2) to quantify the influence of coupling cooling and power delivery system calculations on cooling system PUE, electrical system PUE, and overall PUE predictions, and (3) to use statistical analysis on the results of the coupled calculations to determine if one cooling or power delivery strategy is significantly more efficient than another. The study also demonstrates that some key data center metrics are significantly reduced when DC-based power systems are used in place of AC-based systems when the coupled equations are applied for each system model.

2 Methodology

The coupled calculation scheme requires independent algorithms for the cooling and power system calculations, followed by translating information between the systems in an iterative fashion until convergence. VTAS, the tool used for the analysis in this study, was originally developed for data center cooling system analysis. The concept behind the VTAS modeling scheme is that the various components in a data center (e.g., cooling and IT equipment) are linked through fluid loops, such as CRAC units interacting with servers through an air loop. The loops contain a closed network of fluid branches. In a typical cooling system calculation, fluid flow rates are calculated based on user-specified pump/fan and fluid branch network information. An energy balance is then performed to determine the equipment capacity based on the known instantaneous IT load. The component heat exchange, fluid stream inlet temperature, and fluid stream outlet temperature are used to size the components and to calculate the component exergy destruction. Transient simulations may subsequently be run using the configuration as a starting state. The cooling system algorithm has been discussed and validated in detail elsewhere [9,11,12,16]. The cooling system for this study features two CRAC units, each providing 10 m^3/s of supply airflow at 20 °C and 50% relative humidity.
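As a minimal example of the kind of air-side energy balance VTAS performs, the sketch below estimates the CRAC return temperature for the two units described above under the assumption that the entire IT load is absorbed by the supply airflow and that standard air properties apply; the result is consistent with the uncoupled return temperature reported later in Table 6.

```python
# Minimal air-side energy balance sketch for the two CRAC units above
# (10 m^3/s each of supply air at 20 C). Air properties are assumed.
rho_air = 1.18     # kg/m^3, assumed air density near room temperature
cp_air = 1006.0    # J/(kg K), assumed specific heat of air

T_supply_C = 20.0          # CRAC supply air temperature, deg C
V_dot_total = 2 * 10.0     # m^3/s, total supply airflow from both CRAC units
Q_load_W = 300e3           # W, heat load absorbed by the air (IT load only, i.e., uncoupled)

m_dot = rho_air * V_dot_total                           # supply air mass flow rate, kg/s
T_return_C = T_supply_C + Q_load_W / (m_dot * cp_air)
print(f"CRAC return temperature ~ {T_return_C:.1f} C")  # ~32.6 C
```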

Power system modeling follows a similar network modeling scheme as the cooling system analysis. Power components are connected by electrical lines with calculated inlet and outlet voltages and currents, enabling calculation of component efficiencies. The AC-based and DC-based power distribution systems are shown in Figs. 2 and 3, respectively. In both systems, 13.8 kVAC, three-phase power is received from the electric grid and passed through an AC transformer, which reduces the voltage to 480 VAC. In the AC-based power distribution system, the power is then split between the mechanical equipment and a second transformer, which in turn steps down the voltage to 208 VAC. The power then passes through an uninterruptible power supply (UPS), which is modeled as a rectifier and inverter in series. The power leaving the inverter is then distributed to the server power supply units (PSUs) through a series of row-based PDUs and racks.

Fig. 2: AC-based system electrical model overview. The server PSUs contain a rectifier and a DC–DC converter that lowers the DC voltage to 12 V. Only PDU-1 and Rack 1-1 are expanded here for clarity. All components downstream of the electric grid contribute to heat gains within the data center whitespace except the outdoor air CRAC blower.
Fig. 3: DC-based system electrical model overview. The server PSUs contain a DC–DC converter that lowers the DC voltage to 12 V. Only PDU-1 and Rack 1-1 are expanded here for clarity. All components downstream of the electric grid contribute to heat gains within the data center whitespace except the outdoor air CRAC blower.

The DC-based power distribution system differs from the AC-based system in that the second AC transformer and AC UPS are replaced with a single transformation from 480 VAC to 380 VDC using a DC UPS containing a controlled three-phase bridge rectifier, denoted here as a component named "RectifierV." The output DC power is then fed into the PDUs for distribution to the racks and servers. It should be noted that the AC PSUs contain a rectifier and DC/DC buck converter in series, whereas the DC PSUs contain only the buck converter. Both the AC PSUs and DC PSUs terminate with a 12 VDC load. The test data center used in this study contains four rows of ten racks, with each rack containing 15 servers. Each server load is 500 W, leading to a total IT load of 300 kW.

2.1 Power System Components.

Fixed component efficiencies, defined as real power out divided by real power in, are directly used for all component models: AC transformers (step down in AC voltage), rectifiers (AC-to-DC conversion), inverters (DC-to-AC conversion), DC/DC buck converters (step down in DC voltage), and the RectifierV (AC-to-DC conversion plus step down in voltage) component. The ranges of component efficiencies in the AC and DC power system models were calibrated to match available data from The Green Grid [22], Southern California Edison (Rosemead, CA) [24], and measurements in an experimental data center at Binghamton University (Binghamton, NY). Data from The Green Grid include transformer efficiencies over various load levels, indicating a nearly flat efficiency of ∼0.97 when the loads exceed 20%. The Green Grid data for an AC UPS also indicate ∼0.90 efficiency for loads exceeding 20%. Finally, Southern California Edison's data for an AC PSU indicate an efficiency range of 0.87–0.90 for loads exceeding 40%.

Calibration of component efficiencies to Binghamton University experimental data required models similar to those shown in Figs. 2 and 3 except with a single rack of 48 servers and without mechanical power feeds. These single-rack models are sufficient for calibrating specific component efficiencies since experimental data pertain only to the PDU level. Power system measurements were used at a variety of server load levels for both systems, although the variation in results is not captured in the models due to the use of fixed component efficiencies.

The data from modeling the DC-based single-rack system were used to calibrate the efficiency of RectifierV. The power drawn by the rectifier within the UPS is larger than that measured at the PDU since the latter only reports the rack power draw. Therefore, the efficiency at the PDU measurement position is calculated as the DC PDU reading divided by the UPS rectifier DC power, which is the efficiency of RectifierV per Fig. 3. The experimental data for loads ranging from 25% to 100% indicate an efficiency range of η = 0.935–0.950, so an efficiency range of η = 0.94 ± 0.03 is used for RectifierV.

The experimental dataset also compares the AC-based and DC-based PSUs by determining the ratio of the power measured at the PDU for the two systems. Since fixed efficiencies are used in this study, and the buck converter efficiencies are assumed to be equivalent for both systems, a PSU rectifier efficiency of 0.90 achieves a ratio of 1.11, which falls within the experimental data range of 1.09–1.15 while adhering to Southern California Edison's [24] AC PSU efficiency range.
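As a minimal arithmetic sketch of this calibration (neglecting line losses between the PDU and the PSU inputs, and assuming equal buck converter efficiencies in the two PSU types), the PDU power ratio for a common 12 VDC load reduces to the reciprocal of the AC PSU rectifier efficiency:

$$\frac{P_{PDU}^{AC}}{P_{PDU}^{DC}} = \frac{P_{load}/(\eta_{rect}\,\eta_{buck})}{P_{load}/\eta_{buck}} = \frac{1}{\eta_{rect}} = \frac{1}{0.90} \approx 1.11$$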

The calibrated component efficiency ranges and the results of the calibration exercises are summarized in Tables 1 and 2, respectively. Table 1 also includes conservative engineering judgment ranges for the coefficient of performance (COP) for the CRAC units and the efficiencies of fans in the system. These component metric ranges form the basis for the subsequent statistical analysis.

Table 1

Base case efficiency values imposed for various system components

Component model | Metric(a) | Basis
AC transformer | η = 0.97 ± 0.03 | The Green Grid [22] AC transformer data, assuming 20% load
Rectifier (AC UPS) | η = 0.95 ± 0.03 | The Green Grid [22] UPS data, modeled as series combination of rectifier and inverter, 20% load
Rectifier (AC PSU) | η = 0.90 ± 0.03 | Binghamton University experimental data and Southern California Edison [24] AC PSU data, modeled as series combination of rectifier and DC–DC converter, 40% load
Inverter (AC UPS) | η = 0.95 ± 0.03 | The Green Grid [22] UPS data, modeled as series combination of rectifier and inverter, 20% load
DC–DC converter | η = 0.97 ± 0.03 | Southern California Edison [24] AC PSU data, modeled as series combination of rectifier and DC–DC converter, 40% load
RectifierV | η = 0.94 ± 0.03 | Binghamton University experimental data
CRAC | COP = 3.0 ± 0.5 | Typical value
Fan efficiency | η = 0.7 ± 0.2 | Typical value

(a) Expanded uncertainty with 95% confidence.

Table 2

Comparison to calibration data

Component/subsystem | Available data | Model(b)
AC transformer | η ≈ 0.97 [22] | η = 0.97 ± 0.03
AC UPS | η ≈ 0.90 [22] | η = 0.90 ± 0.06(c)
AC PSU | 0.87 ≤ η ≤ 0.90 [24] | η = 0.92 ± 0.06(d)
RectifierV(a) | 0.935 ≤ η ≤ 0.95 | η = 0.94 ± 0.03
AC/DC PSU power draw(a) | 1.09–1.15 | 1.11

(a) Experimental data range per Binghamton University data center measurements.
(b) Expanded uncertainty with 95% confidence.
(c) Uncertainty range calculated using AC UPS rectifier and AC UPS inverter in series.
(d) Uncertainty range calculated using AC PSU rectifier and DC–DC converter in series.

2.2 Power System Framework.

A modified power system calculation framework is used in this study to address the limitations (i.e., the inability to handle converging power flows and the lack of robustness with regard to scalability) of the preliminary method described in Ref. [21]. The approach used here to address these two issues is to set up and solve a system of nonlinear equations. This nonlinear equation set includes component losses, line losses, and a direct solution for the real and reactive system input power values. The algorithm is also designed to allow for flexibility in defining loads as AC or DC, input power sources as AC or DC, and AC or DC electrical junctions. The system assumes balanced three-phase AC transmission. The unknowns in the system are:

  1. The input real power (AC and DC power sources) and input reactive power (AC power sources only) to the system.

  2. The voltage magnitudes (AC and DC electrical lines) and phase angles (AC electrical lines only) at the beginning and end points of all electrical lines.

The system of nonlinear equations solves for these unknowns at each iteration. Figure 4 shows that component m is defined as upstream of component k, which is upstream of component n. A power flow defined as going from k to m uses the subscript km, a flow from k to n uses the subscript kn, and so forth.

Fig. 4: Nomenclature used in setting up the nonlinear set of equations. Voltage magnitudes and phase angles are designated as U and θ, respectively. The subscripts i and e signify inlet and exit values, respectively.
In order to describe the system of equations, a power balance is performed at each component. Figure 5 shows a typical scenario for an AC-based power source k: real power (Pk) and reactive power (Qk) are provided into the component, and the real and reactive power exiting the component in the electrical line toward component n is defined as Pkn and Qkn. Note that Qk=Qkn=0 for DC-based sources. A simple real and reactive power balance on the source k results in the equations [25]
$$P_k = P_{kn} \tag{6}$$

$$Q_k = Q_{kn} \tag{7}$$
Fig. 5: Nomenclature used for an AC-based power source.
In addition, the exiting voltage magnitude ($U_{k,e}$) and phase angle ($\theta_k$) values from the source are specified by the user. Clearly, the power flows $P_{kn}$ and $Q_{kn}$ need to be expressed in terms of U and θ, along with the complex line impedance. If the complex voltage is $E = U e^{j\theta}$, then the complex current in a line with admittance Y is

$$I_{kn} = Y_{kn}\left(E_k - E_n\right) \tag{8}$$

where it is understood that $E_k = E_{k,e}$ and $E_n = E_{n,i}$. The complex power leaving the component is

$$S_{kn} = P_{kn} + jQ_{kn} = E_k I_{kn}^{*} \tag{9}$$
where $I_{kn}^{*}$ is the complex conjugate of $I_{kn}$. Plugging Eq. (8) into Eq. (9), assuming a small phase shift in the lines, yields relations for the real and reactive power as

$$P_{kn} = G_{kn} U_k \left(U_k - U_n\right) - B_{kn} U_k U_n \left(\theta_k - \theta_n\right) \tag{10}$$

$$Q_{kn} = -B_{kn} U_k \left(U_k - U_n\right) - G_{kn} U_k U_n \left(\theta_k - \theta_n\right) \tag{11}$$

where G and B are conductance and susceptance, respectively.
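To make the line power-flow notation concrete, the sketch below evaluates the exact complex-power expression (Eqs. (8) and (9)) and the small-phase-shift forms (Eqs. (10) and (11)) for a single hypothetical AC line; the admittance and voltage values are arbitrary illustrations.

```python
import cmath

# Hypothetical AC line between the exit of component k and the inlet of
# component n. All numerical values are illustrative only.
Y_kn = complex(0.8, -4.0)        # line admittance G + jB, in siemens
G_kn, B_kn = Y_kn.real, Y_kn.imag

U_k, theta_k = 480.0, 0.010      # voltage magnitude (V) and phase angle (rad)
U_n, theta_n = 478.0, 0.002

# Exact form, Eqs. (8) and (9)
E_k = cmath.rect(U_k, theta_k)   # complex voltage U e^{j theta}
E_n = cmath.rect(U_n, theta_n)
I_kn = Y_kn * (E_k - E_n)        # Eq. (8)
S_kn = E_k * I_kn.conjugate()    # Eq. (9)

# Linearized form assuming a small phase shift, Eqs. (10) and (11)
dtheta = theta_k - theta_n
P_lin = G_kn * U_k * (U_k - U_n) - B_kn * U_k * U_n * dtheta
Q_lin = -B_kn * U_k * (U_k - U_n) - G_kn * U_k * U_n * dtheta

print(f"exact:      P = {S_kn.real:7.0f} W, Q = {S_kn.imag:7.0f} var")
print(f"linearized: P = {P_lin:7.0f} W, Q = {Q_lin:7.0f} var")
```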

Similarly, for the power defined as leaving component k and heading toward an upstream component m (Fig. 6), the real and reactive power are calculated using $S_{km} = E_{k,i} I_{km}^{*}$. It follows that the power components are

$$P_{km} = G_{km} U_{k,i} \left(U_{k,i} - U_{m,e}\right) - B_{km} U_{k,i} U_{m,e} \left(\theta_{k,i} - \theta_{m,e}\right) \tag{12}$$

$$Q_{km} = -B_{km} U_{k,i} \left(U_{k,i} - U_{m,e}\right) - G_{km} U_{k,i} U_{m,e} \left(\theta_{k,i} - \theta_{m,e}\right) \tag{13}$$
Fig. 6: Nomenclature used in power balance equations for component k.
For DC circuits, the expressions are derived in a similar manner. All power is real, and $P_{kn}$ is calculated as

$$P_{kn} = G_{kn} U_k \left(U_k - U_n\right) \tag{14}$$

Similarly,

$$P_{km} = G_{km} U_{k,i} \left(U_{k,i} - U_{m,e}\right) \tag{15}$$
The above expressions for $P_{kn}$, $Q_{kn}$, $P_{km}$, and $Q_{km}$ make up an essential part of the system of equations through the use of power balances. The real and reactive power balances for a nonsource component k are, per Fig. 6,

$$P_{km} + P_{kn} + P_{k,loss} = 0 \tag{16}$$

$$Q_{km} + Q_{kn} + Q_{k,loss} = 0 \tag{17}$$

These equations are modified for specific components:

  • A component embedded in a DC circuit will not have a reactive power balance (Eq. (17)).

  • A rectifier will have Qkn=Qk,loss= 0.

  • An inverter will only consider Eq. (16).

The above system of power balance equations provides a set of nonlinear equations for each component. However, establishing the values of $U_{k,e}$ and $\theta_{k,e}$ (if applicable) for each component k is also part of this system of equations, which yields the same number of equations and unknowns. The application of these equations is achieved through component-specific relations as described in Table 3.

Table 3

Outlet voltage and phase angle relations for component k

Component | Outlet voltage relation | Outlet phase angle relation
AC transformer | U_k,e is user specified | θ_k,e = θ_k,i
Rectifier | U_k,e = U_k,i | N/A
RectifierV | U_k,e is user specified | N/A
Inverter | U_k,e = U_k,i | θ_k,e = 0
Buck converter | U_k,e is user specified | N/A
Electrical junction | U_k,e = U_k,i for all branches | θ_k,e = θ_k,i for all branches

The electrical power system solution algorithm is as follows per Fig. 7:

Fig. 7: Algorithm for solving the power distribution system.
  1. Calculate G and B for all electrical lines based on user-specified wire length, gage, and nearest-neighbor spacing.

  2. Determine the size of the system of equations using the criteria specified above.

  3. Provide an initial guess for the solution vector by (a) assuming that all phase angles are zero, (b) assuming zero voltage drop in each electrical line, and (c) assuming zero input power to the system.

  4. Populate the stiffness matrix and force vector by applying the power balance equations and outlet voltage/phase angle relations for all components.

  5. Solve the linearized system of equations.

  6. Update the inlet power to the power sources and the voltages and phase angles in the electrical lines, and calculate the complex current for each electrical line.

  7. Update the component real and reactive (if applicable) power losses.

  8. Check for convergence and update the old solution vector using successive under-relaxation. A relaxation parameter of 0.5 is used here.

  9. Go to step 5 until converged, using the two-norm of the absolute change in the solution vector between successive iterations as the convergence criterion (10^-2 in this study). Convergence was achieved in all system models in this study.

The above algorithm was verified through comparison to the old calculation framework [21] by modeling a redundant AC-based data center power structure based on the single-rack system described in Sec. 2.
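A minimal sketch of the solve-update-relax portion of this algorithm (steps (4)–(9)) is given below, using a generic residual/Jacobian pair as a stand-in for the assembled stiffness matrix and force vector; the function names, the toy system in the usage example, and the default parameters are illustrative assumptions rather than the VTAS implementation.

```python
import numpy as np

def solve_power_system(residual, jacobian, x0, omega=0.5, tol=1e-2, max_iter=200):
    """Illustrative iterative solution of F(x) = 0 with successive under-relaxation.

    residual(x) returns F(x); jacobian(x) returns dF/dx. The solution vector x
    stands in for the source input powers and the line voltage magnitudes and
    phase angles described above.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        J = jacobian(x)                          # step (4): linearized system
        dx = np.linalg.solve(J, -residual(x))    # step (5): solve for the update
        x_next = x + omega * dx                  # steps (6)-(8): relaxed update of the solution vector
        if np.linalg.norm(x_next - x) < tol:     # step (9): two-norm convergence check
            return x_next
        x = x_next
    raise RuntimeError("power system solve did not converge")

# Usage example with a contrived 2x2 nonlinear system standing in for the
# assembled power balance equations (solution: x = [1, 2]).
F = lambda x: np.array([x[0] ** 2 + x[1] - 3.0, x[0] - x[1] + 1.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, -1.0]])
print(solve_power_system(F, J, x0=[1.0, 1.0]))
```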

2.3 Coupled Power and Cooling System Calculations.

The implementation of the coupling of the two systems follows a standard iteration cycle as shown in Fig. 8:

Fig. 8: Algorithm for coupling mechanical and electrical system models.
  1. The mechanical system calculates the cooling equipment power requirements assuming no electrical inefficiencies in the system.

  2. The electrical system model is updated with power feeds to the cooling equipment.

  3. The electrical system is solved to determine the various electrical power losses.

  4. The relative error norm is calculated based on the change in total system real power in each iteration

$$\epsilon = \left|\frac{P_{tot,i} - P_{tot,i-1}}{P_{tot,i-1}}\right| \tag{18}$$

     where $P_{tot,i}$ is the total system electrical power input for iteration i. Convergence is achieved when the error norm falls below 10^-3.
  5. The additional heat sources due to electrical power losses are incorporated into the cooling system calculations. The cooling calculations are then repeated.

  6. Go to step 2 and iterate until convergence. Convergence was achieved for all of the simulations used in this study.
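The coupling loop can be sketched schematically as below; the solve_cooling_system and solve_power_system callables are placeholders for the VTAS mechanical and electrical solvers, and their interfaces (extra whitespace heat in, power draws and in-hall losses out) are assumptions made for illustration.

```python
def run_coupled_calculation(solve_cooling_system, solve_power_system,
                            tol=1e-3, max_iter=50):
    """Illustrative coupling loop between the cooling and power system models.

    solve_cooling_system(extra_heat_W) -> cooling equipment power draws (e.g., a dict)
    solve_power_system(cooling_power_draws) -> (P_tot_W, losses_in_data_hall_W)
    Both callables stand in for the VTAS solvers; their interfaces are assumed here.
    """
    extra_heat = 0.0   # step 1: start with no electrical inefficiencies
    P_tot_old = None
    for _ in range(max_iter):
        cooling_draws = solve_cooling_system(extra_heat)            # cooling equipment power requirements
        P_tot, losses_in_hall = solve_power_system(cooling_draws)   # steps 2-3: update feeds, solve electrical system
        # Step 4: relative change in total system real power, Eq. (18)
        if P_tot_old is not None and abs((P_tot - P_tot_old) / P_tot_old) < tol:
            return P_tot, cooling_draws
        P_tot_old = P_tot
        extra_heat = losses_in_hall   # steps 5-6: feed losses back as heat sources and repeat
    raise RuntimeError("coupled calculation did not converge")
```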

2.4 Statistical Significance Testing.

Statistical significance is tested using a modification of a two-sample z-test for comparing means. A set of N = 50 simulations each for the AC-based and DC-based power systems was performed to collect data, with random sampling of the input variables from normal distributions following the uncertainty intervals provided in Table 1 and a hard efficiency upper limit of 1.0. The results are analyzed by first calculating the mean and standard deviation of the samples, followed by the random standard uncertainty of the sample mean, $\bar{s}$. The combined standard uncertainty is then calculated using [26]

$$\bar{u}_x = \sqrt{\bar{b}^2 + \bar{s}^2} \tag{19}$$

where the systematic standard uncertainty, b¯, is based on conservative engineering judgment and is assumed to be identical for each case (AC-based versus DC-based, uncoupled versus coupled).

Statistical significance is tested for the coupled metrics (y) being greater than the uncoupled values (x) for the cooling system PUE, electrical system PUE, overall PUE, CRAC return temperature, total grid power requirement, and power loss. The same approach is used to test whether metrics for the AC-based system are greater than those for the DC-based system. Here, the combined standard uncertainty of the normal distribution representing the difference between the two distributions is calculated as

$$\bar{u}_{x-y} = \sqrt{\bar{u}_x^2 + \bar{u}_y^2} \tag{20}$$

Statistical significance is then achieved when $(\bar{x} - \bar{y}) < -1.645\,\bar{u}_{x-y}$.
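A compact sketch of this test, under the assumption that Eqs. (19) and (20) are applied to each metric with the systematic standard uncertainties of Tables 6 and 7, is shown below; the sample values in the usage example are synthetic.

```python
import math

def combined_uncertainty(samples, b_sys):
    """Mean and combined standard uncertainty of the sample mean, Eq. (19)."""
    n = len(samples)
    mean = sum(samples) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in samples) / (n - 1))
    s_mean = std / math.sqrt(n)                # random standard uncertainty of the mean
    return mean, math.sqrt(b_sys ** 2 + s_mean ** 2)

def is_significantly_greater(samples_y, samples_x, b_sys, z_crit=1.645):
    """One-sided test that mean(y) exceeds mean(x) with 95% confidence."""
    x_mean, u_x = combined_uncertainty(samples_x, b_sys)
    y_mean, u_y = combined_uncertainty(samples_y, b_sys)
    u_xy = math.sqrt(u_x ** 2 + u_y ** 2)      # Eq. (20)
    return (x_mean - y_mean) < -z_crit * u_xy

# Hypothetical usage with small synthetic samples of overall PUE values:
uncoupled = [1.86, 1.90, 1.84, 1.88, 1.87]
coupled = [2.16, 2.20, 2.14, 2.18, 2.17]
print(is_significantly_greater(coupled, uncoupled, b_sys=0.1))   # True
```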

3 Results

3.1 Base Case Systems.

The predictions for an AC-based system using the mean parameter values in Table 1 are shown in Table 4 for both uncoupled and coupled calculations. The coupled simulations converged in six iterations. The results show that only those components upstream of the cooling system components (i.e., the CRAC units and fans) have additional losses due to the coupled calculations, as expected, since the server power draw is fixed. Incorporating the additional cooling loads increases the predicted compressor power for each CRAC unit by over 60%, thereby increasing the predicted cooling system PUE by 20%. The CRAC blower's predicted power consumption is not affected by the coupling since the air volumetric flow rate is not affected in the model. The cooling system equipment represents a minor part of the overall grid power draw in the data center, as seen by the real power requirement increase of only 17%, so the coupled calculations produce a negligible growth in the electrical system PUE.

Table 4

Details of uncoupled and coupled system calculations for base case AC-based power distribution system

Metric | Uncoupled | Partially coupled(d,e) | Coupled(d,f)
Upstream AC transformer loss | 16.7 kW | 18.4 kW (+9%) | 19.5 kW (+17%)
Downstream AC transformer loss | 11.9 kW | 11.9 kW | 11.9 kW
UPS rectifier loss | 19.2 kW | 19.2 kW | 19.2 kW
UPS inverter loss | 18.3 kW | 18.3 kW | 18.3 kW
PSU rectifier loss(a) | 57.4 W | 57.4 W | 57.4 W
PSU DC/DC converter loss(a) | 15.7 W | 15.7 W | 15.7 W
CRAC compressor power(b) | 71.5 kW | 98.5 kW (+38%) | 116 kW (+62%)
CRAC blower power(b,c) | 545 W | 545 W | 545 W
Cooling system PUE | 1.48 | 1.66 (+12%) | 1.77 (+20%)
Electrical system PUE | 1.38 | 1.39 (+1%) | 1.39 (+1%)
Overall PUE | 1.86 | 2.04 (+10%) | 2.16 (+16%)
Total grid power requirement | P_in = 557 kW, Q_in = 1.14 kvar | P_in = 613 kW (+10%), Q_in = 1.24 kvar (+9%) | P_in = 649 kW (+17%), Q_in = 1.31 kvar (+15%)
Total electrical equipment losses | 257 kW | 313 kW (+22%) | 349 kW (+36%)

(a) Values shown are for a single PSU. All PSUs have identical losses.
(b) Values shown are for a single CRAC unit. Both CRACs have identical characteristics.
(c) Inside air.
(d) Values in parentheses indicate change relative to uncoupled case.
(e) Partially coupled: single update to cooling system design to account for additional cooling loads from electrical inefficiencies.
(f) Coupled: iterative update in electrical and cooling system design to account for additional cooling loads from electrical inefficiencies and 30% partial cooling load from mechanical equipment operation.

Of note is the large growth in cooling load (267 kW), which is nearly as large as the IT load itself (300 kW). A fully uncoupled model does not take into account any cooling loads generated by electrical losses, whereas the coupled scheme here incorporates these loads after the first iteration. The resultant change in PUE values and cooling load, shown in the table as a "partially coupled" case, suggests a growth in the real power requirement of 56 kW due to the additional CRAC power required to handle the additional 186 kW from electrical losses outside the CRAC compressor. The remaining 36 kW growth in real power over iterations 2–6 stems from cooling loads generated by the CRAC units themselves, assumed to be 30% of the total CRAC power draw in this model. It should be noted that this approach represents a "worst case" scenario where all electrical losses and a significant portion of the cooling equipment cooling load are located within the data hall.

Table 5 provides the results for uncoupled and coupled calculations associated with the DC-based power distribution system. The coupled system calculations also converge in six iterations and, as in the AC-based system model, have no impact on the PSU component losses. The DC-based system has less electrical equipment power loss than the AC-based system, so the coupling has less of an impact on data center energy efficiency metrics. This is seen in the smaller increases in cooling system and overall PUE compared to the impact of coupling in the AC-based system model. The reduced electrical equipment power loss results in lower cooling loads in a coupled system, so the increase in CRAC compressor power is smaller for the DC-based system model than for the AC-based system model.

Table 5

Details of uncoupled and coupled system calculations for DC-based power distribution system

Metric | Uncoupled | Partially coupled(d,e) | Coupled(d,f)
AC transformer loss | 15.2 kW | 16.1 kW (+6%) | 17 kW (+12%)
RectifierV loss | 35.9 kW | 35.9 kW | 35.9 kW
PSU DC/DC converter loss(a) | 15.5 W | 15.5 W | 15.5 W
CRAC compressor power(b) | 71.5 kW | 86.1 kW (+20%) | 101 kW (+41%)
CRAC blower power(b,c) | 545 W | 545 W | 545 W
Cooling system PUE | 1.48 | 1.58 (+7%) | 1.68 (+14%)
Electrical system PUE | 1.21 | 1.21 | 1.21
Overall PUE | 1.69 | 1.79 (+6%) | 1.89 (+12%)
Total grid power requirement | P_in = 506 kW, Q_in = 0.365 kvar | P_in = 536 kW (+6%), Q_in = 0.412 kvar (+13%) | P_in = 567 kW (+12%), Q_in = 0.466 kvar (+28%)
Total electrical equipment losses | 206 kW | 236 kW (+15%) | 267 kW (+30%)

(a) Values shown are for a single PSU. All PSUs have identical losses.
(b) Values shown are for a single CRAC unit. Both CRACs have identical characteristics.
(c) Inside air.
(d) Values in parentheses indicate change relative to uncoupled case.
(e) Partially coupled: single update to cooling system design to account for additional cooling loads from electrical inefficiencies.
(f) Coupled: iterative update in electrical and cooling system design to account for additional cooling loads from electrical inefficiencies and 30% partial cooling load from mechanical equipment operation.

3.2 Statistical Significance Tests.

Tables 6 and 7 indicate the uncertainty ranges for the data center energy efficiency metrics when sampled input values from Table 1 are provided for the AC-based and DC-based power distribution systems, respectively. Both tables include the chosen values for the systematic standard uncertainty associated with each metric. Table 6 shows that the cooling system PUE, the overall PUE, the CRAC return air temperature, and the total electrical equipment losses are all deemed statistically significant. These results are expected when examining Table 4 since the mean values for the metrics in Table 6 are similar to those in Table 4, and the effects of coupled calculations in Table 4 are most apparent for those metrics listed as statistically significant in Table 6. It should be noted that the increased CRAC load in Table 4 corresponds to an increased CRAC return temperature, hence the statistical significance for this metric in Table 6.

Table 6

Influence of coupling in cooling and power delivery system calculations for the AC-based system(a)

Metric | Systematic uncertainty | Uncoupled | Coupled | Significant?(b)
Cooling system PUE | 0.1 | 1.49 ± 0.20 | 1.81 ± 0.20 | Y
Electrical system PUE | 0.1 | 1.39 ± 0.20 | 1.40 ± 0.20 | N
Overall PUE | 0.1 | 1.88 ± 0.20 | 2.21 ± 0.21 | Y
CRAC return air temperature, °C | 3 | 32.7 ± 6.0 | 40.9 ± 6.0 | Y
Total grid power requirement, kW | 10% of uncoupled mean value | 565 ± 114 | 663 ± 114 | N
Total electrical equipment losses, kW | 10% of uncoupled mean value | 265 ± 53.6 | 363 ± 54.5 | Y

(a) Uncertainties indicate the bounds of a 95% confidence interval.
(b) Statistical significance is tested for the coupled values greater than the uncoupled values for all metrics with 95% confidence.

Table 7

Influence of coupling in cooling and power delivery system calculations for the DC-based system(a)

Metric | Systematic uncertainty | Uncoupled | Coupled | Significant?(b)
Cooling system PUE | 0.1 | 1.48 ± 0.20 | 1.67 ± 0.20 | N
Electrical system PUE | 0.1 | 1.20 ± 0.20 | 1.21 ± 0.20 | N
Overall PUE | 0.1 | 1.68 ± 0.20 | 1.88 ± 0.20 | N
CRAC return air temperature, °C | 3 | 32.7 ± 6.0 | 37.9 ± 6.0 | N
Total grid power requirement, kW | 10% of uncoupled mean value | 504 ± 114 | 565 ± 114 | N
Total electrical equipment losses, kW | 10% of uncoupled mean value | 204 ± 53.5 | 265 ± 54.0 | N

(a) Uncertainties indicate the bounds of a 95% confidence interval.
(b) Statistical significance is tested for the coupled values greater than the uncoupled values for all metrics with 95% confidence.

The results of the statistical analysis for the DC-based system model, shown in Table 7, surprisingly show no statistical significance in any category. The reduced impact of coupling for this system model (Table 5) compared to the AC-based model (Table 4) produces differences that fall within the uncertainty ranges provided in Table 7, hence the lack of statistical significance. These results suggest that coupled calculations are not necessary for efficient power delivery systems but become increasingly important as the electrical system PUE increases.

The quantified uncertainty ranges and systematic standard uncertainties in Tables 6 and 7 lend themselves to determining the statistical significance when comparing AC-based versus DC-based power distribution systems when coupling is included. Table 8 provides the results of testing the statistical significance of the calculated data center system metrics for AC-based versus DC-based power distribution systems, indicating that differences in statistical significance are seen when comparing uncoupled versus coupled results. No statistical significance is seen in any metric when comparing the uncoupled cases, whereas examination of the coupled cases indicates a statistically significant lower overall PUE and lower electrical equipment losses for the DC-based system. The reason for the lack of statistical significance for the uncoupled systems is the similar values of their metrics, whereas the coupled calculations amplify the differences in electrical losses to the point where some metrics become statistically significant.

Table 8

Influence of coupling in comparing cooling and power delivery system calculations for AC- and DC-based systems(a)

Metric | Significant?(b) (uncoupled) | Significant?(b) (coupled)
Cooling system PUE | N | N
Electrical system PUE | N | N
Overall PUE | N | Y
CRAC return air temperature, °C | N | N
Total grid power requirement, kW | N | N
Total electrical equipment losses, kW | N | Y

(a) Systematic uncertainty values and uncertainty intervals are provided in Tables 6 and 7.
(b) Statistical significance is tested for the AC-based system values greater than the DC-based system values with 95% confidence.

3.3 Discussion.

One advantage of the network modeling approach used by VTAS is the inherent flexibility to model a wide variety of cooling systems (e.g., CRAC- or CRAH-based cooling [9], rear door heat exchangers [11], water-based cold plates [12], and systems with airside economization [16]) and power distribution systems (e.g., AC-based and DC-based systems as in this study). For each component in the cooling network, the user can either specify performance metrics (e.g., the coefficient of performance for a CRAC unit) or derive the metrics using physics-based component models. The electrical power system components also have this capability, but the instability in their physics-based efficiency calculations calls for calibrated, user-defined efficiencies as used in this study. VTAS can also perform parameter sweeps to ascertain the impact of input parameters on system-level metrics such as PUE. Therefore, future work can investigate the influence of coupling the cooling and power distribution system calculations for alternative cooling and power distribution system configurations.

4 Conclusions

Several insights can be gained from this study, notably that coupling system calculations becomes increasingly important as the inefficiencies in the electrical power distribution system increase. Coupling the system calculations tends to impact the cooling equipment load most significantly—and therefore the cooling system PUE—whereas the electrical system PUE is largely unaffected since the coupling only affects the power draw by cooling components and their upstream power distribution components. In addition, the overall PUE is affected for the AC-based system when coupled calculations are used, and the coupled calculations suggest that the DC-based system has a statistically significant lower overall PUE than an AC-based system for the data center model in this study. Finally, coupled calculations are necessary for comparing two different power distribution strategies since the coupling enhances differences in electrical system efficiencies, enabling a greater possibility for passing statistical significance tests. Future work should explore the coupling effect on PUE predictions for models of alternative cooling systems (CRAH-chiller-cooling tower, evaporative cooler, etc.) and the impact of spatial heterogeneity in IT equipment utilization.

Acknowledgment

This material is based upon work supported by the National Science Foundation under Grant No. IIP-1738782. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. Advice from industrial mentors as part of the NSF Industry/University Cooperative Research Center (I/UCRC) in Energy-Smart Electronic Systems (ES2), and measurement data from the Binghamton University data center under the supervision of Kanad Ghosh and Bahgat Sammakia are greatly appreciated.

Funding Data

  • Directorate for Engineering (Grant No. IIP-1738782; Funder ID: 10.13039/100000084).

Nomenclature

b¯ = systematic standard uncertainty
B = susceptance, S
E = complex voltage, V
G = conductance, S
I = current, A
N = number of samples
P = real power, W
Q = reactive power, var
s¯ = random standard uncertainty of the sample mean
S = complex power, VA
U = voltage magnitude, V
Y = admittance, S

Greek Symbols

ϵ = error
η = efficiency
θ = phase angle, radians

Subscripts

C = cooling system
e = exit
E = electrical system
i = inlet
IT = information technology
loss = loss
tot = total

References

1. Jones, N., 2018, "How to Stop Data Centres From Gobbling Up the World's Electricity," Nature, 561(7722), pp. 163–166. 10.1038/d41586-018-06610-y

2. Shehabi, A., Smith, S. J., Sartor, D. A., Brown, R. E., Herrlin, M., Koomey, J. G., Masanet, E. R., Horner, N., Azevedo, I. L., and Lintner, W., 2016, "United States Data Center Energy Usage Report," Lawrence Berkeley National Laboratory, Berkeley, CA, Report No. LBNL-1005775.

3. Masanet, E., Shehabi, A., Lei, N., Smith, S., and Koomey, J., 2020, "Recalibrating Global Data Center Energy-Use Estimates," Science, 367(6481), pp. 984–986. 10.1126/science.aba3758

4. Shehabi, A., Smith, S. J., Masanet, E., and Koomey, J., 2018, "Data Center Growth in the United States: Decoupling the Demand for Services From Electricity Use," Environ. Res. Lett., 13(12), p. 124030. 10.1088/1748-9326/aaec9c

5. Andrae, A., 2017, "Total Consumer Power Consumption Forecast," Nordic Digital Business Summit. https://www.researchgate.net/publication/320225452_Total_Consumer_Power_Consumption_Forecast

6. Abu Bakar Siddik, M., Shehabi, A., and Marston, L., 2021, "The Environmental Footprint of Data Centers in the United States," Environ. Res. Lett., 16(6), p. 064017. 10.1088/1748-9326/abfba1

7. ISO/IEC, 2016, "Information Technology—Data Centres—Key Performance Indicators—Part 2: Power Usage Effectiveness (PUE)," ISO/IEC, Geneva, Switzerland, Standard No. ISO/IEC 30134-2:2016.

8. The Green Grid, 2007, "The Green Grid Data Center Power Efficiency Metrics: PUE and DCiE," White Paper. https://www.missioncriticalmagazine.com/ext/resources/MC/Home/Files/PDFs/TGG_Data_Center_Power_Efficiency_Metrics_PUE_and_DCiE.pdf

9. Wemhoff, A. P., del Valle, M., Abbasi, K., and Ortega, A., 2013, "Thermodynamic Modeling of Data Center Cooling Systems," ASME Paper No. IPACK2013-73116. 10.1115/IPACK2013-73116

10. Future Facilities Ltd., 2021, "6SigmaRoom CFD Software," Future Facilities Ltd., London, UK, accessed Aug. 23, 2021, https://www.futurefacilities.com/products/6sigmaroom/

11. Bhalerao, A., Ortega, A., and Wemhoff, A. P., 2014, "Thermodynamic Analysis of Hybrid Liquid-Air-Based Data Center Cooling Strategies," ASME Paper No. IMECE2014-38359. 10.1115/IMECE2014-38359

12. Bhalerao, A., and Wemhoff, A. P., 2015, "Thermodynamic Analysis of Full Liquid-Cooled Data Centers," ASME Paper No. IPACK2015-48439. 10.1115/IPACK2015-48439

13. Bhalerao, A., Fouladi, K., Silva-Llanca, L., and Wemhoff, A. P., 2016, "Rapid Prediction of Exergy Destruction in Data Centers Due to Airflow Mixing," Numer. Heat Transfer Part A Appl., 70(1), pp. 48–63. 10.1080/10407782.2016.1139984

14. Fouladi, K., Wemhoff, A. P., Silva-Llanca, L., Abbasi, K., and Ortega, A., 2017, "Optimization of Data Center Cooling Efficiency Using Reduced Order Flow Modeling Within a Flow Network Modeling Approach," Appl. Therm. Eng., 124, pp. 929–939. 10.1016/j.applthermaleng.2017.06.057

15. Fouladi, K., Schaadt, J., and Wemhoff, A. P., 2017, "A Novel Approach to the Data Center Hybrid Cooling Design With Containment," Numer. Heat Transfer Part A Appl., 71(5), pp. 477–487. 10.1080/10407782.2016.1277932

16. Khalid, R., and Wemhoff, A. P., 2019, "Thermal Control Strategies for Reliable and Energy-Efficient Data Centers," ASME J. Electron. Packag., 141(4), p. 041004. 10.1115/1.4044129

17. Fan, X., Weber, W.-D., and Barroso, L. A., 2007, "Power Provisioning for a Warehouse-Sized Computer," ACM International Symposium on Computer Architecture, San Diego, CA, June 9–13, pp. 13–23. https://static.googleusercontent.com/media/research.google.com/en//archive/power_provisioning.pdf

18. Meisner, D., Gold, B. T., and Wenisch, T. F., 2009, "PowerNap: Eliminating Server Idle Power," Proceedings of the 14th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), Washington, DC, Mar. 7–11, Paper No. 76386. 10.1145/2528521.1508269

19. Pelley, S., Meisner, D., Wenisch, T. F., and VanGilder, J. W., 2009, "Understanding and Abstracting Total Data Center Power," Workshop on Energy-Efficient Design, University of Michigan, Ann Arbor, MI. https://web.eecs.umich.edu/~twenisch/papers/weed09.pdf

20. Tran, V. G., Debusschere, V., and Bacha, S., 2013, "Data Center Energy Consumption Simulator From the Servers to Their Cooling System," Proceedings of PowerTech, IEEE Grenoble, Grenoble, France, June 16–20, Paper No. 101578. 10.1109/PTC.2013.6652466

21. Ahmed, F., and Wemhoff, A. P., 2017, "Thermodynamic Analysis of Coupled Mechanical and Power Systems in Data Centers," Proceedings of ITherm, Orlando, FL, May 30–June 2, Paper No. 129570. 10.1109/ITHERM.2017.7992591

22. The Green Grid, 2008, "Quantitative Analysis of Power Distribution Configurations for Data Centers," White Paper. https://www.missioncriticalmagazine.com/ext/resources/MC/Home/Files/PDFs/TGG_Qualitative_Analysis_of_Power_Distribution_Configs_for_Data_Centers_WP4_FINAL.pdf

23. Ahmed, F., 2018, "Development of Components for Data Center Power Distribution Systems and Application of Coupled Mechanical and Power System Calculations," Master's thesis, Villanova University, Villanova, PA.

24. Southern California Edison Design & Engineering Services, 2007, "Efficient Power Supplies for Data Center and Enterprise Servers," White Paper. https://www.etcc-ca.com/reports/efficient-power-supplies-data-center-and-enterprise-servers-0

25. Andersson, G., 2004, Modelling and Analysis of Electric Power Systems, Swiss Federal Institute of Technology, Zurich, Switzerland.

26. American Society of Mechanical Engineers, 2013, "Test Uncertainty—Performance Test Codes," ASME, New York, Report No. PTC 19.1.