In recent years, efficiency has become one of the most buzzed-about topics in the uninterruptible power system (UPS) world — not surprising, considering the topic’s relevance to organizations seeking to slash ever-escalating energy bills.
While a highly efficient UPS can certainly help cut utility costs, it is important to recognize that several factors influence a unit’s true efficiency. In this three-part series, we will not only help make sense of the numbers, but also explore the various aspects of UPS efficiency — what it means, how it is affected by factors such as load level and design, and how to get the most out of your UPS.
To begin with, the efficiency rating of a UPS — expressed as a percentage — indicates how much of the incoming utility power reaches the load versus how much is lost in UPS operation. For example, a UPS rated 98 percent efficient passes 98 percent of utility power to the load, while a 94 percent efficient unit delivers 94 percent. The remainder is lost, primarily as heat, and represents wasted operating expense. However, while a nameplate comparison would seem to indicate that a higher efficiency rating means greater energy savings, these numbers often don’t tell the full story.
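To make the arithmetic concrete, the short Python sketch below works through the two ratings mentioned above for a hypothetical 100 kW load (the load figure is illustrative, not from the article):

```python
def ups_losses(load_kw: float, efficiency: float) -> tuple[float, float]:
    """Return (input_kw, loss_kw) for a UPS serving load_kw at a given efficiency.

    Efficiency is the fraction of input power delivered to the load,
    so input = load / efficiency, and the difference is lost, mostly as heat.
    """
    input_kw = load_kw / efficiency
    return input_kw, input_kw - load_kw

# Hypothetical 100 kW IT load at the two ratings discussed above
for eff in (0.98, 0.94):
    input_kw, loss_kw = ups_losses(100.0, eff)
    print(f"{eff:.0%} efficient: draws {input_kw:.1f} kW, loses {loss_kw:.1f} kW")
```

At 98 percent the unit wastes about 2 kW; at 94 percent, about 6.4 kW — more than triple the loss for a four-point difference in rating.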
Over the past decade, UPS systems have seen significant improvements in energy efficiency. Combined with other total cost of ownership advantages such as bolstered reliability and reduced footprint, these advances have prompted many to upgrade legacy UPS systems. The need to improve efficiency and minimize power consumption has even been recognized by the Environmental Protection Agency (EPA), which introduced an ENERGY STAR program for data centers featuring independently certified products that save energy without sacrificing features or functionality. An ENERGY STAR certified UPS, for instance, can reduce energy losses by 30 to 55 percent, according to the EPA.
Indeed, even small increases in efficiency can translate to thousands of dollars in savings, both by delivering more usable power and by lowering cooling costs. Utility costs matter especially because they recur over the long term, and efficiency plays a significant role in minimizing them. Selecting a UPS that is just 1 percent more efficient can yield enormous savings in power consumption over the unit's typical 15-year lifetime. While actual savings depend on utility rates and the load supported, conservative calculations put lifetime savings in the hundreds of thousands of dollars. In addition, less efficient UPSs discharge more heat, which in turn requires more cooling and further boosts operating costs.
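As a rough sketch of why a single percentage point matters, the estimate below compares two units over 15 years. The load, utility rate, and cooling overhead are illustrative assumptions, not figures from the article:

```python
HOURS_PER_YEAR = 8760

def lifetime_savings(load_kw, eff_low, eff_high, rate_per_kwh,
                     cooling_factor=1.4, years=15):
    """Estimate the lifetime cost difference between two UPS efficiencies.

    cooling_factor > 1 accounts for the extra cooling energy needed to
    remove each kW of UPS heat loss (assumed value; site-dependent).
    """
    loss_low = load_kw / eff_low - load_kw    # kW lost by less efficient unit
    loss_high = load_kw / eff_high - load_kw  # kW lost by more efficient unit
    extra_kw = (loss_low - loss_high) * cooling_factor
    return extra_kw * HOURS_PER_YEAR * years * rate_per_kwh

# Hypothetical: 1 MW load, 95% vs. 96% efficient, $0.12/kWh
print(f"${lifetime_savings(1000, 0.95, 0.96, 0.12):,.0f}")
```

Under these assumptions the single-point improvement is worth roughly a quarter of a million dollars over the UPS's life — consistent with the "hundreds of thousands" the article cites for large installations.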
UPSs are frequently regarded as one of the biggest sources of energy loss in power distribution systems. A UPS’s efficiency follows a curve determined largely by capacity utilization: when the UPS is not running at full load, its efficiency drops. Basing expected efficiency solely on the nameplate rating is therefore a mistake. Efficiency ratings are often misunderstood, and side-by-side comparisons of these numbers can be misleading because they ignore the effect of load level, which can dramatically reduce the net efficiency a UPS actually achieves. In our next installment, we will examine the critical role that load levels play in a UPS’s efficiency.
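The efficiency curve described above can be illustrated with a simple loss model. The fixed and proportional loss coefficients below are hypothetical, chosen only to show the shape of the curve, not measured values for any real unit:

```python
def efficiency(load_kw, fixed_loss_kw=2.0, prop_loss=0.01):
    """Simple UPS loss model: a fixed loss (controls, magnetics, fans)
    plus a loss proportional to load. At light load the fixed loss
    dominates, which is why efficiency falls away from its rated peak."""
    loss_kw = fixed_loss_kw + prop_loss * load_kw
    return load_kw / (load_kw + loss_kw)

# Hypothetical 100 kW-rated unit at decreasing load levels
for load_kw in (100, 50, 25, 10):
    print(f"{load_kw:>3} kW load: {efficiency(load_kw):.1%} efficient")
```

With these assumed coefficients, efficiency slides from about 97 percent at full load to the low 80s at 10 percent load — the kind of gap a nameplate comparison hides entirely.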