We see them all around the country: windowless, block buildings surrounded by security fencing. What
many people don’t realize is that these fortress-like telco central offices are undergoing a makeover today because of the rapid transformation of communications technology.
The central offices have a long history: they were once home to the switching stations where operators connected calls via the old switchboards. Later, the industry replaced that cumbersome setup with multiple switching systems capable of handling hundreds of thousands of calls each hour.
The shift to edge and cloud computing
With the industry now experiencing a seismic shift toward cloud- and edge-based computing, the need to house all the associated equipment creates fresh challenges. Central offices are attractive locations for several reasons. First, they're spacious, with plenty of room for servers, power supplies and other equipment. Second, these buildings are already in place and highly connected, so there's no need to start from scratch.
The imperative to find more space for edge/cloud servers and other equipment will only intensify in the near future, and data usage will expand at a breakneck pace. To provide some perspective: in 2015, usage stood at about 72.5 exabytes, a staggering amount (an exabyte is the equivalent of a billion gigabytes). Yet it pales next to the projected usage for 2020: 194.4 exabytes. That is a near-incomprehensible quantity of data, and the telco industry must prepare accordingly, without the luxury of time.
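A quick back-of-the-envelope calculation, using only the two figures above, makes the pace concrete: roughly a 2.7-fold increase over five years, or about a 22 percent compound annual growth rate.

```python
# Back-of-the-envelope calculation using only the two figures cited above.
usage_2015 = 72.5    # exabytes, 2015
usage_2020 = 194.4   # exabytes, projected for 2020
years = 5

growth_factor = usage_2020 / usage_2015        # overall increase over the period
cagr = growth_factor ** (1 / years) - 1        # implied compound annual growth rate

print(f"Overall growth: {growth_factor:.2f}x")     # ~2.68x
print(f"Implied annual growth rate: {cagr:.1%}")   # ~21.8%
```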
Accommodating the growth
With this kind of growth on the horizon, more data-center capacity has become an utter necessity. But the equipment has to fit uniformly into the allotted space, a space that wasn't built for that purpose. That calls for careful standardization, which is where the Open Compute Project (OCP) comes in with a rack standard designed specifically for the existing central office. The new standard, CG-OpenRack-19, captures the efficiency and scale of the OpenRack standard for large data centers while adding considerations for EIA-310 compatibility and carrier-grade requirements.
Organizations are responding rapidly, unveiling compute nodes, sleds, frames and integration offerings that follow the carrier-grade, OCP-Accepted OpenSled specification, which was developed to work with products built to CG-OpenRack-19.
Many benefits
The beauty of the carrier-grade OpenSled and CG-OpenRack-19 architecture is that it renders much of the facility renovation—and its related costs—unnecessary. New racks can roll into the central office just as they are, without the need for extensive interior construction and retrofitting.
The difference in cost can be dramatic. A job that would have required a three-month renovation is now often a simple, three-day process. The monetary savings here are obvious.
This technology is working in tandem with other efficiency measures such as network virtualization. By using software in the place of bulky hardware, the new data centers will free up large blocks of space for other equipment.
Some key advantages of the carrier-grade CG-OpenRack-19 architecture include:
- DC power is provided to all the servers and storage devices from a common rectifier. This aggregation and rack-level power conversion dramatically improves energy efficiency, consolidates redundant components and isolates source-power changes to a single location.
- Eliminating the power supplies from each server and storage enclosure frees up space and improves airflow.
- Localized cooling (per-sled thermal management) delivers cooling only where it is needed.
- The airflow impedance of one sled does not affect another, so no minimum impedance per sled is required, which reduces the overall power consumed for cooling.
- The architecture allows customers to fit the existing site layout and meet specific agency requirements and environments. It supports heterogeneous racks and compliance with RF-emissions, acoustic-noise and seismic requirements, as well as test suites such as NEBS.
- Fan aggregation over multiple servers improves efficiency and airflow while reducing acoustic noise and frequency.
- Rack-based blind-mate power and optical interconnects make sled replacement almost instant (< 1 minute).
- Predefined server-to-port associations drastically reduce system setup time and operator costs, and are not affected by sled replacement/upgrade (no risk to system configuration and connectivity); a simple sketch of this idea follows the list.
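To make that last point concrete, here is a minimal, hypothetical sketch of a fixed slot-to-port map. The names, ports and helper function are invented for illustration and are not taken from any OCP specification; the point is simply that when the network association belongs to the rack slot rather than to the sled occupying it, swapping a sled changes the inventory but not the configuration.

```python
# Hypothetical illustration only: none of these names or values come from an
# OCP specification. The network configuration is keyed to the rack slot
# (via blind-mate connectors), not to the individual sled.

# Fixed, predefined association: rack slot -> top-of-rack switch port.
SLOT_TO_SWITCH_PORT = {
    "slot-01": "tor-sw1/port-1",
    "slot-02": "tor-sw1/port-2",
    "slot-03": "tor-sw1/port-3",
}

# Inventory of which sled currently occupies each slot (this changes over time).
slot_to_sled = {
    "slot-01": "sled-A100",
    "slot-02": "sled-B205",
    "slot-03": "sled-C310",
}

def replace_sled(slot: str, new_sled: str) -> None:
    """Swap the sled in a slot; the slot-to-port mapping is untouched."""
    slot_to_sled[slot] = new_sled

# Replacing a sled updates the inventory but requires no network reconfiguration:
replace_sled("slot-02", "sled-D415")
print(SLOT_TO_SWITCH_PORT["slot-02"])  # still tor-sw1/port-2
```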
As data usage climbs in the coming years, industry ingenuity is keeping pace. The collaborative effort of experts from various companies and disciplines is ensuring that the systems will be in place to meet the demands of the future, because the future won't wait for us.
Jeff Sharpe is director of product strategy, network and communications solutions for ADLINK Technology