Contemporary data centers require enhanced design and build standards compared to traditional centers.
The first data centers were developed in the 1960s, and elements of their basic design are still evident in centers today. But for a data center to return its investment, its design and build must now meet a growing list of requirements and achieve increasingly higher standards.
The investment drivers identified in Enterprise Ireland’s white paper, The Future Data Center, indicate that there is no single reason for building a data center but a range of possible drivers, such as the need to increase IT capacity, reduce operating costs, improve security, or simply replace an old facility.
These varied investment drivers mean the process of design and construction must be smarter. Key factors shaping the process include:
- Standards and legislative requirements
- Technologies now available to facilitate key data center functions
- The exemplar provided by global cloud and colocation providers
- The radically changing decision-making and procurement processes
Standards and legislative requirements
As with any design and construction project, the process of designing and building a data center must conform to local legislative requirements in terms of land use and planning permits, utility access and provisioning, environmental considerations, and building codes, as well as health, safety, and labor laws.
However, there are further requirements, some mandatory and some voluntary, when designing and building a data center. These may include legislation designed to reduce energy consumption or wastage, as well as industry standards that evaluate the levels of redundancy for which a data center is designed and at which it is operated (such as TIA-942, the Uptime Institute Classification System, and CENELEC EN 50600).
The evolution of data centers that house data from across a wide geographic region, as well as increasing concern about the privacy implications of data traffic, has led to a raft of agreements and legislation to shore up data privacy and sovereignty practices. Further standards apply to the security of data, both physical and cyber.
Hot technology innovation
As server densities rise, the amount of heat generated is also increasing. New processors generate more than five times the heat of older processors, and new servers and switches can generate up to ten times as much heat per square foot as those from ten years ago. More recently, data centers have started to explore a number of new cooling technologies and architectures to add further efficiency and cope with increasing rack densities.
These include:
- Vertical exhaust ducts
- Heat wheels
- Various close-coupled cooling approaches
- Chipset- and server-based solutions, such as processors that generate less heat, higher maximum temperatures at which processors work reliably, improved heat transfer through revised component layouts within the chassis, and solutions that immerse components in coolant
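To put these rack densities in perspective, the arithmetic can be sketched in a few lines. All figures below are illustrative assumptions for the example, not data from the text; the only physical premise is that virtually all electrical power drawn by IT equipment is rejected as heat.

```python
# Illustrative only: a rough rack heat-density estimate with assumed figures.
# Essentially all power drawn by IT equipment becomes heat, so the heat load
# the cooling system must remove roughly equals the rack's power draw.

def rack_heat_density(servers_per_rack: int,
                      watts_per_server: float,
                      rack_footprint_sqft: float) -> tuple[float, float]:
    """Return (total heat load in kW, heat density in W per square foot)."""
    total_watts = servers_per_rack * watts_per_server
    return total_watts / 1000, total_watts / rack_footprint_sqft

# Assumed values: 20 modern 1U servers at 500 W each in one rack,
# occupying roughly 10 sq ft including service clearance.
kw, w_per_sqft = rack_heat_density(20, 500, 10)
print(f"{kw:.1f} kW per rack, {w_per_sqft:.0f} W/sq ft")
# -> 10.0 kW per rack, 1000 W/sq ft
```

Even with these modest assumed numbers, a single rack reaches a heat density far beyond what traditional room-level air cooling was designed for, which is why the close-coupled and immersion approaches above have emerged.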
Cloud and colocation exemplars
The one set of facilities that stands to gain the most from adopting the technological innovations of major cloud players is also the one most threatened by the continuing growth of those players.
These are colocation providers, whose enterprise client base has migrated into the cloud and who may be only partially able to redress the loss of that revenue stream by attracting cloud and managed service providers. In response, they have evolved a business model based on connectivity inside and outside the facility, an interlinked ecosystem of internal communities, and dense, modular, converged systems that can meet variable and scalable loads.
While information on the practices of these organizations is hard to come by, some do share information on their design and build practices. In 2011, Facebook launched its Open Compute Project, initially intended to share information on hardware performance, increase data center efficiency and sustainability, and ‘demystify’ data centers. The initiative has continued and expanded since, among global hyperscale companies.
Decision-making and procurement processes
The process of decision-making and procurement for data center builds has become more thorough and more accountable as data centers become more mission-critical and organizations, therefore, more risk-averse. One consequence is that procurement has become more formalized, less reliant on open tender, and more involved, extending down to items of lower value.
The people involved in decision-making have also changed. Broadly, involvement in decisions now reaches far more widely across an organization, is based on skill set and capability, and will include external specialists within specially constituted project teams.
Partly as a result of the increased capabilities offered by major global suppliers, there is a trend towards a single provider of facility components, including enclosures, power distribution and protection, cooling, cabling, and monitoring, rather than reliance on multiple specialist providers.
In terms of design and build, most projects researched changed some of their design parameters as they progressed, normally as a result of new areas of expertise being introduced. This means flexibility must be built into the process, usually around key ‘milestone’ decision points at which new contracts are tendered or agreed.
The future of data centers?
Data centers will follow some projected overall construction sector trends, but not all of them. A key difference between many construction projects and the data center is the balance between the ‘upfront’ costs of construction and subsequent operational costs.
While there is no such thing as an average data center or an average commercial construction project, the very high operating costs of a data center mean that the construction sector will need to focus on ‘whole of life’ costs, not just initial construction costs.
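The balance between upfront and operational costs can be illustrated with a minimal sketch. All figures here are invented for the example, not drawn from the paper, and discounting is deliberately ignored to keep the arithmetic transparent.

```python
# Illustrative only: why 'whole of life' cost matters more for data centers
# than for typical commercial construction. All figures are assumptions.

def whole_of_life_cost(capex: float, annual_opex: float, years: int) -> float:
    """Total cost of ownership over the facility's life, ignoring discounting."""
    return capex + annual_opex * years

# Assumed: $10M build cost, $2M/year to power, cool, and staff, 15-year life.
capex = 10_000_000
total = whole_of_life_cost(capex, 2_000_000, 15)
opex_share = 1 - capex / total
print(f"total: ${total:,.0f}, opex share: {opex_share:.0%}")
# -> total: $40,000,000, opex share: 75%
```

Under these assumed figures, three-quarters of the lifetime cost is operational, which is the arithmetic reason a whole-of-life focus changes design and construction decisions.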