Underwater Data Centers: From Project Natick to Highlander’s Commercial Project

As global demand for cloud computing, artificial intelligence, and data-intensive services continues to rise, the physical infrastructure supporting the digital world is facing increasing pressure. Traditional land-based data centers consume vast amounts of energy, require extensive cooling systems, and compete for space near urban hubs where latency must be kept low. In response, researchers and engineers have begun exploring unconventional environments for computing infrastructure. One of the most promising is the ocean, which has given rise to the concept of underwater data centers.

The concept of underwater data centers first gained global attention through Microsoft’s Project Natick, launched in 2015. By submerging sealed data center modules on the seafloor, the project aimed to leverage naturally cold seawater for passive cooling, reduce energy consumption, and place computing resources closer to coastal population centers. Project Natick demonstrated not only technical feasibility but also improved hardware reliability in a controlled, oxygen-free underwater environment, challenging long-held assumptions about how and where data centers must operate.

Building on these early experiments, China has taken a significant step forward with Hainan’s first commercial underwater data center project, marking the transition from experimental research to real-world deployment. Unlike pilot programs, the Hainan project is designed for continuous commercial operation, supporting high-performance computing workloads while integrating with renewable energy sources and marine ecosystems. This shift signals a growing confidence in underwater data centers as a viable component of future digital infrastructure.

Project Natick and Underwater Data Centers

Project Natick was a long-term research initiative led by Microsoft to explore the feasibility of deploying standardized, containerized data centers underwater. The core idea was to investigate whether submerging sealed data center modules on the seafloor could deliver meaningful advantages in terms of energy efficiency, reliability, latency, and sustainability compared to conventional land-based facilities. Rather than designing bespoke systems for each location, Natick focused on creating manufacturable, transportable, and rapidly deployable units that could serve cloud users around the world.

The project was motivated by a simple observation: nearly half of the global population lives within close proximity to the ocean. Placing computing infrastructure offshore, near major coastal population centers, could significantly improve service responsiveness while alleviating constraints related to land use, cooling water consumption, and infrastructure build-out timelines.

Phases and Deployment History

The idea was first proposed in 2013. Phase 1 of Project Natick was primarily a proof of concept. A small-scale underwater data center vessel was deployed on the seafloor approximately one kilometer off the Pacific coast of the United States. This deployment ran from August to November 2015 and was designed to validate fundamental assumptions around underwater operation, including thermal performance, environmental stability, and system monitoring.

Phase 2 marked a major step toward real-world viability. Its goal was to demonstrate that a full-scale underwater data center could be economically manufactured, shipped, and deployed in less than 90 days from decision to power-on. The Phase 2 vessel was deployed in June 2018 at the European Marine Energy Centre (EMEC) in the Orkney Islands, United Kingdom, a site chosen for its access to offshore renewable energy and challenging marine conditions.

Technical Architecture and Specifications

The Phase 2 Natick module housed 12 server racks, containing a total of 864 standard Microsoft data center servers, many equipped with FPGA acceleration for enhanced performance. Storage capacity reached 27.6 petabytes, comparable to the storage required for roughly five million high-definition movies. In computational terms, the system delivered processing power equivalent to several thousand high-end consumer PCs.

The sealed pressure vessel measured approximately 12.2 meters in length and 2.8 meters in diameter (around 3.18 meters including external components), similar in size to a standard 40-foot ISO shipping container. The subsea docking structure extended to 14.3 meters in length and 12.7 meters in width, providing mechanical stability and connectivity on the seafloor.
Electrical power consumption was approximately 240 kilowatts, and the system was designed to operate for up to five years without any physical maintenance, a critical requirement for underwater deployments.
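
As a back-of-envelope check, the published Phase 2 figures imply per-server and per-rack budgets. The short Python sketch below derives them from the numbers quoted above; the implied "HD movie" size is an inference from the article's own comparison, not a published specification.

```python
# Back-of-envelope figures derived from the published Natick Phase 2 specs.
RACKS = 12
SERVERS = 864
STORAGE_PB = 27.6
POWER_KW = 240.0

servers_per_rack = SERVERS / RACKS                   # 72 servers per rack
power_per_server_w = POWER_KW * 1000 / SERVERS       # ~278 W per server
storage_per_server_tb = STORAGE_PB * 1000 / SERVERS  # ~32 TB per server

# Implied size of one "HD movie" in the five-million-movie comparison
# (an inference from the article's figures, not a published number).
movie_gb = STORAGE_PB * 1e6 / 5_000_000              # ~5.5 GB per movie

print(f"{servers_per_rack:.0f} servers/rack, "
      f"{power_per_server_w:.0f} W/server, "
      f"{storage_per_server_tb:.1f} TB/server, "
      f"~{movie_gb:.1f} GB/movie")
```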

Cooling, Environment, and Reliability

One of Project Natick’s defining technical advantages was its use of the natural cold temperatures of deep seawater for cooling. This eliminated the need for traditional air conditioning systems and entirely removed freshwater consumption for cooling purposes. The sealed, nitrogen-filled environment also reduced corrosion and human-induced failures, creating a more controlled operating condition than many land-based data centers.

Microsoft leveraged AI-driven monitoring systems to assess server health and environmental parameters continuously. Machine learning models were used to detect early signs of hardware degradation and to study correlations between operating conditions and server longevity. Importantly, despite its unique location, Natick functioned like a conventional data center from a software perspective; its computing resources could be used for cloud services, artificial intelligence workloads, and machine learning tasks just like any terrestrial Microsoft facility.

Customer and Operational Benefits

Rapid provisioning was a central value proposition of Project Natick. By relying on factory-built, standardized modules, Microsoft demonstrated that large-scale data center capacity could be deployed from planning to operation in under 90 days. This model enables cloud providers to respond far more quickly to shifts in demand compared to multi-year land construction projects.

Latency reduction was another major benefit. With data centers located offshore near coastal cities, the physical distance between users and compute resources is dramatically reduced. Given that internet signals travel roughly 200 kilometers per millisecond, moving a data center from thousands of kilometers away to within a few hundred kilometers can reduce round-trip latency by tens of milliseconds. This improvement is critical for latency-sensitive applications such as online gaming, real-time collaboration, interactive web services, and high-frame-rate media streaming.
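
To make the arithmetic concrete, the sketch below applies the ~200 km/ms rule of thumb quoted above to a pair of hypothetical distances; the specific distances are illustrative, not measurements from any deployment.

```python
# Round-trip latency from distance, using the rule of thumb that signals
# in fiber cover roughly 200 km per millisecond (one way).
KM_PER_MS = 200.0

def rtt_ms(distance_km: float) -> float:
    """Idealized round-trip propagation delay, ignoring routing and queuing."""
    return 2 * distance_km / KM_PER_MS

far = rtt_ms(4000)   # distant inland data center: ~40 ms round trip
near = rtt_ms(300)   # offshore module near a coastal city: ~3 ms round trip
print(f"~{far - near:.0f} ms saved per round trip")  # ~37 ms
```

Real-world latency also includes routing, queuing, and processing delays, but propagation distance sets the floor that no amount of network optimization can remove.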

Sustainability and Environmental Vision

Sustainability was embedded into the design philosophy of Project Natick. The underwater data centers were envisioned to operate using locally produced renewable energy, such as offshore wind, tidal, or wave power. When co-located with these sources, Natick-style deployments could theoretically operate as zero-emission data centers, producing no operational waste from power generation, cooling, or on-site human activity.

The modules were also designed with full lifecycle recycling in mind, constructed from recyclable materials, and intended to be dismantled and recycled at the end of their operational life. The long-term vision included “lights-out” operation, with no on-site personnel for extended periods, high system reliability, and potential operational lifetimes of up to a decade.

Broader Impact and Legacy

Project Natick demonstrated that underwater data centers are not merely a conceptual novelty but a technically viable alternative to traditional infrastructure. Its findings influenced industry-wide discussions on sustainable computing, edge deployment, and non-traditional data center locations. Most importantly, Natick helped shift underwater data centers from experimental speculation toward commercially relevant design principles, paving the way for later initiatives such as China’s Hainan underwater data center projects.

Deployment and Lifecycle of Underwater Data Centers

Underwater data centers are typically designed as sealed, modular units that are fully assembled and tested on land before deployment. This factory-first approach allows hardware, networking, and software systems to be validated in controlled conditions, reducing the risk of failure once the unit is submerged. After assembly, the module is transported by ship to the deployment site and carefully lowered to the seafloor using cranes or specialized offshore equipment.

Site selection plays a critical role in deployment. Ideal locations are usually shallow to mid-depth coastal waters with stable seabed conditions, low seismic risk, and proximity to coastal population centers. Once positioned, the module is anchored or docked to a subsea foundation structure and connected to shore via subsea power and fiber-optic cables. These connections enable continuous electrical supply and high-bandwidth data transmission, integrating the underwater facility seamlessly into existing cloud or edge computing networks.

Cooling and Environmental Integration

A defining characteristic of underwater data centers is their reliance on ambient seawater for cooling. The naturally low and stable temperatures of deep or offshore waters allow heat generated by servers to be dissipated passively through heat exchangers, eliminating the need for conventional air conditioning systems. This approach significantly reduces energy overhead and removes the need for freshwater consumption, a major constraint for land-based facilities.
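
To illustrate why passive seawater cooling is attractive, the sketch below estimates the seawater flow needed to carry away a Natick-scale heat load using the standard sensible-heat relation Q = ṁ·c_p·ΔT. The 240 kW load matches the Phase 2 figure quoted earlier; the specific heat and allowed temperature rise are assumed values for illustration.

```python
# Seawater mass flow required to absorb a given IT heat load,
# from the sensible-heat relation Q = m_dot * c_p * delta_T.
HEAT_LOAD_W = 240_000  # Natick Phase 2 draw; essentially all becomes heat
CP_SEAWATER = 3990     # J/(kg*K), typical for seawater (assumed)
DELTA_T = 5.0          # K, allowed coolant temperature rise (assumed)

mass_flow = HEAT_LOAD_W / (CP_SEAWATER * DELTA_T)  # ~12 kg/s
volume_flow_lps = mass_flow / 1.025                # seawater ~1.025 kg/L

print(f"~{mass_flow:.0f} kg/s (~{volume_flow_lps:.0f} L/s) of seawater")
```

A flow of roughly a dozen liters per second is trivial to supply from the surrounding ocean, which is why heat exchangers can replace entire chiller plants in this setting.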

To minimize environmental impact, deployments are engineered to avoid thermal or chemical discharge into the surrounding ecosystem. Heat transfer is controlled and localized, and long-term monitoring studies have shown minimal disturbance to marine life when installations are properly designed and sited.

Monitoring and Remote Operations

Once operational, underwater data centers function in a fully remote and autonomous mode. Continuous monitoring is achieved through extensive sensor networks that track temperature, humidity, pressure, power consumption, vibration, and hardware health. These sensors feed real-time data back to onshore control centers via fiber-optic links.

Advanced analytics and AI-driven monitoring systems are often used to detect early signs of component degradation or abnormal behavior. Predictive maintenance models can identify potential failures well before they occur, allowing operators to manage risk without physical intervention. From a software and workload perspective, the underwater facility behaves like a conventional data center, supporting cloud services, artificial intelligence workloads, and edge computing applications.
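
The article does not describe the specific models operators use, but a minimal illustration of the idea is drift detection over streaming telemetry. The sketch below flags a sensor whose reading wanders from its exponentially weighted moving average; the sensor, thresholds, and sample values are all hypothetical.

```python
# Minimal anomaly flag for streaming telemetry: compare each reading to an
# exponentially weighted moving average (EWMA) of its recent history.
# Sensor names, thresholds, and values are illustrative, not from any
# real deployment.

class EwmaMonitor:
    def __init__(self, alpha: float = 0.1, threshold: float = 3.0):
        self.alpha = alpha          # smoothing factor for the moving average
        self.threshold = threshold  # allowed deviation before flagging
        self.mean = None

    def update(self, value: float) -> bool:
        """Return True if the reading deviates too far from its EWMA."""
        if self.mean is None:
            self.mean = value
            return False
        anomalous = abs(value - self.mean) > self.threshold
        self.mean = self.alpha * value + (1 - self.alpha) * self.mean
        return anomalous

inlet_temp = EwmaMonitor(threshold=2.0)  # degrees C of allowed drift
for reading in [14.1, 14.0, 14.2, 14.1, 17.9]:  # last value simulates a fault
    if inlet_temp.update(reading):
        print(f"alert: inlet temperature {reading} C deviates from trend")
```

Production systems layer far more sophisticated models on top of this pattern, but the principle is the same: detect deviation from learned baselines early enough to migrate workloads before hardware fails.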

Maintenance Philosophy and Reliability

Unlike traditional data centers that rely on frequent human access, underwater data centers are designed around a maintenance-free operational philosophy. The sealed environment limits exposure to oxygen, moisture, and human error, factors that account for a significant portion of hardware failures on land. As a result, systems are engineered for high reliability, redundancy, and long uninterrupted service periods, often ranging from five to ten years.
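
To see why failure rates matter so much when no repair is possible, consider a simple exponential-lifetime model; the annualized failure rates below are assumed for illustration, not figures reported by any project.

```python
# Probability that a non-repairable server survives a full deployment,
# under a simple exponential lifetime model R(t) = exp(-lambda * t).
# The annualized failure rates (AFRs) here are illustrative assumptions.
import math

def survival(afr: float, years: float) -> float:
    """Survival probability given an annualized failure rate."""
    return math.exp(-afr * years)

for afr in (0.04, 0.005):  # hypothetical land-based vs sealed-module AFR
    p = survival(afr, years=5)
    print(f"AFR {afr:.1%}: {p:.1%} of servers still running after 5 years")
```

Under these assumed rates, a sealed module would retain roughly 97% of its servers after five years versus about 82% for the land-based figure, which is why even modest reliability gains compound heavily over a no-touch deployment.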

This approach shifts the focus from reactive maintenance to design-level resilience, where components are selected and configured to operate reliably for the entire deployment lifecycle. Software updates, workload management, and performance optimization are all handled remotely.

Retrieval, Decommissioning, and Recycling

At the end of its operational life, an underwater data center is retrieved rather than abandoned. The module is disconnected from power and network cables, lifted back to the surface using marine recovery equipment, and transported to shore. Once recovered, hardware can be refurbished, recycled, or securely decommissioned according to data protection and environmental regulations.

Underwater data center concepts are explicitly designed with full lifecycle sustainability in mind. Structural materials are chosen for recyclability, and the modular architecture allows entire units to be replaced or upgraded without disturbing the seabed for extended periods. This retrieval-based model contrasts with permanent offshore infrastructure and aligns more closely with circular-economy principles.

Operational Implications

Together, these deployment and lifecycle practices redefine how data centers can be built and operated. By shifting infrastructure offshore, underwater data centers reduce land use, lower cooling energy demands, and bring compute resources closer to coastal populations. While still an emerging approach, their deployment model reflects a broader trend toward modular, low-latency, and environmentally conscious digital infrastructure.

Highlander’s Commercial Underwater Data Center in China

Highlander’s underwater data center project in China represents one of the first serious attempts to move underwater computing infrastructure beyond pilot experiments and into sustained commercial operation. Located off the coast of Hainan Province, the project builds on lessons learned from earlier research initiatives while shifting the focus toward real-world service delivery, operational continuity, and economic viability.

Unlike experimental deployments designed primarily for data collection and validation, Highlander’s initiative was conceived as a revenue-generating data center, intended to support enterprise workloads, high-performance computing (HPC), and AI-driven applications. This transition marks a key milestone in the evolution of underwater data centers, demonstrating confidence in the technology’s maturity and long-term reliability.

Deployment Architecture and Design Philosophy

Highlander’s system follows a modular, containerized architecture, with sealed data center units assembled and tested on land before being deployed to the seabed. These modules are installed in relatively shallow coastal waters near Hainan, enabling efficient connection to mainland power grids and fiber-optic networks while maintaining access to stable underwater temperatures for cooling.

The underwater placement allows the facility to leverage natural seawater cooling, significantly reducing energy overhead compared to conventional air-cooled data centers. The sealed internal environment also minimizes exposure to oxygen and humidity, improving hardware stability and reducing failure rates over extended operational periods.

The Hainan underwater data center is designed to support compute-intensive commercial workloads, including artificial intelligence training, big data analytics, and scientific simulations. These applications benefit from both the high-density compute environment and the proximity to regional demand centers, particularly in southern China and the broader Asia-Pacific region.

By operating as part of the commercial cloud and edge computing ecosystem, Highlander’s facility demonstrates that underwater data centers can integrate seamlessly with existing digital infrastructure, rather than functioning as isolated or experimental systems.

Sustainability and Strategic Significance

Sustainability is a central driver behind the Hainan project. By eliminating freshwater cooling, reducing land use, and enabling integration with offshore renewable energy sources, Highlander’s underwater data center aligns with China’s broader goals around energy efficiency and low-carbon digital infrastructure.

Strategically, the project signals China’s intent to play a leading role in next-generation data center innovation. It demonstrates how underwater deployments can complement terrestrial facilities, particularly in coastal regions where land availability, energy costs, and cooling demands present growing challenges.

Implications for the Future of Underwater Data Centers

Highlander’s commercial deployment in Hainan represents a critical proof point: underwater data centers are no longer confined to research labs or short-term pilots. Instead, they are emerging as a viable infrastructure option for high-density, low-latency, and energy-efficient computing.

As more operators evaluate offshore and underwater deployments, the Hainan project provides an early reference model for how underwater data centers can be designed, operated, and scaled in a commercial context, bridging the gap between experimental innovation and practical digital infrastructure.

The Future of Underwater Data Centers: Benefits and Challenges

As digital demand continues to accelerate, driven by artificial intelligence, edge computing, and data-intensive services, underwater data centers are gaining attention as a potential complement to traditional infrastructure. While still an emerging approach, early deployments suggest that sub-sea computing could play a meaningful role in addressing energy efficiency, latency, and sustainability challenges. At the same time, several technical, environmental, and economic hurdles must be resolved before widespread adoption becomes feasible.

Key Benefits and Opportunities

One of the most compelling advantages of underwater data centers is energy-efficient cooling. By leveraging the naturally low and stable temperatures of seawater, these systems can significantly reduce the energy required for thermal management, which remains one of the highest operational costs in land-based data centers. The elimination of freshwater cooling also offers a clear advantage in water-stressed regions.
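
Power usage effectiveness (PUE), the ratio of total facility power to IT power, is the usual way to quantify this cooling advantage. The comparison below uses assumed overhead figures to illustrate the scale of the saving; they are not measurements from any specific facility.

```python
# Power usage effectiveness: total facility power divided by IT power.
# Overhead fractions are illustrative assumptions, not measured values.
def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    return (it_kw + cooling_kw + other_kw) / it_kw

land = pue(it_kw=240, cooling_kw=90, other_kw=20)    # mechanically chilled
subsea = pue(it_kw=240, cooling_kw=10, other_kw=10)  # passive seawater loop
print(f"land ~{land:.2f} vs subsea ~{subsea:.2f} PUE")
```

Every point of PUE above 1.0 is energy spent on something other than computation, so moving cooling overhead close to zero translates directly into lower operating cost and emissions per unit of compute.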

Proximity to users represents another major benefit. With nearly half of the global population living near coastlines, underwater data centers enable computing resources to be placed closer to demand centers. Reduced physical distance translates directly into lower latency, improving performance for real-time applications such as online gaming, immersive media, industrial control systems, and AI inference at the edge.

From an infrastructure perspective, underwater data centers support a modular and rapidly deployable model. Factory-built units can be assembled, tested, and deployed far faster than traditional facilities that require land acquisition, permitting, and extensive construction. This flexibility allows operators to respond more quickly to shifts in regional demand.

Sustainability is a long-term opportunity. When paired with offshore renewable energy sources such as wind, tidal, or wave power, underwater data centers could operate with extremely low or even zero operational emissions. Their sealed, “lights-out” design also reduces human intervention, lowering failure rates and extending hardware lifespans.

Technical and Operational Challenges

Despite these advantages, underwater data centers face several engineering and operational challenges. Reliability is paramount, as physical access for repairs is costly and complex. Systems must be designed for long-term operation without maintenance, placing high demands on hardware quality, redundancy, and predictive monitoring systems.

Deployment and retrieval costs are also significant. Specialized marine vessels, subsea infrastructure, and environmental surveys add complexity and expense compared to land-based installations. While modular design helps offset these costs at scale, underwater data centers must demonstrate consistent economic competitiveness over their full lifecycle.

Another challenge lies in standardization and scalability. Unlike traditional data centers, which benefit from decades of established standards, underwater deployments still lack universally accepted design, safety, and regulatory frameworks. Scaling from isolated projects to large offshore clusters will require clearer guidelines, interoperability standards, and industry-wide best practices.

Environmental and Regulatory Considerations

Environmental impact remains a critical area of scrutiny. Although studies from pilot projects indicate minimal disruption to marine ecosystems, long-term and large-scale deployments will require continuous environmental monitoring. Heat dissipation, seabed interaction, and potential effects on local biodiversity must be carefully managed to ensure regulatory compliance and public acceptance.

Regulatory complexity also poses a challenge. Underwater data centers sit at the intersection of maritime law, energy regulation, environmental protection, and data governance. Navigating these overlapping frameworks, particularly in international waters or cross-border deployments, adds uncertainty for operators and investors.

Looking ahead, underwater data centers are unlikely to replace terrestrial facilities entirely. Instead, they are best understood as a strategic complement, particularly well-suited to coastal regions, latency-sensitive workloads, and sustainability-driven deployments. Continued advances in materials science, AI-based monitoring, and offshore renewable energy integration are expected to improve both performance and economic viability.

As early commercial projects mature and more operational data becomes available, underwater data centers may transition from niche innovation to a recognized component of global digital infrastructure. Their future will depend not only on technical success but also on careful environmental stewardship, regulatory clarity, and alignment with broader energy and sustainability goals.

EndNote

Underwater data centers represent a rare convergence of engineering innovation, sustainability ambition, and changing digital demand. From early research initiatives like Microsoft’s Project Natick to China’s first commercial deployments, these systems have challenged conventional assumptions about where computing infrastructure must reside and how it should be operated. While still in the early stages of adoption, underwater data centers have already demonstrated tangible benefits in cooling efficiency, latency reduction, and operational resilience.

At the same time, their future success will depend on more than technical feasibility alone. Long-term environmental monitoring, regulatory alignment, and economic scalability will determine whether underwater deployments remain specialized solutions or evolve into a mainstream component of global data infrastructure. As cloud services, artificial intelligence, and edge computing continue to expand, the ocean may increasingly be viewed not as a boundary but as a viable frontier for sustainable digital growth.
