
PECM, AI, and Direct-To-Chip Cooling

  • Writer: Kirk Abolafia
  • 4 days ago
  • 4 min read

As AI chips scale up in demand, power density, and heat flux, liquid cooling plates with extreme-tolerance microchannels are increasingly critical to the next generation of computing infrastructure. And PECM may be able to help.


Want a quick refresher of PECM's capabilities in microchannels and other tight-tolerance, sensitive features? Read more here.


Contact us anytime at info@voxelinnovations.com


Department of Energy (DOE) data suggests AI energy usage will triple by 2028, with cooling alone accounting for over 40% of that total power usage. With advanced AI data center rack deployments edging toward 100 kW, one of the greatest challenges for manufacturers isn't necessarily the heat itself, but the heat fluctuations of this infrastructure.

Server racks in NOIRLab's HQ in Tucson, AZ. Credit: Wikimedia Commons.

Heat flux is particularly challenging because AI workloads can swing from idle to full compute load in milliseconds, spiking junction temperatures and leaving cooling systems scrambling to keep up. Without rapid, uniform heat removal, these temperature fluctuations can lead to significant performance throttling, material fatigue, or outright component failure. Therefore, the tight-tolerance, high-surface-quality microchannel features in cold plates and heat exchangers are absolutely critical: maximizing contact area, keeping coolant flow uniform, and redirecting heat away from dangerous "hot spots."
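To see why the cold plate's thermal resistance matters during these load swings, here is a minimal sketch using a lumped RC thermal model. All numbers (chip power, thermal resistance, heat capacity) are illustrative assumptions, not measured data from any vendor.

```python
import math

def junction_temp(power_w, r_th, c_th, t_s, t_coolant=30.0):
    """Lumped RC thermal model: junction temperature (deg C) after a
    step load of power_w watts has been applied for t_s seconds.
    r_th = junction-to-coolant thermal resistance (K/W),
    c_th = lumped thermal capacitance (J/K). Values are illustrative."""
    tau = r_th * c_th  # thermal time constant (s)
    return t_coolant + power_w * r_th * (1.0 - math.exp(-t_s / tau))

# A hypothetical 700 W accelerator stepping from idle to full load:
# a lower-resistance cold plate keeps the temperature spike smaller.
for r_th in (0.05, 0.02):  # K/W: modest vs. aggressive cooling
    print(r_th, round(junction_temp(700, r_th, 50.0, 1.0), 1))
```

The steady-state rise is simply power times thermal resistance, which is why halving the junction-to-coolant resistance roughly halves the temperature swing a millisecond-scale load step can produce.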


Studies in high-performance computing suggest that seemingly minuscule changes can result in significantly higher per-watt performance, model throughput, reduced energy costs, and longer hardware life. Specifically, small improvements in thermal uniformity (even down to a 2-degree Celsius difference) and improved-tolerance microchannel designs led to 13% lower temperature fluctuations and up to 90% increases in computing throughput in one study.  

So, how are key manufacturers putting heat-flux control and thermal uniformity into action?

Direct-to-Chip Cooling 

In response to increased energy demand and heat flux, traditional air-cooling infrastructure for data centers has been slowly replaced by direct-to-chip cooling, which places a precision-engineered cold plate directly on top of a processor; conventional cooling systems, by contrast, dissipate heat only after it has spread through the board and surrounding components. In "direct-to-chip," liquid coolant flows through sub-millimeter channels inside the plate, absorbing heat at the point of generation before it can spread into surrounding components. The warmed coolant is then circulated out of the plate and through a secondary heat exchanger, where the heat is transferred to a facility water loop or even an external cooling tower.
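The loop described above is governed by a simple heat balance, Q = ṁ·cp·ΔT: the coolant must carry away the chip's power within its allowed temperature rise. As a rough sketch (assumed water properties and a made-up chip power, not a real system spec), the flow one cold plate would need:

```python
def coolant_flow_lpm(power_w, delta_t_k, cp=4180.0, rho=1000.0):
    """Volumetric water flow (L/min) needed to carry away power_w watts
    with a coolant temperature rise of delta_t_k kelvin.
    Heat balance Q = m_dot * cp * dT; water properties assumed."""
    m_dot = power_w / (cp * delta_t_k)  # mass flow, kg/s
    return m_dot / rho * 1000.0 * 60.0  # convert m^3/s -> L/min

# A hypothetical 1 kW chip with a 10 K allowed coolant rise needs
# only on the order of a litre per minute through the plate.
print(round(coolant_flow_lpm(1000, 10), 3))
```

The small required flow is part of direct-to-chip's appeal: modest pumps can move enough liquid to match what would take large volumes of chilled air.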



Direct-to-chip is ideal for high-heat-flux environments. Credit: Submer.com

By "killing the heat at its source," direct-to-chip cooling spares surrounding components from additional cooling effort, and the use of liquid reduces (or eliminates) the need for energy-intensive chillers, not to mention lightening the load on the data center's HVAC system. Ultimately, direct-to-chip cooling handles far higher heat flux than air or indirect liquid cooling (currently handling >40 kW-per-rack loads, far exceeding current air-cooling capabilities) and is especially crucial as chip power densities increase at seemingly exponential rates.

However, this groundbreaking technology does not come without its drawbacks.


Conventional Challenges   


Generally, conventional manufacturing methods struggle to produce the tight-tolerance microchannels and similar internal features found in direct-to-chip cold plates. Often, these channels have sub-millimeter widths, run several hundred millimeters long, and are routed through intricate serpentine or parallel flow paths. They must also remain dimensionally precise along their entire length to ensure both uniform coolant distribution and a predictable response to heat flux.

Mechanical polishing, for instance, struggles to reach deep or serpentine passages without altering their geometry. Abrasive flow machining risks rounding edges or leaving inconsistent wall finishes, drastically affecting laminar flow and heat-transfer consistency. Precision milling also struggles with high-aspect-ratio features, where tool wear, burrs, and vibration can compromise tolerances. Heat-based processes like laser ablation may introduce heat-affected zones or microcracks that degrade long-term reliability. The reality is that manufacturers are often forced to compromise: either relaxing tolerances and accepting inferior surface quality (thereby limiting cooling performance), or committing to processes that can achieve these features only with significant time and cost drawbacks.
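Why do these tolerances matter so much? The sensitivity can be sketched with the Hagen-Poiseuille relation for laminar flow, where pressure drop scales as 1/D⁴. The example below uses a round channel as a stand-in for a real microchannel of equal hydraulic diameter; all dimensions and flow rates are illustrative assumptions.

```python
import math

def laminar_dp_pa(q_m3s, length_m, diam_m, mu=1.0e-3):
    """Hagen-Poiseuille pressure drop (Pa) for laminar flow in a round
    channel, used as a stand-in for a microchannel of equal hydraulic
    diameter. mu = dynamic viscosity of water (Pa*s). Illustrative only."""
    return 128.0 * mu * length_m * q_m3s / (math.pi * diam_m ** 4)

# Because pressure drop scales as 1/D^4, a channel machined just 10%
# undersized raises the pumping pressure needed for the same flow by ~50%.
base = laminar_dp_pa(1e-8, 0.2, 500e-6)   # 0.5 mm channel, 200 mm long
tight = laminar_dp_pa(1e-8, 0.2, 450e-6)  # same channel, 10% undersized
print(round(tight / base, 2))             # ~1.52
```

A fourth-power dependence on diameter means even small machining errors unbalance flow across parallel channels, starving some passages of coolant and creating exactly the hot spots the cold plate exists to prevent.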


Voxel’s PECM Advantage 


Voxel's PECM technology may excel at producing the tight-tolerance, sub-millimeter microchannels that define modern AI cold plates and heat exchangers. Using a precisely shaped cathode and carefully controlled electrolyte flow, PECM removes material atom by atom, delivering uniform wall finishes along the full length of serpentine or parallel channel networks without altering geometry or damaging delicate features. The process consistently achieves superfinished surfaces (0.005 to 0.4 µm Ra), creating ultra-smooth internal walls that maintain even coolant flow, minimize pressure drop, and maximize heat-transfer efficiency. Because PECM is non-contact, it avoids tool-induced stress, distortion, and heat-affected zones, making it ideal for high-power AI cooling hardware, where thermal performance depends on micron-level precision.


Thin-walled features machined in Inconel. PECM excels at parallel-processing thin-walled features often found in heat exchangers and chip-cooling infrastructure in data centers.

Additionally, PECM can process multiple microchannels in parallel, machining entire arrays of passages simultaneously rather than one at a time. As AI data centers expand and demand for high-performance cold plates accelerates, this inherent scalability not only keeps unit costs competitive, but also ensures repeatable precision across thousands of identical parts, helping manufacturers meet the near-exponential demand for AI infrastructure.  

Ultimately, PECM may enable manufacturers to meet the challenges of unprecedented power densities and thermal loads in AI-enabled data centers, thanks to its sub-micron surface finishes, ultra-tight geometries, and highly scalable parallel processing, feature-to-feature and part-to-part. With smoother, more efficient microchannels at production volumes, data centers can focus on optimization and infrastructure expansion while reducing energy waste, limiting throttling, and extending hardware lifespans through improved heat-flux resistance.


Interested in learning more? Contact us at info@voxelinnovations.com, or call toll-free at 1-800-404-7165.



© 2025 by Voxel Innovations Inc.
