MagnaNet Network
A comparative study on power delivery aspects of compute-in/near-memory approaches using DRAM

Sholih Cholid Hamdy, April 11, 2026

Researchers at the University of Texas at Austin have released a comprehensive technical study addressing one of the most significant hurdles in modern semiconductor design: the power delivery challenges associated with Processing-in-Memory (PIM) architectures. The paper, titled “A comparative study on power delivery aspects of compute-in/near-memory approaches using DRAM,” authored by Siddhartha Raman Sundara Raman, Siyuan Ma, and Lizy Kurian John, provides a critical analysis of how shifting computation directly into Dynamic Random-Access Memory (DRAM) affects electrical stability and thermal management. Published in April 2026, the study arrives at a pivotal moment as the global semiconductor industry transitions toward specialized hardware for artificial intelligence and high-performance computing (HPC).

For decades, the "memory wall"—the growing performance gap between fast processors and relatively slow memory—has limited the efficiency of computing systems. PIM seeks to dismantle this wall by performing calculations where the data resides, thereby eliminating the energy-intensive process of moving data across a narrow bus. However, the UT Austin researchers highlight that while PIM solves the data movement problem, it creates a new "power wall." Traditional DRAM is designed for low-power, periodic access patterns, not for the high-intensity, parallel switching required for complex computations.

The Evolution of the Memory Wall and the Rise of PIM

The context of this research is rooted in the fundamental limitations of the von Neumann architecture, which separates the Central Processing Unit (CPU) from the memory. As datasets for large language models (LLMs) and real-time analytics have grown exponentially, the energy consumed by data movement has begun to exceed the energy used for the actual computation. Industry data suggests that in modern data centers, up to 62% of total system energy is spent moving data between the memory hierarchy and the processor.

To combat this, researchers have proposed DRAM-based PIM. This approach is particularly compelling because DRAM offers high density and a mature manufacturing ecosystem. By utilizing existing structures within the DRAM—such as subarrays, banks, and the 3D-stacked organizations found in High Bandwidth Memory (HBM)—engineers can perform bitwise operations or simple arithmetic directly within the silicon.

However, the UT Austin study points out that these innovations come with a physical cost. When a DRAM chip is used for computation, it experiences non-traditional current demand patterns. Unlike standard memory reads, which are predictable and relatively low-power, PIM operations like multi-row activation (MRA) require massive instantaneous current. This sudden draw can overwhelm the Power Delivery Network (PDN), the intricate system of traces and capacitors responsible for maintaining a steady voltage to the chips.

A New Taxonomy for Power Delivery Analysis

The core contribution of the UT Austin paper is the proposal of a unified taxonomy that characterizes PIM-induced current behavior. The researchers argue that to design reliable systems, engineers must understand these power stresses along two primary dimensions: temporal and spatial.

Temporal Dimension: Burst vs. Sustained

The temporal dimension focuses on the timing of power demands. "Bursty" activations occur when a PIM system triggers a massive parallel operation across many banks simultaneously, leading to a sharp spike in current. Conversely, "sustained" demands occur during long-running parallel executions where the current remains high for an extended period. The study finds that bursty patterns are particularly dangerous for voltage droop—a sudden dip in voltage that can cause logical errors or system crashes.
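The difference between the two temporal patterns can be illustrated with a toy first-order model of the PDN: a supply feeding the die through an effective resistance, buffered by on-die decoupling capacitance. All component values and current profiles below are illustrative assumptions for the sketch, not figures from the paper.

```python
# Toy first-order PDN model: supply VDD feeding the die through an
# effective resistance R_PDN, buffered by on-die decap C_DECAP.
# All values are illustrative assumptions, not figures from the study.

VDD = 1.1        # nominal supply voltage (V)
R_PDN = 0.015    # effective PDN resistance (ohms)
C_DECAP = 40e-9  # on-die decoupling capacitance (F)
DT = 1e-10       # simulation timestep (s)

def simulate_droop(current_profile):
    """Return the worst-case voltage droop (V) for a list of current samples (A)."""
    v = VDD
    worst = 0.0
    for i_load in current_profile:
        # The decap recharges through R_PDN and discharges into the load.
        dv = ((VDD - v) / R_PDN - i_load) * DT / C_DECAP
        v += dv
        worst = max(worst, VDD - v)
    return worst

# "Bursty": many banks fire PIM operations at once for 20 ns, then go idle.
bursty = [8.0] * 200 + [0.5] * 800
# "Sustained": a staggered execution holds a moderate current for 100 ns.
sustained = [2.0] * 1000

print(f"bursty droop:    {simulate_droop(bursty):.3f} V")
print(f"sustained droop: {simulate_droop(sustained):.3f} V")
```

Under these assumed numbers the bursty profile pushes the droop past the 10%-of-nominal region the study flags as dangerous, while the sustained profile stays within a few percent, matching the paper's qualitative finding.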

Spatial Dimension: Localized vs. Distributed

The spatial dimension examines where the power is being consumed on the die. "Localized" demand happens when computation is concentrated in a specific bank or subarray, creating "thermal hotspots" that can degrade the silicon over time. "Distributed" demand spreads the load across the entire 3D stack, which, while better for heat dissipation, can lead to significant IR drop (voltage loss due to the resistance of the delivery wires) across the entire chip.

Using this framework, the researchers analyzed representative PIM techniques. Their findings indicate that while near-bank compute units (small processors placed next to the memory banks) offer the best flexibility, they also generate the most significant thermal hotspots. Meanwhile, multi-row activation—a technique that performs logic by opening multiple rows of memory at once—creates the most severe voltage droop due to the massive charge required to stabilize the bitlines.

Chronology of PIM Development and Power Awareness

The journey toward power-aware PIM has been a decade-long evolution in computer architecture:

  • 2013–2017: Early theoretical work, culminating in the "Ambit" architecture (2017), proposes using DRAM’s internal analog properties to perform bulk bitwise operations. These studies focused primarily on throughput and latency, with less emphasis on the electrical repercussions for the PDN.
  • 2018–2021: Major industry players like Samsung and SK Hynix begin prototyping HBM-PIM and GDDR6-PIM. These prototypes were limited to simple operations to ensure thermal stability within existing data center cooling envelopes.
  • 2022–2024: The rise of Generative AI accelerates the demand for PIM. However, reports of reliability issues in high-stress environments begin to surface, highlighting the need for more robust power delivery research.
  • 2025–2026: The UT Austin study provides the first unified comparative analysis, shifting the focus from "can we compute in memory" to "how do we power computation in memory reliably."

Supporting Data and Technical Analysis

The researchers utilized sophisticated simulation tools to model the PDN of a 3D-stacked DRAM system. The data revealed that under heavy PIM workloads, voltage droop could exceed 10% of the nominal supply voltage. In standard DRAM operations, a droop of more than 5% is often enough to cause bit-flips or synchronization failures.

Furthermore, the study quantified the impact of "multi-row concurrency." When sixteen rows are activated simultaneously for a PIM operation, the peak current is nearly 8.5 times higher than a standard single-row activation. This creates a massive challenge for the decoupling capacitors (decaps) that are supposed to smooth out these spikes. Because DRAM is already space-constrained to maintain high density, there is little room to add more decaps, creating a physical bottleneck for PIM scalability.
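A back-of-the-envelope calculation shows why that 8.5x peak translates into a decap bottleneck: the capacitance needed to absorb a current spike scales linearly with the charge drawn (Q = I·t, C = Q/ΔV). The single-row current and spike duration below are illustrative assumptions; only the 8.5x ratio comes from the study.

```python
# Back-of-the-envelope decap sizing for a multi-row activation (MRA)
# spike, using Q = I*t and C = Q/dV. The single-row current and spike
# duration are illustrative assumptions; the 8.5x factor is the study's.

VDD = 1.1                    # nominal supply (V)
MAX_DROOP = 0.05 * VDD       # keep droop under the 5% reliability limit
I_SINGLE = 0.12              # assumed peak current of one row activation (A)
SPIKE_S = 5e-9               # assumed duration of the current spike (s)

def decap_needed(i_peak):
    """Capacitance (F) needed for local decaps to absorb the spike alone."""
    charge = i_peak * SPIKE_S        # Q = I * t
    return charge / MAX_DROOP        # C = Q / dV

single = decap_needed(I_SINGLE)          # standard single-row activation
mra16 = decap_needed(8.5 * I_SINGLE)     # 16-row MRA draws ~8.5x the peak

print(f"single-row: {single * 1e9:6.1f} nF of decap")
print(f"16-row MRA: {mra16 * 1e9:6.1f} nF of decap")
```

Because the required capacitance grows in direct proportion to the peak current, an 8.5x spike demands 8.5x the decap charge reservoir — silicon area that a density-optimized DRAM die simply does not have.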

The study also highlighted the "IR drop" problem in 3D-stacked architectures. In a vault-based PIM system (where computation happens in a vertical stack of memory dies), the dies furthest from the base logic layer experience the most significant voltage degradation. The researchers found that without mitigation, the top layers of an 8-high HBM stack could see a voltage reduction that slows down the switching speed of the PIM units, leading to timing violations.
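The cumulative nature of that degradation is easy to see in a sketch: current for every die above a given level must flow through that level's TSV (through-silicon via) segment, so the drops add up toward the top of the stack. The per-segment resistance and per-die current below are illustrative assumptions, not values from the paper.

```python
# Cumulative IR drop through the TSV power columns of an 8-high stack.
# Current for every die above a given level flows through that level's
# TSV segment, so the top die accumulates the most resistive loss.
# Resistance and current values are illustrative assumptions.

R_TSV_SEG = 0.005   # assumed effective TSV resistance per die level (ohms)
I_DIE = 0.4         # assumed current drawn by each die's PIM units (A)
VDD = 1.1           # nominal supply at the base logic layer (V)

def voltage_at_die(level, stack_height=8):
    """Supply voltage at `level` (0 = bottom die, next to the base logic)."""
    v = VDD
    for seg in range(level + 1):
        # Segment `seg` carries the current of every die at or above it.
        dies_fed = stack_height - seg
        v -= dies_fed * I_DIE * R_TSV_SEG
    return v

for lvl in range(8):
    print(f"die {lvl}: {voltage_at_die(lvl):.3f} V")
```

With these assumed numbers the bottom die stays comfortably within the 5% budget while the top die falls outside it — the same qualitative picture the researchers report for unmitigated 8-high stacks.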

Mitigation Strategies and Industry Implications

The paper does not merely identify problems; it outlines a series of DRAM-specific mitigation strategies. These are categorized into architectural and circuit-level mechanisms:

  1. Memory Controller Scheduling: The researchers propose "PDN-aware" schedulers that can stagger PIM operations. By ensuring that not all banks activate their PIM units at the exact same nanosecond, the scheduler can flatten the peak current demand.
  2. Data Placement: By intelligently distributing data across banks, the system can avoid localized thermal hotspots, ensuring that no single area of the chip becomes a "thermal runaway" zone.
  3. Bank-Level Power Management: The study suggests implementing more granular power gating within the DRAM, allowing the system to shut down unused logic units instantly to save power for active PIM operations.
  4. Timing Constraints: Adjusting the "tRAS" (Row Active Time) and other timing parameters specifically for PIM operations can provide the PDN with more time to recover between high-current events.
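The first of these strategies can be sketched in a few lines: a scheduler that groups bank activations into staggered "waves" so the instantaneous current never exceeds a budget. The function names, current figures, and budget below are hypothetical illustrations of the idea, not the paper's implementation.

```python
# Sketch of a "PDN-aware" scheduler (strategy 1): instead of issuing a
# PIM command to every bank in the same cycle, stagger start times so
# instantaneous current stays under a budget. All names and numbers are
# illustrative, not from the paper.

I_PIM_BANK = 0.5      # assumed peak current of one bank's PIM unit (A)
I_BUDGET = 2.0        # assumed instantaneous current budget (A)
OP_CYCLES = 4         # cycles a PIM operation keeps its bank active

def schedule(num_banks):
    """Return {start_cycle: [bank, ...]} honoring the current budget."""
    max_parallel = int(I_BUDGET // I_PIM_BANK)   # banks allowed at once
    starts = {}
    for bank in range(num_banks):
        wave = bank // max_parallel              # which stagger wave
        starts.setdefault(wave * OP_CYCLES, []).append(bank)
    return starts

def peak_current(starts):
    """Worst-case instantaneous current for a given schedule."""
    timeline = {}
    for t0, banks in starts.items():
        for t in range(t0, t0 + OP_CYCLES):
            timeline[t] = timeline.get(t, 0) + len(banks) * I_PIM_BANK
    return max(timeline.values())

naive = {0: list(range(16))}                     # all 16 banks fire at once
staggered = schedule(16)
print(f"naive peak:     {peak_current(naive):.1f} A")
print(f"staggered peak: {peak_current(staggered):.1f} A")
```

The trade-off is latency: flattening a 16-bank burst into four waves quadruples the time to issue all operations, which is exactly the kind of throughput-versus-reliability balance the paper argues controllers must navigate.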

While the authors have not issued a formal press release, the research is expected to draw significant attention from major memory manufacturers, and its implication for industry is clear: PIM cannot be treated as a simple add-on, but demands a ground-up redesign of the DRAM's own power delivery architecture.

Future Research Directions and Conclusion

The UT Austin paper concludes by outlining several key directions for future research. One priority is the development of "self-regulating" PIM units that can sense local voltage drops and automatically throttle their performance to prevent errors. Another area of interest is the use of new materials, such as backside power delivery networks (BSPDN), which could provide a more direct and lower-resistance path for current to reach the PIM logic.
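One way to picture a "self-regulating" PIM unit is as a small hysteresis controller: a local voltage sensor halves the unit's issue rate when droop crosses a guard band, then ramps it back up once the supply recovers. The thresholds and control constants below are hypothetical, intended only to illustrate the concept.

```python
# Sketch of a "self-regulating" PIM unit (future direction): a local
# voltage sensor throttles the unit's issue rate when droop crosses a
# guard band. Thresholds and control constants are illustrative.

VDD = 1.1
THROTTLE_AT = 0.96 * VDD   # start throttling at 4% droop
RESUME_AT = 0.99 * VDD     # resume full speed once the supply recovers

def next_rate(v_local, rate):
    """Hysteresis controller: return the new issue rate (0.25-1.0)."""
    if v_local < THROTTLE_AT:
        return max(0.25, rate * 0.5)   # back off quickly under droop
    if v_local > RESUME_AT:
        return min(1.0, rate + 0.1)    # recover gradually
    return rate                        # hold steady inside the guard band

rate = 1.0
for v in [1.10, 1.04, 1.02, 1.07, 1.09, 1.10]:
    rate = next_rate(v, rate)
    print(f"V={v:.2f} -> issue rate {rate:.2f}")
```

The asymmetry — fast back-off, slow recovery — mirrors how hardware throttling loops typically avoid oscillating in and out of the danger zone.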

The implications of this study are far-reaching. As the world moves toward 6G communications, autonomous vehicles, and ubiquitous AI, the need for energy-efficient computing is paramount. DRAM-based PIM offers a path forward, but only if the electrical and thermal challenges identified by Sundara Raman, Ma, and John are addressed.

By providing a unified taxonomy and a clear analysis of PDN stressors, the UT Austin researchers have provided a roadmap for the next generation of reliable, scalable, and efficient memory systems. The study serves as a reminder that in the world of nanometer-scale electronics, the laws of physics—specifically those governing power and heat—remain the ultimate arbiters of innovation. As the industry moves toward the commercialization of these technologies in the late 2020s, the "PDN-aware" design philosophy championed in this paper will likely become a standard requirement for any successful compute-in-memory architecture.

Filed under: Semiconductors & Hardware
