
In-Memory Analog Computing Technology Landscape 2026 — PatSnap Insights
Technology Intelligence

In-memory analog computing has evolved from instrumentation-era analog matrices to production-ready AI accelerators, with RRAM, PCM, ferroelectric memory, and flash-based architectures all competing for dominance — and no single device technology having emerged as the clear winner as of the most recent patent filings in 2025.

PatSnap Insights Team · Innovation Intelligence Analysts · 11 min read
Reviewed by the PatSnap Insights editorial team

How In-Memory Analog Computing Works: The Physics Behind the Paradigm

In-memory analog computing (IMAC) eliminates the von Neumann memory–processor bottleneck by merging data storage and computation within the same physical substrate — so data never travels across a shared bus to a remote CPU. The core computational mechanism is Ohm’s Law: stored conductance values in a crossbar non-volatile memory (NVM) array multiply an applied voltage representing the input vector, and currents summed along bitlines via Kirchhoff’s current law encode the dot product — completing a full vector–matrix multiplication (VMM) in the analog domain in a single clock cycle.
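The physics described above can be modelled in a few lines of linear algebra. The sketch below is a purely illustrative software simulation (not taken from any cited patent): conductances `G`, voltages `V`, and bitline currents are hypothetical values chosen for the example.

```python
import numpy as np

# Illustrative crossbar model: each crosspoint stores a weight as a
# conductance G[i, j] (siemens); the input vector is applied as
# wordline voltages V[i] (volts).
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # 4 wordlines x 3 bitlines
V = np.array([0.1, 0.2, 0.0, 0.3])        # input voltages

# Ohm's law at each crosspoint gives I[i, j] = G[i, j] * V[i];
# Kirchhoff's current law sums those currents down each bitline,
# so the bitline current vector is exactly the vector-matrix product.
I_bitline = G.T @ V

# Cross-check against an explicit per-crosspoint summation.
assert np.allclose(I_bitline, (G * V[:, None]).sum(axis=0))
```

In hardware this entire product appears on the bitlines in one step; the matrix multiply here stands in for the physics, not for any digital computation the array performs.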

4 — Primary NVM substrate families evaluated (RRAM, PCM, MRAM, FeRAM/FeFET)
1980 — Earliest analog memory matrix patent in dataset (Tektronix, GB)
131,072 — Patterns stored in INFN-Milan AM06 associative memory chip (65 nm CMOS)
2025 — Most recent active filings: Macronix (JP) and Seoul National University (KR)

This massively parallel VMM capability is what makes IMAC so compelling for AI workloads. Deep learning inference is dominated by matrix multiplication — and performing it in the analog domain, directly within the memory array, avoids the energy and latency penalty of moving weights and activations across a memory bus on every cycle. According to research from the National Institute of Standards and Technology surveyed in the dataset, this architecture is particularly suited to the energy-per-inference constraint that defines edge AI deployment.

What is the von Neumann bottleneck?

The von Neumann bottleneck is the performance and energy penalty incurred when a processor must repeatedly fetch data from a separate memory unit over a shared bus. In-memory analog computing eliminates this by co-locating computation with storage, performing arithmetic directly within or adjacent to the memory array without data movement.

Three primary substrate technologies appear repeatedly across the patent and literature dataset: resistive random-access memory (RRAM), ferroelectric memory, and phase-change memory (PCM). A fourth strand — memcapacitive/memcomputing systems — is represented by the Dynamic Computing Random Access Memory (DCRAM) concept introduced by Universitat Autonoma de Barcelona in 2014. Flash-based analog neural memory, as disclosed by Silicon Storage Technology in a 2024 JP patent, represents the most commercially mature embodiment in the dataset.

In-memory analog computing performs vector–matrix multiplication in a single clock cycle by exploiting Ohm’s Law across crossbar NVM arrays: stored conductance values multiply applied input voltages, and currents are summed along bitlines via Kirchhoff’s current law to encode the dot product in the analog domain.

Four Decades of Innovation: From Analog Matrices to Computational SSDs

Publication dates in the retrieved dataset span from 1980 to 2025, with four identifiable innovation phases that trace the field’s evolution from instrumentation hardware to production AI accelerators.

Figure 1 — In-Memory Analog Computing Innovation Phases: Key Milestones by Era
[Timeline graphic: Phase 1, 1980–1995, Foundational Analog Memory (Tektronix analog matrix, GB 1980; Panasonic image processing, JP 2003) · Phase 2, 2014–2016, Memcomputing & RRAM Foundations (UAB DCRAM, 2014; U. Tennessee analog compute, USA 2015) · Phase 3, 2019–2021, RRAM In-Memory Consolidation (NTHU RRAM survey, Taiwan 2019; Duke CMOS/eNVM, USA 2019; CEA-LETI emerging memory, France 2021) · Phase 4, 2022–2025, System-Level & Specialized Architectures (SST analog neural memory, JP 2024; Macronix computational SSD, JP 2025; Seoul Nat'l Univ. NV-CAM, KR 2025)]
The in-memory analog computing landscape spans four distinct phases from 1980 to 2025, with the most recent phase (2022–2025) characterised by system-level integration patents from Macronix International and Seoul National University.

Phase 1 (1980–1995) established the instrumentation and imaging roots of analog memory, with Tektronix’s 1980 GB patent on high-speed analog memory matrices for signal capture and Panasonic’s 2003 JP patent on analog memory for image processing representing this era. Phase 2 (2014–2016) introduced the memcomputing paradigm: Universitat Autonoma de Barcelona’s DCRAM paper in 2014 demonstrated massively parallel polymorphic digital logic using memcapacitive systems, with the same hardware reprogrammed by varying control signals only.

Phase 3 (2019–2021) saw concentrated consolidation around RRAM-based in-memory computing, with National Tsing Hua University surveying RRAM nanoscale device-to-system integration including 3D-stackable RRAM and on-chip training, Duke University comparing CMOS and emerging NVM neuromorphic implementations, and CEA-LETI connecting storage evolution to AI compute applications. According to research published by IEEE, this period marked the transition from device-physics demonstrations to architecture-level system proposals.

“The memory–compute integration boundary is moving from DRAM-adjacent to persistent storage — with the 2025 Macronix computational SSD patent pushing in-memory compute down to the SSD tier.”

Phase 4 (2022–2025) represents the leading edge: filings from Macronix International (JP, 2025), Silicon Storage Technology (JP, 2024), and Seoul National University (KR, 2025) signal a transition toward system-level integration, with claims covering decoder architectures, physical layout, and computational SSD system designs rather than pure array-level device physics.

The in-memory analog computing patent and literature dataset spans publication dates from 1980 to 2025, with four identifiable phases: Foundational Analog Memory (1980–1995), Memcomputing and RRAM Foundations (2014–2016), RRAM-Based In-Memory Computing Consolidation (2019–2021), and System-Level and Specialized Architecture Filings (2022–2025).

Four Technology Clusters Competing for the AI Inference Market

The IMAC patent and literature landscape organises into four distinct technology clusters, each targeting the AI inference market from a different device and architecture angle. RRAM crossbar arrays are the most extensively documented, but flash-based neural memory, memcomputing/logic-in-memory, and NV-CAM/TCAM architectures each represent active competitive fronts.

Cluster 1: RRAM Crossbar In-Memory Computing

RRAM crossbar arrays exploit conductance-state programmability to store synaptic weights and perform analog VMM. National Tsing Hua University’s 2019 survey covers RRAM device properties for analog synapse implementation, 3D-stackable RRAM, and on-chip training. Khalifa University’s 2021 paper introduced an XNOR-based RRAM-CAM with a time-domain analog adder for similarity computation, specifically designed to avoid the voltage saturation issues common to pure analog summation — a significant practical advance for production deployment.
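The XNOR-based similarity idea can be sketched in software. This is a hedged illustration of the general mechanism in hyperdimensional computing — not the Khalifa University circuit itself, and the vector sizes and data are hypothetical: each stored binary hypervector is XNOR-ed against a query (1 where bits agree), and the agreements are counted rather than summed as analog voltages, which is what sidesteps saturation.

```python
import numpy as np

# Hypothetical stored hypervectors and query (binary, high-dimensional).
rng = np.random.default_rng(1)
stored = rng.integers(0, 2, size=(8, 1024))  # 8 stored entries
query = rng.integers(0, 2, size=1024)

# XNOR: 1 wherever a stored bit matches the query bit.
xnor = (stored == query).astype(int)

# Similarity = per-entry count of matching bits (here a digital count,
# standing in for the time-domain summation used in the hardware).
similarity = xnor.sum(axis=1)

# The CAM returns the closest stored entry.
best_match = int(np.argmax(similarity))
```

For random vectors the similarities cluster around half the dimension; a genuine near-duplicate of the query would stand out sharply, which is what makes the scheme usable as an associative search primitive.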

Map the full RRAM in-memory computing patent landscape with PatSnap Eureka’s AI-powered search.

Explore RRAM Patents in PatSnap Eureka →

Cluster 2: Flash-Based Analog Neural Memory

Non-volatile flash memory cells, programmable to multi-level analog conductance states, serve as synaptic weights in neural network inference engines. Silicon Storage Technology’s 2024 JP patent — the most directly relevant active patent in this dataset for commercial AI deployment — discloses word line decoders, control gate decoders, bit line decoders, low/high-voltage row decoders, and physical layout designs for non-volatile flash arrays in deep learning analog neural systems. Poly-AI Technology Limited’s 2022 KR patent describes conversion of trained neural network topology into equivalent analog component networks with weight matrix calculation.
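The core idea behind flash-based analog neural memory — storing a trained weight as one of a limited number of programmable conductance states — can be sketched as a simple quantization step. This is an illustrative model only, not SST's or Poly-AI's disclosed method; the level count and conductance range are assumptions.

```python
import numpy as np

def quantize_to_levels(weights, n_levels=16, g_min=1e-6, g_max=1e-4):
    """Map float weights onto n_levels evenly spaced conductance states
    (a hypothetical stand-in for multi-level flash cell programming)."""
    levels = np.linspace(g_min, g_max, n_levels)
    w = np.asarray(weights, dtype=float)
    # Normalise weights into the programmable conductance window...
    span = w.max() - w.min() + 1e-12
    scaled = g_min + (w - w.min()) / span * (g_max - g_min)
    # ...then snap each value to the nearest programmable level.
    idx = np.abs(scaled[..., None] - levels).argmin(axis=-1)
    return levels[idx]

W = np.random.default_rng(2).normal(size=(4, 4))  # hypothetical trained weights
G = quantize_to_levels(W)
```

The number of reliably distinguishable levels per cell directly bounds analog weight precision — one reason the precision and endurance metrics discussed later matter more than raw throughput.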

Cluster 3: Memcomputing and Logic-in-Memory

This cluster encompasses architectures where memory cells are redesigned to natively execute Boolean and arithmetic logic, reducing or eliminating data movement entirely. Universitat Autonoma de Barcelona’s DCRAM concept (2014) enables massively parallel polymorphic digital logic with the same hardware by varying control signals only. Politecnico di Torino’s 2019 survey systematically maps how data-intensive applications drive NVM-based compute–storage integration. Macronix International’s 2025 JP patent extends this to the SSD tier, claiming a computational SSD using memory-side resources to perform search, compute, and access operations while minimising bus transfers.

Cluster 4: Emerging NV-TCAM and CAM for AI Search

Ternary content-addressable memory (TCAM) implemented in emerging NVM enables parallel search at memory density and energy levels unachievable with SRAM-based TCAM. Fudan University’s 2022 review covers four emerging NVM types for non-volatile TCAM, discussing both SRAM and NV-TCAM for parallel search and AI including neuroscience-oriented computing. Seoul National University’s 2025 KR patent describes a CAM device with a voltage generator and priority encoder for in-memory search operations. IMEC VZW’s 2021 EP patent covers a hardware temporal memory system using scalar-valued memory cells with row/column addressing for sequence-learning compute tasks.
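A TCAM's defining behaviour — every stored word compared against the key at once, with don't-care positions always matching — can be modelled in a few lines. This is a hypothetical software model of the lookup semantics, not a circuit from any cited patent; the entries and masks are invented for illustration.

```python
def tcam_match(entries, key):
    """Return indices of entries matching key.

    entries are (value, mask) pairs where a mask bit of 1 means
    'care' and 0 means 'don't-care' (the ternary X state).
    Hardware evaluates all rows in parallel; this loop models
    only the matching rule.
    """
    return [i for i, (value, mask) in enumerate(entries)
            if (value ^ key) & mask == 0]

entries = [
    (0b1010, 0b1111),  # exact word 1010
    (0b1000, 0b1100),  # 10XX — low two bits are don't-care
    (0b0110, 0b1111),  # exact word 0110
]

assert tcam_match(entries, 0b1010) == [0, 1]  # matches 1010 and 10XX
assert tcam_match(entries, 0b1011) == [1]     # only the wildcard entry
```

The priority encoder mentioned in the Seoul National University claim would then reduce such a multi-match result to a single highest-priority index.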

Figure 2 — Technology Cluster Distribution: Active Patent Filings by Approach (2019–2025)
[Bar chart, assignee/paper count: RRAM Crossbar — 6 · Logic-in-Memory — 3 · NV-CAM/TCAM — 3 · Flash Neural Memory — 2]
RRAM crossbar in-memory computing has the largest representation in the dataset with 6 substantive papers and patents; Logic-in-Memory and NV-CAM/TCAM each have 3; Flash Neural Memory has 2 active commercial filings. Counts reflect substantive technical contributions in the retrieved dataset only.
Key finding: Hybrid analog–digital interfaces are a high-value IP area

Pure-analog output is being replaced with analog-compute/digital-readout hybrids to manage noise and variability. Khalifa University’s RRAM-CAM with time-domain summation (2021) and IMEC’s temporal memory patent (EP, 2021) both exemplify this approach. Hybrid interface IP is identified in the dataset as a high-value, underappreciated filing area.

Where In-Memory Analog Computing Is Being Deployed

The dominant application in the dataset is neural network inference acceleration — particularly VMM as the compute kernel of deep learning. National Tsing Hua University’s 2019 RRAM survey explicitly frames resistive memory as enabling efficient hardware for “matrix-multiplication-dependent neural networks,” and Silicon Storage Technology’s 2024 JP patent targets deep learning artificial neural networks specifically.

Beyond mainstream AI inference, the dataset reveals several additional deployment contexts. Spiking neural network (SNN) applications are addressed by Drexel University’s 2022 review, which covers NVM physical properties for in-memory and in-device computing with spike-based neuromorphic architectures, combining NVM’s non-volatility with spike-timing-dependent plasticity for ultra-low-power learning. Sandia National Laboratories’ 2020 roadmap paper frames market and policy strategy for neuromorphic hardware at the national level.

The INFN-Milan AM06 associative memory chip, developed for the ATLAS Fast TracKer detector at the LHC, stores 131,072 patterns in 65 nm CMOS and implements massively parallel in-memory pattern recognition — demonstrating in-memory computing applied to real-time high-energy physics trigger workloads.

A recurring cluster in the dataset covers associative memory and analog pattern-recognition hardware developed for particle physics experiments. The AM06 chip for the ATLAS Fast TracKer detector (INFN-Milan, 2017) implements massively parallel in-memory pattern recognition in 65 nm CMOS, storing 131,072 patterns. INFN-Pisa’s 2016 paper on the “artificial retina” for track reconstruction at the LHC deploys a massively parallel, biologically inspired algorithm in FPGA for 40 MHz event processing — a demanding real-time compute requirement met through in-memory pattern matching.

At the storage tier, Macronix International’s 2025 JP patent extends the in-memory computing concept to the SSD level, reducing processor utilisation and bus bandwidth by performing search and compute tasks inside the storage device. This is directly applicable to data centre and cloud storage disaggregated architectures, a deployment model that organisations including OECD have identified as central to next-generation computing infrastructure. Anabrid GmbH’s 2021 paper benchmarks modern analog computers against digital processors for computational fluid dynamics and ODE/PDE solving, representing the scientific simulation application domain.

Track emerging NV-CAM, neuromorphic, and computational SSD patent filings in real time with PatSnap Eureka.

Monitor the IMAC Patent Landscape in PatSnap Eureka →

Geographic and Assignee Landscape: Who Holds the Active Patents

Among active, technically relevant patents in the dataset, corporate filers are concentrated in Asia-Pacific jurisdictions, with European research institutes also present. The academic and government research literature is dominated by US institutions, while the most recent active patent filings (2021–2025) are distributed across JP, KR, and EP jurisdictions with no single geography holding more than two active filings.

Figure 3 — Active IMAC Patent Filings by Jurisdiction (2021–2025)
[Active filings by jurisdiction — JP: Silicon Storage Technology (2024), Macronix International (2025) · KR: Seoul National University (2025), Poly-AI Technology Limited (2022) · EP: IMEC VZW (2021)]
Active IMAC patent filings from 2021–2025 are evenly distributed across JP, KR, and EP jurisdictions, with no single geography dominant — indicating an open landscape for international filing programs.

The US academic contribution is substantial in the literature: University of Tennessee, Duke University, George Washington University, Sandia National Laboratories, NIST, Yale University, Drexel University, and Auburn University collectively represent the largest share of foundational and survey literature in the dataset. However, US corporate patent filings are notably absent from the active 2021–2025 cohort in this dataset.

European research institutes — CEA-LETI/Universite Grenoble Alpes (France), Politecnico di Torino (Italy), Universitat Autonoma de Barcelona (Spain), Anabrid GmbH (Germany), IMEC VZW (Belgium), and INFN (Italy) — are strongly represented in the literature and EP patent filings. Khalifa University (UAE) contributes a significant 2021 paper on RRAM-CAM for hyperdimensional computing. According to WIPO patent filing data, East Asia and Europe have been the most active regions for advanced memory technology IP filings in recent years, consistent with this dataset’s distribution.

The large number of inactive KR filings from the 1986–2000 period (Hitachi, Toshiba, Mitsubishi Electric, Panasonic) reflects historical semiconductor memory development rather than in-memory computing per se, and should not be interpreted as current competitive activity in the IMAC space.

Strategic Implications for IP Teams and R&D Leaders

Five strategic signals emerge from the most recently dated results in the dataset (2022–2025), each with direct implications for IP strategy and R&D investment decisions in the IMAC space.

NVM device selection remains the critical risk factor. RRAM is the most extensively cited substrate for analog in-memory computing across the dataset, but CEA-LETI and Fudan University data confirm that PCM, MRAM, and FeFET are all competitive. IP strategists should monitor device-level claims broadly across all four families, as no single device technology has emerged as the clear production winner as of the most recent filings.

System-level integration is the current battleground. Early patents covered device physics and single-array operation; the 2024–2025 filings from Macronix International and Silicon Storage Technology indicate competition has moved up the stack to decoder architectures, physical layout, and computational SSD system claims. R&D teams should prioritise system-level and peripheral circuit innovation over pure array-level filing, according to the patent signal in this dataset.

The analog–digital hybrid boundary is strategically important. Pure-analog output is being replaced with analog-compute/digital-readout hybrids to manage noise and variability. The Khalifa University RRAM-CAM with time-domain summation (2021) and the IMEC temporal memory patent (EP, 2021) both illustrate this transition. Hybrid interface IP is identified as a high-value, underappreciated filing area.

Edge AI and resource-constrained inference are the near-term commercial pull. The dataset consistently positions in-memory analog computing against transformer and CNN inference on edge devices, where energy per inference operation is the binding constraint. Product teams should prioritise benchmark metrics including energy per MAC operation, analog weight precision, and endurance cycles over raw throughput. Research published by Nature has highlighted the energy efficiency advantages of analog in-memory approaches for edge deployment contexts.

Geographic diversification of filing strategy is warranted. Active, technically relevant patents in this dataset span JP, KR, EP, and US jurisdictions with no single jurisdiction holding more than two active filings. For IP strategists, this indicates the landscape is still open for broad international filing programs, particularly in KR and JP where institutional filers — Seoul National University, Silicon Storage Technology, and Macronix — are demonstrably active.

As of the most recent patent filings in this dataset (2025), no single non-volatile memory device technology — among RRAM, PCM, MRAM, and FeRAM/FeFET — has emerged as the clear production winner for analog in-memory computing AI accelerators, according to reviews by CEA-LETI and Fudan University.

Frequently Asked Questions

In-memory analog computing — key questions answered

Still have questions? Let PatSnap Eureka answer them for you.

Ask PatSnap Eureka for a deeper answer →

References

  1. Recent development in analog computation: a brief overview — University of Tennessee, 2015, USA
  2. Advances in Emerging Memory Technologies: From Data Storage to Artificial Intelligence — CEA-LETI / Universite Grenoble Alpes, 2021, France
  3. New Logic-In-Memory Paradigms: An Architectural and Technological Perspective — Politecnico di Torino, 2019, Italy
  4. Neuromorphic Computing Systems: From CMOS To Emerging Nonvolatile Memory — Duke University, 2019, USA
  5. Resistive Memory-Based In-Memory Computing: From Device and Large-Scale Integration System Perspectives — National Tsing Hua University, 2019, Taiwan
  6. Dynamic computing random access memory — Universitat Autonoma de Barcelona, 2014, Spain
  7. RRAM-based CAM combined with time-domain circuits for hyperdimensional computing — Khalifa University, 2021, UAE
  8. The trend of emerging non-volatile TCAM for parallel search and AI applications — Fudan University, 2022, China
  9. Nonvolatile Memories in Spiking Neural Network Architectures: Current and Emerging Trends — Drexel University, 2022, USA
  10. A Roadmap for Reaching the Potential of Brain-Derived Computing — Sandia National Laboratories, 2020, USA
  11. The impact of on-chip communication on memory technologies for neuromorphic systems — Yale University, 2018, USA
  12. Using analog computers in today’s largest computational challenges — Anabrid GmbH, 2021, Germany
  13. Analog Neural Memory System — Silicon Storage Technology, Inc., 2024, JP
  14. Hardware implementation of a temporal memory system — IMEC VZW, 2021, EP
  15. Architecture for computational memories and memory systems — Macronix International, 2025, JP
  16. Content addressable memory device and operating method thereof — Seoul National University Industry-Academic Cooperation Foundation, 2025, KR
  17. Analog hardware implementation of neural networks — Poly-AI Technology Limited, 2022, KR
  18. AM06: the Associative Memory chip for the Fast TracKer in the upgraded ATLAS detector — INFN-Milan, 2017, Italy
  19. The artificial retina for track reconstruction at the LHC crossing rate — INFN-Pisa, 2016, Italy
  20. High-speed acquisition system employing an analogue memory matrix — Tektronix Inc., 1980, GB
  21. Analog memory and image processing system — Panasonic Corporation, 2003, JP
  22. WIPO — World Intellectual Property Organization: Global Patent Filing Trends
  23. IEEE — Institute of Electrical and Electronics Engineers: Emerging Memory and Neuromorphic Computing Research
  24. Nature — Analog In-Memory Computing for Edge AI Applications

All data and statistics in this article are sourced from the references above and from PatSnap's proprietary innovation intelligence platform. This landscape is derived from a targeted set of patent and literature records and represents a snapshot of innovation signals within this dataset only; it should not be interpreted as a comprehensive view of the full industry.
