Phase 1: How BSI Rewrote the Sensitivity Equation (2010–2013)
Backside illumination (BSI) delivered a 2.7× sensitivity improvement over conventional front-side illumination (FSI) by separating the optical path from the metal interconnect layers—an architectural shift that allowed photosensitive area to grow from roughly 55% to over 70% of each pixel’s surface. This single structural change enabled Sony to shrink pixel pitch while keeping quantum efficiency above 75% across the 400–700 nm visible spectrum, laying the physical foundation for every subsequent generation of CMOS image sensor innovation.
The manufacturing breakthroughs that made BSI viable at scale were equally significant. Plasma-activated fusion bonding enabled room-temperature wafer bonding with high alignment accuracy, while copper-electroplating-based Through-Silicon Via (TSV) formation allowed electrical connections through thinned photodiode wafers—achieving a wafer thickness of just 21 µm. According to research published in peer-reviewed semiconductor engineering literature, these process advances made BSI a manufacturable, not merely theoretical, architecture for high-volume smartphone sensors.
In a conventional front-side illuminated (FSI) sensor, metal interconnect wiring sits between the lens and the photodiode, blocking a portion of incoming light. BSI flips the silicon wafer so that light strikes the photodiode directly from the back, removing the interconnect obstruction and dramatically increasing the effective light-collecting area per pixel.
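As a back-of-the-envelope check on these figures, the fill-factor change alone accounts for only part of the quoted sensitivity gain; the remainder would come from quantum-efficiency and optical-path improvements that BSI also enables. The sketch below treats the percentages from the text as illustrative inputs:

```python
def fill_factor_gain(ff_fsi: float, ff_bsi: float) -> float:
    """Relative light-collection gain attributable to fill factor alone."""
    return ff_bsi / ff_fsi

# Figures from the text: roughly 55% (FSI) rising to about 70% (BSI).
geometric_gain = fill_factor_gain(0.55, 0.70)  # ~1.27x

# The overall 2.7x figure therefore implies additional gains (e.g. removing
# metal-layer absorption and reflection) beyond pure geometry.
residual_gain = 2.7 / geometric_gain  # ~2.1x from non-geometric factors
```

The split between geometric and non-geometric contributions here is an inference from the numbers in the text, not a published Sony breakdown.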
Dark current—the spurious signal generated even in the absence of light—was another key target during this phase. Hydrocarbon-molecular-ion (C₃H₆) implantation in double epitaxial silicon wafers achieved a 40% reduction in white spot defects, a critical quality metric for professional and scientific imaging applications. These manufacturing refinements, documented across 30 academic papers cross-validated in this analysis, reflect the depth of process engineering investment that underpinned Sony’s BSI transition.
Sony’s backside illumination (BSI) technology increased the photosensitive fill factor of CMOS image sensor pixels from approximately 55% to over 70%, delivering a 2.7× sensitivity improvement over front-side illumination (FSI) and enabling pixel shrinkage while maintaining quantum efficiency above 75% across the 400–700 nm visible spectrum.
Phase 2: Stacked Architecture and the Logic-Die Leap (2013–2020)
Sony’s introduction of 3D stacked CMOS sensors in 2013, pioneered in a 1/4-inch 8MP device, was the most consequential structural innovation since BSI itself. By bonding a dedicated logic die beneath the BSI pixel chip, Sony decoupled image capture from signal processing for the first time, enabling on-chip real-time HDR, high-speed readout at up to 480 fps in Super 35 format, and advanced noise reduction algorithms without sacrificing pixel area.
Two bonding approaches drove this phase. Wafer-on-Wafer (WoW) hybrid bonding with oxide/copper direct bonding enabled high-density interconnects for compact smartphone sensors. For larger-format devices—full-frame and Super 35—Chip-on-Chip (CoC) stacking allowed logic die size to be optimised independently of the pixel array, a critical capability for professional cinema cameras. The 65 nm logic process used in advanced stacked designs achieved pixel performance comparable to conventional BSI at a pixel pitch of just 1.1 µm.
“Sony’s 2019–2021 patent filing peak—averaging 1,105 patents per year—reflects the intensity of R&D investment required to optimise stacked sensor power management, signal processing algorithms, and hybrid bonding reliability at 300 mm wafer scale.”
The patent data makes the scale of this investment concrete. The 2019–2021 period saw the highest filing concentration in Sony’s image sensor history, with between 1,043 and 1,183 patents filed per year. This surge reflects not just pixel architecture work but an expanding scope: signal processing algorithms, HDR control, power management circuits, gyro sensor integration, and biometric authentication features were all being developed in parallel as Sony moved from hardware-centric to system-level innovation.
Explore Sony’s full stacked sensor patent portfolio and competitive filing trends with PatSnap Eureka.
Analyse Patents with PatSnap Eureka →

Sony’s 3D stacked CMOS image sensor, first introduced in a 1/4-inch 8MP device in 2013, combines a backside-illuminated pixel chip with a separate logic die, enabling on-chip real-time HDR and high-speed readout at up to 480 fps in Super 35 format. Sony filed between 1,043 and 1,183 image sensor patents per year during the 2019–2021 peak R&D period.
Phase 3: The 2-Layer Transistor Pixel and What It Unlocks (2021–Present)
Sony’s world-first 2-layer transistor pixel stacked CMOS sensor, announced in December 2021, resolves a fundamental constraint of all previous architectures: in conventional stacked sensors, photodiodes and pixel transistors compete for space on the same substrate layer, forcing trade-offs between light collection area and readout circuitry. By placing photodiodes on the top substrate and pixel transistors on a separate underlying substrate, Sony eliminated that competition—delivering approximately 2× saturation signal level, which directly translates to wider dynamic range and lower noise in high-contrast and low-light conditions.
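The link from saturation signal to dynamic range can be made concrete with the standard single-exposure formula, DR = 20·log₁₀(full well ÷ read noise). Doubling the full-well capacity with unchanged read noise adds about 6 dB. The electron counts below are hypothetical small-pixel values, not Sony-published figures:

```python
import math

def dynamic_range_db(full_well_e: float, read_noise_e: float) -> float:
    """Single-exposure dynamic range in dB: 20*log10(full well / read noise)."""
    return 20 * math.log10(full_well_e / read_noise_e)

# Hypothetical figures for a small mobile pixel:
baseline = dynamic_range_db(6000, 2.0)   # ~69.5 dB
doubled = dynamic_range_db(12000, 2.0)   # ~75.6 dB

gain_db = doubled - baseline  # 20*log10(2) ~= 6.02 dB
```

This is why a roughly 2× saturation signal translates directly into wider dynamic range, provided the redesign does not raise the noise floor.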
Sony first implemented 2-layer transistor pixel technology in the Xperia 1 V smartphone in 2023. By 2024–2025, the technology was branded LYTIA for mobile sensors. In 2025, it was extended to automotive imaging with the IMX735—a 17.42 MP sensor achieving 130 dB dynamic range in priority mode and supporting LED flicker mitigation for ADAS applications.
The automotive extension is particularly significant. Sony’s IMX735 targets tunnel-entry and backlit driving scenarios where dynamic range requirements reach 106–130 dB—far beyond what conventional sensors can deliver. The sensor also incorporates horizontal readout for lidar synchronisation and LED flicker mitigation algorithms, reflecting the specialised signal processing demands of advanced driver-assistance systems (ADAS). According to WIPO’s technology trend analyses, corroborated by industry sources, automotive represents the fastest-growing segment for high-performance image sensors globally.
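To put those dB figures in linear terms, the usual 20·log₁₀ convention for sensor dynamic range converts them into brightness ratios between the darkest and brightest resolvable scene content:

```python
def db_to_ratio(db: float) -> float:
    """Convert a dynamic-range figure in dB to a linear contrast ratio."""
    return 10 ** (db / 20)

ratio_130 = db_to_ratio(130)  # ~3.2 million : 1
ratio_106 = db_to_ratio(106)  # ~200,000 : 1
```

A tunnel exit on a sunny day can span scene contrasts on this order, which is why the 106–130 dB requirement rules out conventional sensors that typically sit well below 100 dB in a single exposure.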
On the mobile side, the LYTIA brand launched in 2024–2025 targets high-contrast and low-light smartphone photography, with edge AI integration enabling real-time computational photography directly on the sensor. Recent patents filed between 2022 and 2025 show Sony embedding convolution and product-sum operation units within the pixel layer itself—a move toward on-sensor AI acceleration that would reduce latency and power consumption compared to offloading processing to a dedicated application processor. Magnetic detection units integrated within the sensor package are also appearing in recent filings, enabling optical image stabilisation without external gyroscopic sensors and reducing overall camera module footprint.
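The "product-sum operation unit" named in those filings is, at its core, a multiply-accumulate (MAC) circuit: the primitive underlying convolution. The sketch below illustrates the operation in plain Python; it is a conceptual model of what an in-pixel unit would compute, not a description of Sony’s circuit design:

```python
def product_sum(window, kernel):
    """Multiply-accumulate: the core operation of a product-sum unit."""
    return sum(w * k for w, k in zip(window, kernel))

def convolve3x3(image, kernel):
    """Valid-mode 3x3 convolution over a 2D list-of-lists image."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(h - 2):
        row = []
        for x in range(w - 2):
            window = [image[y + dy][x + dx] for dy in range(3) for dx in range(3)]
            row.append(product_sum(window, kernel))
        out.append(row)
    return out

# A 3x3 averaging kernel over a flat 4x4 patch leaves values unchanged.
flat = [[9] * 4 for _ in range(4)]
avg = [1 / 9] * 9
result = convolve3x3(flat, avg)  # 2x2 output, each value ~9.0
```

Performing this accumulation at the pixel layer, rather than shipping raw frames to an application processor, is what yields the latency and power savings the patents target.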
Sony’s 2-layer transistor pixel stacked CMOS sensor, announced in December 2021 and first commercialised in the Xperia 1 V smartphone in 2023, places photodiodes on the top substrate and pixel transistors on a separate underlying substrate, achieving approximately 2× saturation signal level compared to conventional stacked sensors. The technology was extended to automotive imaging in 2025 with the IMX735, which achieves 130 dB dynamic range in priority mode.
Patent Portfolio Signals: Where Sony Is Investing Next
Sony’s 14,335 image sensor patents from 2010 to 2026 carry an average citation count of 127 per patent—a metric that indicates high technical influence within the field, as tracked by databases such as those maintained by the European Patent Office. The distribution across technology categories reveals a strategic shift from pure pixel hardware toward system-level and application-specific innovation that has been accelerating since 2015.
From Hardware to System-Level Innovation (2015–2020)
Between 2015 and 2020, Sony’s patent focus broadened to include signal processing algorithms, HDR control circuits, power management, and features that would have been considered peripheral to an image sensor a decade earlier: integrated gyro sensors for optical stabilisation, biometric authentication embedded within the sensor package, and hardware-based image forgery prevention. This expansion reflects a deliberate strategy to make the sensor a platform, not merely a component.
Diversification into Specialised Applications (2020–2026)
The most recent filing cohort signals three emerging vectors. First, on-sensor AI acceleration: patents filed from 2022 onward describe product-sum and convolution operation units integrated for AI-driven image enhancement at the pixel layer—a capability that standards bodies including IEEE have identified as central to next-generation edge vision systems. Second, multi-spectral sensing: polarisation-sensitive pixels and wavelength-selective structures appear in patents targeting NDVI measurement and scientific imaging. Third, medical imaging: surgical system patents describe silicon and InGaAs dual-sensor fusion for procedures requiring both visible and near-infrared imaging simultaneously.
Patent counts for 2024–2026 are subject to the approximately 18-month publication lag standard in patent systems globally. Actual 2024–2025 filing activity is likely higher than currently visible in public databases. The 2019–2021 peak of 1,043–1,183 patents per year may therefore not represent the absolute ceiling of Sony’s recent filing activity.
Event-based vision sensors represent a fourth emerging vector. Sony’s 2025 collaboration with Prophesee on stacked event-based sensors for ultra-low-power AI applications marks an architectural departure from conventional frame-based imaging—a technology class that Nature Electronics has described as potentially transformative for always-on vision at the edge. Separately, Sony announced 394 fps high-speed global shutter sensors for industrial and automotive use in late 2024, extending the speed frontier of conventional frame-based capture.
Track Sony’s emerging patent vectors in real time—from on-sensor AI to event-based vision—using PatSnap Eureka’s R&D intelligence tools.
Explore Full Patent Data in PatSnap Eureka →

Sony’s 14,335 image sensor patents filed between 2010 and 2026 carry an average citation count of 127 per patent. The three largest technology focus areas are electronic equipment (2,181 patents), information processing (1,755 patents), and photoelectric conversion (1,699 patents). Peak filing activity occurred in 2019–2021, averaging 1,105 patents per year.
Market Position and the Road to $26.9 Billion by 2026
Sony holds approximately 42–43% of the global CMOS image sensor market as of 2021–2022, a position built on the successive architectural advantages described above. The 2022 market correction—driven by smartphone inventory destocking and the impact of U.S. trade restrictions—temporarily compressed revenue, but recovery began in 2023 as automotive and industrial demand offset continued softness in consumer electronics.
The structural growth story remains intact. The global CMOS image sensor market is projected to reach $26.9 billion by 2026, growing at a 6.0% compound annual growth rate from 2021. Automotive, medical, and security applications are the primary growth drivers—precisely the segments into which Sony has been diversifying its sensor portfolio since 2020. The strategic implication is that Sony’s application diversification is not merely a hedge against smartphone market saturation; it is a deliberate alignment with the fastest-growing demand pools in the industry.
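The projection above can be sanity-checked by compounding backwards: a $26.9 billion 2026 figure at a 6.0% CAGR implies a 2021 base of roughly $20.1 billion. The calculation assumes the five-year window and rates stated in the text:

```python
def project(base: float, cagr: float, years: int) -> float:
    """Compound a base value forward at a constant annual growth rate."""
    return base * (1 + cagr) ** years

# Back out the implied 2021 market size from the 2026 projection:
implied_2021 = 26.9 / (1.06 ** 5)  # ~$20.1 billion

# Round-trip check: compounding the implied base forward recovers $26.9B.
check_2026 = project(implied_2021, 0.06, 5)
```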
“Sony’s smartphone share is projected to decline from 66% of its sensor revenue mix in 2021 to approximately 45% by 2026—not because smartphone volumes are falling, but because automotive, medical, and industrial applications are growing faster.”
Manufacturing complexity remains the most significant structural challenge. Each architectural leap—from FSI to BSI, from BSI to stacked, from stacked to 2-layer transistor pixel—requires increasingly advanced wafer bonding, TSV formation, and hybrid integration processes. Yield management at 300 mm wafer scale with sub-micron overlay accuracy is a non-trivial engineering problem, and the cost structure of 2-layer pixel manufacturing will need to compress before the technology can penetrate mid-range smartphone tiers at volume. The next frontier, based on recent patent signals, involves integrating event-driven sensing, on-sensor AI acceleration, and heterogeneous stacking combining silicon with III-V semiconductor materials—a combination that would represent the fourth major inflection point in Sony’s image sensor roadmap.