Abstract:Nonlinear computation is essential for a wide range of information processing tasks, yet implementing nonlinear functions using optical systems remains a challenge due to the weak and power-intensive nature of optical nonlinearities. Overcoming this limitation without relying on nonlinear optical materials could unlock unprecedented opportunities for ultrafast and parallel optical computing systems. Here, we demonstrate that large-scale nonlinear computation can be performed using linear optics through optimized diffractive processors composed of passive phase-only surfaces. In this framework, the input variables of nonlinear functions are encoded into the phase of an optical wavefront—e.g., via a spatial light modulator (SLM)—and transformed by an optimized diffractive structure with spatially varying point-spread functions to yield output intensities that approximate a large set of unique nonlinear functions, all in parallel. We provide a proof establishing that this architecture serves as a universal function approximator for an arbitrary set of bandlimited nonlinear functions, covering wavelength-multiplexed nonlinear functions as well as multivariate and complex-valued functions that are all-optically cascadable. Our analysis also indicates the successful approximation of typical nonlinear activation functions commonly used in neural networks, including the sigmoid, tanh, ReLU (rectified linear unit), and softplus. We numerically demonstrate the parallel computation of one million distinct nonlinear functions, accurately executed at wavelength-scale spatial density at the output of a diffractive optical processor. Furthermore, we experimentally validate this framework using in situ optical learning, approximating 35 unique nonlinear functions in a single shot with a compact setup consisting of an SLM and an image sensor. These results establish diffractive optical processors as a scalable platform for massively parallel universal nonlinear function approximation, paving the way for new capabilities in analog optical computing based on linear materials.
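The following minimal Python sketch illustrates the core principle described in the abstract above: because a detector records intensity, the squared magnitude of a purely linear, bandlimited transform of a phase-encoded input is already a nonlinear function of that input, and its coefficients can be optimized toward a chosen target. The basis size, target function, and plain gradient-descent routine are illustrative assumptions, not the authors' diffractive model.

import numpy as np

# Illustrative sketch: approximate a nonlinear target f(x) by the intensity of a
# linear complex transform acting on a phase-encoded input,
#   I(x) = |sum_k a_k * exp(i*k*x)|^2,
# i.e. the squared magnitude of a bandlimited "diffracted" field.
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 400)              # input variable (phase-encoded)
target = 1.0 / (1.0 + np.exp(-3.0 * x))          # example target: a sigmoid (nonnegative)

K = 8                                            # highest retained "diffraction order" (assumed)
basis = np.exp(1j * np.outer(x, np.arange(-K, K + 1)))   # phase-encoded Fourier basis
a = 0.1 * (rng.standard_normal(2 * K + 1) + 1j * rng.standard_normal(2 * K + 1))

lr = 0.02
for _ in range(20000):                           # plain (Wirtinger) gradient descent
    field = basis @ a                            # linear, diffraction-like transform
    err = np.abs(field) ** 2 - target            # detector sees intensity, hence nonlinearity
    a -= lr * 2.0 * (basis.conj().T @ (err * field)) / x.size

print("mean squared error:", np.mean((np.abs(basis @ a) ** 2 - target) ** 2))

In the diffractive setting described above, many such intensity outputs are produced in parallel at different detector locations, each optimized toward a different target function.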
Yibo Dong, Yuchi Bai, Qiming Zhang, Haitao Luan, Min Gu
DOI:10.1186/s43593-025-00106-9
Abstract:Optical diffractive neural networks (DNNs) offer superb parallelism and scalability for the direct analogue processing of planar information. However, their complete reliance on coherent light interference constrains integration and computational frequency, and results in low diffraction efficiency and robustness. Here, we present an optical graphics processing unit (OGPU) with a vertically integrated architecture, addressing these challenges through the use of an addressable vertical-cavity surface-emitting laser (VCSEL) array. This array functions as a high-speed planar information fan-in device, with each unit being individually coherent yet mutually incoherent (MI). We develop MI-DNNs that leverage the direct operations of spatially incoherent light while preserving the benefits of coherent computing. As a result, the entire free-space computing system can be miniaturized to a handheld form factor. The OGPU operates efficiently under ultralow-light conditions (as low as 3.52 aJ/μm² per frame) and achieves a record image processing speed of 25 million frames per second. The OGPU delivers a computational power of 77.3 tera-operations per second (TOPS) and an energy efficiency of 950 TOPS/W. It achieves competitive image classification accuracy of up to 98.6% and serves as a set of versatile parallel convolutional kernels for image processing tasks, including edge extraction and image denoising.
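A minimal sketch of the optical distinction that the MI-DNN concept above relies on: mutually incoherent emitters add in intensity, whereas mutually coherent fields add in amplitude and therefore depend on their relative phases. The emitter count, amplitudes, and phases below are arbitrary illustrative values, not parameters of the reported OGPU.

import numpy as np

rng = np.random.default_rng(1)
n_emitters = 8                                        # hypothetical VCSEL units
amplitudes = rng.uniform(0.5, 1.0, n_emitters)        # per-emitter field amplitudes (assumed)
phases = rng.uniform(0.0, 2.0 * np.pi, n_emitters)    # uncontrolled relative phases
fields = amplitudes * np.exp(1j * phases)

coherent = np.abs(np.sum(fields)) ** 2                # intensity of the coherent field sum
incoherent = np.sum(np.abs(fields) ** 2)              # incoherent intensity sum

print(f"intensity of coherent field sum: {coherent:8.3f} (varies with the random phases)")
print(f"incoherent intensity sum       : {incoherent:8.3f} (stable, weighted-sum behaviour)")

The phase-insensitivity of the intensity sum is what relaxes the interferometric stability requirements that constrain fully coherent DNNs.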
Abstract:The diffraction limit, rooted in the wave nature of light and formalized by the Heisenberg uncertainty principle, imposes a fundamental constraint on optical resolution and device miniaturization. The recent discovery of the singular dispersion equation in dielectric media provides a rigorous, lossless framework for overcoming this barrier. Here, we demonstrate that achieving such confinement necessarily involves a new class of optical eigenmodes—narwhal-shaped wavefunctions—which emerge from the singular dispersion equation and uniquely combine global Gaussian decay with local power-law enhancement. These wavefunctions enable full-space field localization beyond conventional limits. Guided by this principle, we design and experimentally realize a three-dimensional sub-diffraction-limited cavity that supports narwhal-shaped wavefunctions, achieving an ultrasmall mode volume of 5 × 10⁻⁷ λ³. We term this class of systems singulonic, and define the emerging field of singulonics as a new nanophotonic paradigm—establishing a platform for confining and manipulating light at deep-subwavelength scales without dissipation, enabled by the singular dispersion equation. Building on this extreme confinement, we introduce singular field microscopy: a near-field imaging technique that employs singulonic eigenmodes as intrinsically localized, background-free light sources. This enables optical imaging at a spatial resolution of λ/1000, making atomic-scale optical microscopy possible. Our findings open new frontiers for unprecedented control over light–matter interactions at the smallest possible scales.
Keywords:Singular dispersion equation;Narwhal-shaped wavefunction;Singulonics;Singulonic nanocavity;Singularity;Singular field microscopy;Twisted lattice nanocavity;Power-law enhancement
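To make the reported mode volume plausible at a glance, the sketch below evaluates the effective mode volume of a hypothetical spherically symmetric profile that combines a global Gaussian envelope with a local power-law enhancement, as the small-radius cutoff is reduced. The profile, exponent, envelope width, and cutoffs are assumptions for illustration and are not the narwhal-shaped eigenmodes of the paper.

import numpy as np

lam = 1.0                                  # wavelength (arbitrary units)
w = 0.5 * lam                              # Gaussian envelope width (assumed)
alpha = 1.0                                # power-law enhancement exponent (assumed)

def mode_volume(r_min):
    # Effective mode volume V = integral(|psi|^2 dV) / max(|psi|^2) for
    # psi(r) = r**(-alpha) * exp(-(r / w)**2), integrated down to a cutoff r_min.
    r = np.linspace(r_min, 4.0 * w, 200_000)
    psi2 = r ** (-2.0 * alpha) * np.exp(-2.0 * (r / w) ** 2)
    integrand = psi2 * 4.0 * np.pi * r ** 2
    energy = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))  # trapezoid rule
    return energy / psi2.max()

for r_min in (0.1 * lam, 0.01 * lam, 0.001 * lam):
    print(f"cutoff {r_min:7.3f} lambda  ->  V_eff ~ {mode_volume(r_min):.1e} lambda^3")

The trend with decreasing cutoff is qualitatively consistent with the deep-subwavelength confinement reported above; the actual value depends on the true eigenmode profile.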
Abstract:Acoustic perception is a basic yet extraordinary capability in nature, relying on multidimensional signal processing for detection, localization, and recognition. Replicating this capability in compact artificial systems, however, remains a formidable challenge due to limitations in scalability, sensitivity, and integration. Here, imitating the auditory system of insects, we introduce an opto-acoustic perception paradigm based on fully stabilized dual-soliton microcombs. By integrating digitally stabilized on-chip dual microcombs, silicon optoelectronics, and bionic fiber-microphone arrays on a single platform, we achieve parallelized interrogation of over 100 sensors. Leveraging the low-noise, multi-channel coherence of fully stabilized soliton microcombs, this synergy enables ultra-sensitive detection down to 29.3 nPa/Hz¹/², sub-centimeter localization precision, and real-time tracking and identification of diverse acoustic targets. Bridging silicon photonics, optical fiber sensing, and intelligent signal processing in a chiplet microsystem, our scheme delivers out-of-lab deployability on autonomous robotic platforms. This work not only deepens the understanding of frequency comb science, but also establishes the concept of dual-comb-driven sensor networks as a scalable foundation for next-generation opto-acoustic intelligence.
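A minimal sketch of one standard dual-comb readout arrangement consistent with the parallel interrogation described above: two combs with slightly different line spacings map each optical sensing channel to its own radio-frequency beat note, so more than 100 channels can be demultiplexed from a single photodetector. The repetition rates, offset, and channel count are hypothetical values, not the parameters of the reported system.

import numpy as np

f_rep = 100e9          # line spacing of comb 1, Hz (assumed)
delta_frep = 1e6       # repetition-rate difference between the two combs, Hz (assumed)
f_0 = 20e6             # offset of the lowest-order beat note, Hz (assumed)
n_channels = 120       # parallel sensing channels (>100, as in the abstract)

n = np.arange(n_channels)
rf_beats = f_0 + n * delta_frep                 # RF frequency assigned to each channel

# Unambiguous channel-to-beat-note mapping requires the whole RF comb to fit
# below half of the optical line spacing after photodetection.
assert rf_beats.max() < 0.5 * f_rep, "channels would overlap in the RF domain"

print(f"channel   0 -> {rf_beats[0] / 1e6:7.1f} MHz")
print(f"channel {n_channels - 1:3d} -> {rf_beats[-1] / 1e6:7.1f} MHz")
print(f"total RF span: {(rf_beats[-1] - rf_beats[0]) / 1e6:.1f} MHz for {n_channels} channels")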
Binbin Nie, Xiaomin Lv, Chen Yang, Rui Ma, Kaixuan Zhu, Ze Wang, Yanwu Liu, Zhenyu Xie, Xing Jin, Guanyu Zhang, Du Qian, Zhenyu Chen, Qiang Luo, Shuting Kang, Guowei Lv, Qihuang Gong, Fang Bo, Qi-Fan Yang
DOI:10.1186/s43593-025-00093-x
Abstract:Chip-scale integration of optical frequency combs, particularly soliton microcombs, enables miniaturized instrumentation for timekeeping, ranging, and spectroscopy. Although soliton microcombs have been demonstrated on various material platforms, realizing complete comb functionality on photonic chips requires the co-integration of high-speed modulators and efficient frequency doublers, features that are available in a monolithic form on X-cut thin-film lithium niobate (TFLN). However, the pronounced Raman nonlinearity associated with extraordinary light in this platform has so far precluded soliton microcomb generation. Here, we report the generation of transverse-electric-polarized soliton microcombs with a 25 GHz repetition rate in high-Q microresonators on X-cut TFLN chips. By precisely orienting the racetrack microresonator relative to the optical axis, we mitigate Raman nonlinearity and enable soliton formation under continuous-wave laser pumping. Moreover, the soliton microcomb spectra are extended to 350 nm with pulsed laser pumping. This work expands the capabilities of TFLN photonics and paves the way for the monolithic integration of fast-tunable, self-referenced microcombs.
Keywords:Thin-film lithium niobate;Optical frequency comb;Optical microresonator;Nonlinear photonics
Abstract:Microresonator-based Kerr frequency combs (“Kerr microcombs”) constitute chip-scale frequency combs with broad spectral bandwidth and repetition rates ranging from gigahertz to terahertz. A critical application that exploits the coherence and high repetition rate of microcombs is microwave and millimeter-wave generation. Recent efforts applying two-point optical frequency division (OFD) to photonic-chip-based microcombs have created microwaves with remarkably low phase noise. Nevertheless, existing approaches to achieve exceptionally coherent microcombs still require extensive active locking, additional lasers, and external RF or microwave sources, as well as sophisticated initiation. Here we demonstrate a simple and entirely passive (no active locking) architecture, which incorporates an optoelectronic oscillator (OEO) and spontaneously produces a coherent microcomb together with a low-noise microwave. Our OEO microcomb leverages state-of-the-art integrated chip devices, including a high-power DFB laser, a broadband silicon Mach–Zehnder modulator, an ultralow-loss silicon nitride microresonator, and a high-speed photodetector. Each of these can be manufactured in large volume with low cost and high yield using established CMOS and III-V foundries. Our system produces a microcomb with a 10.7 GHz repetition rate and an X-band microwave with phase noise of −97/−126/−130 dBc/Hz at 1/10/100 kHz Fourier offset frequencies, yet it does not demand active locking, additional lasers, or external RF or microwave sources. With the potential to be fully integrated, our OEO microcomb can become an invaluable technology and building block for microwave photonics, radio-over-fiber links, and optical communication.
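For orientation, the sketch below applies generic optoelectronic-oscillator relations (not this paper's specific loop) to show how an OEO's round-trip delay sets a comb of possible oscillation modes spaced by 1/tau, from which a band-selecting element, here taken near the quoted 10.7 GHz repetition rate, picks the operating frequency. The loop length and group index are assumptions.

# Generic OEO relations with assumed parameters; illustrative only.
c = 2.998e8                      # speed of light, m/s
n_group = 1.468                  # group index of the delay line (assumed)
loop_length = 20.0               # round-trip optical/electrical path length, m (assumed)

tau = n_group * loop_length / c  # round-trip delay, s
mode_spacing = 1.0 / tau         # spacing of allowed OEO oscillation modes, Hz

f_select = 10.7e9                # band-selection centre, near the quoted repetition rate
k = round(f_select * tau)        # index of the nearest loop mode
f_osc = k / tau                  # frequency at which the loop actually oscillates

print(f"loop delay      : {tau * 1e9:.2f} ns")
print(f"mode spacing    : {mode_spacing / 1e6:.2f} MHz")
print(f"oscillating mode: k = {k}, f = {f_osc / 1e9:.6f} GHz")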
Abstract:Overcoming the diffraction limit in optics is challenging and demands a profound understanding of light behavior under extreme confinement. The newly developed framework, termed singulonics, describes the formation of sub-diffraction-limit, narwhal-shaped optical modes under specific conditions. Experimentally implementing this field confinement within a tiny sub-wavelength volume enables the acquisition of near-field super-resolution images.
Abstract:Optical skyrmions have been demonstrated in many forms, but single-photon skyrmions have remained elusive. Now they are reported from nano-structured emitters, heralding on-chip quantum topologies.