Abstract:Single-molecule fluorescence lifetime imaging (FLIM) has historically been constrained by the low throughput of traditional Time-Correlated Single-Photon Counting (TCSPC). A new method uses an optimized, time-gated single-photon camera acquisition scheme to achieve single-molecule lifetime precision comparable to TCSPC, but across a wide field of view simultaneously. This high-throughput capability transforms single-molecule FLIM from a specialized, time-consuming effort into a scalable, quantitative assay. The resulting multimodal imaging platform provides unprecedented, simultaneous insight into the nanoscale location (SMLM) and local environment (FLIM) of biological molecules, advancing our ability to map complex biophysical parameters such as FRET efficiency in living systems.
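As context for the time-gated approach (the paper's optimized gating scheme is more elaborate than this), the textbook two-gate rapid lifetime determination (RLD) estimator shows how a lifetime follows from gated photon counts alone: for a mono-exponential decay sampled by two equal, contiguous gates of width Δt, the count ratio obeys N1/N2 = e^(Δt/τ), so τ = Δt / ln(N1/N2). The simulation below is a minimal, hypothetical Python sketch of that estimator, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def rld_lifetime(n1, n2, gate_width):
    """Two-gate rapid lifetime determination (RLD) for a
    mono-exponential decay: tau = dt / ln(N1 / N2)."""
    return gate_width / np.log(n1 / n2)

# Simulate photon arrival times from a tau = 3.2 ns decay, as a
# time-gated single-photon camera might accumulate in one pixel.
tau_true = 3.2          # ns (illustrative value)
n_photons = 2000        # photons collected in one pixel
arrivals = rng.exponential(tau_true, n_photons)

# Two contiguous gates of equal width dt starting at t = 0.
dt = 4.0                # ns gate width (illustrative value)
n1 = np.sum((arrivals >= 0) & (arrivals < dt))
n2 = np.sum((arrivals >= dt) & (arrivals < 2 * dt))

print(f"estimated tau = {rld_lifetime(n1, n2, dt):.2f} ns "
      f"(true {tau_true} ns)")
```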
Abstract:Overcoming the diffraction limit in optics is challenging and demands a profound understanding of light behavior under extreme confinement. A newly developed framework, termed singulonics, demonstrates the formation of sub-diffraction-limit, narwhal-shaped optical modes under specific conditions. Experimental implementation of this field confinement within a deep-subwavelength volume enables the acquisition of near-field super-resolution images.
Abstract:Optical skyrmions have been demonstrated in many forms, but single-photon skyrmions have remained elusive. Now they are reported from nanostructured emitters, heralding quantum topologies on-chip.
Abstract:Compact, high-quality imaging systems are highly desired for scientific, industrial, and consumer applications. Metalenses combined with computational imaging offer a promising solution for developing such systems, yet their performance is fundamentally limited by the commonly used point-to-point imaging model, which forces trade-offs between aperture size, F-number, field of view (FOV), waveband width, and image quality. Here, we experimentally demonstrate that a neural array imaging model can overcome these long-standing trade-offs, achieving a 25-Hz full-color imaging camera with a 2.76-mm aperture, 1.45 F-number, 50° FOV, and a spectral range of 400–700 nm. The camera achieves image quality comparable to commercial compound lenses (e.g., Edmund 33-300) in both indoor and outdoor environments, while reducing the total track length by a factor of 13. We further demonstrate its suitability for object detection and depth estimation in real-world scenarios. This neural array imaging model is also applied to polarization imaging, showcasing its scalability and versatility for broadband applications.
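The paper's neural array model is not detailed in the abstract; as a hedged illustration of what moving beyond a single point-to-point (shift-invariant) model involves, the sketch below implements a generic patch-wise forward model in which smooth windows partition the field of view, each zone having its own local PSF. All names, kernel sizes, and parameters are illustrative, not the authors' design.

```python
import numpy as np
from scipy.signal import fftconvolve

def patchwise_forward(scene, psfs, windows):
    """Spatially varying forward model: blur the scene with each
    local PSF, then blend the results with smooth window functions
    that partition the field of view."""
    out = np.zeros_like(scene)
    for psf, win in zip(psfs, windows):
        out += win * fftconvolve(scene, psf, mode="same")
    return out

# Toy example: two field zones with different Gaussian blurs.
h, w = 128, 128
scene = np.random.rand(h, w)

yy, xx = np.mgrid[-7:8, -7:8]
def gaussian_psf(sigma):
    p = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return p / p.sum()

# Smooth left/right blending windows that sum to 1 everywhere.
ramp = np.tile(np.linspace(0, 1, w), (h, 1))
windows = [1 - ramp, ramp]
psfs = [gaussian_psf(1.0), gaussian_psf(2.5)]  # blur grows off-axis

measured = patchwise_forward(scene, psfs, windows)
```

In such approaches a restoration network is trained against this richer forward model, rather than against a single global PSF, which is what relaxes the aperture/FOV/bandwidth trade-offs.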
Abstract:Nonlinear computation is essential for a wide range of information processing tasks, yet implementing nonlinear functions using optical systems remains a challenge due to the weak and power-intensive nature of optical nonlinearities. Overcoming this limitation without relying on nonlinear optical materials could unlock unprecedented opportunities for ultrafast and parallel optical computing systems. Here, we demonstrate that large-scale nonlinear computation can be performed using linear optics through optimized diffractive processors composed of passive phase-only surfaces. In this framework, the input variables of nonlinear functions are encoded into the phase of an optical wavefront (e.g., via a spatial light modulator, SLM) and transformed by an optimized diffractive structure with spatially varying point-spread functions to yield output intensities that approximate a large set of unique nonlinear functions, all in parallel. We provide a proof establishing that this architecture serves as a universal function approximator for an arbitrary set of bandlimited nonlinear functions, covering wavelength-multiplexed nonlinear functions as well as multivariate and complex-valued functions that are all-optically cascadable. Our analysis also indicates the successful approximation of typical nonlinear activation functions commonly used in neural networks, including the sigmoid, tanh, ReLU (rectified linear unit), and softplus. We numerically demonstrate the parallel computation of one million distinct nonlinear functions, accurately executed at wavelength-scale spatial density at the output of a diffractive optical processor. Furthermore, we experimentally validated this framework using in situ optical learning and approximated 35 unique nonlinear functions in a single shot using a compact setup consisting of an SLM and an image sensor. These results establish diffractive optical processors as a scalable platform for massively parallel universal nonlinear function approximation, paving the way for new capabilities in analog optical computing based on linear materials.
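A minimal numerical sketch of the phase-encoding principle as we read it (not the authors' diffractive design): if the input x is written as phases 0, x, 2x, ..., Kx across K+1 pixels, a fixed linear system produces the field E(x) = Σ_p w_p e^(ipx), and the detected intensity |E(x)|² is a nonnegative, bandlimited nonlinear function of x. Fitting the complex weights w, stand-ins here for the optimized passive surfaces, then approximates a target such as softplus. The pixel count, target, and plain gradient-descent fit are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target nonlinear function on x in [-pi, pi]: a softplus, scaled so
# it is a valid (nonnegative) detector intensity.
x = np.linspace(-np.pi, np.pi, 512)
target = np.log1p(np.exp(2 * x))
target /= target.max()

# Phases 0, x, 2x, ..., Kx on K+1 pixels make the detector field a
# harmonic sum E(x) = sum_p w_p exp(i p x); the intensity |E|^2 is a
# bandlimited nonlinear function shaped by the fixed linear optics w.
K = 12
phi = np.exp(1j * np.outer(np.arange(K + 1), x))   # (K+1, N) harmonics

w = 0.1 * (rng.standard_normal(K + 1) + 1j * rng.standard_normal(K + 1))
lr = 0.02
for _ in range(30000):                  # plain Wirtinger gradient descent
    E = w @ phi                         # complex field at one detector
    r = np.abs(E) ** 2 - target         # intensity residual
    w -= lr * 2 * np.mean(r * E * np.conj(phi), axis=1)

print("rms intensity error:", np.sqrt(np.mean(r ** 2)))
```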
Yibo Dong, Yuchi Bai, Qiming Zhang, Haitao Luan, Min Gu
DOI:10.1186/s43593-025-00106-9
Abstract:Optical diffractive neural networks (DNNs) offer superb parallelism and scalability for the direct analogue processing of planar information. However, their complete reliance on coherent light interference constrains integration and computational frequency, and results in low diffraction efficiency and robustness. Here, we present an optical graphics processing unit (OGPU) with a vertically integrated architecture, addressing these challenges through the use of an addressable vertical-cavity surface-emitting laser (VCSEL) array. This array functions as a high-speed planar information fan-in device, with each unit exhibiting individually coherent and mutually incoherent (MI) properties. We develop MI-DNNs that leverage the direct operations of spatially incoherent light while preserving the benefits of coherent computing. As a result, the entire free-space computing system can be miniaturized to a handheld form factor. The OGPU operates efficiently under ultralow-light conditions (as low as 3.52 aJ/μm² per frame) and achieves a record image processing speed of 25 million frames per second. The OGPU has a computational power of 77.3 tera-operations per second (TOPS) and an energy efficiency of 950 TOPS/W. It achieves competitive image classification accuracy of up to 98.6% and implements versatile parallel convolutional kernels for image processing tasks, including edge extraction and image denoising.
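One generic constraint that any mutually incoherent scheme must work around: with spatially incoherent light, intensities rather than fields add, so the effective convolution kernel is the nonnegative intensity PSF |h|². The Python sketch below (a textbook illustration, not the OGPU architecture) shows this constraint and the standard two-channel decomposition that recovers signed kernels such as an edge filter.

```python
import numpy as np
from scipy.signal import fftconvolve

def incoherent_conv(intensity_img, field_psf):
    """With spatially incoherent light, intensities add: the image is
    convolved with the nonnegative intensity PSF |h|^2 rather than
    with the complex field PSF h itself."""
    return fftconvolve(intensity_img, np.abs(field_psf) ** 2, mode="same")

# A signed kernel (Laplacian edge filter) cannot be an intensity PSF
# directly; split it into two nonnegative parts, realize each with its
# own incoherent channel, and subtract after detection.
laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)
k_pos = np.maximum(laplacian, 0)
k_neg = np.maximum(-laplacian, 0)

img = np.random.rand(64, 64)
edges = (incoherent_conv(img, np.sqrt(k_pos))
         - incoherent_conv(img, np.sqrt(k_neg)))
```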
Abstract:The diffraction limit, rooted in the wave nature of light and formalized by the Heisenberg uncertainty principle, imposes a fundamental constraint on optical resolution and device miniaturization. The recent discovery of the singular dispersion equation in dielectric media provides a rigorous, lossless framework for overcoming this barrier. Here, we demonstrate that achieving such confinement necessarily involves a new class of optical eigenmodes, narwhal-shaped wavefunctions, which emerge from the singular dispersion equation and uniquely combine global Gaussian decay with local power-law enhancement. These wavefunctions enable full-space field localization beyond conventional limits. Guided by this principle, we design and experimentally realize a three-dimensional sub-diffraction-limited cavity that supports narwhal-shaped wavefunctions, achieving an ultrasmall mode volume of 5 × 10⁻⁷ λ³. We term this class of systems singulonic, and define the emerging field of singulonics as a new nanophotonic paradigm, establishing a platform for confining and manipulating light at deep-subwavelength scales without dissipation, enabled by the singular dispersion equation. Building on this extreme confinement, we introduce singular field microscopy: a near-field imaging technique that employs singulonic eigenmodes as intrinsically localized, background-free light sources. This enables optical imaging at a spatial resolution of λ/1000, making atomic-scale optical microscopy possible. Our findings open new frontiers for unprecedented control over light–matter interactions at the smallest possible scales.
Keywords:Singular dispersion equation;Narwhal-shaped wavefunction;Singulonics;Singulonic nanocavity;Singularity;Singular field microscopy;Twisted lattice nanocavity;Power-law enhancement
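Purely as an illustrative aside, and not a formula from the paper: a radial profile combining the two traits attributed to narwhal-shaped wavefunctions above, local power-law enhancement and global Gaussian decay, could take the hypothetical form below, which in three dimensions remains normalizable provided the exponent satisfies α < 3/2.

```latex
% Hypothetical illustrative profile; alpha and sigma are
% placeholders, not values from the paper.
\[
  \psi(r) \;\propto\; r^{-\alpha}\, e^{-r^{2}/(2\sigma^{2})},
  \qquad 0 < \alpha < \tfrac{3}{2},
\]
% since \int_0^\infty |\psi(r)|^{2}\, r^{2}\, dr converges at the
% origin exactly when 2\alpha < 3, the power-law divergence of the
% field does not spoil normalizability.
```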
Abstract:Acoustic perception is a basic yet extraordinary capability in nature, relying on multidimensional signal processing for detection, localization, and recognition. Replicating this capability in compact artificial systems, however, remains a formidable challenge due to limitations in scalability, sensitivity, and integration. Here, imitating the auditory system of insects, we introduce an opto-acoustic perception paradigm based on fully stabilized dual-soliton microcombs. By integrating digitally stabilized on-chip dual microcombs, silicon optoelectronics, and bionic fiber-microphone arrays on a single platform, we achieve parallelized interrogation of over 100 sensors. Leveraging the low-noise, multi-channel coherence of fully stabilized soliton microcombs, this synergy enables ultra-sensitive detection of 29.3 nPa/√Hz, sub-centimeter localization precision, and real-time tracking and identification of diverse acoustic targets. Bridging silicon photonics, optical fiber sensing, and intelligent signal processing in a chiplet microsystem, our scheme delivers field-deployable capability on autonomous robotics. This work not only deepens the understanding of frequency-comb science, but also establishes the concept of dual-comb-driven sensor networks as a scalable foundation for next-generation opto-acoustic intelligence.
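To illustrate how a dual-comb link can channelize many sensors onto one detector (a toy model, not the paper's stabilized microcomb system): with a repetition-rate offset δ between the two combs, the nth line pair beats at n·δ in the RF domain, so each sensor occupies its own RF slot and its acoustic signal appears there as phase sidebands. All rates and tones in this Python sketch are hypothetical.

```python
import numpy as np

fs = 2.0e6                      # detector/ADC sample rate (Hz)
t = np.arange(200_000) / fs     # 0.1 s record -> 10 Hz resolution
delta = 1.0e3                   # rep-rate offset: line n beats at n*delta

# Three hypothetical sensor channels, each carried by its own pair of
# comb lines and phase-modulated by a distinct acoustic tone (Hz).
acoustic_tones = {1: 60.0, 2: 135.0, 3: 240.0}

rf = np.zeros_like(t)
for n, fa in acoustic_tones.items():
    phase = 0.3 * np.sin(2 * np.pi * fa * t)          # acoustic phase shift
    rf += np.cos(2 * np.pi * n * delta * t + phase)   # down-converted beat

# In the RF spectrum each sensor sits in its own slot around n*delta,
# with its acoustic signal as sidebands at n*delta +/- fa.
spectrum = np.abs(np.fft.rfft(rf))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
```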