Open Access
30 January 2024
Tutorial on compressed ultrafast photography
Yingming Lai, Miguel Marquez, Jinyang Liang
Abstract

Significance

Compressed ultrafast photography (CUP) is currently the world’s fastest single-shot imaging technique. Through the integration of compressed sensing and streak imaging, CUP can capture a transient event in a single camera exposure with imaging speeds from thousands to trillions of frames per second, at micrometer-level spatial resolutions, and in broad sensing spectral ranges.

Aim

This tutorial aims to provide a comprehensive review of CUP in its fundamental methods, system implementations, biomedical applications, and prospects.

Approach

A step-by-step guide to CUP’s forward model and representative image reconstruction algorithms is presented, with sample code and illustrations in Matlab and Python. Then, CUP’s hardware implementation is described with a focus on the representative techniques, advantages, and limitations of the three key components—the spatial encoder, the temporal shearing unit, and the two-dimensional sensor. Furthermore, four representative biomedical applications enabled by CUP are discussed, followed by the prospects of CUP’s technical advancement.

Conclusions

CUP has emerged as a state-of-the-art ultrafast imaging technology. Its advanced imaging ability and versatility contribute to unprecedented observations and new applications in biomedicine. CUP holds great promise in improving technical specifications and facilitating the investigation of biomedical processes.

1.

Introduction

Optical imaging of transient events in their actual time of occurrence carries compelling scientific significance and practical merits.1 Occurring in two-dimensional (2D) space and at femtosecond (1 fs = 10⁻¹⁵ s) to microsecond (1 μs = 10⁻⁶ s) time scales, these transient events reflect many important fundamental mechanisms in biology.2–4 However, many transient phenomena are either nonrepeatable or difficult to reproduce. Examples include spontaneous synaptic activities,5 nanoparticles’ luminescence lifetimes at different temperatures,6 and light scattering in living tissue.7 Under these circumstances, the conventional pump–probe methods, which require numerous repeated experiments, are inapplicable. Meanwhile, the pump–probe approaches sense photons’ time of arrival using complex apparatus to perform time-consuming scanning in either space or time. In these cases, even if the transient phenomena are reproducible, these methods would suffer from substantial inaccuracy due to experimental perturbation and low productivity due to the events’ low occurrence rates.

Single-shot ultrafast optical imaging techniques8,9 can overcome these limitations by capturing the entire dynamic process in real time (i.e., in the actual duration of the event’s occurrence) without repeating measurements. Benefiting from advancements in optoelectronics, laser science, information theory, and computational techniques, single-shot ultrafast optical imaging has become a burgeoning research field in the past decade. Thus far, the mainstream techniques can be generally categorized into the domains of active illumination and passive detection. For the former, temporal information of a dynamic scene is mapped onto an optical marker (e.g., spectrum and spatial frequency) of one or multiple ultrashort probe pulses. On the detection side, appropriate devices and methods (e.g., color filtering and spatial Fourier transformation) are used to extract the corresponding optical marker, from which the scene’s evolution is deduced. These active-illumination-based approaches feature femtosecond temporal resolution by leveraging the ultrashort durations of ultrafast probe pulses and provide high sensitivity by being compatible with advanced cameras based on charge-coupled device (CCD) or complementary metal–oxide semiconductor (CMOS) technology. Nonetheless, they cannot capture self-luminescent scenes, including dynamic scattering,10 photoluminescence intensity decay,11 and plasma emission.12 Passive detection can overcome this limitation. In this category, receive-only ultrafast detectors are used to record the emitted and/or scattered photons from the dynamic scene. Various mechanisms, including Kerr-effect-based time gating,13 deflection of moving photoelectrons by a varying electric field,14 and charge transfer in a series of registers,15 have been used to provide ultrahigh temporal resolution. Meanwhile, the inferior bandwidth of electronics relative to optics caps the ultimate imaging speed of these passive-detection approaches below that of the active-illumination modalities. Overall, the active-illumination and passive-detection approaches often carry highly complementary technical specifications. Altogether, they incessantly expand human vision to see previously inaccessible events.

Among existing techniques, compressed ultrafast photography (CUP) has emerged as a potent single-shot ultrafast optical imaging modality.16 Invented in 2014 in Dr. Lihong V. Wang’s laboratory,17 CUP innovatively synergizes compressed sensing (CS) with streak imaging. Leveraging the sparsity existing in the targeted scenes, the operation of this hybrid approach includes physical data acquisition followed by computational image reconstruction.18,19 In data acquisition, the light from a 2D dynamic scene is recorded in one or more snapshots in a single shot via a CS paradigm containing spatial encoding, temporal shearing, and spatiotemporal integration. Different from conventional ultrafast imaging, the acquired snapshot often bears no resemblance to the scene. Then, the snapshot is input into an algorithm to retrieve the movie of the target dynamic scene by solving a minimization problem.20

CUP provides many attractive conceptual novelties and practical advantages. First, the spatial encoding and temporal shearing operations allow a mixture of information between time and space, which enables CUP to have a large sequence depth (i.e., the number of frames in each recorded movie) compared with other single-shot ultrafast imaging systems based on spatial frequency multiplexing,21–24 spectral filtering,25–30 and beam splitting.31–34 Meanwhile, it overcomes the limitation in sensing dimension of conventional one-dimensional (1D) high-speed sensors.14,35 Compared with ultrafast CCD sensors that have a low fill factor, CUP uses spatiotemporal multiplexing to effectively enhance the light throughput in data acquisition, which improves the feasibility of image reconstruction.36 It is compatible with many scientific-grade CCD/CMOS sensors without interrupting their normal operations, which retains their responsive spectrum and sensitivity while endowing them with ultrahigh speeds.37 Second, its generic sensing paradigm can be embodied in both active-illumination and passive-detection schemes. Each major operation (i.e., spatial encoding, temporal shearing, and spatiotemporal integration) can be optically realized by various devices, indicating high design flexibility, multi-spatiotemporal-scale imaging ability, and broad spectral coverage. Third, computational image reconstruction, as an indispensable step in CUP’s operation, lifts certain burdens in system design from hardware. Advances in CS,38 machine learning,39 and information theory40 can be directly implemented in CUP’s image reconstruction. Finally, CUP exhibits a light-throughput advantage by capturing information in two spatial dimensions and time simultaneously in a single exposure. In contrast, multiple-shot methods can only collect information from a column (from point scanning) or a slice (from line scanning) of the datacube.41 Meanwhile, distinguished from single-shot framing (or mapping) photography,30,34,42 CUP maintains time continuity in data acquisition, which further enhances the amount of acquirable information.18

Because of its unprecedented imaging ability, CUP immediately became a research focus upon its invention. New designs in hardware and innovative developments in image reconstruction are being reported frequently. New applications in biomedicine, physics, and engineering are highlighted. Comprehensive reviews of CUP can be found in the literature.16,43 Other reviews of CUP are included in surveys of ultrafast imaging technologies.1,8,9,18,44–48 However, thus far, there has not been a practical guide for developing CUP systems in an anatomical fashion. Thus, in this tutorial, we first review the operating principle of CUP with simulation examples (in Matlab and Python) to guide readers on how to generate compressively recorded snapshots from a spatiotemporal datacube using the forward model, as well as how to reconstruct the spatiotemporal datacube from the snapshots using representative analytical-modeling-based and machine-learning methods. Then, we provide an extensive survey of existing methods for each of the major operations in CUP’s sensing paradigm—spatial encoding, temporal shearing, and spatiotemporal integration. Afterward, we discuss representative applications of CUP in biomedicine. Finally, we summarize CUP’s accomplishments and discuss the prospects of its future development.

2.

Method

A schematic of dual-view CUP is shown in Fig. 1. In data acquisition, a dynamic scene is imaged by front optics and split into two arms. The transmitted component forms the image of the dynamic scene on a spatial encoder. Unlike many other compressive temporal imaging modalities that use multiple fast-changing patterns during image acquisition,49–54 a single static pattern is used for CUP’s spatial encoder. Then, the frames in the spatially encoded scene are deflected by a temporal shearing unit to different spatial positions along the sweeping direction. Finally, the encoded and sheared scene is spatiotemporally integrated by a sensor, producing a compressive 2D snapshot, which is defined as the time-sheared view and used hereafter in this tutorial. This paradigm to capture the time-sheared view was implemented in the original CUP configuration.17 In the ensuing implementations, it was found that a direct capture of a time-integrated snapshot of the dynamic scene could enhance the reconstructed image quality.55 Defined as the time-unsheared view and used hereafter in this tutorial, this snapshot outlines the region of occurrence of the dynamic scene, which reduces the number of unknowns for image reconstruction and facilitates its convergence to the optimal result. It is particularly useful when the dynamic scene occurs on a static or slowly moving object (e.g., intensity decay of photoluminescence emitted from nanoparticle-labeled cells56). It is noted that CUP systems with more than two views have been featured in recent progress to further boost image quality.57–66 For example, lossless-encoding CUP contains the time-unsheared view and two complementary time-sheared views.58 Nonetheless, the formation of these views shares similar data acquisition paradigms as the ones described above and thus is not discussed here.

Fig. 1

Operating principle of dual-view CUP. The illustration depicts the beam paths for time-sheared and time-unsheared views, represented by magenta and green colors, respectively.

JBO_29_S1_S11524_f001.png

To assist readers in comprehending CUP’s paradigm, in Secs. 2.1 and 2.2, we provide in-depth theoretical derivation and simulation. The presented examples are meticulously designed to use basic features and functions in Matlab (R2020b) and Python (version 3.9). Two versions of Python codes are prepared for readers with different levels of programming experience.

2.1.

Forward Model

The forward model of CUP formulates the process of recording a three-dimensional [3D; i.e., (x,y,t)] scene into one or a few 2D snapshots. In general, this forward model can be expressed mathematically using either element-wise or matrix-vector notations.

2.1.1.

Element-wise notation

Many mathematical and scientific libraries are designed to efficiently handle element-wise operations. The dynamic scene and the binary-valued encoding mask are denoted by F ∈ ℝ^(M×N×L) and R ∈ ℝ^(M×N), respectively. M and N represent the data lengths in the two spatial dimensions, and L signifies the data length in time. The discrete output from the sensor for the time-sheared view (hence the subscript “ts”) can be modeled as

Eq. (1)

(G_{\mathrm{ts}})_{i,j}=\sum_{l=0}^{L-1}\bar{F}_{i,j,l}\,\bar{R}_{i,j,l}+(E_{\mathrm{ts}})_{i,j},
where
\bar{F}_{i,j,l}=\begin{cases}F_{i,j-l,l} & \text{if } l\le j\le [N+(l-1)]\\ 0 & \text{otherwise}\end{cases},
and
\bar{R}_{i,j,l}=\begin{cases}R_{i,j-l} & \text{if } l\le j\le [N+(l-1)]\\ 0 & \text{otherwise}\end{cases}.
Here, F̄_{i,j,l} is the intensity of the (i,j,l)’th element of a right-zero-padded version, with a frame-dependent right circular shifting, of the dynamic scene’s datacube, with F̄ ∈ ℝ^(M×[N+(L−1)]×L). R̄_{i,j,l} stands for the intensity of the (i,j)’th element of the l’th frame of a right-zero-padded version, with a frame-dependent right circular shifting, of the spatial encoder, with R̄ ∈ ℝ^(M×[N+(L−1)]×L). (G_ts)_{i,j} is the intensity measured at the (i,j)’th element of the sensor, with G_ts ∈ ℝ^(M×[N+(L−1)]). (E_ts)_{i,j} stands for the noise at the (i,j)’th element of G_ts, with E_ts ∈ ℝ^(M×[N+(L−1)]).

The discrete output for the time-unsheared view (hence “tu” as the subscript) can be modeled as

Eq. (2)

(G_{\mathrm{tu}})_{i,j}=\sum_{l=0}^{L-1}F_{i,j,l}+(E_{\mathrm{tu}})_{i,j},
where (G_tu)_{i,j} is the intensity measured at the (i,j)’th element of the time-unsheared view, with G_tu ∈ ℝ^(M×N), and (E_tu)_{i,j} represents the noise in G_tu, with E_tu ∈ ℝ^(M×N).

As an example, a Matlab script that simulates dual-view CUP’s forward model, with a linear shearing operator and a pseudorandom binary mask, is shown in Algorithm 1. Moreover, a step-by-step guide with illustrations of the “cell-division” dynamic scene is shown in Fig. 2. The ground truth video was taken from the public “Mouse Embryo Tracking” database67 and can be downloaded using the link in Ref. 68.

Algorithm 1

Simulating dual-view CUP’s forward model with the element-wise notation using Matlab.

%% Example of dual-view CUP's sensing process
% Encoding step (generating R)
load('Cell.mat')                        % Loading the example video - F
[M,N,L] = size(F);                      % Video dimensions (y,x,t) -> (M,N,L)
R = 1*(rand(M,N)>0.5);                  % Mask initialization with a transmittance of ~50%
R = repmat(R,1,1,L);
Gts = F.*R;                             % Spatial encoding - Hadamard product
Gts = padarray(Gts,[0,L-1,0],0,'post'); % Right column zero padding
for l = 0:L-1                           % Shearing operation
    Gts(:,:,l+1) = circshift(Gts(:,:,l+1),[0,l]);
end
Gts = sum(Gts,3);                       % Integration of time-sheared view
Gtu = sum(F,3);                         % Integration of time-unsheared view
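For readers who prefer Python, the short NumPy sketch below mirrors Algorithm 1. It is only a sketch under stated assumptions: a synthetic random datacube stands in for the example video, and the variable names simply echo those in Algorithm 1.

import numpy as np

# NumPy sketch of dual-view CUP's forward model (mirrors Algorithm 1).
# A synthetic datacube stands in for the "cell-division" example video;
# in practice, F could be loaded with scipy.io.loadmat('Cell.mat')['F'].
rng = np.random.default_rng(0)
M, N, L = 64, 64, 8
F = rng.random((M, N, L))                       # Dynamic scene, (y, x, t)
R = (rng.random((M, N)) > 0.5).astype(float)    # Pseudorandom mask, ~50% transmittance

Gts = np.zeros((M, N + L - 1))                  # Time-sheared view (right zero padding built in)
for l in range(L):
    encoded = F[:, :, l] * R                    # Spatial encoding - Hadamard product
    Gts[:, l:l + N] += encoded                  # Shift frame l by l pixels and integrate

Gtu = F.sum(axis=2)                             # Time-unsheared view - temporal integration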

Fig. 2

Illustrations of simulating dual-view CUP’s forward model with the element-wise notation using Matlab.

JBO_29_S1_S11524_f002.png

2.1.2.

Matrix-vector notation

The element-wise notation of CUP’s forward model, despite its simple expression and easy comprehension, is inherently limited by its sequential execution. This characteristic leads to redundant calculations within specific functions when the model is applied to extensive datasets or to algorithms that require intricate computations—such as matrix inversion, matrix factorization, eigenvalue decomposition, and low-rank approximation. Thus, most practices use matrix-vector operations by converting F into a vector f ∈ ℝ^(n×1), where n = M·N·L. In this way, CUP’s forward model can be expressed by matrix multiplication, which can be computed by powerful linear algebra methods to concisely formulate solutions.

In particular, dual-view CUP’s forward model, by following a matrix-vector representation, can be expressed as

Eq. (3)

g=\begin{bmatrix}g_{\mathrm{ts}}\\ g_{\mathrm{tu}}\end{bmatrix}=\Phi f=\begin{bmatrix}T_{\mathrm{ts}}SC\\ T_{\mathrm{tu}}\end{bmatrix}f,
where g_ts ∈ ℝ^(m_ts×1) and g_tu ∈ ℝ^(m_tu×1) are the vectorized versions of the time-sheared view and the time-unsheared view, with sizes m_ts = M·[N+(L−1)] and m_tu = M·N, respectively. g ∈ ℝ^(m×1) is the vectorized version of the two concatenated views, with size m = m_ts + m_tu. Φ ∈ ℝ^(m×n) is dual-view CUP’s sensing matrix. C ∈ ℝ^(n×n) is the spatial encoding matrix. S ∈ ℝ^((m_ts·L)×n) is the temporal shearing matrix. T_ts ∈ ℝ^(m_ts×(m_ts·L)) and T_tu ∈ ℝ^(m_tu×n) are the spatiotemporal integration matrices of the time-sheared view and the time-unsheared view, respectively. Φ_ts = T_ts·S·C ∈ ℝ^(m_ts×n) is also defined as the time-sheared sensing matrix.

The entries of C, S, Tts, and Ttu are given as

Eq. (4)

C_{i,j}=\begin{cases}r_{v} & \text{if } i=j\\ 0 & \text{otherwise}\end{cases},

Eq. (5)

S_{i,j}=\begin{cases}1 & \text{if } i=j+M\cdot L\cdot\lfloor j/(M\cdot N)\rfloor\\ 0 & \text{otherwise}\end{cases},

Eq. (6)

T_{\mathrm{ts}}=\mathbf{1}_{L}^{T}\otimes I_{m_{\mathrm{ts}}\times m_{\mathrm{ts}}},

Eq. (7)

T_{\mathrm{tu}}=\mathbf{1}_{L}^{T}\otimes I_{m_{\mathrm{tu}}\times m_{\mathrm{tu}}}.

In Eq. (4), v = mod(j, M·N) with v ∈ 𝕎 (i.e., a non-negative integer), and r_v ∈ {0,1} is the value at the v’th position of r ∈ ℝ^(M·N×1), which is the vectorized version of the encoding mask R. In Eqs. (6) and (7), 1_L ∈ ℝ^(L×1) is an all-one vector, ⊗ denotes the Kronecker product, and I is the identity matrix. The matrix T_tu has a structure similar to T_ts (i.e., a horizontal concatenation of identity matrices) but with a shorter diagonal dimension. Thus, the sensing matrix Φ = [Φ_ts; T_tu] can be directly defined as

Eq. (8)

\Phi_{i,j}=\begin{cases}r_{v} & \text{if } i=v+M\cdot\lfloor j/(M\cdot N)\rfloor\\ 1 & \text{if } i=v+m_{\mathrm{ts}}\\ 0 & \text{otherwise}\end{cases}.

A Matlab script for constructing the matrices C, S, Tts, Ttu, and Φ is presented in Algorithm 2. In this example, M×N×L=256×256×25  pixels. The sensing matrix of the time-sheared view Φts is created by assembling a series of diagonal patterns that cyclically repeat along the horizontal direction, shifting downward by M rows following each iteration. The sensing matrix Φ, illustrated schematically in Fig. 3, is created by vertically concatenating Φts with Ttu.

Algorithm 2

Programming matrices of dual-view CUP’s operation using Matlab.

clear all; close all; clc
M = 256; N = 256; L = 25;
R = 1*(rand(M,N)>0.5);
R = repmat(R,1,1,L);
%%
n = M*N*L;
mts = M*(N+L-1);
mtu = M*N;
%% Encoding matrix (C)
i = 0:n-1;
j = 0:n-1;
C = sparse(i+1,j+1,R(:),n,n);
%% Shearing matrix (S)
j = 0:n-1;
i = j+(M*L)*floor(j/(M*N));
S = sparse(i+1,j+1,1,mts*L,n);
%% Integration matrices (Tts and Ttu)
Tts = kron(ones(1,L),speye(mts,mts));
Ttu = kron(ones(1,L),speye(mtu,mtu));
%% CUP sensing matrix (\Phi)
Phi = [((Tts*S)*C)',Ttu']';
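A comparable construction in Python is sketched below with SciPy’s sparse routines; the dimensions are deliberately reduced so that the matrices remain small, and the variable names are illustrative counterparts of those in Algorithm 2.

import numpy as np
from scipy.sparse import coo_matrix, identity, kron, vstack

# Sketch: building dual-view CUP's operator matrices with SciPy sparse (mirrors Algorithm 2).
M, N, L = 32, 32, 5
n, mts, mtu = M * N * L, M * (N + L - 1), M * N

rng = np.random.default_rng(0)
R = (rng.random((M, N)) > 0.5).astype(float)
r = np.tile(R.flatten(order='F'), L)               # Vectorized mask, repeated per frame

# Spatial encoding matrix C (diagonal with mask values)
j = np.arange(n)
C = coo_matrix((r, (j, j)), shape=(n, n))

# Temporal shearing matrix S
i = j + (M * L) * (j // (M * N))
S = coo_matrix((np.ones(n), (i, j)), shape=(mts * L, n))

# Spatiotemporal integration matrices
Tts = kron(np.ones((1, L)), identity(mts))
Ttu = kron(np.ones((1, L)), identity(mtu))

# Dual-view sensing matrix
Phi = vstack([Tts @ S @ C, Ttu])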

Fig. 3

Construction of the matrices of dual-view CUP’s operation using Matlab. (a) Spatial encoding matrix. (b) Temporal shearing matrix. (c) Spatiotemporal integration matrix for the time-sheared view. (d) Spatiotemporal integration matrix for the time-unsheared view. (e) Sensing matrix of dual-view CUP. Insets: Zoomed-in views of local regions (indicated by the red boxes with different line types).

JBO_29_S1_S11524_f003.png

2.2.

Image Reconstruction

After data acquisition, the captured 2D snapshots (i.e., Gts and Gtu) are input to an algorithm to reconstruct the dynamic scene. To date, analytical-modeling-based algorithms are dominantly used in CUP’s reconstruction because they can incorporate prior knowledge about the imaging system and the underlying physics of light propagation,6974 leading to accurate reconstructions. Before getting into sophisticated analytical-modeling-based reconstruction algorithms for CUP, let us analyze the structure of a basic optimization problem:

Eq. (9)

\tilde{f}=\arg\min_{f}\ \tfrac{1}{2}\|g-\Phi f\|_{2}^{2}+\lambda\,\varphi(f),
where ‖·‖₂ is the ℓ₂-norm, λ > 0 is a regularization parameter, φ(·): ℝⁿ → ℝ is a convex and smooth function, and f̃ ∈ ℝ^(n×1) represents the reconstruction.

Various reconstruction algorithms75–78 have been developed based on Eq. (9). A popular choice, especially in the early stage of CUP’s development, is the two-step iterative shrinkage/thresholding (TwIST) algorithm.78 The regularizer φ(f) can be set to various forms, including ‖Ψf‖₁ and ‖f‖_TV, where ‖·‖₁ represents the ℓ₁-norm, Ψ ∈ ℝ^(n×n) is an arbitrary representation basis matrix, and ‖·‖_TV represents the total-variation (TV) regularization.79 The TwIST algorithm combines the shrinkage operation used in iterative soft-thresholding algorithms with a correction step that enforces fidelity to the measurements. It exploits the sparsity naturally embedded in the transient scene via the regularizer. In particular, the ℓ₁-norm requires prior knowledge about the scene to select an adequate representation basis. The TV norm exploits spatiotemporal correlation by suppressing small variations between neighboring pixels. These characteristics enable the TwIST algorithm to efficiently recover a transient scene from an underdetermined measurement.
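As a concrete illustration of what the TV regularizer measures, a minimal NumPy sketch of an anisotropic spatiotemporal TV norm is given below; it is only meant to build intuition and is not tied to any specific CUP reconstruction code.

import numpy as np

def tv_norm_3d(f):
    """Anisotropic spatiotemporal TV norm: sum of absolute finite differences
    along y, x, and t. Piecewise-smooth scenes yield small values; noise inflates it."""
    return (np.abs(np.diff(f, axis=0)).sum()
            + np.abs(np.diff(f, axis=1)).sum()
            + np.abs(np.diff(f, axis=2)).sum())

# A piecewise-constant datacube has a much smaller TV norm than random noise
f_smooth = np.zeros((64, 64, 8)); f_smooth[16:48, 16:48, :] = 1.0
f_noise = np.random.default_rng(0).random((64, 64, 8))
print(tv_norm_3d(f_smooth), tv_norm_3d(f_noise))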

Later, more advanced image reconstruction algorithms have been developed based on the paradigm of the alternating direction method of multipliers (ADMM),69,80–83 which sets φ(f) = ‖f − f^(k)‖₂², where f^(k) stands for the k’th reconstruction, with k = {0, …, K−1} and K ∈ ℕ as the total number of the algorithm’s iterations. The ADMM has gained increasing popularity due to its flexibility to customize the optimization steps by incorporating additional constraints (e.g., noise reduction algorithms or neural-network approaches), and it is hence selected for this tutorial.

2.2.1.

Analytical-modeling-based approaches

The ADMM accomplishes distributed convex optimization using a divide-and-conquer approach, where a global problem is split into a few subproblems.84 Leveraging dual decomposition and the augmented Lagrangian (AL) methods for constrained optimization,77 the ADMM solves problems expressed by the following form:

Eq. (10)

\begin{aligned}&\text{minimize }\ \gamma(f)+\psi(z)\\ &\text{subject to }\ Df+Bz=b,\end{aligned}
where {γ, ψ} are convex functions. In the equality constraint, D and B are arbitrary matrices that establish a linear relationship between the objective variable (i.e., f) and an auxiliary variable (i.e., z) that serves as prior information. The variable b represents the limits (or bounds) of the equality constraint. For instance, in dual-view CUP, an equality constraint can be imposed by setting D = T_tu, B = −I, z = g_tu, and b = 0.

Equation (10) can be solved using the method of Lagrange multipliers, which is a mathematical technique used to optimize a function subject to equality constraints. For Eq. (10), its Lagrangian function is defined as

Eq. (11)

L(f,z,\nu)=\gamma(f)+\psi(z)+\nu^{T}(Df+Bz-b),
where ν is the Lagrange multiplier. As a scaling factor, ν enables constructing, from Eq. (10), an unconstrained optimization function, in which the gradients of both the objective function and the constraint function are proportional to each other at the optimal solution.84 Then, Eq. (11) is rewritten as85

Eq. (12)

\min_{f,z}\ \max_{\nu}\ L(f,z,\nu).

Equation (12) is maximized when ν → +∞ unless Df + Bz − b = 0. By converting the maximization problem into a minimization problem [i.e., max_ν L(f,z,ν) = −min_ν −L(f,z,ν)] (Ref. 86) and using a proximal term87 to solve the new minimization problem, Eq. (12) results in

Eq. (13)

\min_{f,z}\left(\arg\min_{\nu}\ -L(f,z,\nu)+\frac{1}{2\rho}\|\nu-\bar{\nu}\|_{2}^{2}\right),
where ρ > 0 is the penalty parameter, and ν̄ is a previous estimate of ν. Note that the “argmin” in Eq. (13) is now a convex quadratic function with the trivial solution ν = ν̄ + ρ(Df + Bz − b). By inserting this trivial solution into Eq. (12), the AL-based dual problem can be obtained as

Eq. (14)

\arg\min_{f,z}\ \gamma(f)+\psi(z)+\bar{\nu}^{T}(Df+Bz-b)+\frac{\rho}{2}\|Df+Bz-b\|_{2}^{2}.

Finally, Eq. (14) can be split into three optimization problems:

Eq. (15)

f^{(k+1)}\leftarrow\arg\min_{f}\ \gamma(f)+(\nu^{(k)})^{T}(Df+Bz^{(k)}-b)+\frac{\rho}{2}\|Df+Bz^{(k)}-b\|_{2}^{2},

Eq. (16)

z^{(k+1)}\leftarrow\arg\min_{z}\ \psi(z)+(\nu^{(k)})^{T}(Df^{(k+1)}+Bz-b)+\frac{\rho}{2}\|Df^{(k+1)}+Bz-b\|_{2}^{2},

Eq. (17)

\nu^{(k+1)}\leftarrow\nu^{(k)}+\rho(Df^{(k+1)}+Bz^{(k+1)}-b),
where {ν^(k+1), ν^(k)} correspond to {ν, ν̄}, respectively. In this strategy, Eqs. (15)–(17) are solved in an alternating and iterative form to find a point that belongs to the intersection of the two closed convex solution sets. Here, for each step, all the parameters are fixed except the optimization variables [e.g., f in Eq. (15) and z in Eq. (16)]. Then, by repeatedly projecting the updated variables onto each set, the algorithm converges toward a point that satisfies the constraints of all the sets simultaneously.

After defining the core structure of the ADMM algorithm, the following sections discuss two popular variants of the ADMM in image processing.84

Scaled form ADMM

The scaled form84 can be obtained using the equality ν^T r + (ρ/2)‖r‖₂² = (ρ/2)‖r + w‖₂² − (ρ/2)‖w‖₂² (Ref. 86), with r = Df + Bz − b and the scaled Lagrange multiplier w = (1/ρ)ν. This implementation modifies Eq. (14) as

Eq. (18)

\arg\min_{f,z}\ \gamma(f)+\psi(z)+\frac{\rho}{2}\|Df+Bz-b+w\|_{2}^{2}-\frac{\rho}{2}\|w\|_{2}^{2}.

Then, setting D = I, B = −I, and b = 0 (i.e., f = z), Eq. (18) can be split into three optimization problems:

Eq. (19)

f^{(k+1)}\leftarrow\arg\min_{f}\ \gamma(f)+\frac{\rho}{2}\|f-z^{(k)}+w^{(k)}\|_{2}^{2},

Eq. (20)

z^{(k+1)}\leftarrow\arg\min_{z}\ \psi(z)+\frac{\rho}{2}\|f^{(k+1)}-z+w^{(k)}\|_{2}^{2},

Eq. (21)

w^{(k+1)}\leftarrow w^{(k)}+\rho(f^{(k+1)}-z^{(k+1)}).

The scaled form of ADMM [i.e., Eqs. (19)–(21)] exhibits an improved convergence rate compared with the standard ADMM [i.e., Eqs. (15)–(17)]. The acceleration is achieved by introducing ρ as a scaling factor, which is particularly beneficial for large-scale optimization problems or problems with slow convergence rates. Further insights into these considerations, including heuristics for the effective selection of an appropriate scaling factor, can be found in Ref. 84.

Plug-and-play ADMM

The ADMM’s modular structure is one of its main features because it enables the decomposition of a complex optimization problem [i.e., Eq. (14) or (18)] into several simpler subproblems [i.e., Eqs. (15)–(17) or Eqs. (19)–(21)] that can be solved independently or using established solution methods. Moreover, the ADMM’s versatility enables modeling different sparsity-based optimization problems. For example, the Tikhonov optimization problem can be modeled by setting ψ(z) = ‖z‖₂² in Eq. (20). As another example, by setting B = −Ψ, D = I, b = 0 (i.e., f = Ψz), and ψ(z) = ‖z‖₁ in Eq. (18), Eq. (20) can be converted into the basis-pursuit denoising problem. In this regard, a popular framework is the plug-and-play (PnP)-ADMM,69 which allows plugging in an off-the-shelf image-denoising algorithm as a solver for the subproblems (see a Matlab implementation in Algorithm 3). In the PnP-ADMM, by setting γ(f) = ‖Φf − g‖₂², Eq. (19) has the closed-form solution

Eq. (22)

f^{(k+1)}=\left[\Phi^{T}\Phi+\frac{\rho}{2}I\right]^{-1}\left[\Phi^{T}g+\frac{\rho}{2}(z-w)\right].

Then, Eq. (20) can be rewritten as a denoising problem by setting ρ = 1/σ², resulting in

Eq. (23)

z^{(k+1)}=\arg\min_{z}\ \psi(z)+\frac{1}{2\sigma^{2}}\|z-\tilde{z}^{(k)}\|_{2}^{2},
where z̃^(k) = f^(k+1) + w^(k), and σ represents the denoising strength.88 Equation (23) can be solved as

Eq. (24)

z^{(k+1)}=D_{\sigma}(\tilde{z}^{(k)})=D_{\sigma}(f^{(k+1)}+w^{(k)}),
where Dσ is a denoiser. Note that the PnP-ADMM algorithm supports any denoiser that fulfills restrictive conditions, such as being non-expansive and having a symmetric Jacobian.89 For example, the block-matching and 3D filtering algorithm has been extensively used to enhance the denoising capabilities of the ADMM algorithm while preserving textures and fine details.90,91

Algorithm 3

Simulating dual-view CUP’s image reconstruction by a PnP-ADMM algorithm using Matlab.ᵃ

clear all
close all
clc
%% Load datacube
load('Cell.mat')
[M,N,L] = size(F);
F = F./max(F(:));
n = M*N*L;
mts = M*(N+L-1);
mtu = M*N;
global m
m = mts + mtu;
%% Mask
R = 1*(rand(M,N)>0.5);
R = repmat(R,1,1,L);
%% Sensing matrix
j = 0:n-1;
i = mod(j,M*N)+M*floor(j/(M*N));
Phi_ts = sparse(i+1,j+1,R(:),mts,n);
Phi_tu = kron(ones(1,L),speye(mtu,mtu));
Phi = [Phi_ts;Phi_tu];
%% Measurement
G = Phi*F(:);
G = G/max(G(:));
G = G/L;
%% PnP-ADMM parameters
addpath(genpath('./denoisers/RF/'));
dim = size(F);
A = @(F,trans_flag) afun2(F,trans_flag,Phi);
method = 'RF';
lambda = 0.25;
opts.rho = 0.1;
opts.gamma = 1;
opts.max_itr = 2000;
opts.print = true;
%% Main routine
F_tilde = PlugPlayADMM_general(F,G,A,lambda,method,opts,dim);

ᵃFunctions used in the above script can be downloaded from Ref. 92.
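For readers who prefer Python, the sketch below outlines the same PnP-ADMM loop at a reduced scale and under several stated assumptions: Phi is a sparse dual-view sensing matrix built as in Algorithm 2 (or the SciPy sketch following it), the f-update of Eq. (22) is solved iteratively with a conjugate-gradient solver rather than an explicit inverse, and a simple Gaussian filter stands in for the plug-in denoiser D_σ of Eq. (24). All names are illustrative and not taken from the downloadable code.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg
from scipy.ndimage import gaussian_filter

def pnp_admm_cup(Phi, g, dims, rho=0.1, sigma=1.0, n_iter=50):
    """Minimal PnP-ADMM sketch for dual-view CUP (illustrative, small scale).
    Phi: sparse (m x n) sensing matrix; g: measurement vector of length m;
    dims: (M, N, L) shape of the datacube to be reconstructed."""
    n = Phi.shape[1]
    f, z, w = np.zeros(n), np.zeros(n), np.zeros(n)
    # Normal-equation operator of the f-update in Eq. (22)
    A = LinearOperator((n, n), matvec=lambda x: Phi.T @ (Phi @ x) + (rho / 2) * x)
    Phi_t_g = Phi.T @ g
    for _ in range(n_iter):
        b = Phi_t_g + (rho / 2) * (z - w)
        f, _ = cg(A, b, x0=f, maxiter=30)                   # f-update (conjugate gradient)
        # Fortran-order reshape matches the Matlab-style vectorization used above
        z_tilde = f.reshape(dims, order='F') + w.reshape(dims, order='F')
        z = gaussian_filter(z_tilde, sigma).ravel(order='F')  # z-update: plug-in denoiser
        w = w + (f - z)                                     # Scaled Lagrange multiplier update
    return f.reshape(dims, order='F')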

2.2.2.

Deep-learning approaches

Deep-learning approaches have been increasingly featured owing to their faster reconstruction compared with their analytical-modeling-based counterparts. Recent advances have allowed embedding mathematical properties offered by the CS theory by designing custom layers that emulate the forward sensing model or exploiting spatiotemporal sparsity via image-denoising nets.93 Given access to rich available training datasets, many novel methods based on convolutional neural networks (CNNs) have been developed for CUP’s reconstruction as well as for the encoding mask design, including the end-to-end CNN with residual learning,94 the U-Net-based DeepCUP,95 the hybrid algorithm that combines the AL method with deep learning,96 and the snapshot-to-video autoencoder based on a generative adversarial network.97–103

Here, we review a representative CNN—the deep high-dimensional adaptive net (D-HAN)104 that offers multifaceted supervision to CUP by optimizing the encoding mask, sensing the shearing operation, and reconstructing the 3D datacubes. The main goal of the D-HAN is to leverage the merits of both the ADMM and the network-based CS methods by mapping one iteration of the ADMM steps to a deep network architecture. For these reasons, the D-HAN will be used as a benchmark to explain how to link the CUP’s forward model with a CNN approach.

Originally designed to use only the time-sheared view, the D-HAN is composed of two cascaded neural networks: a deep-unfolding-based network to embody the sensing model of the time-sheared view in CUP and a U-Net architecture105 to further improve image reconstruction (Fig. 4) by exploiting the spatiotemporal correlation of the transient scene. The deep-unfolding net and the U-Net manifest the “divide-and-conquer” approach embedded in the ADMM. Then, the time-unsheared view was incorporated to boost the reconstruction performance by using it as an initialization for the deep-unfolding network and a prior restriction in the loss function. This configuration leverages the original D-HAN’s mathematical advantages and the reduction of unknowns via prior information. This design is memory efficient and thus essential for learning to reconstruct high-dimensional datacubes.

Fig. 4

Schematic of the D-HAN for dual-view CUP’s image reconstruction. BN, batch normalization; ReLU, rectified linear activation unit. Adapted with permission from Ref. 104.

JBO_29_S1_S11524_f004.png

In this regard, the ADMM-based inverse problem can be formulated using the ADMM’s scaled form [i.e., Eqs. (19)–(21)]. Note that in Eq. (19), the analytical inverse model of f is a quadratic problem whose closed-form solution involves the inversion of an n×n matrix [see Eq. (22)]. Toward this goal, the Sherman–Woodbury–Morrison (SWM) matrix inversion lemma106—a mathematical theorem that allows a matrix’s inverse to be calculated by converting the problem into a smaller full-rank one—and the full-column-rank properties are exploited to reduce the process to a smaller-scale matrix inversion and obtain the closed-form solution of the first inverse model in Eq. (22):

Eq. (25)

f=\tilde{\rho}^{-1}\left[I-\Phi_{\mathrm{ts}}^{T}\left[\tilde{\rho}I+\Phi_{\mathrm{ts}}\Phi_{\mathrm{ts}}^{T}\right]^{-1}\Phi_{\mathrm{ts}}\right]\left[\Phi_{\mathrm{ts}}^{T}g_{\mathrm{ts}}+\tilde{\rho}(z-w)\right],
where Φ_ts Φ_ts^T ∈ ℝ^(m_ts×m_ts) is a matrix product that results in a diagonal matrix, and ρ̃ = ρ/2.
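The computational benefit of the SWM lemma can be verified numerically on a small random example: the identity below equates the large n×n inversion in Eq. (22) with the smaller m_ts×m_ts inversion in Eq. (25). This sketch is only a sanity check of the lemma, not part of the D-HAN code.

import numpy as np

# Sanity check of the Sherman-Woodbury-Morrison identity used in Eq. (25):
# (Phi^T Phi + rho I)^{-1} = rho^{-1} [I - Phi^T (rho I + Phi Phi^T)^{-1} Phi]
rng = np.random.default_rng(0)
m, n, rho = 20, 50, 0.7                     # m << n, as in CUP (m_ts << n)
Phi = rng.standard_normal((m, n))

lhs = np.linalg.inv(Phi.T @ Phi + rho * np.eye(n))
rhs = (np.eye(n) - Phi.T @ np.linalg.inv(rho * np.eye(m) + Phi @ Phi.T) @ Phi) / rho
print(np.allclose(lhs, rhs))                # True: only an m x m inverse is needed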

To implement the D-HAN, the first step is to define the operators of dual-view CUP’s data acquisition. First, the direct sensing operators of the time-sheared view and the time-unsheared view, denoted by 𝒢_ts and 𝒢_tu, are expressed as

Eq. (26)

\mathcal{G}_{\mathrm{ts}}(F,R)=\sum_{l=0}^{L-1}\mathcal{R}\left(\Gamma(l),\,F_{:,:,l}\odot R\right),

Eq. (27)

\mathcal{G}_{\mathrm{tu}}(F)=\sum_{l=0}^{L-1}F_{:,:,l}.

Here, 𝒢_ts(·): ℝ^(M×N×L) → ℝ^(M×[N+(L−1)]) and 𝒢_tu(·): ℝ^(M×N×L) → ℝ^(M×N). They are shown as the magenta and orange layers in Fig. 4. ⊙ represents the Hadamard product. The operator ℛ(·): ℝ^(M×N) → ℝ^(M×[N+(L−1)]) introduces a right zero padding (i.e., [F_{:,:,l}, 0] with 0 ∈ ℝ^(M×(L−1))) followed by a right-horizontal circular shift of Γ(l) pixels. A Python script to construct the sensing operators 𝒢_ts and 𝒢_tu is presented in Algorithm 4.

Algorithm 4

Programming the direct sensing operators of the time-sheared view and the time-unsheared view (i.e., 𝒢_ts and 𝒢_tu) using TensorFlow.

import tensorflow as tf

## Direct sensing operator of the time-sheared view
class DirectSensing_ts(tf.keras.layers.Layer):
    def __init__(self, L, M, N, **kwargs):
        super(DirectSensing_ts, self).__init__(**kwargs)
        self.L = L
        self.M = M
        self.N = N

    def get_config(self):
        config = super().get_config().copy()
        config.update({'bands': self.L})
        return config

    def call(self, F, R, **kwargs):
        F = tf.multiply(R, F)  # Spatial encoding - Hadamard product
        F = tf.pad(F, [[0, 0], [0, 0], [0, self.L - 1], [0, 0]], name="padsensing")  # Right zero padding
        Gts = None
        for i in range(0, self.L):  # Shear each frame by i pixels and integrate
            if Gts is not None:
                Gts = Gts + tf.roll(F[:, :, :, i], shift=i, axis=2)
            else:
                Gts = F[:, :, :, i]
        Gts = tf.expand_dims(Gts, axis=-1)
        Gts = tf.math.divide(Gts, self.L)
        return Gts  # -> Output: compressed measurement

## Direct sensing operator of the time-unsheared view
Gtu = tf.math.reduce_mean(F, axis=-1)

Then, the transpose sensing operators of the time-sheared view and the time-unsheared view, shown as the red and purple layers respectively in Fig. 4, are defined as

Eq. (28)

\left[\mathcal{F}_{\mathrm{ts}}(G_{\mathrm{ts}},R)\right]_{:,:,l}=\mathcal{S}\left(\Gamma(l),\,G_{\mathrm{ts}}\odot\mathcal{R}(\Gamma(l),R)\right),

Eq. (29)

\left[\mathcal{F}_{\mathrm{tu}}(G_{\mathrm{tu}})\right]_{:,:,l}=G_{\mathrm{tu}}.

Here, ℱ_ts(·): ℝ^(M×[N+(L−1)]) → ℝ^(M×N×L) and ℱ_tu(·): ℝ^(M×N) → ℝ^(M×N×L). They return a datacube from a 2D compressed measurement. 𝒮(·) is an operator that performs a left-horizontal circular shift of Γ(l) pixels, followed by the removal of the last (L−1) columns of the resulting shifted matrix to preserve the spatial dimension of the datacube. Algorithm 5 presents a Python script to construct ℱ_ts and ℱ_tu.

Algorithm 5

Programming the transpose sensing operators of the time-sheared view and the time-unsheared view (i.e., ℱ_ts and ℱ_tu) using TensorFlow.

## Transpose sensing operator of the time-sheared view
class TransposeSensing_ts(tf.keras.layers.Layer):
    def __init__(self, L, M, N, **kwargs):
        super(TransposeSensing_ts, self).__init__(**kwargs)
        self.L = L
        self.M = M
        self.N = N

    def get_config(self):
        config = super().get_config().copy()
        config.update({'bands': self.L})
        return config

    def call(self, Gts, R, **kwargs):
        F = None
        R = R[0, :, :, 0]
        Gts = Gts[:, :, :, 0]
        for i in range(0, self.L):  # Shift back by i pixels, crop, and decode frame i
            if F is not None:
                Ab = tf.roll(Gts, shift=-i, axis=2)
                Ax = tf.expand_dims(tf.multiply(R, Ab[:, :, 0:self.N]), -1)
                F = tf.concat([F, Ax], axis=-1)
            else:
                Ab = tf.roll(Gts, shift=0, axis=2)
                F = tf.expand_dims(tf.multiply(R, Ab[:, :, 0:self.N]), -1)
        F = self.L * F
        return F

## Transpose sensing operator of the time-unsheared view
Gtu = tf.expand_dims(Gtu, axis=-1)
F_tu = tf.broadcast_to(Gtu, [Gtu.shape[0], M, N, L])

Finally, the inverse operator of the time-sheared view, shown as the brown layer in Fig. 4, is defined as

Eq. (30)

\mathcal{I}_{\mathrm{ts}}(G_{\mathrm{ts}},R)=G_{\mathrm{ts}}\odot\left(\sum_{l=0}^{L-1}\mathcal{R}\left(\Gamma(l),R^{\circ 2}\right)+\tilde{\rho}\,\mathbf{1}\right)^{\circ -1},
where ℐ_ts(·): ℝ^(M×[N+(L−1)]) → ℝ^(M×[N+(L−1)]), 𝟏 ∈ ℝ^(M×[N+(L−1)]) is an all-one matrix, and (·)^∘2 and (·)^∘−1 represent the Hadamard (element-wise) square and the Hadamard inverse, respectively. An example Python script to construct the inverse operator of the time-sheared view ℐ_ts is summarized in Algorithm 6.

Algorithm 6

Programming the inverse operator of the time-sheared view (i.e., ℐ_ts) using TensorFlow.

import numpy as np
import tensorflow as tf

class InverseOperator_ts(tf.keras.layers.Layer):
    def __init__(self, L, M, N, **kwargs):
        super(InverseOperator_ts, self).__init__(**kwargs)
        self.L = L
        self.M = M
        self.N = N

    def get_config(self):
        config = super().get_config().copy()
        config.update({'bands': self.L})
        return config

    def build(self, input_shape):
        Lambda = tf.constant_initializer(1)
        Tau = tf.constant_initializer(1)
        Psi = np.zeros([self.M, self.N + self.L - 1])
        Psi = tf.constant_initializer(Psi)
        self.Lambda = self.add_weight(name="Lbd", initializer=Lambda, shape=(1,), trainable=True)
        self.Tau = self.add_weight(name="Tau", initializer=Tau, shape=(1,), trainable=True,
                                   constraint=tf.keras.constraints.MaxNorm(max_value=1, axis=0))
        self.Psi = self.add_weight(name="Psi", initializer=Psi,
                                   shape=(self.M, self.N + self.L - 1), trainable=True)
        super(InverseOperator_ts, self).build(input_shape)

    def call(self, Gts, R, **kwargs):
        Gts = Gts[:, :, :, 0]
        R1 = tf.broadcast_to(R, [1, self.M, self.N, self.L])
        Gp = DirectSensing_ts(L=self.L, M=self.M, N=self.N, name='DirectPr_InitInv')(R, R1)
        Gp = Gp[:, :, :, 0]
        Gp = Gp / self.Lambda + tf.ones(Gp.shape)       # Regularized denominator
        Inv = tf.math.reciprocal(Gp, name=None)         # Hadamard inverse
        Gts = tf.multiply((self.Tau ** 2) * Inv + (1 - self.Tau ** 2) * self.Psi, Gts)
        Gts = tf.expand_dims(Gts, axis=-1)
        F = TransposeSensing_ts(L=self.L, M=self.M, N=self.N, name='TransPr_InitInv')(Gts, R)
        F = F / (self.Lambda ** 2)
        return F

Following the definition of these five operators, the next step is to model the SWM matrix approach. Toward this goal, Eq. (25) is split into two main expressions, Φ_ts^T g_ts + ρ̃(z−w) and ρ̃^(−1)I − ρ̃^(−1)Φ_ts^T[ρ̃I + Φ_ts Φ_ts^T]^(−1)Φ_ts. In the D-HAN, the first expression is reflected as ℱ_ts coupled to two 2D convolutional layers, each followed by a batch normalization operation [referred to hereafter as a 2D convolutional + batch normalization (BN) layer and shown in cyan in Fig. 4]. The output from the 2D convolutional + BN layer is added to an estimate from the time-unsheared view generated by 𝒢_tu and ℱ_tu. Subsequently, the second expression is represented by two parallel arms. The upper arm, corresponding to ρ̃^(−1)Φ_ts^T[ρ̃I + Φ_ts Φ_ts^T]^(−1)Φ_ts, is composed of 𝒢_ts as a first layer followed by ℐ_ts and ℱ_ts along with four 2D convolutional + BN layers. The bottom arm, which corresponds to ρ̃^(−1)I, has three 2D convolutional + BN layers. The outputs of both arms are subtracted and given as the input to the U-Net in the D-HAN, which reflects Eq. (24). In the U-Net, the datacube passes through an encoding pathway composed of max-pooling layers that simultaneously reduce the spatial dimensions and increase the number of channels. This downsampling step returns a smaller datacube carrying the more meaningful high-level details of the image (e.g., edges, textures, or shapes) linked to the scene’s sparsity. Then, in the decoding step, composed of upsampling layers, the U-Net reconstructs the full-size datacube (denoted by F̃) using these learned high-level details.

The loss function L(·), used to learn the D-HAN’s weights, is established as

Eq. (31)

\mathcal{L}(F)=\ell_{1}(F,\tilde{F})+\ell_{1}(G_{\mathrm{ts}},\tilde{G}_{\mathrm{ts}})+\ell_{1}(G_{\mathrm{tu}},\tilde{G}_{\mathrm{tu}})+\ell_{\mathrm{SSIM}}(F,\tilde{F}),
where F̃ is the D-HAN’s output, and G̃_ts and G̃_tu are estimates of the compressed measurements computed from F̃ using Eqs. (26) and (27), respectively. ℓ₁(·) is the ℓ₁-norm operator, and ℓ_SSIM(·) represents the structural similarity (SSIM) index.107
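A compact TensorFlow rendering of a loss with this structure is sketched below. The exact weighting, SSIM implementation, and normalization used in the D-HAN may differ, so treat the snippet as an illustrative assumption; here the SSIM term is taken as 1 − SSIM so that lower is better, and tf.image.ssim is used merely for convenience.

import tensorflow as tf

def dhan_style_loss(F, F_tilde, Gts, Gts_tilde, Gtu, Gtu_tilde):
    """Loss with the structure of Eq. (31): three l1 terms plus an SSIM term.
    F, F_tilde: (batch, M, N, L) datacubes scaled to [0, 1];
    Gts/Gtu and their tilde versions: measurements re-estimated from F_tilde."""
    l1 = lambda a, b: tf.reduce_mean(tf.abs(a - b))
    ssim_term = 1.0 - tf.reduce_mean(tf.image.ssim(F, F_tilde, max_val=1.0))
    return l1(F, F_tilde) + l1(Gts, Gts_tilde) + l1(Gtu, Gtu_tilde) + ssim_term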

2.2.3.

Simulation

CUP’s image reconstruction of the “cell-division” scene is simulated using both the analytical-modeling-based algorithm (in Matlab) and deep-learning algorithm (in Python). The dimensions of the datacube were set as M×N×L=256×256×25  pixels, and the binary mask holds the structure proposed in Ref. 104. Four popular databases—“SumMe,”108 “Need for Speed,”109 “Sports Videos in the Wild,”110 and “Mouse Embryo Tracking”67—were used to train the D-HAN. The PnP-ADMM algorithm and a pretrained version of the D-HAN can be downloaded from Ref. 92 (Matlab 2022b) and Ref. 111 (Python, TensorFlow). In addition, a more beginner-friendly Python version is available in Ref. 112, which was trained on the Google Colaboratory (CoLab) application—a free Jupyter Notebook interactive development environment for Python hosted in Google’s cloud.

Six exemplary frames of the scene (as the ground truth) and their corresponding frames reconstructed by single-view and dual-view CUP using the PnP-ADMM and the D-HAN are shown in Fig. 5(a). The movie is shown in Video 1. As shown in Figs. 5(b) and 5(c), the dual-view approach exceeds the reconstruction performance of single-view CUP in terms of the average peak signal-to-noise ratio (PSNR), defined as
\overline{\mathrm{PSNR}}=\frac{1}{L}\sum_{l=0}^{L-1}\left[10\log_{10}\left(\frac{[\max(F_{:,:,l})]^{2}}{m_{\mathrm{tu}}^{-1}\|\mathrm{vec}(F_{:,:,l})-\mathrm{vec}(\hat{F}_{:,:,l})\|_{2}^{2}}\right)\right],
and the average SSIM index,113 defined as
\overline{\mathrm{SSIM}}=\frac{1}{L}\sum_{l=0}^{L-1}\left[[\mathrm{Lum}(F_{:,:,l},\hat{F}_{:,:,l})]^{\alpha}[\mathrm{Cont}(F_{:,:,l},\hat{F}_{:,:,l})]^{\beta}[\mathrm{Struc}(F_{:,:,l},\hat{F}_{:,:,l})]^{\gamma}\right].
Here, vec(·) is a vectorization operator, and F̂ is the reconstructed result. The operators Lum(·) = (2μ_x μ_y + C₁)/(μ_x² + μ_y² + C₁), Cont(·) = (2σ_x σ_y + C₂)/(σ_x² + σ_y² + C₂), and Struc(·) = (σ_xy + C₃)/(σ_x σ_y + C₃) measure the similarities in luminance, contrast, and structure, respectively, where μ_x, μ_y, σ_x, σ_y, and σ_xy are the local means, standard deviations, and cross-covariance of the images. {α, β, γ} > 0 are parameters used to adjust the relative importance of the three components. C₁, C₂, and C₃ are constants to stabilize the division with a weak denominator. For the results shown in Fig. 5, the SSIM parameters were set as α = β = γ = 1, C₁ = 0.01², C₂ = 0.03², and C₃ = C₂/2. The D-HAN obtains a better average PSNR and a comparable average SSIM relative to the PnP-ADMM approach in both single-view and dual-view CUP.
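In practice, these frame-averaged metrics can be computed with standard library routines; the sketch below uses scikit-image implementations with illustrative array names and should be regarded as an equivalent convenience, not the exact evaluation code used for Fig. 5.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Frame-averaged PSNR and SSIM between a ground-truth datacube F and a
# reconstruction F_hat, both of shape (M, N, L) with values in [0, 1].
def average_metrics(F, F_hat):
    L = F.shape[2]
    psnr = np.mean([peak_signal_noise_ratio(F[:, :, l], F_hat[:, :, l], data_range=1.0)
                    for l in range(L)])
    ssim = np.mean([structural_similarity(F[:, :, l], F_hat[:, :, l], data_range=1.0)
                    for l in range(L)])
    return psnr, ssim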

Fig. 5

Simulation of CUP’s image reconstruction of the “cell-division” dynamic scene using the PnP-ADMM algorithm and the D-HAN. (a) Six selected frames of the ground truth and the single-view and dual-view reconstructions using the PnP-ADMM and D-HAN algorithms. (b) PSNR of each reconstructed frame. (c) As (b), but showing the SSIM index (Video 1, MP4, 408 KB [URL: https://doi.org/10.1117/1.JBO.29.S1.S11524.s1]).

JBO_29_S1_S11524_f005.png

CUP’s performance decreases with higher noise and stronger compression. Table 1 illustrates this general trend from an ablation analysis of the “cell-division” dynamic scene using the PnP-ADMM algorithm. Higher noise levels reduce spatial resolution. Higher compression ratios result in stronger blurring in the temporal shearing direction, which further decreases the spatial resolution in that direction.59,91,114 Both factors hamper the reconstruction algorithm’s ability to accurately place the correct amount of intensity from the compressed snapshot to the appropriate spatiotemporal position in the reconstructed datacube.

Table 1

Average PSNRs in reconstructed datacubes with different compression ratios and signal-to-noise ratios (SNRs).

Compression ratio | SNR = 15 dB | SNR = 20 dB | SNR = 25 dB | SNR = 30 dB | Infinity
10.1×             | 26.3 ± 0.8  | 27.9 ± 0.5  | 29.5 ± 0.7  | 30.0 ± 0.8  | 30.4 ± 0.9
11.9×             | 26.2 ± 0.6  | 27.7 ± 0.5  | 29.2 ± 0.6  | 29.9 ± 0.7  | 30.2 ± 0.8
16.4×             | 26.1 ± 0.5  | 27.5 ± 0.5  | 28.7 ± 0.5  | 29.7 ± 0.7  | 29.9 ± 0.7
22.9×             | 25.8 ± 0.8  | 27.4 ± 0.4  | 28.2 ± 0.6  | 28.8 ± 0.7  | 28.9 ± 0.8
45.6×             | 25.6 ± 0.9  | 27.1 ± 0.8  | 28.1 ± 0.9  | 28.6 ± 1.0  | 28.8 ± 1.1
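For reference, noisy measurements at a prescribed SNR, as listed in the columns of Table 1, can be simulated by scaling additive white Gaussian noise to the signal power. The helper below is a minimal sketch of one common convention and is not taken from the paper’s code.

import numpy as np

def add_noise_at_snr(g, snr_db, rng=None):
    """Return g plus white Gaussian noise such that the signal-to-noise ratio
    (mean signal power over noise power) equals snr_db in decibels."""
    rng = np.random.default_rng(0) if rng is None else rng
    signal_power = np.mean(np.asarray(g, dtype=float) ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return g + rng.normal(0.0, np.sqrt(noise_power), size=np.shape(g))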

3.

System

The construction of a CUP system involves a careful selection of three crucial components. First, a spatial encoder modulates the dynamic event. Second, a temporal shearing unit deflects the spatially encoded frames to different spatial positions according to their time of arrival. Finally, a 2D sensor integrates the spatially encoded and temporally sheared datacube into the time-sheared view. For dual-view CUP, another 2D sensor integrates the dynamic scene into the time-unsheared view. To date, many approaches have been implemented to devise each component. A comprehensive survey of these implementations, their advantages, and limitations will be presented in this section, followed by a discussion on important steps to calibrate a CUP system.

3.1.

Spatial Encoder

The selection of a suitable spatial encoder in a CUP system includes encoding pattern design and the encoder’s implementation. Because CUP relies on CS principles, its sensing matrix can be designed based on the restricted isometry property (RIP) to ensure its incoherence to the representation matrix of the scene. Notably, the sensing matrix based on a random pattern has been verified to meet the RIP criterion for a wide range of representation bases.115 Therefore, pseudorandom masks [Fig. 6(a)] are dominantly implemented as spatial encoders in reported CUP systems.

Fig. 6

Representative encoding patterns for CUP. (a) Pseudorandom pattern. (b) Deep-learning-optimized pattern. Insets: Zoomed-in views of local regions.

JBO_29_S1_S11524_f006.png

The RIP also provides a valuable metric for evaluating the encoder’s quality. The general strategy is to reduce the coherence between the sensing matrix and the representation matrices to ensure that the projection of high-dimensional data onto a lower-dimensional space preserves the essential data features.116,117 It guarantees that the compressed measurements retain sufficient information to accurately reconstruct the original signal. To date, several works have improved the mask via deep learning.104,118,119 As an example, an encoding mask designed via the D-HAN is shown in Fig. 6(b), where the shearing operation in CUP’s forward model and the training data produce horizontal stripe-like structures.104
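As a simple, hedged illustration of this idea, the coherence between a column-normalized equivalent dictionary (the sensing matrix times a representation basis) can be evaluated numerically; the sketch below uses a random binary matrix and a DCT basis purely as an example and is not part of any published CUP design procedure.

import numpy as np
from scipy.fft import dct

# Coherence of the equivalent dictionary Phi @ Psi (a common incoherence surrogate).
rng = np.random.default_rng(0)
m, n = 64, 256
Phi = (rng.random((m, n)) > 0.5).astype(float)     # Random binary sensing matrix
Psi = dct(np.eye(n), norm='ortho')                 # Orthonormal DCT representation basis

A = Phi @ Psi
A = A / np.linalg.norm(A, axis=0, keepdims=True)   # Normalize columns
G = np.abs(A.T @ A)                                # Gram matrix of normalized columns
np.fill_diagonal(G, 0)
print('mutual coherence:', G.max())                # Smaller is better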

As summarized in Table 2, four approaches have been used to implement CUP’s spatial encoders: digital micromirror devices (DMDs),17,66,102,120,121 liquid-crystal spatial light modulators (LC-SLMs),134 high-definition printing,97,135 and photolithography.91 Among them, the DMD, as a reflective binary-amplitude spatial light modulator,136 can provide reconfigurable, stable, and broadband encoding [Fig. 7(a)]. However, due to the micromirror’s tilt angle, the DMD is often required to be placed in the Littrow configuration in CUP systems17,37,59 to retro-reflect the incident light. Since the DMD is not parallel to the object plane, this design limits the field of view (FOV). Moreover, the DMD’s structure limits light efficiency in three main aspects. First, since the DMD has a 94% fill factor, a part of incident light is lost in the gaps between neighboring micromirrors. Second, as a 2D diffraction grating,136 the DMD has an overall diffraction efficiency of 86%,137 which indicates energy loss in high-diffraction orders. Finally, the aluminum coating of the micromirrors has a reflectivity of 89% in the visible spectrum with a dip at around 800 nm corresponding to the absorption of inter-band transitions in aluminum. Consequently, the constructed CUP system may not have an optimal spectral response for the dynamic scenes under investigation.

Table 2

Representative approaches for CUP’s spatial encoders.

Approach | Advantages | Limitations | References
DMD | Programmable encoding; broad operating spectrum | Restricted FOV due to the Littrow configuration; energy loss due to the limited fill factor and diffraction; nonoptimal spectral response due to the micromirror’s coating | 17, 55, 57, 58, 60–62, 64–66, 96, 99, 102, 104, 114, 116, 120–133
LC-SLM | Programmable encoding; phase and amplitude modulation in grayscale; reflective and transmissive encoding ability | Wavelength- and polarization-sensitive modulation; relatively low fill factor for the transmissive type; flicker noise | 134
High-definition printing | Transmissive encoding; low cost; broad operating spectrum | Unreconfigurable encoding | 97, 135
Photolithography | Transmissive encoding; high resolution; broad operating spectrum | Unreconfigurable encoding; high cost | 91

Fig. 7

Pseudorandom binary masks displayed on representative spatial encoders. (a) DMD. (b) LC-SLM. (c) Plastic mask fabricated by high-definition printing. (d) Chromium mask made by photolithography. Inset: Zoom-in view of a local region.

JBO_29_S1_S11524_f007.png

Another choice for reconfigurable spatial encoding is LC-SLMs. They have been widely implemented in coded optical imaging.18,138–141 LC-SLMs can simultaneously modulate amplitude and phase in grayscale.134 In the context of CUP, they can provide both reflective and transmissive spatial encoding [Fig. 7(b)]. Nonetheless, LC-SLMs could also bring some limitations to spatial encoding. Their modulation is sensitive to both wavelength and polarization. Moreover, the relatively low fill factor of transmissive LC-SLMs (e.g., 58%)142 and the flicker noise could limit pattern quality and encoding stability.143

Besides using programmable devices, an encoding mask can be directly fabricated on a substrate. As a representative approach, high-definition printing can manufacture encoding masks at resolutions of up to 50,800 dots per inch, in sizes of up to 30 in. × 30 in., at $16.7 per in.² (Ref. 144) [Fig. 7(c)]. In one printing task, users can pack multiple masks with different encoding pixel sizes down to 7 μm and different pattern types, as well as calibration patterns such as single pinholes, pinhole arrays, and slits. As another approach, photolithography can produce spatial encoders with nanometer-level encoding pixel sizes over inches [Fig. 7(d)]. As an example, a 3-in. × 3-in. mask with 125-nm resolution can be fabricated at ∼$6,000. As a well-established fabrication technique, photolithography can be used with various materials to target different spectral bands.145 These fabricated coded masks can be directly inserted into CUP systems, which conserves space for a more compact system design. Although capable of providing broadband and transmissive encoding, these two approaches can only prepare fixed spatial encoders. In addition, the almost unavoidable defect pixels in the fabricated encoder require careful calibration to build an accurate sensing matrix.

3.2.

Temporal Shearing Unit

Depending on the necessity of external power, temporal shearing units can be classified into passive units and active units (Table 3). The former deflect the temporal information transferred to certain photon tags (e.g., wavelengths) by exploiting the properties of these tags (e.g., color dispersion). Being jitter-free, these compact units bring stable operation without increasing the control complexity.59 The active units are driven by time-varying electric signals to trigger deflection. Usually integrated into the detection side of the imaging systems, they enable receive-only detection, which is specifically suited for capturing self-luminescent and color-selective events.16,18,66

Table 3

Representative methods for CUP’s temporal shearing units.

Category | Approach | Advantages | Limitations | References
Passive | Grating | Compact; low cost; jitter-free; ultrafast shearing | Requirement of chirped pulse illumination; fixed shearing rate | 29, 59, 121
Passive | Metalens | Compact, lightweight, and less complex optomechanically; joint temporal shearing and imaging; jitter-free; ultrafast shearing | Requirement of chirped pulse illumination; fixed shearing rate; limited aperture size; high cost | 146
Active | Image-converter streak tube | Receive-only detection; tunable shearing speeds; ultrafast shearing | High cost; space-charge effect; electronic jitter; low overall efficiency; spectra limited by the photocathode | 17, 55, 57, 58, 60–62, 64–66, 91, 96, 99, 102, 104, 114, 116, 120, 121, 123–131, 147
Active | Rotating mirror | Receive-only detection; tunable shearing speeds; all-optical operation; broad operating spectrum; low cost | Relatively slow shearing speed | 37, 97, 148, 149
Active | TDI technique | Receive-only detection; joint temporal shearing and spatiotemporal integration | Fixed shearing speed; relatively slow shearing speed | 134, 150
Active | Electro-optical deflector | Receive-only detection; all-optical operation; ultrafast shearing | Small numerical aperture; high operating voltage; limited deflection angle | 122
Active | Molecular deflector | Receive-only detection; all-optical operation; small size; ultrafast shearing | Requirement of an ultrafast, high-intensity pump laser pulse | 151

Figure 8 shows two examples of passive temporal shearing units. Both need to team up with a chirped ultrashort probe pulse, which maps the temporal information of the event to its spectral band. As shown in Fig. 8(a), the modulated chirped pulse is spatially dispersed by a grating.59 In recent years, the development of metamaterials has made metalenses a potential passive temporal shearing unit. They consist of an array of waveguide structures with a subwavelength size, with resonant metamaterial elements etched into the surface [Fig. 8(b)].152,153 Metalenses can strongly disperse light while manipulating its phase, amplitude, and polarization.154 This property has been exploited in hyperspectral imaging.155 Grafting this sensing paradigm in CUP, a metalens integrates imaging and temporal shearing, which greatly reduces the system’s size and complexity.146 Besides the aforementioned two units, other dispersive optical elements such as kinoforms,156 zone plates,157 and diffractive optical elements (DOEs)158 could also be used for passive temporal shearing of chirped pulses.

Fig. 8

Representative passive temporal shearing units for CUP. The temporal information is mapped to the spectrum and deflected to different spatial positions by (a) a diffraction grating and (b) a metalens. t1 to tn, temporal information.

JBO_29_S1_S11524_f008.png

Active temporal shearing units have also been featured in many CUP systems. As an example, the image-converter streak tube is shown in Fig. 9(a). Such a device works by directing the dynamic scene onto a photocathode, where the incident photons are converted to photoelectrons. After being accelerated by a pulling voltage applied to a metal mesh, these photoelectrons are temporally sheared by a varying electric field produced by applying a voltage to a pair of sweep electrodes. Then, the photoelectric signal is amplified by a microchannel plate. Finally, the photoelectrons bombard a phosphor screen and are converted back to photons.59,91 The configuration of the image-converter streak tube takes advantage of the movement of electrons under high-voltage electric fields, enabling ultrafast shearing for the CUP system to provide up to femtosecond-level temporal resolution.58,59 However, this operation is inevitably affected by electronic jitter. Moreover, due to the space-charge effect in electronic imaging,160 a trade-off needs to be made between the incident light intensity and the signal gain, which limits the imaging quality of streak-tube-based CUP systems.37,151 The efficiency of image-converter streak tubes is also inherently limited by the photon–electron–photon conversion. The quantum yield of the photocathode is moderate for visible light and decreases dramatically for near-infrared light.161,162 The phosphor screen also has a relatively low conversion efficiency, especially for the fast-responding types.163 The limited overall efficiency makes image-converter streak tubes less suitable for imaging faint transient events.

Fig. 9

Representative active temporal shearing units for CUP. (a) Image-converter streak tube. (b) Rotating mirror. (c) TDI mode of a CCD camera. (d) Electro-optical deflector. (e) Ultrashort-pulse-induced CO2 molecule deflector. α, Deflection angle. (b) Reprinted with permission from Ref. 135. (d) Adapted with permission from Ref. 159. (e) Adapted with permission from Ref. 151.

JBO_29_S1_S11524_f009.png

Rotating mirrors are another popular choice of active temporal shearing units for CUP. The mirror rotation continuously alters the angle of incidence, hence shearing the reflected light [Fig. 9(b)]. Rotating mirrors are preferred to be placed at the Fourier plane of a 4f-system so that after the second lens, the chief rays of all temporal frames can propagate and enter the sensor perpendicularly, which avoids aberrations introduced by the field curvature.164 Producing tunable temporal resolutions typically from hundreds of nanoseconds to microseconds, they are much slower than the image-converter streak tube. However, the all-optical operation avoids the space-charge effect, which enables optics-limited spatial resolution and high dynamic ranges.37 Moreover, by circumventing the photon-to-photoelectron conversion in a photocathode, rotating-mirror-based CUP systems can employ sensors in matching responsive bands to sense photons with relatively low energy (e.g., in the infrared range). Leveraging high reflectivity coatings (e.g., >95% at 0.4 to 20.0  μm),165 these CUP systems are attractive candidates for high-sensitivity transient imaging at broad spectral bands.

Besides these two popular approaches, other specialized optical and/or electronic devices have been implemented as CUP’s temporal shearing units. As an example, Fig. 9(c) shows the operating principle of the time-delay-integration (TDI) mode of a CCD camera. Initially developed to visualize moving objects under extremely low light levels, the TDI configuration employs a long exposure during which the generated photoelectrons shift down row by row before eventually being read out.166 In this way, the read-out data are the integration of information from different rows at different time points. Such a mechanism enables TDI cameras to combine the operations of temporal shearing and spatiotemporal integration, which considerably reduces the system’s complexity.134,150 Recently, electron-transfer-based temporal shearing has also been implemented in a streak-camera sensor.167,168 Hundreds of sampling and storage cells are placed underneath a line of photodiodes. During the sensor’s exposure, the 1D signal is sampled and sequentially stored at a temporal resolution of 500 ps. Although its 1D FOV precludes its implementation with CUP, this highly integrated device marks its potential to be further developed for future CUP systems.

Electro-optic crystals can also be used as the temporal shearing unit of CUP systems. As shown in Fig. 9(d), a time-varying electric field is applied to modulate the gradient of the refractive index of an electro-optic crystal. In this way, this electro-optic deflector (EOD) can direct the incident light to different propagation directions according to its time of arrival.159,169 The EOD is currently the only all-optical shearing unit capable of achieving 50×10⁹ frames per second (fps) in a CUP system.122 However, the shortcomings of small numerical aperture, high operating voltage, and limited deflection angle still hinder EODs from further applications in CUP.

Finally, transient material behaviors have been proposed as CUP’s temporal deflectors. Figure 9(e) depicts how the transient alignment of CO2 molecules excited by an ultrashort laser pulse can induce a time-varying refractive-index gradient, resulting in different deflection angles to temporally shear the dynamic scene.151 Although it has not been experimentally demonstrated, this mechanism could open a new avenue of transient-event-assisted ultrafast imaging. The fast responses of properly selected materials could push CUP’s imaging speed to the quadrillion-fps level.170

3.3.

2D Sensor

After being spatially encoded and temporally sheared, the dynamic scene is spatiotemporally integrated over each pixel by a 2D sensor. Most of the current commercial cameras (e.g., CCD, CMOS, scientific CMOS, and electron-multiplying CCD cameras) have been implemented to construct a CUP system. Nonetheless, as the last component in a CUP system, the 2D sensor must be carefully selected to accommodate the characteristics of the dynamic scenes, the spatial encoders, and the temporal shearing units. In terms of spectral responsiveness, the 2D sensor should ideally be most sensitive in the spectral range of the dynamic scene; however, this choice may be constrained by the preceding device. For example, for the image-converter streak tube, the quantum yield of the deployed camera should peak at the emission wavelength of the phosphor screen (e.g., 540 nm for a P43 phosphor screen). In addition, the pixel size of the sensor must be small enough to sufficiently sample each encoding pixel at the given system’s magnification.
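
As a quick sanity check of this sampling requirement, the following Matlab sketch uses purely hypothetical numbers (not taken from any system described above) to verify that one encoding pixel spans at least two sensor pixels, a Nyquist-type criterion.

% Hypothetical illustration of the encoding-pixel sampling check; all numerical values are assumptions.
encPitch = 7.56e-6;        % assumed encoding-pixel pitch at the encoder plane [m]
mag      = 0.5;            % assumed magnification from the encoder plane to the sensor
pixSize  = 1.85e-6;        % assumed sensor pixel size [m]
samplesPerEncPixel = encPitch * mag / pixSize;   % approximately 2 sensor pixels per encoding pixel
assert(samplesPerEncPixel >= 2, 'Encoding pixels are undersampled at this magnification.');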

The shutter type is another important factor in sensor selection. Overall, the global shutter is much preferred over the rolling shutter for CUP operation. Figure 10(a) shows a simulated dynamic scene of a rotating spinner with constant intensity. For a rolling-shutter sensor, the exposure of each row starts sequentially from the top to the bottom, lasts for the same period, and hence ends at a different time point. The induced rolling-shutter effect distorts the image of fast-moving objects.171 CUP can overcome this distortion by putting the information back in the correct spatiotemporal position. However, because of the different exposure start times, only a part of the FOV can be reconstructed in the frames at the beginning and the end of the movie, as shown in Fig. 10(b). This issue can be bypassed by confining the occurrence of the dynamic event to the interval during which all rows are being exposed. In contrast, the global shutter, which can be implemented in both CCD and CMOS sensors, allows capturing the dynamic scene over the full FOV [Fig. 10(c)] and thus avoids the time-windowing effect of the rolling shutter.
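
The difference between the two shutter types can be emulated with a simple Matlab sketch (all sizes and timing parameters below are placeholders): the global-shutter snapshot integrates every row over the same time window, whereas each row of the rolling-shutter snapshot integrates over a window that starts progressively later.

% Placeholder scene and timing parameters illustrating global- versus rolling-shutter capture.
Ny = 64; Nx = 64; Nt = 96;                     % rows, columns, and number of time steps
scene  = rand(Ny, Nx, Nt);                     % placeholder dynamic scene
expLen = 32;                                   % assumed exposure length per row [time steps]
globalShot = sum(scene(:, :, 1:expLen), 3);    % all rows share the same exposure window
rollingShot = zeros(Ny, Nx);
rowDelay = 1;                                  % assumed exposure-start offset between adjacent rows
for y = 1:Ny
    t0 = 1 + (y - 1) * rowDelay;               % row y starts its exposure later than row y - 1
    rollingShot(y, :) = sum(scene(y, :, t0:t0+expLen-1), 3);
end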

Fig. 10

Comparison between the rolling shutter and the global shutter in CUP’s data acquisition. (a) 10 representative frames of a simulated rotating-spinner scene. (b) Rolling shutter’s operating principle (top-left panel), the produced 2D snapshots (top-middle and top-right panels), and the illustration of CUP’s reconstructed frames (bottom panel). (c) As (b), but showing the results produced by the global shutter. F1 to F10, frame indices.


3.4.

Calibrations

In this section, we outline a few important calibration steps in CUP’s operation. They are necessary for both physical data acquisition and computational reconstruction of the dynamic scene.

3.4.1.

Co-registration of multiple views

Due to the differences among individual imaging arms, the acquired snapshots may have different aberrations. It is thus indispensable to co-register all the views accurately before image reconstruction. Toward this goal, a static image of the time-sheared view is acquired by turning off the shearing unit (Fig. 11). In Matlab, the co-registration can be carried out using “control point registration” in the “Image Processing Toolbox.”172 The function “cpselect” opens a window for the user to select at least four pairs of control points in both views. Then, the function “fitgeotform2d” estimates the transformation matrix that best aligns the control points. Finally, the function “imwarp” applies the transformation matrix to complete the co-registration. The co-registered time-unsheared view and the time-sheared view are then fed into the reconstruction algorithms.
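
The procedure above can be summarized by the following Matlab sketch; the image file names are placeholders, and the projective transformation type is one reasonable choice among several supported by “fitgeotform2d”.

% Co-register the static time-sheared view to the time-unsheared view (shearing unit off).
unsheared = im2double(imread('view_unsheared.tif'));        % placeholder file name (reference view)
sheared   = im2double(imread('view_sheared_static.tif'));   % placeholder file name (view to register)
[mp, fp]  = cpselect(sheared, unsheared, 'Wait', true);     % manually pick >= 4 control-point pairs
tform     = fitgeotform2d(mp, fp, 'projective');            % estimate the geometric transformation
registered = imwarp(sheared, tform, 'OutputView', imref2d(size(unsheared)));  % apply it
% 'registered' and 'unsheared' are now spatially aligned and can be fed to the reconstruction.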

Fig. 11

Co-registration for dual-view CUP.


3.4.2.

Acquisition of the encoding mask

The experimentally captured encoding mask image produces better reconstruction than the design file of the used pattern because it takes various practical imperfections into account. For example, the DMD’s micromirrors may be oriented differently from the sensor’s pixels. Fixed encoders fabricated by high-definition printing or photolithography also have defective pixels or membrane curving. These imperfections cannot be eliminated even if the imaging system is tuned to a magnification that matches the size of the encoding pixels to that of the sensor’s pixels. Thus, a mask image is captured by turning off the shearing unit and is then binarized for CUP’s image reconstruction (Fig. 12). Besides background subtraction and white-field correction, threshold selection and edge detection are combined to optimize the binarization.57 This calibration can also reduce aberrations and field curvature.123 In practice, the FOV and the maximum shearing distance are also limited to ensure high quality in the captured images.
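
As a minimal sketch of the binarization step in Fig. 12, the following Matlab snippet assumes that “maskCorr” is the captured mask image after background subtraction and white-field correction, normalized to the range [0, 1]; the Canny detector is one possible choice for the edge-detection step.

% Binarize the corrected mask image by combining a global threshold with edge detection.
bwThresh = imbinarize(maskCorr, graythresh(maskCorr));   % threshold-based binarization [Fig. 12(c)]
bwEdge   = edge(maskCorr, 'canny');                      % edge-detection-based binarization [Fig. 12(d)]
bwMask   = bwThresh | bwEdge;                            % combine the two results with an OR operation [Fig. 12(e)]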

Fig. 12

Binarization of the captured encoding mask image. (a) Section cropped from the acquired mask image. (b) Cropped section after background subtraction and white-field correction. (c) Image binarization by applying a threshold to (b). (d) Image binarization by detecting edges in panel (b). (e) Combining (c) and (d) using OR operation. Adapted with permission from Ref. 57.


3.4.3.

Linearity test of shearing operation

Linear temporal shearing is used in CUP’s forward model (see Sec. 2.1). However, various experimental factors, including misalignment, jitter, and imperfect instrument responses, could cause the shearing operation to deviate from linearity.173 Therefore, a linearity test is required to assess the system’s performance and to compensate for these factors. An example for a rotating-mirror-based CUP system is shown in Fig. 13(a). About 100 frames containing number indices and short lines were displayed on a DMD at 20 kHz. From the recorded snapshot, the displacements between the centroids of consecutive short lines were calculated to characterize the rotating mirror’s shearing operation. In this example, the shearing deviates from a linear function by 2 pixels over 100 frames.104
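
A centroid-based analysis of this kind can be sketched in Matlab as follows; “lineMasks” is an assumed Ny × Nx × Nf binary stack in which each slice contains the segmented short line of one frame extracted from the recorded snapshot, and the exact processing in Ref. 104 may differ in detail.

% Quantify the deviation of the shearing operation from a linear function.
Nf  = size(lineMasks, 3);
pos = zeros(Nf, 1);
for k = 1:Nf
    s = regionprops(lineMasks(:, :, k), 'Centroid');    % centroid of the short line in frame k
    pos(k) = s(1).Centroid(1);                          % position along the shearing axis [pixels]
end
p = polyfit((1:Nf)', pos, 1);                           % best-fit linear shearing
deviationFromLinear = pos - polyval(p, (1:Nf)');        % residual deviation from linearity [pixels]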

Fig. 13

Linearity test of the temporal shearing operation. (a) Test of a rotating-mirror-based temporal shearing. Left panel: composite of the 100 frames with consecutive short lines and frame indices. Right panel: analysis of displacement in the time-sheared view. (b) Test of a diffraction-grating-based temporal shearing. Left panel: setup (top) of generating pulses with selected wavelengths (bottom). Right panel: result of the linearity test by illuminating a square pattern with the narrow-band pulses. (c) Test of a streak-tube-based temporal shearing. Left panel: setup of the test. Right panel: cross-section in the streak measurement of the pulse train generated by the etalon. (a) Adapted with permission from Ref. 104. (b) Adapted with permission from Ref. 59. (c) Adapted with permission from Ref. 174.


As another example, Fig. 13(b) shows the linearity test of a diffraction-grating-based CUP system.59 A tunable bandpass filter was built based on a rotating grating [top-left panel in Fig. 13(b)] to produce pulses with a selected wavelength [bottom-left panel in Fig. 13(b)]. The generated narrowband pulses illuminated a small square pattern, whose positions in the streak images were measured to obtain their relationship with the wavelengths [right panel in Fig. 13(b)]. Finally, an example of the linearity test of an image-converter streak tube is shown in Fig. 13(c). Following a calibration protocol similar to that of the diffraction-grating-based CUP system, a pulse train with a known interval was generated by an etalon. The linearity was computed by measuring the deflected pulses’ positions.174

4.

Biomedical Applications

Many biological processes, such as blood flow, brain activity, and cellular dynamics, are not repeatable. Single-shot CUP provides an innovative and complementary tool to probe these events, generating valuable insights for the fundamental understanding of their underlying mechanisms. In this section, we focus on four representative biomedical applications of CUP.

4.1.

Neuroimaging

Monitoring the spatiotemporal dynamics of neuron signaling is essential to the understanding of the brain’s structure and function. Direct visualization can aid researchers and clinicians in studying neurological disorders, cognitive processes, and brain development. Frame rates at the level of one billion fps are required to image the propagation of action potentials (APs) in myelinated axons (~100 m/s) with high spatial resolution and in real time. Unreachable by conventional electronic sensors, such frame rates pose a considerable technical challenge to neuroimaging research.

Overcoming this challenge, CUP has imaged phase and lifetime dynamics evoked by neuronal activities. As an example, by combining Mach–Zehnder interferometry with its ultrafast imaging speed and large sequence depth, differentially enhanced CUP (Diff-CUP) imaged internodal current flow in myelinated axons from the sciatic nerves of Xenopus laevis frogs at 200×10⁹ fps65 [Fig. 14(a)]. The high phase sensitivity of Diff-CUP enables the simultaneous capture of the substantial cellular deformations and the consequent phase alterations induced by passive current flows (i.e., without the amplification of the electrical current)175,176 resulting from a 10-V, 1-μs pulse injected into the axon [Fig. 14(b)]. The reconstructed correlation curves of each segment of the FOV (labeled with numbers 1 to 8) reveal the microsecond-level phase changes induced by the propagating internodal current flow [Fig. 14(c)], whose conduction speed in myelinated axons was calculated to be 100±26 m/s. To date, Diff-CUP is the fastest imaging-based approach for assessing AP-related conduction.

Fig. 14

CUP of neuronal activities. (a) Schematic of Diff-CUP. Inset: Adhesion microscope slide for Diff-CUP imaging. BS, beam splitter; DG, delay generator; E(t), transient field stimulation; HWP, half-wave plate; LN, lithium niobate; OB, objective lens; PBS, polarizing beam splitter; PG, pulse generator; SC, streak camera. (b) Spatiotemporal interferogram of a propagating internodal current flow in a myelinated axon captured by Diff-CUP. (c) Reconstruction of the current flow signals based on the stimulus interferogram. Black dashed lines indicate the signal region of the internodal current flow. T, the propagation time of the internodal current flow within the FOV. (d) Schematic of compressed FLIM. (e) Six representative frames from the reconstruction of a cultured hippocampal neuron upon potassium stimulation at 100 fps. (f) Time-lapsed lifetime and intensity curves of a cultured hippocampal neuron. (g) Intensity (top panel) and lifetime (bottom panel) waveforms of neural spikes (black lines) and their means (green lines) for a cultured hippocampal neuron under stimulation. (a)–(c) Adapted with permission from Ref. 65. (d)–(g) Adapted with permission from Ref. 60.


CUP has also been implemented as a CS-based fluorescence lifetime imaging microscopy (FLIM) system60 to record high-resolution 2D lifetime images of immunofluorescently stained neurons [Fig. 14(d)]. With an imaging speed of 10×10⁹ fps, this CUP-based FLIM system captured the fluorescence intensity decay in real time, which produced a 2D lifetime map. Leveraging the intrinsic frame rate of the internal CMOS camera, lifetime maps were generated at 100 fps. This technique visualized neural spike dynamics via the decrease in fluorescence intensity and donor lifetime during Förster resonance energy transfer.177 Figure 14(e) illustrates six representative lifetime images of a cultured hippocampal neuron at 100 fps. The time courses of the averaged fluorescence intensity variation and lifetime of this sample over 1 s are plotted in Fig. 14(f). Finally, the hippocampal neuron’s fluorescence intensity and lifetime waveforms of single APs and their means [black lines and green lines in Fig. 14(g), respectively] were acquired experimentally. This analysis revealed that a single spiking event led to an average relative fluorescence intensity change (ΔF/F) of 2.9% and a lifetime change of 0.7 ns.

4.2.

Temperature Sensing

Temperature, as an important biomarker, is linked to many biological processes (e.g., metabolism178) and medical procedures (e.g., photothermal therapy179). Accurate and real-time temperature sensing is important to pathology diagnostics, physiology monitoring, and therapeutic efficacy. Photoluminescence thermometry is an emerging method that utilizes the temperature-sensitive optical emissions of photoluminescent materials as well as optical detection at high spatial resolution. Its merits include noncontact operation, adaptability to a broad temperature range, high accuracy, flexibility in sample selection, and suitability for diverse environments.180 Thus, photoluminescence thermometry is increasingly featured in recent advances in optical temperature measurements.

The success of photoluminescence thermometry depends on two essential constituents: temperature indicators and optical imaging instruments. Recent advances in biochemistry, materials science, and molecular biology have unveiled numerous labeling indicators for photoluminescence thermometry.162,181,182 From semiconductor quantum dots183 and organic fluorophores184 to rare-earth-doped phosphors,185 the diversity of these agents allows for tailored temperature sensing across different thermal sensitivities, optical properties, and response times for biomedical applications.186–189 For example, lanthanide-doped upconverting nanoparticles (UCNPs), which can sequentially absorb two (or more) low-energy near-infrared photons and convert them to one higher-energy photon, enable biocompatible temperature sensing with low excitation power densities and high sensitivity.190,191

CUP has enabled wide-field temperature mapping using the photoluminescence lifetimes of UCNPs.135 In the schematic shown in Fig. 15(a), near-infrared pulses, generated by a 980-nm continuous-wave (CW) laser and an optical chopper, are focused onto the back focal plane of an objective lens to form wide-field illumination. The excited UCNPs on the sample emit visible upconversion luminescence. After passing through the filter, the dynamic photoluminescence of a selected emission band is imaged by a rotating-mirror-based dual-view CUP system at 33,000 fps. The reconstructed lifetime images in the UCNPs’ two upconversion emission bands at different temperatures are shown in Fig. 15(b). The averaged intensity decays [Fig. 15(c)] enable the establishment of the temperature–lifetime relationship [Fig. 15(d)]. Furthermore, the system tracked the 2D temperature of a moving onion epidermis sample labeled by UCNPs at a rate of 20 lifetime maps per second [Fig. 15(e)]. The intensity decays of four selected areas [labeled in the top-left panel of Fig. 15(e)] are shown in Fig. 15(f). It is worth noting that, although the fluences of the four selected areas are different, the measured photoluminescence lifetimes remain stable, showing that the lifetime-based approach enabled by CUP is more reliable for accurate temperature sensing.
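
The lifetime extraction that underlies such maps can be sketched in Matlab as follows, assuming a single-exponential decay and using “t” (time axis of the reconstructed frames) and “decay” (FOV-averaged intensity after the excitation is switched off) as placeholder variables; the actual fitting procedure in Ref. 135 may differ in detail.

% Extract a photoluminescence lifetime from a reconstructed intensity decay (single-exponential model).
valid = decay > 0;                                % keep only positive samples for the logarithm
p     = polyfit(t(valid), log(decay(valid)), 1);  % log-linear fit: log I(t) = log I0 - t/tau
tau   = -1 / p(1);                                % extracted photoluminescence lifetime
% Repeating this fit pixel by pixel yields a 2D lifetime map, which the calibrated
% temperature-lifetime relationship [Fig. 15(d)] converts into a temperature map.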

Fig. 15

CUP of temperature sensing. (a) Schematic of wide-field photoluminescence lifetime thermometry based on a dual-view rotating-mirror CUP system. L1 to L5, lenses. (b) Lifetime maps of the two emission bands (i.e., ⁴S₃/₂→⁴I₁₅/₂ and ⁴F₉/₂→⁴I₁₅/₂) of the used UCNPs under different temperatures. (c) Normalized photoluminescence decays of the two emission bands after averaging over the FOV. (d) Temperature–lifetime relationship of both emission bands. (e) Selected time-unsheared views (top row) and reconstructed lifetime maps (bottom row) of a moving onion epidermis cell sample labeled by UCNPs. (f) Intensity decays at four selected areas with different intensities marked in (e). Adapted with permission from Ref. 135.


4.3.

Microfluidics

A rotating-mirror CUP system has been applied to the video recording of complex fluid dynamics and interactions at the microscale.148 A schematic of the rotating-mirror-based CUP system is shown in Fig. 16(a). This system observed flowing droplet samples within a microfluidic chip.149 Two immiscible liquids, a transparent oil and a chemical dye, injected through a motorized dispenser, flowed in the chip channels at 0.9 m/s. Three separate measurements were recorded at 3000, 50,000, and 120,000 fps [Fig. 16(b)]. These experimental results show a high reconstruction quality with well-preserved edge features in the frames and clearly distinguishable droplets flowing in the microfluidic chip. These results demonstrate CUP’s potential to visualize cell-shape changes in response to rapid external stimuli or internal dynamics in microfluidics,192–194 which will provide new insights into cellular biomechanical properties that are closely linked to cellular function and disease development.

Fig. 16

CUP of microfluidics. (a) Schematic of a rotating-mirror-based CUP system. (b) Snapshot of flowing immiscible liquids (i) and representative frames from the reconstructed videos at (ii) 3000, (iii) 50,000, and (iv) 120,000 fps. Adapted with permission from Ref. 148 and Ref. 149.


4.4.

Photoacoustic Imaging

CUP can also contribute to photoacoustic (PA) imaging. Figure 17 shows a simulation study on implementing CUP with optical interferometric detection of PA waves.195,196 In the proposed system schematically shown in Fig. 17(a), a pulsed laser illuminates a biological sample. The induced PA effect generates thermoelastic initial pressure, which is detected at the surface of the sample with a Fabry–Pérot etalon (FPE). This interaction of the ultrasonic waves with the surface of the FPE results in the modulation of the reflected CW laser beam on the opposite side of the FPE.197 The modulated CW laser beam is then imaged by a CUP system based on a DMD and a galvanometer scanner. Figure 17(b) shows a simulation of this method to image the initial pressure distribution of 12 vessel-like structures.

Fig. 17

CUP of photoacoustic imaging. (a) Proposed system schematic. CAM, camera; COL, collimator; CW, continuous-wave; DMD, digital micromirror device; FPE, Fabry–Pérot etalon; L, lens; λ/4, quarter wave plate; LP, linear polarizer; OI, optical isolator; PBS, polarizing beam splitter; SMF, single-mode fiber. (b) Simulation of image reconstruction of initial pressure distribution. Adapted with permission from Ref. 195.


5.

Prospect

CUP has greatly advanced ultrafast imaging instrumentation. Its generic sensing model indicates that the deployed components, rather than the theory, limit the system’s performance. Therefore, CUP has vast potential for further improvement of its imaging capability. In this section, we outline seven aspects of CUP’s future technical development.

5.1.

Faster

As currently the world’s fastest optical imaging technology, CUP naturally carries the mission to explore even higher speeds in optical imaging. Since CUP’s invention, innovation in temporal shearing units has been a focus of its technical improvement. From 100×10⁹ fps of the original CUP system,17 various image-converter streak tubes have been deployed to increase its imaging speed to 10×10¹² fps,114 which currently holds the world record for single-shot receive-only ultrafast imaging. However, at this speed, the image quality is considerably affected by the space-charge effect,160 posing challenges to the further improvement of frame rates. In the future, a transient perturbation of the refractive index induced by a temporally modulated ultrashort laser pulse or by molecular orientation could bring higher imaging speeds and circumvent image degradation.151,170

Leveraging advances in chirped-pulse illumination, CUP systems using passive temporal shearing units have boosted the imaging speed to 3.85×10¹² fps,121 70×10¹² fps,57 219×10¹² fps,198 and 256×10¹² fps.63 The last value marks the fastest speed in single-shot optical imaging. In the future, by synergizing ultra-broadband ultrashort pulses199 and photonic streaking in gas,170,200 CUP’s imaging speed could exceed one quadrillion fps, entering the attosecond-level imaging regime.

5.2.

Clearer

A higher spatial resolution allows CUP to visualize finer details. In biomedicine, this ability translates into informative depiction of cellular and tissue morphology, accurate diagnostics, and precise treatment. Nonetheless, in CUP’s operation, both the spatial encoding in data acquisition and the denoising in image reconstruction could reduce the effective system bandwidth. To visualize the targeted spatial details, a common practice is to magnify the scene, which unavoidably reduces the FOV. Thus, regaining the lost bandwidth to achieve a diffraction-limited spatial resolution is an important research direction for CUP. One potential approach is subpixel shifting.201 In particular, a DOE could be used to duplicate the dynamic scene into multiple bands, each of which would be encoded with the same encoding mask but with a different subpixel shift. A joint image reconstruction using all the captured snapshots could then recover the original optical bandwidth.

Another interesting research direction is super-resolution CUP. As many ultrafast phenomena also occur at the nanoscale, overcoming the diffraction limit in the CUP system will likely open avenues for studies not possible before, including temperature dynamics in mitochondria,202 conformational transitions of proteins,203 and the evolution of membrane fragments produced by cellular lysis.204 Toward this goal, CUP can be incorporated into existing super-resolution microscopy techniques (e.g., structured illumination microscopy205) or bypass the optical diffraction limit via electron imaging (e.g., transmission electron microscopy117).

5.3.

Broader Spectrum

Although CUP has been experimentally demonstrated in the ultraviolet, visible, and near-infrared spectral ranges, extending its imaging capability to a broader spectrum will likely continue in future application-driven development. Toward this goal, the spatial encoder should have high contrast in the desired spectrum. Besides the popular broadband metallic masks made from aluminum, silver, or chromium,17,91,147 photonic crystals with broad tunable bandgaps can selectively block specific wavelengths,206 giving them the potential to be used as spatial encoders in certain spectra. For temporal shearing units, the photocathode in the image-converter streak tube precludes high-sensitivity imaging at wavelengths of >950 nm. In contrast, leveraging their all-optical functionality, rotating mirrors can fill this gap, which will likely lead to the development of CUP for the deep-ultraviolet, mid-infrared, and far-infrared spectra. Moreover, advanced design and fabrication of metasurfaces and metalenses could potentially extend CUP to a spectrum ranging from the extreme ultraviolet to the terahertz.207,208

5.4.

Smarter

Many deep-learning-based approaches have been used in CUP’s image reconstruction.94–104 Harnessing the power of artificial intelligence, they have unlocked new capabilities for analyzing ultrafast events, such as real-time data processing, on-device analysis, and on-time feedback. It is expected that these deep-learning algorithms will provide multifaceted supervision to CUP systems in the future. For example, next-generation systems could autonomously adjust the patterns loaded onto the spatial encoder according to an initial classification of the dynamic scene. These systems could also monitor the nonlinear shearing operation and adaptively compensate for it in image reconstruction or system alignment.104

5.5.

Higher Dimensions

Recent developments in CUP have explored high-dimensional ultrafast imaging. To date, several advanced systems—such as multispectral CUP,120 stereo-polarimetric CUP,57 and spectral-volumetric CUP62—have pushed the overall sensing capability to four and even five dimensions. In the future, by extending the configuration used in stereo-polarimetric CUP57 to generate multiple perspectives of the dynamic scene, light-field imaging could be incorporated into CUP. Ultimately, single-shot imaging of the seven-dimensional plenoptic function would be within reach. Using CUP to sense other photon tags that are not included in the conventionally defined plenoptic function is also a future direction. CUP has already enabled amplitude and phase imaging of a femtosecond laser pulse.63 CUP could also be combined with other existing technologies, such as the transport-of-intensity equation209 and coherent modulation,210 for ultrafast quantitative phase imaging. Finally, recent advancements in on-chip polarization imaging and metasurface-based angular momentum separation could incorporate these parameters into CUP’s measurement scope.211,212

5.6.

Smaller

CUP systems with a compact size are important to studies with restricted weight and space budgets. In biomedicine, compactness would allow the system to be mounted, in the same way as conventional cameras, on microscopes and hand-held devices as well as in operating rooms. An innovative optical design that folds the optical path could reduce the system size.213 Selecting a multifunctional component (e.g., a metalens or a TDI sensor) provides another approach to reducing the number of optical elements in CUP systems. Advances in sensor design and nanofabrication could provide the streak imaging sensor167,168 with a 2D FOV. All of these efforts will contribute to engineering compact and even miniature CUP systems in the future.

5.7.

Cheaper

Besides reducing the size of a CUP system, making CUP economical carries considerable value from both research and commercialization perspectives. To manufacture a fixed spatial encoder, high-resolution printing offers the lowest cost (i.e., <$20 per square inch). For reconfigurable spatial encoders, a 0.47″ DMD chip (1920×1080 micromirrors; 5.4-μm pitch) costs ∼$120.214 Future work could develop a DMD controller specifically tailored for CUP, with much-reduced functionality compared with existing controllers, to decrease the cost. For the temporal shearing unit, the first approach to reduce the cost is to find a replacement for expensive image-converter streak tubes. Electro-optic deflectors have made their debut in this direction,122 producing an imaging speed of 50×10⁹ fps. For a rotating-mirror CUP system, a viable strategy is to add the spatial encoder and an affordable rotating mirror (e.g., a galvanometer scanner or a polygonal mirror) in front of an existing CCD/CMOS camera. It is envisaged that a minor addition in cost could endow existing cameras with ultrahigh-speed imaging while retaining their inherent advantages (e.g., in sensitivity and sensing spectrum).

6.

Conclusions

In this tutorial, we have elucidated the fundamentals of CUP. We have provided Matlab codes that create CUP’s sensing matrices and simulate the acquired snapshots based on the forward model. Matlab/Python codes and examples are also included for two representative reconstruction algorithms—one based on analytical modeling using the ADMM and the other on deep learning using the D-HAN. To facilitate comprehension, a “cell-division” scene is simulated step by step alongside the provided codes. A fully operational CUP system relies on three essential hardware components: a spatial encoder, a temporal shearing unit, and a 2D sensor. We have surveyed representative implementations of each component as well as calibration steps in both data acquisition and image reconstruction.

Ever since its invention, CUP has stayed in the research spotlight as an emerging and innovative imaging platform. Its evolution has been shaped by the innovation of imaging strategies and the adaptive optimization of its key components, leading to its widespread implementation in various biomedical applications. CUP—as currently the world’s fastest single-shot optical imaging modality—is positioned for future advancements in imaging speed, spatial resolution, sensing spectrum, artificial intelligence, imaging dimensionality, system size, and manufacturing cost. CUP is anticipated to make further remarkable progress in biomedicine.

Disclosures

The authors have no conflicts of interest to declare.

Code and Data Availability

All data and software in support of this work are available in the manuscript and can be downloaded from Refs. 68, 92, 111, and 112.

Acknowledgments

The authors sincerely thank Dr. Lihong V. Wang for his guidance, advice, and support of our entire body of work in CUP. He is a living legend and a true mentor, either directly or indirectly, to us and everyone working in the field of optical imaging and biomedical optics. The authors also thank Hanzi Liu and Christian-Yves Côté for preparing Figs. 7(b) and 7(d). This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (Grant Nos. RGPIN-2017-05959, RGPAS-2017-507845, and I2IPJ-555593-20), the Canada Research Chairs Program (Grant No. CRC-2022-00119), the Canada Foundation for Innovation and Ministère de l’Économie et de l’Innovation du Québec (Grant No. 37146), the Canadian Cancer Society (Grant No. 707056); the New Frontiers in Research Fund (Grant No. NFRFE-2020-00267), the Fonds de Recherche du Québec–Nature et Technologies (Grant Nos. 203345–Centre d’Optique, Photonique, et Lasers, PBEEE-2023-2024-V1-334852).

References

1. 

J. Liang, L. V. Wang, “Ultrafast optical imaging,” Handbook of Laser Technology Applications, 315–328, CRC Press (2021). Google Scholar

2. 

H. Roder, “Stepwise helix formation and chain compaction during protein folding,” Proc. Natl. Acad. Sci., 101 (7), 1793 –1794 https://doi.org/10.1073/pnas.0308172101 (2004). Google Scholar

3. 

D. N. Ku, “Blood flow in arteries,” Annu. Rev. Fluid Mech., 29 (1), 399 –434 https://doi.org/10.1146/annurev.fluid.29.1.399 ARVFA3 0066-4189 (1997). Google Scholar

4. 

C. A. Day and A. K. Kenworthy, “Tracking microdomain dynamics in cell membranes,” Biochim. Biophys. Acta Biomembr., 1788 (1), 245 –253 https://doi.org/10.1016/j.bbamem.2008.10.024 BBBMBS 0005-2736 (2009). Google Scholar

5. 

H. Astacio, A. Vasin and M. Bykhovskaia, “Stochastic properties of spontaneous synaptic transmission at individual active zones,” J. Neurosci., 42 (6), 1001 –1019 https://doi.org/10.1523/JNEUROSCI.1162-21.2021 JNRSDS 0270-6474 (2022). Google Scholar

6. 

F. Vetrone et al., “Temperature sensing using fluorescent nanothermometers,” ACS Nano, 4 (6), 3254 –3258 https://doi.org/10.1021/nn100244a ANCAC3 1936-0851 (2010). Google Scholar

7. 

Y. Liu et al., “Optical focusing deep inside dynamic scattering media with near-infrared time-reversed ultrasonically encoded (TRUE) light,” Nat. Commun., 6 (1), 5904 https://doi.org/10.1038/ncomms6904 NCAOBW 2041-1723 (2015). Google Scholar

8. 

J. Liang and L. V. Wang, “Single-shot ultrafast optical imaging,” Optica, 5 (9), 1113 –1127 https://doi.org/10.1364/OPTICA.5.001113 (2018). Google Scholar

9. 

H. Mikami, L. Gao and K. Goda, “Ultrafast optical imaging technology: principles and applications of emerging methods,” Nanophotonics, 5 497 –509 https://doi.org/10.1515/nanoph-2016-0026 (2016). Google Scholar

10. 

Q. Miao et al., “Molecular afterglow imaging with bright, biodegradable polymer nanoparticles,” Nat. Biotechnol., 35 (11), 1102 –1110 https://doi.org/10.1038/nbt.3987 NABIF9 1087-0156 (2017). Google Scholar

11. 

P. S. May and M. Berry, “Tutorial on the acquisition, analysis, and interpretation of upconversion luminescence data,” Method. Appl. Fluoresc., 7 (2), 023001 https://doi.org/10.1088/2050-6120/ab02c6 (2019). Google Scholar

12. 

R. Kodama et al., “Fast heating of ultrahigh-density plasma as a step towards laser fusion ignition,” Nature, 412 (6849), 798 –802 https://doi.org/10.1038/35090525 (2001). Google Scholar

13. 

H. Zhang et al., “Efficient optical Kerr gate of tellurite glass for acquiring ultrafast fluorescence,” J. Opt., 14 (6), 065201 https://doi.org/10.1088/2040-8978/14/6/065201 (2012). Google Scholar

14. 

J. Wang et al., “Beam parameters measurement with a streak camera in HLS,” in Proc. of PAC09 (2009), Google Scholar

15. 

T. G. Etoh et al., “A 16 Mfps 165 kpixel backside-illuminated CCD,” in IEEE Int. Solid-State Circuits Conf., 406 –408 (2011). https://doi.org/10.1109/ISSCC.2011.5746372 Google Scholar

16. 

D. Qi et al., “Single-shot compressed ultrafast photography: a review,” Adv. Photonics, 2 (1), 014003 https://doi.org/10.1117/1.AP.2.1.014003 AOPAC7 1943-8206 (2020). Google Scholar

17. 

L. Gao et al., “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature, 516 (7529), 74 –77 https://doi.org/10.1038/nature14005 (2014). Google Scholar

18. 

J. Liang, “Punching holes in light: recent progress in single-shot coded-aperture optical imaging,” Rep. Prog. Phys., 83 (11), 116101 https://doi.org/10.1088/1361-6633/abaf43 RPPHAG 0034-4885 (2020). Google Scholar

19. 

J. N. Mait, G. W. Euliss and R. A. Athale, “Computational imaging,” Adv. Opt. Photonics, 10 (2), 409 –483 https://doi.org/10.1364/AOP.10.000409 AOPAC7 1943-8206 (2018). Google Scholar

20. 

J. Hunt et al., “Metamaterial apertures for computational imaging,” Science, 339 (6117), 310 –313 https://doi.org/10.1126/science.1230054 SCIEAS 0036-8075 (2013). Google Scholar

21. 

W. Oh et al., “High-speed polarization sensitive optical frequency domain imaging with frequency multiplexing,” Opt. Express, 16 (2), 1096 –1103 https://doi.org/10.1364/OE.16.001096 OPEXFF 1094-4087 (2008). Google Scholar

22. 

A. Ehn et al., “FRAME: femtosecond videography for atomic and molecular dynamics,” Light Sci. Appl., 6 (9), e17045 https://doi.org/10.1038/lsa.2017.45 (2017). Google Scholar

23. 

H. Li et al., “Investigation of single-shot high-speed photography based on spatial frequency multiplexing,” J. Opt. Soc. Am. A, 40 (3), 521 –529 https://doi.org/10.1364/JOSAA.480778 JOAOD6 0740-3232 (2023). Google Scholar

24. 

Q. Yue et al., “One-shot time-resolved holographic polarization microscopy for imaging laser-induced ultrafast phenomena,” Opt. Express, 25 (13), 14182 –14191 https://doi.org/10.1364/OE.25.014182 OPEXFF 1094-4087 (2017). Google Scholar

25. 

K. Nakagawa et al., “Sequentially timed all-optical mapping photography (STAMP),” Nat. Photonics, 8 (9), 695 –700 https://doi.org/10.1038/nphoton.2014.163 NPAHBY 1749-4885 (2014). Google Scholar

26. 

T. Suzuki et al., “Sequentially timed all-optical mapping photography (STAMP) utilizing spectral filtering,” Opt. Express, 23 (23), 30512 –30522 https://doi.org/10.1364/OE.23.030512 OPEXFF 1094-4087 (2015). Google Scholar

27. 

M. Touil et al., “Acousto-optically driven lensless single-shot ultrafast optical imaging,” Light Sci. Appl., 11 (1), 66 https://doi.org/10.1038/s41377-022-00759-y (2022). Google Scholar

28. 

G. Gao et al., “Ultrafast all-optical solid-state framing camera with picosecond temporal resolution,” Opt. Express, 25 (8), 8721 –8729 https://doi.org/10.1364/OE.25.008721 OPEXFF 1094-4087 (2017). Google Scholar

29. 

X. Zeng et al., “High-spatial-resolution ultrafast framing imaging at 15 trillion frames per second by optical parametric amplification,” Adv. Photonics, 2 (5), 056002 https://doi.org/10.1117/1.AP.2.5.056002 AOPAC7 1943-8206 (2020). Google Scholar

30. 

X. Zeng et al., “Review and prospect of single-shot ultrafast optical imaging by active detection,” Ultrafast Sci., 3 0020 https://doi.org/10.34133/ultrafastscience.0020 (2023). Google Scholar

31. 

S. S. Harilal et al., “Plume splitting and sharpening in laser-produced aluminium plasma,” J. Phys. D Appl. Phys., 35 (22), 2935 https://doi.org/10.1088/0022-3727/35/22/307 (2002). Google Scholar

32. 

Y. Fang et al., “A four-channel ICCD framing camera with nanosecond temporal resolution and high spatial resolution,” J. Mod. Opt., 68 (13), 661 –669 https://doi.org/10.1080/09500340.2021.1937735 JMOPEW 0950-0340 (2021). Google Scholar

33. 

L. Cester et al., “Time-of-flight imaging at 10 ps resolution with an ICCD camera,” Sensors, 19 (1), 180 https://doi.org/10.3390/s19010180 SNSRES 0746-9462 (2019). Google Scholar

34. 

X. Liu et al., “Diffraction-gated real-time ultrahigh-speed mapping photography,” Optica, 10 (9), 1223 –1230 https://doi.org/10.1364/OPTICA.495041 (2023). Google Scholar

35. 

Y. Tsuchiya and Y. Shinoda, “Recent developments of streak cameras,” Proc. SPIE, 0533 110 –116 https://doi.org/10.1117/12.946548 PSISDG 0277-786X (1985). Google Scholar

36. 

T. G. Etoh et al., “The theoretical highest frame rate of silicon image sensors,” Sensors, 17 (3), 483 https://doi.org/10.3390/s17030483 SNSRES 0746-9462 (2017). Google Scholar

37. 

X. Liu et al., “Single-shot compressed optical-streaking ultra-high-speed photography,” Opt. Lett., 44 (6), 1387 –1390 https://doi.org/10.1364/OL.44.001387 OPLEDP 0146-9592 (2019). Google Scholar

38. 

R. M. Willett, R. F. Marcia and J. M. Nichols, “Compressed sensing for practical optical imaging systems: a tutorial,” Opt. Eng., 50 (7), 072601 https://doi.org/10.1117/1.3596602 (2011). Google Scholar

39. 

C. Zuo et al., “Deep learning in optical metrology: a review,” Light Sci. Appl., 11 (1), 39 https://doi.org/10.1038/s41377-022-00714-x (2022). Google Scholar

40. 

M. A. Alonso, “Wigner functions in optics: describing beams as ray bundles and pulses as particle ensembles,” Adv. Opt. Photonics, 3 (4), 272 –365 https://doi.org/10.1364/AOP.3.000272 AOPAC7 1943-8206 (2011). Google Scholar

41. 

N. Hagen et al., “Snapshot advantage: a review of the light collection improvement for parallel high-dimensional measurement systems,” Opt. Eng., 51 (11), 111702 https://doi.org/10.1117/1.OE.51.11.111702 (2012). Google Scholar

42. 

J. Yao et al., “Exploring femtosecond laser ablation by snapshot ultrafast imaging and molecular dynamics simulation,” Ultrafast Sci., 2022 9754131 https://doi.org/10.34133/2022/9754131 (2022). Google Scholar

43. 

P. Wang, L. V. Wang, “Compressed ultrafast photography,” Coded Optical Imaging, Springer Nature (2023). Google Scholar

44. 

D. Faccio and A. Velten, “A trillion frames per second: the techniques and applications of light-in-flight photography,” Rep. Prog. Phys., 81 (10), 105901 https://doi.org/10.1088/1361-6633/aacca1 RPPHAG 0034-4885 (2018). Google Scholar

45. 

M. Garcia-Lechuga, J. Solis, J. Siegel, “Probing matter by light,” Ultrafast Laser Nanostructuring: The Pursuit of Extreme Scales, 277–319, Springer International Publishing, Cham (2023). Google Scholar

46. 

F. S. Oktem, L. Gao, F. Kamalabadi, “Computational spectral and ultrafast imaging via convex optimization,” Handbook of Convex Optimization Methods in Imaging Science, 105 –127 Springer International Publishing, Cham (2018). Google Scholar

47. 

L. Gao and L. V. Wang, “A review of snapshot multidimensional optical imaging: measuring photon tags in parallel,” Phys. Rep., 616 1 –37 https://doi.org/10.1016/j.physrep.2015.12.004 PRPLCM 0370-1573 (2016). Google Scholar

48. 

K. Uchiyama et al., “Various ultra-high-speed imaging and applications by Streak camera,” (2016). Google Scholar

49. 

P. Llull et al., “Coded aperture compressive temporal imaging,” Opt. Express, 21 (9), 10526 –10545 https://doi.org/10.1364/OE.21.010526 OPEXFF 1094-4087 (2013). Google Scholar

50. 

Y. Sun, X. Yuan and S. Pang, “Compressive high-speed stereo imaging,” Opt. Express, 25 (15), 18182 –18190 https://doi.org/10.1364/OE.25.018182 OPEXFF 1094-4087 (2017). Google Scholar

51. 

M. Qiao, X. Liu and X. Yuan, “Snapshot spatial–temporal compressive imaging,” Opt. Lett., 45 (7), 1659 –1662 https://doi.org/10.1364/OL.386238 OPLEDP 0146-9592 (2020). Google Scholar

52. 

X. Yuan, Y. Sun and S. Pang, “Compressive video sensing with side information,” Appl. Opt., 56 (10), 2697 –2704 https://doi.org/10.1364/AO.56.002697 APOPAI 0003-6935 (2017). Google Scholar

53. 

L. Wang et al., “Spatial-temporal transformer for video snapshot compressive imaging,” IEEE Trans. Pattern Anal. Mach. Intell., 45 (7), 9072 –9089 https://doi.org/10.1109/TPAMI.2022.3225382 ITPIDJ 0162-8828 (2023). Google Scholar

54. 

X. Yuan et al., “Plug-and-play algorithms for video snapshot compressive imaging,” IEEE Trans. Pattern Anal. Mach. Intell., 44 (10), 7093 –7111 https://doi.org/10.1109/TPAMI.2021.3099035 ITPIDJ 0162-8828 (2022). Google Scholar

55. 

L. Zhu et al., “Space- and intensity-constrained reconstruction for compressed ultrafast photography,” Optica, 3 (7), 694 –697 https://doi.org/10.1364/OPTICA.3.000694 (2016). Google Scholar

56. 

H. Gao et al., “A simple yet effective AIE-based fluorescent nano-thermometer for temperature mapping in living cells using fluorescence lifetime imaging microscopy,” Nanoscale Horiz., 5 (3), 488 –494 https://doi.org/10.1039/C9NH00693A (2020). Google Scholar

57. 

J. Liang et al., “Single-shot stereo-polarimetric compressed ultrafast photography for light-speed observation of high-dimensional optical transients with picosecond resolution,” Nat. Commun., 11 (1), 5252 https://doi.org/10.1038/s41467-020-19065-5 NCAOBW 2041-1723 (2020). Google Scholar

58. 

J. Liang et al., “Single-shot real-time video recording of a photonic Mach cone induced by a scattered light pulse,” Sci. Adv., 3 (1), e1601814 https://doi.org/10.1126/sciadv.1601814 STAMCV 1468-6996 (2017). Google Scholar

59. 

P. Wang, J. Liang and L. V. Wang, “Single-shot ultrafast imaging attaining 70 trillion frames per second,” Nat. Commun., 11 (1), 2091 https://doi.org/10.1038/s41467-020-15745-4 NCAOBW 2041-1723 (2020). Google Scholar

60. 

Y. Ma et al., “High-speed compressed-sensing fluorescence lifetime imaging microscopy of live cells,” Proc. Natl. Acad. Sci., 118 (3), e2004176118 https://doi.org/10.1073/pnas.2004176118 (2021). Google Scholar

61. 

T. Kim et al., “Picosecond-resolution phase-sensitive imaging of transparent objects in a single shot,” Sci. Adv., 6 (3), eaay6200 https://doi.org/10.1126/sciadv.aay6200 STAMCV 1468-6996 (2020). Google Scholar

62. 

P. Ding et al., “Single-shot spectral-volumetric compressed ultrafast photography,” Adv. Photonics, 3 (4), 045001 https://doi.org/10.1117/1.AP.3.4.045001 AOPAC7 1943-8206 (2021). Google Scholar

63. 

H. Tang et al., “Single-shot compressed optical field topography,” Light Sci. Appl., 11 (1), 244 https://doi.org/10.1038/s41377-022-00935-0 (2022). Google Scholar

64. 

J. Liang et al., “Encrypted three-dimensional dynamic imaging using snapshot time-of-flight compressed ultrafast photography,” Sci. Rep., 5 (1), 15504 https://doi.org/10.1038/srep15504 SRCEC3 2045-2322 (2015). Google Scholar

65. 

Y. Zhang et al., “Ultrafast and hypersensitive phase imaging of propagating internodal current flows in myelinated axons and electromagnetic pulses in dielectrics,” Nat. Commun., 13 (1), 5247 https://doi.org/10.1038/s41467-022-33002-8 NCAOBW 2041-1723 (2022). Google Scholar

66. 

C. Yang et al., “Compressed ultrafast photography by multi-encoding imaging,” Laser Phys. Lett., 15 (11), 116202 https://doi.org/10.1088/1612-202X/aae198 1612-2011 (2018). Google Scholar

67. 

M. Cicconet et al., “Label free cell-tracking and division detection based on 2D time-lapse images for lineage analysis of early embryo development,” Comput. Biol. Med., 51 24 –34 https://doi.org/10.1016/j.compbiomed.2014.04.011 CBMDAW 0010-4825 (2014). Google Scholar

68. 

M. Marquez, Y. Lai and J. Liang, “CUP-Tutorial_Data_1,” https://drive.google.com/file/d/1aAD4DfSimEth25aSoDQkVIpNQvPjPXvh/view?usp=drive_link (2023). Google Scholar

69. 

S. H. Chan, X. Wang and O. A. Elgendy, “Plug-and-play ADMM for image restoration: fixed-point convergence and applications,” IEEE Trans. Comput. Imaging, 3 (1), 84 –98 https://doi.org/10.1109/TCI.2016.2629286 (2016). Google Scholar

70. 

S. Sreehari et al., “Plug-and-play priors for bright field electron tomography and sparse interpolation,” IEEE Trans. Comput. Imaging, 2 (4), 408 –423 https://doi.org/10.1109/TCI.2016.2599778 (2016). Google Scholar

71. 

A. Gnanasambandam et al., “Megapixel photon-counting color imaging using quanta image sensor,” Opt. Express, 27 (12), 17298 –17310 https://doi.org/10.1364/OE.27.017298 OPEXFF 1094-4087 (2019). Google Scholar

72. 

Y. Sun et al., “Scalable plug-and-play ADMM with convergence guarantees,” IEEE Trans. Comput. Imaging, 7 849 –863 https://doi.org/10.1109/TCI.2021.3094062 (2021). Google Scholar

73. 

A. M. Teodoro, J. M. Bioucas-Dias and M. A. Figueiredo, “A convergent image fusion algorithm using scene-adapted Gaussian-mixture-based denoising,” IEEE Trans. Image Process., 28 (1), 451 –463 https://doi.org/10.1109/TIP.2018.2869727 IIPRE4 1057-7149 (2018). Google Scholar

74. 

W. Dong et al., “Model-guided deep hyperspectral image super-resolution,” IEEE Trans. Image Process., 30 5754 –5768 https://doi.org/10.1109/TIP.2021.3078058 IIPRE4 1057-7149 (2021). Google Scholar

75. 

M. A. T. Figueiredo, R. D. Nowak and S. J. Wright, “Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems,” IEEE J. Sel. Top. Signal Process., 1 (4), 586 –597 https://doi.org/10.1109/JSTSP.2007.910281 (2007). Google Scholar

76. 

S. J. Wright, R. D. Nowak and M. A. T. Figueiredo, “Sparse reconstruction by separable approximation,” IEEE Trans. Signal Process., 57 (7), 2479 –2493 https://doi.org/10.1109/TSP.2009.2016892 ITPRED 1053-587X (2009). Google Scholar

77. 

M. V. Afonso, J. M. Bioucas-Dias and M. A. Figueiredo, “An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems,” IEEE Trans. Image Process., 20 (3), 681 –695 https://doi.org/10.1109/TIP.2010.2076294 IIPRE4 1057-7149 (2010). Google Scholar

78. 

J. M. Bioucas-Dias and M. A. Figueiredo, “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Process., 16 (12), 2992 –3004 https://doi.org/10.1109/TIP.2007.909319 IIPRE4 1057-7149 (2007). Google Scholar

79. 

L. I. Rudin, S. Osher and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Phys. D, 60 (1), 259 –268 https://doi.org/10.1016/0167-2789(92)90242-F (1992). Google Scholar

80. 

M. Marquez et al., “Snapshot compressive spectral depth imaging from coded aberrations,” Opt. Express, 29 (6), 8142 –8159 https://doi.org/10.1364/OE.415664 OPEXFF 1094-4087 (2021). Google Scholar

81. 

M. Marquez et al., “Compressive spectral imaging via deformable mirror and colored-mosaic detector,” Opt. Express, 27 (13), 17795 –17808 https://doi.org/10.1364/OE.27.017795 OPEXFF 1094-4087 (2019). Google Scholar

82. 

M. Marquez, H. Rueda-Chacon and H. Arguello, “Compressive spectral light field image reconstruction via online tensor representation,” IEEE Trans. Image Process., 29 3558 –3568 https://doi.org/10.1109/TIP.2019.2963376 IIPRE4 1057-7149 (2020). Google Scholar

83. 

X. Yuan et al., “Plug-and-play algorithms for large-scale snapshot compressive imaging,” in Proc. IEEE/CVF Conf. Comput. Vis. and Pattern Recognit., 1447 –1457 (2020). https://doi.org/10.1109/CVPR42600.2020.00152 Google Scholar

84. 

S. Boyd et al., “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn., 3 (1), 1 –122 https://doi.org/10.1561/2200000016 (2011). Google Scholar

85. 

D. Knowles, “Lagrangian duality for dummies,” (2010). https://www-cs.stanford.edu/~davidknowles/lagrangian_duality.pdf (23 December 2023). Google Scholar

86. 

S. P. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press (2004). Google Scholar

87. 

N. Parikh and S. Boyd, “Proximal algorithms,” Found. Trends Optim., 1 (3), 127 –239 https://doi.org/10.1561/2400000003 (2014). Google Scholar

88. 

S. H. Chan, “Performance analysis of plug-and-play ADMM: a graph signal processing perspective,” IEEE Trans. Comput. Imaging, 5 (2), 274 –286 https://doi.org/10.1109/TCI.2019.2892123 (2019). Google Scholar

89. 

E. T. Reehorst and P. Schniter, “Regularization by denoising: clarifications and new interpretations,” IEEE Trans. Comput. Imaging, 5 (1), 52 –67 https://doi.org/10.1109/TCI.2018.2880326 (2018). Google Scholar

90. 

K. Dabov et al., “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. Image Process., 16 (8), 2080 –2095 https://doi.org/10.1109/TIP.2007.901238 IIPRE4 1057-7149 (2007). Google Scholar

91. 

Y. Lai et al., “Single-shot ultraviolet compressed ultrafast photography,” Laser Photonics Rev., 14 (10), 2000122 https://doi.org/10.1002/lpor.202000122 (2020). Google Scholar

92. 

M. Marquez, Y. Lai and J. Liang, “CUP-Tutorial_Data_2,” (2023). https://drive.google.com/drive/folders/1lc6W9zg79T2dHSjyxC0oYTt8nuyCNeqS?usp=drive_link (23 December 2023). Google Scholar

93. 

I. Lemhadri et al., “LassoNet: a neural network with feature sparsity,” J. Mach. Learn. Res., 22 (127), 5633 –5661 (2021). Google Scholar

94. 

M. Qiao et al., “Deep learning for video compressive sensing,” APL Photonics, 5 (3), 030801 https://doi.org/10.1063/1.5140721 (2020). Google Scholar

95. 

A. Zhang et al., “Single-shot compressed ultrafast photography based on U-net network,” Opt. Express, 28 (26), 39299 –39310 https://doi.org/10.1364/OE.398083 OPEXFF 1094-4087 (2020). Google Scholar

96. 

C. Yang et al., “High-fidelity image reconstruction for compressed ultrafast photography via an augmented-Lagrangian and deep-learning hybrid algorithm,” Photonics Res., 9 (2), B30 –B37 https://doi.org/10.1364/PRJ.410018 (2021). Google Scholar

97. 

X. Liu et al., “Single-shot real-time compressed ultrahigh-speed imaging enabled by a snapshot-to-video autoencoder,” Photonics Res., 9 (12), 2464 –2474 https://doi.org/10.1364/PRJ.422179 (2021). Google Scholar

98. 

Q. Shen, J. Tian and C. Pei, “A novel reconstruction algorithm with high performance for compressed ultrafast imaging,” Sensors, 22 (19), 7372 https://doi.org/10.3390/s22197372 SNSRES 0746-9462 (2022). Google Scholar

99. 

C. Yang et al., “Improving the image reconstruction quality of compressed ultrafast photography via an augmented Lagrangian algorithm,” J. Opt., 21 (3), 035703 https://doi.org/10.1088/2040-8986/ab00d9 (2019). Google Scholar

100. 

Z. Kaitao et al., “CUP-VISAR image reconstruction based on low-rank prior and total-variation regularization,” High Power Laser Part. Beams, 35 (7), 072002 https://doi.org/10.11884/HPLPB202335.230011 QYLIEL 1001-4322 (2023). Google Scholar

101. 

X. Wang et al., “Research of CUP-VISAR velocity reconstruction based on weighted DRUNet and total variation joint optimization,” Opt. Lett., 48 (20), 5181 –5184 https://doi.org/10.1364/OL.498607 OPLEDP 0146-9592 (2023). Google Scholar

102. 

Y. Ma, X. Feng and L. Gao, “Deep-learning-based image reconstruction for compressed ultrafast photography,” Opt. Lett., 45 (16), 4400 –4403 https://doi.org/10.1364/OL.397717 OPLEDP 0146-9592 (2020). Google Scholar

103. 

Y. He et al., “High-speed compressive wide-field fluorescence microscopy with an alternant deep denoisers-based image reconstruction algorithm,” Opt. Lasers Eng., 165 107541 https://doi.org/10.1016/j.optlaseng.2023.107541 (2023). Google Scholar

104. 

M. Marquez et al., “Deep-learning supervised snapshot compressive imaging enabled by an end-to-end adaptive neural network,” IEEE J. Sel. Top. Signal Process., 16 (4), 688 –699 https://doi.org/10.1109/JSTSP.2022.3172592 (2022). Google Scholar

105. 

O. Ronneberger, P. Fischer and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” Lect. Notes Comput. Sci., 9351 234 –241 https://doi.org/10.1007/978-3-319-24574-4_28 LNCSD9 0302-9743 (2015). Google Scholar

106. 

W. W. Hager, “Updating the inverse of a matrix,” SIAM Rev., 31 (2), 221 –239 https://doi.org/10.1137/1031049 SIREAD 0036-1445 (1989). Google Scholar

107. 

H. Zhao et al., “Loss functions for image restoration with neural networks,” IEEE Trans. Comput. Imaging, 3 (1), 47 –57 https://doi.org/10.1109/TCI.2016.2644865 (2016). Google Scholar

108. 

M. Gygli et al., “Creating summaries from user videos,” Lect. Notes Comput. Sci., 8695 505 –520 https://doi.org/10.1007/978-3-319-10584-0_33 LNCSD9 0302-9743 (2014). Google Scholar

109. 

H. Kiani Galoogahi et al., “Need for speed: a benchmark for higher frame rate object tracking,” in Proc. IEEE Int. Conf. Comput. Vis., 1125 –1134 (2017). https://doi.org/10.1109/ICCV.2017.128 Google Scholar

110. 

S. M. Safdarnejad et al., “Sports videos in the wild (SVW): a video dataset for sports analysis,” in 11th IEEE Int. Conf. and Workshops on Autom. Face and Gesture Recognit. (FG), 1 –7 (2015). https://doi.org/10.1109/FG.2015.7163105 Google Scholar

111. 

M. Marquez, Y. Lai and J. Liang, “CUP-Tutorial_Data_3,” (2023). https://drive.google.com/drive/folders/1-RL-8EKG-evQ1CJoQD9GS4N_BzWGTBj9?usp=drive_link (23 December 2023). Google Scholar

112. 

M. Marquez, Y. Lai and J. Liang, “CUP-Tutorial_Data_4,” (2023). https://drive.google.com/drive/folders/1Bt1yldG9Bik3C8I56a8nOxyyzlHkWIMu?usp=drive_link (23 December 2023). Google Scholar

113. 

Biography

Yingming Lai received his BSc degree in optoelectronics from the Southern University of Science and Technology, China, in 2019, and his MSc degree in energy and materials science from Institut National de la Recherche Scientifique (INRS)–Université du Québec, Canada, in 2021. Currently, he is a PhD candidate in the Laboratory of Applied Computational Imaging at INRS. His main research areas are computational imaging, compressive sensing, and ultrafast optical imaging.

Miguel Marquez received his BSc degree in computer science, his MSc degree in applied mathematics, and his PhD in physics from the Universidad Industrial de Santander, Colombia, in 2015, 2018, and 2022, respectively. He is currently a postdoctoral fellow in the Laboratory of Applied Computational Imaging at the Institut National de la Recherche Scientifique (INRS)–Université du Québec, Canada. His main research interests include optical and computational imaging, compressive sensing, high-dimensional signal processing, and optimization algorithms.

Jinyang Liang is an associate professor at the Institut National de la Recherche Scientifique (INRS)–Université du Québec, Canada, where he directs the Laboratory of Applied Computational Imaging. He holds the Canada Research Chair in Ultrafast Computational Imaging (Tier II). His research interests include ultrafast imaging, computational optics, optical physics, and biophotonics. He received his PhD in electrical engineering from the University of Texas at Austin in 2012. From 2012 to 2017, he was a postdoctoral trainee at Washington University in St. Louis and the California Institute of Technology under the supervision of Dr. Lihong V. Wang.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Yingming Lai, Miguel Marquez, and Jinyang Liang "Tutorial on compressed ultrafast photography," Journal of Biomedical Optics 29(S1), S11524 (30 January 2024). https://doi.org/10.1117/1.JBO.29.S1.S11524
Received: 22 September 2023; Accepted: 28 December 2023; Published: 30 January 2024
KEYWORDS
Image restoration, Matrices, Ultrafast phenomena, Photography, Ultrafast imaging, Computer programming, Reconstruction algorithms
