Principle

Here, we establish the mathematical foundation of our approach by deriving the analogy between a dynamic scattering medium and dynamic illumination (Fig. 1).

Fig. 1: Noninvasive imaging through dynamic scattering, concept.

Our approach leverages matrix-based techniques that were developed for imaging through static scattering using multiple illuminations (a) to image through dynamically varying media (b). The underlying mathematical principle enabling this is the commutativity of convolution, which makes the image-formation equation in the static-scattering case ((a), bottom) mathematically equivalent to that of the dynamic case ((b), bottom), with only the roles of the point spread function (PSF) and the object interchanged. a Conventional matrix-based approaches image through static scattering media by processing a set of captured frames of the scattered light, each obtained by illuminating the object with a different unknown random illumination13,14. In the common case of isoplanatic scattering, each captured image is the convolution of the scattering PSF with the illuminated object ((a), bottom). b Rapidly varying dynamic scattering poses a challenge for conventional matrix-based approaches, as multiple captures within the sample decorrelation time are impossible. However, mathematically, for a static object, the captured frames at different times are given by the same convolution equation as in the static-scattering case (a), just with the roles of the object and PSF interchanged. Thus, the object and PSF can be reconstructed by applying the conventional matrix-based algorithms13,14,57 to the frames captured through the dynamic medium.

In isoplanatic (coherent or incoherent) imaging conditions, the image plane distribution is given by a convolution of the object’s optical field (in the coherent case) or intensity (in the incoherent case), denoted as O, with the effective (field or intensity) point spread function (PSF) P:

$$I({{{\bf{r}}}})=P({{{\bf{r}}}})*O({{{\bf{r}}}})$$

(1)

It is important to note that this convolution model is strictly valid only for objects within an isoplanatic patch. All experiments in this work were designed within this constraint. Potential extensions to larger fields of view or thick complex media are discussed in the “Discussion” section.

In the common case where no scattering is present, the PSF is a narrow, sharply peaked function. Consequently, the image on the camera sensor I provides a good direct representation of the object, with a resolution given by the PSF. In the case where isoplanatic scattering or aberrations are present, i.e., in the optical memory-effect range2, Eq. (1) still holds. However, the scattering PSF is a complex and potentially spatially extended speckle pattern. This results in a low-contrast, blurry, and seemingly information-less image on the camera sensor2. The goal of computational scattering compensation is to retrieve the object function, O, and potentially the scattering PSF, P, without prior knowledge of either O or P.
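The image-formation model of Eq. (1) can be illustrated with a minimal numerical sketch. This is illustrative only: it assumes circular boundary conditions, and the random intensity pattern used here is merely a stand-in for a physical speckle PSF.

```python
import numpy as np

def blur_with_psf(obj, psf):
    """Simulate Eq. (1): the camera image is the (circular) convolution
    of the object intensity with the normalized intensity PSF."""
    psf = psf / psf.sum()                      # conserve total intensity
    return np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))

rng = np.random.default_rng(0)
obj = np.zeros((64, 64))
obj[20:44, 30] = 1.0                           # a simple line target
speckle_psf = rng.random((64, 64)) ** 4        # stand-in for a speckle PSF
img = blur_with_psf(obj, speckle_psf)          # low-contrast "scattered" image
```

With a sharply peaked `psf`, `img` closely reproduces `obj`; with the extended speckle-like pattern above, the total intensity is preserved but the contrast collapses, as described in the text.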

The state-of-the-art techniques for computational scattering compensation rely on measuring the reflection matrix of the sample using a set of controlled9,10,36 or random13,14,15 spatial illumination patterns. The reflection matrix is obtained by multiple recordings, m = 1… M, of the scattered complex-valued light field in the coherent case (or scattered light intensity in the incoherent case14) under these different illuminations, with each recorded frame in these measurements being expressed as (Fig. 1a):

$${I}_{m}({{{\bf{r}}}})=P({{{\bf{r}}}}) * {O}_{m}({{{\bf{r}}}})$$

(2)

where Om denotes the mth realization of the illuminated object, \({O}_{m}(r)=O(r){I}_{m}^{{{{\rm{ill}}}}}(r)\), and \({I}_{m}^{{{{\rm{ill}}}}}(r)\) is the mth illumination pattern. By arranging these measured images into columns of a matrix A, we can write the measured dataset as A = PO, where P is a convolution (Toeplitz) matrix, and O is a matrix containing the different illuminated object realizations in its columns. Following Lee et al. and Weinberg et al.13,14, in the case of uncorrelated illuminations (defined as \({\langle {\hat{O}}_{m}({r}_{i}){\hat{O}}_{m}({r}_{j})\rangle }_{m}\propto | O({r}_{i}){| }^{2}{\delta }_{i,j}\), where \(\hat{O}(r) {=}^{{\mathrm{def}}} O(r)-{\langle {O}_{m}(r)\rangle }_{m}\)), the I-CLASS (Incoherent Closed-Loop Accumulation of Single Scattering) algorithm14 enables simultaneous retrieval of both P(r) and ∣O(r)∣2 by decomposing the covariance matrix of A as Cov(A) = PCov(O)PT 13,14.
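The factorization Cov(A) = P Cov(O) Pᵀ can be verified with a small numerical example. The sketch below is one-dimensional and uses a circulant convolution matrix and uniformly random illumination patterns; these choices are illustrative assumptions, not the paper's experimental conditions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 32, 200                               # pixels, illumination realizations

p = rng.random(N)                            # fixed (static) scattering PSF
P = np.column_stack([np.roll(p, k) for k in range(N)])   # circulant matrix

obj = rng.random(N)                          # fixed object
O = obj[:, None] * rng.random((N, M))        # illuminated realizations O_m
A = P @ O                                    # each column is one frame I_m

# Because A depends linearly on O, the identity Cov(A) = P Cov(O) P^T
# holds exactly, even for the empirical (sample) covariance:
assert np.allclose(np.cov(A), P @ np.cov(O) @ P.T)
```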

In dynamic scattering scenarios, where the PSF varies in an uncorrelated manner, but the target object remains relatively unchanged, the state-of-the-art matrix-based approaches fail. However, in such cases, assuming a sufficiently short exposure time, each camera frame (or hologram in the coherent case) (1) can be written as (Fig. 1b):

$${I}_{m}({{{\bf{r}}}})={P}_{m}({{{\bf{r}}}}) * O({{{\bf{r}}}})$$

(3)

with O the static object function and Pm the mth PSF. Due to the commutativity of convolution, Eq. (3) can be written as:

$${I}_{m}({{{\bf{r}}}})=O({{{\bf{r}}}}) * {P}_{m}({{{\bf{r}}}})$$

(4)
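The commutativity step from Eq. (3) to Eq. (4) is elementary, but it can be confirmed numerically in one line:

```python
import numpy as np

rng = np.random.default_rng(2)
P_m = rng.random(64)    # one short-exposure PSF realization
O = rng.random(64)      # the static object

# Convolution commutes, so Eqs. (3) and (4) give identical measurements:
assert np.allclose(np.convolve(P_m, O), np.convolve(O, P_m))
```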

Since Equations (4) and (2) have exactly the same form, just with the roles of the object and PSF exchanged (Fig. 1), the CTR-CLASS13 (Compressed Time-Reversal CLASS) or I-CLASS14 algorithm can be applied to the measurements Im(r), allowing the efficient extraction of the object O(r). More specifically, in matrix form, arranging the measurements Im(r) as columns of a matrix A allows us to write A = OP, where O is now a Toeplitz matrix with the object function as the convolution kernel, and the columns of P are the different PSF realizations. For uncorrelated PSF realizations (\({\langle {\hat{P}}_{m}({r}_{i}){\hat{P}}_{m}({r}_{j})\rangle }_{m}\propto {\delta }_{i,j}\)), Cov(P) is a diagonal matrix, and the CLASS algorithm can be applied to Cov(A) (for a discussion of the important case of residual spatial correlations in the covariance matrix, see Supplementary Section S2).

Thus, the presented approach is realized by performing the following steps: (1) capture m = 1… M scattered light patterns through a rapidly varying medium; (2) arrange the measurements as columns in a matrix A; (3) apply the I-CLASS14 algorithm on A to retrieve the hidden target object (effectively applying the matrix-based adapted CLASS algorithm on the covariance of A).
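The three steps above can be sketched as follows. Step (3) is deliberately left as a stub computing only the covariance of A: the actual reconstruction is the I-CLASS decomposition of refs. 13,14, which is not reproduced here, and the helper function names are our own.

```python
import numpy as np

def build_measurement_matrix(frames):
    """Steps (1)-(2): flatten each short-exposure frame I_m(r) into one
    column of the measurement matrix A (n_pixels x M)."""
    return np.column_stack([f.ravel() for f in frames])

def covariance_of_frames(A):
    """First stage of step (3): the mean-subtracted covariance of A, on
    which the matrix-based CLASS decomposition would then operate."""
    Ac = A - A.mean(axis=1, keepdims=True)
    return Ac @ Ac.T / (A.shape[1] - 1)

rng = np.random.default_rng(3)
frames = [rng.random((16, 16)) for _ in range(150)]   # M = 150 captures
A = build_measurement_matrix(frames)                   # shape (256, 150)
C = covariance_of_frames(A)                            # shape (256, 256)
```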

Experimental results: incoherent imaging

As a first demonstration, we experimentally validated the effectiveness of our method in conventional transmission imaging through dynamically varying scattering. The optical setup is schematically illustrated in Fig. 2a. A conventional widefield microscope captures images of various objects through a rotating diffuser, illuminated by spatially incoherent LED illumination at a 625 nm central wavelength (see “Methods”). For each imaged object, M = 150 short-exposure (0.5–7 ms, see “Methods”) camera frames were captured and processed by the I-CLASS algorithm14. The diffuser rotation between captures was such that the PSFs of the different captures were uncorrelated (see Supplementary Section S1).

Fig. 2: Incoherent imaging proof of principle through a dynamic diffuser.

a Experimental setup: a conventional widefield microscope records M = 150 distorted images of incoherently-illuminated targets through a dynamically rotating scattering diffuser. b, c Captured raw camera frames. d I-CLASS corrected image, revealing the fine details and features of the target. The reconstructed PSF is given in Supplementary Section S1 and Supplementary Movie S1. e Reference image of the object as imaged without the diffuser present. f–q Same as (b–e) for different target objects and diffusers. Colormaps are scaled between the minimum and maximum values of each reconstruction. Scale bars, 100 μm.

Note that although these initial experiments utilize a transmission geometry with illumination from behind the target, adding a scattering medium between the light source and the target would not change the imaging performance, provided sufficient intensity passes through it, since the incoherent imaging configuration only requires homogeneous illumination of the target. Additionally, our subsequent experimental demonstrations in epi-illumination and detection in fluorescence microscopy, and in coherent holographic imaging, demonstrate applicability to noninvasive imaging across diverse optical configurations and imaging modalities.

Figure 2b–q presents the experimental imaging results after applying the I-CLASS reconstruction algorithm. Example raw captured frames of the target objects distorted by the diffuser are given in Fig. 2b, c, f, g, j, k, n, o. As expected, the details of the objects are distorted by scattering compared to their direct imaging without the diffuser present (Fig. 2e, i, m, q). The reconstructed images, obtained by applying the I-CLASS algorithm14 to the captured frames (after multiplying each frame in Fig. 2b, c, f, g by a fixed scalar value between 1 and 2, see “Methods” and Supplementary Section S2), reveal fine details and features of the objects and are presented in Fig. 2d, h, l, p. The reconstructed object allows estimation of the PSF of each frame by frame-wise deconvolution of each raw captured frame with the reconstructed object (Supplementary Section S1). Examples from the full dataset of captured frames, reconstructed objects, and reconstructed PSFs are presented in Supplementary Section S1 and Supplementary Videos S1 and S2.
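The frame-wise deconvolution used to estimate per-frame PSFs can be sketched as a Wiener-regularized division in the Fourier domain. This is an illustrative implementation: the regularization constant `eps` is a hypothetical choice, and the actual procedure is the one detailed in Supplementary Section S1.

```python
import numpy as np

def estimate_frame_psf(frame, obj_rec, eps=1e-3):
    """Estimate one frame's PSF by deconvolving the raw frame with the
    reconstructed object (Wiener regularization avoids dividing by ~0)."""
    F = np.fft.fft2(frame)
    Orec = np.fft.fft2(obj_rec)
    H = F * np.conj(Orec) / (np.abs(Orec) ** 2 + eps * np.abs(Orec).max() ** 2)
    return np.real(np.fft.ifft2(H))

# Synthetic check: a frame formed with a known shifted-delta PSF should
# deconvolve back to a peak at that shift.
rng = np.random.default_rng(4)
obj_rec = rng.random((32, 32))
true_psf = np.zeros((32, 32))
true_psf[3, 5] = 1.0
frame = np.real(np.fft.ifft2(np.fft.fft2(obj_rec) * np.fft.fft2(true_psf)))
est = estimate_frame_psf(frame, obj_rec)
```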

As a second demonstration, we performed transmission imaging through a naturally dynamic scatterer. To this end, we replaced the rotating diffuser with a 1 mm-thick cuvette containing 45 μm-diameter polystyrene beads in solution, creating dynamically varying scattering as the beads flow freely in the suspension. Additionally, a static diffuser with a 0.5° scattering angle was placed adjacent to the cuvette to ensure that no ballistic component was present in the captured scattered-light images. The experimental setup and results for these experiments are presented in Fig. 3. Using this configuration, we imaged and reconstructed two resolution test targets. The captured camera frames (Fig. 3b, c, f, g) appear highly blurred, exhibiting no clear features. However, the I-CLASS corrected images (Fig. 3d, h) successfully reconstruct the fine features of the target objects, demonstrating the effectiveness of the approach in correcting such natural dynamic scattering where no ballistic component is present. For reference, direct images of the same resolution targets captured without the scattering medium, using the same widefield transmission microscope, are provided in Fig. 3e, i.

Fig. 3: Incoherent imaging through dynamic scattering.

a Experimental setup: a conventional widefield microscope records M = 150 distorted images of incoherently-illuminated targets through a 1 mm-thick cuvette filled with a solution of 45 μm polystyrene beads. A static diffuser is added in front of the cuvette to ensure no ballistic component is present. b, c Experimental camera frames of a target imaged through the rapidly varying dynamic medium, taken at distinct times. d I-CLASS reconstructed images. e Images of the object with the cuvette removed. f–i Same as (b–e) for a different target object. Insets in h, i show zoomed-in areas marked by red-dashed lines. Colormaps are scaled between the minimum and maximum values of each reconstruction. Scale bars, 150 μm.

As an additional proof of principle, we tested our approach in a fluorescence microscopy experiment performed in an epi-detection geometry through dynamic scattering. The setup for this experiment is depicted in Fig. 4a. It is a conventional widefield fluorescence microscope with a rotating diffuser placed between the microscope objective and the fluorescent sample (see “Methods”). The illumination source is a narrowband, spatially incoherent source composed of a 200-mW CW laser (06-MLD-488, Cobolt) and a rapidly rotating diffuser (see “Methods”). An sCMOS camera (Andor Neo 5.5) captures M = 150 images of the scattered fluorescence light through a dichroic mirror and appropriate emission filters.

Fig. 4: Fluorescence microscopy through dynamic scattering.

a Experimental setup: a conventional widefield fluorescence microscope records M = 150 distorted images of fluorescent 10 μm-diameter beads through a dynamically rotating scattering diffuser. b Experimental camera frames of fluorescent objects imaged through an optical diffuser, showing distorted images due to scattering. c I-CLASS corrected images reveal fine details and features of the objects. d Reference images of the objects without scattering layers. e–g Same as (b–d) for different target objects. Insets in b–g show zoomed-in areas marked by red-dashed lines. Colormaps are scaled between the minimum and maximum values of each reconstruction. Scale bars, 100 μm.

Figure 4 presents the results of the fluorescence microscopy experiments. Figure 4b, e show sample captured frames of two targets composed of fluorescent beads (Fluoresbrite YG microspheres, 10 μm), as conventionally imaged through the optical diffuser. The significant distortion due to scattering can be observed in both the raw frames and the zoomed-in areas marked by red-dashed lines. The I-CLASS reconstructed images (Fig. 4c, f), after multiplying each frame by a fixed scalar value (see “Methods” and Supplementary Section S2), successfully recover fine details and features of the objects. For comparison, Fig. 4d, g display the direct images of the objects as imaged without the scattering present.

Experimental results: holographic coherent imaging

As a final demonstration of the generality of the approach and its applicability to various imaging modalities, we applied it to coherent holographic reflection imaging. The results of this study are presented in Fig. 5. The experimental setup is schematically depicted in Fig. 5a: a reflective target (USAF resolution target) is illuminated by a wide illumination beam through a dynamically rotating diffuser. The illumination is provided by a 632 nm Gaussian beam from a Helium-Neon laser (HNL210L, Thorlabs), which is focused to a tight spot on the diffuser surface to ensure that the object illumination remains relatively constant while the diffuser is varied (see below). An sCMOS camera holographically records M = 180 reflected scattered light fields by imaging the diffuser back surface with a 4f imaging system. A reference beam with a proper optical path delay matching the target distance is used for off-axis holographic acquisition37.

Fig. 5: Experimental coherent reflection-imaging through dynamic scattering.

a Experimental setup: a reflective target is illuminated through a dynamically rotating scattering diffuser. M = 180 reflected light fields are holographically recorded in an off-axis holography configuration using a reference arm. b Example of the recorded distorted fields after computational propagation to the object plane. c One example of the recorded field intensity after computational propagation to the object plane. d Reconstructed object intensity, after applying the I-CLASS algorithm to compensate for scattering, followed by numerical propagation to the object plane (see Supplementary Fig. S6). e Complex-valued field amplitude PSFs (APSFs) estimated from each captured field. f Reference intensity image of the object without the diffuser present. Scale bars, 1 mm.

To reconstruct the target, each captured field was digitally propagated to the target plane using Fresnel propagation. As expected, without scattering compensation, the reconstructed fields are highly distorted, and the target object features cannot be observed (Fig. 5b).
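One common way to implement the digital propagation step is the angular-spectrum method, sketched below. The sampling parameters in the usage example (pixel pitch, propagation distance) are illustrative assumptions, and the paper's exact propagation routine may differ.

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Propagate a sampled complex field by a distance dz using the
    angular-spectrum transfer function (evanescent waves are discarded)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Sanity check: propagating forward and back recovers the original field.
field = np.ones((64, 64), dtype=complex)          # a unit plane wave
out = angular_spectrum_propagate(field, 0.05, 632e-9, 5e-6)
back = angular_spectrum_propagate(out, -0.05, 632e-9, 5e-6)
```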

Provided that the image-formation equation has the same form as Eq. (4), the CTR-CLASS or I-CLASS algorithms can be directly applied to the holographically captured fields. However, unlike the spatially incoherent illumination case (Figs. 2–4), where the illumination at the target plane was homogeneous regardless of the scattering-layer dynamics, a significant challenge in the coherent imaging configuration is ensuring constant illumination of the target despite the dynamic scattering introduced by the rotating diffuser. Specifically, in the coherent illumination case (Fig. 5), the presence of the dynamic scatterer in the illumination path of a wide beam generates a speckle illumination field at the target plane that may vary between scatterer realizations. Consequently, the measured field at the camera for the mth realization, \({E}_{m}^{{{{\rm{cam}}}}}({{{\bf{r}}}})\), can be expressed as:

$${E}_{m}^{{{{\rm{cam}}}}}({{{\bf{r}}}})={P}_{m}^{{{{\rm{coh}}}}}({{{\bf{r}}}}) * \left[O({{{\bf{r}}}}){E}_{m}^{{{{\rm{ill}}}}}({{{\bf{r}}}})\right]$$

(5)

where \({E}_{m}^{{{{\rm{ill}}}}}({{{\bf{r}}}})\) is the illumination field at the target plane at the mth realization, \({P}_{m}^{{{{\rm{coh}}}}}({{{\bf{r}}}})\) denotes the complex-valued field amplitude point spread function (APSF) introduced by the scattering medium in the mth realization, and O(r) is the target object's spatial reflectivity. From Eq. (5), it is evident that if the illumination pattern remains unchanged across scattering realizations, \({E}_{m}^{{{{\rm{ill}}}}}({{{\bf{r}}}})={E}^{{{{\rm{ill}}}}}({{{\bf{r}}}})\), then Oeff(r) = O(r)Eill(r) may be considered an effective static object, serving as the single object function assumed in the model of Eq. (4). To ensure this is the case, we focused the Gaussian illumination beam on the scattering-layer surface such that the beam waist at the surface is sufficiently smaller than the correlation length of the scattering layer. This minimizes variations in \({E}_{m}^{{{{\rm{ill}}}}}({{{\bf{r}}}})\) between realizations, allowing the illumination field to be treated as effectively constant. Under this assumption, the measured fields follow:

$${E}_{m}^{{{{\rm{cam}}}}}({{{\bf{r}}}})\approx {P}_{m}^{{{{\rm{coh}}}}}({{{\bf{r}}}}) * {O}^{{{{\rm{eff}}}}}({{{\bf{r}}}})$$

(6)

Since Equations (6) and (4) share the same form, CTR-CLASS13 can be applied to the measurements \({E}_{m}^{{{{\rm{cam}}}}}({{{\bf{r}}}})\), allowing the efficient reconstruction of the object field phase at the scattering-layer plane. To reconstruct both the phase and amplitude of the object field, we applied the I-CLASS algorithm14, which also estimates the object field amplitude from the M captured fields. The I-CLASS reconstructed object field at the scattering-layer plane is numerically propagated to the target plane by Fresnel propagation, faithfully retrieving the fine features of the target (Fig. 5d). With the reconstructed target in hand, the APSF for each frame, \({P}_{m}^{{{{\rm{coh}}}}}({{{\bf{r}}}})\), can be estimated. This is achieved by calculating the phase introduced by the rotating diffuser at each realization15: \({\hat{P}}_{m}^{{{{\rm{coh}}}}}(r)={{{\mathcal{F}}}}\left\{{e}^{i{\hat{\phi }}_{{{{\rm{diff}}}}}}\right\}={{{\mathcal{F}}}}\left\{{e}^{i\arg \left(\frac{{E}_{m}^{{{{\rm{cam}}}}}({r}_{{{{\rm{cam}}}}})}{{E}_{{{{\rm{CLASS}}}}}({r}_{{{{\rm{cam}}}}})}\right)}\right\}\), where \({E}_{m}^{{{{\rm{cam}}}}}({r}_{{{{\rm{cam}}}}})\) are the measured fields at the diffuser plane, and ECLASS is the CLASS-reconstructed object field at the diffuser plane. As shown in previous works15,38, the CLASS-reconstructed object field also contains the spherical phase from the propagation distance between the object and the scattering layer. Thus, \({\hat{\phi }}_{{{{\rm{diff}}}}}\) provides an estimation of the diffuser phase. Several estimated APSFs are shown in Fig. 5e.
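The APSF estimation described above amounts to keeping only the phase of the ratio between each measured field and the reconstructed object field, then Fourier transforming. A minimal sketch, assuming both fields are sampled on the same grid at the diffuser plane:

```python
import numpy as np

def estimate_apsf(E_cam, E_class):
    """Estimate one realization's amplitude PSF: the diffuser phase is the
    argument of the field ratio, and the APSF is its Fourier transform."""
    phi_diff = np.angle(E_cam / E_class)   # estimated diffuser phase
    return np.fft.fft2(np.exp(1j * phi_diff))

# Sanity check: if the measured field equals the reconstructed one, the
# estimated phase is zero and the APSF is a single diffraction-limited peak.
E = np.full((8, 8), 1.0 + 1.0j)
apsf = estimate_apsf(E, E)
```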