We present a novel way to combine snapshot compressive imaging and lateral shearing interferometry to capture the spatio-spectral phase of an ultrashort laser pulse in a single shot. A deep unrolling algorithm is used for the snapshot compressive imaging reconstruction because of its parameter efficiency and superior speed relative to other methods, potentially allowing for online reconstruction. The algorithm’s regularization term is represented by a neural network with 3D convolutional layers to exploit the spatio-spectral correlations that exist in laser wavefronts. Compressed sensing is not typically applied to modulated signals, but we demonstrate its success here. Furthermore, we train a neural network to predict the wavefronts from a lateral shearing interferogram in terms of Zernike polynomials, which further increases the speed of our technique without sacrificing fidelity. The method is supported with simulation-based results. While applied to the example of lateral shearing interferometry, the methods presented here are generally applicable to a wide range of signals, including those from Shack–Hartmann-type sensors. The results may be of interest beyond the context of laser wavefront characterization, including within quantitative phase imaging.
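To make the deep unrolling idea concrete, here is a minimal sketch of one unrolled stage, assuming a proximal-gradient-style update under a coded-aperture forward model; the names (`Regularizer3D`, `Phi`) and the architecture are illustrative assumptions, not the authors’ implementation:

```python
import torch
import torch.nn as nn

class Regularizer3D(nn.Module):
    """Learned prior: a small 3D CNN acting on (lambda, x, y) volumes."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x - self.net(x)  # residual denoising step

class UnrolledStage(nn.Module):
    """One stage of an unrolled proximal-gradient reconstruction."""
    def __init__(self):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))  # learnable step size
        self.prior = Regularizer3D()

    def forward(self, x, y, Phi):
        # x: (B, 1, L, H, W) spectral-cube estimate; Phi: coded-aperture
        # masks of the same shape; y: (B, 1, 1, H, W) snapshot measurement.
        residual = (Phi * x).sum(dim=2, keepdim=True) - y  # forward model
        x = x - self.step * Phi * residual                 # adjoint gradient step
        return self.prior(x)                               # learned prox step
```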
Motivated by problems from compressed sensing, we determine the threshold behaviour of a random $n\times d$ matrix $M_{n,d}$ with $\pm 1$ entries with respect to the property ‘every $s$ columns are linearly independent’. In particular, we show that for every $0<\delta <1$ and $s=(1-\delta )n$, if $d\leq n^{1+1/2(1-\delta )-o(1)}$ then with high probability every $s$ columns of $M_{n,d}$ are linearly independent, and if $d\geq n^{1+1/2(1-\delta )+o(1)}$ then with high probability there are some $s$ linearly dependent columns.
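As a rough empirical companion to this threshold (not part of the paper), one can sample random $\pm 1$ matrices and test randomly chosen column subsets for linear dependence; note that checking one sampled subset per matrix is only a crude proxy for the ‘every $s$ columns’ property:

```python
import numpy as np

rng = np.random.default_rng(0)

def dependence_rate(n, d, s, trials=200):
    """Fraction of trials in which a randomly sampled set of s columns
    of a random n x d +/-1 matrix is linearly dependent (rank < s)."""
    hits = 0
    for _ in range(trials):
        M = rng.choice([-1, 1], size=(n, d))
        cols = rng.choice(d, size=s, replace=False)
        if np.linalg.matrix_rank(M[:, cols]) < s:
            hits += 1
    return hits / trials

# Example: delta = 0.5, so s = (1 - delta) * n
print(dependence_rate(n=30, d=200, s=15))
```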
To merge the advantages of the traditional compressed sensing (CS) methodology and the data-driven deep network scheme, this paper proposes a physical model-driven deep network, termed CS-Net, for solving target image reconstruction problems in through-the-wall radar imaging. The proposed method consists of two consecutive steps. First, a learned convolutional neural network prior is introduced to replace the regularization term in the traditional iterative CS-based method, capturing the redundancy of the radar echo signal; moreover, the physical model of the radar signal is used in a data-consistency layer to encourage consistency with the measurements. Second, the iterative CS optimization is unrolled to yield a deep learning network, in which the weights, the regularization parameter, and the other parameters are learnable. A large quantity of training data enables the network to extract high-dimensional characteristics of the radar echo signal and reconstruct the spatial target image. Simulation results demonstrate that the proposed method achieves accurate target image reconstruction and is superior to the traditional CS method in terms of mean squared error and target texture details.
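A minimal sketch of the data-consistency idea under a generic linear forward model $y=Ax$; the matrix `A`, the stage structure, and all names here are placeholder assumptions rather than the paper’s radar model:

```python
import torch
import torch.nn as nn

class DataConsistency(nn.Module):
    """Pulls the current estimate back toward the measurements y
    under a placeholder linear forward model A."""
    def __init__(self):
        super().__init__()
        self.mu = nn.Parameter(torch.tensor(0.5))  # learnable weight

    def forward(self, x, y, A):
        # x: flattened image estimate, y: radar echo, A: measurement matrix
        return x - self.mu * (A.T @ (A @ x - y))

class CSNetStage(nn.Module):
    """One unrolled stage: learned CNN prior, then data consistency."""
    def __init__(self, prior: nn.Module):
        super().__init__()
        self.prior = prior  # learned regularizer (replaces the CS penalty)
        self.dc = DataConsistency()

    def forward(self, x, y, A):
        return self.dc(self.prior(x), y, A)
```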
Accurate, robust and fast image reconstruction is a critical task in many scientific, industrial and medical applications. Over the last decade, image reconstruction has been revolutionized by the rise of compressive imaging. It has fundamentally changed the way modern image reconstruction is performed. This in-depth treatment of the subject commences with a practical introduction to compressive imaging, supplemented with examples and downloadable code, intended for readers without extensive background in the subject. Next, it introduces core topics in compressive imaging – including compressed sensing, wavelets and optimization – in a concise yet rigorous way, before providing a detailed treatment of the mathematics of compressive imaging. The final part is devoted to recent trends in compressive imaging: deep learning and neural networks. With an eye to the next decade of imaging research, and using both empirical and mathematical insights, it examines the potential benefits and the pitfalls of these latest approaches.
This chapter provides an introduction to uncertainty relations underlying sparse signal recovery. We start with the seminal work by Donoho and Stark (1989), which defines uncertainty relations as upper bounds on the operator norm of the band-limitation operator followed by the time-limitation operator, generalize this theory to arbitrary pairs of operators, and then develop, out of this generalization, the coherence-based uncertainty relations due to Elad and Bruckstein (2002), plus uncertainty relations in terms of concentration of the 1-norm or 2-norm. The theory is completed with set-theoretic uncertainty relations which lead to best possible recovery thresholds in terms of a general measure of parsimony, the Minkowski dimension. We also elaborate on the remarkable connection between uncertainty relations and the “large sieve,” a family of inequalities developed in analytic number theory. We show how uncertainty relations allow one to establish fundamental limits of practical signal recovery problems such as inpainting, declipping, super-resolution, and denoising of signals corrupted by impulse noise or narrowband interference.
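For concreteness, the classical Donoho–Stark uncertainty principle for the discrete Fourier transform states that any nonzero $x\in \mathbb{C}^{N}$ with time support $T$ and frequency support $W$ satisfies
$$|T|\cdot |W|\geq N \qquad \text{and} \qquad |T|+|W|\geq 2\sqrt{N}.$$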
In compressed sensing (CS) a signal $x\in \mathbb{R}^{n}$ is measured as $y=Ax+z$, where $A\in \mathbb{R}^{m\times n}$ ($m<n$) and $z\in \mathbb{R}^{m}$ denote the sensing matrix and the measurement noise, respectively. The goal is to recover $x$ from the measurements $y$ when $m<n$. CS is possible because we typically want to capture highly structured signals, and recovery algorithms take advantage of a signal’s structure to solve the under-determined system of linear equations. As in CS, data-compression codes take advantage of a signal’s structure to encode it efficiently. The structures used by compression codes are much more elaborate than those used by CS algorithms. Using more complex structures in CS, like those employed by data-compression codes, potentially leads to more efficient recovery methods requiring fewer linear measurements or giving better reconstruction quality. We establish connections between data compression and CS, giving CS recovery methods based on compression codes, which indirectly take advantage of all the structures used by those codes. This elevates the class of structures used by CS algorithms to those used by compression codes, leading to more efficient CS recovery methods.
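A minimal sketch of compression-based recovery via projected gradient descent, where `compress` and `decompress` are hypothetical placeholders for an arbitrary off-the-shelf compression code:

```python
import numpy as np

def recover(y, A, compress, decompress, steps=100, mu=0.1):
    """Projected gradient descent: a gradient step on ||y - Ax||^2,
    then a projection onto the range of the compression code."""
    x = A.T @ y                             # crude initialization
    for _ in range(steps):
        x = x - mu * A.T @ (A @ x - y)      # data-fidelity gradient step
        x = decompress(compress(x))         # project via compress/decompress
    return x
```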
Fast and accurate identification of power-line outages is of paramount importance not only for preventing faults that may lead to blackouts but also for routine monitoring and control tasks in the smart grid. This chapter presents a sparse overcomplete model that represents the effects of (potentially multiple) power-line outages on synchronized bus voltage angle measurements. Based on this model, efficient compressive sensing algorithms can be adopted to identify outaged lines with complexity linear in the total number of lines. Furthermore, the effects of uncertainty in synchronized measurements are analyzed, along with the optimal placement of measurement units.
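As an illustration of how such a sparse model can be solved (a sketch under assumed names, not the chapter’s exact formulation), suppose each column of a dictionary `B` holds the angle signature of a single-line outage; an $l_1$ solver then flags the outaged lines:

```python
import numpy as np
from sklearn.linear_model import Lasso

def identify_outages(delta_theta, B, alpha=0.01):
    """delta_theta: observed change in synchronized bus voltage angles.
    B: (n_buses x n_lines) dictionary whose column l is the angle
    signature of an outage on line l. Nonzero coefficients in the
    sparse solution flag the outaged lines."""
    model = Lasso(alpha=alpha, fit_intercept=False)
    model.fit(B, delta_theta)
    return np.flatnonzero(model.coef_)
```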
The sign truncated matching pursuit (STrMP) algorithm is presented in this paper. STrMP is a new greedy algorithm for the recovery of sparse signals from sign measurements, which combines the principle of consistent reconstruction with orthogonal matching pursuit (OMP). The main part of STrMP is as concise as OMP, and hence STrMP is simple to implement. In contrast to previous greedy algorithms for one-bit compressed sensing, STrMP only needs to solve a convex and unconstrained subproblem at each iteration. Numerical experiments show that STrMP is fast and accurate for one-bit compressed sensing compared with other algorithms.
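For reference, a minimal implementation of the OMP building block that STrMP extends (this is plain OMP, not STrMP itself):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of A,
    re-fitting least squares on the chosen support at each iteration."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```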
In this paper, we consider signal recovery via $l_{1}$-analysis optimisation. The signals we consider are not sparse in an orthonormal basis or incoherent dictionary, but sparse or nearly sparse in terms of some tight frame $D$. The analysis in this paper is based on the restricted isometry property adapted to a tight frame $D$ (abbreviated as $D$-RIP), which is a natural extension of the standard restricted isometry property. Assuming that the measurement matrix $A\in \mathbb{R}^{m\times n}$ satisfies the $D$-RIP with constant $\delta_{tk}$ for integer $k$ and $t>1$, we show that the condition $\delta_{tk}<\sqrt{(t-1)/t}$ guarantees stable recovery of signals through $l_{1}$-analysis. This condition is sharp in the sense explained in the paper. The results improve those of Li and Lin [‘Compressed sensing with coherent tight frames via $l_{q}$-minimization for $0<q\leq 1$’, Preprint, 2011, arXiv:1105.3299] and Baker [‘A note on sparsification by frames’, Preprint, 2013, arXiv:1308.5249].
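The $l_{1}$-analysis problem referred to here takes the standard form
$$\min _{x\in \mathbb{R}^{n}}\Vert D^{\ast }x\Vert _{1}\quad \text{subject to}\quad \Vert Ax-y\Vert _{2}\leq \varepsilon ,$$
where $\varepsilon$ is the assumed noise level.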
Standard techniques in matrix factorization (MF) – a popular method for latent factor model-based design – result in dense matrices for both users and items. Users are likely to have some affinity toward all the latent factors, making a dense user matrix plausible; however, items cannot possess all the latent factors simultaneously, so the item matrix is more likely to be sparse. Therefore, we propose to factor the rating matrix into a dense user matrix and a sparse item matrix, leading to the blind compressed sensing (BCS) framework. To further enhance the prediction quality of our design, we incorporate user and item metadata into the BCS framework. The additional information helps in reducing the underdetermined nature of the rating-prediction problem caused by the extreme sparsity of the rating dataset. Our design is based on the belief that users sharing a similar demographic profile have similar preferences and thus can be described by similar latent factor vectors. We also use item metadata (genre information) to group together similar items. We modify our BCS formulation to include item metadata under the assumption that items belonging to a common genre share a similar sparsity pattern. We also design an efficient algorithm to solve our formulation. Extensive experimentation conducted on the MovieLens dataset validates our claim that our modified MF framework utilizing auxiliary information improves upon the existing state-of-the-art techniques.
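A simplified sketch of the dense-user/sparse-item idea, using alternating gradient updates with an $l_1$ proximal step standing in for the BCS sparsity constraint (an assumption for illustration, not the paper’s algorithm):

```python
import numpy as np

def soft_threshold(X, t):
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def factorize(R, rank=10, lam=0.1, iters=50, lr=0.01):
    """R: (users x items) rating matrix, zeros marking missing entries.
    Returns a dense user matrix U and a sparse item matrix V."""
    rng = np.random.default_rng(0)
    mask = R != 0
    U = 0.1 * rng.standard_normal((R.shape[0], rank))
    V = 0.1 * rng.standard_normal((rank, R.shape[1]))
    for _ in range(iters):
        E = mask * (U @ V - R)           # error on observed ratings only
        U -= lr * E @ V.T                # plain gradient step: U stays dense
        V -= lr * U.T @ E
        V = soft_threshold(V, lr * lam)  # proximal step: V is pushed sparse
    return U, V
```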
In this paper, a novel approach for quantifying the parametric uncertainty associated with a stochastic problem output is presented. As with Monte Carlo and stochastic collocation methods, only point-wise evaluations of the stochastic output response surface are required, allowing the use of legacy deterministic codes and precluding the need for any dedicated stochastic code to solve the uncertain problem of interest. The new approach differs from these standard methods in that it is based on ideas directly linked to the recently developed compressed sensing theory. The technique allows the retrieval of the modes that contribute most significantly to the approximation of the solution using a minimal amount of information. The generation of this information, via many solver calls, is almost always the bottleneck of an uncertainty quantification procedure. If the stochastic model output has a reasonably compressible representation in the retained approximation basis, the proposed method makes the best use of the available information and retrieves the dominant modes. Uncertainty quantification of the solution of both a 2-D and an 8-D stochastic shallow water problem is used to demonstrate the significant performance improvement of the new method, which requires up to several orders of magnitude fewer solver calls than the usual sparse-grid-based polynomial chaos (Smolyak scheme) to achieve comparable approximation accuracy.
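A sketch of the compressed-sensing step under illustrative assumptions: a 1-D Legendre polynomial chaos basis and an off-the-shelf $l_1$ solver recover the dominant modes from few solver calls:

```python
import numpy as np
from numpy.polynomial.legendre import legvander
from sklearn.linear_model import Lasso

def sparse_pc_fit(xi, f_vals, degree=10, alpha=1e-3):
    """xi: (n_samples,) stochastic inputs scaled to [-1, 1];
    f_vals: deterministic-solver outputs at those points.
    Fits a sparse polynomial chaos expansion by l1-regularized
    regression, keeping only the dominant modes."""
    Psi = legvander(xi, degree)  # (n_samples, degree + 1) design matrix
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    model.fit(Psi, f_vals)
    return model.coef_           # sparse PC coefficients
```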