In this chapter, we establish the mathematical foundation for hard-computing optimization algorithms. We review the classical optimization approaches and extend the discussion to iterative methods, which play a special role in machine learning. In particular, we review the gradient descent method, Newton's method, the conjugate gradient method, and the quasi-Newton method. Alongside the discussion of these optimization methods, we provide MATLAB implementations as well as considerations for their use in neural-network training algorithms. Finally, the Levenberg-Marquardt method is introduced, discussed, and implemented in MATLAB to compare its behavior with that of the other four iterative algorithms introduced in this chapter.
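As a rough illustration of two of these update rules, the sketch below fits a toy exponential model with plain gradient descent and with an adaptively damped Levenberg-Marquardt iteration. It is written in Python/NumPy rather than MATLAB, and the model, step size, and damping schedule are our own arbitrary choices, not the chapter's code.

    import numpy as np

    # Toy least-squares problem: fit y = a * exp(b * x) to noisy samples.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 50)
    y = 2.0 * np.exp(0.5 * x) + 0.05 * rng.standard_normal(x.size)

    def residuals(p):                    # r(p) = model - data
        a, b = p
        return a * np.exp(b * x) - y

    def jacobian(p):                     # J[i, j] = d r_i / d p_j
        a, b = p
        e = np.exp(b * x)
        return np.column_stack([e, a * x * e])

    # Gradient descent on 0.5 * ||r||^2:  p <- p - lr * J^T r
    p = np.array([1.0, 1.0])
    for _ in range(2000):
        p -= 2e-3 * jacobian(p).T @ residuals(p)

    # Levenberg-Marquardt:  p <- p - (J^T J + mu * I)^(-1) J^T r, with adaptive damping mu
    q, mu = np.array([1.0, 1.0]), 1.0
    for _ in range(50):
        r, J = residuals(q), jacobian(q)
        step = np.linalg.solve(J.T @ J + mu * np.eye(2), J.T @ r)
        if np.sum(residuals(q - step) ** 2) < np.sum(r ** 2):
            q, mu = q - step, mu * 0.5   # accept the step, relax toward Gauss-Newton
        else:
            mu *= 2.0                    # reject the step, increase the damping

    print("gradient descent:", p, "  Levenberg-Marquardt:", q)   # both approach a ~ 2, b ~ 0.5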
Edited by
Jong Chul Ye, Korea Advanced Institute of Science and Technology (KAIST); Yonina C. Eldar, Weizmann Institute of Science, Israel; and Michael Unser, École Polytechnique Fédérale de Lausanne
We provide a short, self-contained introduction to deep neural networks that is aimed at mathematically inclined readers. We promote the use of a vector-matrix formalism that is well suited to the compositional structure of these networks and that facilitates the derivation and description of the backpropagation algorithm. We present a detailed analysis of supervised learning for the two most common scenarios, (i) multivariate regression and (ii) classification, which rely on the minimization of least-squares and cross-entropy criteria, respectively.
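To make the vector-matrix formalism concrete, here is a minimal NumPy sketch of one forward and backward pass through a two-layer network under the least-squares (regression) criterion. The notation and layer sizes are our own and do not necessarily match the chapter's.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(8)            # input vector
    t = rng.standard_normal(3)            # regression target

    # Two-layer network in vector-matrix form: y = W2 @ sigma(W1 @ x), with sigma = ReLU.
    W1 = rng.standard_normal((5, 8))
    W2 = rng.standard_normal((3, 5))

    # Forward pass.
    z1 = W1 @ x
    a1 = np.maximum(z1, 0.0)
    y = W2 @ a1

    # Backpropagation for the least-squares loss 0.5 * ||y - t||^2.
    delta2 = y - t                        # error at the output layer
    grad_W2 = np.outer(delta2, a1)
    delta1 = (W2.T @ delta2) * (z1 > 0)   # chain rule through the ReLU
    grad_W1 = np.outer(delta1, x)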
Since the groundbreaking performance improvement achieved by AlexNet in the ImageNet challenge, deep learning has provided significant gains over classical approaches in various fields of data science, including image reconstruction. The availability of large-scale training datasets and advances in neural-network research have resulted in the unprecedented success of deep learning in various applications. Nonetheless, the success of deep learning appears very mysterious. The basic building blocks of deep neural networks are convolution, pooling, and nonlinearity, which are primitive tools of mathematics. Interestingly, the cascaded connection of these primitive tools results in superior performance over traditional approaches. To demystify this, one can revisit the basic ideas of the classical approaches and examine their similarities to, and differences from, modern deep-neural-network methods. In this chapter, we explain the limitations of classical machine-learning approaches and review the mathematical foundations needed to understand why deep neural networks have successfully overcome these limitations.
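For concreteness, the sketch below composes the three primitive building blocks named above (convolution, nonlinearity, pooling) into a small cascade in PyTorch. The channel counts and image size are arbitrary placeholders.

    import torch
    import torch.nn as nn

    # A cascade of the primitive building blocks: convolution, ReLU nonlinearity, pooling.
    block = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

    x = torch.randn(1, 1, 64, 64)   # one single-channel 64 x 64 image
    print(block(x).shape)           # torch.Size([1, 32, 16, 16])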
Inspired by the success of deep learning in computer-vision tasks, deep learning approaches for various MRI problems have been extensively studied in recent years. Early deep-learning studies for MRI reconstruction and enhancement were mostly based on image-domain learning. However, because the MR signal is acquired in the k-space domain, researchers have demonstrated that deep neural networks can be designed directly in k-space to exploit the physics of MR acquisition. In this chapter, recent trends in k-space deep learning for MRI reconstruction and artifact removal are reviewed. First, scan-specific k-space learning, which is inspired by parallel MRI, is covered. Then we provide an overview of data-driven k-space learning. Subsequently, unsupervised learning for MRI reconstruction and motion-artifact removal is discussed.
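The sketch below only illustrates the general idea of operating in k-space rather than in the image domain: a small CNN acts on the real/imaginary channels of undersampled k-space before an inverse FFT maps the result back to an image. The network, mask, and sizes are placeholders and do not correspond to any specific method reviewed in the chapter.

    import torch
    import torch.nn as nn

    # A network that operates directly on (undersampled) k-space data.
    knet = nn.Sequential(
        nn.Conv2d(2, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 2, 3, padding=1),
    )

    image = torch.randn(1, 128, 128)                      # stand-in for an MR image
    kspace = torch.fft.fft2(image)                        # acquisition happens in k-space
    mask = (torch.rand(128, 128) > 0.5).float()           # random undersampling mask
    kspace_us = kspace * mask

    k_in = torch.stack([kspace_us.real, kspace_us.imag], dim=1)   # shape (1, 2, 128, 128)
    k_out = knet(k_in)
    k_complex = torch.complex(k_out[:, 0], k_out[:, 1])
    recon = torch.fft.ifft2(k_complex).abs()              # back to the image domain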
Ultrasound (US) imaging is susceptible to several types of artifacts. Most artifacts arise from transducer limitations and from simplifying assumptions about wave propagation. Artifacts are sometimes exploited as a source of tissue information; more often, however, they lead to misinterpretation in clinical diagnosis. Therefore, to improve the clinical utility of ultrasound in difficult-to-image patients and settings, a number of artifact-removal methods that aim to boost image quality have been proposed. Classical optimization-based methods suffer from limited performance and high computational requirements, and it is difficult to tune their parameters to produce high-quality output. Deep learning offers a remedy for these issues: it achieves higher performance than traditional methods at significantly reduced runtime complexity. Another major advantage is that the parameters learned during the training phase can be reused to process different input images. This has motivated the scientific community to design deep-neural-network-based approaches for US artifact-removal tasks.
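As a purely illustrative sketch of the "train once, reuse the same weights for every new image" point above, here is a tiny residual CNN of our own design (not one of the architectures discussed in the chapter) applied with fixed parameters at inference time.

    import torch
    import torch.nn as nn

    # Toy artifact-removal network: a small residual CNN mapping a corrupted
    # ultrasound image to a corrected one.
    class ArtifactRemover(nn.Module):
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),
            )

        def forward(self, x):
            return x + self.body(x)       # residual learning: predict a correction to the input

    net = ArtifactRemover()
    # Once trained, the same fixed weights are reused for every new input image:
    with torch.no_grad():
        cleaned = net(torch.randn(1, 1, 256, 256))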
In this chapter, we provide an overview of a recent image-reconstruction method that uses a deep generative algorithm for dynamic magnetic resonance imaging (dMRI). We begin by briefly introducing the imaging modality, the associated image-reconstruction problem, and existing reconstruction approaches. Next, we introduce the time-dependent deep image prior (TD-DIP), which exploits the structure of convolutional neural networks (CNNs) as a regularizing prior. We show some representative results and discuss the pros and cons of this regularization paradigm. Finally, we discuss a few remaining limitations.
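The loop below is a stripped-down, single-frame deep-image-prior sketch with a toy masked-Fourier forward model; it is only meant to convey the structure "CNN output as the image, data consistency as the loss". TD-DIP itself uses a time-dependent latent input and a different architecture, and the network, mask, and measurements here are arbitrary stand-ins.

    import torch
    import torch.nn as nn

    net = nn.Sequential(                              # the CNN whose structure acts as the prior
        nn.Conv2d(8, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 1, 3, padding=1),
    )

    z = torch.randn(1, 8, 64, 64)                     # fixed random latent input
    mask = (torch.rand(64, 64) > 0.7).float()         # undersampling pattern
    target = torch.randn(64, 64, dtype=torch.complex64) * mask   # stand-in for measured k-space

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(200):
        image = net(z).squeeze()                      # image generated by the CNN
        k = torch.fft.fft2(image) * mask              # forward model: masked Fourier transform
        loss = (k - target).abs().pow(2).mean()       # data-consistency term only
        opt.zero_grad()
        loss.backward()
        opt.step()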
CryoGAN uses ideas from deep generative adversarial learning to perform image reconstruction in single-particle cryo-electron microscopy (cryo-EM). In this chapter, we begin by introducing single-particle cryo-EM. We then formulate the associated image-reconstruction problem and discuss the main solutions found in the literature. Next, we describe the CryoGAN algorithm and show some representative results. Finally, we discuss what our experience with CryoGAN suggests about the advantages and disadvantages of such deep generative adversarial methods in single-particle cryo-EM and beyond.
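The cartoon below conveys only the adversarial structure: a learnable volume is pushed through a crude stand-in for the imaging physics (a projection along a random axis plus noise), and a discriminator compares the simulated projections with measured particle images. The actual CryoGAN simulator models particle orientations, the contrast transfer function, and realistic noise; every size, network, and hyperparameter here is an arbitrary placeholder.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    volume = torch.zeros(32, 32, 32, requires_grad=True)          # the reconstruction
    disc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.LeakyReLU(0.2),
                         nn.Flatten(), nn.Linear(16 * 32 * 32, 1))

    opt_v = torch.optim.Adam([volume], lr=1e-2)
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
    real_projs = torch.randn(16, 1, 32, 32)            # stand-in for measured particle images

    for _ in range(100):
        axis = int(torch.randint(0, 3, (1,)))          # crude stand-in for a random orientation
        fake = volume.sum(dim=axis)[None, None] + 0.1 * torch.randn(1, 1, 32, 32)

        # Discriminator update: real projections vs. simulated ones.
        d_loss = F.binary_cross_entropy_with_logits(disc(real_projs), torch.ones(16, 1)) \
               + F.binary_cross_entropy_with_logits(disc(fake.detach()), torch.zeros(1, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Volume update: make simulated projections indistinguishable from real ones.
        g_loss = F.binary_cross_entropy_with_logits(disc(fake), torch.ones(1, 1))
        opt_v.zero_grad(); g_loss.backward(); opt_v.step()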
Quantitative phase imaging (QPI) refers to label-free techniques that produce images containing morphological information. In this chapter, we focus on 2D phase imaging with a holographic setup. In such a setting, the complex-valued measurements contain both intensity and phase information. The phase is related to the distribution of the refractive index of the underlying specimen. In practice, the collected phase is wrapped (i.e., it is the original phase modulo 2π), and quantitative information on the sample is obtained only once the measurements are unwrapped. Phase unwrapping relies on the solution of an inverse problem, for which numerous methods exist. However, it is challenging to unwrap the phase of particularly complex or thick specimens such as organoids, and under such extreme conditions classical methods often exhibit unwrapping errors. In this chapter, we first formulate the problem of phase unwrapping and review existing methods to solve it. Then, we present an application of a regularizing neural network to phase unwrapping, which allows us to outline the advantages of a training-free approach (i.e., a deep image prior) over classical methods and supervised learning.
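To fix ideas about the forward model, here is a toy 1D NumPy example of the wrapping operator together with a classical line-integration unwrapping of the kind that learning-based approaches aim to improve on; the chapter's method instead parameterizes the unwrapped phase with a CNN. The signal and sampling here are arbitrary.

    import numpy as np

    # Wrapping operator: the measurement delivers psi = wrap(phi), the phase modulo 2*pi.
    def wrap(phi):
        return (phi + np.pi) % (2 * np.pi) - np.pi

    # A smooth "true" phase exceeding 2*pi gets wrapped...
    x = np.linspace(0, 1, 200)
    phi_true = 12.0 * x ** 2                 # stand-in for the specimen-induced phase
    psi = wrap(phi_true)

    # ...and classical unwrapping integrates the re-wrapped phase differences.
    phi_rec = np.concatenate([[psi[0]], psi[0] + np.cumsum(wrap(np.diff(psi)))])
    print(np.allclose(phi_rec, phi_true))    # True here, because the phase varies slowly enough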
In this chapter, we review biomedical applications of, and breakthroughs enabled by, algorithm unrolling, an important technique that bridges traditional iterative algorithms and modern deep-learning techniques. To provide context, we start by tracing the origin of algorithm unrolling and providing a comprehensive tutorial on how to unroll iterative algorithms into deep networks. We then cover algorithm unrolling in a wide variety of biomedical imaging modalities and delve into several representative recent works in detail. Indeed, there is a rich history of iterative algorithms for biomedical image synthesis, which makes the field ripe for unrolling techniques. In addition, we put algorithm unrolling into a broader perspective, in order to understand why it is particularly effective, and discuss recent trends. Finally, we conclude the chapter by discussing open challenges and suggesting future research directions.
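A standard textbook example of unrolling (not one of the biomedical networks reviewed in the chapter) is LISTA-style unrolling of the iterative shrinkage-thresholding algorithm (ISTA) for sparse recovery: a fixed number of iterations become layers with learnable matrices and thresholds. The sketch below uses arbitrary problem sizes.

    import torch
    import torch.nn as nn

    class UnrolledISTA(nn.Module):
        """K iterations of ISTA for y = A x + noise, unrolled into K learnable layers."""
        def __init__(self, m, n, K=10):
            super().__init__()
            self.K = K
            self.We = nn.Linear(m, n, bias=False)            # plays the role of (1/L) A^T
            self.S = nn.Linear(n, n, bias=False)             # plays the role of I - (1/L) A^T A
            self.theta = nn.Parameter(0.1 * torch.ones(K))   # per-layer soft thresholds

        def forward(self, y):
            x = torch.zeros(y.shape[0], self.S.in_features, device=y.device)
            for k in range(self.K):
                z = self.S(x) + self.We(y)
                x = torch.sign(z) * torch.clamp(z.abs() - self.theta[k], min=0.0)   # soft threshold
            return x

    net = UnrolledISTA(m=30, n=100)
    x_hat = net(torch.randn(4, 30))      # batch of four measurement vectors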
GANs have proven a natural fit for MR image synthesis, from unconditional to conditional synthesis, thanks to their ability to learn probability distributions. In unconditional synthesis, the objective is to stochastically generate MR images of a target contrast. Conditional synthesis refers to the case where the model learns a nonlinear mapping between different MR tissue contrasts without altering the physiological information. Furthermore, by merging the collaborative information of multiple contrast images, missing-data imputation across many different domains is also effectively addressed with GANs. Although promising results have been reported, development in this area is still at an early stage. Prior work suggests interesting research directions, including the application of more advanced methods and rigorous validation in clinical settings. Ultimately, MR image synthesis techniques should reduce the burden of costly MR scans, benefiting both patients and hospitals.
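To make the conditional setting concrete, the sketch below writes down a pix2pix-style objective of our own choosing (the chapter covers a range of models): a generator maps a source contrast to a target contrast and is trained with an adversarial term plus a pixel-wise fidelity term. Networks, sizes, and loss weights are deliberately tiny placeholders.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    G = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(32, 1, 3, padding=1))
    D = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.LeakyReLU(0.2),
                      nn.Flatten(), nn.Linear(32 * 64 * 64, 1))

    src, tgt = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)   # e.g. T1 in, T2 out
    fake = G(src)

    # The discriminator judges (source, candidate-target) pairs, i.e. the mapping itself.
    d_real = D(torch.cat([src, tgt], dim=1))
    d_fake = D(torch.cat([src, fake.detach()], dim=1))
    d_loss = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))

    # The generator is pushed to fool the discriminator while staying close to the target.
    g_adv = F.binary_cross_entropy_with_logits(D(torch.cat([src, fake], dim=1)), torch.ones(2, 1))
    g_loss = g_adv + 100.0 * F.l1_loss(fake, tgt)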
The development of deep-learning reconstruction methods for accelerated MR acquisitions has been an active area of research for the last several years. It has been repeatedly demonstrated that deep-learning methods can outperform classical reconstruction approaches, both in terms of quantitative image metrics, such as the MSE with respect to a ground truth, and in qualitative reader studies in which radiologists subjectively assess image quality. We present the basics of, and the well-known approaches to, MR image reconstruction via deep learning.
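As a reminder of what the quantitative comparison amounts to in its simplest form, the snippet below computes MSE and PSNR of a reconstruction against a fully sampled reference; the images here are synthetic placeholders.

    import numpy as np

    def mse(x, ref):
        return np.mean((x - ref) ** 2)

    def psnr(x, ref):
        return 10 * np.log10(ref.max() ** 2 / mse(x, ref))

    ground_truth = np.random.rand(256, 256)                      # fully sampled reference
    recon = ground_truth + 0.01 * np.random.randn(256, 256)      # stand-in for a DL reconstruction
    print(f"MSE = {mse(recon, ground_truth):.2e}, PSNR = {psnr(recon, ground_truth):.1f} dB")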
In this chapter, we review the most commonly targeted tasks in the computed tomography (CT) literature, including low-dose CT, sparse-view CT, limited-angle CT, and interior CT. We present deep-learning-based methods that operate either as image post-processing techniques or as raw-to-image mapping techniques.
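At the level of their interfaces, the two families differ mainly in what the network consumes; the sketch below makes this explicit with placeholder architectures and deliberately small sizes of our own choosing.

    import torch
    import torch.nn as nn

    # 1) Image post-processing: image in, image out (e.g. clean up a low-dose FBP reconstruction).
    post_proc = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(64, 1, 3, padding=1))
    cleaned = post_proc(torch.randn(1, 1, 512, 512))

    # 2) Raw-to-image mapping: sinogram (views x detectors) in, image out, so the network
    #    must also learn (or embed) the inversion geometry.
    n_views, n_det = 60, 64
    raw_to_image = nn.Sequential(nn.Flatten(), nn.Linear(n_views * n_det, 64 * 64))
    image = raw_to_image(torch.randn(1, 1, n_views, n_det)).view(1, 1, 64, 64)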