In deep learning, interval neural networks are used to quantify the uncertainty of a pre-trained neural network. Suppose we are given a computational problem
$P$ and a pre-trained neural network
$\Phi _P$ that aims to solve
$P$. An interval neural network is then a pair of neural networks
$(\underline {\phi }, \overline {\phi })$, with the property that
$\underline {\phi }(y) \leq \Phi _P(y) \leq \overline {\phi }(y)$ for all inputs
$y$, where the inequalities hold componentwise.
The pair $(\underline {\phi }, \overline {\phi })$ is trained specifically to quantify the uncertainty of
$\Phi _P$, in the sense that the size of the interval
$[\underline {\phi }(y),\overline {\phi }(y)]$ quantifies the uncertainty of the prediction
$\Phi _P(y)$. In this paper, we investigate the phenomenon that algorithms cannot compute interval neural networks in the setting of inverse problems. We show that, in the typical setting of a linear inverse problem, constructing an optimal pair of interval neural networks is non-computable, even under the assumption that the pre-trained neural network
$\Phi _P$ is an optimal solution. In other words, there exist classes of training sets
$\Omega$, such that no algorithm, not even a randomised one (succeeding with probability
$p \geq 1/2$), can compute an optimal pair of interval neural networks for every training set
${\mathcal{T}} \in \Omega$. This remains true even when we are given a pre-trained neural network
$\Phi _{{\mathcal{T}}}$ that is optimal for
$\mathcal{T}$. This non-computability is intimately linked to instability in deep learning.
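The componentwise interval property defined above can be illustrated with a minimal sketch. This is not the paper's construction: the networks here are toy affine maps, and the constant `margin` is a hypothetical uncertainty width chosen purely for illustration; in practice $\underline{\phi}$ and $\overline{\phi}$ would be trained networks.

```python
import numpy as np

# Minimal sketch (illustrative assumptions, not the paper's construction):
# a toy linear "pre-trained network" Phi, bracketed by a lower and an
# upper interval network so that the componentwise property
#     phi_lower(y) <= Phi(y) <= phi_upper(y)
# holds for all inputs y.

rng = np.random.default_rng(0)

W = rng.standard_normal((3, 5))           # toy stand-in for Phi_P: a linear map
Phi = lambda y: W @ y

margin = 0.5                              # hypothetical uncertainty margin
phi_lower = lambda y: Phi(y) - margin     # lower interval network
phi_upper = lambda y: Phi(y) + margin     # upper interval network

y = rng.standard_normal(5)
lo, mid, hi = phi_lower(y), Phi(y), phi_upper(y)

# The interval property holds componentwise; the width hi - lo is what
# quantifies the uncertainty of the prediction Phi(y).
assert np.all(lo <= mid) and np.all(mid <= hi)
print(hi - lo)  # componentwise interval widths, here constant 2 * margin
```

In this toy case the interval width is the same for every component; the non-computability result concerns the harder task of producing an *optimal* such pair from a training set.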