Random-effects meta-analysis is a widely applied methodology for synthesizing the findings of studies addressing a specific scientific question. Besides estimating the mean effect, an important aim of a meta-analysis is to summarize the heterogeneity, that is, the variation in the underlying effects caused by differences in study circumstances. The prediction interval is frequently used for this purpose: a 95% prediction interval contains the true effect of a similar new study in 95% of cases when it is constructed, or in other words, it covers 95% of the true-effects distribution on average over repeated sampling. In this article, after providing a clear mathematical background, we present an extensive simulation investigating the performance of all frequentist prediction interval methods published to date. The work focuses on the distribution of the coverage probabilities and on how this distribution changes with the amount of heterogeneity and the number of studies involved. Although the only requirement a prediction interval has to fulfill is to maintain the nominal coverage probability on average, we demonstrate why the distribution of coverages should not be disregarded. We show that for meta-analyses with a small number of studies, this distribution has an undesirable, asymmetric shape. We argue that assessing only the mean coverage can easily lead to misunderstanding and misinterpretation. The length of the intervals and the robustness of the methods to non-normality of the true effects are also investigated.
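As an illustrative sketch only (not the article's simulation code), the notion of average coverage can be made concrete with a small Monte Carlo experiment. The sketch below uses the classical Higgins–Thompson prediction interval with the DerSimonian–Laird heterogeneity estimator; the parameter values (`mu`, `tau2`, the common within-study variance `v`) are hypothetical choices for demonstration.

```python
import numpy as np
from scipy import stats

def prediction_interval_coverage(k, mu=0.3, tau2=0.05, v=0.02,
                                 n_meta=2000, seed=1):
    """Monte Carlo sketch: fraction of Higgins-Thompson 95% prediction
    intervals that contain the true effect of one similar new study.
    All parameter values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_meta):
        theta = rng.normal(mu, np.sqrt(tau2), k)   # true study effects
        y = rng.normal(theta, np.sqrt(v))          # observed effects
        w = np.full(k, 1.0 / v)                    # inverse-variance weights
        y_fe = np.sum(w * y) / np.sum(w)
        Q = np.sum(w * (y - y_fe) ** 2)            # Cochran's Q statistic
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2_dl = max(0.0, (Q - (k - 1)) / c)      # DerSimonian-Laird estimate
        w_re = 1.0 / (v + tau2_dl)                 # random-effects weights
        mu_hat = np.sum(w_re * y) / np.sum(w_re * np.ones(k))
        se_mu = np.sqrt(1.0 / (k * w_re))
        # Higgins-Thompson interval: t quantile with k-2 degrees of freedom
        half = stats.t.ppf(0.975, k - 2) * np.sqrt(tau2_dl + se_mu ** 2)
        theta_new = rng.normal(mu, np.sqrt(tau2))  # true effect of a new study
        hits += (mu_hat - half <= theta_new <= mu_hat + half)
    return hits / n_meta
```

Running this for small `k` and comparing against the nominal 95% gives an average coverage only; the article's point is that the per-meta-analysis coverage probabilities behind such an average can be distributed very unevenly.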