
References

Published online by Cambridge University Press: 16 May 2025

Paul Fearnhead, Lancaster University
Christopher Nemeth, Lancaster University
Chris J. Oates, University of Newcastle upon Tyne
Chris Sherlock, Lancaster University

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2025


Ahn, Sungjin, Korattikara, Anoop, Liu, Nathan, Rajan, Suju, and Welling, Max. 2015. Large-scale distributed Bayesian matrix factorization using stochastic gradient MCMC. Pages 9–18 of Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM.
Aicher, Christopher, Ma, Yi-An, Foti, Nicholas J., and Fox, Emily B. 2019. Stochastic gradient MCMC for state space models. SIAM Journal on Mathematics of Data Science, 1(3), 555–587.
Aicher, Christopher, Putcha, Srshti, Nemeth, Christopher, Fearnhead, Paul, and Fox, Emily. 2023. Stochastic gradient MCMC for nonlinear state space models. Bayesian Analysis, 1(1), 1–23.
Anastasiou, Andreas, Barp, Alessandro, Briol, François-Xavier, Ebner, Bruno, Gaunt, Robert E., Ghaderinezhad, Fatemeh, Gorham, Jackson, Gretton, Arthur, Ley, Christophe, Liu, Qiang, Mackey, Lester, Oates, Chris J., Reinert, Gesine, and Swan, Yvik. 2023. Stein’s method meets computational statistics: A review of some recent developments. Statistical Science, 38(1), 120–139.
Andrieu, Christophe, Durmus, Alain, Nüsken, Nikolas, and Roussel, Julien. 2021. Hypocoercivity of piecewise deterministic Markov process-Monte Carlo. The Annals of Applied Probability, 31(5), 2478–2517.
Baker, Jack, Fearnhead, Paul, Fox, Emily, and Nemeth, Christopher. 2018. Large-scale stochastic sampling from the probability simplex. Pages 6721–6731 of Advances in Neural Information Processing Systems (32nd Conference). Curran Associates.
Baker, Jack, Fearnhead, Paul, Fox, Emily B., and Nemeth, Christopher. 2019. Control variates for stochastic gradient MCMC. Statistics and Computing, 29(3), 599–615.
Bardenet, Rémi, Doucet, Arnaud, and Holmes, Chris. 2014. Towards scaling up Markov chain Monte Carlo: An adaptive subsampling approach. Pages 405–413 of Proceedings of the International Conference on Machine Learning (ICML). Proceedings of Machine Learning Research.
Barp, Alessandro, Simon-Gabriel, Carl-Johann, Girolami, Mark, and Mackey, Lester. 2022. Targeted separation and convergence with kernel discrepancies. In: NeurIPS 2022 Workshop on Score-Based Methods. New Orleans.
Beck, Amir, and Teboulle, Marc. 2003. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3), 167–175.
Bernardo, José M., and Smith, Adrian F. M. 2009. Bayesian Theory. John Wiley & Sons.
Besag, Julian. 1994. Comments on “Representations of knowledge in complex systems” by U. Grenander and M. I. Miller. Journal of the Royal Statistical Society Series B, 56, 591–592.
Beskos, Alexandros, Roberts, Gareth, and Stuart, Andrew. 2009. Optimal scalings for local Metropolis–Hastings chains on nonproduct targets in high dimensions. Annals of Applied Probability, 19(3), 863–898.
Beskos, Alexandros, Pillai, Natesh, Roberts, Gareth, Sanz-Serna, Jesus-Maria, and Stuart, Andrew. 2013. Optimal tuning of the hybrid Monte Carlo algorithm. Bernoulli, 19(5A), 1501–1534.
Bierkens, Joris. 2016. Non-reversible Metropolis–Hastings. Statistics and Computing, 26(6), 1213–1228.
Bierkens, Joris, and Duncan, Andrew. 2017. Limit theorems for the zig-zag process. Advances in Applied Probability, 49(3), 791–825.
Bierkens, Joris, and Roberts, Gareth. 2017. A piecewise deterministic scaling limit of lifted Metropolis–Hastings in the Curie–Weiss model. The Annals of Applied Probability, 27, 846–882.
Bierkens, Joris, and Verduyn Lunel, Sjoerd M. 2022. Spectral analysis of the zigzag process. Annales de l’Institut Henri Poincaré (B) Probabilités et Statistiques, 58, 827–860.
Bierkens, Joris, Bouchard-Côté, Alexandre, Doucet, Arnaud, Duncan, Andrew B., Fearnhead, Paul, Lienart, Thibaut, Roberts, Gareth, and Vollmer, Sebastian J. 2018. Piecewise deterministic Markov processes for scalable Monte Carlo on restricted domains. Statistics & Probability Letters, 136, 148–154.
Bierkens, Joris, Roberts, Gareth O., and Zitt, Pierre-André. 2019a. Ergodicity of the zigzag process. The Annals of Applied Probability, 29(4), 2266–2301.
Bierkens, Joris, Fearnhead, Paul, and Roberts, Gareth O. 2019b. The Zig-Zag process and super-efficient sampling for Bayesian analysis of big data. Annals of Statistics, 47(3), 1288–1320.
Bierkens, Joris, Grazzi, Sebastiano, Kamatani, Kengo, and Roberts, Gareth. 2020. The boomerang sampler. Pages 908–918 of International Conference on Machine Learning. PMLR.
Bierkens, Joris, Kamatani, Kengo, and Roberts, Gareth O. 2022. High-dimensional scaling limits of piecewise deterministic sampling algorithms. The Annals of Applied Probability, 32(5), 3361–3407.
Bierkens, Joris, Kamatani, Kengo, and Roberts, Gareth O. 2023a. Scaling of piecewise deterministic Monte Carlo for anisotropic targets. arXiv preprint arXiv:2305.00694.
Bierkens, Joris, Grazzi, Sebastiano, van der Meulen, Frank, and Schauer, Moritz. 2023b. Sticky PDMP samplers for sparse and local inference problems. Statistics and Computing, 33(1), 8.
Blei, David M., Ng, Andrew Y., and Jordan, Michael I. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan), 993–1022.
Bou-Rabee, Nawaf, and Sanz-Serna, Jesús María. 2017. Randomized Hamiltonian Monte Carlo. The Annals of Applied Probability, 27(4), 2159–2194.
Bouchard-Côté, Alexandre, Vollmer, Sebastian J., and Doucet, Arnaud. 2018. The bouncy particle sampler: A nonreversible rejection-free Markov chain Monte Carlo method. Journal of the American Statistical Association, 113(522), 855–867.
Brooks, Stephen P., and Gelman, Andrew. 1998. General methods for monitoring convergence of iterative simulations. Journal of Computational and Graphical Statistics, 7(4), 434–455.
Brooks, Steve, Gelman, Andrew, Jones, Galin, and Meng, Xiao-Li. 2011. Handbook of Markov Chain Monte Carlo. CRC Press.
Brosse, Nicolas, Durmus, Alain, Moulines, Éric, and Pereyra, Marcelo. 2017. Sampling from a log-concave distribution with compact support with proximal Langevin Monte Carlo. Pages 319–342 of Conference on Learning Theory. PMLR.
Brosse, Nicolas, Durmus, Alain, and Moulines, Éric. 2018. The promises and pitfalls of stochastic gradient Langevin dynamics. Pages 8278–8288 of Advances in Neural Information Processing Systems (32nd Conference). Curran Associates.
Bubeck, Sébastien, Eldan, Ronen, and Lehec, Joseph. 2018. Sampling from a log-concave distribution with projected Langevin Monte Carlo. Discrete & Computational Geometry, 59(4), 757–783.
Cabezas, Alberto, Corenflos, Adrien, Lao, Junpeng, and Louf, Rémi. 2024. BlackJAX: Composable Bayesian inference in JAX. arXiv preprint arXiv:2402.10797.
Caflisch, Russel E. 1998. Monte Carlo and quasi-Monte Carlo methods. Acta Numerica, 7, 1–49.
Carmeli, Claudio, De Vito, Ernesto, and Toigo, Alessandro. 2006. Vector valued reproducing kernel Hilbert spaces of integrable functions and Mercer theorem. Analysis and Applications, 4(4), 377–408.
Chatterji, Niladri, Flammarion, Nicolas, Ma, Yian, Bartlett, Peter, and Jordan, Michael. 2018. On the theory of variance reduction for stochastic gradient Monte Carlo. Pages 764–773 of International Conference on Machine Learning. PMLR.
Chen, Fang, Lovász, László, and Pak, Igor. 1999. Lifting Markov chains to speed up mixing. Pages 275–281 of Proceedings of the 31st Annual ACM Symposium on Theory of Computing.
Chen, Tianqi, Fox, Emily, and Guestrin, Carlos. 2014. Stochastic gradient Hamiltonian Monte Carlo. Pages 1683–1691 of International Conference on Machine Learning.
Chen, Wilson Ye, Barp, Alessandro, Briol, François-Xavier, Gorham, Jackson, Girolami, Mark, Mackey, Lester, and Oates, Chris. 2019. Stein point Markov chain Monte Carlo. Pages 1011–1021 of International Conference on Machine Learning. PMLR.
Chevallier, Augustin, Power, Sam, Wang, Andi Q., and Fearnhead, Paul. 2021. PDMP Monte Carlo methods for piecewise-smooth densities. arXiv:2111.05859.
Chevallier, Augustin, Fearnhead, Paul, and Sutton, Matthew. 2023. Reversible jump PDMP samplers for variable selection. Journal of the American Statistical Association, 118(544), 2915–2927.
Christensen, Ole F., Roberts, Gareth O., and Rosenthal, Jeffrey S. 2005. Scaling limits for the transient phase of local Metropolis–Hastings algorithms. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2), 253–268.
Chwialkowski, Kacper, Strathmann, Heiko, and Gretton, Arthur. 2016. A kernel test of goodness of fit. Pages 2606–2615 of International Conference on Machine Learning. PMLR.
Conway, John B. 2010. A Course in Functional Analysis. Second ed. Springer.
Corbella, Alice, Spencer, Simon E. F., and Roberts, Gareth O. 2022. Automatic Zig-Zag sampling in practice. Statistics and Computing, 32(6), 107.
Coullon, Jeremie, and Nemeth, Christopher. 2022. SGMCMCJax: A lightweight JAX library for stochastic gradient Markov chain Monte Carlo algorithms. Journal of Open Source Software, 7(72), 4113.
Coullon, Jeremie, South, Leah, and Nemeth, Christopher. 2023. Efficient and generalizable tuning strategies for stochastic gradient MCMC. Statistics and Computing, 33(3), 66.
Cowles, Mary Kathryn, and Carlin, Bradley P. 1996. Markov chain Monte Carlo convergence diagnostics: A comparative review. Journal of the American Statistical Association, 91(434), 883–904.
Cox, John, Ingersoll, Jonathan E., Jr., and Ross, Stephen A. 1985. A theory of the term structure of interest rates. Econometrica, 53(2), 385–408.
Creutz, Michael. 1988. Global Monte Carlo algorithms for many-fermion systems. Physical Review D, 38(Aug), 1228–1238.
Dalalyan, Arnak S., and Karagulyan, Avetik. 2019. User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient. Stochastic Processes and their Applications, 129(12), 5278–5311.
Davis, Mark H. A. 1984. Piecewise-deterministic Markov processes: A general class of non-diffusion stochastic models. Journal of the Royal Statistical Society: Series B (Methodological), 46(3), 353–376.
Deligiannidis, George, Bouchard-Côté, Alexandre, and Doucet, Arnaud. 2019. Exponential ergodicity of the bouncy particle sampler. The Annals of Statistics, 47, 1268–1287.
Deligiannidis, George, Paulin, Daniel, Bouchard-Côté, Alexandre, and Doucet, Arnaud. 2021. Randomized Hamiltonian Monte Carlo as scaling limit of the bouncy particle sampler and dimension-free convergence rates. The Annals of Applied Probability, 31(6), 2612–2662.
Diaconis, Persi, Holmes, Susan, and Neal, Radford M. 2000. Analysis of a nonreversible Markov chain sampler. Annals of Applied Probability, 10(3), 726–752.
Doucet, Arnaud, and Johansen, Adam M. 2009. A tutorial on particle filtering and smoothing: Fifteen years later. Handbook of Nonlinear Filtering, 12, 656–704.
Duane, Simon, Kennedy, A. D., Pendleton, Brian J., and Roweth, Duncan. 1987. Hybrid Monte Carlo. Physics Letters B, 195(2), 216–222.
Dubey, Kumar Avinava, Reddi, Sashank J., Williamson, Sinead A., Poczos, Barnabas, Smola, Alexander J., and Xing, Eric P. 2016. Variance reduction in stochastic gradient Langevin dynamics. Pages 1154–1162 of Advances in Neural Information Processing Systems. Curran Associates.
Eberle, Andreas. 2016. Reflection couplings and contraction rates for diffusions. Probability Theory and Related Fields, 166(3), 851–886.
Fearnhead, Paul, Bierkens, Joris, Pollock, Murray, and Roberts, Gareth O. 2018. Piecewise deterministic Markov processes for continuous-time Monte Carlo. Statistical Science, 33(3), 386–412.
Fisher, Matthew, and Oates, Chris J. 2024. Gradient-free kernel Stein discrepancy. Advances in Neural Information Processing Systems, 36, 23855–23885.
Gamerman, Dani, and Lopes, Hedibert F. 2006. Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference. CRC Press.
Gelman, Andrew, and Rubin, Donald B. 1992. Inference from iterative simulation using multiple sequences. Statistical Science, 7(4), 457–472.
Gelman, Andrew, Carlin, John B., Stern, Hal S., Dunson, David B., Vehtari, Aki, and Rubin, Donald B. 2014. Bayesian Data Analysis. Vol. 2. CRC Press.
Geyer, Charles J. 1992. Practical Markov chain Monte Carlo. Statistical Science, 7(4), 473–483.
Girolami, Mark, and Calderhead, Ben. 2011. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(2), 123–214.
Gong, Wenbo, Li, Yingzhen, and Hernández-Lobato, José Miguel. 2020. Sliced kernelized Stein discrepancy. In: International Conference on Learning Representations.
Gorham, Jackson, and Mackey, Lester. 2015. Measuring sample quality with Stein’s method. Pages 226–234 of Advances in Neural Information Processing Systems. Curran Associates.
Gorham, Jackson, and Mackey, Lester. 2017. Measuring sample quality with kernels. Pages 1292–1301 of Proceedings of the 34th International Conference on Machine Learning. PMLR.
Gorham, Jackson, Duncan, Andrew B., Vollmer, Sebastian J., and Mackey, Lester. 2019. Measuring sample quality with diffusions. The Annals of Applied Probability, 29(5), 2884–2928.
Gorham, Jackson, Raj, Anant, and Mackey, Lester. 2020. Stochastic Stein discrepancies. Advances in Neural Information Processing Systems, 33, 17931–17942.
Grathwohl, Will, Wang, Kuan-Chieh, Jacobsen, Jörn-Henrik, Duvenaud, David, and Zemel, Richard. 2020. Learning the Stein discrepancy for training and evaluating energy-based models without sampling. Pages 3732–3747 of International Conference on Machine Learning. PMLR.
Green, Peter J., and Mira, Antonietta. 2001. Delayed rejection in reversible jump Metropolis–Hastings. Biometrika, 88(4), 1035–1053.
Grenander, Ulf, and Miller, Michael I. 1994. Representations of knowledge in complex systems. Journal of the Royal Statistical Society: Series B (Methodological), 56(4), 549–581.
Gustafson, Paul. 1998. A guided walk Metropolis algorithm. Statistics and Computing, 8(4), 357–364.
Hastings, W. Keith. 1970. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57, 97–109.
Heidelberger, Philip, and Welch, Peter D. 1981. A spectral method for confidence interval generation and run length control in simulations. Communications of the ACM, 24(4), 233–245.
Hodgkinson, Liam, Salomone, Robert, and Roosta, Fred. 2020. The reproducing Stein kernel approach for post-hoc corrected sampling. arXiv:2001.09266.
Hoffman, Matthew, Radul, Alexey, and Sountsov, Pavel. 2021. An adaptive-MCMC scheme for setting trajectory lengths in Hamiltonian Monte Carlo. Pages 3907–3915 of Banerjee, Arindam, and Fukumizu, Kenji (eds), Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, vol. 130. PMLR.
Hoffman, Matthew D., and Gelman, Andrew. 2014. The No-U-Turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15(1), 1593–1623.
Horowitz, Alan M. 1991. A generalized guided Monte Carlo algorithm. Physics Letters B, 268(2), 247–252.
Hsieh, Ya-Ping, Kavis, Ali, Rolland, Paul, and Cevher, Volkan. 2018. Mirrored Langevin dynamics. Pages 2883–2892 of Advances in Neural Information Processing Systems. Curran Associates.
Huggins, Jonathan, and Mackey, Lester. 2018. Random feature Stein discrepancies. Advances in Neural Information Processing Systems, 31, 1903–1913.
Huggins, Jonathan, and Zou, James. 2017. Quantifying the accuracy of approximate diffusions and Markov chains. Pages 382–391 of Artificial Intelligence and Statistics. PMLR.
Johndrow, James E., Pillai, Natesh S., and Smith, Aaron. 2020. No free lunch for approximate MCMC. arXiv:2010.12514.
Jones, Galin L., and Hobert, James P. 2001. Honest exploration of intractable probability distributions via Markov chain Monte Carlo. Statistical Science, 16(4), 312–334.
Kamatani, Kengo. 2020. Random walk Metropolis algorithm in high dimension with non-Gaussian target distributions. Stochastic Processes and their Applications, 130(1), 297–327.
Kanagawa, Heishiro, Barp, Alessandro, Simon-Gabriel, Carl-Johann, Gretton, Arthur, and Mackey, Lester. 2024. Controlling moments with kernel Stein discrepancies. arXiv:2211.05408v4.
Karvonen, Toni, Oates, Chris J., and Sarkka, Simo. 2018. A Bayes–Sard cubature method. Advances in Neural Information Processing Systems, 31, 5886–5897.
LeCam, Lucien. 1986. Asymptotic Methods in Statistical Decision Theory. Springer Series in Statistics. Springer.
Lewis, P. A. W., and Shedler, Gerald S. 1979. Simulation of nonhomogeneous Poisson processes by thinning. Naval Research Logistics Quarterly, 26(3), 403–413.
Li, Wenzhe, Ahn, Sungjin, and Welling, Max. 2016. Scalable MCMC for mixed membership stochastic blockmodels. Pages 723–731 of Artificial Intelligence and Statistics.
Lindvall, Torgny, and Rogers, L. Cris G. 1986. Coupling of multidimensional diffusions by reflection. The Annals of Probability, 14(3), 860–872.
Liu, Qiang, and Lee, Jason. 2017. Black-box importance sampling. Pages 952–961 of Artificial Intelligence and Statistics. PMLR.
Liu, Qiang, Lee, Jason, and Jordan, Michael. 2016. A kernelized Stein discrepancy for goodness-of-fit tests. Pages 276–284 of International Conference on Machine Learning. PMLR.
Livingstone, Samuel, and Zanella, Giacomo. 2022. The Barker proposal: Combining robustness and efficiency in gradient-based MCMC. Journal of the Royal Statistical Society Series B: Statistical Methodology, 84(2), 496–523.
Ludkin, M., and Sherlock, C. 2022. Hug and hop: A discrete-time, nonreversible Markov chain Monte Carlo algorithm. Biometrika, 110(2), 301–318.
L’Ecuyer, Pierre, and Lemieux, Christiane. 2002. Recent advances in randomized quasi-Monte Carlo methods. In: Dror, Moshe, L’Ecuyer, Pierre, and Szidarovszky, Ferenc (eds), Modeling Uncertainty: An Examination of Stochastic Theory, Methods, and Applications. Springer.
Ma, Yi-An, Chen, Tianqi, and Fox, Emily. 2015. A complete recipe for stochastic gradient MCMC. Pages 2917–2925 of Advances in Neural Information Processing Systems. Curran Associates.
Ma, Yi-An, Foti, Nicholas J., and Fox, Emily B. 2017. Stochastic gradient MCMC methods for hidden Markov models. Pages 2265–2274 of International Conference on Machine Learning. PMLR.
Majka, Mateusz B., Mijatović, Aleksandar, and Szpruch, Łukasz. 2020. Non-asymptotic bounds for sampling algorithms without log-concavity. The Annals of Applied Probability, 30(4), 1534–1581.
Metropolis, Nicholas, Rosenbluth, Arianna W., Rosenbluth, Marshall N., Teller, Augusta H., and Teller, Edward. 1953. Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6), 1087–1092.
Meyn, Sean P., and Tweedie, Richard L. 2012. Markov Chains and Stochastic Stability. Springer Science & Business Media.
Meyn, Sean P., and Tweedie, Richard L. 1994. Computable bounds for geometric convergence rates of Markov chains. The Annals of Applied Probability, 4(4), 981–1011.
Michel, Manon, Kapfer, Sebastian C., and Krauth, Werner. 2014. Generalized event-chain Monte Carlo: Constructing rejection-free global-balance algorithms from infinitesimal steps. The Journal of Chemical Physics, 140(5), 054116-1–054116-8.
Michel, Manon, Durmus, Alain, and Sénécal, Stéphane. 2020. Forward event-chain Monte Carlo: Fast sampling by randomness control in irreversible Markov chains. Journal of Computational and Graphical Statistics, 29(4), 689–702.
Nagapetyan, Tigran, Duncan, Andrew B., Hasenclever, Leonard, Vollmer, Sebastian J., Szpruch, Lukasz, and Zygalakis, Konstantinos. 2017. The true cost of stochastic gradient Langevin dynamics. arXiv:1706.02692.
Neal, Radford M. 2003. Slice sampling. The Annals of Statistics, 31(3), 705–767.
Neal, Radford M. 2004. Improving asymptotic variance of MCMC estimators: Nonreversible chains are better. arXiv preprint math/0407281.
Neal, Radford M. 2011. MCMC using Hamiltonian dynamics. In: Brooks, Steve, Gelman, Andrew, Jones, Galin L., and Meng, Xiao-Li (eds), Handbook of Markov Chain Monte Carlo. CRC Press.
Nemeth, Christopher, and Fearnhead, Paul. 2021. Stochastic gradient Markov chain Monte Carlo. Journal of the American Statistical Association, 116(533), 433–450.
Nemeth, Christopher, and Sherlock, Chris. 2018. Merging MCMC subposteriors through Gaussian-process approximations. Bayesian Analysis, 13(2), 507–530.
Nemeth, Christopher, Fearnhead, Paul, and Mihaylova, Lyudmila. 2016. Particle approximations of the score and observed information matrix for parameter estimation in state-space models with linear computational cost. Journal of Computational and Graphical Statistics, 25(4), 1138–1157.
Norris, James R. 1998. Markov Chains. Cambridge University Press.
Oates, Chris J., Girolami, Mark, and Chopin, Nicolas. 2017. Control functionals for Monte Carlo integration. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 79(3), 695–718.
Oksendal, Bernt. 2013. Stochastic Differential Equations: An Introduction with Applications. Springer Science & Business Media.
Pagani, Filippo, Chevallier, Augustin, Power, Sam, House, Thomas, and Cotter, Simon. 2020. NuZZ: Numerical Zig-Zag sampling for general models. arXiv:2003.03636.
Patterson, Sam, and Teh, Yee Whye. 2013. Stochastic gradient Riemannian Langevin dynamics on the probability simplex. Pages 3102–3110 of Advances in Neural Information Processing Systems. Curran Associates.
Peters, Elias A. J. F., and de With, G. 2012. Rejection-free Monte Carlo sampling for general potentials. Physical Review E, 85(2), 026703.
Phillips, David B., and Smith, Adrian F. M. 1996. Bayesian model comparison via jump diffusions. In: Gilks, Wally R., Richardson, Sylvia, and Spiegelhalter, David (eds), Markov Chain Monte Carlo in Practice. Chapman & Hall/CRC.
Pollock, Murray, Fearnhead, Paul, Johansen, Adam M., and Roberts, Gareth O. 2020. Quasi-stationary Monte Carlo and the ScaLE algorithm. Journal of the Royal Statistical Society Series B: Statistical Methodology, 82(5), 1167–1221.
Press, William H., Teukolsky, Saul A., Vetterling, William T., and Flannery, Brian P. 2007. Numerical Recipes in C++: The Art of Scientific Computing. Cambridge University Press.
Putcha, Srshti, Nemeth, Christopher, and Fearnhead, Paul. 2023. Preferential subsampling for stochastic gradient Langevin dynamics. Pages 8837–8856 of International Conference on Artificial Intelligence and Statistics. PMLR.
Raginsky, Maxim, Rakhlin, Alexander, and Telgarsky, Matus. 2017. Non-convex learning via stochastic gradient Langevin dynamics: A nonasymptotic analysis. Pages 1674–1703 of Conference on Learning Theory. PMLR.
Rasmussen, Carl Edward, and Williams, Christopher K. I. 2005. Gaussian Processes for Machine Learning. The MIT Press.
Riabiz, Marina, Chen, Wilson Ye, Cockayne, Jon, Swietach, Pawel, Niederer, Steven A., Mackey, Lester, and Oates, Chris J. 2022. Optimal thinning of MCMC output. Journal of the Royal Statistical Society Series B: Statistical Methodology, 84(4), 1059–1081.
Riou-Durand, Lionel, and Vogrinc, Jure. 2023. Metropolis Adjusted Langevin Trajectories: A robust alternative to Hamiltonian Monte Carlo.
Ripley, Brian D. 2009. Stochastic Simulation. John Wiley & Sons.
Robbins, Herbert, and Monro, Sutton. 1951. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3), 400–407.
Robert, Christian P. 2007. The Bayesian Choice: From Decision-Theoretic Foundations to Computational Implementation. Springer.
Robert, Christian P., and Casella, George. 1999. Monte Carlo Statistical Methods. Springer.
Roberts, Gareth O., and Rosenthal, Jeffrey S. 1998. Optimal scaling of discrete approximations to Langevin diffusions. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 60(1), 255–268.
Roberts, Gareth O., and Rosenthal, Jeffrey S. 2001. Optimal scaling for various Metropolis–Hastings algorithms. Statistical Science, 16(4), 351–367.
Roberts, Gareth O., and Rosenthal, Jeffrey S. 2004. General state space Markov chains and MCMC algorithms. Probability Surveys, 1, 20–71.
Roberts, Gareth O., and Tweedie, Richard L. 1996. Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli, 2(4), 341–363.
Roberts, Gareth O., and Tweedie, Richard L. 1999. Bounds on regeneration times and convergence rates for Markov chains. Stochastic Processes and their Applications, 80(2), 211–229.
Roberts, Gareth O., Gelman, Andrew, and Gilks, Walter R. 1997. Weak convergence and optimal scaling of random walk Metropolis algorithms. The Annals of Applied Probability, 7, 110–120.
Rogers, Leonard C. G., and Williams, David. 2000a. Diffusions, Markov Processes, and Martingales: Volume 1, Foundations. Cambridge University Press.
Rogers, Leonard C. G., and Williams, David. 2000b. Diffusions, Markov Processes, and Martingales: Volume 2, Itô Calculus. Cambridge University Press.
Rosenthal, Jeffrey S. 1995. Minorization conditions and convergence rates for Markov chain Monte Carlo. Journal of the American Statistical Association, 90(430), 558–566.
Rubinstein, R. Y., and Kroese, D. P. 2008. Simulation and the Monte Carlo Method. John Wiley & Sons.
Scott, Steven L., Blocker, Alexander W., Bonassi, Fernando V., Chipman, Hugh A., George, Edward I., and McCulloch, Robert E. 2016. Bayes and big data: The consensus Monte Carlo algorithm. International Journal of Management Science and Engineering Management, 11(2), 78–88.
Sherlock, C., Thiery, A. H., Roberts, G. O., and Rosenthal, J. S. 2015. On the efficiency of pseudo-marginal random walk Metropolis algorithms. Annals of Statistics, 43(1), 238–275.
Sherlock, Chris, and Roberts, Gareth. 2009. Optimal scaling of the random walk Metropolis on elliptically symmetric unimodal targets. Bernoulli, 15(3), 774–798.
Sherlock, Chris, and Thiery, Alexandre H. 2022. A discrete bouncy particle sampler. Biometrika, 109(2), 335–349.
Sherlock, Chris, Urbas, Szymon, and Ludkin, Matthew. 2023. The apogee to apogee path sampler. Journal of Computational and Graphical Statistics, 32(4), 1436–1446.
Shi, Jiaxin, Zhou, Yuhao, Hwang, Jessica, Titsias, Michalis, and Mackey, Lester. 2022. Gradient estimation with discrete Stein operators. Advances in Neural Information Processing Systems, 35, 25829–25841.
Sohl-Dickstein, Jascha, Mudigonda, Mayur, and DeWeese, Michael. 2014. Hamiltonian Monte Carlo without detailed balance. Pages 719–726 of International Conference on Machine Learning. PMLR.
Stein, Charles. 1972. A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. Pages 583–603 of Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Volume 2: Probability Theory. University of California Press.
Stephens, Matthew. 2000. Bayesian analysis of mixture models with an unknown number of components: An alternative to reversible jump methods. Annals of Statistics, 28(1), 40–74.
Sun, Hongwei. 2005. Mercer theorem for RKHS on noncompact sets. Journal of Complexity, 21(3), 337–349.
Sun, Yi, Schmidhuber, Jürgen, and Gomez, Faustino. 2010. Improving the asymptotic performance of Markov chain Monte-Carlo by inserting vortices. Pages 2235–2243 of Lafferty, J., Williams, C. K. I., Shawe-Taylor, J., Zemel, R. S., and Culotta, A. (eds), Advances in Neural Information Processing Systems, vol. 23. Curran Associates.
Sutton, Matthew, and Fearnhead, Paul. 2023. Concave-convex PDMP-based sampling. Journal of Computational and Graphical Statistics, 32(4), 1425–1435.
Suwa, Hidemaro, and Todo, Synge. 2010. Markov chain Monte Carlo method without detailed balance. Physical Review Letters, 105(12), 120603.
Teh, Yee Whye, Thiery, Alexandre H., and Vollmer, Sebastian J. 2016. Consistency and fluctuations for stochastic gradient Langevin dynamics. The Journal of Machine Learning Research, 17(1), 193–225.
Teymur, Onur, Gorham, Jackson, Riabiz, Marina, and Oates, Chris. 2021. Optimal quantisation of probability measures using maximum mean discrepancy. Pages 1027–1035 of International Conference on Artificial Intelligence and Statistics. PMLR.
Turitsyn, Konstantin S., Chertkov, Michael, and Vucelja, Marija. 2011. Irreversible Monte Carlo algorithms for efficient sampling. Physica D: Nonlinear Phenomena, 240(4–5), 410–414.
Vanetti, Paul, Bouchard-Côté, Alexandre, Deligiannidis, George, and Doucet, Arnaud. 2017. Piecewise-deterministic Markov chain Monte Carlo. arXiv:1707.05296.
Vats, Dootika, and Knudson, Christina. 2021. Revisiting the Gelman–Rubin diagnostic. Statistical Science, 36(4), 518–529.
Vehtari, Aki, Gelman, Andrew, Simpson, Daniel, Carpenter, Bob, and Bürkner, Paul-Christian. 2021. Rank-normalization, folding, and localization: An improved R̂ for assessing convergence of MCMC. Bayesian Analysis, 1(1), 1–28.
von Renesse, Max-K., and Sturm, Karl-Theodor. 2005. Transport inequalities, gradient estimates, entropy and Ricci curvature. Communications on Pure and Applied Mathematics, 58(7), 923–940.
Vyner, Callum, Nemeth, Christopher, and Sherlock, Chris. 2023. SwISS: A scalable Markov chain Monte Carlo divide-and-conquer strategy. Stat, 12(1), e523.
Welling, Max, and Teh, Yee W. 2011. Bayesian learning via stochastic gradient Langevin dynamics. Pages 681–688 of Proceedings of the 28th International Conference on Machine Learning (ICML-11).
Wenliang, Li K., and Kanagawa, Heishiro. 2021. Blindness of score-based methods to isolated components and mixing proportions. In: Proceedings of the NeurIPS Workshop “Your Model is Wrong: Robustness and Misspecification in Probabilistic Modeling”. Curran Associates.
Wu, Changye, and Robert, Christian P. 2017. Generalized bouncy particle sampler. arXiv:1706.04781.
Wu, Changye, and Robert, Christian P. 2020. Coordinate sampler: A non-reversible Gibbs-like MCMC sampler. Statistics and Computing, 30(3), 721–730.
Xifara, T., Sherlock, C., Livingstone, S., Byrne, S., and Girolami, M. 2014. Langevin diffusions and the Metropolis-adjusted Langevin algorithm. Statistics & Probability Letters, 91, 14–19.
