This article explores the potential of large language models (LLMs), particularly through the use of contextualized word embeddings, to trace the evolution of scientific concepts. It thereby aims to extend the reach of LLMs, which are currently transforming much of humanities research, to the specialized field of the history and philosophy of science. Using the concept of the virtual particle – a fundamental idea for understanding elementary particle interactions – as a case study, we domain-adapted a pretrained Bidirectional Encoder Representations from Transformers (BERT) model on nearly a century of Physical Review publications. Employing semantic change detection techniques, we examined shifts in the meaning and usage of the term “virtual.” Our analysis reveals that the dominant meaning of “virtual” stabilized after the 1950s, aligning with the formalization of the virtual particle concept, while the polysemy of “virtual” continued to grow. Augmenting these findings with dependency parsing and qualitative analysis, we identify pivotal historical transitions in the term’s usage. In a broader methodological discussion, we address challenges such as the complex relationship between words and concepts, the influence of historical and linguistic biases in datasets, and the exclusion of mathematical formulas from text-based approaches.
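
To make the underlying approach concrete, the following is a minimal sketch, not the authors' pipeline, of semantic change detection with contextualized embeddings: BERT vectors for occurrences of “virtual” are collected from two time slices and compared via the cosine distance of their centroids. The model name and example sentences are placeholders standing in for a domain-adapted checkpoint and the Physical Review corpus.

```python
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "bert-base-uncased"  # placeholder; the study uses a domain-adapted checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed_target(sentences, target="virtual"):
    """Collect one contextual vector of `target` per occurrence in `sentences`."""
    vectors = []
    for sentence in sentences:
        encoded = tokenizer(sentence, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = model(**encoded).last_hidden_state[0]  # (num_tokens, hidden_dim)
        tokens = tokenizer.convert_ids_to_tokens(encoded["input_ids"][0].tolist())
        vectors.extend(vec for tok, vec in zip(tokens, hidden) if tok == target)
    return torch.stack(vectors) if vectors else None

# Hypothetical example sentences standing in for two time slices of the corpus.
early = embed_target(["The virtual image is formed behind the mirror."])
late = embed_target(["The force is mediated by the exchange of a virtual photon."])

if early is not None and late is not None:
    # Cosine distance between period centroids as a simple semantic-change score.
    distance = 1 - torch.cosine_similarity(early.mean(dim=0), late.mean(dim=0), dim=0)
    print(f"Semantic change score for 'virtual': {distance.item():.3f}")
```

Growth in polysemy, by contrast, would be reflected less in the shift between period centroids than in the dispersion of occurrence vectors within a single period, which a sketch like this could estimate from their pairwise cosine distances.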