This article examines how the obligation under international humanitarian law (IHL) to take precautionary measures to verify targets applies to the use of artificial intelligence decision support systems (AI-DSS). Taking the reported deployment by Israel of systems such as 'Lavender' and 'Where's Daddy?' in the Gaza War as an illustrative example, it breaks down the use of AI-DSS into three stages – legal qualification, classification, and identification/location – and evaluates how precautions to verify can reduce the risk of false positives at each stage. It argues that such precautions must be applied at all three stages, and discusses the factors that affect their feasibility. The article concludes that while human oversight remains essential, precautions specific to AI-DSS that operate outside the realm of the human operator are possible, and at times necessary, to ensure compliance with IHL.