Published online by Cambridge University Press: 15 August 2025
A deep reinforcement learning method was developed for training a jellyfish-like swimmer to effectively track a moving target in a two-dimensional flow. The swimmer is a flexible object equipped with a muscle model based on torsional springs. We employed a deep Q-network (DQN) that takes the swimmer’s geometric and dynamic parameters as inputs and outputs actions, namely the forces applied to the swimmer. In particular, an action regulation was introduced to mitigate the interference from complex fluid–structure interactions. The goal of these actions is to navigate the swimmer to a target point in the shortest possible time. During DQN training, data on the swimmer’s motions were obtained from simulations using the immersed boundary method. While tracking a moving target, there is an inherent delay between the application of forces and the corresponding response of the swimmer’s body, owing to hydrodynamic interactions between the shed vortices and the swimmer’s own locomotion. Our tests demonstrate that the swimmer, equipped with the DQN agent and action regulation, is able to dynamically adjust its course based on its instantaneous state. This work extends the application scope of machine learning in controlling flexible objects within fluid environments.
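The abstract's idea of a DQN whose output is filtered by an action regulation step can be illustrated with a minimal sketch. Everything below is a hypothetical construction, not the paper's actual implementation: the action set, the function names, and the specific regulation rule (a minimum hold time before the applied force may change, standing in for the paper's mitigation of fluid–structure interference) are all illustrative assumptions.

```python
import numpy as np

N_ACTIONS = 5     # assumed: discrete force patterns applied to the swimmer
HOLD_STEPS = 3    # assumed: minimum steps before the applied force may change


def regulated_action(q_values, last_action, steps_since_change):
    """Select the greedy action from the Q-values, but keep the previous
    force if it was changed too recently -- a simple stand-in for the
    paper's action regulation, which constrains the agent's raw output."""
    if last_action is not None and steps_since_change < HOLD_STEPS:
        # Regulation overrides the agent: hold the current force, so the
        # slow hydrodynamic response is not disrupted by rapid switching.
        return last_action
    # Otherwise act greedily with respect to the Q-network's estimates.
    return int(np.argmax(q_values))


# Toy illustration: Q-values favour action 1, but the hold rule can
# temporarily force the swimmer to keep its previous action 0.
q = np.array([0.1, 0.9, 0.2, 0.0, 0.3])
print(regulated_action(q, last_action=0, steps_since_change=1))  # held: 0
print(regulated_action(q, last_action=0, steps_since_change=3))  # free: 1
```

In a full training loop, the regulated action (rather than the network's raw argmax) would be executed in the immersed-boundary simulation and stored in the replay buffer, so the agent learns values consistent with the constrained behaviour it can actually exhibit.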