A ground vortex engendered by the interaction of uniform flow over a plane surface with suction into a cylindrical conduit, whose axis is normal to the cross-flow and parallel to the ground plane, is investigated in wind tunnel experiments. The formation and evolution of the columnar vortex and its ingestion into the conduit’s inlet are explored using planar/stereo particle image velocimetry over a broad range of formation parameters, including the speeds of the inlet and cross-flows and the cylinder’s elevation above the ground plane, with specific emphasis on the role of the surface vorticity layer in the vortex’s initiation and sustainment. The present investigations show that the appearance of a ground vortex within the inlet face occurs above a threshold boundary of two dimensionless formation parameters, namely the inlet’s momentum flux coefficient and its normalised elevation above the ground surface. Transitory initiations of wall-normal columnar vortices are spawned within a countercurrent shear layer that the suction flow into the duct forms over the ground plane, within a streamwise domain on the inlet’s leeward side. At low suction speeds, these wall-normal vortices are advected downstream with the cross-flow, but when their celerity is reversed with increased suction they are advected towards the cylinder’s inlet, gain circulation, stretch along their centrelines and become ingested into the inlet at a threshold defined by the formation parameters. Finally, the present investigations demonstrate that reducing the countercurrent shear within the wall vorticity layer by deliberate, partial bypass of the inlet face flow through the periphery of the cylindrical duct can significantly delay the ingestion of the ground vortex to higher thresholds of the formation parameters.
The motivations for acting and planning with probabilistic models are about handling uncertainty in a quantitative way, with optimal or near-optimal decisions. The future is never entirely and precisely predictable. Uncertainty can be due to exogenous events in the environment (from nature and other actors), to noisy sensing and information-gathering actions, or to possible failures and outcomes of imprecise or intrinsically nondeterministic actions. Models are necessarily incomplete. Knowledge about open environments is partial. Part of what may happen can only be modeled with uncertainty. Even in closed, predictable environments, complete deterministic models may be too complex to develop. The three chapters in Part III tackle acting, planning, and learning in a probabilistic setting.
This chapter is about planning techniques for solving MDP problems. It presents algorithms that seek optimal or near-optimal solution policies for a domain. Most of the chapter focuses on indefinite-horizon goal-reachability domains that have positive costs and a safe solution; they may have dead ends, but those are avoidable. The chapter presents dynamic programming algorithms, heuristic search methods and their heuristics, linear programming methods, and online and Monte Carlo tree search techniques.
In probabilistic models, an action can have several possible outcomes that are not equally likely; their distribution can be estimated from statistics of past observations. The purpose is to act optimally with respect to an optimization criterion based on the estimated likelihood of action effects and their cost. The usual formal probabilistic models are Markov decision processes (MDPs). An MDP is a nondeterministic state-transition system with a probability distribution and a cost distribution. The probability distribution defines how likely it is to get to a state 𝑠′ when an action 𝑎 is performed in a state 𝑠. The chapter presents MDPs in flat and then in structured state-space representations. Section 8.3 covers modeling issues of a probabilistic domain with MDPs and variants such as the stochastic shortest path (SSP) model or the constrained MDP (C-MDP) model. Section 8.4 focuses on acting with MDPs. Partially observable MDPs and other extended models are discussed in Section 8.5.
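As a rough illustration of the flat state-space representation and of the dynamic programming algorithms mentioned above, the following Python sketch encodes a tiny MDP as explicit dictionaries of transition probabilities and costs and runs a plain value iteration. The state and action names, the dictionary layout, and the stopping tolerance are illustrative assumptions, not the chapter's own pseudocode.

```python
# Minimal sketch (illustrative, not the chapter's pseudocode): a flat MDP with
# explicit transition probabilities P(s'|s,a) and costs, solved by value iteration.

# Hypothetical toy domain: move "left"/"right" over three states, "g" is the goal.
P = {  # P[s][a] = list of (s_next, probability)
    "s0": {"right": [("s1", 0.8), ("s0", 0.2)]},
    "s1": {"right": [("g", 0.9), ("s0", 0.1)], "left": [("s0", 1.0)]},
    "g":  {},  # goal: absorbing, no applicable actions
}
cost = lambda s, a: 1.0  # uniform unit action cost (assumption)

def value_iteration(P, cost, eps=1e-6):
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s, actions in P.items():
            if not actions:          # goal/absorbing state keeps value 0
                continue
            # Bellman update: minimize expected cost-to-go over applicable actions
            V_new = min(cost(s, a) + sum(p * V[s2] for s2, p in outcomes)
                        for a, outcomes in actions.items())
            delta = max(delta, abs(V_new - V[s]))
            V[s] = V_new
        if delta < eps:
            return V

print(value_iteration(P, cost))  # expected cost-to-go from each state
```

The greedy policy with respect to the converged values would pick, in each state, the action minimizing the same Bellman expression; heuristic search and Monte Carlo tree search methods aim to avoid sweeping the entire state space in this way.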
This chapter is about a refinement acting engine (RAE) that operates on a hierarchical, task-oriented representation. It relies on an expressive, general-purpose language that offers rich programming control structures for online decision-making. A collection of refinement methods describes alternative ways to handle tasks and react to events. A method can be any complex algorithm, decomposing a task into subtasks and primitive actions. Subtasks are refined recursively. Nondeterministic actions trigger sensory-motor procedures that query and change the world nondeterministically. We assume that the methods are manually specified and that RAE heuristically chooses the appropriate method for the task and context at hand.
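To make the idea of refinement methods concrete, here is a minimal Python sketch of how a task might be refined into subtasks and primitive commands. The task names, the method format, and the trivial try-methods-in-order selection rule are assumptions for illustration; they do not reproduce RAE's actual algorithm or its method-selection heuristics.

```python
# Minimal sketch (illustrative assumptions, not RAE's actual algorithm):
# refinement methods map a task to a sequence of subtasks or primitive commands.

def execute(command):
    # Stand-in for triggering a sensory-motor procedure; here we just log it.
    print("executing", command)
    return True  # assume success

# Hypothetical methods for a "fetch" task: each returns a list of steps, where a
# step is either ("task", name) to refine further or ("cmd", name) to execute.
def m_fetch_nearby(item):
    return [("cmd", f"grasp {item}")]

def m_fetch_far(item):
    return [("task", "goto"), ("cmd", f"grasp {item}")]

def m_goto():
    return [("cmd", "move_base")]

METHODS = {"fetch": [m_fetch_nearby, m_fetch_far], "goto": [m_goto]}

def refine(task, *args):
    # Method-choice stub: try candidate methods in their listed order.
    for method in METHODS[task]:
        ok = True
        for kind, name in method(*args):
            ok = refine(name) if kind == "task" else execute(name)
            if not ok:
                break  # this method failed; fall back to the next candidate
        if ok:
            return True
    return False

refine("fetch", "cup")
```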
The recent development of large language models (LLMs) and their extension into multimodal foundation models have introduced new perspectives in AI. An LLM is basically a very large neural net trained as a statistical predictor of the likely continuation of a sequence of words. LLMs have excellent competencies over a broad set of NLP tasks. Additionally, LLMs demonstrate the emergence of deliberation capabilities for reasoning, common sense, problem solving, code writing, and planning. These abilities were not designed into LLMs; they are unexpected and remain to a large extent poorly understood. Although error-prone and imperfect, they open up promising perspectives for acting, planning, and learning, which are presented in this chapter.
This chapter is about domain-independent classical-planning algorithms, which until recently were the most widely studied class of AI planning algorithms. The chapter classifies and describes a variety of forward search, backward search, and plan-space planning algorithms, as well as heuristics for guiding the algorithms.
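As a sketch of the forward-search idea mentioned above (under simplifying assumptions: states as sets of atoms, precondition/add/delete actions, and plain breadth-first search rather than a heuristic-guided one), a classical planner can be outlined as follows; the action format and names are illustrative, not taken from the chapter.

```python
# Minimal sketch of forward state-space search for classical planning
# (illustrative: frozenset-of-atoms states, STRIPS-style actions, BFS instead
# of a heuristic search).
from collections import deque

# Hypothetical action format: (name, preconditions, add_list, delete_list)
ACTIONS = [
    ("pick", {"at_table"}, {"holding"}, {"at_table"}),
    ("place", {"holding"}, {"at_shelf"}, {"holding"}),
]

def forward_search(init, goal, actions):
    frontier = deque([(frozenset(init), [])])
    visited = {frozenset(init)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                       # all goal atoms satisfied
            return plan
        for name, pre, add, dele in actions:
            if pre <= state:                    # action applicable
                nxt = frozenset((state - dele) | add)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None                                 # goal unreachable

print(forward_search({"at_table"}, {"at_shelf"}, ACTIONS))  # ['pick', 'place']
```

Backward search would apply the same loop over regressed goal sets instead of progressed states, and the heuristics discussed in the chapter would replace the FIFO queue with a priority queue ordered by estimated distance to the goal.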