This text is devoted to the statics of rigid laminas on a plane and to the first-order instantaneous kinematics (velocities) of rigid laminas moving over a plane. Higher-order instantaneous kinematic problems, which involve the study of accelerations (second-order properties) and jerk (third-order properties), are not considered.
This text is influenced by the book Elementary Mathematics from an Advanced Standpoint: Geometry, written by the famous German geometer Felix Klein. It was published in German in 1908, and the third edition was translated into English and published in New York by the Macmillan Company in 1939. The book was part of a course of lectures given to German high school teachers at Göttingen in 1908. Klein was admonishing the teachers for not teaching geometry correctly, and he was essentially providing a proper foundation for its instruction.
The present text stems from the undergraduate course “The Geometry of Robot Manipulators,” taught in the Mechanical Engineering Department at the University of Florida. This course is based on Klein's development of the geometry of points and lines in the plane and on his elegant development of mechanics: “A directed line-segment represents a force applied to a rigid body. A free plane-segment, represented by a parallelogram of definite contour sense, and the couple given by two opposite sides of the parallelogram, with arrows directed opposite to that sense, are geometrically equivalent configurations, i.e., they have equal components with reference to every coordinate system.”
In this chapter we study the worst case setting. We shall present results that are already known as well as some new ones. As already mentioned in the Overview, precise information about what is known and what is new can be found in the Notes and Remarks.
Our major goal is to obtain tight complexity bounds for the approximate solution of linear continuous problems that are defined on infinite-dimensional spaces. We first explain what is to be approximated and how an approximation is obtained. To this end, we carefully introduce the fundamental concepts of solution operator, noisy information, and algorithm. Special attention is devoted to information, which is the most important concept in our analysis. Information is, roughly speaking, what we know about the problem to be solved. A crucial assumption is that information is noisy, i.e., it is given not exactly, but with some error.
Since information is usually partial (i.e., many elements share the same information) and noisy, it is impossible to solve the problem exactly. We have to be satisfied with only approximate solutions. They are obtained by algorithms that use information as data. In the worst case setting, the error of an algorithm is given by its worst performance over all problem elements and possible information. A sharp lower bound on the error is given by a quantity called the radius of information. We are naturally interested in algorithms with minimal error.
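To fix ideas, the worst case error can be written schematically as follows; the notation here is illustrative rather than the book's own, with $S$ the solution operator, $F$ the set of problem elements, $\mathbb{N}(f)$ the set of noisy information values that may be obtained for $f$, and $\varphi$ an algorithm acting on information $y$:

$$ e^{\mathrm{wor}}(\varphi) \;=\; \sup_{f \in F} \; \sup_{y \in \mathbb{N}(f)} \bigl\| S(f) - \varphi(y) \bigr\|, \qquad r(\mathbb{N}) \;=\; \inf_{\varphi} e^{\mathrm{wor}}(\varphi). $$

The radius of information $r(\mathbb{N})$ is thus the best worst case error achievable with the given information, and an algorithm is optimal when its error attains (or nearly attains) this radius.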
In the process of doing scientific computations we always rely on some information. In practice, this information is typically noisy, i.e., contaminated by error. Sources of noise include
previous computations,
inexact measurements,
transmission errors,
arithmetic limitations,
an adversary's lies.
Problems with noisy information have always attracted considerable attention from researchers in many different fields, e.g., statistics, engineering, control theory, economics, and applied mathematics. There is also a vast literature, especially in statistics, in which noisy information is analyzed from different perspectives.
In this monograph, noisy information is studied in the context of the computational complexity of solving mathematical problems.
Computational complexity focuses on the intrinsic difficulty of problems as measured by the minimal amount of time, memory, or elementary operations necessary to solve them. Information-based complexity (IBC) is a branch of computational complexity that deals with problems for which the available information is
partial,
noisy,
priced.
Information being partial means that the problem is not uniquely determined by the given information. Information is noisy since it may be contaminated by error. Information is priced since we must pay for getting it. These assumptions distinguish IBC from combinatorial complexity, where information is complete, exact, and free.
Since information about the problem is partial and noisy, only approximate solutions are possible. Approximations are obtained by algorithms that use this information.
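As a toy illustration of these three features (not taken from the book; the function and parameter names below are hypothetical), consider approximating the integral of a function over [0, 1] from finitely many perturbed point values. The information is partial (only n samples), noisy (each sample is known only up to plus or minus delta), and priced (each sample adds to the cost):

    import random

    def noisy_integration(f, n, delta, seed=0):
        """Approximate the integral of f over [0, 1] from n noisy samples."""
        rng = random.Random(seed)
        nodes = [(i + 0.5) / n for i in range(n)]                  # midpoint nodes
        data = [f(x) + rng.uniform(-delta, delta) for x in nodes]  # noisy, partial information
        approx = sum(data) / n                                     # midpoint-rule algorithm
        cost = n                                                   # information is priced: one unit per sample
        return approx, cost

    approx, cost = noisy_integration(lambda x: x * x, n=1000, delta=1e-3)
    print(approx, cost)  # approx is close to the exact value 1/3

The two sources of error behave differently: the truncation error of the midpoint rule decays as n grows, while the worst case contribution of the noise remains of order delta no matter how the n data values are processed.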
This chapter deals with the average case setting. In this setting, we are interested in the average error and cost of algorithms. The structure of this chapter is similar to that of the previous chapter. That is, we first deal with optimal algorithms, then we analyze the optimal information, and finally, we present some complexity results.
To study the average error and/or cost, we have to replace the deterministic assumptions of the worst case setting by stochastic ones. That is, we assume some probability distribution μ on the space F of problem elements, as well as some distribution of the information noise. The latter means that information is corrupted by random noise. We mainly consider Gaussian distributions (measures), which seem to be the most natural and are most often used in modeling.
In Section 3.2, we give a general formulation of the average case setting. We also introduce the concept of the (average) radius of information, which, as in the worst case, provides a sharp lower bound on the (average) error of algorithms.
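Schematically, and again with illustrative notation (μ the measure on F, π(·|f) the distribution of the noisy information y obtained for the element f, and a squared-norm error criterion), the average error of an algorithm φ is

$$ e^{\mathrm{avg}}(\varphi) \;=\; \left( \int_F \int \bigl\| S(f) - \varphi(y) \bigr\|^2 \, \pi(dy \,|\, f) \, \mu(df) \right)^{1/2}, $$

and the average radius of information is the infimum of this quantity over all algorithms.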
Then we pass to linear problems with Gaussian measures. These are problems where the solution operator is linear, µ is a Gaussian measure, and information is linear with Gaussian noise. In Section 3.3, we recall the definition of a Gaussian measure on a Banach space, listing some important properties. In Sections 3.4 to 3.6 we study optimal algorithms.
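For orientation, we state the standard characterization informally: a Gaussian measure μ on a separable Banach space F is determined by its mean element m ∈ F and its covariance operator C_μ : F* → F, which satisfy

$$ \int_F L(f)\, \mu(df) \;=\; L(m), \qquad \int_F L_1(f - m)\, L_2(f - m)\, \mu(df) \;=\; L_1\bigl( C_\mu L_2 \bigr) $$

for all continuous linear functionals L, L_1, L_2 in F*. In the same spirit, linear information with Gaussian noise can be thought of as y = N(f) + x, where N is a linear operator and x is a zero-mean Gaussian random vector.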