Chapter 5 compares laws of employment protection, compensation, and labor unions in the three countries, and describes how the different laws affect incentive bargaining within the firm and corporate governance. The US employment-at-will rule gives employers almost complete discretion to dismiss employees unless contractual protections apply or the dismissal is discriminatory. The Japanese abusive dismissal rule strictly restricts employers’ discretion to dismiss employees even in business downturns. Relative to Japanese companies, US companies rely heavily on performance-based pay, including generous stock options. In Chinese compensation packages, social insurance and welfare benefits account for a large share, and performance-based bonuses play a significant role in privately owned enterprises (POEs). US labor unions are basically industry unions and adversarial to management, while Japanese labor unions are company unions and generally cooperative with management. All labor unions in China are government-backed, organized only at the level of individual enterprises, and expected to mitigate labor disputes.
Multiple mobile manipulators (MMs) outperform a single robot in tasks that require both mobility and dexterity, especially when manipulating or transporting bulky objects. However, the closed-chain constraint of the system, the redundancy of each MM, and obstacles in the environment make the motion planning problem challenging. In this paper, we propose a novel semi-coupled hierarchical framework (SCHF), which decomposes the problem into two semi-coupled sub-problems. Specifically, the centralized layer plans the object’s motion first, and the decentralized layer then independently explores the redundancy of each robot in real time. A notable feature is that, in addition to the closed-chain and obstacle-avoidance constraints, the centralized layer enforces a lower bound on the redundancy constraint metric, which ensures that the planned object motion can be executed by each robot in the decentralized layer. Simulation results show that SCHF significantly outperforms both a fully centralized planner and a fully decoupled hierarchical planner in success rate and time cost. Real-world experiments in cluttered environments further demonstrate the feasibility of SCHF in transportation tasks. A video clip of various scenarios can be found at https://youtu.be/Y8ZrnspIuBg.
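To make the two-layer idea concrete, the following is a minimal, hedged Python sketch of a semi-coupled hierarchy: a centralized step that plans only the object's motion while checking a lower bound on a toy redundancy margin, and a decentralized step in which each robot independently tracks its own grasp point. The straight-line interpolation, the margin metric, and all names (centralized_layer, decentralized_layer, redundancy_margin) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a semi-coupled hierarchical planner in the spirit of SCHF.
# Everything here (metric, interpolation, parameters) is illustrative only.
import numpy as np

def redundancy_margin(object_pose, grasp_offset, reach=2.0):
    """Toy stand-in for the redundancy constraint metric: distance-to-reach slack
    for one robot holding the object at a fixed grasp offset."""
    grasp_point = object_pose[:2] + grasp_offset
    return reach - np.linalg.norm(grasp_point)  # > 0 means the grasp is reachable with slack

def centralized_layer(start, goal, grasp_offsets, n_steps=50, min_margin=0.2):
    """Plan only the object's motion, refusing waypoints whose redundancy margin
    (for any robot) drops below the lower bound, so the decentralized layer can follow."""
    path = []
    for t in np.linspace(0.0, 1.0, n_steps):
        pose = (1 - t) * start + t * goal               # straight-line interpolation for illustration
        margins = [redundancy_margin(pose, g) for g in grasp_offsets]
        if min(margins) < min_margin:                   # closed-chain feasibility guard
            raise RuntimeError("waypoint violates the redundancy lower bound")
        path.append(pose)
    return path

def decentralized_layer(object_path, grasp_offset):
    """Each robot independently tracks its grasp point; here a trivial base command."""
    return [pose[:2] + grasp_offset for pose in object_path]

if __name__ == "__main__":
    start, goal = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.5, 0.0])
    offsets = [np.array([0.3, 0.0]), np.array([-0.3, 0.0])]   # two robots grasping the object
    obj_path = centralized_layer(start, goal, offsets)
    robot_cmds = [decentralized_layer(obj_path, g) for g in offsets]
    print(len(obj_path), "object waypoints,", len(robot_cmds), "robot command streams")
```

The point of the sketch is only the division of labour: the object-level plan is validated against a conservative per-robot margin once, and each robot then resolves its own redundancy without further coordination.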
Where the reason for dismissal concerns business reorganisation rather than individual fault, there is a statutory right for employees with a qualifying period of continuous service to claim redundancy payments based on the number of years of service. In some cases of economic dismissal, the reason for dismissal may not fall within the statutory concept of redundancy, but in such cases the dismissal can nevertheless be regarded as fair as a dismissal for ‘some other substantial reason’. There is statutory protection for wages and some compensation for dismissal in the event of the employer’s insolvency. Dismissals in connection with the sale of the business or outsourcing to a different contractor are automatically unfair unless the transferor or transferee can demonstrate that the workers were dismissed for redundancy unconnected to the sale.
How to interrogate and improve your writing: correcting errors; removing redundant phrases; trimming or augmenting attribution in speech; integrating action and speech; checking dialogue for authenticity; monitoring sentence length; balancing the extent of detail and description; scrutinising the chronology of description; checking that the narrative viewpoint is secure.
While speakers are theorized, ideally, not to include unnecessary information (redundancy) in their utterances, in reality they often do so. One potential reason is that linguistic redundancy facilitates communication, especially when the addressee (interlocutor) is linguistically less competent (e.g., an artificial system). In three experiments, we examined whether linguistic redundancy may arise from people’s tendency to use linguistic features similar to those of their interlocutor during communication (i.e., linguistic alignment) and whether redundancy alignment (if any) differs with a human interlocutor versus a computer interlocutor. We also examined whether redundancy alignment is affected by the perceived competency of the interlocutor and by participants’ abilities in theory of mind (ToM), and whether redundancy alignment varied across time during the experiment. Participants carried out a picture matching and naming task with a human or computer interlocutor who either always or never included redundancies in their utterances. Redundancy alignment was found across all experiments, in that speakers produced more redundancies with a redundant interlocutor than with a non-redundant one. This alignment was also modulated by the perceived competency of the interlocutor, the time course of the interaction, and ToM abilities, suggesting that redundancy usage is affected by both automatic and strategic mechanisms of linguistic alignment.
Although several patterns of word-formation appear to introduce tautology, in practice they are probably not felt to be tautologous by the speakers and listeners who are faced with them.
This chapter treats the nonlinear response of the hull girder to global loads, including torsional loads, the consequences of major damage leading to loss of longitudinal strength over part of the hull girder, and hull girder collapse. In the case of torsional loads, the position of the shear centre is of critical importance, and this depends on the hull girder geometry (closed or open section). The effect of structural arrangements is then described in relation to longitudinal warping. The effect of discontinuities is discussed and design issues are considered. Combined and coupled horizontal bending and torsion are treated next. The following section deals with the determination of the reserve strength of the hull girder after damage: the approach followed by a classification society in calculating residual strength is described, and the use of the IACS Common Structural Rules in calculating the residual strength of oil tankers is presented. The last part of the chapter covers the ultimate strength of the hull girder in longitudinal bending. The need to calculate ultimate strength is discussed, followed by its calculation using a simplified, upper-bound approach. Progressive collapse analysis is then presented, which allows for the gradual spread of elasto-plastic behaviour in the individual stiffened plate elements of the hull girder.
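For orientation, a progressive collapse (incremental curvature) calculation of the kind referred to above can be written schematically as follows; the notation is ours rather than the chapter's, and the load-shortening curves of the stiffened plate elements are assumed given.

```latex
% Schematic form of an incremental-curvature (progressive collapse) calculation;
% notation is illustrative, not the chapter's own.
% For an imposed hull girder curvature \kappa, element i at height z_i strains as
\varepsilon_i = \kappa \,\bigl(z_i - z_{\mathrm{NA}}\bigr),
% the instantaneous neutral axis z_{NA} follows from force equilibrium,
\sum_i \sigma_i(\varepsilon_i)\, A_i = 0,
% and the bending moment and ultimate strength are
M(\kappa) = \sum_i \sigma_i(\varepsilon_i)\, A_i \,\bigl(z_i - z_{\mathrm{NA}}\bigr),
\qquad
M_U = \max_{\kappa} M(\kappa).
```

Here σ_i(ε_i) is the average load-shortening (stress–strain) curve of stiffened plate element i, which captures its elasto-plastic and buckling behaviour, and A_i is its cross-sectional area; stepping the curvature κ and tracking M(κ) gives the gradual spread of collapse through the cross-section.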
The aim in Chapter 7 is to take into account the role of the means of information transmission in determining the nature of the states that can be perceived. Our point of departure is the recognition that the information we obtain is acquired by observers who monitor fragments of the same environment that decohered the system, einselecting preferred pointer states in the process. Moreover, we only intercept a fraction of the environment. The only information about the system that can be transmitted by such a fraction must have been reproduced in many copies in that environment. This process of amplification limits what can be found out to the states einselected by decoherence. Quantum Darwinism provides a simple and natural explanation of this restriction, and, hence, of the objective existence (the essence of classicality) of the einselected states. This chapter introduces and develops information-theoretic tools and concepts (including, e.g., redundancy) that allow one to explore and characterize correlations and information flows between systems, environments, and observers, and illustrates them on an exactly solvable yet non-trivial model.
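As context for the tools mentioned above, one standard way the redundancy of a record is quantified in the Quantum Darwinism literature is as the number of disjoint environment fragments that each supply nearly all of the accessible information about the system; in symbols (our notation, not necessarily the chapter's):

```latex
% f_delta is the smallest fraction of the environment whose fragment F_f already
% carries all but a fraction delta of the information about the system S.
f_{\delta} = \min \bigl\{ f : \; I(\mathcal{S}\!:\!\mathcal{F}_f) \ge (1-\delta)\, H_{\mathcal{S}} \bigr\},
\qquad
R_{\delta} = \frac{1}{f_{\delta}},
```

where I(S : F_f) is the mutual information between the system and a fragment containing a fraction f of the environment and H_S is the system's entropy; a large R_δ means many observers can independently learn the einselected pointer state from their own fragments.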
This chapter covers digital information sources in some depth. It provides intuition on the information content of a digital source and introduces the notion of redundancy. As a simple but important example, discrete memoryless sources are described. The concept of entropy is defined as a measure of the information content of a digital information source. The properties of entropy are studied, and the source-coding theorem for a discrete memoryless source is given. In the second part of the chapter, practical data compression algorithms are studied. Specifically, Huffman coding, which is an optimal data-compression algorithm when the source statistics are known, and Lempel–Ziv (LZ) and Lempel–Ziv–Welch (LZW) coding schemes, which are universal compression algorithms (not requiring the source statistics), are detailed.
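As a concrete illustration of the entropy measure and of Huffman coding for a discrete memoryless source, the short Python sketch below computes H(X) for a made-up source alphabet and builds a binary Huffman code; the example probabilities and the code itself are ours, not the chapter's.

```python
# Entropy and Huffman coding for a toy discrete memoryless source (illustrative only).
import heapq
from math import log2

def entropy(probs):
    """H(X) = -sum p log2 p, the average information content per source symbol."""
    return -sum(p * log2(p) for p in probs.values() if p > 0)

def huffman_code(probs):
    """Build a binary Huffman code; returns a dict mapping symbol -> codeword."""
    # Each heap entry: (probability, tie-breaker, {symbol: partial codeword}).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)          # two least probable subtrees
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

if __name__ == "__main__":
    probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
    code = huffman_code(probs)
    avg_len = sum(probs[s] * len(w) for s, w in code.items())
    print("H(X) =", entropy(probs), "bits; average codeword length =", avg_len)
```

For the dyadic probabilities used here the average codeword length equals the entropy exactly, which is the boundary case of the source-coding theorem; for general sources Huffman coding achieves an average length within one bit of H(X) per symbol.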
In Chapter 13 we will discuss how to produce compression schemes that do not require a priori knowledge of the generative distribution. It turns out that designing a compression algorithm able to adapt to an unknown distribution is essentially equivalent to the problem of estimating an unknown distribution, which is a major topic of statistical learning. The plan for this chapter is as follows: (1) We start by discussing the earliest example of a universal compression algorithm (due to Fitingof). It does not refer to probability distributions at all, yet it turns out to be asymptotically optimal simultaneously for all iid distributions and, with small modifications, for all finite-order Markov chains. (2) The next class of universal compressors is based on assuming that the true distribution belongs to a given class. These methods proceed by choosing a good model distribution that serves as a minimax approximation to every distribution in the class; the compression algorithm for a single distribution is then designed as in previous chapters. (3) Finally, an entirely different idea is embodied in algorithms of the Lempel–Ziv type, which automatically adapt to the distribution of the source without requiring any prior assumptions.
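To illustrate the "no prior assumptions" point for Lempel–Ziv-type schemes, here is a toy LZ78-style parser in Python; it is a standard textbook variant given purely for intuition, not the construction analysed in the chapter.

```python
# Toy LZ78-style parse: the dictionary of phrases is built on the fly from the data
# itself, so the scheme adapts to the source without knowing its statistics.
def lz78_parse(text):
    """Return a list of (dictionary index, next symbol) pairs."""
    dictionary = {"": 0}          # phrase -> index; index 0 is the empty phrase
    output, phrase = [], ""
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch          # keep extending the current phrase
        else:
            output.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                    # flush an unfinished phrase at end of input
        output.append((dictionary[phrase], ""))
    return output

if __name__ == "__main__":
    pairs = lz78_parse("abababababbbbb")
    print(pairs)   # repeated patterns yield longer dictionary phrases, hence fewer pairs
```

The adaptivity is visible in the output: highly repetitive input is parsed into a small number of long phrases, while an iid-looking input produces many short ones, without the encoder ever being told which case it is in.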
The purpose of this note is to show that Van den Wollenberg's method of redundancy analysis is a special case of a simultaneous linear prediction method offered by Fortier.
A distinction is drawn between redundancy measurement and the measurement of multivariate association for two sets of variables. Several measures of multivariate association between two sets of variables are examined. It is shown that all of these measures are generalizations of the (univariate) squared-multiple correlation; all are functions of the canonical correlations, and all are invariant under linear transformations of the original sets of variables. It is further shown that the measures can be considered to be symmetric and are strictly ordered for any two sets of observed variables. It is suggested that measures of multivariate relationship may be used to generalize the concept of test reliability to the case of vector random variables.
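For readers unfamiliar with the distinction, a common form of the asymmetric redundancy index of a set Y given a set X, in the spirit of Stewart and Love, is shown below in our notation; like the association measures discussed in the article it is a function of the canonical correlations, but unlike them it is not symmetric in the two sets.

```latex
% Redundancy of the set Y given the set X (illustrative notation, not the article's):
R_{Y \mid X}
  = \frac{1}{p} \sum_{i=1}^{p} R^{2}\!\bigl(y_i ; X\bigr)
  = \sum_{k} \rho_k^{2} \,\Bigl( \tfrac{1}{p} \sum_{i=1}^{p} \ell_{ik}^{2} \Bigr),
```

where p is the number of variables in Y, ρ_k is the k-th canonical correlation, and ℓ_{ik} is the loading of y_i on the k-th canonical variate of Y; in general R_{Y|X} ≠ R_{X|Y}, which is exactly the asymmetry that separates redundancy measurement from the symmetric measures of multivariate association.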
Concision is about more than writing like Hemingway or following Strunk & White’s edict to eliminate unnecessary words. Instead, concision relies on writers recognizing the myriad redundancies in English, a reflection of its evolution from the collision of Latin, French, and Old English in the decades following the Norman Conquest. Moreover, redundancies also litter English in the form of redundant modifiers, throat-clearing, and metadiscourse. By recognizing these words and phrases, writers can quickly pare sentences to their essentials, without fretting over the havoc deletions can wreak on the meaning of their sentences.
We report findings from a corpus-based investigation of three young children growing up in German-English bilingual environments (M = 3;0, Range = 2;3–3;11). Based on 2,146,179 single words and two-word combinations in naturalistic child speech (CS) and child-directed speech (CDS), we assessed the degree to which the frequency distribution of CDS predicted CS usage over time, and systematically identified CS that was over- or underrepresented in the corpus with respect to matched CDS baselines. Results showed that CDS explained 61% of the variance in CS single-word use and 19.3% of the variance in two-word combinations. Furthermore, the bilingual nature of the over- or underrepresented CS was partially attributable to factors beyond the corpus statistics, namely individual differences between children in their bilingual learning environment. In two out of the three children, overrepresented two-word combinations contained higher levels of syntactic slot redundancy than underrepresented CS. These results are discussed with respect to the role that redundancy plays in producing semiformulaic slot-and-frame patterns in CS.
Lie detection research comparing manual and automated coding of linguistic cues is limited. In Experiment 1, we attempted to extend this line of research by directly comparing the veracity differences obtained with manual coding and two coding software programs (Text Inspector and Linguistic Inquiry and Word Count [LIWC]) on the linguistic cue “total details” across eight published datasets. Mixed model analyses revealed that LIWC showed larger veracity differences in total details than Text Inspector and manual coding. Follow-up classification analyses showed that both automated coding and manual coding could accurately classify honest and false accounts. In Experiment 2, we examined whether LIWC’s sensitivity to veracity differences was the result of honest accounts including more redundant (repeated) words than false accounts, since LIWC (but not Text Inspector or manual coding) accounts for redundancy. Our prediction was supported, and the most redundant words were function words. The results indicate that automated coding can detect veracity differences in total details and redundancy, but it is not necessarily better than manual coding at accurately classifying honest and false accounts.
Text comprehension and picture comprehension can be synthesized into a common conceptual framework which differentiates between external and internal descriptive and depictive representations. Combining this framework with the human cognitive architecture, including sensory registers, working memory, and long-term memory, leads to an integrated model of text and picture comprehension. The model consists of a descriptive branch and a depictive branch of processing and includes multiple sensory modalities. Due to a flexible combination of sensory modalities and representational formats, the model covers listening comprehension, reading comprehension, visual picture comprehension, and sound comprehension. The model considers text comprehension and picture comprehension to be different routes for constructing mental models and propositional representations with the help of prior knowledge. It allows us to explain the effects of coherence, text modality, split attention, text–picture contiguity, redundancy, sequencing, and different types of visualization.
This chapter completes our critical exploration of Popper’s key work, The Logic of Scientific Discovery, and how it applies to corpus linguistics. In this chapter we address the question of how easily linguistics may be viewed as a science in Popper’s terms. We also consider important critiques of Popper’s work and use them both to clarify and, where necessary, to adapt the framework.
Sensor placement optimization (SPO) is usually applied during the design of a structural health monitoring sensor system to collect effective data. However, the failure of a sensor may significantly affect the expected performance of the entire system, so it is necessary to study optimal sensor placement while allowing for the possibility of sensor failure. In this article, the research focuses on an SPO that gives a fail-safe sensor distribution, whose sub-distributions still perform well. The performance of fail-safe sensor distributions in which multiple sensors are placed at the same position is also studied. The adopted data sets comprise the mode shapes and corresponding labels of structural states from a series of tests on a glider wing. A genetic algorithm is used to search for sensor deployments, and partial results are validated by an exhaustive search. Two types of optimization objective are investigated, one for modal identification and the other for damage identification. The results show that the proposed fail-safe sensor optimization method is beneficial for balancing system performance before and after sensor failure.
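The fail-safe idea, optimizing a layout so that its worst sub-distribution after any single sensor failure is still good, can be sketched in Python as follows; the fitness surrogate, candidate grid, and genetic-algorithm operators are illustrative assumptions rather than the article's data or implementation.

```python
# Hedged sketch of a genetic-algorithm search for a fail-safe sensor layout.
# N_CANDIDATES, the fitness surrogate, and the GA settings are all made up.
import random

N_CANDIDATES, N_SENSORS = 20, 6   # candidate locations and number of sensors to place

def layout_fitness(layout):
    """Toy surrogate for modal/damage identification performance: regions covered."""
    return len(set(p // 4 for p in layout))

def fail_safe_fitness(layout):
    """Score a layout by its worst sub-distribution after any single sensor fails."""
    return min(layout_fitness([p for p in layout if p != lost]) for lost in layout)

def genetic_search(generations=200, pop_size=30, mutation_rate=0.2):
    pop = [random.sample(range(N_CANDIDATES), N_SENSORS) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fail_safe_fitness, reverse=True)
        parents = pop[: pop_size // 2]            # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            # Crossover: take part of one parent, fill with the other, keep positions unique.
            child = list(dict.fromkeys(a[: N_SENSORS // 2] + b))[:N_SENSORS]
            if random.random() < mutation_rate:   # mutate to an unused candidate position
                free = [p for p in range(N_CANDIDATES) if p not in child]
                child[random.randrange(N_SENSORS)] = random.choice(free)
            children.append(child)
        pop = parents + children
    return max(pop, key=fail_safe_fitness)

if __name__ == "__main__":
    best = genetic_search()
    print("best fail-safe layout:", sorted(best), "worst-case score:", fail_safe_fitness(best))
```

The key design choice mirrored from the abstract is that the GA's objective is the post-failure worst case rather than the nominal performance, so the search is pushed toward layouts whose sub-distributions remain informative after a sensor is lost.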
Building up from first principles and simple scenarios, this comprehensive introduction to rigid body dynamics gradually introduces readers to the tools needed to address involved real-world problems and cutting-edge research topics. Using a unique blend of conceptual, theoretical and practical approaches, concepts are developed and rigorously applied to practical examples in a consistent and understandable way. It includes discussion of real-world applications, including robotics and vehicle dynamics, and over 40 thought-provoking fully worked examples to cement readers' understanding. Providing a wealth of resources allowing readers to confidently self-assess – including over 100 problems with solutions, over 400 high-quality multiple-choice questions, and end-of-chapter puzzles dealing with everyday situations – this is an ideal companion for undergraduate students in aerospace, civil and mechanical engineering.