Carl Linnaeus dubbed his own species Homo sapiens, meaning something like “wise (or knowledgeable) man.” This is a bit overly self-congratulatory, but it does focus attention on a feature that sets humans apart. Humans inquire about the world and about themselves, and – sometimes, anyway – thereby acquire knowledge, wisdom, and understanding that surpass those of even the most clever ostriches, squirrels, and mushrooms. Humans engage in inquiry about everything under the sun, and a good many things above it as well. Humans will even engage in inquiry about things that have no spatiotemporal relationship to the sun at all – things like the number 7, the orthocenter of a triangle, and the intricacies of the fictional world imagined in Frank Herbert’s Dune. At some critical stage in evolutionary history, humans even began to turn their inquiring gaze back on inquiry itself.
In the last chapter, we saw that no truth function captures the logic of natural language indicative conditional reasoning better than the material conditional does. But this leaves open the question: can ordinary indicative conditional reasoning be properly captured by any truth function at all? There are good reasons to think that the logic of material conditionals departs in important ways from the logic of natural language indicatives, whatever their logical similarities might be. We begin to explore this topic in this chapter and continue in the next.
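For reference, here is the truth table that defines the material conditional (written here as ‘→’; some texts use ‘⊃’). The rows with a false antecedent are the source of much of the trouble, since they make any conditional with a false antecedent come out true:

\[
\begin{array}{cc|c}
P & Q & P \rightarrow Q \\
\hline
T & T & T \\
T & F & F \\
F & T & T \\
F & F & T
\end{array}
\]

On this table, ‘If the moon is made of cheese, then 2 + 2 = 5’ comes out true simply because its antecedent is false – one place where the material conditional and the English indicative seem to pull apart.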
Semantics is the study of meaning. But ‘meaning’ in what sense? We use this term in many different contexts and it’s not clear exactly what – if anything – unites them. We might ask, for instance, about the meaning of life. But we also might ask about the meaning of ‘life’, and that is a very different kettle of fish. Getting a full and confident grip on the meaning of ‘life’ might, sadly, leave one quite in the dark as to the meaning of life.
In this chapter we review the qualitative difference between explicit knowledge and implicit knowledge (underlying mental representation), focusing on whether instruction affects the latter. We review the accepted finding that instruction does not affect ordered development, as well as the question of whether instruction affects rate of development and ultimate attainment. We also survey important variables in the research on instructed acquisition, including the type of knowledge measured, the nature of the assessments used, and short-term vs. long-term studies, among others.
Yes, the developers of contemporary classical logic had a utopian vision. Instead of trying to rehabilitate the festering logical mess that is natural language, they’d develop a new, logically pure language – one that was free from all of the defects that make it so hard to track or model right reasoning using natural languages. If they succeeded, they’d have a language that would give them some hope of systematizing logic and clarifying our reasoning (instead of one that constantly bewitched them into philosophical confusions).
This chapter traces how the field of second language acquisition arose. We briefly review the pioneering work in the late 1950s and 1960s in first language acquisition (e.g., Berko Gleason, Brown, Klima & Bellugi). We also review the generative revolution in linguistics and how it laid the groundwork for the idea of constrained language acquisition. We then review the seminal articles by S. Pit Corder (1967) and Larry Selinker (1972) that posited the major questions in second language acquisition, and turn to the pioneering work that mirrored research in first language acquisition (e.g., Dulay & Burt, Krashen, Wode). We end the chapter with the major question that launched second language acquisition research in the early 1970s: Are L1 and L2 acquisition similar or different?
Descriptions like ‘the man’ and ‘a turkey’ seem so simple and foundational to the way we talk that it’s shocking that one can muster more than a few short paragraphs to elucidate their semantics. The more one thinks about how we use these structures in language, however, the more puzzling and intractable they become. Over a century ago, Bertrand Russell tried to set out a simple, elegant theory of descriptions. That should have been the end of it. But it was just the beginning.
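For readers who want the formula in front of them, Russell’s analysis renders a sentence of the form ‘The F is G’ roughly as follows (modern shorthand, not necessarily the chapter’s own notation):

\[
\exists x\,\bigl(Fx \wedge \forall y\,(Fy \rightarrow y = x) \wedge Gx\bigr)
\]

In words: there is something that is F, nothing else is F, and that thing is G.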
This chapter defines what kind of input contains the data necessary for acquisition (communicatively embedded input) and focuses on its fundamental role in acquisition. Subsequently, we review the claims on the role of output and interaction, focusing on these major issues: Comprehensible output is necessary for acquisition; comprehensible output is beneficial for acquisition; comprehensible output does little to nothing for acquisition. We also discuss the nature of interaction more generally, focusing on whether interaction affects the acquisition of formal features of language.
The move from to can seem like a significant ramping up in terms of the complexity and difficulty of proof-making. In this chapter we’ll pause for a bit and work through some more proofs.
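As a small illustration of the kind of proof-making involved (the rule names and layout here are illustrative and may differ from the book’s own system), one might derive Ga from ∀x(Fx → Gx) and Fa:

\[
\begin{array}{lll}
1 & \forall x\,(Fx \rightarrow Gx) & \text{Premise} \\
2 & Fa & \text{Premise} \\
3 & Fa \rightarrow Ga & \forall\text{Elim, 1} \\
4 & Ga & \rightarrow\text{Elim, 3, 2}
\end{array}
\]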
In chapter 10, we saw that contemporary classical logic departs from its Aristotelian roots in its tolerance for empty predicates like ‘– is a unicorn’, ‘– is a leprechaun’, and ‘– is a tasty kale recipe’. From the standpoint of formal semantics, such predicates can simply be assigned the null set as their extensions. This makes for some awkwardness, to be sure. It means that claims like ‘All leprechauns are Canadian’ should be counted as true (albeit vacuously so).
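To see why the vacuous-truth verdict falls out of the formal semantics (a sketch in standard notation, not necessarily the book’s own), suppose ‘All leprechauns are Canadian’ is regimented as ∀x(Lx → Cx) and ‘– is a leprechaun’ is assigned the empty set as its extension:

\[
\mathrm{ext}(L) = \varnothing \ \Longrightarrow\ \text{for every object } a:\ La \text{ is false, so } La \rightarrow Ca \text{ is true} \ \Longrightarrow\ \forall x\,(Lx \rightarrow Cx) \text{ is true (vacuously)}.
\]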
In addition to our stock of generic predicates – ready to be interpreted however one needs for a given context – we have also introduced one (and only one) special predicate that has the same interpretation across all models: the identity predicate, ‘=’. Given the fixed meaning of this predicate, we can state Intro and Elim rules for it.
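In many natural-deduction systems these rules take roughly the following shape (the book’s own formulation may differ in detail): =Intro lets one assert a self-identity at any point, and =Elim lets one substitute one name for another once an identity between them has been established:

\[
\text{=Intro:}\quad \vdash\ a = a
\qquad\qquad
\text{=Elim:}\quad a = b,\ \varphi(a)\ \vdash\ \varphi(b)
\]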