This book takes as its starting point recent debates over the dematerialisation of subject matter, which have arisen because of changes in information technology, molecular biology, and related fields that produced a subject matter with no obvious material form or trace. Arguing against the idea that dematerialisation is a uniquely twenty-first-century problem, this book looks at three situations where US patent law has already dealt with a dematerialised subject matter: nineteenth-century chemical inventions, computer-related inventions in the 1970s, and biological subject matter across the twentieth century. In looking at what we can learn from these historical accounts about how the law responded to a dematerialised subject matter and the role that science and technology played in that process, this book provides a history of patentable subject matter in the United States. This title is available as Open Access on Cambridge Core.
Kenneth I. Kellermann, National Radio Astronomy Observatory, Charlottesville, Virginia; Ellen N. Bouton, National Radio Astronomy Observatory, Charlottesville, Virginia
The history of radio astronomy has been a series of discoveries, mostly serendipitous, made using a new instrument or using an old instrument in a new, unintended way. Theoretical predictions have had little influence, and in some cases actually delayed discovery by discouraging observers. Many of the key transformational discoveries were made while investigating other areas of astronomy; others came as a result of commercial and military pursuits unrelated to astronomy. We discuss how the transformational serendipitous discoveries in radio astronomy depended on luck, age, education, and the institutional affiliation of the scientists involved; we comment on the effect of peer review on the selection of research grants, the allocation of observing time, and the funding of new telescopes, and speculate on how it may constrain new discoveries. We discuss the decrease in the rate of new discoveries since the Golden Years of the 1960s and 1970s and the evolution of radio astronomy into a big-science, user-oriented discipline. We conclude with a discussion of the impact of computers in radio astronomy and speculations on the potential for future discoveries in radio astronomy – the unknown unknowns.
It has been suggested that providing multiple computers with automatic reward dispensers as enrichment to captive orangutans (Pongo spp) (as opposed to a single computer, with a care-staff person delivering reinforcers) might help improve behavioural outcomes. The purpose of the current study was to test this hypothesis by providing two computers with automatic reward dispensers to eight orangutans housed in four male-female pairs at Zoo Atlanta, USA. Subjects were observed for ten days during each of three phases: a baseline phase (during which no computer was provided); immediately followed by Phase 1 (during which one computer system was provided to each pair of subjects); immediately followed by Phase 2 (during which two computer systems were provided to each pair). Data were collected in 1-h sessions using instantaneous scan sampling. There was no habituation to the computer system, nor were there any significant increases in aggression, rough scratching, or abnormal behaviours in either computer phase, which indicates that computer-joystick systems are effective as enrichment for captive orangutans. However, a high level of interest in the computer was shown by only a few individuals, which highlights a need to take individual differences into consideration when providing computerised enrichment to captive non-human primates. It would also be advisable to provide other forms of enrichment to increase activity levels for individuals which are not interested in interacting with a computer, as well as to help increase the diversity of behaviours being stimulated by the enrichment.
You wouldn’t call it a classic joke. It’s more of a quip, to be honest; something you might hear at a computer-science convention. It is said that the number of people predicting the end of Moore’s law doubles every two years. Lol.
For the uninitiated, Moore’s law refers to Gordon Moore’s prediction, in 1965, that the number of transistors on a computer microchip would double every two years while the cost of computers would be halved. It was a brave prediction to make when microprocessors and home computers were still just a distant dream. But despite the countless experts predicting the demise of Moore’s law, as the quip insinuates, it has remained true for almost five decades, as Figure 31.1 demonstrates.
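To make the scale of that doubling rule concrete, the short Python sketch below simply compounds it over fifty years. The base year and transistor count (roughly the Intel 4004 of 1971) are illustrative assumptions, not figures taken from the chapter or from Figure 31.1.

```python
# Illustrative sketch of the "double every two years" rule.
# The base year and count are assumptions (approximately the Intel 4004);
# real chips only loosely track this idealised curve.

def transistors(year, base_year=1971, base_count=2_300):
    """Transistor count implied by doubling every two years."""
    doublings = (year - base_year) / 2
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```

Running this shows how an unassuming doubling rule compounds from a few thousand transistors to tens of billions within five decades.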
This chapter reviews the implications of learning sciences (LS) research for schools, including assessment, curriculum, teaching practice, and systemic transformation. A central theme of this review is the role of technology in education – its history, its failings and successes, and how future technology designs can be grounded in LS. The chapter then describes some trends and opportunities in the field of LS research, including the integration of individual and sociocultural research approaches; the ways that LS research can contribute to equity and diversity in learning and in schools; and the sociology and history of LS as a discipline.
Black adults are approximately twice as likely as non-Hispanic Whites to develop Alzheimer’s disease (AD), and they access diagnostic services later in their illness. This dictates the need to develop assessments that are cost-effective, easily administered, and sensitive to preclinical stages of AD, such as mild cognitive impairment (MCI). Two computerized cognitive batteries, the NIH Toolbox-Cognition and the Cogstate Brief Battery, have been developed. However, the utility of these measures for clinical characterization remains only partially determined. We sought to determine the convergent validity of these computerized measures in relation to consensus diagnosis in a sample of participants with MCI and healthy controls (HC).
Method:
Participants were community-dwelling Black adults who completed the neuropsychological battery and other Uniform Data Set (UDS) forms from the AD centers program for consensus diagnosis (HC = 61; MCI = 43) and the NIH Toolbox-Cognition and Cogstate batteries. Discriminant function analysis was used to determine which cognitive tests best differentiated the groups.
Results:
NIH Toolbox crystallized measures, Oral Reading and Picture Vocabulary, were the most sensitive in distinguishing MCI from HC. Secondarily, deficits in memory and executive subtests were also predictive. UDS neuropsychological test analyses showed the expected pattern of memory and executive functioning tests differentiating MCI from HC.
Conclusions:
Contrary to expectation, NIH Toolbox crystallized abilities appeared preferentially sensitive to diagnostic group differences. This study highlights the importance of further research into the validity and clinical utility of computerized neuropsychological tests within ethnic minority populations.
Chapter 9 examines the bubble in internet and other technology stocks that occurred at the end of the 1990s. This bubble witnessed the coming to market of many young firms which had never generated a profit. The excitement resulted in the NASDAQ index trebling in value in the 18 months prior to its peak in March 2000. By the end of 2000, however, it had lost more than half of its value. This bubble in tech stocks was not confined to the United States – it was a global phenomenon. The chapter then shows how the bubble triangle can explain the causes of the dot-com bubble. The spark was provided by the new internet technology. Marketability increased as a result of new technology and many more companies floating on stock exchanges. Monetary conditions were loose in the run-up to the bubble and there was a sharp rise in margin lending. Speculation was rampant in the run-up, thanks to the rise of the day trader. The chapter concludes by arguing that the modest levels of economic damage associated with the bursting of the dot-com bubble suggest it could have been useful. However, its minor economic impact might also have made the authorities and investors complacent about the housing bubble which followed on its heels.
As the field of applied linguistics ponders and even embraces the myriad roles technology affords language education, we frame this critical report within the context of the Modern Language Association's 2007 report, along with earlier state-of-the-field Annual Review of Applied Linguistics (ARAL) pieces (e.g., Blake, 2007; 2011) to consider not only where we've come from but also, crucially, where the field is headed. This article begins with an overview of the field, examining the role of technology and how it has been leveraged over decades of language teaching. We also explore issues such as the goals established by the Modern Language Association (MLA) with respect to shaping technological vision and the role of technology in enhancing the field of language education. We use this critical assessment to offer insights into how the field of computer-assisted language learning (CALL) can help shape the future of language teaching and learning.
A major challenge facing archaeologists is communicating our research to the public. Thankfully, new computational tools have enabled the testing and visualization of complex ideas in an easily packageable format. In this article we illustrate not only how agent-based modeling provides a platform for communicating complex ideas, but also how these game-like computer models can be explored and manipulated by members of the public, thereby increasing their engagement with archaeological explanations. We suggest that these new digital tools serve as an excellent aid for education on the importance of archaeological sites and artifacts. To illustrate this, we walk the reader through a step-by-step pipeline for running an agent-based model (ABM) as an experiment and exporting it into a form ready to be sent to State and Tribal Historic Preservation Offices (SHPOs and THPOs) in tandem with reports. Ultimately, we hope that this work will help demystify the computational archaeology process and lead to greater fluency in using agent-based modeling in research and outreach.
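As a rough illustration of what such a pipeline can look like, the sketch below runs a deliberately simple agent-based model as a replicated experiment and exports the results to a CSV file that could accompany a report. The agent rules, parameter names, and output format are assumptions made for illustration; they are not the model or workflow described in the article.

```python
# Minimal, self-contained ABM sketch (not the article's pipeline):
# agents random-walk on a grid and collect "artifacts"; we sweep one
# parameter, replicate each run, and export a tidy CSV for reporting.
import csv
import random

def run_model(n_agents, n_artifacts, steps, size=20, seed=0):
    rng = random.Random(seed)
    # Duplicate positions collapse, so this is at most n_artifacts sites.
    artifacts = {(rng.randrange(size), rng.randrange(size)) for _ in range(n_artifacts)}
    agents = [(rng.randrange(size), rng.randrange(size)) for _ in range(n_agents)]
    collected = 0
    for _ in range(steps):
        for i, (x, y) in enumerate(agents):
            dx, dy = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
            x, y = (x + dx) % size, (y + dy) % size
            agents[i] = (x, y)
            if (x, y) in artifacts:
                artifacts.discard((x, y))
                collected += 1
    return collected

# Experiment: vary the number of agents, replicate each setting 10 times.
with open("abm_experiment.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["n_agents", "replicate", "artifacts_collected"])
    for n_agents in (5, 10, 20):
        for rep in range(10):
            result = run_model(n_agents, n_artifacts=50, steps=200, seed=rep)
            writer.writerow([n_agents, rep, result])
```

The resulting table is small, human-readable, and easy to attach to a report or re-plot, which is the property that makes this kind of export useful for outreach and compliance documents.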
Objectives: Reducing cognitive decline depends on timely diagnosis. The aim of this systematic review was to analyze the currently available information and communication technology (ICT)-based instruments for early screening and detection of cognitive decline in terms of usability, validity, and reliability.
Methods: Electronic searches identified 1,785 articles, of which thirty-four met the inclusion criteria and were grouped according to their main purpose into test batteries, measures of isolated tasks, behavioral measures, and diagnostic tools.
Results: Thirty-one instruments were analyzed. Fifty-two percent were personal computer based, 26 percent tablet based, 13 percent laptop based, and one was mobile phone based. The most common input method was touchscreen (48 percent). The instruments were validated with a total of 4,307 participants: 2,146 were healthy older adults (M = 73.59; SD = 5.12), 1,104 had dementia (M = 74.65; SD = 3.98), and 1,057 had mild cognitive impairment (M = 74.84; SD = 4.46). Only 6 percent were administered at home, 19 percent reported outcomes about usability, and 22 percent about understandability. The methodological quality of the studies was good, with usability being the weakest methodological area. Most of the instruments obtained acceptable values of specificity and sensitivity.
Conclusions: It is necessary to create home-delivered instruments and to include usability studies in their design. Involving people with cognitive decline in all phases of the development process is of great importance for obtaining valuable and user-friendly products. It would be advisable for researchers to make an effort to provide cutoff points for their instruments.
This study aimed to assess head and neck cancer patients’ satisfaction with a patient-completed, touch-screen computer questionnaire for assessing Adult Co-morbidity Evaluation 27 co-morbidity scores prior to treatment, along with its clinical reliability.
Methods:
A total of 96 head and neck cancer patients were included in the audit. An accurate Adult Co-morbidity Evaluation 27 co-morbidity score was achieved via patient-completed questionnaire assessment for 97 per cent of participants.
Results:
In all, 96 per cent of patients found the use of a touch-screen computer acceptable and would be willing to use one again, and 62 per cent would be willing to do so without help. Patients were more likely to be willing to use the computer again without help if they were aged 65 years or younger (χ2 test; p = 0.0054) or had a performance status of 0 or 1 (χ2 test; p = 0.00034).
Conclusion:
Use of a touch-screen computer is an acceptable approach for assessing Adult Co-morbidity Evaluation 27 scores at pre-treatment assessment in a multidisciplinary joint surgical–oncology clinic.
The Internet may reduce constraints on a farmer's ability to receive and manage information, regardless of where the farm is located or when the information is used. Using a count data estimation procedure, this study attempts to examine the key farm, operator, regional, and household characteristics that influence the number of Internet applications used by farm households. Findings indicate that educational level of the farm operator, farm size, farm diversification, off-farm income, off-farm investments, and regional location of the farm have a significant impact on the number of Internet applications used.
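The study’s exact specification is not reproduced here, but a count-data model of this kind is commonly estimated as a Poisson regression. The sketch below, using synthetic data and illustrative variable names (not the paper’s dataset, covariates, or estimator), shows the general form such an estimation might take.

```python
# Hedged sketch of a count-data estimation of the kind the study describes:
# a Poisson regression of the number of Internet applications used on
# farm/household characteristics. Data and variable names are synthetic
# illustrations, not the paper's dataset or specification.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
education_years = rng.integers(8, 20, n)          # operator education
farm_acres = rng.lognormal(mean=6, sigma=1, size=n)  # farm size
off_farm_income = rng.binomial(1, 0.4, n)          # indicator covariate

# Simulate a count outcome that depends on the covariates.
linpred = -2.0 + 0.15 * education_years + 0.0002 * farm_acres + 0.5 * off_farm_income
n_apps = rng.poisson(np.exp(linpred))

X = sm.add_constant(np.column_stack([education_years, farm_acres, off_farm_income]))
model = sm.GLM(n_apps, X, family=sm.families.Poisson()).fit()
print(model.summary())
```

The coefficient signs and significance tests from such a fit are what allow statements like “education and farm size have a significant impact on the number of Internet applications used.”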
A minicomputer online information retrieval program is described that is designed to facilitate timely distribution of agricultural market news to state and county research and extension faculty. These faculty have designed programs that extend this information to clientele in their areas. An evaluation indicates that users find great value in this network. Usage has grown rapidly over the period the network has been available. This program is available and can be used by other states and clientele.
Computers and the Internet have created a revolution in the way astronomy can be communicated to the public. At Sydney Observatory we make full use of these recent developments. In our lecture room, a variety of sophisticated computer programs can show, with the help of a projection TV system, the appearance and motion of the sky at any place, date or time. The latest HST images obtained from the Internet can be shown, as can images taken through our own Meade 16-inch telescope. This recently installed computer-controlled telescope, with its accurate pointing, is an ideal instrument for a light-polluted site such as ours.
Computers change rapidly, yet the last survey on computer use in agriculture was in 1991. We surveyed Great Plains producers in 1995 and used logit analysis to characterize adopters and non-adopters. About 37% of these producers use computers, which is consistent with the general population. We confirmed previous surveys emphasizing the importance of education, age/experience, and other farm characteristics to adoption. However, we also found that education and experience may no longer be significant influences. Future research and education could focus on when and where computers are most needed, and therefore when adoption is most appropriate.
The potential use of computers and electronic technology has created considerable interest among educators in agricultural economics. This paper provides an overview of the use of electronic technology within agricultural economics curricula, examines areas in which technological development offers promise, and examines issues associated with adoption of the technology.
The geometric multigrid method (GMG) is one of the most efficient solution techniques for the discrete algebraic systems arising from elliptic partial differential equations. GMG utilizes a hierarchy of grids or discretizations and reduces the error at a number of frequencies simultaneously. Graphics processing units (GPUs) have recently burst onto the scientific computing scene as a technology that has yielded substantial performance and energy-efficiency improvements. A central challenge in implementing GMG on GPUs, though, is that the computational work on coarse levels cannot fully utilize the capacity of a GPU. In this work, we perform numerical studies of GMG on CPU-GPU heterogeneous computers. Furthermore, we compare our implementation with an efficient CPU implementation of GMG and with the most popular fast Poisson solver, based on the Fast Fourier Transform (FFT), as implemented in NVIDIA's cuFFT library.
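To illustrate the grid hierarchy the abstract refers to, and why coarse levels offer so little parallel work for a GPU, the following is a minimal textbook-style V-cycle for the 1-D Poisson problem in NumPy. It is a sketch under simplified assumptions, not the authors’ CPU-GPU implementation; note how each restriction roughly halves the number of unknowns, so the coarsest grids involve only a handful of points.

```python
# Minimal 1-D geometric multigrid V-cycle for -u'' = f on (0,1) with
# homogeneous Dirichlet boundary conditions. Textbook sketch only.
import numpy as np

def smooth(u, f, h, sweeps=3, omega=2/3):
    """Weighted-Jacobi smoothing on interior points."""
    for _ in range(sweeps):
        u_new = u.copy()
        u_new[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        u = u_new
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    """Full weighting: 2^k + 1 fine points -> 2^(k-1) + 1 coarse points."""
    rc = np.zeros((len(r) - 1) // 2 + 1)
    rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]
    return rc

def prolong(ec, n_fine):
    """Linear interpolation of the coarse correction to the fine grid."""
    e = np.zeros(n_fine)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h):
    if len(u) <= 3:                      # coarsest grid: one unknown, solve exactly
        u[1] = 0.5 * h * h * f[1]
        return u
    u = smooth(u, f, h)                  # pre-smoothing
    rc = restrict(residual(u, f, h))     # coarse problem has ~half the points
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)
    u = u + prolong(ec, len(u))          # coarse-grid correction
    return smooth(u, f, h)               # post-smoothing

n = 257                                  # 2^8 + 1 grid points
h = 1.0 / (n - 1)
x = np.linspace(0, 1, n)
f = np.pi ** 2 * np.sin(np.pi * x)       # exact solution is sin(pi x)
u = np.zeros(n)
for it in range(10):
    u = v_cycle(u, f, h)
    print(it, np.max(np.abs(residual(u, f, h))))
```

Even in this toy setting, the levels shrink from 257 points down to 3, which mirrors the abstract’s point: the fine-grid smoothing parallelizes well, but the coarse-grid work is too small to keep a GPU busy.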