This paper discusses aspects of recruiting subjects for economic laboratory experiments and shows how the Online Recruitment System for Economic Experiments (ORSEE) can help. The software package provides experimenters with a free, convenient, and powerful tool for organizing their experiments and sessions.
Cap is a software package (citeware) for economic experiments that enables experimenters to analyze the emotional states of subjects using z-Tree and FaceReader™. Cap can create videos of subjects on client computers based on the stimuli shown on screen and restrict the recorded material to relevant time frames. Another feature of Cap is the creation of time stamps in CSV format at prespecified screens (or at prespecified points in time) during the experiment, measured on the client computer. The software makes it easy to import these markers into FaceReader™. Cap is the first program that significantly simplifies the process of connecting z-Tree and FaceReader™ while maintaining very high timing precision. This paper describes the usage and underlying principles of Cap, as well as its advantages and limitations. Furthermore, we give a brief outlook on how Cap can be beneficial in other contexts.
We introduce the network model as a formal psychometric model, conceptualizing the covariance between psychometric indicators as resulting from pairwise interactions between observable variables in a network structure. This contrasts with standard psychometric models, in which the covariance between test items arises from the influence of one or more common latent variables. Here, we present two generalizations of the network model that encompass latent variable structures, establishing network modeling as part of the more general framework of structural equation modeling (SEM). In the first generalization, we model the covariance structure of latent variables as a network. We term this framework latent network modeling (LNM) and show that, with LNM, a unique structure of conditional independence relationships between latent variables can be obtained in an exploratory manner. In the second generalization, the residual variance–covariance structure of indicators is modeled as a network. We term this generalization residual network modeling (RNM) and show that, within this framework, identifiable models can be obtained in which local independence is structurally violated. These generalizations allow for a general modeling framework that can be used to fit, and compare, SEM models, network models, and the RNM and LNM generalizations. This methodology has been implemented in the free-to-use software package lvnet, which contains confirmatory model testing as well as two exploratory search algorithms: a stepwise search algorithm for low-dimensional datasets and penalized maximum likelihood estimation for larger datasets. We show in simulation studies that these search algorithms perform adequately in identifying the structure of the relevant residual or latent networks. We further demonstrate the utility of these generalizations in an empirical example on a personality inventory dataset.
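As a sketch of the implied covariance structures of the two generalizations, using assumed notation (Λ for factor loadings, Ψ for latent covariances, Θ for residual covariances, Ω for a symmetric matrix of partial correlations with zero diagonal, and Δ for a diagonal scaling matrix), and noting that the exact parameterization used in lvnet may differ in detail:

```latex
% LNM: the latent covariance matrix is parameterized as a Gaussian graphical model
\Sigma_{\mathrm{LNM}} \;=\; \Lambda\, \Delta_{\psi}\,(I - \Omega_{\psi})^{-1}\,\Delta_{\psi}\, \Lambda^{\top} \;+\; \Theta

% RNM: the residual covariance matrix is parameterized as a Gaussian graphical model
\Sigma_{\mathrm{RNM}} \;=\; \Lambda\, \Psi\, \Lambda^{\top} \;+\; \Delta_{\theta}\,(I - \Omega_{\theta})^{-1}\,\Delta_{\theta}
```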
MTVE is an open-source software tool (citeware) that can be used in laboratory and online experiments to implement video communication. The tool enables researchers to gather video data from these experiments in a form that can later be used for automatic analysis with machine-learning techniques. The browser-based tool comes with a simple user interface and can be easily integrated into z-Tree, oTree, and other experimental or survey tools. It gives experimenters control over several communication parameters (e.g., number of participants, resolution), produces high-quality video data, and circumvents the Cocktail Party Problem (i.e., the problem of separating speakers solely on the basis of audio input) by producing separate files per speaker. Using one of the recommended voice-to-text AI services, experimenters can transcribe the individual files, and MTVE can merge these individual transcriptions into one conversation.
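As a generic illustration of the merge step described above, the sketch below interleaves per-speaker transcript files by their start times; the directory name and the column layout (start_time, speaker, text) are hypothetical assumptions for illustration, not MTVE's actual export format or code.

```r
# Illustrative only: merge per-speaker transcripts into one conversation.
# Assumes each speaker's transcript is a CSV with columns start_time, speaker, text.
files <- list.files("transcripts", pattern = "\\.csv$", full.names = TRUE)
parts <- lapply(files, read.csv, stringsAsFactors = FALSE)
conversation <- do.call(rbind, parts)

# Interleave utterances from all speakers by their start time
conversation <- conversation[order(conversation$start_time), ]
write.csv(conversation, "conversation_merged.csv", row.names = FALSE)
```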
stratEst is a software package for strategy frequency estimation in the freely available statistical computing environment R (R Development Core Team in R Foundation for Statistical Computing, Vienna, 2022). The package aims to minimize the start-up costs of applying the modern strategy frequency estimation techniques used in experimental economics. Strategy frequency estimation (Stahl and Wilson in J Econ Behav Organ 25: 309–327, 1994; Stahl and Wilson in Games Econ Behav 10: 218–254, 1995) models the choices of participants in an economic experiment as a finite mixture of individual decision strategies. The parameters of the model describe the behavior associated with each strategy and its frequency in the data. stratEst provides a convenient and flexible framework for strategy frequency estimation, allowing the user to customize, store, and reuse sets of candidate strategies. The package includes useful functions for data processing and simulation, strategy programming, model estimation, parameter testing, model checking, and model selection.
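Schematically, and in notation assumed here rather than taken from the package, the finite-mixture model assigns each of the N participants to one of K candidate strategies with probability p_k, so that the likelihood of the observed choice sequences c_1, …, c_N is:

```latex
% Finite mixture of K decision strategies with shares p_k and
% strategy-specific behavioral parameters gamma_k (schematic form)
L(p, \gamma) \;=\; \prod_{i=1}^{N} \sum_{k=1}^{K} p_k \,
  \Pr\!\left( c_i \mid s_k, \gamma_k \right),
\qquad \sum_{k=1}^{K} p_k = 1 .
```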
OpenMx is free, full-featured, open-source structural equation modeling (SEM) software. OpenMx runs within the R statistical programming environment on Windows, Mac OS X, and Linux computers. The rationale for developing OpenMx is discussed, along with the philosophy behind the user interface. The OpenMx data structures are then introduced; these novel structures define the user interface framework and provide new opportunities for model specification. Two short example scripts for the specification and fitting of a confirmatory factor model are presented next. We end with an abbreviated list of modeling applications available in OpenMx 1.0 and a discussion of directions for future development.
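To give a flavor of the path-style (RAM) interface, here is a minimal sketch of a one-factor confirmatory factor model, modeled on the package's introductory demoOneFactor example; the two example scripts discussed in the paper may differ in detail.

```r
library(OpenMx)

data(demoOneFactor)                      # example dataset shipped with OpenMx
manifests <- names(demoOneFactor)        # observed indicators x1 ... x5
latents   <- "G"                         # single common factor

cfa <- mxModel(
  "OneFactorCFA", type = "RAM",
  manifestVars = manifests, latentVars = latents,
  mxPath(from = latents, to = manifests, free = TRUE, values = 0.8),  # factor loadings
  mxPath(from = manifests, arrows = 2, free = TRUE, values = 1),      # residual variances
  mxPath(from = latents, arrows = 2, free = FALSE, values = 1),       # fix latent variance to 1
  mxData(observed = cov(demoOneFactor), type = "cov",
         numObs = nrow(demoOneFactor))
)

fit <- mxRun(cfa)
summary(fit)
```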
Text is a major medium of contemporary interpersonal communication but is difficult for social scientists to study unless they have significant resources or the skills to build their own research platform. In this paper, we introduce a cloud-based software solution to this problem: ReChat, an online research platform for conducting experimental and observational studies of live text conversations. We demonstrate ReChat by applying it to a specific phenomenon of interest to political scientists: conversations among co-partisans. We present results from two studies, focusing on (1) self-selection factors that make chat participants systematically unrepresentative and (2) a pre-registered analysis of loquaciousness that finds a significant association between speakers’ ideological extremity and the amount they write in the chat. We conclude by discussing practical implications and advice for future practitioners of chat studies.
One pedagogical finding that has gained recent attention is the utility of active, effortful retrieval practice in effective learning. Essentially, humans learn best when they are asked to actively generate/recall knowledge for themselves, rather than receiving knowledge passively. In this paper, we (a) provide a framework for both practice and assessment within which students can organically develop active study habits, (b) share resources we have built to help implement such a framework in the linguistics classroom, and (c) provide some examples and evaluation of their success in the context of an introductory phonetics/phonology course.
Chapter 7 shows how in the 1980s patent law came to view computer-related subject matter through the lens of ‘abstractness’, and the role that materiality played in determining the fate of that subject matter. The chapter also looks at how, as a result of changes in technology, patent law gradually shifted its attention from the materiality of the subject matter to its specificity, and how in so doing the subject matter was dematerialised.
After looking at how software was created and consumed in the 1960s and how, as this changed, questions arose about the role intellectual property might play in the emerging software industry, Chapter 5 examines the contrasting ways that patentable subject matter was seen within the information technology industry and how these views were received within the law.
Chapter 6 looks at the problems patent law experienced in the 1960s and 1970s in attempting to reconcile the conflicting views of the industry about what the subject matter was and how it should be interpreted.
A Microsoft® Visual Basic program, WinClbclas, has been developed to calculate the chemical formulae of columbite-supergroup minerals based on data obtained from wet-chemical and electron-microprobe analyses and using the current nomenclature scheme adopted by the Commission on New Minerals, Nomenclature and Classification (CNMNC) of the International Mineralogical Association (IMA) for columbite-supergroup minerals. The program evaluates 36 IMA-approved species, three species that are questionable in terms of their unit-cell parameters, four insufficiently studied questionable species and one ungrouped species, all according to the dominant valence and constituent status in five mineral groups: ixiolite (MO2), wolframite (M1M2O4), samarskite (ABM2O8), columbite (M1M2O6) and wodginite (M1M2M32O8). Mineral compositions of the columbite supergroup are calculated on the basis of 24 oxygen atoms per formula unit. However, the formulae of the five groups from ixiolite to wodginite can also be estimated by the program on the basis of the cation and anion values in their typical mineral formulae (e.g. 4 cations and 8 oxygens for the wodginite group), with normalisation procedures. The Fe3+ and Fe2+ contents are estimated from microprobe-derived total FeO (wt.%) by stoichiometric constraints. WinClbclas allows users to: (1) enter up to 47 input variables for mineral compositions; (2) type and load multiple columbite-supergroup mineral compositions in the data entry section; (3) edit and load the Microsoft® Excel files used in calculating, classifying, and naming the columbite-supergroup minerals, together with the total monovalent to hexavalent ions; and (4) store all the calculated parameters in a Microsoft® Excel output file for further data evaluation. The program is distributed as a self-extracting setup file, including the necessary support files used by the program, a help file and representative sample data files.
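To illustrate the arithmetic behind an oxygen-basis formula calculation of the kind the program automates, the following sketch normalizes a hypothetical oxide analysis to 24 oxygen atoms per formula unit; the oxide values, rounded molar masses, and variable names are illustrative assumptions, not WinClbclas code, and the Fe3+/Fe2+ partitioning step is omitted.

```r
# Generic illustration of an oxygen-basis (24 O) formula calculation,
# NOT code from WinClbclas. Oxide wt.% values are hypothetical.
oxides   <- c(Nb2O5 = 55.0, Ta2O5 = 20.0, FeO = 15.0, MnO = 8.0, TiO2 = 2.0)   # wt.%
mol_wt   <- c(Nb2O5 = 265.81, Ta2O5 = 441.89, FeO = 71.84, MnO = 70.94, TiO2 = 79.87)
n_cation <- c(Nb2O5 = 2, Ta2O5 = 2, FeO = 1, MnO = 1, TiO2 = 1)   # cations per oxide
n_oxygen <- c(Nb2O5 = 5, Ta2O5 = 5, FeO = 1, MnO = 1, TiO2 = 2)   # oxygens per oxide

moles   <- oxides / mol_wt          # moles of each oxide
cations <- moles * n_cation         # moles of cations contributed
oxygens <- moles * n_oxygen         # moles of oxygen contributed
factor  <- 24 / sum(oxygens)        # scale so total oxygen = 24 per formula unit
apfu    <- cations * factor         # cations per formula unit
round(apfu, 3)
```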
Medical devices increasingly include software components, which facilitate remote patient monitoring. The introduction of software into previously analog medical devices, as well as innovation in software-driven devices, may introduce new safety concerns – all the more so when such devices are used in patients’ homes, well outside of traditional health care delivery settings. We review four key mechanisms for the post-market surveillance of medical devices in the United States: (1) Post-market trials and registries; (2) manufacturing plant inspections; (3) adverse event reporting; and (4) recalls. We use comprehensive regulatory data documenting adverse events and recalls to describe trends in the post-market safety of medical devices, based on the presence or absence of software. Overall, devices with software are associated with more reported adverse events (i.e. individual injuries and deaths) and more high-severity recalls, compared to devices without software. However, in subgroup analyses of individual medical specialties, we consistently observe differences in recall probability but do not consistently detect differences in adverse events. These results suggest that adverse events are a noisy signal of post-market safety and not necessarily a reliable predictor of subsequent recalls. As patients and health care providers weigh the benefits of new remote monitoring technologies against potential safety issues, they should not assume that safety concerns will be readily identifiable through existing post-market surveillance mechanisms. Both health care providers and developers of remote patient monitoring technologies should therefore consider how they might proactively ensure that newly introduced remote patient monitoring technologies work safely and as intended.
This book takes as its starting point recent debates over the dematerialisation of subject matter which have arisen because of changes in information technology, molecular biology, and related fields that produced a subject matter with no obvious material form or trace. Arguing against the idea that dematerialisation is a uniquely twenty-first century problem, this book looks at three situations where US patent law has already dealt with a dematerialised subject matter: nineteenth century chemical inventions, computer-related inventions in the 1970s, and biological subject matter across the twentieth century. In looking at what we can learn from these historical accounts about how the law responded to a dematerialised subject matter and the role that science and technology played in that process, this book provides a history of patentable subject matter in the United States. This title is available as Open Access on Cambridge Core.
A modern automatic weather station (AWS) is a sophisticated collection of components, sensors and electronics modules tied together by software, together making up a data acquisition and processing system. Many of today’s and tomorrow’s products follow a broadly similar set of basic processes, and this chapter sets out to explain these basic processing steps, keeping technical terminology to a minimum and illustrating three different approaches to ‘system architecture’. The overview provided by this chapter offers familiarity with the key concepts, system approaches and application types; from there, users can review potential products and suppliers using Internet search facilities to gather up-to-date product information.
The differences between AI software and conventional software are important, as they have implications for how a transaction in AI software will be treated under sales law. Next, the chapter explores what it means to own an AI system: whether it is a chattel, merely software, or something more than software. If AI is merely software, it will be protected by copyright, but there will be problems with licensing. If, however, AI is encapsulated in a physical medium, the transaction may be treated as a sale of goods, or a sui generis position may be taken. A detailed analysis of the Court of Justice of the European Union’s decision in Computer Associates v The Software Incubator is provided. An AI transaction can be regarded as a sale of goods. Because the sale of goods regime is insufficient, however, a transaction regime for AI systems has to be developed, one that includes ownership and fair use (assuming AI is regarded as merely software) and the right to repair (whether AI is treated as goods or software).
Germany’s 2019 Digital Healthcare Act (Digitale-Versorgung-Gesetz, or DVG) created a number of opportunities for the digital transformation of the healthcare delivery system. Key among these was the creation of a reimbursement pathway for patient-centered digital health applications (digitale Gesundheitsanwendungen, or DiGA). Worldwide, this is the first structured pathway for “prescribable” health applications at scale. As of October 10, 2023, 49 DiGA were listed in the official directory maintained by Germany’s Federal Institute for Drugs and Medical Devices (BfArM); these are prescribable by physicians and psychotherapists and reimbursed by the German statutory health insurance system for all its 73 million beneficiaries. Looking ahead, a major challenge facing DiGA manufacturers will be the generation of the evidence required for ongoing price negotiations and reimbursement. Current health technology assessment (HTA) methods will need to be adapted for DiGA.
Methods
We describe the core issues that distinguish HTA in this setting: (i) the explicit allowance for more flexible research designs; (ii) the nature of initial evidence generation, which can be delivered (in its final form) up to one year after a product becomes reimbursable; and (iii) the dynamic nature of both product development and product evaluation. We present the digital health applications in the German DiGA scheme as a case study and highlight the role of real-world evidence (RWE) in the successful evaluation of DiGA on an ongoing basis.
Results
When a DiGA is likely to be updated and assessed regularly, full-scale RCTs are infeasible; we therefore make the case for using real-world data (RWD) and RWE for dynamic HTAs.
Conclusions
Continuous evaluation using RWD is a regulatory innovation that can help improve the quality of DiGA on the market.
REAP-2 is an interactive dose-response curve estimation tool for Robust and Efficient Assessment of drug Potency. It provides user-friendly dose-response curve estimation for in vitro studies and, with a redesigned user interface, conducts statistical testing for model comparisons. We also make a major update to the underlying estimation method, which now uses penalized beta regression and demonstrates high reliability and accuracy in dose estimation and uncertainty quantification. In this note, we describe the method and implementation of REAP-2, with a focus on potency estimation and drug comparison.
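As a schematic of how penalized beta regression can be used for dose-response estimation (the notation, link function, and penalty form below are assumptions for illustration, not necessarily REAP-2's exact specification), a response fraction y_i in (0,1) observed at dose d_i can be modeled as:

```latex
% Schematic penalized beta regression for dose-response estimation
y_i \sim \mathrm{Beta}\!\left(\mu_i \phi,\; (1-\mu_i)\,\phi\right),
\qquad
\operatorname{logit}(\mu_i) \;=\; \beta_0 + \beta_1 \log_{10}(d_i),

% Parameters estimated by maximizing a penalized log-likelihood
(\hat{\beta}, \hat{\phi}) \;=\; \arg\max_{\beta,\,\phi}
\sum_{i} \log f_{\mathrm{Beta}}\!\left(y_i \mid \mu_i, \phi\right) \;-\; \lambda\, P(\beta).
```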
In Chapter 1 the different medical study designs are discussed and the difference between age, period and cohort effects is explained. Furthermore, some general information (e.g. prior knowledge, software used for the examples) needed to work through the book is provided. Finally, there is a short section in which the differences between the second and third edition are outlined.
An emergent volume electron microscopy technique called cryogenic serial plasma focused ion beam milling scanning electron microscopy (pFIB/SEM) can decipher complex biological structures by building a three-dimensional picture of biological samples at mesoscale resolution. This is achieved by collecting consecutive SEM images after successive rounds of FIB milling, each of which exposes a new surface. Due to instrumental limitations, some image processing is necessary before 3D visualization and analysis of the data are possible. SEM images are affected by noise, drift, and charging effects that can make precise 3D reconstruction of biological features difficult. This article presents Okapi-EM, an open-source napari plugin developed to process and analyze cryogenic serial pFIB/SEM images. Okapi-EM enables automated image registration of slices, evaluation of image quality metrics specific to pFIB/SEM imaging, and mitigation of charging artifacts. Implementation of Okapi-EM within the napari framework makes the tools both user- and developer-friendly, through provision of a graphical user interface and access to Python programming.