Principal components analysis can be redefined in terms of the regression of observed variables upon component variables. Two criteria for the adequacy of a component representation in this context are developed and are shown to lead to different component solutions. Both criteria are generalized to allow weighting, the choice of weights determining the scale invariance properties of the resulting solution. A theorem is presented giving necessary and sufficient conditions for equivalent component solutions under different choices of weighting. Applications of the theorem are discussed that involve the components analysis of linearly derived variables and of external variables.
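As a minimal illustration of this regression view (a plain, unweighted NumPy sketch, not the weighted criteria developed in the paper), the leading components can be treated as predictors of the observed variables, with the adequacy of the representation judged by the residual variance left unexplained:

```python
# Minimal, unweighted sketch: principal components viewed as the regression of
# observed variables on component scores, with adequacy judged by residual variance.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))          # hypothetical observed variables (n x p)
X = X - X.mean(axis=0)                 # column-centre

# Component scores from the leading right singular vectors
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
F = X @ Vt[:k].T                       # scores on the first k components

# Regress each observed variable on the components (ordinary least squares)
B, *_ = np.linalg.lstsq(F, X, rcond=None)
X_hat = F @ B                          # fitted values

residual_var = np.mean((X - X_hat) ** 2)
total_var = np.mean(X ** 2)
print(f"proportion of variance unexplained: {residual_var / total_var:.3f}")
```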
Guttman's assumption underlying his definition of “total images” is rejected: partial images are not in general convergent everywhere, and divergence everywhere is shown to be possible. The type of convergence that always holds for partial images is convergence in quadratic mean; hence, total images are redefined as quadratic-mean limits. In determining the convergence type in special situations, the asymptotic properties of certain correlations are important; in some cases they imply convergence almost everywhere, which also holds for a countable population, under multivariate normality, or with independent variables. The interpretations of a total image as a predictor and as a “common-factor score”, respectively, are made precise.
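For reference, the two modes of convergence being contrasted here, in generic notation rather than the paper's:

```latex
% Generic definitions (not the paper's notation): the partial images
% \hat{x}_n converge to the total image \hat{x} in quadratic mean iff
\[
  \lim_{n\to\infty} E\!\left[\left(\hat{x}_n - \hat{x}\right)^2\right] = 0 ,
\]
% whereas convergence almost everywhere requires
\[
  P\!\left(\lim_{n\to\infty} \hat{x}_n = \hat{x}\right) = 1 .
\]
```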
Helices are one of the most frequently encountered symmetries in biological assemblies. Helical symmetry has been exploited in electron microscopic studies because a limited number of filament images can, in principle, provide all the information needed for a three-dimensional reconstruction of a polymer. Over the past 25 years, three-dimensional reconstructions of helical polymers from cryo-EM images have shifted completely from Fourier–Bessel methods to single-particle approaches. Single-particle approaches have made it possible to overcome the fact that very few biological polymers have crystalline order, and despite the flexibility and heterogeneity present in most of these polymers, reaching a resolution at which accurate atomic models can be built has now become the standard. While determining the correct helical symmetry may be very simple for something like F-actin, for many other polymers, particularly those formed from small peptides, it can be much more challenging. This review discusses why symmetry determination can be problematic, and why trial-and-error methods are still the best approach. Studies of many macromolecular assemblies, such as icosahedral capsids, have usually found that not imposing symmetry leads to a great reduction in resolution while at the same time revealing possibly interesting asymmetric features. We show that for certain helical assemblies asymmetric reconstructions can sometimes lead to greatly improved resolution. Further, in the case of supercoiled flagellar filaments from bacteria and archaea, we show that imposing helical symmetry not only can be wrong, but is also unnecessary and obscures the mechanisms whereby these filaments supercoil.
Treating images as data has become increasingly popular in political science. While existing classifiers for images reach high levels of accuracy, it is difficult to systematically assess the visual features on which they base their classification. This paper presents a two-level classification method that addresses this transparency problem. In the first stage, an image segmenter detects the objects present in the image and a feature vector is created from those objects. In the second stage, this feature vector is used as input for standard machine learning classifiers to discriminate between images. We apply this method to a new dataset of more than 140,000 images to detect which ones display political protest. This analysis demonstrates three advantages of this paper's approach. First, identifying objects in images improves transparency by providing human-understandable labels for the objects shown in an image. Second, knowing these objects enables analysis of which of them distinguish protest images from non-protest ones. Third, comparing the importance of objects across countries reveals how protest behavior varies. These insights are not available from conventional computer vision classifiers and provide new opportunities for comparative research.
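A sketch of the two-level idea follows; the detected object labels, counts, and classifier choice here are hypothetical stand-ins rather than the paper's exact pipeline. Detected objects become a named count vector, an off-the-shelf classifier separates protest from non-protest images, and feature importances map back to human-readable object labels:

```python
# Illustrative two-stage sketch (detector output and classifier are hypothetical,
# not the paper's pipeline): object counts per image -> named feature vector ->
# standard classifier, with importances tied back to object labels.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer

# Stage 1 output: objects detected per image (would come from an image segmenter).
detections = [
    {"person": 12, "flag": 3, "sign": 5},      # protest-like scene
    {"car": 4, "traffic light": 2},            # street scene
    {"person": 30, "banner": 6, "police": 2},  # protest-like scene
    {"dog": 1, "person": 2},                   # everyday scene
]
labels = [1, 0, 1, 0]  # 1 = protest, 0 = non-protest

vec = DictVectorizer(sparse=False)
X = vec.fit_transform(detections)              # human-readable feature vector

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Feature importances map back to named objects, which is the transparency gain.
for name, imp in zip(vec.get_feature_names_out(), clf.feature_importances_):
    print(f"{name}: {imp:.2f}")
```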
Hyperplexed in-situ targeted proteomics via antibody immunodetection (i.e., >15 markers) is changing how we classify cells and tissues. Unlike in other high-dimensional single-cell assays (flow cytometry, single-cell RNA sequencing), the human eye is a necessary component in multiple procedural steps: image segmentation, signal thresholding, antibody validation, and iconographic rendering. Established methods complement human image evaluation but may carry undisclosed biases in this new context; we therefore re-evaluate all of these steps in hyperplexed proteomics. We found that the human eye can discriminate fewer than 64 out of 256 gray levels and has limitations in discriminating luminance levels in conventional histology images. Furthermore, only images containing visible signal are typically selected, and eye-guided digital thresholding is used to separate signal from noise. BRAQUE, a hyperplexed proteomic tool, can extract, in a marker-agnostic fashion, granular information from markers that have a very low signal-to-noise ratio and are therefore not visualized by traditional visual rendering. By analyzing a public human lymph node dataset, we also found unexpected staining results from validated antibodies, which highlights the need to upgrade the definition of antibody specificity in hyperplexed immunostaining. Spatially hyperplexed methods upgrade and supplant traditional image-based analysis of tissue immunostaining, extending it beyond what the human eye can contribute.
Analytical electron microscopy was used to confirm the location of pillars of zirconia in pillared montmorillonite. Data show that the pillared clay is of “high” quality, with surface areas ranging from 200 to 250 m2/g and (001) spacings in the 17–18 Å range. The zirconia-rich pillars were observed using bright-field imaging, annular dark-field imaging, and energy-filtered imaging. The composition of the pillars was confirmed by performing nano-analysis using energy-dispersive X-ray spectroscopy and electron energy-loss spectroscopy. The pillars apparently have an irregular shape <50 Å in size. The shape and relatively large size of the pillars suggest that zirconia dispersion is not ideally distributed in this sample. This study is apparently the first report of electron microscopy observation of pillaring material in clays.
Imaging platforms for generating highly multiplexed histological images are continually being developed and improved. Significant improvements have also been made in the accuracy of methods for automated cell segmentation and classification. However, less attention has focused on the quantification and analysis of the resulting point clouds, which describe the spatial coordinates of individual cells. We focus here on a particular spatial statistical method, the cross-pair correlation function (cross-PCF), which can identify positive and negative spatial correlation between cells across a range of length scales. However, limitations of the cross-PCF hinder its widespread application to multiplexed histology. For example, it can only consider relations between pairs of cells, and cells must be classified using discrete categorical labels (rather than continuous labels such as stain intensity). In this paper, we present three extensions to the cross-PCF which address these limitations and permit more detailed analysis of multiplex images: topographical correlation maps can visualize local clustering and exclusion between cells; neighbourhood correlation functions can identify colocalization of two or more cell types; and weighted PCFs describe spatial correlation between points with continuous (rather than discrete) labels. We apply these extended PCFs to synthetic and biological datasets in order to demonstrate the insight that they can generate.
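A minimal sketch of an unweighted cross-PCF estimate between two cell types is shown below; it ignores the edge corrections, neighbourhood functions, and weighting that the paper develops, and uses synthetic coordinates:

```python
# Minimal sketch of a cross-pair correlation estimate between two cell types
# (no edge correction or weighting, unlike the full method in the paper).
import numpy as np
from scipy.spatial import distance_matrix

rng = np.random.default_rng(1)
cells_a = rng.uniform(0, 100, size=(200, 2))   # coordinates of type-A cells
cells_b = rng.uniform(0, 100, size=(150, 2))   # coordinates of type-B cells
area = 100.0 * 100.0

d = distance_matrix(cells_a, cells_b)
radii = np.arange(1.0, 20.0, 1.0)
dr = 1.0
density_b = len(cells_b) / area

for r in radii:
    # Pairs of (A, B) cells separated by a distance in [r, r + dr)
    pairs = np.sum((d >= r) & (d < r + dr))
    annulus = np.pi * ((r + dr) ** 2 - r ** 2)
    expected = len(cells_a) * density_b * annulus   # expectation under complete spatial randomness
    g = pairs / expected                            # g > 1: clustering, g < 1: exclusion
    print(f"r = {r:4.1f}  g_ab(r) = {g:.2f}")
```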
A nanostructural analysis of pillared clay samples using high-resolution transmission electron microscopy (HRTEM) has been developed. Montmorillonite samples were pillared using partially hydrolyzed Al and Fe solutions. Two samples, M01 and M05, corresponding to Fe/(Fe+Al) ratios of 0.1 and 0.5, respectively, were analyzed. The different steps of image filtration by ring-shaped masks are illustrated and discussed using lattice images of sample M01. This procedure is used to show the heterogeneous distribution of the basal spacings in the different ordered domains. Domains of mesoporosity and the distribution of the different Fe species are studied specifically in sample M05. The quantitative HRTEM results are discussed and compared with X-ray diffraction patterns obtained from the same sample.
Invasive emergent and floating macrophytes can have detrimental impacts on aquatic ecosystems. Management of these aquatic weeds frequently relies upon foliar application of aquatic herbicides. However, there is inherent variability of overspray (herbicide loss) for foliar applications into waters within and adjacent to the targeted treatment area. The spray retention (tracer dye captured) of four invasive broadleaf emergent species (water hyacinth, alligatorweed, creeping water primrose, and parrotfeather) and two emergent grass-like weeds (cattail and torpedograss) was evaluated. For all species, spray retention was simulated using foliar applications of rhodamine WT (RWT) dye as a herbicide surrogate under controlled mesocosm conditions. Spray retention of the broadleaf species was first evaluated using a CO2-pressurized spray chamber over dense vegetation growth or no plants (positive control) at a greenhouse (GH) scale. Broadleaf and grass-like species were then evaluated in larger outdoor mesocosms (OM). These applications were made using a CO2-pressurized backpack sprayer. Evaluation metrics included the influence of species-wise canopy cover and height on in-water RWT concentration, using image analysis and modeling techniques. Results indicated spray retention was greatest for water hyacinth (GH, 64.7 ± 7.4; OM, 76.1 ± 3.8). Spray retention values were similar among the three sprawling marginal species: alligatorweed (GH, 37.5 ± 4.5; OM, 42 ± 5.7), creeping water primrose (GH, 54.9 ± 7.2; OM, 52.7 ± 5.7), and parrotfeather (GH, 48.2 ± 2.3; OM, 47.2 ± 3.5). Canopy cover and height were strongly correlated with spray retention for broadleaf species and less strongly correlated for grass-like species. Although torpedograss and cattail were similar in percent foliar coverage, they differed in percent spray retention (OM, 8.5 ± 2.3 and 28.9 ± 4.1, respectively). The upright leaf architecture of the grass-like species likely explains their lower spray retention values in comparison to the broadleaf species.
How does a ‘space culture’ emerge and evolve, and how can archaeologists study such a phenomenon? The International Space Station Archaeological Project seeks to analyse the social and cultural context of an assemblage relating to the human presence in space. Drawing on concepts from contemporary archaeology, the project pursues a unique perspective beyond sociological or ethnographical approaches. Semiotic analysis of material culture and proxemic analysis of embodied space can be achieved using NASA's archives of documentation, images, video and audio media. Here, the authors set out a method for the study of this evidence. Understanding how individuals and groups use material culture in space stations, from discrete objects to contextual relationships, promises to reveal intersections of identity, nationality and community.
Amyloid plaques, one of the main hallmarks of Alzheimer's disease (AD), are classified based on their morphology into diffuse plaques (a common finding in the brains of people without Alzheimer's disease (non-AD) and without impaired cognitive function) and dense-core plaques (associated with cognitive impairment). We sought to determine the usability of gray-level co-occurrence matrix (GLCM) texture parameters of homogeneity and heterogeneity for differentiating amyloid plaque images obtained from AD and non-AD individuals. Images of amyloid-β (Aβ)-immunostained brain tissue samples were obtained from the Aging, Dementia and Traumatic Brain Injury Project. A total of 1,039 plaques were isolated from different brain regions of 69 AD and non-AD individuals and used for GLCM analysis. Images of Aβ-stained plaques show higher values of heterogeneity parameters and lower values of homogeneity parameters in AD patients, and vice versa in non-AD patients. Additionally, GLCM analysis shows differences in Aβ plaque texture between different brain regions in non-AD patients and correlates with variables that characterize the patients' dementia status. The present study shows that GLCM texture analysis is an efficient method to discriminate between different types of amyloid plaques based on their morphology and thus can prove a valuable tool in the neuropathological investigation of dementia.
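A sketch of GLCM homogeneity- and heterogeneity-type parameters using scikit-image follows; the distances, angles, and the random stand-in image are illustrative rather than the study's exact settings:

```python
# Sketch of GLCM texture features with scikit-image (parameter choices are
# illustrative, not the study's exact settings).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
plaque = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in for a plaque ROI

glcm = graycomatrix(
    plaque,
    distances=[1, 2, 4],
    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
    levels=256,
    symmetric=True,
    normed=True,
)

# "Homogeneity-type" (homogeneity, energy) and "heterogeneity-type"
# (contrast, dissimilarity) parameters, averaged over all offsets.
for prop in ("homogeneity", "energy", "contrast", "dissimilarity"):
    value = graycoprops(glcm, prop).mean()
    print(f"{prop}: {value:.4f}")
```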
Epithelial–mesenchymal transition (EMT) is an essential biological process, also implicated in pathological settings such as cancer metastasis, in which epithelial cells transdifferentiate into mesenchymal cells. We devised an image analysis pipeline to distinguish between tissues composed of epithelial and mesenchymal cells, based on features extracted from immunofluorescence images of different biochemical markers. Mammary epithelial cells were cultured with 0 (control), 2, 4, or 10 ng/mL TGF-β1, a well-established EMT inducer. Cells were fixed, stained, and imaged for E-cadherin, actin, fibronectin, and nuclei via immunofluorescence microscopy. Feature selection was performed on different combinations of individual cell markers using a Bag-of-Features extraction. The control and high-dose images comprised the training data set, and the intermediate-dose images comprised the testing data set. A feature distance analysis was performed to quantify differences between the treatment groups. The pipeline successfully distinguished between the control (epithelial) and high-dose (mesenchymal) groups, and demonstrated progression along the EMT process in the intermediate-dose groups. Validation using quantitative PCR (qPCR) demonstrated that biomarker expression measurements correlated well with the feature distance analysis. Overall, we identified image pipeline characteristics for feature extraction and quantification of immunofluorescence images to distinguish progression of EMT.
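A minimal Bag-of-Features sketch is given below; using raw image patches as descriptors is a simplification of the paper's pipeline. Patches are clustered into a visual codebook, each image becomes a histogram of codewords, and groups are compared by feature distance:

```python
# Minimal bag-of-features sketch: raw image patches are clustered into a visual
# codebook and each image is summarized as a histogram of codewords.
# (Raw-patch descriptors are a simplification of the paper's pipeline.)
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
images = [rng.random((128, 128)) for _ in range(6)]   # stand-ins for marker-channel images

# Collect patch descriptors from all training images
patches = np.vstack([
    extract_patches_2d(img, (8, 8), max_patches=200, random_state=0).reshape(200, -1)
    for img in images
])

codebook = KMeans(n_clusters=20, n_init=10, random_state=0).fit(patches)

def bag_of_features(img):
    """Normalized histogram of codeword assignments for one image."""
    p = extract_patches_2d(img, (8, 8), max_patches=200, random_state=0).reshape(200, -1)
    words = codebook.predict(p)
    hist, _ = np.histogram(words, bins=np.arange(21))
    return hist / hist.sum()

# Feature-distance comparison between two images (Euclidean distance here)
d = np.linalg.norm(bag_of_features(images[0]) - bag_of_features(images[1]))
print(f"feature distance: {d:.3f}")
```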
We provide an introduction to the functioning, implementation, and challenges of convolutional neural networks (CNNs) for classifying visual information in the social sciences. This tool can help scholars make the tedious task of classifying images and extracting information from them more efficient. We illustrate the implementation and impact of this methodology by coding handwritten information from vote tallies. Our paper not only demonstrates the contributions of CNNs to both scholars and policy practitioners, but also presents the practical challenges and limitations of the method, providing advice on how to deal with these issues.
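A small illustrative CNN in PyTorch for fixed-size grayscale crops, such as handwritten digits cropped from tally sheets, is sketched below; the architecture, input size, and training details are placeholders rather than the paper's model:

```python
# Small illustrative CNN in PyTorch for fixed-size grayscale crops (e.g.,
# handwritten digits from vote tallies); architecture and sizes are placeholders.
import torch
import torch.nn as nn

class TallyCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, n_classes)  # assumes 28x28 input crops

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TallyCNN()
dummy = torch.randn(8, 1, 28, 28)          # batch of 8 grayscale crops
logits = model(dummy)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 10, (8,)))
loss.backward()                            # an optimizer.step() would complete one training step
print(logits.shape)                        # torch.Size([8, 10])
```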
Annual resolution sediment layers, known as varves, can provide continuous and high-resolution chronologies of sedimentary sequences. In addition, varve counting is not burdened with the high laboratory costs of geochronological analyses. Despite a more than 100-year history of use, many existing varve counting techniques are time consuming and difficult to reproduce. We present countMYvarves, a varve counting toolbox which uses sliding-window autocorrelation to count the number of repeated patterns in core scans or outcrop photos. The toolbox is used to build an annually-resolved record of sedimentation rates, which are depth-integrated to provide ages. We validate the model with repeated manual counts of a high sedimentation rate lake with biogenic varves (Herd Lake, USA) and a low sedimentation rate glacial lake (Lago Argentino, Argentina). In both cases, countMYvarves is consistent with manual counts and provides additional sedimentation rate data. The toolbox performs multiple simultaneous varve counts, enabling uncertainty to be quantified and propagated into the resulting age-depth model. The toolbox also includes modules to automatically exclude non-varved portions of sediment and interpolate over missing or disrupted sediment. CountMYvarves is open source, runs through a graphical user interface, and is available online for download for use on Windows, macOS or Linux at https://doi.org/10.5281/zenodo.4031811.
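A sketch of the core idea, sliding-window autocorrelation on a one-dimensional brightness profile, is shown below; it illustrates the principle only and is not the countMYvarves code:

```python
# Sketch of sliding-window autocorrelation layer counting (illustration only,
# not countMYvarves): within each window of a 1-D brightness profile, the lag
# of the first autocorrelation peak gives the local layer (varve) thickness;
# depth-integrating 1/thickness would then give an age-depth model.
import numpy as np

rng = np.random.default_rng(0)
depth = np.arange(5000)                          # pixels down-core
profile = np.sin(2 * np.pi * depth / 25) + 0.3 * rng.normal(size=depth.size)

window, step = 500, 250
thicknesses = []
for start in range(0, depth.size - window, step):
    seg = profile[start:start + window]
    seg = seg - seg.mean()
    ac = np.correlate(seg, seg, mode="full")[window - 1:]   # autocorrelation, lags >= 0
    lag = np.argmax(ac[5:100]) + 5               # first dominant repeat length in pixels
    thicknesses.append(lag)

print(f"median recovered layer thickness: {np.median(thicknesses):.0f} px (true period: 25 px)")
```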
Multicomponent polymer systems are of interest in organic photovoltaic and drug delivery applications, among others in which diverse morphologies influence performance. An improved understanding of morphology classification, driven by composition-informed prediction tools, will aid polymer engineering practice. We use a modified Cahn–Hilliard model to simulate polymer precipitation. Such physics-based models require high-performance computations that prevent rapid prototyping and iteration in engineering settings. To reduce the required computational costs, we apply machine learning (ML) techniques, in conjunction with the simulations, to cluster and subsequently predict the morphologies of the simulated polymer blends. Integrating ML and simulations in this manner reduces the number of simulations needed to map out the morphology of polymer blends as a function of input parameters and also generates a data set that others can use to this end. We explore dimensionality reduction, via principal component analysis and autoencoder techniques, and analyze the resulting morphology clusters. Supervised ML using Gaussian process classification was subsequently used to predict morphology clusters from species molar fraction and interaction parameter inputs. Manual pattern clustering yielded the best results, but ML techniques were able to predict the morphology of polymer blends with ≥90% accuracy.
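A sketch of this workflow follows, with synthetic random arrays standing in for the Cahn–Hilliard outputs: PCA reduces flattened morphology images, clusters are assigned, and a Gaussian process classifier maps molar fraction and interaction parameter inputs to a morphology cluster:

```python
# Sketch of the ML workflow with synthetic stand-ins for the simulation outputs:
# PCA for dimensionality reduction, clustering of the embeddings, and a Gaussian
# process classifier from (molar fraction, interaction parameter) to cluster label.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessClassifier

rng = np.random.default_rng(0)
n_sims = 60
images = rng.random((n_sims, 32 * 32))            # flattened simulated morphologies (stand-in)
params = rng.random((n_sims, 2))                  # columns: molar fraction, interaction parameter

embedded = PCA(n_components=5).fit_transform(images)        # dimensionality reduction
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedded)

gpc = GaussianProcessClassifier(random_state=0).fit(params, clusters)
print("predicted cluster:", gpc.predict([[0.4, 0.7]]))
print("training accuracy:", gpc.score(params, clusters))
```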
Symmetry is omnipresent in nature, and we encounter it routinely in everyday life. It is also common at the microscopic level, where symmetry is often key to the proper function of core biological processes. The human brain is exquisitely well suited to recognizing such symmetrical features with ease. In contrast, computational recognition of such patterns in images is still surprisingly challenging. In this paper we describe a mathematical approach to identifying smaller local symmetrical structures within larger images. Our algorithm attributes a local symmetry score to each image pixel, which subsequently allows the identification of the symmetrical centers of an object. Though there are already many methods available to detect symmetry in images, to the best of our knowledge, our algorithm is the first that is easily applicable in ImageJ/FIJI. We have created an interactive plugin in FIJI that allows the detection and thresholding of local symmetry values. The plugin combines the different reflection symmetry axes of a square to obtain good coverage of reflection symmetry in all directions. To demonstrate the plugin's potential, we analyzed images of bacterial chemoreceptor arrays and of intracellular vesicle trafficking events, two prominent examples of biological systems with symmetrical patterns.
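A sketch of a per-pixel reflection-symmetry score for a single vertical mirror axis is given below; the FIJI plugin combines several axes of the square neighbourhood, whereas this standalone Python version only illustrates the scoring idea:

```python
# Sketch of a per-pixel reflection-symmetry score (one vertical mirror axis only;
# the FIJI plugin combines several axes of the square neighbourhood).
import numpy as np

def local_symmetry_map(img, half=8):
    """Correlate each (2*half+1)-sized patch with its left-right mirror image."""
    h, w = img.shape
    score = np.zeros_like(img, dtype=float)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = img[y - half:y + half + 1, x - half:x + half + 1]
            mirrored = patch[:, ::-1]
            a, b = patch - patch.mean(), mirrored - mirrored.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            score[y, x] = (a * b).sum() / denom if denom > 0 else 0.0
    return score

rng = np.random.default_rng(0)
img = rng.random((64, 64))
img[20:40, 20:40] += img[20:40, 20:40][:, ::-1]   # plant a mirror-symmetric region
sym = local_symmetry_map(img)
cy, cx = np.unravel_index(np.argmax(sym), sym.shape)
print(f"highest local symmetry score near pixel ({cy}, {cx})")
```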
Deep learning has pushed the scope of digital pathology beyond simple digitization and telemedicine. The incorporation of these algorithms into routine workflow is on the horizon and may be a disruptive technology, reducing processing time and increasing the detection of anomalies. While the newest computational methods enjoy much of the press, incorporating deep learning into standard laboratory workflow requires many more steps than simply training and testing a model. Image analysis using deep learning methods often requires substantial pre- and post-processing in order to improve interpretation and prediction. As with any data processing pipeline, images must be prepared for modeling, and the resultant predictions need further processing for interpretation. Examples include artifact detection, color normalization, image subsampling or tiling, and removal of errant predictions. Once processed, predictions are complicated by image file size, typically several gigabytes when unpacked. This forces images to be tiled, meaning that a series of subsamples from the whole-slide image (WSI) are used in modeling. Herein, we review many of these methods as they pertain to the analysis of biopsy slides and discuss the multitude of unique issues that are part of the analysis of very large images.
Time-resolved imaging of molecules and materials made of light elements is an emerging field of transmission electron microscopy (TEM), and the recent development of direct electron detection cameras, capable of capturing as many as 1,600 frames per second (fps), has potentially broadened the scope of time-resolved TEM imaging in chemistry and nanotechnology. However, such a high frame rate reduces the electron dose per frame, lowers the signal-to-noise ratio (SNR), and renders the molecular images practically invisible. Here, we examined image noise reduction to take the best advantage of fast cameras and concluded that the Chambolle total variation denoising algorithm is the method of choice, as illustrated for the imaging of a molecule in the 1D hollow space of a carbon nanotube with ~1 ms time resolution. Through a systematic comparison of the performance of multiple denoising algorithms, we found that the Chambolle algorithm improves the SNR by more than an order of magnitude when applied to TEM images taken at the low electron dose required for imaging at around 1,000 fps. Open-source code and a standalone application for applying Chambolle denoising to TEM images and video frames are available for download.
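One readily available implementation of Chambolle total-variation denoising is scikit-image's denoise_tv_chambolle; the synthetic frame, noise level, and weight in the sketch below are illustrative only and are not the paper's code or data:

```python
# Chambolle total-variation denoising via scikit-image; the synthetic "frame",
# noise level, and weight are illustrative stand-ins for a low-dose TEM image.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
clean = np.zeros((128, 128))
clean[40:90, 40:90] = 1.0                                   # stand-in structure
noisy = clean + rng.normal(scale=0.8, size=clean.shape)     # very low SNR frame

denoised = denoise_tv_chambolle(noisy, weight=0.3)

def snr_db(signal, reference):
    """Signal-to-noise ratio in dB relative to the known reference."""
    noise = signal - reference
    return 10 * np.log10(np.var(reference) / np.var(noise))

print(f"SNR before: {snr_db(noisy, clean):.1f} dB, after: {snr_db(denoised, clean):.1f} dB")
```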
This research attempts to systematically establish shape descriptor states through elliptic Fourier analysis (EFA), using pili (Canarium ovatum Engl.) kernels as a model. Kernel images of 53 pili accessions from the National Plant Genetic Resources Laboratory (NPGRL), University of the Philippines Los Baños, were acquired using VideometerLab 3. Shape features, such as roundness, compactness, and elongation, were extracted from the images. Shape outlines were characterized using elliptic Fourier coefficients calculated with the SHAPE version 1.3 software. Principal component analysis and cluster analysis were used to elucidate clusters representing the shape descriptor states. The first principal component accounts for the variation in length-to-width ratio, whereas the second and third principal components explain the variation in the location of the widest portion and in the truncation of the apex and base of the kernel, respectively. Cluster analysis separated the different accessions into six distinct clusters at a Euclidean distance of 0.04. Six descriptor states (narrowly elliptic, elliptic, widely elliptic, ovate, obovate, and lance-ovate) were characterized from the shape outlines and visualized in R. The discrimination between clusters was validated through MANOVA and LDA, with 95% correct classification. The Fourier coefficients were also able to represent the variation observed in the physical properties of shape. The method may be used to establish shape descriptors for other plant parts and crop species.
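A sketch of computing elliptic Fourier descriptors from a closed outline is given below using the third-party pyefd package (an assumption for illustration; the study used the SHAPE software), followed by PCA over the flattened coefficients:

```python
# Sketch of elliptic Fourier descriptors with the third-party pyefd package
# (pip install pyefd; an assumption for illustration, not the SHAPE workflow),
# followed by PCA over the flattened coefficient vectors.
import numpy as np
from pyefd import elliptic_fourier_descriptors
from sklearn.decomposition import PCA

# Synthetic stand-in for a kernel outline: an ellipse sampled as (x, y) points
t = np.linspace(0, 2 * np.pi, 200)
contour = np.column_stack([3.0 * np.cos(t), 1.5 * np.sin(t)])

coeffs = elliptic_fourier_descriptors(contour, order=10, normalize=True)
print(coeffs.shape)          # (10, 4): four coefficients per harmonic

# With many kernels, flattened coefficient vectors would be stacked and summarized by PCA
rng = np.random.default_rng(0)
features = np.vstack([coeffs.flatten() + 0.01 * rng.normal(size=coeffs.size) for _ in range(5)])
scores = PCA(n_components=3).fit_transform(features)
print(scores.shape)          # principal component scores used to delineate clusters
```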
Economic pressures continue to mount on modern-day livestock farmers, forcing them to increase herd sizes in order to remain commercially viable. The natural consequence of this is to drive the farmer and the animal further apart. However, closer attention to the animal not only positively impacts animal welfare and health but can also increase the farmer's capacity to achieve more sustainable production. State-of-the-art precision livestock farming (PLF) technology is one means of bringing the animals closer to the farmer in the face of expanding systems. Contrary to some current opinions, it can offer an alternative philosophy to ‘farming by numbers’. This review addresses the key technology-oriented approaches to monitoring animals and demonstrates how image and sound analyses can be used to build ‘digital representations’ of animals, giving an overview of some of the core concepts of PLF tool development and of value discovery during PLF implementation. The key to developing such a representation is measuring important behaviours and events in the livestock buildings. Image and sound analyses can realise more advanced applications and have enormous potential in the industry. In the end, what matters is the accuracy of the developed PLF applications in the commercial farming system, as this will make farmers embrace the technological development and ensure progress within the PLF field in favour of the livestock animals and their well-being.