Analog electronic circuits is generally offered as a core subject during the third or fourth semester, that is, the second year, of a four-year course in the electronics and communication, instrumentation and control, and computer engineering branches. It is an important subject and may be covered in somewhat reduced depth in the electrical, civil, chemical, and information technology branches of engineering. Design of discrete and linear integrated circuits (ICs), digital electronic modules, and electronic instrumentation are some of the obvious areas where knowledge of microelectronic circuits becomes essential. It is therefore important for students at this level of study to be proficient in analysing electronic circuits and applying them in relevant areas.
There are many fine books on electronic (devices and) circuits. Many of them combine “devices” and “circuits”; a good practice, but one that sometimes results in a bulky book. The idea here is to provide a text that deals with the fundamentals of analog electronic circuits for those who already have a basic knowledge of electronic devices, such as semiconductor diodes, bipolar junction transistors (BJTs), and metal-oxide-semiconductor field-effect transistors (MOSFETs), whether studied as a full subject or as an introductory one. Serious effort has been made in preparing the text so that it serves not only as study material for examinations but also emphasizes fundamental concepts, without being overly voluminous.
When transistors are used as switches, they operate either in cut-off or in saturation mode. When transistors are used to amplify small signals, on the other hand, a quiescent operating point is selected somewhere in the middle of the conduction range. The location of the quiescent point depends on the kind of amplifier: for example, an amplifier may be designed for maximum voltage and/or current gain, high input resistance, or power gain. In some applications, an amplifier ought to consume minimum power, especially when it is used with a battery-operated device. After the quiescent operating point has been selected, it is also required to remain stable. A change in operating temperature or a variation in supply voltage may shift the operating point. Variations due to manufacturing tolerances in component values and in transistor parameters also affect the quiescent point. Irrespective of the cause, the quiescent point is required to remain within specified limits.
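As a concrete illustration (not taken from the text, and using assumed component values), a quick calculation for a standard voltage-divider bias network shows how emitter degeneration keeps the quiescent point stable when the transistor beta changes:

```python
# Illustrative quiescent-point calculation for a BJT with voltage-divider bias.
# All component values and transistor parameters below are assumptions chosen
# only to demonstrate the idea of bias-point stability.
VCC, R1, R2 = 12.0, 47e3, 10e3        # supply and base divider resistors
RC, RE = 2.2e3, 1.0e3                  # collector and emitter resistors
VBE, beta = 0.7, 100                   # assumed transistor parameters

# Thevenin equivalent of the base divider.
VTH = VCC * R2 / (R1 + R2)
RTH = R1 * R2 / (R1 + R2)

# Base and collector currents at the quiescent (Q) point.
IB = (VTH - VBE) / (RTH + (beta + 1) * RE)
IC = beta * IB
VCE = VCC - IC * RC - (IC + IB) * RE
print(f"IC = {IC*1e3:.2f} mA, VCE = {VCE:.2f} V")

# Stability check: doubling beta changes IC only slightly because RE
# provides negative feedback on the bias point.
IC2 = 2 * beta * (VTH - VBE) / (RTH + (2 * beta + 1) * RE)
print(f"IC with beta = {2*beta}: {IC2*1e3:.2f} mA")
```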
Three amplifier configurations are commonly used with either BJT or FET amplification. The configuration is determined by which of the three terminals is common to the input and the output of the amplifier. These configurations are studied on the basis of their characteristics, such as voltage gain, current gain, input and output resistance, and bandwidth, i.e., the frequency range within which the amplifier operates without any significant reduction in the output waveform. The operating frequency range is limited because the voltage gain drops at low and at high operating frequencies. Hence, the study of frequency response becomes important.
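As a rough, illustrative sketch (the component values are assumptions, not taken from the text), the low- and high-frequency corner frequencies of a single-stage amplifier can be estimated with the usual single-pole approximations, and the bandwidth read off from them:

```python
# Hedged sketch: estimating amplifier bandwidth from assumed corner frequencies.
import math

# Low-frequency cutoff set by an assumed input coupling capacitor.
Rin, Cc = 5e3, 1e-6                          # input resistance, coupling capacitor
f_low = 1 / (2 * math.pi * Rin * Cc)         # ~32 Hz

# High-frequency cutoff set by an assumed shunt capacitance at the output node.
Rout, Cshunt = 2.2e3, 100e-12                # output resistance, stray capacitance
f_high = 1 / (2 * math.pi * Rout * Cshunt)   # ~720 kHz

print(f"bandwidth ~ {f_low:.1f} Hz to {f_high/1e3:.0f} kHz")
```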
• Phenomenon of global warming and its connection with industrialization
• Concerns and threats of global warming and climate change
• Impact of carbon emissions on global warming
• Initiatives towards reduction of carbon emissions and preventing global warming
• Concepts of Earth Overshoot Day, sustainable development and net-zero emissions
• United Nations’ sustainable development goals.
• Link between energy demand and global warming
• How to decarbonize the energy system
Introduction
Sustainable development has, in recent years, emerged as one of the most talked-about concepts. What does this term mean, and why has it become so important? The Industrial Revolution, which gained momentum in the 19th century, was a landmark event. It represented the culmination of thousands of years of human effort. The revolution led to great inventions, making life better and easier. The human effort involved in day-to-day activities has decreased continuously, and automation has resulted in increased human comfort. All sectors of our lives, be it agriculture, transport, or even routine daily work at home, have been made easier by this revolution. But these developments have exacted a significant cost, particularly on the environment.
The effect of industrial activities on the environment has been described in a poignant way by @SDGoals. It shows that if we scale down the age of the earth from its actual value of 4.6 billion years to 46 years, then on the same scale human life has been on the earth for about 4 hours only. The Industrial Revolution, on this scale, began only a minute ago, and in that time, we have destroyed more than half of the world's forests.
The UN, realizing the importance of preventing damage to the climate and the warming of the planet, started working in this area more than 50 years ago, but the real transformation has come after the Paris Agreement and the adoption of the SDGs. For a long time, the climate change challenge was largely absent from national agendas and from policy formulation, even in the most advanced countries in the world. Growing evidence of the threat of global warming led to a change in approach, with a radical shift seen after the declaration of the Paris Agreement and the 17 SDGs.
The most important component of the increased concern over climate change relates to energy. Energy is the dominant contributor to climate change, accounting for at least around 60% of total global greenhouse gas emissions; some studies put this share at more than 70%. All the related key terms in vogue these days, such as ‘low-carbon system’, ‘decarbonization’, ‘net-zero system’, and ‘carbon-neutral system’, have energy at the centre. Irrespective of the solutions adopted and the timelines set by different countries, it is agreed by all concerned that the transition to a low-carbon economy cannot be achieved without decarbonizing the energy systems.
• Steps involved in developing sustainable organizations
• Case study on a university campus
• Integration of green sources of energy
• Implementation of energy efficiency measures
• Ensuring participation of stakeholders for energy conservation
Introduction
The achievement of the SDGs defined under the Paris Agreement requires concerted efforts at the international, national, state, organization, and individual levels. Organizations that follow the principles of sustainable development can serve as role models for others.
Colleges of higher education and universities also have an important role to play in achieving the SDGs in general, and in the adoption and promotion of green sources of electricity in particular. Although Goal 4 of the SDGs is specific to the availability of quality education for all, these institutions can play a much broader role in realizing the wide-ranging SDGs. For example, Goal 9: Industry, innovation and infrastructure; Goal 12: Responsible consumption and production; and Goal 13: Climate action cannot possibly be achieved without the mindful and positive influence of higher education institutions.
More importantly, these institutes need to work on creating awareness about the need for sustainable development and the SDGs, a crucial requirement for their achievement. The institutes should also make sustainable development an integral part of their future plans. Green and renewable sources of energy such as solar PV should be adopted for existing buildings and made mandatory for new buildings. Above all, academic institutes should practice on their campuses what they preach in the classroom.
" Working of solar PV power plants and their benefits
" Different configurations of solar PV systems, such as grid-connected, stand-alone, and hybrid solar PV plants
" Metering mechanisms, such as net metring and gross metring
" Working and classification of different types of inverters used in solar energy generation
" Different performance evaluation parameters for solar PV power plants and effect of environmental conditions
" Components used in solar PV power plants
" Challenges related to the large-scale integration of solar PV plants with the power grid
Introduction
Solar energy is a renewable source of energy, and when electricity is produced from solar, it does not lead to any CO2 emissions. Apart from being a green and renewable source of energy, solar is the simplest system of electricity generation. As described by Professor Martin Green, ‘The whole photovoltaic technology itself is a bit magical. Sunlight just falls on this inert material and you get electricity straight out of it.’ This technology has emerged as the most powerful solution for decarbonizing the energy system.
Solar PV plants can be installed in two modes: grid-connected and off-grid. At present, grid-connected solar PV (GCSPV) plants are the most commonly used systems. Although solar PV cells were discovered in the year 1953, solar PV plants for generating electricity did not gain widespread acceptance, primarily because of the panel cost as well as issues with the batteries involved. GCSPV technology has removed the weak link, the battery, from the system, making it an efficient, economical, and durable system with minimum maintenance requirements. These benefits have made solar PV the fastest-growing generation system in the world.
A current mirror is a transistor-based circuit in which the current in one transistor sets the current level in an adjacent transistor, and the adjacent transistor essentially acts as a current source. Such circuits are now a commonly used building block in a number of analog integrated circuits (ICs). Operational amplifiers, operational transconductance amplifiers, and biasing networks are examples of circuits that make essential use of current mirrors. Analog IC implementation techniques such as current-mode and switched-current circuits use current mirrors as basic circuit elements.
A significant advantage of current mirrors is that, although fabricated using transistors, they act as near-ideal current sources and can replace large-value passive resistances in analog circuits, saving considerable chip area.
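As a hedged numerical sketch (the component values and transistor parameters below are assumptions, not from the text), the standard first-order expressions show how close a basic two-transistor BJT mirror comes to an ideal current source:

```python
# Basic two-transistor BJT current mirror: first-order error estimates.
VCC, R = 5.0, 43e3            # supply and reference-setting resistor (assumed)
VBE, beta, VA = 0.7, 100, 80  # assumed transistor parameters (VA = Early voltage)
VOUT = 3.0                    # assumed voltage at the mirror output node

# Reference current set by the resistor.
I_ref = (VCC - VBE) / R

# Base-current error: both bases are supplied from the reference branch.
I_out = I_ref / (1 + 2 / beta)

# Early effect: output current rises as the output transistor's collector
# voltage departs from that of the diode-connected reference transistor.
I_out_early = I_out * (1 + (VOUT - VBE) / VA)

print(f"I_ref = {I_ref*1e6:.1f} uA, I_out = {I_out_early*1e6:.1f} uA")
```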
The later part of the chapter discusses another important analog circuit, namely, the differential amplifier. As the name suggests, differential amplifiers amplify the difference between the two signals applied to their two inputs. In addition to providing differential amplification, differential amplifiers are also required to suppress the unwanted signal that appears on both inputs in the form of a common-mode signal. A differential amplifier is a particularly useful and essential part of operational amplifiers. The differential pair, which comprises two transistors in a special form of connection, is the basic building block of a differential amplifier.
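For illustration only (the values below are assumptions, not from the text), the standard small-signal approximations for a resistively loaded BJT differential pair with a tail resistor give the differential gain, common-mode gain, and common-mode rejection ratio (CMRR):

```python
# First-order gain magnitudes for a BJT differential pair, single-ended output.
import math

VT = 0.025                 # thermal voltage (V)
I_tail = 1e-3              # assumed tail current
RC, R_tail = 10e3, 100e3   # assumed collector and tail resistances

gm = (I_tail / 2) / VT                 # transconductance of each device
A_dm = gm * RC / 2                     # differential gain magnitude (single-ended)
A_cm = RC / (1 / gm + 2 * R_tail)      # approximate common-mode gain magnitude
CMRR = A_dm / A_cm

print(f"A_dm ~ {A_dm:.0f}, A_cm ~ {A_cm:.3f}, CMRR ~ {20*math.log10(CMRR):.0f} dB")
```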
While an understanding of electronic principles is vitally important for scientists and engineers working across many disciplines, the breadth of the subject can make it daunting. This textbook offers a concise and practical introduction to electronics, suitable for a one-semester undergraduate course as well as self-guided students. Beginning with the basics of general circuit laws and resistor circuits to ease students into the subject, the textbook then covers a wide range of topics, from passive circuits to semiconductor-based analog circuits and basic digital circuits. Exercises are provided at the end of each chapter, and answers to select questions are included at the end of the book. The complete solutions manual is available for instructors to download, together with eight laboratory exercises that parallel the text. Now in its second edition, the text has been updated and expanded with additional topic coverage and exercises.
The objective of this work is to investigate the unexplored laminar-to-turbulent transition of a heated flat-plate boundary layer with a fluid at supercritical pressure. Two temperature ranges are considered: a subcritical case, where the fluid remains entirely in the liquid-like regime, and a transcritical case, where the pseudo-critical (Widom) line is crossed and pseudo-boiling occurs. Fully compressible direct numerical simulations are used to study (i) the linear and nonlinear instabilities, (ii) the breakdown to turbulence, and (iii) the fully developed turbulent boundary layer. In the transcritical regime, two-dimensional forcing generates not only a train of billow-like structures around the Widom line, resembling Kelvin–Helmholtz instability, but also near-wall travelling regions of flow reversal. These spanwise-oriented billows dominate the early nonlinear stage. When high-amplitude subharmonic three-dimensional forcing is applied, staggered $\Lambda$-vortices emerge more abruptly than in the subcritical case. However, unlike the classic H-type breakdown under zero pressure gradient observed in ideal-gas and subcritical regimes, the H-type breakdown is triggered by strong shear layers caused by flow reversals – similar to that observed in adverse pressure gradient boundary layers. Without oblique wave forcing, transition is only slightly delayed and follows a naturally selected fundamental breakdown (K-type) scenario. Hence in the transcritical regime, it is possible to trigger nonlinearities and achieve transition to turbulence relatively early using only a single two-dimensional wave that strongly amplifies background noise. In the fully turbulent region, we demonstrate that variable-property scaling accurately predicts turbulent skin-friction and heat-transfer coefficients.
The present work aims at exploring the scale-by-scale kinetic energy exchanges in multiphase turbulence. For this purpose, we derive the Kármán–Howarth–Monin equation which accounts for the variations of density and viscosity across the two phases together with the effect of surface tension. We consider both conventional and phase conditional averaging operators. This framework is applied to numerical data from detailed simulations of forced homogeneous and isotropic turbulence covering different values for the liquid volume fraction, the liquid–gas density ratio, the Reynolds number and the Weber number. We confirm the existence of an additional transfer term due to surface tension. Part of the kinetic energy injected at large scales is transferred into kinetic energy at smaller scales by classical nonlinear transport while another part is transferred to surface energy before being released back into kinetic energy, but at smaller scales. The overall kinetic energy transfer rate is larger than in single-phase flows. Kinetic energy budgets conditioned in a given phase show that the scale-by-scale transport of turbulent kinetic energy due to pressure is a gain (loss) of kinetic energy for the lighter (heavier) phase. Its contribution can be dominant when the gas volume fraction becomes small or when the density ratio increases. Building on previous work, we hypothesise the existence of a pivotal scale above which kinetic energy is stored into surface deformation and below which the kinetic energy is released by interface restoration. Some phenomenological predictions for this scale are discussed.
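For reference, a commonly quoted single-phase, constant-property form of the Kármán–Howarth–Monin relation for homogeneous turbulence (without the variable-density, variable-viscosity, surface-tension and forcing terms treated in the work above), with $\delta\boldsymbol{u}$ the velocity increment across a separation $\boldsymbol{r}$ and $\langle\varepsilon\rangle$ the mean dissipation rate, reads

$$
\frac{\partial}{\partial t}\big\langle |\delta\boldsymbol{u}|^{2}\big\rangle
+ \nabla_{\boldsymbol{r}}\cdot\big\langle \delta\boldsymbol{u}\,|\delta\boldsymbol{u}|^{2}\big\rangle
= 2\nu\,\nabla_{\boldsymbol{r}}^{2}\big\langle |\delta\boldsymbol{u}|^{2}\big\rangle
- 4\langle\varepsilon\rangle .
$$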
The linear Faraday instability of a viscous liquid film on a vibrating substrate is analysed. This analysis represents a first step towards applications in ultrasonic liquid-film destabilisation. The equations of motion are linearised and solved for a liquid film with constant thickness vibrating in a direction normal to its interface with an ambient gaseous medium treated as dynamically inert. Motivated by empirical evidence and the weakly nonlinear analysis of Miles (J. Fluid Mech., vol. 248, 1993, pp. 671–683), we choose an ansatz that the free liquid-film surface forms a square-wave pattern with the same wavenumbers in the two horizontal directions. The result of the stability analysis is a complex rate factor in the time dependency of the film surface deformation caused by the vibrations at a given excitation frequency and vibration amplitude. The analysis allows Hopf bifurcations in the liquid-film behaviour to be identified. Regimes of the deformation wavenumber and the vibration amplitude characterised by unstable film behaviour are found. Inside the regimes, states with given values of the deformation growth rate are identified. The influence of all the governing parameters, such as the vibration amplitude and frequency, the deformation wavenumber and the liquid material properties, on the liquid-film stability is quantified. Non-dimensional relations for vibration amplitudes characteristic for changing stability behaviour are presented.
The turbulent evolution of the shallow water system exhibits asymmetry in vorticity. This emergent phenomenon can be classified as ‘balanced’, that is, it is not due to the inertial-gravity-wave modes. The quasi-geostrophic (QG) system, the canonical model for balanced motion, has a symmetric evolution of vorticity, thus misses this phenomenon. Here, we present a next-order-in-Rossby extension of QG, $\textrm {QG}^{+1}$, in the shallow water context. We recapitulate the derivation of the model in one-layer shallow water grounded in physical principles and provide a new formulation using ‘potentials’. Then, the multi-layer extension of the shallow water quasi-geostrophic equation ($\textrm {SWQG}^{+1}$) model is formulated for the first time. The $\textrm {SWQG}^{+1}$ system is still balanced in the sense that there is only one prognostic variable, potential vorticity (PV), and all other variables are diagnosed from PV. It filters out inertial-gravity waves by design. This feature is attractive for modelling the dynamics of balanced motions that dominate transport in geophysical systems. The diagnostic relations connect ageostrophic physical variables and extend the massively useful geostrophic balance. Simulations of these systems in classical set-ups provide evidence that $\textrm {SWQG}^{+1}$ captures the vorticity asymmetry in the shallow water system. Simulations of freely decaying turbulence in one layer show that $\textrm {SWQG}^{+1}$ can capture the negatively skewed vorticity, and simulations of the nonlinear evolution of a baroclinically unstable jet show that it can capture vorticity asymmetry and finite divergence of strain-driven fronts.
This article presents the implementation of a new monopulse auto-tracking architecture at the Oran ground station (GS). This architecture is based on a metaheuristic particle swarm optimisation (PSO) algorithm, which measures and adjusts the Q(R) summation associated with the satellite’s main beam direction to ensure optimal synchronisation between the GS and the satellite in terms of antenna pointing. This implementation was validated through practical tests during ALSAT-2B satellite flybys, comprising two distinct scenarios. In the first scenario, the satellite captures new images while simultaneously transmitting data from previously recorded images, thus leading to a misalignment between its antenna and the GS. The second scenario focused solely on data transmission; the satellite being directly aligned with the GS. The results indicate that the pointing error accuracy remains below 0.6 degrees, in accordance with the nominal specifications, thereby enhancing communication performance with a higher received signal level of −55 dBm, which resulted in no loss of images.
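As a generic illustration of the optimisation step only (this is not the Oran ground-station implementation; the pointing-error cost function and all constants below are made-up assumptions), a minimal particle swarm optimisation loop looks like this:

```python
# Minimal PSO sketch: inertia, cognitive and social velocity updates applied
# to a hypothetical 2-D antenna pointing-offset cost. Purely illustrative.
import numpy as np

def pointing_error(offsets):
    # Placeholder cost: error grows with azimuth/elevation offset (deg).
    return np.sum(offsets**2, axis=-1)

rng = np.random.default_rng(0)
n_particles, n_dims, n_iters = 20, 2, 50
w, c1, c2 = 0.7, 1.5, 1.5                  # inertia, cognitive, social weights

pos = rng.uniform(-1.0, 1.0, (n_particles, n_dims))   # candidate offsets (deg)
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), pointing_error(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = pointing_error(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best offset (deg):", gbest, "residual error:", pbest_val.min())
```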
The interaction between cavitation bubbles and particles near rigid boundaries plays a crucial role in applications from surface cleaning to cavitation erosion. We present a combined experimental, numerical and theoretical investigation of how boundary layer flows affect particle motion during the growth and collapse of the cavitation bubble. Using laser-induced cavitation bubbles and particles of varying radius ratios and stand-off distances, we observe that increasing the bubble-to-particle size ratio suppresses particle displacement. Through one-way coupled simulations and theoretical modelling, we demonstrate that this suppression arises from a shift in the dominant forces acting on the particle: for small radius ratios, the pressure gradient force governs particle motion, while for large ratios, the interplay between added mass, lubrication, and pressure gradient forces becomes significant due to boundary layer growth in the bubble-induced stagnation flow. Based on a theoretical framework combining potential flow theory and axisymmetric viscous stagnation flow analysis, we identify the inviscid- and viscous-flow dominated regimes characterised by the combination of the stand-off distance, the bubble-to-particle radius ratio, and the bubble Reynolds number. Finally, we derive scaling laws for particle displacement consistent with experiments and simulations. These findings advance our understanding of unsteady boundary layer effects in cavitation bubble-particle interactions, offering new insights for applications in microparticle manipulation and flow measurements.
Unstable approaches are one of the main safety concerns that contribute to approach and landing accidents. The International Air Transport Association reports that, between 2012 and 2016, 61% of accidents occurred during the approach and landing phase, of which 16% involved unstable approaches. This study addresses this issue by applying the Functional Resonance Analysis Method to examine the dynamics of stable approaches. A total of 195 aviation safety reports, which referred to near-miss data from a single airline, were used in the analysis to identify both actual and aggregated variability. The findings revealed that variability mainly occurred in the following functions: control speed, configure aircraft for landing, communicate with air traffic control and manage flight paths. Effective communication, coordination and collaboration, as well as monitoring, briefings and checklists, were key factors in managing the variability of a stable approach. The study reveals how adopting a perspective of ‘how things go right’ provides insightful findings regarding approach stability, complementing traditional approaches focused on ‘what went wrong’. This study also highlights the value of utilising the Functional Resonance Analysis Method to analyse near-miss data and uncover systemic patterns in everyday flight operations.
Control of small turbojet engines is challenging due to the small number of sensors and actuators. In these engines, typically the spool speed and exhaust gas temperature are the measured variables and the fuel flow is the only manipulated variable. However, the thrust command must be achieved and the engine’s structural and operational limitations must be safeguarded. In this research, a minimum selector control structure with a saturation function is presented for controlling small turbojet engines. One control loop is considered to control the spool speed and another loop is used to manage the exhaust gas temperature. The output of each control loop is a fuel flow rate, and the minimum of the two values is selected. To prevent compressor surge and combustor blow-out during engine acceleration/deceleration, a fuel flow rate saturation is defined. Due to the switching structure of the proposed controller and the presence of the saturation function, stability analysis is a critical issue. Therefore, a methodology is presented to analyse the stability of the proposed structure. In the simulation study, a nonlinear thermodynamic model that matches the test data to better than 90% is used, and the response of the proposed controller is compared with that of a proportional-integral (PI) controller. In a comprehensive scenario, the throttle setting varies from 75% to 100%. With the PI controller, some outputs exhibit overshoot and the exhaust gas temperature exceeds the corresponding constraint by 40 K. The proposed minimum selector controller, in contrast, accurately fulfils the thrust command while fully respecting the limits on the engine variables.
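A minimal sketch of the control structure described above, assuming two PI loops with made-up gains and fuel-flow limits (these are illustrative assumptions, not the authors' values), is shown below:

```python
# Hedged sketch of a minimum-selector fuel-flow controller with saturation.
# Two PI loops (spool speed and exhaust gas temperature, EGT) each propose a
# fuel flow; the lower (more conservative) demand is selected and then bounded.

def pi_step(error, integral, kp, ki, dt):
    integral += error * dt
    return kp * error + ki * integral, integral

def min_selector_controller(n_cmd, n_meas, egt_limit, egt_meas,
                            state, dt=0.01,
                            wf_min=0.002, wf_max=0.030):   # fuel-flow limits (kg/s)
    # Speed loop: drive spool speed towards the thrust-derived command.
    wf_speed, state["i_n"] = pi_step(n_cmd - n_meas, state["i_n"],
                                     kp=1e-4, ki=5e-5, dt=dt)
    # Temperature loop: keep EGT at or below its limit.
    wf_egt, state["i_t"] = pi_step(egt_limit - egt_meas, state["i_t"],
                                   kp=2e-4, ki=1e-4, dt=dt)
    # Minimum selector: the lower fuel demand wins.
    wf = min(wf_speed, wf_egt)
    # Saturation: bound fuel flow to avoid surge / blow-out during transients.
    return max(wf_min, min(wf_max, wf))

state = {"i_n": 0.0, "i_t": 0.0}
wf = min_selector_controller(n_cmd=90_000, n_meas=80_000,
                             egt_limit=950.0, egt_meas=900.0, state=state)
print(f"commanded fuel flow: {wf:.4f} kg/s")
```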