Deep nets have done well with early adopters, but the future will soon depend on crossing the chasm. The goal of this paper is to make deep nets more accessible to a broader audience, including people with little or no programming skills and people with little interest in training new models. A GitHub repository is provided with simple implementations of image classification, optical character recognition, sentiment analysis, named entity recognition, question answering (QA/SQuAD), machine translation, text to speech (TTS), and speech to text (STT). The emphasis is on instant gratification. Non-programmers should be able to install these programs and use them in 15 minutes or less (per program). Programs are short (10–100 lines each) and readable by users with modest programming skills. Much of the complexity is hidden behind abstractions such as pipelines and auto classes, and pretrained models and datasets provided by hubs: PaddleHub, PaddleNLP, HuggingFaceHub, and Fairseq. Hubs have different priorities than research. Research focuses on training models from corpora and fine-tuning them for tasks, but users are already overwhelmed with an embarrassment of riches (13k models and 1k datasets). Do they want more? We believe the broader market is more interested in inference (how to run pretrained models on novel inputs) and less interested in training (how to create even more models).
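To make the claim about hiding complexity behind abstractions concrete, here is a minimal sketch of inference with a pretrained model via the Hugging Face transformers pipeline (one of the hubs named above). This is illustrative rather than the repository's exact code, and the input sentence is invented for the example.

```python
# A minimal sketch (not necessarily the paper's exact code) of running
# a pretrained model on novel input via the Hugging Face `transformers`
# pipeline abstraction.
from transformers import pipeline

# The pipeline hides tokenization, model download, and decoding behind
# a single call; no training or fine-tuning is involved.
classifier = pipeline("sentiment-analysis")

# Illustrative input; any novel text works.
print(classifier("Deep nets are finally easy to use."))
# Output is a list of dicts of the form [{'label': ..., 'score': ...}]
```

The same one-call pattern covers the other use cases (translation, question answering, speech) by changing the task name, which is what makes the 15-minutes-or-less goal plausible for non-programmers.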
Astronomers depend on light for their understanding of the cosmos beyond the confines of the Solar System. Many of the most exciting discoveries of the last couple of decades were made possible by new generations of cameras and telescopes, both on the ground and in space. The resulting observations captured the imagination not just of scientists but also of the general public. Dr Crawford will discuss the new facilities expected to come online over the next ten years or so, and how they will not only change our view of the Universe but also alter the way we do astronomy.
Can we trust the judgement of machines that see? Computer vision is being entrusted with ever more critical tasks: from access control by face recognition, to diagnosing disease from medical scans, to hand-eye coordination for surgical and nuclear-decommissioning robots, and now to taking control of motor vehicles.
Can we 'see' photons, black holes, curved spacetime, quantum jumps, the expansion of the universe, or quanta of space? Physics challenges appearances, showing convincingly that our everyday vision of reality is limited, approximate and badly incomplete. Established theories such as quantum theory and general relativity, and lines of investigation such as loop quantum gravity, have a reputation for obscurity. Many suggest that science is forcing us into a counterintuitive and purely mathematical understanding of reality. I disagree. I think that there is a visionary core at the root of the best science, where 'visionary' truly means formed by visual images. Our mind, even when dealing with abstract and difficult notions, relies on images, metaphors and, ultimately, vision. Contrary to what is sometimes claimed, science is not just about making predictions: it is about understanding, and for this we must develop new eyes to see. I shall illustrate this point with some concrete cases, including the birth of quantum theory in Einstein's intuition, curved spacetimes and quanta of space.
Sophie Hackford explores the idea that the way computers see the world is becoming our dominant reality, and that a physical object and its data 'exhaust' are in constant dialogue with each other. As machine autonomy creeps into our everyday lives, we are creating a physical internet, in which people, objects and vehicles move as seamlessly in the real world as data moves around the internet. Digital bots or 'agents' might represent us in interactions with our banks, friends and colleagues. Autonomous companies might soon be big players in the economy. Hackford will explore a world where human and machine 'vision' will collaborate, compete and even merge.
When Turner daubed a red buoy in his seascape Helvoetsluys, what did he mean? In nature, red may repel or attract, signalling toxicity or ripeness, anger, ruddy health or sexual readiness. For Turner, the red created contrast, and in making that mark he meant to generate salience and arouse interest, to dominate his rivals and draw in his admirers. Colour has long excited emotions and intellectual debate, not only in visual art but also in philosophy, psychology and physiology. Contemporary vision science shows that colour helps people find objects faster, discern material properties, learn, conceptualise and memorise. Yet colour is made in the mind, not out there in the world. It is a subjective phenomenon, a personal possession, one that varies between individual eyes, and one that people cling to with ardour when challenged: witness the public divide over the 'blue/black', 'white/gold' dress. So the question is not only what colour means, in life and in art, but how it means anything at all. How does the human brain create colour, stabilise it and make its meaning? And why does it evoke emotion and aesthetic appreciation?
Eyes abound in the animal kingdom. Some are as large as basketballs and others are just fractions of a millimetre across. Eyes also come in many different types, such as the compound eyes of insects, the mirror eyes of scallops or our own camera-like eyes. Common to all animal eyes is that they serve the same fundamental role: collecting external information to guide the animal's behaviour. But behaviours vary tremendously across the animal kingdom, and it turns out this is the key to understanding how eyes evolved. In this lecture we will take a tour from the first animals that could only sense the presence of light, to those that saw the first crude image of the world, and finally to animals that use acute vision to interact with other animals. Amazingly, all these stages of eye evolution still exist in animals living today, and this is how we can unravel the evolution of the behaviours that have been the driving force behind eye evolution.