Often when you're trying to solve a problem, you pull out a pen or pencil and grab a handy piece of paper to write down a column of numbers, draw a diagram, or make a list. In solving the problem, you might sum the numbers, trace lines in the diagram, or match items of the list with items of a second list. These auxiliary scribblings are used to organize information – as with the list or column of numbers – or represent objects so they can be easily manipulated – as with the diagram. Data structures are the algorithmic analog of these handy pieces of paper.
In Chapter 11 we saw that browsers and web servers in a computer network exchange information by bouncing small packages of data from one computer to another until they arrive at their final destination. How can we represent a computer network so that an algorithm can figure out what sequence of computers to use in transferring a data package? For that matter, how do we represent airline schedules, circuit diagrams, computer file systems, road maps and medical records so that they can be manipulated algorithmically?
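To make the question concrete, here is a minimal sketch (not drawn from this book's code, and with machine names invented purely for illustration) of one common way to represent a small network in Python: a table mapping each computer to its neighbors, plus a breadth-first search that finds a sequence of computers connecting a source to a destination.

```python
# A hypothetical five-machine network, represented as an adjacency list:
# each computer maps to the computers it is directly linked to.
from collections import deque

network = {
    "alpha":   ["bravo", "charlie"],
    "bravo":   ["alpha", "delta"],
    "charlie": ["alpha", "delta"],
    "delta":   ["bravo", "charlie", "echo"],
    "echo":    ["delta"],
}

def route(start, goal):
    """Breadth-first search: return one shortest sequence of computers
    connecting start to goal, or None if no route exists."""
    paths = deque([[start]])
    visited = {start}
    while paths:
        path = paths.popleft()
        if path[-1] == goal:
            return path
        for neighbor in network[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                paths.append(path + [neighbor])
    return None

print(route("alpha", "echo"))   # ['alpha', 'bravo', 'delta', 'echo']
```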
Many algorithms use special data structures to represent their inputs and outputs or to perform intermediate calculations. In Chapter 7 we used lists and vectors to keep track of journal entries.
Hardly a day goes by that I don't write at least one short computer program: a few lines of code to explore an idea or help organize my thoughts. I think of it as simply talking with my computer, and more and more often there is a computer available to talk with, often several of them joining in the conversation simultaneously. Each time you click on a link in a browser, you cause a sequence of computations involving dozens if not hundreds of computers scattered all over the world.
Making a computation happen is not, however, the same thing as programming. There are lots of powerful programs written by talented programmers that you can call up with a click of a mouse or a few keystrokes. These programs animate computers, breathing life and spirit into lumps of metal and plastic. Even if you know what's going on inside computers and computer programs, it's easy to imagine that programs are spells and the programmers who create them are sorcerers. When you click on the icon for a program, you invoke these spells and the spells conjure up spirits in the machine. But this book isn't about invoking the spells of others; it's about creating your own spells and conjuring spirits of your own design.
This is not to say I won't encourage you to use code written by other programmers. Quite the contrary: an important part of the power of computing is that good spells can be reused as often as needed.
When I use the term “hacker” I mean someone who enjoys programming and is good at it. Hackers in my experience tend to be an opinionated and individualistic lot and they tend to appreciate strong opinions and independence of thought in others. The slogan of the Perl language is “There's more than one way to do it”, and most hackers are motivated to explore different ways of doing the same job. That said, if adopting a standard way of doing something provides leverage for building better software, then most hackers will agree to adopt the standard (after much dickering about the details, of course).
If you write code because you want other people to use it, it behooves you to use a language that others are familiar with, adopt standard conventions for input and output so that others can interact with your code without learning some new set of conventions, and provide your code in a format so that others can use it as a component of a larger project without having to understand all the details of your implementation. The struggle to meet these basic criteria requires the hacker to negotiate, make concessions and, generally, work within a community of potential users to produce, adopt and adhere to reasonable standards.
Building great software requires a great deal of discipline and interpersonal skill – in sharp contrast with the stereotype of a hacker as an unkempt, uncommunicative obsessive-compulsive lacking basic hygiene and addicted to highly caffeinated drinks.
Most physicists believe that the speed of light is a fundamental limit on how quickly we can move through space. This claim is based on the predictions of mathematical theories and the results of experiments that appear to support them. According to theory, it doesn't matter whether you move through space with a pogo stick or an anti-matter drive, you're still subject to the rules governing all matter in the universe and thus unable to exceed the speed of light.
What if there are limits on what you can compute? Pharmaceutical companies simulate interactions at the atomic level in searching for molecules to cure diseases. There could be viruses for which it will take years to find a vaccine – there is simply no way to speed up the necessary computations. Software developers who write the programs that keep airplanes flying and emergency rooms functioning would like to prove that their code won't malfunction and put lives at risk. But maybe it's impossible to provide such assurances.
In some cases, computational limitations can work to our advantage. Some programs exploit the difficulty of computing answers to particular problems; for example, the most popular encryption schemes for transferring information securely on the World Wide Web rely on the difficulty of computing the prime factors of large composite integers. Of course, if someone figures out how to factor large numbers efficiently, our privacy will be seriously threatened.
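To get a feel for the difficulty, here is a small, hedged sketch (not part of any real encryption scheme) of the most naive factoring method, trial division. It dispatches toy numbers instantly, but the work grows roughly with the square root of the number being factored, so it is hopeless against the hundreds-of-digits composites used in practice.

```python
def trial_factor(n):
    """Return the prime factors of n by checking divisors up to sqrt(n)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:      # divide out each factor as it is found
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                  # whatever remains is itself prime
        factors.append(n)
    return factors

print(trial_factor(15))          # [3, 5]
print(trial_factor(2**32 + 1))   # [641, 6700417] -- still quick at this size
```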
The first computers were used primarily to manage information for large companies and perform numerical calculations for the military. Only a few visionaries saw computing as something for everyone or imagined it could become a basic service like the telephone or electric power. This failure of imagination was due in large part to the fact that the people who controlled computing in the early years weren't the ones actually programming computers. If you worked for a large corporation or industrial laboratory, then you might have strictly limited access to a computer, but otherwise you were pretty much out of luck.
In the early years of computing, users submitted their programs to computer operators to be run in batches. You would hand the operator a stack of cards or a roll of paper tape punched full of holes that encoded your program. An operator would schedule your program to be run (possibly in the middle of the night) with a batch of other programs and at some point thereafter you would be handed a printout of the output generated by your program. You didn't interact directly with the computer and if your program crashed and produced no output, you'd have very little idea what had gone wrong.
The people who ran computer facilities were horrified at the idea of having users interact directly with their precious computers.
Programming languages come in all shapes and sizes and some of them hardly seem like programming languages at all. Of course, that depends on what you count as a programming language; as far as I'm concerned, a programming language is a language for specifying computations. But that's pretty broad and maybe we should narrow our definition to include only languages used for specifying computations to machines, that is, languages for talking with computers. Remember, though, that programmers often communicate with one another by sharing code and the programming language used to write that code can significantly influence what can or can't be easily communicated.
C, Java and Scheme are so-called general-purpose, high-level programming languages. Plenty of other programming languages were designed to suit particular purposes, among them the languages built into mathematical programming packages like Maple, Matlab and Mathematica. There are also special-purpose languages called scripting languages built into most word-processing and desktop-publishing programs that make it easier to perform repetitious tasks like personalizing invitations or making formatting changes throughout a set of documents.
Lots of computer users find themselves constantly doing routine housecleaning tasks like identifying and removing old files and searching for documents containing specific pieces of information. Modern operating systems generally provide nice graphical user interfaces to make such housecleaning easier, but many repetitive tasks are easy to specify but tedious to carry out with these fancy interfaces.
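As a hedged illustration of the sort of chore a few lines of script can take off your hands, here is a Python sketch that walks a directory and lists files untouched for over a year; the directory name and the one-year cutoff are arbitrary choices made for the example.

```python
import os
import time

CUTOFF = time.time() - 365 * 24 * 60 * 60   # one year ago, in seconds

def stale_files(root):
    """Yield the paths of files under root last modified before the cutoff."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < CUTOFF:
                yield path

# Print candidates for cleanup rather than deleting anything outright.
for path in stale_files(os.path.expanduser("~/Documents")):
    print(path)
```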
Programming languages, like natural languages, have a vocabulary (lexicon) and rules of syntax (grammar) that you have to learn in order to communicate. Just as unfamiliar grammatical conventions can make learning a new natural language difficult, unfamiliar programming-language syntax can make learning to program difficult. English speakers learning Japanese have to get used to the fact that Japanese verbs generally come at the end of the sentence. With computer languages, the problem is made worse by the fact that computers are much less adept at handling lexically and syntactically mangled programs than humans are at grasping the meaning of garbled speech.
If you want to talk with computers, however, you're going to have to learn a programming language. Just as you learn new natural languages to communicate with other people and experience other cultures, you learn a programming language to communicate with computers and other programmers and to express computational ideas concisely and clearly. The good news is that learning one programming language makes it a lot easier to learn others.
When you start learning to program, you may find yourself consumed with sorting out the lexical and syntactic minutiae of the programming language. You'll have to look up the names of functions and operators and memorize the particular syntax required to invoke them correctly. You may end up spending obscene amounts of time tracking down obscure bugs caused by misplaced commas or missing parentheses.
While writing the previous chapter, I got to thinking about concepts in computer science that connect the microscopic, bit-level world of logic gates and machine language to the macroscopic world of procedures and processes we've been concerned with so far. In listing concepts that might be worth mentioning, I noticed that I was moving from computer architecture, the subdiscipline of computer science concerned with the logical design of computer hardware, to operating systems, the area dealing with the software that mediates between the user and the hardware.
In compiling my list, I was also struck by how many “computerese” terms and phrases have slipped into the vernacular. Interrupt handling (responding to an unexpected event while doing something else) and multitasking (the concurrent performance of several tasks) are prime examples. The common use of these terms concerns not computers but human information processing. I don't know what you'd call the jargon used by psychologists and cognitive scientists to describe how humans think. The word “mentalese” is already taken: the philosopher Jerry Fodor postulates that humans represent the external world in a “language of thought” that is sometimes called “mentalese.” Fodor's mentalese is more like machine language for minds. I'm interested in the language we use to describe how we think, how our thought processes work – a metalanguage for talking about thinking.
One consequence of inexpensive computer memory and storage devices is that much less gets thrown out. People who normally wouldn't characterize themselves as packrats find themselves accumulating megabytes of old email messages, news articles, personal financial data, digital images, digital music in various formats and, increasingly, animations, movies and other multimedia presentations. For many of us, digital memory serves to supplement the neural hardware we were born with for keeping track of things; the computer becomes a sort of neural prosthetic or memory amplifier.
However reassuring it may be to know that every aspect of your digital lifestyle is stored on your computer's hard drive, storing information doesn't do much good if you can't get at what you need when you need it. How do you recall the name of the restaurant your friend from Seattle mentioned in email a couple of years back when she told you about her new job? Or perhaps you're trying to find the recommendation for a compact digital camera that someone sent you in email or you saved from a news article. It's tough remembering where you put things and you'd rather not look through all your files each time you want to recall a piece of information.
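One standard remedy, sketched below under the assumption that your saved messages are plain text files (the file names are hypothetical), is to build a simple index once, mapping each word to the files that contain it, so that later lookups don't require rereading everything.

```python
import re
from collections import defaultdict

def build_index(files):
    """Map each lower-cased word to the set of files containing it."""
    index = defaultdict(set)
    for filename in files:
        with open(filename, encoding="utf-8", errors="ignore") as f:
            for word in re.findall(r"[a-z]+", f.read().lower()):
                index[word].add(filename)
    return index

# Hypothetical usage:
# index = build_index(["mail/seattle-job.txt", "notes/camera-advice.txt"])
# index["restaurant"]   -> the set of files mentioning the word "restaurant"
```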
In 1999, when NASA launched the first of its Earth Observing System (EOS) satellites, they knew they would have to do something with the terabytes (a terabyte is a trillion bytes) of data streaming down from these orbiting observers.
With all my mumbo-jumbo about conjuring up spirits and casting spells, it's easy to lose track of the fact that computers are real and there is a very precise and concrete connection between the programs and fragments of code you run on your computer and the various electrical devices that make up the hardware of your machine. Interacting with the computer makes the notions of computing and computation very real, but you're still likely to feel shielded from the hardware – as indeed you are – and to be left with the impression that the connection to the hardware is all very difficult to comprehend.
For some of you, grabbing a soldering iron and a handful of logic chips and discrete components is the best path to enlightenment. I used to love tinkering with switching devices scavenged from the local telephone company, probing circuit boards to figure out what they could do and then making them do something other than what they were designed for. Nowadays, it's easier than ever to “interface” sensors and motors to computers, but it still helps to know a little about electronics even if you're mainly interested in the software side of things.
I think it's a good experience for every computer scientist to learn a little about analog circuits (for example, build a simple solid-state switch using a transistor and a couple of resistors) and integrated circuits for memory, logic and timing (build a circuit to add two binary numbers out of primitive logic gates).
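For readers who prefer software to solder, here is a small Python sketch of the second exercise: a one-bit full adder expressed in terms of AND, OR and XOR operations, chained into a ripple-carry adder that adds two binary numbers.

```python
def full_adder(a, b, carry_in):
    """Add three bits; return (sum_bit, carry_out), using only gate operations."""
    s1 = a ^ b                               # XOR gate
    sum_bit = s1 ^ carry_in                  # XOR gate
    carry_out = (a & b) | (s1 & carry_in)    # AND and OR gates
    return sum_bit, carry_out

def add_binary(x_bits, y_bits):
    """Ripple-carry addition of two equal-length bit lists, least significant bit first."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)
    return result

# 6 + 3 = 9: with the least significant bit first, [0,1,1] is 6 and [1,1,0] is 3.
print(add_binary([0, 1, 1], [1, 1, 0]))   # [1, 0, 0, 1] -> 9
```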
The Admission of Venereal Patients . . . [is] a Subversion of the Charity, or a Misapplication of the Money given in trust for the Poor . . . the Society [has] constantly rejected Venereal Patients for the very reason of Being Venereal.
So wrote one of the governors of the Westminster Infirmary in 1738. It is clearly a strong invective against allowing so-called “foul” patients into hospitals. Many have presumed that this policy was pervasive in early modern London. It was not.
Considerably more scholarship has explored venereal disease in the modern period. However, there is a growing body of literature on the early modern period that has explored medical treatises, graphic art, and literature, analyzing the various meanings that early modern doctors, artists, and playwrights attached to sexual infection. Yet early modern institutional care has received rather less attention. Robert Jütte has identified the area as a notable gap in the literature and called for further research. This study hopes to add to Jütte’s work on Germany and that of Jon Arrizabalaga, John Henderson, and Roger French on Italy.
Discussions of institutional care for venereal disease in early modern England have tended to assume that the attitude expressed above by the governors of the Westminster Infirmary was standard throughout the period. Moreover, English scholarship has focused the lion’s share of attention on one particular hospital, the London Lock Hospital. The Lock, a voluntary hospital devoted exclusively to venereal disease, was established in 1747. Its appearance in the mid-eighteenth century led many to assume that impoverished venereal patients seeking treatment earlier had nowhere to turn. Historians presumed that the Lock Hospital must have filled some void, that prior hospitals must have excluded venereal patients on moral grounds. With the Enlightenment, in this view, came new tolerance and a new hospital as its manifestation. A kind of whiggishness has colored many discussions of the Lock, portraying it as a progressive step in the march of modernity. To make this case scholars have asserted that early hospital provision for venereal patients was scant or nonexistent. Arguing from a slightly different vantage point, some recent historians of sexuality have advanced a similar picture.
As the eighteenth century dawned, beds in royal hospital foul wards were becoming harder to get. The financial effects of fire and war in the 1690s severely depleted the coffers as the seventeenth century drew to a close. This fiscal pressure forced St. Bartholomew’s to stop paying to support foul patients in the outhouses after 1696. So as the new century began, venereal patients now had to come up with the four pence per day in order to stay in the outhouses, even though the hospital continued to pay to support the hundreds of patients treated each month in the clean wards of the main hospital. This two-tiered fee structure would last in one form or another throughout the century.
Bart’s did resume paying to support some venereal patients in 1703. However, the figures betray a clear shift in policy. In 1703 the governors spent just £93 to support outhouse patients while they spent in excess of £1,900 to feed clean patients. Translated into patient numbers, this means that the hospital paid to support about 314 clean patients at a time, while supporting on average only about fifteen foul patients throughout the course of the year. In stark contrast to the pattern in the seventeenth century, when venereal patients represented such a significant portion of St. Bartholomew’s charity cases—well over 20 percent in some years—in 1703 they represented less than 5 percent of the patients supported by the governors.
However, it is important to bear in mind that these figures do not represent all the patients treated at the two outhouses, but only those receiving full hospital charity. The hospital now classified foul patients as either needy of charity or capable of paying their own way. The hospital consented to support only those who were “entirely destitute of Mony, or friends, & parish settlements.” So in late 1702 or early 1703 the governors resumed their charitable support for poor foul patients, but for a much more limited group. This renewal shows that the hospital had not entirely abandoned the people struck by the dual dilemmas of poverty and the pox. There remained some commitment to helping impoverished foul patients. However, governors now drastically limited the amount of money devoted to the cause, and began to call on parishes to contribute towards the support of their own venereal paupers.
On August 22, 1728, Flora Price applied to her churchwardens in the parish of St. Margaret’s Westminster. When questioned by the overseers of the poor she admitted that she was poxed and sought their help. The clerk recorded that she “be admitted into ye House till such time she can be got into ye Hospital for cure of the foul distemper.” However, she never entered a hospital. Instead, it seems she entered the workhouse and underwent mercury treatment there. Following her salivation she was discharged on October 21. The workhouse committee ordered “That Flora Price be discharged ye house & to have some old cloaths & to be sent to Bridewell upon any new Application.” Such stories are ubiquitous in eighteenth-century parochial records. Workhouse admission records register the admission of sick paupers week in and week out throughout the entire eighteenth century. Given the great prevalence of the pox one should not be surprised to learn that foul patients like Flora Price were ever-present in these institutions.
Yet the medical role of the eighteenth-century workhouse has received little attention. Studies of early modern English medical institutions have generally focused on the large hospitals like St. Bartholomew’s and St. Thomas’s or on the growing number of private specialist charities like the Lock Hospital. Yet, in parochial workhouse infirmaries there existed an important level of institutional health care for the very poor. Many paupers like Flora Price did not run immediately to a hospital when they became ill. Often their first stop (or their last resting place) was the workhouse.
Over the past decades social historians have tirelessly explored the massive landscape of English parochial records, which has yielded a wealth of rich data on the English poor. However, too few early modern medical historians have mined this body of material, much of which concerns issues related to health and healing. Focusing on a single disease in these institutions allows access to the much larger issue of eighteenth-century workhouse medicine, which still awaits proper investigation. Overall, the assumption continues that the medicalization of workhouses was a nineteenth-century phenomenon and a product of the New Poor Law. Just one example is the recent reaction to evidence of medical care in London workhouses from 1837, which, we read, “was important from an early date.”