The aim of these notes is to describe the monadic and incremental approaches to the denotational semantics of programming languages. This is done via the use of suitable typed metalanguages, which capture the relevant structure of semantic categories. The monadic and incremental approaches are formulated in the setting of a type-theoretic framework for the following reasons:
a type theory with dependent types allows a precise, concise and general description of the two approaches, based on signatures as abstract representations for languages;
there are various implementations (e.g. LEGO and Coq) which provide computer assistance for several type theories, and without computer assistance it seems unlikely that either of the two approaches can go beyond toy languages.
On the other hand, the monadic and incremental approaches can already be described with a naive set-theoretic semantics. Therefore, knowledge of Domain Theory and Category Theory becomes essential only in Section 6.
The presentation adopted differs from advanced textbooks on denotational semantics in the following aspects:
it makes significant use of type theory as a tool for describing languages and calculi, while this is usually done via a set of formation or inference rules;
it incorporates ideas from Axiomatic and Synthetic Domain Theory into metalanguages, while most metalanguages for denotational semantics are variants of LCF;
it stresses the use of metalanguages to give semantics via translation (using the monadic and incremental approaches), but avoids a detailed analysis of the categories used in denotational semantics.
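To make the monadic approach concrete, here is a minimal Haskell sketch (my own illustration, not taken from the notes, which work in a dependently typed metalanguage): the semantics of a tiny expression language is written once, parameterised by a monad supplying the notion of computation.

```haskell
-- A sketch of the monadic approach: the interpreter is defined for an
-- arbitrary monad m, which encapsulates the "notion of computation"
-- (here, possible failure on division by zero).
data Expr = Lit Int | Add Expr Expr | Div Expr Expr

eval :: MonadFail m => Expr -> m Int
eval (Lit n)     = return n
eval (Add e1 e2) = do v1 <- eval e1
                      v2 <- eval e2
                      return (v1 + v2)
eval (Div e1 e2) = do v1 <- eval e1
                      v2 <- eval e2
                      if v2 == 0 then fail "division by zero"
                                 else return (v1 `div` v2)

-- Instantiating m at Maybe models partiality; richer monads (state,
-- exceptions, nondeterminism) reinterpret the same definition.
example :: Maybe Int
example = eval (Div (Lit 6) (Add (Lit 1) (Lit 2)))   -- Just 2
```

Changing the monad changes the computational behaviour of the language without touching the semantic clauses, which is the point of the parameterisation.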
The aim of these notes is to explain how games can provide an intensional semantics for functional programming languages, and for a theory of proofs. From the point of view of program semantics, the rough idea is that we can move from modelling computable functions (which give the ‘extensional’ behaviour of programs) to modelling ‘intensional’ aspects of the algorithms themselves. In proof theory, the tradition has been to consider syntactic representations of (what are presumably intended to be ‘intensional’) proofs; so the idea is to give a more intrinsic account of a notion of proof.
Three main sections follow this Introduction. Section 2 deals with games and partial strategies; it includes a discussion of the application of these ideas to the modelling of algorithms. Section 3 is about games and total strategies; it runs parallel to the treatment in Section 2, and is quite compressed. Section 4 gives no more than an outline of more sophisticated notions of game, and discusses them as models for proofs. Exercises are scattered through the text.
I very much hope that the broad outline of these notes will be comprehensible on the basis of little beyond an understanding of sequences (lists) and trees. However, the statements of some results and some of the exercises presuppose a little knowledge of category theory, of domain theory and of linear logic.
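Since the notes promise to stay close to sequences and trees, here is a minimal rendering of the basic vocabulary (the representation is my own, chosen for simplicity, and not the notes' definitions): a play is a list of moves, and a partial strategy proposes Player's next move, if any.

```haskell
-- Plays as lists of moves (Opponent and Player alternating, Opponent
-- moving first); a deterministic partial strategy for Player maps the
-- play so far to a response, or to Nothing when it has no response.
type Move = String
type Play = [Move]

type Strategy = Play -> Maybe Move

-- Loosely in the spirit of the copy-cat (identity) strategies of game
-- semantics: Player simply repeats Opponent's most recent move.
copycat :: Strategy
copycat []   = Nothing
copycat play = Just (last play)
```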
The “classical” paradigm for denotational semantics models data types as domains, i.e. structured sets of some kind, and programs as (suitable) functions between domains. The semantic universe in which the denotational modelling is carried out is thus a category with domains as objects, functions as morphisms, and composition of morphisms given by function composition. A sharp distinction is then drawn between denotational and operational semantics. Denotational semantics is often referred to as “mathematical semantics” because it exhibits a high degree of mathematical structure; this is in part achieved by the fact that denotational semantics abstracts away from the dynamics of computation—from time. By contrast, operational semantics is formulated in terms of the syntax of the language being modelled; it is highly intensional in character; and it is capable of expressing the dynamical aspects of computation.
The classical denotational paradigm has been very successful, but has some definite limitations. Firstly, fine-structural features of computation, such as sequentiality, computational complexity, and optimality of reduction strategies, have either not been captured at all denotationally, or not in a fully satisfactory fashion. Moreover, once languages with features beyond the purely functional are considered, the appropriateness of modelling programs by functions is increasingly open to question. Neither concurrency nor “advanced” imperative features such as local references have been captured denotationally in a fully convincing fashion.
Computational behaviours are often distributed, in the sense that they may be seen as spatially separated activities accomplishing a joint task. Many such systems are not meant to terminate, and hence it makes little sense to talk about their behaviours in terms of traditional input-output functions. Rather, we are interested in the behaviour of such systems in terms of their often complex patterns of stimulus/response relationships varying over time. For this reason such systems are often referred to as reactive systems.
Many structures for modelling reactive systems have been studied over the past 20 years. Here we present a few key models. Common to all of them is that they rest on an idea of atomic actions, over which the behaviour of a system is defined. The models differ mainly with respect to what behavioural features of systems are represented. Some models are more abstract than others, and this fact is often used in informal classifications of the models with respect to expressibility. One of our aims is to present principal representatives of models, covering the landscape from the most abstract to the most concrete, and to formalise the nature of their relationships by explicitly representing the steps of abstraction that are involved in moving between them. In following through this programme, we find category theory to be a convenient language for formalising the relationships between models.
To give an idea of the role categories play, let us focus attention on transition systems as a model of parallel computation.
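As a concrete point of reference (the Haskell rendering is my own illustration, not part of the original text), a labelled transition system is just a set of states, an initial state, and labelled edges between states:

```haskell
-- A labelled transition system over atomic actions (labels).
type State = String
type Label = String

data TransitionSystem = TransitionSystem
  { states      :: [State]
  , initial     :: State
  , transitions :: [(State, Label, State)]   -- s --a--> s'
  }

-- The possible next steps from a given state.
stepsFrom :: TransitionSystem -> State -> [(Label, State)]
stepsFrom ts s = [ (a, s') | (s0, a, s') <- transitions ts, s0 == s ]

-- A tiny vending machine: accept a coin, then dispense tea or coffee.
vending :: TransitionSystem
vending = TransitionSystem
  { states      = ["idle", "paid"]
  , initial     = "idle"
  , transitions = [ ("idle", "coin",   "paid")
                  , ("paid", "tea",    "idle")
                  , ("paid", "coffee", "idle") ]
  }
```

In the categorical formulation, it is the morphisms between such systems, maps on states that preserve the initial state and the transitions, that make the steps of abstraction between models precise.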
Why do I keep doing this? I keep bringing up the “should all variables be accessed through methods” debate whenever I see people taking a dogmatic position, that is, one that they don't explain. It wasn't until I rewrote the whole thing as patterns for the book that I realized the key issue here is communication.
I'm a little disappointed reading this now that I didn't try to write Direct Access and Indirect Access as patterns. That would have made the reasoning behind the options much more obvious. I guess I just wasn't ready to use patterns to address such fundamental questions. Now I don't even hesitate; I'm so pattern-soaked now I can't help it.
Anyway, if this one bugs you, ignore it, all except the part about making accessors private by default.
A debate has been raging on both CompuServe and the Internet lately about the use and abuse of accessing methods for getting and setting the values of instance variables. Since this is the closest thing I've seen to a religious war in a while, I thought I'd weigh in, not with the definitive answer, but with at least a summary of the issues and arguments on both sides. As with most, uh, “discussions” generating lots of heat, the position anyone takes has more to do with attitude and experience than with objective truth.
More thinking about design/modeling. This one covers my pet peeve: people who use fixed-size collections with meaningful indexes (e.g. “1 is red, 2 is blue, 3 is green”). In my patterns book, I covered this in some detail when I talk about your program talking to you. Darn it, if red, green, and blue go together, then make an object for them, figure out what it should be called, and figure out what it should do. If you don't create the easy objects, how will you ever be able to see to create the hard ones?
Let's see if I can get through this third column on how objects are born without blushing. So far we've seen two patterns: “Objects from States” and “Objects from Collections.” This time we'll look at two more sources of objects: “Objects from Variables” and “Objects from Methods.” All four patterns have one thing in common—they create objects that would be difficult or impossible to invent before you have a running program.
These patterns are part of the reason I am suspicious of any methodology that smacks of the sequence, “design, then program.” The objects that shape the way I think about my programs almost always come out of the program, not out of my preconceptions. Thinking “the design phase is over, now I just have to push on and finish the implementation” is a sure way to miss these valuable objects and end up with a poorly structured, inflexible application to boot.
The Smalltalk Report occupies an important position in legitimizing Smalltalk. While it has in the past seemed the ugly stepchild of the SIGS family, the mere fact of its existence has gone far towards convincing reluctant decision makers that Smalltalk is worth betting on.
When I started writing for The Smalltalk Report, I had already made something of a name for myself in the Smalltalk world. The CRC paper was out and making its splash, I had been working on Smalltalk in various guises for eight years, and I was well into my tenure at MasPar.
My life in a startup cloister was a big part of my decision to begin writing the column. Startups are great fun, but you don't join one to see the world and become famous (if you're not in sales, anyway). Writing the column kept me in touch with my friends.
In the end, the benefits of writing the column were much greater than I had imagined, as were the pains. It always seemed that the next deadline hit just after I'd finished the last column. Dragging fingers to keyboard when a paying customer was already waiting for code was tough. However, I got much more from the column than I put into it. First, I learned to write. You will see a distinct change in my writing style from the first columns to the last.
We present an actor language which is an extension of a simple functional language, and provide an operational semantics for this extension. Actor configurations represent open distributed systems, by which we mean that the specification of an actor system explicitly takes into account the interface with external components. We study the composability of such systems. We define and study various notions of testing equivalence on actor expressions and configurations. The model we develop provides fairness. An important result is that the three forms of equivalence, namely, convex, must, and may equivalences, collapse to two in the presence of fairness. We further develop methods for proving laws of equivalence and provide example proofs to illustrate our methodology.
“I'm not dead yet.” That's what I thought when Don Jackson at SIGS offered to put my articles together into a book. That's probably because every book I've ever seen with “Complete” or “Collected” in the title was written by someone no longer with us. Last I checked, I'm still here.
Having established that I am alive enough to be typing this, let's get to the point of the Preface—convincing you to buy this book. You are standing here trying to decide whether to spend your hard-earned dinero for that exquisite literature you saw with the swords and dragons and stuff on the cover or a collection of my articles. Here's my pitch—my entire career has been spent learning how to communicate with other people about programs. This book chronicles how I learned what I know and how I learned to tell people stories about it.
I just finished my first book, The Smalltalk Best Practice Patterns. It is easy to see how a book written end to end can have a single theme. This book has no such theme. It has a story—no, two stories.
The first story could be called “Kent Discovers the Importance of People.” I got into computing to avoid having to deal with people. As a sophomore in high school, I took physics instead of biology so I wouldn't have to try to understand “all that squishy stuff.” The rest of my academic career was spent in search of deeper understanding of the mechanics of things, whether the topic was computing or music.
The previous column got me started examining why people create classes. About this time, I had collected enough patterns to begin thinking about the patterns book. Of course, at first I was going to cover all of programming/analysis/design/project management/etc. in a single book. This exploration was the beginning of trying to write the analysis/design portion of the book.
One of the things I like about writing a column is that it forces you to think hard about a topic at regular intervals. I'm the kind of person who dives deep into a topic until I'm bored, and then drifts until something else catches my eye. I get to study lots of cool stuff that way, but I don't really penetrate to insight. Writing a column returns me to roughly the same place every month and pushes me to find something new. The result is much more valuable thinking.
Previously, I talked about how objects could be created from the states of objects that acted like finite-state machines (the “Objects from States” pattern). I'll continue on the theme of where objects come from for this and several issues.
I won't be saying much about the conventional source of objects, the user's world. There are lots of books that will tell you how to find those objects. Instead, I'll focus on finding new objects in running programs.
I think this was originally titled “Inheritance: The Rest of the Rest of the Story,” but it got edited. Oh well.
The pattern presented here is another in the “Transformation Series.” It recommends letting inheritance evolve from the need to reduce code duplication.
In the June issue, where I took on accessor methods, I stated that there was no such thing as a truly private message. I got a message from Nikolas Boyd reminding me that he had written an earlier article describing exactly how to implement really truly private methods. One response I made was that until all the vendors ship systems that provide method privacy, Smalltalk cannot be said to have it. Another is that I'm not sure I'd use it even if I had it. It seems like some of my best “reuse moments” occur when I find a supposedly private method in a server that does exactly what I want. I don't yet have the wisdom to separate public from private with any certainty.
On a different note, I've been thinking about the importance of bad style. In this column, I always try to focus on good style, but in my programming there are at least two phases of project development where maintaining the best possible style is the farthest thing from my mind. When I am trying to get some code up and running I often deliberately ignore good style, figuring that as soon as I have everything running I can simply apply my patterns to the code to get well-structured code that does the same thing.
Syntax-directed development is a software development method in which the syntax of the application's input plays a central role. The syntax description forms a frame to which semantic actions, attributes, and local and global information can be attached. Much research has been done to develop the theories and algorithms robots need to process information and interact with the environment. This paper describes how LL(1) descriptions can be used to automatically produce motion control software, object recognizers, and human-computer interfaces.
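As a small illustration of the idea (the grammar, tokens, and semantic actions below are hypothetical, not the paper's actual robot-control language), an LL(1) grammar translates directly into a recursive-descent recogniser with one semantic action per production:

```haskell
--   Script  ::= Command Script | ε
--   Command ::= "up" | "down"
-- One token of lookahead decides which production applies (the LL(1)
-- property); each production fires its semantic action, here building
-- a list of commands.
data Command = Up | Down deriving Show

parseScript :: [String] -> Maybe [Command]
parseScript []           = Just []            -- Script ::= ε
parseScript (tok : rest) = do                 -- Script ::= Command Script
  cmd  <- parseCommand tok
  cmds <- parseScript rest
  return (cmd : cmds)

parseCommand :: String -> Maybe Command
parseCommand "up"   = Just Up
parseCommand "down" = Just Down
parseCommand _      = Nothing

-- parseScript ["up", "up", "down"]  ==  Just [Up, Up, Down]
```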
Yet another brutal review. I never write a tough review without questioning myself: “Is it just me? Am I just not smart enough to get this product? Who am I to tell someone else what to do?”
I am just now getting comfortable with writing what I know and trusting my readers to take what I say and add their perspective and experience to it.
Should you be using Distributed Smalltalk? That is the question I'll address here. This isn't a full-blown product review, nor a technical piece. I'll introduce the history and technical background of Distributed Smalltalk as they apply to the question of who should be using it.
First, what is Distributed Smalltalk? It is a Common Object Request Broker Architecture (CORBA)-compliant extension to ParcPlace Systems' VisualWorks, developed and marketed by Hewlett-Packard. “HP? The hardware company? Those C++ guys?”
My first reaction when I saw that HP had done a Smalltalk product was, “What does HP know about Smalltalk?” The answer to this question is twofold. One answer is “a lot.” HP has been involved peripherally in Smalltalk since it first escaped Xerox. They were one of the first four companies to write a virtual machine. They have also had pockets of interest in Smalltalk ever since. Their 700 series of workstations has held the title for fastest Smalltalk for several years.
A special issue of the Journal of Functional Programming will be devoted to the use of functional programming in theorem proving. The submission deadline is 31 August 1997.
The histories of theorem provers and functional languages have been deeply intertwined since the advent of Lisp. A notable example is the ML family of languages, which are named for the meta language devised for the LCF theorem prover, and which provide both the implementation platform and interaction facilities for numerous later systems (such as Coq, HOL, Isabelle, NuPrl). Other examples include Lisp (as used for ACL2, PVS, Nqthm) and Haskell (as used for Veritas).
This special issue is devoted to the theory and practice of using functional languages to implement theorem provers and using theorem provers to reason about functional languages. Topics of interest include, but are not limited to:
– architecture of theorem prover implementations
– interface design in the functional context
– limits of the LCF methodology
– impact of host language features
– type systems
– lazy vs strict languages
– imperative (impure) features
– performance problems and solutions
– problems of scale
– special implementation techniques
– term representations (e.g. de Bruijn vs name carrying vs BDDs)
Well, at least I didn't just present instance-specific behavior as a pointy-hat technique. I tried to extract some cultural lessons from it. Now that the Smalltalk world has contracted to a couple of big players, there isn't enough cultural diversity left to analyze. Ah, the olden days, back when we used to have to walk miles barefoot in the snow to get coal to shovel into our Smalltalk machines… Now I sound like an old fart.
In the last issue, I wrote about what instance-specific behavior is, why you would choose to use it, and how you implement it in Smalltalk-80 …er… Objectworks\Smalltalk (which way does the slash go, anyhow?) …er… VisualWorks (is that a capital W or not?). This month's column offers the promised Digitalk Smalltalk/V OS/2 2.0 implementation (thanks to Mike Anderson for the behind-the-scenes info) and a brief discussion of what the implementations reveal about the two engineering organizations.
I say “brief discussion” because as I got to digging around I found many columns' worth of material there for the plucking. I'll cover only issues raised by the implementation of classes and method look-up. Future columns will contrast the styles as they apply to operating system access, user interface frameworks, and other topics.
Just when I write something that is too short, I write something that is too long. I was really in the groove of writing columns by this time, and I'd wait until the last minute. Sometimes, as in the previous column, this left me a little short. Sometimes, as in this column, I ended up starting something I couldn't finish.
This column points to one of my weaknesses as a writer—I don't turn to pictures nearly soon enough. The material covered here would make much more sense with a few well-chosen pictures. If you get bogged down, try drawing the pictures yourself. That's what I do, even if I don't often publish them.
I kind of ran out of steam towards the end of that last series on creating new objects. I think the message that many of the most important objects are not the ones you find by underlining nouns in problem statements is still valid. The objects that emerge (if you're watching for them) late in the game, during what is typically thought of as maintenance, can profoundly affect how you as a programmer view the system. By the time I got to the fourth part, though, I was tired of the topic. Those last couple of patterns still deserve some reexamination in the future.