
LASER Summer School on Software Engineering


Concurrency and Correctness

September 7-13, 2008 - Elba Island, Italy

Tryggve Fossum (Intel)
Maurice Herlihy (Brown University)
Tony Hoare (Microsoft Research)
Bertrand Meyer (ETH Zurich, director)
Robin Milner (Cambridge University)
Peter O'Hearn (Queen Mary University of London)
Daniel A. Reed (Microsoft Research and UNC Chapel Hill)




Chip-Level Multiprocessors

Speaker: Tryggve Fossum, Intel Fellow and Director of Microarchitecture Development, Intel Corporation

The focus of the lectures will be chip-level multiprocessors (CMPs). We will study the major components: processor cores, the cache hierarchy, the on-chip interconnection network, memory access, and I/O. Along the way we will examine issues such as the underlying technologies, parallel and single-threaded performance, programming CMPs, hardware multithreading, heterogeneous cores, power management, chip resource management, chip layout, error handling, the system interface, and multi-socket scaling. We will review some sample designs from academia and industry.

Short biography:
Tryggve Fossum is an Intel Fellow specializing in microprocessor architecture. He has a PhD from the University of Illinois and has worked in processor design at DEC and Compaq, from the first VAX to the last Alpha processor. At Intel he has worked on the Itanium and Xeon processors.

The Art of Multiprocessor Programming

Speaker: Maurice Herlihy, Brown University

The computer industry is undergoing a paradigm shift. Chip manufacturers are shifting development resources away from single-processor chips to a new generation of multi-processor chips known as multicores.
This fundamental change in our core computing architecture will require a fundamental change in how we program. The art of multiprocessor programming, currently mastered by few, is more complex than programming uniprocessor machines, and requires an understanding of new computational principles, algorithms, and programming tools.
Computation on uniprocessor and multiprocessor architectures has many aspects in common. The key issue that distinguishes multiprocessor programming from concurrent uniprocessor programming is the need to understand how concurrent computations on separate processors coordinate with one another, a broad and intricate problem area we call multiprocessor synchronization.
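To make the coordination problem concrete, here is a minimal sketch (in Python, for brevity; the course itself is not tied to this language, and the names are our own). Several threads increment a shared counter; without mutual exclusion the read-modify-write steps can interleave and lose updates, so the increment is guarded by a lock:

```python
import threading

# Shared counter incremented by several threads. The read-modify-write
# in `counter += 1` is not atomic, so it must be protected by a lock.
counter = 0
lock = threading.Lock()

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:          # mutual exclusion: one thread at a time
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # prints 40000: the lock makes each increment atomic
```

Locks are only the starting point: the lectures go on to lock-free and wait-free techniques, where threads coordinate without ever blocking one another.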

Short biography:
Maurice Herlihy received an A.B. degree in Mathematics from Harvard University and a Ph.D. degree in Computer Science from MIT. He has been an Assistant Professor in the Computer Science Department at Carnegie Mellon University, a member of the research staff at Digital Equipment Corporation's Cambridge (MA) Research Lab, and a consultant for Sun Microsystems. He is now a Professor of Computer Science at Brown University.
Prof. Herlihy's research centers on practical and theoretical aspects of multiprocessor synchronization, with a focus on wait-free and lock-free synchronization. His 1991 paper "Wait-Free Synchronization" won the 2003 Dijkstra Prize in Distributed Computing, and he shared the 2004 Goedel Prize for his 1999 paper "The Topological Structure of Asynchronous Computation." He is a Fellow of the ACM.

Correct Contract-Covered Concurrent Computation

Speaker: Bertrand Meyer, ETH Zurich and Eiffel Software

Contract-based techniques, building on both axiomatic semantics and object-oriented principles to establish a sound methodology for sequential development, can be extended to concurrent programming provided we accept a new semantic interpretation of contracts. This set of lectures reviews the state of the art in concurrent object-oriented programming and presents the theory and practice of a model, based on contracts, that yields correct concurrent programs.
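The "new semantic interpretation of contracts" can be illustrated with a small sketch (Python used for brevity; the class and method names are ours, not from the lectures): in the concurrent reading, a precondition on a shared object is no longer a correctness obligation on the caller but a wait condition, so a call whose precondition does not yet hold blocks until another thread makes it true:

```python
import threading

class Buffer:
    """A bounded buffer. In sequential Design by Contract, `get` would
    carry the precondition `require not empty` and a violating call
    would be a bug. Under the concurrent reinterpretation sketched
    here, the same precondition becomes a wait condition: the call
    simply blocks until the condition holds."""

    def __init__(self, capacity):
        self.items = []
        self.capacity = capacity
        self.cond = threading.Condition()

    def put(self, x):
        with self.cond:
            # sequential contract: require count < capacity
            # concurrent reading:  wait until it becomes true
            self.cond.wait_for(lambda: len(self.items) < self.capacity)
            self.items.append(x)
            self.cond.notify_all()

    def get(self):
        with self.cond:
            # sequential contract: require not empty
            # concurrent reading:  wait until not empty
            self.cond.wait_for(lambda: len(self.items) > 0)
            x = self.items.pop(0)
            self.cond.notify_all()
            return x
```

A producer thread calling `put` and a consumer thread calling `get` then coordinate entirely through the contracts, with no explicit signalling code in the client.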

Short biography:
Bertrand Meyer is Professor of Software Engineering at ETH Zürich and Chief Architect of Eiffel Software. His current research interests include object technology, proofs and tests of classes, and object-oriented concurrency.

Bigraphs: a space for mobile interaction

Speaker: Robin Milner, Cambridge University

After three decades of work in concurrent processes, I still seek a model that equally underlies concurrent programming, pervasive computing, biological systems and process calculi. The bigraph model owes a lot to existing spatial process calculi and graph rewriting; it uses some elementary algebra and category theory, all of which I shall explain in the lectures. I shall show how the constructions of mobile ambients, pi calculus, Petri nets and CSP can be expressed in the framework, which yields some theory common to all of them. An example of this theory is a behavioral congruence derived via labelled transition systems.
The course will treat many examples, and there will be many exercises. I hope that my book (with the same title as above) will be published in time for the course. If time permits, I shall explain a stochastic treatment of behaviour, with an application to biology.

Short biography:
Robin Milner graduated from Cambridge in 1958. After short posts he joined the University of Edinburgh in 1973, where he co-founded the Laboratory for Foundations of Computer Science in 1986. He was elected Fellow of the Royal Society in 1988, and in 1991 won the ACM's A.M. Turing Award. He joined Cambridge University in 1995, headed the Computer Laboratory there for four years, and retired in 2001. His research achievements (often joint) include: the system LCF, a model for many later systems for interactive reasoning; Standard ML, an industry-scale but rigorously based programming language; the Calculus of Communicating Systems (CCS); and the Pi Calculus. Currently he works on Bigraphs, a topographical model which aims to provide a theoretical foundation for mobile interactive systems.

Separation Logic and Concurrency

Speaker: Peter O'Hearn, Queen Mary University of London

Concurrently executing processes raise conceptual and practical problems because the fantastic complexity of interactions between processes quickly exceeds what the human mind can grasp. This problem is set to become all the more important with the increasing appearance of multi-core processors, which will lead to a demand for more and more concurrency to be utilized in programs of all kinds. Independently of the technological situation, though, the problem of concurrency is fundamental, and highly nontrivial. Concurrency has been the subject of extensive theoretical work for over four decades, but designing, understanding and analyzing concurrent programs remains difficult.
I will describe techniques for specifying and verifying concurrent programs. An emphasis will be placed on modularity, whereby an attempt is made to specify program parts in isolation from other program components. Modularity holds out the hope of reasoning methods that scale, enabling us to contain the complexity of understanding concurrent programming components. But it is technically challenging. I will present both positive results, on proof techniques that have been developed in the literature, and negative results or examples that are difficult for current techniques to deal with in anything other than a brute force (and impractical) manner.
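The flavour of this modular reasoning can be conveyed by the parallel composition rule of Concurrent Separation Logic, sketched here in standard notation (the separating conjunction ∗ asserts that the two threads operate on disjoint portions of the heap):

```latex
\[
\frac{\{P_1\}\; C_1\; \{Q_1\} \qquad \{P_2\}\; C_2\; \{Q_2\}}
     {\{P_1 * P_2\}\; C_1 \parallel C_2\; \{Q_1 * Q_2\}}
\]
```

subject to the side condition that neither thread modifies a variable free in the other's assertions. Because each thread is verified against only its own portion of the state, no enumeration of interleavings is needed: this is exactly the kind of scalable, component-wise reasoning the lectures aim for, together with the cases where it breaks down.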

Short biography:
Peter O'Hearn received his PhD from Queen's University in Kingston, Canada, in 1991 and was on faculty at Syracuse University until 1996, when he moved to Queen Mary, University of London, where he is a Professor of Computer Science. Throughout the 90s, O'Hearn worked on denotational semantics of programs. Then, around the turn of the millennium, he and John Reynolds discovered Separation Logic, which addressed the 30-year-old problem of efficient reasoning about linked data structures in memory. He went on to develop a Concurrent Separation Logic, which provides a modular proof method for shared-memory concurrent programs. Recently, with a vibrant community of researchers in the southeast of England, he has been tackling the problem of automatic verification and analysis of programs' use of the heap, as well as automatic program-termination analysis. In 2007 O'Hearn received the Royal Society Wolfson Research Merit Award for his work on semantics, logic, and program analysis.

Living in the Multicore Computing Clouds

Speaker: Daniel A. Reed, Microsoft Research and UNC Chapel Hill

We are in the midst of multiple technological transitions. The first is the shift to large-scale, heterogeneous multicore processors, a consequence of power constraints and the decreasing performance returns from single-threaded processor designs. The second is the rise of software as a service, supported by large computing clouds: network-connected compute and data servers. Both of these have profound implications for the design of parallel computing software, tools and algorithms, as well as next-generation applications. We will discuss the challenges for parallel and distributed software tools and outline some possible solutions that bridge the parallel and distributed systems worlds in support of consumer, business and scientific applications.

Short biography:
Daniel A. Reed is Scalable and Multicore Computing Strategist at Microsoft Research. Previously, he was the Chancellor’s Eminent Professor at the University of North Carolina at Chapel Hill, and the Director of the Renaissance Computing Institute (RENCI), which explores the interactions of computing with the sciences, arts and humanities. Dr. Reed is a member of President Bush’s Council of Advisors on Science and Technology (PCAST), charged with providing advice on science and technology issues and challenges to the President. He is chair of the board of directors of the Computing Research Association, which represents the major academic departments and industrial research laboratories in North America. He was previously Director of the National Center for Supercomputing Applications (NCSA) and one of the principal investigators and chief architect for the NSF TeraGrid.


Chair of Software Engineering - ETH Zürich Last update: 22.08.2008