
LASER Summer School on Software Engineering

Practical Techniques of Software Quality

September 12 - 18, 2004
Elba, Italy

Directors: Bertrand Meyer (ETH Zurich), Carlo Ghezzi (Politecnico di Milano), Wolfgang Pree (Universitaet Salzburg)



Title: Formal Methods Made Practical

Speaker: Jean-Raymond Abrial

Description: My purpose in this course is to show how Formal Methods can be used as practical tools for Software Engineers and more generally System Designers.

By formal methods, I mean a number of mathematically based techniques used essentially to construct models of complex systems. The model of a system is not the system itself: it is an abstract representation of it, allowing one to reason about the real system without yet having it at one's disposal. Besides helping the designer to construct the model of a system, formal method techniques are also used to reason about the system by considering its model only.

In mature engineering disciplines (e.g. civil engineering, avionics), people are used to building such models before constructing systems. These models are called "blueprints" in the widest sense of the term. In these disciplines, it is also common practice to use such models to reason about the future system they represent, and in the development process the model-reasoning phase lasts quite a long time before the final system is actually constructed. People in these disciplines also know very well that a shortsighted and naive model can lead to very serious flaws in the final product. This modeling practice has proved to have many advantages and to give far better results than reasoning directly on the final product only. This is clearly not the case in our less mature discipline, where people still spend significant time and money testing and reshaping their final products, paying far less attention to the modeling phase (if any).

However, the fact that one uses mathematical techniques does not magically make the modeling activity a trivial one. One of the reasons for this is that real models can be quite large and elaborate. In fact, modeling a large and complex system is not an easy task, and there is no unique and best way to do it: it is essentially an experimental activity which is mostly learned through practice. Therefore, one of our goals in this course is to present general patterns to be used systematically in model construction. Refinement of models, their decomposition into smaller ones, and instantiation of more generic ones are three such patterns among the basic ingredients we might present.
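As a toy illustration of the refinement pattern (this is not the course's formalism, merely a sketch of the idea): each step of a concrete model must correspond, through a "gluing" function on states, to a step of the abstract model it implements. The counter example below is hypothetical.

```python
# Toy refinement check (illustrative only, not Event-B/B itself):
# every concrete transition must map, via the gluing function,
# to a transition of the abstract model.

# Abstract model: a counter modulo 4, stepping by one.
abstract = {(n, (n + 1) % 4) for n in range(4)}

# Concrete model: the same counter implemented as two bits (high, low).
def step(bits):
    high, low = bits
    value = (2 * high + low + 1) % 4
    return (value // 2, value % 2)

concrete = {(b, step(b)) for b in [(0, 0), (0, 1), (1, 0), (1, 1)]}

def glue(bits):
    # gluing function: relate a concrete two-bit state to an abstract value
    high, low = bits
    return 2 * high + low

def refines(concrete, abstract, glue):
    # simulation check: every concrete step is an abstract step
    return all((glue(s), glue(t)) in abstract for (s, t) in concrete)

print(refines(concrete, abstract, glue))  # True
```

In a real development the check is a proof obligation discharged deductively rather than by state enumeration, but the shape of the obligation is the same.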

To validate these techniques, we shall present a number of case studies ranging from complex pointer programs and distributed protocols to complete system developments involving not only software but also equipment and communication.

Further reading:  

Slides: 1 | 2 | 3 | 4 | 5

Title: Formal methods for Asynchrony and Security

Speaker: Ernie Cohen

Specific topics:

  • Reasoning about asynchronous systems

Invariants are the standard tool for reasoning about state-based concurrent systems. However, when components of a system are allowed to drift out of (physical) synchronization, these invariants become complex and highly recursive, and provide little reasoning leverage. I'll describe some attacks on this problem, each based on separating out the "hard" concurrency (where race conditions predominate) from the "easy" concurrency (where computations are allowed to drift out of synchronization without changing the essential structure of these computations). These approaches include weakening the programming logic, "reduction" theorems that allow asynchrony to be introduced or eliminated via program transformation, and using locking protocols to express global synchronization structure.
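To make the starting point concrete (a minimal sketch, not from the lectures): for small finite-state systems an invariant can be checked by exhaustively exploring all interleavings. The two-process lock example below is hypothetical; the point is that the invariant is checked in every reachable state.

```python
# Minimal illustration: checking mutual exclusion as an invariant over
# all reachable states of a tiny two-process system with a shared lock.
# State = (lock holder or None, (pc of process 0, pc of process 1)),
# where pc 0 = idle and pc 1 = in the critical section.

from collections import deque

def successors(state):
    lock, pcs = state
    for p in (0, 1):
        new_pcs = list(pcs)
        if pcs[p] == 0 and lock is None:   # acquire lock, enter critical section
            new_pcs[p] = 1
            yield (p, tuple(new_pcs))
        elif pcs[p] == 1:                  # release lock, return to idle
            new_pcs[p] = 0
            yield (None, tuple(new_pcs))

def invariant(state):
    _, pcs = state
    # mutual exclusion: never both processes in the critical section
    return not (pcs[0] == 1 and pcs[1] == 1)

def check(init):
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        assert invariant(s), f"invariant violated in {s}"
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return len(seen)

print(check((None, (0, 0))))  # → 3 reachable states, all satisfying the invariant
```

The difficulty the lectures address begins where this sketch ends: once components drift out of synchronization, the invariant needed to make such a check go through becomes complex and recursive.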

  • Reasoning about cryptographic protocols

A number of approaches have been proposed for reasoning (manually or automatically) about cryptographic protocols in perfect cryptography models. I'll describe a simple alternative based on first-order invariants. This method provides the easy automation of weaker methods such as backward symbolic search, while retaining the power of more difficult approaches based on recursive invariants. It also provides smooth integration with non-cryptographic programming methods. I'll also describe how the approach extends to more interesting attacker/encryption models.
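As a flavour of the symbolic-search side of this topic (a hypothetical sketch, not Cohen's invariant method): in a perfect-cryptography model, one can compute the forward closure of an attacker's knowledge under pairing, unpairing, and decryption with known keys, and then ask whether a secret is derivable. The message terms below are invented for illustration.

```python
# Dolev-Yao style knowledge closure (illustrative sketch only).
# Messages are atoms, pairs ('pair', a, b), or encryptions ('enc', key, body).
# The attacker can split any pair it knows and decrypt only with a known key.

def close(knowledge):
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for m in list(known):
            parts = []
            if isinstance(m, tuple) and m[0] == 'pair':
                parts = [m[1], m[2]]                     # split a pair
            elif isinstance(m, tuple) and m[0] == 'enc' and m[1] in known:
                parts = [m[2]]                           # decrypt with a known key
            for p in parts:
                if p not in known:
                    known.add(p)
                    changed = True
    return known

# The attacker observes a nonce encrypted under kAB, and a pair that leaks kC.
observed = [('enc', 'kAB', 'nonce'),
            ('pair', 'kC', ('enc', 'kC', 'secretC'))]
known = close(observed)
print('nonce' in known)    # False: kAB was never leaked
print('secretC' in known)  # True: kC leaked, so secretC is derivable
```

A first-order invariant, by contrast, characterizes what the attacker can *never* learn across all protocol runs, which is what makes the method compositional.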

  • Building trustworthy computing platforms

Today's commercially dominant computing platforms are far from trustworthy. I'll talk about some of the practical engineering challenges in making these platforms more secure without breaking the open model that has made the PC architecture so successful. Along the way, I'll talk about some of the issues in building high-assurance secure platforms, such as access control, information flow, minimizing the TCB, verifying low-level code, and the horrors of PC hardware. 

Further reading:  

Title: Agile Software Development - Practices, Patterns, Tools

Speaker: Erich Gamma

Description: Agile development is the ability to deliver quality software on time in the face of changing requirements. Agile developers survive in a world of changing requirements by following a set of development practices in a disciplined way while keeping ceremony low. They use test-driven development to get immediate feedback after changing the system. By frequently refactoring their code, agile developers prevent code quality from degrading and the system from becoming tangled and unmaintainable. Agile developers are familiar with design patterns and know when to apply them and, just as important, when not to apply them, to build systems with a flexible and maintainable architecture. Finally, they need tools that support their development practices. However, agile developers know that it is the people, not the processes, practices, patterns or tools, that make it all work.

In this course I will cover agile practices, design patterns and modern software development tools. To do so I will walk through several case studies. I will use eclipse, a popular Open Source project, both as the tool supporting agile development and as an example of a large-scale system that was developed using agile practices.
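The test-driven rhythm mentioned above can be sketched briefly (here with Python's unittest rather than the JUnit/Eclipse tooling of the lectures; the MoneyConverter class is a made-up example, not from the course): the test is written first and states the expected behaviour, and the production code is written to make it pass.

```python
# Illustrative test-first example (hypothetical, not course material).
# The test states the expected behaviour; the class makes it pass.

import unittest

class MoneyConverter:
    def __init__(self, rates):
        self.rates = rates  # e.g. {('CHF', 'EUR'): 0.65}

    def convert(self, amount, src, dst):
        if src == dst:
            return amount
        return amount * self.rates[(src, dst)]

class MoneyConverterTest(unittest.TestCase):
    def test_identity_conversion(self):
        c = MoneyConverter({})
        self.assertEqual(c.convert(10, 'EUR', 'EUR'), 10)

    def test_cross_rate(self):
        c = MoneyConverter({('CHF', 'EUR'): 0.65})
        self.assertAlmostEqual(c.convert(100, 'CHF', 'EUR'), 65.0)
```

Run with `python -m unittest`; the immediate green/red feedback after every change is what makes frequent refactoring safe.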

Specific topics: The lectures will include the following topics:

  • Agile design using design patterns.
  • An introduction to eclipse. 
  • Comprehensive unit testing.
  • Refactoring practices. 
  • A look inside the agile eclipse Open Source development process.

Further reading: 

Title: The Eiffel method in practice

Speaker: Bertrand Meyer

Description: Object-oriented programming has become mainstream, but doing it right may require questioning some of the accepted practices. This series of lectures describes the principles of software development in Eiffel, with a particular emphasis on design principles, the application of Design by Contract ideas, and newer mechanisms such as agents. Numerous examples will be covered. The last lecture covers a mathematical basis for the approach.
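The core of Design by Contract can be suggested in a few lines (a sketch only: in Eiffel, `require`, `ensure` and class invariants are language constructs, checked at run time and inherited by descendants, whereas the Python below merely emulates them with assertions on a made-up Account class).

```python
# Emulating Eiffel-style contracts with assertions (illustrative only).

class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        assert amount > 0, "require: amount must be positive"     # precondition
        old_balance = self.balance
        self.balance += amount
        assert self.balance == old_balance + amount, \
            "ensure: balance increased by exactly amount"          # postcondition

    def invariant(self):
        return self.balance >= 0   # class invariant: never overdrawn

a = Account()
a.deposit(50)
print(a.balance)  # 50
```

The contract documents the mutual obligations of caller and supplier: a violated precondition is the caller's bug, a violated postcondition the supplier's.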

Course materials: slides (updated)

Title: Formal methods for probabilistic systems

Speaker: Carroll Morgan

Description: Probabilistic techniques in computer programs and systems are becoming more and more widely used: for increased efficiency (as in randomized algorithms), for symmetry breaking (as in distributed systems), or as an unavoidable artefact of the application (as in modelling fault tolerance). Because interest in them has been growing so strongly, stimulated by their many potential uses, there has been a corresponding increase in the study of their correctness: the more widespread they become, the more we will depend on understanding their behaviour, and their limits, exactly.

In the six lectures of this course I will address that last concern, of understanding, by presenting a method for rigorous reasoning about probabilistic programs and systems. It provides an operational model (how they work) and an associated program logic (how we should reason about them) that are designed to fit together. The technique is simple in principle, and its aim is to increase dramatically the effectiveness of our analysis and use of probabilistic techniques in practice.

The approach is most compelling when applied to programs which are intricate either in their implementation or their design, or have generic features such as undetermined size or other parameters. They might appear as probabilistic source-level portions of large sequential programs, or as abstractions from the probabilistic modules of a comprehensive system-level design; we provide specific examples of both situations.

The course will cover the logic, its model and examples of its use.
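As a small taste of the expectation-transformer style underlying such a logic (a hedged sketch with a deliberately tiny, invented program syntax, not the course's own formalism): a probabilistic program maps a post-expectation, a function from final states to numbers, back to a pre-expectation giving its expected value from each initial state.

```python
# Expectation transformers for a toy probabilistic language (illustrative).

def assign(var, value):
    # wp(x := e, f) = f with x replaced by e
    def wp(post):
        return lambda state: post({**state, var: value(state)})
    return wp

def seq(p1, p2):
    # wp(P1; P2, f) = wp(P1, wp(P2, f))
    def wp(post):
        return p1(p2(post))
    return wp

def pchoice(p, left, right):
    # wp(P [p] Q, f) = p * wp(P, f) + (1 - p) * wp(Q, f)
    def wp(post):
        lw, rw = left(post), right(post)
        return lambda state: p * lw(state) + (1 - p) * rw(state)
    return wp

# A fair coin flip: x := 1 with probability 1/2, otherwise x := 0.
flip = pchoice(0.5, assign('x', lambda s: 1), assign('x', lambda s: 0))

# Expected value of x after the flip, from any initial state:
pre = flip(lambda s: s['x'])
print(pre({}))  # 0.5

# Sequencing: flip, then y := 2 * x; expected value of y is 1.0.
doubled = seq(flip, assign('y', lambda s: 2 * s['x']))
print(doubled(lambda s: s['y'])({}))  # 1.0
```

Replacing true/false predicates by real-valued expectations in this way is what lets the Hoare/Dijkstra style of reasoning carry over to probability.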

Specific topics: The lectures will include the following topics, though possibly not in the exact order shown:

  • Elementary probabilistic logic for straight-line sequential programs.
  • A relational/operational sequential semantic model.
  • Interpretation of the logic in the model; the geometric view.
  • Looping programs.
  • Quantitative mu-calculus, probabilistic temporal logic and two-player games.
  • Examples.

Expected student background: Familiarity with Hoare/Dijkstra program logic, and elementary probability theory, would be useful but is by no means essential. Exposure to standard techniques for constructing semantic models would be relevant (powerdomains, fixed points, lattices), although these will be explained as needed.

Further reading: Any of the papers referred to in the downloadable Contents and Preface of the course text (see its page viii). The full citations are found at the end of the downloaded file; the course will be based on this text, expected to be available in October 2004.

Slides: intro | 1 | 2 | 3 | 4 | 5 | 6

Title: Software engineering for telecommunication systems

Speaker: Pamela Zave

Description: Communication services make network connections easier to use, by people or by software agents. They perform a variety of functions such as finding correct endpoints for connections, building multipoint or multichannel connections from primitive connections, handling failures, providing security, adapting signals, and automating network administration. Telephone services are the oldest communication services, but today these functions are required by almost every connection-oriented application of networking.

Typically communication services are built by adding features over time. Even when incremental development is not required, there are many advantages to decomposing the complexity of a communication service into features that can be understood independently.

These lectures explain how the Distributed Feature Composition (DFC) architecture can be used to describe and implement communication services. DFC supports feature modularity, feature composition, and formal analysis of feature interactions. In some cases, correct feature interactions can be determined from general-purpose principles and policies. An implementation of DFC functions as middleware, providing support functions for feature programs.
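The pipeline idea behind feature composition can be sketched as follows (a hypothetical miniature, not the actual DFC middleware: real DFC boxes are concurrent processes assembled dynamically per call, whereas here each feature is just a function that may pass, redirect, or absorb a call setup request).

```python
# Miniature feature-composition pipeline (illustrative sketch of the idea).

def do_not_disturb(blocked):
    def feature(call):
        if call['callee'] in blocked:
            return None  # absorb: callee refuses incoming calls
        return call
    return feature

def call_forwarding(table):
    def feature(call):
        target = table.get(call['callee'])
        if target:
            return {**call, 'callee': target}  # redirect to forwarding target
        return call
    return feature

def route(call, features):
    # Apply features in order; any feature may end the call.
    for f in features:
        call = f(call)
        if call is None:
            return None
    return call

chain = [call_forwarding({'alice': 'voicemail'}), do_not_disturb({'bob'})]
print(route({'caller': 'carol', 'callee': 'alice'}, chain))  # forwarded to voicemail
print(route({'caller': 'carol', 'callee': 'bob'}, chain))    # None: call absorbed
```

Because each feature sees only the call passing through it, features can be developed and understood independently; the interesting interactions arise from their order in the chain.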

Slides: 1 | 2 | 3 | 4 | 5 | 6 | bibliography

Title: Model Checking Component-Based Software

Speaker: Natasha Sharygina

Slides: 1 | 2 | 3


Department of Computer Science. Last update: 28-Sep-2004.