
LASER Summer School on Software Engineering


Software Testing: The Practice And The Science

September 6-12, 2009 - Elba Island, Italy

Alberto Avritzer (Siemens)
Michel Cukier (University of Maryland)
Yuri Gurevich (Microsoft Research)
Mark Harman (King's College London)
Bertrand Meyer (ETH Zurich, co-director)
Tom Ostrand (AT&T)
Mauro Pezzè (University of Lugano)
Elaine Weyuker (AT&T, co-director)

Print the LASER 2009 poster



Software Performance Testing for Scalability and Reliability Assessment: Theory and Practice

Speaker: Alberto Avritzer, Siemens

In this sequence of lectures we describe some of the relationships between software architecture, software performance testing, and software scalability and reliability assessment. We introduce some of the key architecture concepts that are related to scalability and reliability assessment of large mission critical systems. We present an automated approach for the generation of performance tests, a reliability and a scalability metric, and describe the application of our approaches to several large telecommunication systems. We conclude by describing the application of software rejuvenation approaches to the performance assurance of systems that degrade.
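
As a small illustration of one idea the lectures touch on (not drawn from the lectures themselves): performance tests can be generated automatically by sampling operations according to an operational profile, so that the synthetic workload matches the operation mix seen in production. The operation names and frequencies below are invented for the example.

```python
import random

# Hypothetical operational profile: each operation and its relative
# frequency as observed (or estimated) in production.
PROFILE = {
    "lookup": 0.70,
    "update": 0.20,
    "provision": 0.10,
}

def generate_workload(n_requests, profile, seed=None):
    """Sample a synthetic test workload whose operation mix
    approximates the operational profile."""
    rng = random.Random(seed)
    ops = list(profile)
    weights = [profile[op] for op in ops]
    return rng.choices(ops, weights=weights, k=n_requests)

workload = generate_workload(10_000, PROFILE, seed=42)
print(workload[:5])
```

Over many requests, the frequency of each operation in the generated workload converges to its profile probability, which is what makes the measured performance representative of field usage.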

Short biography:
Alberto Avritzer received a Ph.D. in Computer Science from the University of California, Los Angeles, an M.Sc. in Computer Science from the Federal University of Minas Gerais, Brazil, and a B.Sc. in Computer Engineering from the Technion, Israel Institute of Technology. He is currently a Senior Member of the Technical Staff in the Software Engineering Department at Siemens Corporate Research, Princeton, New Jersey. Before moving to Siemens Corporate Research, he spent 13 years at AT&T Bell Laboratories, where he developed tools and techniques for performance testing and analysis. He spent the summer of 1987 at IBM Research, Yorktown Heights. His research interests are in software engineering, particularly software testing, monitoring and rejuvenation of smoothly degrading systems, and metrics to assess software architecture, and he has published over 50 papers in journals and refereed conference proceedings in those areas. He is a member of ACM SIGSOFT and IEEE. Dr. Avritzer can be reached at alberto.avritzer AT

On the Quantification of Computer Security

Speaker: Michel Cukier, University of Maryland

In these lectures, we will discuss the issues related to the quantification of computer security. We will present various case studies conducted at the University of Maryland and in collaboration with AT&T Labs. We will highlight the limitations of these studies. Then we will consider how such studies could be improved through the use of concepts developed in software testing or software reliability. We hope that the discussions during the lectures will lead to a more rigorous evaluation framework that could be accepted and used by the security community.

Short biography:
Michel Cukier is an Associate Professor of Reliability Engineering at the University of Maryland, College Park. Michel received a degree in physics engineering from the Free University of Brussels, Belgium, in 1991, and a doctorate in computer science from the National Polytechnic Institute of Toulouse, France, in 1996. From 1996 to 2001, he was a researcher at the University of Illinois, Urbana-Champaign. He joined the University of Maryland in 2001 as an Assistant Professor. His research covers dependability and security issues. His latest research focuses on the empirical quantification of computer security. He has published over 60 papers in journals and refereed conference proceedings in those areas.

Model-Based Testing and Scientific Experimentation

Speaker: Yuri Gurevich, Microsoft Research

The lectures are split into two parts. In one part we speak of the research and practice of model-based testing at Microsoft, especially of testing based on the theory of abstract state machines. In the other, more speculative part, we attempt to examine the foundations of software testing. For example, are the basic notions well defined? What is a bug, really? We also attempt to put software testing into the general perspective of scientific experimentation. How typical, and how different, is software experimentation?
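
To give a flavor of the model-based idea (a toy sketch, not the Microsoft tooling the lectures discuss): a specification-level state machine serves as the model, test sequences are generated from it, and the implementation's observable behavior is checked against the model's at every step. The bounded stack and its capacity below are invented for the example.

```python
import itertools

# Specification-level model: a bounded stack described as a simple
# state machine (the capacity is invented for this example).
class ModelStack:
    CAPACITY = 2
    def __init__(self):
        self.items = []
    def push(self, x):
        if len(self.items) < self.CAPACITY:
            self.items.append(x)
    def pop(self):
        return self.items.pop() if self.items else None

# Implementation under test: in practice the real system; here a
# second, independently written stack with the same intended behavior.
class ImplStack:
    def __init__(self):
        self._data = []
    def push(self, x):
        if len(self._data) < 2:
            self._data.append(x)
    def pop(self):
        return self._data.pop() if self._data else None

def conforms(ops):
    """Replay one operation sequence on both model and implementation,
    comparing every observable result."""
    model, impl = ModelStack(), ImplStack()
    for i, op in enumerate(ops):
        if op == "push":
            model.push(i)
            impl.push(i)
        elif model.pop() != impl.pop():
            return False
    return model.items == impl._data

# The test suite is generated from the model: exhaustively explore
# all operation sequences up to a bounded length.
all_pass = all(conforms(seq)
               for n in range(1, 6)
               for seq in itertools.product(["push", "pop"], repeat=n))
print(all_pass)
```

The point is that the tests and the oracle both come from the model, so adding operations or states to the model automatically enlarges the generated suite.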

Short biography:
Yuri Gurevich was an algebraist in the Soviet Union, then a logician in Israel, and then a computer scientist in the USA. After teaching for a long while at the University of Michigan, he joined Microsoft Research in 1998, where he built a group on Foundations of Software Engineering. Currently he is Principal Researcher at Microsoft Research in Redmond, WA. He is also Professor Emeritus at the University of Michigan, ACM Fellow, Guggenheim Fellow, a member of Academia Europaea, and Dr. Honoris Causa of two universities.

SBSE for Testing: Software Testing as Automated Optimization (Guest Lecture)

Speaker: Mark Harman, King's College London

The aim of Search Based Software Engineering (SBSE) research is to move software engineering problems from human-based search to machine-based search, using a variety of techniques from the metaheuristic search, operations research and evolutionary computation paradigms. As a result, human effort moves up the abstraction chain to focus on guiding the automated search, rather than performing it. The idea is to exploit humans' creativity and machines' tenacity and reliability, rather than requiring humans to perform the more tedious, error-prone and thereby costly aspects of the engineering process. This session will briefly describe the search-based approach, providing pointers to the literature, current results, and trends and directions for future work in SBSE for Software Testing. The session will include an interactive component in which participants will work on formulating a software testing problem as a search-based optimization problem.
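
A minimal illustration of the search-based formulation (invented for this page, not part of the session materials): test-data generation becomes an optimization problem by defining a fitness function, here a branch distance that is zero exactly when the rare branch is taken, and letting a simple hill climber search the input space.

```python
import random

# Program under test (hypothetical): we want an input that takes
# the "rare" branch.
def under_test(x):
    if x * x - 14 * x + 49 == 0:   # true only when x == 7
        return "rare branch"
    return "common branch"

# Branch distance: how far the guard is from being true.
# Zero means the branch is covered.
def fitness(x):
    return abs(x * x - 14 * x + 49)

def hill_climb(start, steps=10_000, seed=0):
    """Accept any neighbouring input that does not worsen fitness;
    stop as soon as the target branch is covered."""
    rng = random.Random(seed)
    best = start
    for _ in range(steps):
        neighbour = best + rng.choice([-1, 1])
        if fitness(neighbour) <= fitness(best):
            best = neighbour
        if fitness(best) == 0:
            break
    return best

x = hill_climb(start=200)
print(x, under_test(x))
```

The human contribution here is designing the fitness function; the machine does the tedious exploration, which is exactly the division of labour the abstract describes.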

Short biography:
Mark Harman is Professor of Software Engineering in the Department of Computer Science at King's College London. He is widely known for work on source code analysis and testing, and he was instrumental in the founding of the field of Search Based Software Engineering, a field that currently has active researchers in 24 countries and for which he has given 14 keynote invited talks. Professor Harman is the author of over 150 refereed publications, is on the editorial board of 7 international journals, and has served on 90 programme committees. He is director of the CREST centre at King's College London. More details are available from the CREST website.

Programs can test themselves

Speaker: Bertrand Meyer, ETH Zurich and Eiffel Software

The idea of tests conceived as after-the-fact quality checks on software is fundamentally flawed, if only because it does not support the indispensable automation of the testing process. Equipping programs with contracts makes it possible to treat software as a self-verifying artifact and to address the two key steps of test automation: test case generation and test interpretation (oracles). A complementary technique (Andreas Leitner's "Contract-Driven Development") automatically turns failed executions - one of the most important sources of information about bugs, but usually lost after the development phase - into regression tests.
The resulting automatic testing techniques have already been integrated into Eiffel tools and have uncovered, in a totally "push-button" mode, hundreds of bugs in released software. The lectures describe the principles, applications and open issues of this approach, designed to produce programs that test themselves.
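
The mechanism can be sketched in a few lines (a Python approximation, not the Eiffel tools the lectures cover): assertions stand in for require/ensure clauses, and because the contract itself is the oracle, a random driver can exercise the routine "push-button", with no hand-written expected outputs.

```python
import random

# A routine equipped with a contract (hypothetical example): the
# assertions play the role of Eiffel's require/ensure clauses.
def sqrt_floor(n):
    assert n >= 0, "precondition: n must be non-negative"
    r = int(n ** 0.5)
    # Floating-point sqrt may be off by one for large n; correct it
    # so the postcondition genuinely holds.
    while r * r > n:
        r -= 1
    while (r + 1) * (r + 1) <= n:
        r += 1
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r

# Push-button random testing: any contract violation raises, so the
# driver needs no knowledge of what the correct answers are.
def random_test(fn, trials=1_000, seed=1):
    rng = random.Random(seed)
    for _ in range(trials):
        fn(rng.randrange(10 ** 12))

random_test(sqrt_floor)
print("all trials passed")
```

A failing run would pinpoint which clause was violated and with what input, which is exactly the information Contract-Driven Development preserves as a regression test.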

Short biography:
Bertrand Meyer is professor of software engineering at ETH Zurich and Chief Architect at Eiffel Software. His latest book (Springer, 2009) is an introductory programming textbook: “Touch of Class”.

From testing to dynamic analysis

Speaker: Mauro Pezzè, University of Lugano

Executing software, both in testing environments and in the field, produces a wealth of information about program behavior. In classic software engineering, this information is mostly lost. Dynamic analysis is emerging as an efficient approach to capture information about software execution and help software engineers understand program behavior, identify failures, diagnose faults, prune test suites and manage software evolution. In this sequence of lectures, we will study some popular dynamic analysis techniques and see how they can be used effectively to gather information about software execution, identify failures, diagnose faults and generate efficient test suites. We will look at the problem of generating accurate dynamic models by suitably selecting test cases, and we will see how models of dynamic behavior can support program evolution and autonomic computing.
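
One family of dynamic analysis techniques can be sketched briefly (a toy example in the spirit of likely-invariant detectors such as Daikon, not taken from the lectures): record variable values at a program point over many executions, then keep only the candidate properties that no execution violated.

```python
# Recorded observations: one snapshot of variable values per execution.
traces = []

def observe(bindings):
    traces.append(dict(bindings))

def absolute(x):
    result = x if x >= 0 else -x
    observe({"x": x, "result": result})   # instrumentation point
    return result

for x in range(-50, 51):                   # the "test suite"
    absolute(x)

# Candidate invariants survive only if they held in every execution.
candidates = {
    "result >= 0": all(t["result"] >= 0 for t in traces),
    "result >= x": all(t["result"] >= t["x"] for t in traces),
    "result == x": all(t["result"] == t["x"] for t in traces),
}
likely = [inv for inv, holds in candidates.items() if holds]
print(likely)
```

The quality of the inferred model clearly depends on the test cases used to drive the executions, which is the selection problem the abstract mentions.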

Short biography:
Mauro Pezzè is professor of software engineering at the University of Lugano and at the University of Milano Bicocca. He is associate editor of ACM Transactions on Software Engineering and Methodology, member of the Steering Committee of the International Conference on Software Testing and Analysis, and Senior Member of IEEE. He coauthored with Michal Young the book "Software Testing and Analysis: Process, Principles and Techniques", recently published by John Wiley and translated into German and Portuguese. His current interests are in autonomic computing, self-healing systems, static and dynamic analysis, and testing of complex software systems.

Software Fault Prediction - What, Why, When, Where and How

Speaker: Elaine Weyuker and Tom Ostrand, AT&T

In these talks we will discuss our experience with statistical models that predict which files of a large software system are most likely to contain the largest numbers of faults. We will speak about the use of both structural and historical software characteristics, which ones have the greatest impact on accurate prediction, and which ones are of minimal importance. We will describe the tool we've built to make the predictions automatically, and also describe a series of large case studies in which we applied our prediction models to several large industrial software systems. We will also consider how to assess the effectiveness of the predictions.
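
The general shape of such a predictor can be illustrated with a deliberately simplified sketch (the features, weights, and data below are invented; the speakers' models are fitted to real fault histories, not hand-tuned like this): combine structural characteristics such as file size with historical ones such as recent changes and prior faults, then rank the files by score.

```python
import math

files = [
    # (name, lines_of_code, changes_last_release, faults_last_release)
    ("parser.c", 12_000, 45, 9),
    ("ui.c",      3_000,  2, 0),
    ("net.c",     8_000, 30, 4),
    ("util.c",    1_500,  1, 0),
]

def score(loc, changes, prior_faults):
    # Larger, recently changed, previously faulty files rank higher.
    return math.log(loc) + 0.5 * changes + 2.0 * prior_faults

ranked = sorted(files, key=lambda f: score(*f[1:]), reverse=True)
# E.g. flag the highest-ranked fifth of the files for extra testing.
top_fifth = ranked[:max(1, len(ranked) // 5)]
print([name for name, *_ in ranked])
```

Assessing such a predictor then amounts to asking what fraction of the system's actual faults fell into the flagged files, which connects to the effectiveness question the abstract raises.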

Short biography:
Elaine Weyuker is an AT&T Fellow doing software engineering research at AT&T Labs. Prior to moving to AT&T she was a professor of computer science at NYU's Courant Institute of Mathematical Sciences. Her research interests currently focus on software fault prediction, software testing, and software metrics and measurement. In an earlier life, Elaine did research in Theory of Computation and is the co-author of the book "Computability, Complexity, and Languages" with Martin Davis and Ron Sigal.
Elaine is the recipient of the 2008 Anita Borg Institute Technical Leadership Award and the 2007 ACM SIGSOFT Outstanding Research Award. She is also a member of the US National Academy of Engineering, an IEEE Fellow, and an ACM Fellow, and has received IEEE's Harlan Mills Award for outstanding software engineering research, the Rutgers University 50th Anniversary Outstanding Alumni Award, and the AT&T Chairman's Diversity Award, as well as having been named a Woman of Achievement by the YWCA. She is the chair of the ACM Women's Council (ACM-W) and a member of the Executive Committee of the Coalition to Diversify Computing.


Chair of Software Engineering, ETH Zürich. Last update: 22.07.2009