Systems Modeling, Requirements, Interface Specifications and Architectures in a Concurrent World

by Manfred Broy, TU München

Systems such as Cyber-Physical Systems exist and act in a distributed, concurrent world in which space and time are essential concerns. This is in contrast to concurrency as it appears in computer systems, operating systems and programming languages. When modeling systems that are closely related to their physical operational context, we need models that allow us to express aspects of the physical world such as time, interaction, concurrency and distribution. The lectures will present a number of models capable of capturing such systems in a model-based way, as a basis for writing requirements and, finally, system specifications.


From Concurrent Objects to Transactional Memory

by Maurice Herlihy, Brown University

CONCURRENT OBJECTS

The behavior of concurrent objects is best described through their safety and liveness properties, often referred to as correctness and progress. In this lecture, we examine various ways of specifying correctness and progress.
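
As a minimal, hypothetical illustration of why such specifications matter (the lecture itself may use different examples), consider a shared counter: unsynchronized increments can be lost under concurrency, while a counter built on java.util.concurrent.atomic behaves as if each increment took effect atomically.

    import java.util.concurrent.atomic.AtomicInteger;

    // Hypothetical example, not taken from the lecture: two counters
    // incremented by several threads, only one of which is correct.
    class Counters {
        static int racy = 0;                       // plain field: increments may be lost
        static final AtomicInteger safe = new AtomicInteger();

        public static void main(String[] args) throws InterruptedException {
            Thread[] threads = new Thread[4];
            for (int i = 0; i < threads.length; i++) {
                threads[i] = new Thread(() -> {
                    for (int j = 0; j < 100_000; j++) {
                        racy++;                    // read-modify-write, not atomic
                        safe.incrementAndGet();    // atomic read-modify-write
                    }
                });
                threads[i].start();
            }
            for (Thread t : threads) t.join();
            // "racy" typically ends up below 400000; "safe" is always exactly 400000.
            System.out.println(racy + " vs " + safe.get());
        }
    }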

SPIN LOCKS AND CONTENTION

When writing programs for uniprocessors, it is usually safe to ignore the underlying system's architectural details. Unfortunately, multiprocessor programming has yet to reach that state, and for the time being, it is crucial to understand the underlying machine architecture. The goal of this lecture is to understand how architecture affects performance, and how to exploit this knowledge to write efficient concurrent programs. We revisit the familiar mutual exclusion problem, this time with the aim of devising mutual exclusion protocols that work well with today's multiprocessors.
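
As a sketch of the kind of protocol in question (the lecture's own material may differ), here is a test-and-test-and-set spin lock in Java: each thread spins on a cached read of the flag and attempts the expensive atomic getAndSet only when the lock looks free, which reduces coherence traffic under contention.

    import java.util.concurrent.atomic.AtomicBoolean;

    // Sketch of a test-and-test-and-set (TTAS) spin lock; the structure follows
    // standard textbook treatments rather than quoting this lecture.
    class TTASLock {
        private final AtomicBoolean state = new AtomicBoolean(false);

        public void lock() {
            while (true) {
                while (state.get()) {}             // spin on cached reads until the lock looks free
                if (!state.getAndSet(true)) {      // single atomic attempt to acquire
                    return;
                }
            }
        }

        public void unlock() {
            state.set(false);
        }
    }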

LINKED LISTS

In an earlier lecture, we saw how to build scalable spin locks that provide mutual exclusion efficiently, even when they are heavily used. We might think that it is now a simple matter to construct scalable concurrent data structures: take a sequential implementation of the class, add a scalable lock field, and ensure that each method call acquires and releases that lock. We call this approach "coarse-grained synchronization".
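
A minimal sketch of that recipe, assuming some sequential list class already exists (the names here are illustrative, not from the lecture): every method first acquires the single lock and releases it on exit.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.locks.ReentrantLock;

    // Coarse-grained synchronization: one lock guards every method of the class.
    class CoarseList<T> {
        private final List<T> items = new ArrayList<>();   // the sequential implementation
        private final ReentrantLock lock = new ReentrantLock();

        public void add(T item) {
            lock.lock();
            try {
                items.add(item);
            } finally {
                lock.unlock();
            }
        }

        public boolean contains(T item) {
            lock.lock();
            try {
                return items.contains(item);
            } finally {
                lock.unlock();
            }
        }
    }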

Often, coarse-grained synchronization works well, but there are important cases where it does not. The problem is that a class that uses a single lock to mediate all its method calls is not always scalable, even if the lock itself is scalable. Coarse-grained synchronization works well when levels of concurrency are low, but if too many threads try to access the object at the same time, then the object becomes a sequential bottleneck, forcing threads to wait in line for access.

HASH MAPS

In earlier lectures, we studied how to extract parallelism from data structures such as queues, stacks, and counters, which seemed to provide few opportunities for parallelism. In this lecture we take the opposite approach. We study concurrent hashing, a problem that seems to be "naturally parallelizable" or, in more technical terms, "disjoint-access-parallel", meaning that concurrent method calls are likely to access disjoint locations, implying that there is little need for synchronization.
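
One way to exploit disjoint-access parallelism, sketched below under the simplifying assumption of a fixed number of buckets (the lecture may also treat resizable and lock-free designs), is to guard each bucket with its own lock, so that calls that hash to different buckets never contend.

    import java.util.ArrayList;
    import java.util.List;

    // Striped hash set sketch: one lock per bucket, so threads that hash to
    // different buckets synchronize on different monitors.
    class StripedHashSet<T> {
        private final List<List<T>> buckets = new ArrayList<>();
        private final Object[] locks;

        StripedHashSet(int capacity) {
            locks = new Object[capacity];
            for (int i = 0; i < capacity; i++) {
                buckets.add(new ArrayList<>());
                locks[i] = new Object();
            }
        }

        private int bucketOf(T item) {
            return Math.floorMod(item.hashCode(), locks.length);
        }

        public boolean add(T item) {
            int b = bucketOf(item);
            synchronized (locks[b]) {              // contend only with calls on the same bucket
                List<T> bucket = buckets.get(b);
                if (bucket.contains(item)) return false;
                return bucket.add(item);
            }
        }

        public boolean contains(T item) {
            int b = bucketOf(item);
            synchronized (locks[b]) {
                return buckets.get(b).contains(item);
            }
        }
    }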

FUTURES, SCHEDULING, AND WORK DISTRIBUTION

In this lecture we show how to decompose certain kinds of problems into components that can be executed in parallel. Some applications break down naturally into parallel threads. For example, when a request arrives at a web server, the server can just create a thread (or assign an existing thread) to handle the request. Applications that can be structured as producers and consumers also tend to be easily parallelizable. In this lecture, however, we look at applications that have inherent parallelism, but where it is not obvious how to take advantage of it.
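
As a small, hedged illustration of decomposition with futures (the lecture's own examples may be richer), the sketch below splits an array sum into two tasks submitted to a thread pool and combines the partial results when both futures complete.

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Decompose a sum into two independent subtasks and combine their futures.
    class ParallelSum {
        public static void main(String[] args) throws Exception {
            int[] data = new int[1_000_000];
            for (int i = 0; i < data.length; i++) data[i] = 1;

            ExecutorService pool = Executors.newFixedThreadPool(2);
            int mid = data.length / 2;
            Callable<Long> left  = () -> sum(data, 0, mid);
            Callable<Long> right = () -> sum(data, mid, data.length);

            Future<Long> leftResult = pool.submit(left);       // the two halves run concurrently
            Future<Long> rightResult = pool.submit(right);

            long total = leftResult.get() + rightResult.get(); // wait for both, then combine
            System.out.println(total);
            pool.shutdown();
        }

        private static long sum(int[] a, int from, int to) {
            long s = 0;
            for (int i = from; i < to; i++) s += a[i];
            return s;
        }
    }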

TRANSACTIONAL MEMORY

We now turn our attention from devising data structures and algorithms to critiquing the tools we use to solve these problems. These tools are the synchronization primitives provided by today's architectures, encompassing various kinds of locking, both spinning and blocking, and atomic operations such as CAS and its relatives. They have mostly served us well. We, the community of multiprocessor programmers, have been able to construct many useful and elegant data structures. Nevertheless, everyone knows that the tools are flawed. In this lecture, we review and analyze the strengths and weaknesses of the standard synchronization primitives, and describe some emerging alternatives that are likely to extend, and perhaps even to displace, many of today's standard primitives.
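
The flavour of those primitives can be seen in the standard compare-and-swap retry loop, sketched here in Java as illustrative background rather than material quoted from the lecture.

    import java.util.concurrent.atomic.AtomicInteger;

    // A lock-free increment built directly on CAS: read the current value,
    // compute the new one, and retry if another thread got there first.
    class CasCounter {
        private final AtomicInteger value = new AtomicInteger();

        public int increment() {
            while (true) {
                int current = value.get();
                int next = current + 1;
                if (value.compareAndSet(current, next)) {   // succeeds only if unchanged
                    return next;
                }
                // otherwise another thread won the race; retry
            }
        }
    }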


Model-based design and analysis of concurrent and adaptive software

by Jeff Kramer, Imperial College London

Concurrency is useful in a wide range of applications. Errors in these applications and systems may be life-threatening, adversely affect our quality of life, or have severe financial implications. An understanding of the principles of concurrent programming, and an appreciation of how it is practiced, is essential for software engineering professionals. This tutorial takes a structural, model-based approach to the design of concurrent and distributed programs. The models, based on labelled transition systems (LTS, a form of finite state machine), provide insight into the composite behaviour of programs composed from components, and can be automatically analysed to check various safety and liveness properties before implementation.
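
The tutorial itself uses FSP and the LTSA tool; purely as a hedged illustration of the underlying idea, the Java sketch below encodes a tiny labelled transition system and checks a simple safety property by searching for a reachable ERROR state. All names and the toy model (a lock that must not be acquired twice) are invented for illustration.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Illustrative only: a tiny labelled transition system and an exhaustive
    // reachability check for a designated ERROR state (a simple safety check).
    class TinyLts {
        static final String ERROR = "ERROR";
        // state -> (label -> next state)
        static final Map<String, Map<String, String>> transitions = new HashMap<>();

        public static void main(String[] args) {
            transitions.put("IDLE", Map.of("acquire", "BUSY"));
            transitions.put("BUSY", Map.of("release", "IDLE", "acquire", ERROR));

            System.out.println(violatesSafety("IDLE")
                ? "safety violation: ERROR is reachable"
                : "safe: ERROR is unreachable");
        }

        static boolean violatesSafety(String initial) {
            Set<String> visited = new HashSet<>();
            Deque<String> pending = new ArrayDeque<>();
            pending.push(initial);
            while (!pending.isEmpty()) {
                String state = pending.pop();
                if (!visited.add(state)) continue;     // already explored
                if (ERROR.equals(state)) return true;  // safety property violated
                for (String next : transitions.getOrDefault(state, Map.of()).values()) {
                    pending.push(next);
                }
            }
            return false;
        }
    }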

Furthermore, we believe that such an architectural, model-based approach is also beneficial in the design, analysis and construction of adaptive software systems. Rigorous techniques are needed to develop adaptive systems that can cope both with changes in the environment and with changing goals. The objective is to minimise the degree of explicit management necessary for construction and subsequent evolution, whilst preserving the safety properties implied by the system's specification. We focus on an architectural approach to self-management, in which software components automatically configure their interaction as required. We present an outline three-layer reference model as a context in which to articulate some of the main issues and to describe our pilot implementation.

The approach, concurrency concepts and analysis will be demonstrated through a series of examples using the LTSA model checker. In addition, current research in the area of adaptive software will be discussed and demonstrated, including work on plan synthesis from goals and properties.


Practical concurrent programming with SCOOP: three years into the Concurrency Made Easy ERC project

by Bertrand Meyer, ETH Zurich


Structured Orchestration of Data and Computation

by Jayadev Misra, University of Texas

The internet promises to improve our lives by allowing us to connect to data and services that support our daily life. It promises to improve business processes by integrating data from a variety of sources within specific applications. This vision has not yet been fully realized. Instead, our infrastructure is currently built from numerous closed-world systems, each built by a single individual or business entity, which enclose and protect data and computation, while providing access to their services through limited human-oriented interfaces. As a result, users are forced to manually mediate between these services, thus wasting time and effort.

To illustrate this situation, consider the following scenario. A patient receives an electronic prescription for a drug from a doctor. The patient compares prices at several nearby pharmacies and chooses the cheapest one to fill the prescription. He pays the pharmacy and receives an electronic invoice, which he sends to the insurance company for reimbursement, with instructions to deposit the amount in his bank account. Eventually, he receives a confirmation from his bank. The entire computation is mediated by the patient, who acquires data from one source, does some minor computations and sends data to other sources. The mediation is entirely manual.

This computing scenario is repeated millions of times a day in diverse areas such as business computing, e-commerce, health care and logistics. In spite of the extraordinary advances in mobile computing, human participation is currently required in every major step of most applications. This is because the infrastructure for efficient mediation is largely absent, thus contributing to cost and delay in these applications. We believe that humans can largely be eliminated, or assigned a supporting role, in many applications. Doing so is not only beneficial in terms of efficiency, but also essential if we are to realize the full potential of the interconnectivity among machines, using the services and data available on the internet or within a large organization. We would prefer that humans advise and report to the machines, rather than direct the machines at each step.

This tutorial will describe our attempt at integrating data and services within a large organization or over the internet. We have developed a theory, called Orc, in which we specify the sources of data and computation and how to orchestrate their executions (concurrently, one after the other, only when the majority of data is available, ...). The Orc programming language has a very small kernel, using external libraries of sites for support functions. The language starts with concurrency as the defining characteristic of programs. With its hierarchical and recursive combination facilities, it provides a model of structured concurrency. It can be used to orchestrate a small application or a giant one with many millions of threads, running in real time, from milliseconds to months. We have developed an experimental implementation of the language. Orc has proved successful in this effort because it has a small number of key concepts and a strong theoretical basis. The language, its documentation, and a web interface, through which programs can also be submitted for execution, are available on the Orc web site.
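
The Orc combinators themselves are best seen in the Orc documentation; purely as a hedged sketch in Java (not Orc), the fragment below mimics the pharmacy scenario above: it queries several price services concurrently, picks the cheapest once all have answered, and only then continues with the next step. All site names and prices are invented for illustration.

    import java.util.Comparator;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;

    // Illustrative orchestration in plain Java (not Orc): fan out to several
    // hypothetical price sites concurrently, then continue sequentially with
    // the cheapest quote.
    class PrescriptionOrchestration {

        record Quote(String pharmacy, double price) {}

        // Stand-in for a remote site call; in Orc this would be a site invocation.
        static CompletableFuture<Quote> priceAt(String pharmacy) {
            return CompletableFuture.supplyAsync(
                () -> new Quote(pharmacy, 10 + Math.floorMod(pharmacy.hashCode(), 7)));
        }

        public static void main(String[] args) {
            List<CompletableFuture<Quote>> quotes =
                List.of(priceAt("PharmacyA"), priceAt("PharmacyB"), priceAt("PharmacyC"));

            // Wait for all concurrent queries, choose the cheapest, then proceed.
            CompletableFuture.allOf(quotes.toArray(new CompletableFuture[0]))
                .thenApply(ignored -> quotes.stream()
                    .map(CompletableFuture::join)
                    .min(Comparator.comparingDouble(Quote::price))
                    .orElseThrow())
                .thenAccept(best ->
                    System.out.println("Fill prescription at " + best.pharmacy()
                        + " for " + best.price()))
                .join();
        }
    }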


Multi-Person Development of Multi-Version Software: The Essentials of Software Design

by David Parnas, Middle Road Software, Inc.

While it is essential that software professionals know how to produce a working program on their own, “solo programming” skills alone are not sufficient. Product development requires the cooperation of many professionals. Professional Software Developers must know how to design, develop, and maintain software in teams.

Although the first working version of a product is of great importance, a successful product will evolve during its period of use. Product performance will be improved, new capabilities will be added, subsets will be deployed for special applications on limited platforms, and versions will be needed for many platforms. Professional Software Developers must know how to design and maintain software that exists in many versions.

Most Software Systems must interact with, and control, concurrent activities. Software Developers must know how to structure concurrent interactive software.

There will be six lectures.

  1. Multi-Person Development of Multi-Version Software: An Overview
  2. The Modular Structure of Complex Systems
  3. Precise Documentation for Professional Software Development
  4. Module Interface and Program Function Documentation
  5. Effective Software Inspections
  6. Software Structure for Concurrency and Real-Time Deadlines