16.1 Problem Solving: What is Concurrent Programming?

Much of the programming world involves concurrent applications. Here are some examples from operating systems, real-time systems, and simulation.

Operating Systems

When you and your colleagues all log in to terminals connected to the same time-sharing system, each of you works separately, yet you are all using the same computer. Each of you has the feeling that the computer is working only on your task, even though many of you are working simultaneously. How is this seeming paradox possible?

The illusion that you are alone on the time-shared computer is caused by a combination of fast computers and clever programming. Suppose you are using the computer to edit a program or read electronic mail. You read and type at human speed. A very fast typist can enter 100 words per minute, or--at an average of six characters per word--about 10 characters per second. In the tenth of a second between two of your keystrokes, a modern computer can execute hundreds of thousands of machine instructions. If those "extra" machine instructions could be put to productive use, the computer would have plenty of time between your keystrokes to service other human users. It is not unusual for a modern time-shared computer to handle 100 or more simultaneous users, each working at human speed.

Managing all those instructions and users is part of the responsibility of a modern operating system. An operating system is, as you know by now, just a sophisticated program; in fact, it is a concurrent program, capable of managing many devices and human users to give the illusion of simultaneity.

Some time-shared computers consist of a single CPU; others consist of a set of identical CPUs. With more than one CPU, programs can be executed in parallel--that is, literally at the same time. With a single CPU, no real parallel execution is possible, but that one CPU can be shared in such a way that many programs seem to be executing in parallel. Concurrent programming is the creation of programs that consist of segments that have the potential for parallel execution; depending upon the actual number of CPUs available, execution of a concurrent program may be literally parallel, entirely time-shared, or some combination of the two.

Real-Time Systems

Many computer systems exist to control physical systems of one kind or another. Examples abound in medical technology, manufacturing and robotics, and transportation. In the latter domain, real-time computer programs control modern automotive fuel systems, aircraft such as the Boeing 777, and railroads like the Channel Tunnel between France and England and the subway system in Washington, D.C. These are, of necessity, concurrent programs: They must manage a number of electronic devices simultaneously; these devices, in turn, are connected to physical machines such as an automobile's fuel injection system or a railroad's "turnout" (a movable section of track that allows a train to enter one or the other of two rail lines).

Modeling and Simulation

Concurrent programming is useful in modeling or simulating physical systems, even if those systems are not directly controlled by a computer. For example, the waiting and service times in a bank, supermarket, or other service organization can be studied by writing a program in which each customer and each server--bank teller, supermarket checker, airline reservation clerk--is represented by its own program segment which interacts with the other segments.

Similarly, a subway system can be modeled by a program in which each train, station, turnout, and block (section of track that is permitted to hold at most one train) is represented by a program segment. The flow of simulated customers in the bank, or of trains in the subway, can be controlled or varied at will.

Simulation is an important tool in optimizing physical systems--for example, choosing the most effective number of open checkout lanes in a supermarket or the frequency and maximum speed of trains in a subway. If the computer model is a faithful representation of the physical system, studying the model provides information and insight into the behavior of the system; concurrent programming provides a natural way of assigning program segments to represent physical objects and therefore aids greatly in developing good simulations.

Ada is one of only a few programming languages--and the only popular one--to provide built-in structures for concurrent programming. In this chapter, we use a series of examples to present a few of the basic Ada structures, and we end with a spider program: a simulation of a family of spiders, all trying simultaneously to move around the room without bumping into walls or other spiders.

Ada Structures for Concurrent Programming

In concurrent programming, an execution of a program segment is called a process. For example, when you are logged in to a time-sharing system and invoke the electronic mail program, a process is created. The mail program itself is just a file on disk; when it is loaded into memory and executed, that execution is a process. If you and several friends all log in at the same time and invoke the e-mail program, several copies of that program are executing simultaneously on the same computer: one program has given rise to multiple simultaneous processes. Ada's term for process is task; Ada provides task types to allow the creation of multiple processes--which Ada calls task objects--from a single program declaration.
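As a small preview of Section 16.2, the sketch below shows how one task type can give rise to several task objects, each an independent process. The names Greeter, ID, and Task_Demo are hypothetical, chosen only for illustration; the exact form of task declarations is taken up in the next section.

```ada
with Ada.Text_IO;

procedure Task_Demo is

   -- A task type: each object of this type is a separate process.
   -- The discriminant ID distinguishes the objects from one another.
   task type Greeter (ID : Natural);

   task body Greeter is
   begin
      Ada.Text_IO.Put_Line ("Hello from task" & Natural'Image (ID));
   end Greeter;

   -- Three task objects created from the one type; all three begin
   -- executing when Task_Demo's main part starts.
   T1 : Greeter (ID => 1);
   T2 : Greeter (ID => 2);
   T3 : Greeter (ID => 3);

begin
   null;  -- Task_Demo waits here until all three tasks complete
end Task_Demo;
```

Because the three tasks run concurrently, the order in which their greetings appear is not determined by the program; on a single CPU the run-time system time-shares them, and on multiple CPUs they may literally run in parallel.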

Generally, your incoming e-mail is stored in a system file called the electronic mail box, or just mailbox. Suppose you are reading your mail when a friend sends you a message. The new message must be added to your mailbox file; your reading must be momentarily suspended while the file is modified (you may not notice the temporary suspension, but it happens anyway). Now suppose that two incoming messages arrive at the same time. Not only must your reading be suspended, but something in the mail software must update your mailbox one message at a time. If this protection were not provided--if two messages could update the mailbox literally at the same time--the mailbox would become hopelessly garbled and therefore useless.

The e-mail situation is an example of a readers-writers problem, a category of computing problems in which multiple readers of, and multiple writers to, a data structure must be prevented from interfering with one another. The prevention technique is called mutual exclusion: update actions on the data structure are handled one at a time, while other actions are excluded until the update is complete. Ada's protected types provide mutual exclusion; we can declare a protected type, and variables of that type, with read operations (called protected functions) and update operations (called protected procedures), which Ada guarantees to execute correctly. Specifically, multiple calls to a protected procedure are executed one at a time.
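As a preview of Section 16.3, here is a sketch of a protected type guarding a shared counter against exactly the kind of interference just described. The names Counter, Increment, Value, and Worker are hypothetical; the details of protected types appear in Section 16.3.

```ada
with Ada.Text_IO;

procedure Protected_Demo is

   -- A protected type: Increment is a protected procedure (exclusive
   -- access); Value is a protected function (shared read access).
   protected type Counter is
      procedure Increment;
      function Value return Natural;
   private
      Count : Natural := 0;
   end Counter;

   protected body Counter is
      procedure Increment is
      begin
         Count := Count + 1;
      end Increment;

      function Value return Natural is
      begin
         return Count;
      end Value;
   end Counter;

   Shared : Counter;

   -- A task type whose objects all update the same counter.
   task type Worker;

   task body Worker is
   begin
      for I in 1 .. 1000 loop
         Shared.Increment;  -- calls are serialized by the protected type
      end loop;
   end Worker;

begin
   declare
      W1, W2 : Worker;
   begin
      null;  -- the block waits here until W1 and W2 complete
   end;
   -- Because Increment is mutually exclusive, no updates are lost:
   -- the final count is exactly 2000.
   Ada.Text_IO.Put_Line ("Final count:" & Natural'Image (Shared.Value));
end Protected_Demo;
```

If Count were an ordinary shared variable updated without protection, two tasks could read the same old value and each write back old-value-plus-one, losing an update--precisely the garbling the mailbox example warns against.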

Section 16.2 introduces task types and task objects; Section 16.3 introduces protected types and protected objects.



Copyright © 1996 by Addison-Wesley Publishing Company, Inc.