2 A Theory of Requirements Engineering - Part I
A good theory is one that helps engineers understand and improve their practices. The theory in this chapter will help you understand fundamental principles that are relevant to all requirements engineering methods. It defines core concepts that will be used throughout the book.
The heart of this theory is a formula describing the relation between software requirements (Req), domain assumptions (Dom) and stakeholder goals (Goals):
Req, Dom \(\vdash\) Goals
This formula is so important that some have called it the \(E=mc^2\) of requirements engineering.
The formula refers to three kinds of properties one must consider during the requirements engineering process:
- Stakeholder goals (Goals), which are desired properties of the world in which the software operates — for example, “an ambulance must arrive quickly in response to emergency calls”;
- Software requirements (Req), which are desired properties of the software, i.e. its desired externally visible behaviours and qualities;
- Domain assumptions (Dom), which are assumed properties about the context in which the software operates.
The formula defines what it means for software requirements to be complete with respect to some goals: the software requirements Req are complete with respect to the stakeholder goals Goals if there exist valid domain assumptions Dom such that it is possible to prove (that is the meaning of the \(\vdash\) symbol) that if the software satisfies the requirements Req, and the world satisfies the assumptions Dom, then the stakeholder goals Goals are necessarily satisfied. We will explain this formula in more detail, illustrate it with real examples, and discuss its implications for practice.
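To get a first concrete feel for the formula, the following minimal sketch encodes a tiny version of the airplane ground braking example in Python, using boolean phenomena that will be introduced in Section 2.2. The particular requirement and domain assumption below are deliberate simplifications chosen for illustration, not the actual braking rules; the point is only that \(\vdash\) can be read as: in every state of the World where the domain assumptions hold and the Machine satisfies its requirements, the stakeholder goals also hold.

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Phenomena (see Section 2.2): each is modelled as a boolean state of the World.
PHENOMENA = ["Flying", "WheelsTurning", "RevThrustDeployed"]

def dom(s):
    # Illustrative domain assumption: when the plane is flying, its wheels are not turning.
    return implies(s["Flying"], not s["WheelsTurning"])

def req(s):
    # Illustrative software requirement: reverse thrust is deployed only when the wheels are turning.
    return implies(s["RevThrustDeployed"], s["WheelsTurning"])

def goal(s):
    # Stakeholder goal: reverse thrust should not be deployed when the plane is flying.
    return implies(s["Flying"], not s["RevThrustDeployed"])

# "Req, Dom |- Goals": in every state satisfying Req and Dom, Goals must hold.
states = [dict(zip(PHENOMENA, values))
          for values in product([False, True], repeat=len(PHENOMENA))]
entailed = all(goal(s) for s in states if req(s) and dom(s))
print(entailed)  # True
```

If the domain assumption were dropped (imagine a flying plane whose wheels were still spinning), the check would fail, which is precisely why domain assumptions appear in the formula alongside the requirements.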
The presentation of this theory is split over two chapters. The first defines core concepts and previews requirements modelling techniques founded on them. The second presents the Req, Dom \(\vdash\) Goals formula relating software requirements and domain assumptions to stakeholder goals. It also discusses the practical implications of this theory and important aspects of requirements engineering not covered by this theory.
2.1 The World and the Machine
Like all technical disciplines, requirements engineering has its own specialised concepts and terminology. The first two important concepts are the World and the Machine.
The Machine is whatever product or service we are in charge of developing or improving. Synonyms for the Machine are the “product”, the “system-to-be”, or the “software-to-be”. Testers also call the Machine the “system-under-test”. The Machine is a concrete object: a physical computer or a set of physical computers that execute instructions described in code. The code transforms general-purpose computers into specialised machines. For example, media player software temporarily transforms general-purpose devices (phones, tablets or laptops) into specialised machines that play music and videos. Calling what we build “the Machine” helps us remember that even though the code we write is intangible, what we ultimately put into the world are actual machines that have real, tangible effects on people and the environment.
The World is the part of the real world affected by the Machine. The World includes people, objects, devices and other software systems. Synonyms are the “domain”, the “application domain”, the “problem domain”, the “environment” and the “context”. We build the Machine to serve some practical purposes in the World. For example, we build an airplane braking controller to reduce airplane accidents, and we build ambulance dispatch software to improve ambulance response times. The Machine is only a means to an end. What matters are the impacts that the Machine has on the World.
The World is not the entire physical world; it is the part of the physical world that is relevant to, and impacted by, the Machine. For example, if the Machine is the software controlling the airplane braking system, the World includes the plane’s position, whether it is flying or moving on the runway, and whether the ground spoilers and reverse thrust are deployed or not, but it does not include the plane passengers, their ages, or what they ate for lunch. All this is irrelevant to the design of the airplane braking system. The World is the scope of our requirements engineering concerns; it separates what we need to pay attention to from what we can leave out. Deciding this scope is one of the trickiest parts of requirements engineering. We will study techniques supporting such decisions in future chapters.
The World and the Machine are connected to each other. This connection is what allows the Machine to obtain information about the World and produce effects on the World. Without this connection, the Machine would not be able to have any impact on the World.
2.2 Phenomena
Requirements engineering involves the gradual transformation of vague concerns (e.g. “The ground braking system must be safe”) into concrete statements of desired properties (e.g. “reverse thrust should not be deployed when the plane is flying”). Asking for the ground braking system to be safe is vague because the concern (“being safe”) is not specific enough: it does not refer to something that is concrete, specific, and observable. Asking that the plane does not crash is clearer because it refers to avoiding a specific observable event: a plane crash. Asking that reverse thrust is not deployed when the plane is flying is also clearer because it refers to specific observable states: the plane flying and reverse thrust not being deployed.
Going from vague concerns to precise statements of needs requires paying attention to what is observable. This is why the concept of phenomena is so essential to requirements engineering. A phenomenon is an observable state or event.
Table 2.1 lists three phenomena for the airplane ground braking system of Chapter 1.
| Phenomena | Shorthand notation |
|---|---|
| the plane is flying | Flying |
| the plane is moving on the runway | MovingOnRunway |
| reverse thrust is deployed | RevThrustDeployed |
In this table, the phrases on the left are phenomena descriptions; the terms on the right are shorthand notations for these descriptions.
The phenomena in Table 2.1 are all states: the first two are states of the plane, the third is a state of the plane’s ground braking system.
Phenomena descriptions can be combined to form sentences that describe stakeholder goals, software requirements and domain assumptions. The sentences can be in natural language (e.g. “reverse thrust should not be deployed when the plane is flying”) and, optionally, in formal logic (e.g. `Flying => not RevThrustDeployed`, where `=>` is the symbol for logical implication). We will see more examples later.
Note that choosing a set of phenomena is choosing a particular way of looking at the World. For example, by choosing the phenomenon `RevThrustDeployed`, we have chosen a binary way of looking: reverse thrust is either deployed or not deployed. We consider the plane’s reverse thrust to be deployed when it is deployed on both wings simultaneously. We ignore the possibility of reverse thrust being deployed on one wing and not the other. We also ignore other details, such as the engines’ power in reverse thrust. This way of looking is adequate for expressing the need that reverse thrust should not be deployed during flight, but in other situations, we could choose a different way of looking represented by a different choice of phenomena. Choosing the right set of phenomena is an abstraction skill. Such a skill is immensely valuable in requirements engineering, and in software engineering as a whole.
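To make this point concrete, the sketch below (in Python, with hypothetical names) contrasts the binary view of reverse thrust used in Table 2.1 with a finer-grained view that distinguishes the two wings and the reverse power level. Neither view is “the right one”; which phenomena to choose depends on the needs we want to express.

```python
from dataclasses import dataclass

# Coarse way of looking, as in Table 2.1: a single binary phenomenon.
@dataclass
class CoarseView:
    RevThrustDeployed: bool

# Finer way of looking (hypothetical): one phenomenon per wing, plus a power level.
@dataclass
class FineView:
    RevThrustDeployedLeft: bool
    RevThrustDeployedRight: bool
    ReversePowerPercent: int

    @property
    def RevThrustDeployed(self) -> bool:
        # The coarse phenomenon can be recovered as a derived property: we count
        # reverse thrust as deployed only when it is deployed on both wings.
        return self.RevThrustDeployedLeft and self.RevThrustDeployedRight

fine = FineView(RevThrustDeployedLeft=True, RevThrustDeployedRight=False, ReversePowerPercent=30)
print(fine.RevThrustDeployed)  # False: asymmetric deployment is only visible in the fine view
```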
Domain Conceptual Models: A Preview
One way to document chosen phenomena is to organise them in a domain conceptual model. These models are described in Chapter 17. As a preview, Figure 2.1 shows a conceptual model with all phenomena in Table 2.1 and a few more phenomena that will be introduced later in Table 2.2.
Another effective approach is to write a glossary that lists and defines all chosen phenomena. The use of glossaries will be described further in Chapter 9 and Chapter 17.
2.3 Phenomena Location
We can classify phenomena based on their location: in the World, in the Machine, or at the intersection between the World and the Machine.
Figure 2.2 shows examples of World and shared phenomena for the ground braking system.
All phenomena from Table 2.1 are World phenomena that are not shared with the Machine. Other phenomena relevant to the ground braking system are:
| Phenomena | Shorthand notation |
|---|---|
| the landing gears’ wheels are turning | WheelsTurning |
| the landing gears’ wheel sensors indicate that the wheels are turning | WheelsPulsesOn |
| ground braking is enabled | GrdBrakingEnabled |
In Figure 2.2:
- `WheelsTurning` is a World phenomenon that is not shared with the Machine (the software controller). It is an important phenomenon because the ground braking system uses it as an indicator that the plane is moving on the runway. Whether the wheels are turning is, however, not directly observable by the software controller.
- `WheelsPulsesOn` is a shared phenomenon. The wheels’ sensors are connected to the software controller through a wire, and the software controller perceives the signals sent along that wire.
- `GrdBrakingEnabled` is another shared phenomenon. The Machine has the ability to enable and disable the ground spoilers and reverse thrust by sending commands to the actuators that control these components. To enable means to allow; to disable means to prevent. When ground braking is enabled, the ground spoilers and reverse thrust can be deployed. When ground braking is disabled, the ground spoilers and reverse thrust will not deploy even if the pilot pushes the buttons to deploy them.
Machine phenomena that are not shared with the World include internal function calls, internal events, and internal variable states.
Requirements engineering is entirely concerned with World phenomena, including those shared with the Machine. It does not deal with internal Machine phenomena, which are concerns for the software’s internal architecture and implementation.
2.4 Machine Inputs and Outputs
Among shared phenomena, we distinguish machine inputs and outputs. Machine inputs are shared phenomena controlled by the World and observed by the Machine; machine outputs are shared phenomena controlled by the Machine and observed by the World.
In our example, `WheelsPulsesOn` is a machine input, and `GrdBrakingEnabled` is a machine output.
In programming terms, you can think of machine inputs and outputs as World phenomena that the machine can “read” and “write”, respectively. You can also think of World phenomena that are not shared with the Machine as states and events in the World that the machine has no direct access to: it can neither read nor write them.
Note that machine inputs and outputs are, despite their names, both Machine and World phenomena. During implementation, we see them as Machine phenomena (i.e. in terms of code). For requirements engineering, we see them as World phenomena shared with the Machine.
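The sketch below makes the read/write analogy concrete. The interfaces are hypothetical (plain Python callbacks, not a real avionics API); the point is that the Machine can only read its input phenomena and write its output phenomena, while World phenomena such as Flying, MovingOnRunway and WheelsTurning remain outside its direct reach.

```python
# A minimal sketch with assumed interfaces: the controller can only "read" the
# shared input phenomenon (WheelsPulsesOn) and "write" the shared output
# phenomenon (GrdBrakingEnabled). It has no direct access to Flying,
# MovingOnRunway or WheelsTurning.

class GroundBrakingController:
    def __init__(self, read_wheel_pulses, set_ground_braking):
        self.read_wheel_pulses = read_wheel_pulses    # machine input
        self.set_ground_braking = set_ground_braking  # machine output

    def step(self):
        self.set_ground_braking(self.read_wheel_pulses())

# Usage: wire the controller to simulated sensor and actuator callbacks.
world = {"GrdBrakingEnabled": False}
controller = GroundBrakingController(
    read_wheel_pulses=lambda: True,
    set_ground_braking=lambda enabled: world.update(GrdBrakingEnabled=enabled),
)
controller.step()
print(world)  # {'GrdBrakingEnabled': True}
```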
Synonyms in Systems Engineering
In systems engineering, it is common to consider two machines simultaneously: a larger system-level machine and a smaller software-level machine. The system-level machine contains the software-level machine, input devices (sensors) and output devices (actuators), as shown in Figure 2.3. For the system-level machine, the machine inputs and outputs are also called monitored and controlled variables; for the software-level machine, they are still simply called inputs and outputs. In our example, `WheelsTurning` would be a monitored variable, and `RevThrustDeployed` a controlled variable.
The model in Figure 2.3 is called the four-variable model because it has four types of variables: monitored, controlled, input and output.
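As an assumed, highly simplified reading of the four-variable model, the sketch below composes three functions whose names loosely follow the model’s relations: sensors map monitored variables to software inputs, the software maps inputs to outputs, and actuators map outputs to controlled variables. The real model uses mathematical relations rather than deterministic functions.

```python
def IN(monitored):
    # Sensors: monitored variables -> software inputs.
    return {"WheelsPulsesOn": monitored["WheelsTurning"]}

def SOF(inputs):
    # The software-level machine: inputs -> outputs.
    return {"GrdBrakingEnabled": inputs["WheelsPulsesOn"]}

def OUT(outputs):
    # Actuators: outputs -> controlled variables. This is a deliberate
    # simplification: actual deployment also depends on the pilot's commands,
    # which this sketch leaves out.
    return {"RevThrustDeployed": outputs["GrdBrakingEnabled"]}

def system_level_machine(monitored):
    # The system-level machine relates monitored variables to controlled
    # variables: it is the composition of IN, SOF and OUT.
    return OUT(SOF(IN(monitored)))

print(system_level_machine({"WheelsTurning": False}))
# {'RevThrustDeployed': False}: with the wheels not turning, reverse thrust stays stowed.
```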
You will sometimes be responsible for writing requirements for a system-level machine, sometimes for a software-level machine, and sometimes for both. We will see later what the relations are between system-level and software-level requirements.
Context Diagrams: A Preview
In practice, machine inputs and outputs are commonly represented using context diagrams. We will explain context diagrams in Chapter 16. As a preview, Figure 2.4 shows a context diagram for the ground braking system. The machine in this diagram is the Ground Braking Controller on the right. The other boxes and the stick figure are actors in the World. The next section defines what we mean by actor.
2.5 Actors and Stakeholders
Two other important concepts are those of actors and stakeholders.
Actors
An actor is an entity that performs actions in the World. An actor can be a person, a department or organisation, a device like a sensor or actuator, or a software system. The Machine is also an actor.
When we say that the Machine and other software systems are actors, we do not imply that they have human-like consciousness and intelligence. We simply mean that they can perform actions in the World.
Actors for the ground braking system are shown in Figure 2.4. They include the pilot, wheels sensors attached to the landing gears, the actuators controlling the states of the ground spoilers, engine thrust and landing brakes, and our Machine: the Ground Braking Controller.
The concept of actor is present in many requirements modelling languages, notably in use cases, domain scenarios, context diagrams, and goal models, which will all be covered in Part III.
Synonyms for actor are “agent”, “component”, and “element”.
Stakeholders
As mentioned in Chapter 1, the stakeholders usually include many more people than the client (who pays for machine development) and the users (who operate the machine). For example, airplane passengers are stakeholders of the airplane ground braking system, although they are neither clients nor users. Other stakeholders include aircraft manufacturers, the airlines who buy and operate the aircraft, and authorities around the world who supervise and regulate commercial air travel. The software engineering team who develop and maintain the Ground Braking Controller are also stakeholders. All these people (and others) have a stake in the system. Understanding the needs and expectations of all stakeholders is important to build a successful product. We will study how to discover stakeholders and their needs in Chapter 9.
A stakeholder is not the same thing as an actor. Some people are both actors and stakeholders. Usually, all human actors will have an interest in, and be affected by, the Machine. They are therefore both actors in the World and stakeholders of the Machine. Not all actors, however, are stakeholders. Non-human actors, like sensors, actuators and other software systems, are not stakeholders (they are not people). The people who develop, maintain, and sell these systems, however, are likely to be stakeholders. Finally, not all stakeholders are actors. For example, airplane manufacturers, airlines, and the authorities that regulate air travel and certify airplane systems are all stakeholders of the ground braking system, but they are not actors in that system.
2.6 Notes and Further Readings
Pamela Zave and Michael Jackson developed the theory of requirements engineering covered in this chapter and the next. Their ideas are described in multiple papers. Excellent introductions can be found in “Deriving Specifications from Requirements: an Example” and “The World and The Machine” (Jackson and Zave 1995; Jackson 1995b). A more detailed presentation can be found in the journal paper “Four Dark Corners of Requirements Engineering” (Zave and Jackson 1997). The most enjoyable exposition (and many other insights about software development) can be found in Michael Jackson’s brilliant book “Software Requirements & Specifications” (Jackson 1995a).
In their papers, Zave and Jackson use different notations for the Req, Dom \(\vdash\) Goals formula: they write it as D, S \(\vdash\) R or S, K \(\vdash\) R. The ideas, however, are the same. The next chapter will explain how our terminology relates to the original papers.
The characterisation of this formula as the “\(E = mc^2\) of requirements engineering” is due to Anthony Hall, who, with colleagues at Praxis Critical Systems, co-developed the REVEAL requirements engineering method based on this formula (Hall 2010; Hammond, Rawlings, and Hall 2001).
The Four-Variable Model of systems engineering originates from the work of David Parnas and Jan Madey on documenting requirements for control systems, notably for aircraft and nuclear power plants (Parnas and Madey 1995). Their work shares many ideas with Zave and Jackson’s theory.
2.7 Review Questions
To check and deepen your understanding, try to apply the concepts in this chapter to the ambulance dispatching system introduced in Chapter 1.
Question 2.1 Phenomena
Which of the following phrases describe phenomena of the ambulance dispatching system? Remember that a phenomenon is an observable state or event.
- An incident occurs at 10 Downing Street.
- A person calls the emergency service to report the incident.
- An ambulance arrives at 10 Downing Street.
- An ambulance.
- The ambulance response time must be less than 14 minutes.
Solution
- The first three phrases describe phenomena; all three are observable events.
- The fourth is not a phenomenon. An ambulance is neither a state nor an event. An ambulance is an entity. Entities are things with persistent identities. Phenomena are about entities, but the entities themselves are not phenomena. We talk more about entities in Chapter 17 on conceptual models.
- The fifth phrase is also not a phenomenon. It is a statement of need about the relation between two phenomena: the ambulance arrival at the incident (phenomenon A) must occur less than 14 minutes after the reporting of the incident (phenomenon B).
Question 2.2 Classify Phenomena
The Machine is the Computer Aided Dispatch (CAD) software. The CAD receives inputs from call handlers who encode details about each emergency call from the public. It also receives ambulance location information from the Global Positioning System (GPS) and information about ambulance status from the Mobile Data Terminals (MDT) on board each ambulance. When an ambulance is allocated to an incident, the CAD sends a mobilisation order to the ambulance’s Mobile Data Terminal.
Classify each of the following phenomena as either:
- W: World phenomena, not shared with the Machine
- I: Machine input
- O: Machine output
- M: Machine phenomena, not shared with the World
- An incident occurs at 10 Downing Street.
- A person calls the emergency service to report the incident.
- A call handler encodes the incident details into the CAD.
- Ambulance with ID 123 is at Buckingham Palace.
- Ambulance with ID 123 is available.
- The function `findNearestAvailableAmbulance()` returns the ambulance ID 123.
- The CAD sends mobilisation instructions to ambulance 123’s Mobile Data Terminal.
- The ambulance crew for ambulance 123 accept the mobilisation on the Mobile Data Terminal.
- The ambulance with ID 123 arrives at 10 Downing Street.
Solution
- An incident occurs at 10 Downing Street. W
- A person calls the emergency service to report the incident. W
- A call handler encodes the incident details into the CAD. I
- Ambulance with ID 123 is at Buckingham Palace. W
- Ambulance with ID 123 is available. W
- The function `findNearestAvailableAmbulance()` returns the ambulance ID 123. M
- The CAD sends mobilisation instructions to ambulance 123’s Mobile Data Terminal. O
- The ambulance crew for ambulance 123 accept the mobilisation on the Mobile Data Terminal. W
- The ambulance with ID 123 arrives at 10 Downing Street. W