3  A Theory of Requirements Engineering - Part II

The previous chapter introduced the first part of the theory of requirements engineering by defining key concepts: the World and the Machine, phenomena, Machine inputs and outputs, actors and stakeholders. This chapter presents the second part. It starts by defining the concepts of behaviours, properties, stakeholder goals, machine requirements and domain assumptions. It then explains the important formula Req, Dom \(\vdash\) Goals that defines requirements correctness. The chapter also discusses practical implications of this theory and important requirements engineering concerns that are not covered by the theory.

3.1 Behaviours and Properties

Discovering requirements involves understanding what behaviours our stakeholders consider to be desirable or undesirable in the World. This section defines what we mean by behaviour and how behaviours can be characterized by their properties.


Definition: Behaviours

A behaviour is a temporal sequence of phenomena.

Figure 3.1 and Figure 3.2 show two behaviours for the ground braking system.

Figure 3.1: A desirable behaviour for the ground braking system

Figure 3.2: An undesirable behaviour for the ground braking system

In Figure 3.1, the behaviour is a sequence of three sets of phenomena, holding at time t0, t1, and t2, respectively. At time t0, the plane is flying; at a later time t1, the plane is moving on the runway and the ground braking is enabled; and at the later time t2, the plane is moving on the runway, the ground braking is enabled and the reverse thrust is deployed. Between time instants, the phenomena stay the same. This behaviour illustrates a normal expected behaviour of the ground braking system.

In Figure 3.2, the behaviour also has three time instants. At time t0, the plane is flying. At time t1, the plane is still flying and ground braking is enabled. At time t2, the plane is flying, ground braking is enabled and reverse thrust is deployed. This is an undesirable behaviour because we do not want reverse thrust to be deployed when flying.
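For readers who like to see things concretely, the two behaviours can be written down as plain data. The sketch below is purely illustrative (the set-of-phenomena encoding is an assumption of this sketch, not the book's notation); each list element is the set of phenomena holding at one time instant.

```python
# Illustrative sketch: a behaviour as a temporal sequence of sets of phenomena,
# one set per time instant (t0, t1, t2). Names follow Figures 3.1 and 3.2.

# The desirable behaviour of Figure 3.1
desirable = [
    {"Flying"},                                                        # t0
    {"MovingOnRunway", "GrdBrakingEnabled"},                           # t1
    {"MovingOnRunway", "GrdBrakingEnabled", "ReverseThrustDeployed"},  # t2
]

# The undesirable behaviour of Figure 3.2
undesirable = [
    {"Flying"},                                                  # t0
    {"Flying", "GrdBrakingEnabled"},                             # t1
    {"Flying", "GrdBrakingEnabled", "ReverseThrustDeployed"},    # t2
]
```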

A synonym for ‘behaviour’ is ‘scenario’. In practice, scenarios can be represented using a variety of textual and graphical notations that are more stakeholder-friendly than the notation in Figure 3.1 and Figure 3.2. For example, behaviours can be described as UML use case scenarios, domain stories, or given-when-then scenarios of Behaviour Driven Development. These techniques will be described in Chapter 15 and Chapter 20. The concept of behaviour is also fundamental in process models, state machines, and for the definition of stakeholder goals, machine requirements and domain assumptions.


Listing all conceivable behaviours of a system and classifying each one individually as either desirable or undesirable would be tedious and, for any realistic system, impossible. A simpler approach is to formulate properties that characterize whole sets of behaviours as either desired or undesired.

Definition: Behavioural properties

A behavioural property is a condition on behaviours.

A behavioural property, or property for short, is true for a set of behaviours and false for all other behaviours. For example, the property that “reverse thrust should not be deployed when the plane is flying” is true of all behaviours that satisfy this condition (Figure 3.1 and many others), and it is false of all behaviours that violate it (Figure 3.2 and many others).

A property is an abstract condition that exists in people’s minds; it can be formulated in multiple ways. For example, if I say “If the plane is flying, reverse thrust must not be deployed”, I am describing the same property as someone saying “reverse thrust should not be deployed when the plane is flying”. Someone else saying “reverse thrust must be safe during flight” might also be describing the same property even if their formulation is less precise. A property may be formulated in multiple natural languages, for example in English, French, Arabic and Chinese. These would be multiple formulations of the same property.

Properties can be formulated in natural language and formal logic. For example, two properties for the ground braking system are:

(G1) Ground braking must be enabled when the plane is moving on the runway.
MovingOnRunway => GrdBrakingEnabled

(G2) Ground braking must be disabled when the plane is flying.
Flying => \(\neg\) GrdBrakingEnabled

In the formal logic expressions, P => Q means that P implies Q at the current and all future times; and \(\neg\) is the logical symbol for negation (“not”).

Going back to the relation between properties and behaviours, we can observe that the behaviour in Figure 3.1 satisfies G1 and G2. At time t0, when the plane is flying, ground braking is not enabled. At times t1 and t2, when the plane is moving on the runway, ground braking is enabled. We can also observe that the behaviour in Figure 3.2 does not satisfy G2 because at times t1 and t2, the plane is flying and ground braking is enabled.
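This kind of check is easy to mechanize. The following sketch (an illustrative encoding, not the book's notation) represents each property as a state-wise condition and evaluates it at every time instant of the behaviour of Figure 3.2.

```python
def holds_at_all_times(behaviour, condition):
    """True iff the condition holds at every time instant of the behaviour."""
    return all(condition(state) for state in behaviour)

# G1: MovingOnRunway => GrdBrakingEnabled
G1 = lambda s: "MovingOnRunway" not in s or "GrdBrakingEnabled" in s
# G2: Flying => not GrdBrakingEnabled
G2 = lambda s: "Flying" not in s or "GrdBrakingEnabled" not in s

# The undesirable behaviour of Figure 3.2
figure_3_2 = [
    {"Flying"},
    {"Flying", "GrdBrakingEnabled"},
    {"Flying", "GrdBrakingEnabled", "ReverseThrustDeployed"},
]

print(holds_at_all_times(figure_3_2, G1))  # → True (vacuously: never on runway)
print(holds_at_all_times(figure_3_2, G2))  # → False (violated at t1 and t2)
```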

Natural languages and formal logic

In requirements engineering, natural language is always the primary language for communicating properties. Logical formulae are optional and usually not shown to stakeholders. Some requirements engineers use formal logic “behind the scenes” to facilitate reasoning and perform complex analysis using automated tools. The findings of such analysis are then communicated back to stakeholders in natural language.

For people who have learned formal logic, formulating properties in logic has many benefits:

  • well-written logic formulae are shorter and easier to read than the equivalent natural language sentences;
  • their structure is not subject to ambiguity (e.g. no pronoun ambiguity, no ambiguity about the scope of logical connectives);
  • they can be analysed automatically using a variety of tools for consistency checking, simulation and verification.

Expressing software requirements in a formal language also provides the basis for automated test generation, program verification, program synthesis, and debugging.

The rest of the book does not require knowledge of formal logic beyond what is covered in standard introductions to logic for computer scientists, i.e. understanding the main propositional logic operators (not, and, or, implies, if and only if). Chapter 13 will provide more information about automated tools that people use to analyse and debug requirements expressed in formal languages. I hope to eventually add a chapter on temporal logic for requirements engineering. Knowledge of temporal logic is not needed for the rest of the book.

Natural language is the main language for expressing properties. Techniques that support the formulation and analysis of requirements in natural language include requirements templates (Chapter 18), goal models (Chapter 19), and specification by example (Chapter 20). Natural Language Processing techniques are also emerging as important tools to discover stakeholder needs (Chapter 10) and analyse formulated requirements (Chapter 13).

3.2 Goals, Requirements, Assumptions

Requirements engineering is concerned with three types of properties:

  • stakeholder goals, which are desired properties of the World;
  • machine requirements, which are desired properties of the Machine at the interface with the World;
  • domain assumptions, which are assumed properties of the World.

Let’s look at each of these property types in more detail.

Stakeholder Goals

Definition: Stakeholder goals

A stakeholder goal is a desired property of the World.

A stakeholder goal is a property that some stakeholder wants to be true in the World. When the context is clear, we will simply write “goal” for stakeholder goal.


Two goals for the ground braking system are the properties G1 and G2 introduced earlier and recalled here:

(G1) Ground braking must be enabled when the plane is moving on the runway.
MovingOnRunway => GrdBrakingEnabled

(G2) Ground braking must be disabled when the plane is flying.
Flying => \(\neg\) GrdBrakingEnabled

A stakeholder goal for an ambulance dispatching system is:

An ambulance must arrive at the incident scene within 14 minutes after the first call reporting the incident.

In this example, the 14-minute target comes from a UK Government standard that was in place at the time the London Ambulance Service first automated its ambulance dispatching system.


In English, stakeholder goals are formulated using modal verbs like “must”, “should”, or “shall” that convey expectations, recommendations or obligations. They form sentences in the optative mood — a grammatical mood that expresses wishes. Other languages have their own modal verbs and sentence structures to express wishes and expectations.


Stakeholder goals have three important characteristics.

  1. Goals are formulated in terms of World phenomena. Because a goal is a desired property of the World, its formulation must refer to World phenomena. For example, G1 and G2 refer to Flying, MovingOnRunway and GrdBrakingEnabled. The goal for the ambulance dispatching system refers to two classes of phenomena: the reporting of incidents and the arrivals of ambulances at incident scenes. Stakeholder goals do not refer to internal Machine phenomena.
  2. Satisfying a goal may involve multiple actors, not just the Machine. For example, satisfying the stakeholder goal for the ambulance dispatching system requires the involvement of many actors: call takers, ambulance crews, GPS, the ambulances’ mobile data terminals and the ambulance dispatching software. The ambulance dispatching software cannot satisfy this goal by itself. For the ground braking system, satisfying goals G1 and G2 involves the wheels sensors (to detect whether the plane is moving on the runway) and the Ground Braking Controller (to enable and disable the ground braking system based on signals it receives from its sensors).
  3. Not all stakeholder goals must be satisfied. Identifying and formulating a stakeholder goal does not commit anyone to deliver it. Some goals may be too costly to satisfy for the benefits that they bring. Some goals should not be satisfied because they conflict with other more important goals. These goals must be identified and formulated before one can decide which ones should be satisfied or not.


The concept of stakeholder goals has many synonyms: “business goals”, “customer requirements”, “user needs”, etc. The synonyms include all phrases matching the two-word pattern <stakeholder-word> <goal-word> where

  • <stakeholder-word> is one of {stakeholder, business, customer, user, system}, and
  • <goal-word> is one of {goal, requirement, need}.

Terms like “business goal”, “customer requirement”, “user need” are commonly used but rarely well-defined. A reasonable interpretation is that they denote stakeholder goals where the stakeholder is the business, the customer and a user, respectively.

Unlike “goal”, the term “requirement” has a connotation of being a property that must be satisfied, rather than being merely desired. With such interpretation, a customer requirement can be viewed as a particular kind of stakeholder goal: one that originates from the customer and that must be satisfied. In practice, however, the term requirement is often used more loosely to mean a candidate requirement. For example, during requirements prioritization, you may hear people talk about mandatory requirements (a pleonasm) and non-mandatory requirements (an oxymoron).

Other types of requirements

It is important to observe that not all requirements are desired properties of the World. The term “requirement”, denoting a desired or required property, can be applied to almost any object or activity.

For example, a budget requirement is a desired property about development cost; a schedule requirement is a desired property about delivery dates. Budget and schedule requirements are desired properties of the development process rather than of the World. These properties are important during requirements engineering because they influence decisions about what to build given the budget and time constraints.

Other types of requirements are desired properties of the software construction and deployment. For example, a technological requirement is a desired property about the technologies used inside the Machine (e.g. about programming languages and frameworks), a hardware requirement is a desired property of the hardware on which to run some software, a package requirement is a dependency of one software package on another, etc. In future chapters, we will describe how to handle such concerns when they come up during requirements engineering.

Machine Requirements

Definition: Machine requirements

A machine requirement is a desired property of the Machine at its interface with the World.

This definition implies that machine requirements are desired properties about shared phenomena (machine inputs and outputs), to be satisfied by the Machine alone without help from other actors.

Machine requirements can be viewed from two perspectives.

From the implementation perspective, machine requirements are desired properties of the Machine that refer to machine phenomena (the machine inputs and outputs). The machine requirements are the properties that the machine implementation must satisfy. They are properties that drive the software design, coding, testing and debugging.

From the requirements engineering perspective, machine requirements are desired properties of the World. They refer to the world phenomena that are shared with the machine. Machine requirements are thus special kinds of stakeholder goals: they are stakeholder goals that are entirely about shared phenomena and that must be satisfied by the Machine alone.

To keep things simple, we assume for the moment that all stakeholder goals and machine requirements are behavioural properties. We will see later that stakeholder goals and machine requirements also cover quality properties (i.e. some of the “non-functional” requirements like performance, availability, etc.)


A machine requirement for the ground braking system is:

(R1) Ground braking must be enabled when the wheels sensors indicate that the wheels are turning.
WheelsPulsesOn => GrdBrakingEnabled


Let’s review two important characteristics of machine requirements.

  1. Machine requirements refer to shared phenomena only. They cannot refer to World phenomena that are not shared with the Machine. For example, the goals G1 and G2 cannot be machine requirements because they refer to the phenomena Flying and MovingOnRunway that are not shared with the ground braking controller. The requirement R1, however, is formulated entirely in terms of shared phenomena: WheelsPulsesOn and GrdBrakingEnabled.

  2. Machine requirements must be satisfied by the Machine alone. Unlike stakeholder goals whose satisfaction may involve multiple actors, a machine requirement has to be satisfied by the Machine alone, without relying on any help from other actors in the World. For example, the goals G1 and G2 cannot be machine requirements because the ground braking controller would not be able to satisfy the goals alone. In order to satisfy these goals, the ground braking controller relies notably on the correct behaviours of the wheels sensors. The requirement R1, however, can be satisfied by the software controller alone. Even if the wheels sensors send incorrect information, the controller would still be able to satisfy R1.

The idea behind these two characteristics is that machine requirements must be implementable (at least in principle) without recourse to any additional information about the World.

The more precise formulation of these characteristics is that machine requirements must be realizable. Realizability is a more advanced concept that you can ignore if you’re reading this for the first time. An intuitive explanation is given below.

Intuitively, a set of machine requirements is realizable if it is possible to define a machine (formally, a state transition system) that satisfies the requirements without being more restrictive than the requirements. Requirements may be unrealizable for one or more of the following reasons:

  • reference to unshared phenomena: they refer to phenomena that are not shared with the machine;
  • reference to future: they define an input-output relation where the next output depends on future inputs;
  • unbounded: they require some event to occur in the future without specifying any time bound — the problem is that such properties cannot be violated by any finite behaviour, which makes them pointless as requirements.
  • conflicting: the machine requirements are realizable separately but not together.

Checking realizability is one of the quality checks that you can perform on machine requirements before their implementation (Chapter 13). Checking that they don’t refer to unshared phenomena is easy; checking that they are not conflicting is harder. Finding out that machine requirements are unrealizable before starting to implement them can save you time during implementation. Checking for realizability also plays a key role in the goal refinement process of goal modelling (Chapter 19).
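The first of these checks, that requirements refer to shared phenomena only, is simple enough to sketch. Assuming each property is annotated with the set of phenomena it refers to (the `refers_to` table below is hypothetical, populated by hand from the running example), a candidate machine requirement can be screened automatically:

```python
# Phenomena shared at the interface between the ground braking controller
# and the World (illustrative, from the running example).
SHARED = {"WheelsPulsesOn", "GrdBrakingEnabled"}

# Hypothetical annotation: phenomena referred to by each property.
refers_to = {
    "G1": {"MovingOnRunway", "GrdBrakingEnabled"},
    "G2": {"Flying", "GrdBrakingEnabled"},
    "R1": {"WheelsPulsesOn", "GrdBrakingEnabled"},
}

def could_be_machine_requirement(name):
    """Necessary (but not sufficient) realizability check: shared phenomena only."""
    return refers_to[name] <= SHARED

print([n for n in refers_to if could_be_machine_requirement(n)])  # → ['R1']
```

This check is necessary but not sufficient: it cannot detect references to future inputs, unbounded properties, or conflicts, which require more sophisticated analysis.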


Synonyms for machine requirements are “software requirements” and “software specifications”. We will also sometimes simply write “requirements” to mean machine requirements. Many people also use the phrase “software specification” to mean the set of software requirements.

What constitutes a machine requirement depends on what is taken to be the Machine. As mentioned in Section 2.4, we sometimes consider two nested machines: a system-level machine that contains a smaller software-level machine (see Figure 2.3). The terms “system requirements” and “software requirements” are then used to refer to machine requirements for the system-level and software-level machines, respectively.

Stakeholder goals vs. Machine requirements

Stakeholder goals and machine requirements are both desired properties of the World. The most important difference is that stakeholder goals can refer to non-shared World phenomena; whereas machine requirements cannot – they must refer to shared phenomena only.

One of the biggest mistakes you can make in requirements engineering is to believe that the machine requirements are the only stakeholder goals that matter, without understanding that they are only a means to realize more fundamental stakeholder goals that are independent of the machine and refer to non-shared phenomena. For example, for the ambulance dispatching system the mistake would be to believe that the goal is to automate ambulance dispatching (machine requirements) rather than to reduce ambulance response times (a stakeholder goal that refers to non-shared phenomena). The problem is that by focussing on the machine requirements only, you risk building a machine that does not satisfy the stakeholders’ real goals.

Conclusion: Always make sure that you identify stakeholder goals that are independent of the Machine. Chapter 9 will present techniques for discovering such goals.

Domain Assumptions

Definition: Domain Assumption

A domain assumption is a property of the World that holds either as a law of nature or because of the behaviours of World actors other than the Machine.

Domain assumptions are properties of the World that the software development team can rely on to design the Machine. Labelling a property as a domain assumption also means that the development team does not need to satisfy that property in the Machine.

What constitutes a domain assumption is relative to the Machine. An assumption for a team developing one machine may be a requirement for a team developing another machine.


Examples of assumptions for the ground braking system are:

(D1) If the plane is moving on the runway, then its wheels are turning.
MovingOnRunway => WheelsTurning

(D2) If the plane wheels are turning, then the wheels sensors indicate that the wheels are turning.
WheelsTurning => WheelsPulsesOn

The assumption D1 can be viewed as an assumed property of nature; assumption D2 as an assumption on the behaviour of the wheels sensors.
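Because domain assumptions are claimed to be true of the World, they can in principle be falsified by observed behaviours. The hypothetical sketch below (reusing an illustrative set-of-phenomena encoding of behaviours) checks D2 against an observed behaviour in which a wheels sensor fails:

```python
def satisfies(behaviour, condition):
    """State-wise check of a property over an observed behaviour."""
    return all(condition(state) for state in behaviour)

# D2: WheelsTurning => WheelsPulsesOn
D2 = lambda s: "WheelsTurning" not in s or "WheelsPulsesOn" in s

# A hypothetical observed behaviour where the sensor emits no pulses at t1:
observed = [
    {"Flying"},
    {"MovingOnRunway", "WheelsTurning"},                    # wheels turn, no pulses
    {"MovingOnRunway", "WheelsTurning", "WheelsPulsesOn"},
]

print(satisfies(observed, D2))  # → False: D2 does not hold for this behaviour
```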


In natural language, domain assumptions are usually expressed using verbs in the indicative mood, a grammatical mood used to state facts and beliefs. For example, “if the plane is moving on the runway, then its wheels are turning”. This contrasts with stakeholder goals and machine requirements that are expressed in the optative mood, using modal verbs like “must”, “should”, or “shall”.

Domain assumptions that express expected behaviours of actors in the World can also be written in the optative mood. For example, “After landing, the pilot must press the ground braking button to deploy the ground spoilers and reverse thrust”.


Common synonyms for domain assumptions are “domain properties”, “domain knowledge” and “environment assumptions”.

Not all assumptions in software engineering are domain assumptions. Software engineers commonly make assumptions about all sorts of things. For example, assumptions about how long it will take to complete a task, assumptions about the deployment environment, assumptions about the capabilities of a programming framework, etc. These assumptions are important but they are not domain assumptions.

3.3 Requirements Correctness

We now arrive at the formula Req, Dom \(\vdash\) Goals that defines requirements correctness.

In the same way that the correctness of a program is relative to its requirements, the correctness of machine requirements is relative to their stakeholder goals.

Definition: Requirements Correctness

The machine requirements Req are correct with respect to the stakeholder goals Goals if, and only if, there exist valid domain assumptions Dom such that:

Req, Dom \(\vdash\) Goals

If the Machine satisfies the requirements Req and the World satisfies the assumptions Dom, then it can be logically deduced that the World satisfies the stakeholder goals Goals.

The turnstile symbol \(\vdash\) denotes logical deduction and can be read as ‘therefore’ or ‘implies’. The canonical example of logical deduction is: “All men are mortal”, “Socrates is a man” \(\vdash\) (therefore) “Socrates is mortal”.

In this definition, asking that the domain assumptions are valid means asking that they are true in the World.

Another condition, not stated explicitly, is that the domain assumptions must be logically consistent with the machine requirements. You may remember from your logic classes that if they were logically inconsistent then it would be possible to prove anything (no worries if you don’t remember).


Let’s illustrate this definition on the requirement R1:

(R1) Ground braking must be enabled when the wheels sensors indicate that the wheels are turning.
WheelsPulsesOn => GrdBrakingEnabled

We want to show that R1 is correct with respect to the goal G1:

(G1) Ground braking must be enabled when the plane is moving on the runway.
MovingOnRunway => GrdBrakingEnabled

Our argument will use the two domain assumptions:

(D1) If the plane is moving on the runway, then its wheels are turning.
MovingOnRunway => WheelsTurning

(D2) If the plane wheels are turning, then the wheels sensors indicate that the wheels are turning.
WheelsTurning => WheelsPulsesOn

We can then show that D1, D2, and R1 imply G1. The logical argument is as follows. Consider a situation where the plane is moving on the runway. If D1 is true, then the plane wheels are turning. If D2 is true, then the wheels sensors indicate that the wheels are turning. If the machine satisfies R1, then ground braking is enabled. Therefore, every time the plane is moving on the runway, ground braking will be enabled. In other words, G1 is satisfied.

Observe the role of the domain assumptions in this example. Without the domain assumptions, it would be impossible to prove that the requirement satisfies the goal. The domain assumptions allow us to bridge the gap between the shared phenomena of the machine requirements and the non-shared World phenomena of the stakeholder goal.
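The logical argument above can also be checked mechanically. Ignoring the temporal dimension, the sketch below (an illustrative encoding, not a tool used in this book) treats the four phenomena as propositional atoms and verifies by brute force that every truth assignment satisfying D1, D2 and R1 also satisfies G1:

```python
from itertools import product

def implies(p, q):
    """Material implication: p => q."""
    return (not p) or q

# Atoms: MovingOnRunway (m), WheelsTurning (w), WheelsPulsesOn (s),
# GrdBrakingEnabled (b). Check that D1 and D2 and R1 entail G1 in all
# 2**4 truth assignments.
entailed = all(
    implies(implies(m, w) and implies(w, s) and implies(s, b),  # D1, D2, R1
            implies(m, b))                                       # G1
    for m, w, s, b in product([False, True], repeat=4)
)
print(entailed)  # → True: D1, D2, R1 entail G1
```

The entailment holds by chaining the three implications: MovingOnRunway implies WheelsTurning implies WheelsPulsesOn implies GrdBrakingEnabled.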

Goal Modelling: A Preview

In practice, the relation between machine requirements, domain assumptions and stakeholder goals can be shown in a goal model. For example, Figure 3.3 shows the goal model corresponding to the previous example.

Figure 3.3: Part of a goal model for the ground braking system

In this figure, the black dot denotes a goal refinement. The lines from D1, D2 and R1 to the black dot and from the black dot to G1 mean that the conjunction of R1, D1 and D2 implies G1. In other words, if R1, D1 and D2 are true, then G1 will necessarily be true.

Goal models will be explained in Chapter 19. The purpose of Figure 3.3 is to give you an idea of how the correctness formula is applied in practice. When modelling large systems, a goal model can include hundreds or thousands of goals and requirements. A goal model typically involves multiple levels of refinements from goals to subgoals, and eventually from subgoals to requirements. The formulations of goals and requirements are often more complex than in this simple example. In practice, not all domain assumptions are recorded explicitly, few goals (if any) are defined formally, and few refinements (if any) are formally proved to be correct. Despite this, or perhaps thanks to it, goal models are useful to structure, analyse, and evolve requirements for complex systems.

3.4 Implications for Practice

The theory has both direct applications in specific requirements engineering methods and wider indirect implications for all requirements engineering practices.

Direct Applications

A few requirements engineering methods, notably the Problem Frame approach and some goal-oriented requirements engineering methods, are based directly on this theory. In these methods, requirements engineering includes three intertwined steps:

  1. identify, formulate and agree a set of stakeholder goals (Goals),

  2. identify valid domain assumptions (Dom),

  3. formulate machine requirements (Req) such that Req, Dom \(\vdash\) Goals.

This is an extremely simplified presentation. The methods include a range of techniques supporting these steps and other important analyses that will be covered later (Chapter 19). The idea that requirements engineering involves the explicit formulation of stakeholder goals and the derivation of machine requirements from such goals and domain assumptions is at the heart of such methods.

Wider Implications

The theory also has practical implications that go beyond its direct applications in specific methods; its ideas apply to all requirements engineering approaches. We recall the three most important here.

  1. Stakeholder goals vs. Machine requirements. The most important idea is the observation that the purpose of a software system is located entirely in the world. What matters to stakeholders is not the machine but the impacts that the machine will have on the world. Understanding stakeholder goals is therefore essential. Many requirements engineering methods, however, focus almost entirely on understanding the machine requirements, for example by analysing use cases or user stories, with very little attention to stakeholder goals that are independent of the machine. To be effective in practice, methods that focus only on machine requirements need to be combined with other methods, like goal modelling or impact mapping, that will help you understand the stakeholder goals and their relations to envisioned software use cases and features.

  2. Machine requirements vs. implementation. The second important idea is the observation that machine requirements are desired properties of the machine at its interface with the World. The distinction between requirements and implementation is thus not a distinction between the “what” and the “how”, as it is sometimes presented, but rather a distinction about “where”: requirements are descriptions of the machine as it is seen from the world; design and implementations are descriptions of the inside of the machine. This is the idea behind use cases and user stories too: describe the software functionalities from the perspective of users. For many of us, describing machine requirements from the perspective of the World does not always come naturally. It requires a radical shift of perspective from our habitual way of thinking about software from the inside, in terms of code.

  3. Importance of domain assumptions. The third important idea is the observation that domain assumptions play a critical role in software development: they are necessary to bridge the gap between stakeholder goals and machine requirements. We will see in the next chapter that critical failures are often caused by invalid domain assumptions. Effective requirements engineering therefore requires critical examination of domain assumptions. Few requirements engineering methods, however, pay explicit attention to domain assumptions. We will mention one such method in the next chapter.

3.5 Beyond the Theory

The theory presented here is not a complete theory of requirements engineering. It describes an ideal end product of requirements engineering —the description of goals, requirements, and assumptions— and deliberately ignores the real-world processes that help us move towards such ideal product. Many important aspects of requirements engineering are thus left out.

  1. Stopping when good enough. For any real system, it is impossible to identify all stakeholder goals and define machine requirements and domain assumptions with a proof that Req, Dom \(\vdash\) Goals. The theory defines an ideal we can aspire to but never achieve. In practice, we need to decide when our understanding of goals, requirements and assumptions is “good enough”. The theory of requirements correctness does not support such decisions.

  2. Managing Conflicts. Stakeholder goals are often conflicting. The theory does not talk about conflicts. The correctness formula starts from a position where stakeholder goals have been agreed.

  3. Exploring alternatives. Requirements engineering involves exploring alternative ways to satisfy stakeholder goals. Each alternative corresponds to a different set of machine requirements and domain assumptions. The theory does not consider how to represent such alternatives, how to evaluate them, and how to select a preferred alternative.

  4. Dealing with partially satisfied goals. Many stakeholder goals cannot be satisfied in an absolute sense. Often, we are interested in how well the goal is satisfied rather than whether it is 100% satisfied or not. This is particularly the case for goals related to safety, security, privacy, performance and other “non-functional” properties. The framing of the theory in classical logic suggests that goal satisfaction is all-or-nothing and is therefore not suitable for reasoning about levels of satisfaction.

  5. Dealing with Uncertainty. Requirements engineers are confronted with many uncertainties, notably about stakeholder goals and their relative importance, about the validity of domain assumptions, and about the impacts of design decisions on stakeholder goals. The theory in this chapter does not consider such uncertainties.

  6. Implementation Constraints. Requirements engineering must deal with technological, budget and time constraints that are not covered by the theory.

We will see various ways to deal with these concerns in future chapters. Most of these concerns, however, remain important research problems. The question of knowing when the requirements are “good enough” is perhaps the most important and has not received a lot of attention from researchers so far.

3.6 Notes and Further Readings

The theory of requirements engineering is based on the work of Pamela Zave and Michael Jackson. References to the main publications are given at the end of the previous chapter (Chapter 2).

Similar ideas were developed around the same time and presented slightly differently in Axel van Lamsweerde’s work on the KAOS goal-oriented requirements engineering method (Lamsweerde 2008, 2009). I contributed to that work for my PhD thesis (Letier 2001).

This theory of requirements engineering is a foundation for a few requirements engineering methods: the Problem Frame approach (Jackson 2001), the KAOS goal-oriented requirements engineering method (Lamsweerde 2009), and the REVEAL method used at Praxis Critical Systems (Hammond, Rawlings, and Hall 2001; Hall 2010).

The following table explains the correspondence between our terminology and that used in Zave and Jackson’s paper and in KAOS.

This book             Zave and Jackson    KAOS
-------------------   -----------------   -------------------------
Stakeholder goal      Requirement         Goal
Machine requirement   Specification       Requirement
Domain assumption     Domain knowledge    Assumption or Expectation

Our definitions of these concepts also differ from earlier definitions in Zave and Jackson’s papers and in KAOS.

Zave and Jackson’s definitions refer to grammatical moods: requirements and specifications are statements in the optative mood; domain knowledge consists of statements in the indicative mood. In class, my explanations of English grammar would often raise perplexed looks from students.

In KAOS, the definitions refer to whether a statement is prescriptive or descriptive: a goal is a prescriptive statement whose satisfaction may require the cooperation of multiple actors; a requirement is a prescriptive statement to be satisfied by the Machine alone; an expectation is a prescriptive statement to be satisfied by actors other than the Machine; an assumption is a descriptive statement about the World. Our definitions are similar with the small difference that we refer to goals as desired rather than prescribed.

The most important difference is that we do not define goals, requirements and assumptions as statements, but rather as abstract properties that may or may not be stated explicitly. Goals, requirements and assumptions are properties that exist in people’s minds. They exist even if they are not formulated as explicit statements. A single stakeholder goal could also be stated in multiple ways; some ways can be more precise than others.

These are points of detail. The core ideas of what stakeholder goals, machine requirements, and domain assumptions are remain the same.

3.7 Review Question

Question 3.1 Goal, Requirement or Assumption?

Consider again the ambulance dispatching system. As in Question 2.2, the Machine is the Computer Aided Dispatch Software (CAD). The CAD interacts with the following external actors: call handlers, the Global Positioning System (GPS), and the ambulance’s Mobile Data terminals (MDTs).

Classify each of the following statements as a stakeholder goal, machine requirement or domain assumption.

  1. An ambulance must arrive at the incident scene within 14 minutes after the first call.
  2. When they receive an emergency call, the Ambulance service staff encode the incident’s details and location.
  3. When a call assistant submits a new incident form, the CAD generates a list of the nearest available ambulances according to its latest information about ambulances’ status and location.
  4. The GPS gives correct ambulance locations.
  5. The ambulance crew signal their arrival at the incident scene on the Mobile Data Terminal.
  6. The mobile data terminals send updates about ambulance status to the CAD.