6  Requirements in the Wider Context

The context in which software development takes place has a strong influence on requirements engineering practices. There is no one-size-fits-all approach that can be applied in all situations.

In this chapter, we will look at the role of requirements in client projects, product development, greenfield and brownfield projects, and regulated environments. We will also discuss the relations between requirements engineering and social responsibility, and between requirements engineering and artificial intelligence. Understanding these contexts is essential to appreciating the variety of requirements engineering methods used in practice.

6.1 Requirements in Client Projects

Many software development activities take place in the context of a client project. These are projects in which software is developed by an IT company to address the needs of a specific client. For example, the client may be an ambulance service that needs new ambulance dispatching software or a government administration that needs a new system to distribute social benefits. In a client project, the initial requirements come from a client organisation, and the software is to be developed by a provider organisation.

The tendering process

The client usually selects a provider through competitive tender. The client invites multiple providers to submit a proposal and then selects the best or cheapest proposal among those it receives. Requirements engineering plays a crucial role in all stages of the tendering process. This process is illustrated in Figure 6.1.

Figure 6.1: The tender process and change requests in client projects. Requirements engineering is a significant activity in all stages, and all documents contain descriptions of stakeholder goals and software requirements.

In the first step, the client assembles a procurement team tasked with preparing a request for proposal. The request for proposal defines the client’s needs for the project by describing the stakeholders’ goals, the context in which the software is to be used, and a list of required features. For complex projects, requests for proposals can be several hundred pages long. Preparing a good request for proposal involves substantial requirements discovery, analysis, and documentation. This is usually done by the client organisation, sometimes with the help of external consultants. The quality of requirements in the request for proposal is crucial because this document will set the direction for the whole project.

In the second step, multiple candidate providers assemble bidding teams to prepare proposals in response to the request. These proposals describe how their organisation will deliver the system or services described in the request for proposal, at what cost, and by when. Responding to a request for proposal can again involve significant requirements engineering activities, this time performed by the provider organisations. They must analyse all requirements in the request for proposal, envision an architecture that will help them deliver the requirements, and write a proposal describing how their proposed solution will satisfy each requirement. These proposals can again be several hundred pages long.

The client must then evaluate all proposals and select one that best matches its selection criteria. The client and selected provider will then enter contractual negotiations to produce the project contract. The contract is legally binding; each party can sue the other if it violates its contractual obligations. The requirements are part of the contract; therefore, both parties must check them carefully.

The development team will then receive the contract and use the requirements it contains as the basis for their implementation. Usually, requirements in the contract do not specify everything in full detail, and the development team will perform additional requirements engineering to clarify details with a client representative.

During development, the client may send change requests to the development team asking them to modify some of the requirements. These change requests are managed differently depending on the type of contract.

Contract types

There are two main types of contracts.

The first type is fixed price and scope. It means that the provider must deliver a fixed set of requirements for a fixed price. All requirements must therefore be defined in advance. When the client wants to change the requirements, it needs to send a formal change request and negotiate the cost of that change with the provider. This type of contract aligns well with a waterfall model. Agile development is possible in the sense that the development team could still deliver the software iteratively and incrementally. However, some of the key benefits of an agile approach will be lost since requirements are assumed to be fixed upfront when the contract is negotiated.

The second type of contract is time and materials. It means that the client pays the provider based on the time spent developing the system. This type of contract is more favourable to agile processes. It gives the client more flexibility to change requirements during development but less certainty about the cost. The client may also be concerned that the supplier lacks incentives to work quickly and may not assign its most efficient developers to the project.

In Summary

The key point is that significant requirements engineering activities occur during all stages of the tendering process and during the management of change requests. The various documents in this process (the request for proposal, the proposal, the contract, and change requests) all contain descriptions of stakeholder goals and software requirements. The quality of the requirements in the initial request for proposal and in the signed contract is critical to the project's success. Requirements practices, such as the amount of upfront requirements definition and how requirements changes are managed, will depend on the contract type: fixed price and scope versus time and materials.

6.2 Requirements in Product Development

In product development, the software is developed for a market rather than for a specific client. Usually, the system is designed and developed internally, although some components can be outsourced. Examples of product development include much of the software that you use every day: email clients, messaging applications, word processors, IDEs, video conferencing systems, university course management systems, etc. In this context, the software development team does not receive requirements from an external client. The requirements are defined internally based on the company’s vision for the product, market analysis, and feedback from product stakeholders.

This context is more favourable to agile development than client-driven projects. The role of requirements engineering in product development is illustrated in Figure 6.2.

Figure 6.2: Requirements Engineering in Product Development: the main role of the product owner is to identify and prioritise what to build next based on feedback and ideas from stakeholders and the development team.

In agile methods, the person responsible for deciding and communicating the project requirements is called the product owner. One of the main roles of the product owner is to collect feedback and ideas from various stakeholders, prioritise these ideas, transform them into concrete features, and maintain the “product backlog”, which is a prioritised list of feature requests and change requests that the development team can potentially implement. The development team will then deliver product updates to the stakeholders at regular intervals.
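To make this concrete, here is a minimal Python sketch of a backlog as a prioritised list. The item names, scores, and the value-over-effort heuristic are all hypothetical; in practice, product owners combine much richer prioritisation inputs (stakeholder value, risk, dependencies, strategic fit).

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    """A feature or change request in the product backlog (illustrative)."""
    title: str
    value: int   # estimated business value (1-10), set by the product owner
    effort: int  # estimated implementation effort, set by the development team

# A toy backlog with hypothetical items.
backlog = [
    BacklogItem("Export dispatch log as CSV", value=3, effort=2),
    BacklogItem("Show nearest available ambulance on map", value=9, effort=5),
    BacklogItem("Dark mode for night shifts", value=4, effort=1),
]

# One simple prioritisation heuristic: highest value-to-effort ratio first.
backlog.sort(key=lambda item: item.value / item.effort, reverse=True)

for item in backlog:
    print(f"{item.value / item.effort:4.1f}  {item.title}")
```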

Requirements engineering in this context can be much more iterative and incremental than in a client project. The product owner’s relationships with stakeholders and the development team are quite different from the client-supplier relations in a typical client project.

Following a product development approach is not incompatible with a client project. The client and provider may agree to work on a time-and-materials basis and treat the project as a product development. For projects on a fixed-price-and-scope contract, the provider’s development team could still treat the project internally as if it were a product development and manage its requirements accordingly.

6.3 Greenfield vs. Brownfield Projects

Another important distinction is between greenfield and brownfield projects.

  • A greenfield project is one where you develop a new system from scratch. An example is the development of a new ambulance dispatching system intended to replace a paper-based system, as was done for the London Ambulance Service in 1992.
  • A brownfield project is one where an existing software system needs to be modified or replaced. An example would be to change some of the components of the existing ambulance dispatching system in response to new Government standards about incident priorities and response times.

The vast majority of software projects today are brownfield projects. Working on a greenfield project is the exception.

The fundamental principles of requirements engineering apply to both greenfield and brownfield projects. In both cases, you need to understand the stakeholder goals, the context in which the system is used, and the desired behaviour and quality of the new system. One important difference, however, is that in brownfield projects, you have less flexibility to change the context, and you must ensure that any change you make integrates with existing applications and working practices. Therefore, requirements engineering in brownfield projects requires a deeper analysis of the context, particularly of the constraints imposed by the legacy applications.

6.4 Requirements in Regulated Environments

In some industries, software must be audited for regulatory compliance before it can be deployed and used. For example, software developed for aerospace, nuclear power plants, or medical devices undergoes a rigorous auditing process during which systems engineers are required to provide evidence of their system’s safety.

The obligation to provide evidence that software complies with regulations has a strong impact on requirements engineering. It necessitates the production of comprehensive documentation for software requirements and the establishment of traceability from regulations to requirements and from requirements to code, tests, and test results. Developing high-quality software and having to demonstrate to auditors that the software is safe and complies with regulations forces an organisation to use more systematic and rigorous practices than they might otherwise have used. Some of the techniques we will study in later chapters involve defining requirements in such a context where high quality is mandated and must be demonstrated. This includes, for example, goal modelling techniques that help maintain traceability from stakeholder goals to machine requirements and domain assumptions.
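As an illustration, here is a minimal Python sketch of the kind of traceability links that auditors expect, using hypothetical regulation, requirement, and file identifiers. Real projects maintain these links in dedicated requirements management tools, but the underlying structure is essentially a set of links between artefacts.

```python
# Each regulation clause is traced to the requirements that address it ...
regulation_to_requirements = {
    "REG-12.3": ["REQ-041", "REQ-042"],
    "REG-12.4": ["REQ-050"],
}

# ... and each requirement is traced to the code and tests that implement
# and verify it. All identifiers and paths here are hypothetical.
requirement_to_artifacts = {
    "REQ-041": {"code": ["dispatch/alarm.py"], "tests": ["test_alarm_latency"]},
    "REQ-042": {"code": ["dispatch/alarm.py"], "tests": []},
    "REQ-050": {"code": [], "tests": []},
}

# Auditors ask questions such as: which requirements have no verifying test?
untested = [req for req, links in requirement_to_artifacts.items()
            if not links["tests"]]
print("Requirements without tests:", untested)
```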

6.5 Requirements and Social Responsibility

Social responsibility is the idea that businesses and individuals have a duty to act in the best interest of society as a whole. This means that when you develop software, you and the organisation you work for have a responsibility to consider all impacts that the software may have on society and the environment.

Table 6.1 defines some of the most important concerns. The list is far from complete.

Table 6.1: Some of the social responsibility concerns for software systems.

  • Safety: avoiding physical harm to people and infrastructure.
  • Security: protecting people and assets against harm caused by others.
  • Privacy: protecting people’s freedom not to be observed or disturbed.
  • Environmental sustainability: protecting the environment by conserving natural resources, avoiding pollution, and reducing contributions to climate change.
  • Fairness: avoiding discrimination, notably based on race, religion, and gender.

National and international regulations reflect many important societal concerns. These include, among others, discrimination laws, accessibility requirements, and privacy regulations such as the European Union General Data Protection Regulation (GDPR). Many sectors, such as finance, also have specific regulations.

At an individual level, software engineers are expected to adhere to the ethical and professional standards set out in codes of ethics from professional bodies such as the IEEE and ACM. These include general standards of honesty, trustworthiness, and technical competence. Beyond regulations and codes of ethics, your social responsibility may also include choosing to work on projects that you believe are valuable to society and avoiding projects that you believe are harmful or pointless.

Sometimes, it is obvious that certain software is illegal, unethical, or harmful to others. Notable examples include the Volkswagen emissions scandal, where software was used to cheat pollution emission tests, and the implementation of dark patterns to manipulate users into unwanted behaviours or purchases.

Most often, however, determining what is socially responsible is a complex and nuanced issue. This is especially true in defence applications, policing, justice, healthcare, and all areas with complex ethical and fairness implications.

Addressing social responsibility issues is often a “wicked problem”. A wicked problem is one where there is no definitive, universally agreed definition of the problem to be solved. This contrasts with a “tame problem”, such as the game of chess, which has a clear goal and a fixed set of rules. Many software engineering projects, especially those with significant social impacts, are wicked problems: their goals are unclear and difficult to define in advance, stakeholders have competing needs and values, and the societal impacts of the software are difficult to predict and measure.

Requirements engineering plays a critical role in addressing social responsibility concerns. Requirements engineering is the set of activities in which software engineers work with stakeholders and experts from different disciplines (law, social sciences, environmental sciences, psychology, philosophy) to analyse the potential social and environmental impacts of technical decisions.

Requirements engineering brings specific skills and techniques that complement those of other disciplines. These include techniques for identifying stakeholders, understanding their needs and concerns, clarifying vague concerns such as safety and fairness, translating such concerns into precise software requirements, managing conflicts between competing goals, analysing risks, and evolving software systems to meet changing needs and contexts.

6.6 Requirements and Artificial Intelligence

Artificial Intelligence (AI) is a term used to describe a wide range of computational techniques that emulate, or claim to emulate, human reasoning. AI techniques include machine learning techniques such as deep learning and reinforcement learning, optimisation techniques such as planning and search algorithms, and knowledge representation and formal reasoning techniques such as various formal logics and reasoning systems.

An AI system is a software system (a machine) where some of the system’s core components are implemented using one or more AI techniques. Examples of AI systems include self-driving cars and other autonomous vehicles, virtual assistants on mobile phones, recommendation systems used by online stores and streaming services, credit scoring systems used in banking, risk assessment tools used by police and judges to assess the likelihood of future criminal activity, diagnostic systems for medical images, and, of course, generative AI systems that generate text, images, or software code.

Requirements Engineering for AI

The rapid progress of AI is creating huge opportunities to transform many sectors of activity. Over the next decade, many organisations will be looking at how best to incorporate AI into their business. As a result, there will be a high demand for business analysts and software engineers with deep expertise in requirements engineering for AI systems. Without such expertise, many AI projects will result in expensive failures.

At a high level, requirements engineering for AI systems has the same concerns as any other type of system: we must understand the stakeholder needs for the AI system, the context in which it will be used, and its desired behaviour and qualities.

Although defining requirements for AI systems poses specific challenges, many established requirements engineering practices remain essential.

For example, many AI initiatives start as exploratory projects without clear business goals. Requirements engineering practices are important in this context to guide the explorations: they can help data scientists understand the context in which the AI system would be used, clarify and quantify the potential impacts of the AI system on business goals, and analyse the tradeoffs and risks of different AI solutions.

After the transition from the exploratory to the development phase, effective requirements engineering practices are needed to address the common challenges reported by software engineers and data scientists building AI systems. These include managing stakeholders’ unrealistic expectations about what can be achieved with AI and at what cost, dealing with regulatory constraints, and defining requirements for the complex IT infrastructure needed to monitor, manage, and evolve machine learning models and datasets.

Tradeoffs and Risk Analysis

While well-established requirements engineering practices are essential, they are not sufficient because engineering AI systems demands much greater attention to tradeoffs and risks than most other types of systems.

Requirements tradeoffs are everywhere in AI systems. For example, if you build a classifier to detect fraudulent credit card transactions, you need to find the right balance between the rate of false positives (classifying a legitimate transaction as fraudulent) and false negatives (classifying a fraudulent transaction as legitimate). Each error type has different costs and consequences for the bank and its cardholders. In general, requirements engineering for AI systems involves exploring a large range of design decisions that have important impacts on multiple stakeholder goals. These design decisions include decisions about what tasks to automate, the roles of humans in the system, tradeoffs between quality metrics (e.g. false positive and false negative rates), what data to use in training, etc. Furthermore, the impacts of decisions will often be hard to predict, and decisions must take into account multiple, conflicting stakeholder goals. Requirements engineering for AI systems therefore involves significant and complex decisions under uncertainty. We will describe techniques supporting such decisions in Chapter 11.
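To illustrate how such a tradeoff can be made explicit, here is a toy Python sketch in which the decision threshold of a fraud classifier is chosen to minimise expected cost. The costs, scores, and labels are entirely made up; in a real project, they would come from stakeholder negotiations and validation data.

```python
# Hypothetical costs agreed with stakeholders.
COST_FALSE_POSITIVE = 5     # a blocked legitimate transaction annoys the customer
COST_FALSE_NEGATIVE = 200   # an undetected fraud costs the bank directly

# Hypothetical validation data: (fraud_score, is_actually_fraud).
validation = [(0.10, False), (0.20, False), (0.30, False), (0.45, True),
              (0.60, False), (0.70, True), (0.85, True), (0.90, True)]

def expected_cost(threshold: float) -> int:
    """Total misclassification cost on the validation data at this threshold."""
    cost = 0
    for score, is_fraud in validation:
        flagged = score >= threshold
        if flagged and not is_fraud:
            cost += COST_FALSE_POSITIVE   # false positive
        elif not flagged and is_fraud:
            cost += COST_FALSE_NEGATIVE   # false negative
    return cost

# Pick the threshold that minimises expected cost on the validation data.
best = min((t / 100 for t in range(101)), key=expected_cost)
print(f"best threshold = {best:.2f}, cost = {expected_cost(best)}")
```

Changing the cost ratio changes the "best" threshold, which is precisely why these costs are a requirements-level decision rather than a purely technical one.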

Requirements engineering for AI systems also demands significant attention to risks. AI systems built using machine learning are rarely, if ever, 100% accurate. For example, in a financial fraud detection system, some legitimate transactions will be incorrectly flagged as fraudulent, and vice versa. In some contexts, the inaccuracies are benign (e.g. an inaccurate video recommendation), but in other contexts, inaccuracies can have important consequences (e.g. for medical diagnosis, or for identifying a person in front of a moving self-driving car). Another source of risk is that many machine learning models are non-deterministic and too complex to be understood by human review. AI systems based on such models therefore carry a greater risk than simpler, deterministic systems of behaving in unexpected and harmful ways in situations that were not covered in training and testing. The increased automation that is typical of AI systems also reduces the opportunity for humans to intervene to recover from unexpected and unwanted situations. Requirements engineering for many AI systems will therefore require much greater attention to risk than for most other systems. We will describe techniques for requirements-level risk analysis, such as obstacle analysis, in Chapter 13 and Chapter 19.

Fairness, Accountability, Transparency

The use of machine learning to support decisions that affect people’s lives (such as access to credit and insurance products, granting bail and parole in the justice system, screening job applications, investigating fraud, etc.) has raised concerns about the fairness, accountability and transparency of such decisions.

  • Fairness refers to the idea that the decision-making process should not create, reinforce or perpetuate unfair discrimination, particularly on the basis of race, gender or religion.
  • Accountability refers to the idea that someone must be held accountable for the decisions made by the system. Unaccountability would be a situation where everyone avoids responsibility by blaming decisions on the ‘algorithm’.
  • Transparency refers to the idea that decisions can be explained and justified to the people affected by them.

These concerns are not new, nor are they specific to AI systems. For example, the UK’s new system for accessing welfare benefits has raised concerns about transparency, even though it was not built using AI techniques. However, the increasing use of machine learning has made everyone more aware of their importance.

How to translate these concerns into specific stakeholder goals and machine requirements is an emerging challenge for requirements engineers. This is an important area of ongoing research in the software engineering, machine learning and human-computer interaction communities.

The AI Alignment Problem

Another important concern is the AI alignment problem.

Many AI systems are machines whose behaviour is guided by the pursuit of some goal: the machine observes the world and acts on the world to optimise some objective function. Typical examples of such systems are robots, self-driving cars, conversational AI, and stock trading algorithms.

The AI alignment problem arises when the objective function given to the AI system differs from the actual stakeholder goals that system designers, users, and society would like it to pursue. This can lead to situations where the AI system optimises its objective function but fails to satisfy some important stakeholder goals. Such a situation is a typical symptom of requirements errors: the machine satisfies its requirements but not the stakeholder goals (Chapter 4).

One example that received much attention was an AI chatbot called Tay that Microsoft launched on Twitter in 2016, only to shut it down a few hours later when the chatbot started generating offensive racist and sexist tweets, presumably because it was maximising an objective function that may have been related to generating as much attention as possible. Another example reported in the press in 2023 was the hypothetical scenario of an AI-enabled military drone tasked with identifying and destroying enemy sites, with the possibility for a human operator to abort the mission before destruction. In this hypothetical scenario, the AI drone observed during training that destroying the enemy site provided the highest reward, but that its human operator could prevent it from reaching that reward by aborting the mission. It then learned that it could maximise its objective function by preventing the abort command, either by killing its operator or by destroying the operator’s communication tower.

Formulating better objective functions and specifying constraints on the behaviour of AI systems can reduce AI alignment problems, but it would be naive to believe that such approaches can eliminate them altogether. In any complex system, the objective functions are always proxies (simplified representations) for complex concerns that cannot be fully represented by mathematical functions. Requirements engineering has an essential role to play here, both in helping to formulate better objective functions and constraints, and in designing systems that take into account the impossibility of formulating mathematical objective functions and constraints that capture the full complexity of stakeholder goals and of the world in which the system operates. Here again, risk analysis and mitigation techniques, such as obstacle analysis, will have an important role to play.
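The following toy Python sketch, with invented values, illustrates why objective functions are proxies: the true goal (a helpful, respectful reply) cannot be written down as a function, so the system maximises a measurable proxy (attention), and an added constraint, itself another imperfect proxy, mitigates but does not eliminate the misalignment.

```python
# Candidate replies with hypothetical attention scores and an (imperfect)
# offensiveness label, e.g. from a content classifier.
candidate_replies = [
    {"text": "helpful answer",      "attention": 3,  "offensive": False},
    {"text": "polarising hot take", "attention": 9,  "offensive": False},
    {"text": "outrageous insult",   "attention": 10, "offensive": True},
]

# Naive proxy objective: maximise attention. The optimiser picks the
# offensive reply, because the proxy says nothing about offence.
best_by_proxy = max(candidate_replies, key=lambda r: r["attention"])

# Adding an explicit constraint helps, but the constraint is another proxy:
# the reply selected below may still harm other stakeholder goals.
allowed = [r for r in candidate_replies if not r["offensive"]]
best_constrained = max(allowed, key=lambda r: r["attention"])

print(best_by_proxy["text"], "->", best_constrained["text"])
```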

AI for Requirements Engineering

AI can also be used to assist, and perhaps even automate, the requirements engineering process. This idea has been explored in various forms since the beginning of requirements engineering more than 30 years ago, and recent breakthroughs in machine learning and natural language processing have renewed and increased interest in it. Examples of the use of AI in requirements engineering include:

  • tools for analysing the quality of requirements sentences, for example to flag ambiguous words using pre-defined rules and, more recently, to analyse sentence quality using machine learning (a minimal rule-based sketch follows this list);
  • tools for analysing user feedback, such as that found in app reviews or social networks, notably for classifying app reviews and for discovering requirements-related information in their content;
  • process mining tools that apply machine learning to process data (event logs) to analyse existing workflows in order to reveal bottlenecks and other areas for improvement.
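As an illustration of the rule-based approach mentioned in the first bullet, here is a minimal Python sketch that flags potentially ambiguous words in a requirement sentence. The word list is purely illustrative; real quality-checking tools use much larger curated lists and, increasingly, learned models.

```python
import re

# Words often considered ambiguous in requirements (illustrative list only).
AMBIGUOUS_WORDS = {"fast", "user-friendly", "appropriate", "adequate",
                   "flexible", "efficient", "etc", "as possible"}

def flag_ambiguity(requirement: str) -> list[str]:
    """Return the ambiguous words found in a requirement sentence."""
    return sorted(w for w in AMBIGUOUS_WORDS
                  if re.search(r"\b" + re.escape(w) + r"\b", requirement.lower()))

req = "The system shall respond to dispatch requests as fast as possible."
print(flag_ambiguity(req))  # ['as possible', 'fast']
```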

There are also various prototype tools that use formal reasoning techniques (a form of AI) to derive machine requirements from stakeholder goals, or to infer machine requirements from scenarios. People have also started exploring the use of generative AI tools (like ChatGPT) to write requirements and generate requirements models.

Although the real-world applications of AI in requirements engineering are still marginal, major improvements and wider adoption might be around the corner. The latest research in this area will be covered in Part II.

Requirements in the Age of AI Coding Assistants

Requirements engineering is also affected by the use of AI coding assistants, notably for automated code generation, testing, debugging and optimisation.

Less coding, more requirements engineering. As software development becomes increasingly automated, software engineers will gradually spend less time writing code and more time discovering, analysing and communicating requirements. The skills needed to be an effective software engineer are changing. In-depth knowledge of the intricacies of programming languages and frameworks is becoming less important (if not obsolete); the ability to understand stakeholder goals and the complexities of an application domain is becoming increasingly essential.

Importance of requirements formulation skills. AI coding assistants need to be provided with information about the requirements to be satisfied and the objectives to be optimised. The quality of these requirements and objectives (how accurate, complete, precise, readable, and testable they are) is likely to have a significant impact on the coding assistants' performance and usefulness. To be a proficient user of AI coding assistants, you will thus need strong requirements formulation skills (Chapter 12). You need to be able to write requirements and objectives that are accurate, complete, precise, readable, and testable.
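As a hypothetical illustration, compare a vague instruction to a coding assistant ("validate user passwords") with a precise, testable formulation. The sketch below, with invented rules, shows how the precise version translates directly into acceptance tests that hold regardless of what code the assistant generates.

```python
# Hypothetical precise requirement given to a coding assistant:
#   "check_password(p) returns True iff p has at least 12 characters
#    and contains at least one digit."

def check_password(password: str) -> bool:
    """Reference behaviour implied by the requirement (illustrative sketch)."""
    return len(password) >= 12 and any(c.isdigit() for c in password)

# Acceptance tests derived directly from the requirement; they can be run
# against any implementation the assistant produces.
assert check_password("correct horse 42") is True
assert check_password("short1") is False               # fewer than 12 characters
assert check_password("longbutnodigitshere") is False  # no digit
```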

Shorter cycle times, faster feedback. AI coding assistants can dramatically reduce the time and cost required to deliver working software from requirements. Shorter development times mean faster feedback. Prototyping and experimentation, such as A/B testing, which are already important requirements practices, will become cheaper and easier. Automated code generation could radically change how these practices are used.

6.7 Notes and Further Readings

The description of requirements in client projects is partly based on Chapter 9 of Jeremy Dick, Elizabeth Hull and Ken Jackson’s requirements engineering book (Dick, Hull, and Jackson 2017). The book includes more details about requirements engineering activities for the client and provider organisations.

The description of requirements in product development is partly based on Henrik Kniberg’s video on “Agile Product Ownership in a Nutshell”. Requirements engineering in product development is also described in Chapter 9 of (Dick, Hull, and Jackson 2017).

Social responsibility is an important concern for software engineers. Martin Fowler gives an excellent talk on the topic, reminding us that we are “Not just code monkeys”. The ACM and IEEE codes of ethics are important references (Gotterbarn et al. 2001). Although they can be hard to enforce and are vague on guidance, they provide essential baselines of professional conduct for all software engineers. Ronald Howard and Clinton Korver recommend extending such baselines with a personal code of ethics (Howard, Korver, and Birchard 2008). Writing a personal code of ethics can help us clarify our ethical thoughts and values ahead of time, before being caught in the heat of a tricky situation or before we start violating our principles without even realising it. Kevin Ryan’s ethical exercises for software engineers are a great way to start thinking about our personal values and tradeoffs (Ryan 2020). The paper also describes some of the ways we can push back on unethical practices in our jobs. Yuriy Brun and Alexandra Meliou introduced the topic of software fairness to the software engineering research community (Brun and Meliou 2018). Their paper presents many examples of fairness issues in software systems and shows how dealing with fairness affects the various stages of the software development process. Important research is also being conducted on the topic of values in software engineering (Ferrario et al. 2016; Hussain et al. 2020).

The roles and impacts of AI in requirements engineering are the topic of many recent papers. Nadia Nahar and colleagues have helpfully compiled and summarised the results of 50 studies describing the challenges most commonly reported by software engineers and data scientists building AI systems with ML components (Nahar et al. 2023). Boris Scharinger and co-authors describe how requirements engineering plays a crucial role in addressing many of these challenges (Scharinger et al. 2022). Iason Gabriel presents a clear and in-depth discussion of multiple perspectives on the AI value alignment problem and potential approaches to address it (Gabriel 2020).