
Challenges of Requirements Elicitation
Sean Isserman
IS 6840

Introduction

Requirements elicitation is one of the most important stages of systems analysis, as it is at this point that clients and analysts work together to determine the requirements of a new system to be developed. If the system does not meet a client's expectations, then the project is essentially a failure. Requirements elicitation is also one of the most difficult stages of analysis, with numerous communication barriers between analyst and client. Analysts and clients often speak different languages in a general sense: analysts tend to be more technical, while clients tend to speak from a business perspective, which makes common understanding difficult. Tagbo also identified several other general challenges in requirements elicitation, including conflicting requirements, unspoken or assumed requirements, difficulty in meeting with relevant stakeholders, stakeholder resistance to change, and insufficient time set aside for meeting with all stakeholders (Tagbo, 2010).

Roger Sessions, the Chief Technology Officer of ObjectWatch, a company devoted to reducing the complexity and cost of IT systems development, has estimated the annual worldwide cost of failed IT projects to be "about $6 trillion or $500 billion per month. For the United States alone, the annual cost is about $1 trillion" (All, 2009). This is particularly concerning since the 2009 Chaos survey reports that only 32% of IT projects are successful; 44% are considered challenged (late, over budget, or not meeting the full requirements list), and the remaining 24% failed completely (Dannawi, 2009). Even more concerning, the trend over the last five years shows failures rising while successes decline.
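As a quick sanity check (not from the cited sources, just back-of-the-envelope arithmetic), the figures above are internally consistent: $6 trillion per year works out to $500 billion per month, and the three Chaos outcome categories account for all projects.

```python
# Back-of-the-envelope check of the figures cited above.

annual_cost = 6e12                 # Sessions' estimate: ~$6 trillion per year
monthly_cost = annual_cost / 12
print(f"${monthly_cost:,.0f} per month")  # -> $500,000,000,000, i.e. ~$500 billion

# The 2009 Chaos survey outcome shares should cover all projects.
succeeded, challenged, failed = 0.32, 0.44, 0.24
assert abs(succeeded + challenged + failed - 1.0) < 1e-9
```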

Furthermore, research has indicated that requirements elicitation, or the lack thereof, is often the chief suspect in project failures. In a study of European IS projects, McManus and Wood-Harper found that one of the most important factors in the failure of these projects was "the lack of due diligence at the requirements phase" (McManus and Wood-Harper, 2010). Indeed, in their article, which can be read here, the authors noted that none of the projects examined went over budget or schedule until after requirements analysis. While it is not necessarily a bad thing that the projects were still on time at that point, it can also be read as a sign that analysts are not spending enough time on requirements elicitation. Indeed, some research has concluded that systems failure can be traced back to poor requirements elicitation in up to "90% of large software projects" (Davis, Fuller, Tremblay and Berndt, 2006, p. 78). The section below identifies some of the more specific causes of difficulty in requirements elicitation, in addition to the ones identified above, as well as potential ways to address them. This paper then focuses on two different strategies for improving requirements elicitation: using models to improve the elicitation process itself, and using techniques and strategies to improve the analyst. The goal of this paper is to show how a combination of the right diagramming techniques and analyst strategies could drastically improve the efficacy of requirements elicitation.

Issues and Challenges

Hickey and Davis identified the four most common factors involved when an analyst selects a requirements elicitation technique, usually some combination of the following: “(1) it is the only technique that the analyst knows; (2) it is the analyst's favorite technique for all situations; (3) the analyst is following some explicit methodology, and that methodology prescribes a particular technique at the particular time; and (4) the analyst understands intuitively that the technique is effective in the current circumstances” (Hickey and Davis, 2004, p. 68). As the authors state, the fourth factor is the most desirable of the four, as it indicates an advanced understanding of systems analysis and requirements elicitation, and that understanding makes an analyst more likely to identify requirements well. However, many analysts are novices who lack the knowledge, experience, or cognitive skills of more advanced analysts. As such, numerous techniques have been developed to improve requirements elicitation. These techniques fall into two categories: techniques meant to improve the process of requirements elicitation, and techniques meant to improve the skills of the analyst performing it.

Appan and Browne focused on the issue of memory recall during requirements elicitation, noting that “past research has shown that users often do not recall all the relevant information they have available” (Appan and Browne, 2010, p. 251). They point out that a client who recalls only requirement A in the first interview is likely to focus on A, and not B or C, in subsequent interviews as well. When clients (and analysts) are unable to recall all relevant requirements, the project will inevitably run into problems further down the line. This phenomenon is referred to as Retrieval-Induced Forgetting (RIF), and it is highly prevalent during requirements elicitation.
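To make the RIF dynamic concrete, the toy simulation below is a minimal sketch in which rehearsing one requirement across interview rounds boosts its future recall while suppressing related, unrehearsed requirements. The probabilities and reinforcement/suppression factors are invented for illustration; they do not come from Appan and Browne's study.

```python
# Toy model of Retrieval-Induced Forgetting (RIF) across interview rounds.
# All numbers here are invented for illustration, not taken from the study.

recall_prob = {"A": 0.8, "B": 0.6, "C": 0.6}  # chance each requirement surfaces

def run_interview(recalled):
    """Rehearsed items are reinforced; related unrehearsed items are suppressed."""
    for req in recall_prob:
        if req in recalled:
            recall_prob[req] = min(1.0, recall_prob[req] * 1.2)  # practice effect
        else:
            recall_prob[req] *= 0.7                              # RIF suppression

run_interview({"A"})  # round 1: the client only mentions requirement A
run_interview({"A"})  # round 2: A dominates again
print(recall_prob)    # A nears certainty; B and C fade toward being forgotten
```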

Appan and Browne proposed that targeted questioning could help mitigate RIF. The authors divided 60 college students into three groups and, after the students had learned about a real company's system, elicited requirements from them for a hypothetical grocery system. Group 1 received no cues and performed two rounds of free recall; group 2 received directed questions followed by a chance to freely recall any relevant requirements (the immediate recall treatment); and group 3 received directed questions and was asked, after 24 hours, to freely recall relevant requirements (the delayed recall treatment). Appan and Browne proposed six hypotheses about these treatment groups, which can be seen in the index.

Various statistical measures performed on the results supported all six hypotheses. The authors concluded that previously recalled requirements are significantly more likely to dominate later recall, particularly during iterative processes. To counteract RIF, Appan and Browne suggest a three-tiered funneling strategy, sketched below. Analysts should start with free recall techniques in the initial phases, repeated multiple times, to avoid accidentally suppressing requirements. In phase 2, cognitive interviews encouraging clients to use multiple retrieval routes should be used to explore each identified requirement to the desired depth. Only after this should the third stage be performed, in which directed questions and standardized surveys are used to get specific questions answered (Appan and Browne, 2010, pp. 266-267). As the authors state, this is one of the first studies investigating RIF in relation to requirements elicitation, and it was performed in a controlled environment that may not represent real life. It would be useful to test these theories in further studies on real systems during development to determine whether the technique actually works. Nevertheless, the results indicate a strong likelihood that the RIF phenomenon needs to be kept in mind during requirements elicitation, particularly because elicitation is inherently iterative.
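A minimal sketch of the funnel as an ordered checklist follows. The phase ordering comes from Appan and Browne (2010); the field names and wording are illustrative paraphrases, not a structure prescribed by the authors.

```python
# Sketch of the three-phase "funnel" strategy for countering RIF.
# Phase ordering follows Appan and Browne (2010); wording is paraphrased.

ELICITATION_FUNNEL = [
    ("free recall, repeated across sessions",
     "let clients volunteer requirements uncued, so early answers "
     "do not suppress related ones"),
    ("cognitive interviews via multiple retrieval routes",
     "probe each identified requirement to the desired depth"),
    ("directed questions and standardized surveys",
     "close specific gaps only after the open-ended phases are complete"),
]

# Directed questioning must come last; iterating in order enforces this.
for phase, (technique, goal) in enumerate(ELICITATION_FUNNEL, start=1):
    print(f"Phase {phase}: {technique} -- {goal}")
```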

In addition to the previously mentioned causes of difficulty in requirements elicitation, Jeyaraj and Sauter approached the question from the perspective of whether the systems modeling tools themselves were causing problems. Specifically, they propose that IT projects continue to fail because clients do not fully understand, during the requirements verification stage, the system that analysts are proposing to design, mostly because they cannot interpret the models analysts use to explain their understanding of the requirements (Jeyaraj and Sauter, 2007, p. 64). The resulting misunderstandings between analysts and clients cause issues down the line that are expensive and difficult to fix, and may ultimately ruin the project.

Jeyaraj and Sauter tested this hypothesis in an experiment in which business school and MIS students were given two types of models of the same system: a data-flow diagram (DFD) and a use-case diagram (UCD). The models represented a public university's registration system and had been verified by experts in both the diagram types and the system itself. Some students viewed the DFD first, while others viewed the UCD first. Click here for more information on the differences between DFDs and use cases.

Students then wrote narratives containing their interpretation of the information in the diagrams. A common coding sheet was created so that five dimensions of both diagram types could be mapped onto the same chart. The results showed that the students, who acted as the clients in this experiment, were better able to understand the DFDs, obtained more information about what the system was meant to do based on the developer's understanding, and were more likely to identify misunderstandings using DFDs. That said, novice users had equal difficulty with both models, which disproved a previous hypothesis that novices would struggle more with DFDs because a DFD represents more information. Instead, the authors concluded that the difficulty was more likely a combination of too much information for the students to process and a lack of training in reading the diagrams. Furthermore, even the trained students failed to identify all the possible values on three of the dimensions that were “crucial for verification” (Jeyaraj and Sauter, 2010, p. 67).
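The coding sheet can be pictured as a simple tally of narrative statements against a fixed set of dimensions, so that DFD and UCD readings become directly comparable. The sketch below is hypothetical: the dimension names are placeholders, since the actual five dimensions are those defined in Jeyaraj and Sauter's paper.

```python
# Hypothetical sketch of a common coding sheet: statements from each
# narrative are tagged against the same five dimensions for both diagram
# types. Dimension names are placeholders, not Jeyaraj and Sauter's own.

DIMENSIONS = ("dim_1", "dim_2", "dim_3", "dim_4", "dim_5")

def score_narrative(tagged_statements):
    """Count how many statements in a narrative touch each dimension."""
    return {d: tagged_statements.count(d) for d in DIMENSIONS}

dfd = score_narrative(["dim_1", "dim_1", "dim_3", "dim_4"])  # a richer reading
ucd = score_narrative(["dim_1", "dim_2"])                    # a sparser reading
print("DFD coverage:", dfd)
print("UCD coverage:", ucd)  # compare interpretation coverage across diagrams
```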

The implication of this research, as Jeyaraj and Sauter identify, is that it may be necessary, as part of the requirements verification stage, to train users to read the diagrams that analysts use, whether a DFD or a UCD. While a DFD may show more information, it is useless if clients cannot actually read it to verify accuracy. Finally, this research shows that it is not enough to simply create the models; it is vital, as the authors state, that analysts verify the diagrams as correct, to ensure they are on the same page as the clients.

In this section we have identified several potential causes of requirements elicitation failure and potential means to address them. The following sections of this paper address diagramming techniques for improving the requirements elicitation process and techniques for improving the skills of the analyst. Click here to read about diagramming, or here to read about analyst improvement techniques.