Requirements Elicitation and Diagramming
This section of the paper deals with techniques and tools to improve the requirements elicitation process; specifically, we focus on methodologies that rely on diagramming and models. There are hundreds, if not thousands, of different techniques, and it is not possible to identify all of them in this paper. Rather, our goal is to provide examples of how different diagramming tools can drastically improve the effectiveness of requirements elicitation, making project failure and the need for expensive changes later in the development life cycle less likely.
Davis, Fuller, Tremblay, and Berndt proposed a requirements elicitation methodology based on the use of repertory grid techniques. Specifically, they focused on identifying requirements that may not be apparent to the analysts or the clients. The goal of the proposed technique was to ensure that analysts actually understood the clients' requirements, rather than what the analysts believed those requirements to be. A repertory grid works in three phases: creation, assessment, and clarification. In the creation phase, a blank grid (see figure 1 for a sample) is used in conjunction with semi-structured interviews, in which clients are asked to identify task elements and constructs.
Task elements are “the object of an individual’s thinking”, while constructs “represent the ways in which elements are judged with some kind of rating systems to ensure that all elements are judged in terms of all constructs” (Wu and Shieh, 2010, p. 1141). As the previously referenced figure shows, elements map to the columns and constructs to the rows. The entries are placed on index cards and presented to the clients in random groups of three; clients must identify the card that does not fit, explain why, and describe how the other two are related. This is done with all possible combinations, and the tasks and constructs are then placed on the grid. During the assessment phase, clients rate each element against each construct to show how applicable it is, with blank fields allowed for incompatible pairings. A two-dimensional cluster analysis is then performed to reorder the grid and identify the most closely related tasks and elements. In the final clarification phase, analysts discuss each of the associations with the clients, in order of the importance established in the assessment phase (Davis, Fuller and Tremblay, 2006). The final phase is the most important to the analyst, as it is the clients' explanations that deepen the analyst's understanding of the clients' needs as well as of the business.

Davis et al. tested this technique with a real-life Florida health agency that was designing a decision support system (DSS). The repertory grid technique allowed the group to identify more requirements than had originally been identified. Davis found that “had we not used this technique, there is a strong possibility that the users' range of retrieval, presentation and, therefore, decision making options would have been limited by the developers' perceptions of the user’s needs” (Davis, Fuller and Tremblay, 2006, p. 83).
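As an illustration of the creation and assessment phases, the sketch below builds a toy repertory grid in Python. The elements, constructs, and ratings are all hypothetical examples invented for illustration, not data from Davis et al.'s study:

```python
from itertools import combinations

# Hypothetical task elements (grid columns) and constructs (grid rows).
elements = ["monthly report", "ad-hoc query", "dashboard", "data export"]
constructs = ["needs real-time data", "used by managers", "printable"]

# Creation phase: every group of three elements is shown to the client,
# who names the card that does not fit and explains how the other two relate.
triads = list(combinations(elements, 3))

# Assessment phase: the client rates each element against each construct
# on a 1-5 scale; None marks a blank cell for an incompatible pairing.
grid = {
    ("monthly report", "needs real-time data"): 1,
    ("dashboard", "needs real-time data"): 5,
    ("monthly report", "printable"): 5,
    ("ad-hoc query", "printable"): None,  # blank: not applicable
}
```

With four elements there are four distinct triads to present; once every cell is filled in, the completed grid is reordered by cluster analysis so that similar rows and columns sit together, ready for the clarification phase.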
Specifically, the RepGrid revealed presentation requirements that had not been established previously, because neither the analyst nor the client had thought of them. Techniques that surface requirements which would otherwise go undiscovered until much later in development are powerful and important: the further along a system is, the more expensive it is to adjust its design, and repertory grids offer a potential means of finding hidden requirements earlier. It should be noted, as Davis pointed out, that this technique is meant to complement other requirements elicitation techniques, not replace them.
Another type of model recommended as a complement in requirements elicitation is the Self-Organizing Map (SOM). SOMs are a type of neural network that use "unsupervised learning" (learning performed without labeled training examples) to discover the "underlying structure of data", and they use a "topological structure imposed on the nodes in the network" that can be represented visually (Orr, Schraudolph and Cummins, 1999). Neural networks are a type of data modeling tool meant to be an "artificial system that could perform 'intelligent' tasks similar to those performed by the human brain" (What is a Neural Network?, 2010). A more thorough explanation of neural networks can be read here. The video below demonstrates a SOM (also known as a Kohonen network) that "learns" to organize pixels of color.
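To make the idea concrete, here is a minimal SOM training loop in Python with NumPy. It is a sketch of the standard Kohonen update rule (find the best-matching unit, then pull it and its lattice neighbors toward the input), not the specific software used in any study discussed here; the grid size, learning rate, and sample data are arbitrary:

```python
import numpy as np

def train_som(data, grid_h=4, grid_w=4, epochs=200, lr0=0.5, radius0=2.0, seed=0):
    """Train a minimal 2-D self-organizing map on the row vectors in `data`."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    # One weight vector per node on a grid_h x grid_w lattice.
    weights = rng.random((grid_h, grid_w, dim))
    coords = np.stack(
        np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1
    )
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)          # decaying learning rate
        radius = radius0 * (1 - t / epochs)  # shrinking neighborhood
        for x in data:
            # Best-matching unit: the node whose weights are closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(dists.argmin(), dists.shape)
            # Pull the BMU and its lattice neighbors toward x, with
            # influence falling off with distance on the lattice.
            lattice_d = np.linalg.norm(coords - np.array(bmu), axis=-1)
            influence = np.exp(-(lattice_d ** 2) / (2 * (radius + 1e-9) ** 2))
            weights += lr * influence[..., None] * (x - weights)
    return weights

# Toy input vectors: two pairs of similar artifacts.
data = np.array([[1.0, 0.0, 0.0], [0.9, 0.1, 0.0],
                 [0.0, 0.0, 1.0], [0.0, 0.1, 0.9]])
w = train_som(data)
```

After training, similar input vectors land on nearby lattice nodes, which is what makes the map usable as a visual clustering display.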
Sedbrook suggests using these visual displays as a means of tracking requirements as business needs change, a constant challenge in all system development projects. To test the effectiveness of SOMs, Sedbrook performed a field study during the requirements phase of the development of a European e-commerce site. Requirements were gathered from use cases, emails, spreadsheets, and numerous other sources. Once gathered, study coding procedures were used to characterize the resulting information repositories. See figure 2 for the resulting attribute categories.
The study then applied SOM models over three requirements phases: the initial set of use cases for the first phase; the requirements issues, questions, and concerns that arose over several months of design for the middle phase; and the changing partnership relationships for the last phase. All of this was done using generated use cases for each phase. Attribute extraction was performed for each of the three stages by interviewing all parties involved in the project to create a classification set with attributes for “the artifact's main actor, management level focus, architectural development level, competitive advantages differentiators, and business priority and estimated development effort” (Sedbrook, 2006, p. 64). A two-dimensional SOM was then fitted to the classification space, represented by 27 sets of normalized input vectors describing the requirements artifacts (use cases, etc.).
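The normalization step can be illustrated briefly. The sketch below min-max scales hypothetical artifact attribute vectors to [0, 1] so they can serve as SOM inputs; the attribute names and scores are invented, and Sedbrook's actual coding scheme may have normalized differently:

```python
# Hypothetical attribute scores for three requirements artifacts:
# (management level, architectural level, business priority, est. effort).
artifacts = {
    "use-case: checkout": [3, 2, 9, 40],
    "use-case: catalog": [1, 3, 6, 15],
    "use-case: partner-feed": [2, 1, 8, 60],
}

def normalize(vectors):
    """Min-max scale each attribute column to the range [0, 1]."""
    cols = list(zip(*vectors))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [
        [(v - l) / (h - l) if h > l else 0.0 for v, l, h in zip(vec, lo, hi)]
        for vec in vectors
    ]

inputs = normalize(list(artifacts.values()))
```

Each normalized row can then be fed to the SOM as one input vector, so attributes measured on very different scales (a 1-3 level versus an effort estimate in days) contribute comparably to the distance calculations.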
The models Sedbrook created with this technique “revealed requirements clusters and component planes identified important attributes within clusters” and are documented in figures 3 and 4 below (Sedbrook, 2005, p. 66). The shading of the nodes represents patterns in the input vectors, and the shifting observed between phases represents the changing requirements that occurred over the project life cycle. The maps also allow newly discovered requirements to be classified easily. The final step consisted of creating a semantically addressable mapping, which showed the links between SOM clusters and the related documents and diagrams; see figure 5 for details. At the conclusion of the experiment, the managers and analysts discussed the models together, claimed to have spotted several inconsistencies that had not previously been identified, and reported a better “comprehension and understanding of change trajectories across project phases” (Sedbrook, 2005, p. 69). Continuously changing requirements are one of the great risks of requirements analysis; the ability to track those changes over time, particularly with more volatile stakeholders, is essential for avoiding project failure. That said, several caveats should be noted about this technique. SOMs are a powerful mapping technique that relies on advanced mathematics and neural mapping, which means specialized commercial software is necessary to create them. Furthermore, this study relied primarily on use cases, as opposed to artifacts such as data-flow diagrams. Further analysis of SOM effectiveness with different requirements artifacts is necessary before deciding whether SOMs are a good fit in general or only in specific situations.
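A semantically addressable mapping can be as simple as an index from each cluster back to its source artifacts. The sketch below is hypothetical; the cluster labels and file names are invented, not taken from Sedbrook's study:

```python
# Hypothetical index linking SOM clusters to the documents and
# diagrams whose attribute vectors landed in each cluster.
cluster_index = {
    "cluster-A (ordering)": ["use-case-03.doc", "checkout-dfd.vsd"],
    "cluster-B (partners)": ["partner-emails.xlsx", "use-case-11.doc"],
}

def artifacts_for(cluster):
    """Return the source artifacts behind a given cluster."""
    return cluster_index.get(cluster, [])
```

An analyst inspecting a cluster on the map can then jump straight to the underlying requirements documents instead of searching for them by hand.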
Al-Karaghouli, Alshawi, and Fitzgerald focused on the issue of communication mismatch between analysts and clients, specifically when analysts focus too much on technical issues as opposed to the business issues clients are concerned with. To correct this problem, Al-Karaghouli et al. suggest the use of set mapping (Venn diagrams) as a means to bridge the knowledge gap between analyst and client. The video below provides a brief explanation of how Venn diagrams work as well as the logic behind them.
Figure 6 in the index contains a very basic example of how Al-Karaghouli et al.’s proposed model works. Circle R represents all identified client requirements, S shows the system development specification requirements from the analyst’s perspective, and the overlap between the two shows the agreed-upon requirements. Analysts and clients then meet to clarify requirements so that the overlap grows larger and business requirements are mapped to system specifications (illustrated in figure 7) (Al-Karaghouli, Alshawi and Fitzgerald, 2005). While this technique shows potential for letting analysts and clients see how close they are to being on the same page, it does appear to have some weaknesses. For one, the diagram offers only a very high-level overview and is useful only for ensuring that analysts and clients are aligned. While this is important, the diagram serves little other purpose, and other techniques, some of which were identified above, can provide the same assurance while also yielding more useful data. Furthermore, Al-Karaghouli et al. did not perform or reference any field tests of this technique to see how useful it actually is. On the other hand, this is an incredibly simple diagramming technique, more easily interpreted by clients than DFDs or UCDs, and that in itself could contribute to its usefulness. In the end, Al-Karaghouli et al.’s arguments would be better received if the authors provided some kind of evidence of the diagram’s benefits.
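The set-mapping idea translates directly into code. The sketch below uses Python sets with invented requirement names, following the R and S labels from the figure:

```python
# Hypothetical requirement sets: R = client requirements,
# S = requirements captured in the analyst's system specification.
R = {"track orders", "email alerts", "export to csv", "audit log"}
S = {"track orders", "email alerts", "role-based login"}

agreed = R & S   # the overlap: requirements both sides share
missed = R - S   # client needs not yet reflected in the spec
extra = S - R    # spec items the client never asked for

# Each clarification meeting aims to grow `agreed` until R == S.
```

The `missed` and `extra` sets give the two sides a concrete agenda for the next clarification meeting, which is exactly the gap the overlapping circles are meant to expose visually.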
This section has introduced several types of diagramming tools and shown how, used in conjunction with current requirements elicitation practices, they could greatly improve the requirements elicitation process. No matter how skilled analysts are, a lack of good modeling tools will always inhibit their ability to correctly identify requirements. Some of these tools are relatively simple to use, such as the Venn diagram, while others, such as the SOM, require sophisticated software to fully utilize. Analysts must weigh the benefit of these tools against the time it takes to use them and the potential cost of missed requirements. The last section of this paper describes techniques meant to improve the analysts themselves, as opposed to the requirements elicitation process, and can be read here.