Copyright 2004 CXO Media Inc
As soon as she walked into the meeting, Jane Smith knew that the
executive on the other side of the desk wanted to buy something that Smith
wasn't supposed to sell: a trumped-up rating for the executive's software
development division so that his company could qualify to bid on contracts
from the United States Department of Defense.
Smith (not her real name) is one of a select group of experienced IT
pros, called lead appraisers, who go into companies and assess the
effectiveness of their software
development processes on a scale from 1 (utter chaos) to 5
(continuously improving) under a system known as the Capability Maturity
Model, or CMM. The company she was visiting wanted to move up to Level 2,
but based on some initial discussions, Smith knew that the company was a
1. Level 1 describes most of the software development organizations in the
world: no standard methods for writing software, and little ability to
predict costs or delivery times. Project management consists mostly of
ordering more pizza after midnight.
After a few initial niceties, the executive leaned across the table to
Smith and another lead appraiser who had accompanied her to the meeting
and asked, "How much for a Level 2?"
"That's when I got up and left the room," Smith recalls. "The other
appraiser stayed. And the company got its rating."
The stakes for a good CMM assessment have gotten only higher since
Smith's close encounter with corruption some 10 years ago. Today, many
U.S. government agencies in addition to the DoD insist that companies that
bid for their business obtain at least a CMM Level 3 assessment--meaning
the development organization has a codified, repeatable process for an
entire division or company. CIOs increasingly use CMM assessments to
whittle down the lists of dozens of unfamiliar offshore service
providers--especially in India--wanting their business. For CIOs, the
magic number is 5, and software development and services companies that
don't have it risk losing billions of dollars worth of business from
American and European corporations.
"Level 5 was once a differentiator, but now it is a condition of
getting into the game," says Dennis Callahan, senior vice president and
CIO of Guardian Life Insurance. "Having said that, there are some Level 3
or 4 startups that we might consider, but they have a lot more convincing
to do before I would do business with them. They would be at a
disadvantage."
With CIOs increasingly dependent on outside service providers to help
with software projects, some have come to view CMM (and its new, more
comprehensive successor, CMM Integration, or CMMI) as the USDA seal of
approval for software providers. Yet CIOs who buy the services of a
provider claiming that seal without doing their own due diligence could be
making a multimillion-dollar, career-threatening mistake.
That's because software providers routinely exaggerate their
assessments, leading CIOs to believe that the entire company has been
assessed at a certain level when only a small slice of the company was
examined. And once providers have been assessed at a certain level, there
is no requirement that they test themselves ever again--even if they
change dramatically or grow much bigger than they were when they were
first assessed. They can continue to claim their CMM level forever.
Worse, some simply lie and say they have a CMM assessment when they
don't. And appraisers say they occasionally hear about colleagues who have
had their licenses revoked because of poor performance or outright
cheating in making assessments.
Yet CIOs who want to check up on CMM rating claims are out of luck.
There is no organization that verifies such claims. Furthermore, the
Software Engineering Institute (SEI), which developed CMM and is
principally funded by the DoD, will not release any information about
companies that have been assessed, even though appraisers are required to
file records of their final assessments with the institute.
As American and European companies stampede offshore to find companies
to do their development work, they first need to understand what CMM
ratings really mean. Yet few CIOs bother to ask crucial questions, say IT
industry analysts and the service providers themselves. "Not even 10
percent of customers ask for the proof of our CMM," says V. Srinivisan,
managing director and CEO of ICICI Infotech, an Indian software services
provider that claims a Level 5 certification. "They inevitably take it for
granted, and they don't ask for the details."
CIOs who don't ask for the details will not be able to distinguish
between companies that are using CMM in the spirit it was intended--as a
powerful, complex model for continuous internal improvement--and those
that are simply going through the motions to qualify for business. Buying
by the CMM number alone could mire CIOs in the same problems that caused
them to look offshore in the first place: high costs, poor quality and
shattered project timetables--not to mention the loss of thousands of
U.S. IT jobs.
"When you talk about something simple like a number and lots of money
is involved, someone's going to cheat," says Watts Humphrey, the man who
led the development of CMM and is currently a fellow at the SEI. "If CIOs
don't know enough to ask the right questions, they will get hornswoggled."
(For a list of the best questions to ask, see "Twelve Critical Questions,"
Page 52.)
Where CMM Comes From
The CMM was a direct response to the Air Force's frustration with its
software buying process in the 1980s. The Air Force and other DoD
divisions had begun farming out increasing amounts of development work and
had trouble figuring out which companies to pick. Carnegie Mellon
University in Pittsburgh won a bid to create an organization, the SEI, to
improve the vendor vetting process. It hired Humphrey, IBM's former
software development chief, to participate in this effort in 1986.
Humphrey decided immediately that the Air Force was chasing the wrong
problem. "We were focused on identifying competent people, but we saw that
all the projects [the Air Force] had were in trouble--it didn't matter
who they had doing the work," he recalls. "So we said let's focus on
improving the work rather than just the proposals."
The first version of CMM in 1987 was a questionnaire designed to
identify good software practices within the companies doing the bidding.
But the questionnaire format meant that companies didn't have to be good
at anything besides filling out forms. "It was easy to cram for the test,"
says Jesse Martak, former head of a development group for the defense
contracting arm of Westinghouse, which is now owned by Northrop Grumman.
"We knew how to work the system."
So the SEI refined it in 1991 to become a detailed model of software
development best practices and added a group of lead appraisers, trained
and authorized by the SEI, to go in and verify that companies were
actually doing what they said they were doing. The lead appraisers head up
a team of people from inside the company being assessed (usually three to
seven, depending on the size of the company). Together, they look for
proof that the company is implementing the policies and procedures of CMM
across a "representative" subset (usually 10 percent to 30 percent) of the
company's software projects. The team also conducts a series of
confidential interviews with project managers and developers--usually
during the course of one to three weeks and, again, depending on the size
of the organization--to verify what's really happening. It's a tough
assignment for the internal people on the team because they are being
asked to tattle on their colleagues.
"It can be very stressful for the [internal] assessment team," says a
lead appraiser who asked to remain anonymous. "They have conflicting
objectives. They need to be objective, but the organization wants to be
assessed at a certain level."
David Constant, a lead appraiser and partner with Process Inc., a
software projects consultancy, recalls assessing a company where all the
developers had been coached by management on what to say. "I had to stop
the interviews and demand to see people on an ad hoc basis, telling the
company who I wanted to speak to just before each interview began,"
Constant recalls. "And the sad part was that they didn't need to coach
anybody. They would have easily gotten the level they were looking for
anyway--they were very good."
The new model is much tougher to exploit than the original
questionnaire. In 1991, Westinghouse's Martak recalls telling his
management: "This is a different ball game now. If you have a good lead
appraiser, you can't fake it out." Martak led his group to a Level 4
assessment and eventually became a lead appraiser himself.
The depth and wisdom of the CMM itself are unquestioned by experts on
software development. If companies truly adopt it and move up the ladder
of levels, they will get better at serving their customers over time,
according to anecdotal evidence. But a high CMM level is not a guarantee
of quality or performance--only process. It means that the company has
created processes for monitoring and managing software development that
companies lower on the CMM scale do not have. But it does not necessarily
mean those companies are using the processes well.
"Having a higher maturity level significantly reduces the risk over
hiring a [company with a lower level], but it does not guarantee
anything," says Jay Douglass, director of business development at the SEI.
"You can be a Level 5 organization that produces software that might be
garbage."
That assessment is borne out by a recent survey of 89 software
applications by Reasoning, an automated software inspection company,
which found, on average, no difference in the number of code defects
between software from companies that claimed one of the CMM levels and
software from those that did not. In fact, the study found that Level
5 companies on average had higher defect rates than anyone else. But
Reasoning did see a difference when it sent the code back to the
developers for repairs and then tested it again. The second time around,
the code from CMM companies improved, while the code from the non-CMM
companies showed no improvement.
Truth in Advertising
Stories about false claims abound. Ron Radice, a longtime lead
appraiser and former official with the SEI, worked with a Chicago company
that was duped in 2003 by an offshore service provider that falsely
claimed to have a CMM rating. "They said they were Level 4, but in fact
they had never been assessed," says Radice, who declined to name the
guilty provider.
When done correctly, CMM is a costly, time-consuming effort. The
average time for a company to move from Level 1 to Level 5 is seven years,
and the expense of building a really robust, repeatable software
development process with project and metric tracking is many times the
cost of a CMM assessment (which alone costs about $100,000). For small
companies short on funds and staff, or startups, forgoing business while
building a software process capable of receiving a Level 5 assessment may
seem more risky than fudging a number--especially when your customers
don't know enough to ask about it. And mature companies that already have
a high CMM level may not want to risk the disruption, cost and potential
disappointment of getting assessed again regularly.
Officials at the SEI deny that companies are exaggerating or lying
about their CMM claims.
"There is no one who will declare 'We are CMM Level 3 as an
organization,'" says the SEI's Douglass. "They'll say they are Level 3 in
this development center or that product group."
Not true. A quick Nexis search revealed four companies--Cognizant,
Patni, Satyam and Zensar--claiming "enterprise CMM 5," with no explanation
of where the assessments were conducted or how many projects were
assessed, or by whom. Dozens more companies trumpet their CMM levels with
little or no explanation.
Indeed, all of the services companies we interviewed for this story
claimed that their CMM assessments applied across the company when in fact
only 10 percent to 30 percent of their projects were assessed. That's
partly because experts say that assessing every project at a big company
would be too unwieldy and expensive.
Yet few of those same experts support the idea that assessing a 10
percent slice of projects--even those considered to be representative of
all the different types of work a company does--should lead to claims of
"enterprisewide CMM." Vendors argue that there is logic behind these
claims. The higher CMM levels (3 and above) require that a company have a
centralized process for software development and project tracking, among
other things. Since everyone across the company is supposed to use that
same process that was used in the projects that were assessed at Level 5,
for example, all projects across the company can be assumed to be at Level
5.
But as soon as you dig beneath the surface, the logic falls apart. The
process may have changed completely since the assessment was performed.
Indeed, Indian services companies, which have reported the most CMM Level
5 assessments, are growing so quickly--some adding as many as 50 to 60
new developers a week--that avoiding change is nearly
impossible. The company also may have changed the types of work it does
and perhaps acquired other companies along the way that were not assessed
at any level. And if the company does not have an excellent training
program for all its project managers and developers--so they can work at
the same level as those in the projects that were assessed--the assessment
means little.
CMM is a "snapshot in time," says the SEI, and it encompasses only the
projects that were assessed. Furthermore, if the snapshot was taken more
than two years ago, most experts say, it will have yellowed so badly that
the company is probably no longer at the same maturity level.
Now that CMM has become table stakes for billions of dollars' worth of business,
some believe that providers should bite the bullet and get all their
projects assessed if they are going to claim "enterprise Level 5 CMM."
"If I were a CIO and a company was telling me their entire company was
CMM 5, I'd want all the people on my project to have gone through the
assessment," says Margo Visitacion, a Forrester Research analyst and
former quality assurance manager at a software development company. "[The
service providers] are getting millions in business from their CMM levels.
Why shouldn't they have all of their developers go through an
assessment?"
How Much for That Certification?
Appraisers continue to cheat too, according to their colleagues. The
pressure on appraisers, in fact, is higher than ever today, especially
with offshore providers competing in the outsourcing market. Frank Koch, a
lead appraiser with Process Strategies Inc., another software services
consultancy, says some Chinese consulting companies he dealt with promised
a certain CMM level to clients and then expected him to give it to them.
"We don't do work for certain [consultancies in China] because their
motives are a whole lot less than wholesome," he says. "They'd say we're
sure [certain clients] are a Level 2 or 3 and that's unreasonable, to say
nothing of unethical. The term is called selling a rating."
Will Hayes, quality manager for the SEI Appraisal Program, would only
acknowledge one recent case of an appraiser who had his license revoked by
the SEI for improperly awarding a company a Level 4 assessment. However,
it's difficult for the SEI to know exactly how much cheating is going on
because it does not monitor the claims that companies make about CMM.
"Are there organizations out there claiming Level 5 who have never
submitted the information to the SEI? I'm sure that there are," says SEI's
Douglass. That's little comfort to CIOs who would rather not discover a
false CMM claim the hard way--by seeing their projects fail.
There is a way to prove whether the assessment was done, but it may be
hard for CIOs to get the evidence. Appraisers are required to submit
formal documentation of all their assessments to the SEI and to customers.
Lead appraisers must write up something called a Final Findings Report
that includes "areas for improvement" if the appraiser finds any (they
usually do, even with Level 5 companies). But there is no requirement
that the content or format of the reports be consistent across
appraisers or companies. Only the methods for arriving at the final number are
consistent. According to one appraiser who asked not to be named,
companies will often ask appraisers to "roll up" the detailed findings
into shallow PowerPoint presentations that don't give a very good picture
of the company and its software development processes. "The purpose of the
report is to tell companies where they need to improve--that's the whole
point of CMM," she says. "But they make us write these fluffernutters that
can gloss over important details."
The Final Findings Report is what company officials present internally
to the big brass and to customers knowledgeable enough to ask for it. But
there's no obligation to share it. Companies can declare their CMM level
without producing any evidence. They can even hire their own lead appraisers
inside the company and assess their CMM capabilities themselves. They
don't have to hire a lead appraiser from the outside who might be under
less pressure to give a good assessment. And they can characterize their
CMM level any way they want in their marketing materials and press
releases.
SEI officials say they are not in the business of controlling what
companies say about their assessments. Nor will they reveal to the public
which companies have been assessed or what the assessments consisted of.
"We weren't chartered to be policemen--we're a research and development
group," Hayes says.
Instead, the SEI exerts control through the relatively small lead
appraiser community (approximately 220 are authorized to do CMM
assessments). From the beginning, the SEI has reserved the right to
discipline or even remove appraisers who cheat or do their jobs badly. But
in the early days, the SEI rarely followed through on those threats, say
longtime appraisers.
More recently, the SEI toughened up the CMM itself and plans to
completely replace it (as of December 2005) with a broader, more in-depth
model called CMMI. In the process, it has increased the training
requirements and controls on appraisers. According to Hayes, under CMMI,
the SEI reviews each appraisal that comes in for irregularities. And under
CMMI, appraisers have to file a report called an Appraisal Disclosure
Statement that clearly states which parts of the organization and projects
were assessed, as well as all the people who took part in the assessment
(though assessed companies are not required to reveal that report
publicly, either). The SEI, along with the lead appraiser community, is
also developing a "code of ethics" for appraisers.
Yet if CIOs want to get the true picture about appraisers, to check if
they've ever been reprimanded for performing faulty assessments or thrown
out altogether for cheating, they are out of luck. The SEI will not reveal
any information about errant appraisers.
And the SEI has no intention of becoming a governing body like the
American National Standards Institute (ANSI), which controls ISO 9000
certification in the United States. ANSI requires companies to be
reassessed every six months if they want to maintain their ISO 9000
certification and reassesses all its appraisers each year. "No one has
asked us to become a governing body, and that's not our mandate. And if we
did, what would that solve?" the SEI's Humphrey asks. "It wouldn't excuse
anyone from doing their homework."
Indeed, CIOs who look to CMM for guarantees won't find them, says Rick
Harris, director of application development for OnStar, a division of GM
that provides communications inside the company's vehicles. He recalls
confronting a manager from one of his CMM Level 5 offshore outsourcing
companies who did not know how to do a testing plan for software. "My
people had to train him to do it," he says. On another occasion, Harris's
staff discovered that the offshore provider had fallen far behind schedule
in one of its projects but had not told him. "You'd think a Level 5
company would have told me months before that the schedule was slipping
and we needed to do something," he says.
Problems like those can damage CIOs' credibility inside IT and with the
business--especially if they used a CMM level to defend a decision to move
development offshore or use a particular outfit. As Harris has learned,
what matters is what's behind the impressive-looking number. Is there a
verifiable commitment to quality, process and training? Can companies
demonstrate improvements they've made over time in customer delivery
times, developer productivity and defect density? Will the project
managers who went through the assessment be assigned to your project? If
the answer to any of these questions is no, then a CMM Level 5 isn't worth
much.
There is still no substitute for deep due diligence. "The real test is
when you get into the trenches and see whether these companies bring their
capabilities to bear," says Harris. "Do their people and processes hold up
under pressure? In my experience, in some cases they have and others they
haven't."
Do you have any CMM stories to share? Send feedback to Executive Editor
Christopher Koch via e-mail at ckoch@cio.com.