"Working software is the primary measure of progress"
Fundamentally, there is no more valid measure of progress than the working software itself. This leaves only one point open to discussion: the definition of "working software".
Defining "Working Software"
The criterion for defining working software is obviously open to debate. A common definition is:
Software can be called "working software" when it meets a defined set of business requirements and can be demonstrated to do so through testing.
This is one reason why Agile processes place so much value on unit testing: these tests show, very early in the process, that the software is meeting business requirements, without the need to build a fully functional, user-testable application. Unit tests also demonstrate, at a fairly low level of granularity, that the code meets its requirements, whereas a user-facing application depends on too many factors for correctness to be established easily.
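To make this concrete, here is a minimal sketch of a unit test verifying a single business requirement. The requirement, the function name `order_total`, and all figures are hypothetical, invented purely for illustration; the point is that one small rule can be demonstrated to work long before a full application exists.

```python
import unittest

# Hypothetical business requirement: orders of 100 units or more
# receive a 10% discount. Function and figures are illustrative only.
def order_total(unit_price, quantity):
    total = unit_price * quantity
    if quantity >= 100:
        total *= 0.9  # apply the volume discount
    return total

class DiscountRequirementTest(unittest.TestCase):
    def test_no_discount_below_threshold(self):
        # 99 units: full price applies
        self.assertEqual(order_total(2.0, 99), 198.0)

    def test_discount_at_threshold(self):
        # 100 units: 10% discount applies
        self.assertAlmostEqual(order_total(2.0, 100), 180.0)
```

Run with `python -m unittest` to get an early, repeatable demonstration that this one requirement is met.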
So How Do We Measure Progress?
Based on this definition, our best way to measure progress and velocity on projects is to evaluate the defined business requirements against the code that is provided to meet those requirements.
Code that has been written but is not yet functional and passing tests cannot be considered progress until it has been completed to the level defined above.
This prioritises getting components of functionality completed early, rather than attempting to do everything simultaneously.
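The measurement rule above can be sketched in a few lines: only requirements that are both code-complete and demonstrably passing their tests count towards progress; half-written code contributes nothing. The requirement names and status data below are hypothetical.

```python
# Illustrative sketch: progress = fraction of requirements that are
# complete AND passing their tests. Data is invented for the example.
requirements = {
    "login":    {"code_complete": True,  "tests_passing": True},
    "search":   {"code_complete": True,  "tests_passing": False},
    "checkout": {"code_complete": False, "tests_passing": False},
}

def progress(reqs):
    done = sum(1 for r in reqs.values()
               if r["code_complete"] and r["tests_passing"])
    return done / len(reqs)

print(f"{progress(requirements):.0%}")  # only "login" counts: 33%
```

Note that "search" earns no credit despite its code being written, because it cannot yet be demonstrated to work.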
The traditional "waterfall" approach to software development spreads a large set of functional requirements across large teams, for example assigning each piece of functionality to a team member with an expected delivery date measured in weeks or months. This leads to very long cycles before anything corresponding to "working software" is delivered.
An Agile approach to the same problem is to focus the team on only a small subset of that functionality, and to attempt to deliver it in a working and testable state in very short iterations. The degree of success with which the team does this becomes its "velocity". The velocity can then be used as a predictor of future success rates and therefore of future timescales. This also allows a "fail fast" mentality, where it is better to hit problems early on and resolve them, rather than defer every problem as far down the development path as possible.
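The velocity-as-predictor idea reduces to simple arithmetic, sketched below. All the point values are hypothetical, and real teams would use a range rather than a single average, but the mechanics are the same.

```python
import math

# Hypothetical sketch: average velocity over completed iterations,
# then forecast how many more iterations the remaining work needs.
completed_points = [8, 10, 9]   # points delivered in each past iteration
remaining_points = 45           # estimated work left in the backlog

velocity = sum(completed_points) / len(completed_points)   # 9.0
iterations_left = math.ceil(remaining_points / velocity)   # 5

print(f"velocity: {velocity}, forecast: {iterations_left} iterations")
```

Because velocity is re-measured every iteration, the forecast corrects itself as soon as reality diverges from the plan, which is exactly what enables failing fast.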
Therefore, the best way of measuring success is to do one thing at a time: do it well, ensure it works, ensure it meets its criteria and can be tested, then replicate on the next piece of functionality the things that went right, and eliminate the things that did not go so well.
05-15-2008 9:48 AM