I recently finished almost two years' worth of work for a client, the lion's share of which was spent on one specific J2EE application. It occurred to me shortly before I transferred off, and not for the first time, that with the exception of some new functionality added over those years, at least 95% of the requirements for this application have not changed since the software was a mainframe program written in Natural and using Adabas more than 5 years ago.
I then realized that with no major exceptions I could think of, every application I've ever worked on, in twenty-odd years of serious programming, has had fairly stable requirements.
Now, don't get me wrong. Frequently the requirements encoded in the final version of a program are wildly at variance with what the client and programmers initially thought them to be. The client may be quite inarticulate at expressing what he wants; the client may assume that the business analyst or developer understands things about the business rules when they really don't; the business analyst may do a bad job of describing the requirements to the designers; the designers may botch the job of capturing the requirements in the design; and so forth.
Once the application is delivered, it's rarely correct. The client now has something concrete to compare their wishes against, and years of fixing and upgrading reshape the work into something approximating what should have been captured and coded 3 or 5 years before. The main point is that, over a substantial timeframe, the real requirements do not change.
The agile community would have it that requirements change, and change a lot. Well, no, they don't. The understanding of what the requirements are changes a lot, though. But it seems rather silly to me to espouse a philosophy that basically accepts poor communication and, rather than working to fix the problem, works around it.
I recently read a blog article, and a comment in another, that seem to me to offer wise solutions. One writer espoused writing a thorough Readme first, before doing design; another advocated writing the user manual first. Both are basically the same idea, although I prefer the user guide.
Think about it. The business analyst gathers the requirements, and at that point the user manual is written. This can immediately go back to the client for review and approval. If there is some serious misunderstanding of what the users want, it's likely to show up in the user guide.
A user manual isn't everything. Not every requirement manifests as a process, step, or procedure that gets described in a user guide. But a lot do. The rest are often business rules, and I believe there are better ways of capturing and coding these than what is typically done. I'll discuss that in a future article.