We assert that:
Most projects do not do either.
From a system maintenance point of view, the most important issues that affect
selection of approach are:
These changes may be concentrated in a few requirements, and hence hopefully in just a few classes, or they may be distributed randomly across all requirements and hence across the whole system.
If you do not know the change pattern, you have to assume the random pattern, and therefore use the simplest possible design. You have to use this approach until you know more.
Risk reduction plays an important part, and the way to achieve it is to produce the simplest design; doing more analysis does not reduce risk. Although we felt that this statement might be controversial, there was rough agreement among participants that it reflected their experience.
Culture is a big issue that we did not really cover as part of the workshop, except to note that the selection of either approach is driven by cultural factors and is likely to be a political decision.
The designed-in flexibility approach demands that a model of the domain be available or attainable. (We need an existing model, because if we do not have one it will take too much time to come up with an appropriate metamodel.) With an application family, and multiple models and applications, we can drive towards a metamodel.
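To make the metamodel idea concrete, here is a minimal hypothetical sketch (the class and attribute names are our own illustration, not taken from any participant's system). In a metamodel-driven design, the domain concepts themselves are represented as data rather than hard-coded classes, so each application in the family supplies its own model while sharing one engine:

```python
class EntityType:
    """Metamodel element: describes one kind of domain object."""
    def __init__(self, name, attributes):
        self.name = name
        self.attributes = set(attributes)

    def new(self, **values):
        # Reject attributes the model does not declare.
        unknown = set(values) - self.attributes
        if unknown:
            raise ValueError(f"unknown attributes: {unknown}")
        return Entity(self, values)


class Entity:
    """Model element: an instance described by an EntityType."""
    def __init__(self, entity_type, values):
        self.entity_type = entity_type
        self.values = values


# Each application in the family supplies its own model as data...
invoice = EntityType("Invoice", ["customer", "total"])

# ...and new instances (or whole new entity types) need no new classes.
bill = invoice.new(customer="ACME", total=120)
print(bill.entity_type.name, bill.values["total"])
```

The cost of this indirection is exactly the requirement stated above: it only pays off when the domain model is already well understood, which is why a usable metamodel presupposes an existing model or an application family to generalize from.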
The counter-argument is that in a sufficiently dynamic environment we are shooting at a moving target: requirements constantly change, and only eventually do we know what to expect from the changes and do our abstractions start to make sense in the domain. So we can only generalize after we have done a lot of the implementation work; only then can we start to discover an abstractable domain. Hence we should talk about evolving and emergent systems.