Many statistical procedures try to get the most out of our data (and often provide more information than we actually need to answer our scientific hypotheses). A common strategy is to reduce the amount of information by fitting it into simpler concepts: clouds of points are represented by best-fitting lines, and complex similarity structures are represented in the form of binary trees.
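To make the first kind of reduction concrete, a cloud of two-dimensional points can be collapsed to just two numbers, the slope and intercept of its least-squares line. The following sketch is only an illustration of that idea, not a procedure from this paper; the function name and the example data are our own.

```python
def fit_line(points):
    """Reduce a cloud of 2-D points to the slope and intercept of the
    least-squares best-fitting line (ordinary least squares)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

# A noisy cloud scattered around y = 2x + 1 collapses to two numbers.
cloud = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8), (4, 9.1)]
slope, intercept = fit_line(cloud)
```

The reduction is lossy by design: the residuals are discarded, and only the pattern an audience can read at a glance is kept.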
The strategy we follow in this paper is to constrain the solutions to simple patterns, patterns known to place relatively little demand on the perceptual skills of an audience.
This should not be misunderstood as an argument for replacing more elaborate, statistically well-grounded procedures, but for supplementing them with a family of 'low-cost' tools. Such tools serve their purpose when they capture the most important features of the underlying information.
A general expectation from the perspective of model building is that the simpler a model and the more restricted the set of its solutions, the likelier we are to end up with a very poor fit to the empirical data. Of course, this will depend on the data.
On the other hand, even a poor model can be useful to some degree, provided it still reflects the most important features of the underlying structure and misplaces only the less important elements. This assumes an algorithm which explicitly takes some criteria of structure into account, solves the fit for the core of the structure first, and proceeds to the less important elements at a later stage.
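One way such a core-first strategy could look is a greedy seriation that places the most important elements before the peripheral ones, so that late placement errors affect only the periphery. This is a hypothetical sketch under our own assumptions (the importance weights, the insertion heuristic, and all names are ours), not the algorithm developed in this paper.

```python
def core_first_order(items, importance, dissim):
    """Greedy 'core first' seriation sketch: order items so that the
    most important ones are placed first, inserting each later item at
    the position that minimises the total dissimilarity along the order.

    items:      list of labels
    importance: dict label -> weight (higher = more central)
    dissim:     dict (a, b) -> dissimilarity, treated as symmetric
    """
    def d(a, b):
        return dissim.get((a, b), dissim.get((b, a), 0.0))

    # Process items from most to least important (the 'core' first).
    queue = sorted(items, key=lambda x: importance[x], reverse=True)
    order = [queue.pop(0)]
    for item in queue:
        best_pos, best_cost = 0, float("inf")
        for pos in range(len(order) + 1):
            cand = order[:pos] + [item] + order[pos:]
            cost = sum(d(cand[i], cand[i + 1])
                       for i in range(len(cand) - 1))
            if cost < best_cost:
                best_pos, best_cost = pos, cost
        order.insert(best_pos, item)
    return order

items = ["a", "b", "c"]
importance = {"a": 3, "b": 2, "c": 1}
dissim = {("a", "b"): 1.0, ("b", "c"): 1.0, ("a", "c"): 2.0}
order = core_first_order(items, importance, dissim)
```

Because the core is fixed early, a poor final fit concentrates its errors among the elements added last, which is exactly the failure mode described above.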