Consider this article as a brain dump. One day it will be refined...
It is rare that a lucid understanding of the domain exists before the first line of code is written. Indeed, this was the objective of the waterfall method: learn everything you can about what the end-user will need before writing any code, and hope that those requirements will not change. This assumes that even the end-user fully understands the domain, but that is rarely true. It is all but guaranteed that upon delivery the user will discover something that can be improved, or the domain itself may have changed, in which case the whole enterprise has been a waste. Ideally, we would build only what is currently required as we grow our understanding of the domain and of the users in that domain (users located elsewhere but working in the same domain may have vastly different usage patterns).
For this reason, development must always proceed cautiously, with every step offering users the chance to be observed using the product as well as to give feedback. Given that the nature of software development is discrete (data models are discretised descriptions of entities; features are countable improvements; runtimes involve non-overlapping instances of execution, etc.), it is open to combinatorial explosion. This is what makes writing great software very hard: careful, expensive attention must be devoted to ensuring that very precise entities function seamlessly, with as few chances for unintended consequences as possible. Every new feature increases the possible interactions with other components. Good design attempts to minimise the emergence of such heterogeneity. Once a good fit has been discovered (yes, these have to be discovered), incremental improvements can proceed far more rapidly, and every instance of a bug is an opportunity for refinement. In fact, my experience has been that whenever I have taken the time to be exacting about refining my code, fixing bugs is easy: the understandability of the code is usually good, and the fix leads to far better code than before.
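The combinatorial-explosion point above can be made concrete with a small sketch. If we make the simplifying assumption that every pair of features can potentially interact, the number of pairings grows quadratically with the feature count:

```python
# Sketch of the combinatorial growth of feature interactions.
# Assumption (mine, for illustration): an "interaction" is any pair of
# features, so n features admit C(n, 2) = n*(n-1)/2 possible pairings.
from math import comb

for n in [5, 10, 20, 40]:
    print(f"{n} features -> {comb(n, 2)} possible pairwise interactions")
```

Doubling the number of features roughly quadruples the pairings to reason about, which is why minimising such heterogeneity pays off so quickly.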
This idea of incremental delivery implies that software development must be exploratory: we are looking for the correct representation of the domain and the transactions it admits. This is the essence of the hack: it's dirty. In most cases, the hack is so ad hoc that it can only be used by its creator. Unfortunately, in many cases (particularly in fields where data analysis is part and parcel of the job, such as scientific research) the code never leaves this stage, even though it is presented as complete. Without proceeding to the refactor step, the work persists as juvenile babble, incoherently articulating ideas it is only half prepared to express. This is why adapting such code is usually a fool's errand, unlikely to amount to much. The refactor attempts to elicit the implicit data models that need to be made explicit, while highlighting the main transactions performed between those data models. It pays to have this clear in the minds of all participants.
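A minimal sketch of what "eliciting the implicit data model" can look like, with entirely hypothetical names (`DoseRecord`, `total_administered_mg` and the tuple layout are my invention, not from any real project):

```python
# Hack stage: the data model is implicit in ad hoc tuples -- only the
# author remembers what each position means.
def total_dose_hack(records):
    # records: list of (patient_id, drug, mg, administered) tuples
    return sum(r[2] for r in records if r[3])

# Refactor stage: the implicit model is made explicit, and the main
# transaction (summing administered doses) is named.
from dataclasses import dataclass

@dataclass
class DoseRecord:
    patient_id: str
    drug: str
    milligrams: float
    administered: bool

def total_administered_mg(records: list[DoseRecord]) -> float:
    return sum(r.milligrams for r in records if r.administered)
```

Both functions compute the same thing, but only the second states its model in a form the next reader (or the next exploration) can build on.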
Our perspective is naturally limited by our immediate terrain: the view from a valley is different to that from a summit. Even the highest summit only allows a view as far as the horizon. This is the principle at work with exploration: every iteration provides a foundation upon which to begin the next exploration.
If you are faithful to this task then you open yourself up to serendipity: unexpected pleasant surprises. The more you refactor your code towards domain faithfulness, the more your code will lend itself to the domain in ways you hadn't anticipated. I experienced this on a project I had been working on for a year: I discovered a way to use a feature that I had written without that application in mind. It was the best feeling in the world.