by Max Guernsey, III - Managing Member, Hexagon Software LLC
If you haven’t read the introduction for this channel of articles, I highly recommend you do so now. For those who just need a brief reminder, the point is this:
In each article, we pick a practice used in traditional, program-centric Agility. We deconstruct that practice until we have reduced it to its essence: principles and values. Then, we put Humpty-Dumpty together again, with data in mind. The result should be a new practice that fills the same need as the one with which we started but is applicable to persistent data.
You will also need to understand the concepts from Part I: Evolution. If you have not already done so, you probably should read that now. If you just need a refresher, here is the gist:
Source code does and should evolve. Evolution in source code permits emergent design without which Agility could
not function. However, the process of evolution operates on groups of things and functions by the destruction of
one generation and creation of another. This is not possible with databases, because the information in an individual
database has too much value in it to permit its destruction and replacement.
In this installment, we’re going to talk about deployment in the database world. In the software world, we have
highly reliable deployment mechanisms and techniques. These are important because they let us distribute the software
we write. If you cannot distribute a program to its users, it is very difficult to realize any of its potential value.
While the techniques in the software world are highly refined and usually quite reliable, the processes we use for databases are largely manual, unreliable, or both. We are going to try to figure out why deployment of programs and components is so reliable and deployment of database schemas is, by comparison, unreliable.
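To make the contrast concrete, consider one common way to make schema deployment repeatable: record the database's schema version in the database itself, and apply only the ordered changes it has not yet seen. The sketch below is illustrative only, not a practice from this series; it assumes SQLite via Python's standard library, and the table and migration names are hypothetical.

```python
import sqlite3

# Hypothetical ordered schema changes; names and statements are illustrative.
MIGRATIONS = [
    (1, "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE customer ADD COLUMN email TEXT"),
]

def deploy(conn: sqlite3.Connection) -> int:
    """Apply any migrations newer than the version recorded in the database."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0  # no rows yet means version 0
    for version, ddl in MIGRATIONS:
        if version > current:
            conn.execute(ddl)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
            current = version
    conn.commit()
    return current

conn = sqlite3.connect(":memory:")
deploy(conn)  # applies both migrations
deploy(conn)  # second run is a no-op: the version table makes it repeatable
```

The key point is that the script never destroys and recreates the database; it carries the existing data forward, which is exactly the constraint that distinguishes database deployment from program deployment.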