Easing database decomposition

When we talk about decomposing a monolith into microservices, it's common to assume that the data should also be decomposed and moved within the infrastructure boundaries of the new services that arise. It should be, eventually, but that doesn't have to happen immediately.

There's a lot to think about when decomposing a monolithic database. In a relational database, for example, we might have foreign key constraints to drop or disable, and pivot tables joining tables that will now live in separate services. It might sound minor, but it piles on top of the code decomposition and all the work of getting the new service up and running. It can be overwhelming.
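As a rough illustration, suppose the monolith owns a hypothetical `orders` table that references `customers` through a foreign key, plus an `order_tags` pivot table; none of these names come from a real system. A sketch of the kind of constraint work involved, against Postgres from Python, might look like this:

```python
# Sketch: loosening cross-boundary constraints before extracting an
# "orders" service from a monolithic Postgres database.
# Table and constraint names (customers, orders, order_tags,
# orders_customer_id_fkey, order_tags_tag_id_fkey) are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=monolith user=app")  # assumed connection details
conn.autocommit = True

with conn.cursor() as cur:
    # The orders table will move to the new service, so the FK back to
    # customers (which stays in the monolith) can no longer be enforced
    # by the database. Drop it; referential integrity becomes the
    # services' responsibility.
    cur.execute("ALTER TABLE orders DROP CONSTRAINT orders_customer_id_fkey;")

    # A pivot table joining rows that will live on opposite sides of the
    # service boundary has to be assigned to one owner. Here we keep it
    # with orders and drop only the FK that crosses the boundary.
    cur.execute("ALTER TABLE order_tags DROP CONSTRAINT order_tags_tag_id_fkey;")

conn.close()
```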

One option to ease the transition of data to the new service boundaries, without moving it to another datastore instance right away, is what's called a logical decomposition: creating new schemas or collections (depending on the database type) within the existing instance.
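In Postgres, for instance, a logical decomposition can be as simple as creating a schema for the new service and moving its tables into it. A minimal sketch, again with hypothetical schema and table names:

```python
# Sketch: logical decomposition in Postgres. The data stays in the same
# physical instance; only the namespace changes. Names are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=monolith user=app")
conn.autocommit = True

with conn.cursor() as cur:
    # A dedicated schema acts as the new service's logical boundary.
    cur.execute("CREATE SCHEMA IF NOT EXISTS orders_service;")

    # Moving a table between schemas is a metadata-only operation in
    # Postgres, so this is cheap even for large tables.
    cur.execute("ALTER TABLE orders SET SCHEMA orders_service;")
    cur.execute("ALTER TABLE order_tags SET SCHEMA orders_service;")

conn.close()
```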

Yes, this keeps the current monolithic database as a single point of failure for two services (the monolith and the new one), but it serves as an intermediate step: the new service treats the schema as if it were a separate database, which allows the decomposition to proceed gradually.
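One way to make the new service treat the shared instance as if it were its own database is to give it a dedicated role that can only see its schema. A hedged sketch, with hypothetical role, password, and schema names:

```python
# Sketch: isolating the new service at the access level so it can only
# reach its own schema, even though the instance is still shared.
# Role, password, and schema names are hypothetical.
import psycopg2

admin = psycopg2.connect("dbname=monolith user=admin")
admin.autocommit = True

with admin.cursor() as cur:
    cur.execute("CREATE ROLE orders_svc LOGIN PASSWORD 'change-me';")
    cur.execute("GRANT USAGE ON SCHEMA orders_service TO orders_svc;")
    cur.execute(
        "GRANT SELECT, INSERT, UPDATE, DELETE "
        "ON ALL TABLES IN SCHEMA orders_service TO orders_svc;"
    )
    # Default the role's search_path to its own schema so application
    # code can use unqualified table names, as it would on a dedicated DB.
    cur.execute("ALTER ROLE orders_svc SET search_path = orders_service;")

admin.close()
```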

When that logical decomposition is done, we can safely move all the data and schemas/collections to a new, dedicated instance within the owning service's boundaries. Until then, we're free to gradually re-architect the data logic and structure as necessary, until we reach a state we're satisfied with.
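When that day comes, the physical move can target just the service's schema. A sketch using pg_dump and pg_restore (real tools; the hostnames, database names, and schema name below are hypothetical):

```python
# Sketch: the final physical move. Dump only the service's schema from
# the shared instance and restore it into a dedicated one.
# Hostnames, database names, and the schema name are hypothetical.
import subprocess

# Dump just the orders_service schema in Postgres custom format.
subprocess.run(
    ["pg_dump", "--schema=orders_service", "--format=custom",
     "--file=orders_service.dump", "postgresql://monolith-db/monolith"],
    check=True,
)

# Restore it into the new service's dedicated instance.
subprocess.run(
    ["pg_restore", "--dbname=postgresql://orders-db/orders",
     "orders_service.dump"],
    check=True,
)
```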