
Software Modernisation to Boost Business Agility
By Patricia Bourrillon · Posted 27 Feb 2025
In the first part of his series on Modernising the Public Sector, Carlo looked at the risks of failing to modernise legacy systems, which range from loss of data and outages to failures in critical sectors such as hospitals or transportation.
The migration of legacy systems requires a well-defined strategy, and the experience of a specialised partner is essential to avoiding failure.
In this second part, Carlo addresses the steps required to decouple applications that share a common database schema.
Let’s look at a practical example of a database-sharing scenario. The situation is as follows:
Let’s suppose that the DB licence is outdated and can’t be renewed.
Application 1 must be evolved and moved to the cloud as per GDS recommendations.
The motivation for selecting one specific application to move to the cloud rather than another is outside the scope of this article, but possible reasons include:
The motivation for urgently decoupling application 1 from the other applications, independently of the shift to the cloud, could be that schema sharing is blocking strategic improvements to application 1.
The first part of the job consists of separating the schema shared between applications 1 and 4.
Starting point: a single schema containing all the database objects from applications 1 and 4.
Objective: two separate schemas, each containing the database objects of one of the two applications, and each accessed by a different database user.
The solution consists of these main steps:
The schema in common to applications 1 and 4 must be split: one application will use the new schema, while the other application will continue to use the old schema. A new database user will be created for the new schema.
Which database objects can we move from the original schema to the new one while being sure not to affect the applications' behaviour? We are dealing with legacy applications, so no tests are available.
The prerequisite for every intervention like the one in this example is an assessment of the applications to understand use cases and main flows, and then an implementation of an exhaustive suite of tests.
The identification, design, and build of the tests are outside the scope of this article; however, Sam Davies offers great insights and further practical suggestions in his excellent article, How to test legacy software (that was not designed to be tested) when modernising.
In the assessment phase, a thorough analysis of all the applications that share the same DB must be done, answering these questions:
All questions must be answered to build a map of the application–data relations and to plan the automated test suite that must be implemented to avoid regression errors.
The assessment may reveal actions that must be taken before moving to the next phase. For example, if two or more applications share the same DB user (and therefore the same permissions), additional DB users must be created and assigned to these applications; the newly created users must be granted only the minimum rights they need. This is a sub-project in itself: the grant updates must be tested, deployed to production, monitored, and corrected over a sequence of iterations. If there is no test environment, one must be set up.
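The least-privilege step above can be sketched as a small script that generates one GRANT statement per table for each application's new user. A minimal sketch: the application name, table names, and privilege lists are hypothetical; the generated SQL follows PostgreSQL's GRANT syntax.

```python
# Sketch: generate per-application least-privilege GRANT statements.
# The user name, table names, and privileges below are hypothetical examples;
# the SQL strings follow PostgreSQL's GRANT syntax.

def grant_statements(user: str, privileges: dict[str, list[str]]) -> list[str]:
    """Build one GRANT statement per table, limited to the listed privileges."""
    return [
        f"GRANT {', '.join(privs)} ON {table} TO {user};"
        for table, privs in privileges.items()
    ]

# In this example, application 4's new user only reads orders and appends audit rows.
stmts = grant_statements("app4_user", {
    "orders": ["SELECT"],
    "audit_log": ["SELECT", "INSERT"],
})
for s in stmts:
    print(s)
```

Keeping the privilege map in code (or configuration) makes each iteration of the grant sub-project reviewable and repeatable across test and production environments.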
The first step is to implement automated tests that cover the main use cases identified during the assessment. It is not necessary to cover everything; for further details, see the article mentioned above, How to test legacy software (that was not designed to be tested) when modernising.
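A characterization test for one such main use case could look like the following sketch. The invoice table and query are hypothetical stand-ins for a real legacy flow, and an in-memory SQLite database stands in for the shared production database; the point is to pin down today's observed behaviour before any refactoring.

```python
# Sketch of a characterization test for one main flow identified in the
# assessment. The "invoice" schema and query are hypothetical stand-ins for a
# real legacy flow; an in-memory SQLite database stands in for the shared DB.
import sqlite3
import unittest

def open_invoices(conn):
    """The legacy query under test: invoices not yet paid."""
    return conn.execute(
        "SELECT id, amount FROM invoice WHERE paid = 0 ORDER BY id"
    ).fetchall()

class OpenInvoicesCharacterization(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE invoice (id INTEGER, amount REAL, paid INTEGER)")
        self.conn.executemany(
            "INSERT INTO invoice VALUES (?, ?, ?)",
            [(1, 100.0, 1), (2, 250.0, 0), (3, 80.0, 0)],
        )

    def test_returns_only_unpaid_invoices(self):
        # Record the currently observed behaviour, not an ideal one.
        self.assertEqual(open_invoices(self.conn), [(2, 250.0), (3, 80.0)])

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(OpenInvoicesCharacterization)
)
```

Run against the real application and database, a suite of such tests becomes the safety net for every schema move that follows.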
The “Database per service” pattern has to be applied to decouple the applications.
When a reliable suite of tests is available for both applications 1 and 4, it's time for refactoring: database objects can be separated between the old and the new schema (let's suppose we move application 1's objects to the new schema), application 1 must be reconfigured to connect to the new schema, and the automated tests must be run on both applications to identify failures. If a failure happens, it means that an object is shared, and a strategy to resolve the conflict must be applied.
If there is no confidence about which tables belong to one application rather than another, the code must be analysed. The automated tests can also help at this stage: they can be run as many times as needed to understand whether a table was moved correctly, has to be put back in the original schema, or is involved in a conflict. The same applies to other database objects.
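The move-and-verify loop described above can be sketched as follows. Two in-memory SQLite databases stand in for the old and new schemas, and the table name and test callables are hypothetical; a real migration would use the DB engine's own schema-move facilities (e.g. ALTER TABLE ... SET SCHEMA on PostgreSQL) rather than copy and drop.

```python
# Sketch of the move-and-verify loop: copy a table from the old schema to the
# new one, run both applications' test suites, and roll the move back if
# anything fails. Two in-memory SQLite databases stand in for the two schemas.
import sqlite3

def move_table(old: sqlite3.Connection, new: sqlite3.Connection,
               table: str, tests: list) -> bool:
    """Attempt to move `table`; return True if all tests still pass."""
    create_sql, = old.execute(
        "SELECT sql FROM sqlite_master WHERE type='table' AND name=?", (table,)
    ).fetchone()
    new.execute(create_sql)
    rows = old.execute(f"SELECT * FROM {table}").fetchall()
    if rows:
        placeholders = ", ".join("?" * len(rows[0]))
        new.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
    old.execute(f"DROP TABLE {table}")

    if all(test() for test in tests):
        return True
    # A failing test means the object is still shared: put it back in the
    # original schema and flag the conflict for a dedicated strategy.
    old.execute(create_sql)
    if rows:
        old.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
    new.execute(f"DROP TABLE {table}")
    return False
```

Iterating this loop table by table, with the characterization suites of both applications as the pass/fail signal, turns the schema split into a sequence of small, reversible steps.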
When moving a database object from the original schema to the new one, several problems can arise:
The following diagram represents the improved scenario, with applications 1 and 4 separated, each with its own schema. Application 1 has a new component called “Application 1 API”: it will be used by application 4 to access the data that used to be in the shared schema.
We can suppose that application 4 needs to access data managed by application 1, and not vice versa. This is because, in the following steps, we need an application free of dependencies that can be moved to the cloud.
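A minimal sketch of such an “Application 1 API” component is shown below, using only the Python standard library. The endpoint path, the in-memory data, and the route layout are hypothetical; a real implementation would query application 1's schema and add authentication.

```python
# Minimal sketch of an "Application 1 API" component exposing read access to
# application 1's data over HTTP. The /customers/<id> route and the in-memory
# data are hypothetical; a real implementation would query the application 1
# schema and enforce authentication.
import json
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Stand-in for data owned by application 1.
CUSTOMERS = {"42": {"id": "42", "name": "Acme Ltd"}}

class App1ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route: GET /customers/<id>
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "customers" and parts[1] in CUSTOMERS:
            body = json.dumps(CUSTOMERS[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the example quiet
        pass

def serve_in_background() -> ThreadingHTTPServer:
    """Start the API on an ephemeral port; port 0 lets the OS choose."""
    server = ThreadingHTTPServer(("127.0.0.1", 0), App1ApiHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Application 4 would then call `GET /customers/42` over HTTP instead of reading the table directly, which is the essence of the “Database per service” decoupling.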
The goal is the re-platforming of the application stack, with “Application 1” moved to a cloud virtual machine and the DB schema moved to a cloud managed database service (e.g.: AWS RDS, AWS Aurora, Azure SQL Database, Azure Database for PostgreSQL).
In our new eBook, Crafting a Cloud Migration and Modernisation Strategy for the Public Sector, we explore other, better solutions for “Application 1” (e.g.: rewriting it for containers or for serverless).
It is also worth exploring our other blog articles on software modernisation and cloud migration.
The problem with the situation presented in the previous diagram is that more than one application accesses the database at the same time, in read and/or write mode.
The solution consists of these main steps:
The real solution has many more steps, which are omitted here so as not to burden the example with details beyond the specific problem of shared databases.
Again, the “Database per service” pattern has to be applied to decouple the applications. “Application 1 DB” must be accessed only by “Application 1”, and the other applications can’t have direct access.
Other applications can interact with “Application 1” data in different ways, for example:
In the section "Decoupling two applications that share the same database schema", we covered a list of possible database coupling mechanisms, offering different solutions to achieve decoupling.
The same problems and solutions must be applied to solve the direct access from Application 3 to the data of Application 1 and the dblink from Application 2 schema to Application 1 schema.
In this case, the database schema of Application 1 is not fully shared with applications 2 and 3, because these applications have their own schemas.
In this case too, assessing, documenting, implementing automated tests, and finding the right strategy must be done for every object that applications 2 and 3 use in the Application 1 schema, exactly as explained before.
Below is a diagram representing the applications decoupled by the API service created in the previous step. The solution consists of newly created API functions in the “Application 1 API” component, leaving the “Application 1” business logic untouched. Applications 2 and 3 must be modified to make HTTP API calls to the “Application 1 API” component.
The API component could be split into a set of microservices, but this is beyond the scope of this document.
During this phase, automated tests must be executed in order to avoid regression errors. The tests must be implemented following the information collected during the assessment phase.
After the application decoupling, the stack of “Application 1”, “Application 1 DB”, and “Application 1 API” can be moved to the cloud. The API component can be exposed to other applications through the Internet or a VPN. The APIs must be implemented and deployed with all the required security constraints (e.g.: HTTPS, VPN, authentication/authorisation), and the applications still on-premises must be configured and allowed to contact the “Application 1 API” component.
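On the on-premises side, a call to the cloud-hosted API could be built as in the sketch below. The URL and bearer token are hypothetical placeholders, and only the request construction is shown, since the endpoint does not exist here; the point is that the call goes over HTTPS with an explicit credential rather than through a shared database connection.

```python
# Sketch: an on-premises application calling the cloud-hosted
# "Application 1 API" over HTTPS with a bearer token. The base URL, path,
# and token are hypothetical placeholders.
import urllib.request

def build_api_request(base_url: str, path: str, token: str) -> urllib.request.Request:
    """Build an authenticated HTTPS GET request to the Application 1 API."""
    return urllib.request.Request(
        url=f"{base_url}{path}",
        headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
        method="GET",
    )

req = build_api_request("https://app1-api.example.gov.uk", "/customers/42", "TOKEN")
# The request would then be sent with urllib.request.urlopen(req).
```

In production, the token would come from a secrets store and the connection could additionally be restricted to a VPN or allow-listed source addresses.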
The App 1 DB schema could be migrated to a DB server in the cloud, ideally a managed one; if needed, a conversion of database objects between different DB engines (e.g.: from a commercial database server to an open-source one) must be performed.
The public sector is facing the challenge of legacy application modernisation, and this challenge is often made harder by practices like RDBMS-centric infrastructures.
To get rid of the burden of legacy infrastructure it is necessary to define a strategy and apply an iterative method. An experienced partner could help.
Why not download a copy of our new eBook, Crafting a Cloud Migration and Modernisation Strategy for the Public Sector, which focuses on cloud migration and modernisation strategies in the public sector? Download your copy here.