Microservices for Legacy System Interoperability (part two)

September 17, 2018

It is debatable whether one would consider AHLTA a successful EHR. We all love to hate the venerable system, but for all practical purposes it has largely done its job. Successful applications very often have a habit of growing over time, and AHLTA is certainly no different in that regard. From a sustainment perspective, however, most long-lived, business-critical systems eventually become massive, unwieldy monoliths, and AHLTA is no different in that regard either.

 

Size matters...

 

If you’re operating an agile shop to manage your own monolith, each maintenance sprint will likely introduce a few more user stories, which, of course, means adding a few more lines of code. Even if you’re not developing under an agile methodology, after a few years even the most basic applications will have feature-crept into giant, unmanageable monoliths. In fact, if you’re managing a behemoth like AHLTA, you already recognize that even the most judicious attempts at agile development and continuous delivery are ultimately ineffective. Perhaps the most obvious issue is that these applications have grown overwhelmingly complex over a number of years, with each individual instantiation differing ever so slightly, such that over time they become unrecognizable to one another. The applications become simply too large and too complex for any single developer (or even any single development team) to fully comprehend. As a result, holistically fixing bugs and implementing new features becomes increasingly difficult and time consuming. This creates a scenario some developers have coined “the infinite do-loop of sadness,” whereby a codebase that is difficult or impossible to understand invites changes that are inherently just as difficult or impossible to understand. You ultimately end up with an incomprehensible monolith. You end up with AHLTA.

 

Another problem with these large and complex monolithic applications is that their sheer size and complexity become a barrier to continuous integration and deployment. Modern PaaS and SaaS applications are designed to push changes into production multiple times a day. This is extremely difficult, if not impossible, to achieve with a complex monolith, since you must typically redeploy the entire application in order to update any one part of it. And because it is nearly impossible for one developer (or even one development team) to fully comprehend every aspect of the monolith, the impact of any introduced change is rarely well understood either. This leads to extensive manual testing, which is often ineffective due to the aforementioned nuances among the various instantiations.

 

To be fair, the military’s current electronic health record isn’t truly a single monolith, but rather a family of similar systems loosely coupled together to perform a singular monolithic function. Many of these components were tacked onto the AHLTA periphery over time, either to wholly address a missing capability or to enhance or extend an existing one. These include CDR, HAIMS, BHIE, BMS, CHDR, PDTS, and DHMS, each arguably a monolithic application in its own right, but each contributing to the effectiveness of the parent monolith, AHLTA. This makes scaling difficult because the different components and modules have conflicting and often competing resource requirements, are written in different development languages, and are deployed on different infrastructures.

 

AHLTA has performed its intended purpose, and the efficiency and effectiveness of that performance notwithstanding, it remains a business-critical application today. Unfortunately, it has grown into a massive monolith that very few developers fully understand. Although the majority of its functionality is being superseded by the MHS Genesis rollout, it currently houses the longitudinal record of care. Because the level of effort to normalize the legacy data was deemed far too great, leadership determined that porting “old” data into the new EHR system was a bridge too far. As such, the legacy system capability will need to be maintained for the foreseeable future. This creates a unique Catch-22: the cost of the MHS Genesis acquisition was intended to be offset by the sunsetting of those very same legacy systems. Achieving both will require a more effective strategy. In the most basic sense, access to the longitudinal record of care must be preserved more efficiently. That is challenging because AHLTA is written in an obsolete language on unproductive technology, both of which make hiring and retaining talented developers difficult. AHLTA in its current state is also difficult to scale, and its reliability is questionable; these are additional factors that make the cost-conscious practices of agile development and CI/CD nearly impossible.

 

So what can we do about it?

 

Organizations like Amazon, eBay, and Netflix have all solved similar scalability problems by adopting what is now known as the Microservices Architecture pattern. While these commercial giants are not retooling decades-old, business-critical systems, they are avoiding having to do so in the future by purposefully steering clear of the single monolithic application paradigm, opting instead for a microservices concept that splits what would previously have comprised a single application into a set of smaller, interconnected services. A service typically implements a set of distinct features or functionality, such as order management or customer management. Each microservice then becomes a mini-application with its own hexagonal architecture, consisting of business logic along with various adapters. Some microservices expose an API that is consumed by other microservices or by the application’s UI clients. Others might implement a web UI. At runtime, each instance is often instantiated as a cloud virtual machine (VM) or a Docker container.
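
To make the hexagonal idea concrete, here is a minimal sketch of one such mini-application, assuming a Java/Spring Boot stack; the “encounter” domain and every class name below are hypothetical illustrations, not AHLTA internals:

```java
import java.util.Optional;

import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Domain object owned entirely by this service (hypothetical).
record Encounter(String id, String summary) {}

// Outbound port: how the core logic reaches persistence; the concrete
// adapter (JPA, Mongo, etc.) is plugged in elsewhere.
interface EncounterRepository {
    Optional<Encounter> findById(String id);
}

// Inbound port: the boundary the business logic exposes, transport-agnostic.
interface EncounterQueryPort {
    Optional<Encounter> findById(String id);
}

// Core business logic: knows nothing about HTTP, queues, or databases.
@Service
class EncounterQueryService implements EncounterQueryPort {
    private final EncounterRepository repository;

    EncounterQueryService(EncounterRepository repository) {
        this.repository = repository;
    }

    @Override
    public Optional<Encounter> findById(String id) {
        return repository.findById(id);
    }
}

// Inbound REST adapter: translates HTTP requests into calls on the port.
@RestController
@RequestMapping("/encounters")
class EncounterController {
    private final EncounterQueryPort encounters;

    EncounterController(EncounterQueryPort encounters) {
        this.encounters = encounters;
    }

    @GetMapping("/{id}")
    ResponseEntity<Encounter> get(@PathVariable("id") String id) {
        return encounters.findById(id)
                .map(ResponseEntity::ok)
                .orElse(ResponseEntity.notFound().build());
    }
}
```

A messaging adapter or a second client-facing adapter can later be bolted onto the same core without touching the business logic, which is precisely the appeal of the hexagonal layout.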

Each functional area of the ‘application’ is implemented by its own microservice. The UI is then split into a set of smaller, simpler applications, which makes it much easier to deploy distinct experiences for specific users, devices, or specialized future use cases. Each backend service exposes a REST API, and most services can be designed to also consume APIs provided by other services. The UI services invoke the other services in order to render web pages. Services might also use asynchronous, message-based communication. Some REST APIs can also be exposed to the variety of mobile apps used by the various communities of interest. The apps, however, won’t have any direct access to the backend services. Instead, communication is mediated by an API Gateway, which is responsible for scaling tasks such as load balancing, caching, access control, API metering, and monitoring.
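
As a rough sketch of that mediation (assuming Spring Cloud Gateway; the route IDs and service hostnames below are hypothetical), the gateway is the single entry point that forwards client traffic to the appropriate backend service:

```java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRoutes {

    // Web and mobile clients only ever see the gateway; the backend
    // services stay unreachable except through these routes.
    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                .route("encounters", r -> r.path("/api/encounters/**")
                        .filters(f -> f.stripPrefix(1))   // drop the /api prefix
                        .uri("http://encounter-service:8080"))
                .route("patients", r -> r.path("/api/patients/**")
                        .filters(f -> f.stripPrefix(1))
                        .uri("http://patient-service:8080"))
                .build();
    }
}
```

Cross-cutting concerns such as load balancing, caching, access control, and metering then attach as filters at this single choke point rather than being re-implemented in every service.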

 

The Microservices Architecture pattern differs markedly from a traditional n-tier software development approach, particularly with respect to the relationship between the application layer and the data store. Rather than sharing a single RDBMS database schema with other services, each service has its own schema (or data mart, as the case may be). To be clear, this pattern is admittedly at odds with the idea of an enterprise-wide data model, and it will admittedly result in some degree of data redundancy and duplication. However, having a database schema (or corresponding data mart) per service is absolutely essential to taking full advantage of the many benefits inherent to the microservices architectural pattern: the single schema, single service model ensures loose coupling and late binding. The following figure depicts a high-level data tier architecture for a sample EHR application. Each of the services in the illustration has its own data mart. To be more accurate, the illustration should depict at least one of the three services having its own database; this would more effectively illustrate that a service can use whatever type of database best suits its needs, fully exploiting the polyglot persistence architectural pattern.

[Figure: high-level data tier architecture for a sample EHR application, with a dedicated data mart per service]
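
As a sketch of what schema-per-service and polyglot persistence look like in code, assuming Spring Data and two hypothetical services (these are illustrations, not the actual EHR components):

```java
// encounter-service/src/main/java/ehr/encounter/Encounter.java
// Highly structured, transactional data, so this service keeps its own
// relational schema behind Spring Data JPA.
package ehr.encounter;

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;

@Entity
class Encounter {
    @Id @GeneratedValue Long id;
    String patientId;
    String summary;
}

// Only this service compiles against this repository and schema.
interface EncounterRepository extends JpaRepository<Encounter, Long> {}
```

```java
// imaging-service/src/main/java/ehr/imaging/ImagingStudy.java
// Document-shaped study metadata, so this service opts for MongoDB instead:
// polyglot persistence in practice.
package ehr.imaging;

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.repository.MongoRepository;

@Document("studies")
class ImagingStudy {
    @Id String id;
    String modality;
    String reportText;
}

interface ImagingStudyRepository extends MongoRepository<ImagingStudy, String> {}
```

Neither service can see the other’s store; if the imaging team later needs a different database, nothing outside their service changes.
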
On the surface, the Microservices Architecture pattern is very similar to a Service Oriented Architecture (SOA). Both architectures consist of a set of services. Microservices, however, are much lighter. The Microservices Architecture pattern is essentially SOA without all the baggage: no ESB, no Web Service specifications, and so on. Microservices implement lightweight protocols such as REST APIs, and for all practical purposes, properly deployed microservices should be the final death knell for the much-hyped ESB technology the big software vendors shoved onto the market prematurely a decade ago. Instead, microservices provide ESB-like messaging functionality natively. The Microservices Architecture pattern also typically rejects other SOA conventions, such as the concept of a canonical data access schema.

 

The Benefits:

 

The Microservices Architecture pattern has a number of very important benefits. Perhaps the most important is that it very effectively handles the problem of monolithic design complexity by decomposing such systems into a succinct set of purposeful services. While overall functionality remains the same, the application is broken up into more manageable chunks. Each chunk (or service) has a well-defined boundary in the form of a message-driven API. The Microservices Architecture pattern achieves a level of modularity that is practically impossible in a monolith. As a result, individual services are much faster to develop and much easier to sustain.

 

Another key benefit is that this architectural pattern enables distributed application development. Development teams can work independently of each other, focusing only on the services in which they have specific domain expertise. They are free to choose whatever technologies or development languages make sense for their domain, provided that the resulting service honors the basic API contract. While the reality is that most organizations will want to avoid the chaos associated with standards-free environments, such a degree of freedom means that development teams are no longer forced to use the obsolete technologies that existed when the original monolith was conceived. When developing a new service, the team has the option of employing current technology, and because the individual services are relatively small by comparison, it often makes more economic sense to simply rewrite an old service using the more modern methodology.

 

Additionally, the Microservices Architecture pattern enables each microservice to be deployed independently of the others. Development teams working on their domain-specific component never have to worry about coordinating the deployment of changes that are local to their service. Local service changes can be deployed as soon as they have been tested, and the presentation-layer team can test and rapidly iterate on UI changes. The Microservices Architecture pattern makes continuous integration and deployment possible for legacy systems where the previous sustainment level of effort may have made such endeavors cost-prohibitive.

 

Finally, each service deployed under the Microservices Architecture pattern is designed to scale independently. In an elastic cloud environment, MSO system administrators can deploy only the number of instances of each service necessary to adequately satisfy its unique capacity and availability constraints, both of which can be expected to vary by service domain. For on-premise deployments, technicians can utilize the hardware that best matches a service’s independent resource requirements.

 

The Risks:

 

Microservices are certainly not a magic fix for intrinsically problematic business systems, but they can be a good solution if you have the right mix of environmental factors. Every derived microservices solution, however, comes with its own set of problems and implementation challenges. It has been our experience that the overwhelming majority of a microservices refactoring effort is focused on the architecture of the code artifacts, which is understandable. But enterprise applications are useless without the underlying data layer components, and the critical one schema, one service design pattern inherent to an effective microservices architecture can be a difficult pill for many organizations to swallow.

 

Maintaining state becomes a constant challenge for inexperienced developers, and taking the easy path leads to what can best be described as data taffy: an anti-pattern that manifests when all of the microservices have full access to all of the objects in the data layer. When a development team needs to accommodate a complex data ingestion scenario, it seems logical to have all of the services simply call what they need directly from the data layer. That creates problems, however, when an individual domain needs to scale but the others do not. Data rarely grows uniformly across all domains, and it is difficult to predict which will grow the fastest. The end result is data that is deeply entangled with the business logic and difficult to pull apart; it becomes data taffy. In such a scenario, the organization will typically have complex, multi-dimensional queries, stored procedures, and object relationships that all need refactoring, while each domain has its own understanding of how its respective area is supposed to operate. This almost always leads to data contamination, performance problems, and an environment in which it is nearly impossible to establish effective metadata management or to ensure data provenance.

To address the data taffy problem, data assets must be isolated to specific domains and made accessible only via the microservices designed to serve them. The data can start in the same data-layer schema, but well-defined schemas and access policies must limit access to a single service. This enables the organization to change data structures, create new partitions, or move to an entirely new data source without impacting other services.
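
One lightweight way to enforce that isolation in code, assuming Java and Spring Data and reusing the hypothetical Encounter entity from the schema-per-service sketch above, is to keep the repository package-private so the owning service’s API is the only public surface; other domains must call the service, never the tables:

```java
package ehr.encounter;

import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Service;

// Package-private: no other module can compile against this repository,
// so nothing outside this domain can reach the underlying tables.
interface EncounterRepository extends JpaRepository<Encounter, Long> {
    List<Encounter> findByPatientId(String patientId);  // derived query
}

// The one public surface other domains are allowed to use.
@Service
public class EncounterService {

    // Public DTO: the shape this domain exposes, decoupled from its tables.
    public record EncounterSummary(String patientId, String summary) {}

    private final EncounterRepository repository;

    EncounterService(EncounterRepository repository) {
        this.repository = repository;
    }

    public List<EncounterSummary> encountersForPatient(String patientId) {
        return repository.findByPatientId(patientId).stream()
                .map(e -> new EncounterSummary(e.patientId, e.summary))
                .toList();
    }
}
```

Because every read and write funnels through EncounterService, the team can re-partition tables, add caching, or swap the data source entirely without any other domain noticing.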

 

Again, the one schema, one service pattern means that distributing data between different microservices creates integration challenges. When discussing a potential microservices option, most organizations will cite a scalability requirement, but the reality is that scalability isn’t likely to be the driving factor. We have all heard great things about the microservices architectural pattern and about the successes commercial giants like Amazon and Netflix have had implementing it. However, we can’t all be Amazon or Netflix. Most organizations considering a microservices implementation are not dealing with the same scalability issues Amazon and Netflix faced when they chose to adopt the architectural pattern. In most cases, the implementation decision is more about improving sustainability and less about addressing scalability. It has also been suggested that DevOps initiatives share many of the same goals, so why embark on a full microservices effort?

 

Typically, the answer is that your organization’s codebase has grown so large that it has become prohibitively complicated to make changes without introducing additional points of failure. It is likely very difficult to coordinate releases between the various stakeholders in a huge, tightly coupled monolith. Remember that with a microservices pattern, you are essentially trying to segregate individual pieces of the monolith into smaller, well-defined, cohesive, and loosely coupled artifacts. Unfortunately, in many cases this is easier said than done. There are no silver bullets, and just like any other promising technical approach, microservices have their drawbacks. The name itself creates a paradigm under which developers place a greater emphasis on the literal size of the service rather than on sufficiently decomposing the monolith to facilitate any semblance of an agile development methodology. Microservices doesn’t mean a service has to remain under a certain line count; it just means the individual services represent smaller subsets of the monolith.

 

Perhaps the greatest drawback is the inherent complexity a microservices pattern introduces. By design, a microservices architecture is a distributed system. Distributed systems are considered more complicated because developers have to handle concerns like service latency, inter-process communication, and partial failures. While this is all pretty standard fare for any seasoned programmer, it is certainly more complicated than dealing with a typical monolith, where all procedure calls are local.
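
For instance, even a simple remote read now has to budget for latency and partial failure, something a local method call in a monolith never worries about. Below is a minimal sketch using the JDK’s built-in HTTP client; the service hostname and the fallback behavior are hypothetical choices:

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class PatientSummaryClient {

    private final HttpClient client = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2))   // don't wait forever to connect
            .build();

    public String fetchSummary(String patientId) {
        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("http://patient-service:8080/patients/" + patientId))
                .timeout(Duration.ofSeconds(2))      // bound the whole call
                .GET()
                .build();
        try {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            // A non-200 from a peer is a partial failure, not an exception.
            return response.statusCode() == 200 ? response.body() : fallback(patientId);
        } catch (IOException e) {
            return fallback(patientId);              // degrade gracefully, don't cascade
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();      // preserve interrupt status
            return fallback(patientId);
        }
    }

    // Hypothetical degraded response; a real service might serve cached data.
    private String fallback(String patientId) {
        return "{\"patientId\":\"" + patientId + "\",\"summary\":\"unavailable\"}";
    }
}
```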

Obviously, the partitioned database architecture of the microservices pattern presents additional challenges. Transactions that need to update multiple entities are fairly common, and while they are trivial to implement in a monolithic application, under the microservices approach you will need to update multiple databases owned by different services. Distributed transactions are usually not an option, because they are typically not supported by scalable NoSQL databases and messaging brokers. Testing a microservices application is also considerably more complex. For example, under a Spring Boot framework it is relatively simple to write a test class that starts up a monolithic web application and exercises its REST API. A similar test class for a microservice, however, would need to launch the parent service and any dependent services it requires, or at least configure stubs for them. Once again, this is not insurmountable, but it is important not to underestimate the complexity of the pattern.
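
As a hedged sketch of the stub approach, assuming the service under test makes its outbound calls through an injected Spring RestTemplate bean (the endpoints and payloads below are hypothetical):

```java
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.http.MediaType;
import org.springframework.test.web.client.MockRestServiceServer;
import org.springframework.web.client.RestTemplate;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.springframework.test.web.client.match.MockRestRequestMatchers.requestTo;
import static org.springframework.test.web.client.response.MockRestResponseCreators.withSuccess;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class EncounterApiTest {

    @Autowired
    TestRestTemplate api;        // drives this service's own REST API

    @Autowired
    RestTemplate downstream;     // the bean this service uses to call its dependency

    @Test
    void servesEncountersWithPatientServiceStubbed() {
        // Stub the dependent patient service instead of launching it.
        MockRestServiceServer stub = MockRestServiceServer.bindTo(downstream).build();
        stub.expect(requestTo("http://patient-service:8080/patients/123"))
            .andRespond(withSuccess("{\"id\":\"123\"}", MediaType.APPLICATION_JSON));

        var response = api.getForEntity("/encounters/by-patient/123", String.class);

        assertEquals(200, response.getStatusCode().value());
        stub.verify();           // confirms the stubbed call was actually made
    }
}
```

Every dependent service adds another stub to configure and keep honest, which is exactly the extra test complexity the monolith never had.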

 

Additionally, implementing changes that span multiple services requires a greater degree of coordination, so you will need to carefully plan the rollout of changes to each of your services. An off-the-shelf platform-as-a-service capability such as Cloud Foundry can give developers an easier method of deploying and managing microservices. PaaS tools can insulate your team from concerns such as provisioning and configuring information technology resources, and the PaaS can also handle a great deal of the security compliance scripting. However, it is important to note that an off-the-shelf PaaS is no more of a silver bullet than the microservices pattern itself. In fact, it may prove more advantageous to automate the deployment of microservices by essentially building your own PaaS with a clustering solution like Kubernetes combined with a container technology such as Docker. This approach has proven successful in other Defense Health Agency initiatives, specifically the mSTAX mobility framework, a microservices-based mobile application development platform deployed for the agency’s Web and Mobile Technology Program Management Office in 2018.

 

Building complex applications is by its very nature a difficult undertaking, and the monolithic pattern only makes sense for simple, lightweight applications. The Microservices Architecture pattern is a much better choice for complex, ever-evolving legacy applications, despite the drawbacks and implementation challenges. In later posts, we will outline strategies for refactoring a monolithic application such as AHLTA into microservices.

 

 
