
Microservices for Legacy System Interoperability (part one)

August 30, 2018

While computing technology is evolving rapidly, the adoption of that technology within the healthcare industry often lags far behind other sectors. This reluctance to break with the status quo and embrace emerging capabilities is reducing opportunities to innovate within the healthcare industry.

 

It is well understood that technology refreshes can be costly, particularly for monolithic legacy systems like CHCS, AHLTA and VISTA. Although the DoD and the VA have both recently opted for wholesale replacement of their existing legacy EHRs, it can be argued that some mechanism for targeted legacy system interoperability will still be required to preserve the entirety of the longitudinal record of care (LROC) throughout the rollout period of the new EHRs. To accomplish this, a thoughtful adoption of select technologies with a demonstrated return on investment will be necessary, both to increase analytic productivity and to reduce the maintenance burden associated with the myriad legacy systems currently housing the LROC data. In this three-part blog post, we introduce the concept of microservices and their application to the unique challenges both the DoD and VA face in rolling out a new EHR while simultaneously preserving the longitudinal record of care, an undertaking that will require preserving existing data analysis pipelines at least until sufficient data has amassed in the new systems’ respective data warehouses. We believe that a thoughtfully crafted microservices approach is the most pragmatic solution to this challenge.

 

A microservices architecture can best be described as a framework for dividing complex systems into easily managed parts. In this case, the complex system is really the collection of legacy second-tier systems that have proliferated from the main CHCS, AHLTA and VISTA platforms. In a microservices architecture, each individual service is limited in functional scope, affording a higher measure of functional isolation and reliability to the collective solution while reducing the maintenance requirements of the underlying legacy systems. For the DoD and the VA, perhaps the most practical approach to implementing such a solution would be to treat the overlapping legacy system capability as a single monolithic application, even though the collection of existing systems clearly represents disparate architectures and infrastructures today. Treating the overall capability produced by the collection of legacy systems as a single monolith allows for the development and adoption of a strategy that splits the collective presentation layer from the business logic and data access layers along existing workflow boundaries. A typical enterprise application consists of at least three different types of components (a minimal code sketch of this layering follows the list):

 

  • Presentation layer – Components that handle HTTP requests and implement either a REST API or an HTML‑based web UI. In an application with a sophisticated user interface, the presentation tier is often a substantial body of code. For each legacy system, the components that support presentation‑layer functions can be identified and bundled; common functions can then be consolidated into Presentation Layer Services.

  • Business logic layer – Components that form the core of the application and implement the business rules. For each legacy system, the components that support logic‑layer functions can be identified and bundled; common functions can then be consolidated into Logic Layer Services.

  • Data‑access layer – Components that access infrastructure components such as databases and message brokers. For each legacy system, the components that support data‑layer functions can be identified and bundled; common functions can then be consolidated into Data Layer Services.

 
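To make this layering concrete, here is a minimal Python sketch of the three component types living in a single (monolithic) process. All module, table, and endpoint names are hypothetical illustrations, not references to any actual CHCS, AHLTA or VISTA code.

```python
# A minimal sketch of the three canonical layers in one monolithic process.
# All names (tables, routes, functions) are hypothetical illustrations.

import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)

# --- Data-access layer: the only code that touches the database ---
def fetch_encounters(patient_id: str) -> list[dict]:
    """Read encounter rows for a patient from the backing store."""
    with sqlite3.connect("lroc.db") as conn:
        conn.row_factory = sqlite3.Row
        rows = conn.execute(
            "SELECT encounter_id, encounter_date, clinic FROM encounters "
            "WHERE patient_id = ?", (patient_id,)
        ).fetchall()
    return [dict(r) for r in rows]

# --- Business logic layer: rules only, no HTTP and no SQL ---
def summarize_care_history(patient_id: str) -> dict:
    """Apply business rules (here: a trivial count) to raw encounter data."""
    encounters = fetch_encounters(patient_id)
    return {"patient_id": patient_id, "encounter_count": len(encounters)}

# --- Presentation layer: handles HTTP requests, exposes a REST API ---
@app.get("/patients/<patient_id>/summary")
def patient_summary(patient_id: str):
    return jsonify(summarize_care_history(patient_id))

if __name__ == "__main__":
    app.run(port=8080)
```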

There is usually a clean separation between the presentation logic on one side and the business and data‑access logic on the other. The monolith’s resulting business tier will consist of one or more facades that encapsulate business‑logic components. This creates a natural seam along which we can split the monolith into two smaller applications: one containing the presentation layer, the other containing the business and data‑access logic. After the split, the presentation application makes remote calls to the business logic application. The following diagram shows the architecture before and after such refactoring.

[Diagram: the collective legacy monolith before and after the split, with the presentation application making remote calls to the business/data‑access application]
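As a rough illustration of the post-split arrangement, the sketch below shows a presentation application forwarding requests to a business‑tier facade over HTTP. The service URL and endpoint paths are assumptions for illustration only.

```python
# After the split: the presentation application no longer calls the business
# logic in-process; it makes remote calls to the business-tier facade.
# The hostname and endpoint paths below are hypothetical.

import requests
from flask import Flask, jsonify

app = Flask(__name__)
BUSINESS_TIER_URL = "http://business-tier.internal:9090"  # assumed address

@app.get("/patients/<patient_id>/summary")
def patient_summary(patient_id: str):
    # Remote call across the presentation/business seam.
    resp = requests.get(
        f"{BUSINESS_TIER_URL}/api/patients/{patient_id}/summary", timeout=5
    )
    resp.raise_for_status()
    return jsonify(resp.json())

if __name__ == "__main__":
    app.run(port=8080)
```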

Treating the collection of legacy applications as a single monolith and then splitting it into smaller logical components serves two primary purposes. First, it enables the organization to develop, deploy, and scale the resulting components independently of one another; in particular, it allows presentation‑layer developers to iterate rapidly on user interface components without risking disruption to other business functionality. Second, it exposes a remote API that can eventually be called by the new microservices architecture. At this point, the strategy involves turning existing modules within the monolith components into standalone microservices. By taking this structured and phased approach, each time the organization extracts a module and transforms it into a service, the monolith shrinks. Once enough modules have been converted, the monolith either disappears entirely or becomes small enough to be treated as just another service.
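One common way to realize this phased extraction is a routing facade (often called the strangler pattern): requests for already‑extracted modules are forwarded to the new service, while all other traffic continues to reach the monolith. A minimal sketch, with hypothetical hostnames and route prefixes:

```python
# A minimal "strangler" routing sketch: traffic for extracted modules goes to
# the new microservice; everything else still reaches the legacy monolith.
# Hostnames and route prefixes are hypothetical.

import requests
from flask import Flask, Response, request

app = Flask(__name__)

MONOLITH = "http://legacy-monolith.internal:8000"
EXTRACTED = {
    # route prefix -> new standalone service
    "/api/reports": "http://reporting-service.internal:9000",
}

@app.route("/<path:path>", methods=["GET", "POST"])
def route(path: str):
    target = MONOLITH
    for prefix, service in EXTRACTED.items():
        if ("/" + path).startswith(prefix):
            target = service
            break
    upstream = requests.request(
        request.method, f"{target}/{path}",
        params=request.args, data=request.get_data(), timeout=10,
    )
    return Response(upstream.content, status=upstream.status_code)

if __name__ == "__main__":
    app.run(port=8080)
```

Each time a module is extracted and its prefix is added to the routing table, the monolith serves less traffic, which is exactly the shrinking effect described above.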

 

It is expected that large, complex monolithic applications such as MDR, COHORT and CHAS will be composed of tens or hundreds of separate modules, many of which will be prime candidates for eventual extraction. Figuring out which modules to convert first will be the primary challenge in this phase. The recommended approach is to start with a few modules that are less complicated to extract and then build on that initial conversion success, giving the team experience with microservices deployment in general and with the extraction process in particular. Converting a legacy system module into a service is expected to be a time-consuming endeavor if done correctly. As such, the organization will need to rank target modules by expected conversion benefit, understanding that it is often more beneficial to extract frequently changed or modified modules first. Once these frequently modified modules are converted into services, they can be developed and deployed independently of the monolith, a benefit expected to significantly accelerate development efforts.
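As a rough heuristic for that ranking, change frequency can be estimated directly from version control history. The sketch below counts commits touching each top-level module directory over the past year; the repository layout and the scoring rule are illustrative assumptions, not a prescription.

```python
# Sketch: rank candidate modules by recent change frequency, on the premise
# that frequently modified modules yield the most benefit when extracted.
# Assumes one top-level directory per module; adjust for real repo layouts.

import subprocess
from collections import Counter

def changes_per_module(repo_path: str, since: str = "1 year ago") -> Counter:
    """Count commits touching each top-level module directory."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}",
         "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts: Counter = Counter()
    for line in out.splitlines():
        if "/" in line:
            counts[line.split("/", 1)[0]] += 1  # bucket by top-level dir
    return counts

if __name__ == "__main__":
    for module, n in changes_per_module(".").most_common(10):
        print(f"{module}: {n} changes")
```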

 

It is also beneficial to extract modules whose resource requirements differ significantly from those of the rest of the monolith. It is useful, for example, to turn a module that has an in‑memory database into a service, which can then be deployed on hosts with large amounts of memory. Similarly, it can be worthwhile to extract modules that implement computationally expensive algorithms, since the resulting service can be deployed on hosts with many CPUs. By turning modules with particular resource requirements into services, you can make your application much easier to scale.

 

When figuring out which modules to extract, it is useful to look for existing coarse‑grained boundaries (a.k.a. seams), which make it easier, and significantly less expensive, to turn modules into services. An example of such a boundary is a module that communicates with the rest of the application only via asynchronous messages; it can be relatively cheap and easy to turn that module into a microservice.
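For illustration, the sketch below shows such a message-driven module running as a standalone consumer, here assuming a RabbitMQ broker accessed via the pika client; the queue name and event format are hypothetical.

```python
# Sketch of a module that talks to the rest of the application only through
# asynchronous messages, lifted out as a standalone service. The broker
# choice (RabbitMQ via pika), queue name, and event shape are assumptions.

import json
import pika

def handle_message(ch, method, properties, body):
    event = json.loads(body)
    # ... the module's existing processing logic would run here, unchanged ...
    print(f"processed event {event.get('id')}")

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="encounter-events", durable=True)
channel.basic_consume(queue="encounter-events",
                      on_message_callback=handle_message, auto_ack=True)
channel.start_consuming()
```

Because the module’s only contract with the monolith is the message queue, it can be redeployed as its own process without touching the monolith’s code.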

 

 
