EHR Platform

From EC2 monolith to container-ready architecture

A monolithic application running across dozens of EC2 instances doesn't fail all at once - it gradually slows you down. Deployments get longer, releases get riskier, and every change to one part of the system threatens the stability of everything else. How do you modernize a production monolith that handles regulated healthcare workloads without replacing it from scratch?


The cloud-ready foundation a growing platform actually needs.

The EHR platform, serving thousands of private medical practices, had grown its infrastructure organically over the years. By the time we engaged, the application ran across 50-60 EC2 instances, deployed via SaltStack, with a Terraform codebase managed as a single monolithic state. Releases took hours. Every deployment carried a blast-radius risk that spanned the entire platform. Following an acquisition, modernization was no longer optional: the platform needed to be ready for containerization, autoscaling, and the operational standards of an enterprise health tech group. The question wasn't whether to modernize, but how to do it without breaking a system that thousands of medical practices depended on every day.

Quick facts

EHR Platform

Private medical practices in the USA

Cloud-based EMR/EHR platform for private medical practices in the USA, managing PHI data in compliance with HIPAA across scheduling, charting, billing, and telehealth services.

Hours → Minutes

Deployment speed

By restructuring deployment tooling and containerizing the application layer, we reduced release cycles from multi-hour operations to minutes, cutting risk per release and giving engineering teams the confidence to ship more frequently.

SaltStack → Docker + EKS

Replacing instance-level SaltStack provisioning with containerized, Kubernetes-ready deployments eliminated the rebuild logic that made every release slow and fragile. The platform gained a deployment model built for predictability and scale.

"We went from dreading deployments to treating them as a routine operation. The architectural changes ITSyndicate delivered changed how our entire engineering team thinks about releasing software."

Michael Torres

CTO, US Healthcare EHR Platform

What we did for the EHR Platform

Preparing the monolith for containerization

A monolithic application deployed via SaltStack across 50-60 EC2 instances carries years of accumulated environment-specific assumptions: undocumented dependency chains, hardcoded configuration, and integration points that exist only in institutional memory. Before a single Docker image could be built, we needed to surface everything. The objective wasn't speed; it was repeatability: identical, predictable builds across development, staging, and production environments.

  1. SaltStack audit and dependency mapping: We audited the existing SaltStack deployment model end-to-end, identifying every environment variable, external integration, and dependency chain that the application relied on at runtime. This surfacing work is unglamorous but essential; skipping it is the primary reason containerization projects produce images that work in development and break in production. The audit produced a complete inventory of what needed to be externalized from the application before containerization could begin.
  2. Docker image standardization and CI/CD integration: With the dependency map in hand, we built standardized Docker images for the application runtime: base images hardened, versions pinned, and all environment-specific configuration extracted and injected at runtime rather than baked in. The container build process was defined directly in the CI/CD pipeline, making every build reproducible and auditable. Any engineer on the team could trigger a build and receive an identical artifact, regardless of their local environment.
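To make the config-externalization step concrete, a minimal fail-fast check at container startup might look like the sketch below. The variable names are hypothetical stand-ins; in the real project, the inventory of required settings came out of the SaltStack audit.

```python
import os

# Hypothetical required settings; the real list came from the audit inventory.
REQUIRED_VARS = ["DATABASE_URL", "BILLING_API_ENDPOINT", "SESSION_SECRET_ARN"]

def validate_runtime_config(env=os.environ):
    """Fail fast at container startup if any externalized setting is missing,
    rather than discovering the gap mid-request in production."""
    missing = [name for name in REQUIRED_VARS if not env.get(name)]
    if missing:
        raise RuntimeError(f"missing required configuration: {', '.join(missing)}")
    return {name: env[name] for name in REQUIRED_VARS}
```

Called from the container entrypoint, a check like this turns a configuration drift problem into an immediate, visible crash-loop instead of a silent runtime failure.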

Building a container-ready infrastructure

Getting the application into a container solves half the problem. The other half is ensuring the infrastructure can receive, run, and update containers predictably without recreating the manual provisioning overhead that made EC2 deployments slow in the first place. The goal here was to replace the instance-level thinking that SaltStack enforced with a deployment model built around containers from the ground up.

  1. Kubernetes-ready deployment configuration and EKS foundation: We designed Kubernetes deployment manifests and prepared the EKS cluster foundation, including VPC layout, IAM roles, node group configuration, and networking, aligned with AWS EKS best practices. A rolling deployment strategy was configured from the start, meaning new releases updated containers incrementally rather than triggering a full platform restart. Rollback, which previously required manual SaltStack intervention, became a single-command operation. The infrastructure no longer needed to be rebuilt with each deployment; containers were deployed and updated predictably on a stable cluster foundation.
  2. CI/CD pipeline integration for container delivery: Container builds were integrated into the existing CI pipeline, creating a continuous path from a merged pull request to a running container in the target environment. The rebuild logic that SaltStack had previously handled at the instance level (package installation, service configuration, and environment setup) was eliminated entirely. What replaced it was a pipeline that pulled a pre-built, validated image and deployed it. Deployment times collapsed as a direct result, and the blast radius of any given release shrank to the containers being updated rather than the entire EC2 fleet.
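The manifests themselves aren't reproduced here, but the incremental-update behavior described above follows arithmetic worth spelling out: Kubernetes rounds `maxSurge` up and `maxUnavailable` down when they are given as percentages, and falls back to allowing one unavailable pod if both round to zero. A small model of that rule, under those assumptions:

```python
import math

def rolling_update_limits(replicas, max_surge_pct=25, max_unavailable_pct=25):
    """Model Kubernetes rolling-update math for percentage values:
    surge rounds up, unavailable rounds down."""
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    # The controller never lets both limits resolve to zero; it falls back
    # to permitting one unavailable pod so the rollout can make progress.
    if surge == 0 and unavailable == 0:
        unavailable = 1
    return surge, unavailable
```

With the default 25%/25% settings on a 10-replica deployment, at most 3 extra pods are created and at most 2 are taken down at a time, which is why a release updates the fleet in waves rather than all at once.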

Platform containerization: FAQ

What does it mean for the platform to be container-ready?

It means the application can be packaged, configured, and deployed as a container without environment-specific manual steps.

A container-ready application has its configuration externalized from the codebase, its dependencies explicitly defined, and its build process automated and reproducible. For this client, reaching that state required restructuring the way environment variables were managed, secrets were accessed, and the deployment pipeline interacted with the application before a single Kubernetes manifest was written.

Why audit the SaltStack setup before containerizing?

Because SaltStack configuration encodes years of environment-specific decisions that will break containerization if not surfaced first.

Instance-level provisioning tools like SaltStack accumulate implicit dependencies: packages installed in a specific order, services configured against local paths, and integration credentials set as system variables. Containerization assumes none of these exist.

The audit we conducted for this client identified every assumption the application made about its runtime environment and produced the inventory needed to externalize them cleanly into Docker and AWS Secrets Manager.
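As a sketch of that externalization pattern: a JSON key/value secret fetched at runtime can be merged into the process environment without the credentials ever touching the image. In production the payload would come from boto3's `get_secret_value` call (which returns it in the `SecretString` field); here it is inlined, and all key names are illustrative.

```python
import json

def secret_to_env(secret_string, env):
    """Merge a JSON key/value secret payload (as returned in the SecretString
    field of a Secrets Manager response) into a process environment mapping.
    Existing variables are left untouched."""
    payload = json.loads(secret_string)
    for key, value in payload.items():
        env.setdefault(key.upper(), str(value))
    return env
```

Because the merge happens at container startup, rotating a credential in Secrets Manager only requires restarting pods, not rebuilding or redeploying images.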

How does containerization shorten deployments?

By eliminating the manual coordination steps that accumulate between the artifact and production.

For this client, deployment time was dominated not by build time but by sequential SaltStack runs across dozens of instances, manual validation steps, and the rollback procedures that teams kept ready because they didn't trust the process. Docker packaging made the artifact consistent and environment-agnostic. The CI/CD pipeline integration removed the manual steps.

Together, these changes collapsed the deployment window without touching the application code.
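To see why the window collapses, a deliberately simplified model helps (the timings below are illustrative, not the client's measured numbers): sequential per-instance provisioning scales with fleet size, while a rolling container update scales only with the number of batches.

```python
import math

def sequential_window(instances, minutes_per_run):
    """SaltStack-style: one provisioning run per instance, in sequence."""
    return instances * minutes_per_run

def rolling_window(replicas, batch_size, minutes_per_batch):
    """Container-style: a pre-built image rolled out in parallel batches."""
    return math.ceil(replicas / batch_size) * minutes_per_batch
```

With hypothetical numbers, 60 instances at 4 minutes per sequential run is a 240-minute window, while the same fleet updated 15 containers at a time in 2-minute batches finishes in 8 minutes; the orders of magnitude, not the exact figures, are the point.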

How does containerization affect security and compliance?

Containerization changes the attack surface, and that change needs to be addressed explicitly.

Container images must be scanned for vulnerabilities, base images must be version-pinned and hardened, and secrets must never be baked into images. For this client, we integrated image scanning into the CI/CD pipeline, sourced all secrets from AWS Secrets Manager at runtime, and ensured container execution policies aligned with the platform's existing HIPAA controls.

The audit trail produced by the new deployment pipeline also strengthened the platform's change management documentation, a HIPAA requirement that the previous SaltStack-based process handled inconsistently.
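As one small example of the "never bake secrets into images" control, a pipeline can lint Dockerfiles for obvious leak patterns before the build runs. This is a sketch only, with illustrative patterns; it is not a substitute for a dedicated image scanner in CI.

```python
import re

# Illustrative leak patterns: credentials set via ENV/ARG, or an .env file
# copied into the image. A real pipeline would use a dedicated scanner.
LEAK_PATTERNS = [
    re.compile(r"^\s*(ENV|ARG)\s+\S*(PASSWORD|SECRET|TOKEN)\S*\s*=", re.IGNORECASE),
    re.compile(r"^\s*COPY\s+.*\.env\b", re.IGNORECASE),
]

def find_baked_secrets(dockerfile_text):
    """Flag Dockerfile lines that appear to bake credentials into the image."""
    findings = []
    for lineno, line in enumerate(dockerfile_text.splitlines(), start=1):
        if any(p.search(line) for p in LEAK_PATTERNS):
            findings.append((lineno, line.strip()))
    return findings
```

A check like this runs in seconds as a pipeline gate, and its output doubles as evidence in the change-management audit trail described above.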

When should the workloads actually move into EKS?

After the application is container-ready and the infrastructure is designed to receive it, not before.

An EKS migration attempted on top of a manually deployed application and unscoped infrastructure is a high-risk project that frequently stalls or regresses. The sequencing we followed for this client (audit dependencies, containerize the application, then build EKS-ready infrastructure) means the actual workload migration into EKS is an execution exercise rather than a design one.

The hard architectural decisions are already made.


We’d love to hear from you

Ready to migrate critical systems without disrupting your business?

Talk to our team about your needs.

Contact us