Thread

SaaS migration from Laravel Forge to AWS

Your hosted platform can no longer handle the load, while your engineering team is focused on delivering the MVP. How do you execute a foundational migration to a scalable AWS architecture without hiring a full-time team or changing your roadmap?


Thread's communication platform was hitting the scaling limits of its hosted environment on Laravel Forge.

As a startup, they made a strategic decision to focus their capital on software engineers so they could go live faster. They needed an expert partner to execute the foundational migration to a cloud architecture that their growth required.

Quick facts

Thread

AI Service Desk

Thread’s AI Service Desk combines the power of AI, automation, chat, and inboxing to unlock service experiences for scaling MSPs and their customers.

See their product

100% coverage

Building & Education

The migration was followed by an AWS setup aligned with the Well-Architected Framework. We then educated the team for the next stage of product features and customer-base growth.

AWS + Kubernetes

We migrated Thread from a limited hosting platform to a scalable AWS cloud. Their solution now runs on Amazon EKS and was provisioned entirely with IaC.

“It was very, very helpful because we went from zero. So there were a lot of new things that we learned and it was great.”

Mark Alayev

CEO, Thread

What we did for Thread

Effort Distribution

How do you migrate a startup from a hosted platform to a professional AWS architecture?

The transition from a managed hosting platform to a custom cloud environment required a structured, multi-stage process. For Thread, our approach was designed to build a professional, scalable, and fully observable infrastructure from scratch, covering the entire lifecycle from initial architectural design and automation to implementing the CI/CD workflows and monitoring stacks needed to operate effectively.

  1. Designing a Container-Based Architecture. The first step was architecting the landing zone on AWS. We designed a scalable environment using Amazon EKS to orchestrate their Docker containers, connected to Aurora DB for databases and S3 for asset storage, creating a robust foundation for their PHP and Node.js applications.
  2. Automating Infrastructure with Terraform (IaC). To ensure a consistent and repeatable setup, we implemented Infrastructure as Code from day one. The entire AWS environment was provisioned using Terraform, eliminating the risk of manual configuration errors and creating a documented, version-controlled infrastructure; a minimal Terraform sketch of this foundation follows the list.
  3. Implementing a CI/CD Workflow. A key deliverable was enabling the client's development team. We configured a full CI/CD pipeline using Bitbucket, which allowed their engineers to automate code integration, testing, and deployment, moving them away from manual processes and accelerating their feature delivery cycle.
  4. Establishing Advanced Observability. To provide visibility into the new system, we integrated a comprehensive monitoring solution. New Relic was set up to deliver real-time system health metrics and detailed application performance monitoring (APM), helping the team troubleshoot code-level bottlenecks.
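
The configuration below is a minimal, illustrative Terraform sketch of the kind of foundation described in steps 1 and 2, not Thread's actual code: the resource names, the Aurora engine choice, and the sizing are assumptions, and networking inputs are passed in as variables.

```hcl
# Illustrative only: a simplified EKS + Aurora + S3 foundation in Terraform.
# Names, the Aurora engine choice, and sizing are assumptions, not Thread's real setup.

variable "private_subnet_ids" {
  type        = list(string)
  description = "Existing private subnets for the cluster and database"
}

variable "db_master_password" {
  type      = string
  sensitive = true
}

# IAM role the EKS control plane assumes
resource "aws_iam_role" "eks_cluster" {
  name = "thread-eks-cluster" # hypothetical name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "eks.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "eks_cluster" {
  role       = aws_iam_role.eks_cluster.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

# EKS cluster that orchestrates the Docker containers
resource "aws_eks_cluster" "app" {
  name     = "thread-app" # hypothetical cluster name
  role_arn = aws_iam_role.eks_cluster.arn

  vpc_config {
    subnet_ids = var.private_subnet_ids
  }
  # Managed node groups (and their node IAM role) would be defined alongside this.
}

resource "aws_db_subnet_group" "app" {
  name       = "thread-app"
  subnet_ids = var.private_subnet_ids
}

# Aurora cluster for the relational data
resource "aws_rds_cluster" "app" {
  cluster_identifier   = "thread-aurora" # hypothetical identifier
  engine               = "aurora-mysql"  # engine choice is an assumption
  master_username      = "app"
  master_password      = var.db_master_password
  db_subnet_group_name = aws_db_subnet_group.app.name
}

# S3 bucket for asset storage
resource "aws_s3_bucket" "assets" {
  bucket = "thread-app-assets" # hypothetical, must be globally unique
}
```

Because everything is expressed this way, the same definitions can be reviewed in pull requests and re-applied to create identical staging and production environments.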

Seamless product team integration and education

We implement a comprehensive resilience strategy to ensure data safety and business continuity. The goal is to make recovery swift and predictable.

Our process involves implementing automated backups for critical databases like Aurora and DocumentDB, defining explicit Recovery Point and Recovery Time Objectives (RPO/RTO), and validating the restore procedures.
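
As a hedged illustration in the same Terraform style, the snippet below adds automated-backup settings to a DocumentDB cluster; the Aurora cluster takes the same two arguments. The retention period and backup window are examples, not Thread's actual RPO/RTO figures.

```hcl
# Illustrative backup settings; retention and windows are examples, not Thread's real values.
variable "docdb_master_password" {
  type      = string
  sensitive = true
}

resource "aws_docdb_cluster" "app" {
  cluster_identifier      = "thread-docdb" # hypothetical identifier
  master_username         = "app"
  master_password         = var.docdb_master_password
  backup_retention_period = 14             # days of automated snapshots; bounds the achievable RPO
  preferred_backup_window = "02:00-03:00"  # low-traffic window (UTC)
}
# The Aurora cluster gets the same backup_retention_period and preferred_backup_window
# arguments, and restores are rehearsed from these automated snapshots.
```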

For content, assets are served globally via S3 and CloudFront, while background jobs are managed through queues to ensure graceful degradation rather than catastrophic failure during an incident.
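
On the queueing side, a sketch along these lines (assuming SQS carries the background jobs; the queue service, names, and limits are assumptions) shows how a dead-letter queue parks repeatedly failing jobs instead of losing them or retrying forever.

```hcl
# Illustrative job queue with a dead-letter queue for graceful degradation.
resource "aws_sqs_queue" "jobs_dlq" {
  name                      = "thread-jobs-dlq" # hypothetical name
  message_retention_seconds = 1209600           # keep failed jobs 14 days for inspection and replay
}

resource "aws_sqs_queue" "jobs" {
  name                       = "thread-jobs"
  visibility_timeout_seconds = 120 # should exceed the longest expected job runtime

  redrive_policy = jsonencode({
    deadLetterTargetArn = aws_sqs_queue.jobs_dlq.arn
    maxReceiveCount     = 5 # after five failed attempts, park the job instead of retrying forever
  })
}
```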

We ensure predictable performance through a combination of proactive tuning and load testing before production deployment. We don't guess about performance; we validate it.

We start by profiling application hotspots with New Relic to identify bottlenecks. We then optimize the underlying services (like Nginx and PHP-FPM) and fine-tune container resources.

Finally, we conduct load testing to validate the EKS autoscaling policies, ensuring the platform performs predictably and can handle growth scenarios without degradation.
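
The load tests exercise autoscaling policies along the lines of the hedged sketch below: a Horizontal Pod Autoscaler managed through Terraform's Kubernetes provider, where the deployment name, replica bounds, and CPU threshold are illustrative assumptions.

```hcl
# Illustrative HPA; deployment name, replica bounds, and threshold are assumptions.
provider "kubernetes" {
  config_path = "~/.kube/config" # assumes kubectl is already pointed at the EKS cluster
}

resource "kubernetes_horizontal_pod_autoscaler_v2" "app" {
  metadata {
    name      = "thread-app"
    namespace = "production"
  }

  spec {
    min_replicas = 3
    max_replicas = 12

    scale_target_ref {
      api_version = "apps/v1"
      kind        = "Deployment"
      name        = "thread-app" # hypothetical deployment name
    }

    metric {
      type = "Resource"
      resource {
        name = "cpu"
        target {
          type                = "Utilization"
          average_utilization = 70 # scale out when average CPU crosses 70%
        }
      }
    }
  }
}
```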

We keep the platform secure by establishing a baseline that eliminates hard-coded secrets and enforces the principle of least privilege. Configuration drift is a major security risk, so we control it programmatically.

Our approach involves minimizing and scoping all IAM roles, hardening security groups, and locking down cluster access.

We enforce image provenance via ECR to ensure only trusted code runs, and all secrets are centralized in a system like AWS Secrets Manager or SSM Parameter Store, removing them entirely from the codebase and CI/CD logs.
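
A minimal Terraform sketch of those guardrails, assuming AWS Secrets Manager holds the database credentials; the names and paths are hypothetical.

```hcl
# Illustrative guardrails: a centralized secret, a policy scoped to just that secret,
# and an ECR repository with immutable, scanned images. Names are hypothetical.

resource "aws_secretsmanager_secret" "db_credentials" {
  name = "thread/app/db-credentials"
}

# Least-privilege policy: read access to this one secret and nothing else
data "aws_iam_policy_document" "read_db_secret" {
  statement {
    effect    = "Allow"
    actions   = ["secretsmanager:GetSecretValue"]
    resources = [aws_secretsmanager_secret.db_credentials.arn]
  }
}

resource "aws_iam_policy" "read_db_secret" {
  name   = "thread-read-db-secret"
  policy = data.aws_iam_policy_document.read_db_secret.json
}

# Only images pushed here (scanned, with immutable tags) reach the cluster
resource "aws_ecr_repository" "app" {
  name                 = "thread-app"
  image_tag_mutability = "IMMUTABLE"

  image_scanning_configuration {
    scan_on_push = true
  }
}
```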

We implement FinOps guardrails from day one to provide cost visibility and control. The goal is to align spending directly with usage and business value.

This involves establishing a consistent resource tagging policy for chargeback, implementing right-sizing policies for clusters and databases, and configuring autoscaling so the platform doesn't pay for idle capacity.

We provide dashboards for spend visibility and growth forecasting, which help the team keep costs predictable and avoid surprise bills.
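
A hedged sketch of those FinOps guardrails in the same Terraform style; the tag values, budget amount, region, and alert address are placeholders rather than Thread's actual settings.

```hcl
# Illustrative cost guardrails; tag values, budget amount, and email are placeholders.

# Every resource created by this provider is tagged for cost allocation
provider "aws" {
  region = "eu-west-1" # region is an assumption

  default_tags {
    tags = {
      Project     = "thread"
      Environment = "production"
      CostCenter  = "platform"
    }
  }
}

# Alert before the monthly spend becomes a surprise
resource "aws_budgets_budget" "monthly" {
  name         = "monthly-cost-budget"
  budget_type  = "COST"
  limit_amount = "2000" # placeholder limit in USD
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 80
    threshold_type             = "PERCENTAGE"
    notification_type          = "FORECASTED"
    subscriber_email_addresses = ["platform-team@example.com"] # placeholder address
  }
}
```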

Ownership is transferred through a structured knowledge transfer and enablement process with Thread's engineers. Our objective is not to create long-term dependency but to empower your internal team.

We achieve this by pairing directly with your engineers on specific tasks, creating thorough documentation for the new architecture, and running dedicated enablement sessions on the core technologies we implement, such as EKS, Terraform, and the CI/CD pipelines. This ensures your team can confidently own and manage the day-to-day operations of the new platform.


We’d love to hear from you

Ready to adopt the cloud for your business properly from day one?

Talk to our team about your needs.

Contact us