In a prior post I defined microservices and the advantages they provide over more traditional monolithic application architectures.
At Kingland we see the cloud as a key enabler of the microservice architecture. A microservice architecture lets organizations benefit from an environment that automatically scales individual services, allows services to discover and communicate with one another, and replaces a faulty service without impacting the services around it. Three of the most important features of the cloud, as it relates to running microservices, are rapid provisioning, service discovery, and detailed monitoring.
Unlike monolithic applications, microservice-based applications can be selectively scaled out.
Instead of launching multiple instances of the entire application server, it is possible to scale out a specific microservice on demand. When load shifts to other parts of the application, the first microservice can be scaled in while a different service is scaled out. This delivers better value from the underlying infrastructure, as the need to provision new virtual machines shifts to provisioning new microservice instances on existing virtual machines.
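To make the idea concrete, here is a minimal sketch of per-service scaling. The `ServicePool` class, the service names, and the load thresholds are all hypothetical illustrations, not a real orchestrator API; the point is that each service scales independently based on its own load.

```python
# Hypothetical sketch: scale each microservice independently based on its own load.
# ServicePool, the thresholds, and the service names are illustrative only.

class ServicePool:
    """Tracks how many instances of one microservice are running."""

    def __init__(self, name, min_instances=1, max_instances=10):
        self.name = name
        self.min_instances = min_instances
        self.max_instances = max_instances
        self.instances = min_instances

    def reconcile(self, load_per_instance):
        """Scale out when instances are hot, scale in when they are idle."""
        if load_per_instance > 0.75 and self.instances < self.max_instances:
            self.instances += 1   # scale out this service only
        elif load_per_instance < 0.25 and self.instances > self.min_instances:
            self.instances -= 1   # scale in, freeing capacity on existing VMs
        return self.instances

# Load shifts from the search service to the checkout service:
search = ServicePool("search")
checkout = ServicePool("checkout")
search.reconcile(0.9)    # search is hot: scales out to 2 instances
checkout.reconcile(0.9)  # checkout is also hot: scales out to 2 instances
search.reconcile(0.1)    # load has moved away: search scales back in to 1
```

Note that scaling one pool never touches the other; that independence is what a monolith cannot offer.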
One way to facilitate this is to deploy microservices as Docker containers. Docker containers are easily portable, quick to provision, and provide process isolation. Most cloud providers support Docker containers. For instance, Amazon EC2 Container Service (Amazon ECS) provides a highly scalable, high-performance container management service that supports Docker.
The on-demand scale-out of microservices raises the question of how the services are going to find each other. We have to manage this complexity with a standardized approach to service discovery: a mechanism that registers services as soon as they launch, and a query protocol that returns the IP address of a service, without building this logic into each component.
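The register-then-query pattern can be sketched in a few lines. This in-memory registry is a toy illustration under assumed names; real systems delegate this to tools such as Consul, etcd, or the orchestrator's built-in discovery, and add health checks and replication.

```python
# Toy sketch of a service registry: register on launch, query by name.
# Class and service names are illustrative, not a real discovery API.

import random

class ServiceRegistry:
    def __init__(self):
        self._services = {}  # service name -> list of "ip:port" endpoints

    def register(self, name, endpoint):
        """Called by a service instance immediately after it launches."""
        self._services.setdefault(name, []).append(endpoint)

    def deregister(self, name, endpoint):
        """Called when an instance is scaled in or replaced."""
        self._services.get(name, []).remove(endpoint)

    def lookup(self, name):
        """Return one endpoint, so callers carry no routing logic themselves."""
        endpoints = self._services.get(name)
        if not endpoints:
            raise LookupError(f"no instances of {name!r} registered")
        return random.choice(endpoints)  # naive client-side load balancing

registry = ServiceRegistry()
registry.register("billing", "10.0.1.17:8080")
registry.register("billing", "10.0.1.23:8080")
endpoint = registry.lookup("billing")  # either of the two registered addresses
```

Because callers only ask the registry by name, instances can come and go as scaling demands without any client being reconfigured.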
Microservices, deployed as container images, can be instantiated by various orchestration solutions such as Kubernetes or Mesosphere. These orchestration solutions provide built-in service discovery solutions. Public cloud providers such as Amazon and Google also provide first-class solutions for container orchestration and deployment.
With microservices, we can develop and deploy self-healing applications. Since each microservice is autonomous and independent, we can monitor and replace a faulty service without impacting the others.
To facilitate this, detailed monitoring is essential for detecting problems in a complex network of services. Most cloud providers offer monitoring solutions. For instance, Amazon CloudWatch monitors AWS resources and applications, allowing users to collect important metrics, gather logs, and generate alarms or events. Alarms or events created in CloudWatch can automatically trigger response actions, such as terminating an instance, running an Auto Scaling policy, or invoking an AWS Lambda function. The ability to establish conditions and respond to them automatically fits naturally with microservice-style deployments in the public cloud.
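The detect-and-replace loop behind self-healing can be sketched locally. The health check and replace action below are stand-ins for what, in a real deployment, would be a CloudWatch alarm and its triggered action (or an orchestrator's liveness probe); the instance names are invented for illustration.

```python
# Illustrative alarm -> automatic response loop. is_healthy and replace are
# stand-ins for a real monitoring alarm and its remediation action.

def monitor(instances, is_healthy, replace):
    """Check every instance; swap out any that fails its health check."""
    replaced = []
    for instance in list(instances):
        if not is_healthy(instance):
            instances.remove(instance)
            fresh = replace(instance)   # e.g. terminate and relaunch a container
            instances.append(fresh)
            replaced.append(instance)
    return replaced

# Simulated fleet in which instance "api-2" is faulty:
fleet = ["api-1", "api-2", "api-3"]
faulty = monitor(
    fleet,
    is_healthy=lambda i: i != "api-2",
    replace=lambda i: i + "-replacement",
)
# faulty == ["api-2"]; the fleet now contains "api-2-replacement"
```

The healthy instances `api-1` and `api-3` are never touched, which is exactly the per-service isolation the paragraph above describes.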
Without these three features of the cloud, we will not be able to take full advantage of the microservice architecture. And without the microservice architecture, we will not be able to take full advantage of the cloud platform. Microservices and the cloud truly go hand in hand.
The idea of building applications using services has always made sense to me. Why code from scratch when you can assemble applications from existing services via standard APIs? This is the key driver behind the Kingland platform, where many of the components are based on a microservice architecture, allowing our clients to enjoy tremendous economies of scale as well as a great return on their cloud investment.