
Monolithic and Microservice Architectures




In this section, you will learn the differences between monolithic and microservice architectures, and how to architect for microservices.


Loose Coupling

Traditional monolithic infrastructures revolve around chains of tightly integrated servers, each with a specific purpose. When one of those components or layers goes down, the disruption to the system can be fatal. This configuration also impedes scaling. If you add or remove servers at one layer, you must also connect every server on each connecting layer.

With loose coupling, you use managed solutions as intermediaries between the layers of your system. The intermediary automatically handles the failure and scaling of individual components. Two primary solutions for decoupling your components are load balancers and message queues.
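
As a minimal sketch of queue-based decoupling, the following Python example uses the boto3 SDK with Amazon SQS. The queue name, region, and message contents are hypothetical; the point is that the producer and the consumer interact only with the queue, never with each other, so either side can fail or scale independently.

import json
import boto3

# Both sides talk only to the queue (the intermediary), not to each other.
sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.create_queue(QueueName="orders-queue")["QueueUrl"]

# Producer: enqueue work and move on, regardless of consumer health.
sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps({"order_id": 42}))

# Consumer: poll at its own pace; add more consumers to scale out.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=5)
for message in resp.get("Messages", []):
    print("processing", json.loads(message["Body"]))
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])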


Microservices

Microservices are an architectural and organizational approach to software development. Using a microservices approach, you design software as a collection of small services. Each service is deployed independently and communicates over well-defined APIs. This speeds up your deployment cycles, fosters innovation, and improves both maintainability and scalability of your applications.


Autonomous

The component services in a microservices architecture are isolated from one another and communicate through an API. Because of this, you can develop, update, deploy, operate, and scale a service without affecting the other services. These services can be owned by small autonomous teams, allowing for an agile approach.
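
To make the idea concrete, here is a minimal sketch of one such service exposing a well-defined API over HTTP, using only the Python standard library. The service name, port, and in-memory data are hypothetical; a real service would typically sit behind a load balancer and use a persistent data store.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory data; a real user service would query a database.
USERS = {"1": {"id": "1", "name": "ana"}}

class UserService(BaseHTTPRequestHandler):
    """Answers GET /users/<id>; other services call this API and nothing else."""
    def do_GET(self):
        user = USERS.get(self.path.rstrip("/").split("/")[-1])
        body = json.dumps(user or {"error": "not found"}).encode()
        self.send_response(200 if user else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), UserService).serve_forever()

Because other teams depend only on this HTTP contract, the team that owns the service can change its internals, redeploy it, or scale it without coordinating with anyone else.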


Specialized

You design each service for a set of capabilities that focuses on solving a specific problem. Teams can write each service in the programming languages best suited to that service. They can also host their services on different compute resources.

In this example, a monolithic forum application is refactored to use a microservices architecture: a users service, a topics service, and a messages service. The /users service team runs the users service on AWS Lambda. The /topics service team runs the topics service on Amazon Elastic Compute Cloud (Amazon EC2). The /messages service team runs the messages service on containers. The microservices application is distributed across two Availability Zones and manages traffic with an Application Load Balancer.
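
As an illustration of the Lambda-hosted piece, a handler for the hypothetical /users service might look like the following sketch, which assumes an API Gateway proxy integration that delivers a userId path parameter in the event.

import json

# Hypothetical lookup table; a real service would query a database.
USERS = {"1": {"id": "1", "name": "ana"}}

def lambda_handler(event, context):
    # Assumes an API Gateway proxy integration: path parameters arrive in the event.
    user_id = (event.get("pathParameters") or {}).get("userId")
    user = USERS.get(user_id)
    return {
        "statusCode": 200 if user else 404,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(user or {"error": "not found"}),
    }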


What Is a Container?

A container is a self-contained environment that includes all the components needed to run an application: the runtime engine, your application code, dependencies such as libraries, and configuration information. Your containers deploy the same way on any server running Docker, which gives your application portability, repeatability, and scalability.

We build microservice infrastructures with containers. Running virtual machines (VMs) in the cloud already gives you a dynamic, elastic environment, but containers simplify your developers' processes further. Containers provide a standard way to package your application's code, configurations, and dependencies into a single object.

Containers share an operating system installed on the server and run as resource-isolated processes, ensuring quick, reliable, and consistent deployments, regardless of the environment.
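
As a small sketch of this portability, the following Python example uses the Docker SDK for Python (pip install docker) to run a public image on any host with a Docker daemon; the image and command are arbitrary examples.

import docker  # Docker SDK for Python: pip install docker

# Connects to the local Docker daemon; this works the same on a laptop,
# a VM, or an EC2 instance, because the image carries its own dependencies.
client = docker.from_env()
output = client.containers.run(
    "python:3.12-slim",                                   # example image
    ["python", "-c", "print('hello from a container')"],  # example command
    remove=True,                                          # clean up after exit
)
print(output.decode())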


Containers And Microservices

Containers are an ideal choice for microservice architectures because they are scalable, portable, and continuously deployable.

Earlier in this module, you learned how microservice architectures decompose traditional, monolithic architectures into independent components that run as services and communicate using lightweight APIs. With these microservice environments, you can iterate quickly, with increased resilience, efficiency, and overall agility.

You can build each microservice on a container. Because each microservice is a separate component, it can tolerate failure better. If a container fails, it can be shut down and a new one can be started quickly for that particular service. If a certain service has a lot of traffic, you can scale out the containers for that microservice. This eliminates the need to deploy additional servers to support the entire application. Microservices and containers are also great for continuous deployment. You can update individual services without impacting any of the other components of your application.
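
For example, scaling out the containers behind one busy microservice can be a single API call when an orchestration tool manages the service. This boto3 sketch assumes the service runs on Amazon ECS (covered later in this module) with hypothetical cluster and service names.

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Scale only the busy microservice; the rest of the application is untouched.
ecs.update_service(
    cluster="forum-cluster",     # hypothetical cluster name
    service="messages-service",  # hypothetical service name
    desiredCount=6,              # run six copies of this one service
)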


Levels Of Abstraction

A bare metal server runs a standalone operating system (OS) and one or more applications that use its libraries. Costs remain constant whether the server is running at 0 percent usage or 100 percent usage. To scale, you must buy and configure additional servers. It is also difficult to build applications that span multiple servers, because every server must run the same OS and you must keep the application library versions synchronized.

With virtual machines, you isolate applications and their libraries with their own full OS. The downside of VMs is that the virtualization layer is “heavy.” Each VM has its own OS. This requires more host CPU and RAM, reducing efficiency and performance. Having an individual OS for each VM also means more patching, more updates, and more space on the physical host.

With a containerization platform, containers share the host machine's OS kernel, and the underlying OS file system is exposed. Sharing the kernel lets containers use common libraries while still permitting individual libraries as needed. This makes containers highly portable. You can also start and stop containers faster than VMs. Containers are lightweight, efficient, and fast.

Unlike VMs, containers can run on any Linux system that has the appropriate kernel feature support and the Docker daemon. This makes them portable. Your laptop, your VM, your Amazon EC2 instance, and your bare metal server are all potential hosts.

The lack of a hypervisor requirement also results in almost no noticeable performance overhead. The processes communicate directly with the kernel and are largely unaware of their container silo. Most containers boot in only a couple of seconds.


Containers On AWS

When running containers on AWS, you have multiple options.

Running containers on top of an EC2 instance is common practice and uses elements of VM deployments and containerization. This diagram shows the underlying server infrastructure—a physical server, the hypervisor, and two virtual guest operating systems. One of these operating systems runs Docker, and the other runs a separate application. The virtual guest OS with Docker installed can build and run containers. Though possible, this type of deployment can only scale to the size of the EC2 instance used. You also have to actively manage the networking, access, and maintenance of your containers.

Using an orchestration tool is a scalable solution for running containers on AWS. An orchestration tool uses a pool of compute resources, which can include hundreds of EC2 instances to host containers. The orchestration tool launches and shuts down containers as demand on your application changes. It manages connectivity to and from your containers. It also helps manage container deployments and updates.


Running Containers On AWS

Deploying your managed container solutions on AWS involves selecting and configuring some components.


Amazon Elastic Container Registry (ECR)

Amazon Elastic Container Registry (Amazon ECR) is a managed Docker container registry. You push your container images to Amazon ECR and can then pull those images to launch containers. With Amazon ECR, you can compress, encrypt, and control access to your container images. You also manage versioning and image tags. An Amazon ECR private registry is provided to each AWS account. You can create one or more repositories in your registry and store images in them.
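
A minimal boto3 sketch of that workflow might look like the following; the repository name is hypothetical, and the returned credentials would be passed to docker login before pushing an image.

import base64
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")

# Create a repository in this account's private registry (name is hypothetical).
repo = ecr.create_repository(repositoryName="forum/messages")
print("push images to:", repo["repository"]["repositoryUri"])

# Fetch temporary registry credentials for `docker login` / `docker push`.
auth = ecr.get_authorization_token()["authorizationData"][0]
user, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")
print("registry endpoint:", auth["proxyEndpoint"])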


Amazon Elastic Container Service (ECS)

Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container management service that supports Docker containers. Amazon ECS manages the scaling, maintenance, and connectivity for your containerized applications.

With Amazon ECS, you create ECS services, which launch ECS tasks. Amazon ECS tasks can use one or more container images. Amazon ECS services scale your running task count to meet demand on your application.

You create an Amazon ECS cluster with dedicated infrastructure for your application. You can run your tasks and services on a serverless infrastructure managed by AWS Fargate. If you prefer more control over your infrastructure, manage your tasks and services on a cluster of EC2 instances. Your cluster can scale EC2 hosting capacity by adding or removing EC2 instances from your cluster.
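
Putting those pieces together, the following boto3 sketch registers a task definition and creates a service on an existing EC2-backed cluster; the names, image URI, and sizes are all hypothetical.

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# A task definition describes the container image(s) a task runs.
task_def = ecs.register_task_definition(
    family="messages",
    requiresCompatibilities=["EC2"],
    containerDefinitions=[{
        "name": "messages",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/forum/messages:latest",
        "memory": 512,
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "essential": True,
    }],
)

# A service keeps the desired number of tasks running on the cluster.
ecs.create_service(
    cluster="forum-cluster",
    serviceName="messages-service",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=2,
    launchType="EC2",
)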


Amazon EKS

Kubernetes is open-source software that you can use to deploy and manage containerized applications at scale. Kubernetes manages clusters of Amazon EC2 compute instances and runs containers on those instances with processes for deployment, maintenance, and scaling. With Kubernetes, you can run any type of containerized application using the same tool set on premises and in the cloud.

Amazon Elastic Kubernetes Service (Amazon EKS) is a certified conformant, managed Kubernetes service. Amazon EKS helps you provide highly available and secure clusters and automates key tasks such as patching, node provisioning, and updates.

  • Run applications at scale – Define complex containerized applications and run them at scale across a cluster of servers.
  • Seamlessly move applications – Move containerized applications from local development to production deployments on the cloud.
  • Run anywhere – Run highly available and scalable Kubernetes clusters.

Amazon EKS is a managed service that you can use to run Kubernetes on AWS without having to install and operate your own Kubernetes clusters. With Amazon EKS, AWS manages highly available services and upgrades for you. Amazon EKS runs three Kubernetes managers across three Availability Zones. It detects and replaces unhealthy managers and provides automated version upgrades and patching for the managers. Amazon EKS is also integrated with many AWS services to provide scalability and security for your applications.

Amazon EKS runs the latest version of the open-source Kubernetes software, so you can use all of the existing plugins and tooling from the Kubernetes community. Applications running on Amazon EKS are fully compatible with applications running on any standard Kubernetes environment, whether running in on-premises data centers or on public clouds.
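
For instance, once a cluster exists you can inspect it with a short boto3 call; the cluster name below is hypothetical, and in practice you would follow this with aws eks update-kubeconfig so that kubectl can reach the cluster.

import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Look up a hypothetical cluster; AWS manages its control plane for you.
cluster = eks.describe_cluster(name="forum-cluster")["cluster"]
print(cluster["name"], cluster["status"], "Kubernetes", cluster["version"])
print("API endpoint:", cluster["endpoint"])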


Kubernetes Architecture

The basic components of Kubernetes architecture are the user interfaces, the control plane, and the data plane. User interfaces, such as the web dashboard or the command-line tool kubectl, allow you to deploy, manage, and troubleshoot containerized applications and cluster resources.

The control plane manages object states, responds to changes, and maintains a record of all objects. The data plane provides capacity such as CPU, memory, network, and storage, and consists of the worker nodes, which run containers in pods.
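
As a small sketch of a programmatic client in the same spirit as kubectl, the following uses the official Kubernetes Python client (pip install kubernetes) and assumes a kubeconfig is already in place, for example one written by aws eks update-kubeconfig.

from kubernetes import client, config  # pip install kubernetes

# Reads ~/.kube/config, e.g. written by `aws eks update-kubeconfig`.
config.load_kube_config()

# Ask the control plane for every pod the data plane is running.
v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)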


AWS Fargate Serverless Cluster Hosting

AWS Fargate is a technology for Amazon ECS and Amazon EKS that you can use to run containers without having to manage servers or clusters. With Fargate, you no longer have to provision, configure, and scale clusters of VMs to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing.
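
As a final sketch, launching a task on Fargate needs no instances at all, only networking details. The cluster, task definition, subnet, and security group below are hypothetical, and the task definition is assumed to use the awsvpc network mode that Fargate requires.

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# No EC2 instances to manage: Fargate provisions capacity per task.
ecs.run_task(
    cluster="forum-cluster",       # hypothetical cluster
    launchType="FARGATE",
    taskDefinition="messages",     # hypothetical awsvpc-mode task definition
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234de567890f"],     # hypothetical subnet
            "securityGroups": ["sg-0abc1234de567890f"],  # hypothetical SG
            "assignPublicIp": "ENABLED",
        }
    },
)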

