What are the benefits of containers and orchestrators?
Container infrastructure (Dockerfiles, manifests, and related configuration) can easily be version-controlled. As a result, apps built to run in containers can be developed, tested, and deployed with automated tools as part of a build pipeline.
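As a rough sketch of what such a pipeline step might look like (the image name, registry, and test command below are placeholders, not a prescribed setup):

```
# Build the image from the version-controlled Dockerfile, run the test suite
# inside a container, then publish the image for deployment.
docker build -t myregistry.azurecr.io/catalog-service:1.0.0 .
docker run --rm myregistry.azurecr.io/catalog-service:1.0.0 run-tests   # test command is a placeholder
docker push myregistry.azurecr.io/catalog-service:1.0.0
```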
Azure Container Registry or Docker Hub?
1. The benefit of ACR over other options is that you can keep your images close to your production environment, improving build and deployment times.
2. Your images are accessible from whatever Azure services you choose to use to host them.
3. You can also secure them using the same security procedures you use for the rest of your Azure resources, improving security and reducing asset management effort.
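One possible push-to-ACR workflow, assuming a resource group named rg-cloudnative and a registry named myregistry (both placeholders):

```
# Create the registry, authenticate, then tag and push a locally built image.
az acr create --resource-group rg-cloudnative --name myregistry --sku Basic
az acr login --name myregistry
docker tag catalog-service:1.0.0 myregistry.azurecr.io/catalog-service:1.0.0
docker push myregistry.azurecr.io/catalog-service:1.0.0
```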
Azure Web App for Containers, Azure Kubernetes Service (AKS), and Azure Container Instances (ACI)
Azure Kubernetes Service (AKS) lets you take full advantage of Kubernetes without having to install and maintain it yourself. Benefits:
1. With a full CI/CD pipeline in place, you can configure a canary release strategy to minimize the risk of rapidly deploying updates (see the sketch after this list).
The new version of the app is initially deployed to production with no traffic routed to it; then a small number of users are routed to the newly deployed version.
As the team gains confidence in the new version of the software, more instances of the new version are rolled out and the previous version’s instances are retired.
2. Health monitoring
3. Failover
4. Scaling
5. Rolling Upgrades
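As a sketch of the canary pattern described in point 1, two Deployments can sit behind a single Service that selects them both by label; the manifest and deployment names here are hypothetical:

```
# Both Deployments carry the Service's selector label (e.g. app=catalog), so
# traffic splits roughly in proportion to replica counts.
kubectl apply -f catalog-stable.yaml    # e.g. 9 replicas of the current version
kubectl apply -f catalog-canary.yaml    # 1 replica of the new version (~10% of traffic)

# As confidence in the new version grows, shift replicas toward it...
kubectl scale deployment/catalog-canary --replicas=5
kubectl scale deployment/catalog-stable --replicas=5

# ...and finally retire the previous version's instances.
kubectl scale deployment/catalog-canary --replicas=10
kubectl scale deployment/catalog-stable --replicas=0
```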
Azure Kubernetes Service
1. If you’d like to host the application in your own AKS cluster, the first step is to create the cluster (a sketch of both steps follows this list).
Each microservice executes in a separate process and typically runs inside a container that is deployed to a cluster.
A cluster groups a pool of virtual machines together to form a highly available environment. The machines are managed with an orchestration tool, which is responsible for deploying and managing the containerized microservices.
2. Once the cluster has been created and configured, you can deploy the application to it using Helm (older Helm 2 releases also required its server-side component, Tiller; Helm 3 removed it).
With the declarative approach, you use a configuration file that describes the desired state rather than the steps to reach it, and Kubernetes figures out what to do to achieve that end state.
Deployment controllers use these declarative definitions to update cluster resources. Deployments are used to roll out new changes, scale up to support more load, or roll back to a previous revision. If a cluster becomes unstable, declarative deployments provide a mechanism for automatically bringing it back to the desired state.
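A minimal sketch of both steps, assuming a resource group rg-cloudnative, a cluster named my-aks, and a Helm chart checked in under ./charts/catalog (all placeholder names):

```
# 1. Create the AKS cluster and fetch credentials for kubectl and Helm.
az aks create --resource-group rg-cloudnative --name my-aks --node-count 3 --generate-ssh-keys
az aks get-credentials --resource-group rg-cloudnative --name my-aks

# 2. Deploy the application with Helm (Helm 3 needs no Tiller).
helm install catalog ./charts/catalog

# Or apply a declarative manifest directly and let Kubernetes reconcile the
# cluster toward the desired state it describes.
kubectl apply -f k8s/catalog-deployment.yaml
```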
Scaling containers and serverless applications
There are two typical ways to scale an application: scaling up and scaling out.
1. The simple solution: scaling up
The process of upgrading existing servers to give them more resources (CPU, memory, disk I/O speed, network I/O speed) is known as scaling up.
Cloud-native apps typically scale up by increasing the virtual machine (VM) size used to host the individual nodes in their Kubernetes node pool.
To vertically scale your application, create a new node pool with a larger node VM size and then migrate workloads to the new pool (see the sketch after this list).
2. Scaling out cloud-native apps
Cloud-native apps support scaling out by adding additional nodes or pods to service requests.
3. Autoscaling adjusts the resources used by an app in order to respond to demand.
AKS clusters can scale in one of two ways:
• The cluster autoscaler watches for pods that can’t be scheduled on nodes because of resource constraints. It adds additional nodes as required.
• The horizontal pod autoscaler uses the Metrics Server in a Kubernetes cluster to monitor the resource demands of pods. If a service needs more resources, the autoscaler increases the number of pods.
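The sketch below illustrates all three approaches against placeholder resources (resource group rg-cloudnative, cluster my-aks, deployment catalog):

```
# Scale up: add a node pool with a larger VM size...
az aks nodepool add --resource-group rg-cloudnative --cluster-name my-aks \
  --name largepool --node-count 3 --node-vm-size Standard_D8s_v3
# ...then cordon and drain the old pool's nodes so workloads migrate, e.g.
# kubectl cordon <old-node> && kubectl drain <old-node> --ignore-daemonsets

# Scale out: add pods manually...
kubectl scale deployment/catalog --replicas=10

# ...or let the horizontal pod autoscaler adjust the pod count from CPU metrics.
kubectl autoscale deployment/catalog --cpu-percent=70 --min=3 --max=20

# Cluster autoscaler: let AKS add or remove nodes as pods go unscheduled.
az aks update --resource-group rg-cloudnative --name my-aks \
  --enable-cluster-autoscaler --min-count 3 --max-count 10
```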
Local Kubernetes Development
There are several ways to run Kubernetes locally; two common options are Minikube and Docker Desktop. Visual Studio also provides tooling for Docker development.
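For example, a Minikube workflow might look like this (Docker Desktop instead exposes a docker-desktop context once its built-in Kubernetes is enabled):

```
# Start a single-node local cluster and point kubectl at it.
minikube start
kubectl config use-context minikube
kubectl get nodes

# With Docker Desktop's Kubernetes enabled, switch contexts the same way.
kubectl config use-context docker-desktop
```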
Automating infrastructure
Tools like Azure Resource Manager (ARM) templates, Terraform, and the Azure CLI enable you to declaratively script the cloud infrastructure you require.
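A version-controlled script or template can then recreate the environment on demand; the names and template path below are placeholders:

```
# Stand up the environment from source-controlled definitions.
az group create --name rg-cloudnative --location eastus
az deployment group create --resource-group rg-cloudnative \
  --template-file infra/main.json        # ARM template kept in the repo
# Terraform offers the same declarative model: terraform init && terraform apply
```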
Configurations
To support centralized management of configuration settings, each microservice includes a setting to toggle between using local settings and Azure Key Vault settings.
Azure Key Vault
Azure Key Vault provides secure storage of tokens, passwords, certificates, API keys, and other sensitive secrets.
Access to Key Vault requires proper caller authentication and authorization.
Don’t check these credentials into source control; instead, set them in the application’s environment.
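A sketch of that setup, with placeholder vault, secret, and identity values:

```
# Create the vault and store a secret.
az keyvault create --resource-group rg-cloudnative --name my-app-vault
az keyvault secret set --vault-name my-app-vault --name CatalogDbPassword --value '<secret-value>'

# Grant the app's identity read access to secrets (prefer a managed identity
# over client secrets embedded in configuration).
az keyvault set-policy --name my-app-vault --object-id <identity-object-id> \
  --secret-permissions get list

# For local development, any credentials the app needs go into environment
# variables rather than source control.
export AZURE_CLIENT_ID=<client-id>
export AZURE_CLIENT_SECRET=<client-secret>
export AZURE_TENANT_ID=<tenant-id>
```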