K011: “Environment On Demand” With Modern Infrastructure

transcript

Summary Keywords

infrastructure, Terraform, create, code, modules, environment, configuration management, Ansible, test, tools, provisioner, important, configurations, demand, servers, cloud, means, files, production, Kubernetes cluster

Kamalika Majumder 00:00

Hello everyone, welcome back to Cloudkata, the Modern Infrastructure show. This is season one, Anatomy of Modern Infrastructure, and today's episode is about the 10th and final factor in the anatomy: environment on demand. We will understand how to achieve immutable infrastructure by creating a model called environment on demand. So without any further delay, let's get started.

Before we deep dive into the concept of environment on demand, let us understand some of the very common day-to-day challenges we face in setting up infrastructure. The first and foremost challenge, of course, is manual setup. If there is little or no automation, the manual steps needed to set up servers take forever, and that in turn creates untraceable changes, with no record of what changes were made or when they were made. Eventually, every server becomes a work of art on its own.

The other challenge we hear about very often is that the code works on localhost but not in production. And believe me, even 10 to 12 years after DevOps as a concept took shape, with so many advanced tools and technologies in place, we still face the same issue. Probably the choice of words has changed a bit; today people would say that the code works in my container but not on the production server. The problem remains the same: it works during development, but it does not work after delivery.

Last, but not least, testing takes forever, and because the testing process is time-consuming, people tend to skip it to achieve a faster delivery cycle. But remember, untested features or code are always a ticking time bomb which can explode at any time in production and, in turn, damage all the trust that has been built with customers.

So these are some of the very common but persistent challenges people face in Modern Infrastructure, and they really need to be taken care of right at the base. Otherwise, with everything we have discussed so far, all the features, factors and configuration parameters, there is no use in achieving the first nine factors if we do not solve these problems during the setup itself. So let's look into how these challenges can be solved for long-term, effective delivery of Modern Infrastructure.

First, I would like you to imagine a situation where you face these day-to-day challenges of manual setup, and then suddenly you have an environment that can be created, tested, destroyed and even recreated on demand within a few minutes. How about that? You would love to have that kind of situation, right?

Kamalika Majumder 3:29

We all would love to have that situation. An environment on demand is exactly that kind of environment: one that can be set up, brought up and brought down at any time, depending on the demands of delivery. An environment on demand means you have performance-based upscaling and downscaling of the environment, which means that depending on the demand on your output, you create your input. It also means you have continuous delivery of your infrastructure from development through to releases, and it gives you backward compatibility. So it doesn't matter if you have scaled up to 20 instances; if there is suddenly a drop in traffic, it can come back down to one instance. It is always backward compatible, upgradable and also downgradable. That is the concept of environment on demand.

And what do you achieve when you actually have an environment on demand set up? You achieve immutable infrastructure. If you have not heard the term immutable infrastructure, let me explain a bit. By the word immutable, I mean something that does not mutate. With our recent experience of the past few years with the virus, we know what kind of scare it gives us whenever we hear about a new mutation, meaning the changes, the evolution, that happen to anything. The same way, in infrastructure, if you let your setup mutate over time, you increase your dependency on it. You become so tied to the infrastructure that you would be skeptical about making any change, because you would not know what has gone in or what the impact of that change will be. That is called mutable infrastructure, and it is a very dangerous state to be in. You should always aim for immutable infrastructure: no matter what changes have gone in, it is still a commodity item, and you can spin up as many upgraded versions of it as you need, or even downgrade when utilization is low. That is the idea environment on demand brings in.

That is why, if you want to achieve the robust Modern Infrastructure I have discussed across its various layers, networking, storage, servers, monitoring, security, all these configurations, remember that all of these will change over time. Nothing stays the same. Whatever versions you are building today will go through upgrades, newer threats will come in, and you will have to adapt. So you should always think about creating immutable infrastructure using an environment on demand design. This is what I mean by environment on demand.

Now, let's look at how to set up an environment on demand with whatever technologies or tools you have in hand, and bear in mind that environment on demand is a concept that is not just for Modern Infrastructure. It is truly about automating your infrastructure setup in such a way that it can be created, recreated or destroyed, that is, brought up or brought down, at your choice. So what do you need to achieve this kind of environment? The first and foremost thing, which I think you would have already guessed, is automation.
And by automation, what you actually need is not just to write some scripts and bring them in. No, that is not enough for an environment on demand. Remember, you have to look at your demand and then bring up your environment, so you need to go beyond automation and treat everything as code. That is why the first thing you will need is infrastructure as code. And what is infrastructure as code? It is not as simple as just writing or automating something. Infrastructure as code means treating every component in your infrastructure as code. And what does code mean in technical development terms? Code means it has a single source of truth, it is version controlled, and it is traceable, testable and scalable. That is the true meaning of code.

So when we talk about infrastructure as code, it means taking each and every component of your infrastructure and converting it into code. That covers your server configurations, the packages that need to be installed on those servers, and the relationships between the servers. Let's say you have an application cluster and a database cluster: what is the relationship between them, is it one way or both ways, is it read-only or read-write? All those relationships between the various servers should be modelled in code, and it should be automated so that it removes any manual steps, which are prone to errors. Modelled server configurations and parameterization of everything, including the packages and the relationships, are essential, and when you achieve modelled server configuration and parameterization, that is also called configuration management. I will discuss the difference between infrastructure as code and configuration management shortly.

But the first thing is that anything you write for your infrastructure should live in a single source of truth, in a version control system, an SCM, whatever you use: Git, GitHub, GitLab. It should be stored in a single place, it should be version controlled, and there should be history maintained for all of it. It should be traceable, so that you can see whatever changes have occurred over time. It should be testable, and it should be scalable, meaning you can add and append more configuration to it. Plus, it should be properly parameterized, with no static configuration anywhere.

Kamalika Majumder 10:13

Now, some other things to keep in mind: if you can, and in fact you should, tag branches and release the code that defines your service. Have a lifecycle that covers the different stages of your infrastructure code, for development, testing and production, and continuously test your infrastructure as you make changes. I'll talk more about how to test it in a few minutes.

But let's look at another component of infrastructure as code, if I may call it that, which is configuration management. Sometimes you may get confused between configuration management and infrastructure as code, because today we have so many options in terms of tools, and one tool in the tech stack can seem to do everything. A single tool plays many roles, but it is very important to understand the subtle difference between infrastructure as code and configuration management.

Infrastructure as code is, typically, everything in your infrastructure converted into code: stored in a source control repository, version controlled, testable and scalable, technically just like what you do for your applications. And what is configuration management? Once you have created your infrastructure using infrastructure as code, you sometimes have to configure the software you have installed on it. With managed services on the cloud today you get pre-built solutions at your disposal. Take Kubernetes, for example. You may choose to set up a Kubernetes cluster on your own, maybe because you are not comfortable with the managed SaaS option and you want a more private, more customized version of a Kubernetes cluster, so you do a Kubernetes installation on your own servers. The first thing is the installation, and the next thing is, say, upgrading Kubernetes to a different version after six months. That ongoing configuration, the continuous changes you have to make based on requirements, is configuration management. It covers your modelled server configurations, the parameterization of everything and the relationships, all kept in code; again, this sits alongside infrastructure as code.

Another example comes up when you are setting up monitoring systems and tools, something to collect more analytics. On the cloud you get some things out of the box, but you may need more, say synthetics monitoring, business monitoring, or collecting all the logging data for security or audit logging. So you may choose to install software purchased from a third-party provider. Let's say you want to install Splunk, a logging product that gives you much richer log aggregation. Now, when you are doing a Splunk installation and maintaining it on your own, you may have to make changes: maybe you need to extend the disk space after some days, or upgrade the version of Splunk, or create different blob stores, the partitions for the log stores, and so on.

Kamalika Majumder 14:00

So, these are configurations of software you have already installed. You are not actually creating any new infra; you are just configuring the software within the infra you have set up. This is called configuration management, and the tools that do infrastructure as code and the tools that are specifically meant for configuration management are slightly different. Today you might see both kinds of tools doing the same work, but keep in mind what their core value is. The choice of tools really matters. Sometimes you may end up using one tool for both jobs, and if you get that working well, very good. Sometimes you might not be able to achieve that, so you may have to combine two tools: one for your infrastructure as code, for infrastructure deployment and creation in your environment on demand, and another, embedded within it, for configuring certain parts of your infrastructure which are not part of the cloud provider's setup but are something you self-manage. That is the subtle distinction you need to understand between infrastructure as code and configuration management.

Now, what should you keep in mind when you are building infrastructure as code, or for that matter configuration management code? One thing is: don't just automate. As I have said, automation is very trivial today, if you ask me. Honestly, it may have been difficult eight or ten years back, but today it is very easy; you can automate anything with any script, at the bare minimum a shell script or a Python script. But that is not enough. When we say writing code, it has to have some quality attached to it. If that code is not written well, it will end up creating mutable infrastructure instead of immutable infrastructure, and code that only keeps growing and is difficult to scale or extend. So keep in mind that just automating is not enough; what is important is modularization and templatization. You have to have modules and templates.

What does that mean? First and foremost, make sure you choose cloud-agnostic tools, or provider-agnostic tools; provider here means wherever you are hosting your infrastructure. Choose tools that are agnostic of that provider, so that tomorrow, if you have to set things up on some other provider, you will not have to redo or relearn the whole thing. In other words, use cloud-agnostic tools to create cloud-native applications.

Some of the tool choices for infrastructure as code: the top one is Terraform. If you are already well acquainted with infrastructure as code, you know that Terraform is the first choice of any infrastructure developer, because it has a lot of integration points for infrastructure creation across the various clouds. Another choice is Ansible, although Ansible is more useful for configuration management; I will speak about where each of them is useful. There is Chef, and there are Python-based options as well. In fact, some of these tools, Ansible for instance, use Python as their underlying language.

Kamalika Majumder 18:12

Terraform uses Golang as its underlying language, and Chef uses Erlang. But Terraform and Ansible are the ones you will find most commonly used. Now, I'm not going to talk here about how to write Terraform code; it is easy enough to pick up, but that on its own will only get you to a beginner level. You will not be at an expert level of infrastructure development if you do not modularize and templatize.

So how do you modularize? The first thing is to have a base image or base setup for each of the fundamental components. Let's say you are going to have a lot of compute services, maybe a Kubernetes cluster or simple virtual machines. If you have virtual machines running as part of any setup, make sure you have a base system image. Identify the operating system you would like to use; say you are using Linux, then choose one distribution. You should not keep five different operating systems or five different versions unless there is a genuine reason, some feature that can only run on a particular Linux system. Other than that, keep it consistent across your infrastructure: if it is a Linux infrastructure, use one single version, one default version, for everything. That will make it easier to upgrade and fix things whenever you need to.

So: a base system image, and a base container image if your application is already containerized. You need both, remember, because containers are just processes running on systems, so you need the base system first, then the container. And then you need some base modules. What are modules? Modules are units of infrastructure. For instance, if you are on AWS, you first have to choose which AWS services you want to use: maybe you want the EC2 service, maybe you want a private infrastructure so you need a VPC, and so on. Some things will be common across your infrastructure. If you are running compute, you will definitely need EC2 instances, for example, or the equivalent compute instances if you are on GKE or any other cloud. So you will have a base module that creates compute instances, and a base module that creates a container. Make sure the default setup is all part of a base setup, a base image, which is again created through an automated process, for example with a tool like Packer, which can build system images and container images for any cloud provider.
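To make that concrete, here is a minimal sketch of what such an automated base-image build can look like in Packer's HCL format. It is only an illustration, not from the episode: the region, AMI filter and package list are hypothetical, and it assumes AWS and Packer 1.7 or later.

```hcl
packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = ">= 1.2.0"
    }
  }
}

locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

# One default Linux image for the whole infrastructure.
source "amazon-ebs" "base" {
  region        = "ap-south-1"
  instance_type = "t3.micro"
  ami_name      = "base-ubuntu-${local.timestamp}"
  ssh_username  = "ubuntu"

  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    owners      = ["099720109477"] # Canonical
    most_recent = true
  }
}

build {
  sources = ["source.amazon-ebs.base"]

  # Bake the common baseline into the image so every server starts identical.
  provisioner "shell" {
    inline = [
      "sudo apt-get update -y",
      "sudo apt-get install -y curl unzip chrony",
    ]
  }
}
```

Every server or container then starts from this image, rather than being patched into shape by hand afterwards.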

Kamalika Majumder 21:09

So, the first step in modularizing is to create that base setup. The next step is to create modules within your infrastructure as code. I personally like to divide my infrastructure code into what I call modules and provisioners. Modules are something you will have heard of if you are familiar with Terraform; in Ansible the rough equivalent is a role or playbook, and Chef has cookbooks, with recipes inside the cookbooks. Basically, I want to separate out the independent units of an infrastructure: compute instances are one unit, RDS is a unit, load balancers are a unit, a VPC is a unit, and maybe something else, say S3 buckets, is a unit. These are the things I know will be created every time, or most of the time, for different environments. If I need to create a dev replica of the production environment, I will need all of those components: EC2 instances, RDS, S3 buckets, load balancers, VPCs. So create a module for each of those units. That is the first step: modularization means creating independent modules.

Then comes creating provisioners. What are provisioners? They are also a form of modules, but they are the executables of the base modules you have created. For example, you can have one provisioner for your dev or non-prod environment. You can create as many environments as you want within non-prod, a development one, a regression one, a performance test environment, and create a provisioner for them. Say you create one provisioner for non-prod and one for prod. You may keep a single provisioner and try to create different environments from it; that is absolutely up to you. But sometimes, for security reasons, or because production has a bigger specification, more servers than what you have in dev, and also until you reach that mature state where everything is properly tested, you may want to keep the two separate, to avoid any unintended code changes hitting production. The idea, though, is: create modules, and then create provisioners.

So what do you have in a provisioner? A provisioner holds a full environment creation, and it calls the independent modules it needs. It will call the EC2 module, the SG module, the RDS module, hold the whole thing as a stack and create an environment. If you have done application development, you can correlate this with libraries and the actual packages, or functions and packages. So modules and provisioners divide your code in such a way that the modules are reusable in as many environments as possible. Don't just dump everything into one piece of code or one file.
If you have everything in one file, you will not be able to slice and dice different environments from it, so divide it up. Then, if you just have to launch one EC2 instance, you can do that by creating a provisioner and calling only that module, and if you have to launch the entire environment end to end, you call all the modules. That is the design of modular infrastructure as code.

Some of the other things to keep in mind in modular code: parameterization is very important. Don't hard-code values. Even when using Terraform or Ansible, I have seen people hard-code values, especially IP addresses and naming conventions, inside the code or the variables.tf file, and that is very dangerous. Try to make sure there are no hard-coded values. When you have a module and provisioner split, the values should come in only at the provisioner level, where you give them as input and the output is the environment; you supply the values when you execute it, not in the default code. That is why you need the modularization. Otherwise, if you have one module creating everything, you will hard-code every value inside it and end up overriding things, building hard-coded configuration everywhere. So parameterization is very important, and no variable values should be saved in modules, only in the provisioner.
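Here is a minimal sketch of that modules-and-provisioners split in Terraform. It is an illustration only: the module name, variables and file layout are hypothetical, and it assumes AWS purely as an example.

```hcl
# modules/compute/variables.tf: the module exposes everything as variables, nothing hard-coded
variable "name"          { type = string }
variable "instance_type" { type = string }
variable "subnet_id"     { type = string }
variable "ami_id"        { type = string }

# modules/compute/main.tf: the reusable unit of infrastructure
resource "aws_instance" "this" {
  ami           = var.ami_id
  instance_type = var.instance_type
  subnet_id     = var.subnet_id
  tags          = { Name = var.name }
}

# modules/compute/outputs.tf: outputs so the provisioner can see what was created
output "instance_id" { value = aws_instance.this.id }
output "private_ip"  { value = aws_instance.this.private_ip }

# provisioners/nonprod/main.tf: the "provisioner", an executable stack that calls the modules
variable "env"           { type = string } # e.g. "dev" or "qa", supplied at apply time
variable "base_ami_id"   { type = string } # the pre-baked base system image
variable "subnet_id"     { type = string }
variable "instance_type" { type = string }

module "app_server" {
  source        = "../../modules/compute" # could also be a versioned git or registry source
  name          = "app-${var.env}"
  ami_id        = var.base_ami_id
  subnet_id     = var.subnet_id
  instance_type = var.instance_type       # values flow in here only, at execution time
}

output "app_private_ip" { value = module.app_server.private_ip }
```

The point of the layout is that values enter only through the provisioner, so the same compute module can be reused unchanged for dev, QA or production.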

Kamalika Majumder 26:39

Outputs are very important: when you execute from the provisioner, make sure you have defined outputs, so you know that the infrastructure components are actually getting created. Then there are versions: make sure your modules are versioned, because remember, they will keep being upgraded, so keep bumping the module versions. And documentation: have well-written README files that explain the use case of the modules and their usage in the provisioners. That is very important, so keep a well-documented README for each module.

Now, some of the tools of my choice. As I mentioned earlier, I like Terraform for infrastructure creation, orchestration and execution, because of some of the features it has. It preserves the state of the infra: whenever it creates infrastructure, it collects the output and saves it in state files, and the next time you run the same code against the same infra, unless you have made changes, it will not recreate anything. Only when you add a new component will it append the changes, rather than overriding or destroying the whole infra. However, this also depends on your infrastructure code; sometimes there can be a disaster where you end up overwriting everything, so your code has to be truly idempotent, especially if you are using plain execution of commands. That is when you run the risk of re-creating infrastructure. Terraform also has a validation stage where you can see what will be created, what will be updated, and what will be left untouched. So it preserves state and validates the plan before execution, and I like that, because this is critical for infrastructure: you may be fine with creating or destroying infrastructure in non-production, but in production, especially on live systems, you cannot destroy things on the fly, even if they are stateless; you have to make sure everything is scaled properly.

The other important feature of Terraform, which I love, is that it integrates with most infrastructure providers, be they cloud providers or SaaS providers for databases. It has a Terraform provider for almost anything that is exposed: apart from the cloud ones, there is an independent provider for Kubernetes, or one for MongoDB. That is another feature I like. And, as I mentioned, versioning is very important for modules, and Terraform has a versioning concept for modules, so you can actually version them. It has clear outputs, so you can define the outputs your code is going to produce. It also has another important feature, which is workspace management. Remember I told you that you can have one provisioner for non-prod and one for prod. When you have one provisioner for non-prod, you may want multiple environments, say dev, QA, staging and performance, depending on your parallel testing needs. For that, Terraform has the concept of workspaces, where you can create a different environment from the same modules.
It is a bit like branches in Git, a similar concept; not exactly the same, but very close. When you are coding, you create a branch on your local machine, develop on it, and only when you are confident do you raise a merge request. In Terraform, similarly, you have workspaces: you can create a dev, QA or staging workspace and switch between them, and Terraform makes sure it maintains a different state file for each workspace. So when you want to create multiple non-production or testing environments, you can create different workspaces and run your code in each, with the same provisioner, just with different values per workspace. That way you can test things in parallel without impacting the other environments.
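As a rough sketch of how that can look, assuming the hypothetical compute module from earlier; the CLI commands are shown as comments and the sizing map is made up:

```hcl
# Workspaces are managed from the CLI, for example:
#   terraform workspace new dev
#   terraform workspace new qa
#   terraform workspace select qa && terraform apply
# Each workspace keeps its own state file, so the same provisioner code
# can drive several parallel environments.

locals {
  env = terraform.workspace # "default", "dev", "qa", "staging", ...
}

variable "instance_type_per_env" {
  type = map(string)
  default = {
    dev = "t3.small"  # smaller footprint for dev
    qa  = "t3.medium" # closer to production sizing for QA
  }
}

variable "base_ami_id" { type = string }
variable "subnet_id"   { type = string }

# The same module call works in every workspace; only the derived values change.
module "app_server" {
  source        = "../../modules/compute"
  name          = "app-${local.env}" # names stay unique per environment
  instance_type = var.instance_type_per_env[local.env]
  ami_id        = var.base_ami_id
  subnet_id     = var.subnet_id
}
```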

Kamalika Majumder 31:22

One thing to be careful about while developing with Terraform is that some resources may or may not be idempotent, depending on how they behave. For example, Terraform has simple resources for command execution, remote-exec and local-exec, so you can run a shell script or plain shell commands from within Terraform, but these are not inherently idempotent: Terraform will not check the idempotency of the commands you have given, it will just run them as they are. So you need to be careful if your commands are not idempotent, and test that part thoroughly.

The other tool I reach for when doing infrastructure as code is Ansible, but specifically for configuration management of self-managed software. As I mentioned earlier, there is some software you may want to set up on your own, like your monitoring tools and your scanning tools, things that not every cloud provider offers, especially anything related to security, monitoring and observability, basically related to operations. All these operational tools have to be set up on your own, and in that case Ansible is very useful, because that is what it was built for; it is a very good configuration management tool. It has roles, so you can have a monitoring role defined for your monitoring servers or your scanning servers. It also has pre-built modules for most of the common tasks, like package installation or user creation. Sometimes these operational tools need you to configure your servers in a specific way, to create certain partitions and folders and things like that, and Ansible has pre-built modules for those, so you don't have to write shell commands and figure out which one is right. And most Ansible modules are idempotent.

Now, I actually like to use a combination of Terraform and Ansible. Even when I have Ansible playbooks for configuration management, the execution part is still within Terraform, so I use Terraform to execute the Ansible code. Let's say I am setting up monitoring with Splunk, and I want to install Splunk in a Kubernetes cluster. I will create the Kubernetes cluster with Terraform, but I will probably install the Splunk software using Ansible, because that also means I can install it on something other than a Kubernetes cluster, maybe a simple VM. So Ansible plus Terraform is a good combination and gives a lot of flexibility. Sometimes you may not have a Terraform provider for the software you want to install or configure, and Ansible will help you there, even if it comes down to a shell script. Ansible is also easy to learn, because it is just YAML. But one drawback of Ansible is that, unlike Terraform, it does not maintain or store the state of the infrastructure, and that is the tricky part. That is why I do not use Ansible for infrastructure creation: although it can create infrastructure, I do not prefer it, because it does not store state files by default; you would have to do some customization, and it is not that clean. So I prefer Terraform for creation, because it stores the state files, and Ansible for the configuration part. Ansible is more about writing automation scripts,
and Terraform is more about writing the infrastructure code. So, these are some of my choices, and that is how I personally like to design my modular code, with the features I mentioned earlier.
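As an illustration of that Terraform-plus-Ansible pattern, here is one rough way it is often wired together. The playbook path, SSH variables and module outputs below are hypothetical, and it assumes ansible-playbook is available on the machine running Terraform.

```hcl
variable "ssh_user"             { type = string } # e.g. "ubuntu"
variable "ssh_private_key_path" { type = string }

# After Terraform has created the server, hand the configuration work to Ansible.
# A null_resource with local-exec is one common, if blunt, way to connect the two;
# idempotency comes from the Ansible modules inside the playbook, not from local-exec itself.
resource "null_resource" "configure_monitoring" {
  triggers = {
    instance_id = module.app_server.instance_id # re-run only when the instance changes
  }

  provisioner "local-exec" {
    command = <<-EOT
      ansible-playbook -i '${module.app_server.private_ip},' \
        -u ${var.ssh_user} --private-key ${var.ssh_private_key_path} \
        playbooks/monitoring.yml
    EOT
  }
}
```

Terraform stays the single point of execution and state, while the playbook does the software configuration, and the same playbook could later be pointed at a plain VM instead.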

Kamalika Majumder 35:23

Now, the other thing you need: once you have written the code, you have not reached environment on demand yet, you still need to test that code. This is very important; without testing you cannot release anything, especially in infrastructure. If you just go ahead and change a port number without any testing, it might bring down the entire live setup. So what I call for is not just test-driven development but test-driven delivery, especially when it comes to infrastructure, because in infrastructure everything starts with delivery: the moment it exists it starts getting used, it is not sitting on somebody's laptop, it is being used by users and testers. Test-driven delivery means test everything before you give it to somebody to use.

That also means hosting a production-like environment for your developers, the same as for QA, so that all the devs and QAs have a prod-like environment in which to test the same code that will eventually go to production. This way more bugs can be caught before production. Also introduce tests in every module; there are tools available to test Terraform code or Ansible code. Test each module, and then test each integration as well, which means each provisioner too (there is a small sketch of what a module-level test can look like after this section). So you test at the module level, you test after creating the lower environments, and you also need performance testing: before you go to production, performance test the environment you have designed for production, so you need a performance test environment as well, where the application is run and you find out whether the scaling is right. The same goes for user acceptance tests; you need tests to tell you, if you have designed a Kubernetes cluster, whether it has the right number of ingresses or entry points. Test-driven delivery is essential.

The other very important thing, alongside infrastructure as code, is pipeline as code. What is a pipeline? It is the whole assembly-line unit you use to provision your infrastructure. It can have different stages: an initial dev stage, then QA, then staging, and then production. In each stage you create an environment, test it, and then promote it to the next one. That is your pipeline, and you should put this pipeline in code as well. Most CI/CD tools have a YAML format for this, so it is very important to have a deployment and test template for your pipeline as code. And when you have pipeline as code for your infrastructure, you can also manage secrets: the usernames and passwords, which used to be stored somewhere, can be fetched automatically at runtime, so you do not need to save them in any file or on any server. When the pipeline runs, it calls a secrets management system from which the secrets are taken, and that is it. The point is to put this entire assembly-line process into code as well.
Some examples: if you are on GitLab, you have the .gitlab-ci.yml file; if you are using Jenkins, it has Groovy-based Jenkinsfiles; and likewise, every CI/CD tool we have today has a pipeline-as-code mechanism. So make sure you have pipeline as code, again for traceability, so that tomorrow, if something additional needs to be added or something needs to be promoted, you can always do it from code and it is not just a manual change in the UI.
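To give a flavour of the module-level testing mentioned above, here is a minimal sketch using Terraform's own test framework (available in Terraform 1.6 and later); other tools exist for the same job, and the file name, variable and resource address here are hypothetical, assuming a network module that declares an aws_vpc named "this".

```hcl
# modules/network/tests/vpc.tftest.hcl (hypothetical)
# Run with `terraform test`: plan the module with known inputs and assert on
# what would be created, before any real environment is touched.

run "vpc_gets_the_expected_cidr" {
  command = plan

  variables {
    vpc_cidr = "10.20.0.0/16"
    name     = "test-vpc"
  }

  assert {
    condition     = aws_vpc.this.cidr_block == "10.20.0.0/16"
    error_message = "VPC CIDR block does not match the input variable"
  }
}
```

A check like this can then run in the pipeline on every change to the module, before any provisioner is applied to a shared environment.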

Kamalika Majumder 39:30

Another important thing for an environment on demand is zero-downtime deployment. That means that when you are extending or making changes to your infrastructure, there is zero downtime for your application. How do you achieve that? You can achieve it using stretched clusters: if you are on the cloud, make sure the clusters are stretched across all the zones available in that region. Also use a deployment model drawn from the strategies of blue-green, rolling and canary deployments for platform upgrades, especially the bigger ones, say a Kubernetes upgrade or a PostgreSQL upgrade, which usually have a very visible impact on the application. And ensure high availability of persistent storage, using network file storage or geo-replication across data centers, because, especially when you are doing a database upgrade, it is very important that the underlying storage is also replicated across locations.

Another very important thing is to not override changes: make sure your application and infrastructure code is idempotent, so that it is not overwriting your configurations but amending them. Auto-scaling matters too, because once you have set up an environment on demand it should also include auto-scaling: it scales out when the load is high and scales in when the load is low, and only then does the environment actually follow your demand.

Last but not least, once you have all these configurations in place, it is very important that you control it all from one place, and that is why you need a centralized command centre. That is the reason I spoke about a single source of truth at the very beginning, and why you should treat your infrastructure as code: so that you have one place which is the source of truth, and any change that goes in there triggers a new deployment. Use a version control system, and avoid manual changes such as logging into a server and running a script by hand. Don't do that; have pipeline as code, use a CI system to run that pipeline, and within it run the provisioners, which call the modules with the values provided and create the infrastructure, all controlled from a centralized place, a centralized cloud administration.

This will also help you prevent vendor lock-in. If you have different modules for, say, AWS, GCP and Azure, you can pick and choose. If you have to leave AWS for some reason tomorrow, you can always stand up a GCP environment, because your provisioners just need to switch from the AWS modules to the GCP modules. You will have that flexibility and will not have to scratch your head. To be very honest, quite recently I did a major cloud migration from Alibaba Cloud to GCP, and half the job was easier because we followed an infrastructure as code model and used cloud-agnostic tools. This is also very important for your internal teams, because they don't have to learn everything from scratch; they still use the same language for programming their infrastructure, they are just changing providers.
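As one small illustration of the stretched-across-zones and auto-scaling idea, here is a hedged sketch in Terraform, again assuming AWS; the names, image and subnet variables are hypothetical.

```hcl
variable "env"                { type = string }
variable "base_ami_id"        { type = string }
variable "private_subnet_ids" { type = list(string) } # one subnet per availability zone

resource "aws_launch_template" "app" {
  name_prefix   = "app-"
  image_id      = var.base_ami_id # the pre-baked base image
  instance_type = "t3.medium"
}

# An auto scaling group stretched across several availability zones, so losing one
# zone or replacing instances does not take the application down.
resource "aws_autoscaling_group" "app" {
  name                = "app-${var.env}"
  min_size            = 2  # scale in when traffic drops
  max_size            = 20 # scale out on demand
  desired_capacity    = 2
  vpc_zone_identifier = var.private_subnet_ids

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }

  # Replace instances gradually instead of all at once (a rolling-style refresh).
  instance_refresh {
    strategy = "Rolling"
    preferences {
      min_healthy_percentage = 90
    }
  }
}
```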

Kamalika Majumder 43:06

This also builds better observability: you know what is happening in your infrastructure when you have a centralized command centre. So, what do you achieve from all of this, from having an environment on demand? The first and foremost fruit of this effort is more stable and tested releases; you will have reliable and consistent releases, which is very important, because you can trust your releases and trust your infrastructure. One very common habit is that whenever anything breaks in production, the first and usual suspect is the infrastructure. But if you build everything in code, traceable and testable and with reports, people will not suspect the infrastructure; they will focus on whatever is actually at fault, maybe a third-party issue or an application code issue. It becomes very easy to resolve such disputes.

You will have a production-like environment at your disposal at any time; that becomes a reality. You will have consistency across all environments, because you have one template and you follow it in the lower and upper environments. You will have an automated and scalable environment on demand, and fully tested configuration management. Production go-lives can happen multiple times a week, multiple deployments every week. Overall testing time comes down, because once everything is tested you will not need to run the full regression suite every time; it is already being tested at a smaller scale, at the module level and the provisioner level, so at the final stage you just need to validate certain things, since the big-bang testing has already been covered. Testing, overall, becomes faster.

So, in summary, what does environment on demand say? It says: have modelled server configurations, parameterize everything, keep the relationships with other servers and the packages in code, and have configuration management and infrastructure as code. We can solve the issues we see today if we follow these steps, plus version control the code. Host a production-like environment at your lowest level, maybe from the development phase or even on the development laptop itself, so that you can catch more bugs and try and test your code as many times as possible before you go to production. Build a scalable environment on demand, that is, an environment that can be brought up, tested with infrastructure as code, destroyed and recreated on demand in a few minutes, and which really gives you immutable infrastructure.

I hope you liked today's episode and that I was able to cover the idea of environment on demand. This marks the 10th and final factor of the Anatomy of Modern Infrastructure, in the framework I have covered so far: 10-factor infrastructure, meaning the top 10 factors that help you build a robust Modern Infrastructure. So this is the last of the factors, and in the next episode I will do a quick rewind of the 10-factor infra to recap it before we move on to our next season, which will be about another interesting aspect of Modern Infrastructure. Do let me know how you liked this episode, and if you have any questions or specific queries, please write to me at Cloudkata.com.
You can subscribe to the show on Spotify, Apple Podcasts, Google Podcasts and Stitcher, or you can go to Cloudkata.com, subscribe and download the transcript, or write to me in the comments section and let me know how it was helpful for you. So stay tuned for the next episode. Till then, stay healthy, stay safe.

See you. Bye bye.

Transcribed by https://otter.ai
