
Terraform for Beginners + Labs

Jun 10, 2021
Terraform is one of the most popular DevOps tools for infrastructure as code, and over the next hour or so we'll take you from zero to hero through this comprehensive, hands-on course. In this course we will simplify complex concepts using illustrations and animations, and you will get hands-on practice through our labs, which you can access for free. Yes, our labs open right in your browser, so you don't have to pay extra for cloud accounts or set up your own infrastructure. In this course, Vijin will guide you through the basics of Terraform, and we will help you not only understand the basics but also practice and gain hands-on experience.
If you are visiting our channel for the first time, don't forget to subscribe, as we upload new videos and courses all the time. Welcome to this course on Terraform for beginners. My name is Vijin Pallazhi, and I will be your instructor for this course. We will start with the fundamentals: first we will look at infrastructure as code, or IaC, the different types of IaC tools available, and their purpose in modern IT infrastructure management. Then we will look at the role of Terraform in today's IT infrastructure, and then we will learn how to install Terraform.
This is followed by the basics of the HashiCorp Configuration Language. Next, we have our first lab, where you will get your hands dirty working with HCL syntax. We will then cover the basics of Terraform, such as providers, input and output variables, attributes, and resource dependencies. After this, we will look at state in Terraform: what it is, what it is used for, and the considerations to follow when working with state. We then delve into the fundamentals, starting with the different commands provided by Terraform. This is followed by a lecture where we understand the difference between mutable and immutable infrastructure. Next, we'll look at lifecycle rules in Terraform, where we'll learn how to manage the ways in which resources are created.
After this, we will have lectures on topics such as data sources and meta-arguments, such as count and for_each, and finally we will look at version constraints in Terraform. So let's get started. First, let me introduce you to our labs. To access the labs, go to this link, which is also available in the description below. If you are visiting KodeKloud for the first time, sign up for free, and once inside you will find the lab course under your course list. The course has multiple lab scenarios, and I will let you know when to access each of them as you progress through this course. So please head over to KodeKloud to access the labs.
Let's start with how application delivery works in a traditional infrastructure model and how it evolved with the rise of technologies like cloud computing and infrastructure as code. Let's go back in time and look at how infrastructure was provisioned in the traditional IT model. Consider an organization that wants to roll out a new application. The business presents the requirements for the application, and the business analyst then gathers the business needs, analyzes them, and converts them into a set of high-level technical requirements. This is then passed to a solutions architect, who designs the architecture to be followed for the deployment of this application.
This would typically include infrastructure considerations such as the specifications and number of servers needed: front-end web servers, back-end servers, databases, load balancers, and so on. Following the traditional infrastructure model, these would have to be deployed in the organization's on-premises environment, which would mean making use of assets in the data center. If additional hardware is needed, it would have to be ordered through the procurement team, which submits a new hardware request to vendors. It can then take anywhere from a few days to weeks or even months for the hardware to be purchased and delivered to the data center. Once received, the data center field engineers would be in charge of racking and stacking the equipment, system administrators perform initial configurations, network administrators make the systems available on the network, storage administrators assign storage to the servers, and backup administrators configure backups. Finally, once the systems have been set up to the organization's standards, they can be handed over to the application teams to deploy their applications. This deployment model, which is still used quite frequently today, has quite a few disadvantages. The turnaround time can range from weeks to months, and that is just to get the systems ready to begin the deployment of the application; this includes the time it takes to procure the systems initially and then hand them over between teams.
Additionally, scaling infrastructure up or down on demand cannot be achieved quickly. The overall cost of deploying and maintaining this model is generally quite high. While some aspects of the provisioning process can be automated, multiple steps, such as racking and stacking, cabling, and other deployment procedures, are manual and time-consuming. With so many teams working on so many different tasks, the chances of human error are high, and this results in inconsistent environments. Another major disadvantage of this model is the underutilization of compute resources: the sizing activity is usually carried out well in advance, and servers are sized considering peak utilization.
The inability to easily scale up or down means that most of these resources would sit unused during off-peak hours. Over the past decade, organizations have moved to virtualization and cloud platforms to take advantage of the services provided by major cloud providers such as Amazon AWS, Microsoft Azure, Google Cloud Platform, and others. By moving to the cloud, the time to provision infrastructure and the time to market for applications are significantly reduced. This is because with the cloud, you do not need to invest in or manage the actual hardware assets as you normally would in a traditional infrastructure model.
The data center, the hardware assets, and the services are managed by the cloud provider. A virtual machine can be spun up in a cloud environment in a matter of minutes, and the time to market is reduced from several months, as with traditional infrastructure, to weeks. Infrastructure costs are reduced as well when compared to the data center management and human resource costs of the traditional model. Cloud infrastructure also comes with support for APIs, and that opens up a whole new world of opportunities for automation. Finally, the built-in elasticity and auto-scaling functionality of cloud infrastructure reduces resource wastage. With virtualization and the cloud, you can now provision infrastructure with a few clicks. While this approach is certainly faster and more efficient compared to traditional deployment methods, using the management console for resource provisioning is not always the ideal solution. It is okay to take this approach when you are dealing with a limited number of resources, but in a large organization with an elastic and highly scalable cloud environment, it is not feasible. Once provisioned, the systems still have to go through different teams, with a lot of process overhead that increases the delivery time, and the chances of human error remain high, resulting in inconsistent environments. So different organizations started solving these challenges on their own by developing their own scripts and tools. While some used simple shell scripts, others used programming languages like Python, Ruby, Perl, or PowerShell. They were all solving the same problem: trying to automate infrastructure provisioning to deploy environments faster and more consistently.
By leveraging the API capabilities of the various cloud environments, these evolved into a set of tools that came to be known as infrastructure as code. In the next lecture we will look at what infrastructure as code is in more detail. In this lecture, we will be introduced to infrastructure as code, commonly known as IaC, and we will also look at commonly used IaC tools. We previously discussed provisioning infrastructure using the management consoles of various cloud providers. The best way to provision cloud infrastructure is to codify the entire provisioning process. This way, we can write and execute code to provision, configure, update, and eventually destroy infrastructure resources.
This is called infrastructure as code, or IaC. With infrastructure as code, you can manage nearly every infrastructure component as code, such as databases, networking, storage, and even application configuration. The code you see here is a shell script; however, it is not easy to manage. It requires programming or development skills to build and maintain, it needs a lot of logic to be coded in, and it is not easily reusable. That's where tools like Terraform and Ansible help, with code that is easy to learn, human readable, and easy to maintain. A large shell script can now be converted into a simple Terraform configuration file like this. With infrastructure as code, we can define infrastructure resources using a simple, high-level, human-readable language. Here is another example, where we use Ansible to provision three AWS EC2 instances making use of a specific AMI. Although Ansible and Terraform are both IaC tools, they have some key differences in what they are trying to achieve, and as a result, they have some very different use cases; we will look at these differences next. There are several different tools in the infrastructure-as-code family: Ansible, Terraform, Puppet, CloudFormation, Packer, SaltStack, Vagrant, Docker, and so on. Although you may be able to use any of these tools to design similar solutions, they have all been created to address a specific goal. With that in mind, IaC tools can be broadly classified into three types: configuration management tools, into which Ansible, Puppet, and SaltStack fall; server templating tools, into which Docker, Packer, and Vagrant fall; and finally, infrastructure provisioning tools, such as Terraform and CloudFormation. Let's look at each of these in a little more detail.
The first type of IaC tool we will look at is configuration management tools. These include tools like Ansible, Chef, Puppet, and SaltStack, which are commonly used to install and manage software on existing infrastructure resources, such as servers, databases, network devices, and so on. Unlike the ad-hoc shell scripts we saw earlier, configuration management tools maintain a consistent code structure and standard, making the code easier to manage and update as needed. They are also designed to run on multiple remote resources at once. An Ansible playbook or role can be checked into a version control repository, which allows us to reuse and distribute it as needed.
However, perhaps the most important feature of a configuration management tool is idempotency. This means that you can run the code multiple times, and each time you run it, it will only make the changes necessary to bring the environment to the defined state; anything already in place is left as is, without us having to write any additional code. Next, let's look at server templating tools. These are tools like Docker, Vagrant, and HashiCorp's Packer, which can be used to create a custom image of a virtual machine or a container. These images already contain all the required software and dependencies installed on them, and for the most part, this eliminates the need to install software after deploying a virtual machine or container.
The most common examples of server template images are virtual machine images, such as those offered on osboxes.org, custom AMIs on Amazon AWS, and Docker images on Docker Hub or other container registries. Server templating tools also promote immutable infrastructure, unlike configuration management tools. This means that once the virtual machine or container is deployed, it is designed to remain unchanged. If changes need to be made, instead of updating the running instance, as in the case of configuration management tools like Ansible, we update the image and then re-deploy a new instance using the updated image.
We have a section on immutable infrastructure later in this course, where we'll look at it in much more detail. The last type of IaC tool, and the one specifically of interest to this course, is provisioning tools. These tools are used to provision infrastructure components using simple declarative code. These infrastructure components can range from servers such as virtual machines to databases, VPCs, subnets, security groups, storage, and almost any service, depending on the provider you choose. While CloudFormation is specifically used to deploy services on AWS, Terraform is vendor agnostic and supports provider plugins for almost all major cloud providers. In the next lecture, we will see how Terraform helps with infrastructure provisioning.
Now let's talk about Terraform and go over some of its high-level features. As we discussed, Terraform is a popular IaC tool that is specifically useful as an infrastructure provisioning tool. Terraform is a free and open-source tool developed by HashiCorp. It installs as a single binary that can be set up very quickly, allowing us to build, manage, and destroy infrastructure in a matter of minutes. One of the biggest advantages of Terraform is its ability to deploy infrastructure across multiple platforms, including private and public cloud, such as an on-premises vSphere cluster or cloud solutions such as AWS, GCP, or Azure, to name a few. These are just a few of the many resources that Terraform can manage. So how does Terraform manage infrastructure across so many different types of platforms? This is achieved through providers. A provider helps Terraform manage third-party platforms through their APIs. Providers enable Terraform to manage cloud platforms such as AWS, GCP, or Azure, as we just saw, as well as network infrastructure such as BIG-IP, Cloudflare DNS, Palo Alto Networks, and Infoblox; monitoring and data management tools such as Datadog, Grafana, Wavefront, and Sumo Logic; databases such as InfluxDB, MongoDB, MySQL, and PostgreSQL; and version control systems like GitHub, Bitbucket, or GitLab. Terraform supports hundreds of such providers and, as a result, can work with almost any infrastructure platform.
Terraform uses HCL, which stands for HashiCorp Configuration Language: a simple declarative language for defining the infrastructure resources to be provisioned as blocks of code. All infrastructure resources can be defined within configuration files that have a .tf file extension. The configuration syntax is easy to read, write, and learn for a beginner. This sample code is used to provision a new EC2 instance in the AWS cloud. The code is declarative and can be maintained in a version control system, allowing it to be distributed to other teams. We will cover HCL syntax in more detail later in this course.
We also have many hands-on labs where you will practice working with these files and gain plenty of experience by the end of this course. So, we said that the code is declarative, but what does declarative mean? The code we define is the state we want our infrastructure to be in; that is the desired state. And this, on the right, is the current state, where there is nothing. Terraform will take care of what is required to go from the current state to the desired state, without us having to worry about how to get there. So how does Terraform do that?
Terraform works in three phases: init, plan, and apply. During the init phase, Terraform initializes the project and identifies the providers to be used for the target environment. During the plan phase, Terraform drafts a plan to reach the target state. Then, in the apply phase, Terraform makes the changes needed in the target environment to reach the desired state. If for some reason the environment drifts from the desired state, a subsequent terraform apply will bring it back to the desired state by fixing only the missing components. Each object that Terraform manages is called a resource.
A resource could be a compute instance, a database server in the cloud, or an on-premises physical server that Terraform manages. Terraform manages the lifecycle of resources from provisioning through configuration to decommissioning. Terraform records the state of the infrastructure as it is seen in the real world, and based on this, it can determine what actions to take when updating resources for a particular platform. Terraform can ensure that the entire infrastructure is always in the defined state at all times. The state is a blueprint of the infrastructure deployed by Terraform. Terraform can read attributes of existing infrastructure components by configuring data sources, which can then be used to configure other resources within Terraform.
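As an illustrative sketch of the data source idea just described (the file path and resource names here are made up for illustration, not taken from the course), a data source that reads an existing file and feeds its content into a managed resource could look like this:

```hcl
# Read an existing file that was NOT created by Terraform
data "local_file" "dog" {
  filename = "/root/dog.txt"
}

# Use the data source's attribute to configure a managed resource
resource "local_file" "dog_copy" {
  filename = "/root/dog-copy.txt"
  content  = data.local_file.dog.content
}
```

Note how the resource references the data source with the `data.<type>.<name>.<attribute>` syntax; the data source is only read, never created or destroyed, by Terraform.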
Terraform can also import resources that were created outside of Terraform, either manually or through other IaC tools, and bring them under its control so that it can manage those resources going forward. Additionally, Terraform Cloud and Terraform Enterprise provide features that enable simplified collaboration between teams managing infrastructure, improved security, and a centralized UI to manage Terraform deployments. All of these features make Terraform an excellent enterprise-grade infrastructure provisioning tool. That was a quick, high-level introduction to Terraform, so let's dive in and explore all of this in much more detail in the upcoming lectures.
In this section, we will learn how to install Terraform. Terraform can be downloaded as a single binary or executable file from the downloads section at www.terraform.io. Installing Terraform is as simple as downloading this file and copying it to the system path. Once installed, we can verify the version by running the terraform version command. The latest version of Terraform as of this recording is 0.13, and we will use this version throughout the course. Terraform is supported on Windows, macOS, and several Linux-based distributions. Please note that all examples and labs used in this course will use Terraform running on a Linux machine, specifically version 0.13. And that's it; we can now start deploying resources using Terraform. As stated earlier, Terraform uses configuration files written in HCL to deploy infrastructure resources.
These files have a .tf extension and can be created using any text editor, such as Notepad or Notepad++ on Windows, command-line editors like vim or emacs on Linux, or any IDE of your choice. So what is a resource? A resource is an object that Terraform manages. It could be a file on a local host, a virtual machine in the cloud such as an EC2 instance, or services like S3 buckets, ECS, DynamoDB tables, IAM users, IAM policies, and IAM groups. It could also be resources on other major cloud providers, such as databases, compute instances, and App Engine in GCP, or Azure Active Directory in Azure. There are literally hundreds of resources that can be provisioned across most cloud and on-premises infrastructure using Terraform.
We'll look at some of these examples later in this course, but in the first few sections we'll stick to two very easy-to-understand resources: the local_file resource type and a special type of resource called random_pet. It is important to use a simple resource type to really understand the basic concepts of Terraform, such as the resource lifecycle, the HCL format, and so on. Once we have a good grasp of the basics, we can easily apply that knowledge to other real-life use cases, and we will see that in later sections of this course. In this lecture, we will cover the basics of HCL, the HashiCorp Configuration Language, and then create a resource using Terraform. First, let's understand the syntax of HCL.
An HCL file consists of blocks and arguments. A block is defined within curly braces and contains a set of arguments in key-value-pair format that represent configuration data. But what is a block, and what arguments does it contain? In its simplest form, a block contains information about the infrastructure platform and a set of resources within that platform that we want to create. For example, let's consider a simple task: we want to create a file on the local system where Terraform is installed. To do this, we first create a directory called terraform-local-file under the /root directory.
This is the directory where we will create the HCL configuration file. Once we change into this new directory, we can create a configuration file called local.tf, and inside this file we can define a resource block like this. Within the resource block, we specify the name of the file to be created, as well as its contents, using the block's arguments. Let's break down the local.tf file to understand what each line means. The first element of this file is a block, which can be identified by the curly braces. The type of block we see here is called a resource block, and it can be identified by the keyword resource at the beginning of the block. After the resource keyword comes the declaration of the type of resource that we want to create. This is a fixed value and depends on the provider where we want to create the resource; in this case, the resource type is local_file. A resource type provides two bits of information: the first is the provider, represented by the word before the underscore in the resource type (here we are making use of the local provider), and the second is the resource, represented by the word after the underscore, which in this case is file. The next and final declaration in this resource block is the resource name. This is a logical name used to identify the resource, and it can be anything, but in this case we have called it pet, since the file we are creating contains information about pets. Within this block, inside the curly braces, we define the arguments for the resource, which are written in key-value pair format. These arguments are specific to the type of resource we are creating, which in this case is local_file. The first argument is filename; to it we assign the absolute path of the file we want to create, which in this example is set to /root/pets.txt. We can also add some content to this file by making use of the content argument; to it, let's add the value "We love pets!". The words filename and content are argument names specific to the local_file resource we want to create, and they cannot be changed; in other words, the local_file resource type expects us to supply the filename and content arguments.
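Putting the pieces just described together, the local.tf configuration would look something like this sketch (the exact wording of the content string is the one used in the lecture):

```hcl
# resource block: type "local_file" (local provider, file resource),
# logical name "pet", with the two arguments this type expects
resource "local_file" "pet" {
  filename = "/root/pets.txt"
  content  = "We love pets!"
}
```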
Each type of resource has specific arguments that it expects; we'll see more of that as we progress through the course. And that's it: we have a complete HCL configuration file that we can use to create a file named pets.txt. This file will be created in the /root directory and will contain a single line of data. The resource block we see here is just one example of the configuration blocks used in HCL, but it is also the mandatory block needed to deploy a resource using Terraform. Here is an example of a resource block created to provision an AWS EC2 instance.
The resource type is aws_instance, we have named the resource webserver, and the arguments we have used here are the AMI ID and the instance type. Here is another example of a resource block used to create an AWS S3 bucket. The resource type in this case is aws_s3_bucket, the resource name we have chosen is data, and the arguments we have provided are the bucket name and the ACL. A simple Terraform workflow consists of four steps: first, write the configuration file; then run the terraform init command; next, review the execution plan using the terraform plan command; and finally, once you are ready, apply the changes using the terraform apply command. With the configuration file ready, we can now create the file resource using these Terraform commands. First, run the terraform init command. This command will check the configuration file and initialize the working directory containing the .tf file.
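The two AWS resource blocks just described might look like the following sketch; the AMI ID, instance type, and bucket name are placeholder values for illustration, not ones prescribed by the course:

```hcl
# An EC2 instance (placeholder AMI ID and instance type)
resource "aws_instance" "webserver" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t2.micro"
}

# An S3 bucket (placeholder bucket name; acl as used in this era of the AWS provider)
resource "aws_s3_bucket" "data" {
  bucket = "my-example-bucket"
  acl    = "private"
}
```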
One of the first things this command does is understand that we are making use of the local provider, based on the resource type declared in the resource block. It will then download the plugin needed to work with the resources declared in the .tf file. From the output of this command, we can see that terraform init has installed a plugin called local. Next, we are ready to create the resource, but before doing so, if we want to see the execution plan that Terraform will carry out, we can use the terraform plan command. This command shows the actions Terraform will take to create the resource. Terraform knows that it has to create resources, and this is shown in the output, similar to the output of a diff command in Git. The output has a plus symbol next to the local_file resource called pet, and it includes all the arguments we specified in the .tf file for creating the resource. You'll also notice that some default or optional arguments, which we did not specifically declare in the configuration file, are displayed as well.
On the screen, the plus symbol implies that the resource will be created. Remember that this step will not actually create the infrastructure resource; this information is provided for the user to review, to ensure that all the actions to be performed in this execution plan are as desired. After the review, we can create the resource, and for this we use the terraform apply command. This command displays the execution plan once again and then asks the user to confirm by typing yes to proceed. Once we confirm, it proceeds with the creation of the resource, which in this case is a file.
We can validate that the file was indeed created by running the cat command to view it. We can also run the terraform show command within the configuration directory to see the details of the resource we just created. This command inspects the state file and displays the details of the resource; we will learn more about this command and about state in a later lecture. We have now created our first resource using Terraform. Before we end this section, let's go back and look at the configuration blocks in the local.tf file. In this example, we used the local_file resource type, and we learned that the keyword before the underscore is the name of the provider, called local. But how do we know what resource types other than local_file are available in the local provider? And finally, how do we know what arguments the local_file resource expects?
We mentioned earlier that Terraform supports over a hundred providers, including the local provider we used in this example. Other common examples are AWS, to deploy resources in the Amazon AWS cloud, as well as Azure, GCP, Alicloud, and so on. Each of these providers has a unique list of resources that can be created on that specific platform, and each resource can have a number of required or optional arguments that are needed to create it; and we can create as many resources of each type as needed. It is impossible to remember all of these options, and of course, we don't have to: the Terraform documentation is extremely comprehensive, and it is the single source of truth we should follow. If we look up the local provider in the documentation, we can see that it has only one resource type, called local_file.
In the arguments section, we can see the several arguments that the resource block accepts, of which only one is mandatory: the filename. The rest of the arguments are optional. That's all for this lecture; now let's head over to the hands-on labs and practice working with HCL and creating our first resource using Terraform. This is an introductory video to give you a quick tour of the hands-on labs available with this course. Each lab is specially designed to help you practice and gain knowledge of the topics we cover in the associated lectures and demonstrations. Click this button to open the hands-on lab and wait for the lab environment to load.
It may take up to a minute. The lab interface has two sections: the terminal for the IaC server is on the left side, and this is where you will run commands and perform tasks based on the questions that are asked. The questions can be found in the quiz portal, which is on the right side of the split screen. There are two types of questions you can expect. The first type is a multiple-choice question, where you will have to find the answer to a specific question. Some of them may be simple, but for others you will have to inspect the Terraform configuration from the terminal. For example, to answer this question, we have to navigate to a specific path and inspect the extension of the file created within.
One way to do this is from the terminal on the left side. We can see that there is a file called main.tf created in this directory, so let's go back to the quiz portal and select .tf as our answer for this question. The second type of question is the configuration test. For these questions, you will have to perform specific tasks, such as writing Terraform configuration files and running the Terraform workflow to create, update, or destroy infrastructure. For example, here we have to run the terraform init command within the given configuration directory. This can again be done using the terminal on the left side. If you are unsure how to attempt a question, click on the hints button; it will provide you with helpful hints and point you in the right direction. Once you reach the AWS sections of this course, you will be able to work with AWS services: the labs are integrated with an AWS testing framework, which will allow you to create, update, and destroy resources in AWS. For example, this question requires us to create an IAM user named mary and then run terraform init in the configuration directory. The labs for this course are slightly different from those in our other courses. To optimize the learning experience for Terraform, we have integrated Visual Studio Code into the lab. This will allow you to use built-in Terraform extensions that will help you write Terraform configuration files quickly and efficiently. To access Visual Studio Code, click on VS Code at the top of the terminal, which will open it in a new tab. From here, we can open the configuration directories on the IaC server and navigate to the path specified in the question. We can also make use of the pre-installed Terraform extensions. To answer this question, let's create a file called iam-user.tf in the IAM directory. To easily create a resource block, let's use the command completion feature, which loads a template for the resource block. For the resource type, simply type aws and press Ctrl and the space bar together; this will list all the resource types available in AWS.
Now we can type iam and select the appropriate resource type, which in this case is aws_iam_user. We can look up arguments for this resource block in the same way: within the block, press Ctrl and the space bar to see a list of all available arguments, and choose the ones you need. Once you are ready, run the Terraform commands from the terminal. We can also do this from VS Code: right-click on the directory and click Open in Integrated Terminal. This will open a bash terminal at the bottom of the screen. Well, that's all for this video.
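As a rough sketch of what the lab question above is asking for (the exact file contents shown in the lab may differ; the name argument is the standard one for this resource type), the configuration could look like:

```hcl
# A minimal IAM user resource for the lab question
resource "aws_iam_user" "mary" {
  name = "mary"
}
```

After saving this in iam-user.tf, running terraform init and terraform apply in that directory would create the user.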
We wish you an excellent learning experience in the future. In this lecture we will learn how to update and destroy infrastructure using terraform. In the previous lecture we saw how to create a local file. Now let's see how we can update and destroy this resource using terraform. First let's try to update this resource. Let's add a file permission argument to update the file permission to 0700 instead of the default value of 0777. This will remove any permissions for everyone else except the file owner. Now, if we run terraform plan, we will see an output like this. In the output we can see that the resource will be replaced; the minus plus symbol at the beginning of the resource name in the plan implies that it will be deleted and then created again.
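As a sketch, the updated resource block might look like this (the filename and content values are illustrative, not from the lab itself):

```hcl
resource "local_file" "pet" {
  filename        = "/root/pets.txt"
  content         = "We love pets!"
  file_permission = "0700" # restrict access to the file owner; the provider default is 0777
}
```

Running terraform plan after adding the file_permission argument produces the replacement plan described here.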
The line with the comment that says forces replacement is responsible for the deletion and recreation, and in this example this is due to the file permission argument we added to the configuration file. Even though the change we made was trivial, terraform will delete the old file and then create a new file with the updated permissions. This type of infrastructure is called immutable infrastructure. We looked at this briefly when we looked at the different types of iac tools. If you want to go ahead with the change, use the terraform apply command and then type yes when prompted for confirmation. The existing file is deleted and recreated with the new permissions. To remove the infrastructure completely, run the terraform destroy command.
This command also displays the execution plan, and you can see the resource and all of its arguments have a minus symbol next to them; this indicates that the resource will be destroyed. To continue with the destroy, confirm with yes at the prompt. This will delete all the resources in the current configuration directory. In this example, it deletes the file slash root slash pets.txt. That's all for this lecture. Let's go to the hands-on labs and practice updating and deleting infrastructure resources using terraform. In this lecture, let's take a closer look at the providers we saw in the previous lecture. After writing a terraform configuration file, the first thing to do is initialize the directory with the terraform init command. When we run terraform init inside a directory that contains the configuration files,
Terraform downloads and installs the plugins for the providers used within the configuration. These can be plugins for cloud providers like aws, gcp and azure, or something as simple as the local provider we used to create a local file type resource. Terraform uses a plugin-based architecture to work with hundreds of such infrastructure platforms. Terraform providers are distributed by hashicorp and are publicly available in the terraform registry at the url registry.terraform.io. There are three tiers of providers. The first is the official provider; these are owned and maintained by hashicorp and include the major cloud providers such as aws, gcp and azure. The local provider we have used so far is also an official provider.
The second type of provider is a verified provider. A verified provider is owned and maintained by a third-party technology company that has gone through a partner provider process with hashicorp. Some examples are the bigip provider from f5 networks, heroku, digitalocean etc. And finally, we have the community providers, which are published and maintained by individual contributors of the hashicorp community. The terraform init command, when run, shows the version of the plugins being installed. In this case, we can see that the plugin hashicorp slash local, with the version 2.0.0, has been installed. Terraform init is a safe command and can be run as many times as needed without affecting the actual infrastructure that is deployed.
The plugins are downloaded to a hidden directory called dot terraform slash plugins inside the working directory containing the configuration files. In our example, the working directory is slash root slash terraform dash local dash file. The name of the plugin you see here, hashicorp slash local, is also known as the source address. This is an identifier which terraform uses to locate and download the plugin from the registry. Let's take a closer look at the format of the name. The first part of the name, which in this case is hashicorp, is the organizational namespace. It is followed by the type, which is the name of the provider, such as local. Other examples of providers are aws, azurerm, google, random etc. The plugin name can also optionally have a hostname in front of it. The hostname is the name of the registry where the plugin is located.
If omitted, it defaults to registry.terraform.io, which is hashicorp's public registry. Given the fact that the local provider is stored in the public terraform registry within the hashicorp namespace, the source address can be represented as registry.terraform.io slash hashicorp slash local, or simply as hashicorp slash local, omitting the hostname. By default, terraform installs the latest version of the provider. Plugins, especially the official ones, are constantly updated with newer versions. This is done to introduce new features or add bug fixes, and these can introduce breaking changes to your code. We can lock our configuration files to make use of a specific version of the provider. We will also see how to do this later in this course.
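As a preview of that later lecture, one way to pin a provider version, sketched here assuming terraform 0.13 or later syntax, is a required_providers block:

```hcl
terraform {
  required_providers {
    local = {
      source  = "hashicorp/local" # the hostname part defaults to registry.terraform.io
      version = "2.0.0"           # pin a specific version to avoid surprise breaking changes
    }
  }
}
```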
Now let's take a look at the configuration directory and file naming conventions used in terraform. Until now we have been working with a single configuration file called local.tf, and this is inside the directory called terraform dash local dash file, which is our configuration directory. This directory is not limited to one configuration file. We can create another configuration file like this: cat.tf is another configuration file that uses the same local underscore file resource. When applied, it will create a new file called cat.txt. Terraform will consider any file with the dot tf extension within the configuration directory.
Another common practice is to have a single configuration file that contains all the resource blocks needed to provision the infrastructure. A single configuration file can have as many resource blocks as needed. A common naming convention used for such a configuration file is to call it main.tf. There are other configuration files that can be created within the directory, such as variables.tf, outputs.tf and providers.tf. We will talk more about these files in later sections of this course. Now let's move on to the hands-on labs and explore how to work with providers. In this lecture we will see how to use multiple providers and resources in terraform. So far,
We have been using a single provider called local to deploy a local file on the system. Terraform also supports the use of multiple providers within the same configuration. To illustrate this, let's use another provider called random. This provider allows us to create random resources, like a random id, a random integer, a random password, etc. Let's see how to use this provider and create a resource called random pet. This resource type will generate a random pet name when applied. Using the documentation, we can add a resource block to the existing main.tf file like this. Here we are making use of the resource type called random pet. In a previous lecture we saw that the resource type can be split into two parts: the keyword before the underscore is the provider, which in this case is random, and the keyword that follows is the resource type, which is pet. Let's call this resource mypet. Inside this resource block, we will use three arguments: one is the prefix that will be added before the pet name, and the second argument is the separator between the prefix and the pet name that is generated.
The final argument is length, which is the length of the pet name to be generated, in words. Our main.tf file now has resource definitions for two different providers: a resource of the local file type that we have already created above, and another resource of the random pet type. Before generating an execution plan and creating these resources, we have to run the terraform init command again. This is a mandatory step, as the plugin for the random provider needs to be initialized in the configuration directory before we can use it.
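Putting those three arguments together, the random pet resource block might look like this (the specific prefix, separator and length values are illustrative):

```hcl
resource "random_pet" "my-pet" {
  prefix    = "mrs" # prepended to the generated pet name
  separator = "."   # placed between the prefix and the name
  length    = 1     # number of words in the generated name
}
```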
In the output of the terraform init command, we can see that the local provider was previously installed and will be reused. The plugin for the random provider, on the other hand, will be installed, since it was not used before. Now run terraform plan to review the execution plan. As expected, the local file resource called pet will not be updated, as it is unchanged from before, and a new resource with the name mypet will be created based on the new resource block we just added. Now let's apply the configuration using terraform apply. As expected, the local file resource is left as is, but a new resource called mypet has now been created. The random provider is a logical provider, and it displays the result for the pet name on the screen like this, in an attribute called id that contains the pet name, once the apply command finishes.
Before continuing, please note that in our illustrations the dog icon represents a pet, and we will use it throughout this course. Of course, random pet can generate any pet name; it doesn't have to specifically be a dog. Now let's move on to the hands-on labs and explore working with multiple providers in terraform. In this lecture we will see how to use variables
We want to ensure that the same code can be used over and over again to deploy resources based on a set of input variables that can be provided at runtime and that's where input variables come into the picture, just like in any general purpose programming language, like bash scripting or powershell, we can make use of input variables in terraform to assign variables, create a new configuration file called variables.tf and define the values ​​in this way, the file variables.tf al Just like the main.tf file consists of blocks and arguments to create a variable, use the keyword called variable, this is followed by variable name, this can have any name, but as standard use an appropriate name, such as the name of the argument for which we are using the variable within this block.
we can provide a default value for the variable. This is an optional parameter, but it is a quick and easy way to assign values to variables. We will look at the other methods of doing this in a later lecture. Great, now we have our variable configuration file, but how do we use it inside the main.tf file? To do this, we can replace the argument values with the variable names prefixed with var, like this. When using variables, you don't need to enclose the values in double quotes as you would when providing actual values. Using the same execution flow we have seen many times so far, we can create the resources using terraform plan followed by terraform apply. The resources have now been created as expected. Now, if you want to perform an update to the resources by making changes to the existing arguments,
we can do this by simply updating the variables.tf file. There is no need to modify the main.tf file. For example, let's update the local file resource to create the file at the same location but with updated content that says my favorite pet is mrs whiskers, and for the random pet resource let's change the length of the pet name to two. As expected, when we run terraform apply it recreates the resources: the content of the file has been changed, and the pet name now has two words after the prefix. Before we conclude this lecture,
here is an example of what our configuration files would look like when creating an ec2 instance in aws with terraform while using input variables. Don't worry if the resource block and arguments are unfamiliar; we have a separate lecture where we will make use of aws resources later in the course. In this lecture, we will take a closer look at the variable block in terraform. Let's look at the different arguments that a variable block uses. The variable block in terraform accepts three parameters. The first one, which we have already used, is the default parameter. This is where we specify the default value for a variable.
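As a sketch of the pattern discussed so far, a variable with a default and its use from main.tf might look like this (the names and values are illustrative):

```hcl
# variables.tf
variable "filename" {
  default = "/root/pets.txt"
}

# main.tf -- reference the variable with the var. prefix, without quotes
resource "local_file" "pet" {
  filename = var.filename
  content  = "We love pets!"
}
```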
The others are type and description. The description is optional, but it is good practice to use this argument to describe what the variable is used for. The type argument is also optional, but when used it enforces the type of the variable being used. The basic variable types that can be used are string, number and boolean. String variables, as we have seen in our examples so far, accept a single value which can be alphanumeric, that is, consisting of alphabets and numbers. The number variable type accepts a single numeric value, which can be positive or negative, and the boolean variable type accepts a value of true or false. The type parameter, as mentioned above, is optional; if not specified in the variable block, it is set to type any by default. In addition to these three simple variable types, terraform also supports additional types such as list, map, set, object and tuple. Let's now see how to use all of these in terraform. Let's start with list. A list is a numbered collection of values and can be defined like this. In this example we have a variable called prefix which uses a list of the values mr, mrs and sir. But why do we call it a numbered collection? Well, that is because each value, also known as an element, can be referenced by its number or index within that list. The index of a list always starts at zero. In this case the first element of the list, at index 0, is mr, the element at index 1 is mrs, and the final element, at index 2, is sir.
These variables can be accessed within a configuration file like this, with the index specified in square brackets. Hence the expression var.prefix with index 0 uses the value mr, var.prefix with index 1 uses the value mrs, and var.prefix with index 2 uses the value sir. Next, let's look at the type called map. A map is data represented in the format of key-value pairs. In the variables.tf file, let's create a new variable called file content with the type set to map. For the default values, we can specify as many key-value pairs enclosed in curly braces as needed. Here statement 1 and statement 2 are keys, and the string data that follows are the values. Now, to access a specific value from the map inside the terraform configuration file, we can make use of key lookup. In this case, we want the content of the local file resource to be the value of the key called statement2, and for that we use the expression var.file content, which is the name of the map type variable, followed by the lookup key in square brackets.
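A sketch of the list and map variables described above, with illustrative values, could look like this:

```hcl
# list: elements are referenced by a zero-based index
variable "prefix" {
  default = ["mr", "mrs", "sir"]
}

# map: values are looked up by key
variable "file-content" {
  type = map
  default = {
    "statement1" = "We love pets!"
    "statement2" = "We love animals!"
  }
}

resource "random_pet" "my-pet" {
  prefix = var.prefix[0] # index 0 -> "mr"
}

resource "local_file" "my-file" {
  filename = "/root/pets.txt"
  content  = var.file-content["statement2"] # key lookup -> "We love animals!"
}
```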
We can also combine type constraints. For example, if you want a list of elements of type string, we can declare it like this; to use a list of numbers, change it like this. If the variable values used do not match the type constraint, the terraform commands will fail. In this example, we have used a list type where the elements must be of type number, but the default values are all of type string. When we run terraform commands like plan or apply, you will see an error like this, stating that the default value does not suit the variable type constraint and that a number is required, not a string. The same applies to maps: we can also use type constraints to ensure that the values of a map are of a specific type. In the first variable block we are using a map of type string, and in the second we are making use of a map that uses numbers. Next, let's look at sets. A set is similar to a list; the difference between a set and a list is that a set cannot have duplicate elements. In these examples we have variables of type set of strings or set of numbers. The examples on the left are fine, but the ones on the right have duplicate values, which will raise an error.
Default values are declared just as you would for a list, but remember that there should not be any duplicate values here. The next type of variable we are going to look at is the object. With objects we can create complex data structures by combining all the variable types that we have seen so far. For example, consider a new variable called bella, which is the name of a cat. This variable is used to define the different characteristics of the cat, such as its name, which is a string, the color, which is also a string, the age, which is a number, the food it eats, which is a list of strings, and a boolean value which indicates if it is a favorite pet or not. Now let's assign some values to this variable: name equals bella, color equals brown, age equals seven, food is fish, chicken and turkey, and favorite pet is set to true. We can use these as the default values within a variable block like this. The last type of variable we will look at is the tuple. A tuple is similar to a list and consists of a sequence of elements. The difference between a tuple and a list is that a list uses elements of the same variable type, such as string or number, but in the case of a tuple we can make use of elements of different variable types. The types of the elements to be used in a tuple are defined between square brackets. In this example, we have three element types defined: the first is a string, the second is a number, and finally a boolean.
The values to be passed to this must be exactly three, and of those specific types, for it to work. Here we have passed the value cat to the string element, the number 7 to the number element, and true to the boolean. Adding additional elements, or an element of the wrong type, will result in an error, as seen here: if you add an additional value of dog to the variable, it will fail, since the tuple only expects three elements, of types string, number and boolean. That's all for this lecture. Let's go into the hands-on labs and explore how to work with variable types in terraform.
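The object and tuple variables described above might be sketched like this, using the values from the lecture:

```hcl
# object: combines several variable types into one structure
variable "bella" {
  type = object({
    name         = string
    color        = string
    age          = number
    food         = list(string)
    favorite_pet = bool
  })
  default = {
    name         = "bella"
    color        = "brown"
    age          = 7
    food         = ["fish", "chicken", "turkey"]
    favorite_pet = true
  }
}

# tuple: a fixed-length sequence with one declared type per element
variable "kitty" {
  type    = tuple([string, number, bool])
  default = ["cat", 7, true] # exactly three elements, matching the declared types
}
```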
In this lecture we will see the different ways in which we can make use of input variables in terraform. So far we have created input variables in terraform and assigned default values to them based on the variable type. This is just one of the ways to pass values to variables. Earlier we learned that the default parameter in a variable block is optional. This means we can have our variable block look like this. But what would happen if we run terraform commands now? When we run terraform apply, we will be asked to enter
values for each variable used, in an interactive mode. If you don't want to provide values interactively, we can also make use of command line flags. With the terraform command, we can make use of the dash var option in a variable name equals value format. We can pass as many variables as we want with this method by using the dash var flag multiple times. We can also make use of environment variables, named TF underscore VAR underscore followed by the name of a declared variable, like this. In this example, TF underscore VAR underscore filename sets the value of the variable called filename to slash root slash pets.txt, and similarly the variable called length now has the value of 2. And finally, when we are dealing with many variables, we can load values using variable definition files, like this. These variable definition files can have any name, but should always end in .tfvars or .tfvars.json.
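Such a variable definition file is just a set of plain assignments in hcl syntax; a sketch with illustrative values:

```hcl
# terraform.tfvars -- loaded automatically by terraform
filename  = "/root/pets.txt"
content   = "We love pets!"
prefix    = "mrs"
separator = "."
length    = 2
```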
Here we have declared the variables and values in a file called terraform.tfvars. If you look at the syntax used to create this file, you will notice that it uses the same syntax as an hcl file but consists only of variable assignments. The variable definition file, if called terraform.tfvars or terraform.tfvars.json, or with any other name ending in .auto.tfvars or .auto.tfvars.json, will be automatically loaded by terraform. If you use any other file name, like variables.tfvars for example, you will have to pass it along with a command line flag called dash var dash file, like this. Finally, it is important to note that we can use any of the options we have seen in this lecture to assign values to variables, but if we use multiple ways to assign values to the same variable, terraform follows the variable definition precedence to understand which value it should accept. To illustrate this, let's use a simple example. In this case we have a main configuration file with a single resource, a local file that will create a file at a path declared in a variable called filename. In the variables.tf file we have not specified a default value for this variable, and we have assigned different values to it in multiple ways. We have exported the environment variable called TF underscore VAR underscore filename with the value of slash root slash cats.txt. The terraform.tfvars file has a value of slash root slash pets.txt for the same variable. We have also made use of a variable definition file named variable.auto.tfvars with the value slash root slash mypet.txt, and finally we are also making use of the dash var option while executing the terraform apply command, with the value of slash root slash best-pet.txt. So, in this case, which of these values would be accepted? Terraform follows an order of variable definition precedence to determine this: first it loads the environment variables, then the value in the terraform.tfvars file, followed by any file ending with .auto.tfvars or .auto.tfvars.json, in alphabetical order,
and finally terraform considers the dash var command line flag or the dash var dash file flag, which has the highest priority and will override any of the previous values. In this case, the variable filename will be assigned the value of slash root slash best-pet.txt. That's all for this lecture. Let's go to the hands-on lab and practice working with the concepts that we learned in this lecture. In this lecture we will learn how to link two resources using resource attributes.
In the last lectures we saw how to use variables to improve the reusability of our code. We now have two resources in our configuration file. Each of these resources has a set of arguments that are used to create that resource. For the file resource we have used filename and content as arguments, and for the pet resource we have used prefix, separator and length. When this configuration is applied, terraform creates a file and a random pet resource. The random pet name is shown on the screen as an id, which is mr.bull in this case. As it is, there is no dependency between these two resources, but this rarely happens in a real-world infrastructure provisioning process. There are likely to be multiple resources that depend on each other, so what if you want to make use of the output of one resource and use it as input for another?
What if we want the file content to use the name generated by the random pet resource? Currently the content of the file is set to my favorite pet is mr cat, but what if we want it to be set to the name generated by the random pet resource? To understand this, let's go back to the documentation for the random pet resource at registry.terraform.io. You may have noticed that there are many examples provided in the documentation, including examples of the arguments, but there is also a section called attribute reference, which provides the list of attributes returned by the resource after running terraform apply.
In this case, the random pet resource returns only one attribute called id, which is of type string. The id is the name of the pet generated after running terraform apply, as we have seen before. Our goal is to make use of the attribute called id and use it as the content of the local file resource. For this, we can make use of an expression like this. This expression is used to reference the attribute of the resource called mypet. The syntax for this reference expression is the resource type, followed by the resource name, and then the attribute to be used, all separated by a period or dot. The resource type in this example is random pet, followed by mypet, which is the resource name, and the attribute used is id.
You may also have noticed that we are using the dollar sign followed by the expression enclosed within curly braces. This is known as an interpolation sequence. Since the content argument already uses data of type string, this sequence is used to evaluate the expression given inside the curly braces, convert the result to a string, and then insert it into the final string, like this. Let's apply the changes and allow terraform to recreate the local file. In the output, you can see that the content is replaced with our desired value and contains the name of the pet that was generated by the random pet resource.
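The reference expression and interpolation sequence described here can be sketched as follows (resource names and prefix are illustrative):

```hcl
resource "local_file" "pet" {
  filename = "/root/pets.txt"
  # interpolation: <resource_type>.<resource_name>.<attribute> inside ${ }
  content  = "My favorite pet is ${random_pet.my-pet.id}"
}

resource "random_pet" "my-pet" {
  prefix    = "mr"
  separator = "."
  length    = 1
}
```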
That's all for this lecture. Now let's move on to the hands-on labs and explore how to work with reference expressions and resource attributes in terraform. In this lecture we will look at the different types of resource dependencies in terraform. In the previous lecture we saw how to link one resource to another using resource attributes. Using a reference expression and interpolation, we were able to make use of the output of the random pet resource as input to the local file resource. Now, when terraform creates these resources, it knows the dependency, since the local file resource depends on the output of the random pet resource. As a result, it uses the following order to provision them: first, terraform creates the random pet resource, and then it creates the local file resource. When the resources are deleted,
terraform removes them in the reverse order: first the local file, and then the random pet. This type of dependency is called an implicit dependency. Here we do not explicitly specify which resource depends on which other resource; terraform figures it out on its own. However, there is another way to specify a dependency within the configuration file. For example, let's make use of the same configuration file, but without using the reference expression for the file content. If you still want to
Here we have added a depends on argument inside the resource block for the local file and have provided a dependency list that includes the random pet resource called mypet, this will ensure that the local file is only created after the resources have been created. random pet. This type of dependency is called explicit dependency. Explicitly specifying a dependency is only necessary when a resource depends on some other resource indirectly and does not use a reference expression as seen in this case. In later sections of this course we will see how and when to use this in a real-world use case.
Now let's move on to the hands-on labs and explore working with resource dependencies in terraform. Let's now look at output variables in terraform. So far we have used input variables and reference expressions in our terraform configuration files. Along with input variables, terraform also supports output variables. These variables can be used to store the value of an expression in terraform. For example, let's go back to the configuration file we used in the previous lecture. We already know that the random pet resource will generate a random pet name, in the attribute called id, when we apply the configuration. To save this id in an output variable called petname, we can create an output block like this. The syntax used to create this output block is the keyword output followed by the name we want to call this variable. Inside this block, the required value argument is the reference expression. We can also add a description, which is an optional argument, to describe what this output variable will be used for. Once this block has been created,
when we run terraform apply, we can see that the output variable is printed to the screen once the resource has been created. We can also make use of the terraform output command to print the value of the output variables. The terraform output command on its own will print all the output variables defined in all the files in the current configuration directory. We can also use this command to print the value of a specific output variable, like this. Now you might be wondering where we can make use of these output variables. We already saw that dependent resources can make use of reference expressions to feed the output of one resource block as input to another block, so output variables are not really necessary there. The best use of terraform output variables is when you want to quickly display details about a provisioned resource on screen, or to feed the output variables to other iac tools, such as a shell script or an ansible playbook, for configuration management and testing. Now let's go to the hands-on labs and practice working with output variables. In this lecture we will see the purpose of using state in terraform.
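Before moving on, the output block from the previous lecture can be sketched like this (the variable name and description are illustrative):

```hcl
output "petname" {
  value       = random_pet.my-pet.id # reference expression whose result is stored
  description = "the random pet name generated by the random_pet resource"
}
```

After terraform apply, running terraform output petname would print just this value.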
We already saw how terraform uses the state file to map resource configurations to real-world infrastructure. This mapping allows terraform to create execution plans when a drift is identified between the resource configuration files and the state. A state file can therefore be considered a blueprint of all the resources that terraform manages in the real world. When terraform creates a resource, it records its identity in the state, whether it is the local file resource that creates a file on the machine, a logical resource like random pet that simply returns a random pet name, or cloud resources. Each resource created and managed by terraform has a unique id, which is used to identify the resource in the real world. In addition to the mapping between the resources in the configuration and the real world, the state file also tracks metadata details such as resource dependencies.
Previously we learned that terraform supports two types of dependencies, implicit and explicit. If we inspect the example configuration file, we can see that we have three resources to provision here. The local file resource named pet depends on the random pet resource. This is evident in the content argument in the local file resource block, which uses a reference to the random pet resource. The local file resource named cat is unrelated to the others and can therefore be created in parallel with the random pet resource. When we apply this configuration, the random pet resource named mypet and the local file named cat are created first, at the same time, but the local file resource named pet can only be created after the random pet resource has been created. We can see that the local file with the name cat and the random pet resource with the name mypet are the first resources to be created, and only once that is done is the local file resource called pet created.
So far we haven't relied on state for provisioning, but what happens if we decide to remove the random pet resource and the dependent local file from the configuration? Let's now see what happens when we delete resources from the file. For example, we delete the local file and the random pet resource from the file. If we were to apply the configuration now, terraform knows it has to delete these resources; however, in what order does it delete them? The random pet resource first, or the local file? The resource dependency information is no longer available in the configuration file, since we have removed those lines. This is where terraform relies on state and the fact that it tracks metadata within the state file.
In the state file, we can clearly see that the local_file resource named pet depends on the random_pet resource. As these two resources have now been removed from the configuration, Terraform knows to delete the local_file first, followed by the random_pet resource. Another benefit of using state is performance. When dealing with a small number of resources, it may be feasible for Terraform to reconcile the state with the real-world infrastructure after each Terraform command such as plan or apply. But in the real world, Terraform would be managing hundreds or thousands of such resources, and when these resources are distributed across multiple providers, especially those in the cloud, it is not feasible for Terraform to reconcile state on every Terraform operation.
This is because in some cases it would take Terraform several seconds, or even several minutes, to fetch details about every single resource from all the providers that are configured. For larger infrastructures, this can be too slow. In such cases, the Terraform state can be used as the record of truth without having to reconcile, and this would significantly improve performance. Terraform stores a cache of attribute values for all resources in the state, and we can specifically make Terraform refer only to the state file while running commands and avoid having to refresh the state every time. To do this, we can make use of the -refresh=false flag with all the Terraform commands that make use of state. When we run plan with this flag, you can see that Terraform does not refresh the state but instead relies on the cached attributes. In this example, the content has changed in the configuration file.
The plan outlines a replacement of resources. The final state benefit we will see is collaboration when working as a team, as we have seen in previous conferences. The Terraform state file is stored in the same configuration directory in a file called terraform.tf. status In a normal scenario, this means that the status file resides in the folder or a directory on the end user's laptop. This is fine when starting out learning Terraform and implementing small projects individually, however, this is far from ideal when working as a team every day. The computer user should always have the latest state data before running terraform and ensure that no one else is running terraform at the same time.
Failure to do so can lead to unpredictable errors. In such a scenario, it is highly recommended to save the terraform.tfstate file in a remote data store instead of relying on a local copy. This allows the state to be shared among all team members securely. Examples of remote state stores are the Amazon Web Services S3 service, HashiCorp Consul, and Terraform Cloud. We will learn about remote state stores in much more detail in a later section, and we will also cover Terraform Cloud in a separate section of its own. Now let's head over to the hands-on labs and explore how to work with the Terraform state. In the previous lectures we learned about the Terraform state and its benefits. The Terraform state is the single source of truth for Terraform to understand what is deployed in the real world.
However, there are a few things to keep in mind when working with state, and we will learn about them in this lecture. State is a non-optional feature in Terraform; however, there are some considerations. The first is that the state file contains sensitive information. It contains every little detail about our infrastructure. Here is an example snippet of a state file for an AWS EC2 instance, which is essentially a virtual machine in the AWS cloud. This state file consists of all the attributes of the virtual machine that is provisioned, such as the CPU allocated, the memory, the operating system or image used, the type and size of the disks, and so on.
It also stores information such as the IP address assigned to the virtual machine, the SSH key pair used, and so on. For resources like databases, the state may also store initial passwords. When using local state, the state is stored in plain-text JSON files, and as you can see, this information can be classified as sensitive. As a result, we need to make sure that the state file is always stored in secure storage. So we have two types of files in our configuration directory: the Terraform state file, which stores the state of the infrastructure, and the Terraform configuration files, which we use to provision and manage infrastructure. When working as a team, it is considered best practice to store the Terraform configuration files in distributed version control systems such as GitHub, GitLab, or Bitbucket.
However, due to the sensitive nature of the state file, it is not recommended to store it in Git repositories; instead, store the state in remote backend systems such as AWS S3, Google Cloud Storage, Azure Storage, Terraform Cloud, etc. We'll cover how to work with remote state backends in a dedicated section of its own, but for now it's important to take note of these considerations. The Terraform state is a JSON data structure designed for internal use within Terraform. We should never attempt to manually edit state files ourselves; however, there will be situations where we may want to make changes to the state file, and in such cases we must rely on the terraform state
commands, which we will cover in a later section of the course. So far we've seen quite a few Terraform commands in action, such as terraform init, plan, and apply. Now let's take a look at some more commands available in Terraform. The first command we will take a look at is the terraform validate command. Once we write our configuration file, there is no need to run terraform plan or apply to check whether the syntax used is correct. Instead, we can make use of the terraform validate command like this, and if everything is correct with the file, we should see a validation successful message like this. If there is an error in the configuration file, the validate command will show you the line in the file that is causing the error, with hints to fix it. In this example, we have used an incorrect argument for the local_file resource: it should be file_permission and not file_permissions.
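For reference, a corrected resource block along those lines might look like this; the filename and content shown here are illustrative:

```hcl
resource "local_file" "pet" {
  filename        = "/root/pets.txt"
  content         = "We love pets!"
  # Correct argument name; "file_permissions" would be flagged
  # by terraform validate as an unsupported argument.
  file_permission = "0700"
}
```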
The next command we will look at is terraform fmt, or terraform format. This command scans the configuration files in the current working directory and formats the code into a canonical format. This is a useful command to improve the readability of the Terraform configuration files. When we run this command, the files that are modified in the configuration directory are displayed on the screen. The terraform show command prints the current state of the infrastructure as seen by Terraform. In this example, we have already created the local_file resource, and when we run the show command, it displays the current state of the resource, including all the attributes created by Terraform for that resource, such as the file name, permissions, content, and its resource ID. Additionally, we can use the -json flag to print the contents in JSON format. To see a list of all providers used in the configuration directory,
Use the terraform providers command. You can also use the mirror subcommand to copy the necessary vendor plugins. for the current configuration to another directory like this, this command will reflect the provider configuration in a new path root slash forward slash terraform new slash underscore local underscore file we saw how to use terraform output variables in one of the previous lectures if you want to print all output variables in configuration directory use terraform output command also you can print the value of a specific variable by adding the variable name at the end of output command in this way terraform update command is used to synchronize terraform with real-world infrastructure, for example, if changes are made to a resource created by terraform outside of its control, such as a manual update, the terraform update command will select it and update the state file.
This reconciliation is useful in determining what action to take during the next apply. This command will not modify any infrastructure resources, but it will modify the state file. As we saw earlier, a refresh is also run automatically by commands such as terraform plan and terraform apply, and this is done before Terraform generates an execution plan. This can, however, be bypassed by using the -refresh=false option with those commands. The terraform graph command is used to create a visual representation of the dependencies in a Terraform configuration or an execution plan. In this example, the local_file in our main.tf file has a dependency on the random_pet resource. This command can be run as soon as you have the configuration file ready, even before you have initialized the configuration directory with terraform init. On running the terraform graph command, you should see output like this.
This generated text is hard to comprehend as is, but it is a graph generated in a format called DOT. To make more sense of this graph, we can pass it through graph visualization software such as Graphviz, which we can install on Ubuntu using apt. Once installed, we can pipe the terraform graph output to the dot command that is installed by the Graphviz package and generate a graphic like this. Now we can open this file via a browser, and it should show a dependency graph like this. The root is the configuration directory where the configuration for this graph is located.
We can see that there are two resources: the local_file called pet and the random_pet resource called my-pet, which make use of the local and random providers respectively. Finally, we can see that the local_file called pet depends on the random_pet resource called my-pet, as we have used a reference expression in the local_file resource that points to the ID of the random_pet. That's all for this lecture. Now let's head over to the labs and explore working with the Terraform commands that we learned in this lecture. In this section,
We will learn about the difference between mutable and immutable infrastructure in one of the previous lectures. We saw that when Terraform updates a resource, such as updating the permissions of a local file, it first destroys it and then recreates it with the new permission. So why Terraform? Let's do that to understand this, let's use a simple example. Let's consider an application server running nginx with version 1.17, when a new version of nginx is released, we update the software running on this web server first from 1.17 to 1.18 and finally when the new version 1.19 is released. We update in the same way from 1.18 to 1.19.
This can be done in several different ways. A simple approach is to download the desired version of nginx and then use it to manually upgrade the software on the web server during a maintenance window. Of course, we can also make use of tools such as ad hoc scripts or configuration management tools like Ansible to achieve this. For high availability, instead of relying on one web server, we can have a pool of these web servers, all running the same software and code. We would have to use the same software upgrade lifecycle for each of these servers, using the same approach we used for the first web server.
This type of update is known as an in-place update, because the underlying infrastructure remains the same while the software and configuration on these servers are changed as part of the update. This is an example of mutable infrastructure. Updating software on a system can be a complex task, and in almost all cases there is likely to be a set of dependencies that have to be met before an upgrade can be carried out successfully. Let's say web servers 1 and 2 have all the dependencies met while we are trying to upgrade the version from 1.18 to 1.19, and as a result these two servers are upgraded without any issues.
On web server 3, on the other hand, the upgrade fails because it has some dependencies that are not met, and as a result it remains at version 1.18. The failure in the upgrade could be due to a number of reasons, such as network issues impacting connectivity to the software repository, a file system that is full, or a different version of the operating system running on web server 3 compared to the other two. However, the important thing to note here is that we now have a pool of three web servers in which one of these servers runs a different version of the software compared to the rest.
Over time, with multiple updates and changes to this pool of servers, there is a possibility that each of these servers drifts from the others; it may be in the software, the configuration, the operating system, etc. This is known as configuration drift. For example, after a few update windows, our three web servers could look like this: web servers 1 and 2 have nginx version 1.19 and web server 3 has version 1.18, and the three web servers may also be running slightly different versions of the operating system. This configuration drift can leave the infrastructure in a complex state, making it difficult to plan and carry out subsequent updates.
Troubleshooting issues would also be a difficult task, as each server can behave slightly differently from the others because of this configuration drift. Instead of updating the software versions on the web servers, we can spin up new web servers with the updated software version and then delete the old web servers. So when we want to update nginx from 1.17 to 1.18, a new server is provisioned with nginx version 1.18; if the update goes through, the old web server is deleted. This is known as immutable infrastructure. Immutable means unchanged, or something that you cannot change. As a consequence, with an immutable infrastructure, we cannot carry out in-place updates of the resources anymore. This does not mean that updating web servers this way cannot fail: if the update fails for any reason, the old web server will be left intact and the failed server will be removed. As a result, we do not leave much room for configuration drift to occur between our servers, ensuring that the infrastructure is left in a simple, easy-to-understand state. And, since we are working with infrastructure as code, immutability makes it easier to version the infrastructure and to roll back and forward between versions.
Terraform, as an infrastructure provisioning tool, uses this approach. Going back to our example, updating the resource block for our local_file resource and changing the permission from 777 to 700 will result in the original file being deleted and a new file being created with the updated permission. By default, Terraform destroys the resource first before creating a new one in its place. But what if we want the resource to be created first before the old one is deleted, or to ignore deletion completely? How do we do that? This is done by making use of lifecycle rules in our resource block, and we will see how to do that next.
In this section, we will learn how to set up lifecycle rules in Terraform. Earlier we saw that when Terraform updates a resource, it considers the infrastructure to be immutable and first deletes the resource before creating a new one with the updated configuration. For example, if we update the file permissions on our local_file resource from 777 to 700 and then run terraform apply, you will see that the old file is deleted first and then the new file is created. Now, this may not be a desirable approach in all cases, and sometimes you may want the updated version of the resource to be created first before the old one is deleted, or you may not want the resource to be deleted at all, even if there was a change made in its local configuration.
This can be achieved in Terraform by making use of lifecycle rules. These rules make use of the same block syntax that we have seen many times so far, and they go directly inside the resource block whose behavior we want to change. The syntax of a resource block with a lifecycle rule looks like this. Inside the lifecycle block, we add the rule that we want Terraform to adhere to while updating resources, and one such argument, or rule, is create_before_destroy. Here we have the same resource block, now updated with the lifecycle rule create_before_destroy set to true. This rule ensures that when a change in configuration forces the resource to be recreated, a new resource is created first before the old one is deleted.
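A minimal sketch of the create_before_destroy rule just described, using an illustrative local_file resource:

```hcl
resource "local_file" "pet" {
  filename        = "/root/pets.txt"
  content         = "We love pets!"
  file_permission = "0700"

  lifecycle {
    # Create the replacement resource before destroying the old one
    create_before_destroy = true
  }
}
```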
When we do not want a resource to be destroyed for any reason, we can make use of the prevent_destroy option. When it is set to true, Terraform will reject any changes that would result in the resource getting destroyed, and will display an error message like this. This is especially useful to prevent your resources from being accidentally deleted. For example, a database resource such as MySQL or PostgreSQL may not be something we want to delete once it has been provisioned. One important thing to note here is that the resource can still be destroyed if we make use of the terraform destroy command. This rule will only prevent resource deletion from changes that are made to the configuration and a subsequent apply.
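A sketch of the prevent_destroy rule; the resource shown here is illustrative:

```hcl
resource "local_file" "pet" {
  filename = "/root/pets.txt"
  content  = "We love pets!"

  lifecycle {
    # Any plan that would destroy this resource is rejected with an
    # error; "terraform destroy" itself will still remove it.
    prevent_destroy = true
  }
}
```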
The last argument type we will take a look at here is the ignore_changes rule. This lifecycle rule, when applied, will prevent a resource from being updated based on a list of attributes that we define within the lifecycle block. To understand this better, let's make use of a sample EC2 instance, which is a virtual machine in the AWS cloud. This EC2 instance is to be used as a web server and can be created with a simple resource block like this. Don't worry if this resource block and the arguments are unfamiliar to you; we will cover them in much more detail in the EC2 section of the course. For now, please note that the resource named webserver makes use of three arguments. The ami and the instance_type are used to deploy a specific type of virtual machine with a predefined specification; in this case, the values we have chosen deploy an Ubuntu server with one CPU and one GB of RAM. We are also making use of a tag called Name, with a value of ProjectA-Webserver, using the tags argument, which is of type map. When we run terraform apply, the EC2 instance is created as expected, with the tag called Name and a value of ProjectA-Webserver. Now, if
changes are made to any of these arguments, Terraform will attempt to fix it during the next apply, as expected. For example, if we modify the tag called Name and change its value from ProjectA-Webserver to, say, ProjectB-Webserver, either manually or using any other tool, Terraform will detect this change and attempt to change it back to what it was originally, which is ProjectA-Webserver. In some rare cases, we may want changes made to the Name tag through any other method to be acceptable, and we want to prevent Terraform from reverting to the previous tag. To do this, we can make use of the lifecycle block with the ignore_changes argument like this. The ignore_changes argument accepts a list, as denoted by the square brackets, and will accept any valid resource attribute. In this particular case,
we have asked Terraform to ignore changes that are made to the tags attribute of this specific EC2 instance. If a change is now made to the tags, a subsequent terraform apply should show that there are no changes to apply. The change made to the tags of the server outside of Terraform is now completely ignored, and since it's a list, we can update more elements like this. You can also replace the list with the all keyword, and this is especially useful if you do not want the resource to be modified for changes in any of its attributes. We will learn more about lifecycle rules later in the course when we work with AWS resources.
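The ignore_changes rule just described can be sketched like this; the AMI ID and tag values here are illustrative assumptions:

```hcl
resource "aws_instance" "webserver" {
  ami           = "ami-0edab43b6fa892279"
  instance_type = "t2.micro"

  tags = {
    Name = "ProjectA-Webserver"
  }

  lifecycle {
    # Out-of-band edits to the tags are left alone on the next apply;
    # replace [tags] with all to ignore changes to every attribute.
    ignore_changes = [tags]
  }
}
```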
But for now, here is a quick summary of the three argument types that we have seen in this lecture. Now let's head over to the hands-on labs and practice working with lifecycle rules in Terraform. In this section, we will take a look at data sources in Terraform. We already know that Terraform makes use of the configuration files along with the state file to provision infrastructure resources, but as we saw earlier, Terraform is just one of the infrastructure-as-code tools that can be used for provisioning. Infrastructure can be provisioned using other tools such as Puppet, CloudFormation, Ansible, SaltStack, etc., not to mention ad hoc scripts and manually provisioned infrastructure, or even resources created by Terraform from another configuration directory. For example, let's assume that a database instance was manually provisioned in the AWS cloud. Although Terraform does not manage this resource, it can read attributes such as the database name, host address, or database user, and use them to provision an application resource that is managed by Terraform.
In a simpler example, we have a local_file resource called pet created with the content "We love pets". Once this resource is provisioned, the file is created in the /root directory, and information about this file is also stored in the Terraform state file. Now, let's create a new file using a simple shell script like this. Evidently, this file is outside the control and management of Terraform at this point in time. The local_file resource that Terraform is in charge of is pet.txt in the /root directory, and it has no relationship with the local file called dogs.txt, which is also created in the /root directory. dogs.txt has a single line that says "Dogs are awesome!".
We would like Terraform to use this file as a data source and use its data as the content of our existing file called pet.txt. If we want to make use of the attributes of this new file created by the shell script, we can make use of data sources. Data sources allow Terraform to read attributes from resources that are provisioned outside its control. For example, to read the attributes from the local file called dogs.txt, we can define a data block within the configuration file like this. As you may have noticed, the data block is quite similar to the resource block. Instead of the keyword called resource, we define a data source block with the keyword called data. This is followed by the type of resource we are trying to read; in this example, it is local_file, but it can be any valid resource type for any provider supported by Terraform. Next comes the logical name into which the attributes of the resource will be read. Within the block, we have arguments, just as we have in a normal resource block. The data source block consists of arguments specific to a data source, and to know which arguments are expected, we can look up the provider documentation in the Terraform registry. For the local_file data source, we just have one argument that should be used: the filename to be read. The data read from a data source is then available under a data object in Terraform, so to use this data in the pet resource, we could use an expression such as data.local_file.dog.content. These details are, of course, available in the Terraform documentation; under data sources within the documentation, in the exported attributes, we can see that the data source for a local file exposes two attributes: the content and the Base64-encoded version of the content.
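A sketch of reading an externally created file through a data source and feeding its content into a managed resource; the file paths and the data source name "dog" are illustrative assumptions:

```hcl
data "local_file" "dog" {
  filename = "/root/dogs.txt" # created outside Terraform's control
}

resource "local_file" "pet" {
  filename = "/root/pet.txt"
  # The exported "content" attribute of the data source is used
  # as the content of the managed resource.
  content  = data.local_file.dog.content
}
```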
Let's quickly compare the two: resources are created with the resource block, and data sources are created with the data block. Resources in Terraform are used to create, update, and destroy infrastructure, whereas a data source is used to read information from a specific resource. Regular resources are also called managed resources, as they are managed by Terraform; data sources are also called data resources. That's it for this lecture. Now let's head over to the hands-on labs and practice working with data sources in Terraform. In this lecture, we will take a look at meta-arguments in Terraform. So far we have been able to create single resources, such as a local file and a random_pet resource, using Terraform. But what if we want to create multiple instances of the same resource?
For example, three local files. If we were using a shell script or some other programming language, we could create multiple files like this. In this example, we have created a bash script called create-files.sh, which uses a for loop to create empty files inside the /root directory. The files will be named pet followed by a number from 1 to 3. While we cannot use the same script within a resource block, Terraform offers several alternatives to achieve the same goal. These can be achieved by making use of specific meta-arguments in Terraform. Meta-arguments can be used within any resource block to change
the behavior of resources. We have already seen two types of meta-arguments in this course: depends_on, for defining explicit dependencies between resources, and lifecycle rules, which define how resources should be created, updated, and destroyed within Terraform. Now let's take a look at some more meta-arguments, specifically related to loops in Terraform. In this lecture, we will take a look at the for_each meta-argument and its uses in Terraform. In the previous lecture, we saw that when we use count, resources are created as a list, and this can have undesirable results when updating them. One way to overcome this is to make use of the for_each argument instead of count, like this.
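As a sketch, the for_each version might first be written like this; the variable contents are illustrative and, as discussed next, this exact form fails at plan time because the variable is a list:

```hcl
variable "filename" {
  type    = list(string)
  default = ["/root/pets.txt", "/root/dogs.txt", "/root/cats.txt"]
}

resource "local_file" "pet" {
  # Error: for_each only accepts a map or a set of strings,
  # and var.filename is a list of strings.
  for_each = var.filename
  filename = each.value
  content  = "We love pets!"
}
```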
Next, to set the filename value for each element of the list, we can use the expression each.value. However, this has a catch: if we run terraform plan now, we will see an error. The for_each argument only works with a map or a set, and in the variables.tf file, we are currently using a list containing string elements. There are a couple of ways to fix this. The first is to change the type of the variable called filename to set. In the variables lecture, we learned that a set is similar to a list, but it cannot contain duplicate elements. Once we change the variable type and then run terraform plan, we should see that
There are three files that will be created. Another way to fix this error while preserving the variable type as a list is to make use of another built-in function. This time we will use the two set function which will convert the variable from an enumerate to a set once this is done the terraform plan command should now work as expected. Now let's replicate the same scenario we did earlier with the count meta argument and remove the first element with the value slash root bets.txt from the list. When we run Terraform Plan we can now see that only one resource is set to be destroyed, the file named pets.txt, the other resources will not be modified.
To see how this works, let's create an output variable called pets to print the resource details, as we did in the count example, using the terraform output command. We can now see that the resources are stored as a map and not a list. When we use for_each instead of count, the resources are created as a map and not an ordered list. This means that the resources are no longer identified by an index, thereby getting around the problems we saw when we used count. They are now identified by the keys, the file names /root/dogs.txt and /root/cats.txt, which are used by the for_each argument in the configuration file. If we compare this to the earlier output we got when we used count, we can see the difference in how the resources are created: the count option created them as a list, and for_each created them as a map. There are a few other meta-arguments in Terraform, such as provisioners, providers, etc., which we will see later in this course. Now let's head over to the hands-on labs and practice working with the for_each meta-argument in Terraform. In this lecture, we will see how to make use of specific versions of providers in Terraform.
We saw earlier that providers use a plugin-based architecture, and most of the popular ones are available in the public Terraform registry. Without any additional configuration, the terraform init command downloads the latest version of the provider plugins that are required by the configuration files. However, this is not something we may desire every time. The functionality of a provider plugin can vary drastically between versions, and our Terraform configuration may not work as expected when using a version different from the one it was written for. Fortunately, we can make sure that a specific version of the provider is used by Terraform when we run the terraform init command.
Instructions on using a specific version of a provider are available in the provider documentation in the registry. For example, if we look up the local provider within the registry, the default version is 2.0.0, which is also the latest version as of this recording. To use a different version, click on the version tab, and it should open a drop-down with all the older versions of the provider. Let's select version 1.4.0. To use this specific version of the local provider, click on the "Use Provider" tab on the right. This should open a code block that we can copy and paste within our configuration.
Here, we are making use of a new block called terraform, which is used to configure settings related to Terraform itself. To make use of a specific version of a provider, we need to use another block called required_providers inside the terraform block. Inside the required_providers block, we can have multiple arguments, one for every provider we want to use. In this example, we have one argument with the key called local, for the local provider. The value of this argument is an object with the source address of the provider and the exact version we want to install, which in this case is 1.4.0. With the terraform block configured to use version 1.4.0 of the local provider, when we run terraform init we should see a message like this. Before we wrap up this lecture, let's take a look at the syntax used for defining version constraints in Terraform. In the configuration for the local provider, we have specified version = "1.4.0". This allows Terraform to find and download this exact version of the local provider.
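The pinned-provider block just described looks roughly like this; it follows the shape the registry's "Use Provider" tab generates for the local provider:

```hcl
terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = "1.4.0" # exact version installed by terraform init
    }
  }
}
```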
However, there are other ways of using the version constraint. If we use the not-equal-to (!=) operator instead, Terraform will ensure that this specific version is not downloaded. In this case, we have specifically asked Terraform not to use version 2.0.0, so it downloads the previous available version, which is 1.4.0. If we want Terraform to make use of a version lower than a given version, we can do that by making use of comparison operators like this, and to make use of a version greater than a specific version, we can use the greater-than operator like this. We can also combine the comparison operators like this, to make use of a specific version within a range. In this example, we want to make use of any version greater than 1.2.0 but less than 2.0.0, but also not version 1.4.0 specifically. As a result, Terraform downloads version 1.3.0, which is acceptable in this case. And finally, we can also make use of pessimistic version constraints.
These are defined by making use of the tilde greater-than (~>) operator, like this. With this operator, Terraform can download the specified version or any available incremental version based on the value we provide. For example, here we have given the value 1.2 after the ~> operator. This means that Terraform can download version 1.2, or incremental versions such as 1.3, 1.4, 1.5, up to 1.9. However, if we look at the provider documentation, we do not have a version 1.5 or anything above it; the maximum version we can make use of in this case is 1.4.0, and this is the version that is downloaded when we run terraform init. Let's use another value this time, 1.2.0, with the same pessimistic constraint operator. This time, Terraform can download version 1.2.0, or version 1.2.1, or version 1.2.2, up to 1.2.9. Again, we only have a maximum version of 1.2.2 in the registry, and that is the version that will be downloaded when we run terraform init.
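The constraint operators covered above can be summarized in one sketch; each commented-out line shows one alternative constraint, and only one would be used at a time:

```hcl
terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      # version = "1.4.0"                      # exactly 1.4.0
      # version = "!= 2.0.0"                   # any version except 2.0.0
      # version = "< 1.4.0"                    # lower than 1.4.0
      # version = "> 1.1.0"                    # greater than 1.1.0
      # version = "> 1.2.0, < 2.0.0, != 1.4.0" # within a range, excluding 1.4.0
      version = "~> 1.2"                       # 1.2 or any incremental 1.x release
    }
  }
}
```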
That is all for now. At KodeKloud, we have multiple learning paths curated just for you, to help you go from a beginner to an expert in various DevOps technologies, so don't forget to check them out. And don't forget to subscribe to this channel, as we publish more courses very often.
