
Google Cloud Platform Full Course | Google Cloud Platform Tutorial | Cloud Computing | Simplilearn

Apr 23, 2024
Hello everyone, welcome to this interesting video, the Google Cloud Platform complete course. In this video we will explore and understand all the important concepts revolving around Google Cloud Platform, starting with the basics of what GCP is, followed by the GCP domains, which comprise various services such as compute, IoT, storage, networking, etc., with practical demonstrations of these services. After that we will have a comparison between GCP, AWS and Azure, see how they differ from each other and how each maintains its individuality, and then understand some deeper concepts like GCP web hosting and GCP Cloud ML. Towards the end we will guide you through the GCP fundamentals and certification training that will help you get this certification. For this training I have with me our experienced GCP specialist.
We will guide you through the various important keynotes of Google Cloud Platform, so let's start this exciting complete course on Google Cloud Platform. Before we start, make sure to subscribe to our YouTube channel and hit the bell icon so you never miss an update from Simplilearn.

Good morning and good evening everyone, welcome to this session on the Google Cloud Platform. My name is AJ, I am a Simplilearn instructor and I also work as a cloud and big data architect for various clients around the world. In this session we will learn about Google Cloud Platform: what cloud computing is, why Google Cloud Platform, what GCP (Google Cloud Platform) is, the Google Cloud Platform domains, a use case (the Ferrero use case), and a quick demo of using Google Cloud Platform services.

When we talk about Google Cloud Platform, we also need to know what cloud computing is. Cloud computing is the use of hardware and software components to provide a service over a network; users can access the files, applications and services provided by a cloud provider from any device that has access to the Internet. In short, cloud computing is a way in which the cloud provider gives access to different services and resources, and when we talk about resources here we mean compute, memory, processing power, storage, and also the different services that cloud providers make available to users. It gives us automatic software integration, data backup and restoration, practically unlimited storage capacity, reliable use of different services and a cost-effective solution.

Now, cloud computing offers different service models: platform as a service, infrastructure as a service and software as a service. These are the prominent models that different cloud providers offer, such as Google Cloud, Amazon and Azure, and some organizations also build their own private clouds using open-source OpenStack. Let's focus more on Google Cloud Platform, what it has to offer, and how it is gaining popularity among organizations that want to work on a cloud platform or on a modernized infrastructure. Cloud computing is one of the approaches organizations can adopt to instantly benefit from modernization, using different services, platforms and newer technologies for their various requirements, such as scalability or dynamic business needs. Before you get into Google Cloud Platform, it's also good to know where you can find some useful resources about it. One such website is cloudacademy.com/library/google, where you can create a free account.
Now, that only gives you a seven-day trial, but it has a lot of videos about Google Cloud Platform and several videos related to the certification that you can access, and after the seven-day trial you can go for a paid account to get more detailed learning. This is its training library, which covers different cloud providers like Amazon, Microsoft Azure and GCP, plus other organization-specific topics; you can see the prices and also check out resources that will give you a good learning experience from these videos.
You can also create a free account on cloud.google.com, which I will walk you through later. When I say free account, that means every user who creates an account gets a free $300 credit which can be used to practice, improve your skills, and explore and learn about the different Google Cloud products. Now, back to Google Cloud Platform: let's understand why it is popular, starting with some of the biggest reasons it stands out, beginning with pricing.
Price is one of the important factors that makes Google Cloud stand out among the other cloud providers. It offers monthly pricing built according to actual usage, and billing can be per hour, per minute or even per second; there are different options, which can be found on the Google Cloud web page. Prices can be based on preemptible machines, or on reserved instances and reserved resources. I will show you the link where you can find more details about pricing.
So Google Cloud has several pricing options that suit customers' different requirements, whether they opt for infrastructure as a service, platform as a service or software as a service. One more attractive thing about Google Cloud pricing is that it offers committed use discounts: under this scheme you can buy a specific number of virtual CPU cores and amount of memory at up to a 57 percent discount on regular prices if you commit to one or three years of use. This is just one option; there are several such options you can learn about on the Google Cloud page.
Now when we talk about speed, we don't really need to question this aspect when it comes to Google services: Google provides its Cloud and Google Apps customers speeds of up to 10 Tbps through its FASTER undersea cable system. There are different types of machines that can be used, whether for compute, memory-intensive applications or storage-intensive workloads; in general, speed is one of the defining characteristics of Google Cloud services. The cable system connects major cities on the US West Coast with Japan and major hubs in Asia. This speed improves performance and leads to customer satisfaction.
When we talk about customers, every customer prefers services with low latency and high performance; they want higher speeds so they can process their data in the shortest possible time. Google provides a low-latency network infrastructure: in fact, when you use Google Cloud services you are using the same infrastructure that Google uses for its popular services like Google Search and YouTube, one of the largest video repositories in the world. Now, when we talk about big data, big data is very complex data with a large number of characteristics such as volume, velocity, variety, veracity, validity, volatility, virality, etc. If an organization works on big data, Google Cloud may be a better option, because Google has many innovative tools for cloud storage and analytics, such as BigQuery, and real-time data processing tools like Dataflow. BigQuery is a data warehouse that allows massive data processing at high speed, basically working with structured data.
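As a small illustration of what working with BigQuery looks like, here is a minimal sketch, assuming the Cloud SDK (which bundles the bq command-line tool) is installed and a project is selected; the table queried is one of Google's public sample datasets:

    # run a standard-SQL query against a public dataset from the command line
    bq query --use_legacy_sql=false \
      'SELECT name, SUM(number) AS total
       FROM `bigquery-public-data.usa_names.usa_1910_2013`
       GROUP BY name
       ORDER BY total DESC
       LIMIT 10'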
Google has also launched new machine learning tools based on artificial intelligence. There are several other services we can use from Google Cloud Platform, but first let's understand what Google Cloud Platform is and what some of its services are; even services not listed here can be found in your Google Cloud console. So what is Google Cloud Platform? GCP is a set of cloud computing services provided by Google that runs on the same infrastructure Google uses for its end-user products like YouTube, Gmail and Google Search. Among the various sets of services, you have services specific to computing requirements, and within compute you have several options: machines that are compute-optimized, memory-optimized or storage-optimized, and also preemptible machines, which means you can get a machine up and running at a much lower price than a regular machine; preemptible machines are requested on demand, and Google can reclaim them at any time.
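For instance, a preemptible VM can be requested with a single flag; this is only a sketch — the instance name, machine type, zone and image family below are illustrative placeholders, not values from the video:

    # create a preemptible (short-lived, cheaper) VM; Google may reclaim it at any time
    gcloud compute instances create demo-preemptible-vm \
      --zone=europe-west3-a \
      --machine-type=n1-standard-1 \
      --image-family=ubuntu-1804-lts \
      --image-project=ubuntu-os-cloud \
      --preemptible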
GCP has network-related services, which are very useful when you set up your applications or services around the world. It also has services specific to machine learning, which organizations interested in machine learning or artificial intelligence will find valuable, as well as innovative solutions for big data and related technologies. Now let's break these services down by the Google Cloud Platform domains. First you have compute: the compute domain provides computing and hosting in the cloud, with services such as App Engine, Compute Engine, Kubernetes Engine, Cloud Functions and Cloud Run. The storage and database domain lets applications store media files, backups and other file-like objects; the services here include Cloud Storage, Cloud SQL, Cloud Bigtable for unstructured data, Cloud Spanner, Cloud Datastore, Persistent Disk and Cloud Memorystore.
When we talk about networking, the networking domain lets us balance traffic load across resources — the resources being, as mentioned earlier, your instances (memory-optimized, CPU-optimized or otherwise) and other components — create DNS records and much more. The services here are VPC (Virtual Private Cloud), Cloud Load Balancing, Cloud Armor, Cloud CDN, Cloud Interconnect, Cloud DNS and Network Service Tiers. The big data domain allows us to process and query big data in the cloud; its services include BigQuery, Cloud Dataproc, Cloud Composer, Cloud Datalab, Cloud Dataprep, Cloud Pub/Sub (a publish/subscribe system) and Data Studio. You also have developer tools, which cover tools related to developing applications: the Cloud SDK (software development kit), Deployment Manager, Cloud Source Repositories and Cloud Test Lab. When we talk about identity and security, that is one of the main concerns of any organization or user interested in using a cloud platform, and Google Cloud has really taken care of this: the identity and security domain covers the security services.
The services here include Cloud Identity, Identity and Access Management (Cloud IAM), Identity-Aware Proxy, the Cloud Data Loss Prevention API, Security Key Enforcement, Cloud Key Management Service and many more. There are also services related to the Internet of Things, which organizations working with IoT devices or the data they generate would use heavily: Cloud IoT Core, Edge TPU and other Cloud IoT services. Cloud AI, the artificial intelligence domain, consists of services related to machine learning: Cloud AutoML, Cloud TPU, Cloud Machine Learning Engine, Cloud Job Discovery, Dialogflow Enterprise, Cloud Natural Language, Cloud Text-to-Speech and much more. If you are interested in the API platform domain, the services there include the Maps Platform, the Apigee API Platform, API Monetization, Developer Portal, API Analytics, Apigee Sense, Cloud Endpoints and Service Infrastructure. These are some of the services listed under each Google Cloud Platform domain.
Now let's look at the Ferrero use case and understand what was done here. Ferrero is one of the famous chocolate makers and ranks third among the world's chocolate and confectionery producers; it was founded in 1946 in Italy, and I'm sure you have seen Ferrero chocolates when you went out to buy some. The challenge was that Ferrero, as we know, is sold in all supermarkets and is known for its quality, and once the business grew, problems emerged — which is what happens when a business grows — related to the volume of data, the speed at which the data is generated, the variety of the data, and the demands on
its platforms supporting dynamic applications, its scalability requirements, performance requirements, and so on. It needed a data warehouse, processing and analysis system for a large customer database. There was also a big gap between the company and the people who bought its products, because the company relied on data provided by points of sale — that was one of the challenges. Ferrero wanted to create a digital ecosystem with a direct point of contact with its customers and a basis for an innovative data-driven marketing strategy. What was the solution? One of the Google Cloud Platform services is BigQuery, and this was the answer to Ferrero's challenges, since it enables super-fast and efficient data analysis.
As I mentioned, BigQuery is a data warehouse that lets you store structured data. It can be used directly as a service where you can store any amount of data, and there are different pricing models: roughly speaking, a small amount of storage and up to a terabyte of queries per month are free, and beyond that you pay for the data you store, read and process — that's where the pricing model comes into play. Using BigQuery on Google Cloud, Ferrero's business analysts were able to store and analyze massive consumer data sets in a very reliable, fast and affordable way.
Behavioral data and sales pattern reports were easy to create and automate, and analytics also allowed Ferrero to adapt its advertising across multiple marketing channels to better meet customer needs. What was the result? They were able to segment their database into actionable consumer groups in real time to generate more accurate user profiles, and Ferrero was also able to customize its marketing strategies to meet user needs. With Google Cloud Platform it adapted mobile content and website advertising and created a highly profitable media strategy. These are some basics of Google Cloud; as I said, you can always find details about service pricing, and all the services can also be accessed with your free trial account.
Now let me guide you here so you can always find the documentation for each of these services. If I click Get Started, you have Quickstarts, which show you short tutorials, and you also have trainings and certifications. If we click on Quickstarts, that takes me to a page showing quickstarts for different tasks — creating a Linux virtual machine, storing a file and sharing it, deploying a Docker container image, and so on — there are many quickstarts. On the same page you can also check out the Cloud Minute videos, and on the right side you have Docs; once you click on them you are taken to these links.
This shows you how to build solutions: your main use cases, best practices, all the solutions — it's always good to learn from the use cases that are available — and here you have the featured products and all the different services that Google Cloud offers. You can click on featured products and that shows you a list of them. There are solutions by area: enterprise architecture, databases, big data and analytics, Internet of Things, gaming, and so on. If you scroll down you will see the featured products, and it shows some of the important ones, such as Compute Engine, which belongs to the compute domain; Cloud Run; Anthos, which is for migration and cloud adoption when organizations want to move from on-premises solutions to cloud-based solutions; Vision AI; and Cloud Storage, which lets you store any type of data as an object — it acts as object storage.
You have Cloud SQL, which is an out-of-the-box service where you would use MySQL, PostgreSQL or SQL Server databases. You have BigQuery, a data warehouse that lets you store your structured data, and then you have products related to artificial intelligence and machine learning, such as Vision, Video AI, Text-to-Speech, Speech-to-Text, etc., plus different platform accelerators that can be used. In any of these cases — for example, if you click on Compute Engine, which is a featured Google Cloud product — it shows you quickstarts using Linux machines, how-to guides that tell you how to work with a VM instance, how to work with storage, how to work with persistent disks, etc., and it shows you the documentation.
You also have product and pricing options you can view for GCP prices, and you can go straight to the pricing pages. You could be looking for solutions that talk about modernizing infrastructure — something organizations are interested in when they want to move from an on-premises solution to Google Cloud. For example, if I click on infrastructure modernization, you can find case studies and the different solutions Google Cloud has; you can click on see solutions and you will find VM migration, SAP directly on Google Cloud, VMware as a service, or HPC (high-performance computing), and we will learn about these services in detail later.
Now I can go back to the same page where I was clicking on the services, where you have quickstarts, how-to guides and deep dives into different concepts. If I click on all the how-to guides, that shows me the different ways you can work with Compute Engine and with different instances. It is quite exhaustive content, but if you follow it consistently, you will learn a lot about Google Cloud.
You can also see the prices alongside the detailed tutorials, and there are other options, so let's just click on pricing here. That brings up the price list, which gives you details of the different services, which ones are available, and what the prices are. For example, if I click on Compute Engine, it shows me the pricing for Compute Engine, which belongs to the compute domain, and here you can see VM instance pricing, network pricing, pricing for sole-tenant nodes (dedicated nodes for particular organizations), GPU (graphics processing unit) pricing, and disk and image pricing. You can click on any of the links and see the prices, which also show the different machine types — N1, N2, N2D, E2, memory-optimized machine types, compute-optimized types, premium images — and you can browse all the categories. You can check the disk pricing, which covers persistent disks, whether SSD or standard; the image types; and the different network services. You can always see your machine types, choose a particular region — there is the concept of regions and availability zones here — and, depending on the region, see the prices.
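If you prefer the command line, the same machine-type catalogue can be inspected with gcloud; a small sketch, assuming the Cloud SDK is installed (the zone is just an example):

    # list machine types available in a zone, then inspect one in detail
    gcloud compute machine-types list --filter="zone:europe-west3-a" --limit=10
    gcloud compute machine-types describe n1-standard-1 --zone=europe-west3-a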
You can also see the standard prices, or look up what the free-tier machines are. If I'm specifically looking for VM instance pricing, I can click on it and that takes me to the VM instance prices: how billing works, instance uptime, resource-based pricing, and the different types of discounts such as sustained use discounts, committed use discounts, discounts for preemptible VM instances and so on. That's how you can check the prices of all the different resources: choose the resource you are interested in and look up its pricing reference so you can use it to your benefit.
If you want to see what the quotas and limits are, explore this link, and you can find a lot of technical information on the different Google Cloud products that we briefly discussed when covering the domains and what each product can do for you. You can always go back to the main page where I was showing you the different services and where we looked at the prices. You can come back to this page at cloud.google.com, where you have solutions for different products; you can click on them or check the technical documentation, which covers the featured solutions, infrastructure solutions, data center migration and so on. There really is an ocean of different services for different organizational requirements, so look at this link. What you can also do is create an account with your Gmail.
You can go to cloud.google.com and create a free account, like I created here, and then just click on Console — that's the GUI, the Google Cloud console, which allows me to work with Google Cloud. There are many options here by default when you create an account. In my case, at the top it says it's a free trial account: I have a $300 credit, of which 251 dollars are left, and 237 days remaining on my account.
Here I can see the project. By default there is a project for every user — when you log in it creates a particular project — and you can also create new projects, with each project dedicating its own services and resources. This is your dashboard, which shows your project, with a project number and a project ID that is always unique. You can click on this, and if you want, you can hide this information.
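The same project information is available from the command line; a minimal sketch (PROJECT_ID is a placeholder for your own project's ID):

    gcloud projects list                   # list the projects your account can see
    gcloud config set project PROJECT_ID   # make one of them the active project
    gcloud config get-value project        # confirm which project is active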
You can also check the documentation, and the dashboard shows graphical information about the services you may have used in the past — for example, Compute Engine, showing how much CPU percentage was used. You can go to Compute Engine as a service, and you can look at the Google Cloud Platform status. Then there is billing, which shows me the billing period for the month of April, and I can always see the itemized charges and the reports. I can see the different APIs that Google offers, and for your different kinds of work you can go to the API overview and enable or disable any API; once an API is enabled you get a new section with documentation and getting-started guides that tell you how to use it — whether you want to deploy a pre-built solution, add dynamic log and error monitoring, deploy a Hello World app, and so on.
This is your dashboard. I can click on Activity and that shows me what kind of activity I have carried out on my cloud platform. I can filter by activity type and by resource. It shows me that I created some VM instances, deleted them, updated some metadata, worked on instances, and changed some firewall rules; I also worked with some other services or APIs, creating some buckets, which are for object storage, and again working with instances and firewall rules.
I have also granted some permissions and set IAM policy, so this activity view gives me a history of the things I've done over the past few months while working on Google Cloud Platform. In the top left corner you have the hamburger menu, which is your navigation menu; when you click on it, it shows the home page, takes you to the Marketplace, and takes you to Billing. You can also see APIs and Services: there I can go to a dashboard and see a report on traffic, errors and latency, and the APIs that are enabled, such as the Compute Engine API, the BigQuery API, the BigQuery Data Transfer API, the BigQuery Storage API and Cloud Dataproc — some of the APIs or services I have used in the past,
perhaps while evaluating or using a particular product. Whenever you want to use a particular service from Google Cloud Platform, you enable its API. Here we see APIs for data processing, logging and monitoring, the Resource Manager API, Cloud SQL (which lets you use MySQL or Postgres directly), Cloud Storage and many more. If you would like to enable a particular API that is not enabled at the moment, you can always click here and search for it — for example, if I search for Dataproc, it shows me the Cloud Dataproc API, which manages Hadoop-based clusters and jobs on Google Cloud Platform.
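Enabling an API can also be done from Cloud Shell; a small sketch, with the Dataproc API used only as an example:

    gcloud services list --enabled                  # APIs already enabled in the project
    gcloud services enable dataproc.googleapis.com  # enable the Cloud Dataproc API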
With it I can spin up a Hadoop cluster on demand, run some jobs on it, and get rid of it when I'm done — so this is an API I would have to enable, and each service similarly has its own API that you can manage. Leaving the API library, I went back into APIs and Services: you can look at the library, at the credentials, and at different time intervals to see the usage of your different APIs. Coming back to the menu, you have Support, where you can contact the Google support team; with a paid account (and even with a free trial you can try), you will find someone to help you, and that works best when you have a billing account. You have IAM and Admin, which is needed when you work with different APIs or services and want to grant the relevant access. You have getting-started and security-related options, and you have Anthos, which is mainly for migration. Then you can see the different domains we discussed: under Compute you have App Engine, Compute Engine, Kubernetes Engine, Cloud Functions and Cloud Run.
On the storage side you have Bigtable, which is mainly for large tables of unstructured data and is the technology that gave rise to popular NoSQL databases like HBase and Cassandra; you have Datastore, Firestore, Filestore (file storage for data of any type), Cloud SQL for structured data, Spanner as a service, Memorystore and Data Transfer. In the networking domain you have services such as VPC (Virtual Private Cloud), load balancing, cloud-based DNS, hybrid connectivity, and network security and intelligence. Then you have options for operations, and other tools such as Cloud Build, Cloud Tasks, Container Registry, Deployment Manager and many more. Specific to big data, you have the services here, including Dataproc, which can be used to spin up clusters.
You have Pub/Sub, a publish/subscribe messaging system — Kafka, which you may have heard of, takes its idea from the same concept; you have Dataflow, IoT Core, and BigQuery, which is Google Cloud's data warehouse solution; and then you have services related to artificial intelligence and other Google solutions. So you can always find a huge list of services, or high-level solutions, offered by Google Cloud, and we can use them to test things out and work with them. So I'll give you a quick demo of different services that can be used from Google Cloud Platform. This is your Google Cloud console, and it also gives you a Cloud Shell that can be activated so you can work from the command line; there is a lot of documentation on that. You can also install the Cloud SDK on your Windows machine, and if you have it configured you can run gcloud locally, as in my case:
I had configured the Cloud SDK, so I can access it via the path I configured and use it from my Windows machine at my convenience. I can also activate the Cloud Shell here, which opens a terminal, and at any time you can open this Cloud Shell in a separate window. While the Cloud Shell is being prepared it configures your default project and metadata, and then I am logged in to my Google Cloud account from the command line. Here I can type gcloud, which shows the different options available; if you want to use a particular service, you can also ask for help, and that shows you how to manage your Google Cloud Platform — billing options, the different services, and so on — all from the Cloud Shell. For people who are learning about Google Cloud it's usually easier to go to the console and use the different services from there at first, but as I said, you can always use the command line. For example, if I type gcloud compute instances create, the documentation shows the different options you can use to work with instances — gcloud compute instances create followed by my instance name, and so on — and I will show you some examples of it. You can use the Cloud Shell and start working right away from the command line.
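A few of the first commands worth trying in Cloud Shell look roughly like this (a sketch; all of these are read-only or help commands):

    gcloud auth list                        # which account is active in this shell
    gcloud config list                      # active project and default region/zone
    gcloud compute instances list           # VM instances in the current project
    gcloud compute instances create --help  # every option for creating a VM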
However, I would suggest using the console at first; when you are more experienced you can start using the Cloud Shell to do things from the command line, and once you have full experience you can switch between them — certain things are easier from the command line and some are easier from the console, and you can use either option. Now let's go to the Google Cloud console. I can close this — I still have my Cloud Shell open if I want to return to it — and go straight to Compute Engine, where I would like to create some instances on Google Cloud Platform using the Compute Engine service, then connect to those instances and try some basic things. I can click on VM instances, and once this comes up I can create some instances. If you look here, I already have some instances created successfully and I can continue working on them.
I can create new instances and use different options while creating them, and I'll show you that in a demo in a few seconds. So let's do a quick demo on how to set up GCP instances, but before that, a quick summary. An instance, or virtual machine, is hosted on Google's infrastructure, and you can create a Google Cloud instance using the Google Cloud console (clicking the menu, going to Compute Engine and clicking VM instances), from the Google Cloud command-line tool (Cloud Shell), or using the Compute Engine API. Compute Engine instances can run the public images for Linux or Windows servers that Google provides,
with the option to create or use custom images that you build and import from your existing systems, and you can deploy Docker containers that start automatically on instances running Container-Optimized OS. When you talk about instances and projects, remember that each instance belongs to a Google Cloud console project, and a project can have one or more instances. When you talk about instances and storage options, each instance has a small bootable persistent disk (which I'll show you on later screens) that contains the operating system, and you can add more storage space if needed. When you talk about instances and networks, a project can have up to five VPC networks; VPC networks are virtual private cloud networks where you can have your resources within their own subnets, and each instance belongs to a VPC network. Instances in the same network communicate with each other over the local network,
while an instance uses the Internet to communicate with any machine, virtual or physical, outside its own network. When you talk about instances and containers, Compute Engine instances support a declarative method for launching your application using containers: you create a VM instance or a VM instance template and provide a Docker image and a launch configuration. So there are different ways to create these instances, and once you create them — say, a Linux instance — you can associate SSH keys with your Google account or your G Suite account and then manage admin and non-admin access to the instance using IAM roles. If you connect to your instance using gcloud or SSH from the console, which we will see later, Compute Engine can generate SSH keys for you and apply them to your Google Cloud or G Suite account.
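In practice that automatic key handling means connecting can be a single command; a minimal sketch (c1 is the instance name used later in this demo, and the zone is just an example):

    # gcloud generates and propagates SSH keys on first use, then opens the session
    gcloud compute ssh c1 --zone=europe-west3-a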
Now let's see how we can create Google Cloud instances from the console or from the command line, starting with instance creation using the GCP console. We've learned some basics of Google Cloud Platform — the different services and the different domains — so, using the Google Cloud console (or the Cloud Shell), let's go ahead and create some VM instances with the Compute Engine service, try to connect to them, and see how it works. Let's also look at the different options available when creating virtual machine instances: click the menu at the top left, choose Compute Engine, then click VM instances, and that takes you to this page.
Here it tells you that Compute Engine lets you create and run virtual machines on Google infrastructure, so we can create micro VMs or large instances running different Linux or Windows distributions using standard (public) images, and you can also bring your own image. Let's create a VM instance by clicking Create. It asks for the instance name — I can give it something, so let me say c1 — and I can add labels. What is a label? It lets you organize your project by adding arbitrary labels as key-value pairs to your resources.
This basically categorizes your resources and projects if you have multiple projects. Remember that if you have created your free trial account, the console will let you do all these things; otherwise you may have to go back to the billing section and check whether billing is enabled. When you create your Google Cloud account it asks for your credit card details, but nothing is charged — or they may charge one dollar or one rupee depending on your location, which is then refunded — that is just to verify your card. Once I have given the name, I can choose a region; since I am in Europe I would click on europe-west3, and here I can choose an availability zone, which is how you make your services, instances or other resources highly available.
I would choose europe-west3-a. Now we can scroll down to the machine configuration. You have general-purpose machines and memory-optimized machines (the M1 series), and you can always go back to the Google Cloud page to see what a particular type of machine specializes in. I'll click on general purpose, and within general purpose you have different categories: N1, which works with the Intel Skylake CPU platform, or E2, where the CPU platform is selected based on availability. Let's select N1. This shows me the machine type: if you are using a free-tier account you can start with an f1-micro machine, which has one virtual CPU core, go for a g1-small, which has one virtual CPU core and 1.7 GB of RAM, or go all the way up to a high-end machine, even on a free account.
Many of these machines can be used: if I were to select eight virtual CPU cores and 32 gigabytes of memory, it would still allow me to create at least two instances with this configuration. We will use n1-standard-1, which is one virtual CPU core with 3.75 GB of memory. We could also deploy a container image to this VM instance if we were interested in that, but let's not get into it now. Here it shows you the boot disk, which defaults to Debian GNU/Linux 9. I can go with this distribution or choose a Linux distribution of my choice: you have public images, custom images and also snapshots if you have created a backup of your previous images. I can choose, for example, Ubuntu, and then a particular version, so let's go for Ubuntu 18.04.
You could choose the latest version, but 18.04 will be enough. Here it asks about the boot disk: you have SSD or standard persistent disks. SSDs are a bit more expensive than standard persistent disks, but they are faster; for now we can choose the standard persistent disk and leave the size at 10 GB. Depending on your requirements you can increase it, and you can even add disks
You can also configure access for each API or you can grant full access to all cloud APIs so depending on your requirements you can do this at any time. change later we will also say allow http traffic and we can also choose to allow https traffic which basically allows me to access this machine or services that are http based and accessible from this machine. Now I can just click create however it would be nice to basically enable connectivity to this machine now we can do it in different ways one is when you open your machine you will have an ssh access that you can log into from the cloud platform here or what you can do is create a private account and public key using some software like PuTTY or PuTTY Gen for example, if you don't have it on your machine you can download it.
Simply type download PuTTY which will take you to the PuTTY.org page and here you can click on Download PuTTY. and scroll down to see your 64 bit machine which I have in my case. You can download putty.exe which is basically your ssh and telnet client to connect to your machines. You can also use puttygen which will allow you to create a one private key and one public key which I have done in my case let me show you how so what you can do is go to puttygen to get started and here you can click generate now and then, To generate this key, simply move the cursor. the top here in the empty space and that creates your key.
You can give it a name, so, for example, let's give it a username. I would give it to HDU. Now I can give a password for this one, so let me give you a simple password and whatever I can. I can also copy this public key from here and for my later use I can keep it. it is in my notepad file which I can use later and I will show you when so now we can save this private key and this will basically allow me to save my key private. I can choose the desktop and I can give it a name, so let's say. new key new key and it will be saved in a dot ppk file so save it and that's it we also have our public key and we have saved our private key now we know that when you want to connect using ssh you will need your private keys for the client and your key public must also exist, so let's take this public key, so let's do a check.
I'm going to copy this and I'm going to go back to my Google cloud console and here you can click on security so you can click on the security tab here, scroll down and just give your public key once you give it, it resolves and it shows you the name and this is good enough that I can use my ssh client to connect to this machine. I can click create and this will basically create my vm instance. It will take some time and then your instance will have an internal IP which will be displayed here. External IP and it will also show you options to connect to these machines, so this is my internal IP. ip this is my external ip that I can use to connect from a client.
I can easily connect from the option here that says ssh and I can say open in a browser window. I can even open this in a custom port. I can look at gcloud. command that you can provide from Cloud Shell or you can use another ssh, so let's first open the browser window and see if it connects so we can easily connect to our instance on the Ubuntu 18 instance that I just set up here easily in A couple of seconds now this is trying to establish a connection using your ssh keys and when it does that it basically also shows this web browser so I'm already connected and it shows my username which is the username of my cloud account and to which I have connected. this machine here using ssh didn't ask me for any password and basically now I can check what I have on my linux file system and I can log in at any time as root by doing a sudo su for example let's try to install a package and I can say apt get install vim or apt get install wget or apt-get install open ssh and all these packages already exist so it's not a problem now I can start using this instance.
I can only look at the disk for what's available, okay? gave around 10 gigabytes of which we see 8.3 gigabytes here for the dev sda1 and then 1.8 gigabytes available and you can continue using this machine. This was the easiest way to connect to a VM instance using ssh. Now what we can also do is that I can. just leave this and now I will try to connect using an external ssh client and here you can copy the public IP so when you want to connect to an instance you will have to get the public IP. Also remember that if you select and stop this. machine that will stop your billing counter and if you start it again, the internal IP will remain the same, but the external IP is the one that will change.
Obviously I can select this machine at any time and I can do a cleanup and I can delete it. Start the machine if it is stopped and I can even import the virtual machine to use later so there are different options that you can always use so this is my instance and if you want to see the details click on this. a c1 and that should basically allow you to see the details so it shows you what is the instance id what is the machine type if it is reservation specific what is the cpu platform what is the zone and all the other details in Any time if you want to edit, you can always click edit and you can change the details as you need.
Now you can also refer to the equivalent rest command to basically use the rest API to connect to this instance. For now, I have copied the public IP and I would like to connect it using PuTTY, so let's come in here and give it the hostname so we can give it ubuntu. I can give my IP address to be in my session. Now I'll click ssh. I'll go into authentication. Click Browse and this is where I need to choose the ppk file, so this is the one we created. New password. Let's select this and then I can come back to the session.
I can even save it and I can call it as my instance. save it and you can create any number of instances so you see I've created different instances here for my Google or Amazon cloud related instances and I can click open. It says that the service host key is not cached in the registry. Okay, just click on yes, and it basically says that no authentication method is supported, now this could be because we haven't enabled your ssh access, so let's look at that, now let's see if we were trying to connect using PuTTY, which one was the problem here so if I go back to my PuTTY select my instance load it it says port 22 am I giving the username which is window or sorry that is the wrong username we gave and that could be the reason why we set the user as sdu, so let's save this again and now let's try. connecting to it and it asks me for my password and you can connect it right now.
If there were any other issues related to network connectivity, then we could look at the entry and exit rules that allow us to look inside the machine now. we are connected to our machine here using the hdu user. I can log in as root and I can continue working so not only from ssh inside the cloud console but you can also use an external ssh client and connect to your machine so this is your ubuntu machine. and we can basically look at the space and that basically confirms that we are connecting to the same machine which shows 8.3 gigabytes here and 1.8 gigabytes here that we were looking at from ssh in the browser.
Now let's close this and go back to our instance page. now here you can always see the network details and this will show you the different types of rules i.e. inbound or outbound rules which basically allow you to connect to this machine from an external network or this machine to connect to an external network , so here we have different firewall rules that show the default http permission, that's your ingress rule and it tells you that it applies to everything. It shows me what are the IP ranges where I can specifically give the IP of my machine.
Shows the protocols. Shows what the different ports are. that you have used for these services, for example, rtp or ssh, which shows 22. Now you have icmp http and https at any time. If you want to make a change to these rules, you can do so by going into your network details and saying, for example, that you would do so. I want to work on firewall rules, so we have these firewall rules here. You can click on this one. Right now we are investigating the network domain and we are investigating the vpc network correctly and this shows me what are the different rules that we have now if I would like to create a different firewall rule for a different protocol.
I can always click create firewall rule. I can give you a name. Alright. I can tell what would happen if I wanted to turn on firewall logs. You can basically tell what the type is. of traffic, so inbound is applied to inbound traffic and outbound is applied to outbound traffic and then you can basically choose what are the IP ranges that you want that connection to come in or out of. You can choose a particular protocol. You can provide a protocol. here with comma separated values ​​and you can create a firewall rule, so this could benecessary depending on what services you are running, where you may want to enable access to your machine from an external service or to an external service so you can always go. to access network details from here you can go into firewall rules, create a firewall rule, apply it to your instance and restart it, so from now on we don't need to create any network details here because my rules input or output are now available.
Now I can basically stop and kill my instance so I can just stop it or since this is running, the ideal wave would be to stop it. Might as well do a reboot. Now what does reset mean? Doing a reset basically doesn't delete the machine. What it does is clean the machine and bring it to the initial state, so sometimes we may have installed certain things on our machine and we would like to clean them and at that time resetting can be useful. Basically I can click delete on the right and I can select this and I can do a clean and this is good as long as you are using a free trial account, try using different services, play with them and then you can clean so you don't waste your free billing . credit right and you can use it for meaningful things now I clicked delete and in a few seconds my instance that I had created will be deleted.
Also remember that if you are creating multiple instances you can connect from one machine to another using ssh using the private file so we can learn in detail later, so this is just a simple example of using your compute engine creating your instances by connecting to it from an internal ssh or from an external ssh client like PuTTY, where you already have I created your public and private keys, now I can click on the browser here and then I can basically exit and I can basically search for any particular service, so I just we look at the computing engine.
You have other different options, you have groups of instances. instance templates, for example, let's click on instance templates here and that basically shows you that you don't have any instance templates and this basically makes it easier, let's say for example, you're working as an administrator and you would like to create an instance template for that I can use it. you can describe a VM instance and then you can basically use this template to create different instances. You can opt for Sultan and nodes. You can search for machine images. You can also look at your disks. You can create snapshots and you can see different options. here now let's go back here and click home and that should take you back to your home page, which basically shows you if there's a particular API that you've used recently, which it shows me in the graph here so I can go to the API overview directly. and that basically shows me if there was any error if I was using the compute engine API to basically create a VM instance and that's why we saw a spike in the graph so this is a quick demo of using the engine service computing provided by Google. cloud where you can create vm instances and use those vf instances for your application installation for any other purpose now that we have seen how you use your cloud console to create an instance and also clean it, let's also understand how you can do it using your command line options and let's see what is needed or what are the different commands that you can use to create your instances now, you can always do it when you are creating an instance, you can use the compute engine that provisions resources to start the instance, so the instance basically it has different states that we can see when we are creating the instance, so basically it starts the instance, the instance goes to the preparation stage which is ready for its first boot, finally it starts and then it goes to run, so when look at the instant states, we will create the instance. and you will see that basically it will have different states, like provisioning, where resources are being allocated for example, but the instance is not running yet, then it goes to the preparation stage where the resources have been acquired, an instance is being prepared for its first startup, then the instance is starting and running. and if you are stopping an instance, it goes to the stopped state and will move to the terminated option.
You can also repair the instance and finally you can terminate your instance or clean it by stopping it and then deleting it now when we say stop and reset an instance you can stop the instance like I showed above if you don't need it anymore but if you need it for future use you can use the restart option which will basically erase the content of the instance or any state of the application and finally you can stop and end it now whenever you want to do it using your cloud console or we have seen the options now, let's also see from the command line tool how you can do it, so here I have the cloud shell that I brought from here and basically opened it in a new window, so the command line tool allows you to easily manage your compute engine resources in a more format friendly than using your compute engine's API.
Now gcloud, which is part of the Cloud SDK, is the main command here and then. You can always auto-populate the different options here, so when you want to create or work in g cloud, you can just type gcloud here and then, for example, I could say help to see different g cloud options, which will show me different options. which you can use here now we would be interested in compute instances so you could also do g cloud computing compute instances and then I can basically say create and then I can do a help now that should show me different options that work with compute instances from gcloud create a command that expects an instance name so that your Google Cloud SDK, which we can configure on our Windows machine or even your Linux machine, is a set of tools to help you manage resources and applications hosted on their gcp, which is their Google cloud platform. now here you have options like gcloud, which I'm showing you right now, you have gsutil and then you have bq, so that can also be used so you can set up Google cloud computing if you use gcloud.
Now, if we were configuring the SDK on our own Windows machine we would have to run gcloud init; in Cloud Shell it is already initialized. You can always look at your default zone and region and see what is being used; all of those values come from the project metadata. For example, to look at the metadata of my particular project I can run gcloud compute project-info describe, and since it needs my project ID I can pass --project followed by the ID, which I can copy from the console and paste here (right-click to paste, or Ctrl+V if that doesn't work). Running this shows my project information, which is the metadata configured by default, and we can see which region or zone has been configured, if any. Looking at my details, it shows my SSH keys, and you can see the default region and the available default zones.
It also shows my username and other details, so this is basically how you see the metadata that is available. At any time I can also add metadata; for example I can say that my default region should be Europe, which I was choosing earlier. I can bring the previous command back up, and where I ran gcloud compute project-info describe above, I can instead run add-metadata and specify the metadata I would like to add.
Here I can set the google-compute-default-region key and give it a region, for example europe-west3, and similarly set google-compute-default-zone and pass a value such as europe-west3-a for the availability zone. If you don't have much experience you can always do this from the console instead of from here, and if the command gives an error when passing these values you can look at the help to see the exact syntax. After that I can run gcloud init again; it says it is initializing my default configuration and asks whether I want to reinitialize this configuration from Cloud Shell or create a new configuration, so if I had updated my metadata I could run gcloud init and reinitialize my configuration with the default properties I passed. For this demo, rather than going further with metadata, we'll just set the defaults through gcloud init.
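Collected together, the project metadata commands from this part of the demo would look roughly like this (the project ID, region and zone are placeholders for whatever your own project uses):

gcloud compute project-info describe --project my-project-id   # view project-wide metadata
gcloud compute project-info add-metadata \
    --metadata google-compute-default-region=europe-west3,google-compute-default-zone=europe-west3-a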
We are going to look at a simple way to do that before creating a Compute Engine instance: in the gcloud init prompts I'll just choose option 1 to reinitialize, it asks which username to use and I select it, then which project, and I select that, then whether I want to set up a default region and compute zone, so I say yes, and it shows me the different options available. There are a lot of them; we were interested in europe-west3, so let's choose that entry (option 21 in the list), which sets my region and zone. It also confirms this: it says your project's default compute zone has been set to europe-west3-a, and that you can change it by running gcloud config set and assigning a different compute zone, so you can always use these commands.
I can get help information here, so I can just run gcloud config set and specify compute/zone with a zone name, or compute/region with a region name, if I want to do it that way; or the easiest way is what I did just now with gcloud init, which also lets you change these settings and set the defaults. You can do this as many times as you like, and you can run gcloud config unset to remove a default compute zone or compute region.
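In command form, the configuration steps just described would look roughly like this (the region and zone values are simply the ones chosen in the demo):

gcloud config set compute/region europe-west3    # set the default region
gcloud config set compute/zone europe-west3-a    # set the default zone
gcloud config list                               # check the active configuration
gcloud config unset compute/zone                 # remove the default zone again if needed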
There are other ways too: if you were working on a Linux machine you could use an export command to set the Cloud SDK environment variables CLOUDSDK_COMPUTE_ZONE and CLOUDSDK_COMPUTE_REGION with the name of your zone and region, or you could add those lines to your .bashrc file (see the sketch just below). That is useful when you have the Cloud SDK set up on your own Linux or Windows machine and want a specific zone and region applied to all of your compute-related resources. We don't need to do that here, since the default configuration is already in place, so let's start by quickly looking at the gcloud compute instances options: I can run gcloud compute instances list at any time, and if I need help I can just run gcloud compute --help, or gcloud compute instances, and it shows the different options available for working with instances.
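A rough sketch of that environment-variable approach on a Linux machine (the values are just the ones used in this demo):

export CLOUDSDK_COMPUTE_REGION=europe-west3
export CLOUDSDK_COMPUTE_ZONE=europe-west3-a
# make it permanent by appending the line to ~/.bashrc:
echo 'export CLOUDSDK_COMPUTE_ZONE=europe-west3-a' >> ~/.bashrc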
I can type instances and press Enter again and it shows me the different things I can do. Initially I can just run list, which shows the available instances, and as of now we don't have any. With list I can also specify --format and get the information as JSON, YAML or text, and I can apply a --filter, so there are different options available, and you can always run --help to see what they are. For example, the help for list shows the things you can do: you can give a name, a regular expression, restrict it to a particular zone, or use a negative filter. There are different commands available and you can always find all of those options here, as I showed you earlier. So now what we would be interested in is actually working with our compute instances.
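For reference, the list and filter options just mentioned look roughly like this (the filter expression is only an example):

gcloud compute instances list                                   # list all instances
gcloud compute instances list --format=json                     # same information as JSON
gcloud compute instances list --filter="zone:europe-west3-a"    # restrict to one zone
gcloud compute instances list --help                            # full list of options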
I can always SSH into an instance, and I can create one. I can also add and remove metadata directly on my instances, for a particular zone or a particular region, if I want to. So here what we can do is simply create an instance: we just give the instance a name and then go back to the console and see what it has done. Create is one of the options, so let's see what it does; it says you have given the create option but you have to supply a name, so let's call it e1, and that will be the name of my instance. This one easy command from Cloud Shell creates an instance: you can see the zone that was configured, it is the standard machine type, it is not preemptible, it has an internal IP and it has an external IP. Now I have created an instance, and if I go back and refresh the console page it already shows the instance I created, and you can use the same method as before to connect over SSH. So I have an instance created from Cloud Shell, and now I can look at the instances and see what the different options are. Just as we did a create, there is a delete option, so you can delete the instance from the command line, and there are other options too: you can do a list, a stop, a start, and so on; the commands here are pretty easy to remember, or you can always use the help option. So let's go ahead and stop the instance and then delete it; this is a simple way of working with an instance I created from the command line, or from the console as I showed earlier, and you can work with the other options in the same way. Now that I have stopped the instance, let's do a list to see whether it shows up; it does, and it says the state is TERMINATED, so the stop worked. Now I can go ahead and delete it; it warns that the instance and its data will be lost and asks whether I am sure I want to continue.
I just say yes, and that takes care of deleting the instance. So that's a quick demo of creating an instance, and we have already seen how you can connect to these instances using SSH. Now that we have created instances, let's also see how to use the Google Cloud Storage service, which can be used to upload and store data. For this we have to look for the Google Cloud Storage options, so let's go back to the menu at the top, which shows the different services we have here, and look for Storage; that shows me the storage option.
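Before switching to storage, here is the compute-side lifecycle from this demo collected into one hedged sketch (instance name e1 is the one used above, and the defaults come from the configuration we set earlier):

gcloud compute instances create e1          # create an instance with the default settings
gcloud compute ssh e1                        # open an SSH session to it
gcloud compute instances stop e1             # stop it; the state becomes TERMINATED
gcloud compute instances list                # confirm the state
gcloud compute instances delete e1 --quiet   # delete it without the confirmation prompt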
Now you can click on Browser and that shows the Cloud Storage browser, and at the top you have the option to create a bucket. Google Cloud Storage lets you store any type of data. The easiest and simplest way to work with it is the Cloud Console, although you can again use Cloud Shell and its gsutil command; gsutil has different sub-commands, for example mb, which I can use to create buckets from the command line.
I can put data in a bucket, browse it and access the data from the command line. So let's create a bucket here: click Create bucket and give it a name; bucket names are required to be lower case (and cannot contain spaces), so something along the lines of my-data-is-important. Now I can click Continue right away, or I can look at all of the options if I'm interested; for example, you can see a monthly cost estimate at the top. Then I click Continue, or choose where to store the data, which shows different location options, including a specific region.
Going back to the location type, you can choose multiple regions, which basically allows for high availability, so your bucket or your storage option will be accessible in all regions. It can also offer dual region and high availability and low latency in two regions. I can also say region specific. So for our use case we can only give a specific region which can keep our cost low but in business use cases I would go for multiple regions. Now here is the location and again I would choose Europe and we will choose Europe West 3 Frankfurt. It's always good practice that when you create your instances, when you create your storage or use different services, you try to choose a geographic region and then try to place the things or your services within the particular region within a particular zone, unless you would like that is accessible and available in all regions.
Now I can choose a default storage class. There are different storage classes and each is meant for a different use case: Standard is best for short-term storage and frequently accessed data; Nearline is best for backups and for data accessed less than once a month; and Coldline is essentially cold or archival storage, terms you may have heard before. You can choose one of these storage classes depending on the use case for this particular bucket; let's keep it Standard. Next is how to control access to objects: you can specify access to individual objects using object-level permissions (fine-grained access), or you can choose uniform access, where all objects in the bucket are governed only by bucket-level permissions. You can also go into the advanced settings and see the other things you can configure, including a retention policy that specifies the minimum duration for which objects in this bucket are protected from deletion; we won't go into all of that here.
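The same choices can also be made when creating a bucket from Cloud Shell; a rough sketch (location, class and bucket name are just the values used in this demo):

gsutil mb -l europe-west3 -c standard -b on gs://my-data-is-important
# -l  location (region) for the bucket
# -c  default storage class (standard, nearline, coldline, archive)
# -b on  uniform bucket-level access instead of per-object ACLs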
We will first try to create a bucket, so just click create. and that should create your bucket where I have my bucket now I can click on overview to see the details what is the region what region does it belong to what is the storage class at any time you can always click on edit bucket and make some changes here you can look in the permissions, the repository uses fine-grained access which allows you to specify access to individual objects and then you can basically see who has access to this, so in my case the project editors are basically the owners of the repository and project viewers. right and basically you can choose what type of access you need, for this you can always go to cloud storage and then you can decide what type of access you would like to give, be it a storage manager, an object manager, an object creator, an object viewer and so on, you can always query legacy storage and any other services depending on the APIs you have enabled.
You can always control its permissions: here you can see roles such as legacy Storage Bucket Reader and Storage Bucket Owner. I was also using other services like Dataproc, which uses Cloud Storage, and that's why a Dataproc service agent appears here as well. These are some of the members; you can remove them, view them as individual members, or view them by role to see which roles have access, for example the inherited Storage Bucket Owner. There are two owners coming from this particular project, and those are handled automatically, but you can add or remove members yourself. You can also save storage costs by adding a lifecycle rule that deletes objects after the duration of the current retention policy, so you can add different policies and control the bucket that way.
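From the command line, permissions on a bucket can be inspected and changed with gsutil iam; a hedged sketch (the email address is purely hypothetical):

gsutil iam get gs://my-data-is-important                                    # show the bucket's IAM policy
gsutil iam ch user:jane@example.com:objectViewer gs://my-data-is-important  # grant read access on objects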
Now my bucket is already created successfully so I can come back and then I see that my bucket is already here. I can click this option and you can do it at any time. edit the bucket permissions, you can edit the tags, the default storage class, you can just go ahead and delete the bucket, you can export it to Cloud Pub Sub, so basically if you want the content of this bucket to be accessible in a message queue system. you can opt for pub sub, you can process with cloud features and you can scan with cloud data loss prevention, so different options are available.
We can always select this bucket and delete it. Now I can click on the bucket and that basically shows me. There are different ways that I can upload some data here so I can just click on upload files and here I can choose some files so for example I'll go in here go to data sets and I have different data sets so let's for example choose this which is a csv file and then I'm uploading it to my cloud storage. It's as simple as this so you can drag and drop your files or I can just upload your files and my file is uploaded here now I can basically edit the permissions for this so this is in the bucket so anyone who has access to this repository you can basically download this file, you can copy it, move it or rename it. you can export it and you can see the permissions for this file, so let's look at the editing permissions and it says that for this project, whoever the owner is has access.
I also gave a specific user my Gmail ID and gave them access. at any time you can just say add item and then you can start granting different types of accesses so let's click cancel here now my data is already loaded into this particular bucket and that's a simple use of your storage in the cloud, now what we can. What I can also do is basically select this and I can delete it. I can also create specific folders and then basically load data into them. I can click on this. I can just do a download and then I can download it anywhere on my machine, so let's go. go to the desktop and let's download this file so that we not only upload some content to the bucket, but we also download it to a different location than what will be accessible now.
What I could have also done is I would have created a folder here and specified, let's say, for example, immediate data, okay, and I'm clicking on this to make it my immediate data. I can click on this and now I can upload my data specifically to this particular folder using the same mechanism that you can just upload the file. let's choose air passengers, let's open it and it will load my file so you can always choose what the retention expiration date is for this one, so as of now there's nothing right, but you can delete them.
Objects can be deleted or modified until they arrive. its minimum duration you can control all that, basically you can download it correctly, you can copy it, move it and rename it. This is a simple example where I created a particular deposit right now if, for example, you click on this transfer. says cloud storage data transfer products have moved on. Now you can find the local data storage transfer service and transfer device in the new data transfer section so you can always go back to cloud storage. You can also look at the transfer options which are mainly found when you are using a local service and you will want to upload data to a Google cloud, so this is my bucket here, now I can go to the cloud console and what I can do is if you would be interested in working on being able to just say gsutil right and then we have, for example, let's try a help, okay, and that basically shows me my gsutil.
You can always follow the gsutil quickstart, which is one more way of doing all of this from the command line. To work with buckets you can run gsutil mb, optionally with -l for a particular location, followed by gs:// and a name such as my-awesome-bucket, and it shows you that it is creating the bucket. Then you can take an image or some data from the internet, download the file with wget, and run a gsutil cp command, which picks up the file from your Cloud Shell location and puts it in the bucket; here you are copying from your local machine, which is the Cloud Shell machine, to your bucket. You can also do the opposite: run a cp pointing at the bucket and the file and download it to your desktop, so you can download with the command line too. You can copy an object into a folder in the bucket by giving a folder path, and you can list the contents of a bucket using ls followed by gs:// and the bucket name. To keep it simple we will just test the listing and you can try the rest yourselves: I run gsutil ls gs:// followed by my bucket name, which in my case was my-data-is-important, and it lists the bucket's contents; you can see the file, and the folder, and you can list inside the folder to see its files. So this is a quick and simple option you can use from Cloud Shell.
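Put together, the gsutil workflow described here looks roughly like this (bucket and file names are just illustrative placeholders):

gsutil mb -l europe-west3 gs://my-awesome-bucket             # make a bucket
gsutil cp AirPassengers.csv gs://my-awesome-bucket/          # upload a local file from Cloud Shell
gsutil cp gs://my-awesome-bucket/AirPassengers.csv .         # download it back to the current directory
gsutil ls gs://my-awesome-bucket                             # list the bucket's contents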
You could be using the Google Cloud SDK from your local machine, or you can use the Cloud Console to create a bucket, upload some data, download it again and check that it is accessible, and that shows you the power of Cloud Storage, where you can easily store any type of data. Whether you are on a free account or a paid one, unless you want to keep the bucket you can select it and delete it; it asks you to type the bucket name to confirm, so enter it (in my case my-data-is-important) and click Confirm. You could also have done this with a gsutil remove command from Cloud Shell. So now we have created a bucket, loaded some data into it, seen how to download it, created a folder and uploaded specific data into it, and done the same things with the gsutil tool, where you can list buckets, create them, delete them and create folders, all from the command line. This is a simple way of using GCP, where you can run your machines or use Cloud Storage as easy-to-use storage on Google Cloud Platform.
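For completeness, the command-line equivalent of that cleanup step would look roughly like this (again with a placeholder bucket name):

gsutil rm -r gs://my-data-is-important      # delete the bucket together with everything in it
# or, in two steps:
gsutil rm gs://my-data-is-important/**      # delete the objects first
gsutil rb gs://my-data-is-important         # then remove the now-empty bucket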
Welcome to this google cloud platform tutorial and here you will learn about cloud computing, what is cloud computing, what is gcp, what is your google cloud platform, what are the benefits of Google's cloud platform and what its different services are, a little about Google's infrastructure, a comparison of different cloud providers, such as Google, which offers gcp. It is Google's cloud platform, Amazon, which offers Amazon web services, and Microsoft, which offers Azure. We will also learn about the use case of domino's pizza and then we will have a quick demo on using some of the services in gcp before we start let's understand why cloud computing and it would always be good to learn about cloud computing depending on a use case.
There are several use cases where organizations are adopting or moving their solutions or infrastructure to the cloud or I can simply say integrating with the cloud. Here is a use case. Nina founded a company related to Website Development, the challenges Nina faced were low memory space when needed for processing or any other type of application related work, high traffic to the website crashing and also less number of servers. Now, with these challenges, she referred to the concept of cloud. computing and how that could benefit her and help her solve her problems, most of her problems were solved when she started using cloud computing and with cloud computing she was able to increase her memory space as needed, as needed, control the website load, that is basically load balancing and handling more requests on the website or requests per minute, buy servers at a lower price which increases or decreases depending on the requirements when we talk about cloud computing, computing Cloud is the use of hardware and software components that a cloud provider offers as a service that can be accessed over the cloud computing network.
Cloud is the use of these resources, which could be dedicated resources or come from a set of resources that the cloud provider offers to provide a service to customers, users can access these different services, application files from any device that can basically access the Internet. Cloud computing allows for automatic software integration. Allows you to backup and restore data. It basically offers unlimited storage memory or computing capacity. It gives access to trusted sources that are generally used by the cloud provider itself for its use case and has a cost. efficient model that helps organizations quickly integrate or basically modernize their infrastructure, cloud computing is generally used with id or within its space where there are five traits if there is a resource requirement as the business changes or grows dynamically , so this cloud computing offers on-demand self-service so that users can use on-demand computing resources or storage network memory resources, etc., provided by the cloud provider and also realize self-service.
All of this is possible through a simple interface, and users can consume processing power, storage and network as they need it and pay as they go, so minimal to no human intervention is required. For projects that need scalable network access, cloud computing offers broad network access, meaning resources can be reached over the network across geographic regions and what we call availability zones, which are multiple sites within a particular geographic region. Cloud providers also have what we call resource pooling, which provides a huge set of shared resources that customers can access at a lower cost.
Now there may be customers who are interested in not sharing resources and would be interested in dedicated resources and in this case the cloud provider also has single tenant offerings that help such customers. If an IT company or any other business needs rapid elasticity, then cloud providers also have resources on offer. They are elastic, you can quickly get more resources as needed and therefore you can scale up and down. Consider a gaming company that would be interested in launching a new game and would have predicted a certain number of users who would enter the portal playing it. the game and what if the request per minute or the number of users joining could increase.
Now, in this case, the organization would want an underlying solution that handles this dynamism, scales up as needed based on demand, and once demand is filled, scales back down. It is possible to use a cloud computing solution. Cloud computing solutions also include metered services, that is, a pay-as-you-go model for the usage or reservations that a user or organization would have made for the resources offered by cloud computing, so when we talk of cloud computing, The question that always arises is why this model is so compelling, why it is so interesting for organizations or users who want to use one or several cloud computing services, so the first wave The trend that cloud computing brought to storage was what we call colo. i.e. colocation IT shops that have been using or managing large amounts of data for decades basically wanted to build their infrastructures to handle their business needs now, instead of building more expensive data centers, they would rent space or share facilities and this Organizations were doing it even in the past, therefore, they would free up capital for other use cases.
Now this was more configured by the user, managed and maintained by them. Later, organizations started thinking about virtualization, so again it was configured by the user but managed and maintained by the provider, so the data components virtualized. The center matched that of a physical data center and organizations would have virtual devices managed separately from the underlying devices and then came container-based architectures or basically automated services, so within Google services they are provisioned and configured automatically, allowing your infrastructure to scale on demand. There are several reasons why an organization would think about integrating with the cloud or benefiting from the use of the cloud and thus instantly reap the benefits of modernizing its infrastructure.
Now there are few famous cloud providers here, so you have Amazon which offers Amazon Web Services and a huge list of services that come with this. you have Microsoft Azure, you have Oracle cloud, you have Saps cloud solutions, gcp, which Google Salesforce offers, etc., there are many other small players who also provide different cloud based services or organizations that partner with these leading cloud providers. offer cloud services to your customers when we talk about why Google Cloud Platform, there are several reasons why someone would choose Google Cloud Platform. Gcp has better prices compared to its competitors in terms of speed and performance, it is very fast and increases the performance of the application live migration project and there are a lot of solutions that I will show you later in additional screens that help a organization to adopt a cloud platform integrated with a cloud platform or even migrate completely to a cloud platform that none of Google's competitors offer live.
Application Migration When we talk about big data AI type of machine learning solutions, gcp provides many innovative solutions compared to other cloud providers like aws azure etc. what is google cloud platform? It has a set of cloud computing services provided. by google that runs on the same infrastructure that google uses for and for its end user products like youtube gmail etc. Let's learn about the benefits of Google Cloud Platform such as high productivity, work from anywhere, fast collaboration, high security, less data stored on vulnerable devices, reliable resources that can be used across the organization, across geographic regions and in all countries, very flexible, allowing organizations to scale up and down as demand increases or decreases and cost-effective solutions for various use cases, these are some of the benefits and if we look at different services offered the Google Cloud platform.
We could discuss the detailed benefits offered by each service in a different use case that basically helps organizations working in different domains to handle different types of small, medium or large businesses and with different business objectives when we talk about Google Cloud. platform services here is a list of services or you could say top level domains or categories of services, so you have compute related services, you have storage and database, you have networking, big data, developer tools, identity management and security, Internet of Things, cloud AI management tools and also data transfer solutions, when we talk about Google infrastructure, Google has one of the most powerful infrastructure in the world, the infrastructure is available At two levels, the physical and abstract layers, you have the physical infrastructure and then you have the abstract infrastructure.
The physical infrastructure consists of extensively built, highly efficient data centers and a very strong backbone network that is used by Google itself and is also offered as a service to customers through GCP, so you have a redundant global backbone with points of presence around the world. Google has more than 110 edge points of presence serving more than 200 countries, along with edge caching, a caching platform at the edge of its network; this is what defines Google's physical infrastructure, and there is much more to it than just these points. The abstract infrastructure is divided into global regions and zones. A zone is roughly equivalent to a data center and a single failure domain, so your Compute Engine instance lives within a zone; you could say Compute Engine is a zonal resource. Regions are geographic areas that contain multiple zones, so you would have a region for Central US, Western Europe and so on, and within a region you have one or more zones, which allow for high availability of resources; the cloud load balancer is an example of a regional resource. Then you have global resources, which are available and shared across the planet, such as the network and even things like IP addresses.
Now let's do a quick comparison of AWS versus Azure versus GCP and see what each cloud provider offers. Later we will also discuss the Google Cloud Platform services in detail: what each service does, how you can benefit from it and which service to use in which case; we will learn about those in later slides. Comparing the different cloud providers, starting with Amazon and its cloud offering, Amazon Web Services or AWS: it has 69 availability zones within 22 geographic regions, with 12 more planned, and this number keeps growing as the provider's footprint expands.
Here we are talking about availability zone specific information. When we talk about Microsoft's Azure, it has 54 regions around the world and is available in 140 countries around the world. When we talk about Google Cloud, the global cloud platform is available. in more than 200 countries around the world, when we talk about virtual servers, Amazon's ec2 which is an elastic computing cloud, is a web service that basically helps to resize your computing capacity where you can run your application programs in a virtual machine, so you can use the ec2 service. launch virtual instances which could have any Linux Windows distribution, could have different specifications in terms of RAM or CPU cores or disk, you could also decide what type of storage a particular instance should use, whether the storage should be local to the instance or whether it's an elastic file system or even object storage when it comes to Azure or Microsoft offering like their virtual machine, which is infrastructure as a service, it gives the user the ability to deploy and manage a virtual environment within a virtual network in the cloud. and this virtual cloud network would be managed by cloud provider google cloud or google cloud platform offerings i.e. gcp vm instances allow users to create, deploy and manage virtual machines to run different types of cloud workloads now that Talk about the compute engine here, it would be good to talk a little bit more about the compute engine and what are the different options that Google Cloud offers, so when you talk about your compute engine , have high-performance, scalable virtual machines.
The compute engine offers configurable virtual machines to run in Google's data center with access to high-performance network infrastructure and block storage, and you can select virtual machines for your needs that could be general-purpose or workload-optimized. and when we talk about optimized workloads, you have predefined machines or you have custom machine sizes that you can integrate computing with other Google cloud services, like ai or ml, and opt for the data analysis that you have when we talk about your gcp vm instances, just to expand, you have general purpose instances that we call n2 that provide a balance of price and performance and are suitable for most workloads including line of business applications, web servers, and databases. data.
Google Cloud also offers compute-optimized instances, the C2 machine types, which deliver consistently high per-vCPU performance and are good for AAA gaming, EDA, HPC and similar applications. Besides compute-optimized and general-purpose, there are memory-optimized instances, the M2 machine types Google offers, which provide the most memory; these virtual machines are suitable for in-memory databases such as SAP HANA, real-time analytics and in-memory caches. To summarize, AWS likewise offers different instance types optimized for memory, compute or disk alongside general purpose, and each category of machines has its own pricing model that you can always refer to.
You can go to the AWS website, look up the pricing models, and that will give you an idea of on-demand, dedicated and reserved instances, and so on. Similarly, Google Cloud also has instances with various options that become the key reasons customers choose Google Cloud Platform, such as live migration for virtual machines, where Compute Engine can live-migrate a VM between host systems; live migration basically means without rebooting, which keeps your application running even when the underlying host systems require maintenance. You also have preemptible virtual machines, where you can run batch jobs and fault-tolerant workloads to reduce your vCPU and memory cost by up to 80 percent while getting the same performance; these are the preemptible, or interruptible, machines. The only downside is that, while they give you really cost-effective resource usage, they can be taken away at any time, which is why we call them preemptible virtual machines. You also have sole-tenant nodes, which are physical Compute Engine servers dedicated exclusively to a single user's use case; these are usually a good fit for what we call bring-your-own-license applications, and sole-tenant nodes give you access to the same machine types and VM configuration options as regular compute instances. So Google Cloud offers different options for these instances and covers different use cases compared to other cloud providers that also offer such services: you can have predefined machine types, custom machine types, preemptible VMs, live migration of VMs as mentioned, persistent disks that give you high-performance durable block storage, local SSDs, and GPU accelerators that can be added to speed up computationally intensive workloads such as machine learning, simulation and medical analysis, plus features like global load balancing, which makes Google Cloud a distinctive choice. Now let's talk about platform as a service.
Amazon's platform-as-a-service offering is Elastic Beanstalk, an orchestration service for deploying applications and helping to maintain them. Azure Cloud Services provides a platform for writing application code without worrying about the hardware resources underneath. Google App Engine is the service developers use to build and host applications in Google's data centers. When we talk about serverless computing, Amazon's AWS Lambda is a serverless compute service used to run back-end code that scales automatically when needed; Azure has Functions, which let users build applications out of simple serverless functions in a programming language of their choice; and GCP has Cloud Functions, which is the easiest way to run your code in the cloud and is highly available and fault tolerant. This ties into the microservices architecture that organizations prefer these days when they want to scale and dynamically change their underlying architecture.
Organizations would be interested in serverless computing where they do not have to have a pre-planned infrastructure setup before implementing their use case. and this is where monolithic applications are not really a preferred option, many organizations are decomposing their applications into microservices based on business capability or decomposing them based on subdomains. We can learn about microservices architecture later, but just to know that serverless computing basically helps any organization if, for example, you have a web application that receives non-linear traffic and you can't keep an eye on your server always, it would be good to have a Someone automatically scaling your serverless application is basically a computing model where the cloud service provider is responsible for managing the piece of code without the developer having to worry about maintaining infrastructure configuration management, etc. .
When we talk about applications that benefit from serverless computing, one of the key things is zero management: deploying applications without any provisioning or server administration. There is auto-scaling, which lets the service provider worry about scaling the application up and down. There is a pay-per-use model that any customer would want to benefit from, paying only for the resources they have actually used or continue to use. And it shortens the time from idea to implementation to deployment, which is something any organization wants: a faster path to market compared to getting tangled up in the deployment, management and maintenance of your underlying infrastructure when your applications face high demand.
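To illustrate how little setup this needs on GCP, a single gcloud command can deploy an HTTP-triggered Cloud Function. Treat this as a hypothetical sketch: the function name, runtime and entry point are assumptions, and it presumes the current directory holds a main.py with a matching handler:

gcloud functions deploy hello-http \
    --runtime python310 \
    --trigger-http \
    --allow-unauthenticated \
    --entry-point hello_http   # name of the handler function inside main.py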
So when we talk about serverless technology, it is a function as a service because each part of your application is divided into functions and can be hosted on multiple service providers. You have serverless applications that are usually divided as separate units or functions based on functionalities or domains. So serverless computing is gaining quite a bit of popularity these days compared to the traditional three-tier architecture where you had a presentation layer, you had an application layer, you had a database layer. Now that type of infrastructure is not really preferred in modern times. Organizations are working on different types of newer applications when we talk about object storage.
Amazon has a simple storage service which is s3. It provides object storage that is designed to store and retrieve information or data from anywhere over the Internet. Azure introduces a blob storage which is Binary Large Object Storage offers a lot of storage and scalability, it stores the object in tiers and depending on how frequently the data is accessed, the same applies even to s3 , which is from Amazon, so s3 also has different types of storage classes that can be selected when a user or organization intends to use a storage service and when I talk about storage classes, it basically means having an access storage frequent or infrequently accessed storage or what we simply call an archiving solution.
Google Cloud has cloud storage and provides unified object storage for live or archive data the service is used to store and access data on the gcp infrastructure when it comes to advantages amazon web services have business friendly services easy access to resources increase speed and agility and that too on demand and takes care of your security and reliability of the resources offered when it comes to Azure, it has better development operations, a strong security profile provides many cost-effective solutions and friendly operations execution when we talk about Google Cloud, one of the key features here is a better price than the real competitors.
The migration of virtual machines is what is really interesting. Many organizations would like to modernize their infrastructure without having any disruption to their existing services. Improved performance. Redundant backups, etc. When it comes to disadvantages in AWS, it has limitations when it comes to EC2 service. There are different options in the type of machines you can choose to work on. You have a support fee that incurs network connectivity and then also downtime, which could happen in case you migrate to Azure, have a different code base for the cloud. and the premise platform as a service ecosystem is not really as efficient as infrastructure as a service. poor management of GUI and built-in backup tools when we talk about Google cloud support fee is quite high, it depends on the type of solution or support that you or organization would offer.
It also has a complex pricing scheme, although there are many use cases where users and organizations still benefit. Getting data out of Google Cloud (network egress) is the expensive part; the storage itself may not be, but downloading large amounts of data can cost a lot. Now, before the Domino's Pizza use case: while we were discussing these features, I would really like to spend more time on each of the services in detail, for example Compute Engine, storage, Bigtable, Dataproc and so on, because there is a huge list of services. So before we get into the Domino's use case, let me show you this page on Google Cloud: if you go to cloud.google.com and look at the Get started option in the docs, you can find build solutions, see different use cases and learn the cloud basics. You can also see the different cloud products, in categories like artificial intelligence and machine learning, API management, compute, containers, data analytics, databases, developer tools and so on, and you can always click on any of these to look at the solutions offered, the use cases and the best practices, whether that is migrating virtual machines to Compute Engine, operating containers, building containers and so on. You can also check the featured products, which give you a quick snapshot of the main products: Compute Engine, BigQuery for a cloud data warehouse, Cloud SQL for managed MySQL or Postgres, Cloud Storage to take in basically any type of data, security keys,
AI and machine learning etc. so now you have different products with features. If you scroll down to the bottom of this page, you can again see different solutions and it would be very interesting to read and learn from them to have infrastructure modernization. application modernization, you have data management, etc., if I look at infrastructure modernization, you could basically look at the solutions that Google Cloud offers and what it does when it comes to modernizing your infrastructure or benefiting from integrating it with the cloud and having benefits immediate. infrastructure modernization, you can see different use cases, what they are doing, how Google Cloud really helps when it comes to migrating workloads to a cloud and how their different cloud solutions, such as VM migration, have SAP on Google Cloud, vmware as a service, etc. and you could learn from these different solutions that are offered, you can also look at application modernization, so not only infrastructure modernization but organizations are also interested in re-analyzing their applications and seeing how these applications could go from monolithic to microservices architecture or how applications can benefit from modernization and cloud computing offerings, so here you have again different use cases that talk about the different ways that Google Cloud can help, how you can modernize your applications, how you can use the different solutions that Google Cloud now offers.
When you talk about cloud computing services, you can always go to cloud.google.com. If you have created a free account, you can log in and by default each user gets 300 free credit so that when you can try different products wherever you want. I can use different services, so here, if I click on the console where I'm already logged in with my Gmail account or my Google Cloud account, where I have 300 free credits, of which some are being used, you have a Google console Cloud and here from the You can click on the hamburger menu and you can see different services within different domains, so you have a compute domain that has different services, like the application, you have a compute engine and that basically allows you to use your virtual machine instances that we were talking about in the previous slides. clusters create their templates use single tenant nodes create snapshots or backups of their data use different zones you can go for kubernetes which is a container based engine it has cloud capabilities it has cloud execution and then it has different options related to storage like big table having datastore, firestore, file storage etc. and for each of these services you can read about each of these services in Google documentation or However, I will explain that later you will also be able to consult the operations related to networks and other tools that Google Cloud offers.
This is a huge list of services that Google's cloud platform offers in different ways for different use cases. Now let's look at this Domino's Pizza use case and see what it helps us learn so that you can always access this page by going to this link that talks about. about customers and then shows different use cases, so Domino's increases monthly revenue by six percent with Google Analytics Premium, Google Tag Manager and Google BigQuery. This is basically when Domino's started using gcp and what the result of that was. Now let's look at this further. We all know that Domino is the most popular pizza delivery chain operating all over the world, but how is it possible?
Let's take a look at the challenges where they wanted to integrate marketing measurement across devices by connecting CRM and digital data to create a clear view of the customer. The behavior to make cross-channel marketing performance analysis easy and efficient now for these challenges that Domino was facing, the solution was to use Google Analytics Premium Google Tag Manager and BigQuery, which were used to integrate digital data sources and generate CRM data reports, became easier and more efficient. when implementing Google Analytics Premium because I had the ability to access a single Google Analytics account to evaluate web and app performance using the new implementation of Google Tag Manager.
The dominoes were able to act quickly. They were able to connect CRM data with digital analytics that they basically provided. dominated with greater visibility into customer behavior what was the result there was an immediate six percent increase in monthly revenue eighty percent of costs were saved, ad serving and operations increases agility with tag management optimized they got easy access to powerful reports and custom dashboards now that was just a simple use case before moving into practice, we can also talk a little bit about these services that Google Cloud offers as we discussed and some of these services that can really make you think why not Google Cloud Platform and when.
We talked about the different cloud platform services, let's briefly learn about some of these services, what each service is, what it does and how it can help us manage our use cases or work with different products, so let's briefly learn about the different services that Google cloud platform offers now one of the domains is computing and then let's see the computing services that gcp offers now. Here I can log in to the console and this hamburger menu in the top left corner. I can click on this and go into the compute engine. section, this is the compute domain that has cloud features of Kubernetes application engine and cloud execution, so these are the different services that are offered within the compute domain and here we can enter the compute engine by clicking on this one and then basically going to vm.
instances. Before we see how to use Compute Engine, let's go over some of its features. Compute Engine offers high-performance, scalable, configurable virtual machines that run in Google's data centers with access to high-performance networking infrastructure and block storage. You can select VMs for your needs, general purpose or workload optimized, predefined or custom machine types, and you can integrate compute with other Google Cloud services such as AI/ML and data analytics. There are different machine families on offer: general purpose, which provides a balance of price and performance and suits most workloads, including line-of-business applications, web services and databases; compute-optimized, which offers consistently high per-core virtual CPU performance and is mainly good for gaming, EDA, high-performance computing and similar applications; and memory-optimized, which offers the most memory and is suitable for in-memory databases such as SAP HANA, real-time analytics and in-memory caches. Now we can see these options here. You can click on VM instances while connected to your Google Cloud Console, and you can even create an instance template that can be used to spin up instances; for example, if I click on New VM instance from template, there are some templates I have already created for my own use.
Now here I can basically use one of these templates or what I can do is go back. I can access the instance templates and this basically allows me to create a template. you can click on create instance template, which basically allows you to create templates that can be used to spin up different instances. We can give a name to the template, for example we can say template instance. Now here I can choose machine configurations and this is where you have different options, so it has a general purpose, as I mentioned, that provides a balance between price and performance. has memory-optimized, which are types of high-memory machines for memory-intensive workloads.
It also has optimized compute, which basically gives you high-performance machine types for compute-intensive workloads. so you can choose any machine configuration which is available here as per your requirement now if you click on general purpose you have different options here you can see the series where you have n1 series you have e2 series which is platform selection of CPU depending on availability. have n2 and n2d so let's select n1 now you also have here which talks about machine types and here we can choose the configuration that we are interested in depending on the applications that will be running inside the machines, we can choose a machine by default. shows one virtual CPU core and 3.5 gigabytes of memory or RAM.
You can choose a higher-end machine, but for now I will just take n1-standard, which lets me pick one of these machines. There are different features Compute Engine offers, such as live migration for VMs, preemptible virtual machines and sole-tenant nodes, and all of those options can be seen here. Now, for this machine's boot disk I can select a distribution I am interested in; for example I can go to public images and choose Ubuntu, and then choose a version; it shows 16.04, and you also have newer versions like Ubuntu 20.
You can choose one of these and here it asks you to choose the boot disk type . So it could be a standard persistent disk, which are low performance hard drives and can be said to be low cost compared to SSDs, so SSDs provide better performance but are a bit more expensive than using a standard persistent disk. We can choose this and we can provide a disk size, for example, 20 gigabytes and I can click select before I click select. I can click on custom images and that shows me if you have other images created in your project, you can use them and you can also get information about the images by clicking. in this link we will click on public images, we have chosen this distribution let's make a selection and now here you have identity management and API access so let this be default you can say allow default access and what you need depends on the services .
We can choose to allow HTTP and HTTPS traffic. We also need some way to connect to these machines: when you configure an instance, by default you can SSH into it using the Google Cloud console or from Cloud Shell, and you can also provide a private and public key pair, so here you have the option to give all of those details. When it comes to management, it asks what you want to do about reservations, and you can say automatically use a created reservation, or say no if you don't want to use one. You can also configure or provide a startup script that runs whenever your machine boots up. Here you also have the preemptibility options: Compute Engine offers preemptible virtual machines, mainly for when you want to run batch jobs and fault-tolerant workloads, and they can reduce the cost of your vCPU and memory by up to 80 percent; these are virtual machines that last at most 24 hours. By default preemptibility is disabled, and whether you enable it depends purely on the workload you want to run on these instances; you could go ahead and turn it on and use this compute feature. You also have host maintenance, which controls what happens to this Compute Engine instance when Google performs periodic maintenance on the infrastructure: your VM instance can be migrated to other hardware, which is one of the features Compute Engine offers, called live migration for VMs. Compute Engine can live-migrate between host systems, the underlying machines these VM instances run on, without rebooting, which keeps your application running even when the host system requires maintenance; the recommended setting shown here is to migrate the VM instance, and we will leave it as it is.
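For reference, these same choices can be expressed as flags when creating instances with gcloud; a rough sketch (the instance names and the startup command are hypothetical):

# a regular instance that live-migrates during maintenance and runs a startup script at boot
gcloud compute instances create web-1 \
    --zone europe-west3-a \
    --maintenance-policy MIGRATE \
    --metadata startup-script='#! /bin/bash
apt-get update && apt-get install -y nginx'

# a preemptible instance for batch or fault-tolerant work
gcloud compute instances create batch-worker-1 \
    --zone europe-west3-a \
    --preemptible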
You can also say that if there is maintenance, the VM instance should be terminated. You also have automatic restart, which basically means that Compute Engine can automatically restart VM instances if they are terminated for reasons not initiated by the user. So these are all the configurations available under management, and it also tells us about the different features we have. There is also a feature called sole tenancy, which basically means sole-tenant nodes can be chosen so that you get physical Compute Engine servers dedicated exclusively to your use, and this is usually good when talking about bring-your-own-license applications; sole-tenant nodes give you access to the same machine types and VM configuration options as regular compute instances, however these can be a bit expensive.
We can choose this one. We can also look at networking, which basically shows the default settings applied to the automatic subnet; you can also choose a static IP if necessary, but that would cost more. You can click on disks, which covers what you want to do with the boot disk when the instance is deleted and what encryption mechanism you would like to use, and finally you have security. Basically, as I said, you can ssh into the instance using the cloud console option, and you can also provide a public ssh key. One way to do that, if you want to use an external ssh client like PuTTY to connect, is to create a key pair: for example, I can go into PuTTYgen and say generate, just move my cursor here, and that will create a key.
I can give this a name in the key comment, so I'm going to say sdu, which is the username I'm going to use, and a simple passphrase that I will use when logging in to this machine. Then I can save this private key, so let's call it the new hdu key, and this is saved as a dot ppk file, which is usually used when connecting with an external ssh client. Save this, and that saves a ppk file to your desktop. What you can also do is copy the content of the public key from here, and this is what we would like to give to our instance, so that the public key gets stored in the instance and the private key is what we will use to connect. Once I paste it here, it resolves the username to hdu, and we have given it the public key. Now in certain cases you may want to use other software that connects over ssh to
the machine, and that software may not be like PuTTY, so in that case you may want a pem file, that is, a private key saved as a dot pem file. You can do that by going to Conversions and exporting an OpenSSH key and then saving it, so I'll say sdu new key, but this time it will be saved as a pem file on my machine. So you have a ppk file that allows you to use PuTTY to connect to the instance, and you have a pem file in case some software needs to ssh directly into these machines.
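If you are on Linux or macOS rather than Windows, a rough equivalent of this PuTTYgen workflow, assuming a made-up username hdu and an example key file name, would be something like this:

# generate a key pair; the comment (-C) is the username the console will pick up
ssh-keygen -t rsa -b 2048 -C hdu -f ~/.ssh/gcp-demo-key
# print the public key so the single line can be pasted into the instance's SSH keys field
cat ~/.ssh/gcp-demo-key.pub

The private key stays on your machine, and only the one-line public key gets pasted into the instance configuration, just like with the PuTTYgen output above.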
It also takes the public key that we have already provided for the machine. Now, once this is done, I can close PuTTYgen. I can go back to the page where I'm creating an instance template and just click create, so this has created an instance template that I can use to spin up any number of instances using the same template; the only thing I would have to do is change the region where I want the instance to run, and that was my third option. Now that we have created an instance template, I can go back to the VM instances.
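The same template-based flow can also be scripted. As a minimal sketch (the template name, machine type, and image family here are example values, not necessarily the ones used in this demo):

# create a reusable instance template
gcloud compute instance-templates create demo-template \
    --machine-type=n1-standard-1 \
    --image-family=ubuntu-1604-lts --image-project=ubuntu-os-cloud \
    --boot-disk-size=20GB --tags=http-server,https-server
# list the templates available in the project
gcloud compute instance-templates list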
Now I can click create, and here I can either create an instance from scratch, giving all the details again, or just use my template. I can click on new VM instance from template, choose my template, click continue, and once that's done I can give the name of my instance, so let's say c1. I can choose the region, so I will choose Frankfurt, and the rest is automatically populated based on the template you have provided; then you click create. This basically allows you to spin up an instance, and you can create any number of instances using your template, whereas earlier you would have had to create every new instance from scratch. Now, once the instance is created, it has a public IP and a private IP.
The private IP will not change unless you configure a new machine, but the public IP will change every time you stop and start the machine, and this is what we need to connect to this machine. I can also ssh from here using the open in browser window option; let's click on this. This is a built-in way to connect to your instance using ssh: let's wait while it transfers the ssh keys to the virtual machine and establishes a connection, and I'm connected to my machine. What we can easily do now is confirm that we are on the correct machine.
We can simply do an ls to see the file system. What we can also do is log in as root by doing a sudo su, which allows you to work on the machine as root, and from here I can switch to the hdu user, which has a .ssh directory in its home folder that holds the authorized keys. If we want to see whether this contains my public key, I can just cat .ssh/authorized_keys, and this shows me the public key that we initially added to our instance, confirming that we are logged in to the machine we created.
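For reference, the checks being run inside the instance look roughly like this (hdu is just the example username taken from the key comment):

sudo su -                      # become root
su - hdu                       # switch to the user created from the SSH key
ls -a ~                        # the home directory contains a .ssh folder
cat ~/.ssh/authorized_keys     # shows the public key that was added to the instance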
Now I can close this, and what I can also do is copy this public IP, so let's copy it to the clipboard. Now let's go to PuTTY, and here I will give the host name as hdu@ followed by the public IP we copied. I'll click on SSH, and under SSH I'll go to Auth; here we have to give our ppk file, so the ppk file is sdu new key, select this, come back to Session, give it a name, for example c1, save it, and then you can say open, say yes, and you are logged in to your machine. Once you're logged in you can always do an ls on the .ssh directory, and that shows you the key file. That's how you just used Compute Engine to spin up an instance; we used a template that basically allowed me to create this instance, and then I can connect to this instance and start working on it. So when we talk about features of Compute Engine, it has predefined machine types, as we saw: Compute Engine offers different predefined virtual machines with configurations for everything from small general-purpose instances to memory-optimized instances with up to 11.5 terabytes of RAM, and you can have compute-optimized instances with up to 60 virtual CPU cores.
It also has custom machine types, so you can create a virtual machine that best suits your workload, and by tailoring a custom machine type to your specific needs you can realize significant savings. There are preemptible virtual machines, as we saw, and there is also the facility to take advantage of live migration for virtual machines. It has durable, high-performance block storage for virtual machine instances in the form of persistent disks, where data is stored redundantly for integrity, with the flexibility to resize storage without interruption, and you can choose HDDs or SSDs for your instances. It also has options like GPU accelerators; for example, if I just click create instance I can look into that. Here, let the instance name be instance one, and what I'd be interested to see is this section that says CPU platform and GPU, where you can also pin a minimum CPU platform.
You can also add GPUs: GPUs can be added to accelerate computationally intensive workloads such as machine learning, simulation, medical analysis, and virtual workstation applications, and you can add and remove GPUs from a virtual machine as your workload changes, paying for the GPU only while you use it. These are some of the features that Compute Engine offers, and note that Google bills in one-second increments, so we only pay for the compute time we actually use. Now there are different savings possible: you have committed use discounts, which basically mean you can save up to 57 percent with no upfront costs and no instance-type lock-in. You have container support, so you can run, manage, and orchestrate Docker containers on Compute Engine instances; here, when we configure our instances, there is an option that basically allows you to deploy a Docker image.
Beyond that, you can also benefit from sustained use discounts, which are automatic discounts for running Compute Engine resources for a significant portion of the billing month. You can create a reservation for VM instances in a specific zone, which is the option seen in the management section here, and thereby guarantee that your project has resources for future increases in demand; if no longer needed, you can delete the reservation. These are some of the features of Compute Engine. What we have done is create a compute engine instance using the console.
Now let's go back. You can also do this using Cloud Shell: you can click this icon, which activates Cloud Shell, and you can also have the Cloud SDK installed on your own machine, which can be used the same way. I can open this in a new window, and from here I can start giving commands if I'm interested in setting up an instance from my command line. You have different options here: to get started you can simply run gcloud and press Enter, and that will show you the different command groups that are available; you have gcloud compute here, which is the option to create and manipulate Compute Engine resources. I can press q to exit, and then I can run gcloud compute, which again shows me the different options available if I were interested in configuring instances from the command line. So here I can say gcloud compute and then go to instances, and if you don't know the commands you can press Enter to show all the different options we have. Here we have different options like list, create, start, or update; for example, I can run list to see what instances I have, and the instance we just created shows up here and says the status is RUNNING.
I can stop this instance, I can delete this instance, and I can even create an instance using the create command here. You can just look at the create help, which shows the different options you can give: the instance name is required, you can choose an accelerator, you can choose the boot disk, and there are several other options. So I can say create and then give a name, for example c2, and once I run this it asks, did you mean zone europe-west4? So it asks me for the region and zone, I can say yes, and these settings come from my default configuration.
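Putting those commands together, a rough sketch of the command-line flow shown here (the instance name and zone are only examples) looks like this:

gcloud compute instances list
gcloud compute instances create c2 \
    --zone=europe-west4-a \
    --machine-type=n1-standard-1 \
    --preemptible          # optional flag for the cheaper, short-lived VMs discussed earlier
gcloud compute instances describe c2 --zone=europe-west4-a
gcloud compute instances stop c2 --zone=europe-west4-a
gcloud compute instances delete c2 --zone=europe-west4-a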
I can always change these defaults by updating the configuration or project metadata. So now we've created an instance and it says RUNNING. If we list again, we see two instances created, one in europe-west3 and one in europe-west4, and both have an internal and an external IP. You can also run describe to see the different details here. So gcloud basically gives you different commands that you can use to work with your instances: creating instances, changing the metadata, changing the region, adding a startup script, all those options are possible from the command line, which we can learn in detail in later sessions. So this is your compute engine offered as a service. Now that we have learned about the compute domain and the compute services offered by gcp, let's also learn about storage and databases, which fall under the storage domain of services offered by Google cloud platform.
Now you can go back to the console, click this menu, and just scroll down to see the different options under storage: you have options like Bigtable, Datastore, Firestore, Filestore, you have SQL-based services, you have Storage, which is object storage, and then you have other options available. So Google cloud platform offers different storage-based services, of which Cloud Storage, your object storage, is quite popular. One click on Storage basically shows you the storage browser, so this is your Google cloud object storage. So when we talk about object storage,
it is basically storage where any type of data can be stored; it is an addressable group of bytes where each object has a unique key, and these unique keys are in the form of a URL that allows you to access the object. Cloud Storage is made up of what we call buckets, which are used to store and retain your storage objects. Storage objects are immutable, and each change creates a new version. You can control access through IAM, which is identity and access management, or through access control lists. There is also an option called object versioning, which basically means that if it is on, every time you upload the same object a new version of the object is kept; otherwise the newer upload overwrites the older one and the previous version cannot be recovered. So let's see how we work with this object storage. Here you can click on create bucket, and once you click on create bucket you need a name, so let's say test bucket, and here I can just add number one to the name of my bucket. Now I can directly click on continue, but it would be nice to see the different options available here. When you click on choose where to store your data, it gives me a message saying the bucket name is already in use, so let me give it a unique name; let's say test buck and add aua to it, so it should be unique.
Now here it says choose where to store your data, and this gives you the location type. You can have region-specific buckets, giving you the lowest latency and fastest response time within a single region; however, that does not make your storage highly available across regions. You can create a dual-region bucket, which basically keeps your data in a pair of regions, or you can make it multi-region, which gives the highest availability offered. For now we can choose a specific region, and it then asks you to choose a location; as always, I will choose Frankfurt.
Now I can click continue and let the rest of the storage options be default, or you can click on a default storage class. Based on your storage class there are variable costs when it comes to storing, retrieving, or performing any type of operation: you have the standard option, which is best for short-term storage and data you access frequently; you can opt for nearline, which is best for backups and data accessed less than once a month; you can opt for coldline, which is better for disaster recovery and data accessed less than once a quarter; or you can opt for archive, where data is accessed less than once a year. Let's go with standard for now. Now you can choose how to control access to objects, fine-grained or uniform; let it be fine-grained, and then you can grant additional permissions at the bucket level using IAM permissions or at the object level using access control lists. In the advanced settings you can choose encryption, and you can also choose a retention policy to specify the minimum duration for which objects in this bucket should be protected from deletion or modification after they are uploaded.
You can always learn more about this by clicking here. Now, once I've chosen all the relevant options I can click create, and that basically creates a bucket with the name I've given it. I can click on overview to see brief details about my bucket, such as the region and the default storage class, and it also shows you the link URL that can be used to access your bucket. It also gives you the gsutil link; gsutil is a command-line tool that can be used in your cloud shell to work with your buckets. You can click on permissions to see what kind of permissions already exist and then make changes; basically you can add members and see the roles they hold. For example, here by default you can see entries created for other services and for the project, and
permissions related to the bucket owner and reader have already been granted. Now, once I have looked at my bucket I can start using it. I can drag and drop files here, but as of now there are no objects in my bucket, so what I can do is click upload files and then choose a location on my machine; for example, I'll go to my datasets folder and choose some of the files there in any format, say csv or text, and just click open, and this basically uploads my datasets here. Now, once I've uploaded the files, I can close this.
I can look at the options here that say edit permissions and edit metadata, download, copy, move, or rename, or export to a different service called Cloud Pub/Sub, which is a publish and subscribe messaging system, and you can also scan the data. You can click on a particular file, and that basically shows you the URL that allows you to access this file; you can copy it, you can click download and download this file, and you can even try to access it publicly, which shows you the content of this file based on the permissions. So this is basically your object storage, which is one of the services offered. What you can also do is create folders and upload your data inside the folders. So this is your Google cloud storage option for your object storage: you can add different objects, you can give different permissions, and you can use this storage service offering of the Google cloud platform.
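The same bucket operations can be done from Cloud Shell with the gsutil tool mentioned earlier. A minimal sketch, assuming a hypothetical bucket name and file, would be:

# create a regional bucket in Frankfurt with the standard storage class
gsutil mb -l europe-west3 -c standard gs://test-buck-aua/
# upload a local file and list the bucket contents
gsutil cp dataset.csv gs://test-buck-aua/
gsutil ls gs://test-buck-aua/
# remove the object and the bucket when done
gsutil rm gs://test-buck-aua/dataset.csv
gsutil rb gs://test-buck-aua/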
Now you also have other options, like Bigtable, and we can get into that by clicking here and clicking on Bigtable. Bigtable is one of the services that pushed NoSQL databases into the market; today we see different NoSQL databases like Cassandra, HBase, MongoDB, CouchDB, Neo4j and many others, and Bigtable was the pioneer when it comes to NoSQL, or not-only-SQL, databases. The problem initially faced by Google was that the web indexes behind the search engine were taking too long to build, so the company wanted to create a database that provided real-time access to petabytes of data, and that's where Bigtable started. Bigtable powers other Google services such as Gmail and Google Maps, and in 2015 it was launched as a service for customers. When it comes to scalability, with Bigtable you can increase the number of machines without any downtime, and administrative tasks like updates, restarts and so on are basically taken
care of by the cloud provider. The data present in Cloud Bigtable is encrypted, and you can use IAM roles to specify access. Data written to or read from Bigtable goes through data service layers such as managed VMs, HBase REST servers, or Java services using the HBase client. If I want to use Bigtable, I can click create instance, and that tells me that a Cloud Bigtable instance is a container for your clusters. Here you can give an instance name, so for example I'll say aua test bigtable; that will be the instance ID, and this is permanent. You can choose the storage type: you can go for SSDs, which give lower latency and more rows read per second and are usually used for real-time serving use cases, or you can go for HDDs, which have higher latency for random reads but good performance on scans and are usually used for batch analysis. Let's go for SSD for now.
Here you have the cluster ID, which is filled in automatically. You can choose a region, so let's go with our favorite: I can say europe-west3, I can choose a zone here, and then I can choose how many nodes I want to use for my Bigtable cluster. When you talk about the Bigtable service, it has an underlying cluster with multiple nodes, and these control the performance of your storage and the rows read per second. For now let's keep it at just one node, which is enough for our demo. When we talk about performance, it tells you, based on the current nodes and storage type, what throughput to expect: it says reads of 10,000 rows per second at 6 milliseconds, writes of 10,000 rows per second, scans of 220 megabytes per second, and the storage handled here would be 2.5 terabytes. Then basically I can click create.
There is also an option for replication, which controls how Cloud Bigtable copies your data across multiple regions, allowing you to isolate your workloads and increase the availability and durability of your data; depending on your use case, you can have your Bigtable data replicated across regions. You can now click create with all your specs chosen, and that will set up a cluster, or you could say a fully managed NoSQL database, that gives you low latency and replication for high availability. Now, once we have a new instance, you can connect to it with the cbt command-line tool, and for instructions you can click learn more here.
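Once the instance exists, the cbt tool mentioned here can be used from Cloud Shell. A rough sketch, with a placeholder project ID and the example instance ID, looks like this:

# point cbt at the project and Bigtable instance (values here are placeholders)
echo -e "project = my-project\ninstance = aua-test-bigtable" > ~/.cbtrc
cbt createtable demo-table                    # create a table
cbt createfamily demo-table cf1               # add a column family
cbt set demo-table row1 cf1:greeting=hello    # write a cell
cbt read demo-table                           # read rows back
cbt ls                                        # list tables in the instance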
You can just click on this instance ID to see the details again if you want to review your Bigtable configuration: here we have an instance showing what the CPU utilization is, how many rows are read or written, and what the throughput is, and this is automatically populated based on your usage. You can click on monitoring, which gives you different widgets showing your CPU utilization, your hottest node depending on how many nodes you have, system errors, automatic failovers, storage utilization, and so on. You can click on the Key Visualizer, which lets you see access patterns over your table if you have already created some data, and you can click on tables to see how many tables you have added. So this is briefly about Bigtable. One thing we must remember is that it is not good for all use cases: it should be used for low-latency, fast access, and ideally when the data is at least a terabyte; for smaller amounts of data the overhead is too high. The performance of Bigtable is also affected if you store individual items larger than 10 megabytes, so if you want to store larger objects like images or video files, object storage would be a better option. Also remember that Bigtable is not a relational database, it is not a SQL database, and when you talk about multi-row transactions or online transaction processing, Bigtable is not the right choice. It can, however, be used for a wide range of applications, especially when you talk about OLAP, which is online analytical processing; it is designed to store key-value pairs, and there can be different use cases, for example with something like Cloud Dataflow or Cloud Dataproc, where
I want map reduce type of operations big table can act as a good storage because it has very high performance and scalability and the best thing is that it supports hbase api which allows easy integration with apache hadoop and spark clusters which you can open using one more service offered by Google's cloud platform, which is called cloud data processing, so the big table is goodfor real time analytics it is commonly seen in iot financial services and others and if you are thinking of running interactive sql then big table would not be the right choice but the other option would be bigquery.
You also have to remember that you have a cluster running and you will be charged if this cluster continues to run so you have to be very careful in your free account when using such services now that we have clicked on this one so I can select this one and I can basically see the permissions, I can see the tags, I can see the inherited permissions here, I can also click on my instance that we created and you can edit or just do a delete, so from now on we'll just delete this, which needs you to give that name, so we'll say aua test b d and then click delete, so whenever you test different services, the first approach should be to configure these different services. how they work, basically you try to connect to them and once you are happy with the initial trial you can plan your actions and come back and use the service for longer.
Now that's your big table, which is one of the offerings we can go back to. storage and here you have other options that are available, for example, we were on the big table. It also has an option like cloud data storage. That's one more service that Google's cloud platform offers when it comes to storage domain, so Google added software. at the top of a large table that supports more than just key-value pairs when talking about secondary indexes rather than just having a primary index when talking about asset properties for trusted transactions, such as the SQL query language, for what these features or these services were added. or on top of your big table that gave rise to a new service that was launched as a cloud data warehouse, so this is where you have the cloud data warehouse and you can select a cloud security warehouse mode of the client to be able to opt for a native mode and enable the entire cloud.
Firestore features with offline support or you can opt for cloud data storage system on top of cloud firestore, so these are different options here and here we discuss API support or scalability engine, how many rights it supports, etc., you can choose one of these. and then you can choose where to store your data, for example, if I click on this, then I choose to choose a location, so it says that the location of your database affects its cost, availability and durability. Choose a regional location. Lower right lower latency. multi-region cost or location here, I can basically choose, for example, Europe and then I can go ahead and create a database, so it says initialize Cloud Firestore in data storage mode services in eu3.
This usually takes a few minutes. You will be redirected to your database once it is ready, so if we compare the pricing structure between cloud data warehouse and bigtable cloud, always remember that cloud data warehouse pays for storage monthly, which is also true for the large table, however here you are paying for monthly storage for reads and writes. but for a large table, you pay for the cluster when it is running, so the cloud data warehouse is a good option for infrequent access to small data and is cheaper when you are talking about a large amount of data or big data and frequent access. about big table in cloud, so big table is cheaper when you talk about larger amount of data, so here it says that since your database is empty, you can still switch to Cloud Firestore in native mode for more features that you could do and that you could learn, you might say. query by gql as of now we don't have any data here let's see the panel that says since your database is empty you can still switch to Cloud Firestore in native mode to get more features so this is your warehouse of data in the cloud and has many features that help you work with your data, however, there were still some important features or features missing from rdbmss and that is where Google created another great table based service called cloud spanner.
Now we can continue working on the cloud data warehouse, which basically gives you a choice. to work with your data you can create an entity here by clicking create entity and that basically gives you options like the default namespace, you can provide a type, you can provide a numeric id and you can start adding properties, but for more information about the data warehouse. We'll learn about this in a later session, so from now on I'll click cancel. I'm going to go back to my data warehouse and basically we go to the data warehouse option or I can go into the manager here, which basically says If you have entities, you can import or export them, so let's go back and look at the storage options again.
As of now we were in the data warehouse. Let's click on this one. As I mentioned, its cloud data warehouse, which basically gives you some additional features. at the top of your big table, but what Google also did was realize that there was a need for support for the rdbms feature, i.e. there were a lot of features, now here we have created a data warehouse and it says that its base data warehouse is ready to go, just add data, you can create an entity, start entering data and then you can go ahead and check this out, so if you want to learn more about your data warehouse, you can just click on this and you will be taken to to full documentation of native mode and data.
Store Mode What is Firestore in native mode? What's in data warehouse mode? What are the prices and locations? How do you choose a database mode? What are the feature comparisons? What can you do? What are you allowed to do here? What programming languages ​​can be used? Different regions. and so on, from now on we will click on this and we will see an advanced service that Google Cloud created when it comes to additional features of rdbms, so Google created another great table based service called Cloud Spanner now that You may not see it here, but if you scroll down you should be able to see your cloud key in the options here or we missed it at the top, so let's look here again.
Yes, it's here and you can click on the key so that the cloud key is there. Released in 2017, it basically supports relational schemas, so it offers great consistency for all queries that can be SQL-based. Now you can have a multi-region deployment when it comes to massive scalability. The requirement and strong consistency Cloud Spanner is a good option, so it says that Cloud Spanner is a mission-critical, globally consistent and scalable managed relational database , so if you want to use this, you'll need to enable this API which shows you an option here that says try this API, so it's a managed service, so it's one of Google's most expensive database services.
There is also one more database service, which is Cloud SQL, that can be used, so when we talk about your cloud key, it is a fully managed relational database service. It is massively distributed, you can have millions of machines in hundreds of data centers with support for automatic sharding and synchronous replication, giving you low latency and zero-downtime schema updates, making data highly available and reliable, so we will learn about cloud key in more details later. In other sessions now we can also go back to storage and we can see the different options that we have here, so you have object storage, you have a wrench and you also have your SQL based service, which is another managed service offered by the cloud or Google Cloud.
I would say it's called Cloud SQL, so this is basically a service that allows you to have fully managed relational Postgres and MySQL server databases. Google handles replication patch management, database management, and other things that are related to this managed or fully managed database service. allows you to handle terabytes of storage capacity with 40,000 iops with a lot of RAM per instance, so you can click create instance here and then you can choose one of the databases that you would like to use for Cloud SQL to be a managed service that can choose one of your mysql or postgres and sql servers, for example I choose mysql now it basically tells me what the instance id is, you set a password, you can always change the password, you can change the region, you can choose your database version and then you can also see other configuration options that talk about the type of machine that will be used, backup and recovery maintenance and all that, and if you click create, this basically will create a fully managed SQL service that can directly allow you to start using mysql in the cloud thereby allowing you to store your relational data when we talk about storage options how can we not talk about a data warehouse solution or basically an option that allows you run your queries directly updating the data so you can use a data storage service that is offered by Google cloud platform called bigquery so basically you can search in the big data section here and here you have an option called bigquery, which basically provides ease of implementation and speed, so it may be easier to build your own data with home.
It's expensive, time consuming and hard to scale to do that yourself, so with BigQuery you just load your data and pay only for what you use. When it comes to features, you have the ability to process billions of rows in seconds, and if you want real-time streaming data analysis, that is also possible here. So here we have clicked on BigQuery, which shows you the editor where you can start writing your query and test your access to the data; for example, if I have uploaded some data, it shows me the queries that are saved here, and I can schedule a query.
Basically I can choose the format of a particular query by clicking on this. More here. I have an option that says add data so I can pin it to a particular project. I can explore public data sets. I can create a connection so if I click on explore public datasets it takes me to a page where you can get different types of datasets that are already available that you can put into your bigquery and start querying your data from default. which is aligned with my project and I don't need to worry about it. I can see saved queries if I have already saved a particular query.
I can see the job history, I can see transfers, scheduled queries and reservations. BigQuery initially had its own version of SQL which was slightly different from standard SQL, but in 2016 BigQuery 2.0 was released with support for the SQL 2011 standard, and you can always select standard SQL. Now, when it comes to BigQuery pricing, we need to remember that the storage cost is quite low, about 2 cents per gigabyte per month, which is almost similar to nearline storage, where you also have a low cost of about 1 cent per gigabyte per month, and there is no separate charge for reading the data from storage. When it comes to querying, that is where the cost is incurred: the first terabyte scanned each month is free, and after that queries are charged per terabyte scanned. Mainly for high-volume customers,
there is also a flat-rate pricing option that can be used. When you discuss queries, you can save your query results, and you can create a dataset to store the results; the results are put into a temporary cached table, and once you are done you can delete the dataset and remove all the data. When you talk about loading data into BigQuery, you can get the data from Cloud Storage, Google Drive, Cloud Datastore, Stackdriver, Cloud Bigtable and other sources, or through the web interface you can upload files in formats like CSV, JSON or Avro: you create a dataset, create a table, and create it from a source by doing a file upload, where files of 10 megabytes or less can be uploaded using the web interface.
You can also use the command line, and then you can start working with your BigQuery data. Here you can also work with streaming by sending streaming inserts into BigQuery, which allows you to add one record at a time, and you can use something like Cloud Dataflow, which lets you run a pipeline that writes into BigQuery, which we will learn about later. So you can benefit from the different features of BigQuery and use Google's offering to work on your structured data or, I would say, on data that fits well in a data warehouse.
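From the command line, the bq tool gives you the same capabilities. A small sketch using one of the public datasets (the dataset and table names below are only an illustration) could be:

# run a standard SQL query against a public dataset
bq query --use_legacy_sql=false \
  'SELECT name, SUM(number) AS total
   FROM `bigquery-public-data.usa_names.usa_1910_2013`
   GROUP BY name ORDER BY total DESC LIMIT 5'
# create a dataset and load a local CSV into a new table
bq mk my_dataset
bq load --autodetect --source_format=CSV my_dataset.my_table ./data.csv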
These are some of the storage-related services that Google Cloud offers; we will learn how to use BigQuery and load and query data in later sessions, for example by adding a dataset, creating a connection or using a public dataset, and you can also go to the command line to work with it. Now here you can scroll down, and you also have another option inside the big data space, and that is Cloud Dataproc. When you talk about Dataproc, this is again a managed service that allows you to run Spark or Hadoop jobs, especially if you are interested in big data workloads, so for big data processing and machine learning you can always use Cloud Dataproc.
This uses compute engine instances under the hood, but it takes care of the management of these instances, so it's a layer on top that runs the clusters for you. It is a managed service, it's cheaper because you pay when jobs are running, and it's quick because it is integrated with other Google cloud services; it has pre-installed open source components, and Dataproc is integrated with YARN for easy cluster management. When you talk about Dataproc, you can click on create cluster, and that allows you to configure your cluster: you choose a particular region you would like to use, for example I will choose europe-west4 here; it tells you what type of machines you would like to use, and by default it has been populated with a machine that has 4 CPUs and 15 gigabytes of memory. Since you could be using a free account, let's not go for the high-end machine, let's go for n1-standard-2, and then you can scroll down and it asks what the master disk size and disk type are, and this was for your master machine, that is, the machine where
you will have the master processes running. Then you have your worker node configuration, where we will again choose a lower-end machine, and we can choose how many worker nodes to have; it says a minimum of two. You can choose SSDs and their capacity, set the YARN cores and YARN memory to be allocated, and here we have the option to click create. Once you click create, this basically provisions a cluster where you can start submitting jobs right away: once your cluster is ready, you can go into the cluster, submit a job, choose a type such as Spark or any other supported application you want to run, and basically use this fully managed service that allows you to run your big data workloads
in clusters. You can obviously control access via IAM roles or access control lists, and you can grant access at the project level, at the Dataproc cluster level, or even on your worker nodes, which we will learn about in later sessions on Dataproc. Now you see here, the cluster is being created, and it says the cloud storage bucket you are using is this one. I can click on this and open the link in a new tab; it still takes me to the console, but now it shows me the bucket that is being used by your Dataproc cluster. This is the bucket being used for the underlying metadata that is stored here, so you see the folders related to the cluster, and you can click on these folders and see what the scripts are doing, etc.
Now I can come back here to my bucket, which will basically show me what kind of buckets have been created, so you'll see that the data processing service has automatically created some buckets that will contain some data. He also has some. other buckets that were created based on other services that we use, the access control is detailed in all cases, plus it also shows our bucket, so underlying it is using compute instances, basically it is using, let's go here and go to the compute engine and let's see in vm instances, so the data process that has created a cluster is also using the vm instances that we see here that are running, it is using the buckets and has created a cluster ready to use, so you can click on this cluster and that shows me my Details related to the cluster, if there are jobs running, what are the VM instances, what kind of configurations have you used and you can see different details here, you can see the logs, you can click in the jobs and that will show you if you have basically run a job on this cluster out of the box, so it says this particular job was run, which was a spark job.
You can click submit, and it tells you what the job ID is and what region you would choose; for example, we'll again choose europe-west4. It tells you what the job type is, so you can run all of these job types on this cluster: Hadoop, Spark, PySpark, Hive, Pig. You can provide your jar file, so if your application is packaged as a jar you can mention it here, you can pass some arguments, you can also add other jar files and set some properties, and then click submit, which will run your job on this ready-to-use cluster. Now that we have tested it, I can basically go ahead and delete the cluster.
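For reference, the gcloud equivalents of this cluster-and-job workflow look roughly like this (the cluster name, region, and the bundled SparkPi example are placeholders for whatever you actually run):

# create a small managed Hadoop/Spark cluster
gcloud dataproc clusters create demo-cluster \
    --region=europe-west4 \
    --master-machine-type=n1-standard-2 \
    --worker-machine-type=n1-standard-2 --num-workers=2
# submit a Spark job, here the SparkPi example shipped with Spark
gcloud dataproc jobs submit spark --cluster=demo-cluster --region=europe-west4 \
    --class=org.apache.spark.examples.SparkPi \
    --jars=file:///usr/lib/spark/examples/jars/spark-examples.jar -- 1000
# delete the cluster when finished to stop incurring charges
gcloud dataproc clusters delete demo-cluster --region=europe-west4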
I don't want to incur any costs for these managed services while they keep running. There is a lot more to the services that the Google cloud platform offers, and we can continue to learn about them as we explore the console or the command-line option. Now we can exit this by clicking on this menu, and then we have other options like Kubernetes, Cloud Functions, the networking-related services, monitoring-related services, different types of tools, and other big-data-specific services that you may know about, and for each of these services Google has very good documentation available. For example, for publish-subscribe messaging there is Cloud Pub/Sub, a managed real-time messaging service and one of the pioneers in this space, comparable to a well-known service like Kafka, which can be used for your publish-subscribe or messaging requirements. So to conclude about Google cloud platform services, you can always go to cloud.google.com and look in the docs section, which has a list of featured products as well as the different domains and services that Google Cloud offers, and you can learn about all of them there.
Here you can click on the featured products, and that shows Compute Engine, Cloud Run, Cloud Storage, Cloud SQL, BigQuery, Vision AI, and you can scroll down and see the services related to artificial intelligence and machine learning and API management. So Google cloud platform offers different services mainly across compute, storage and databases, and it has networking-related services, big-data-specific services, developer tools, cloud AI, identity and security, IoT, management tools, API platforms and so on. So this was about the GCP services offered by the Google cloud platform; in detail, you can play with the different services offered by creating a free account, and as I showed, you can use any of these services, quickly launch them, connect to them, bring in your data, or use a managed service to manage your data and benefit from Google's cloud platform, so that you have a modernized infrastructure for your different use cases.
Hello guys, we are here today. We have something very special in store for you, we are going to talk about the best cloud computing platform available on Amazon web services, uh rahul, I think you said something wrong here. The best cloud computing platform is obviously Google's cloud platform, no it's not AWS. has over 100 services spanning a variety of domains, but Google's cloud platform has cheaper instances, what do they have to say about that? Well, I guess there's only one place we can discuss this: a boxing ring, so guys, I'm opexa and I'll be Google's cloud platform and I'm rahul.
I'll be aws, so welcome to fight night. This is aws versus gcp. The winner will be chosen based on their origin and the features they provide, their performance currently and comparing them. explain them based on market share and pricing options, things they give you for free, and instance configuration. Now first, let's talk about AWS. AWS was launched in 2004 and is a cloud services platform that helps businesses grow and scale by providing them services in several different domains, these domains include computer database storage migration, networking, etc., a very important aspect of AWS is your user experience. Now AWS has been on the market much longer than any other cloud service. platform, meaning they know how businesses work and how they can contribute to business growth.
Furthermore, aws has more than $5.1 billion in revenue in the last quarter. This is a clear indication of how much faith and trust people have in AWS. They occupy more than 40 percent of the market, which is a significant part of the cloud computing market, they have at least 100 services available right now, which means that almost every problem you have can be solved with an AWS service. . That was great, but now we can. talk about gcp. I hope you know that gcp was recently launched in 2011 and is already significantly helping businesses with a set of intelligent, secure and flexible cloud computing services.
It allows you to deploy, deploy, and scale app website services on the same infrastructure as Google. The intuitive user experience that gcp provides with dashboard wizards is much better in every way TCP just entered the market and already offers a modest number of services and the number is increasing rapidly and the cost of a CPU or storage instance regional that gcp provides is much cheaper and you also get multi-region cloud storage. what do you have to say about it? I'm so glad you asked. Let's look at today, in fact, let's look at the fourth cloud market share. quarter of 2017.
This will tell you once and for all that AWS is the leader when it comes to cloud computing. Amazon Web Services contributes 47 percent of the market share. Others such as Rackspace or Verizon Cloud contribute 36 percent. Microsoft Azure contributes 10 percent. Google cloud platform contributes 36. four percent and ibm software continues with three percent 47 of the market share is contributed by aws you need more convincing wait wait wait all that's fine, but we only started ago a few years and we have already grown a lot in less time. Haven't you heard the latest news? Our revenue is already $1 billion per quarter.
Wait a few more years and the world will see. AWS earns $5.3 billion per quarter. It will be a long time before I can. even come to us yeah, we'll see now, let's compare some things to start, let's compare AWS prices. A dual-CPU computer instance with 8GB RAM costs approximately 68 US dollars. Now a computer instance is a virtual machine where you can specify what OS ram or storage you want to have for cloud storage, it costs 2.3 cents per gb per month with aws, you really want to do that because gcp wins hands down Undoubtedly, let's take the same computing instance of two CPUs with 8 GB of ram.
It will cost about 50 dollars per month with gcp and by my calculations that is a 25 percent annual cost reduction compared to aws, speaking of cloud storage costs, it is only 2 cents per gb per month with gcp, what else do you want me to say? Let's talk now about market share and options. AWS is the current market leader when it comes to cloud computing. As you remember, we contribute at least 47 percent of the entire market share. Aws also has at least 100 services available right now, which is a clear indication. how well AWS understands businesses and helps them grow, yes that's true, but you should also know that gcp is constantly growing.
We have over 60 services up and running, as you can see here, and many more to come, it's just a matter of time. When we have as many services as you, many companies have already started to adopt GCP as a cloud service provider. Now let's talk about the things you get for free with AWS. You'll get access to almost all services for a full year with usage limits. Now these. theLimits include an hourly or per minute basis, for example, with Amazon EC2 you get 750 hours per month, you also have limits on the number of requests for services, for example, with AWS Lambda, you have 1 million requests per month now, later From these limits you get By charging standard rates with gcp, you get access to all cloud platform products such as Firebase, Google Maps API, etc., you also get 300 in credit to spend over a period of 12 months on all cloud platform products and interestingly, once the free trial ended, you won.
You will not be charged unless you manually upgrade to a paid account. There is now also the always free version for which you will need an updated billing account. Here you can use a small instance for free and 5GB of cloud storage, anything higher than this always use. Free usage limits will be automatically billed at standard rates. Now let's talk about how you can set up instances with AWS, the largest instance offering 128 cpus and 4 tvs of RAM, now besides the on-demand method as I mentioned before you can also use spot. instances now these are the situations where your application is more fault tolerant and can handle an outage now you pay for the spot price that is effective at a particular r now these spot prices fluctuate but adjust over a period of time the instance The largest offered with Google Cloud has 160 CPUs and 3.75 TB of RAM, like AWS Spot Instances, Google Cloud offers short-lived compute instances suitable for bad jobs and fault-tolerant workloads.
They are called preemptible instances, so these instances are available at 80 off the on-demand price, so. They have significantly reduced the costs of their compute engine and unlike AWS, these are fixed priced. Google's cloud platform is much more flexible when it comes to instance configuration. Simply choose the combination of CPU and RAM, of course, you can even create your own instance types. So, before we finish, let's also compare some other things. Telemetry is a process of automatically collecting periodic measurements from remote devices, for example GPS. GCP is obviously better because they have superior telemetry tools that help analyze services and provide more opportunities for improvement when it comes to application support aws is obviously better as they have years of experience under their bed aws provides the best support that can be provided to customers containers are better with gcp a container is a virtual process that runs in user space as it was kubernetes originally developed by google gcp has full native support for the tools other cloud services simply They are fine-tuning a way to provide kubernetes as a service.
Additionally, containers help abstract applications from their environment. Originally running applications can be easily deployed regardless of your environment. As far as geographies are concerned, AWS is better as they have a few years head start. In this span of time, AWS has been able to cover greater market share and geographic locations. Now is the time to make a big decision, who will it be? Yes. Who will be gcp or aws? I think I will choose the right cloud computing platform. The decision is made based on the requirements of the user or the organization in that regard. I think it's time for us to finish. this video we hope you enjoyed this video and learned something new that aws is better, right, no, choosing a platform is completely up to you and your organization's requirements.
Hello everyone, welcome to this session on Google Cloud Platform Website Hosting and let's understand what is Google Cloud. platform offerings when it comes to your web hosting requirements here we will learn about google cloud web hosting some basics of cloud computing web hosting service providers what is gcp and why choose it different types of web hosting which are possible use cases like lush and a quick practice on using gcp for web hosting when talking about cloud web hosting. Google Cloud has the feature of hosting secure and trusted websites easily. Ensures protection of customers and sites. The website is hosted on a fast and reliable network.
You can do more work for less with Google Cloud. Now when we talk about Google Cloud, it is always good to know some basics about cloud computing. Now when we talk about cloud computing, cloud computing is basically using resources provided by your cloud service provider. like Google cloud and cloud computing basically means using hardware and software components to provide a service over a network. Now here users can access files and applications from any device that can access the Internet, so when we talk about cloud computing, it is basically about using hosted services. about the infrastructure managed by the cloud provider when we talk about resources that can be used and that can include big data services, storage options, its computing options, network options and various other services that a cloud provider offers.
Cloud computing has several different service models, such as its platform. as a service software as a service infrastructure as a service also today we talk about containers as a service and these are the different computing service models offered by cloud service providers that allow you automatic integration with your existing environment or if you would like to benefit of modernization through the use of a cloud provider infrastructure. It has different services offered, some of them are for automatic software integration or migration, data backup and restoration, scaling your storage capacity, which looks for unlimited storage capacity and uses a resource. provided by a cloud service provider that benefits from the reliability and cost-effectiveness that a cloud provider offers in this case we are talking about Google Cloud when you talk about web hosting service providers, there are several hosting service providers web, you have aws, you have squarespace ibm cloud godaddy bluehost and google cloud, so what is gcp and why use it when talking about gcp or as we call it, google cloud platform?
It is a set of cloud computing services provided by Google that run on the same infrastructure that Google uses for the end user. products like youtube gmail and much more when talking about gcp there are several reasons why anyone would choose it for their use cases. Now let's see some reasons when talking about pricing, gcp has better prices compared to its competitors depending on the services you would use whether it would be infrastructure as a service like using compute engine instances and running your applications on them or it could be a service managed like data proc or bigtable or your bigquery where you can run your hadoop clusters or store a large amount of data or even build your data warehouse where you can store structured data, so there are different services with gcp offerings and the prices have been a better option in case of Google compared to other competitors when talking about speed and performance, it is very fast and increases the performance of your project if you use different services offered by Google when you talk about live migration of applications, this is one of the features that organizations generally like and that none of the other competitors offer when it comes to live migration of applications, so there are specific features. for example, when you talk about compute engine, there is a feature that indicates live migration where your resources or your instances running applications would be migrated during maintenance, for example, from one underlying host to another underlying host without affecting the performance of your application and you can benefit from it.
When talking about big data, Google Cloud has several offerings in the big data space and that is one of the advantages of gcp compared to other competitors - it has different services for your big data needs such as pub sub for publishing, subscribing to the messaging system or search for data. proc which basically allows you to spin up your hadoop and spark clusters to run your different jobs etc., when we talk about features of Google cloud platform, it offers high productivity because it is using the resources which are based on the same infrastructure than Google. uses for its own different use cases.
You can work from anywhere as long as you have an internet connection and can connect to the cloud platform and use different services offered through the web. You can quickly collaborate with different teams or different colleagues working on different projects that you might be sharing. or by using different gcp resources, you are benefiting from high security and different encryption and security mechanisms with gcp offerings. Now there are fewer sources or you could say stored data or vulnerable devices. There is high reliability, flexibility or scalability that gcp offers and cost effectiveness. really makes it a good option when you want to benefit from infrastructure modernization and using a cloud platform solution for your use case.
Now, when you talk about types of web hosting, Google Cloud offers three options. You have WordPress, a free and open-source content management system that you may have used or heard of; many websites are built on it, and it is a popular web publishing platform for easily setting up blogs and websites, used by organizations and individual users alike to run blogs, host websites, or promote products. You have LAMP, which stands for Linux, Apache, MySQL, and PHP: the LAMP stack consists of Linux, the Apache HTTP server, MySQL, and PHP, and it is used to host websites and web applications. You also have the option to build your own website, developing a website or web application with your own code directly on Compute Engine.
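As a rough illustration of what a LAMP setup involves, here is a minimal install sketch, assuming an Ubuntu-based Compute Engine VM; the package names differ on other distributions, and this is not part of the demo that follows:

    # Minimal LAMP install sketch on an Ubuntu VM (assumes an apt-based distribution)
    sudo apt-get update
    sudo apt-get install -y apache2 mysql-server php libapache2-mod-php php-mysql
    # Confirm Apache and MySQL are running
    sudo systemctl status apache2
    sudo systemctl status mysql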
Now let's look at a use case and understand what happened here. Lush was founded in 1995, and you may have seen this brand on the high street: it is a UK-based global cosmetics retailer selling a wide range of fresh, handmade products. Lush had grown into a global brand with over 930 stores in 49 countries. The challenge was that Boxing Day traffic caused the website to go down for almost 18 hours, which had a devastating effect on the business, and the previous platform had no option to scale. So what was the solution?
Moving the website ahead of peak trading was straightforward because of the flexibility of the GCP platform. With fast VM deployment on Google Compute Engine, environments could be brought up and torn down in minutes, so the team could test and deploy quickly, and with Google Cloud SQL, Lush took complete control of its infrastructure and optimized its systems to scale effectively. Overall, the platform resulted in lower cost, which points to a bright future for Lush Cosmetics. And what was the result of all this? Availability during peak loads improved thanks to the autoscaling capability of Google Compute Engine, infrastructure hosting costs were reduced by 40 percent, data center usage was streamlined from five facilities to three, and Google's high-quality private network provided a flexible, scalable architecture for future business growth.
What we have seen here is that Google Cloud web hosting lets users run their websites, and a hosted solution can benefit them in more than one way. The simplest approach is to click through and configure a VM instance from the console: for example, on Google Cloud Platform I click Console, go to Compute Engine and then VM Instances, and spin up an instance by clicking Create.
In my previous sessions I also explained how you can create a template and launch an instance from it, but here we will simply create a new VM instance. Let's give the new instance a name and choose a region, for example Frankfurt. I will leave it in the general-purpose machine family and keep a machine with one virtual CPU core and 3.75 gigabytes of RAM; if I expect a heavier website with more users logging in, I can choose a machine with a higher configuration. Here we have the option to choose our distribution: for example, I can choose Ubuntu and then pick a version, say 16.04. We can set the disk size to 20 gigabytes, and we can use SSDs or HDDs; for now it is a standard persistent disk, which is an HDD, and that should be enough, so let's click Select. Here I will allow default access, and I will allow HTTP and HTTPS traffic, so if I have website content hosted on this machine I can access it over HTTP. You also have management options for your instance, including a security option that lets you provide a public key if you want to connect with an external SSH client. I have used PuTTYgen and already created a private key, which is here.
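For reference, an instance with roughly this configuration could also be created from the gcloud CLI instead of the console. This is only a sketch; the instance name is a placeholder, and europe-west3-a is one of the Frankfurt zones:

    # Sketch: create a similar 1 vCPU / 3.75 GB Ubuntu 16.04 instance with a 20 GB boot disk
    gcloud compute instances create my-web-instance \
        --zone=europe-west3-a \
        --machine-type=n1-standard-1 \
        --image-family=ubuntu-1604-lts \
        --image-project=ubuntu-os-cloud \
        --boot-disk-size=20GB \
        --tags=http-server,https-server

The http-server and https-server tags correspond to the "Allow HTTP/HTTPS traffic" checkboxes in the console.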
I have also converted it to a PEM file, and any time I want to use this key, all I have to do is open PuTTYgen, where I previously created this PPK file, load it, choose the key I have here, enter the passphrase, and click OK, and that shows me the public key. Let's copy the public key from here, go back to our instance, and paste it in; the login username is resolved from the key. So I have the private key saved on my machine and the public key pushed to the instance.
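If you are working from Linux or macOS rather than Windows, a key pair can be generated with ssh-keygen instead of PuTTYgen; this is only a sketch, and the file name and username are placeholders:

    # Sketch: generate an SSH key pair without PuTTYgen
    ssh-keygen -t rsa -b 2048 -f ~/.ssh/gcp-demo-key -C your_username
    # Print the public key so it can be pasted into the instance's SSH keys field
    # (the trailing comment is what the platform uses as the login username)
    cat ~/.ssh/gcp-demo-key.pub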
I can click Create, and that should create my instance. I should also remember that every instance we create in Compute Engine comes with some default firewall rules. I can connect to the instance using SSH from the console itself or using an external client like PuTTY, for which I will need the instance's public IP and the PPK file we have; or I can simply click SSH here, which connects me to the instance straight from the browser.
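The same connection can also be made from your own terminal. This is only a sketch; the instance name, zone, key path, username, and EXTERNAL_IP are placeholders standing in for your own values:

    # Option 1: let gcloud manage the keys and open the session
    gcloud compute ssh my-web-instance --zone=europe-west3-a

    # Option 2: plain ssh using the private key created earlier and the instance's external IP
    ssh -i ~/.ssh/gcp-demo-key your_username@EXTERNAL_IP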
Now that I am connected to the instance, I can run sudo su to log in as root, and if I want I can install a particular service, say Apache, by installing the apache2 package with apt-get (on Ubuntu the package is apache2; httpd is the equivalent on Red Hat-based systems), which is what will host your web application. Once it is installed, I can check whether the service is running by querying its status, and it says it is already running, which means the default Apache web server page should be reachable over HTTP, as long as my firewall rules allow it.
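Put as commands, this step looks roughly like the following on an Ubuntu image (a sketch of what was typed in the demo):

    sudo su                      # become root, as in the demo
    apt-get update               # refresh the package index
    apt-get install -y apache2   # the Apache package is apache2 on Ubuntu (httpd on RHEL/CentOS)
    service apache2 status       # confirm the service is running (systemctl status apache2 also works)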
Before we look at that, we can also check the firewall rules for our instance, so here we open the network details. Since we created the instance with the defaults, Google has created some firewall rules for us: default-allow-http, default-allow-https, default-allow-icmp, default-allow-internal, default-allow-rdp, and default-allow-ssh. These are ingress rules, meaning they allow inbound traffic, and here they allow it from anywhere; we can restrict that to specific IP ranges. We can also look at egress, which is outbound traffic, and this means my machine can reach the outside world over different protocols. I can customize these firewall rules, but for now I will leave the defaults as they are.
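The console view is enough for this demo, but the same rules can be inspected or extended from the gcloud CLI. A sketch only; the rule name and the source range are placeholder examples of restricting traffic to a specific range:

    # List the firewall rules in the current project
    gcloud compute firewall-rules list

    # Sketch: allow inbound HTTP only from one address range, targeting instances tagged http-server
    gcloud compute firewall-rules create allow-http-demo \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:80 \
        --source-ranges=203.0.113.0/24 \
        --target-tags=http-server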
Earlier we connected to our instance from the browser by clicking the SSH button, and I am still connected to the instance where my Apache server is running. The best way to check whether the Apache web server is reachable over HTTP is with the public IP, so let's copy it, open a new browser tab, and type http:// followed by the public IP. That shows the default Apache web server page, which means the Apache service is already running here. The page also tells you what to do if you want to host your own content: as it says, it is based on the equivalent page in Debian, from which the Ubuntu Apache packaging is derived, and if you can read this page it means the Apache HTTP server is installed at this site and working properly.
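A quick way to run the same check from a terminal instead of the browser is with curl; EXTERNAL_IP here is a placeholder for the instance's public IP:

    # Fetch only the response headers; a 200 OK confirms port 80 is reachable
    curl -I http://EXTERNAL_IP/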
You can replace the file located at /var/www/html/index.html before you continue operating your HTTP server; that is where you change the page, and that is how you will host your own website on this Compute Engine instance. We can check this: on the instance, look in /var/www and you will find an html folder containing index.html, which is the page we are looking at right now. If I open it, I can see a small HTML page that is exactly what the web server is showing us.
What I can also do, just to try it out, is put in a different HTML page and see how it works. We had the page under /var/www, in the html folder, and in that folder was the index.html file; for now I have simply moved it to a different location, so if I look in my home directory I find index.html there, and it shows the full HTML of the page we were viewing on our web server: the default Apache2 Ubuntu page, the "it works" heading, the content, and so on. Just to test, I pulled that page out of its location and added a different page in the html folder, mypage.html.
In it I just added some basic HTML: a body tag, a marquee, and then the closing html tag. It is a simple HTML page that I created, and once I had done that, I restarted the Apache service so that it picks up the new content. After that you can access it by going to http:// followed by the public IP, which we can copy from the instance list and paste into the browser; hitting that shows me that there is a mypage.html.
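Pulled together, the steps from this part of the demo look roughly like the following on the instance; mypage.html and its marquee content are just the toy example described above:

    # Move the default page out of the way (original location: /var/www/html/index.html)
    sudo mv /var/www/html/index.html ~/index.html

    # Create a simple replacement page (the marquee text is only the demo example)
    sudo tee /var/www/html/mypage.html > /dev/null <<'EOF'
    <html>
      <body>
        <marquee>This is my sample page hosted on Compute Engine</marquee>
      </body>
    </html>
    EOF

    # Restart Apache so it serves the new content
    sudo systemctl restart apache2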
I can click on it, and that shows the simple page I created; as of now this is my web page, hosted on my Compute Engine instance. Obviously we can create a more complex website backed by a database, using PHP and MySQL to store the data, and then host it on a Google Cloud Platform service such as Compute Engine, where your actual code runs; we will learn more about web hosting on Compute Engine instances in subsequent sessions. Are you in the analytics space? Have you ever wondered how cloud data processing is changing analytics?
If the answer is yes, this is the course for you. All major corporations are moving their data infrastructure and computing to the cloud to optimize costs and make data analysis seamless, and technology leaders like Google and Amazon are investing heavily in cloud-based SaaS tools. Google is a pioneer in this space: Google Cloud's big data and machine learning platform provides cloud databases and managed Hadoop and Spark services, and with tools like Dataflow and Dataproc the platform enables highly scalable and reliable data processing. Features such as BigQuery, Cloud Datalab, and its machine learning services make it a great choice for large-scale business analytics, intelligence, and machine learning, and its strong performance and low cost make Google Cloud popular and sought after. Tech giants like Spotify and Apple have adopted the Google Cloud platform in their organizations, and healthcare and consumer goods leaders like Philips and Coca-Cola are migrating to Google Cloud for their data analytics needs.
A Google data engineer with experience in Google Cloud machine learning and BigQuery earns 102 thousand dollars a year on average, and the demand for Google data engineer certifications has never been higher. Simplilearn's Google Cloud Platform Big Data and Machine Learning Fundamentals course, or CPB100, offers a comprehensive overview of Google Cloud Platform, covering its data processing capabilities through several use case studies. It is the perfect course for data analysts, business analysts, and data scientists with real-world experience in ETL, data modeling, querying, or machine learning, and decision makers evaluating Google Cloud Platform have also found the training very useful.
The course covers everything you need to know about Google Cloud Platform and includes eight hours of instructor-led training modules on big data and machine learning with Google Cloud. Hello, my name is Randolph Kale, and I am a co-author of the Google Cloud Platform certification courses for Google. I am certified on the Google Cloud Platform, a Google Authorized Instructor, and a course advisor at Simplilearn. I have more than 40 years of experience in the industry, spanning roles from cloud infrastructure development and management to product management, and I have partnered with Simplilearn to make sure you have the best chance of becoming certified in the various Google Cloud Platform technologies, gaining hands-on experience in the cloud, and preparing for the job.
Before we discuss the course and the reasons it will help you, let me first introduce you to the Google Cloud Platform. When you use Google Cloud Platform, you can create, test, and deploy applications at Google scale. The move to the cloud is happening today, with broad industry adoption, and when I teach I see the pace accelerating across companies; Forbes projects cloud services growing from $19 billion to $140 billion by 2019. So how can you move to the cloud, and how can you build a career in this fast-growing part of the industry? Your first step is to establish your credentials in the Google Cloud Platform domain, and you can do this by getting certified.
The Google Cloud Platform Fundamentals course introduces you to the Google Cloud Platform and shows you how to incorporate cloud-based solutions into your business strategies. The course is aimed at solution developers, systems operations professionals, and solutions architects who plan to deploy applications and create application environments on the Google Cloud Platform. It is also appropriate for executives and business decision makers evaluating the potential of Google Cloud Platform to address their business needs. Completing this course gives you the skills and confidence to make the most of the Google Cloud Platform ecosystem, and Simplilearn offers extensive training to help you understand the platform.
You get eight hours of live, instructor-led, interactive online sessions along with the course, and you receive a free trial account with 300 credits to use the Google Cloud Platform for 60 days. The instructors are Google-authorized trainers, the content is engaging, and all the topics are very well explained. With Simplilearn you enroll through the online classroom Flexi-Pass, giving you the flexibility to attend any of the weekday and weekend batches at your convenience. Another impressive thing is the proactive teaching assistance, where experts resolve all your technical or course-project-related questions while you are taking the course. So if you are looking for the training and guidance you need to gain a deeper understanding of Google Cloud Platform products and services,
this is the best course for you, with course material, labs, and authoritative instructors with extensive experience; you won't need to go anywhere else. Take charge of your career and sign up for Simplilearn's CP100A training. All the best to you. And with that, we have reached the end of this video on the complete Google Cloud Platform course. I hope you found it informative and interesting; if you have any questions regarding the topics covered in this video, please ask in the comments section below and our team will help you resolve your queries. Thanks for watching, stay safe, and keep learning.
Hey, if you liked this video, please subscribe to the Simplilearn YouTube channel and click here to watch similar videos. To nerd up and get certified, click here.
