
AI-900 - Azure AI Fundamentals Study Guide

Mar 29, 2024
Hello. In this video I wanted to offer some tips and tricks for the AI-900, the Azure AI and machine learning exam that earns you the Azure AI Fundamentals certification. As always, a lot of preparation goes into creating these videos, so if it helps, a like, subscribe, comment and share is always appreciated. Now, as with all Microsoft certifications, I would start by looking at the Microsoft certification page. If we actually go and take a look at the page, it goes over the details of exactly what's on the exam. There's the "skills measured" section, and you can click "download exam skills outline", which includes all the different objectives, the skills, and then the individual items that you need to know.
Also make sure you've taken a good look at the "what's changing" notice; you'll be able to see in red everything that's changing around those various objectives, just to make sure you're as ready as possible. And what I really recommend is the free online training that uses Microsoft Learn. It definitely, definitely, definitely meets those minimums. Maybe you'll find other resources too; the more resources, the better, and they'll help equip you for success. But at a minimum, take the exam skills outline and take the free online training.

That will put you in a good position. The AI-900 is very high level; it's not very deep. The exam will not ask you about the detailed steps to do any particular thing. It's really about getting a good understanding of what the services are and what the types of AI and machine learning are that we might want to use, and that's what you're going to be asked about. Everything is multiple choice: there are no case studies and there are no hands-on labs. So what I want to do is go over some basic things to think about, maybe as a kind of last-minute preparation.
Remember, this exam is about Azure solutions around AI, and we can think of everything we're doing here as computer solutions that imitate human behavior. So if you think about it, we have this brain up here, and then we have different types of senses: we have vision, we have speech abilities, we can hear things, and all those senses feed into the brain. (Picture a human a bit like me, without the hair; maybe the brain's power burned all that hair off.) The idea behind all these things is that if we want models around machine learning, well, we'll need some information to work with.
Well, there is information: data sets, for example, are going to come in, and what we're going to do is train. So we use that data with models to train, and once we've trained a model well, we can use it to do different things. That could be, for example, making a decision. It could be a prediction, a classification, or some other result. But essentially, in all of these things, we have models, data sets, and training.
Some models are pre-built, and some we're going to train ourselves to enable these various things. But the goal is to create this ability to predict: data-driven and fact-based, that's all we're doing. Now, there are several different types of activities. One very common one is anomaly detection: hey, we look at past results, and then is this new reading good or bad? Does it indicate that something is going to happen, like predictive maintenance? Hey, with increasing temperature on these sensors, there will be a failure within X amount of time. It's about capturing those various signals that allow us to do something.
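To make that anomaly-detection idea concrete, here's a minimal sketch in plain Python. This is for intuition only: it uses a simple standard-deviation threshold on made-up sensor readings, whereas the real Azure Anomaly Detector service is a managed API with far more sophisticated methods.

```python
# Minimal anomaly-detection sketch (illustrative only, not the Azure service).
def detect_anomalies(readings, threshold=2.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    mean = sum(readings) / len(readings)
    variance = sum((r - mean) ** 2 for r in readings) / len(readings)
    std = variance ** 0.5
    if std == 0:
        return []
    return [r for r in readings if abs(r - mean) / std > threshold]

# Seven healthy temperature readings plus one spiking sensor.
temps = [70, 71, 69, 70, 72, 71, 70, 115]
print(detect_anomalies(temps))  # -> [115]
```

The same shape of idea, a model trained on past signals flagging the unusual ones, is what drives predictive maintenance scenarios.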
Again, those decisions and predictions, I'll write them down as a result. It could be as simple as a chatbot: hey, we get these kinds of questions in, and we give some output back. Now, when we think about Azure and these types of services, we're talking about Azure Machine Learning. The way we're actually going to use Azure Machine Learning is as this whole platform that we leverage, and there are several aspects of that platform. The first thing we're going to do is create a workspace.
The Azure Machine Learning workspace is where we're actually going to go and build things and then start doing the various activities we want to leverage. Now, we're going to work with the workspace through Azure Machine Learning studio, which connects to that workspace. You may have already seen this: if you do the lab exercises, they actually walk you through using it. That's at ml.azure.com. Now, when we create that workspace, if we want to do our own training, for example, well, we need some compute, so one of the things we can add to the workspace is provisioned compute.
And then what else do we need? Remember, we have to do that training, but we need information. So in addition to provisioning some compute in our workspace, we're going to create some data sets. These are what we will use to train with, so these things will actually be used for our training. Now, there are different types of models that we could use for that training. For example, think about the data set I have. That data set might be structured: the data is labeled and has some kind of known value.
So if I have that kind of labeled data set, it's supervised (let's use a different color for that). When we have supervised learning, I really have three key types of models. First, we have regression. The point about regression is essentially that what I'm going to get out is some numerical value, based on that past history. So imagine applying this to comics: if you had a whole data set about comics and the attributes those comics have, like the age, the condition, and then their values.
Then I could pass in the attributes of a comic that I have and it would tell me, well, the value of that comic. That would be regression. Next we can think about classification, and as the name suggests, with classification I will get a class or category out, depending on the characteristics. So, for example, a classification here could be "I'm applying for a loan," and it will give me high risk or low risk depending on the various factors that influence that. And there is also time series. Now, time series is pretty much the same as regression: it's regression plus a time-based data set.
So this will actually allow me to get a number for a future time. What will my car be worth in two years, based on these conditions and these various types of things? Now, all of this is where my data set is supervised: I actually had the data labeled. If I don't have that, there is also unsupervised learning. With unsupervised learning, I'm really just going to feed in the data, because I don't know what the result is going to be, and what it will do is look at various similarities between the data and actually create those categories, those groupings, for me.
That's clustering, and it will really work those groupings out itself. So the point here is that in Azure, I have this workspace, I have my compute that I use to do the training, with the data sets that I supply. These are all the different models: supervised, unsupervised. Put it through the training and then, of course, once it's done, well, I'll deploy it. So now I think about deploying my trained model so I can actually do the prediction and usage, and I'll typically deploy it to production on AKS, Azure Kubernetes Service. That's really for production. If I'm just testing, I could use Azure Container Instances.
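As a toy illustration of what "training" produces, a model that maps inputs to a predicted value, like the regression example earlier, here's a minimal least-squares fit in plain Python. The comic-value numbers are made up, and this is for intuition only, not the Azure Machine Learning tooling.

```python
# Toy regression: learn y = a*x + b from labeled examples (illustrative only).
def fit_line(xs, ys):
    """Least-squares fit of a straight line to labeled training data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Labeled (supervised) data: comic age in years -> value in dollars.
ages = [10, 20, 30, 40]
values = [15, 25, 35, 45]
a, b = fit_line(ages, values)
print(round(a * 50 + b))  # predict the value of a 50-year-old comic -> 55
```

Classification would have the same shape of workflow, labeled examples in, a trained model out, but the prediction would be a category ("high risk" / "low risk") instead of a number.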
And from that point on, once it's actually deployed, I can have clients connect to it. You could have various client applications that want to use that trained model I created; they would go and connect. And to connect, there are several different things they use: there is an endpoint, and there will be a key, this kind of prediction key that gives them access to that deployed model. There is data that they're going to send; they go and connect and then use it. That's the picture when I think about Azure: how I can start using, training, and making these things actually available.
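Here's a sketch of what that client interaction looks like. The endpoint URL, the key placeholder, and the payload shape are all hypothetical (the exact payload depends on how the model was authored); a real call would POST over HTTPS, so here we just build the request pieces and parse a sample response.

```python
import json

# Hypothetical scoring endpoint and prediction key (placeholders, not real).
endpoint = "https://<your-service>.azurecontainer.io/score"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer <prediction-key>",  # the key that grants access
}
# Feature values for one prediction; the shape depends on the trained model.
payload = json.dumps({"data": [[34, 0, 1]]})

# A real call would be: requests.post(endpoint, data=payload, headers=headers)
# Here we just parse a sample JSON response a scored model might return.
sample_response = '{"result": [1]}'
prediction = json.loads(sample_response)["result"][0]
print(prediction)  # -> 1
```

The key things for the exam are the moving parts: an endpoint, a key, and data in, a prediction out.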
That's great as a "how do I use it?" But there are huge risks when we think about anything related to artificial intelligence. How does it make those decisions? Does "garbage in, garbage out" sound familiar? If my data set is not good, it can lead to many problems, for example bias. Imagine you have a data set where every villain is bald and English. If that's the only data it's been trained on, then when I ask it about me, it says, well, John is bald and he's English, so he's a villain. So if I have errors in my data and I train with that data, the errors in the data cause harm.
I am using data for training; well, that data may contain sensitive information, and if it gets exposed, that is a problem. You might create solutions that don't work for everyone, so they won't be available or applicable to everyone who wants to use them. We trust that these models are correct, but who is responsible for the various decisions these computers make? So when we think about this type of machine learning, it's not just about Azure; there are all these types of principles. When I think about this whole pattern, basically there are six key AI principles that we really want to apply to everything.
And Microsoft has a whole site about this; yes, we'll really just jump over it very quickly, but it's worth a look: it's Microsoft's responsible AI site at microsoft.com. If we look here, this is the website, and it talks about the six key principles. Think about this for a second, and then you can go and look at that website. The first of the six key principles is fairness. When we talk about fairness, all people should be treated fairly; there is no bias arising from the data used to train the model.
Then I can think of reliability and safety. We're putting decisions and predictions in the hands of these various models we created, so we need to make sure they're rigorously tested and that we have very good deployment processes. Imagine this were, for example, a self-driving car, and we didn't do very good testing. Well, there is a real problem that could cause loss of life. So you have to think about how rigorously we are testing the reliability and safety of these systems, because the more we put things in their hands, the more important it is that they are fair, reliable, and safe for us.
Then we think about privacy, so we have privacy and security. Again, we're training with data; we have to ensure that the data used is kept private, does not leak, and is not shared with other people for their own purposes. The system should be inclusive: it must include and work for everyone, from all sectors of society, regardless of gender, race, physical ability or anything else. It should apply to everyone. It should be transparent: if it is making decisions, we need to understand how it works, when it works, and when it may have limitations.
Those should be very clearly highlighted in its statements. For example, if we were only able to test it in very good lighting conditions, we could say, hey, this system has been tested and works under these conditions; outside of them, you should verify other things. We need to be very clear about how it works, when it works, and its limitations. And obviously there still needs to be accountability. Ultimately, there should be governance: organizational principles that guide the solutions created so they comply with ethical and legal standards. There are always gray areas in certain aspects, like facial recognition.
Facial recognition could be fantastic if used in the correct way: it can make life easier, and it can increase security. If used incorrectly, it has implications for our privacy. And so that accountability, that idea of having ethical standards, is very important in everything that is done. We just have to make sure that we understand those things. In the exam you can expect to see questions like, well, which of our six principles does this apply to? So take a look at that website, make sure you've actually read it and you understand what the principles really mean, watch the videos, and feel pretty confident about that, okay?
So with that, let's go back and really think: well, what exactly are the types of service, the types of AI, that we typically want to use? I'll start with vision, so picture me drawing a kind of eye here. For all of these things, no matter what we are doing, we obviously also use the brain: we have to interpret the input and do something with it. So if we think for a second about computer vision, there are many different types of computer vision that we would normally see.
For example, I can think of image classification. With image classification, well, what is this image? So I look and think, okay, there's supposed to be a car here, and maybe there's a tree, a forest. It could say, hey, the classification is "car", "person", or "group of people". You might also get a description, so you can have a description of the image. So I have classification, but I could also have some kind of analysis of the image. Building on that, with analysis, instead of just telling me, hey, it's a car, or a car in the woods in a black-and-white image, I might actually take that same image, and now I start to get things like tags. So it will come back with "car", "tree", etc.: it's giving me the labels of what's actually in the image. Now, within these classifications and analyses, there are actually some special domains, and we see things like celebrities and iconic places, landmarks.
So if I use those specialized domains and show an image, it might say, ohh, Statue of Liberty, or Arnold Schwarzenegger. It won't say John Savill. So there is this option to have specialized domains within that image analysis. Next, we have the idea of object detection. Now, the difference here is this: remember, classification and analysis look at the image and return a result about the whole thing. But now think about object detection on that same image, for example, where I have my car and maybe a tree.
Okay, what it's actually going to do is place a box there and return the coordinates. It will say, okay, I'll put a box around this object, I'm going to say it's a car, and I'm going to give a probability, a confidence level in that result, of 0.9. And around this one, well, I'm going to say tree, with 0.8 confidence. So with object detection, we actually get a set of coordinates, the bounding boxes of the objects, then the description, and then the actual confidence level for each. That shows us the location of the elements within the image.
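Here's a sketch of the kind of result an object-detection call gives back: each object gets a bounding box, a label, and a confidence score. The field names below mirror typical response JSON, but the values are made up for illustration.

```python
# Sample object-detection result (made-up values, illustrative structure).
result = {
    "objects": [
        {"rectangle": {"x": 10, "y": 40, "w": 200, "h": 120},
         "object": "car", "confidence": 0.9},
        {"rectangle": {"x": 250, "y": 5, "w": 80, "h": 300},
         "object": "tree", "confidence": 0.8},
    ]
}

# Turn each detection into a human-readable line: label, location, confidence.
descriptions = []
for obj in result["objects"]:
    r = obj["rectangle"]
    descriptions.append(
        f'{obj["object"]} at ({r["x"]},{r["y"]}) size {r["w"]}x{r["h"]}, '
        f'confidence {obj["confidence"]}'
    )
print("\n".join(descriptions))
```

Contrast that with classification, which would only tell you what is in the image overall, with no coordinates.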
Then we have the next step: semantic segmentation. If we think about it, object detection is really a box, an object, and a confidence. Semantic segmentation is all about pixels. With semantic segmentation, again, if we go to the same image, and again I have the car and the tree, this time what it's going to do is basically cut out all of those objects: it will color the car, say, blue, and it will color the tree, say, green.
So it colors the image pixels to match the actual object we are looking at; that is a very powerful ability. Then there is a set of services related to faces. When I think about face services, there are different elements. They could, for example, simply detect faces: hey, there's a face there, and again a box to indicate where the face is. They could return attributes, so age, happiness; I remember once seeing one with a level of baldness, and it was very confident about mine. And then we can actually have recognition, where we know who a certain person is.
So there are these different sets of capabilities, but there are limitations. Things like sunglasses it can handle: if I have a photo of a person and you put sunglasses on them, well, it can cope, since it's still seeing the tip of the nose and the mouth. But maybe an extreme angle would break it, so you must understand what it will do. The point is that it will return the coordinates of where a face is and then the attributes: happy, sad, bored, old, man, woman.
And again, we have to be careful with those key principles when we start identifying age or detecting man versus woman; there may be concerns about those things. Also, from a vision perspective, there's obviously OCR: spotting words in an image. I can think about reading just a little text in an image, or maybe importing very large amounts of text, and there are two different APIs when I think about extracting words: the OCR API for small amounts, compared to the Read API for really large amounts. Related to this is the Form Recognizer: I have a form that I want to bring in.
So I have an image of a form and I want to ingest it. Maybe you have some kind of questionnaire or surveys; I want to incorporate those things en masse and actually use them. So for OCR, as I mentioned, there is an OCR API and there is a Read API. If I'm just doing a couple of small amounts, the OCR API is for small text, small amounts. And the Read API is for bigger jobs: basically a large amount of scanned text. One is an image with a couple of words; the other is how I'd scan a page of a book, for example.
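To picture what comes back from that kind of large-scale read, here's a sketch of pulling plain text out of a Read-style result. The structure is modeled on the service's typical response (pages of lines of text), but treat the field names and content as illustrative.

```python
# Sample Read-style result: pages ("readResults"), each with lines of text.
read_result = {
    "status": "succeeded",
    "analyzeResult": {
        "readResults": [
            {"lines": [
                {"text": "Chapter 1"},
                {"text": "It was a dark and stormy night."},
            ]}
        ]
    },
}

# Reads run asynchronously, so check completion, then flatten pages to text.
if read_result["status"] == "succeeded":
    text = "\n".join(
        line["text"]
        for page in read_result["analyzeResult"]["readResults"]
        for line in page["lines"]
    )
    print(text)
```

The small-text OCR API returns something conceptually similar, just for an image with only a few words on it.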
So there are all these different types of computer vision services that I can use. Next, we think about natural language: speech and hearing, and then maybe transcribing that hearing into text, maybe translating it into other languages, maybe a combination of those things. So now I'm thinking about natural language, about talking and listening to these things. Let's get a little closer. Again, there are different elements I can think of. First, there's speech, and speech actually involves a number of different things.
One is speech to text. Now, when I think about speech to text, there are really different aspects, because I have to take the different audio elements. There's audio coming in as audio waves, and the audio waves have to be turned into phonemes. If I think about what a phoneme is, phonemes are the unique sounds that exist. We have letters of the alphabet, A B C D E, and then the phonemes are the actual sounds, like the "c-a-t" sounds that make up "cat".
So we take the audio to get the phonemes, and from the phonemes we get words: we get that flow of stages. A speech-to-text conversion has to be able to do all of that; it's voice recognition putting those things together. I can also think about text to speech. Here we have something written and we're actually going to say it: maybe we choose which voice to use for that. And then I can think about speech to speech, where I do voice translation, and a large number of languages are supported.
Hey, I hear it in one language, and we output it in another. So there are all these combinations of different things that we can use, and there are different APIs. But whenever you see something with speech as part of the requirement, whether it's translation, listening, or speaking, you're going to use those capabilities within that speech service. Now, also as part of natural language, there are several other text services. There is natural language, but of course, there is also text.
Thinking about text, well, there is analysis: text analytics. I'm looking at a document and doing various things with that document or passage of text. Sometimes we're looking for key phrases, so I can think of a key phrase: what is the key talking point? I may be looking for entities, so that could be a person or a place. I could be trying to gauge sentiment. If it's sentiment, is it positive? You're going to get a value: for example, 0.99 would be very positive. Or is it negative, like 0.1?
Neutral will be maybe around 0.5. It can also detect language. Now, if it can't detect the language, it will return NaN: if the language is ambiguous and can't be detected, it can't say what it is. So we have that text analytics, and that sentiment analysis can be very, very powerful. It's very common to see, for example: hey, I'm scanning Facebook, I'm scanning Twitter, we take the text, we send it to sentiment analysis, we get positive or negative results back, and then we do something with that. So we have analysis, and we also have things related to translation.
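Before moving on, the sentiment scores just described can be sketched as a tiny mapping function. The 0-to-1 range matches what was said above, but the exact thresholds here are my own illustrative choices, not fixed service behavior.

```python
# Map a 0..1 sentiment score to a label (thresholds are illustrative).
def sentiment_label(score):
    if score is None:        # e.g. language could not be detected, no score
        return "unknown"
    if score >= 0.6:
        return "positive"
    if score <= 0.4:
        return "negative"
    return "neutral"

print(sentiment_label(0.99))  # -> positive
print(sentiment_label(0.1))   # -> negative
print(sentiment_label(0.5))   # -> neutral
```

That's the whole social-media-monitoring pattern in miniature: text in, a score out, then act on the label.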
And that's more than 60 languages. With that translation, again, I might think about translating speech: there is translation of text, but there is also translation of speech. And when you think about the text side, you can actually do several things. There's filtering, like a profanity filter, so it doesn't output those words. I might have a selective filter to say, well, don't translate this particular thing; maybe it's a brand name, so leave it alone. We have these types of capabilities around text, and then there is language understanding.
When I think about language understanding, you'll often hear this called LUIS, the Language Understanding Intelligent Service. It can be both an authoring and a prediction resource, and its goal is to detect the intent of an utterance. So if you have devices in your house, like a smart home, and you say, hey, turn on the lights, it can detect the intent: turn on the light. We think about the entities, the intents, and the utterances, and we train the model on them, so it can then go and do the various things we actually want.
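Here's a sketch of the shape of a language-understanding prediction: an utterance maps to a top intent plus entities. The structure is modeled on typical LUIS-style output, but the field names and values here are hypothetical sample data.

```python
# Sample language-understanding prediction (hypothetical values).
prediction = {
    "query": "turn on the lights",       # the utterance the user spoke
    "topIntent": "TurnOn",               # the detected intent
    "entities": [{"type": "device", "text": "lights"}],
}

# A smart-home app would branch on the intent and act on the entities.
intent = prediction["topIntent"]
devices = [e["text"] for e in prediction["entities"] if e["type"] == "device"]
print(intent, devices)  # -> TurnOn ['lights']
```

The authoring side is where you define those intents and entities and supply example utterances; the prediction side is what returns results like this at runtime.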
So that's LUIS. Those are some of the key things about computer vision and natural language. Then there's also a set of capabilities around conversational AI, and I'll draw it here so we can think of it as something bigger: conversational AI. These would be interactions, different types of interactions with users through a bot. And there are really two elements of conversational AI I can think of. First, there is QnA Maker, which is about creating a knowledge base. Now, that knowledge base, you can create directly within the tool.
You might consider importing an FAQ from somewhere else; it could be a chit-chat data source, it could be a web page, it doesn't really matter. But it is a set of question-and-answer pairs. So I'm building that knowledge base, and then I really want to do something with it. For that, we have the Azure Bot Service. What the Azure Bot Service gives us is a framework to develop, manage, and publish these bots. And what the bots will actually do is take this knowledge base; the knowledge base you built here is what's needed.
The bot will expose it through various channels, and a single bot can be exposed to multiple channels. You could have one bot that exposes a knowledge base to Teams and to web-based chat, so you can have more than one channel within a single bot. Think of the bot as the interface to the knowledge base that then gets laid out over channels. So those are the types of services we think about. Obviously, there are many of them. But it's about: hey, I can see things, whether it's objects or text or forms; we can hear voice, translate languages, and emit voice.
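Circling back to that conversational AI piece: a knowledge base is just question-and-answer pairs plus matching, which can be caricatured in a few lines. The naive word-overlap matching below is purely illustrative; QnA Maker does far smarter semantic matching.

```python
# A tiny "knowledge base": question-and-answer pairs (made-up content).
knowledge_base = {
    "what are your opening hours": "We are open 9am to 5pm, Monday to Friday.",
    "how do i reset my password": "Use the 'Forgot password' link on the sign-in page.",
}

def answer(question):
    """Return the stored answer whose question shares the most words."""
    q_words = set(question.lower().replace("?", "").split())
    best, best_overlap = None, 0
    for known_q, known_a in knowledge_base.items():
        overlap = len(q_words & set(known_q.split()))
        if overlap > best_overlap:
            best, best_overlap = known_a, overlap
    return best or "Sorry, I don't know."

print(answer("When are your opening hours?"))
```

A bot is then just this lookup wrapped in an interface and published to channels like Teams or web chat.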
We can analyze text: what is the sentiment, what is it asking me (natural language understanding), and then interact with people. So if those are all the capabilities, how do we actually use them? What are the Azure AI services? Now, when I think about it, there are many of them. Some of them are specific, and then there's a general one. If I want many capabilities, maybe it's just the Cognitive Services resource. This gives me a single endpoint and key, which I can then use regardless of which service I want.
It has many different services contained within this single Cognitive Services resource. So if I just want to be able to use a lot of different cognitive services, I want to make managing them as simple as possible, and I don't care how the individual billing breaks down, then if I create a Cognitive Services resource, I can use pretty much any of them. There are some exceptions to that, but if you see a question like, hey, I need a single resource to use multiple services, I want to do vision and text, the answer will be the Cognitive Services resource. And then you have the idea of the specific resources.
Now, the specific resources are for when you only need to use that one service, or you need to be able to perform individual tracking of billing or usage. And even within these individual ones, sometimes they can be used for just authoring, or just training/prediction; if you select both, you will actually create two separate resources. So from a specific perspective, I can think of Computer Vision. We talked about computer vision. Now, all of that is pre-built: no training, so I can't train it with my own images, for example. I can analyze images, I can do OCR, all that stuff.
So that includes OCR, but I can't use my own images to go and train it. That's Computer Vision, and it is included in the Cognitive Services resource. Then you have Custom Vision. With Custom Vision, I can use it for training or prediction. If you select both, it will create two resources: you can say both, but it will create two, one for each. And with Custom Vision, I'll get a key and an endpoint for each individual one, so I can separate that tracking. But now I can use my own images, for example, to do that training, which gives me that ability.
Again, that's part of the Cognitive Services resource. Then we have the Face service. Remember, Face is about identifying people, grouping them, analyzing them, classifying them. That's part of it too. Then we have the Form Recognizer, and this is important because it is not part of the general Cognitive Services resource. Form Recognizer gives the ability to import an image of a form; I think the maximum size we can send is 20 megabytes. But it's not part of that general Cognitive Services resource that can do most other things. There is a Text Analytics service, another service we can take advantage of there, and there is a Speech service.
And remember, if the question says speech in any way, well, this is text to speech, speech to text, speech to speech, speech translation. And remember you can combine those things together: for example, you could listen to German and output text in English. So that's the speech capability there. Then there is Translator Text, and again, that's those 60-plus languages. There are language codes you set, and you can translate to multiple languages with a single call. So once again, you can have multiple "to" languages for the languages you want to target.
You could receive English and, in a single call, translate to German and French, for example. So we have that capability in there. And again, it has that profanity filter, and there is also the selective piece: I could say, hey, don't translate this, maybe it's a brand name, I don't want that translated. So, drawing the dots, the Text Analytics, the Speech, and the Translator Text are all in the general Cognitive Services resource: it's there, it's there, it's there. The only real exception is the Form Recognizer. Then we have Language Understanding. Again, language understanding can be authoring or prediction.
If you say both, separate resources will be created for them. This is where you define those entities, define the intents and utterances, and then go and build it. And then there are those last two services. First, there's QnA Maker, and remember that QnA Maker creates the knowledge base, which is those pairs of questions and answers. Again, remember that you can create it in the tool, you could import a document, you could import chit-chat: there are different things you can do to create it. And then there's the Azure Bot Service.
These two are not within the Cognitive Services resource; they are separate services. But again, the Bot Service is essentially to develop and manage those bots that take the knowledge base and make it available over some type of channel. So the key thing I would really stress is this: whenever you see a question that says "you need several services," the answer will probably be the Cognitive Services resource, unless it says you need several and you want to track the cost individually or you want to isolate the keys. If it says, hey, I only want this one service, or I want separate keys and endpoints, it will be the specific individual service, so understand their capabilities. Make sure you understand the types of AI and machine learning. Make sure you understand those key AI principles around fairness, reliability and safety, privacy, inclusiveness, transparency, and accountability. And again, if you follow that Microsoft training course, you're going to create a workspace, use the studio, and provision some compute (make sure you delete it afterwards, or you're going to pay for it). They have some sample data sets that you can pull in to do different types of training with the models and then see it all in place.
It is not a super complicated exam. Its real goal is simply to cover your breadth: make sure you understand that there are different types of services and when you would use which types of services. That's really the point of the exam. So take your time, do the preparation, and don't overthink it. Don't panic about failing. If you fail, big deal; it will tell you where you are weaker. Re-study those elements and you'll pass it next time. Good luck. Until next time. Take care.
