
Introduction to programming and programming languages: C Programming Tutorial 01

Mar 29, 2020
Hello everyone. In this lesson, and in this series of lessons, we will introduce you to computer programming through 'C'. 'C' is a computer programming language. These lessons are intended for someone with very little prior programming experience; we will keep our assumptions very low, start with the basics, and move on. OK, let's start. Unlike most other machines, which can perform only a finite set of predefined tasks, a computer is a general-purpose machine that can perform any computational task. All you need to do is give it a program, which is nothing more than a set of instructions to perform a task.
A computer is nothing in itself without programs; all tasks are done through programs. A modern computer has hundreds or even thousands of programs. There are programs that manage the computer's hardware resources, which we call system programs. And then there are the programs that perform your favorite tasks; we call them application programs, like a web browser that you use to browse the Internet or a text editor that you use to create a document. And if none of these programs can do the job you want done, you can write your own program. That's what we're going to learn in this series of lessons: how to write and run your own program.
As we saw, a program is a set or sequence of instructions that you give to the computer, and the computer executes those instructions. Now, in what language can I give these instructions to the computer? Can I give them in a natural language like English? You must have heard that a computer understands binary; binary is the language of computers. Binary is a number system that has only two digits, 0 and 1. The number system we use has ten digits, 0 to 9, and we call it the decimal number system. So why does the computer understand binary, or rather, why are computers designed to understand binary? The reason is that binary is really easy to realize in the actual physical design of things, on real hardware.
The computer is an electrical device, and it is very easy to create the logic of zero and one in an electrical circuit. For example, if current flows through a wire, we can say that it is a 1, and if no current flows, we can say that it is a 0. If there is a potential difference across a capacitor, we can say that it is a 1, and if there is no potential difference, we can say that it is a 0. In general, '1' can correspond to something that exists and '0' to something that does not exist; '1' can correspond to some condition being true and '0' to that condition being false.
At the lowest level of a computer, any communication must occur in binary and any data must be stored in binary. You can use a bunch of wires together to communicate or signal something, or you can use a bunch of capacitors together to store some data. In the actual physical design there may be other ways to communicate or store information, but logically it has to be binary: a bunch of 1's and 0's together. If we use only one wire or one capacitor, we can signal or store only two possible values, zero and one. But if we use two wires or capacitors together, we can signal or store four possible combinations, four possible values in binary: 00, 01, 10 and 11.
A binary digit is also called a bit. If you have only one bit, you can have two possible values, and if you have two bits, you can have four possible values; each bit can be either 0 or 1. A 1 is also called a 'set' bit and a 0 is also called an 'unset' bit. If we have three bits, say each cell that I drew here in this figure is one bit position, then at each position we have two options, 0 or 1. For each of those options we again have two options at the next position, so the number of combinations doubles with every bit we add.
So if we have three bits, we can have eight possible binary values. These are the eight possible values with three bits; in decimal this is 'zero', this is 'one', this is 'two', and so on. In general, if we have n bits, we can have two to the power of n possible permutations and combinations of zeros and ones, that is, values from zero up to two to the power of n, minus one. For n equal to 3 we can have values from 0 to 7, for n equal to 4 we can have values from 0 to 15, and we can continue. To learn more about the binary number system, and things like how to convert a number from binary to decimal and vice versa, you can check the description of this video for some lessons.
Now, going back to how the computer understands and executes instructions: the central part of the computer that executes all the instructions is called the central processing unit, or CPU. Sometimes we just call it the processor. It is not the big box or case of your desktop computer, though it is often mistaken for that. A modern CPU is a very small integrated circuit that would look something like this; Intel is one of the companies that makes CPUs. So the CPU is the guy that has to execute your instructions. Now, every instruction for the CPU has to be a bit pattern, a pattern of ones and zeros.
But an instruction can't be a random pattern of bits; it has to be in a certain format for the CPU to decode and execute. A set of specifications is established for a CPU, and its instructions must follow those specifications. For example, the specification may be that any instruction to perform an arithmetic or logical operation must be 20 bits. Let's say the cells in the figure I drew here are bit positions in a binary number. Now the specification can be (and this is just an example) that of these 20 bits, the first four bits, the leftmost four bits, must be a binary code for the operation you want to perform.
This binary code for the operation is called the operation code, usually shortened to 'opcode', and the opcodes are specified as well. Let's say the opcode for 'Add' in four bits is 0001, the opcode for subtract is 0010, and there may be other operations like comparison; let's say comparison is 0101. So if we want an instruction to add two numbers, these four bits should be 0001. Let's say the rest of the specification is that the next 8 bits are the first operand; the numbers on which an operation is performed are called operands. The next eight bits can be the second operand. Let's say you want an instruction to add the numbers 4 and 5.
So, 4 in binary is 100, and the rest of these bits will be 0; 5 in binary is 101, and the rest of these bits will be zeros, the leading zeros meaning nothing. So what you have here in 20 bits is an instruction to add the numbers 4 and 5, according to the example specification I've chosen. So here's the deal: this is the language that the machine actually understands, and it executes instructions in binary according to some set of specifications. The bits in the instruction map to some physical design in the circuit, and we don't need to go into those details. Such an instruction in binary is called a machine language instruction, because machine language can be interpreted and executed by the machine, or more specifically, the CPU.
Two CPUs can have completely different architectures and completely different instruction specifications. Therefore, a machine language instruction for one CPU architecture may not work on another CPU architecture. There was a time when programs were literally written in machine language; it was a very tedious and error-prone process. Think about it: you would constantly have to look up the specifications of the binary codes for various operations and commands. The program would not be human-readable; you would not be able to figure out the logic just by looking at it. Some improvement came with the development of assembly language. In assembly language we have a more human-readable representation of a machine language instruction. For example, if this is the machine language instruction with an opcode and two operands, then in assembly language the same instruction can be written in a more readable form.
We can write a keyword for the opcode, for example if this is the opcode for addition we can write 'Add', and then we can write the operands and constants in decimal: operand one is 4 and operand two is 5. An instruction like this can be written in assembly language. The improvement you are getting is that instead of writing opcodes and commands in binary, you are writing keywords that make some sense. Instead of writing 0001 for addition, we write the Add keyword, and instead of writing constants as binary numbers, we write them in decimal. But wait a minute, didn't I say that the CPU that has to execute all the instructions only understands machine language instructions?
The CPU is, so to speak, shouting that it can only execute machine language instructions, so how can we run an assembly language program at all? Well, you can write your logic in assembly language and then pass the assembly language instructions to a program called an assembler, and this assembler will generate the machine language instructions corresponding to the assembly language instructions. So basically, someone wrote a program called an assembler, and with the assembler, programmers could write slightly more readable instructions in assembly language. But there was a problem with assembly language: it maps strongly onto machine language; essentially, the binary codes of machine language just become assembly language keywords.
So, just like machine language, assembly language instructions will also vary from one CPU architecture to another. Therefore, if you try to transfer your assembly language code from one architecture to another, the same program may not run; you may have to rewrite your program according to a new set of specifications. So both assembly language and machine language are specific to the architecture of the machine. Such languages, whose instructions depend on the architecture of the machine or, more specifically, on the architecture of the CPU, are called low-level languages. There was a need for programming languages that were not specific to the architecture of the machine; such programming languages are called high-level languages.
A high-level language is supposed to have more elements of natural language, and it makes a programmer's life much easier because they won't have to worry about all the detailed low-level specifications of the machine. So now let's talk about high-level languages. High-level languages give you an abstraction of the machine architecture, and many high-level languages have been developed to date. I will mention some of them: we have 'C', we have 'C++', we have 'Java', we have 'Python', and a couple of old ones like 'FORTRAN' and 'BASIC', and the list goes on.
FORTRAN, developed by IBM, was the first high-level language. Now, even with high-level languages, we cannot escape the basic rule that ultimately the instructions to be executed have to be in machine language. There are two possible execution models for high-level languages. Some languages are called compiled languages; for these languages we have a program that we call a compiler. The compiler is different for different languages, and it is also different for different machine architectures. The compiler takes your program written in a high-level language (we generally say that it takes your source code) and generates machine code, a set of instructions that the CPU can execute directly.
C is an example of a compiled language. The way it normally works is that you give the compiler a file, or a group of files, containing your high-level language program. Let's say app.c is the file that contains a C program; the compiler will generate another file that is executable on the machine, say something like app.exe (.exe files are executable files on a Windows machine). The process of generating executable files from source code written in high-level languages is called compilation; basically, the compiler performs the compilation. There is another execution model for high-level languages.
Some languages are called interpreted languages; for interpreted languages we use programs that we call interpreters. Unlike compilers, interpreters do not generate executable code that can be run separately. An interpreter takes the source code in a high-level language, parses it, and runs it within itself; no executable file is created, and the program is executed inside the interpreter. We won't go into the details of how that actually happens. Python is an interpreted language. Theoretically, any language can be compiled or interpreted, but in practice languages fall into one of these two categories: they are either compiled or interpreted.
So there are many high-level languages, and we are saying that we will learn programming through C. First of all, why are there so many languages, and which one is good? There really is no such thing as a good or bad language. Some languages were written to overcome the limitations of previous languages; some languages were written to make a certain set of tasks easier. But as such, the basic, primitive constructs are the same in most languages, and what you can do in one language can be done in another; there would be very few exceptions.
If someone knows one programming language very well, it will be very easy for them to pick up another programming language. And now let's talk about C. C was developed around the year 1970 by a great computer scientist named Dennis Ritchie. Dennis Ritchie is also a creator of the UNIX operating system; in fact, the UNIX operating system was written in C. C is a high-level language and requires compilation. C is still a very famous programming language, and most of the other famous programming languages, like C++, Java, or C#, derive their basic constructs from C.
So if you know the syntax of C, it is really easy to pick up these other languages, at least for the basics, and C gives you a lot of low-level control over the machine. Some people say that C sits somewhere between a low-level language and a high-level language. Working with C will give you a lot of insight into computer architecture, and I think that's good for a computer science engineer. So, to be clear, in this series of lessons we are going to learn to program through C. To learn any language, we must learn some vocabulary and some grammar; basically, we must have a set of rules, some syntax and some semantics.
Of course, the rules for a programming language will be much stricter than the rules for a natural language. A programming language cannot be ambiguous like natural language. We will start with all these things in our next lesson. We will also write our first C program in the next lesson. This is all for this lesson. Thanks for watching.
