C program variables and constants are of four types: char, int, float and double. Before an identifier can be used in a C program, its type must be explicitly declared. Here is a declaration statement in C:
int apples;
This statement declares the programmer's intent that apples will store a signed integer value, i.e. apples may be either a positive or a negative integer within the range set for variables of type int. Now this range can be very much machine dependent; it depends, among other things, on the word size of your machine. For most compilers for the IBM PC, ints are stored in two consecutive bytes and are restricted to the range [-32768, 32767]. Compare this with the VAX or the Macintosh, where ints are 4-byte signed integers in the range [-2147483648, 2147483647]. In declaring apples to be an int you are telling the compiler how much space to allocate for its storage. You have also made an assumption about the range in which you expect its value to lie.
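If you are unsure of the range on your own machine, you can ask the compiler directly. The following is a minimal sketch, assuming your compiler supplies the standard header <limits.h>, which defines the constants INT_MIN and INT_MAX:

#include <stdio.h>
#include <limits.h>

main()
{
    /* sizeof(int) gives the number of bytes an int occupies here */
    printf("An int occupies %d bytes on this machine.\n", (int) sizeof(int));
    printf("The range of int is [%d, %d].\n", INT_MIN, INT_MAX);
}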
In contrast to Pascal, in C it is possible and in fact usual to both declare a variable's type and, where needed, define its value in the same statement:
int salary = 5000;
It is not correct to assume that a variable which has only been declared, e.g.:

int volts;    /* volts is unpredictable */

but has not been defined, i.e. assigned a value, automatically gets the value 0. In fact, its value may be anything at all! Note also that the thousands, millions or billions places of a constant are never separated by commas when constants are written in C programs.
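The following declarations illustrate both points at once; the names volts and salary here are ours, chosen purely for illustration:

int volts = 0;       /* explicitly defined: volts is now guaranteed to be 0 */
int salary = 32000;  /* thirty-two thousand is written 32000, never 32,000  */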
Let's look at Program 2.1 and its output. The program adds two ints, x and y, and prints their sum, z. Don't worry if you can't understand everything about the program just yet. Only remember that printf() can be used to print numbers just as easily as it prints strings. To print an int x, the following printf() will do:
printf ("The value of x is: %d\n", x);
The %d signifies that x is to be printed as a decimal integer.
#include <stdio.h>

main()
{
    int x = 5, y = 7, z;

    z = x + y;
    printf("The value of x is: %d\n", x);
    printf("The value of y is: %d\n", y);
    printf("Their sum, z, is: %d\n", z);
}
The value of x is: 5
The value of y is: 7
Their sum, z, is: 12
To understand the great importance of using variables in a computation only after having assigned them values, execute the following program and determine its output on your computer:
#include <stdio.h>

main()
{
    int x, y, z;    /* x, y and z are undefined */

    z = x + y;
    printf("The value of x is: %d\n", x);
    printf("The value of y is: %d\n", y);
    printf("Their sum, z, is: %d\n", z);
}
On our machine, a VAX 11/780 from Digital Equipment Corporation, U.S.A., the output was:
The value of x is: 2146744409
The value of y is: 2146744417
Their sum, z, is: -1478470
Now, looking at the output of this program, could you possibly have predicted the values x, y and z would get? Moral: Never assume a variable has a meaningful value, unless you give it one.
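The cure is simple: give every variable a value before you use it. Here is a minimal sketch of the repair; the values 5 and 7 are ours, chosen only for illustration, and initializing x and y at the point of declaration, as in Program 2.1, works equally well:

int x, y, z;

x = 5;        /* x is now defined */
y = 7;        /* y is now defined */
z = x + y;    /* safe: both operands have known values */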