These are actually two questions that can be addressed separately, but they have a lot in common. The main fact to remember is that chips are built from devices called transistors, and modern fabrication techniques allow us to make them very small, on the order of a micron or less in length and width. This means that in a chip of 1 cm x 1 cm (roughly the size of a fingernail) we can fit many millions of them. Each one is relatively "stupid": it only knows how to say either "yes" or "no" (this is called binary logic), and what it says depends on what it has been "taught" to say in the past and on what the transistors next to it are saying.
To answer the second question first: a lot of information can be stored in a chip by "teaching" each transistor to say either "yes" or "no", and by having it repeat its answer whenever we ask. Calling "yes" 1 and "no" 0, we can code any number in base 2 (just as 135 in base 10 is 1x100+3x10+5x1, its equivalent in base 2 is 10000111 = 1x128+0x64+0x32+0x16+0x8+1x4+1x2+1x1). If we have a few million transistors, we can store millions of numbers. Storing pictures is not much different: every picture can be divided into little squares, called pixels, each assumed to be of a single color (the dominant color in that square of the original picture).
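The base-2 idea above can be sketched in a few lines of Python (a toy illustration, not how a chip actually stores bits): each 1 or 0 in the list plays the role of one transistor's "yes" or "no".

```python
# Toy sketch: a number stored as a row of "yes"(1)/"no"(0) answers.
# The value 135 from the text becomes the bit pattern 10000111.

def to_bits(n, width=8):
    """Represent n in base 2 as a list of 0s and 1s, most significant bit first."""
    return [(n >> i) & 1 for i in range(width - 1, -1, -1)]

def from_bits(bits):
    """Rebuild the number by summing each bit times its power of two."""
    value = 0
    for bit in bits:
        value = value * 2 + bit
    return value

bits = to_bits(135)
print(bits)             # [1, 0, 0, 0, 0, 1, 1, 1]
print(from_bits(bits))  # 135
```

With a few million such bit positions, millions of numbers can be stored, exactly as the answer describes.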
Now, every shade of every color can be obtained by mixing red, green and blue in different intensities, so just by knowing the intensity of each (which can be represented by a number, and therefore by a series of "yes" and "no" answers), we can rebuild the picture. If a picture has, for example, a row of pixels that is all blue, then instead of saying "blue", "blue", "blue", ... once for every pixel in the row, we could just say "this whole row is blue". There are ways to store information like this in a computer so that we save space; they are called compression techniques, and each of them has an associated format, which is why some of the pictures on our computers are "gifs", some are "jpegs", and so on, depending on the technique used.
Now, computers can do all that they do because some of these transistors can be "taught" to change from saying "yes" to saying "no", or vice versa, very quickly; the speed at which they can do it is what we call the clock speed of the computer, expressed in gigahertz (GHz). One gigahertz of clock speed means that a transistor inside the chip can be taught to say a different thing one billion times per second. So, in the end, computers are smart and can do many things because they are comparable to millions of very stupid people who can each do only very small things, but do them very, very fast, and who, working together, produce a lot of results very quickly.
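To get a feel for what "one billion times per second" buys you, here is a rough back-of-the-envelope calculation (the image size and the cost of ten cycles per pixel are assumed numbers for illustration, not measurements):

```python
# Rough illustration of clock speed: how long a simple per-pixel job takes at 1 GHz.

clock_hz = 1_000_000_000   # 1 GHz = 10**9 clock ticks per second
pixels = 1920 * 1080       # pixels in a full-HD image (assumed example)
cycles_per_pixel = 10      # assumed cost of one simple operation per pixel

seconds = pixels * cycles_per_pixel / clock_hz
print(seconds)  # 0.020736, i.e. about a fiftieth of a second
```

Even touching every one of two million pixels ten times finishes in a blink, which is why "very small things done very fast" adds up to so much.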
How all the transistors inside a chip are connected and made to work together is a part of computer science called "computer architecture"; the set of instructions we give those transistors so that they produce a desired result is the computer program. Different computers have their transistors connected in different ways, and the programs needed to produce the same results are different too; that is why a program for a PC will not work on a Mac and vice versa.
Computers are a vital part of our everyday lives, and it is amazing how many tasks can be accomplished using them. The power of a computer relies primarily on its ability to do repetitive operations over and over again. Computers take big jobs and break them down into smaller and smaller operations until the job can be processed in terms of the basic operations they can handle. These basic operations usually involve taking a set of inputs, processing it, and then releasing output, generally in the form of "on or off", "true or false", or "0 or 1". This kind of output is common because it is convenient for working in binary, the computer's "native language".

Processing speeds are generally limited by the speed at which electrons can move through the circuits. To make computers faster, circuits have gotten smaller and smaller to shorten that distance. While there have been improvements over the years in the architectures with which computers process information, it is interesting to note that the most basic operations in computers have not changed dramatically over time (because of their very basic nature). From a certain perspective, the way computers process information has not changed much, but the speed at which they process it has increased dramatically thanks to the shorter distances electrons have to travel. It is, in fact, the small size of the processor chip that makes computers so fast! With the increased processing speed, computers can attack much more difficult and computationally intensive problems in shorter periods of time; that is, computers are now being applied to problems that were virtually unsolvable just a few years ago.
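The idea of "breaking a big job into basic operations" can be made concrete: even ordinary addition can be built out of nothing but the bit-level AND, XOR, and shift operations that hardware provides natively. This Python sketch (for non-negative integers) mimics what an adder circuit does:

```python
# Sketch of building addition from basic bit operations, the way hardware does.

def add(a, b):
    """Add two non-negative integers using only AND, XOR, and left shift."""
    while b != 0:
        carry = a & b     # positions where both numbers have a 1 produce a carry
        a = a ^ b         # XOR adds the bits in each position, ignoring carries
        b = carry << 1    # carries move one position to the left for the next round
    return a

print(add(135, 7))  # 142
```

Each pass through the loop is exactly the kind of tiny "true or false" operation described above; stacked up by the billions per second, they become everything a computer does.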