What data type is being used for computer images? I know it will all ultimately be converted to bits. I want to know, as a user, which data type should be used to store such information. Also, I want to know how the CPU works to change the given data into bits, and what data type will be good for the CPU to easily change the given data. Thanks!
Question Date: 2021-07-21

Answer 1:
There are several questions here which generally indicate a misunderstanding of how computers store and use data. The short answers to the questions are here at the top and the rest of the response gives more detail.
Computer images (and all other data) are stored as a sequence of bits; there is no "before it is converted to bits" stage within a digital computer, because computers can only handle discrete (quantized) data. All input and output comes in chunks of information, and the size of the smallest chunk is 1 bit.
There is no single data type for images, though there are many different data types which could be used to save the pixel values; which one to use depends on what else the user wants to do with the information. There are also various image file types with differences that I won't cover here. Regardless of the file type, all are ultimately saved as binary sequences, measured in bits.
The CPU does NOT change any data into bits. CPUs can only work with digital signals; a separate analog-to-digital converter does the work of converting sensor signals into digital (bit) form. As discussed in this response and in another ScienceLine response, there are various input and output devices which interface with computers and can take non-digital (analog) signals and produce digital inputs for a computer, or take digital signals from the computer and generate analog output. Given this, the remaining question about the best data type for the CPU to change the input data cannot be answered as written.
First, a "bit" is simply the smallest unit of information, similar to a meter being a unit of distance. Bits are not parts of a computer; "bit" is a term for an amount of information (in non-computer contexts the unit shannon may be used instead). A single bit is the amount of information represented by a device which has two possible values (states) and at any time is in exactly one of those states. The name comes from these characteristics: "bit" is a contraction of binary digit (binary for the 2 states, digit because the information comes in discrete units).
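To make that quantitative, here is a minimal Python sketch (my own illustration, not part of the original answer): the information carried by a device with n equally likely states is log2(n) bits, so a two-state device carries exactly 1 bit.

```python
import math

# Information carried by a device with n equally likely states: log2(n) bits.
for states in (2, 4, 256):
    print(f"{states} states -> {math.log2(states):g} bits")
```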
Modern computers are digital, meaning they use signals represented by a finite number of discrete values; in particular, they use binary signals which can take only on/off-type states (see later for the rationale for binary). This means that everything inside the computer is always "in bits"; the computer never holds data in another form. The question suggests that images are held in some other form besides bits, which the asker calls the data type, but as just established, that is not the case. There are indeed many different data types in programming - such as integers, floating point numbers, booleans (true/false), and strings (sequences of characters, often used for text) - but all are encoded the same way, as sequences of binary states of transistors (i.e., "as bits"). The term "data type" refers to an attribute that tells the computer how a sequence will be used. The computer interprets a sequence as, say, an integer instead of a letter only because there is another piece of information which tells the computer which interpretation to use.
As an example, the sequence "01000111" read as an integer would be 71, but the same sequence read as an ASCII character would be the letter "G". The interpretation is set as part of the programming of the computer. If the data type is not specified, the computer can still perform operations on the data (i.e., can change the binary states and thereby change the stored information), but the operations may not produce a meaningful result, depending on the type of input provided. (For example, a letter-by-letter addition of two words yields a new binary sequence, but probably not one that means anything with respect to the initial words.)
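A short Python sketch (my illustration, not from the original answer) of those same bits being read two ways:

```python
bits = "01000111"            # one 8-bit sequence

as_integer = int(bits, 2)    # interpret the bits as an unsigned integer
as_letter = chr(as_integer)  # interpret the same value as an ASCII character

print(as_integer)  # 71
print(as_letter)   # G
```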
Now, even though computers require digital signals, data is not necessarily digital outside the computer. Continuous signals, also called analog signals, come in many types: sound waves that vibrate the membrane of a microphone produce an electrical current that varies over time; a film (non-digital) photograph captures the details of a scene as a location-varying intensity; a steering wheel causes a car to change direction smoothly. The important common feature is that all of these involve smooth changes, without distinct steps in the output value as the input changes.

Answer 2:
To save those continuous signals in a discrete form that a computer can understand, an analog-to-digital converter samples the signal at a finite number of points. "Sampling" means measuring the amplitude of the analog signal at a specific point and recording that value. The quality of the digital signal is determined by the number of samples (or samples per unit time, aka the sampling rate) and the number of bits used to store each value (the bit depth, or quantization level). With a higher sampling rate, the continuous signal is measured at more points, meaning more of the original signal is captured, and therefore the digital signal is a more accurate description of the original analog signal.
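To make sampling concrete, here is a minimal Python sketch (my own illustration; the sine-wave signal and the rate of 8 samples per second are arbitrary choices for the example):

```python
import math

def analog_signal(t):
    # Stand-in for a continuous input: a 1 Hz sine wave.
    return math.sin(2 * math.pi * t)

sampling_rate = 8  # samples per second
samples = [analog_signal(n / sampling_rate) for n in range(sampling_rate)]
print(samples)  # 8 discrete measurements covering one period of the wave
```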
Bit depth is important because the amplitude of the continuous signal cannot be perfectly represented by a finite number of digits (binary or otherwise). The value stored is as close as possible to the measured value, but inevitably there will be rounding based on the number of bits available to store the value. Imagine an analog signal ranges from 0 to 100 (the scale doesn't matter here). If the computer can use 2 bits for each sample, then there are 2^2 = 4 values available. All samples with amplitudes of 0-24 might become 0 in the digital recording, 25-49 would be 1, and so on. This means that much of the variation in sounds would be lost, because a wide range of sounds are grouped together. Now imagine the computer can use 8 bits. This allows for 2^8 = 256 values. Samples can be recorded in amplitude steps of about 0.39 (100/256). Far less rounding would need to occur, and the sound reproduced from the digital recording would be much more similar to the original.
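A small Python sketch of that rounding (quantization), using the same 0-100 range as the example above (my own illustration):

```python
def quantize(amplitude, bits, max_amplitude=100):
    # Map an amplitude in [0, max_amplitude] to one of 2**bits levels.
    levels = 2 ** bits
    step = max_amplitude / levels
    # min() keeps the very top of the range inside the highest level.
    return min(int(amplitude / step), levels - 1)

print(quantize(30, 2))  # 1  -> with 4 levels, everything from 25-49 becomes 1
print(quantize(30, 8))  # 76 -> with 256 levels, steps are about 0.39 wide
```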
Since the question asks about images, here is an example. For a black and white image, only a single bit is used for each pixel. An analog amplitude below some threshold could be saved by the computer as 0, and an amplitude above the threshold as 1. The original image may not have been black and white, but with only a single bit available to record the information, the other colors or shades cannot be saved. To keep more detail, and better reproduce the original image, more bits of information can be used. After binary, 8-bit grayscale is a common image type. In this case, 8 bits are used per pixel, meaning 2^8 = 256 possible gray values are available. Obviously, images can also contain colors. Color images usually use multiple channels, one for each color component. The value of a channel indicates how much of that color should be included in the image. If an 8-bit image has red, green, and blue channels, then 8 bits will be used to store how much red, another 8 bits for how much green, and another 8 bits for how much blue, meaning the 8-bit RGB image requires 24 bits for each pixel.
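A short Python sketch (with illustrative channel values of my own choosing) of how one 24-bit RGB pixel packs its three 8-bit channels:

```python
red, green, blue = 200, 150, 30  # example channel values, each 0-255

# Pack the three 8-bit channels into a single 24-bit value.
pixel = (red << 16) | (green << 8) | blue
print(f"{pixel:024b}")  # 110010001001011000011110 -> 8 bits per channel
```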
A side note - the above describes images stored in a pixel-by-pixel manner, which includes all digital photographs. These are called raster images. There is another type of image called a vector image. Vector images are saved as a series of text instructions that describe how to draw lines, shapes, and so on (including their colors) to produce the image. Even though the image is saved "as text", that text is still ultimately saved using some number of bits of information.
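For instance, SVG is a common vector format. The minimal sketch below (my own illustration) shows that the drawing instructions are just text, and that text is itself stored as bits:

```python
# A tiny vector image: text instructions for a blue circle, in SVG syntax.
svg = '<svg width="100" height="100"><circle cx="50" cy="50" r="40" fill="blue"/></svg>'

# The "image" is text, and the text is ultimately stored as bits.
print(len(svg.encode("ascii")) * 8, "bits to store this drawing description")
```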
So far we have said that computers store information in small chunks called bits, but not how or why. As stated above, a bit is the amount of information contained by a two-state device. Conceivably, a computer could use larger units of information with three-state, four-state, or even ten-state devices that would match our decimal counting system. But to use any device, one must be able to reliably distinguish between the states. With a binary device, this is relatively easy - a light being "on" (current flowing) or "off" (no current) is clear. With more states, the differences are more ambiguous: under non-ideal conditions, one state can easily (relative to binary) be misread as a state on either side of it. Especially in the early days of digital computers, the ability to make this distinction was lacking. Ultimately, the increase in efficiency from using more states was not worth the increase in errors and noise, as the sketch below illustrates.
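A rough Python simulation of that trade-off (a sketch under an assumed Gaussian noise model, not a claim about any real hardware):

```python
import random

def read_noisy(level, num_states, noise=0.05):
    # An ideal signal value for this level, normalized to the range 0.0-1.0.
    ideal = level / (num_states - 1)
    measured = ideal + random.gauss(0, noise)     # add measurement noise
    decoded = round(measured * (num_states - 1))  # snap back to nearest level
    return max(0, min(decoded, num_states - 1))

random.seed(0)
trials = 10_000
for states in (2, 10):
    errors = sum(read_noisy(1, states) != 1 for _ in range(trials))
    print(f"{states}-state device: {errors / trials:.1%} misread")
```

With the same amount of noise, the 2-state device is essentially never misread, while the 10-state device's closely spaced levels are confused a substantial fraction of the time.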
Answer 3:
There are many possible physical manifestations of a two-state device. In modern semiconductors, the devices are transistors, which are essentially electrical switches. The two states of a transistor are "on" (i.e., electrical current can pass through) and "off" (current is blocked).
Punch cards were prominent throughout the 1900s, with the data being represented by the presence or absence of a hole at each position. Since each switch, punch location, etc., represents 1 bit worth of information, these elements are sometimes themselves called bits.
Additional relevant pages: CS page on computer data;
Computer memory;
Converting an image to numbers
Answer 4:
This question doesn't have a simple answer.
There are many different types of image files. You've probably seen the extensions .png, .jpg, .tiff, or .tga on files. These indicate different methods of encoding the data that makes up the images - the bits in different orders, compressed in different ways, etc.
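One concrete way to see that these formats differ at the bit level: many begin with distinctive "magic bytes". The signatures for PNG, JPEG, and GIF below are real; the function and the file name are my own illustration:

```python
# Common file-format signatures ("magic bytes") at the start of a file.
MAGIC = {b"\x89PNG": "PNG", b"\xff\xd8\xff": "JPEG", b"GIF8": "GIF"}

def guess_format(path):
    with open(path, "rb") as f:
        head = f.read(8)  # the first few bytes identify the format
    for signature, name in MAGIC.items():
        if head.startswith(signature):
            return name
    return "unknown"

if __name__ == "__main__":
    # "photo.jpg" is a hypothetical path used only for illustration.
    try:
        print(guess_format("photo.jpg"))
    except FileNotFoundError:
        print("no such file - supply a real image path")
```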
Different motherboards and chips work differently as well. The drivers for these pieces of hardware (software the operating system loads, often working with firmware built into the devices) allow the operating system to know how to access, store, and extract data from disks and other devices.
Long story short, there are a lot of options, and they are all different.
Answer 5:
A computer-savvy relative of mine offers this comment:
A question should only be one question. The second and third questions are only remotely related to the first.
Answer 6:
From ScienceLine Moderator:
I recommend you read the following links, which might have answers to your questions:
How does a computer's screen work?
How do computer screens work?
How does a computer work?
How do computers work?