The
objective of image compression is to reduce the irrelevance and
redundancy of image data so that the data can be stored or
transmitted efficiently.
Lossy and lossless compression
Image compression may be lossy or lossless. Lossless compression is
preferred for archival purposes and often for medical imaging,
technical drawings, clip art, or comics. This is because lossy
compression methods, especially when used at low bit rates,
introduce compression artifacts. Lossy methods are especially
suitable for natural images such as photographs in applications
where minor (sometimes imperceptible) loss of fidelity is acceptable
to achieve a substantial reduction in bit rate. Lossy
compression that produces imperceptible differences may be called
visually lossless.
Methods for lossless image compression are:
- Run-length encoding – used as the
default method in PCX and as one of the possible methods in BMP,
TGA, and TIFF (a minimal sketch follows this list)
- DPCM and Predictive Coding
- Entropy encoding
- Adaptive dictionary algorithms such
as LZW – used in GIF and TIFF
- DEFLATE – used in PNG, MNG, and
TIFF
- Chain codes
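As a concrete illustration of run-length encoding, the Python sketch
below collapses runs of identical pixel values into (value, count)
pairs and expands them again. The function names rle_encode and
rle_decode are hypothetical; real formats such as PCX define their
own byte-level run conventions.

```python
def rle_encode(pixels):
    """Collapse consecutive equal values into (value, run_length) pairs."""
    runs = []
    for value in pixels:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([value, 1])   # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original pixel sequence."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

row = [255, 255, 255, 255, 0, 0, 255, 255]
encoded = rle_encode(row)          # [(255, 4), (0, 2), (255, 2)]
assert rle_decode(encoded) == row  # lossless: the round trip is exact
```

Run-length encoding only pays off when the image contains long
uniform runs, which is why it suits flat-colored graphics far better
than photographs.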
Methods for lossy
compression:
- Reducing the color space to the
most common colors in the image. The selected colors are
specified in the color palette in the header of the compressed
image, and each pixel then stores only the index of a color in
the palette. This method can be combined with dithering to
avoid posterization (sketched after this list).
- Chroma subsampling. This takes
advantage of the fact that the human eye perceives spatial
changes of brightness more sharply than those of color, by
averaging or dropping some of the chrominance information in the
image (sketched after this list).
- Transform coding. This is the
most commonly used method. A Fourier-related transform such as
the DCT or a wavelet transform is applied, followed by
quantization and entropy coding (sketched after this list).
- Fractal compression.
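The palette-reduction item above can be illustrated with a short
NumPy sketch that keeps the k most frequent colors and maps every
pixel to the index of its nearest palette entry. The function name
palettize and the choice of the most frequent exact colors are
illustrative assumptions; practical encoders use smarter quantization
(e.g., median cut) and usually dithering.

```python
import numpy as np

def palettize(image, k=16):
    """Map an (H, W, 3) uint8 image to a k-color palette and per-pixel indices."""
    pixels = image.reshape(-1, 3)
    colors, counts = np.unique(pixels, axis=0, return_counts=True)
    palette = colors[np.argsort(counts)[::-1][:k]]    # the k most common colors
    # Assign every pixel to its nearest palette entry (Euclidean distance in RGB).
    dists = np.linalg.norm(pixels[:, None, :].astype(int)
                           - palette[None, :, :].astype(int), axis=2)
    indices = dists.argmin(axis=1).astype(np.uint8)
    return palette, indices.reshape(image.shape[:2])

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
palette, idx = palettize(img, k=16)   # each pixel now costs one index, not three RGB bytes
```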
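The chroma-subsampling item above can be sketched as follows: the
image is converted to a luma plane (Y) and two chroma planes (Cb,
Cr), and only the chroma planes are averaged down to half resolution
(the common 4:2:0 layout). The BT.601 conversion constants are
standard, but the function below is an illustration rather than any
particular codec's pipeline, and it assumes even image dimensions.

```python
import numpy as np

def subsample_420(rgb):
    """Return full-resolution Y and half-resolution Cb, Cr planes (4:2:0)."""
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 128.0

    def pool(plane):                      # average each 2x2 block of chroma
        h, w = plane.shape
        return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    return y, pool(cb), pool(cr)          # Y: HxW, Cb and Cr: (H/2)x(W/2)
```

Dropping three quarters of the chroma samples reduces the raw data
from 3 to 1.5 values per pixel, while perceived sharpness, carried
mostly by Y, is largely preserved.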
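The transform-coding item above is sketched below for a single 8x8
block: an orthonormal 2-D DCT is applied, the coefficients are
coarsely quantized (the lossy step), and in a real codec the
quantized coefficients would then be entropy coded. The uniform
quantization step of 20 is an arbitrary illustrative value; JPEG, for
instance, uses perceptually tuned quantization tables.

```python
import numpy as np

N = 8
# Orthonormal DCT-II basis matrix for an 8-point transform.
n = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

def encode_block(block, q_step=20.0):
    coeffs = C @ (block - 128.0) @ C.T     # 2-D DCT of the level-shifted block
    return np.round(coeffs / q_step)       # quantization discards fine detail (lossy)

def decode_block(quantized, q_step=20.0):
    coeffs = quantized * q_step            # dequantize
    return C.T @ coeffs @ C + 128.0        # inverse 2-D DCT

block = np.full((8, 8), 100.0)
block[2:6, 2:6] = 180.0                    # simple test pattern
restored = decode_block(encode_block(block))   # close to, but not identical to, block
```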
Other properties
The best image quality at a given bit rate (or compression rate) is
the main goal of image compression; however, there are other
important properties of image compression schemes:
Scalability generally refers
to a quality reduction achieved by manipulation of the bitstream or
file (without decompression and re-compression). Other names for
scalability are progressive coding or embedded bitstreams. Although
it may seem contradictory, scalability is also found in lossless
codecs, usually in the form of coarse-to-fine pixel scans. Scalability
is especially useful for previewing images while downloading them
(e.g., in a web browser) or for providing variable-quality access to,
for example, image databases. There are several types of scalability:
- Quality progressive or
layer progressive: The bitstream successively refines the
reconstructed image.
- Resolution progressive:
First encode a lower image resolution; then encode the
difference to higher resolutions (see the sketch after this list).
- Component progressive:
First encode grey; then color.
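A minimal sketch of the resolution-progressive idea, assuming a
greyscale image with even dimensions: a half-resolution base layer is
produced first, and an enhancement layer carries the difference
needed to restore full resolution. Real scalable codecs such as
JPEG 2000 achieve this with wavelet transforms; the layer-splitting
functions here are purely illustrative.

```python
import numpy as np

def split_layers(img):
    """Split an (H, W) image into a half-resolution base and a residual layer."""
    h, w = img.shape
    base = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    upsampled = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)
    residual = img - upsampled            # what the enhancement layer must carry
    return base, residual                 # transmit base first, residual later

def merge_layers(base, residual):
    return np.repeat(np.repeat(base, 2, axis=0), 2, axis=1) + residual

img = np.random.rand(64, 64)
base, residual = split_layers(img)
assert np.allclose(merge_layers(base, residual), img)   # exact once both layers arrive
```

A decoder that has received only the base layer can already display a
coarse preview; each further layer refines it, which is the behavior
described above for progressive downloads.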
Region of interest coding.
Certain parts of the image are encoded with higher quality than
others. This may be combined with scalability (encode these parts
first, others later).
Meta information. Compressed
data may contain information about the image which may be used to
categorize, search, or browse images. Such information may include
color and texture statistics, small preview images, and author or
copyright information.
Processing power. Compression
algorithms require different amounts of processing power to encode
and decode. Some highly compressing algorithms demand correspondingly
high processing power.
The quality of a compression method
is often measured by the peak signal-to-noise ratio (PSNR). It
measures the amount of noise introduced through lossy compression of
the image; however, the subjective judgment of the viewer is also
regarded as an important measure, perhaps the most important one.
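A straightforward way to compute PSNR for 8-bit images, assuming the
usual definition in terms of the mean squared error between the
original and the decompressed image; the function name is
illustrative.

```python
import numpy as np

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in decibels; higher means less distortion."""
    mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")               # identical images: no noise introduced
    return 10.0 * np.log10(peak ** 2 / mse)
```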