

JPEG Display Process

To display a JPEG image we first have to decompress it. Decompression yields the RGB values for each pixel of the image, and these RGB values are then used to drive the display. Once the RGB values are available, there are two main ways to display the image:


1) Color Quantization

To display a JPEG image you either need special hardware that supports a 24-bit display (such as an SVGA adapter), or a VGA adapter with the VESA BIOS extension. A 24-bit JPEG image can contain up to 2^24 (about 16.7 million) colors, but a plain VGA adapter (without the VESA BIOS extension) can display at most 256 colors. To display a JPEG image on such a VGA adapter, we have to use the color quantization method.


In this method, the 16.7 million possible colors are reduced to 256 colors by a color quantization algorithm, and the VGA adapter then displays those 256 colors as usual. The main drawback is that reducing the number of colors from 16.7 million to 256 can noticeably degrade image quality. Other problems, such as memory management, may also occur, and the process is somewhat time consuming.

For further information about Color Quantization, see the following pages.


2) Setting SVGA mode

A decompressed JPEG image is a 24-bit-per-pixel full-color image. To display it on the screen directly, an SVGA (Super Video Graphics Array) adapter card with VESA (Video Electronics Standards Association) compatibility must be used. For further information about the VESA BIOS, see the following pages.

Note: We implemented the second method in our project.
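As a rough illustration of the second method, the following is a minimal sketch of switching to a 24-bit VESA mode through VBE function 4F02h of the video BIOS. It assumes a 16-bit DOS compiler such as Turbo C (int86() from dos.h); the mode number 0x112 (640x480, 24 bpp) is a standard VBE mode number, and the function names are ours, not part of any library.

/* Hypothetical sketch: set a 24-bit VESA mode via VBE function 4F02h.
   Assumes a 16-bit DOS compiler such as Turbo C (int86() from dos.h). */
#include <dos.h>
#include <stdio.h>

int set_vesa_mode(unsigned int mode)
{
    union REGS r;

    r.x.ax = 0x4F02;      /* VBE function 02h: set video mode */
    r.x.bx = mode;        /* requested mode number            */
    int86(0x10, &r, &r);  /* video BIOS interrupt             */

    /* AX == 0x004F means the call is supported and succeeded */
    return (r.x.ax == 0x004F) ? 0 : -1;
}

int main(void)
{
    if (set_vesa_mode(0x112) != 0)   /* 640x480, 24 bits per pixel */
        printf("VESA mode 0x112 not available\n");
    return 0;
}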


Color Quantization Overview

Quantization of color images is necessary if the display on which an image is presented supports fewer colors than the original image contains, for example when a true color (full color) image is displayed on a device with a color look-up table. The number of colors in the image then has to be reduced. During color reduction, the colors that appear most often in the image should be identified and selected, so that the substituted colors produce no error or only a small one. This process is called color quantization.


It is used for previewing and for controlling the rendering process. Quantization means approximating a true intensity value by a displayable intensity value.

The objective of color quantization is to display a full color image (24 bits per pixel) with a restricted number of colors (256, 64, or 16) in such a way that the quantized image approximates the original as closely as possible, with no perceptible loss of color impression.


Quantization can be viewed as a stepwise process:

1. In the first step, statistics on the colors used in the image that is to be quantized are generated (histogram analysis).

2. a) Based on this analysis, the color look-up table is filled with values.
b) The true color values are mapped to the entries of the color table; each color value is mapped to the nearest entry in the table.

3. The original image is quantized: each pixel is replaced by the appropriate index into the color table.


4. Optionally an error diffusion technique can be applied.

Strictly speaking, the original image is itself already quantized, because the input data for quantization is a rectangular array of red, green and blue color separations, very often in the range [0, 255] (for example when the image is digitized by a scanner). The algorithms discussed here regard that image as the true image and try to approximate it as closely as possible.
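As a small illustration of step 1, the following is a minimal sketch of a histogram analysis, assuming the input is an array of 8-bit RGB triples. To keep the table small, each channel is reduced to 5 bits here; that reduction and all names are our own simplification, not prescribed by the text.

/* Sketch of step 1 (histogram analysis) for an 8-bit RGB image.
   Each channel is reduced to 5 bits, so there are 32*32*32 = 32768 bins. */
#include <string.h>

#define BINS (32 * 32 * 32)

static unsigned long histogram[BINS];

/* Map an 8-bit RGB triple to a 15-bit histogram index. */
static int bin_index(unsigned char r, unsigned char g, unsigned char b)
{
    return ((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3);
}

/* Count how often each (reduced) color appears in the image. */
void build_histogram(const unsigned char *rgb, long num_pixels)
{
    long i;

    memset(histogram, 0, sizeof(histogram));
    for (i = 0; i < num_pixels; i++)
        histogram[bin_index(rgb[3*i], rgb[3*i + 1], rgb[3*i + 2])]++;
}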

To assess quantization techniques, the following quality criteria are considered:


· Human Perception

· Run Time

· Memory Requirement

For the quality measurements we used the following test images:

· a computer generated color image that is a composition of a scanned image, a computer generated figure, the picture of a butterfly and the transparently mapped EUROGRAPHICS logo and


· a color gray shade.

Each slide contains the image that is to be quantized (left upper corner), the reduced color image (right upper corner), the error distribution between the original image and the manipulated image (lower left corner), and the applied methods (lower right corner).


Different quantization methods are investigated:

· Static Color Table

· Median cut

· Popularity

· Octree

and combined with error diffusion techniques:

· Dithering

· Floyd Steinberg
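Floyd-Steinberg error diffusion is only named here, so the following is merely a minimal sketch of the idea, shown on a grayscale image quantized to black and white to keep it short. The weights 7/16, 3/16, 5/16 and 1/16 are the standard Floyd-Steinberg coefficients; function and variable names are illustrative.

/* Minimal sketch of Floyd-Steinberg error diffusion on a grayscale image. */
static unsigned char clamp_byte(int v)
{
    if (v < 0)   return 0;
    if (v > 255) return 255;
    return (unsigned char)v;
}

void floyd_steinberg(unsigned char *img, int w, int h)
{
    int x, y;

    for (y = 0; y < h; y++) {
        for (x = 0; x < w; x++) {
            int old_value = img[y*w + x];
            int new_value = (old_value < 128) ? 0 : 255;  /* nearest level */
            int err = old_value - new_value;              /* quantization error */

            img[y*w + x] = (unsigned char)new_value;

            /* distribute the error to the not yet processed neighbors */
            if (x + 1 < w)
                img[y*w + x + 1]       = clamp_byte(img[y*w + x + 1]       + err * 7 / 16);
            if (y + 1 < h && x > 0)
                img[(y+1)*w + x - 1]   = clamp_byte(img[(y+1)*w + x - 1]   + err * 3 / 16);
            if (y + 1 < h)
                img[(y+1)*w + x]       = clamp_byte(img[(y+1)*w + x]       + err * 5 / 16);
            if (y + 1 < h && x + 1 < w)
                img[(y+1)*w + x + 1]   = clamp_byte(img[(y+1)*w + x + 1]   + err * 1 / 16);
        }
    }
}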


1. Static color look-up table:

A very simple way to solve this problem is to divide the RGB cube into equal slices along each dimension. The cross product of these color levels is used as the set of entries of the color look-up table. For example, the red axis and the green axis can each be divided into 8 levels and the blue axis (to which the human eye is less sensitive) into 4 levels, so that 8 * 8 * 4 = 256 colors, uniformly spread over the color space, are available. An image value is mapped to a value from this selection simply by rounding every component.
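A minimal sketch of such a static table, assuming 8-bit components and the 8/8/4 split described above; all names are illustrative.

/* Round an 8-bit component to one of 'levels' equally spaced values and
   return the level index (0 .. levels-1). */
static int quantize_component(int value, int levels)
{
    return (value * (levels - 1) + 127) / 255;   /* rounding division */
}

/* Map an RGB triple to its index in the 256-entry static look-up table. */
int static_table_index(unsigned char r, unsigned char g, unsigned char b)
{
    int ri = quantize_component(r, 8);
    int gi = quantize_component(g, 8);
    int bi = quantize_component(b, 4);
    return (ri << 5) | (gi << 2) | bi;           /* 8 * 8 * 4 = 256 entries */
}

/* Fill the 256-entry palette with the corresponding level values. */
void build_static_palette(unsigned char palette[256][3])
{
    int ri, gi, bi;
    for (ri = 0; ri < 8; ri++)
        for (gi = 0; gi < 8; gi++)
            for (bi = 0; bi < 4; bi++) {
                unsigned char *p = palette[(ri << 5) | (gi << 2) | bi];
                p[0] = (unsigned char)(ri * 255 / 7);
                p[1] = (unsigned char)(gi * 255 / 7);
                p[2] = (unsigned char)(bi * 255 / 3);
            }
}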


An important drawback of this method is the artifact of visible edges appearing in the image. When a 24-bit full color image is displayed on a monitor with a color palette of at most 256 entries, apparent edges arise from very small changes in color that are the result of color shading, and these have to be removed.

Even though the algorithm is very fast, the result is not acceptable.


2. Median cut algorithm:

The idea behind the median cut algorithm is to use each of the colors in the synthesized look-up table to represent an equal number of pixels of the original image. The algorithm subdivides the color space iteratively into smaller and smaller boxes.


The algorithm starts with a box that encloses all the different color values of the original image. The "size" of the box is given by the minimum and maximum of each color coordinate of the colors it encloses. To split the box, we have to decide along which "side" to subdivide it: the points are sorted along the longest dimension of the box, and the partitioning into two halves is made at the median point, so that approximately equal numbers of points fall on each side of the cutting plane.

This step is repeated until K boxes are generated, where K may be the maximum number of color entries in the available colormap. The representative color of each box is calculated by averaging the colors in that box.

Variations of the algorithm can be obtained by changing the criterion of where to split the box.


An alternative to sorting the color values from a minimum value to a maximum value is to look at the coordinate with the largest variance. Another alternative is to minimize the sum of the variances of the two new boxes.
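A compact sketch of the median cut idea, assuming the image colors are stored as an array of RGB triples (one per pixel) and K <= 256. It repeatedly splits the most populated box at the median of its longest side until K boxes exist; the representative color of a box is the average of its colors. All names are illustrative, not taken from any particular library.

#include <stdlib.h>

typedef struct { unsigned char c[3]; } Rgb;   /* one color              */
typedef struct { long start, count; } Box;    /* slice of the color set */

static int sort_channel;                      /* axis to sort on        */

static int cmp_channel(const void *a, const void *b)
{
    return ((const Rgb *)a)->c[sort_channel] - ((const Rgb *)b)->c[sort_channel];
}

/* Find the longest side of a box and return that channel (0=R,1=G,2=B). */
static int longest_side(const Rgb *colors, const Box *box)
{
    int ch, best = 0, best_range = -1;
    for (ch = 0; ch < 3; ch++) {
        int lo = 255, hi = 0;
        long i;
        for (i = box->start; i < box->start + box->count; i++) {
            int v = colors[i].c[ch];
            if (v < lo) lo = v;
            if (v > hi) hi = v;
        }
        if (hi - lo > best_range) { best_range = hi - lo; best = ch; }
    }
    return best;
}

/* Build a K-entry palette from the colors of the image (K <= 256). */
void median_cut(Rgb *colors, long ncolors, Rgb *palette, int K)
{
    Box boxes[256];
    int nboxes = 1, b;

    boxes[0].start = 0;
    boxes[0].count = ncolors;

    while (nboxes < K) {
        int widest = 0;
        Box *bx;
        long median;

        /* pick the box with the most colors and split it at the median */
        for (b = 1; b < nboxes; b++)
            if (boxes[b].count > boxes[widest].count) widest = b;
        bx = &boxes[widest];
        if (bx->count < 2) break;                 /* nothing left to split */

        sort_channel = longest_side(colors, bx);
        qsort(colors + bx->start, bx->count, sizeof(Rgb), cmp_channel);

        median = bx->count / 2;
        boxes[nboxes].start = bx->start + median; /* upper half -> new box */
        boxes[nboxes].count = bx->count - median;
        bx->count = median;                       /* lower half stays      */
        nboxes++;
    }

    /* representative color of each box = average of its colors */
    for (b = 0; b < nboxes; b++) {
        long i, sum[3] = { 0, 0, 0 };
        int ch;
        for (i = boxes[b].start; i < boxes[b].start + boxes[b].count; i++)
            for (ch = 0; ch < 3; ch++) sum[ch] += colors[i].c[ch];
        for (ch = 0; ch < 3; ch++)
            palette[b].c[ch] = (unsigned char)(sum[ch] / boxes[b].count);
    }
}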


3. Popularity algorithm:

The initial idea of this algorithm is to build the colormap from the K most frequently appearing colors in the original image. The colors are therefore stored in a histogram, the K most frequently occurring colors are extracted, and they are made the entries of the color table. Now the true image can be quantized.
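A minimal sketch of the palette selection, assuming a color histogram like the reduced 5-bit-per-channel one sketched earlier; the K most frequent bins become the palette entries. All names are illustrative.

#include <stdlib.h>

#define POP_BINS (32 * 32 * 32)

typedef struct { long count; int bin; } PopEntry;

static int cmp_count_desc(const void *a, const void *b)
{
    long d = ((const PopEntry *)b)->count - ((const PopEntry *)a)->count;
    return (d > 0) - (d < 0);
}

/* Fill 'palette' (3 bytes per entry) with the K most frequent colors. */
void popularity_palette(const unsigned long *histogram,
                        unsigned char *palette, int K)
{
    static PopEntry entries[POP_BINS];
    int i;

    for (i = 0; i < POP_BINS; i++) {
        entries[i].count = (long)histogram[i];
        entries[i].bin = i;
    }
    qsort(entries, POP_BINS, sizeof(PopEntry), cmp_count_desc);

    for (i = 0; i < K; i++) {
        int bin = entries[i].bin;
        palette[3*i]     = (unsigned char)(((bin >> 10) & 31) << 3);  /* R */
        palette[3*i + 1] = (unsigned char)(((bin >> 5)  & 31) << 3);  /* G */
        palette[3*i + 2] = (unsigned char)((bin & 31) << 3);          /* B */
    }
}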


The only problem that still has to be solved is how to map the other colors that appear in the original image. To keep the error small, we have to apply a method that finds, among the K most frequently used color values, the one nearest to the color of the actual pixel. In general, each pixel therefore has to be tested to find the shortest distance to one of the K color values.


The error is measured as the squared distance in Euclidean space:

d(quant, orig) = (quant_r - orig_r)^2 + (quant_g - orig_g)^2 + (quant_b - orig_b)^2,

with quant and orig being color triples in the RGB color space.
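Written out in C, the distance above is simply (function name ours):

/* Squared Euclidean distance between two 8-bit RGB triples; no square
   root is needed, since only comparisons between distances are made. */
long color_distance(const unsigned char quant[3], const unsigned char orig[3])
{
    long dr = (long)quant[0] - orig[0];
    long dg = (long)quant[1] - orig[1];
    long db = (long)quant[2] - orig[2];
    return dr*dr + dg*dg + db*db;
}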


The brute force method of computing the distance between a particular pixel value and all K representatives is time-consuming (exhaustive search).


To make the algorithm practical, searching for the nearest neighbor must be fast. Basically, distance testing against all the values in the color look-up table has to be performed, or, even better, a smaller list of potential candidates minimizing d(x, y) can be preselected. This can be achieved by generating an N x N x N lattice of cubical cells in the color space. Each cell contains the set of color values, among the K most frequently used color values of the true image, that lie within that particular cell. In addition, further values from the K most frequently used values are considered when their distance from the cell is smaller than a distance d. The value of d is computed as the distance between the candidate nearest to the center of the cell and its farthest corner (locally sorted search). In this way computation time can be saved, and the amount of memory can be reduced by avoiding the computation and management of unused cells.

This algorithm (nearest neighbor algorithm) works as described below:

currpixel denotes the color value of the current pixel.

dmin = MAXINT

/* color value and index into the look-up table */

nearest_candidate = NULL

while ( list_of_candidates is not empty )
{
    candidate          = HEAD of list_of_candidates
    list_of_candidates = TAIL of list_of_candidates

    distance = d( currpixel, candidate.Color_value )

    if ( distance < dmin )
    {
        dmin              = distance
        nearest_candidate = candidate
    }
}


