Image processing techniques consist of low level procedures that are used to acquire, store, manipulate and display images in a computer system. These procedures are different from those used in high level image understanding, where the algorithms attempt to imitate human cognition and make decisions according to the information contained in the image.

This section describes some of the basic concepts of digital image processing.

2.1 Merits and Demerits of Processing Images Digitally

Digital video signals offer some significant advantages, as well as disadvantages, when compared with analog video signals. Some of these are described below.

(a) Digital signals inherently require more bandwidth. For example, 198.375 Mbps are required to transfer colour image frames of 575 lines, with each line containing 575 pixels, at a rate of 25 frames per second when 8 bits (256 quantisation levels) are used to represent each colour in a pixel (a worked calculation follows this list).

(b) Digital signals are relatively invulnerable to the introduction of noise in transmission. Noise power is additive in analog systems, and a system's signal to noise ratio is the sum of the noise contributions from each stage. With the digital format, noise causes bit errors. With a higher number of bits in error, the signal is lost altogether, whereas the signal to noise ratio in analog systems suffers a graceful degradation: the quality of the signal gets steadily worse, although it remains usable as noise is added.

(c) Digital signals are relatively immune to distortions caused by the non-linear transfer functions of the transmission circuits.

(d) A digital image results from sampling in the vertical dimension by the scanning lines and in the horizontal dimension by the sampling intervals. Thus a digital signal has the potential for aliasing both horizontally and vertically.

(e) The digital format has the capability of performing signal conversion and processing functions that would be difficult or impossible to perform with analog signals. Some important signal processing functions that can be performed on digital signals are image enhancement, digital filtering, post-production editing and bandwidth reduction.
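As a check of the figure quoted in (a), the raw bit rate follows directly from the frame geometry, the frame rate and the number of bits per colour. A minimal sketch of the arithmetic in Python:

```python
# Raw bit rate of the uncompressed colour video described in item (a).
lines_per_frame = 575      # scanning lines per frame
pixels_per_line = 575      # pixels per line
frames_per_second = 25
colours_per_pixel = 3      # red, green and blue
bits_per_colour = 8        # 256 quantisation levels

bits_per_second = (lines_per_frame * pixels_per_line * frames_per_second *
                   colours_per_pixel * bits_per_colour)
print(bits_per_second / 1e6, "Mbps")   # prints 198.375 Mbps
```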

2.2 Stages in Digital Image Processing

Digital image processing may be divided into a number of stages, starting from image acquisition at the source of the image to its display to the user. Between these steps, images may be processed, stored and communicated.

These stages are represented in figure – 1.


Figure – 1 : Stages in Digital Image Processing

Image acquisition is performed by an image sensor, generally a camera, which is expected to scan the entire image at a rate high enough to make it suitable for subsequent viewing by humans. Cameras are inherently analog devices, i.e., their output is one or more analog signals. The camera scans the image by making a number of nearly horizontal passes across the image. With each pass across the image, the light reflected by the image is converted to an electrical signal. The instantaneous amplitude of this signal represents the reflected light energy. This analog signal is converted to digital form by an analog to digital converter (ADC). Each digital word at the output of the ADC represents the amplitude of a sample from the analog video signal.

Computation and processing of digital images is generally performed by a set of algorithms that can be implemented in software. However, they can also be implemented as specialised image processing hardware. This implementation not only provides speed in the execution of these computationally expensive algorithms, but also overcomes some of the functional limitations of the host computer. It is, therefore, not unusual to find image processing equipment developed by integrating off the shelf computers and specialised image processing hardware. The operation of the entire equipment is coordinated and controlled by the software executing on the host computer.

The trend in image processing equipment is towards miniaturisation. Image processing hardware in the form of single boards compatible with industry standard buses is now commercially available. These boards are generally easily configurable and can plug into engineering workstations and personal computers. They contain digitiser and frame buffer combinations for image digitisation and temporary storage, arithmetic and logic units (ALUs) for performing arithmetic and logical operations at frame rates, and one or more frame buffers for fast access to image data during processing. Mainframes and supercomputers are, however, still being used for solving large scale image processing problems.

Commercially available image processing software can interact with other applications like graphics software, word processors, spreadsheets and databases, thus allowing users to develop extensive image analysis, storage and display systems with minimum effort.

Digital representations of images may run over hundreds of thousands of bytes. Image processing systems must, therefore, be equipped with suitable storage sub-systems. These may be categorised into short term storage devices, on-line storage sub-systems and archival storage mechanisms.

Short term storage is basically used during processing. Although the host computer's RAM can be used to provide short term storage, specialised buffers called frame buffers are also available. These frame buffers can store one or more image frames, which are accessible at video rates (30 complete images per second). Although frame buffers can provide features like virtually instantaneous image zoom, scroll (vertical shifts) and pan (horizontal shifts), they can store only a few frames. This is largely due to limitations in the physical card sizes as well as the storage density of the memory chips used.

On-line storage provides relatively fast recall with read/write capability and is used for frequent accesses to data. Conventionally, magnetic disks have been used for this purpose. However, recent advances in magneto-optical storage device technology have brought these devices in line with their magnetic counterparts.

Archival storage is basically characterised by massive storage requirements but an infrequent need for access. High density magnetic tapes (6400 bytes per inch) can store a megabyte of information in 13 feet of tape, but they suffer from the disadvantages of a relatively short shelf life and the need for a controlled storage environment. WORM (Write Once Read Many) optical disks, however, provide extremely high storage capacities and a long shelf life, and do not require special storage conditions. Their biggest disadvantage is reusability, as they are not erasable. WORM disks can be stored in a jukebox, where they can serve as on-line storage devices for applications in which read only operations predominate.

Communication of image data can be categorised as local communication and remote communication. Local communication occurs within the image processing equipment when data needs to be transferred between various modules.

Remote communication occurs when the image processing devices are physically dispersed. As a digital image contains an extensive amount of data, its transmission from one point to another at speeds providing desirable results for the application is a serious challenge. This problem can be addressed either by using high bandwidth communication equipment or by using image compression/decompression techniques to transmit only non-redundant data as it occurs. Redundant data need only be transmitted once.

Images are conventionally displayed on monitors in an analog format. Thus digital images need to be converted into analog format for display on these devices. Format conversion from digital to analog is accomplished by means of digital to analog converters (DACs).

2.3 Representation of a Digital Image

Low level image processing uses data which resembles the input image. For example, an input image captured by a TV camera is 2D in nature, described by an image function whose value is usually the brightness, which depends upon the coordinates of the location in the image. Consider the monochrome image shown in figure – 2.

Figure – 2 : Representation of a Digital Image

It is a two dimensional light intensity function f(x, y), where x and y denote the spatial coordinates and the value of f at any point (x, y) is proportional to the brightness or the gray level of the image at that point. Thus, when digitised, this image can be viewed as a matrix whose row and column indices identify a point on the image and whose corresponding matrix element value identifies the gray level at that point. The elements of such an array are called image elements, picture elements or pixels.

A single matrix is generally used for a monochrome image. However, several matrices can contain information about one multi-spectral image; each of these matrices contains one image corresponding to one spectral band.
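A minimal sketch of this representation using NumPy; the image sizes and the three band multi-spectral example are arbitrary choices for illustration:

```python
import numpy as np

# A monochrome image is one matrix: element (row, col) holds the gray
# level f(x, y) at that point (0 = black, 255 = white for 8 bit data).
mono = np.zeros((480, 640), dtype=np.uint8)
mono[100, 200] = 255                 # set a single pixel to white
print(mono.shape, mono[100, 200])    # (480, 640) 255

# A multi-spectral image is several such matrices, one per spectral band.
multi = np.zeros((3, 480, 640), dtype=np.uint8)
print(multi[0].shape)                # each band is itself one image
```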

2.4 Colour Images

White light is a mixture of several spectral colours that are distinctly seen by the eye as red, orange, yellow, green, blue, indigo and violet. The three spectral colours represented by the red, green and blue (RGB) wavelengths are known as the primary colours, as they can be combined or added together in different proportions to produce almost all other colours perceived by the eye. This additive property of the three colours is actually used to produce a coloured picture on a TV monitor.

Monochrome imaging systems were in use long before the development of colour imaging systems began. By that time, a large number of monochrome TV sets were in use, and techniques were required to provide colour TV with two way compatibility with the monochrome TV system. This meant that colour image signals had to be reproducible in black and white on a monochrome receiver, just as a colour receiver had to be capable of displaying monochrome video in black and white. This goal was accomplished by converting the RGB signals into two signals. One signal carries the luminance information in a manner identical to monochrome transmission. The second carries the chrominance information in such a way that it is sufficient for adequate colour reproduction, yet easy for a monochrome receiver to ignore without interference to the luminance signal.

One of the relationships used to convert between the RGB and the luminance chrominance components, here the CCIR 601 (YCbCr) form, is given in (1).

$$
\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix}
=
\begin{bmatrix}
0.299 & 0.587 & 0.114 \\
-0.169 & -0.331 & 0.500 \\
0.500 & -0.419 & -0.081
\end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix}
\qquad (1)
$$

Other relationships have been defined by standards such as NTSC and PAL.

Processing a colour image in the RGB domain requires the sampling and quantisation of all three primary colours, i.e., red, green and blue. However, it has been pointed out experimentally on numerous occasions that coding digital images in the luminance-chrominance domain is more appropriate, because the luminance signal carries the bulk of the information in a picture. If the luminance content of a colour image is sampled and quantised to produce a decoded picture of high quality, the chrominance components may be sampled at a relatively low rate and quantised fairly coarsely. As the chrominance signals carry relatively little information, their defects tend to be masked by the luminance signals. Experimental observations indicate that the chrominance signals together need occupy only about 10-20% of the total digital channel capacity required for transmission.
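The conversion can be sketched as follows; the matrix uses the CCIR 601 coefficients of equation (1), and the array shapes are illustrative assumptions:

```python
import numpy as np

# RGB to luminance-chrominance (YCbCr) conversion per equation (1).
# rgb has shape (height, width, 3) with values in 0..255.
M = np.array([[ 0.299,  0.587,  0.114],
              [-0.169, -0.331,  0.500],
              [ 0.500, -0.419, -0.081]])

def rgb_to_ycbcr(rgb):
    ycc = rgb.astype(np.float64) @ M.T   # apply the 3x3 matrix per pixel
    ycc[..., 1:] += 128.0                # offset chrominance into 0..255
    return ycc

red = np.full((2, 2, 3), [255, 0, 0], dtype=np.uint8)   # pure red patch
print(np.round(rgb_to_ycbcr(red)[0, 0]))  # low-ish Y, strong Cr component
```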

3.0 Image Compression

The digital format is uniquely capable of performing the signal processing necessary to accomplish bandwidth reduction with a minimum loss of picture quality. Signals, being number sequences, can be transformed to a band reduced format by algorithms that apply extremely powerful mathematical tools.

Image compression algorithms aim to remove redundancy in data in a way that makes image reconstruction possible. Compression is the main goal, i.e., the aim is to represent an image using a lower number of bits per pixel without losing the ability to reconstruct the image. It is, therefore, necessary to find the statistical properties of the image in order to design an appropriate compression transformation. The more correlated the image data are, the more data items can be removed.

A general algorithm for data compression and image reconstruction is shown in figure – 3. In the first step, the information redundancy caused by the high correlation of image data is reduced. Several techniques, such as transform compression, predictive compression and hybrid approaches, are used. The second step is coding of the transformed data using a code of fixed or variable length. An advantage of variable length codes is the possibility of coding more frequent data with shorter codes, therefore increasing compression efficiency. Alternatively, fixed length coding techniques use a standard code word length, which offers easy handling. Compressed data are decoded after transmission or archiving and are reconstructed. No non-redundant data should be lost in the data compression process, otherwise error free reconstruction is impossible.

Figure – 3 : Image Data Compression (A Generic Algorithm)

Image data compression techniques are generally divided into two major groups. Information preserving, or exact, coding schemes permit error free data reconstruction: the original digital image can be reproduced pixel for pixel at the decoder and, therefore, there is no degradation in image quality.

In image processing, a faithful reconstruction is often not necessary in practice. Here, the requirements are looser: image data compression must simply not cause significant changes in the image. Thus information lossy, or inexact, coding algorithms may be employed. These algorithms do not preserve the information completely. Compression is generally higher, but the reconstructed image is actually an approximation of the original, as some information is lost. However, some applications prohibit the use of lossy coding schemes. For example, diagnosis in medical imaging is often based on visual image inspection, and no loss of information can be tolerated. Exact coding schemes must, therefore, be used for such applications.

Design of image data compression systems consists of two parts. Image data properties must be determined in the first step. An appropriate compression technique must then be designed on the basis of the measured image properties. Compression can be increased if a scheme can automatically adapt to non-stationary statistics and varying image content.

4.0 The JPEG Image Compression Decompression Algorithm

The acronym JPEG stands for Joint Photographic Experts Group. The word joint comes from the fact that it is a collaborative effort between two standards committees, the CCITT (International Telegraph and Telephone Consultative Committee) and the ISO (International Organisation for Standardisation).

There are four main kinds of JPEG compression algorithms:

(a) Sequential Encoding : Each image component is encoded in a single, left to right, top to bottom scan.

(b) Progressive Encoding : The image is encoded in multiple scans for applications in which transmission time is long and the viewer prefers to watch the image build up in multiple coarser to clearer passes.

(c) Lossless Encoding : The image is encoded to guarantee exact recovery of every source image sample value (even though the result is low compression compared to the lossy modes). It is based on a simple predictive method that is wholly independent of the DCT based algorithms.

(d) Hierarchical Encoding : The image is encoded at multiple resolutions so that lower resolution versions may be accessed without first having to decompress the image at its full resolution. This is useful in applications where a very high resolution image must be accessed by a lower resolution display.

The most widely implemented facet of the JPEG standards is the baseline DCT based sequential compression method. The algorithm is based on the fact that the sub-blocks of an image will generally contain either constant or gradually changing luminance, i.e., low frequency information. Only the few sub-blocks that are situated over sharp edges or regions of fine detail will contain significant rapidly changing luminance, i.e., high frequency information. The JPEG compression algorithm aims to extract the significant low frequency information and discard the less important high frequency information. This significantly reduces the amount of information required to be coded.

4.1 Stages in JPEG Image Compression and Decompression

JPEG compression involves the following steps:

(a) The image is broken into blocks of 8×8 pixels.

(b) Each block is transformed using the Forward Discrete Cosine Transform (FDCT).

(c) The resulting 64 coefficients are quantised to a finite set of values. The degree of rounding depends upon the specific coefficient.

(d) The DC coefficient (at location 0,0) is a measure of the average value of the 64 pixels within the specific image block. Because there is usually a strong correlation between the DC coefficients of adjacent 8×8 blocks, the quantised DC coefficient is encoded as a difference from the DC term of the previous block in scan order.

(e) The remaining 63 quantised coefficients are scanned in a zig zag sequence. This ordering helps to facilitate entropy encoding by placing the low frequency coefficients (which are more likely to be non zero) before the high frequency coefficients. This process is shown in figure – 4; a sketch of the ordering follows this list.

Figure – 4 : Zig Zag Scanning Sequence


(f) The ordered coefficients are runlength encoded.

(g) The data stream is entropy encoded by means of arithmetic or Huffman coding.
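A sketch of the zig zag ordering from step (e), generating the scan order for an 8×8 block by walking its anti-diagonals:

```python
import numpy as np

# Zig zag scan order for an 8x8 block: traverse the anti-diagonals,
# alternating direction, so low frequency coefficients come first.
def zigzag_indices(n=8):
    order = []
    for s in range(2 * n - 1):          # s = row + col along one diagonal
        diag = [(x, s - x) for x in range(n) if 0 <= s - x < n]
        order.extend(diag if s % 2 else diag[::-1])
    return order

block = np.arange(64).reshape(8, 8)     # value 8*row + col, for checking
print([block[x, y] for x, y in zigzag_indices()][:10])
# [0, 1, 8, 16, 9, 2, 3, 10, 17, 24] -- the ordering of figure 4
```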

The block diagram of a JPEG encoder is shown in figure-5.

Figure – 5 : JPEG Encoder

Decompression is accomplished by applying the inverse of each of the preceding steps in the opposite order. Thus, the decoding process starts with entropy decoding and proceeds to convert the runlengths to a sequence of zeros and coefficients. The coefficients are dequantised and the Inverse Discrete Cosine Transform (IDCT) is performed to retrieve the decompressed image. The block diagram of a JPEG decoder is shown in figure – 6.

Figure – 6 : JPEG Decoder

4.2 Discrete Cosine Transforms

The Discrete Cosine Transform (DCT) is a variant of the Fourier Transform adapted to real, as opposed to complex, data. This transform is linear, i.e., the DCT can be completely specified by a matrix.

An 8×8 block of pixels can be viewed as a vector in a 64 dimensional space. Each of these blocks is transformed separately. The 64 pixel values, $s_{xy}$, are transformed via the Forward Discrete Cosine Transform (FDCT) to 64 coordinates in a new basis, $S_{uv}$. Prior to transformation, the pixel values are shifted to centre them around zero. For 8 bit data, this means subtracting 128 from the pixel values. If the pixel data are expressed with 8 bits of precision, the transformed data are stored as signed integers with 11 bits of precision.

The FDCT is expressed in equation (2).

$$
S_{uv} = \frac{1}{4}\, C(u)\, C(v) \sum_{x=0}^{7} \sum_{y=0}^{7} s_{xy}
\cos\frac{(2x+1)u\pi}{16} \cos\frac{(2y+1)v\pi}{16},
\qquad 0 \le u, v \le 7 \qquad (2)
$$

The IDCT is expressed by (3).

$$
s_{xy} = \frac{1}{4} \sum_{u=0}^{7} \sum_{v=0}^{7} C(u)\, C(v)\, S_{uv}
\cos\frac{(2x+1)u\pi}{16} \cos\frac{(2y+1)v\pi}{16},
\qquad 0 \le x, y \le 7 \qquad (3)
$$

In both equations,

$$
C(k) = \begin{cases} 1/\sqrt{2}, & k = 0 \\ 1, & k > 0 \end{cases}
\qquad \text{for } k = u, v.
$$

Because the pixel values typically vary slowly from point to point across an image block, the FDCT processing step lays the foundation for achieving data compression by concentrating most of the signal in the lower spatial frequencies. For a typical 8×8 sample block from a typical source image, most of the spatial frequencies have zero or near zero amplitude and need not be encoded.
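A direct, unoptimised transcription of equation (2) illustrates this energy compaction; real codecs use fast DCT factorisations rather than this quadruple loop:

```python
import numpy as np

# Forward DCT of one 8x8 block, transcribed directly from equation (2).
def fdct(block):                        # block: 8x8 array of floats
    C = lambda k: 1 / np.sqrt(2) if k == 0 else 1.0
    S = np.zeros((8, 8))
    for u in range(8):
        for v in range(8):
            acc = sum(block[x, y] *
                      np.cos((2 * x + 1) * u * np.pi / 16) *
                      np.cos((2 * y + 1) * v * np.pi / 16)
                      for x in range(8) for y in range(8))
            S[u, v] = 0.25 * C(u) * C(v) * acc
    return S

pixels = np.full((8, 8), 130.0)         # a flat (slowly varying) block
S = fdct(pixels - 128.0)                # level shift 8 bit data first
print(np.round(S, 3))                   # only the DC term S[0,0] is non zero
```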

In an implementation, the cosine terms are computed only once and taken from a look up table. Except for the possibility of rounding errors, the DCT portion of the algorithm does not introduce any loss of data, i.e., the corresponding pixel values in the original image and in the image reconstructed by performing an IDCT at this stage will be the same.
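This losslessness is easy to verify; assuming SciPy is available, its orthonormal 2D DCT matches equations (2) and (3) for 8×8 blocks, so a forward and inverse transform recovers the block up to floating point error:

```python
import numpy as np
from scipy.fft import dctn, idctn

# FDCT/IDCT round trip for one block: quantisation (the lossy step) is
# skipped, so the reconstruction matches the original block.
rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, (8, 8)).astype(float)

S = dctn(pixels - 128.0, norm="ortho")        # forward transform, eq. (2)
recovered = idctn(S, norm="ortho") + 128.0    # inverse transform, eq. (3)
print(np.allclose(recovered, pixels))         # True
```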

4.3 Quantisation

Quantisation makes JPEG lossy. The purpose of quantisation is to achieve compression by representing DCT coefficients with no greater precision than is necessary to achieve the desired image quality. The goal of this processing step is to discard information that is not visually significant.

In quantisation, each of the 64 DCT coefficients is uniformly quantised in conjunction with a 64 element quantisation table. The user or the application must specify this table as an input to the encoder. Each coefficient is divided by its corresponding quantisation table entry, and this division is followed by rounding to the nearest integer, as shown in (4).

$$
S^{q}_{uv} = \operatorname{round}\!\left(\frac{S_{uv}}{Q_{uv}}\right)
\qquad (4)
$$

Each element of the quantisation table can be any integer value from 1 to 255. This value specifies the step size of the quantiser for its corresponding DCT coefficient. A step size of 1 means no loss for the corresponding coefficient. The larger the value of an element of the quantisation table, the fewer bits of precision are used in the corresponding coefficient.

The coefficients in the quantisation matrix are not uniform. The variation in the coefficients is based upon the relative visual impact of errors at the given frequencies: the eye is assumed to be less sensitive to errors at higher frequencies than at lower frequencies.
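A sketch of the quantise/dequantise step of equation (4); the table Q below is a made-up example that simply uses larger step sizes at higher frequencies, not one of the standard tables:

```python
import numpy as np

# Uniform quantisation per equation (4), with coarser steps (more loss)
# for the higher spatial frequencies. Q here is illustrative only.
freq = np.add.outer(np.arange(8), np.arange(8))   # u + v per coefficient
Q = np.where(freq < 4, 16, 64)

def quantise(S):    return np.round(S / Q).astype(int)
def dequantise(Sq): return Sq * Q

S = np.zeros((8, 8)); S[0, 0] = 240.0; S[7, 7] = 30.0
Sq = quantise(S)
print(Sq[0, 0], Sq[7, 7])        # 15 0: the small high frequency term
print(dequantise(Sq)[7, 7])      # 0 -- discarded by quantisation
```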

4.4 Runlength Encoding


The final stage in the compression process is entropy encoding. Entropy encoding may be considered as a two step process. The first step converts the zig zag sequence of quantised coefficients into an intermediate sequence of symbols. The second step converts these symbols to a data stream in which the symbols no longer have externally identifiable boundaries.

The intermediate symbols are generated by runlength encoding the quantised DCT coefficients. Here, each non zero AC coefficient is represented in combination with the runlength (consecutive number) of zero valued AC coefficients which precede it in the zig zag sequence. Each such runlength/non zero coefficient combination is usually represented as the pair of symbols shown below.

Symbol 1 : (RUNLENGTH, SIZE)
Symbol 2 : (AMPLITUDE)

RUNLENGTH is the number of consecutive zero valued AC coefficients in the zig zag sequence preceding the non zero AC coefficient being represented. SIZE is the number of bits used to encode AMPLITUDE by the signed integer encoding used with JPEG's particular method of Huffman coding (explained under entropy encoding).

The intermediate representation for an 8×8 sample block's differential DC coefficient is structured similarly. Symbol 1, however, represents only the SIZE information, whereas symbol 2 represents the AMPLITUDE information.
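A sketch of generating the intermediate AC symbols; JPEG's special cases (the end-of-block symbol and the extension symbol for runs longer than 15) are omitted for brevity:

```python
# Pair each non zero AC coefficient with the run of zeros preceding it
# in the zig zag sequence, producing (RUNLENGTH, SIZE), AMPLITUDE symbols.
def ac_symbols(zigzag_ac):           # the 63 AC coefficients in scan order
    symbols, run = [], 0
    for coeff in zigzag_ac:
        if coeff == 0:
            run += 1
            continue
        size = abs(coeff).bit_length()   # bits needed to encode AMPLITUDE
        symbols.append(((run, size), coeff))
        run = 0
    return symbols

ac = [5, 0, 0, -3, 0, 1] + [0] * 57
print(ac_symbols(ac))   # [((0, 3), 5), ((2, 2), -3), ((1, 1), 1)]
```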

4.5 Entropy Encoding

Entropy encoding achieves additional compression losslessly by encoding the quantised DCT coefficients more compactly based on their statistical characteristics. The JPEG proposal specifies two entropy encoding methods: Huffman coding and arithmetic coding. Baseline codecs use Huffman coding, but codecs with both methods are specified for all modes of operation.

A Huffman code assigns a variable bit size word to each coefficient such that no code word forms the prefix of another legal code word. Huffman coding requires that one or more sets of Huffman code tables be specified by the application. The tables that are used to compress an image are needed to decompress it. Huffman tables may be pre-defined and used within an application as defaults, or computed specifically for a given image in an initial statistics gathering pass prior to compression.
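The construction of such a prefix code from symbol statistics can be sketched as follows; note that JPEG actually specifies its tables by code lengths rather than by an explicit tree, so this shows the principle, not the standard's table format:

```python
import heapq

# Build a Huffman (prefix) code from symbol frequencies: repeatedly merge
# the two least frequent subtrees, prepending a bit to each side's codes.
def huffman_code(freqs):
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)                       # tie breaker for the heap
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

print(huffman_code({"a": 45, "b": 13, "c": 12, "d": 30}))
# frequent symbols get shorter words; no word is a prefix of another
```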

The particular arithmetic coding method specified in the JPEG proposal requires no tables to be externally provided; it is able to adapt to the image statistics as it encodes the image. If desired, statistical conditioning tables can be used as inputs for slightly better efficiency. Arithmetic coding produces a smaller compressed file with no additional impact on image quality. However, arithmetic coding is considered more complex than Huffman coding for certain implementations. Moreover, the particular variant of arithmetic coding specified by the JPEG standard is subject to patents owned by IBM, AT&T and Mitsubishi, and thus cannot be used without obtaining licences from these companies.

If the only difference between two JPEG codecs is the entropy encoding method, transcoding between the two is possible by simply entropy decoding with one method and entropy encoding with the other.

4.6 Interchange Format

One component of the JPEG recommendation is the interchange format. Within an application, many choices can be established internally; to share data, all arbitrary choices must be explicitly spelled out. In particular, the JPEG header contains information about the image dimensions, the type of coding, the quantisation levels and a description of the entropy encoding scheme used.

The JPEG recommendation provides a default set of quantisation levels. If the application stays within the defaults, it is not required to specify the tables. However, any set of coefficients can be used, provided they are inserted into the header information before exchanging the image with another application.

The standard also provides a set of Huffman codes. The standard states that the Huffman codes must be described in the header information even if the defaults are used. Again, this only needs to be done for files being exported to other applications. Some implementations allow one to set a flag indicating whether a file is for internal or external use.

4.7 Compressing Colour Images

The compression and decompression procedure mentioned above is basically for monochrome images. However, the same algorithm can be used for compressing and decompressing colour images. The luminance-chrominance format is preferred because the image components are reasonably uncorrelated with each other. Thus, as a first step, the RGB image format is converted to the luminance-chrominance format. For visual purposes, it is possible to lose more colour information than gray scale information. The chrominance images are, therefore, usually sub-sampled both horizontally and vertically. The luminance and the sub-sampled chrominance images are then individually compressed by the process mentioned above. At the decoder, the chrominance images are interpolated after the Inverse Discrete Cosine Transformation. The resulting luminance and chrominance images are transformed back to the RGB format. These processes are shown in figures 7 and 8.

Figure – 7 : Encoding Colour Images

Figure – 8 : Decoding Colour Images
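A sketch of the chrominance sub-sampling and the matching interpolation from figures 7 and 8, assuming 2:1 decimation in each direction and simple pixel replication (real decoders may use smoother interpolation filters):

```python
import numpy as np

# 2:1 sub-sampling of a chrominance plane, and interpolation back to
# full size at the decoder by pixel replication.
def subsample(chroma):               # keep every second row and column
    return chroma[::2, ::2]

def interpolate(small):              # nearest neighbour up-sampling
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

cb = np.arange(64.0).reshape(8, 8)   # a toy 8x8 chrominance plane
small = subsample(cb)                # 8x8 -> 4x4: a quarter of the data
print(small.shape, interpolate(small).shape)   # (4, 4) (8, 8)
```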

5.0 Image Quality Issues for JPEG Images

Coding and data compression of digital images involve manipulations that are approximate and result in image degradation. Data compression converts an image representation into a format that is suitable for efficient storage and/or transmission. In the case of lossy data compression techniques, compression is achieved by deleting small magnitude coefficients. Moreover, the quantisation process that is used is an additional source of error. Finally, the accuracy with which numerical operations are carried out is another source of error. Thus the compression and decompression process may alter data in a manner that causes objectionable image defects in certain areas of the image. Defects that are easily perceived by humans are changes in the sharpness or location of edges and changes in contrast across boundaries.

The JPEG algorithm operates by dividing the image into blocks of 8×8 pixels each and then removing the high frequency details from these blocks. Thus, when a compressed image is decompressed, the pixel values in the reconstructed image are not the same as in the original image. As a result, blocks with a large amount of high frequency information generally suffer more degradation than those containing less high frequency information. Moreover, these defects increase with higher compression ratios. This means that images compressed using high compression ratios have, when reconstructed, a significant number of blocks in error. This effect is shown in figure – 9.

Figure – 9 : Effect of Image Compression on Image Quality (panels (a) to (e))

The original image is shown in figure – 9 (a). A small portion of the image is cropped and enlarged for analysis; this portion is shown in figure – 9 (b). It was compressed using the JPEG algorithm in Adobe Photoshop for various quality and file size factors. The images shown in figure – 9 (c), (d) and (e) were compressed using quality factors of 10 (large file size, 68 kilobytes), 5 (medium file size, 25 kilobytes) and 0 (small file size, 17 kilobytes) respectively. Degradations such as blockiness and blurring of edges may be observed in these images as the compression ratio is increased.

It is also not advisable to recompress a JPEG image after decompressing and editing it. Multiple compressions of the same image result in poor quality images. If an image needs to be edited, it should be stored as an uncompressed image. For best visual quality, the image should be compressed only once, after it has been edited. Thus, JPEG is a useful format for compact storage and transmission of images, but it should not be used as an intermediate format for sequences of image manipulation steps. Lossless image formats such as PPM, TIFF etc. should be used for editing.

6.0 Summary

Digital images contain a significantly large amount of data that needs to be compressed for efficient storage and transmission. Over the past few years, several image compression techniques have been developed. These techniques aim to compress images by transmitting only the non-redundant information as it occurs; the redundant data are transmitted only once. As images of natural scenes contain a significant amount of redundant data, the actual amount of data transmitted or stored is reduced significantly.

Some of these compression techniques are information preserving, or lossless, in nature, i.e., the corresponding pixel values in the original and the reconstructed images are numerically the same. Other techniques are lossy in nature. Here, the reconstructed images are actually approximations of the original uncompressed images: the corresponding pixel values in the reconstructed and the original images are not the same. These techniques usually attempt to reduce image data by discarding the information that is not visually significant. Thus compression may be achieved without a major impact on visual image quality.

The JPEG standard defines a set of image compression mechanisms. The most widely used mechanism is the baseline sequential encoding algorithm, a lossy compression algorithm. It is based on the principle that small blocks of a natural image contain either constant or gradually changing luminance (low frequency information). Only a few sub-blocks, situated over sharp edges or regions of fine detail, have significant rapidly changing luminance (high frequency information). The significant low frequency information from each sub-block is retained while the high frequency information is discarded, thereby reducing the amount of information to be coded. Moreover, most JPEG compression software packages provide the user with a choice between image quality and the amount of compression.

Although JPEG was originally developed as a standard for still image compression, significant work has also been conducted on M-JPEG, or Motion JPEG, techniques for video. In M-JPEG, JPEG compression is applied to the individual frames of a video sequence. However, in the absence of any specific standard, these implementations are generally vendor specific.

MPEG (Moving Picture Experts Group) is the recognised standard for motion picture compression. It uses most of the techniques used in JPEG, but adds interframe compression to exploit the similarities that usually exist between successive frames. Therefore, MPEG typically compresses a video sequence by about a factor of 3 more than M-JPEG techniques can for similar quality. The disadvantages of MPEG are:

(a) it is extremely computationally intensive


(b) it is difficult to edit an MPEG sequence on a frame by frame basis, as each frame is intimately tied to the ones around it.

This latter problem has made M-JPEG methods popular for video editing products.
