Image compression using wavelet transform and multiresolution decomposition

Amir Averbuch*, Danny Lazar, Moshe Israeli

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

171 Scopus citations

Abstract

Schemes for image compression of black-and-white images based on the wavelet transform are presented. The multiresolution nature of the discrete wavelet transform is proven to be a powerful tool for representing images decomposed along the vertical and horizontal directions using the pyramidal multiresolution scheme. The wavelet transform decomposes the image into a set of subimages called shapes with different resolutions corresponding to different frequency bands. Hence, different bit allocations are tested, assuming that details at high resolution and in diagonal directions are less visible to the human eye. The resulting coefficients are vector quantized (VQ) using the LBG algorithm. By using an error correction method that approximates the quantization error of the reconstructed coefficients, we minimize distortion for a given compression rate at low computational cost. Several compression techniques are tested. In the first experiment, several 512 × 512 images are trained together and common code tables are created. Using these tables, the black-and-white images in the training sequence achieve a compression ratio of 60-65 and a PSNR of 30-33. To investigate the compression of images outside the training set, many 480 × 480 images of uncalibrated faces are trained together to yield global code tables. Images of faces outside the training set are compressed and reconstructed using the resulting tables. The compression ratio is 40; PSNRs are 30-36. Images from the training set have similar compression ratios and quality. Finally, another compression method based on vector bit allocation is examined. The idea is to allocate different numbers of bits to the vectors, depending on their values, and to encode the "type" of each vector (large or small) in a bit map. The vectors in each shape are grouped and trained together according to the magnitude of their variances. A vector with higher variance is quantized using longer tables. Hence, in each shape, the vector coefficients are quantized using several tables, each vector by the appropriate table. The relation of each vector to its quantization table is saved in a map file. The major improvement is that the process becomes more efficient and faster, since smaller tables are used and fewer comparisons are needed to locate the closest vector in a table. The bottleneck of searching large tables, which is very inefficient in all VQs, is eliminated. The compression ratio and the quality of the reconstructed faces outside the training set are similar to those of the previous compression method - a compression ratio of 35-36 and a PSNR of 35-37 - although faces reconstructed from the training set are slightly better. Different wavelet filters are tested, and the best results are achieved with the biorthogonal filters. The results presented here are comparable, in terms of PSNR, with the best results published recently.
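
As a rough illustration of the pyramidal multiresolution decomposition described in the abstract, the following is a minimal sketch using the PyWavelets library (not referenced in the paper). The 'bior4.4' filter and the three-level pyramid are illustrative assumptions; the abstract only states that biorthogonal filters gave the best results.

```python
# Minimal sketch: pyramidal 2-D wavelet decomposition into subimages ("shapes").
# Assumptions (not from the paper): PyWavelets, the 'bior4.4' filter, 3 levels.
import numpy as np
import pywt

def decompose(image: np.ndarray, wavelet: str = "bior4.4", levels: int = 3):
    """Decompose a grayscale image into subbands per resolution level.

    Returns the coarse approximation plus (horizontal, vertical, diagonal)
    detail subbands for each level, coarsest level first.
    """
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=levels)
    approx, detail_levels = coeffs[0], coeffs[1:]
    return approx, detail_levels

# Example: a random 512 x 512 array stands in for a real grayscale image.
image = np.random.rand(512, 512)
approx, details = decompose(image)
print("approximation:", approx.shape)
for lvl, (cH, cV, cD) in enumerate(details, start=1):
    print(f"level {lvl}: H {cH.shape}, V {cV.shape}, D {cD.shape}")
```

The vector quantization stage can be sketched in the same spirit. Below is a minimal LBG (generalized Lloyd) training loop together with a variance-based grouping of the vectors in a subband, as a hedged illustration of the bit-allocation idea: high-variance vectors are assigned the larger codebook ("longer tables") and low-variance vectors the smaller one, with the class of each vector recorded in a bit map. The 4 × 4 block size, the codebook sizes, and the median variance threshold are assumptions, not the parameters used in the paper.

```python
# Minimal sketch: LBG vector quantization with variance-classified codebooks.
# Assumptions (not from the paper): 4x4 blocks, codebook sizes 64/16 (powers
# of two), and a median variance threshold to split "large" from "small".
import numpy as np

def lbg_codebook(vectors: np.ndarray, size: int, iters: int = 20, eps: float = 1e-3):
    """Train a codebook of `size` codewords (a power of two) by codeword
    splitting followed by Lloyd iterations."""
    codebook = vectors.mean(axis=0, keepdims=True)
    while codebook.shape[0] < size:
        # Split every codeword into a perturbed pair, then refine.
        codebook = np.concatenate([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):
            # Assign each training vector to its nearest codeword.
            d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            labels = d.argmin(axis=1)
            for k in range(codebook.shape[0]):
                members = vectors[labels == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

def quantize(vectors, codebook):
    """Return the index of the nearest codeword and the quantized vectors."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    idx = d.argmin(axis=1)
    return idx, codebook[idx]

def blocks(subband, b=4):
    """Cut a subband into non-overlapping b x b blocks, one vector per block."""
    h, w = subband.shape[0] // b * b, subband.shape[1] // b * b
    v = subband[:h, :w].reshape(h // b, b, w // b, b).swapaxes(1, 2)
    return v.reshape(-1, b * b)

# Classify vectors of one (here synthetic) subband by variance; the boolean
# `is_large` plays the role of the bit map relating vectors to their tables.
subband = np.random.randn(128, 128)
vecs = blocks(subband)
is_large = vecs.var(axis=1) > np.median(vecs.var(axis=1))
cb_large = lbg_codebook(vecs[is_large], size=64)
cb_small = lbg_codebook(vecs[~is_large], size=16)
idx_large, rec_large = quantize(vecs[is_large], cb_large)
idx_small, rec_small = quantize(vecs[~is_large], cb_small)
print("large-class codewords:", cb_large.shape[0],
      "small-class codewords:", cb_small.shape[0])
```

Because each class searches only its own, smaller table, the nearest-codeword search touches fewer candidates per vector, which is the efficiency gain the abstract attributes to this scheme.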
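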

Original language: English
Pages (from-to): 4-15
Number of pages: 12
Journal: IEEE Transactions on Image Processing
Volume: 5
Issue number: 1
State: Published - 1996
