Analysis of Results

We sought to 'compress' an image by taking its wavelet representation and discarding those coefficients whose magnitude fell below some fraction of the norm. We then performed an inverse DWT on the thresholded representation and subjectively compared the resulting image, which is 'smaller' in the amount of data needed to represent it.
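That pipeline can be sketched in a few lines. The sketch below is illustrative, not our actual code: it uses a hand-written one-level 2-D Haar transform as a stand-in for the Deslauriers-Dubuc wavelets we actually used (which are not tabulated in common numerical libraries), and the 1% threshold fraction is an arbitrary example.

```python
import numpy as np

def haar2d(img):
    """One level of an orthonormal 2-D Haar DWT (rows, then columns)."""
    def step(x):
        avg = (x[..., 0::2] + x[..., 1::2]) / np.sqrt(2)
        diff = (x[..., 0::2] - x[..., 1::2]) / np.sqrt(2)
        return np.concatenate([avg, diff], axis=-1)
    return step(step(img).T).T

def ihaar2d(coeffs):
    """Inverse of haar2d (undo the column step, then the row step)."""
    def istep(y):
        n = y.shape[-1] // 2
        avg, diff = y[..., :n], y[..., n:]
        x = np.empty_like(y)
        x[..., 0::2] = (avg + diff) / np.sqrt(2)
        x[..., 1::2] = (avg - diff) / np.sqrt(2)
        return x
    return istep(istep(coeffs.T).T)

# 'Compress': zero every coefficient smaller than a fraction of the norm.
img = np.arange(64, dtype=float).reshape(8, 8)
coeffs = haar2d(img)
t = 0.01 * np.linalg.norm(img)               # threshold = 1% of the image norm
kept = np.where(np.abs(coeffs) >= t, coeffs, 0.0)
recon = ihaar2d(kept)                        # inverse DWT of thresholded coeffs
```

Because the transform is orthonormal, zeroing small coefficients discards only a small fraction of the image's energy, so the reconstruction stays visually close to the original while many coefficients become zero (and hence cheap to store).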

The wavelets we used were important because wavelets belonging to the Deslauriers-Dubuc family are localized in both time and frequency, as can be seen in the inverse DWT of the Deslauriers(4,2) wavelet shown at right. The first number in the parentheses gives the number of vanishing moments of the wavelet that decomposes the signal, and the second gives the number of vanishing moments of the wavelet that reconstructs the signal. Moreover, as the number of vanishing moments increases, the gradations in the energy of the wavelet get smoother and smoother.

[Figures: original image; heavily compressed image]

Our test procedure followed the steps of the wavelet-based compression scheme discussed earlier, although we did not explore other novel ways of entropy coding. Instead, we focused on different sorts of thresholding.

The specific kind of thresholding that worked best for us was hard thresholding. However, instead of choosing an arbitrary threshold value, we chose the value to be a fraction of the norm of the sample image. In that sense we combined hard thresholding with quantile thresholding.
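A minimal sketch of that rule (the function name and the 10% example fraction are hypothetical, not taken from our code): coefficients whose magnitude reaches the data-dependent threshold survive unchanged, and everything else is zeroed. Note that for an orthonormal transform the coefficient norm equals the image norm, so thresholding against either is equivalent.

```python
import numpy as np

def hard_threshold(coeffs, frac):
    # The threshold is chosen relative to the data, not as an
    # arbitrary constant: a fraction of the norm of the coefficients.
    t = frac * np.linalg.norm(coeffs)
    # Hard thresholding keeps survivors unchanged (soft thresholding
    # would additionally shrink them toward zero by t).
    return np.where(np.abs(coeffs) >= t, coeffs, 0.0)

c = np.array([10.0, 0.5, -0.2, 3.0])
out = hard_threshold(c, 0.1)   # t is about 1.05, so 0.5 and -0.2 are zeroed
```

Tying the threshold to the norm means images with more overall energy tolerate a higher absolute cutoff, which is the quantile-like flavor of the rule.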

our code

[back to main page]