Sub-octave wavelet representations and applications for medical image processing


Material Information

Title:
Sub-octave wavelet representations and applications for medical image processing
Physical Description:
xiv, 137 leaves : ill. ; 29 cm.
Language:
English
Creator:
Zong, Xuli, 1960-
Publication Date:

Subjects

Subjects / Keywords:
Image processing -- Digital techniques -- Mathematical models   ( lcsh )
Imaging systems in medicine -- Mathematical models   ( lcsh )
Wavelets (Mathematics)   ( lcsh )
Computer and Information Science and Engineering thesis, Ph.D   ( lcsh )
Dissertations, Academic -- Computer and Information Science and Engineering -- UF   ( lcsh )
Genre:
bibliography   ( marcgt )
non-fiction   ( marcgt )

Notes

Thesis:
Thesis (Ph.D.)--University of Florida, 1997.
Bibliography:
Includes bibliographical references (leaves 131-136).
Statement of Responsibility:
by Xuli Zong.
General Note:
Typescript.
General Note:
Vita.

Record Information

Source Institution:
University of Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 028641851
oclc - 38854628
System ID:
AA00013543:00001














SUB-OCTAVE WAVELET REPRESENTATIONS AND APPLICATIONS
FOR MEDICAL IMAGE PROCESSING










By

XULI ZONG


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY


UNIVERSITY OF FLORIDA


1997




























Copyright 1997



by



Xuli Zong











ACKNOWLEDGEMENTS


I would like to thank my advisor, Dr. Andrew Laine, for all his support and

thoughtful advice during my graduate study. I would also like to thank Drs. Ger-

hard Ritter, Sartaj Sahni, Edward Geiser, and John Harris for serving on my thesis

committee. Their time and thoughtful suggestions are greatly appreciated.

I am very grateful to Prof. Arthur Hornsby of the Department of Soil and Water

Science, University of Florida, for providing my financial support from January 1991

through June 1994. I would also like to thank Prof. Edward Geiser and Prof. An-

drew Laine for providing me a graduate research assistantship from September 1994

to December 1995 and from May 1996 to January 1997, respectively. I also want

to thank Dr. Dean Schoenfeld of the Robotics Lab in the Department of Nuclear

and Radiological Engineering, University of Florida, for providing my financial sup-

port during the final period of my Ph.D. study. His effort for reviewing part of my

dissertation is very much appreciated.

Special thanks go to Dr. Anke Meyer-Baese of the Department of Electrical and

Computer Engineering for her encouragement and constructive discussions with me.

I would like to thank members of the Image Processing Research Group for some

enjoyable moments, their help, and their friendship.

Finally, I would like to thank my parents and my relatives for their support and

constant encouragement, as well as my wife and my daughter for their understanding,

patience, and love which gave me the strength to fulfill my educational objectives.















TABLE OF CONTENTS


ACKNOWLEDGEMENTS

LIST OF TABLES

LIST OF FIGURES

ABSTRACT

CHAPTERS

1 INTRODUCTION

   1.1 Motivations
   1.2 Review of Related Methods
   1.3 Objectives of De-Noising and Enhancement
   1.4 Wavelet Based Approaches for De-Noising and Enhancement
   1.5 Organization of This Dissertation

2 DE-NOISING AND ENHANCEMENT TECHNIQUES

   2.1 Introduction
   2.2 Noise Modeling
       2.2.1 Additive Noise Model
       2.2.2 Approximate Speckle Noise Model
   2.3 Uniform Wavelet Shrinkage Methods for De-Noising
       2.3.1 Hard Thresholding
       2.3.2 Soft Thresholding
   2.4 Enhancement Techniques
       2.4.1 Enhancement by a Nonlinear Gain Function
       2.4.2 Enhancement by Generalized Adaptive Gain

3 DYADIC WAVELET REPRESENTATIONS

   3.1 Discrete Dyadic Wavelet Transform
       3.1.1 One-Dimensional Dyadic Wavelet Transform
       3.1.2 Two-Dimensional Dyadic Wavelet Transform
   3.2 DWT-Based De-Noising and Feature Enhancement







       3.2.1 Algorithm for Additive Noise Reduction and Enhancement
       3.2.2 Algorithm for Speckle Reduction with Feature Enhancement
       3.2.3 DWT-Based De-Noising
       3.2.4 Regulating Threshold Selection through Scale Space
       3.2.5 DWT-Based Enhancement with Noise Suppression
   3.3 Application Samples and Analysis
       3.3.1 Less Affection from Pseudo-Gibbs Phenomena
       3.3.2 Additive Noise Reduction and Enhancement
       3.3.3 Speckle Reduction with Feature Enhancement
   3.4 Clinical Data Processing Study

4 SUB-OCTAVE WAVELET REPRESENTATIONS

   4.1 Introduction
   4.2 Continuous Sub-Octave Wavelet Transform
       4.2.1 One-Dimensional Sub-Octave Wavelet Transform
       4.2.2 Two-Dimensional Sub-Octave Wavelet Transform
   4.3 Discrete Sub-Octave Wavelet Transform
       4.3.1 One-Dimensional Discrete Sub-Octave Wavelet Transform
       4.3.2 Two-Dimensional Discrete Sub-Octave Wavelet Transform
   4.4 SWT-Based De-Noising
   4.5 SWT-Based Enhancement with Noise Suppression
   4.6 Application Samples and Analysis

5 PERFORMANCE MEASUREMENT AND COMPARATIVE STUDY

   5.1 Performance Metric for Quantitative Measurements
   5.2 Quantitative Comparison of Signal/Image Restoration
   5.3 Quantitative Comparison of Image Enhancement

6 OTHER APPLICATIONS OF WAVELET REPRESENTATIONS

   6.1 Border Identification of Echocardiograms
       6.1.1 Overview of the Algorithm
       6.1.2 Multiscale Edge Detection
       6.1.3 Shape Modeling
       6.1.4 Boundary Contour Reconstruction
       6.1.5 Smoothing of a Closed Contour without Shrinkage
       6.1.6 Sample Experimental Results
   6.2 Multiscale Segmentation of Masses
       6.2.1 Overview of the Method
       6.2.2 Feature Extraction
       6.2.3 Classification via a Radial Basis Neural Network
       6.2.4 Sample Experimental Results






7 CONCLUSIONS   115

APPENDICES

A FIR FILTERS FOR COMPACT SUPPORT WAVELETS   117

   A.1 First Order Derivative Wavelets of Spline Smoothing Functions   117
   A.2 Second Order Derivative Wavelets of Spline Smoothing Functions   122

B IMPLEMENTATION OF SUB-OCTAVE WAVELET TRANSFORMS   125

   B.1 One-Dimensional Sub-Octave Wavelet Transform   127
   B.2 One-Dimensional Inverse Sub-Octave Wavelet Transform   128
   B.3 Two-Dimensional Sub-Octave Wavelet Transform   129
   B.4 Two-Dimensional Inverse Sub-Octave Wavelet Transform   129

REFERENCES   131

BIOGRAPHICAL SKETCH   137













LIST OF TABLES


3.1 Impulse responses of filters H(w), G(w), K(w), and L(w). 35

3.2 Quantitative measurements of manually defined borders. ...... 55

3.3 Quantitative measurements of inter-observer mean border differences
in mm on original versus enhanced images, as shown in Figure 3.25. 58

4.1 Quantitative measurements of performance for de-noising and feature
restoration. ......................... ....... 77

5.1 Quantitative measurements: average square errors ||g' - g||^2 for various
signal restoration methods. 89

5.2 Quantitative measurements: RMSE for various de-noising methods. 91

5.3 Comparison of contrast values obtained by traditional contrast stretch-
ing (CST), unsharp masking (UNS), and multiscale nonlinear process-
ing of sub-octave wavelet transform (SWT) coefficients of a mammo-
gram containing a mass lesion. ..... 93

5.4 Contrast improvement index (CII) for enhancement by traditional con-
trast stretching (CST), unsharp masking (UNS), and multiscale non-
linear processing of sub-octave wavelet transform (SWT) coefficients
of a mammogram with a mass ...................... .94

5.5 Comparison of contrast values obtained by multiscale adaptive gain
processing of dyadic wavelet transform (DWT) and sub-octave wavelet
transform (SWT) coefficients. Mammographic features: minute
microcalcification cluster (MMC), microcalcification cluster (MC),
spicular lesion (SL), circular (arterial) calcification (CC), and well-
circumscribed mass (WCM) ................... .. .. 96

5.6 CII for enhancement by multiscale adaptive gain processing of dyadic
wavelet transform (DWT) and sub-octave wavelet transform (SWT)
coefficients. Mammographic features: minute microcalcification clus-
ter (MMC), microcalcification cluster (MC), spicular lesion (SL), cir-
cular (arterial) calcification (CC), and well-circumscribed mass (WCM). 96












LIST OF FIGURES


2.1 Thresholding methods: soft thresholding and hard thresholding. 21

2.2 A nonlinear gain function for feature enhancement with noise suppression. 22

2.3 A generalized adaptive gain function. ... 24

3.1 A 3-level DWT decomposition and reconstruction of a 1-D function. 29

3.2 A 3-level DWT decomposition and reconstruction of a 2-D function. 32

3.3 A 2-D analysis filter bank. ........................ 33

3.4 A 2-D synthesis filter bank. ................ ....... 34

3.5 Coefficient and energy distributions of signal "Blocks". 39

3.6 A sample scaling factor function. .... 41

3.7 Pseudo-Gibbs phenomena. (a) Orthonormal wavelet transform of an
original signal and its noisy signal with added spike noise. (b) Pseudo-
Gibbs phenomena after both hard thresholding and soft thresholding
de-noising under an OWT..................... ... 42

3.8 Multiscale discrete wavelet transform of an original and noisy signals. 42

3.9 DWT-based reconstruction after (a) hard thresholding, (b) soft thresh-
olding, and (c) soft thresholding with enhancement. ... 43

3.10 De-Noised and feature restored results of DWT-based algorithms; first
row: original signal, second row: noisy version, third row: de-noised
only result, and fourth row: de-noised and enhanced result signal. 46

3.11 De-Noising and enhancement. (a) Original signal. (b) Signal (a) with
added noise of 2.52dB. (c) Soft thresholding de-noising only (11.07dB).
(d) De-Noising with enhancement (12.25dB). ... 47








3.12 De-Noising and enhancement. (a) Original image. (b) Image (a) with
added noise of 2.5dB. (c) Soft thresholding de-noising only (11.75dB).
(d) De-Noising with enhancement (14.87dB). .... 47

3.13 De-Noising and enhancement. (a) Original MRI image. (b) De-Noising
only. (c) DWT-based de-noising with enhancement. ... 49

3.14 De-Noising and enhancement. (a) Original MRI image. (b) De-Noising
only. (c) DWT-based de-noising with enhancement. ..... 49

3.15 An algorithm for speckle reduction and contrast enhancement. ... .49

3.16 Results of de-noising and enhancement. (a) A noisy ED frame. (b)
Wavelet shrinkage de-noising only method. (c) DWT-based de-noising
and enhancement.............................. 50

3.17 Results of de-noising and enhancement. (a) A noisy ES frame. (b)
Wavelet shrinkage de-noising only method. (c) DWT-based de-noising
and enhancement. ................... ......... 51

3.18 A generalized adaptive gain function for processing an echocardiogram
in Figure 3.17(a). 52

3.19 Results of various de-noising methods. (a) Original image with speckle
noise. (b) Median filtering. (c) Extreme sharpening combined with
median filtering. (d) Homomorphic Wiener filtering. (e) Wavelet
shrinkage de-noising only. (f) DWT-based de-noising with enhancement. 53

3.20 Results of various de-noising methods. (a) Original image with speckle
noise. (b) Median filtering. (c) Extreme sharpening combined with
median filtering. (d) Homomorphic Wiener filtering. (e) Wavelet
shrinkage de-noising only method. (f) DWT-based de-noising and enhancement. 54

3.21 Area correlation between manually defined borders by two expert car-
diologist observers. ........................... .. 56

3.22 Border difference variation on the original images. (a) Distribution of
Epi ED border differences. (b) Distribution of Epi ES border differ-
ences. (c) Distribution of Endo ED border differences. (d) Distribu-
tion of Endo ES border differences. The solid lines are the third order
polynomial fitting curves .......................... 57






3.23 Border difference variation on the enhanced images. (a) Distribution
of Epi ED border differences. (b) Distribution of Epi ES border differ-
ences. (c) Distribution of Endo ED border differences. (d) Distribu-
tion of Endo ES border differences. The solid lines are the third order
polynomial fitting curves .................... .. 59

3.24 De-noising and image enhancement: (a) An original ED frame; (b) An
original ES frame; (c) The enhanced ED frame; (d) The enhanced ES
frame. 60

3.25 Image and border display: (a) An original ED frame with manually-
defined borders overlaid; (b) An original ES frame with manually-
defined borders overlaid; (c) The enhanced ED frame with manually-
defined borders overlaid; (d) The enhanced ES frame with manually-
defined borders overlaid. Red and yellow borders represent the two
observers. 61

4.1 A 3-level SWT decomposition and reconstruction diagram for a 1-D
function . . .. 70

4.2 Divisions of the frequency bands under the SWT shown in Figure 4.1. 71

4.3 A 2-level 4-sub-octave decomposition and reconstruction of a SWT. 72

4.4 Frequency plane tessellation and filter bank. (a) Division of the fre-
quency plane for a 2-level, 2-sub-octave analysis. (b) Its filter bank
along the horizontal direction ....................... 72

4.5 Smoothing, scaling, and wavelet functions for a SWT. These functions
are used for a 2-sub-octave analysis. . .. 74

4.6 An example of level one analytic filters for 2 sub-octave bands and
a low-pass band. The dashed curve is the frequency response of a
first order derivative approximation of a smoothing function and the
dash-dot curve is the frequency response of a second order derivative
approximation. The solid curve is a scaling approximation at level one. 75

4.7 De-noised and restored features from the SWT-based algorithm. From
top to bottom: original signal; noisy signal; de-noised signal; overlay
of original and de-noised signal . ... 78

4.8 Limitations of a DWT for characterizing band-limited high frequency
features. . .. .80

4.9 De-noised and enhanced results of a noisy "Doppler" signal under a
DWT (25.529dB) and a SWT (26.076dB). .... 81






4.10 Enhancement with noise suppression. (a) A low contrast image of RMI
model 156 phantom with simulated masses embedded. (b) Enhance-
ment by traditional histogram equalization. (c) SWT-based enhance-
ment with noise suppression. . .. 82

4.11 Enhancement with noise suppression. (a) Area of a low contrast mam-
mogram with a microcalcification cluster. (b) Best enhancement by
traditional unsharp masking. (c) SWT-based enhancement with noise
suppression. (d) SWT-based enhancement of a local region of interest
(ROI) with noise suppression. ..... 84

5.1 Enhancement results. (a) Area of a low contrast mammogram with
a mass. (b) Enhancement of (a) by traditional contrast stretching.
(c) Enhancement of (a) by traditional unsharp masking. (d) SWT-
based enhancement of (a) with noise suppression. (e) The same area
of a low contrast mammogram contaminated with additive Gaussian
noise. (f) Enhancement of (e) by traditional contrast stretching. (g)
Enhancement of (e) by traditional unsharp masking. (h) SWT-based
enhancement with noise suppression. (i) Hand-segmented mass and
ROI for quantitative measurements of performance. ... 92

5.2 Phantom enhancement results. (a) Phantom image. (b) Mammogram
M56 with blended phantom features. (c) Nonlinear enhancement un-
der a DWT. (d) SWT-based enhancement with noise suppression. 95

6.1 The circular arc templates for matched filtering. ... 101

6.2 Dynamic shape modeling. ........................ 102

6.3 Connecting broken boundary segments. The first row shows four typ-
ical cases showing the end points of two broken segments belong to a
large segment. The second row is the result after connecting the two
broken segments for each case. ..... 103

6.4 Attached point removal. The first row shows four typical cases with
attached points. The second row is the result after attached point
removal for each corresponding case. ... 104

6.5 Border identification of the LV from a short-axis view. (a) An original
frame of the LV. (b) Edge maps detected using a DWT. (c) The center
point of the LV and extracted boundary segments. (d) Connected
boundary contours. (e) Contours in (d) overlaid with the original. (f)
Final estimated boundaries. ....................... 106






6.6 Local non-shrinking smoothness filtering of a closed contour. (a) The
smoothed contours. (b) Contours in (a) overlaid with the contours in
Figure 6.5(d) before smoothness filtering. ... 107

6.7 Border identification of an echocardiogram at ED. (a) An original
frame of the LV at ED. (b) The detected center point and endocardial
as well as epicardial boundaries overlaid with the original. 108

6.8 Border identification of a frame at ES from the same sequence of
echocardiograms as Figure 6.7. (a) An original frame of the LV at
ES. (b) The detected center point and boundaries overlaid with the
original. .......................... ....... 108

6.9 Network architecture, a three-layer resource-allocating neural network
of radial basis functions. ......................... 111

6.10 Test Images. First row: original ROI images; Second row: smoothed
and enhanced images; Third row: ideal segmentation results.
Columns: (a-c) real mammograms, (d) a mathematical model. 113

6.11 Experimental results of image segmentation. Four test cases, one each
row, are shown. The first column (a) is an original image, column (b)
is smoothed and enhanced images, column (c) is the segmented result,
and column (d) is the result of a traditional statistical classifier. 114

A.1 (a) A cubic spline function and its first and second order derivative
wavelets, and (b) the fourth order spline with its first and second
order derivatives. 124









Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy



SUB-OCTAVE WAVELET REPRESENTATIONS AND APPLICATIONS
FOR MEDICAL IMAGE PROCESSING

By

Xuli Zong

December 1997


Chairman: Dr. Andrew F. Laine
Major Department: Computer and Information Science and Engineering

This dissertation describes sub-octave wavelet representations and presents appli-

cations for medical image processing, including de-noising and feature enhancement.

A sub-octave wavelet representation is a generalization of a traditional octave dyadic

wavelet representation. Compared to this transform, finer division of each
octave into sub-octave components provides a superior ability to capture

transient activities in a signal or image. In addition, sub-octave wavelet represen-

tations allow us to characterize band-limited features more efficiently. De-Noising

and enhancement are accomplished through techniques of minimizing noise energy

and nonlinear processing of sub-octave coefficients to improve low contrast features.

We identify a class of sub-octave wavelets that can be implemented through band-

splitting techniques using FIR filters corresponding to a mother dyadic wavelet. The

methodology of sub-octave based nonlinear processing with noise suppression is ap-

plied to enhance features significant to medical diagnosis of dense radiographs.








In our preliminary studies we investigated several de-noising and enhancement

algorithms. De-Noising under an orthonormal wavelet transform was shown to cause

artifacts, including pseudo-Gibbs phenomena. To avoid the problem, we adopt a

dyadic wavelet transform for de-noising and enhancement. The advantage is that

fewer pseudo-Gibbs artifacts were observed in our experimental results. We devel-

oped algorithms for reducing additive and multiplicative noise. The algorithm for

speckle reduction and contrast enhancement was applied to echocardiographic im-

ages. Within a framework of multiscale wavelet analysis, we applied wavelet shrink-

age techniques to eliminate noise while preserving the sharpness of salient features. In

addition, nonlinear processing of feature energy was carried out to improve contrast

within local structures and along object boundaries.

A study using a database of clinical echocardiographic images suggests that such

de-noising and enhancement may improve the overall consistency and reliability of

myocardial borders as defined by expert observers. Comparative studies on quantita-

tive measurements of experimental results between our algorithms and other methods

are presented. In addition, we applied wavelet representations under dyadic or sub-

octave wavelet transforms to other medical image processing problems, such as border

identification and mass segmentation.
















CHAPTER 1
INTRODUCTION

1.1 Motivations

Noise, artifacts, and low contrast can cause signal and image degradations during

data acquisition of many signals and images, especially in areas of medical image ap-

plications. Different image modalities exhibit distinct types of degradation. Mammo-

graphic images often exhibit low contrast while images formed with coherent energy,

such as ultrasound, suffer from speckle noise. Transmitted audio signals sometimes

have the problem of channel-added noise. These degradations not only lower audio

or visual quality, but also cause analysis and recognition algorithms to sometimes fail

to achieve their objectives.

Since poor image quality often makes feature extraction, analysis, recognition,

and quantitative measurements problematic and unreliable in various areas of sig-

nal/image processing and computer vision, much research has been devoted to im-

prove the quality of acquired signals and images degraded by these factors [30]. Data

restoration techniques are often targeted to reduce noise. Unlike noise, artifacts

sometimes comprise not only high frequency noise components, but also middle-to-

low frequency components which are very hard to differentiate from other typical

features in the spectrum of frequency contents. Simple de-noising techniques will

have problems reducing artifacts, so application-specific methods often have to be

used to eliminate artifacts based on certain prior knowledge. The quality of low
contrast images can be improved through designated feature enhancement.








The promise of wavelet representations under dyadic or further sub-octave wavelet

transforms and the need for improving degraded signals/images have motivated this

dissertation research. Reducing artifacts is not a major concern of this research. The

major task of this research is to develop reliable techniques for reducing noise and

enhancing salient features important to various application problems. The promising

de-noising and feature enhancement techniques may improve the reliability and per-

formance of signal/image processing and computer vision algorithms for high-level

tasks, such as object detection or visual perception. In addition, we also show the

capability of wavelet representations for other medical imaging applications.


1.2 Review of Related Methods

Signal/image restoration and enhancement have been the focus of much research

in the areas of signal and image processing as well as computer vision. Several

de-noising methods and image enhancement techniques have been developed and re-

ported in the literature [30, 15, 26, 17, 62, 58, 68]. Most of these methods can be

roughly classified as spatial-, statistical-, and Fourier-domain-based. Many tradi-

tional methods were reviewed by Jain [30]. These conventional techniques for image

restoration and enhancement have shown certain limitations of balancing the effect

of removing noise and enhancing features.

For de-noising purposes, spatial and frequency domain smoothing methods often

not only reduce noise, but also smooth out high frequency components of wide-band

features as a side-effect. This is because the smoothing effect applies to both noise and
the high frequency components of features. Over-smoothing often causes certain
features of interest to be blurred. Recently, multiscale and/or multiresolution wavelet

techniques for signal/image restoration have been reported, suggesting improved re-

sults in signal/image quality [50, 46, 19, 9, 18, 59, 8, 65].








Mallat and Hwang [50] introduced a local maxima based method for removing

white noise. Their method analyzes the evolution of local maxima of the dyadic

wavelet transform across scales and identifies the local maxima curves above a cer-

tain measurement metric which more likely correspond to features than noise. The

de-noised signal/image is then reconstructed based on the extracted local maxima

corresponding to significant feature singularity points. Lu et al. [46] further extended

the idea of local maxima curves to a local maxima tree across scales and employed

a different measurement metric to detect features from noise. Coifman and Majid

[9] developed a wavelet packet-based method for de-noising signals. It is an iterative

method for extracting features based on the best wavelet packet basis, which removes

noise energy below a certain threshold.

Donoho and Johnstone [19, 20] presented thresholding-based wavelet shrinkage

methods for noise reduction. These methods uniformly reduce noise coefficients below

a global threshold. Hard thresholding and soft thresholding involve a trade-off between
preserving features and achieving smoothness. Soft thresholding de-noising was fur-
ther analyzed by Donoho [18], who proved that, with high probability, the de-noised

signal is at least as smooth as the noise-free original in a wide variety of smoothness

measures and comes nearly as close (in the mean square sense) to the original as

any other estimate. But the method still faces the problem of balancing the

removal of noise and signal details in order to get a better performance in terms of

visual quality and quantitative measurements. Thresholding-based wavelet shrinkage

under an orthonormal wavelet transform has shown undesired side-effects, including

pseudo-Gibbs phenomena [8]. In order to solve the problem, Coifman and Donoho

[8] developed a translation-invariant de-noising method. Their method alleviates the

problem, but oscillations after de-noising remain visible. Wei and Burrus [65] used








redundant wavelet representations to achieve translation-invariant effects for image

restoration when applied to various image coding schemes.

Most noise encountered in practice is additive, but certain noise, such as speckle
noise, cannot be characterized well as additive. Speckle noise can be better approximated

as multiplicative noise [30]. Image formation under coherent waves results in a gran-

ular pattern known as speckle. The granular pattern is correlated with the surface

roughness of an object being imaged. Goodman [25] presented an analysis of speckle

properties under coherent irradiance, such as laser and ultrasound. The primary

differences between laser and ultrasound speckle were pointed out by Abbott and

Thurstone [1] in terms of coherent interference and speckle production. For speckle

reduction, earlier techniques include temporal averaging [25, 1], median filtering,

and homomorphic Wiener filtering [30]. Homomorphic Wiener filtering is a method

which converts multiplicative noise into additive noise and applies Wiener low-pass

filtering to reduce noise. Similar to temporal averaging, one speckle reduction tech-

nique [57] used frequency and/or angle diversity to generate multiple uncorrelated

synthetic-aperture radar (SAR) images which were summed incoherently to reduce

speckle. Hokland and Taxt [28] reported a technique which decomposed a coherent

image into three components, one of which, called subresolvable quasi-periodic scat-

ter, causes speckle noise. The component was eliminated by harmonic analysis and

processing.

In the last few years, several wavelet based techniques were developed for speckle

reduction. Moulin [54] introduced an algorithm based on the maximum-likelihood

principle and a wavelet regularization procedure on the logarithm of a radar image

to reduce speckle. Guo et al. [27] first reported a method based on wavelet shrinkage

for speckle reduction. In the method of Guo et al., wavelet shrinkage of a logarith-

mically transformed image is applied for speckle reduction of SAR images. They








also proposed several approaches to combine data from polarization to achieve better

performance. We [71, 72] have developed a method for speckle reduction similar to

the one by Guo et al. [27]. The differences are that (a) noise is modeled as multi-

plicative, taking a homomorphic approach to reduce the noise, (b) different wavelets

and multiscale overcomplete representations are employed in our approach, and (c)

an enhancement mechanism is incorporated into our de-noising process. Thus, our

method can not only reduce speckle noise, but also enhance interesting features.

In the last two decades, many image enhancement methods have been published

in the literature. Several spatial and frequency-based techniques [11, 30, 24, 44, 58]

were developed for various image enhancement purposes. Contrast stretching, high-

pass filtering, and histogram modification methods are described in Jain [30]. Contrast
stretching was an early technique for contrast enhancement [30]. This method has

limitations of selecting features based on local information for enhancement because

it is a global approach and the enhancement function is linear or piece-wise linear.

Contrast stretching may also amplify noise when input data are corrupted by noise.

Some image enhancement schemes applied to medical image modalities have been

developed and studied in the literature [30, 58, 45, 39, 35]. Specifically, spatial and

frequency-based techniques [30, 24, 11, 44] have been developed for ultrasound image

enhancement. A statistical enhancement method, which used the local standard

deviation of a surrounding region centered around each pixel to replace its value to

enhance edges in ultrasound images, was reported by Geiser [23].

Conventional filtering-based techniques for de-noising and image enhancement

have shown certain limited ability for removing noise without blurring features and

for enhancing contrast without amplifying noise because spatial and frequency rep-

resentations can not separate features from noise well. In the last few years, many

techniques based on multiscale features, such as edges and object boundaries, have








achieved success for image enhancement in several application areas [32, 40, 41, 45].

Recently wavelet-based nonlinear enhancement techniques have produced superior

results in medical image enhancement [39, 35, 73].


1.3 Objectives of De-Noising and Enhancement

To improve the quality of acquired signals and images degraded by noise and/or

low contrast, most traditional methods try either to reduce noise or to enhance fea-

tures. At first glance, de-noising and feature enhancement appear to be two conflict-

ing objectives, especially to traditional methods for de-noising or image enhancement.

However, they are simply two sides of the same coin. The purpose of de-noising is to

eliminate noise, primarily in high frequency, while methods of feature enhancement

attempt to enhance specific signal details, including contrast enhancement. The dif-

ference lies in the fact that features often occupy a wider frequency band than noise.

It is even more difficult to achieve both objectives when feature details are corrupted

by noise. Traditional spatial and filtering-based methods for de-noising often reduce

noise at a price of blurring features while single-scale conventional methods for con-

trast enhancement may amplify noise. The single-scale representation of a signal in

time (or pure frequency) is problematic when attempting to discriminate signal from

noise.

Because of the limited ability of traditional techniques for de-noising or feature

enhancement, the two conflicting objectives can not be accomplished simultaneously

through earlier methods under spatial or Fourier representations with a single res-

olution of frequency contents. When the two mechanisms, de-noising and feature

enhancement, are combined under a framework of a new representation or platform

which helps to overcome the drawback of each mechanism when acting alone, we will

have a much better chance to fulfill the two objectives at the same time. Wavelet








transforms and wavelet theory can be one method for new representation and plat-

form. This may be why wavelet representations have attracted more and more at-

tention of researchers aiming at signal/image restoration and feature enhancement.

Recently wavelet based methods have shown promise in accomplishing the two ob-

jectives at the same time because wavelet decomposition can fine tune frequency

resolution of signal details. We are able to treat distinct components of details at

fine-to-coarse scales differently to achieve desired effects of de-noising and feature en-

hancement. Algorithms have been developed under such a multiscale wavelet analysis

framework [71, 73].


1.4 Wavelet Based Approaches for De-Noising and Enhancement


Since Morlet and Grossmann [52] formulated the first wavelet decomposition,

wavelet theory [52, 12, 13, 14, 6, 47, 48, 49, 51, 64] has been developed and well

documented in the last 14 years. Some practical applications of the theory have been

developed, but more applications are still under the developing stage. There are many

choices of wavelets with different properties [12]. De-noising using some wavelets hav-

ing oscillations may lead to certain undesired side-effects, for example
noise-induced ripples and oscillations when reconstructing from incomplete coeffi-

cients in the wavelet domain. This could be one of the major factors resulting in

artifacts, including the so-called pseudo-Gibbs phenomena in the neighborhood of

sharp variation points (singularities) after de-noising (for details see Coifman and

Donoho [8]).

Orthonormal Wavelet Transform (OWT) and discrete Dyadic Wavelet Transform

(DWT) have been used in various application areas, such as data compression, edge

detection, texture analysis, noise reduction, and image enhancement. The compact








and local support of wavelets in spatial and frequency domains has been a valu-

able property for characterizing features locally. This enables us to remove noise and
enhance features locally without affecting other features far apart. In a prelim-

inary implemented method, DWT has been adopted as our major analysis tool for

de-noising and contrast enhancement [73]. The reasons are quite obvious. A DWT

with the first order derivative of a smoothing function as its basis wavelet can sep-

arate noise energy from feature energy reasonably well in the wavelet domain. The

DWT also correlates prominent features in its multiscale representation, such as edges

and object boundaries. After experimental analysis and understanding of signal and

noise behaviors in scale space, we are able to find out which wavelet coefficients to

modify to enhance certain features of interest (FOI) based on simple thresholding

and nonlinear processing. The mother wavelet is a smoothing function and is anti-

symmetric with few oscillations, which keeps us relatively free from the side-effects

shown under OWT with a basis wavelet having slight oscillations itself. This effect

can be clearly seen from the de-noised results under OWT and our de-noised results

[73]. The filters used to perform the DWT have compact support of a few taps. The

DWT is a stable and overcomplete representation. DWT wavelet coefficients (WCs)
have a clear interpretation: they are proportional to the signal magnitude or
image intensity changes (gradients) at certain scales. Since WCs reflect energy in a
signal, a DWT with the aforementioned wavelets can be viewed as a process that
diffuses the energy of a signal into energies at different scales of its wavelet
representation.

Even though the DWT-based algorithm [73] has produced better results than

de-noising only methods for signal/image restoration, we have observed that a DWT

has limited ability to characterize band-limited features, such as texture information,








speech or sound signals including ultrasound signals. To more reliably identify fea-

tures through the time-scale space, we formulate and implement a sub-octave wavelet

transform, which is a generalization of the DWT. The sub-octave wavelet transform

provides a means to visualize signal details in equally divided sub-octave frequency

bands and is shown to characterize signal details more effectively. The theoretical

development of a sub-octave wavelet transform, FIR filter design, and efficient im-

plementation are part of this thesis research. Furthermore, in this thesis, we
develop a complete algorithm and quantitatively measure its performance, which

will be compared to other techniques for de-noising and/or feature enhancement.

In an approach developed during this research, we achieve de-noising and fea-

ture enhancement under a framework of multiscale sub-octave wavelet analysis and

judicious nonlinear processing [42, 43]. We seek to eliminate noise while restoring

or enhancing salient features. Through multiscale representation by a discrete sub-

octave wavelet transform (SWT) with first and second order derivative approxima-

tions of a smoothing function as its basis wavelets, we can distinguish feature energy

from noise energy reasonably well. The objectives of de-noising and feature enhance-

ment are achieved by simultaneously lowering noise energy and raising feature energy

through designated nonlinear processing of wavelet coefficients in the transform do-

main. Through parameterized processing, we are able to achieve a flexible control and

the potential to reduce speckle and restore (or even enhance) contrast along features,

such as object boundaries. As shown in our earlier work [73], this approach for speckle

reduction and contrast enhancement is less affected by pseudo-Gibbs phenomena [8].

1.5 Organization of This Dissertation

The rest of this dissertation is organized as follows. In Chapter 2, we review some
de-noising and enhancement methods. We describe how to regulate threshold-based








wavelet shrinkage through scale space and show how to design an enhancement func-

tion with noise suppression. In Chapter 3, we present dyadic wavelet transform based

techniques for de-noising and feature enhancement. Sample application results and

analysis are presented. In Chapter 4, we derive and formulate a sub-octave wavelet

transform mathematically and show how it generalizes the dyadic wavelet transform.

The advantage of a sub-octave representation over a dyadic wavelet representation

is presented. Sample application results are presented. In Chapter 5, we describe

how to quantitatively measure the performance of an algorithm for de-noising and

enhancement. Some comparisons are made between the results of other published

methods, and our DWT-based as well as SWT-based methods. In Chapter 6, we

apply wavelet representations under dyadic and sub-octave wavelet transforms to

other problems of medical image processing. Experimental results and analysis are

presented. This dissertation is concluded in Chapter 7. In Appendix A, we present

FIR filters used for dyadic and sub-octave wavelet transforms. In Appendix B, we

introduce procedures for a fast implementation of a sub-octave wavelet transform in


one and two dimensions.














CHAPTER 2
DE-NOISING AND ENHANCEMENT TECHNIQUES


In this chapter, we give an overview of the impact of noisy and low contrast sig-

nals/images, and review some related methods for de-noising and enhancement. We

also describe how to regulate the threshold-based de-noising techniques and design an

enhancement function with noise suppression. We then introduce the image restora-

tion and enhancement techniques employed in our algorithms. The reason that we

put this chapter ahead of wavelet representations described in Chapters 3 and 4 is

that both our DWT and SWT based algorithms share image restoration and enhance-

ment techniques introduced in this chapter. The advantage of this organization is

that we avoid describing the de-noising and enhancement operators repeatedly when

presenting our DWT and SWT based algorithms for noise reduction and contrast

enhancement, so we can simply refer to the operators in this chapter.

2.1 Introduction

Signal and image degradations by noise and low contrast are frequent phenomena

of signal/image data acquisition, especially in medical imaging. Image degradations

have a significant impact on the performance of human experts and computer-assisted

methods for medical diagnosis. For example, a human medical expert may fail to

capture some important information from a noisy and low contrast image when per-

forming medical diagnosis, especially when exhausted. A cardiologist may have to

perform border interpolation in order to identify myocardial borders when border

information is incomplete and corrupted by speckle noise, and make decisions based









on unreliable information. Noise and low contrast make it problematic for human

experts and computer algorithms to identify features of diagnostic importance in

medical imaging. In addition, noise and low contrast often make feature extraction,

analysis, and recognition algorithms unreliable, so improving the quality of acquired

medical images becomes necessary. Signal/image restoration and/or enhancement

are usually taken as the first step of a high level task of image processing and com-

puter vision. For instance, in most image segmentation algorithms, image smoothing

is usually carried out as the first step (or preprocessing) for segmentation in order to

reduce noise interference on the performance of these algorithms.

Most traditional approaches for de-noising have a single-minded objective which

is to reduce noise while minimizing the smoothing of features. Traditionally, noise

is frequently not a concern in feature enhancement algorithms. In the combined ap-

proach developed during this thesis research, we focus on two goals: (1) to remove

noise and (2) to enhance salient features to a desired degree. As part of this research,

we have implemented algorithms for removing additive and multiplicative noise re-

spectively while enhancing prominent features at the same time. These algorithms

are primarily based on wavelet representations, wavelet shrinkage, and feature em-

phasis. Wavelet shrinkage is a technique which uniformly reduces wavelet coefficients

through a certain operator, such as hard thresholding or soft thresholding. During

the process, small coefficients, mainly attributed to noise, are usually removed. For

feature enhancement, we revitalize low contrast FOI through feature emphasis (in-

creasing the energy level for each of these features). When the noise level in a signal

or image is high, these algorithms are capable of not only removing noise, but also

restoring features to near their original quality and even enhancing certain FOI se-

lectively. When the noise level is low, such as in a low contrast medical image, our

algorithms can enhance features with noise suppression to avoid amplifying noise.








The main ideas behind selected wavelet shrinkage and salient feature emphasis

encapsulate the two fundamental objectives of de-noising and feature enhancement:

(a) Remove noise, but not features, and

(b) Enhance the features of interest, but not noise.

These are two conflicting objectives in the sense that both sharp features and noise lie

in the high frequency part of the spectrum. In a traditional de-noising algorithm, noise
is often smoothed out at the price of blurred features. On the other hand, enhancing cer-

tain FOI corrupted by noise is more likely to amplify noise in an enhancement-only

technique without a noise suppression mechanism incorporated. This prevents tradi-

tional algorithms from attempting to achieve both of the objectives simultaneously

because noise and features can not be distinguished well in spatial or Fourier repre-

sentations. In the Fourier domain, de-noising is usually carried out through some type
of low-pass filtering, including template-based spatial averaging. On the

other hand, feature enhancement is accomplished under a certain type of high-pass

filtering. They are in conflict with each other when performed on a single set of data

represented either in time or in frequency. From this analysis, it may seem that some
type of band-pass filtering is a natural choice. But in fact single band-pass filtering at

a frequency band has a very limited capability of removing noise and enhancing fea-

tures. To achieve both the objectives, we need a suitable representation or platform

which can separate features from noise well and locally. Multiscale wavelet represen-

tation developed by Mallat and Zhong [51] has shown promise in separating features

and noise through scale space. As outlined by Daubechies [13] as an example, we

formulated and implemented multiscale sub-octave wavelet representation [42, 43], a

generalization of the dyadic wavelet representation, to further improve the capability

of characterizing features from noise. A dyadic wavelet transform is briefly explained








in Chapter 3 while a sub-octave wavelet transform is formulated and described in

Chapter 4.

2.2 Noise Modeling

Noise modeling is an important part of a noise reduction method and it affects

which kind of techniques should be used to reduce noise. Efficient noise models can

make de-noising more effective. When a noise behavior is not fully understood or

still can not be completely explained, its accurate noise model is very difficult to

obtain. However, approximate noise models, such as speckle noise modeling, may be

used in such a case. Continuous noise modeling is of theoretical importance while

discrete noise models are more related to practical signal/image processing for noise

reduction. Through the sampling theory, a discrete noise model can be obtained

by sampling its corresponding continuous noise model at a sampling rate (at least the
Nyquist rate) that avoids aliasing effects.

2.2.1 Additive Noise Model

For some signal/image processing applications considered, such as simulated sig-

nals and mammograms, noise is better approximated as an additive phenomenon. In

general, additive noise can be represented by the formula


f(x) = g(x) + η_a(x),        (2.1)


where g(x) is a desired unknown function. The function f(x) is a noisy observation
of g(x), η_a(x) is additive noise, and x is a vector of spatial locations or temporal
samples. By using vector notation, we unify the noise model for the 1-D, 2-D, ..., N-D
cases. For our signal/image processing, the 1-D and 2-D noise models are the ones of
interest. Noise η_a(x) is usually approximated as Gaussian white noise, so it








has zero mean (μ = 0) and a noise level σ, the standard deviation of the Gaussian

function. For 1-D signal processing, we discretize Equation (2.1) as


f(n) = g(n) + η_a(n),        (2.2)


where n ∈ Z, f(n) = f(nT + s) (with g(n) and η_a(n) defined similarly), T is a sampling period,

and s is a sampling shift. For 2-D image processing, Equation (2.1) is discretized as


f(m, n) = g(m, n) + η_a(m, n),        (2.3)


where (m, n) ∈ Z², f(m, n) = f(mT_x + s_x, nT_y + s_y) (with g(m, n) and η_a(m, n) defined
similarly), T_x and T_y are sampling periods along the horizontal and vertical directions,
and s_x and s_y are sampling shifts.
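To make the discrete additive model concrete, the following Python sketch (added here purely
for illustration; the array size and the noise level sigma are arbitrary placeholders) simulates
Equation (2.3) by corrupting a clean image with zero-mean Gaussian white noise of standard
deviation sigma.

    import numpy as np

    def add_gaussian_noise(g, sigma, seed=0):
        # Simulate Equation (2.3): f(m, n) = g(m, n) + eta_a(m, n), where
        # eta_a is zero-mean Gaussian white noise with standard deviation sigma.
        rng = np.random.default_rng(seed)
        eta_a = rng.normal(loc=0.0, scale=sigma, size=g.shape)
        return g + eta_a

    # Example: a smooth synthetic 64 x 64 "image" degraded by additive noise.
    g = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
    f = add_gaussian_noise(g, sigma=0.1)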

For an additive noise model, there exist techniques based on mean squared error

or l1 norm optimization to remove noise. Such techniques include Donoho and John-

stone's wavelet shrinkage techniques [19, 20, 18], Chen and Donoho's basis pursuit

de-noising [5], Mallat and Hwang's local-maxima-based method for removing white

noise [50], and wavelet packet-based de-noising [9, 10].

By incorporating de-noising and feature enhancement mechanisms within a frame-

work of wavelet representations [73, 42], we seek to reduce noise and enhance contrast

without amplifying noise. We shall demonstrate that subtle features barely seen or

invisible in a mammogram, such as microcalcification clusters, spicular lesions, and

circular (arterial) calcifications, can be enhanced via wavelet representations and

judicious selection and modification of transform coefficients. Since our algorithm

treats noise and features independently, superior results were obtained for similar








data when compared to existing algorithms designed for de-noising or enhancement

alone.

In our investigation, we studied hard thresholding and Donoho and Johnstone's

soft thresholding wavelet shrinkage [19, 18] for noise reduction. An advantage of soft

thresholding is that it can achieve smoothness while hard thresholding better pre-

serves features. In our approach for accomplishing de-noising and feature enhance-

ment, we take advantage of both thresholding methods. Donoho and Johnstone's

soft thresholding method [19, 18] was developed on an orthonormal wavelet trans-

form [12] primarily applied with Daubechies's Symmlet 8 basis wavelet. These results

showed some undesired side-effects, from pseudo-Gibbs phenomena [8]. By using an

overcomplete wavelet representation and basis wavelets with fewer oscillations, a re-

sult relatively free from such side-effects after de-noising was observed experimentally

on similar data sets. In our algorithm, we first adapt regularized soft thresholding

wavelet shrinkage to remove noise energy within the finer levels of scale (such as levels

1 and 2). We then apply to wavelet coefficients within the selected levels (such as

levels 3 and 4) of analysis a nonlinear gain with hard thresholding incorporated to

preserve features while removing small noise perturbations.
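The de-noising and enhancement operators themselves are defined in Sections 2.3 and 2.4,
and the overcomplete dyadic transform on which the actual algorithm operates is developed
in Chapter 3. Purely as an illustration of the level-dependent strategy just described (soft
thresholding at the finer levels, hard thresholding with a gain at coarser levels), the following
sketch uses the standard decimated 2-D wavelet transform of the PyWavelets package as a
stand-in; the wavelet, thresholds, and gain are placeholder choices, not the values used in our
experiments.

    import pywt

    def denoise_enhance_sketch(image, wavelet="bior2.2", levels=4,
                               t_soft=0.1, t_hard=0.05, gain=1.5):
        # wavedec2 returns [cA, (cH, cV, cD)_coarsest, ..., (cH, cV, cD)_finest].
        coeffs = pywt.wavedec2(image, wavelet, level=levels)
        processed = [coeffs[0]]
        for i, details in enumerate(coeffs[1:], start=1):
            level = levels - i + 1          # 1 = finest level, `levels` = coarsest
            if level <= 2:                  # finer levels: soft thresholding
                new = tuple(pywt.threshold(d, t_soft, mode="soft") for d in details)
            else:                           # coarser levels: hard thresholding + gain
                new = tuple(gain * pywt.threshold(d, t_hard, mode="hard") for d in details)
            processed.append(new)
        return pywt.waverec2(processed, wavelet)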

2.2.2 Approximate Speckle Noise Model

Coherent interference causes speckle noise. An accurate and reliable model of the

noise is desirable for efficient speckle reduction. But it remains a difficult problem.

An approximate speckle noise model is formulated below. Here, our primary objective

is to accomplish speckle reduction for 2-D digital echocardiograms, so we formulate

the noise model directly in two dimensions. The formulation of a one-dimensional

noise model is similar.







Since speckle noise is not simply additive, Jain [30] presented a general model for

speckle noise as

f(x, y) = g(x, y) η_m(x, y) + η_a(x, y),        (2.4)

where g(x, y) is an unknown 2-D function, such as a noise-free original image, to be
recovered, f(x, y) is a noisy observation of g(x, y), η_m(x, y) and η_a(x, y) are multi-
plicative and additive noise respectively, x and y are the variables, such as spatial
locations, and (x, y) ∈ R². Since the effect of additive noise (such as sensor noise)
with level σ_a is considerably smaller than that of multiplicative noise (coherent
interference), i.e., ||η_a(x, y)||² << ||g(x, y) η_m(x, y)||², Equation (2.4) can be
approximated by

f(x, y) = g(x, y) η_m(x, y).        (2.5)

To separate the noise from the original image, we take a logarithmic transform on

both sides of Equation (2.5),


log(f(x, y)) = log(g(x, y)) + log(η_m(x, y)).        (2.6)


Equation (2.6) can also be rewritten as


f'(x, y) = g'(x, y) + η'(x, y),        (2.7)

where f'(x, y) = log(f(x, y)), g'(x, y) = log(g(x, y)), and η'(x, y) = log(η_m(x, y)).


Now we can approximate η'(x, y) as additive white noise and may apply various

wavelet-based approaches for additive noise reduction. With uniform sampling, we

obtain the discrete version of equation (2.7) as


f'(m, n) = g'(m, n) + η'(m, n),        (2.8)







where (m, n) ∈ Z², f'(m, n) = f'(mT_x + s_x, nT_y + s_y), T_x and T_y are sampling peri-
ods along the horizontal and vertical directions, and s_x and s_y are sampling shifts. Wavelet

representation and wavelet transforms will be presented in the next two chapters. To

describe how the de-noising method works, here, we only need the fact that a wavelet

transform is a linear transformation, and we borrow its notation for a wavelet rep-

resentation whose details are given in the following chapters. The symbol W
denotes a wavelet transform, W_j^d f a wavelet coefficient at scale 2^j (or level
j) and direction d (1 for horizontal and 2 for vertical), and S_J a scaling approximation
at a final level J. By the properties of a linear transform, we have


W[f'(m, n)] = W[g'(m, n)] + W[η'(m, n)]        (2.9)


after applying the wavelet transform to both sides of Equation (2.8), where


W[f'(m, n)] = {(W_j^d[f'(m, n)])_{d=1,2; 1<=j<=J}, S_J[f'(m, n)]},
W[g'(m, n)] = {(W_j^d[g'(m, n)])_{d=1,2; 1<=j<=J}, S_J[g'(m, n)]},
W[η'(m, n)] = {(W_j^d[η'(m, n)])_{d=1,2; 1<=j<=J}, S_J[η'(m, n)]}.

Since noise lies in high frequency, it will reduce to near zero after a finite number
of repeated smoothings, so S_J[η'(m, n)] → 0. In fact, at most a 5-level wavelet de-
composition is needed to smooth out noise for most noise reduction applications we
conducted. This is why we only carry out speckle noise reduction through eliminat-
ing the noise energy Σ_{j,d} (W_j^d[η'(m, n)])². For image restoration purposes, it is desirable

to recover W[g'(m, n)], the wavelet transform of a desired function g'(m, n), from
W[f'(m, n)] by reducing W[η'(m, n)] in the wavelet domain. By taking the inverse
wavelet transform, we may be able to recover g'(m, n) or a close approximation. For








noise reduction and feature enhancement, we want to further increase the sharpness
of features of interest, such as myocardial boundaries, through nonlinear stretching
for feature energy gain on the signal details W_j^d[g'(m, n)].
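A compact sketch of this homomorphic strategy is given below. It is not the algorithm of
Chapter 3: it substitutes PyWavelets' decimated 2-D transform for our overcomplete dyadic
and sub-octave transforms, applies only soft thresholding (no feature-energy gain), and uses
placeholder parameter values. It is meant only to trace the flow of Equations (2.5)-(2.9):
logarithm, wavelet shrinkage of the log image, inverse transform, and exponentiation.

    import numpy as np
    import pywt

    def speckle_reduce_sketch(image, wavelet="bior2.2", levels=4, t=0.1, eps=1e-6):
        f_log = np.log(image + eps)                     # Equations (2.6)-(2.7)
        coeffs = pywt.wavedec2(f_log, wavelet, level=levels)
        shrunk = [coeffs[0]] + [
            tuple(pywt.threshold(d, t, mode="soft") for d in details)
            for details in coeffs[1:]                   # shrink detail coefficients
        ]
        g_log_hat = pywt.waverec2(shrunk, wavelet)      # estimate of g'(m, n)
        return np.exp(g_log_hat) - eps                  # back to the intensity domain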

Jain showed a similar homomorphic approach [30, pp. 313-316] for speckle reduc-

tion of images with undeformable objects through temporal averaging and homomor-

phic Wiener filtering. The motion and deformable nature of human hearts through

time prevents us from capturing the left ventricle in the same state across multiple frames.

Because we treat noise and feature components differently, we are able to produce

a result that is superior to de-noising only algorithms. We show that our algorithm

is capable of not only reducing noise, but also enhancing features of diagnostic im-

portance, such as myocardial boundaries in 2-D echocardiograms obtained from the

parasternal short-axis view.

2.3 Uniform Wavelet Shrinkage Methods for De-Noising

Threshold-based de-noising is a simple and efficient technique for noise reduction

when applied within a framework of wavelet representations which can separate
features from noise. Hard thresholding has long been used as a useful tool, includ-
ing for de-noising. Soft thresholding wavelet shrinkage for de-noising was developed by
Donoho and Johnstone [19]. Hard thresholding and soft thresholding involve a trade-off
between preserving features and achieving smoothness. When features in the wavelet
domain can be clearly distinguished from noise, applying hard thresholding wavelet
shrinkage can achieve a better result than soft thresholding. When this is not the case
and smoothness is more desirable, soft thresholding should be the choice.







2.3.1 Hard Thresholding


A hard thresholding operation can be expressed as


u(x) = T_H(v(x), t) = v(x) (|v(x)| > t)_+,        (2.10)

where t is a threshold value, x ∈ D where D is the domain of v(x), and u(x) is the
result of hard thresholding and has the same sign as v(x) if nonzero. The meaning
of (|v(x)| > t)_+ is defined by the expression

(|v(x)| > t)_+ = 1 if |v(x)| > t, and 0 otherwise.
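Equation (2.10) translates directly into code. The short NumPy sketch below is added only
for illustration; it applies the hard thresholding operator element-wise to an array of
coefficients.

    import numpy as np

    def hard_threshold(v, t):
        # T_H(v, t) = v * (|v| > t)_+ : keep coefficients whose magnitude
        # exceeds t and zero out the rest (Equation (2.10)).
        v = np.asarray(v, dtype=float)
        return np.where(np.abs(v) > t, v, 0.0)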


2.3.2 Soft Thresholding


Soft thresholding [19, 18] is a nonlinear operator and can be described by


u(x) = T_S(v(x), t) = sign(v(x)) (|v(x)| - t)_+,        (2.11)

where the threshold parameter t is proportional to the noise level, x ∈ D, the domain
of v(x), and u(x) is the result of soft thresholding and has the same sign as v(x) if
nonzero. The expression (|v(x)| - t)_+ is defined as

(|v(x)| - t)_+ = |v(x)| - t if |v(x)| > t, and 0 otherwise.















Figure 2.1. Thresholding methods: soft thresholding and hard thresholding.


The function sign(v) is defined as

sign(v) =    1   if v > 0,
            -1   if v < 0,
             0   otherwise.


Figure 2.1 shows a soft thresholding operation compared with hard thresholding.
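For comparison, a minimal sketch of the soft thresholding operator of Equation (2.11)
is given below; as above, the function name and interface are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """T_S of Eq. (2.11): shrink magnitudes toward zero by t, preserving the sign."""
    v = np.asarray(v, dtype=float)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```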


2.4 Enhancement Techniques


In this section, we describe how to design an enhancement function with noise


suppression incorporated. Several choices of enhancement functions are presented.


Analysis and discussion of the reasons for our design philosophy are also included.











[Plot: nonlinear enhancement function with T1 = 0.1, T2 = 0.2, T3 = 0.85, alpha = 0.4; horizontal axis v with T1, T2, T3 marked.]

Figure 2.2. A nonlinear gain function for feature enhancement with noise suppression.


2.4.1 Enhancement by a Nonlinear Gain Function


In the design of an enhancement function, we try to accomplish the two tasks of an
effective enhancement: (a) enhance features selectively and efficiently, and (b) avoid
amplifying noise. An enhancement operator with noise suppression incorporated is a
desirable choice for achieving these two tasks. Since we cannot fulfill the tasks
satisfactorily in the original or Fourier representation of a signal or image, we look to
other representations obtained through some kind of transformation. Through our
study and experiments, we observed that dyadic wavelet representations show great
promise for separating features from noise. Therefore, we can apply the kind of
enhancement functions introduced below to enhance features of interest (FOI). We
later generalize dyadic wavelet representations to sub-octave wavelet representations
for more efficiently characterizing the band-limited features frequently seen in medical
images.








A parameterized nonlinear gain function, targeted at the two tasks of an effective
enhancement, can be formulated as

E_NL(v) =   0                                                     if |v| < T1,
            s \cdot sign(v) (T2 + T ((|v| - T2)/T)^\alpha)        if T2 < |v| < T3,     (2.12)
            s \cdot v                                             otherwise,

where v \in [-1, 1], 0 < \alpha < 1, T = T3 - T2, and s is a positive scaling factor used to
adjust the overall energy of a processed image. Parameters T1, T2, and T3 are selected
values. An input value v (a normalized coefficient) with magnitude less than T1 most
likely results from noise, so it is set to zero. For an input value with magnitude greater
than T3, the contrast of the corresponding feature is already relatively high; no special
treatment is needed, so we apply only linear scaling, which also keeps the enhancement
function from becoming decreasing and thereby causing artifacts. The normalized
coefficients with magnitudes between T2 and T3 are the ones we want to enhance:
their contrast is relatively low, and our features of interest have coefficients in this
range. Thus, we nonlinearly stretch them for an energy gain that revitalizes these
features. The range between T1 and T2 is considered a risk area; both noise and
features may have components there, so to avoid amplifying noise we apply simple
linear scaling and treat this range like the range of values greater than T3. Figure 2.2
shows a sample enhancement function. This enhancement operator is less flexible
than the operator in Section 2.4.2, but it is computationally more efficient and can
serve as a choice when speed is a concern.
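The following is a minimal NumPy sketch of the nonlinear gain function of
Equation (2.12) applied to arrays of normalized coefficients; the function name and the
array-oriented interface are our own, and clipping the stretched base at zero is simply a
numerical guard, not part of the definition.

```python
import numpy as np

def e_nl(v, T1, T2, T3, alpha, s=1.0):
    """Nonlinear gain of Eq. (2.12) on normalized coefficients v in [-1, 1]."""
    v = np.asarray(v, dtype=float)
    mag = np.abs(v)
    T = T3 - T2
    # Nonlinear stretch for the mid-range band (T2, T3); clip keeps the base nonnegative.
    stretched = s * np.sign(v) * (T2 + T * (np.clip(mag - T2, 0.0, None) / T) ** alpha)
    out = np.where(mag < T1, 0.0, s * v)            # suppress noise, linearly scale the rest
    return np.where((mag > T2) & (mag < T3), stretched, out)
```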








[Plot: generalized adaptive gain with C = 10, B = 0.35, T1 = 0.1, T2 = 0.2, T3 = 0.9; horizontal axis v with T1, T2, T3 marked.]

Figure 2.3. A generalized adaptive gain function.


2.4.2 Enhancement by Generalized Adaptive Gain


In the last few years, several wavelet-based enhancement techniques have been de-

veloped [46, 45, 37, 39]. Adaptive gain through nonlinear processing [37, 34] has been

used to enhance features in digital mammograms. The adaptive gain through non-

linear processing is further generalized [42] to incorporate hard thresholding in order

to avoid amplifying noise and to remove small noise perturbations. The generalized

adaptive gain (GAG) nonlinear operator is defined as


E_GAG(v) =   0                                                      if |v| < T1,
             q [T2 + a (sigm(c(u - b)) - sigm(-c(u + b)))]          if T2 < |v| < T3,    (2.13)
             s \cdot v                                              otherwise,

where v \in [-1, 1], a = (T3 - T2) \bar{a}, q = s \cdot sign(v), u = (|v| - T2)/(T3 - T2), b \in (0, 1),
0 < T1 < T2 < T3 < 1, c is a gain factor, and s is a scaling factor which is used to adjust
the overall energy of a processed image. Parameter \bar{a} can be computed by

\bar{a} = 1 / ( sigm(c(1 - b)) - sigm(-c(1 + b)) ),                                        (2.14)

sigm(v) = 1 / (1 + e^{-v}).                                                                 (2.15)

Parameters T1, T2, and T3 are selected values. When T1 = T2 = 0 and T3 = 1,
the expression is equivalent to the adaptive gain nonlinear function used previously
[37, 34]. The interval [T2, T3] serves as a sliding window for feature selectivity; this
window can be adjusted to emphasize coefficients within a specific range of variation.
To increase the overall energy of a processed image, we assign s a value greater than
one (s > 1). Similarly, we may reduce the energy of a processed image by letting
s < 1. When s = 1, the scaling factor s does not contribute to the overall energy
change, and the operator is equivalent to the operator reported by Laine and Zong
[42]. Thus, by selecting gain values and local windows of energy, we may achieve a
more desirable enhancement. Figure 2.3 presents an adaptive gain function with
feature selectivity.
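The sketch below is one possible NumPy realization of the GAG operator of
Equations (2.13)-(2.15); the function names are ours, and boundary values at exactly
T2 or T3 fall through to the linear branch, following the strict inequalities written above.

```python
import numpy as np

def sigm(x):
    """Sigmoid of Eq. (2.15)."""
    return 1.0 / (1.0 + np.exp(-x))

def e_gag(v, T1, T2, T3, b, c, s=1.0):
    """Generalized adaptive gain of Eq. (2.13) on normalized coefficients v in [-1, 1]."""
    v = np.asarray(v, dtype=float)
    mag = np.abs(v)
    a_bar = 1.0 / (sigm(c * (1.0 - b)) - sigm(-c * (1.0 + b)))     # Eq. (2.14)
    a = (T3 - T2) * a_bar
    u = (mag - T2) / (T3 - T2)
    gag = s * np.sign(v) * (T2 + a * (sigm(c * (u - b)) - sigm(-c * (u + b))))
    out = np.where(mag < T1, 0.0, s * v)                           # zero noise, scale the rest
    return np.where((mag > T2) & (mag < T3), gag, out)
```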












CHAPTER 3
DYADIC WAVELET REPRESENTATIONS


In this chapter, we describe multiscale wavelet transforms at dyadic scales adopted

in our preliminary algorithms for de-noising and enhancement as well as edge detec-

tion. Wavelet-based de-noising and enhancement are presented in this chapter while

edge detection through wavelet maximum representation will be described in Chap-

ter 6. Since we use a dyadic wavelet transform primarily for discrete signal and

digital image processing, the transformation is presented in the discrete domain from

an implementation point of view. For our purposes, we are only interested in the

overcomplete (redundant) representation under a finite-level discrete dyadic wavelet

transform. For more theoretical work and continuous dyadic wavelet transforms, Mal-

lat and Zhong have presented in-depth details [51, 69]. DWT-based algorithms for

signal/image processing are developed on dyadic wavelet representations described

in this chapter, image restoration and enhancement operators presented in the last

chapter.

3.1 Discrete Dyadic Wavelet Transform

The discrete dyadic wavelet transform developed by Mallat and Zhong [51] has

been previously applied to areas, including edge detection, texture analysis, noise

reduction, and image enhancement. Multiscale representation under a dyadic wavelet

transform provides a useful framework for characterizing features in terms of sharp

variation points. For de-noising and enhancement purposes, compactly supported

wavelets can be utilized to eliminate noise and sharpen contrast within structures









and along object boundaries without affecting distant features [39, 35, 73] because

wavelet transforms localize feature representations.

3.1.1 One-Dimensional Dyadic Wavelet Transform

First, we describe the discrete dyadic wavelet transform in one dimension and

then extend it to two dimensions. For discrete signal and digital image processing,

only a finite-level discrete dyadic wavelet transform is usually needed for practical

applications. A J-level discrete dyadic wavelet transform of a 1-D discrete function

f(n) \in l^2(Z) can be represented as


W[f(n)] = \{ (W_j[f(n)])_{1 \le j \le J},\; S_J[f(n)] \},                                  (3.1)

where W_j[f(n)] is a wavelet coefficient at scale 2^j (or level j) and location n \in Z, and
S_J[f(n)] is a coarse-scale approximation at the final level J and position n. The wavelet
coefficient W_j[f(n)] and scaling approximation S_j[f(n)] at level j can be defined as

W_j[f(n)] = f * \psi_{2^j}(n) = \sum_{n'=-\infty}^{+\infty} f(n') \psi_{2^j}(n - n'),        (3.2)

S_j[f(n)] = f * \phi_{2^j}(n) = \sum_{n'=-\infty}^{+\infty} f(n') \phi_{2^j}(n - n'),        (3.3)

respectively, where \psi_{2^j}(n) = (1/2^j) \psi(n/2^j) and \phi_{2^j}(n) = (1/2^j) \phi(n/2^j) are the
analysis wavelet and scaling function dilated to scale 2^j. The inverse discrete dyadic
wavelet transform can be represented as W^{-1}, with f(n) = W^{-1}[W[f(n)]]. For a
perfect decomposition and reconstruction, we have

f(n) = W^{-1}[W[f(n)]] = S_J[f(n)] * \tilde\phi_{2^J}(n) + \sum_{j=1}^{J} W_j[f(n)] * \gamma_{2^j}(n),        (3.4)







where \tilde\phi_{2^J}(n) = \phi_{2^J}(-n). In order to get a perfect reconstruction of a 1-D discrete
function, the analysis and synthesis wavelets and the scaling function should satisfy

|\hat\phi(\omega)|^2 = \sum_{j=1}^{J} \hat\psi(2^j\omega) \hat\gamma(2^j\omega) + |\hat\phi(2^J\omega)|^2
              = \sum_{j=1}^{+\infty} \hat\psi(2^j\omega) \hat\gamma(2^j\omega).              (3.5)

The finite-level dyadic wavelet decomposition in (3.1) forms a complete representa-

tion for a J-level dyadic wavelet transform. For a particular class of dyadic wavelets,

such as the first order derivatives of spline smoothing functions, the finite-level di-

rect and inverse discrete dyadic wavelet transform of a 1-D discrete function can be

implemented in terms of three filters, H, G, and K. The three filters should satisfy

the following condition

|H(\omega)|^2 + G(\omega) K(\omega) = 1,                                                    (3.6)

where H(\omega) is a low-pass filter and G(\omega), K(\omega) are high-pass filters.

The dyadic wavelet decomposition in Equation (3.1) can be formulated in terms

of the following recursive relations between two consecutive levels j and j + 1 in the

Fourier domain as

W_{j+1}[f(\omega)] = G(2^j\omega) S_j[f(\omega)],                                            (3.7)

S_{j+1}[f(\omega)] = H(2^j\omega) S_j[f(\omega)],                                            (3.8)

where j \ge 0 and S_0[f(\omega)] := \hat{f}(\omega). The reconstruction of S_0[f(\omega)] from a dyadic
wavelet decomposition can be represented recursively (as j decreases from J - 1 to 0) as

S_j[f(\omega)] = W_{j+1}[f(\omega)] K(2^j\omega) + S_{j+1}[f(\omega)] \bar{H}(2^j\omega),    (3.9)
























Figure 3.1. A 3-level DWT decomposition and reconstruction of a 1-D function.

where \bar{H} is the complex conjugate of H. The DWT decomposition and reconstruction

based on the above recursive relations are shown as a block diagram in Figure 3.1.

The process of wavelet decomposition is referred to as wavelet analysis while the

wavelet reconstruction process is sometimes called wavelet synthesis.
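To make the recursions of Equations (3.7)-(3.9) concrete, the following is a minimal
sketch of a J-level 1-D decomposition and reconstruction. It evaluates the dilated filter
responses H(2^j w), G(2^j w), and K(2^j w) directly on the DFT grid (i.e., with periodic
boundary handling); the filter taps are those listed later in Table 3.1, and the function
names and FFT-based formulation are our own choices, not necessarily the
implementation used in this work.

```python
import numpy as np

# Filter taps from Table 3.1 (m = 4, c = 2), indexed by n.
h = {-2: 0.0625, -1: 0.25, 0: 0.375, 1: 0.25, 2: 0.0625}
g = {-1: 1.0, 0: -1.0}
k = {-3: -0.00390625, -2: -0.03515625, -1: -0.14453125, 0: -0.36328125,
     1: 0.36328125, 2: 0.14453125, 3: 0.03515625, 4: 0.00390625}

def freq_resp(taps, w):
    """F(w) = sum_n f(n) exp(-i n w) for a dictionary of filter taps."""
    return sum(c * np.exp(-1j * n * w) for n, c in taps.items())

def dwt_1d(f, J):
    """J-level discrete dyadic wavelet transform via Eqs. (3.7)-(3.8)."""
    N = len(f)
    w = 2.0 * np.pi * np.fft.fftfreq(N)
    S = np.fft.fft(f)
    W = []
    for j in range(J):
        W.append(np.fft.ifft(freq_resp(g, 2**j * w) * S).real)   # wavelet coefficients
        S = freq_resp(h, 2**j * w) * S                            # coarser approximation
    return W, np.fft.ifft(S).real

def idwt_1d(W, SJ):
    """Inverse transform via the recursion of Eq. (3.9)."""
    N = len(SJ)
    w = 2.0 * np.pi * np.fft.fftfreq(N)
    S = np.fft.fft(SJ)
    for j in reversed(range(len(W))):
        S = (freq_resp(k, 2**j * w) * np.fft.fft(W[j])
             + np.conj(freq_resp(h, 2**j * w)) * S)
    return np.fft.ifft(S).real

f = np.random.randn(256)
W, SJ = dwt_1d(f, J=3)
assert np.allclose(idwt_1d(W, SJ), f)   # perfect reconstruction, since Eq. (3.6) holds
```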

3.1.2 Two-Dimensional Dyadic Wavelet Transform

In the rest of this section, we shall present discrete dyadic wavelet analysis and

synthesis of a 2-D discrete function (image). The decomposition will produce both

high-to-middle frequency signal details (wavelet coefficients) and a low frequency

scaling approximation (scaling coefficients) of an image at some final level of analysis.

Similarly a finite-level discrete dyadic wavelet transform is desirable for our digital

image processing. A J-level discrete dyadic wavelet transform of a 2-D discrete

function f(n_x, n_y) \in l^2(Z^2) can be represented as

W[f(n_x, n_y)] = \{ (W_j^d[f(n_x, n_y)])_{d=1,2,\; 1 \le j \le J},\; S_J[f(n_x, n_y)] \},        (3.10)








where W_j^d[f(n_x, n_y)] is a wavelet coefficient at scale 2^j (or level j), position (n_x, n_y),
and spatial orientation d (1 for horizontal and 2 for vertical), and S_J[f(n_x, n_y)] is a
coarse-scale approximation at the final level J and position (n_x, n_y). The wavelet
coefficient W_j^d[f(n_x, n_y)] and scaling approximation S_j[f(n_x, n_y)] at level j can be
defined as

W_j^d[f(n_x, n_y)] = f * \psi^d_{2^j}(n_x, n_y) = \sum_{m'=-\infty}^{+\infty} \sum_{n'=-\infty}^{+\infty} f(m', n') \psi^d_{2^j}(n_x - m', n_y - n'),    (3.11)

S_j[f(n_x, n_y)] = f * \phi_{2^j}(n_x, n_y) = \sum_{m'=-\infty}^{+\infty} \sum_{n'=-\infty}^{+\infty} f(m', n') \phi_{2^j}(n_x - m', n_y - n'),          (3.12)

respectively, where \psi^d_{2^j}(n_x, n_y) = (1/4^j) \psi^d(n_x/2^j, n_y/2^j) and
\phi_{2^j}(n_x, n_y) = (1/4^j) \phi(n_x/2^j, n_y/2^j) are analysis wavelets and scaling functions
dilated to scale 2^j, and d = 1, 2 represents the horizontal or vertical spatial orientation.
In our approach for de-noising and enhancement, we are interested in basis wavelets
which are the first order derivatives of a continuous smoothing function \theta(x, y); thus,
\psi^1(n_x, n_y) and \psi^2(n_x, n_y) can be formulated as

\psi^1(n_x, n_y) = \partial\theta(x, y)/\partial x |_{x=n_x,\, y=n_y},
\psi^2(n_x, n_y) = \partial\theta(x, y)/\partial y |_{x=n_x,\, y=n_y}.                      (3.13)

Convolution with the dilated \psi^1_{2^j}(n_x, n_y) and \psi^2_{2^j}(n_x, n_y) produces sharp
variations along the horizontal and vertical directions for salient features. In wavelet
frame representations, we can employ different synthesis basis wavelets \gamma^d(n_x, n_y) for
the reconstruction of the original 2-D discrete function. The inverse discrete dyadic
wavelet transform can be represented as W^{-1}, with f(n_x, n_y) = W^{-1}[W[f(n_x, n_y)]].
For a perfect decomposition and reconstruction,

f(n_x, n_y) = W^{-1}[W[f(n_x, n_y)]]
            = S_J[f(n_x, n_y)] * \tilde\phi_{2^J}(n_x, n_y) + \sum_{j=1}^{J} \sum_{d=1}^{2} W_j^d[f(n_x, n_y)] * \gamma^d_{2^j}(n_x, n_y),    (3.14)







where \tilde\phi_{2^J}(n_x, n_y) = \phi_{2^J}(-n_x, -n_y). In order to get a perfect reconstruction of a
2-D discrete function, the analysis and synthesis wavelets and the scaling function should satisfy

|\hat\phi(\omega_x, \omega_y)|^2 = \sum_{j=1}^{J} \sum_{d=1}^{2} \hat\psi^d(2^j\omega_x, 2^j\omega_y) \hat\gamma^d(2^j\omega_x, 2^j\omega_y) + |\hat\phi(2^J\omega_x, 2^J\omega_y)|^2.    (3.15)

The finite-level dyadic wavelet decomposition in (3.10) generates a complete repre-

sentation for a J-level dyadic wavelet transform. For a particular class of 2-D dyadic

wavelets, such as the first order directional derivatives of spline smoothing functions,

Mallat and Zhong [51] showed that the finite-level direct and inverse dyadic wavelet

transform of a 2-D discrete function can be implemented in terms of four 1-D filters,

H, G, K, and L. The four filters should satisfy the following perfect decomposition

and reconstruction conditions


|H(\omega)|^2 + G(\omega) K(\omega) = 1,                                                    (3.16)

L(\omega) = (1 + |H(\omega)|^2) / 2.                                                         (3.17)

Mallat and Zhong [51] also showed how to design 1-D finite impulse response (FIR)

filters, H, G, K, and L, for a 2-D wavelet transform.

Similar to the 1-D case, a 2-D dyadic wavelet decomposition in Equation (3.10) can

be formulated in terms of the following recursive relations between two consecutive

levels j and j + 1 in the Fourier domain as


W^1_{j+1}[f(\omega_x, \omega_y)] = G(2^j\omega_x) S_j[f(\omega_x, \omega_y)],                (3.18)

W^2_{j+1}[f(\omega_x, \omega_y)] = G(2^j\omega_y) S_j[f(\omega_x, \omega_y)],                (3.19)

S_{j+1}[f(\omega_x, \omega_y)] = H(2^j\omega_x) H(2^j\omega_y) S_j[f(\omega_x, \omega_y)],   (3.20)









Figure 3.2. A 3-level DWT decomposition and reconstruction of a 2-D function.

where j \ge 0 and S_0[f(\omega_x, \omega_y)] := \hat{f}(\omega_x, \omega_y), the Fourier transform of f(n_x, n_y). The
reconstruction of S_0[f(\omega_x, \omega_y)] from a dyadic wavelet decomposition can be represented
recursively as

S_j[f(\omega_x, \omega_y)] = W^1_{j+1}[f(\omega_x, \omega_y)] K(2^j\omega_x) L(2^j\omega_y) + W^2_{j+1}[f(\omega_x, \omega_y)] L(2^j\omega_x) K(2^j\omega_y)
                      + S_{j+1}[f(\omega_x, \omega_y)] \bar{H}(2^j\omega_x) \bar{H}(2^j\omega_y),        (3.21)


where \bar{H} is again the complex conjugate of H. A 2-D DWT decomposition and
reconstruction based on the above recursive relations are shown as a functional block
diagram in Figure 3.2 for J = 3. For the pair of 2-D analysis and synthesis filter banks
shown in Figures 3.3 and 3.4, the reconstructed f^*(n_x, n_y) is equal to f(n_x, n_y) when no
processing is performed on W[f(n_x, n_y)]. The 2-D analysis and synthesis filter banks
in Figures 3.3 and 3.4 are constructed using the FIR filters shown in Table 3.1, where,
for instance, H(\omega) = \sum_n h(n) e^{-in\omega}.
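As a quick sanity check, the short sketch below evaluates the taps (as reconstructed in
Table 3.1) on a frequency grid and verifies the perfect decomposition and reconstruction
conditions of Equations (3.16) and (3.17) numerically; the variable names are ours.

```python
import numpy as np

h = {-2: 0.0625, -1: 0.25, 0: 0.375, 1: 0.25, 2: 0.0625}
g = {-1: 1.0, 0: -1.0}
k = {-3: -0.00390625, -2: -0.03515625, -1: -0.14453125, 0: -0.36328125,
     1: 0.36328125, 2: 0.14453125, 3: 0.03515625, 4: 0.00390625}
l = {-4: 0.001953125, -3: 0.015625, -2: 0.0546875, -1: 0.109375, 0: 0.63671875,
     1: 0.109375, 2: 0.0546875, 3: 0.015625, 4: 0.001953125}

def freq_resp(taps, w):
    """F(w) = sum_n f(n) exp(-i n w)."""
    return sum(c * np.exp(-1j * n * w) for n, c in taps.items())

w = np.linspace(-np.pi, np.pi, 1001)
H, G, K, L = freq_resp(h, w), freq_resp(g, w), freq_resp(k, w), freq_resp(l, w)
assert np.allclose(np.abs(H) ** 2 + G * K, 1.0)        # Eq. (3.16)
assert np.allclose(L, (1.0 + np.abs(H) ** 2) / 2.0)    # Eq. (3.17)
```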






















Figure 3.3. A 2-D analysis filter bank.




Figure 3.4. A 2-D synthesis filter bank.








Table 3.1. Impulse responses of filters H(w), G(w), K(w), and L(w).


3.2 DWT-Based De-Noising and Feature Enhancement

In this section, we first introduce algorithms for noise reduction and feature

restoration or enhancement based on an additive noise model presented in Section

2.2.1 and for speckle reduction with feature enhancement based on an approximate

speckle noise model formulated in Section 2.2.2. We then describe the methods and

present formulation for de-noising and enhancement based on the operators intro-

duced in Sections 2.3 and 2.4.

3.2.1 Algorithm for Additive Noise Reduction and Enhancement

Because de-noising and enhancement techniques are incorporated into a frame-

work of wavelet representations under dyadic wavelet transforms, our algorithm for

noise reduction and contrast enhancement consists of four major steps. In these

steps, parameterized de-noising and enhancement operators are utilized. The sample

experimental results are shown for these operations. The parameters can be fine

tuned to achieve two distinct purposes. One is for de-noising with feature restoration

while the other is for image enhancement with noise suppression. They are related

in certain sense, such as removing noise and improving the quality of features. These


FIR filters for m = 4 and c = 2

  n      h(n)       g(n)      k(n)           l(n)
 -4                                          0.001953125
 -3                           -0.00390625    0.015625
 -2      0.0625               -0.03515625    0.0546875
 -1      0.25       1.0       -0.14453125    0.109375
  0      0.375      -1.0      -0.36328125    0.63671875
  1      0.25                  0.36328125    0.109375
  2      0.0625                0.14453125    0.0546875
  3                            0.03515625    0.015625
  4                            0.00390625    0.001953125








These methods are designed to remove noise with feature restoration or enhancement
under an additive noise model. The four steps of the DWT-based de-noising and
enhancement method are listed as follows:

1. Carry out a DWT to obtain a complete representation of noisy data in the

wavelet domain.

2. Shrink transform coefficients within the finer scales to partially remove noise.

3. Emphasize features through a nonlinear point-wise operator to increase energy

among features within a specific range of variation.

4. Perform an inverse DWT and reconstruct the signal/image.
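The four steps map directly onto a small processing skeleton. The sketch below is
deliberately generic: the transform, shrinkage, and emphasis operators are passed in as
functions (for example, the dwt_1d/idwt_1d, soft_threshold, and e_gag sketches given
earlier), and the function name and interface are our own rather than the exact
implementation used in this work.

```python
def denoise_and_enhance(f, forward, inverse, shrink, emphasize, fine_levels=2):
    """Steps 1-4: transform, shrink the finer scales, emphasize features, invert."""
    W, SJ = forward(f)                      # step 1: DWT of the noisy data
    for j in range(min(fine_levels, len(W))):
        W[j] = shrink(W[j], j)              # step 2: wavelet shrinkage at fine scales
    W = [emphasize(Wj) for Wj in W]         # step 3: nonlinear feature emphasis
    return inverse(W, SJ)                   # step 4: inverse DWT and reconstruction
```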

Unlike Donoho and Johnstone's methods [19] for de-noising, this method also applies
feature enhancement to further improve the performance of signal/image restoration.
Compared to enhancement-only methods, this algorithm is also able to suppress noise
(rather than amplify it) when applied for contrast enhancement.

3.2.2 Algorithm for Speckle Reduction with Feature Enhancement

Speckle noise was modeled as approximate multiplicative noise in Section 2.2.2.

Similar to the method in Jain [30], we apply a homomorphic approach to reduce
speckle noise. The six major steps of the DWT-based de-noising and enhancement
method for the speckle noise model are listed as follows:

1. Perform a logarithmic transform to convert an image containing multiplicative

noise into an image with additive noise.

2. Carry out a DWT and obtain a complete representation of the log-transformed

image in the transform domain.








3. Shrink coefficients within the finer scales to partially remove noise energy.

4. Emphasize features through nonlinear point-wise processing to increase the

energy among features within a specific range of variation.


5. Perform an inverse DWT and reconstruct the de-noised and enhanced image so

that it approximates its noise-free original in log scale with features enhanced.

6. Finally, perform an exponential transform on the reconstructed image to con-

vert it from log scale to linear scale. The resulting image is now de-noised and

enhanced.
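A compact sketch of the six-step homomorphic procedure is given below;
`denoise_and_enhance_fn` stands for steps 2-5 (for instance, the skeleton sketched in
Section 3.2.1), and the small offset `eps` guarding the logarithm is our own addition,
not part of the original description.

```python
import numpy as np

def speckle_reduce(image, denoise_and_enhance_fn, eps=1e-6):
    """Homomorphic speckle reduction with feature enhancement (steps 1-6)."""
    log_img = np.log(image + eps)                 # step 1: multiplicative -> additive noise
    processed = denoise_and_enhance_fn(log_img)   # steps 2-5: DWT shrinkage and emphasis
    return np.exp(processed) - eps                # step 6: back to linear scale
```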

This method uses a similar homomorphic transform to convert multiplicative noise
into additive noise. Unlike Jain's method [30], however, we incorporate a feature
enhancement mechanism into the noise reduction procedure to sharpen blurred features
(feature restoration or enhancement) after de-noising.

Wavelet representations under discrete dyadic wavelet transforms were described
in Section 3.1 for both one and two dimensions. The de-noising and contrast
enhancement schemes are based on wavelet shrinkage and feature emphasis on top of
these wavelet representations. Wavelet shrinkage is a technique that uniformly reduces
wavelet coefficients in order to remove noise coefficients for the purpose of de-noising.
Feature emphasis, on the other hand, increases the magnitudes of feature coefficients
to gain energy for low-contrast features. Below, we describe how to perform
DWT-based de-noising and enhancement.

3.2.3 DWT-Based De-Noising

Since dyadic wavelet transforms with first order derivatives of smoothing functions

as basis wavelets can efficiently identify features with sharp variation, we are able

to achieve the objective of noise reduction through either hard thresholding or soft








thresholding. Hard thresholding preserves features better while soft thresholding can

achieve the effect of smoothness.

Here we describe threshold-based de-noising in two dimensions; the one-dimensional
case is similar. To achieve de-noising through hard thresholding, we can modify the
DWT coefficients for noise reduction by

W_j^{d,*}[f(n_x, n_y)] = M_j^d \cdot T_H( W_j^d[f(n_x, n_y)] / M_j^d,\; t_j^d ),             (3.22)

M_j^d = \max_{n_x, n_y} ( |W_j^d[f(n_x, n_y)]| ),                                            (3.23)

where d = 1, 2, j = 1, ..., k, k < J, and t_j^d is, in general, a threshold related to the noise
level and scale. The parameter t_j^d can be directionally dependent if we have an orientation
preference. T_H is the hard thresholding operator presented in Section 2.3.1. The
thresholds t_j^d should be selected to remove as many noise coefficients as possible while
preserving feature coefficients. The selection of thresholds in [71, 73] was based on
trial and error. The

selection can be guided by examining the histogram and energy distribution of coef-

ficients. Wavelet transforms generate a small number of large coefficients carrying a

significant amount of energy, especially from fine to coarse scales, for sharp variation

points while producing a large number of small coefficients mostly corresponding to

noise. Thresholds decrease from fine to coarse scales because noise energy is smoothed

out through repeated smoothings (scaling) by low pass filtering. This point is made

clear, as shown in Figure 3.5 for Donoho and Johnstone's synthetic signal "Blocks".

The guideline is to select decreasing thresholds which can remove a great number of

small coefficients carrying most noise energy and keep a limited number of large coef-

ficients for feature energy. Thresholds through fine to coarse levels may be regulated

by a decreasing function which will be discussed momentarily.














[Figure panels: histograms and energy distributions of wavelet coefficient magnitudes at levels 1-4, with the selected threshold marked at each level.]


Figure 3.5. Coefficient and energy distributions of signal "Blocks".









When the noise level is high, hard thresholding de-noising may not be able to achieve
overall smoothness. If this is the case, we can carry out soft thresholding based
de-noising. To reduce noise and achieve a smoothing effect, we can modify the DWT
coefficients by

W_j^{d,*}[f(n_x, n_y)] = M_j^d \cdot T_S( W_j^d[f(n_x, n_y)] / M_j^d,\; t_j^d ),             (3.24)

M_j^d = \max_{n_x, n_y} ( |W_j^d[f(n_x, n_y)]| ),                                            (3.25)

where d = 1, 2, j = 1, ..., k, k < J, and t_j^d is a threshold usually related to the noise
level and scale. T_S is the soft thresholding operator presented in Section 2.3.2.
Thresholds can be selected as in the discussion above. Wavelet coefficients are
normalized to the range between -1 and 1 before the thresholding operations.

3.2.4 Regulating Threshold Selection through Scale Space

Donoho and Johnstone's method of soft thresholding uses a single global threshold
[19, 20] under orthonormal wavelet transforms. Since noise coefficients under a DWT
decay, on average, from fine to coarse scales, we can regulate both soft and hard
thresholding by applying scale-dependent thresholds. The regulated threshold t_j^d can
be computed through a linearly decreasing function

t_j^d =   (T_max - a(j - 1)) \sigma_d    if T_max - a(j - 1) > T_min,                        (3.26)
          T_min \sigma_d                  otherwise,

where \sigma_d is the standard deviation, a is a decreasing factor between two consecutive
levels, T_max is a maximum factor related to \sigma_f while T_min is a minimum factor,
1 \le j \le J, and d \in {1, 2}. When the noise level in the original corrupted data is
unknown, some methods use the standard deviation to approximate the noise level, so









Figure 3.6. A sample scaling factor function.

the thresholds are related to this standard deviation. Figure 3.6 shows a sample
scaling factor function for the computation of regulated thresholds. Our de-noising
algorithms are implemented so that a, T_max, and T_min can be interactively tuned to
achieve distinct noise reduction effects.
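A small sketch of the regulated threshold of Equation (3.26) is given below; the
argument names mirror the symbols above and are otherwise our own.

```python
def regulated_threshold(j, sigma_d, a, t_max, t_min):
    """Scale-dependent threshold of Eq. (3.26): linearly decreasing, floored at t_min."""
    factor = t_max - a * (j - 1)
    return (factor if factor > t_min else t_min) * sigma_d
```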

3.2.5 DWT-Based Enhancement with Noise Suppression

Through either the nonlinear gain function or the generalized adaptive gain nonlinear
operator, we can achieve contrast enhancement for certain FOI by processing the DWT
coefficients as

W_j^{d,*}[f(n_x, n_y)] = M_j^d \cdot E_{op}( W_j^d[f(n_x, n_y)] / M_j^d ),                   (3.27)

M_j^d = \max_{n_x, n_y} ( |W_j^d[f(n_x, n_y)]| ),                                            (3.28)

where position (n_x, n_y) \in D, the domain of f(n_x, n_y), d = 1, 2, j \in {k, ..., J}, and
1 \le k \le J. The enhancement operator E_{op} can be E_NL or E_GAG, as presented in
Section 2.4. Since these operators are defined on input values between -1 and 1, we
normalize wavelet coefficients before applying the operators.
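The normalization pattern of Equations (3.27)-(3.28) can be sketched as below, where
`e_op` is either of the gain operators from Section 2.4 (for example, the e_nl or e_gag
sketches given earlier); the function name is our own.

```python
import numpy as np

def enhance_subband(W, e_op):
    """Eq. (3.27): normalize a subband to [-1, 1], apply E_op, and restore the scale."""
    M = np.max(np.abs(W))
    if M == 0:
        return W
    return M * e_op(W / M)
```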




























Figure 3.7. Pseudo-Gibbs phenomena. (a) Orthonormal wavelet transform of an original signal and its noisy version with added spike noise. (b) Pseudo-Gibbs phenomena after both hard thresholding and soft thresholding de-noising under an OWT.











Figure 3.8. Multiscale discrete wavelet transform of an original and a noisy signal: (a) original signal; (b) noisy signal; (c) original DWT coefficients; (d) noisy DWT coefficients.


Figure 3.9. DWT-based reconstruction after (a) hard thresholding, (b) soft thresholding, and (c) soft thresholding with enhancement.










3.3 Application Samples and Analysis

In this section, we present the experimental results of our algorithms on some

sample signals and images. First, we show that these algorithms are less affected
by pseudo-Gibbs phenomena through a simple and illustrative experiment. Our

results are compared to Donoho and Johnstone's results to show this effect. We

then present de-noising and enhancement results for additive noise and speckle noise

models.

3.3.1 Reduced Susceptibility to Pseudo-Gibbs Phenomena

Coifman and Donoho [8] showed that both hard and soft thresholding de-noising
under an orthonormal wavelet transform produce undesired side-effects, including
pseudo-Gibbs phenomena. To solve the problem, they [8] presented translation-invariant
de-noising methods to overcome the artifacts, which are partially caused by the lack
of shift-invariance of an OWT. Their methods alleviated the problem by making it
less obvious, but oscillations after de-noising remained visible. Several experimental
results showed that our algorithms were less affected by pseudo-Gibbs phenomena
[73]. We have used a simple and intuitive synthetic signal to demonstrate this property
of our algorithms compared to Donoho et al.'s methods. Our experimental results
on Donoho and Johnstone's four synthetic signals also demonstrate this point.

Figures 3.7, 3.8, and 3.9 are used to show that our methods are relatively free

from pseudo-Gibbs phenomena. We generated a synthetic signal to illustrate what

may cause the side-effect and how our methods can largely avoid it. Figure 3.7(a)

shows an original signal, its noisy version with added spike noise, and the orthonor-

mal wavelet coefficients of the original and noisy signals. Figure 3.7(b) shows the

effect of pseudo-Gibbs phenomena under Donoho et al.'s hard thresholding and soft

thresholding methods through WaveLab (a software package from Donoho's research








group). Notice that a feature of sharp variation produces not only large coefficients
but also small coefficients under an OWT. The small coefficients are removed under
both hard and soft thresholding methods. A typical orthonormal wavelet usually has
at least some oscillations [12, 64] in order to satisfy the (admissibility) condition of
an orthonormal wavelet

\int_0^{+\infty} \frac{|\hat\psi(\omega)|^2}{\omega} \, d\omega < \infty,

where \hat\psi(\omega) = \int_{-\infty}^{+\infty} \psi(x) e^{-ix\omega} dx. In the spatial domain, the corresponding
continuous wavelet function \psi(x) has sufficient decay and satisfies

\int_{-\infty}^{+\infty} \psi(x) \, dx = 0.


Figures 3.9(a) and 3.9(b) present our de-noising results under regulated hard
thresholding and soft thresholding. Both methods remove the noise without causing
pseudo-Gibbs phenomena, but soft thresholding also smooths features (step edges)
a little. The features are largely restored through our enhancement mechanism
in Figure 3.9(c). This experiment illustrates that the results of our algorithm are
less affected by this side-effect (pseudo-Gibbs phenomena) compared to the results
from Donoho et al.'s methods.

3.3.2 Additive Noise Reduction and Enhancement

Here, based on an additive noise model, we present application results of de-

noising and enhancement for synthetic signals and medical images. The first part is

targeted for signal/image restoration while the second part is for enhancement with

noise suppression.















Figure 3.10. De-noised and feature-restored results of the DWT-based algorithms on synthetic test signals, including (b) "Bumps", (c) "HeaviSine", and (d) "Doppler"; first row: original signal, second row: noisy version, third row: de-noised only result, and fourth row: de-noised and enhanced result.



De-Noising with Feature Restoration



In this part of the experiments, our de-noising and enhancement techniques are
primarily used for signal/image restoration. To achieve this objective, we want to
reduce noise and restore salient features. Since noise reduction usually causes features
to be blurred, our enhancement methods are deployed to sharpen the blurred features.

Figure 3.10 displays the de-noised and feature-restored results of our DWT-based
algorithms. The first rows of Figures 3.10(a)-(d) are the original signals, the second
rows are the noisy signals, the third rows are the de-noised only results, and the last
rows are the de-noised and
































Figure 3.11. De-Noising and enhancement. (a) Original signal. (b) Signal (a) with
added noise of 2.52dB. (c) Soft thresholding de-noising only (11.07dB). (d) De-Noising
with enhancement (12.25dB).




Figure 3.12. De-Noising and enhancement. (a) Original image. (b) Image (a) with
added noise of 2.5dB. (c) Soft thresholding de-noising only (11.75dB). (d) De-Noising
with enhancement (14.87dB).










feature-restored result signals. Our results are very close to Coifman and Donoho's
newest and best results, produced by their full cycle-spinning translation-invariant
de-noising algorithm, which is computationally more complex. For their "Bumps" and
"Doppler" test signals, our results are better than their best results. When processing
a noisy signal or image with low contrast, this algorithm can be used to boost the
contrast by adding external energy to the signal energy through a larger gain factor.

Figures 3.11 and 3.12 show our de-noising and enhancement results on Mallat
and Hwang's test signal and image. The signal and image are corrupted by higher
levels of noise, yet we obtain better results (higher recovery SNR) than Mallat and
Hwang's results on the same signal and image with less noise.

Enhancement with Noise Suppression

In this part of the experiments, we try to achieve contrast enhancement without
amplifying noise. Figures 3.13 and 3.14 show the de-noising and enhancement results
on two MRI head images with unknown noise levels. The de-noised and enhanced
images from our algorithm are visibly and quantitatively better than the results from
thresholding-based de-noising alone, especially for high noise levels.

3.3.3 Speckle Reduction with Feature Enhancement

Speckle reduction and contrast enhancement can be accomplished in the transform
domain by judicious multiscale nonlinear processing of the wavelet coefficients
(W_j^d[f(n_x, n_y)])_{d=1,2,\; 1 \le j \le J}. Based on the approximate speckle noise model
in Section 2.2.2, we can usually separate the noise component from a desired function.
Wavelet transforms help to further distinguish signal from noise in the spatial-scale
space.






















Figure 3.13. De-Noising and enhancement. (a) Original MRI image. (b) De-Noising only. (c) DWT-based de-noising with enhancement.

Figure 3.14. De-Noising and enhancement. (a) Original MRI image. (b) De-Noising only. (c) DWT-based de-noising with enhancement.

Figure 3.15. An algorithm for speckle reduction and contrast enhancement.




















Figure 3.16. Results of de-noising and enhancement. (a) A noisy ED frame. (b)
Wavelet shrinkage de-noising only method. (c) DWT-based de-noising and enhance-
ment.

Our multiscale homomorphic algorithm for speckle reduction and feature enhancement,
shown in Figure 3.15, was tested on echocardiograms of varying quality. These image
sequences were acquired from the parasternal short-axis view. Figures 3.16 and 3.17
show the results of de-noising with and without feature enhancement on end diastolic
(ED) and end systolic (ES) frames. The speckled original frames are shown first.
Results from wavelet shrinkage de-noising only and from de-noising with enhancement
are shown in Figures 3.16(b) and 3.16(c), respectively. Figure 3.18 shows a nonlinear
operator for enhancing the image in Figure 3.17(a); this operator looks quite different
from the one in Figure 2.3 because of the log transform effect. Experimental results
are also compared with other speckle reduction techniques, such as median filtering,
extreme sharpening combined with median filtering [11, 44], homomorphic Wiener
filtering, and wavelet shrinkage de-noising [19, 18]. Figures 3.19 and 3.20 show sample
results of the above-mentioned methods on two typical frames from two different
echocardiographic sequences with the left ventricle as the region of interest.

Figure 3.19(a) is a sample noisy image. The result of median filtering with a 5x5
mask is shown in Figure 3.19(b). Figure 3.19(c) displays a sample result of extreme
sharpening combined with median filtering. The result from homomorphic Wiener





















Figure 3.17. Results of de-noising and enhancement. (a) A noisy ES frame. (b)
Wavelet shrinkage de-noising only method. (c) DWT-based de-noising and enhance-
ment.

filtering is shown in Figure 3.19(d). The last two images in Figures 3.19(e) and

3.19(f) display the results from wavelet shrinkage de-noising only and from our de-noising

and enhancement algorithms. The algorithm produces smoothness inside a uniform

region and improves contrast along structure and object boundaries in addition to

speckle reduction. The de-noised and enhanced results of noisy echocardiographic

images from this algorithm appear to outperform the results from soft thresholding

de-noising alone. Our current algorithm is implemented such that most parameters

are set dynamically for adaptive de-noising and feature enhancement.

3.4 Clinical Data Processing Study

A study of clinical image processing was conducted to investigate the effect of
de-noising on the consistency and reliability of manually defined borders of the left
ventricle in 2-D short-axis echocardiographic images [70]. Experimental results
indicate the algorithm is promising. Myocardial borders manually defined by expert
observers exhibit less variation after de-noising. It seems that in echocardiograms,
where real borders are not clearly visible or are incomplete, expert borders usually
indicate a close range where real borders may occur. When two expert borders agree











Figure 3.18. A generalized adaptive gain function for processing an echocardiogram
in Figure 3.17(a).


with each other, the real border is more likely to lie in a narrow range around the two expert

borders. The study of clinical image processing shows that de-noising and feature

enhancement help to improve the consistency and reliability of manually defined

borders by expert observers.

The set of test images in our study of clinical image processing was selected

from an echocardiographic database exhibiting diverse image quality. Sixty systolic

sequences of 2-D short-axis echocardiographic images were selected. Half of the test

images were rated as good quality while the rest were considered as poor quality.

For more details about how these echocardiographic sequences were acquired, we

refer the reader to Wilson and Geiser [66]. Statistical results have shown that there

is some improvement in consistency and reliability for manually defined borders by

expert observers examining de-noised images compared to their original noisy images.

Quantitative measurements were calculated in terms of the mean of absolute border

differences (MDistDiff) in distance (mm) and the mean of border area differences











































Figure 3.19. Results of various de-noising methods. (a) Original image with speckle
noise. (b) Median filtering. (c) Extreme sharpening combined with median filtering.
(d) Homomorphic Wiener filtering. (e) Wavelet shrinkage de-noising only. (f) DWT-
based de-noising with enhancement.































Figure 3.20. Results of various de-noising methods. (a) Original image with speckle
noise. (b) Median filtering. (c) Extreme sharpening combined with median filtering.
(d) Homomorphic Wiener filtering. (e) Wavelet shrinkage de-noising only method.
(f) DWT-based de-noising and enhancement.







Table 3.2. Quantitative measurements of manually defined borders.

                               All Test Images     Good Images        Poor Images
                               Ori       Enh       Ori       Enh      Ori       Enh
MDistDiff   Endo (in mm)       2.1040    1.8168    1.5972    1.5322   2.6118    2.1014
            Epi  (in mm)       1.7846    1.6743    1.3979    1.5886   2.1713    1.7601
MAreaDiff   Endo (in cm^2)     2.3731    1.8893    1.6597    1.4543   3.0865    2.2058
            Epi  (in cm^2)     2.5676    2.0799    1.5823    1.9540   3.5528    2.3243

(MAreaDiff) in cm^2. The border difference was closely approximated by the differences
in 64 radial directions from an estimated center [66] of the left ventricle. Borders
manually defined by experts on poor-quality images improved more after de-noising
than those on good-quality images. The statistical results of quantitative measurements
of two sets of expert manually defined borders are shown in Table 3.2.

The statistical computation results listed under the column "Ori" are the quantitative
measurements between two sets of expert borders on the original speckled images,
while the results under the column "Enh" are based on the de-noised and enhanced
images. It is worth mentioning that a single set of de-noising and enhancement
parameters was used to process all the test echocardiographic images used in this
study. We suggest that a single set of parameter values should be enough for
de-noising and enhancing a class of images with a similar noise pattern and selected
features.

Figure 3.21 shows the correlation between the areas delineated by the two expert

observers. The four diagrams in Figure 3.21(a) present the correlation of ED Epi

(epicardial) border areas, ES Epi border areas, ED Endo (endocardial) border areas,

and ES Endo border areas on the original noisy images. The four diagrams in Fig-

ure 3.21(b) show similar results for the de-noised images with features restored or

enhanced. The solid lines in the figure are the linear regression lines, while the dash

and dotted lines are ideal regression lines. From the diagrams, it is clear that the







Figure 3.21. Area correlation between manually defined borders by two expert cardiologist observers.


points which represent the two expert border areas on the same de-noised image are,
in general, closer to the ideal regression line. Additional improvement can be seen
in the Endo area correlation for the de-noised images. For most echocardiograms in
the study, there is usually less Endo border information than Epi border information.
Noisy border information (low signal-to-noise ratio (SNR)) affects border interpolation
by human observers for the manually defined borders. After de-noising, Endo border
information in terms of SNR is improved, so the expert border areas tend to agree
with each other better, especially the ES Endo areas. The statistical results shown in
Table 3.2 support this analysis.

Figure 3.22 shows the distributions of mean border differences on the original

images: (a) the distribution of Epi ED border differences, (b) the distribution of Epi

ES border differences, (c) the distribution of Endo ED border differences, and (d) the

distribution of Endo ES border differences. Figures 3.23(a)-(d) show the distributions

of mean border differences on the enhanced images, similar to Figures 3.22(a)-(d).

The solid lines in Figures 3.22 and 3.23 are the third order polynomial fitting curves in

a least-squares sense. With the same scale for both Figures 3.22 and 3.23, Figure 3.23


























Figure 3.22. Border difference variation on the original images. (a) Distribution of Epi ED border differences. (b) Distribution of Epi ES border differences. (c) Distribution of Endo ED border differences. (d) Distribution of Endo ES border differences. The solid lines are the third order polynomial fitting curves.










Table 3.3. Quantitative measurements of inter-observer mean border differences in
mm on original versus enhanced images, as shown in Figure 3.25.

        Epi-ED    Epi-ES    Endo-ED    Endo-ES
Ori     4.7       4.0       6.3        4.6
Enh     1.4       2.3       1.2        1.3


shows that border distance differences for enhanced images have smaller means and

standard deviations than the corresponding differences for the original noisy images

as shown in Figure 3.22.

Figure 3.24 shows an example of de-noising and image enhancement. Figure 3.25
shows the same images as Figure 3.24 with two expert manually-defined borders
overlaid. Significant overall improvement in the agreement of the two expert borders
is visible from the overlaid borders of the enhanced images compared to the original
images, as shown visually in Figure 3.25 and quantitatively in Table 3.3. The Endo
borders show more improvement than the Epi borders based on quantitative
measurements and visual appearance. Statistical analysis shows improvement in terms
of the mean of absolute border differences and mean border area differences of
de-noised images compared to their original images. From the overall statistical
analysis, the greatest impact is on the expert borders drawn on images with poor
image quality, such as the images in Figure 3.24.


































Figure 3.23. Border difference variation on the enhanced images. (a) Distribution


of Epi ED border differences. (b) Distribution of Epi ES border differences. (c)


Distribution of Endo ED border differences. (d) Distribution of Endo ES border


differences. The solid lines are the third order polynomial fitting curves.




























Figure 3.24. De-noising and image enhancement: (a) An original ED frame; (b) An original ES frame; (c) The enhanced ED frame; (d) The enhanced ES frame.


























Figure 3.25. Image and border display: (a) An original ED frame with manually-defined borders overlaid; (b) An original ES frame with manually-defined borders overlaid; (c) The enhanced ED frame with manually-defined borders overlaid; (d) The enhanced ES frame with manually-defined borders overlaid. Red and yellow borders represent the two observers.












CHAPTER 4
SUB-OCTAVE WAVELET REPRESENTATIONS


In this chapter, we introduce sub-octave wavelet transforms. A sub-octave wavelet

transform is a generalization of a traditional dyadic wavelet transform and we use

an example to show the advantage of sub-octave representations over dyadic wavelet

representations for characterizing band-limited features frequently seen in medical

imaging. We formulate both continuous and discrete sub-octave representations in

one and two dimensions.

4.1 Introduction

Our DWT-based algorithms for de-noising and enhancement have achieved im-

proved performance compared to other published methods. Through our analysis

and experiments, we observed that a DWT has a limited ability to characterize fea-

tures, such as texture, and subtle features of importance in mammographic images.

The traditional DWT is an octave-based transform where scales increase as powers

of two [51]. However, the best representation of a signal's details may exist be-

tween two consecutive levels of scale within a DWT [36]. To more reliably isolate

noise and identify features through scale space, we designed a multiscale sub-octave

wavelet transform (SWT), which generalizes the DWT. A sub-octave wavelet trans-

form provides a means to represent details within equally spaced sub-octave frequency
bands that divide each octave frequency band. The theoretical devel-

opment of a sub-octave wavelet transform and its efficient implementation was briefly

described by Laine and Zong [42], and later explained and extended [43].









The initial motivation for us to explore sub-octave decomposition was that we had

observed the limitation of dyadic wavelet transforms for characterizing band-limited

features and sought better frequency resolution for detecting such subtle features. A

dyadic wavelet transform is an octave-based transformation, where scales increase as

powers of two. Daubechies [13] introduced the generalization and extension of wavelet

decomposition and reconstruction under the context of orthonormal wavelet trans-

forms by sub-band splitting and presented early examples. An extension and gener-

alization of dyadic wavelet transforms is multiscale sub-octave wavelet decomposition

and reconstruction. Both Daubechies's methods and our techniques for sub-octave

wavelet transforms have achieved similar (better) frequency localization. However,

we are primarily interested in overcomplete (redundant) wavelet representations, a

generalization of dyadic wavelet representations. Most orthonormal wavelet bases

have the effect of decorrelating features while dyadic wavelet bases correlate salient

features through scales, which is what we are most interested in for enhancement

purposes. In the rest of the chapter, we present the mathematical formulation of

sub-octave wavelet bases in both space and frequency domains. The decomposition

and reconstruction procedure is carried out in terms of filter bank theory and band

splitting techniques.

4.2 Continuous Sub-Octave Wavelet Transform

Through a wavelet transform, a function (input signal) can be represented by

its projection onto a family of wavelet bases for decomposition and possibly perfect

reconstruction. If the family of wavelet bases {\psi_n(x)} is complete and orthonormal,

a wavelet transform with critical sampling is usually referred to as an orthonormal

wavelet transform [13]. A Haar wavelet is a simple example of an orthonormal wavelet.

However, the Haar wavelet is a discontinuous function and is not localized in the







frequency domain. The analysis filters {H, G} for computing an orthonormal wavelet

transform must satisfy the following design constraints


|H(\omega)|^2 + |G(\omega)|^2 = 1,
H(\omega) \bar{G}(\omega) = \bar{H}(\omega) G(\omega) = 0.                                   (4.1)


If the family of bases {\psi_n(x)} is complete and linearly independent, but not

orthonormal, the wavelet transform is called a biorthogonal wavelet transform.

Biorthogonal wavelets have dual basis functions. More generally, if the family of

wavelets {\psi_n(x)} is not linearly independent (redundant) and is overcomplete, it
may form a wavelet frame representation [64].

For a dyadic wavelet transform, the orthonormal constraint is relaxed, so we can

have distinct decomposition and reconstruction wavelets as long as the corresponding

low-pass filter H(w) and high-pass filters G(w), K(w) satisfy [51]


|H(\omega)|^2 + G(\omega) K(\omega) = 1.                                                     (4.2)


The above discussion is for one-dimensional functions; it extends easily to two
dimensions through a few well-known methods [13, 51]. In this section, we focus on

continuous sub-octave wavelet transforms. We first discuss one-dimensional multi-

scale sub-octave wavelet transforms and corresponding sub-octave wavelet represen-

tations. We then introduce two-dimensional sub-octave wavelet transforms (SWT)

and 2-D sub-octave wavelet representations.

4.2.1 One-Dimensional Sub-Octave Wavelet Transform

If we further divide an octave frequency band into M equally-spaced sub-octave

bands (here, M is limited to a power of two), then M wavelet bases can be used to








capture the detail information of a signal in each sub-octave frequency band. The
M wavelet functions are represented as \psi^m(x) \in L^2(R), where m \in {1, 2, ..., M} and
L^2(R) denotes the space of measurable, square-integrable 1-D real functions. An M
sub-octave wavelet transform of a 1-D function f(x) \in L^2(R) at scale 2^j (level j)
and location x, for the m-th sub-octave frequency band, is defined as

W_j^m f(x) = f * \psi^m_{2^j}(x) = \int_{-\infty}^{+\infty} f(t) \psi^m_{2^j}(x - t) \, dt,   (4.3)

where \psi^m_{2^j}(x) = (1/2^j) \psi^m(x/2^j) is the dilation of the m-th wavelet basis \psi^m(x) at
scale 2^j, m \in {1, 2, ..., M}, and j \in Z. In the frequency domain, we can write


\hat{W}_j^m f(\omega) = \hat{f}(\omega) \hat\psi^m(2^j\omega),                               (4.4)

by taking the Fourier transform of Equation (4.3). A scaling approximation of a 1-D
function f(x) is defined as

S_j f(x) = f * \phi_{2^j}(x).                                                                (4.5)

To provide a more flexible choice for the M basis wavelets, we impose that the wavelet
functions satisfy a wavelet frame condition (similar to Mallat and Zhong [51])

A \le \sum_{j=-\infty}^{+\infty} \sum_{m=1}^{M} |\hat\psi^m(2^j\omega)|^2 \le B,

where A and B are positive constants and \omega \in R. In the spatial domain, we have

A \|f(x)\|^2 \le \sum_{j=-\infty}^{+\infty} \sum_{m=1}^{M} \|W_j^m f(x)\|^2 \le B \|f(x)\|^2.








The function f(x) can be recovered from its sub-octave wavelet transform by the

formula
$$f(x) = \sum_{j=-\infty}^{+\infty} \sum_{m=1}^{M} W_{2^j}^{m} f * \gamma_{2^j}^{m}(x) = \sum_{j=-\infty}^{+\infty} \sum_{m=1}^{M} f * \psi_{2^j}^{m} * \gamma_{2^j}^{m}(x), \qquad (4.6)$$

where $\gamma_{2^j}^{m}(x)$ is the m-th synthesis wavelet for the inverse sub-octave wavelet transform.

The frequency responses of $\{\psi_{2^j}^{m}(x) \mid m = 1, 2, \ldots, M\}$ together at any scale

$2^j$ are required to capture the details within an octave frequency band. Finally, for

perfect reconstruction of $f(x)$, the analysis wavelets $\psi^{m}(x)$ and synthesis wavelets $\gamma^{m}(x)$

must satisfy

$$\sum_{j=-\infty}^{+\infty} \sum_{m=1}^{M} \hat{\psi}^{m}(2^j \omega)\, \hat{\gamma}^{m}(2^j \omega) = 1. \qquad (4.7)$$

Equation (4.7) can be obtained by taking the Fourier transform on both sides of

Equation (4.6). To ensure exact reconstruction, the frequency axis is covered by

both analysis and synthesis wavelets. Thus, the wavelets $\psi^{m}(x)$ can be any functions

whose Fourier transforms satisfy Equation (4.7). There are certainly many choices

for analysis and synthesis wavelets that satisfy Equation (4.7). For de-noising and

feature enhancement purposes, we are interested in the class of wavelets which are

an approximation to the first or second order derivatives of a smoothing function,

such as spline functions of any order. A 1-D sub-octave wavelet transform can be

easily extended to 2-D by computing sub-octave wavelet coefficients along horizontal

and vertical directions [51], as explained in the following section. Extensions to

higher dimensions are straightforward and analogous.

4.2.2 Two-Dimensional Sub-Octave Wavelet Transform

Daubechies described two ways to extend 1-D orthonormal wavelet transforms

to two dimensions [13]. Here we adopt the dyadic wavelet extension to two

dimensions introduced by Mallat and Zhong [51], computing sub-octave wavelet








coefficients along horizontal and vertical directions. An M sub-octave wavelet trans-

form of a 2-D function $f(x, y) \in L^2(R^2)$ at scale $2^j$ (level $j$) and location $(x, y)$, for

the m-th sub-octave frequency band is defined as

$$W_{2^j}^{d,m} f(x, y) = f * \psi_{2^j}^{d,m}(x, y), \qquad (4.8)$$

where $\psi_{2^j}^{d,m}(x, y) = \frac{1}{4^j}\, \psi^{d,m}\!\left(\frac{x}{2^j}, \frac{y}{2^j}\right)$, $d \in \{1, 2\}$ (for horizontal and vertical directions),

$m = 1, 2, \ldots, M$, and $j \in Z$. $L^2(R^2)$ denotes the space of measurable, square-

integrable 2-D functions. In the Fourier domain, Equation (4.8) simply becomes

$$\widehat{W_{2^j}^{d,m} f}(\omega_x, \omega_y) = \hat{f}(\omega_x, \omega_y)\, \hat{\psi}^{d,m}(2^j \omega_x, 2^j \omega_y). \qquad (4.9)$$


The function f(x, y) can be recovered from its 2-D sub-octave wavelet transform by

the formula


$$f(x, y) = \sum_{j=-\infty}^{+\infty} \sum_{d=1}^{2} \sum_{m=1}^{M} W_{2^j}^{d,m} f * \gamma_{2^j}^{d,m}(x, y) = \sum_{j=-\infty}^{+\infty} \sum_{d=1}^{2} \sum_{m=1}^{M} f * \psi_{2^j}^{d,m} * \gamma_{2^j}^{d,m}(x, y). \qquad (4.10)$$

For perfect reconstruction, $\psi_{2^j}^{d,m}(x, y)$ and $\gamma_{2^j}^{d,m}(x, y)$ must satisfy

$$\sum_{j=-\infty}^{+\infty} \sum_{d=1}^{2} \sum_{m=1}^{M} \hat{\psi}^{d,m}(2^j \omega_x, 2^j \omega_y)\, \hat{\gamma}^{d,m}(2^j \omega_x, 2^j \omega_y) = 1. \qquad (4.11)$$


This exact reconstruction condition is obtained by taking the Fourier transform of

Equation (4.10).








4.3 Discrete Sub-Octave Wavelet Transform

Continuous wavelet transforms are useful for demonstrating the properties of wavelet

decomposition and reconstruction and for proving perfect reconstruction theoretically,

while discrete wavelet transforms are of practical importance for dis-

crete signal and digital image processing. The transform parameters in a sub-octave

wavelet transform are continuous variables. This results in a highly redundant rep-

resentation. It is possible to discretize these parameters and still achieve perfect

reconstruction [64]. For digital image processing, the sub-octave wavelet transform

of a discrete function can be carried out through uniform sampling of a continu-

ous sub-octave wavelet transform. Below, we describe the discrete formulation of a

sub-octave wavelet transform.

4.3.1 One-Dimensional Discrete Sub-Octave Wavelet Transform

In the discrete domain, scales are also discrete and limited by the finest scale of

one unit. A sub-octave wavelet transform can similarly be decomposed into dyadic

scales and we can get a perfect reconstruction through its corresponding inverse

sub-octave wavelet transform. In general, a function can be decomposed into fine-

to-coarse (dyadic) scales by its convolution with the dilated wavelets $\{\psi_{2^j}(x)\}_{j \in Z}$. This

can be done through repeated smoothings (low pass filtering) and detail finding (high

pass filtering). In the discrete domain, because of the limitation of the finest scale,

scales have to be greater than or equal to 1, so we let

$$|\hat{\phi}(\omega)|^2 = \sum_{j=1}^{+\infty} \sum_{m=1}^{M} \hat{\psi}^{m}(2^j \omega)\, \hat{\gamma}^{m}(2^j \omega). \qquad (4.12)$$








Thus, for a J-level discrete sub-octave wavelet transform, we can write

$$|\hat{\phi}(\omega)|^2 = \sum_{j=1}^{J} \sum_{m=1}^{M} \hat{\psi}^{m}(2^j \omega)\, \hat{\gamma}^{m}(2^j \omega) + |\hat{\phi}(2^J \omega)|^2. \qquad (4.13)$$

The notation $\{W_{2^j}^{m} f(n),\, S_{2^J} f(n) \mid j = 1, 2, \ldots, J,\ \text{and}\ m = 1, 2, \ldots, M\}$ is defined as

the wavelet representation of a discrete function $f(n)$ under a J-level discrete M

sub-octave wavelet transform. $W_{2^j}^{m} f(n)$ and $S_{2^J} f(n)$ are uniform samplings of their

continuous counterparts, respectively.

If we let J = 1, then Equation (4.13) becomes

$$|\hat{\phi}(\omega)|^2 = \sum_{m=1}^{M} \hat{\psi}^{m}(2\omega)\, \hat{\gamma}^{m}(2\omega) + |\hat{\phi}(2\omega)|^2. \qquad (4.14)$$

Let us assume that for each basis wavelet pair $\hat{\psi}^{m}(\omega)$ and $\hat{\gamma}^{m}(\omega)$, there exists a pair of

corresponding functions $G_m(\omega)$ and $K_m(\omega)$ which need to be determined and (with

a certain temporal shift $t$) satisfy

$$\hat{\psi}^{m}(2\omega) = \hat{\phi}(\omega)\, G_m(\omega)\, e^{-i\omega t},$$
$$\hat{\gamma}^{m}(2\omega) = \hat{\phi}(\omega)\, K_m(\omega)\, e^{i\omega t}.$$

And, for the scaling function $\hat{\phi}(\omega)$, there exists a function $H(\omega)$ which satisfies

$$\hat{\phi}(2\omega) = \hat{\phi}(\omega)\, H(\omega)\, e^{-i\omega t}.$$

Replacing $\hat{\psi}^{m}(2\omega)$, $\hat{\gamma}^{m}(2\omega)$, and $\hat{\phi}(2\omega)$ in Equation (4.14), we obtain a fundamental

relation for sub-octave wavelet transforms (SWT); that is,

$$|H(\omega)|^2 + \sum_{m=1}^{M} G_m(\omega)\, K_m(\omega) = 1. \qquad (4.15)$$






























Figure 4.1. A 3-level SWT decomposition and reconstruction diagram for a 1-D function.

The discrete sub-octave wavelet transform of a function $f(n) \in l^2(Z)$ can be

implemented by using the following recursive relations between two consecutive levels

$j$ and $j+1$:

$$\widehat{W}_{2^{j+1}}^{m} f(\omega) = \widehat{S}_{2^j} f(\omega)\, G_m(2^j \omega), \qquad (4.16)$$

$$\widehat{S}_{2^{j+1}} f(\omega) = \widehat{S}_{2^j} f(\omega)\, H(2^j \omega), \qquad (4.17)$$

where $j \ge 0$, $1 \le m \le M$, and $\widehat{S}_{2^0} f(\omega) = \hat{f}(\omega)$. And, the reconstruction of $\widehat{S}_{2^0} f(\omega)$

from a sub-octave wavelet decomposition can be implemented through the recursive

relation

$$\widehat{S}_{2^j} f(\omega) = \widehat{S}_{2^{j+1}} f(\omega)\, \overline{H(2^j \omega)} + \sum_{m=1}^{M} \widehat{W}_{2^{j+1}}^{m} f(\omega)\, K_m(2^j \omega), \qquad (4.18)$$

where $\overline{H}$ denotes the complex conjugate of $H$.








[Figure 4.2: frequency bands ordered from low frequency (left) to high frequency (right): the coarsest scaling band followed by the M sub-octave detail bands at levels 3, 2, and 1.]

Figure 4.2. Divisions of the frequency bands under the SWT shown in Figure 4.1.

A three-level M sub-octave wavelet decom-

position and reconstruction process based on the above recursive relations is shown in

Figure 4.1. The corresponding divisions of frequency bands are depicted graphically

in Figure 4.2. In general, for an M sub-octave analysis and synthesis, we require

M pairs of corresponding basis wavelets. A SWT is a multiwavelet transform with

a single scaling function [60, 61]. When M is a power of 2, we can carry out de-

composition and reconstruction using a set of FIR filters corresponding to a single

pair of basis wavelets. Figure 4.3 presents a filter bank for carrying out a 2-level

4 sub-octave decomposition and reconstruction using FIR filters corresponding to a

single pair or two pairs of basis wavelets. This describes a more general way to do the

sub-octave decomposition where fine-to-coarse octave decomposition and sub-octave

band splitting can be carried out through two sets of different FIR filters. It reduces

to the case in [42] when $H = H_s$ and $G = G_s$, where $H_s$ and $G_s$ are used for sub-octave

band splitting.
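To make Equations (4.15)-(4.18) concrete, the following is a minimal numerical sketch of a single-level decomposition and reconstruction carried out entirely in the frequency domain. The filter responses below are illustrative real-valued choices, constructed only so that the fundamental relation (4.15) holds for M = 2; they are not the spline-derived FIR filters used in this dissertation.

```python
import numpy as np

# Illustrative analysis/synthesis responses for a 1-level, M = 2 sub-octave split.
# They satisfy |H|^2 + G1*K1 + G2*K2 = 1 exactly (Eq. 4.15), but are NOT the
# dissertation's spline-based filters.
def filters(omega):
    c, s = np.cos(omega / 2), np.sin(omega / 2)
    H  = c ** 2                     # low-pass
    G1 = s * c                      # lower detail sub-band (first-derivative-like)
    G2 = s ** 2                     # upper detail sub-band (second-derivative-like)
    K1 = s * c * (1 + c ** 2)       # corresponding synthesis responses
    K2 = s ** 2 * (1 + c ** 2)
    return H, G1, G2, K1, K2

N = 1024
omega = 2 * np.pi * np.fft.fftfreq(N)            # frequency grid in [-pi, pi)
H, G1, G2, K1, K2 = filters(omega)
assert np.allclose(np.abs(H) ** 2 + G1 * K1 + G2 * K2, 1.0)   # Eq. (4.15)

f = np.random.randn(N)                           # arbitrary test signal
S0 = np.fft.fft(f)

# One-level decomposition, Eqs. (4.16)-(4.17): W_1^m = S_0 * G_m,  S_1 = S_0 * H
W1_1, W1_2, S1 = S0 * G1, S0 * G2, S0 * H

# Reconstruction, Eq. (4.18): S_0 = S_1 * conj(H) + W_1^1 * K_1 + W_1^2 * K_2
S0_rec = S1 * np.conj(H) + W1_1 * K1 + W1_2 * K2
print(np.max(np.abs(np.fft.ifft(S0_rec).real - f)))   # ~1e-15: perfect reconstruction
```

In practice the same recursion is applied level by level with the responses evaluated at $2^j\omega$, exactly as written in Equations (4.16)-(4.18).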

4.3.2 Two-Dimensional Discrete Sub-Octave Wavelet Transform

For the decomposition of a 2-D discrete function, we let the frequency response

of a scaling function be defined in the formula

$$|\hat{\phi}(\omega_x, \omega_y)|^2 = \sum_{j=1}^{+\infty} \sum_{d=1}^{2} \sum_{m=1}^{M} \hat{\psi}^{d,m}(2^j \omega_x, 2^j \omega_y)\, \hat{\gamma}^{d,m}(2^j \omega_x, 2^j \omega_y). \qquad (4.19)$$


































Figure 4.3. A 2-level 4-sub-octave decomposition and reconstruction of a SWT.


[Figure 4.4(b) plot title: "The Filter Bank for a 2-Level, 2-Sub-Octave Wavelet Transform"; horizontal axis: frequency.]

Figure 4.4. Frequency plane tessellation and filter bank. (a) Division of the frequency plane for a 2-level, 2-sub-octave analysis. (b) Its filter bank along the horizontal direction.







For a J-level 2-D discrete sub-octave wavelet transform, we can formulate

$$|\hat{\phi}(\omega_x, \omega_y)|^2 = \sum_{j=1}^{J} \sum_{d=1}^{2} \sum_{m=1}^{M} \hat{\psi}^{d,m}(2^j \omega_x, 2^j \omega_y)\, \hat{\gamma}^{d,m}(2^j \omega_x, 2^j \omega_y) + |\hat{\phi}(2^J \omega_x, 2^J \omega_y)|^2. \qquad (4.20)$$

If the scaling approximation of a function $f(x, y)$ at scale $2^j$ is represented by

$$S_{2^j} f(x, y) = f * \phi_{2^j}(x, y), \qquad (4.21)$$

then $\{W_{2^j}^{d,m} f(n_x, n_y),\, S_{2^J} f(n_x, n_y) \mid d = 1, 2,\ j = 1, \ldots, J,\ \text{and}\ m = 1, \ldots, M\}$ is called

the wavelet representation of a discrete function $f(n_x, n_y)$ for a 2-D J-level discrete

M sub-octave analysis. In general, $\phi(x, y)$ is a 2-D scaling function and $\psi^{d,m}(x, y)$

and $\gamma^{d,m}(x, y)$ are the m-th directional analysis and synthesis wavelets. There are

many choices of these functions that satisfy Equation (4.20). Similar to the way 2-D

wavelets are constructed using 1-D wavelets [51], we use tensor products to construct

2-D sub-octave wavelets from 1-D sub-octave wavelets. Thus we can write

$$\hat{\psi}^{1,m}(2^j \omega_x, 2^j \omega_y) = \hat{\psi}^{m}(2^j \omega_x)\, \hat{\phi}(2^{j-1} \omega_y), \qquad (4.22)$$

$$\hat{\psi}^{2,m}(2^j \omega_x, 2^j \omega_y) = \hat{\phi}(2^{j-1} \omega_x)\, \hat{\psi}^{m}(2^j \omega_y), \qquad (4.23)$$

$$\hat{\phi}(2^j \omega_x, 2^j \omega_y) = \hat{\phi}(2^j \omega_x)\, \hat{\phi}(2^j \omega_y). \qquad (4.24)$$


Through this construction, we implemented a 2-D sub-octave wavelet transform using

1-D convolution with FIR filters of the 1-D sub-octave wavelet transform previously

described. Figure 4.4(a) shows the division of the frequency plane under a 2-level

SWT where M = 2. Figure 4.4(b) shows the corresponding filter bank along the

horizontal direction, where the red curve corresponds to the analysis filter $\hat{\psi}^{1,1}(2\omega_x, 2\omega_y)$

(for $W_1^{1,1}$) projected along the $\omega_x$ axis, the black curve to $W_1^{1,2}$, the magenta curve to $W_2^{1,1}$,

the blue curve to $W_2^{1,2}$, and the green one to $S_2$.










[Figure 4.5 panel titles: "Spline Smoothing Function", "First Order Derivative Approximation", "Second Order Derivative Approximation", "Cubic Spline Scaling Function"; each plotted over the interval [-1, 1].]

Figure 4.5. Smoothing, scaling, and wavelet functions for a SWT. These functions
are used for a 2-sub-octave analysis.


A 2-D

sub-octave wavelet transform can be implemented by 1-D convolution with FIR filters

along horizontal and vertical directions. The details along the diagonal directions are

embedded in the details along horizontal and vertical directions. In Figure 4.5, a

fourth-order spline smoothing function, its first and second order derivative approximations

as sub-octave wavelets, and the corresponding scaling function are shown.
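As a concrete illustration of the separable (tensor-product) implementation described above, the sketch below applies hypothetical short 1-D FIR filters along rows and columns of an image. The filter taps are placeholders, not the spline-derived filters of this dissertation; only the row/column structure implied by Equations (4.22)-(4.24) is illustrated.

```python
import numpy as np
from scipy.ndimage import convolve1d

# Hypothetical short FIR filters standing in for the low-pass (H) and high-pass
# (G^m) filters of the dissertation.
h = np.array([0.125, 0.375, 0.375, 0.125])   # low-pass (scaling) taps
g = np.array([-0.5, 0.5])                     # high-pass (wavelet-like) taps

def separable_level(image):
    """One undecimated level: horizontal detail, vertical detail, and the
    smoothed approximation, each the same size as the input."""
    w_horizontal = convolve1d(image, g, axis=1)   # wavelet filter along x
    w_vertical   = convolve1d(image, g, axis=0)   # wavelet filter along y
    approx = convolve1d(convolve1d(image, h, axis=1), h, axis=0)
    return w_horizontal, w_vertical, approx

img = np.random.rand(64, 64)
wh, wv, s1 = separable_level(img)
```

Deeper levels would repeat the same row/column filtering on the approximation with filters dilated by $2^j$, and an M sub-octave analysis would use one high-pass filter per sub-band.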

A dyadic wavelet transform is a special case of a sub-octave wavelet trans-

form with M = 1. As an example, a discrete 2-sub-octave wavelet transform divides

the details of an octave band into the details of 2 sub-octave bands. As shown

in Figure 4.6, one sub-octave can be used for detecting local maxima while the other

sub-octave band identifies zero-crossings. The dashed curve corresponds to the fre-

quency response of a first order derivative approximation of a smoothing function

and the dash-dot curve shows the frequency response of a second order derivative
























Figure 4.6. An example of level one analytic filters for 2 sub-octave bands and a
low-pass band. The dashed curve is the frequency response of a first order deriva-
tive approximation of a smoothing function and the dash-dot curve is the frequency
response of a second order derivative approximation. The solid curve is a scaling
approximation at level one.

approximation. The solid curve is a scaling approximation for analysis at level one.

Thus, these analysis filters take advantage of both local maxima and zero-crossing

representations to characterize local features emergent within each scale.
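The complementary roles of the two derivative-type wavelets can be seen with a toy edge: the response of a first-derivative filter peaks at the edge location, which is exactly where the response of a second-derivative filter crosses zero. The minimal sketch below uses plain numerical derivatives rather than the SWT filters.

```python
import numpy as np

# Toy illustration: at a smoothed step edge, the first-derivative response has a
# local maximum exactly where the second-derivative response crosses zero.
x = np.linspace(-5.0, 5.0, 1001)
edge = 0.5 * (1.0 + np.tanh(2.0 * x))      # smoothed step edge centered at x = 0

d1 = np.gradient(edge, x)                  # first-derivative response
d2 = np.gradient(d1, x)                    # second-derivative response

peak = x[np.argmax(d1)]                    # location of the local maximum of d1

# Zero-crossing of d2, searched only where the edge response is significant
strong = d1[:-1] > 0.1 * d1.max()
crossing = np.where(strong & (np.sign(d2[:-1]) != np.sign(d2[1:])))[0][0]
print(peak, x[crossing])                   # both approximately 0
```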

4.4 SWT-Based De-Noising

Our experiments showed that a SWT using first and second order derivative ap-

proximations of a smoothing function as its basis wavelets best separated coefficients

characterized by feature energy from coefficients characterized by noise energy.

Sub-octave wavelet coefficients can be modified by hard thresholding for noise

reduction by

$$W_{2^j}^{d,m,*} f(x) = M_{2^j}^{d,m}\, T_H\!\left( W_{2^j}^{d,m} f(x) / M_{2^j}^{d,m},\; t_{2^j}^{d,m} \right), \qquad (4.25)$$

$$M_{2^j}^{d,m} = \max_x \left( \left| W_{2^j}^{d,m} f(x) \right| \right), \qquad (4.26)$$







where $d \in \{1, 2\}$ (omitted for 1-D signals), $j \in \{1, \ldots, k\}$ with $k \le J$, $m \in \{1, \ldots, M\}$,

and $t_{2^j}^{d,m}$ is (in general) a threshold related to noise and scale. The processed result

$W_{2^j}^{d,m,*} f(x)$ is a modified coefficient. Position $x$ denotes $n$ for 1-D signal processing

and $(n_x, n_y)$ for 2-D image noise reduction.

SWT coefficients can be processed through soft thresholding wavelet shrinkage

for noise reduction by


$$W_{2^j}^{d,m,*} f(x) = M_{2^j}^{d,m}\, T_s\!\left( W_{2^j}^{d,m} f(x) / M_{2^j}^{d,m},\; t_{2^j}^{d,m} \right), \qquad (4.27)$$

$$M_{2^j}^{d,m} = \max_x \left( \left| W_{2^j}^{d,m} f(x) \right| \right), \qquad (4.28)$$

where $d \in \{1, 2\}$ (omitted for 1-D signals), $j \in \{1, \ldots, k\}$ with $k \le J$, $m \in \{1, \ldots, M\}$, and

$t_{2^j}^{d,m}$ is a noise and scale-related threshold. Again, the result $W_{2^j}^{d,m,*} f(x)$ is a processed

coefficient. Similarly, position $x$ denotes $n$ for 1-D signal processing and $(n_x, n_y)$ for

2-D image noise reduction. Recall that Donoho and Johnstone's soft thresholding

method used a single global threshold [19]. However, since noise coefficients under

a SWT decay on average from fine to coarse scales, we regulate soft threshold-

ing wavelet shrinkage by applying coefficient-dependent thresholds decreasing across

scales [72]. When features and noise can be clearly separated at the finer levels of

scale, applying hard thresholding instead of soft thresholding may further improve

performance. However, hard thresholding may fail to smooth noise if the noise is

strong locally.
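A minimal sketch of this per-sub-band processing follows. The threshold values and their decay across scales are illustrative placeholders, not the settings used for the experiments reported in this dissertation.

```python
import numpy as np

def hard_threshold(u, t):
    """Keep-or-kill thresholding applied to normalized coefficients."""
    return np.where(np.abs(u) >= t, u, 0.0)

def soft_threshold(u, t):
    """Donoho-Johnstone style shrinkage applied to normalized coefficients."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def denoise_subband(w, t, mode="soft"):
    """Normalize a sub-band by its maximum magnitude, threshold, and rescale
    (the structure of Eqs. (4.25)-(4.28))."""
    m = np.max(np.abs(w))                  # M_{2^j}^{d,m}
    if m == 0:
        return w
    u = w / m
    u = soft_threshold(u, t) if mode == "soft" else hard_threshold(u, t)
    return m * u

# Example: thresholds decreasing from fine to coarse scales (illustrative decay)
subbands = [np.random.randn(2048) * 0.1 for _ in range(3)]       # levels j = 1, 2, 3
thresholds = [0.3 / (2 ** j) for j in range(3)]
processed = [denoise_subband(w, t, mode="soft") for w, t in zip(subbands, thresholds)]
```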








Table 4.1. Quantitative measurements of performance for de-noising and feature
restoration.
Method or Measurement                                         Blocks    Bumps   HeaviSine   Doppler
Noisy Signal: $\sigma_g/\sigma_n$                              6.856    6.735       6.895     7.017
Noisy Signal: $10\log_{10}(\sigma_g^2/\sigma_n^2)$ (dB)       16.726   16.567      16.771    16.923
Restored Result: $\sigma_g/\sigma_n$                          27.258   20.988      35.453    20.129
Restored Result: $10\log_{10}(\sigma_g^2/\sigma_n^2)$ (dB)    28.779   26.439      30.993    26.076
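The ratio and decibel rows of Table 4.1 are related by $\mathrm{SNR\,(dB)} = 10\log_{10}\!\left((\sigma_g/\sigma_n)^2\right)$; for example, for the noisy "Blocks" signal, $10\log_{10}(6.856^2) \approx 16.7$ dB, matching the entry above.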


4.5 SWT-Based Enhancement with Noise Suppression

Through a nonlinear enhancement function or a generalized adaptive gain opera-

tor, SWT coefficients were modified for contrast enhancement by


$$W_{2^j}^{d,m,*} f(n_x, n_y) = M_{2^j}^{d,m}\, E_{OP}\!\left( W_{2^j}^{d,m} f(n_x, n_y) / M_{2^j}^{d,m} \right), \qquad (4.29)$$

$$M_{2^j}^{d,m} = \max_{(n_x, n_y)} \left( \left| W_{2^j}^{d,m} f(n_x, n_y) \right| \right), \qquad (4.30)$$

where $d \in \{1, 2\}$, $1 \le m \le M$, and $1 \le j \le J$. The pointwise operator's output

$W_{2^j}^{d,m,*} f(n_x, n_y)$ is simply a processed coefficient. The enhancement operator $E_{OP}$

can be $E_{NL}$ or $E_{GAG}$ as presented in Section 2.4. Since these operators are defined

on input values between -1 and 1, we normalize sub-octave wavelet coefficients before

applying the operators.
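A minimal sketch of this normalize-enhance-rescale step is shown below. The sigmoid-based gain used here is only an illustrative stand-in for the $E_{NL}$ and $E_{GAG}$ operators defined in Section 2.4; its parameters are arbitrary placeholders.

```python
import numpy as np

def gain_operator(u, b=0.35, c=10.0):
    """Hypothetical smooth gain on [-1, 1]: odd, monotone, boosts mid-range |u|."""
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    a = 1.0 / (sig(c * (1.0 - b)) - sig(-c * (1.0 + b)))   # normalizes output to [-1, 1]
    return a * (sig(c * (u - b)) - sig(-c * (u + b)))

def enhance_subband(w):
    """Normalize a sub-band to [-1, 1], apply the gain, and rescale (Eqs. 4.29-4.30)."""
    m = np.max(np.abs(w))                 # M_{2^j}^{d,m}, Eq. (4.30)
    if m == 0:
        return w
    return m * gain_operator(w / m)       # Eq. (4.29)

band = np.random.randn(128, 128) * 0.05   # one 2-D sub-octave coefficient band
enhanced = enhance_subband(band)
```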

4.6 Application Samples and Analysis

In this section, we present several experimental results of SWT based de-noising

and enhancement for both 1-D signals and 2-D images. First, we present experimen-

tal results based on a 1-D sub-octave wavelet transform. We show the de-noising

capability of our method with feature restoration compared to existing de-noising

methods.



























[Figure 4.7 panels: (a) "Blocks", (b) "Bumps", (c) "HeaviSine", (d) "Doppler"; each panel plots four traces over samples 0-2000.]

Figure 4.7. De-noised and restored features from the SWT-based algorithm. From
top to bottom: original signal; noisy signal; de-noised signal; overlay of original and
de-noised signal.








Experimental results of SWT-based de-noising are shown in Figure 4.7. In the

fourth plot shown in each Figure 4.7(a)-(d), the de-noised results are overlaid with

their corresponding original signals. Table 4.1 shows quantitative measurements of

each result shown in Figure 4.7. In comparison to previously published methods

processing the same signals [8], results of the SWT-based method are noticeably im-

proved and essentially free from artifacts such as pseudo-Gibbs phenomena. In the next exper-

iment, we show the limitations of a traditional DWT for characterizing band-limited

high frequency features. Figures 4.8(a)-(b) show the original and noisy "Doppler"

signals with their corresponding 5-level DWT and a coarse scale approximation. The

finest scale band-limited features (see the second plot in Figure 4.8(a)) are weak and

obscured when noise is present (see the second plot in Figure 4.8(b)). These high fre-

quency band-limited features are lost in a soft thresholding de-noising method, shown

in Figure 4.9(b). Figures 4.8(c)-(f) show 2-sub-octave wavelet transforms of the orig-

inal and noisy "Doppler" signals. Figures 4.8(c) and 4.8(e) show first sub-octave

coefficients while Figures 4.8(d) and 4.8(f) display second sub-octave coefficients.

Figure 4.9 shows de-noised and enhanced results of noisy "Doppler" under a DWT

and a SWT. The loss of high frequency band-limited features makes the result from

the DWT-based method less attractive than the SWT-based method for processing

medical images, such as microcalcifications in mammograms.

Next, experimental results of enhancement with noise suppression of mammo-

graphic images via a 2-D sub-octave wavelet transform are presented. We demon-

strate the advantage of enhancement without amplifying noise, including background

noise [37]. In the first experiment for image enhancement, we attempt to enhance

the visibility of a radiographic image of an RMI (Radiation Measurements Inc., Mid-

dleton, WI) model 156 phantom with simulated masses embedded. Figure 4.10(a)

presents a low contrast image of the RMI model 156 phantom with simulated masses





































[Figure 4.8 panels: (a) "The Original Signal and Its DWT Coefficients"; (b) "The Noisy Signal and Its DWT Coefficients"; (c) "The Original Signal and Its First Sub-Octave Coefficients of a SWT"; (d) "The Original Signal and Its Second Sub-Octave Coefficients of a SWT"; (e) "The Noisy Signal and Its First Sub-Octave Coefficients of a SWT"; (f) "The Noisy Signal and Its Second Sub-Octave Coefficients of a SWT".]

Figure 4.8. Limitations of a DWT for characterizing band-limited high frequency
features.


[Figure 4.9 panels: "The Enhanced Signal and the Processed DWT Coefficients"; "The Enhanced Signal and the Processed First Sub-Octave Coefficients of a SWT"; "The Enhanced Signal and the Processed Second Sub-Octave Coefficients of a SWT".]

Figure 4.9. De-noised and enhanced results of a noisy "Doppler" signal under a DWT
(25.529 dB) and a SWT (26.076 dB).





















Figure 4.10. Enhancement with noise suppression. (a) A low contrast image of an RMI
model 156 phantom with simulated masses embedded. (b) Enhancement by traditional
histogram equalization. (c) SWT-based enhancement with noise suppression.

of distinct sizes. Figure 4.10(b) shows enhancement by traditional histogram equal-

ization. SWT-based enhancement with noise suppression is shown in Figure 4.10(c).

Unsharp masking, a popular technique for enhancing radiographs, failed to enhance

barely seen masses (low frequency features) in Figure 4.10(a). In comparison to tra-

ditional histogram equalization, the SWT-based method enhanced the visibility of

masses without amplifying noise.

In the next experiment, we enhance low contrast mammographic images contain-

ing a microcalcification cluster. Figure 4.11(a) shows a region from a low contrast

mammogram containing a distinct microcalcification cluster. Next, enhancement by

traditional unsharp masking is presented in Figure 4.11(b). Figure 4.11(c) shows the

result of SWT-based enhancement with noise suppression. In practice, a radiologist

may want to view certain suspicious areas of low contrast within a large digital mam-

mogram for potential breast lesions with a magnifier to improve visibility of an area.

In Figure 4.11(d), we try to provide a similar function by improving the visibility of a

local region of interest (ROI) through SWT-based enhancement with noise suppres-

sion. The region within the black square (120x120 pixels) is enhanced. Note that the region

need not be square; any rectangular region can be used. Traditional unsharp masking, shown






in Figure 4.11(b) enhances the area of the microcalcification cluster slightly but also

amplifies noise. As shown in Figures 4.11(c) and 4.11(d), enhancement under a SWT

makes barely seen microcalcification clusters more visible without amplifying noise.















































Figure 4.11. Enhancement with noise suppression. (a) Area of a low contrast mam-
mogram with a microcalcification cluster. (b) Best enhancement by traditional un-
sharp masking. (c) SWT-based enhancement with noise suppression. (d) SWT-based
enhancement of a local region of interest (ROI) with noise suppression.














CHAPTER 5
PERFORMANCE MEASUREMENT AND COMPARATIVE STUDY


In this chapter, we present several relevant quantitative metrics for

evaluating an algorithm's performance and show a few comparative studies of quan-

titative measurements between our algorithms and other published methods. In

Section 5.1, a few quantitative measurements used in this dissertation are described

and formulated mathematically. Section 5.2 shows the quantitative results of our

algorithms and other relevant methods for signal/image restoration. In Section 5.3,

we present quantitative measurements of image enhancement among our developed

algorithms and a few other related methods. Earlier chapters demonstrated visual

quality of de-noised as well as enhanced results (visual performance) while this chap-

ter focuses on quantitative performance.

5.1 Performance Metric for Quantitative Measurements

The quality of a noisy signal/image is often measured by the ratio of signal vari-

ance to noise variance (or signal energy level of variation to noise energy level) using

a log scale. This quantity is called the signal-to-noise ratio. A signal/image is more

severely degraded when the noise level is high (low signal-to-noise ratio). We have used

the quantitative term signal-to-noise ratio when displaying our earlier experimental

results. A formal definition of signal-to-noise ratio is given below.

Signal-to-noise ratio (SNR) is defined as

$$\mathrm{SNR} = 10 \log_{10}\!\left( \frac{\sigma_g^2}{\sigma_n^2} \right),$$








where $\sigma_g^2$ and $\sigma_n^2$ are the signal variance and the average noise energy (average least

squares error between a signal and its noise-free original version), respectively [30].

Here, we denote an ideal (original) signal as $g$ and the restored version from its noisy

signal as $g^*$. For 1-D signals, $\sigma_g^2$ is defined as


$$\sigma_g^2 = \frac{1}{N} \sum_{n=1}^{N} \left( g(n) - \bar{g} \right)^2, \qquad \bar{g} = \frac{1}{N} \sum_{n=1}^{N} g(n),$$

and $\sigma_n^2$ is defined by

$$\sigma_n^2 = \frac{1}{N} \sum_{n=1}^{N} \left( g^*(n) - g(n) \right)^2.$$

For 2-D images, $\sigma_g^2$ is defined as

$$\sigma_g^2 = \frac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} \left( g(m, n) - \bar{g} \right)^2, \qquad \bar{g} = \frac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} g(m, n),$$

while $\sigma_n^2$ is defined by

$$\sigma_n^2 = \frac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} \left( g^*(m, n) - g(m, n) \right)^2.$$


The term SNR was frequently used in many noise reduction algorithms for quan-

titative measurements of performance and has also been applied to measure our

de-noising methods. The quality improvement of a signal/image can be measured by

an improved (higher) SNR compared to the SNR of its noisy version.
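For reference, a small sketch computing this SNR directly from the formulas above is given here; it applies equally to 1-D signals and 2-D images, since the sums run over all samples or pixels. The "restoration" shown in the usage example is an arbitrary moving average, used only to exercise the function.

```python
import numpy as np

# SNR as defined above: 10*log10(signal variance / average noise energy),
# valid for 1-D signals and 2-D images alike.
def snr_db(original, restored):
    sigma_g2 = np.mean((original - np.mean(original)) ** 2)   # signal variance
    sigma_n2 = np.mean((restored - original) ** 2)            # average noise energy
    return 10.0 * np.log10(sigma_g2 / sigma_n2)

# Example: a clean ramp corrupted by Gaussian noise, then "restored" by a
# simple moving average (illustrative only).
clean = np.linspace(0.0, 10.0, 2048)
noisy = clean + np.random.randn(clean.size)
restored = np.convolve(noisy, np.ones(9) / 9.0, mode="same")
print(snr_db(clean, noisy), snr_db(clean, restored))
```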

The quantitative measurements of average square errors were used to measure the

performance of noise reduction algorithms [19, 20, 8]. For our comparative studies, we



