BANDWIDTH-AWARE VIDEO TRANSMISSION WITH ADAPTIVE IMAGE SCALING

By

ARUN S. ABRAHAM

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ENGINEERING

UNIVERSITY OF FLORIDA

2003


Copyright 2003 by Arun S. Abraham


Dedicated to my Father and Mother.


ACKNOWLEDGMENTS

I would like to express my sincere gratitude to my thesis advisor, Dr. Jonathan C. L. Liu, for his guidance and encouragement. Without his confidence in me, I would not have been able to do this thesis. I would like to thank Dr. Douglas D. Dankel II and Dr. Richard E. Newman for serving on my thesis committee. I would also like to thank Dr. Ju Wang of the Distributed Multimedia Group for his critical guidance and contributions toward this research, especially regarding the optimal rate-resizing factor section (Chapter 3). I would like to thank my parents for always encouraging me toward higher studies and for all their love and support. Also, I would like to thank my dear wife for her love and sacrifice while living a student life pursuing higher studies. I would also like to thank my brother, the first one in the family to attain a master's degree, for his love and encouragement. And above all, I would like to thank God for making everything possible for me.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION
  1.1 Novel Aspects
  1.2 Goals
  1.3 Overview

2 BACKGROUND
  2.1 MPEG-2 Overview
  2.2 Rate Distortion
  2.3 MPEG-2 Rate Control

3 APPROXIMATING OPTIMAL RATE RESIZING FACTOR FUNCTION

4 SYSTEM DESIGN
  4.1 Scaled-PSNR Based Approach
  4.2 Dynamic Adaptive Image Scaling Design

5 EXPERIMENTAL RESULTS
  5.1 Case 1
  5.2 Case 2
  5.3 Further Analysis

6 CONCLUSION
  6.1 Contributions
  6.2 Future Work

APPENDIX

A PERTINENT MPEG-2 ENCODER SOURCE CODE CHANGES
  mpeg2enc.c
  putseq.c
  putbits.c
  Sample Encoder Parameter (PAR) File

B PERTINENT MPEG-2 DECODER SOURCE CODE CHANGES
  mpeg2dec.c
  getbits.c

C MATLAB CODE FOR OPTIMAL RATE-RESIZING FACTOR APPROXIMATION

D CASE 2 TEST PICTURES

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

1. Case 1 Adaptive Image Scaling I-Frame PSNR Data
2. Case 2 Adaptive Image Scaling I-Frame PSNR Data
3. Frame 45 Testing Of Sequence mei20f.m2v
4. Frame 45 Testing Of Sequence bbc3_120.m2v

LIST OF FIGURES

1. Programming Model
2. MPEG Frame References
3. Hierarchical Layers of an MPEG Video
4. Group of Pictures
5. R-D Curve Showing the Convex Hull of the Set of Operating Points
6. Bit Allocation of TM5 Rate Control. Bit Rate Is Set to 700kbps, Video Resolution Is 720x240, and Image Complexity for I, P, B Frames Is Set to Xi=974kbits, Xp=365kbits, Xb=256kbits, Respectively
7. Average Quantization Scale for a GOP, Encoded at 700kbps
8. GOP Target Buffer Overflow, Encoded at 700kbps
9. Reducing the Number of MB Before Normal Rate Control and Encoding
10. Effects of Image Scaling
11. The Overall Distortion-Resizing Functions When Different Target Bit Rates Are Given. The Video Resolution Is 704x480 (DVD Quality)
12. System Design Components
13. Bandwidth-Adaptive Quantization
14. Scaled-down PSNR Values
15. Scaled-down Quantization Values
16. Adaptive Image Scaling Scenario
17. Case 1 Scaled-PSNR Comparison
18. Case 1 Scaled-Quantization Comparison
19. Case 1 GOP Target Bits Overflow Comparison
20. Case 2 Quantization Comparison
21. Comparison of Scaled-down PSNRs
22. Reference Picture for 3rd I-Frame (PSNR = 49.0, S=1,425,311)
23. 3rd I-Frame Using Original Encoded Picture (PSNR = 20.3, S=91,627)
24. 3rd I-Frame Using Adaptive Image Scaled Picture (PSNR = 19.9, S=36,081)

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Engineering

BANDWIDTH-AWARE VIDEO TRANSMISSION WITH ADAPTIVE IMAGE SCALING

By

Arun S. Abraham

August 2003

Chair: Jonathan C. L. Liu
Major Department: Computer and Information Science and Engineering

It is essential that MPEG-2/4 video encoding makes the best use of the available bandwidth and generates the compressed bit-stream with the highest quality possible. With many types of client machines, it is preferable to have a single format/resolution stored in the storage system and to adaptively transmit the video content at the proper resolution for the targeted client and network. Conventional approaches use large quantization scales to adapt to low bit rates. However, a larger quantization scale degrades the image quality severely. We believe that, in addition to adaptive quantization for bit allocation, the image resolution should be jointly adjustable. The key design factor should be to reduce the total number of macroblocks per frame. Our goal is thus to investigate the effects of scaling down the video as an alternative way to adapt to bandwidth fluctuations (e.g., 3G/4G wireless networks). We envision that the proposed scheme, used in addition to the standard TM5 rate control mechanism, would be more effective during low-bandwidth situations.

From the results, we see that during low-bandwidth situations, scaling down the image does provide comparable image quality while greatly reducing bandwidth requirements (i.e., usually by more than 2X). The video quality can be significantly improved (i.e., up to 2 dB) by re-rendering the image at a smaller resolution with a relatively precise quantization scale (i.e., usually 50% to 100% less than the conventional encoders).


CHAPTER 1
INTRODUCTION

Streaming video has recently become more common due to rapidly growing high-speed networks. To provide high-quality video images to the end user, the underlying network needs to be able to maintain the necessary bandwidth during the playback session. Ideally, a QoS-guaranteed communication network (e.g., ATM [12] and IPv6 [3]) should be used for multimedia transportation. However, in the near future, the majority of streaming video applications will still run over best-effort networks (e.g., the Internet), where the actual bandwidth is characterized by short-term fluctuations (and the available amount can be limited). This observation remains valid when applied to the next-generation wireless networks (3G/4G systems). For example, cellular phone users are rarely guaranteed optimal bandwidth availability. As demonstrated in Wang and Liu [17], the per-channel bandwidth for mobile users is subject to the cell load and the spreading factor used. Furthermore, the probability of bandwidth changes for the mobile can be higher than 60% during the handoff transition. Thus, without a proper design, playback at the end users can often be delayed and the video quality suffers different degrees of jitter when compressed video is transported through these networks [1]. This is the common experience when commercial video players (e.g., RealPlayer and Microsoft's Media Player) are used with the current Internet.


Thus, with the use of best-effort networks, it is essential that video encoding makes the best use of the available bandwidth and generates the compressed bit-stream with the highest quality possible. Bandwidth-adaptive video encoding technology, which dynamically updates the video encoding parameters according to the current network situation, is expected to deliver better video quality with limited bandwidth. However, providing a single video quality for all kinds of client machines becomes a challenging issue. Today's client machines can be conventional PCs, laptop computers, PDAs, and Information Appliances (IA). These machines/devices have different display resolutions and capabilities. Having different resolutions of the same video content on the servers would waste storage system efficiency, since a video title can occupy several gigabytes of disk space. Thus, it is preferable to have a single format/resolution stored in the storage system and to adaptively transmit the video content at the proper resolution for the targeted client and network. To perform this kind of adaptation, a video server needs to dynamically adjust the video quality to accommodate changes in the network conditions (e.g., a sudden drop in the available bandwidth). There are various quality factors that can be adjusted to adapt to a sudden drop in bandwidth: increasing the quantization factor, decreasing the color information [13], and decreasing the video dimensions. The problem in its general form can trace its origin back to the classic rate-distortion theory [2,6].


In general, the rate-distortion theory provides fundamental guidelines for finding an optimal coding scheme for an information source, such that the average loss of data fidelity is minimized when the average bits per symbol of the coding scheme is given [2]. For a block-transform-based video coder (e.g., MPEG-2/4 [19]), the key problem is how to distribute a limited bit budget to the image blocks (e.g., 16x16 macroblocks in MPEG-2) to achieve the optimal R-D curve. In a practical video encoder, we need to decide the quantization scale and other factors for the transformed image blocks.

1.1 Novel Aspects

Some related work in this area can be found in the literature [2,4,6,7,13,15]. However, with the exception of Puri and Aravind [15] and Fox et al. [4], their common focus is on adjusting the quantization parameter of the DCT coefficients based on a particular rate-distortion model. Furthermore, their models are based on using a fixed image resolution, which imposes unnecessary limitations on further exploiting the rate-distortion theory and results in worse video quality when the bit budget is low. Puri and Aravind [15] discuss various general ways an application can adapt to changes in resources but do not mention resolution scaling. Fox et al. [4], on the other hand, provide design principles for application-aware adaptation, including resolution scaling as one of the ways of dealing with network variability. However, Fox et al. [4] do not focus on performing a thorough quantitative and theoretical analysis of image scaling. Additionally, no other work that we know of provides experimental results for an MPEG-2 encoder using dynamic Adaptive Image Scaling.

When the available data rate is low, a large quantization scale will be used in the traditional rate control methods [19] to maintain the desired bit rate.


The larger quantization scale degrades the image quality severely and causes perceptual artifacts (such as blockiness, ringing, and mosquito noise), since the detailed image structure cannot survive a coarse quantization. Another level of bit allocation occurs at the group of pictures (GOP) level, where bits are distributed among the frames in each GOP. However, due to the imperfect rate control, the P and B frames at the end of a GOP are often over-quantized to prevent buffer overflow. A large variation in image quality is thus observed, especially when the given bit rate is low.

Therefore, a more precise and effective method for the bit allocation and encoding scheme should be investigated to provide robust video quality when the network condition varies over time. The new method should be effective in improving video quality both objectively and subjectively. It must be able to react quickly to changes in the network, and the encoding complexity should be low enough to suit real-time encoding requirements. To this end, a network-probing module as proposed by Kamat et al. [8] and Noble and Satyanarayanan [13] can be used in the video server to monitor the connection status. Additional processing overhead is also required at the video server to perform the refined bit allocation algorithm. Nevertheless, the computational overhead can be affordable since the CPU resources at the video server are largely under-utilized [10]. When live encoding is required, we assume that the video server is equipped with programmable encoding hardware/software that is able to accept different encoding parameters.

The success of the proposed scheme depends on the answer to an immediate question: how can we use a smaller quantization for each macroblock (i.e., more bits) while maintaining the overall bit rate requirement for each frame, or each GOP?


The task sounds extremely difficult at first, since increasing the bits per macroblock and reducing the bits per frame can be opposite goals. However, with further study along this direction, we have found that it is possible to meet both goals under one condition. The key design factor should be to reduce the total number of macroblocks per frame. For example, halving both image dimensions cuts the macroblock count to one quarter, so the same frame budget yields roughly four times as many bits per macroblock. Previous studies in adaptive rate control usually assume that this design factor is fixed all the time. But this assumption can be challenged by today's advanced video card technology. Many modern video cards come with advanced image processing capabilities (e.g., dithering and Z-buffering). These capabilities help to deliver better video quality at the client machines. Thus, even given a smaller resolution (i.e., a reduction of the macroblocks), the final display at the client machines can be comparable to the original quality. It is based on this idea that we propose a new video encoding scheme, where not only is adaptive quantization used for bit allocation, but the image resolution is also adjustable to control the bit rate at a higher level.

The proposed scheme also addresses the unfair bit allocation issue under small bit rate budgets. It is observed that, with a small bit rate budget, a larger quantization scale Q is often used, which makes the bit allocation control more error-prone. We have found that the actual bits used for the last few frames usually generated a significant buffer overflow. Our proposed scheme did eliminate the unfair bit allocation (and thus the corresponding quality degradation).


The choice thus becomes: when the bit budget runs low, we can either down-sample the image to a smaller resolution and use a rather precise quantization parameter, or directly adapt to a coarse quantization scale with the original image size. Since low-speed networks are more common than high-speed networks these days, scaling down is perhaps more urgent than scaling up.

1.2 Goals

Our study is thus to investigate the effects of scaling down the video as an alternative way to adapt to bandwidth fluctuations. We believe that gradually decreasing the image size would help alleviate the potential losses caused by low bandwidth. Along this line, one major design goal was to retain the integrity of the baseline functionality of the encoder and decoder, even at the cost of some performance. We wanted to first understand the behavior of Adaptive Image Scaling and afterwards focus on performance. As shown in Figure 1, the image scaling adaptation was thus added to the encoder as an outer loop outside the main encoding functionality. The image scaling adaptation loop therefore determines which scale to use based on the current bit rate. This type of programming model is also used in Noble and Satyanarayanan [13] to obtain positive results for making applications adaptive.

Figure 1. Programming Model

Of course, our study also stresses the importance of providing an adaptation that is as user-unaware as possible (i.e., the size variations should not be noticed by the user).


This is accomplished by specifying the preferred display resolution in the encoded video stream, a feature defined by MPEG-2. At the decoder side, the decompressed video images will be re-rendered to the preferred display size, instead of the actual encoded resolution. For the sake of simplicity, the display size at the client remains unchanged for the same video sequence to provide a consistent viewing effect.

Therefore a big challenge is: for a given bit rate, how can we determine the best image scale factor and quantization parameters for encoding the video at the best quality level? In this thesis, our focus is on the determination of the scale factor and the frame-level bit allocation. For the macroblock-level bit allocation, we assume that adaptive quantization is used as proposed by Gonzales and Viscito [5] and ISO/IEC 13818-2 [19]. Theoretically, there exists a mapping between the current bit rate and the optimal image-resizing factor that leads to the minimal distortion. The optimal mapping could be obtained empirically by encoding with all possible resizing factors and comparing the objective video quality. However, such pre-encoding processing lacks practical value due to the tremendous computation involved and is impossible in the case of live video. We investigated two approaches: the first assumes a mathematical model obtained from a variation of the classic rate-distortion function, and the second uses an implicit rate control based on PSNR feedback. We used the mathematical model as an approximation for modeling our system. Then the performance results from the PSNR-based approach are demonstrated to support the overall design goal.

We have finished the implementation by introducing an image-resizing module that generates the image data on the fly. The resizing factor is calculated every GOP, based on the current network bandwidth and the image complexity.


With the continuous image resizing, a more accurate rate control can be implemented. Another advantage of the online image re-sampling is that the video server now only stores the original video images. From our experimental results, we have observed that video quality can be significantly improved (i.e., up to 2 dB) by re-rendering the image at a smaller resolution with a relatively precise quantization scale (i.e., usually 50% to 100% less than the conventional encoders). Specifically, the experimental results show a promising trend for low-bit-rate video transmission. Using a scaled-down resolution for encoding provides comparable picture quality. Note that the conventional encoders use a drastically higher quantization scale and utilize more than double the bandwidth for approximately the same picture quality. We thus believe the proposed scheme is suitable for the emerging 3G wireless networks, which are targeted for multimedia communications using a limited bandwidth.

1.3 Overview

The remainder of this thesis is organized as follows: Chapters 2 and 3 introduce the theoretical background of rate control theory along with the expected results on the reduction factor. Chapter 4 explains the basic software design of the proposed encoder system. Chapter 5 presents detailed experimental results. Chapter 6 concludes this thesis by discussing the unique contributions and potential future work for this research.


CHAPTER 2
BACKGROUND

This chapter provides essential background information that will help in understanding this research. The organization and syntax of an MPEG video will be described. Also, theoretical information about what can be done as the bit rate deteriorates will be discussed. Finally, this chapter ends by discussing what the baseline encoder does when the bit rate decreases.

2.1 MPEG-2 Overview

MPEG (Moving Picture Experts Group) is a committee that produces standards to be used in the encoding and decoding of audio-visual information (e.g., movies and music). MPEG works with the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). MPEG-1, MPEG-2, and MPEG-4 are widely used standards produced by MPEG. MPEG is different from JPEG in that MPEG is primarily for moving pictures while JPEG is only for still pictures. Moving pictures are generated by displaying decoded still-picture frames in sequence, usually at a rate of 30 frames per second. To provide optimum compression and user-perceived quality, MPEG capitalizes on the redundancy of video both across subsequent frames (i.e., temporal) and among the neighboring pixels of each frame (i.e., spatial). More specifically, MPEG exploits temporal redundancy by the prediction of motion from one frame to the next, while spatial redundancy is exploited by the use of the Discrete Cosine Transform (DCT) for frame compression. To exploit these redundancies, MPEG strongly relies on syntax.


The three main types of frames in MPEG are I, P, and B (listed in order of priority). The I-frame is used as the reference frame and has no dependency on any other frames. The I-frame is intra-coded (without dependency on other frames), while the P- and B-frames are inter-coded (they depend on other frames). P-frames (i.e., predicted frames) use only forward references, while B-frames (i.e., bidirectional frames) use both forward and backward references. Figure 2 depicts the relation between the different frame types.

Figure 2. MPEG Frame References [11]

An MPEG video has six hierarchical layers (shown in Figure 3). The highest layer is the Sequence layer, which provides a complete video.

Figure 3. Hierarchical Layers of an MPEG Video [11]


The next layer is the Group of Pictures (GOP) layer (shown in Figure 4), which comprises a full set of the different frame types, consisting of only one I-frame and multiple P- and B-frames. The next layer is the Picture layer, which is a single frame consisting of multiple slices (the Slice Layer). Each slice can contain multiple macroblocks (the Macroblock Layer), which contain information about motion and transformation. Each macroblock consists of blocks (the Block Layer), which are 8x8 arrays of values encoded using the DCT.

Figure 4. Group of Pictures [2]

2.2 Rate Distortion

Video is encoded using various parameters such as the dimensions of the frame, the color format, and the available bit rate. If the available bit rate is high, then the encoder can use low-compression parameters such as a small quantization scale to provide the best quality video at a constant bit rate (CBR). Using a small quantization scale, the frames are encoded precisely, resulting in larger-sized frames. When the available bit rate varies (VBR), the encoder must decrease the amount of generated bits by using a high quantization scale to meet the lower bit budgets. By adapting to the network variability and providing VBR encoding, the encoder can provide constant-quality video [14]. If the encoder were bandwidth-unaware, it would generate wasteful information during low-bandwidth situations [9].


Keeping all the other encoding parameters constant and gradually lowering the bit rate results in the gradual degradation (i.e., distortion) of the video. Minimizing the distortion while meeting the budget implied by the bit rate is the general rate-distortion problem. Distortion can be minimized by choosing the optimal encoding parameters that provide the best possible video quality under the circumstances. An instance of encoding parameters can be referred to as an operating point [16]. An encoder could try different operating points to see which one provides the best video quality. The convex hull of the set of operating points provides the best rate-distortion performance [16]. The goal of rate-distortion optimization is to find the operating point that is as close as possible to the curve generated by the convex hull, as shown in Figure 5. For a given rate (R1), the distortion will vary based on the different encoding parameters used.

Figure 5. R-D Curve Showing the Convex Hull of the Set of Operating Points [16]

Given the real-time constraints of encoding, the optimal operating point cannot be computed on the fly; instead, rate control algorithms can be employed to provide the best operating point using general guidelines to meet the target bit budget.


2.3 MPEG-2 Rate Control

The bit allocation in MPEG-2 consists of two steps: (1) target bit estimation for each frame in the GOP and (2) determination of reference quantization scales at the macroblock level for each frame. At the GOP level, bit assignment is based on the frame type. For instance, an I-frame usually has the highest weight and gets the largest share, while P-frames have the next priority, and B-frames are assigned the smallest portion of the total bits. At the frame level, the TM5 rate control algorithm in MPEG-2 controls the output data rate by adaptive quantization. The DCT coefficients, which are organized in 8x8 blocks, are first quantized by a fixed quantization matrix and then by an adaptive quantization scale Q. The elements in the fixed quantization matrix are decided according to the sensitivity of the human visual system to the different AC coefficients and are obtained empirically from many experiments. The quantization scale Q serves as an overall quantization precision.

This rate control algorithm performs well when the bit rate budget is high; however, it might result in very unfair bit allocation under small bit rate budgets. With a small bit rate budget, a larger quantization scale Q is often used, which makes it harder to control the bit allocation accurately. An often-observed direct result is that the actual bits used for the first few frames are greater than the target bits calculated in the GOP-level rate control. This further worsens the bit shortage for the rest of the frames in the same GOP and either causes buffer overflow (the generated bit-stream consumes more bandwidth than allowed) or a sudden decrease in picture quality. Figure 6 demonstrates such performance degradation for an MPEG-2 encoder using a low bit rate.
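As a concrete illustration of step (1), the following is a minimal C sketch of TM5-style frame-level target bit estimation. It follows the usual TM5 conventions (per-type complexity estimates Xi, Xp, Xb and the constants Kp, Kb); the structure and names are illustrative, not taken from the modified encoder in Appendix A.

/* Sketch of TM5-style frame-level target bit estimation (step 1 above).
 * Kp and Kb are the usual TM5 constants; all names are illustrative. */
#include <math.h>

#define KP 1.0   /* relative weight of P-frames */
#define KB 1.4   /* relative weight of B-frames */

typedef struct {
    double R;          /* bits remaining for the current GOP            */
    double Xi, Xp, Xb; /* complexity estimates: bits used * average Q   */
    int    Np, Nb;     /* P- and B-frames still to be coded in this GOP */
    double bit_rate;   /* target bits per second                        */
    double frame_rate; /* pictures per second                           */
} rc_state;

/* Target bits for the next picture of type 'I', 'P', or 'B'. */
double target_bits(const rc_state *rc, char type)
{
    double t;
    double floor_bits = rc->bit_rate / (8.0 * rc->frame_rate);

    if (type == 'I')
        t = rc->R / (1.0 + rc->Np * rc->Xp / (rc->Xi * KP)
                         + rc->Nb * rc->Xb / (rc->Xi * KB));
    else if (type == 'P')
        t = rc->R / (rc->Np + rc->Nb * KP * rc->Xb / (KB * rc->Xp));
    else
        t = rc->R / (rc->Nb + rc->Np * KB * rc->Xp / (KP * rc->Xb));

    return fmax(t, floor_bits);  /* never below bit_rate/(8*frame_rate) */
}

/* After a picture is coded with S bits at average scale Q:
 *   rc->R -= S;  and the matching complexity (Xi, Xp, or Xb) becomes S*Q.
 * Accumulated bits beyond the GOP budget are the overflow plotted in Figure 8. */

With a low bit rate (small R), these targets shrink quickly as the GOP is consumed, which is exactly the behavior that leads to the large Q values and the buffer overflow discussed below.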


The performance degradation of the conventional rate control at low bit rates is not so surprising, though, since the accuracy of the rate-distortion theory is only guaranteed when the bit budget is sufficient and the quantization scale is small. At low bit rates, the quantization scale is large, which consequently makes the low-order (or even linear) approximation of the rate-distortion function more erroneous.

Figure 6. Bit Allocation of TM5 Rate Control. Bit Rate Is Set to 700kbps. Video Resolution Is 720x240. Image Complexity for I, P, B Frames Is Set to Xi=974kbits, Xp=365kbits, Xb=256kbits, Respectively.

The average quantization scale and the amount of buffer overflow for the above encoding experiments are shown in Figure 7 and Figure 8. In Figure 7, it can be seen that the quantization scale increases as the encoding progresses. Towards the end of the GOP, the quantization scale (Q) used for the B-frames increases to 112, which is the pre-defined maximum value allowed in the encoder. Any Q value higher than 112 is cut off to maintain a minimum level of viewing quality. This, however, causes severe buffer overflow for these frames.

The buffer overflow measurement provides a good indication of the reliability and effectiveness of a rate control algorithm. The overflow equals zero when the buffer is not full, and otherwise equals the accumulated actual bit count that exceeds the transmission buffer.


Figure 8 shows the measured buffer overflow for the low-bit-rate encoding mentioned above. The size of the (transmission) buffer is set to the target bit budget for a group of pictures, which is 303333 bits in this test. As the frames in the GOP are encoded, the encoded bits are temporarily stored in the transmission buffer, which is sent to the network when full. Also in Figure 8, we observe that the buffer does not overflow for the first four frames. At the fifth frame (a B-frame), the actual accumulated encoding bits exceed the transmission buffer by 25888 bits. The buffer overflow keeps increasing for the rest of the frames and reaches its highest level of 138411 bits by the end of the GOP.

Figure 7. Average Quantization Scale for a GOP, Encoded at 700kbps

Figure 8. GOP Target Buffer Overflow, Encoded at 700 kbps

We believe that a new way to control the encoding bit rate should be considered besides tuning the quantization scale. A direct but effective method is to reduce the number of macroblocks (MB) to be encoded.


With a smaller number of macroblocks, we will have a higher bits-per-symbol value, and thus a smaller quantization scale. Figure 9 describes the simplified new video encoding scheme.

Figure 9. Reducing the Number of MB Before Normal Rate Control and Encoding

The rationale behind this new scheme is that, by reducing the number of macroblocks, the encoder should be able to reduce the information loss due to heavy quantization, which is otherwise inevitable. Nevertheless, we do suffer information loss during the first step, when the number of macroblocks is reduced. There is thus a tradeoff to be made by comparing the two kinds of information loss. We provide the theoretical analysis of this decision process and the empirical method in the rest of this thesis.

Another aspect of reducing the number of MB is how we should select the original image data for the scaled-down version. For example, we can cut off the image data at the edges while only preserving the image in the center, or one may sort the MB in order of importance (e.g., higher importance is given to MB with high variance) and skip those MB at the end of the list. The full discussion of strategies in this direction is beyond the scope of this thesis. In the context of this thesis, we assume that the reduction of the macroblock number is achieved by an image resizing process, which converts the original image to a new image by proportionally scaling down in both dimensions. The resizing process is described by the following equation:


17 {} ](1,0.',.')'()'(|),()(1)','('0222,,,==<+==fandvfvufudvyuxNyxAwherepIAvuIvuApvuvu As shown in Figure 10, is the pixel value of the resized image at coordinate represents the corresponding pixels within a radius of in the original image centered at and is the resizing factor. The pixel value in the new-scaled image is the average of the pixels in the neighbor area in the original image defined by the resizing factor and the coordinate (. )','('vuI)vff )','(vu vuA, 0d ,(u ),vu u, v u ,v A u, v I I u, v u ,v A u, v I I H W H W H W H W Figure 10. Effects of Image Scaling The rate distortion theory implies that there is an optimum resizing factor, which will provide the minimum distortion. The next chapter takes an analytical approach into finding the optimal resizing factor.


CHAPTER 3
APPROXIMATING OPTIMAL RATE RESIZING FACTOR FUNCTION

In this chapter, we provide an analytical model from which the optimal image size can be derived. The distortion measurements in video encoding are the Mean Square Error (MSE) and the Peak Signal-to-Noise Ratio (PSNR). MSE is the average squared error between the compressed and the original image, whereas PSNR is a measure of the peak error in dB. In MPEG-2 encoding, each color component (YUV) of the raw image is represented by an 8-bit unsigned integer; thus the peak signal value is always less than or equal to 255. The PSNR and MSE are expressed as follows:

$$ \mathrm{PSNR} = 10 \log_{10}\!\left(\frac{255^2}{\mathrm{MSE}}\right), \qquad \mathrm{MSE} = \frac{1}{W \cdot H} \sum_{x=1}^{W} \sum_{y=1}^{H} \bigl(I(x,y) - I'(x,y)\bigr)^2 $$

The width and height of the image are represented by W and H, respectively; I(x,y) represents the original image and I'(x,y) the reconstructed image after encoding and decoding. However, the PSNR and MSE distortion measurements are very difficult to use in an analytical model. In the following discussion, we use the absolute difference as the measure of distortion.

We assume that the pixels of the video source are zero-mean i.i.d. (independent and identically distributed) with signal variance sigma_x^2 for each pixel. Results in Hang and Chen [6] indicate that the bits per symbol (b) versus the distortion caused by the normal encoding process (D_E) can be approximated by


$$ b(D_E) = \frac{1}{\gamma} \ln\!\left(\frac{\varepsilon^2 \sigma_x^2}{D_E}\right) \qquad (1) $$

In (1), gamma is a constant (1.386) and epsilon^2 is source dependent (usually 1.2 for a Laplacian distribution). Rearranging (1), we have

$$ D_E(b) = \varepsilon^2 \sigma_x^2 \, e^{-\gamma b} \qquad (2) $$

In the case of MPEG-2 encoding, our focus is on the I-frame, which is completely intra-coded. For P and B frames, the rate-distortion model is not applicable due to the coding gain from motion estimation. Now if we let B be the available bit count for a GOP, r be the percentage of bits assigned to the I-frame, and f be the image scale factor, then we have the following relation:

$$ B r = b \, W H f \qquad (3) $$

Equation (3) reflects a simple fact: the total bits used to encode an I-frame (B r) equal the number of pixels (which is W H f after the resizing effect) times the average bits per symbol, b. Substituting (3) into (2), we have

$$ D_E = \varepsilon^2 \sigma_x^2 \, e^{-\gamma \frac{B r}{W H f}} \qquad (4) $$

From (4), we can see that the scale factor (f) is inversely proportional to the available bit count (B).

Equation (4) only represents the distortion between the pre-encoded image and the reconstructed image. To describe the complete distortion of the resizing-then-encoding process, we should also quantify the information loss D_R of the resizing process. With the assumption of an i.i.d. distribution for the pixels, we define the image information (complexity) via the cumulative variance of all image pixels. The image complexity before and after resizing is IH(I) = W H sigma_x^2 and IH(I') = f W H sigma_x^2, respectively.


The loss of information from the resizing process can then be expressed as

$$ D_R = IH(I) - IH(I') = (1 - f) \, W H \sigma_x^2 $$

Now let us define the total distortion D_T as the sum of the distortion caused by normal encoding (D_E) and the distortion caused by resizing (D_R):

$$ D_T = D_E + D_R = \varepsilon^2 \sigma_x^2 \, e^{-\gamma \frac{B r}{W H f}} + (1 - f) \, W H \sigma_x^2 \qquad (5) $$

The optimal resizing factor must correspond to the smallest total distortion (D_T). Taking the first-order derivative of D_T as given by (5) and setting it to zero, we have

$$ \frac{\partial D_T}{\partial f} = \frac{\gamma B r \, \varepsilon^2 \sigma_x^2}{W H f^2} \, e^{-\gamma \frac{B r}{W H f}} - W H \sigma_x^2 = 0 \qquad (6) $$

The solutions of (6) correspond to the peaks (or valleys) of locally minimum/maximum distortion. To find the optimal resizing factor, these local peaks/valleys are substituted into (5), along with the end point f = 1.

Figure 11 shows the relationship between the distortion and the resizing factor based on (5). The image size is fixed to 704x480, and the curves for different target bit rates are plotted. When the target bit rate is set to 2400 kbps, we observe that the optimal resizing factor is about 0.64. As the target bit rate decreases, the best resizing factor moves to the left (i.e., becomes smaller). For instance, the optimal resizing factor at an 1800 kbps bit rate is 0.55. This behavior matches our prediction well. However, the resizing factor corresponding to the low-end bit rates (e.g., 100 kbps) indicates that the image size should be scaled down by more than 30 times, which is not acceptable in reality.


In fact, since the distortion model of (1) and (2) is valid only when the bits per pixel are relatively sufficient, this model provides a reasonable prediction only when the target bit rate is adequate.

Figure 11. The Overall Distortion-Resizing Functions When Different Target Bit Rates Are Given. The Video Resolution Is 704x480 (DVD Quality).

From the mathematical analysis, we have observed that it is theoretically possible to find the optimal resizing factor given the bit rate and the image resolution. Nevertheless, the accuracy of the above analysis highly depends on the statistical assumptions about each pixel (i.e., that the variance is the same for all pixels). In reality, the i.i.d. assumptions are not always applicable. Therefore, we pursued the PSNR-based approach to investigate the effects of image scaling. The following chapters describe the experimental design and results of this investigation.
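In the spirit of the approximation in Appendix C (which is written in MATLAB), the model of equation (5) can be evaluated numerically by sweeping f over (0, 1] and keeping the minimizer. The C sketch below does exactly that; the constants (gamma = 1.386, epsilon^2 = 1.2, unit per-pixel variance, a 704x480 frame, r = 0.2, and the GOP budget B) are illustrative placeholders, so its output is not meant to reproduce the Figure 11 curves.

#include <math.h>
#include <stdio.h>

/* Total distortion D_T(f) from equation (5): encoding distortion of the
 * resized image plus the information lost by the resizing itself. */
static double total_distortion(double f, double B, double r,
                               double W, double H,
                               double gamma_, double eps2, double var)
{
    double De = eps2 * var * exp(-gamma_ * B * r / (W * H * f));
    double Dr = (1.0 - f) * W * H * var;
    return De + Dr;
}

int main(void)
{
    double W = 704.0, H = 480.0;  /* frame size (placeholder)                */
    double r = 0.2;               /* share of the GOP budget for the I-frame */
    double gamma_ = 1.386, eps2 = 1.2, var = 1.0;
    double B = 2400e3;            /* GOP bit budget (placeholder)            */

    double best_f = 1.0;          /* always consider the end point f = 1     */
    double best_d = total_distortion(best_f, B, r, W, H, gamma_, eps2, var);
    for (double f = 0.05; f < 1.0; f += 0.01) {
        double d = total_distortion(f, B, r, W, H, gamma_, eps2, var);
        if (d < best_d) { best_d = d; best_f = f; }
    }
    printf("approximate optimal resizing factor: %.2f\n", best_f);
    return 0;
}

Solving equation (6) directly would give the same stationary points; the grid sweep is simply a robust way to also account for the end point f = 1.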


CHAPTER 4
SYSTEM DESIGN

This chapter provides the design rationale for this thesis. Results from initial testing are presented to provide reasoning for the choices made in the system design. Experimental assumptions, parameters, and scenarios will also be discussed.

To test Adaptive Image Scaling, software-only versions of the MPEG-2 encoder and decoder were used. Figure 12 shows the system design components. The baseline for the encoder/decoder codec used in this study was from the MPEG Software Simulation Group (MSSG) shareware site [21]. The baseline encoder and decoder are standalone, and were modified to allow streaming video as shown in Figure 12. The modified encoder sends video streams over the network to the decoder. The individual frames that are to be encoded are stored on disk in PPM format. The frames are encoded and decoded on the fly over the network. The server and the client are connected using TCP/IP.

Figure 12. System Design Components


One of the parameters that can be passed to the encoder is the bit rate. The baseline encoder does adapt the quantization parameter based on the bit rate. As discussed earlier, the quantization parameter is bandwidth-adapted as shown in Figure 13 (i.e., as the bit rate decreases, the quantization parameter increases). For high bit rates, the quantization parameter is set to the minimum value of 1, and for low bit rates, the quantization parameter is gradually increased until the maximum value of 112 is used. The relationship between bit count and quantization is nonlinear [2].

Figure 13. Bandwidth-Adaptive Quantization

When the encoder detects a drop in the bit rate, the reference frame will be scaled down. As explained in the earlier chapters, sending a scaled-down version of the frame requires a smaller amount of bits to be transferred over the network, thus lowering the bandwidth requirement. The trade-off for scaling down is the loss of picture quality, but the quality loss experienced from scaling down should be smaller than the loss experienced without it. When the baseline decoder receives the scaled-down image, it automatically adjusts to the new scale. However, as mentioned earlier, from the user-unaware perspective, the user should only view the image at a fixed resolution (i.e., that of the reference picture).


This can be achieved by modifying the decoder to display the scaled image only at the reference size. When scaling up an image, negligible losses in the PSNR value were observed (average loss of 0.02).

4.1 Scaled-PSNR Based Approach

Since measuring video quality can be subjective [18], we decided to primarily use the objective metric of PSNR. The baseline encoder provides frame-by-frame PSNR information along with other statistical information that was used in our study. The reasons for pursuing a Scaled-PSNR based approach were gathered from initial experimental results, which show that, at low bit rates, smaller resolutions produce PSNR values as good as or better than those of the original resolution. Figure 14 and Figure 15 show the PSNR and quantization values, respectively, for Frame 45 (the 4th I-frame), obtained from multiple runs of the baseline encoder using different combinations of bit rates and resolutions. For initial testing, three different resolutions of the mei20f.m2v [20] sequence were used: 704 pixels wide (f=1.0), 352 pixels wide (f=0.5), and 176 pixels wide (f=0.25). The three sets of resolutions were created and stored on disk prior to running the encoder. The bit rates that were tested are as follows: 10kbs, 100kbs, 300kbs, 500kbs, 1mbs, 1.5mbs, 1.7mbs, 2mbs, 3mbs, 4mbs, 5mbs, 7mbs, 9mbs, 15mbs, 20mbs, 40mbs, and 60mbs.

Figure 14. Scaled-down PSNR Values


Figure 15. Scaled-down Quantization Values

It can be seen that as the bit rate increases for the different resolutions, the PSNR values saturate at a certain value once the quantization scale is set to the minimum value (1). For the f=0.25 scale, the saturation point was seen at the bit rate of 9mbs with a value of 48.8dB (f=0.5, at 20mbs with a value of 48.9dB; f=1.0, at 40mbs with a value of 49.0dB). It can be observed that when the bit rate was more than adequate, the original resolution had better PSNR values. This shows that, when the bit rate is adequate, image scaling is not required. However, as the bit rate decreases from the saturation point, we see that there is a huge variance between the quantization values used for the different scales. The biggest difference was seen at 500kbs, where the f=1.0 scale used the maximum allowed quantization (112) while the f=0.5 scale used roughly half that amount, and the f=0.25 scale still maintained a low quantization scale. We can see that the f=0.25 scale increases its quantization to half the maximum value only at bit rates of 300kbs or lower, while the f=1.0 scale begins to use the maximum quantization at bit rates less than 1mbs. At low bandwidths, the f=1.0 scale does lose information from the use of higher quantization scales.


From the initial testing, it can be seen that the PSNR value does vary based on the scale at different bit rates. As a result of the initial testing, we have more quantitative evidence that, for low bit rates, transmitting smaller amounts of data allows less information loss due to quantization. The initial results show that the scaled-PSNR approach is only useful in low-bandwidth situations (here, less than 1mbs), where the loss of information from scaling down is preferable to the effect of using a significantly larger quantization on the original image. Additionally, when there is enough bandwidth, it is better to use the original resolution, since a comparably smaller quantization scale is used. Thus, the scaled-PSNR based approach was chosen to test Adaptive Image Scaling.

4.2 Dynamic Adaptive Image Scaling Design

In the dynamic Adaptive Image Scaling scheme, the bit rate is monitored and the encoder is notified of any changes. When the encoder detects a change in the bit rate, it must decide whether scaling the image would produce a better picture for the end user. The start of a GOP is used to make adjustments to the bandwidth, for simplicity and to gather initial results. Finer granularity (e.g., per-frame monitoring) can be added at later stages of this research. When the encoder detects a significant change in the bandwidth, the current image size is scaled down dynamically in an effort to cope with the limited resources. For the current research purposes, to test different scales, the scale factor was specified as a parameter to the encoder. For simplicity, if a new scale is to be used, the encoder closes the current sequence and restarts a new sequence using the scaled-down image. Another design decision was to dynamically scale (i.e., render) the image as the encoder executes.


The execution time for scaling the image was negligible, especially considering the under-utilization of the server CPU [10] discussed earlier. This decision reduces storage requirements and allows support for live video. To focus on the effects of image scaling on the encoding process, the issues of real-time detection of bandwidth changes and reaction to those changes will not be addressed in this thesis. The contributions in this area [8,13] can be integrated into future work on Adaptive Image Scaling. For experimental purposes, for every GOP, the initial bit rate is reduced by a bandwidth reduction rate. The bandwidth reduction rate was used to simulate dynamic network bandwidth degradation. For example, if the initial bandwidth was specified to be 700kbs and the bandwidth reduction rate was 50%, the first GOP will be encoded using 700kbs, the second GOP will be encoded using 350kbs, the third GOP will use 175kbs, and so on.

In the current design, when encountering a lower bit rate, if a smaller image does not yield a better PSNR value, the current image scale is used. In other words, no further scaling is performed. This was done in the interest of addressing the timing issues of encoding. As discussed earlier, it is not realistic to expect an encoder to try multiple image scale factors before proceeding. Thus the encoder must make a quick decision based on its current scale and its next lower size. Note that the actual scaled size must be slightly adjusted to ensure that the dimensions meet the other encoder constraints such as macroblock divisibility. Since the resizing factor changes gradually, with only a small difference every time, the above procedure could require multiple-frame delays to converge to an optimal image size.


To be able to quickly benefit from a sudden recovery of network bandwidth, and also to obtain comparison data, the original reference picture resolution is always compared to the current and next lower resolutions. The resolution that yields the best PSNR value is used. The original picture resolution is preferred when there is enough bandwidth. To see the effects of image scaling, we added an override parameter to the encoder that ignores the dynamic rendering results and simply chooses the reference picture resolution at every GOP. In the override mode, the scale factor is used to report the potential resolutions that the encoder could have chosen instead of the original resolution. An example of the proposed dynamic Adaptive Image Scaling is shown in Figure 16. In this scenario, the encoder dynamically scales down the original image (704x240) using a factor of f=0.5. The scaled image is encoded and sent over the network. When the decoder receives the scaled image, it reads the display size and scales the image up to the original size. It should be noted that the input to the encoder is the original image (i.e., no other resolutions are stored on disk). The next chapter provides the experimental results from using dynamic Adaptive Image Scaling with different bit rates and scale factors. The effects of image scaling will be quantitatively examined, and the resulting objective and user-perceived quality metrics will be provided.


Figure 16. Adaptive Image Scaling Scenario (the encoder sends a scaled image due to low bandwidth; the decoder displays the video at a constant resolution)
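The per-GOP decision described in Section 4.2 can be summarized by a short sketch: at each GOP boundary the encoder compares the I-frame PSNR of the original, the current, and the next-lower resolutions and keeps the best one. This is a simplified illustration with assumed names (encode_i_frame_psnr is a hypothetical trial-encode hook); the actual implementation is in the modified encoder sources of Appendix A.

/* Per-GOP scale selection (simplified): keep whichever of the original,
 * current, and next-lower resolutions gives the best I-frame PSNR. */
typedef struct {
    double f;   /* resizing factor, 1.0 = original resolution */
} scale_choice;

extern double encode_i_frame_psnr(double f, double bit_rate);  /* assumed hook */

scale_choice choose_scale(double current_f, double next_lower_f,
                          double bit_rate)
{
    double psnr_orig    = encode_i_frame_psnr(1.0,          bit_rate);
    double psnr_current = encode_i_frame_psnr(current_f,    bit_rate);
    double psnr_lower   = encode_i_frame_psnr(next_lower_f, bit_rate);

    scale_choice best = { 1.0 };           /* prefer the original by default */
    double best_psnr = psnr_orig;

    if (psnr_current > best_psnr) { best_psnr = psnr_current; best.f = current_f; }
    if (psnr_lower   > best_psnr) { best_psnr = psnr_lower;   best.f = next_lower_f; }
    return best;    /* the chosen scale is used for the whole next GOP */
}

In the experiments, the bit rate passed in at each GOP is the value produced by the bandwidth reduction rate (e.g., 700kbs, 350kbs, 175kbs, ...).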


CHAPTER 5
EXPERIMENTAL RESULTS

Our proposed dynamic rendering encoder was tested with various combinations of the initial bit rate, bandwidth reduction rate, and scale factor. In addition to the mei20f.m2v sequence used in the initial testing, the bbc3_120.m2v sequence was also tested. Both sequences were downloaded from the MPEG2-Video Test Stream Archive web site [20]. The bbc3_120.m2v sequence has a faster rate of motion and a frame width of 704 pixels. We generalized the data that we gathered into two major cases of behavior: Case 1, when the scaled-down image had a higher objective quality, and Case 2, when the scaled-down image had a slightly lower objective quality.

5.1 Case 1

The dynamic rendering test was run with an initial bit rate of 1.5 mbps, a Bandwidth Reduction Rate of 50%, and a Scale Factor of 50%. Using the override flag, the same test was also conducted to see the effects of not using image scaling. To compare the effects of image scaling on the rest of the frames, the following metrics were compared: PSNR, Quantization Parameter, Target Bits, and Actual Bits. Four GOPs were started using Adaptive Image Scaling. When image scaling was used, the PSNR values from the different resolutions at the start of every GOP are listed in Table 1; the resolution with the best PSNR value is the one chosen. Note that in Table 1 the Previous Scale column reflects the scale that had been used for the previous GOP (i.e., the current scale).


Table 1. Case 1 Adaptive Image Scaling I-Frame PSNR Data

GOP | Bit Rate | Scaled-down Resolution | Previous Scale PSNR (dB) | Original PSNR (dB) | Scaled-down PSNR (dB)
1 | 1.5mbs | 512x192 | N/A | 24.6 | 26.0
2 | 750kbs | 352x128 | 23.0 | 22.0 | 24.2
3 | 375kbs | 256x96 | 21.1 | 20.7 | 22.9
4 | 187.5kbs | 176x64 | 19.4 | 20.2 | 21.0

Figure 17 shows the frame-by-frame PSNR values of Normal and Adaptive Image Scaled Encoding. The PSNR results show that image scaling produces comparable results. The overall PSNR values for all the frames are relatively close (i.e., our encoder did not lose significant quality even though the bit rates were reduced significantly). All the I-frames (e.g., Frames 0, 15, 30, 45) even have better PSNR values for the scaled-down version of the frames. The maximum PSNR improvement is in the range of 2 dB.

Figure 17. Case 1 Scaled-PSNR Comparison

By analyzing each frame, it can be seen that the initial frames (0-14, associated with the first GOP) have similar PSNR values, while the frames of the second and third GOPs (15-42) have better PSNR values using Adaptive Image Scaling. However, towards the end of the 3rd GOP (43-45), the PSNR values for Adaptive Image Scaling start to provide lower results (though they are still quite comparable except for the final frame). As shown in Figure 18, the quantization parameter comparison gives further reasons for the improvement over the original PSNR values.


When image scaling is used, a lower quantization parameter is required. As the encoding of the frames progresses with lower bit rates, higher quantization values are used in the normal encoding.

Figure 18. Case 1 Scaled-Quantization Comparison

We observed that at low bandwidths, a scaled-down image using a lower quantization parameter does better than a non-scaled image using a high quantization parameter. It can be seen that the gap between the quantization parameters required for the image scaling run and the non-image-scaling run increases as the encoding encounters lower bit rates. When image scaling is used, the quantization parameter stays relatively the same, with a much smaller range: 25-60. This is not the case for the normal encoding run, which has a range of 37-112. Lower quantization values produce images with better PSNR. Our proposed scheme generally uses 50% to 100% smaller quantization parameters compared to the normal encoding scheme. Another metric to examine is the target bits of each GOP. Figure 19 shows that, using normal encoding, the overflow of the GOP target bits per frame gradually increases. It can be seen that as the encoding progresses past a certain point (here, after frame number 21), the buffer overflow in the normal encoding gets worse.


Note that at the start of every GOP, the target bits of the current GOP need to be reset. We did not observe a similar buffer overflow in our proposed encoder.

Figure 19. Case 1 GOP Target Bits Overflow Comparison

For Case 1, the scaled-down resolution was always chosen, as shown in Table 1. This particular case perhaps represents the best scenario (i.e., the scaled-down adaptation simply outperformed the normal encoder for all four GOPs). Nevertheless, no encoder can work perfectly for every kind of video content. The following Case 2 represents this possibility.

5.2 Case 2

It is true that sometimes Adaptive Image Scaling also needs to consider additional factors. Case 2 was run with an initial bit rate of 700 kbs, a Bandwidth Reduction Rate of 50%, and a Scale Factor of 50%. The Case 2 PSNR values from the different resolutions at the start of every GOP are listed in Table 2. Using the override flag, a test was conducted to see the effects of dynamic rendering using the mentioned parameters.

Table 2. Case 2 Adaptive Image Scaling I-Frame PSNR Data

GOP | Bit Rate | Scaled-down Resolution | Previous Scale PSNR (dB) | Original PSNR (dB) | Scaled-down PSNR (dB)
1 | 700kbs | 512x192 | N/A | 22.0 | 22.6
2 | 350kbs | 352x128 | 20.8 | 20.7 | 20.9
3 | 175kbs | 352x128 | 19.2 | 20.3 | 19.9
4 | 87.5kbs | 256x96 | N/A | 20.0 | 19.6


Figure 20 shows the effects of not choosing the scaled-down resolution for the last two GOPs. The Adaptive Image Scaling version uses lower quantization values until the beginning of the 3rd GOP, after which both runs produce the same quantization values.

Figure 20. Case 2 Quantization Comparison (x-axis: Frame Number; y-axis: Average Quantization Scale; series: Normal Encoding, Adaptive Image Scaling)

The correlation between image size and PSNR value can be seen in Figure 21, in which the bit rate decreases with each smaller scale. At the higher resolutions, the PSNR of the scaled-down image is about the same as that of the original resolution. The scaled-down PSNR becomes the better indicator only when the image size decreases below a scale of 0.25. Thus, for this particular video content, the results suggest that the gain is only achieved at bit rates typical of 3G wireless data transmission.

Figure 21. Comparison of Scaled-down PSNRs (series: Scaled-down PSNR, Original PSNR)
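The per-GOP choice behind Tables 1 and 2 is, in essence, a comparison of candidate Y-PSNR values with an optional override. The sketch below is simplified from the testLevel logic in the modified putseq.c of Appendix A; the structure and function names are illustrative, not the exact thesis implementation.

#include <stdio.h>

struct gop_candidates {
    double psnr_ref;      /* level 1: original (reference) resolution */
    double psnr_curr;     /* level 2: currently used scaled resolution */
    double psnr_next;     /* level 3: next scale-down step */
    int    override_flag; /* force the reference resolution when set */
};

/* Return the test level (1, 2, or 3) whose resolution the next GOP should
 * use: simply the candidate with the highest Y-PSNR, unless overridden. */
static int pick_level(const struct gop_candidates *c)
{
    int best = 1;
    double best_psnr = c->psnr_ref;

    if (c->override_flag)
        return 1;
    if (c->psnr_curr > best_psnr) { best = 2; best_psnr = c->psnr_curr; }
    if (c->psnr_next > best_psnr) { best = 3; }
    return best;
}

int main(void)
{
    /* 3rd GOP of Case 2 (Table 2): 20.3 dB original, 19.2 dB at the previous
     * scale, 19.9 dB at the new scaled-down candidate. */
    struct gop_candidates gop3 = { 20.3, 19.2, 19.9, 0 };
    printf("chosen level: %d\n", pick_level(&gop3));   /* prints 1 (original) */
    return 0;
}

For the 3rd GOP of Case 2, this rule picks the original resolution (20.3 dB vs. 19.9 dB), which is exactly the borderline decision examined next.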


For the example of the 3rd I-frame, Adaptive Image Scaling strictly used the PSNR values to pick the original resolution over the scaled resolution. The difference between the two PSNR values was small: 20.3 dB vs. 19.9 dB. However, even with this 0.4 dB loss relative to the conventional encoder, our proposed encoder always transmits a smaller amount of video data than the conventional one. From a purely PSNR-based consideration, the deciding parameters at the start of each GOP were as follows: for the first two GOPs, the scaled-down resolution was picked; for the 3rd and 4th GOPs, the original picture had the better PSNR and was therefore chosen.

Case 2 deserves further analysis, since the decision is no longer clear-cut. It is arguable that the small PSNR variation is tolerable, while saving bandwidth is always a design plus for the overall network. To show that small variations in the scaled-PSNR still provide comparable pictures, we provided the pictures to several human testers for subjective quality measurement. The pictures were taken from the right side of the video frames (see Appendix D). The reference picture was encoded at a bit rate of 60 Mbps (i.e., highest quality) and as a result used 1,425,311 bits with a PSNR of 37.3 dB. The other two pictures were taken from the 3rd I-frames of Case 2 above (encoded at a significantly lower bit rate of 175 kbps), and, as their PSNR values (20.3 dB vs. 19.9 dB) indicate, the testers reported their visual quality to be very comparable as well. Since both images suffer at such low bit rates, choosing the scaled-down version provides comparable results while cutting the bit requirement by more than half. Thus, it is better to choose the image with the slightly lower PSNR and reduce the bandwidth requirement.
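The trade-off for this 3rd I-frame can be quantified directly from the picture sizes reported in Appendix D (a back-of-the-envelope comparison using the figures already given, not an additional experiment):

    \frac{S_{\text{scaled}}}{S_{\text{original}}} = \frac{36{,}081}{91{,}627} \approx 0.39, \qquad \Delta\mathrm{PSNR} = 20.3 - 19.9 = 0.4 \text{ dB}.

That is, the scaled-down I-frame spends roughly 61% fewer bits at the cost of a 0.4 dB PSNR drop, which motivates the tolerance-based selection examined in Section 5.3.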


5.3 Further Analysis

From dynamic image scaling we find that at low bit rates, when the PSNR values of the original and scaled-down frames are close, it is better to encode using the scaled-down resolution. Based on this finding, we further investigated encoding with different scales at low (i.e., 700 kbps or lower), constant bit rates. The findings for the fourth I-frame (the 45th frame) of both sequences are listed in Table 3 and Table 4. At these low bandwidths, we tested the scaled-down PSNR values for the scales 0.25, 0.50, and 0.75.

Table 3. Frame 45 Testing of Sequence mei20f.m2v

Bit Rate   Optimal Scale   Scaled-PSNR Gain (dB)   Recommended Scale (Tolerance: 0.5)   Scaled-PSNR Gain (dB)
700 kbps   0.25            1.2                     0.25                                 1.2
300 kbps   0.5             -0.1                    0.25                                 -0.5
64 kbps    0.75            -0.2                    0.5                                  -0.4

Table 4. Frame 45 Testing of Sequence bbc3_120.m2v

Bit Rate   Optimal Scale   Scaled-PSNR Gain (dB)   Recommended Scale (Tolerance: 1.5)   Scaled-PSNR Gain (dB)
700 kbps   0.75            0.4                     0.25                                 -0.4
300 kbps   0.75            0.1                     0.5                                  -1.0
64 kbps    0.75            -0.2                    0.5                                  -1.6

The results suggest that scaling down to at least a scale of 0.75 produces results as good as or better than using the original resolution. In Table 3 and Table 4, the Scaled-PSNR Gain is the gain relative to the original-resolution PSNR, and the Recommended Scale is the scale that meets a minimum quality requirement given a tolerance for the scaled-PSNR loss. The exact values vary with the image complexity of the sequence used. The bbc3_120.m2v sequence requires a higher tolerance before its recommended scales can be used, and its optimal scale is higher; sequences with a higher rate of motion and detail have higher rate-distortion values [16].
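The Recommended Scale column can be read as the output of a simple rule: among the tested scales, pick the smallest one whose scaled-PSNR loss relative to the original stays within the tolerance. The sketch below illustrates such a rule; the function and parameter names are ours, and the thesis tables were produced by offline measurement rather than by this exact routine.

#include <stdio.h>
#include <stddef.h>

/* Among candidate scales sorted from smallest to largest, return the smallest
 * scale whose scaled-PSNR gain (in dB, relative to the original resolution)
 * is no worse than -tolerance_db; fall back to 1.0 (the original resolution)
 * if none of the candidates qualifies. */
static double recommended_scale(const double *scales, const double *gain_db,
                                size_t n, double tolerance_db)
{
    size_t i;

    for (i = 0; i < n; i++)
        if (gain_db[i] >= -tolerance_db)
            return scales[i];
    return 1.0;
}

int main(void)
{
    /* mei20f.m2v at 300 kbps (Table 3): a -0.5 dB gain at scale 0.25 and a
     * -0.1 dB gain at scale 0.5.  With a 0.5 dB tolerance the smaller scale
     * is acceptable, matching the table's recommended scale of 0.25. */
    double scales[]   = { 0.25, 0.5 };
    double gains_db[] = { -0.5, -0.1 };

    printf("recommended scale: %.2f\n",
           recommended_scale(scales, gains_db, 2, 0.5));   /* prints 0.25 */
    return 0;
}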


The next chapter draws conclusions from the results gathered in this thesis.


CHAPTER 6
CONCLUSION

In this thesis, we presented a study of a novel design and implementation of bandwidth-aware Adaptive Image Scaling for MPEG-2/4 encoding. The strength of the scheme is that it can reduce network traffic during congestion and still obtain comparable quality. From the experimental results, it was observed that using a small PSNR tolerance provided better results than using a zero-tolerance criterion. This work proposes a complementary scheme that integrates well with adaptive quantization and other methods of degrading the video's fidelity.

6.1 Contributions

Even though works such as Fox et al. [4] address resolution scaling as an option for adjusting data fidelity, this thesis has performed a thorough investigation of the effects of MPEG-2 dynamic Adaptive Image Scaling. The most significant contribution of this thesis is the finding that, under low-bandwidth conditions, image scaling can produce video of equal or better quality with significantly fewer bits than the original resolution. Furthermore, this thesis uniquely provides an independent theoretical model which, when tested, backs the experimental results. Both the theoretical and experimental results show that there is an optimal scale factor that provides the minimum distortion.

6.2 Future Work

All the VBR experiments in this thesis assumed a decreasing bit rate in order to focus on low-bandwidth behavior.


This thesis can be extended to handle random bit rate fluctuations. Though a two-pass approach was used here to compare the effects of image scaling against the reference, an integrated one-pass approach could be implemented with a rule-based algorithm that determines the appropriate fidelity from factors such as PSNR, bit rate, and scale.

PAGE 51

APPENDIX A PERTINENT MPEG-2 ENCODER SOURCE CODE CHANGES (Note: Modified code from MPEG Software Simulation Group [21] is in boldface.) mpeg2enc.c /* mpeg2enc.c, main() and parameter file reading */ /* Copyright (C) 1996, MPEG Software Simulation Group. All Rights Reserved. */ /* Disclaimer of Warranty * These software programs are available to the user without any license fee or royalty on an "as is" basis. The MPEG Software Simulation Group disclaims any and all warranties, whether express, implied, or statuary, including any implied warranties or merchantability or of fitness for a particular purpose. In no event shall the copyright-holder be liable for any incidental, punitive, or consequential damages of any kind whatsoever arising from the use of these programs. * This disclaimer of warranty extends to the user of these programs and user's customers, employees, agents, transferees, successors, and assigns. * The MPEG Software Simulation Group does not represent or warrant that the programs furnished hereunder are free of infringement of any third-party patents. * Commercial implementations of MPEG-1 and MPEG-2 video, including shareware, are subject to royalty fees to patent holders. Many of these patents are general enough such that they are unavoidable regardless of implementation design. */ #include #include #define GLOBAL /* used by global.h */ #include "config.h" #include "global.h" #include #include #include #include #include #include #include #include #include #include #include #define MAXSERVERDATASIZE 80 #define MAXHOSTNAME 80 #define MYPORT 5000 // the port users will be connecting to #define BACKLOG 5 // how many pending connections queue will hold #define MAXCLIENTS 4 #define MAXTITLES 10 #define MAXTITLELENGTH 15 /* pointer to a signal handler */ 40

PAGE 52

41 typedef void (*sighandler_t)(int); int clientcount = 0; /* private prototypes */ static void readparmfile _ANSI_ARGS_((char *fname)); static void readquantmat _ANSI_ARGS_((void)); static int openConnection _ANSI_ARGS_((void)); // control c signal handler static void CtrlC() { printf (": Handling CONTROL-C\n"); printf (": Server is going DOWN!\n"); close(newSockfd); close(sockfd); fclose(outfile); fclose(statfile); } // signal handler for fork'ed process termination static void ChildTerm() { clientcount--; printf(": child has died\n"); printf(": clientcount is %d \n", clientcount); } int main(argc,argv) int argc; char *argv[]; { if (argc!=3) { printf("\n%s, %s\n",version,author); printf("Usage: mpeg2encode in.par out.m2v\n"); exit(0); } /* read parameter file */ readparmfile(argv[1]); printf("Reading quantization matrices....\n"); /* read quantization matrices */ readquantmat(); /* open output file */ if (!(outfile=fopen(argv[2],"wb"))) { sprintf(errortext,"Couldn't create output file %s",argv[2]); error(errortext); } printf("Openning Connection ....\n"); /* open connection */ openConnection(); return 0; } static int openConnection() { struct sockaddr_in serverAddr; // my address information struct sockaddr_in clientAddr; // connector's address information struct hostent *hp; int count, addrlen, nbytes; char myname[MAXHOSTNAME+1];

PAGE 53

42 char serverBuffer[MAXSERVERDATASIZE]; int i, selections; char *bitStream; int spaceindex, userIndex; // clientcount = 0; if ((sockfd = socket(AF_INET, SOCK_STREAM, 0)) == -1) { perror("socket"); exit(1); } memset(&serverAddr, 0, sizeof(struct sockaddr_in)); // clear our address if (gethostname(myname, MAXHOSTNAME) == -1) // who are we... { perror("socket"); exit(1); } if ((hp= (struct hostent *)gethostbyname(myname)) == NULL) // get our address info { perror("gethostbyname"); exit(-1); // per tutorial -1 } serverAddr.sin_family = AF_INET; serverAddr.sin_port = htons(port_number); // short, network byte order if (bind(sockfd, (struct sockaddr *)&serverAddr, sizeof(serverAddr))== -1) { perror(": bind"); exit(1); } printf(": Server is UP at %s, port %d\n", myname, port_number); // signal hanler for control c --> close the socket! signal(SIGINT, CtrlC); if (listen(sockfd, BACKLOG) == -1) { printf(": listen failure %d\n",errno); perror(": listen"); exit(1); } // ignore child process termination // signal (SIGCHLD, SIG_IGN); signal (SIGCHLD, ChildTerm); clientcount = 0; while(1) // main accept() loop { addrlen = sizeof(serverAddr); if (clientcount < MAXCLIENTS) { printf(": waiting to accept new connection \n"); if ((newSockfd = accept(sockfd, &serverAddr, &addrlen)) == -1) { printf(": accept failure %d\n",errno); perror(": accept"); exit(1); } printf(": got new connection from %s\n", inet_ntoa(clientAddr.sin_addr)); switch(fork()) { /* try to handle connection */ case -1 : /* bad news. scream and die */

PAGE 54

43 perror(": fork"); close(sockfd); close(newSockfd); exit(1); case 0 : /* we're the child, do something */ close(sockfd); // close the parent socket fd //1. read request for list of titles if ((nbytes = read(newSockfd, serverBuffer, MAXSERVERDATASIZE)) < 0) { perror(": read"); } if (nbytes != 19) { printf("Error -could not read request for titles, btyes read is %d\n", nbytes); } printf(": Read request for titles, btyes read is %d\n", nbytes); printf(": Ready to send bitstream for decoding\n"); init(horizontal_size, vertical_size); putseq(); break; default : /* we're the parent so look for */ close(newSockfd); clientcount++; printf(": clientcount is %d \n", clientcount); continue; }; } } } void init(int inputH_size, int inputV_size) { int i, size; static int block_count_tab[3] = {6,8,12}; static int firstTime = 1; horizontal_size = inputH_size; vertical_size = inputV_size; imageScaleTestBufCnt = 0; range_checks(); profile_and_level_checks(); /* Clip table */ if (!(Clip=(unsigned char *)malloc(1024))) Error("Clip[] malloc failed\n"); Clip += 384; for (i=-384; i<640; i++) Clip[i] = (i<0) ? 0 : ((i>255) ? 255 : i); initbits(); init_fdct(); init_idct(); /* round picture dimensions to nearest multiple of 16 or 32 */ mb_width = (horizontal_size+15)/16; mb_height = prog_seq ? (vertical_size+15)/16 : 2*((vertical_size+31)/32); mb_height2 = fieldpic ? mb_height>>1 : mb_height; /* for field pictures */ width = 16*mb_width; height = 16*mb_height; chrom_width = (chroma_format==CHROMA444) ? width : width>>1; chrom_height = (chroma_format!=CHROMA420) ? height : height>>1;

PAGE 55

44 height2 = fieldpic ? height>>1 : height; width2 = fieldpic ? width<<1 : width; chrom_width2 = fieldpic ? chrom_width<<1 : chrom_width; block_count = block_count_tab[chroma_format-1]; /* clip table */ if (!(clp = (unsigned char *)malloc(1024))) error("malloc failed\n"); clp+= 384; for (i=-384; i<640; i++) clp[i] = (i<0) ? 0 : ((i>255) ? 255 : i); for (i=0; i<3; i++) { size = (i==0) ? width*height : chrom_width*chrom_height; if (refHsize == horizontal_size && refVsize == vertical_size) refSize[i] = size; if (!firstTime) { free(newrefframe[i]); free(oldrefframe[i]); free(auxframe[i]); free(neworgframe[i]); free(oldorgframe[i]); free(auxorgframe[i]); free(predframe[i]); free(tempframe[i]); free(temp2frame[i]); } if (!(newrefframe[i] = (unsigned char *)malloc(size))) error("malloc failed\n"); if (!(oldrefframe[i] = (unsigned char *)malloc(size))) error("malloc failed\n"); if (!(auxframe[i] = (unsigned char *)malloc(size))) error("malloc failed\n"); if (!(neworgframe[i] = (unsigned char *)malloc(size))) error("malloc failed\n"); if (!(oldorgframe[i] = (unsigned char *)malloc(size))) error("malloc failed\n"); if (!(auxorgframe[i] = (unsigned char *)malloc(size))) error("malloc failed\n"); if (!(predframe[i] = (unsigned char *)malloc(size))) error("malloc failed\n"); if (!(tempframe[i] = (unsigned char *)malloc(size))) error("malloc failed\n"); if (!(temp2frame[i] = (unsigned char *)malloc(size))) error("malloc failed\n"); } mbinfo = (struct mbinfo *)malloc(mb_width*mb_height2*sizeof(struct mbinfo)); if (!mbinfo) error("malloc failed\n"); blocks = (short (*)[64])malloc(mb_width*mb_height2*block_count*sizeof(short [64])); if (!blocks) error("malloc failed\n"); // should not recreate a new file on multiple call (just open and append to it!) /* open statistics output file */ if (firstTime) { if (statname[0]=='-') statfile = stdout; else if (!(statfile = fopen(statname,"w"))) {

PAGE 56

45 sprintf(errortext,"Couldn't create statistics output file %s",statname); error(errortext); } } if (firstTime) firstTime = 0; } void error(text) char *text; { fprintf(stderr,text); putc('\n',stderr); exit(1); } static void readparmfile(fname) char *fname; { int i; int h,m,s,f; FILE *fd; char line[256]; static double ratetab[8]= {24000.0/1001.0,24.0,25.0,30000.0/1001.0,30.0,50.0,60000.0/1001.0,60.0}; extern int r,Xi,Xb,Xp,d0i,d0p,d0b; /* rate control */ extern double avg_act; /* rate control */ if (!(fd = fopen(fname,"r"))) { sprintf(errortext,"Couldn't open parameter file %s",fname); error(errortext); } fgets(id_string,254,fd); fgets(line,254,fd); sscanf(line,"%s",tplorg); printf("tplorg = %s\n",tplorg); fgets(line,254,fd); sscanf(line,"%s",tplref); fgets(line,254,fd); sscanf(line,"%s",iqname); fgets(line,254,fd); sscanf(line,"%s",niqname); fgets(line,254,fd); sscanf(line,"%s",statname); fgets(line,254,fd); sscanf(line,"%d",&inputtype); fgets(line,254,fd); sscanf(line,"%d",&nframes); fgets(line,254,fd); sscanf(line,"%d",&frame0); fgets(line,254,fd); sscanf(line,"%d:%d:%d:%d",&h,&m,&s,&f); fgets(line,254,fd); sscanf(line,"%d",&N); fgets(line,254,fd); sscanf(line,"%d",&M); fgets(line,254,fd); sscanf(line,"%d",&mpeg1); fgets(line,254,fd); sscanf(line,"%d",&fieldpic); fgets(line,254,fd); sscanf(line,"%d",&horizontal_size); fgets(line,254,fd); sscanf(line,"%d",&vertical_size); refHsize = horizontal_size; refVsize = vertical_size; fgets(line,254,fd); sscanf(line,"%d",&aspectratio); fgets(line,254,fd); sscanf(line,"%d",&frame_rate_code); fgets(line,254,fd); sscanf(line,"%lf",&bit_rate); fgets(line,254,fd); sscanf(line,"%lf",&bwReductionRate); fgets(line,254,fd); sscanf(line,"%lf",&scaleFactor); fgets(line,254,fd); sscanf(line,"%lf",&overrideImageScaleResults); fgets(line,254,fd); sscanf(line,"%lf",&staticMode); fgets(line,254,fd); sscanf(line,"%d",&port_number); fgets(line,254,fd); sscanf(line,"%d",&vbv_buffer_size); fgets(line,254,fd); sscanf(line,"%d",&low_delay); fgets(line,254,fd); sscanf(line,"%d",&constrparms); fgets(line,254,fd); sscanf(line,"%d",&profile); fgets(line,254,fd); sscanf(line,"%d",&level); fgets(line,254,fd); sscanf(line,"%d",&prog_seq); fgets(line,254,fd); sscanf(line,"%d",&chroma_format); fgets(line,254,fd); sscanf(line,"%d",&video_format); fgets(line,254,fd); sscanf(line,"%d",&color_primaries);

PAGE 57

46 fgets(line,254,fd); sscanf(line,"%d",&transfer_characteristics); fgets(line,254,fd); sscanf(line,"%d",&matrix_coefficients); fgets(line,254,fd); sscanf(line,"%d",&display_horizontal_size); fgets(line,254,fd); sscanf(line,"%d",&display_vertical_size); fgets(line,254,fd); sscanf(line,"%d",&dc_prec); fgets(line,254,fd); sscanf(line,"%d",&topfirst); fgets(line,254,fd); sscanf(line,"%d %d %d", frame_pred_dct_tab,frame_pred_dct_tab+1,frame_pred_dct_tab+2); fgets(line,254,fd); sscanf(line,"%d %d %d", conceal_tab,conceal_tab+1,conceal_tab+2); fgets(line,254,fd); sscanf(line,"%d %d %d", qscale_tab,qscale_tab+1,qscale_tab+2); fgets(line,254,fd); sscanf(line,"%d %d %d", intravlc_tab,intravlc_tab+1,intravlc_tab+2); fgets(line,254,fd); sscanf(line,"%d %d %d", altscan_tab,altscan_tab+1,altscan_tab+2); fgets(line,254,fd); sscanf(line,"%d",&repeatfirst); fgets(line,254,fd); sscanf(line,"%d",&prog_frame); /* intra slice interval refresh period */ fgets(line,254,fd); sscanf(line,"%d",&P); fgets(line,254,fd); sscanf(line,"%d",&r); fgets(line,254,fd); sscanf(line,"%lf",&avg_act); fgets(line,254,fd); sscanf(line,"%d",&Xi); fgets(line,254,fd); sscanf(line,"%d",&Xp); fgets(line,254,fd); sscanf(line,"%d",&Xb); fgets(line,254,fd); sscanf(line,"%d",&d0i); fgets(line,254,fd); sscanf(line,"%d",&d0p); fgets(line,254,fd); sscanf(line,"%d",&d0b); if (N<1) error("N must be positive"); if (M<1) error("M must be positive"); if (N%M != 0) error("N must be an integer multiple of M"); motion_data = (struct motion_data *)malloc(M*sizeof(struct motion_data)); if (!motion_data) error("malloc failed\n"); for (i=0; i
PAGE 58

47 putseq.c /* putseq.c, sequence level routines */ /* Copyright (C) 1996, MPEG Software Simulation Group. All Rights Reserved. */ /* Disclaimer of Warranty * These software programs are available to the user without any license fee or royalty on an "as is" basis. The MPEG Software Simulation Group disclaims any and all warranties, whether express, implied, or statuary, including any implied warranties or merchantability or of fitness for a particular purpose. In no event shall the copyright-holder be liable for any incidental, punitive, or consequential damages of any kind whatsoever arising from the use of these programs. * This disclaimer of warranty extends to the user of these programs and user's customers, employees, agents, transferees, successors, and assigns. * The MPEG Software Simulation Group does not represent or warrant that the programs furnished hereunder are free of infringement of any third-party patents. * Commercial implementations of MPEG-1 and MPEG-2 video, including shareware, are subject to royalty fees to patent holders. Many of these patents are general enough such that they are unavoidable regardless of implementation design. */ #include #include #include "config.h" #include "global.h" #include void outputMessage() { printf(message); fprintf(statfile,message); } int scaleImage(tinfile, toutfile, twidth, theight, fnum, startIndex) char *tinfile; char *toutfile; int twidth, theight, fnum, startIndex; { IGC igc; IImage image,nimage; char outfile[30]; IFileFormat input_format = IFORMAT_PPM; IFileFormat output_format = IFORMAT_PPM; char infile[30]; FILE *fp; IError ret; int loop; int i; for (i=startIndex;i<(fnum+startIndex);i++) { sprintf(infile,"%s%d.%s",tinfile,i,"ppm"); sprintf(outfile,"%s%d.%s",toutfile,i,"ppm"); sprintf(message,"Converting from %s to %s\n",infile,outfile); outputMessage(); if ( infile ) fprintf ( stderr, "No infile specified. Reading from stdin.\n" ); if ( outfile ) { strcpy(outfile "out.ppm");

PAGE 59

48 fprintf ( stderr, "No outfile specified. Writing to %s.\n", outfile ); } /* try and determine file types by extension */ if ( infile ) { ret = IFileType ( infile, &input_format ); if ( ret ) { fprintf ( stderr, "Input file error: %s\n", IErrorString ( ret ) ); exit ( 1 ); } } if ( outfile ) { ret = IFileType ( outfile, &output_format ); if ( ret ) { fprintf ( stderr, "Output file error: %s\n", IErrorString ( ret ) ); fprintf ( stderr, "Using PPM format.\n" ); } } if ( infile ) { fp = fopen ( infile, "rb" ); if ( fp ) { perror ( "Error opening input file:" ); exit ( 1 ); } } else fp = stdin; if ( ( ret = IReadImageFile ( fp, input_format, IOPTION_NONE, &image ) ) ) { fprintf ( stderr, "Error reading image: %s\n", IErrorString ( ret ) ); exit ( 1 ); } if ( infile ) fclose ( fp ); igc=ICreateGC ( ); nimage=ICreateImage(twidth,theight,IOPTION_NONE); ICopyImageScaled ( image,nimage,igc,0,0,IImageWidth(image), IImageHeight(image),0,0,twidth,theight); if ( outfile ) { fp = fopen ( outfile, "wb" ); if ( fp ) { perror ( "Cannot open output file: ); exit ( 1 ); } } else fp = stdout; IWriteImageFile ( fp, nimage, output_format, IOPTION_INTERLACED ); if ( outfile ) fclose ( fp ); } return ( 0 ); } void putseq() { /* this routine assumes (N % M) == 0 */ int i, j, k, f, f0, pf0, n, np, nb, sxf, syf, sxb, syb; int ipflag;

PAGE 60

49 FILE *fd; char name[256]; unsigned char *neworg[3], *newref[3]; static char ipb[5] = {' ','I','P','B','D'}; struct snr_data snrVals; // Val1 = snrvals in ref section int scaleDir = 0; // -1 = down, 0 = steady, 1 = up int mb_widthTemp, mb_heightTemp; double prev_bit_rate = bit_rate, currScale = 1.0, testScale = 1.0; char refFrameName[256], currFrameName[256]; char baseRefFrameName[256], baseScaledFrameName[256], tplorgTemp[256], scaledUpFrameName[256]; char smallEncStoredFName[256], scaledUpSmallEncStoredFName[256]; char smallEncStoredFNameBase[256], scaledUpSmallEncStoredFNameBase[256]; int currHsize = horizontal_size, currVsize = vertical_size; int prevHsize = horizontal_size, prevVsize = vertical_size; int testHsize = horizontal_size, testVsize = vertical_size; int bestYSnrLevel = 0; float bestYSnr = 0.0, level1YSnr = 0.0, level2YSnr = 0.0, level3YSnr = 0.0, scldNormalYSnr;; int firstTimeInLevel1 = 1; char tempFileName[30]; strcpy(refFrameName, tplorg); strcpy(currFrameName, tplorg); //set Testing Flag = false imageScaleTesting = 0; testLevel = 0; initFlag = 1; refSnrPass = 0; strcpy(tplorgTemp,tplorg); strcpy(baseRefFrameName, strtok(tplorgTemp,"%")); //gets "des" out of "des%d" sprintf(message,"baseFrameName = %s\n",baseRefFrameName); outputMessage(); sprintf(message, //put the text in the opposite coloumn "DDATA,-,Level,-,display#,-,frame,-,Type,-,Dim,-,Area,-,Bit Rate,-,S,-,TargetBits,-,GOPOverflow,-,Q,-,YSnr,-,Level1Snr,-,Level1S,-,Level1Q,-,Scale\n"); outputMessage(); /* loop through all frames in encoding/decoding order */ for (i=0; i
PAGE 61

50 scaleDir = 0; // steady else if (bit_rate > prev_bit_rate) scaleDir = 1; // increasing else // decreasing scaleDir = -1; if ((bwReductionRate == 1.0) && (staticMode == 1)) scaleDir = -1; // assume scale down if ((scaleDir != 0) || (i == 0)) // need to test if adaptive scaling needs to be done { // if there is a bandwidth change or if initial I-frame if (i != 0) putseqend(); // close the current sequence initFlag = 1; // this flag will be set to 0 after initialization testLevel = 1; // start at base test imageScaleTesting = 1; if (i == 0) scaleDir = -1; // for the first I-frame test to see if scale down is better than ref. } } // before potential testing, save the current size prevHsize = currHsize; prevVsize = currVsize; testHsize = currHsize; testVsize = currVsize; refSnrPass = 0; // this flag should be set when ref SNR is to be done while (testLevel > 0) { imageScaleTestBufCnt = 0; if ((testLevel == 2) && (prevHsize == refHsize) && (prevVsize == refVsize)) //verify if level 2 is required testLevel++; sprintf(message,"--->Level = %d, i = %d, ref SNR pass = %d\n", testLevel, i, refSnrPass); outputMessage(); if (imageScaleTesting == 1) // if you enter the loop with testing in mind, initialiation is required initFlag = 1; if (testLevel == 1) // level 1 testing get values for reference { currHsize = refHsize; currVsize = refVsize; // set to refernce frame name & size strcpy(tplorg, refFrameName); } else if (testLevel == 2) // level 2 testing get values for curr scale { currHsize = prevHsize; currVsize = prevVsize; testHsize = currHsize; testVsize = currVsize; } else if (testLevel == 3) // level 3 testing scale up/down { if (scaleDir == 1) // increasing { testScale = currScale + currScale scaleFactor; if (testScale > 1.0) // floor it to the reference size & in this case no need to run level 3 testing !!! testScale = 1.0; // POSTPONING since this case is not being dealt w/ in this THESIS (i.e., assume decreasing) } else if (scaleDir == -1) // decreasing {

PAGE 62

51 testScale = currScale currScale scaleFactor; } // need to adjust ref by the scale currHsize = (int) (refHsize sqrt(testScale)); currVsize = (int) (refVsize sqrt(testScale)); if (currHsize % 2 != 0) currHsize += 1; if (currVsize % 2 != 0) currVsize += 1; mb_widthTemp = (currHsize+15)/16; mb_heightTemp = prog_seq ? (currVsize+15)/16 : 2*((currVsize+31)/32); currHsize = 16*mb_widthTemp; currVsize = 16*mb_heightTemp; testHsize = currHsize; testVsize = currVsize; sprintf(message,"testScale = %lf, scaled H = %d, scaled V = %d\n", testScale, currHsize, currVsize); outputMessage(); } else if (testLevel == MaxTestLevels + 1) //the real run { if (imageScaleTesting == 1) // if we are in testing mode, pick the best SNR level { if ((bestYSnrLevel == 1) || (overrideImageScaleResults == 1)) { if (overrideImageScaleResults == 1) { sprintf (message,"****OVERRIDE ImageScaline ON\n"); outputMessage(); } currHsize = refHsize; currVsize = refVsize; strcpy(tplorg, refFrameName); currScale = 1.0; // since reference is chosen, scale is set to 1 } else if (bestYSnrLevel == 2) { currHsize = prevHsize; currVsize = prevVsize; //currScale is unchanged } else if (bestYSnrLevel == 3) { currHsize = testHsize; currVsize = testVsize; currScale = testScale; } sprintf (message,"****The best Normal Y SNR is found at Level %d Testing with value of %3.3g\n", bestYSnrLevel, bestYSnr); outputMessage(); sprintf (message,"SUMMARY of Y SNRS: Level 1: %3.3g, Level 2: %3.3g, Level 3: %3.3g\n", level1YSnr, level2YSnr, level3YSnr); outputMessage(); } imageScaleTesting = 0; } if ((currHsize != refHsize) || (currVsize != refVsize)) //if not ref { //construct the name from the current scaled size: hxw%d sprintf(tplorg,"%dx%d%s",currHsize,currVsize,refFrameName); strcpy(tplorgTemp,tplorg); strcpy(baseScaledFrameName, strtok(tplorgTemp,"%")); } if (initFlag == 1)

PAGE 63

52 { init(currHsize, currVsize); rc_init_seq(); /* initialize rate control */ /* sequence header, sequence extension and sequence display extension */ putseqhdr(); if (!mpeg1) { putseqext(); putseqdispext(); } /* optionally output some text data (description, copyright or whatever) */ if (strlen(id_string) > 1) putuserdata(id_string); initFlag = 0; // this flag must be set to 1 on a need basis } if (!quiet) { fprintf(stderr,"Encoding frame %d ",i); fflush(stderr); } /* f0: lowest frame number in current GOP * first GOP contains N-(M-1) frames, all other GOPs contain N frames */ f0 = N*((i+(M-1))/N) (M-1); if (f0<0) f0=0; if (i==0 || (i-1)%M==0) { /* I or P frame */ for (j=0; j<3; j++) { /* shuffle reference frames */ neworg[j] = oldorgframe[j]; newref[j] = oldrefframe[j]; oldorgframe[j] = neworgframe[j]; oldrefframe[j] = newrefframe[j]; neworgframe[j] = neworg[j]; newrefframe[j] = newref[j]; } /* f: frame number in display order */ f = (i==0) ? 0 : i+M-1; if (f>=nframes) f = nframes 1; if (i==f0) /* first displayed frame in GOP is I */ { /* I frame */ pict_type = I_TYPE; forw_hor_f_code = forw_vert_f_code = 15; back_hor_f_code = back_vert_f_code = 15; /* n: number of frames in current GOP * first GOP contains (M-1) less (B) frames */ n = (i==0) ? N-(M-1) : N; /* last GOP may contain less frames */ if (n > nframes-f0) n = nframes-f0; /* number of P frames */

PAGE 64

53 if (i==0) np = (n + 2*(M-1))/M 1; /* first GOP */ else np = (n + (M-1))/M 1; /* number of B frames */ nb = n np 1; rc_init_GOP(np,nb); putgophdr(f0,i==0); /* set closed_GOP in first GOP only */ } else { /* P frame */ pict_type = P_TYPE; forw_hor_f_code = motion_data[0].forw_hor_f_code; forw_vert_f_code = motion_data[0].forw_vert_f_code; back_hor_f_code = back_vert_f_code = 15; sxf = motion_data[0].sxf; syf = motion_data[0].syf; } } else { /* B frame */ for (j=0; j<3; j++) { neworg[j] = auxorgframe[j]; newref[j] = auxframe[j]; } /* f: frame number in display order */ f = i 1; pict_type = B_TYPE; n = (i-2)%M + 1; /* first B: n=1, second B: n=2, ... */ forw_hor_f_code = motion_data[n].forw_hor_f_code; forw_vert_f_code = motion_data[n].forw_vert_f_code; back_hor_f_code = motion_data[n].back_hor_f_code; back_vert_f_code = motion_data[n].back_vert_f_code; sxf = motion_data[n].sxf; syf = motion_data[n].syf; sxb = motion_data[n].sxb; syb = motion_data[n].syb; } temp_ref = f f0; frame_pred_dct = frame_pred_dct_tab[pict_type-1]; q_scale_type = qscale_tab[pict_type-1]; intravlc = intravlc_tab[pict_type-1]; altscan = altscan_tab[pict_type-1]; fprintf(statfile,"\nFrame %d (#%d in display order):\n",i,f); fprintf(statfile," picture_type=%c\n",ipb[pict_type]); fprintf(statfile," temporal_reference=%d\n",temp_ref); fprintf(statfile," frame_pred_frame_dct=%d\n",frame_pred_dct); fprintf(statfile," q_scale_type=%d\n",q_scale_type); fprintf(statfile," intra_vlc_format=%d\n",intravlc); fprintf(statfile," alternate_scan=%d\n",altscan); if (pict_type!=I_TYPE) { fprintf(statfile," forward search window: %d...%d / %d...%d\n", -sxf,sxf,-syf,syf); fprintf(statfile," forward vector range: %d...%d.5 / %d...%d.5\n", -(4<
PAGE 65

54 fprintf(statfile," backward search window: %d...%d / %d...%d\n", -sxb,sxb,-syb,syb); fprintf(statfile," backward vector range: %d...%d.5 / %d...%d.5\n", -(4<encoded { if (testLevel == 2) level2YSnr = snrVals.Ymse;

PAGE 66

55 else if (testLevel == 3) level3YSnr = snrVals.Ymse; //need to store it to ppm sprintf(smallEncStoredFName,"smEncSto%s\0",name); sprintf(smallEncStoredFNameBase,"smEncSto%s\0",baseScaledFrameName); store_ppm_tga(smallEncStoredFName,newref,0,horizontal_size,vertical_size,0); //this is small->encoded->stored } sprintf(message, "DDATA,Level,%d,display#,%d,frame,%d,Type,%c,Dim,%dx%d,Area,%d,Bit Rate,%lf,S,%d,TargetBits,%d,GOPOverflow,%d,Q,%.1f,YSnr,%3.3g,Level1Snr,%3.3g,Level1S,%d,Level1Q,%.1f,Scale,%lf\n", testLevel, f, i, ipb[pict_type], horizontal_size, vertical_size, horizontal_size*vertical_size, bit_rate, currLevelS, TargetBits, gopOverflow, currLevelQ, snrVals.Ymse, level1YSnr, level1S, level1Q, testScale); outputMessage(); if (snrVals.Ymse > bestYSnr) { bestYSnr = snrVals.Ymse; bestYSnrLevel = testLevel; } stats(); if (testLevel == MaxTestLevels + 1) // we have just ran it for real { testLevel = 0; // so falsify loop condition } else // see if can increment to the next level { testLevel++; refSnrPass = 0; } }// end of while (testLevel > 0) // at this point, any imageScaleTesting should be over (i.e., the flag no be turned on after this point) sprintf(name,tplref,f+frame0); writeframe(name,newref); } putseqend(); }

PAGE 67

56 putbits.c /* putbits.c, bit-level output */ /* Copyright (C) 1996, MPEG Software Simulation Group. All Rights Reserved. */ /* Disclaimer of Warranty * These software programs are available to the user without any license fee or royalty on an "as is" basis. The MPEG Software Simulation Group disclaims any and all warranties, whether express, implied, or statuary, including any implied warranties or merchantability or of fitness for a particular purpose. In no event shall the copyright-holder be liable for any incidental, punitive, or consequential damages of any kind whatsoever arising from the use of these programs. * This disclaimer of warranty extends to the user of these programs and user's customers, employees, agents, transferees, successors, and assigns. * The MPEG Software Simulation Group does not represent or warrant that the programs furnished hereunder are free of infringement of any third-party patents. * Commercial implementations of MPEG-1 and MPEG-2 video, including shareware, are subject to royalty fees to patent holders. Many of these patents are general enough such that they are unavoidable regardless of implementation design. */ #include #include "config.h" #include "global.h" // added for accessing imageScaleTesting #define BUFLENGTH 2048 extern int sockfd,newSockfd; extern FILE *outfile; /* the only global var we need here */ /* private data */ static unsigned char outbfr; static int outcnt; static int bytecnt; static unsigned char buf[BUFLENGTH]; static int bufCnt = 0, bufcounter = 0; FILE *fp; FILE *tfp; /* initialize buffer, call once before first putbits or alignbits */ void initbits() { outcnt = 8; bytecnt = 0; } /* write rightmost n (0<=n<=32) bits of val to outfile */ void putbits(val,n) int val; int n; { int i; unsigned int mask; int index = 0, fill = 0; imageScaleTestBufCnt += n; mask = 1 << (n-1); /* selects first (leftmost) bit */ for (i=0; i
PAGE 68

57 outbfr <<= 1; if (val & mask) outbfr|= 1; mask >>= 1; /* select next bit */ outcnt--; if (outcnt==0) /* 8 bit buffer full */ { /* printf("writing to %s\n",outfile); */ putc(outbfr,outfile); buf[bufCnt++] = outbfr; if (bufCnt == BUFLENGTH) { if (imageScaleTesting != 1) write(newSockfd,buf, bufCnt); bufCnt = 0; } outcnt = 8; bytecnt++; } } if (val == 0x1B7L) { // fill = BUFLENGTH bufCnt; // for (index = 0; index < fill; index++) // { // buf[bufCnt++] = 0; // } fill = write(newSockfd,buf, bufCnt); bufCnt = 0; imageScaleTestBufCnt = 0; } } /* zero bit stuffing to next byte boundary (5.2.3, 6.2.1) */ void alignbits() { if (outcnt!=8) putbits(0,outcnt); } /* return total number of generated bits */ int bitcount() { return 8*bytecnt + (8-outcnt); }

PAGE 69

58 Sample Encoder Parameter (PAR) File MPEG-2 Test Sequence, 30 frames/sec des%d /* name of source files */ reconDes%d /* name of reconstructed images ("-": don't store) */ /* name of intra quant matrix file ("-": default matrix) */ inter.mat /* name of non intra quant matrix file ("-": default matrix) */ statNetDyn700a55.out /* name of statistics file ("-": stdout ) */ 2 /* input picture file format: 0=*.Y,*.U,*.V, 1=*.yuv, 2=*.ppm */ 48 /* number of frames */ 0 /* number of first frame */ 00:00:00:00 /* timecode of first frame */ 15 /* N (# of frames in GOP) */ 3 /* M (I/P frame distance) */ 0 /* ISO/IEC 11172-2 stream */ 0 /* 0:frame pictures, 1:field pictures */ 704 /* horizontal_size -see header file of ppm inputs*/ 240 /* vertical_size -see header file of ppm inputs*/ 2 /* aspect_ratio_information 1=square pel, 2=4:3, 3=16:9, 4=2.11:1 */ 5 /* frame_rate_code 1=23.976, 2=24, 3=25, 4=29.97, 5=30 frames/sec. */ 700000.0 /* bit_rate (bits/s) total target bitrate budget for 30 frames */ 0.5 /* bit_rate/bw reduction rate*/ 0.5 /* scale up/down step factor */ 0 /* override imagescaling results*/ 0 /* staic mode = 1, dynamic mode = 0 */ 8000 /* Port Number */ 112 /* vbv_buffer_size (in multiples of 16 kbit) */ 0 /* low_delay */ 0 /* constrained_parameters_flag */ 4 /* Profile ID: Simple = 5, Main = 4, SNR = 3, Spatial = 2, High = 1 */ 6 /* Level ID: Low = 10, Main = 8, High 1440 = 6, High = 4 */ 0 /* progressive_sequence */ 1 /* chroma_format: 1=4:2:0, 2=4:2:2, 3=4:4:4 */ 2 /* video_format: 0=comp., 1=PAL, 2=NTSC, 3=SECAM, 4=MAC, 5=unspec. */ 5 /* color_primaries */ 5 /* transfer_characteristics */ 4 /* matrix_coefficients */ 352 /* display_horizontal_size */ 120 /* display_vertical_size */ 0 /* intra_dc_precision (0: 8 bit, 1: 9 bit, 2: 10 bit, 3: 11 bit */ 1 /* top_field_first */ 0 0 0 /* frame_pred_frame_dct (I P B) */ 0 0 0 /* concealment_motion_vectors (I P B) */ 1 1 1 /* q_scale_type (I P B) */ 1 1 1 /* intra_vlc_format (I P B)*/ 0 0 0 /* alternate_scan (I P B) */ 0 /* repeat_first_field */ 0 /* progressive_frame */ 0 /* P distance between complete intra slice refresh */ 0 /* rate control: r (reaction parameter) */ 0 /* rate control: avg_act (initial average activity) */ 0 /* rate control: Xi (initial I frame global complexity measure) */ 0 /* rate control: Xp (initial P frame global complexity measure) */ 0 /* rate control: Xb (initial B frame global complexity measure) */ 0 /* rate control: d0i (initial I frame virtual buffer fullness) */ 0 /* rate control: d0p (initial P frame virtual buffer fullness) */ 0 /* rate control: d0b (initial B frame virtual buffer fullness) */ 2 2 11 11 /* P: forw_hor_f_code forw_vert_f_code search_width/height */ 1 1 3 3 /* B1: forw_hor_f_code forw_vert_f_code search_width/height */ 1 1 7 7 /* B1: back_hor_f_code back_vert_f_code search_width/height */ 1 1 7 7 /* B2: forw_hor_f_code forw_vert_f_code search_width/height */ 1 1 3 3 /* B2: back_hor_f_code back_vert_f_code search_width/height */

PAGE 70

APPENDIX B PERTINENT MPEG-2 DECODER SOURCE CODE CHANGES (Note: Modified code from MPEG Software Simulation Group [21] is in boldface.) mpeg2dec.c /* mpeg2dec.c, main(), initialization, option processing */ /* Copyright (C) 1996, MPEG Software Simulation Group. All Rights Reserved. */ /* Disclaimer of Warranty * These software programs are available to the user without any license fee or royalty on an "as is" basis. The MPEG Software Simulation Group disclaims any and all warranties, whether express, implied, or statuary, including any implied warranties or merchantability or of fitness for a particular purpose. In no event shall the copyright-holder be liable for any incidental, punitive, or consequential damages of any kind whatsoever arising from the use of these programs. * This disclaimer of warranty extends to the user of these programs and user's customers, employees, agents, transferees, successors, and assigns. * The MPEG Software Simulation Group does not represent or warrant that the programs furnished hereunder are free of infringement of any third-party patents. * Commercial implementations of MPEG-1 and MPEG-2 video, including shareware, are subject to royalty fees to patent holders. Many of these patents are general enough such that they are unavoidable regardless of implementation design. */ #include #include #include #include #include #include #include #include #include #include #include #include #define GLOBAL #include "config.h" #include "global.h" #define MAXCLIENTDATASIZE 80 /* private prototypes */ static int video_sequence _ANSI_ARGS_((int *framenum)); static int Decode_Bitstream _ANSI_ARGS_((void)); static int Headers _ANSI_ARGS_((void)); static void Initialize_Sequence _ANSI_ARGS_((void)); static void Initialize_Decoder _ANSI_ARGS_((void)); static void Deinitialize_Sequence _ANSI_ARGS_((void)); 59

PAGE 71

60 static void Process_Options _ANSI_ARGS_((int argc, char *argv[])); #if OLD static int Get_Val _ANSI_ARGS_((char *argv[])); #endif /* #define DEBUG */ static void Clear_Options(); #ifdef DEBUG static void Print_Options(); #endif int main(argc,argv) int argc; char *argv[]; { int ret, code; Clear_Options(); /* decode command line arguments */ Process_Options(argc,argv); Initialize_My_Buffer(); #ifdef DEBUG Print_Options(); #endif ld = &base; /* select base layer context */ /* open MPEG base layer bitstream file(s) */ /* NOTE: this is either a base layer stream or a spatial enhancement stream */ /* if ((base.Infile=open(Main_Bitstream_Filename,O_RDONLY|O_BINARY))<0) { fprintf(stderr,"Base layer input file %s not found\n", Main_Bitstream_Filename); exit(1); }*/ if(sockfd != 0) { printf("Buffer Initialized.\n"); Initialize_Buffer(); if(Show_Bits(8)==0x47) { sprintf(Error_Text,"Decoder currently does not parse transport streams\n"); Error(Error_Text); } next_start_code(); code = Show_Bits(32); printf ("code = %d\n",code); switch(code) { case SEQUENCE_HEADER_CODE: break; case PACK_START_CODE: System_Stream_Flag = 1; case VIDEO_ELEMENTARY_STREAM: System_Stream_Flag = 1; break; default: sprintf(Error_Text,"Unable to recognize stream type\n"); Error(Error_Text); break; } /* lseek(base.Infile, 0l, 0); */

PAGE 72

61 myLseek(); Initialize_Buffer(); } /* if(base.Infile!=0) */ /* { */ /* lseek(base.Infile, 0l, 0); */ /* } */ myLseek(); Initialize_Buffer(); if(Two_Streams) { ld = &enhan; /* select enhancement layer context */ /* if ((enhan.Infile = open(Enhancement_Layer_Bitstream_Filename,O_RDONLY|O_BINARY))<0) { sprintf(Error_Text,"enhancment layer bitstream file %s not found\n", Enhancement_Layer_Bitstream_Filename); Error(Error_Text); }*/ Initialize_Buffer(); ld = &base; } Initialize_Decoder(); ret = Decode_Bitstream(); close(sockfd); /* if (Two_Streams) close(enhan.Infile);*/ return 0; } /* IMPLEMENTAION specific rouintes */ static void Initialize_Decoder() { int i; /* Clip table */ if (!(Clip=(unsigned char *)malloc(1024))) Error("Clip[] malloc failed\n"); Clip += 384; for (i=-384; i<640; i++) Clip[i] = (i<0) ? 0 : ((i>255) ? 255 : i); /* IDCT */ if (Reference_IDCT_Flag) Initialize_Reference_IDCT(); else Initialize_Fast_IDCT(); } /* mostly IMPLEMENTAION specific rouintes */ static void Initialize_Sequence() { int cc, size; static int Table_6_20[3] = {6,8,12}; /* check scalability mode of enhancement layer */ if (Two_Streams && (enhan.scalable_mode!=SC_SNR) && (base.scalable_mode!=SC_DP))

PAGE 73

62 Error("unsupported scalability mode\n"); /* force MPEG-1 parameters for proper decoder behavior */ /* see ISO/IEC 13818-2 section D.9.14 */ if (!base.MPEG2_Flag) { progressive_sequence = 1; progressive_frame = 1; picture_structure = FRAME_PICTURE; frame_pred_frame_dct = 1; chroma_format = CHROMA420; matrix_coefficients = 5; } /* round to nearest multiple of coded macroblocks */ /* ISO/IEC 13818-2 section 6.3.3 sequence_header() */ mb_width = (horizontal_size+15)/16; mb_height = (base.MPEG2_Flag && !progressive_sequence) ? 2*((vertical_size+31)/32) : (vertical_size+15)/16; Coded_Picture_Width = 16*mb_width; Coded_Picture_Height = 16*mb_height; /* ISO/IEC 13818-2 sections 6.1.1.8, 6.1.1.9, and 6.1.1.10 */ Chroma_Width = (chroma_format==CHROMA444) ? Coded_Picture_Width : Coded_Picture_Width>>1; Chroma_Height = (chroma_format!=CHROMA420) ? Coded_Picture_Height : Coded_Picture_Height>>1; /* derived based on Table 6-20 in ISO/IEC 13818-2 section 6.3.17 */ block_count = Table_6_20[chroma_format-1]; for (cc=0; cc<3; cc++) { if (cc==0) size = Coded_Picture_Width*Coded_Picture_Height; else size = Chroma_Width*Chroma_Height; if (!(backward_reference_frame[cc] = (unsigned char *)malloc(size))) Error("backward_reference_frame[] malloc failed\n"); if (!(forward_reference_frame[cc] = (unsigned char *)malloc(size))) Error("forward_reference_frame[] malloc failed\n"); if (!(auxframe[cc] = (unsigned char *)malloc(size))) Error("auxframe[] malloc failed\n"); if(Ersatz_Flag) if (!(substitute_frame[cc] = (unsigned char *)malloc(size))) Error("substitute_frame[] malloc failed\n"); if (base.scalable_mode==SC_SPAT) { /* this assumes lower layer is 4:2:0 */ if (!(llframe0[cc] = (unsigned char *)malloc((lower_layer_prediction_horizontal_size*lower_layer_prediction_vertical_size)/(cc?4:1)))) Error("llframe0 malloc failed\n"); if (!(llframe1[cc] = (unsigned char *)malloc((lower_layer_prediction_horizontal_size*lower_layer_prediction_vertical_size)/(cc?4:1)))) Error("llframe1 malloc failed\n"); } } /* SCALABILITY: Spatial */ if (base.scalable_mode==SC_SPAT) {

PAGE 74

63 if (!(lltmp = (short *)malloc(lower_layer_prediction_horizontal_size*((lower_layer_prediction_vertical_size*vertical_subsampling_factor_n)/vertical_subsampling_factor_m)*sizeof(short)))) Error("lltmp malloc failed\n"); } #ifdef DISPLAY if (Output_Type==T_X11) { Initialize_Display_Process(""); Initialize_Dither_Matrix(); } #endif /* DISPLAY */ } void Error(text) char *text; { fprintf(stderr,text); exit(1); } /* Trace_Flag output */ void Print_Bits(code,bits,len) int code,bits,len; { int i; for (i=0; i>(bits-1-i))&1); } /* option processing */ static void Process_Options(argc,argv) int argc; /* argument count */ char *argv[]; /* argument vector */ { int i, LastArg, NextArg; struct sockaddr_in serverAddr; struct hostent *hp; int nbytes, endoflist, counter; char *asciiServerAddr; char clientBuffer[MAXCLIENTDATASIZE]; char *charptr, userChoice; /* at least one argument should be present */ if (argc != 3) { printf("\n%s, %s\n",Version,Author); printf("Usage: mpeg2decode \n\n"); printf(" or: mpeg2decode standalone out.m2v\n\n"); exit(0); } sprintf(sentFile,"sentFile.m2v"); sentFilePtr = fopen(sentFile, "w"); Output_Type = 4; Output_Picture_Filename = ""; if (strcmp(argv[1],"standalone") == 0) { printf ("Got switch \n"); InputSrc = -1; // open m2v file for reading if(!(sockfd=open(argv[2],O_RDONLY|O_BINARY))<0) { printf("ERROR: unable to open reference filename (%s)\n",argv[2]); exit(1);

PAGE 75

64 } printf ("Opened %s \n",argv[2]); } else { InputSrc = 0; if ((hp=gethostbyname(argv[1])) == NULL) { perror("Error getting host IP."); exit(1); } if ((sockfd = socket(AF_INET, SOCK_STREAM, 0)) == -1) { perror("socket"); exit(1); } serverAddr.sin_family = AF_INET; // host byte order printf("Attempting to connect to server....\n"); memcpy((char *) &serverAddr.sin_addr, (char *) hp->h_addr, hp->h_length); asciiServerAddr = (char *) inet_ntoa(serverAddr.sin_addr); printf("server address: %s\n",asciiServerAddr); serverAddr.sin_port = htons(atoi(argv[2])); // short, network byte order if (connect(sockfd, (struct sockaddr *)&serverAddr, sizeof(serverAddr)) == -1) { perror("connect"); exit(1); } printf("Connected to server....\n"); printf ("Enter to request bitstream from server.\n"); getchar(); //1. make request for list of titles if ((nbytes = write (sockfd, "SEND LIST OF TITLES", 19)) < 0) { perror("write"); } if (nbytes != 19) { printf("Error -could not send request for titles, btyes written is %d\n", nbytes); } printf("Send request for bitstream, btyes written is %d\n", nbytes); //printf ("Enter to begin reading bitstream from server.\n"); // getchar(); }// else InputSrc = socket; /* force display process to show frame pictures */ if((Output_Type==4 || Output_Type==5) && Frame_Store_Flag) Display_Progressive_Flag = 1; else Display_Progressive_Flag = 0; #ifdef VERIFY /* parse the bitstream, do not actually decode it completely */ #if 0 if(Output_Type==-1) { Decode_Layer = Verify_Flag; printf("FYI: Decoding bitstream elements up to: %s\n", Layer_Table[Decode_Layer]); } else #endif Decode_Layer = ALL_LAYERS;

PAGE 76

65 #endif /* VERIFY */ /* no output type specified */ if(Output_Type==-1) { Output_Type = 9; Output_Picture_Filename = ""; } #ifdef DISPLAY if (Output_Type==T_X11) { if(Frame_Store_Flag) Display_Progressive_Flag = 1; else Display_Progressive_Flag = 0; Frame_Store_Flag = 1; /* to avoid calling dither() twice */ } #endif } #ifdef OLD /* this is an old routine used to convert command line arguments into integers */ static int Get_Val(argv) char *argv[]; { int val; if (sscanf(argv[1]+2,"%d",&val)!=1) return 0; while (isdigit(argv[1][2])) argv[1]++; return val; } #endif static int Headers() { int ret; ld = &base; /* return when end of sequence (0) or picture header has been parsed (1) */ ret = Get_Hdr(); if (Two_Streams) { ld = &enhan; if (Get_Hdr()!=ret && !Quiet_Flag) fprintf(stderr,"streams out of sync\n"); ld = &base; } return ret;

PAGE 77

66 } static int Decode_Bitstream() { int ret; int Bitstream_Framenum; Bitstream_Framenum = 0; for(;;) { #ifdef VERIFY Clear_Verify_Headers(); #endif /* VERIFY */ ret = Headers(); if(ret==1) { ret = video_sequence(&Bitstream_Framenum); } else return(ret); } } static void Deinitialize_Sequence() { int i; /* clear flags */ base.MPEG2_Flag=0; for(i=0;i<3;i++) { free(backward_reference_frame[i]); free(forward_reference_frame[i]); free(auxframe[i]); if (base.scalable_mode==SC_SPAT) { free(llframe0[i]); free(llframe1[i]); } } if (base.scalable_mode==SC_SPAT) free(lltmp); #ifdef DISPLAY if (Output_Type==T_X11) Terminate_Display_Process(); #endif } static int video_sequence(Bitstream_Framenumber) int *Bitstream_Framenumber; { int Bitstream_Framenum; int Sequence_Framenum; int Return_Value; Bitstream_Framenum = *Bitstream_Framenumber; Sequence_Framenum=0;

PAGE 78

67 Initialize_Sequence(); /* decode picture whose header has already been parsed in Decode_Bitstream() */ Decode_Picture(Bitstream_Framenum, Sequence_Framenum); /* update picture numbers */ if (!Second_Field) { Bitstream_Framenum++; Sequence_Framenum++; } /* loop through the rest of the pictures in the sequence */ while ((Return_Value=Headers())) { Decode_Picture(Bitstream_Framenum, Sequence_Framenum); if (!Second_Field) { Bitstream_Framenum++; Sequence_Framenum++; } } /* put last frame */ if (Sequence_Framenum!=0) { Output_Last_Frame_of_Sequence(Bitstream_Framenum); } Deinitialize_Sequence(); #ifdef VERIFY Clear_Verify_Headers(); #endif /* VERIFY */ *Bitstream_Framenumber = Bitstream_Framenum; return(Return_Value); } static void Clear_Options() { Verbose_Flag = 0; Output_Type = 0; Output_Picture_Filename = "; hiQdither = 0; Output_Type = 0; Frame_Store_Flag = 0; Spatial_Flag = 0; Lower_Layer_Picture_Filename = "; Reference_IDCT_Flag = 0; Trace_Flag = 0; Quiet_Flag = 0; Ersatz_Flag = 0; Substitute_Picture_Filename = "; Two_Streams = 0; Enhancement_Layer_Bitstream_Filename = "; Big_Picture_Flag = 0; Main_Bitstream_Flag = 0; Main_Bitstream_Filename = "; Verify_Flag = 0; Stats_Flag = 0; User_Data_Flag = 0; }

PAGE 79

68 #ifdef DEBUG static void Print_Options() { printf("Verbose_Flag = %d\n", Verbose_Flag); printf("Output_Type = %d\n", Output_Type); printf("Output_Picture_Filename = %s\n", Output_Picture_Filename); printf("hiQdither = %d\n", hiQdither); printf("Output_Type = %d\n", Output_Type); printf("Frame_Store_Flag = %d\n", Frame_Store_Flag); printf("Spatial_Flag = %d\n", Spatial_Flag); printf("Lower_Layer_Picture_Filename = %s\n", Lower_Layer_Picture_Filename); printf("Reference_IDCT_Flag = %d\n", Reference_IDCT_Flag); printf("Trace_Flag = %d\n", Trace_Flag); printf("Quiet_Flag = %d\n", Quiet_Flag); printf("Ersatz_Flag = %d\n", Ersatz_Flag); printf("Substitute_Picture_Filename = %s\n", Substitute_Picture_Filename); printf("Two_Streams = %d\n", Two_Streams); printf("Enhancement_Layer_Bitstream_Filename = %s\n", Enhancement_Layer_Bitstream_Filename); printf("Big_Picture_Flag = %d\n", Big_Picture_Flag); printf("Main_Bitstream_Flag = %d\n", Main_Bitstream_Flag); printf("Main_Bitstream_Filename = %s\n", Main_Bitstream_Filename); printf("Verify_Flag = %d\n", Verify_Flag); printf("Stats_Flag = %d\n", Stats_Flag); printf("User_Data_Flag = %d\n", User_Data_Flag); } #endif

PAGE 80

69 getbits.c /* getbits.c, bit level routines */ /* All modifications (mpeg2decode -> mpeg2play) are Copyright (C) 1996, Stefan Eckart. All Rights Reserved. */ /* Copyright (C) 1996, MPEG Software Simulation Group. All Rights Reserved. */ /* Disclaimer of Warranty * These software programs are available to the user without any license fee or royalty on an "as is" basis. The MPEG Software Simulation Group disclaims any and all warranties, whether express, implied, or statuary, including any implied warranties or merchantability or of fitness for a particular purpose. In no event shall the copyright-holder be liable for any incidental, punitive, or consequential damages of any kind whatsoever arising from the use of these programs. * This disclaimer of warranty extends to the user of these programs and user's customers, employees, agents, transferees, successors, and assigns. * The MPEG Software Simulation Group does not represent or warrant that the programs furnished hereunder are free of infringement of any third-party patents. * Commercial implementations of MPEG-1 and MPEG-2 video, including shareware, are subject to royalty fees to patent holders. Many of these patents are general enough such that they are unavoidable regardless of implementation design. */ #include #include #include "config.h" #include "global.h" /* initialize buffer, call once before first getbits or showbits */ char myBuf[1][DECODE_WINDOW_SIZE]; int bufc; static int bufcounter = 0; int blocking_readSocket(int s, char *bptr, int buflen) { int n = 0, actualRead = 0; char *myptr = bptr; printf("actualRead = %d\n", actualRead); while (actualRead != buflen) { n = read(s, myptr, buflen actualRead); printf("read %d\n", n); if (n <= 0) { fclose(sentFilePtr); break; } fwrite (myptr, 1, n, sentFilePtr); myptr += n; actualRead += n; } printf("actualRead = %d\n", actualRead); return actualRead; } void Initialize_Buffer() {

PAGE 81

70 ld->Incnt = 0; // ld->Rdptr = ld->Rdbfr + 2048; ld->Rdptr = ld->Rdbfr + DECODE_WINDOW_SIZE; ld->Rdmax = ld->Rdptr; #ifdef VERIFY /* only the verifier uses this particular bit counter Bitcnt keeps track of the current parser position with respect to the video elementary stream being decoded, regardless of whether or not it is wrapped within a systems layer stream */ ld->Bitcnt = 0; #endif ld->Bfr = 0; Flush_Buffer(0); /* fills valid data into bfr */ } void Initialize_My_Buffer() { int i; bufc = 0; for(i=0;i<1;i++) { // read(sockfd,myBuf[i],2048); blocking_readSocket(sockfd,myBuf[i],DECODE_WINDOW_SIZE); printf("%d\n",bufcounter++); } } void myLseek() { bufc = 0; } void Fill_Buffer() { int Buffer_Level; if (bufc < 1) { //memcpy(ld->Rdbfr,myBuf[bufc],2048); memcpy(ld->Rdbfr,myBuf[bufc],DECODE_WINDOW_SIZE); bufc++; //Buffer_Level = 2048; Buffer_Level = DECODE_WINDOW_SIZE; } else { // Buffer_Level = read(sockfd,ld->Rdbfr,2048); Buffer_Level = blocking_readSocket(sockfd,ld->Rdbfr,DECODE_WINDOW_SIZE); printf("%d\n",bufcounter++); } // Buffer_Level = read(ld->Infile,ld->Rdbfr,2048); ld->Rdptr = ld->Rdbfr; if (System_Stream_Flag) // ld->Rdmax -= 2048; ld->Rdmax -= DECODE_WINDOW_SIZE; /* end of the bitstream file */ // if (Buffer_Level < 2048) if (Buffer_Level < DECODE_WINDOW_SIZE) { /* just to be safe */ if (Buffer_Level < 0) Buffer_Level = 0;

PAGE 82

71 /* pad until the next to the next 32-bit word boundary */ while (Buffer_Level & 3) ld->Rdbfr[Buffer_Level++] = 0; /* pad the buffer with sequence end codes */ // while (Buffer_Level < 2048) while (Buffer_Level < DECODE_WINDOW_SIZE) { ld->Rdbfr[Buffer_Level++] = SEQUENCE_END_CODE>>24; ld->Rdbfr[Buffer_Level++] = SEQUENCE_END_CODE>>16; ld->Rdbfr[Buffer_Level++] = SEQUENCE_END_CODE>>8; ld->Rdbfr[Buffer_Level++] = SEQUENCE_END_CODE&0xff; close(sockfd); // nework (why close fd?) } } } /* MPEG-1 system layer demultiplexer */ int Get_Byte() { // while(ld->Rdptr >= ld->Rdbfr+2048) while(ld->Rdptr >= ld->Rdbfr+DECODE_WINDOW_SIZE) { //read(ld->Infile,ld->Rdbfr,2048); // read(sockfd,ld->Rdbfr,2048); blocking_readSocket(sockfd,ld->Rdbfr,DECODE_WINDOW_SIZE); printf("%d\n",bufcounter++); // putchar('.'); // ld->Rdptr -= 2048; ld->Rdptr -= DECODE_WINDOW_SIZE; // ld->Rdmax -= 2048; ld->Rdmax -= DECODE_WINDOW_SIZE; } return *ld->Rdptr++; } /* extract a 16-bit word from the bitstream buffer */ int Get_Word() { int Val; Val = Get_Byte(); return (Val<<8) | Get_Byte(); } /* return next n bits (right adjusted) without advancing */ unsigned int Show_Bits(N) int N; { return ld->Bfr >> (32-N); } /* return next bit (could be made faster than Get_Bits(1)) */ unsigned int Get_Bits1() { return Get_Bits(1); } /* advance by n bits */ void Flush_Buffer(N) int N; {

PAGE 83

72 int Incnt; ld->Bfr <<= N; Incnt = ld->Incnt -= N; if (Incnt <= 24) { if (System_Stream_Flag && (ld->Rdptr >= ld->Rdmax-4)) { do { if (ld->Rdptr >= ld->Rdmax) Next_Packet(); ld->Bfr |= Get_Byte() << (24 Incnt); Incnt += 8; } while (Incnt <= 24); } else if (ld->Rdptr < ld->Rdbfr+2044) { do { ld->Bfr |= *ld->Rdptr++ << (24 Incnt); Incnt += 8; } while (Incnt <= 24); } else { do { // if (ld->Rdptr >= ld->Rdbfr+2048) if (ld->Rdptr >= ld->Rdbfr+DECODE_WINDOW_SIZE) Fill_Buffer(); ld->Bfr |= *ld->Rdptr++ << (24 Incnt); Incnt += 8; } while (Incnt <= 24); } ld->Incnt = Incnt; } #ifdef VERIFY ld->Bitcnt += N; #endif /* VERIFY */ } /* return next n bits (right adjusted) */ unsigned int Get_Bits(N) int N; { unsigned int Val; Val = Show_Bits(N); Flush_Buffer(N); return Val; }


APPENDIX C
MATLAB CODE FOR OPTIMAL RATE-RESIZING FACTOR APPROXIMATION

%Author: Ju Wang, June 2003
function d3=hang_d(target_bitrate,W,H,sigma_sqr);
%hang_d(700000,704,480,10)
f=(0.001:0.001:1);
epsilong_sqr=1.2 %dependency on X, 1.2 for Laplasian
alpha=1.36 %log_2^e
b=target_bitrate./(4*f*W*H)
d_f=epsilong_sqr^2*sigma_sqr^2*exp(-alpha.*b)
%figure
%hold
%plot(f,d_f)
d2=(1-f)*sigma_sqr;
d3=d_f+d2;
plot(f,d3)
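Written out as a formula, the quantity d3 computed by the routine above is (transcribed directly from the code, keeping its variable names, since the _sqr suffixes leave the exact interpretation of the constants to the code):

    d_3(f) = \mathrm{epsilong\_sqr}^{2} \cdot \mathrm{sigma\_sqr}^{2} \cdot e^{-\alpha\, b(f)} + (1-f)\cdot \mathrm{sigma\_sqr}, \qquad b(f) = \frac{\mathrm{target\_bitrate}}{4\, f\, W\, H},

with alpha = 1.36 and epsilong_sqr = 1.2 (the Laplacian-source constant noted in the code). As the resizing factor f decreases, b(f) grows because the same bit budget is spread over fewer pixels, so the first (coding-distortion) term shrinks while the second (resizing) term grows; the minimum of d_3(f) over f in (0, 1] is the optimal rate-resizing factor approximated in Chapter 3. The example call given in the code is hang_d(700000,704,480,10).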

APPENDIX D
CASE 2 TEST PICTURES

Figure 22. Reference Picture for 3rd I-Frame (PSNR = 49.0, S = 1,425,311)

Figure 23. 3rd I-Frame Using Original Encoded Picture (PSNR = 20.3, S = 91,627)

Figure 24. 3rd I-Frame Using Adaptive Image Scaled Picture (PSNR = 19.9, S = 36,081)


BIOGRAPHICAL SKETCH

Arun S. Abraham was born in India. He joined the University of Florida in 1990 and received his bachelor's degree in computer and information science and engineering in 1994. Since then he has worked in the software engineering industry. He returned to the University of Florida in 2001 to pursue a master's degree. His research interests include object-oriented methodologies, design patterns, and multimedia.


xml version 1.0 encoding UTF-8
REPORT xmlns http:www.fcla.edudlsmddaitss xmlns:xsi http:www.w3.org2001XMLSchema-instance xsi:schemaLocation http:www.fcla.edudlsmddaitssdaitssReport.xsd
INGEST IEID E20110109_AAAAZG INGEST_TIME 2011-01-10T01:36:15Z PACKAGE UFE0001221_00001
AGREEMENT_INFO ACCOUNT UF PROJECT UFDC
FILES
FILE SIZE 38037 DFID F20110109_AADCQM ORIGIN DEPOSITOR PATH abraham_a_Page_52.QC.jpg GLOBAL false PRESERVATION BIT MESSAGE_DIGEST ALGORITHM MD5
7afea0d027534a635d4c55130dfeea61
SHA-1
da3429337d5087b19a7f6c4c0f6148f7912f9710
45991 F20110109_AADCPY abraham_a_Page_44thm.jpg
55f65e1f1ae9d40970341919a7e2d5c3
47a772ecb18589b7af3ca37b630ea94305d6963e
12562 F20110109_AADCQN abraham_a_Page_52thm.jpg
16157092720349b477396e9a10d26c88
a42aeb83a791af57f91d658f152ff68704af1659
84436 F20110109_AADCPZ abraham_a_Page_45.QC.jpg
31d4e1868cfd7aa278432401157d47e8
da80a703e25f5ccd92819ef8d77137af3cce91f9
61462 F20110109_AADCRB abraham_a_Page_60.QC.jpg
b39159cd6aaa46592540cdfb5c2df7c4
05871bd827b8143b7f85f5ddc217d4308a4763eb
48749 F20110109_AADCQO abraham_a_Page_53.QC.jpg
5e9e991a6580dd51563cfd5822324310
7f26fe7d66bdf1aefcf9d0fe5fc8a84231d8a2e9
18132 F20110109_AADCRC abraham_a_Page_60thm.jpg
662d82f2478f8ca76befbc00f294b299
7164c0e3264169210809e0278157ed00d51d7d1f
15644 F20110109_AADCQP abraham_a_Page_53thm.jpg
70059acd4d6567e94a09816495cf5fe8
528c052793ad38b926d3557b22ec3db5c64d393f
58582 F20110109_AADCRD abraham_a_Page_61.QC.jpg
bf183aaf1beed47bd4a954ba93e9496f
84673652eb96044430032b7dae53272a97dbf471
39910 F20110109_AADCQQ abraham_a_Page_54thm.jpg
ae51d389fe32cf8053bfce2c52ba60e9
9ab26ce7237b4485e9092651b6322311e89ee280
17840 F20110109_AADCRE abraham_a_Page_61thm.jpg
946fcf8fcc5aa4f1a151e8cda37e2147
87c7ed2051e0fe2a4a18e26b27035bfac1c9c65c
52379 F20110109_AADCQR abraham_a_Page_55.QC.jpg
7f6fc75dfd0dbbb140acfabb4ab1a6c6
7cc978e8c2e53372aedc8fc209d3840761ee749b
55996 F20110109_AADCRF abraham_a_Page_62.QC.jpg
6f41d073be403a3e08922d911e8d91c9
e03f0ed3049dbbca3efa4c767e2a889a5cb5fd13
15052 F20110109_AADCQS abraham_a_Page_55thm.jpg
0be6a40b5c26d2df1e7dd64bfb1df665
4ddd5dd3ed81a21e837feab751bc3bdc6310dfe9
38393 F20110109_AADCRG abraham_a_Page_63.QC.jpg
c5adf6c9e94020b1bc2464734dad696f
641de9b75536edb32843674d8c7d647d62dbc76c
52655 F20110109_AADCQT abraham_a_Page_56.QC.jpg
5ba440316f397e9c5d7384bf2049911f
7358e7ffc2fd8da8bb9abd2a3d6289b66e72e0f7
12165 F20110109_AADCRH abraham_a_Page_63thm.jpg
847e66f26d7683c67fb1435b290bf3ee
60c908241441c53eca9cf04d62ae31c80ab6ed72
15296 F20110109_AADCQU abraham_a_Page_56thm.jpg
c5142e13a70d4ac82a1516b706a58373
36a7f65905e614cd378ca0c8d0fe9078cba7f3ed
43649 F20110109_AADCRI abraham_a_Page_64.QC.jpg
7516fff4b1294e767b4b0f3828471ffe
702efbe414e23c6fef0353efa4a6d9cbc24f5613
45476 F20110109_AADCQV abraham_a_Page_57.QC.jpg
93d45c4829265359d29755015c0fbb36
9ee7fd2232db99b4d6598981315f4f37946bdd01
58653 F20110109_AADCRJ abraham_a_Page_65.QC.jpg
763616ab24008d46fd53f14a1eaef94e
94d946fe9acd4199e427b84464a4387669a72e2e
13378 F20110109_AADCQW abraham_a_Page_57thm.jpg
12a1d52f28f43c7fe92de4cadfb0855e
d82b221de8a3c7bf618c72ce76b2d9df922b19b6
17052 F20110109_AADCRK abraham_a_Page_65thm.jpg
0180c9b551b4617cb7141c0df2336af1
e11055a81298d8a54fc1a63fef9ee42ad71f9ffa
75159 F20110109_AADCQX abraham_a_Page_58.QC.jpg
57cc3f486c90d49aece64b59a7aa7041
fc4584d119275fca9064790ca1a48fee3439fa60
40125 F20110109_AADCSA abraham_a_Page_74.QC.jpg
1c11af94c52109ef3335de7ad4a8aca8
98b3cff067281682f6b2cf3356d2f758cc042291
62287 F20110109_AADCRL abraham_a_Page_66.QC.jpg
e94accb592f0aa5c12fb18074fc4fad5
076fabde9a805fe098f75d637afc63cabef46b46
40720 F20110109_AADCQY abraham_a_Page_58thm.jpg
c4f73f5690ecf48c92a814f73310da80
cf8759f9e035524ad737e529051b9214c1397e0c
13474 F20110109_AADCSB abraham_a_Page_74thm.jpg
962fa5ce536ec173ac5bb1e3b2b80e6b
d2102e06b3d5c6d4655ef1fc98bbff4478b442ba
37329 F20110109_AADCRM abraham_a_Page_66thm.jpg
0a5db963159d75373a7407d14d708ee1
3f304cb7d18716a3c3c254473757aa7f41b1c4d5
67337 F20110109_AADCQZ abraham_a_Page_59.QC.jpg
ef138f527428585e4caecadcd4554627
5dd6c0826560e878dc17d8ee4148ff1ed40f3e29
50105 F20110109_AADCRN abraham_a_Page_67.QC.jpg
ce3fecf62412069bbc9711f13e3f11da
a90f8b4b54df9740b2ee6e40d26e7be0105c8d7a
71957 F20110109_AADCSC abraham_a_Page_75.QC.jpg
c496f22cabd1476275585dba8d3e3d5e
247e99354d92cce974169c80e3fa1571bcade5eb
15081 F20110109_AADCRO abraham_a_Page_67thm.jpg
50a3e2eea2971d934d9144162e8f618f
e2319ee4ed6734777f0cc82dae5211b910440623
40381 F20110109_AADCSD abraham_a_Page_75thm.jpg
c2434cdbecbebd94ab19efa2c3582c5b
c4589e6de1e328b1f48d56ec8b8262447d80d405
22648 F20110109_AADCRP abraham_a_Page_68.QC.jpg
9801c4ee31c67bfed90eafa7a2b5c37f
7b46026dae50aa8be030e4956fa4f05830da4b8c
27026 F20110109_AADCSE abraham_a_Page_76.QC.jpg
8283606323bd6ca98f430bde911ec1db
af9eb3ee52cf679917e28978d3fe5e3abd4a7ef6
7716 F20110109_AADCRQ abraham_a_Page_68thm.jpg
76496eaf06e86ed77827677f51e4530f
e689d598dfcb93a5b18ad3e69ad6b06ae1c9180e
9552 F20110109_AADCSF abraham_a_Page_76thm.jpg
ff948ce28a3cc615c449fe1092ef45c6
303e71b5def3a50faa8f37f1aa7d5f1ac53f35ea
60970 F20110109_AADCRR abraham_a_Page_69.QC.jpg
429373fcf0f9fa4908e645656bc66db5
5e27312390ff7b280f38a5282146ce445c289831
25616 F20110109_AADCSG abraham_a_Page_77.QC.jpg
58a025e3efc49701751e2929013360ca
8e62996c6f9b6f529e2f14f031c21207867c2db2
17043 F20110109_AADCRS abraham_a_Page_69thm.jpg
03d9b6b51ab02e2f76933d27f96428c0
11471fcb67dc686a4e93a6012fca2254646beab1
33449 F20110109_AADCSH abraham_a_Page_78.QC.jpg
06157095783ff1ef05b043e41f4938c5
c761cced02fcef3516dd6202902d44a267349ace
54080 F20110109_AADCRT abraham_a_Page_70.QC.jpg
c9a8fcf37fe8cc65d8790d01cdb40983
343d352f278c28e5d10ccea3f4e9327193655b72
11224 F20110109_AADCSI abraham_a_Page_78thm.jpg
80bdc955c01775f5725b7ecc8217d8a4
d3af5bb2de7de422a17a7c60740bbadfe5c2c88f
14816 F20110109_AADCRU abraham_a_Page_70thm.jpg
790ac680f6d0bf382c63fcb9508b3da8
1cb1b23b1f4e4ee7938a211339b97fc4cf18d6ff
27257 F20110109_AADCSJ abraham_a_Page_79.QC.jpg
1522ad824edb876c361a0beed894bb5c
260b317591397ff522ae29c9f30df60eab72c551
38685 F20110109_AADCRV abraham_a_Page_71.QC.jpg
64b74e438d6029027db48d23dd169178
467c3ae2bd269942151667748567c356cb9805c1
9411 F20110109_AADCSK abraham_a_Page_79thm.jpg
506ea21c01b7ac0ad12992bf2f406e00
3efcfb343944022bf57202d9f2d887502e2f6d91
12544 F20110109_AADCRW abraham_a_Page_71thm.jpg
2450a074a8beb09a05c15fe747d5f32e
bf5a5a57dbdc787986c95b069462e540e438b4b4
74970 F20110109_AADCSL abraham_a_Page_80.QC.jpg
9ff38d1d4d18cb24fbcb0a54d8550663
60cd4ebf133db6d48ae0f79fe680d97bd7d0b082
61930 F20110109_AADCRX abraham_a_Page_72.QC.jpg
72256976d903246ae8cc8a118e2a93c7
cfd01c3701c88aeac810df69cf6a6124c873c25a
23559 F20110109_AADCTA abraham_a_Page_89.QC.jpg
1cb9c111454b9cb0cfc8d259350e0141
24911620d0a27439972b27e318d8fed79d4e2df9
41212 F20110109_AADCSM abraham_a_Page_80thm.jpg
5903d7fd4674ba28b4a47dd80c539514
8496dc37243cb5a68450aee4963393252fc8b3c5
37264 F20110109_AADCRY abraham_a_Page_72thm.jpg
309e399cc95841539abd18fd6e2a4e1d
de3b920847051e952904d7076e8493a6658bbe99
8178 F20110109_AADCTB abraham_a_Page_89thm.jpg
ef0d7f86c63fa27370848f73ef106b32
9733a044d7ebe04bfd83465ca6220272ec2d1d12
62362 F20110109_AADCSN abraham_a_Page_81.QC.jpg
39324e36e787f2c1fb98bcfdf7fd7e69
0f800c9b4c4f682bcac6bd4e140825394890fc4b
54646 F20110109_AADCRZ abraham_a_Page_73.QC.jpg
4674bbb20c65219d0fa8081a7ab71a35
72ca88e3992ce76ea6c70e95bde47bdc769af5be
37644 F20110109_AADCSO abraham_a_Page_81thm.jpg
5e26224db3ba74a91f197afd7276070b
4ce71fc16529662af94aaaff955ad24dc3d7f521
37434 F20110109_AADCSP abraham_a_Page_82.QC.jpg
ae474e4dda24bc1af0399531482d0073
acbb55e9743c96ead93be173edffebd22c5f2d3c
12192 F20110109_AADCSQ abraham_a_Page_82thm.jpg
eb83f2b1f37d396782cf87046fc4ddce
d4f62903967f436f02a1b89736f6f24169490103
25881 F20110109_AADCSR abraham_a_Page_83.QC.jpg
76278c4aa329122ef6988c1553896e3b
6bc9ddb51e5a5990086fd68b817eed6db99cb15c
8658 F20110109_AADCSS abraham_a_Page_83thm.jpg
0dc1ebc29fba4341eb01f56304ceb80f
3c3e652b7beeb181c6bf7e2bfc7a6c4ddee3c0e4
16450 F20110109_AADCST abraham_a_Page_84.QC.jpg
5d32ec55991caf70a26ab4de0586c4cf
bee24bd038a3f9fe21ed05b98c956af12a73dc11
6144 F20110109_AADCSU abraham_a_Page_84thm.jpg
88b85e368d97ad9d7e662c6e06ec44a1
ae0cac7700ac97ec6e909e74658d678414e44396
35723 F20110109_AADCSV abraham_a_Page_85.QC.jpg
d713186ee8d8dc54fb418927f6faddc6
c5096202447df8cc1141bee17c8dbaeb9d8c9777
66993 F20110109_AADCSW abraham_a_Page_86.QC.jpg
7bd1d07be3b8bc8204dde173d986462f
0953bdecf2ad10ac304269280dad07a40b34eccc
38309 F20110109_AADCSX abraham_a_Page_86thm.jpg
698df955a2b0ddc1e968a1439ef1fc0c
16b4871d197e0646d5136302ea9f2c6e4bb76366
64323 F20110109_AADCSY abraham_a_Page_88.QC.jpg
0624554f617a0cf4abaa96fa696eb8cd
b7c0b712f63cabd4a6c425aabf174fa323d81d44
19642 F20110109_AADCSZ abraham_a_Page_88thm.jpg
f2c6c61abae8490ae248a10d8a822487
766c328ad2f35b25b7881768ded864ce940d5bf0
83892 F20110109_AADCAA abraham_a_Page_19.jp2
efd0341f612ff1ab280220e23e974ed4
ae655f0320032356999777d903eacc9f38a465e5
88672 F20110109_AADCAB abraham_a_Page_20.jp2
d20a3b72cd3bc6ce993e0295f5e337ab
f1a2696ff2a97db3c735427a6022a7b29b42187b
942700 F20110109_AADCAC abraham_a_Page_22.jp2
3928feca3bd67d51757a2934e880c401
62b6d98a6c41109742fae7837387c1cdd8a4ef73
865243 F20110109_AADCAD abraham_a_Page_23.jp2
9b133280176c8384071beba4bb820627
34a378f8fe1dfa9ccca2a2cad1851e31d5e555df
101763 F20110109_AADCAE abraham_a_Page_24.jp2
305160b32ff13d81074f4cacc26e26b7
2dde8bb06ebf29dcd821cb96f54de75278bafac4
1042371 F20110109_AADCAF abraham_a_Page_25.jp2
8b5f47fe89af4b796f0eff68910bdb37
92ca9fa3c24a618e60474dc68d954f5a276e7fce
771098 F20110109_AADCAG abraham_a_Page_26.jp2
bdd9078ce26c96f1eeb6aeb05e5f250f
b7d36278be338c09fe813a326fced9b7692c5157
164422 F20110109_AADBVA abraham_a_Page_19.jpg
06b09ae0108e8e1c22cd1816b631cd78
77bb28f3debaf4a158db442e12f0b880d78ef5b1
89757 F20110109_AADCAH abraham_a_Page_27.jp2
77a0256da9725e379e731fac0eb701b7
d564330e78b4b0f72a490d505681cc1fac7e912e
1669 F20110109_AADBVB abraham_a_Page_52.txt
b865388480c03cfff46cab5e7b7fa546
29be72c857af74e52562ad37ea6d115de7ab119c
533200 F20110109_AADCAI abraham_a_Page_28.jp2
75dd22e23e34601f5f7f0f73851d6c72
2f4b4d3248f7136b55bd71350ab2899e8decb84b
65740 F20110109_AADBVC abraham_a_Page_68.jpg
7bef05666d35a9625bffd3fa7bb7da26
74eda730a375676f84742a10905901b70aeba5b1
83325 F20110109_AADCAJ abraham_a_Page_29.jp2
194f06ebed20416d955b86708d90a899
1b95d8ae14b8c828eb5bf6d9e026b28aa110e2af
35631 F20110109_AADBVD abraham_a_Page_23.pro
be6ca2eba921c3c8635819ba33a23ce8
ca3f3a90b2fe9081aeb0067a3cfebaacef80ecce
84660 F20110109_AADCAK abraham_a_Page_30.jp2
b5e83b2bfadae30a5bfe874cf0719109
96e8b05b123f495fa8db6b2ff409b03294869579
16284 F20110109_AADBVE abraham_a_Page_33thm.jpg
01be7dedd5ed0c1f353faaba4b9c0ee8
518f14ace241106ba4ec2cd6257200e49d7a21f9
88930 F20110109_AADCAL abraham_a_Page_31.jp2
5895ae15a69e0118427e1c7f0b3a0fe3
812169a1e2012f4831f2f936336d09df632dd6eb
198500 F20110109_AADBVF abraham_a_Page_88.jpg
a608355ff83023324013647705ac1243
1809a81d803936dcaff9b4e28159d40aba4363f0
104443 F20110109_AADCBA abraham_a_Page_46.jp2
b4dc9abb966b7aba91397878f6650cb6
925ac4dee0e650e1ed3b893504e14874bc7ee869
999122 F20110109_AADCAM abraham_a_Page_32.jp2
0f2aee34cafd2f5dcb93fa92b9443f16
bf1f8e7eae424b7a9eaf888577d2fdcf10454ea0
91742 F20110109_AADCBB abraham_a_Page_47.jp2
809a958d25b30f867413d9ad14acc5ff
2bd318c7d2df61bb4c428c65113b25a61cdd724c
63575 F20110109_AADCAN abraham_a_Page_33.jp2
9db4e5fe62fa0d0837166d8206f400a9
f7079a763e2e8cc68eb66a8ef6ed70cb66de5f5c
60872 F20110109_AADBVG abraham_a_Page_19.QC.jpg
2329f151a1d7af3e679306206c49b9b4
c8ed4070e0f86ce4c7dc535d61a25c450f961112
9499 F20110109_AADCBC abraham_a_Page_48.jp2
1f2f1e53a621991a5e5ce359f7fa57ac
61f096bc57a84bd3e4345cd737f9152611fda25a
881766 F20110109_AADCAO abraham_a_Page_34.jp2
d3b8cae95713793add7a7f3e573a0c4b
76f5dcd5e2974721dfc331b2442b44b53c994c37
40099 F20110109_AADBVH abraham_a_Page_28thm.jpg
1b85e50e656736bb4b120bbfb2a7549c
514cabc27f67b1dc270c3d7fc28738d2455b7922
21920 F20110109_AADCBD abraham_a_Page_50.jp2
dbf146eca487ac3e0452dc4c9240a9a8
777ef7b2aead2bd84b2e9df1ef22d5248cc76895
924141 F20110109_AADCAP abraham_a_Page_35.jp2
df016b69198dec0f0319e0b34a6d309e
afdb9311c8d644ddbd79bdff51f6fc36c6a93f3a
695848 F20110109_AADBVI abraham_a_Page_21.jp2
3783768a66ed4fcc7a1a76809ecf501c
68eeefe612aa40dfb96925133d965a190f0ec3ee
85737 F20110109_AADCBE abraham_a_Page_51.jp2
d5fe8fbd86c03e8df4075251d98a77bd
e95277ff436bffcc46d48f718e42fe4226ef98db
91541 F20110109_AADBVJ abraham_a_Page_65.jp2
3a69aa3dcdc9d8858f865f92677c063c
fb2a184d6f2a52d9b5bfe486c0c4df01e6bc568a
65818 F20110109_AADCBF abraham_a_Page_52.jp2
fd35afc768ee1c7cf9a5411b08a5005c
acd0d8bc68d45b51c36be53722b90ddfcb1498b5
881805 F20110109_AADCAQ abraham_a_Page_36.jp2
a8dfa05e2d88e4aea30f2452b35fbce0
167b74e6ff5d9ca8c9f14533936eece25cacdc16
1046 F20110109_AADBVK abraham_a_Page_68.txt
be819235535c32b9f9168b9dfc1ac656
91cf520afda6b1fa6d88b2de49f76f1c9e4c0e34
76042 F20110109_AADCBG abraham_a_Page_53.jp2
bc30115568e07f1ed4303d9afe9c92f4
fca062fb203c2b82893ed31d25e1e3bdcbc0496a
54612 F20110109_AADBWA abraham_a_Page_01.jpg
5acd3f66ef8454f300c74707a7218957
8241ce63f72571e6cbacaa7b549c14f1ea30c2d6
108668 F20110109_AADCAR abraham_a_Page_37.jp2
52e27e7171fb319b2381e13fe8029f3c
a5f9919a9cfe38abfb11b5d6498bc8b468ed8aaf
31788 F20110109_AADBVL abraham_a_Page_26.pro
aed1d56d3f2c152181c5cec9c430bbab
faba52751d555181d26d356ed5afcb544a2495e6
80219 F20110109_AADBUY abraham_a_Page_73.jp2
be29fd139b4fc518914d353a2d645453
bec740f28592fc0dcc50d5daf9fe215ca9116202
660400 F20110109_AADCBH abraham_a_Page_54.jp2
e3821a7fadefb0b23fdb319107be49c3
bc0cd5bd3a7786a350626ee84d3e22a89bfea3a1
14710 F20110109_AADBWB abraham_a_Page_02.jpg
39703d1eb3fd8c7416fc0ac72a1e8794
6e82ba35b0de73cf067e5a8f25173fb4bc369ffe
106478 F20110109_AADCAS abraham_a_Page_38.jp2
9c24d0e37691c8007d5581894a530c20
b3515b03c2f613fdf886858823a002c1d2bdf82c
25298 F20110109_AADBVM abraham_a_Page_37thm.jpg
56f5dff71ac4d8d3b2c16a6a193482be
018150d2a61024dc16266d84caaa6b88c0f54a21
19022 F20110109_AADBUZ abraham_a_Page_41thm.jpg
b838227c5a542539fc84c434e6d9c2cd
f9f570c2a3171b6d1ef51f20b9dd3b01128185d3
74458 F20110109_AADCBI abraham_a_Page_55.jp2
279171b543fc93e521c894f6250b4857
41be2ea0d9b1b3a83dec9649c3dcc0f9a94653f7
14586 F20110109_AADBWC abraham_a_Page_03.jpg
ad1d6bc85b98bc7dfde1b1f591ce0584
e03cee43cb0c6a4eb2cb8a6ffaf52628443d8cd0
76527 F20110109_AADCAT abraham_a_Page_39.jp2
f04ae79bd12be293b3a76b69ad31434f
e73d7f0b064e8e2f1136635cdb8e16c0e1db74e7
81308 F20110109_AADBVN abraham_a_Page_21.QC.jpg
73d8fabe43f8fd8fdbf037e0d59e3380
0a46e78d615a8c65eb644510ca2b4221726527dd
92284 F20110109_AADCBJ abraham_a_Page_56.jp2
f8b36fc093199d0a8637a529844f1e4c
4d6849aa6f175d9105cc8e1f6d2301b59329433b
115960 F20110109_AADBWD abraham_a_Page_04.jpg
b70b92492316f95509c62e057b6c15dc
96d9e16214263fe381777bb7e894c7f79fcef5f3
1051980 F20110109_AADCAU abraham_a_Page_40.jp2
4322a89c5ee2f163604ae2c6416a7b4e
131c2c5c763242381fc338aefe983b9f2de4b88e
89145 F20110109_AADBVO abraham_a_Page_49.jp2
1ba2b667da9130b3cd8c0e4c51d54917
9ce00d2a37e4e8f233a8e96a8766f9d674125e99
74439 F20110109_AADCBK abraham_a_Page_57.jp2
9d5c741d8231b5426b75c77765fb5370
072151c23c5e6cd9c5aaa7e967f7a1458bdf2830
199525 F20110109_AADBWE abraham_a_Page_05.jpg
795e96e9f44fce148cc5dfba2d68632a
e7f3f3211aa5cddd49ec189ef928bc7806d151d6
85755 F20110109_AADCAV abraham_a_Page_41.jp2
a1f58ce04fbac4ff004057bc33e775ee
be48f64e66dfd743b9e585bdde5bde14a9e0e977
22010 F20110109_AADBVP abraham_a_Page_87thm.jpg
1a0394be0a2128b3e35b2d31a8403b21
30400e193f3659e3587f9523d53874855d95254a
827052 F20110109_AADCBL abraham_a_Page_58.jp2
bb552535e01204f6c6fd983da5b9694f
0c5031a742ba122ecadcf674b45c89f89583995c
171425 F20110109_AADBWF abraham_a_Page_06.jpg
5d8210e61abdd4b037cd402c36c95553
f2097ae2d8c314a0aaba0f01483008d207134ab7
993431 F20110109_AADCAW abraham_a_Page_42.jp2
81dd827d935f4521ecab559c1eeefe17
d7bc53d2e84de6ce1aab6080e87457f4a6ddb88e
91060 F20110109_AADBVQ abraham_a_Page_43.QC.jpg
a30d64f80ed6c67976d81eefe394e998
e3c81cc51d6c96f330371b281ee9205ab142bb7b
535102 F20110109_AADCBM abraham_a_Page_59.jp2
1bb1f0ca8753190997726c148c55ae79
7a5b9198c2fa73006df762cf735832e3e75117d9
80117 F20110109_AADBWG abraham_a_Page_07.jpg
846d1c7e831a8d80f0ff1c65276252d5
65a8c6b865d5e4ac80dfdb6014f0a4c1860d0367
958740 F20110109_AADCAX abraham_a_Page_43.jp2
a629c5222a493736c981cee43c821a1b
aaef91604253d663bb93a079870100a1d57c4295
26275 F20110109_AADBVR abraham_a_Page_04.pro
e5446867275ce7963cbd4c59022827dd
e8ad779e5f04cad209ef6a9ed2ec432d0230fbf9
681231 F20110109_AADCCA abraham_a_Page_75.jp2
66e896b8d35b49eb0a3df3a74c9c622c
311eba8e5f5ecdd2669e551d925d529e6673eaf4
105517 F20110109_AADCBN abraham_a_Page_60.jp2
af060f6f6916d8e1832522d93526c9ac
af35e600427c5ea761268978aadab387c4f077c4
867628 F20110109_AADCAY abraham_a_Page_44.jp2
b5a589bcdad6d2f6af606a56dd850f05
a24ed49af5a431ddc440cb37827bef7692648739
7008 F20110109_AADBVS abraham_a_Page_11thm.jpg
b9ed003edc1eea57e3a9ed0e00ad7997
3bed30143456db8d9fc179b6b3d81e49f8b738c9
36537 F20110109_AADCCB abraham_a_Page_76.jp2
1ea21ba629df977eb61e6435b38c6cd9
ca1131f9aac8bf0b6d903a2fc7e9c9c1aaad6603
92128 F20110109_AADCBO abraham_a_Page_61.jp2
a483dd1ab2e6c226a9d12904ad314cd7
cde7c8c8106c68f8a3e27fdf656710e5a5a34ce7
245378 F20110109_AADBWH abraham_a_Page_08.jpg
e5ec25bc73ca1b77bb61b386d0555092
bddd2539d3ec275df019bfa45200b176d6b9fbfd
888749 F20110109_AADCAZ abraham_a_Page_45.jp2
6303ef097f38eb0dfefd5598350835a8
1cbb68074b11c2d4d8c7a28f6efe64d78d443eb0
16516 F20110109_AADBVT abraham_a_Page_85thm.jpg
68b7d691ddf065ce13ed619cac13e952
df91e8543ed2b0e2b1155698bb181560d35ffcd9
37505 F20110109_AADCCC abraham_a_Page_77.jp2
1b6bc8cd13dc2a2ca67d46716e0220fc
2a215dff31326d44880377de1ae14c39ad985e56
88868 F20110109_AADCBP abraham_a_Page_62.jp2
da6627b56caa6e6792381eb9b2521c00
db3d0e300c2c9f863d405ba54abbdceae69fccb8
101437 F20110109_AADBWI abraham_a_Page_09.jpg
d86468ec3fbd44df2896bfb2ada85384
163149d9c65d2445a2eb2b58e98e45485f0680db
25271604 F20110109_AADBVU abraham_a_Page_58.tif
fd7238eb31d05a8698dbcbae4f916ce0
0facf0ec0424acdc461359fa2d9b0e77a6df3879
50029 F20110109_AADCCD abraham_a_Page_78.jp2
e84241e0fcf3425415b44a0f6b40672a
7fb1337b993c49bdb39080470500d45c47f442ac
53216 F20110109_AADCBQ abraham_a_Page_63.jp2
f9c7c335cd3b57ce831312f076ea1e5d
adc4b53763355ace8fa3e9201e850f528af32d58
165423 F20110109_AADBWJ abraham_a_Page_10.jpg
183a60f8d547db590b6be497a3c8aefd
083899ee849d705b620b4794acf78aa8de184d07
2335 F20110109_AADBVV abraham_a_Page_87.txt
80626a57cec639dff807df7f6a25a7f0
e317b1b3befaa047049c561f5468f361835d5cd9
45371 F20110109_AADCCE abraham_a_Page_79.jp2
281b0faa2c930b603b4c8da34beddd68
64fb0cb2564041e72e8ad149914275884e5915ba
53123 F20110109_AADBWK abraham_a_Page_11.jpg
9fb6ff412781a09ff5d56b5e02c35eab
dca2ea8759c04ceddf92f3051f3611299c937bc7
8423998 F20110109_AADBVW abraham_a_Page_85.tif
e97765f724d0a8266d8299099ae21ae9
99175a67323cc496fc082cc9e52d92980f7edddf
795220 F20110109_AADCCF abraham_a_Page_80.jp2
b605598c7edca239312444a91d321d67
dcdefd145732bec9366e39575be4f3dcff24a216
131294 F20110109_AADBXA abraham_a_Page_28.jpg
91d8ca432cf323cbe954ceb9bffcfa5c
66528a7a81ae9b5ceb0e3815d828ec591029e09e
67303 F20110109_AADCBR abraham_a_Page_64.jp2
c414970cae573ef5a7573b5b935e1631
b4273a749d5026b80f6c18aba94d2f0e4683da88
164778 F20110109_AADBWL abraham_a_Page_12.jpg
d1575c4a8853a64f1b09286498a84abc
832675c2b5f852c9bfa9cde3923a838014dc3d97
104176 F20110109_AADBVX UFE0001221_00001.mets FULL
74cdb79c736c2f5ead346e3e92f7931b
22b3c81d47d930707a60fe121a0041539b6ac3d7
561866 F20110109_AADCCG abraham_a_Page_81.jp2
0b2820cf1addaa21246807f0476b9b6d
a8916e212f63c03738a5a82629d980923421774f
160225 F20110109_AADBXB abraham_a_Page_29.jpg
9d0b24ce15dc9ede5d08a5f903ed90cd
7f94b5c27a44f31471ee72a69d136f63d2b3eaf5
532867 F20110109_AADCBS abraham_a_Page_66.jp2
c98dfce9aa346886bf40fefe458e69ef
f3152c0b0e8a479acee239c6c6314e3ea10a72de
194185 F20110109_AADBWM abraham_a_Page_13.jpg
2cefab18f5d5eca1f1d90be5d555433f
fae0723948ca656f07f4aae410a9f704c18a55ad
54580 F20110109_AADCCH abraham_a_Page_82.jp2
f2585836576567a1dfe74a9e17b9f40d
5bba4fd5712db41e7e9bd6b4aaff4de2b31f136e
166073 F20110109_AADBXC abraham_a_Page_30.jpg
b960788886ecf9bf8d152a9af6659af7
f735559e1608046b732ca37acc715128478457d1
76654 F20110109_AADCBT abraham_a_Page_67.jp2
d6196ac10eb55f0450106f281781e015
a36bd30ce6578684cc18d56fee695c50fb46c0c3
198446 F20110109_AADBWN abraham_a_Page_14.jpg
e9975c9054b458a9c401af24af708e20
0b08ad7b1152c4a92d9d4ada9c49cd84e6caf539
32062 F20110109_AADCCI abraham_a_Page_83.jp2
82af643f8bad0bcc7c9315a5a01f0bf9
452f0fc1bcf905155cc3165a58b52336e571216a
175545 F20110109_AADBXD abraham_a_Page_31.jpg
47795fa7e73dae79c09d56e306e3fbb3
e403fca7f5b773b3688b487b0e9d9f7b075add8b
32047 F20110109_AADCBU abraham_a_Page_68.jp2
05f200c37cb867ea330fa98b717ba24b
8da20c16f8ac7ef825c5249b01e88e5775382bea
196237 F20110109_AADBWO abraham_a_Page_15.jpg
10bcf1437e300c5e2cdadaf3874849ee
cdeea63c8b9807740b96745f5ddaa7a0b5184630
22365 F20110109_AADCCJ abraham_a_Page_84.jp2
ed687cfc6da196e65e8f835f96e3aed7
2d6b0d1b7877dc7669982b96cb44d913006a8a40
199024 F20110109_AADBXE abraham_a_Page_32.jpg
ca3db56d0a6f508230b8ff3d464cb9c2
f58e4e0aee4ee3d8c833c68232379719bdcb6d40
103614 F20110109_AADCBV abraham_a_Page_69.jp2
8a55d6e65e5f0c32a3ccca6ac56eddda
ac39f193f7b9e498c1a84855a689a3b685dbb90e
192713 F20110109_AADBWP abraham_a_Page_16.jpg
1d811f065a8093a0be884e88f7decdb5
87fba8e8fbe143f131fa565f93f235efb13b065e
515634 F20110109_AADCCK abraham_a_Page_85.jp2
6b0c7ff990db3dd3f35c06da61a4d967
fe1a086582d5683132a9051d5065790465b29e82
126062 F20110109_AADBXF abraham_a_Page_33.jpg
e5421063ebe9bc3004ab4061cdb80e44
2721261147595b6d4ee7135cb19547068e40c2bb
84029 F20110109_AADCBW abraham_a_Page_70.jp2
8233fe0dbffeba9441f45e93c784b2c1
4fcbd2d2fb39a2ca604a14b195148ed6ba0c7a12
158869 F20110109_AADBWQ abraham_a_Page_17.jpg
d6718d6a32278318a63c52cb19cc1026
8de4965f72248038eeec5bf87fc985476b339b19
776429 F20110109_AADCCL abraham_a_Page_86.jp2
8218d6fa418a98d5a0f3905de4b6b0da
6ad2d2dbd0d9d25c37e139e9a226758b5c0481fb
195398 F20110109_AADBXG abraham_a_Page_34.jpg
a9296c3c3f730614787e0084203b3aa1
46678bd87db1a74f302934d6dd92de4b7294e5a0
54768 F20110109_AADCBX abraham_a_Page_71.jp2
12022c30415bb6e084b6da15ff5100f9
6a218abf3d01cda71c7db4a6c6aaa7caa7286f47
216253 F20110109_AADBWR abraham_a_Page_18.jpg
0f0a8fa8af7877d6af9240a9e9711a24
b3598ec69cbf9534229af45bf849f7792d09638b
1053954 F20110109_AADCDA abraham_a_Page_12.tif
bb0f05d1c3731bb60f657dae92cea760
644bc53da065f88835ac233d0f4bee9dd310ebdc
120470 F20110109_AADCCM abraham_a_Page_87.jp2
c490eb06c1d0c1562ddbb856ca720c78
5bb8f286e5cd960906b5d6a6998903d346f4e4ed
201712 F20110109_AADBXH abraham_a_Page_35.jpg
c0f1be1b6e2fe90844731170fd142d3d
1302c0a57e0fca9556684bf091efd99867307b49
496671 F20110109_AADCBY abraham_a_Page_72.jp2
516ef37f3ef35c3fb55562ab5fe0cc61
9d0d5f42d6872f6864b84306e215b7ed0e03f9f5
173082 F20110109_AADBWS abraham_a_Page_20.jpg
6f2e3d4b2ebcf96aea009d150c3e66e8
d0012388d7072b2640a7dea281510d8542b61a17
F20110109_AADCDB abraham_a_Page_13.tif
bfa9d501216221f18e2236a78a43901a
001d584b59b5eceac29e3cc267bb855cb7f3b565
94437 F20110109_AADCCN abraham_a_Page_88.jp2
0b08bc2fedbe7fb286bacfc3240475ba
8995574cf8ff5c78760317fccc35c3b51649a457
62605 F20110109_AADCBZ abraham_a_Page_74.jp2
16f54f05069bccadb6e150c36a833cb7
e90292f909297a8e644eca37e7d3df0a486e6e2b
168038 F20110109_AADBWT abraham_a_Page_21.jpg
4c068012ec81326a2c68ba2c332ed2ba
0bf41fbbc61354d67a3c40ec5b1c61eb6791e899
F20110109_AADCDC abraham_a_Page_14.tif
425c88cb947092b16738ebc295efa125
48457931e4b7303369f9e4e4dc338ecc7359a426
29001 F20110109_AADCCO abraham_a_Page_89.jp2
bcb5b85a37efe8dd708d2d35a27709ce
04e0dc6a4b2321742f463a3eb4b4af7edfb06619
191809 F20110109_AADBXI abraham_a_Page_36.jpg
37aa091d234a4883257142976b9b68bb
31b72584d99f3195aa8e23a21993d3c8fb9155a1
210519 F20110109_AADBWU abraham_a_Page_22.jpg
f949724605d4a745dc472b89bd8e8e21
4885106be4480c634c69cb28df1cb1bc2f40bfd0
F20110109_AADCDD abraham_a_Page_15.tif
4f34a212b52a992fcb038e7e527c1e9c
e40c00a45459d85aa00e05809077915ed7a37bd2
F20110109_AADCCP abraham_a_Page_01.tif
a3c4253f4363ed5503a2f49fc713c335
fa3395b00090db48493a9fad6f2f62282e0b1ee9
188542 F20110109_AADBWV abraham_a_Page_23.jpg
874a42c591e5cea80cccc7cb1ee1bf93
d67ae9c373e1483c1be55b8cd4a64a3b2730ba3b
F20110109_AADCDE abraham_a_Page_16.tif
7d8bc72a2d227568fa3e9b765d3a7cc8
6aaa0014ebdc8ea4ca67a8aa4135e6cbadaf5845
F20110109_AADCCQ abraham_a_Page_02.tif
35436557e0859dc7608a35358aa1b33e
cffde46d6e663ba6b94cae07d6dd1ecd242a159d
213065 F20110109_AADBXJ abraham_a_Page_37.jpg
dd900e11fb80888e7c414bc3c4da71c4
91303fa48f592a3f2bed8e53e1a6ec0a846f9691
199755 F20110109_AADBWW abraham_a_Page_24.jpg
6e650691305de6353445085e39c00d53
b5519665595e90054dd6fd0f36894d49e19f5a77
F20110109_AADCDF abraham_a_Page_17.tif
434336ea2349d1ee9f171cb6a5f2a94d
a34658b429f4713ab6d697dde07e415202700bf2
F20110109_AADCCR abraham_a_Page_03.tif
f054bb36c43e871cf6de5519b60432cf
57cc2ab02e2a26a9df2aed12c988f6150aa8340f
203224 F20110109_AADBXK abraham_a_Page_38.jpg
a56ffa61e3064bc1cd556c576e613d69
b265005b00bc005ced49ae36cc210d78926d53d9
221116 F20110109_AADBWX abraham_a_Page_25.jpg
3ace935887a9ecadd23b396541b23c47
1124209573cc12cc78b28b4151670353e0bb03e8
F20110109_AADCDG abraham_a_Page_18.tif
a1c65029386b368913f141c6d10e1820
5bc32f473d64500b9cc91af92f5982105dc22585
160235 F20110109_AADBYA abraham_a_Page_54.jpg
83e79733337657372bb9d1a7eee0d549
4b931443b727f1793b534ed566b0fd6c68809db4
147009 F20110109_AADBXL abraham_a_Page_39.jpg
816f33038018a6aba63be26c7a539f22
a33903f9e893b71bed4e34a36b78ca1d1a0238c3
171126 F20110109_AADBWY abraham_a_Page_26.jpg
c1da5137df73ebe5d1d9ebfd7c7b9240
2c0612883661aba2f36358cd109989fefbb6c846
F20110109_AADCDH abraham_a_Page_19.tif
81d1a646dcf3dfb32f85ecaf079664a7
114d130d9ba428bfe7b521417dda26ed900819fa
149383 F20110109_AADBYB abraham_a_Page_55.jpg
3f660a90ab674b8f8da67fb216803f38
3a22964e656d78df17e6b0221fd7440689e477e7
F20110109_AADCCS abraham_a_Page_04.tif
677f3e3ca5175ecc06f2604fb290155c
125d3163c0b1b93d4eb3ca9721d6e4d63372608c
197247 F20110109_AADBXM abraham_a_Page_40.jpg
d54616d12fe88d8668c3c0c594a3d472
6174e969905e14ce378b1f154792b2fca9e6dbf8
171619 F20110109_AADBWZ abraham_a_Page_27.jpg
15f680dfa135c8e456450cc0efa4af64
a39f9155bac4b49de6590c1052d832f7b46f089a
F20110109_AADCDI abraham_a_Page_20.tif
76d9f607fc4bdc53b58fd4f33f92a11a
f122d600b59b27902af5742748a66cef47c74d57
174055 F20110109_AADBYC abraham_a_Page_56.jpg
26530e82b18f0ab7e248b0841e528265
cf2d72c7ec50ca6eeef7acfd3e3cbe1bb842155e
F20110109_AADCCT abraham_a_Page_05.tif
117dba1f38f35309b70b4a9801ebf73d
b3e5f66dae4134a6fff7285aa68b102722f6c2e1
166330 F20110109_AADBXN abraham_a_Page_41.jpg
6d3474fd437d74516751c7db3d7c650f
de1d11d48c8da59ec2c426cb56dd26965d479dcb
F20110109_AADCDJ abraham_a_Page_21.tif
d3d361e19080996a8ab3fa48bf9bbf6c
5dcd2865e4232d895d5f7c6041f83134495ffb5f
148889 F20110109_AADBYD abraham_a_Page_57.jpg
ee9f7a19ac2cb634fe2705203909a480
da83eaff84118300d9785dc3353938a6b987aeea
F20110109_AADCCU abraham_a_Page_06.tif
f44ae45a83bb48143c928cdae9dab435
15005ca24d7843c5629c6ff7391feaa64738d4f4
213516 F20110109_AADBXO abraham_a_Page_42.jpg
773b5bc99827644de8b08d7fb6a41726
0827ee867b60944a89ad0778542adb19a775222c
F20110109_AADCDK abraham_a_Page_22.tif
8aca299104f0b1a79a4218a66b5b6cd1
61912ea4d2f3c834ef56ad82d6852c3cf3eef9af
180992 F20110109_AADBYE abraham_a_Page_58.jpg
42993ce3c8d3924d89ccff24503f39a4
1e323e3973fc78f27ec89356d37b54f86cd63cef
F20110109_AADCCV abraham_a_Page_07.tif
885342c658e2596bc9d7f2d2553c9734
5d67f3693f98538620ce64631ab028ab539b3ef2
201014 F20110109_AADBXP abraham_a_Page_43.jpg
764d1d908c8ef67845fdd99fba61bf19
a5b67f100f05066f0fef9a6bcf125750757b4cc7
F20110109_AADCDL abraham_a_Page_23.tif
9160d3067c500dffaccbe25a19f461ee
f8dc525a6680efa1836af706496b0333e688cdde
141083 F20110109_AADBYF abraham_a_Page_59.jpg
8ebcd8dbe3a54cd181159b9053b11d0f
dd78eacf9d6da9f67268659343a9736b8fa18cf4
F20110109_AADCCW abraham_a_Page_08.tif
d9d9a8d9e9aeb4d70c74c38486188d7b
1f7abbd41475ba01b70960a3cff85af616faebfd
189869 F20110109_AADBXQ abraham_a_Page_44.jpg
3448ae01b9091228629af225ac3b26fa
18b418b7bdd741c7e09675ab49c9e44d9c293dce
F20110109_AADCEA abraham_a_Page_38.tif
aea069496b4a573e2592c629482b33c2
a162d0af2615713dd5e624e1a7c72bd47ea947cb
F20110109_AADCDM abraham_a_Page_24.tif
eb29d260719fda55fcaa41f5f02d119c
c0e8eb2077f923934174f8442810a8cf114f0e0d
186476 F20110109_AADBYG abraham_a_Page_60.jpg
5445b47ff08dbca7e090aa433083abed
cc136ecb9912c79ba9f2d4476a8caace71f69a8d
F20110109_AADCCX abraham_a_Page_09.tif
ecb8b4100359a5b669cd9ef5352e73e9
56e493d862b4cedabcc0483dd04fd8c6c4ba836b
187665 F20110109_AADBXR abraham_a_Page_45.jpg
b9de1232ab8b8fa418cb5f6a1bcf679d
4e20517e655399399cf83a5624ec991a2e428aef
F20110109_AADCEB abraham_a_Page_39.tif
06abeb85248c6cf34e1f11e655be878e
6ccd207e71a20b4e9b322d5a27c5957858174d97
F20110109_AADCDN abraham_a_Page_25.tif
cab8237a453eff39bd8cec84d9b448f9
103cf812c0f11ed49cd74e97c4ed9f0f5b666e20
161108 F20110109_AADBYH abraham_a_Page_61.jpg
d15c19ea8f51c33d0dc04302bbcbe5da
429d0e3574cad57da4af98901b682b7c354db65a
F20110109_AADCCY abraham_a_Page_10.tif
a4d5bbe76f926e6addd41089f73e291e
c6de098180de885f939f0b88ef70fb94d5c51562
202427 F20110109_AADBXS abraham_a_Page_46.jpg
a019c961c8980201a868d0aff429e915
7723c5dab4d6b71ae5f5a0deb970c6796fd34cc4
F20110109_AADCEC abraham_a_Page_40.tif
1c1ed785623ad41338a2226582938a07
37ce29c43ce69889e6d29809e6135251d4cfc680
F20110109_AADCDO abraham_a_Page_26.tif
9beb875183be581625544f7c224d6dce
a604abdee8911533b15004f342504badb035394c
162415 F20110109_AADBYI abraham_a_Page_62.jpg
72c5d4f8f9d9d664fe37f3082f43a2a2
c2cfdf704098985c5a168f40f19c231dee414b57
F20110109_AADCCZ abraham_a_Page_11.tif
4e6c180e8fc41868f29600848e1d5f76
0d442136e0f4c03120b78b0c54dc8a60606c499f
178132 F20110109_AADBXT abraham_a_Page_47.jpg
b23285ae8833d0e33ded1638ad741d22
573857bfeeae5ff20e9117404ceff73a1b69c419
F20110109_AADCED abraham_a_Page_41.tif
7d229e3d206fd99afb81369964dceaa8
8d8ff1c0efbb5174228111b47e5a27f63a62343c
F20110109_AADCDP abraham_a_Page_27.tif
080959609047ac4d7e3ef359472e7d95
acfe1cdc5605ffd1d82287c28a3c941bf3d8f247
19199 F20110109_AADBXU abraham_a_Page_48.jpg
955bbc7d31f943c4569e6d81c86d9c5f
f3035f316d5ba3a35989c007c245b5419801e1a5
F20110109_AADCEE abraham_a_Page_42.tif
040cbe5ac6b13529498bfbbacde55efc
f2dbc2ca0efa1280aa78185002d80b2f23c1eeb7
F20110109_AADCDQ abraham_a_Page_28.tif
9c6c6c6f7c117a3e8ef7d3bfa588bea6
1acf2288c82eebb958fad065a4acbb817b8c7717
110729 F20110109_AADBYJ abraham_a_Page_63.jpg
7ec5911604113d79766a5780c3153163
ac1301f29277ea800f71cf5aeec2d0ca87055484
173331 F20110109_AADBXV abraham_a_Page_49.jpg
c78867e2fb4584cf2bf037710f39c13d
959d53a62be39b1a5cde9a63345682402a7835a4
F20110109_AADCEF abraham_a_Page_43.tif
3588eb517aa106b686e12d1f97758ed7
babb514c0a86799afaf5fb25114c3ea86434bf01
F20110109_AADCDR abraham_a_Page_29.tif
2743e8264a209f26e3ecd70acb808bf2
96616bf508e0190b8fa7cd5f9c0d1cfc9db67117
137407 F20110109_AADBYK abraham_a_Page_64.jpg
0567c4a9663b361d98251109ab52df84
a46481d0526dc1c98b89316c938f9e834f08ff3e
43151 F20110109_AADBXW abraham_a_Page_50.jpg
1deb4562df19776808f6982c79e9de7b
cfcaeda0030bb9b7f9213e7814cd7463dd3e0724
F20110109_AADCEG abraham_a_Page_44.tif
eef74b64e4d0831cec6170ae4d5911fb
d1128d15affaee13248e6becd8a318a519319c50
136836 F20110109_AADBZA abraham_a_Page_81.jpg
6d9d480204ce40f580ecf1496c143e64
5e3c034226fe0012a8ffcf2647db389535c548f6
F20110109_AADCDS abraham_a_Page_30.tif
83660ab02a91393d307e3e3a630b7365
12783451bf233a747c6cc05544ba94025f0dadb6
165872 F20110109_AADBYL abraham_a_Page_65.jpg
6636c48e55a7fca8a7e486bbbe767d43
e97941f6c7ea0318ed9b8904cf8fd821a30f2a13
161737 F20110109_AADBXX abraham_a_Page_51.jpg
e8e5666bfca453ef0a5ab832202c05a4
13cbd7b956fda0456604feda0a6c7a0563bd89fa
112062 F20110109_AADBZB abraham_a_Page_82.jpg
1e51fee4953d69c0357db860f56f8f1a
bb79b07232afbddb75520ac9c6b71198674092e5
132181 F20110109_AADBYM abraham_a_Page_66.jpg
5a5d79f6f2768acf6261c4cb5f5ba1cc
0ad05bb4632d1e42b4047542ef46af58bffaa06d
113332 F20110109_AADBXY abraham_a_Page_52.jpg
92f0c407b5f1334b24a5f63b9813b994
9b7c88397ce1c117f3908ea487cb2aef4d9afd74
F20110109_AADCEH abraham_a_Page_45.tif
c047d07f9338d20464a5e198b4dc5409
91c4b16578e8d95c87a1ac7e319e020529d05ce5
68193 F20110109_AADBZC abraham_a_Page_83.jpg
31e64564862afcf8b4f55cd130499a4d
9baa1c5cdff8157385088731c0e4d31c0bd547c6
F20110109_AADCDT abraham_a_Page_31.tif
d6b7c00eb8204a69a832585b0915e422
77fefe0adffa7afed773adc6d4927cc144cba562
151360 F20110109_AADBYN abraham_a_Page_67.jpg
9f104d04064bc848c3b8945766a6bcc2
7af27dd18d1cff4ee972c47b18fa3a086311e1a4
137181 F20110109_AADBXZ abraham_a_Page_53.jpg
8815dc8e8dec29a29780e9cdfeb48915
6884761e0489193cb855450c743c383f3ad0c977
F20110109_AADCEI abraham_a_Page_46.tif
9d92e9b5e21225a2a0f888bd216339a5
abc91c290c76a0a6a311db47e81ca4f29d0781d4
44711 F20110109_AADBZD abraham_a_Page_84.jpg
3c109e49f4ffe4020f5b7dfdd215be08
7bd0db7c2b6c82aaf7f3f18499b1d1d66eb0dfe9
F20110109_AADCDU abraham_a_Page_32.tif
19fbbd13622101eb281f799736fcad3a
17b5585e4b2f8371aca144bf4a0cd5fbd5a9f84c
208276 F20110109_AADBYO abraham_a_Page_69.jpg
efbcfb1cd3b59dfb60f18b15dee81d01
1453c2319a8c52202af4131a0ef2c5dbd0cb5c93
F20110109_AADCEJ abraham_a_Page_47.tif
7db479aa3054f286040b27fa17d80fc9
b17377c83684f2f573200f7d7a39db122d9e1b6a
88656 F20110109_AADBZE abraham_a_Page_85.jpg
3f1709b2c161c427189ff9ee927998fa
2e71d497d7fad9db6bb46a33b274a3cd17a433c2
F20110109_AADCDV abraham_a_Page_33.tif
ac9a67a2acd633f19c0e31e8da8d144f
67ca56d30d7fa7e1af3e98971b908b3d47b22498
161489 F20110109_AADBYP abraham_a_Page_70.jpg
d70bec1c6293feaeab1bedcd656c2acb
8bcebbad3b0b1509c1689db69343c314e01ae9d4
F20110109_AADCEK abraham_a_Page_48.tif
4742e93cd8cbb918b596322b2a7727c9
d0518f51f95f4468c71f0d76f5d7a19358d2eebb
149559 F20110109_AADBZF abraham_a_Page_86.jpg
655643f6b0fcc85feaecf2d40aa9b9f1
7597ad3222c46f48df845c86ceab7bd177dfd395
F20110109_AADCDW abraham_a_Page_34.tif
7d378aa0d1f7949a9b5ad6afa4cc3f7d
6ed73485601f419be06afc720b6a83bd851988aa
104774 F20110109_AADBYQ abraham_a_Page_71.jpg
c0617e4ef54d5434d3aa72f58d7535a7
b30ece9b0e6681d353af400cfc9512c683afdc6f
F20110109_AADCEL abraham_a_Page_49.tif
4e859c4184ed63b4ee4422041a9da311
12e1cadb500ed8dec11c4314316dc076e9919f36
237167 F20110109_AADBZG abraham_a_Page_87.jpg
f45499f3e4df4314ad4e8f5e0f7c76d1
d57f23f9f7c10107609d777a924b9c85b5ac983b
F20110109_AADCDX abraham_a_Page_35.tif
5e17f4c29e85a0a068fb0a65edaa2a04
74bbaa47281689906aa3e86df68c8e5d99f0657c
127181 F20110109_AADBYR abraham_a_Page_72.jpg
4b931ee10fcb3fbf5c4774e430954d86
524f2e9ec5d680bc4958bd3ef93a9e96807dc9b8
F20110109_AADCFA abraham_a_Page_65.tif
ae31f885db290a6d1b9106ba44ebd999
9fa672798218ed7f16667a2616d1444adf6bb112
F20110109_AADCEM abraham_a_Page_50.tif
e0bd82a81511d1aff6f53b945c2bf792
84eb16de1d1ed77d14db755744699cddc0d78a06
58388 F20110109_AADBZH abraham_a_Page_89.jpg
a60f49b57bc6e68bae2dd9854e5cc0e3
c9e7e6b55377a8af3f0682e43a1ae60283d10f32
F20110109_AADCDY abraham_a_Page_36.tif
1dd0632543d178fb826e996014b9379e
3e87146f99366f451a1c0dfff687e3d679497b95
164689 F20110109_AADBYS abraham_a_Page_73.jpg
b592b816178e33a525ff90016c3ecbaf
0237226c6807d5a6996f53a9a6d24d51f3fa754b
F20110109_AADCFB abraham_a_Page_66.tif
ce1071d7cd9ed6ec80a822d15da89f41
7bd9ecde77aa89fee62f0de658e5e8a3bd23b88f
F20110109_AADCEN abraham_a_Page_51.tif
64f2cd7d15b66394f9a731f15298ae2f
256822c09fae43d72a7ec3d3c5694e396faf553d
22598 F20110109_AADBZI abraham_a_Page_01.jp2
fbf122b70308dbabf7c740bea8f7e393
e69f9ceb4d0bf8c8d24014a3145e62275e701287
F20110109_AADCDZ abraham_a_Page_37.tif
5389a41fc8b07432ed0a7240ebd7b3b2
75d2f2ef4d4ab08f319222eb0b116a9e33b93a6a
119125 F20110109_AADBYT abraham_a_Page_74.jpg
cc2924f5f87017ff84690e6043a147ba
2753d1bbf8466177f0e02a55ff75b29ece1dc22b
F20110109_AADCFC abraham_a_Page_67.tif
d63ca00850f8f7aec285dbe81686bb9e
befdfb984adc957f67ded2e71bc10d94356540b6
F20110109_AADCEO abraham_a_Page_52.tif
c19b2184cfc9711231d65433922e8f4f
cf3d299bd2101df6890e93059d54cdb3e7098408
5563 F20110109_AADBZJ abraham_a_Page_02.jp2
cc2f7f10f9b8510c041a908da6efcd45
a9e671bf98021f370b98bcdd9cd8e68d2fe04ccb
163949 F20110109_AADBYU abraham_a_Page_75.jpg
537ad37d006b74cced3050169b5dd83e
b10b286775526cb68c008a5e91788c8d6ef8ecb5
F20110109_AADCFD abraham_a_Page_68.tif
2ae75e12b415f49dd237c0a19d2d056d
e4530bafe1d722012550bf8a1afeb3df635016c1
F20110109_AADCEP abraham_a_Page_53.tif
940b2e462a1b004950567ef597dd644e
8557f29a6f551a2eb5f4100aebc785865b6c9b1f
74115 F20110109_AADBYV abraham_a_Page_76.jpg
f03a42c02dc53b48ffbf5b33699af261
ac74e5c80ebd4d7cce4d7aaaa605a3bcbed337f1
F20110109_AADCFE abraham_a_Page_69.tif
7c22a2af85e8703856c73b8256e19bf8
467beba4bd1f919da88b6ecf508cd73fdbf19d26
F20110109_AADCEQ abraham_a_Page_54.tif
b6cdad943d67700b938996f62d30723b
429cf8b1658aa96f8facf4372d26dd09d1235bce
5639 F20110109_AADBZK abraham_a_Page_03.jp2
6257c1ed6cb08c1258a64ee3195ccb85
aa6f13390f94e43af267cdbaf20aa9e768638453
75153 F20110109_AADBYW abraham_a_Page_77.jpg
925ef07db4a9e1409a32da834eb932af
ea88d55bbbcbec9774575fa1af1a37891f67765b
F20110109_AADCFF abraham_a_Page_70.tif
230d401f4df0d12c743754c3517eab27
eaf47680c84e557d2b33002e73b3d02d5fa9d348
F20110109_AADCER abraham_a_Page_55.tif
c4f2ebd6d164f5211e922988eed3f678
89cf4f19d99fce081a842f24053af56b2f6acbd6
59009 F20110109_AADBZL abraham_a_Page_04.jp2
f46ca670576f6a9f5df319950c47b995
cb95ef4959f5a0113245f6f21478160568791c56
99781 F20110109_AADBYX abraham_a_Page_78.jpg
75f72d5d983e5f0ab0f1b8d9cebbece8
b8e22f4b168509317b4fc5cba074bd4d66cb25d0
F20110109_AADCFG abraham_a_Page_71.tif
16d1755c1e3eb9ff40960821f840e84e
06c5e8d708ba3c21698fed65f7404874910ed27b
F20110109_AADCES abraham_a_Page_56.tif
cf7ad0162db3525822e4969f19fadad9
fcd62f7ddc860c6b2376fc4b2c52f00819eea05a
1051977 F20110109_AADBZM abraham_a_Page_05.jp2
c06011145424ee5c78d1bb62bb226141
bdd37989f75c75b4a953bee825ba5574dff32da6
89450 F20110109_AADBYY abraham_a_Page_79.jpg
a9e29a752ada641ca07f33c601f45e61
fd971f76dfbdd7ff9210c252f2cd424b05736a5b
F20110109_AADCFH abraham_a_Page_72.tif
e5b2909fd1dffbfa67e88ac5b9dbb652
a441fd9d110a4a700168160b5c952f33373150b9
F20110109_AADCET abraham_a_Page_57.tif
e330c28fc53a1d95d1b0bd0282e84a80
83e20e1b946f5ac4e7db847fc9c75634243bce03
1051981 F20110109_AADBZN abraham_a_Page_06.jp2
94e0bc4b27043eec00dc90837d231d8d
75aa1a75258f9e891eef017550d88abb9259f622
177368 F20110109_AADBYZ abraham_a_Page_80.jpg
b647ed8e6965231a7fe42408a5b0dffa
4ad9493ced658090b64e809db4715743c02e8519
F20110109_AADCFI abraham_a_Page_73.tif
6a857736d4c09f69f48f1c3de7c705d0
787f8c15835ab6bbee46227ad0d083e9679dbd0a
397465 F20110109_AADBZO abraham_a_Page_07.jp2
17881fea84dc718061f166b18b39c547
1ae6144719f31e96e2ac125fc68b09fd30f9aed1
F20110109_AADCFJ abraham_a_Page_74.tif
73414c6a3a5d6f5c5eb9163c4e496e31
f1bff7cb5641f1ba416b7701414a64ac2f293b4b
F20110109_AADCEU abraham_a_Page_59.tif
49947b2de931eb1a04f7b423b3ef5b12
9742ced939697d9dbfd334af2b0545453c7f0032
1051978 F20110109_AADBZP abraham_a_Page_08.jp2
f9815bef7441a6ec1b7eed80d95bdb1b
1b0d43ad388abf748ae7a57ba576d146c040a7eb
F20110109_AADCFK abraham_a_Page_75.tif
a8c3cd820bf889ff768cfaf5fe97bbdd
48fac239109f7819681dcbf9909e961518d15c72
F20110109_AADCEV abraham_a_Page_60.tif
795ef45883c91afb89d7610287d46fd6
8e6c10359c0524f48858e433971e36e81ae5a27a
586975 F20110109_AADBZQ abraham_a_Page_09.jp2
25fc5e8262586705d83dfe9d859c06fd
a344d6dab535b832f76e4c14060b6348f74c8bd5
F20110109_AADCFL abraham_a_Page_76.tif
f780e9ad628b27f9932193e69927783c
5e99e906a693c572165ed206383be638b1580682
F20110109_AADCEW abraham_a_Page_61.tif
0c19a508d8708bec8eef015f2be04907
523c8adc46b8028a1a248adc0a3a47c6ebe1a5d7
83802 F20110109_AADBZR abraham_a_Page_10.jp2
95da0311be7c723e2f1e8edc471e2565
2a08238ee50344a9ca12f9bcab22e7d32c12d191
1171 F20110109_AADCGA abraham_a_Page_03.pro
e2431f4ddddea1aed8d1811b2833d5c2
f75fadb3ef1431d01b9f22816d6c5d5cc5723f30
F20110109_AADCFM abraham_a_Page_77.tif
5bc2f8f9fa22676aa38484890eabeff1
23cc997482a3aedd910ab81ecc2c73b9f36da4fb
F20110109_AADCEX abraham_a_Page_62.tif
fc1bcfff8b83d46e3f31a329f89819ef
72f5bdf21073532f5a4a5f07e6c48dcae5c80f4f
27892 F20110109_AADBZS abraham_a_Page_11.jp2
ec5a3c5c0e8ad4ae8f744c3ec354087c
27c6fb10c89cb7c2a49f7c0feebf4960c4a7b312
57117 F20110109_AADCGB abraham_a_Page_05.pro
49e0fa39561e397e1639c0567312f0fe
f8640551b29febbd7657039a515c9669b34be8f3
F20110109_AADCFN abraham_a_Page_78.tif
48c78b336ff45fb03876c48b5a5c20c3
77fa345da0ba59a5b518fde8bb2774984e503aea
F20110109_AADCEY abraham_a_Page_63.tif
73e3978a05eb2c8306d50931586c0afb
5ee7110cb35c908abd8c987e9c2d76b4451de4e3
85160 F20110109_AADBZT abraham_a_Page_12.jp2
f99b32f964dd329a8dd60dcd7f8d2cba
dc66103e6577fff56f9c6bf99b752f70b9b2d2cd
44198 F20110109_AADCGC abraham_a_Page_06.pro
29012c2be316f53f44085a43b568d180
7967f6b5f80a1d097d5328e095ce1182aeeceb68
F20110109_AADCFO abraham_a_Page_79.tif
f11fc5255aa9ba58e4235aad77bdb90b
cd395f3e8dc56954b8b27227e4a469a538453a83
F20110109_AADCEZ abraham_a_Page_64.tif
716ca2cf56a4808f7c6d3a35d2974799
f4d28644ecaafe08d20e998aaba3502ac2a4d159
100434 F20110109_AADBZU abraham_a_Page_13.jp2
cf83cef0c14550da6d413ff54a75f8fd
35f4407cb684b91d4d6518046f0030ba74376fa1
12064 F20110109_AADCGD abraham_a_Page_07.pro
198648779af2f055c23913d74b0b1d79
ebe202d2dd3b7f0e8bdeb33133163235f481493f
F20110109_AADCFP abraham_a_Page_80.tif
924c5df851ba09243b8a6bfa53fb9da2
fb97779487b19a84692bcb4dd5e7fc575f117d34
101701 F20110109_AADBZV abraham_a_Page_14.jp2
d9752dce5409581093214ca428c8aa04
8bc84d89efb2c09547a5911711741be0d0618bd2
56057 F20110109_AADCGE abraham_a_Page_08.pro
60862333de6f3364f40495dd5f2b0daf
cc057b0aa5b1b2ffe79b68c0104fd1f0faf38ad0
F20110109_AADCFQ abraham_a_Page_81.tif
c38d057915cb008cc4504af434ebb423
8c75802d7c3249891179119bd1b7358af88f0c2d
102894 F20110109_AADBZW abraham_a_Page_15.jp2
a655ebc2c9666bf6ca28d23cefab2f2c
ccc3abf05f9ad994de63695ed801327a14c48d72
15132 F20110109_AADCGF abraham_a_Page_09.pro
b1176c04dbb47a33f2e3e5b963fa7316
295a38676fd3fb32f253630499e3b7859c7c465d
F20110109_AADCFR abraham_a_Page_82.tif
4fe96fcd22e31a9029ac675c584b09bd
0958381a73e209a01341cdaf577666a05553247d
100192 F20110109_AADBZX abraham_a_Page_16.jp2
f09be3c102505627b1e9fa58aec5a9d3
33286bb7e5ca925c7564f3c24231ab9d1350969f
38171 F20110109_AADCGG abraham_a_Page_10.pro
61df1681242f8f0f5ad125869b0465a6
26e410430edc6b1bbdd81a5f4002850493ff28e7
F20110109_AADCFS abraham_a_Page_83.tif
b38c1337561d04972b48b80438110b79
acda258df99220c177558d4f9b306e194da796c2
82979 F20110109_AADBZY abraham_a_Page_17.jp2
224af0095449c08dda1ce96269c84ef5
e4d15add3a5be32d66ee169ef5b7560b7e1bb9e5
11319 F20110109_AADCGH abraham_a_Page_11.pro
1de2a382f8ba78c711e6c028c9da6200
df878af4fccfba246b3de0f0165adb5d08d7f0dd
F20110109_AADCFT abraham_a_Page_84.tif
6583332ed18d004083e1d3e39096893c
e7468d061e66e7d9d6a6995154c96a299f6f61af
112105 F20110109_AADBZZ abraham_a_Page_18.jp2
8fd0682a3bb6b46a2c8b6ad900d3c2c2
524d96a53b5ba30851848fe6a6a3d76039d93f14
38541 F20110109_AADCGI abraham_a_Page_12.pro
9a9ae02611ddd80535c0630ea6ca78e0
08b12f95d57f2f980d6ca786dde428048a99c1c8
F20110109_AADCFU abraham_a_Page_86.tif
819193c1e69e9658a66894d10f7864f8
8efe4c3d982f8bdb9147442f058514e566ef9bc0
45864 F20110109_AADCGJ abraham_a_Page_13.pro
3bc729dcd42f6b18eeb39ce18cef3aa0
50c5f69956ba642efb62be1c3616c61542ff11ca
47489 F20110109_AADCGK abraham_a_Page_14.pro
0d2ebe269194007b74d8d700504c2d81
315a0476168aa50a5b46044e039fb573bd53e866
F20110109_AADCFV abraham_a_Page_87.tif
64635d0eabb71457d62116dc8b87eac1
7244dee0fa4dd4b3f7d8b2d42d21cef28d535373
46845 F20110109_AADCGL abraham_a_Page_15.pro
1ca10c882a49846a52e8aa93dcd17ce4
d890ef618d2a371dc88c3babdea3a636ac5e3088
F20110109_AADCFW abraham_a_Page_88.tif
8f4fa1d1bbcb7e89a8595d50dbf229e0
13077071e6876f3e5b65ca62f61f34b34e356538
46659 F20110109_AADCGM abraham_a_Page_16.pro
a2eeb29a8e11aea4f5f74f7ab5aaf559
205fa06f4c395c3fe85ff98a7c83b4146736cd98
F20110109_AADCFX abraham_a_Page_89.tif
c1f3557ccb2b706f478f86c0862f841e
919cf2df618f6569feff2abd7ce8c654170e91c6
27477 F20110109_AADCHA abraham_a_Page_32.pro
c56d00046da6739719b59c5a73d9b93d
9c2934d317909afb3adef98bca717d850e3ff2af
37089 F20110109_AADCGN abraham_a_Page_17.pro
9a982aa554f0c14f65a6f09c24a6fb1c
2d95393bfb3f798379e09f9a36bf589169ad961b
7590 F20110109_AADCFY abraham_a_Page_01.pro
f0607d5dce7d3930e5d7cc39d4135b87
a6a6a7f70b73e614b24e918936fa27ab6a0ca5aa
26857 F20110109_AADCHB abraham_a_Page_33.pro
2c24e1af1a95430f858168422e713806
6123a4f12d8dcb42b14233a6d183644fdb4972e9
51534 F20110109_AADCGO abraham_a_Page_18.pro
39f1939d7f2378343c491b70e60c53e6
3e8f4bf75fcd57d53de74f6b02282a00ae35395e
1175 F20110109_AADCFZ abraham_a_Page_02.pro
a8ac4f391f5ebd331e87bb6fad0472d3
243d461cfb7af3b311dc075868bc656374af6d50
39450 F20110109_AADCHC abraham_a_Page_34.pro
44bdcee0dd235df9a6f9f78542411478
0a39531f1f454233eb47b7fe78fc04330801a5bf
37452 F20110109_AADCGP abraham_a_Page_19.pro
2597518639deef000fd13448b0eb8fdf
a50d34bb01244505eb7aba0e5a736595cfddd338
37194 F20110109_AADCHD abraham_a_Page_35.pro
7b47fb3dbfc4ad895642c2913b0f23d2
f6bda77d01b905e66f2282eec972cfaa4f80939a
39542 F20110109_AADCGQ abraham_a_Page_20.pro
9d190a296be6dd838946bbcd5c739fce
95b230e94ddd040c5bdffac1ac774df510562ab7
38866 F20110109_AADCHE abraham_a_Page_36.pro
5c55ce04da526bb3f5924a27563e3e03
993d25fa90af8916cb63ed5215c76fcc40eef263
26273 F20110109_AADCGR abraham_a_Page_21.pro
b20ada7e33deaf9a3e312014bb58ebdf
246a6e25084a63bd6fd969379ce8522d73a0da21
50731 F20110109_AADCHF abraham_a_Page_37.pro
14b301859c140fa0bc5af24825e2ac7e
09245b268803c5d9ed847f4be3d23c29d9ba64e1
38753 F20110109_AADCGS abraham_a_Page_22.pro
ce5b6e68c80e35bcfdeb84738880e3c2
cc5c6a6d84b32e03db2b766dc23d221902f90a4e
49327 F20110109_AADCHG abraham_a_Page_38.pro
fcc8abae6ab38dcc8706098b0124abfe
04e838bf46aae822ff48c6d4319b78b146e3ccaf
47539 F20110109_AADCGT abraham_a_Page_24.pro
f160cb8fbfcd038defd4b1446e043ef5
934585e91b203688bd73435842797a62ccc4b9e4
34276 F20110109_AADCHH abraham_a_Page_39.pro
76f9bbb2e08be3f00e30d4894c8b0bae
e7b6587be9fd7285c2719c962760e7084356a2c9
44405 F20110109_AADCGU abraham_a_Page_25.pro
bfde9e2eeb9a74cc8fd120a99216e64a
0bbc0e91406acdcb6ea5e4bb40ea97ea4c77c350
4793 F20110109_AADCHI abraham_a_Page_40.pro
fcaf277807fbd0d1514c08b48d0f82f9
0ea842c42269512105c744dffe52bd3ca43b16af
41130 F20110109_AADCGV abraham_a_Page_27.pro
de6c79f07608c7db1ecd79c205013397
28542962e939e9794cb4c024df6720cf21543100
38022 F20110109_AADCHJ abraham_a_Page_41.pro
c29060e2dc3f0fc97ba3efd32374a4f3
4fb968ac551c2ca118763bde1acc6bab040d75ee
42420 F20110109_AADCHK abraham_a_Page_42.pro
4125e614fb9efeb24d7428ce4cb447d0
64f763b8a0cf98e77d7f7c32a8d485cf8734d58d
20494 F20110109_AADCGW abraham_a_Page_28.pro
fe0bc658750d0c2723b0edfcdb2dd76d
fc1746f147757e5d935bedede1aa8921e8a150b4
35731 F20110109_AADCHL abraham_a_Page_43.pro
37b3cb77e3e42d418968c4e2cbf058d7
9674a349671b4d48062db4658b8693a2f0deac61
37299 F20110109_AADCGX abraham_a_Page_29.pro
3def6d0daaf1fa753f7561e64113bd91
8ec9ca131f24a35ee360c532919fd65e2448e931
53138 F20110109_AADCIA abraham_a_Page_58.pro
dce5891767111e779fa18a978d99c8bc
dc01a5818de7bdb13dd4e8b5d95a2edc1963f878
38567 F20110109_AADCHM abraham_a_Page_44.pro
f31030e86b75144df717bec51581c2d1
8969c32d0a05f770f8fb950e6c0ff21c1b0a6d1d
39880 F20110109_AADCGY abraham_a_Page_30.pro
d10b3217d52b1b6308d07b3ab0efd4ad
f636234fd58cb809bd66bd8178a55f2c1e403676
34996 F20110109_AADCIB abraham_a_Page_59.pro
fe531d35c72fdebee6e37f4de2a5d840
3296be2e8882b52560175f0d1a8be0ff514d1041
31152 F20110109_AADCHN abraham_a_Page_45.pro
75ac3239f1a9419025f298c15521da16
01efcb1c8635cf8bd6c49db76e1597631741ece4
42060 F20110109_AADCGZ abraham_a_Page_31.pro
fd8cda5ad06f0dd5e663d6f3bdc39f88
15086482c036067dd6496e24dbea3a2141d0bb7e
60432 F20110109_AADCIC abraham_a_Page_60.pro
fb282e7508f809be0ddf448efd5503c4
1f0fa0290cf9f5ffd0fb9dad563fbd181ff2847b
48386 F20110109_AADCHO abraham_a_Page_46.pro
48eb6f67d83d45b611ac9c8f4ec1625d
1ea11f9f819471b4bcd5124dc5733eb64ea93027
52658 F20110109_AADCID abraham_a_Page_61.pro
a89ccda9dcc949f31e2d2e2a1bb7c52f
4c6d593f9eb9373bb7056d59dde39dfb012d2580
45744 F20110109_AADCHP abraham_a_Page_47.pro
d88c09f1a5ab6c6a3c49f5afc939f996
c335309ab4686195622283dcbf60923f465da6a2
49004 F20110109_AADCIE abraham_a_Page_62.pro
5315598eaab3fbb20540862dcbbbf011
030dc53d8fa851b2acb46a9115bee787ea0e7959
2884 F20110109_AADCHQ abraham_a_Page_48.pro
c84fa8bde5ca177f3cdb17af586ce8b4
452ed3962defd72d72e7d214246d6cc80cedd463
36358 F20110109_AADCIF abraham_a_Page_63.pro
607c85f58865ecdd7944241073ca55a2
0e9fdc4eedbae7d06521e9946caa42634742a7aa
40242 F20110109_AADCHR abraham_a_Page_49.pro
28bc5baf29b0839cba1164407d3697e7
b146ddb656ef53524495a3362af1ca1c817c990b
47823 F20110109_AADCIG abraham_a_Page_64.pro
ff5046d7ac7d9edfaadcb9f1565df9ab
673ff63a4463d825ec6cfb2700bf05d859fb9f2c
8686 F20110109_AADCHS abraham_a_Page_50.pro
93edec2cd78bb45f417bf9cdca1db80e
0d00f690f8efdef4785aa5a46d00efb126dd01e8
52527 F20110109_AADCIH abraham_a_Page_65.pro
996f388d55e44fe2db9d309f8a709d91
229500d2f9de61c5f08b7e50ceef0332825a2bb4
52307 F20110109_AADCHT abraham_a_Page_51.pro
a571547e83b948cf309350a7e804b091
1b5cbb7c4e82244f093b3d9fdafc45de5f367ce0
31948 F20110109_AADCII abraham_a_Page_66.pro
c4ba3272f3f4648b09f583daad5cddfb
987bd5684d199f5e809bc525d89fdc884ffb729b
36707 F20110109_AADCHU abraham_a_Page_52.pro
13066d4b3f580949980d3cc94a3ff179
a027199c0d45c0f5e846317c18b934eae1334ded
50939 F20110109_AADCIJ abraham_a_Page_67.pro
1e7a405cbf184b4652632fd7b028e83f
a3d84710534d9714ec1ab287bf326485f8c6a67e
43210 F20110109_AADCHV abraham_a_Page_53.pro
9af12312d0a8bbcd82c1fc1bdab01c8a
ab6ff0acd27015330ce003ef2b7098506873ee50
19877 F20110109_AADCIK abraham_a_Page_68.pro
2b113e4cf08613c8ca35e2a42c3dcd29
93c181a0ce5910bbcd57af275274d90e7c4cd302
43264 F20110109_AADCHW abraham_a_Page_54.pro
9ed8e46076df43dee8bd199a66fbcca1
f13ad81cb97d3c98ca8e7552c4851a6fe812e17e
78346 F20110109_AADCIL abraham_a_Page_69.pro
aa3584404825325327664ca1483db45b
a0f4987ad0dfbc08ac53bd71b8ed8bdc324f2403
10707 F20110109_AADCJA abraham_a_Page_84.pro
49f52faed8c854024965d4fc21a7fdf1
487d7172f4cdc07bc17ad7b5060a0648caa28abf
52366 F20110109_AADCIM abraham_a_Page_70.pro
86f420267cd9f328e71646b5c547eafa
67fd78bc318eb015a7b805eaa0e9c6e16642e21a
49283 F20110109_AADCHX abraham_a_Page_55.pro
c000b1fd1397f6bf95006a5cb1d997f6
8d8c2fff31ed1b390eca61fcf951b46fd87b5638
3058 F20110109_AADCJB abraham_a_Page_85.pro
624ad94b919fd86cefd9d37f5adb16dc
70700217f69f2905f14be3d190ffae68c9323138
35119 F20110109_AADCIN abraham_a_Page_71.pro
9060ab3c358cfe49cc2fdac0609144e4
7a387c33d933af64add9129d6cafecc2b68bb06d
61367 F20110109_AADCHY abraham_a_Page_56.pro
6a99a210fdcae15d2bd397c7be1eb117
02b3d0f80de18bcb674b6f6b72b65009bf36fe82
8006 F20110109_AADCJC abraham_a_Page_86.pro
1e9c47aa39f04f6381a428fc0f441160
fd7d4f7e20c914b7f4922486b02c1a60b5312dc4
32376 F20110109_AADCIO abraham_a_Page_72.pro
45b723bf2352207f2a7875b2c04741f3
41b944c7f332e0087453b271f3d51e68d245fc5b
50238 F20110109_AADCHZ abraham_a_Page_57.pro
28b92ab7293905b8ba4febbe844e89bb
d89f31d253be64e7ff527dfa2906cb9c8a3c8651
55988 F20110109_AADCJD abraham_a_Page_87.pro
f728673c08ed8370f2a0d84b2ba4853f
e672d28d620ae7097d8703c65335e02138c5edf7
53052 F20110109_AADCIP abraham_a_Page_73.pro
8129d726b82edf454653f1271fbebaa0
78b3cee938c0e47f01f22b32048e909b43fd1c94
39196 F20110109_AADCIQ abraham_a_Page_74.pro
c2f944656400e133eadf379a16de2a68
09f685a55b88ad9892480cf74cf6aa881cce558e
40730 F20110109_AADCJE abraham_a_Page_88.pro
074c2acdf2c6294171afd84589a963ea
72379ba91f61330bb55df148edd1d1c4f86a66b5
42313 F20110109_AADCIR abraham_a_Page_75.pro
bf43004b7dd653e08693b95c9721e690
fa42dae0c5397e43ffbf72c064d04583f5a3e7a0
11635 F20110109_AADCJF abraham_a_Page_89.pro
d0af7710e4ef44dc9c40478226baa8fb
e4dd093643c35bdf59c4ff12d8effcf4de91cfcd
22430 F20110109_AADCIS abraham_a_Page_76.pro
27300990f880be5a8ecedfe99a0de658
6765190eac67d483759c33746589a105074463aa
411 F20110109_AADCJG abraham_a_Page_01.txt
b2109cfce56d3338551cbe1fecc54069
adf7a4f0070af5589106c07734a769e9cf295b92
22814 F20110109_AADCIT abraham_a_Page_77.pro
a251694271da16f90427cef43fe0f2f7
e173c84ed1d053ff43955673cab832826159e200
110 F20110109_AADCJH abraham_a_Page_02.txt
4d152b4ca93e4a7b07121543ff223b80
7ef3f7786d044465b62ed8004ed3d66dadf5ec99
31383 F20110109_AADCIU abraham_a_Page_78.pro
2ec48a698c5d7d64221ae371c5a710ea
2e1447ea010359174e11d517aa70791721b0f588
99 F20110109_AADCJI abraham_a_Page_03.txt
f82dbca733d1d31830f1a11045bfa24f
556ead1f481fe89a9f5e0e270cb202938c0ea9a2
25545 F20110109_AADCIV abraham_a_Page_79.pro
8b37ebaa298022462bd8941153ebe12a
756e986fd8a2735078d37c128fd63afdd83edfa7
1099 F20110109_AADCJJ abraham_a_Page_04.txt
9cfe32d42e17b3f6e7468b8e1ae9f925
977b415a3855e64aa3eea331bd8deb9a8e2b4eab
51219 F20110109_AADCIW abraham_a_Page_80.pro
142bd98601eb301c81de0efb0e583b17
13598a5ee88f82f5a41d9c314776cd5a305722fd
2413 F20110109_AADCJK abraham_a_Page_05.txt
2df0e39498f69ca4fa72287eabdd4ee0
a8728cf70f2fc92fb2dcb19bc249ab1841f0a018
35625 F20110109_AADCIX abraham_a_Page_81.pro
ed2ce821fe2234f629477b6b951fbe11
105bd09b085d58bb98021658efbed5c78d0e6040
1748 F20110109_AADCJL abraham_a_Page_06.txt
0cb38cdd6e14d922b4af09f31a242874
837692c00d94e8ff7756907da57d4698a73f1398
1535 F20110109_AADCKA abraham_a_Page_21.txt
7e5814ba05d55879131a5f2a019fce45
319a8eb4c4a2fe81722c7c264c26b843e53066e5
553 F20110109_AADCJM abraham_a_Page_07.txt
a277e4e6aa4f87f418591f5bdf92ae36
968273b1ecc84cfd3503b5a5a387f6475237709b
35722 F20110109_AADCIY abraham_a_Page_82.pro
f20b11b2bd1a1a41c60c56144e56b065
9e1d38046b7af12976aa0b3850c2e698e9230a5d
1668 F20110109_AADCKB abraham_a_Page_22.txt
fb46564f84765b72aafdd4005c4ee1f5
47542b0855c00b20e06056cd3bde380b05713a16
2242 F20110109_AADCJN abraham_a_Page_08.txt
b2d3d5a2fcbf888856438569a6fd7898
4b52c31efd6351efe08a27e142fea69992a5e752
20606 F20110109_AADCIZ abraham_a_Page_83.pro
659090477e21645e9a7fc82040e46fbd
a22945db217a55fe79b22ef83e59f13eff4a2b40
1610 F20110109_AADCKC abraham_a_Page_23.txt
9bb18afe17ba1d8bda96a4af642f38f1
395a07d5531f3174176c2c0540d66eefebe44ff1
600 F20110109_AADCJO abraham_a_Page_09.txt
b387307dc2864a504160967fb1d441c2
7a7acc41c42bbdf8657e243ba9ceab361fe1685f
1910 F20110109_AADCKD abraham_a_Page_24.txt
a5c7f0e7a5ef84667f3ac00ab624c227
bb41f60b356d4fada8cb2daf6cb789cb7964c50d
1721 F20110109_AADCJP abraham_a_Page_10.txt
2084a61b36c8d5e69840eb6d4ab90b28
d6310686d2fedc673116f38719e0fdd1fbeaa70f
2010 F20110109_AADCKE abraham_a_Page_25.txt
f2b4c5fc68006086fd2bea8b03f1b7eb
c9ecc6b4e9382db936ba35bd8603b64dfc856156
451 F20110109_AADCJQ abraham_a_Page_11.txt
9e4b5eb6e2e4f43583ec0400d4c490df
7fedd0fd8900fc0a93bde629ad2896fb49a14212
1644 F20110109_AADCKF abraham_a_Page_26.txt
934cc5818181dc691b6845bfd8893cc3
ccae7a5f404891fd76d6735c86c38d153da206d8
1613 F20110109_AADCJR abraham_a_Page_12.txt
2434f9bacbe49ea137a3f18995adf98f
0d4906a4bd6f6bf52e17c451c91f829e6aa35e86
1758 F20110109_AADCKG abraham_a_Page_27.txt
f8dde8f409ae2cd4cc071d61adcaa82c
21d7596b93b5c8e88a0aaf5810351fd639664944
1836 F20110109_AADCJS abraham_a_Page_13.txt
f194b82e39a3b16dd554b90c8efcc380
750edcaf3c6205d3b4ac94b2baa36004fc6aff32
989 F20110109_AADCKH abraham_a_Page_28.txt
3d6ce952b215d9addc1d448e11126366
4b4f3ecec1020329112d12ac040e44980a14826a
1914 F20110109_AADCJT abraham_a_Page_14.txt
1bc2553c3c9ef4c4f634bcd990de409f
2721a044715f92234c480066ea6b17035260ac0a
1756 F20110109_AADCKI abraham_a_Page_29.txt
047b7a148db339e0f07677fa52ab39c7
fb56c05fccf76607bd1c1c4426c6f7352f42ff69
1861 F20110109_AADCJU abraham_a_Page_15.txt
d88c5920c97ca39e64913cad4828f847
fb807a82d70a7b17047ed9a62ea8c51143210648
1871 F20110109_AADCKJ abraham_a_Page_30.txt
9a6f81f3adf920fe19078cf2f5e396dc
bba772aefd54655e6a76b951173bf5b31d7d8707
1863 F20110109_AADCJV abraham_a_Page_16.txt
234480fa2644043974427b3fc16710ce
1acc5061b6bf8c0c38e2e1b9f3e27136ab912be3
1867 F20110109_AADCKK abraham_a_Page_31.txt
c82bd699fa3193f43f867cdd2757394a
0ed5ad397b1d961f6eaea0d18448cedba3043c00
1762 F20110109_AADCJW abraham_a_Page_17.txt
13fba7bef62d13e33ccf4ef452f65e7a
8a28d61268f311a4790c8fdcdc3b421b7a3ff9f0
1321 F20110109_AADCKL abraham_a_Page_32.txt
b64e1bb276f4d48c50826ed03e00ee74
b10b0f28a786132194597da21aac8b890f5fa0a0
2041 F20110109_AADCJX abraham_a_Page_18.txt
491aad79b59fec52c0f474fe1d293555
eda9ac6b1a8e5a6a0b9149df670d059ada9d27dd
2211 F20110109_AADCLA abraham_a_Page_47.txt
178013fb1643cb55a965fde8bdf2dfe9
1a8a7cfea5863d3f2acc6bc107aa8699c6a094e3
1281 F20110109_AADCKM abraham_a_Page_33.txt
73f6a842ef8df0b7e84eeab330b0944e
a27868f3549aa17bcf6ca60d0f634e29551e7a33
1544 F20110109_AADCJY abraham_a_Page_19.txt
dc238e6d2d51b4ef3cd8a44d6e1ccf69
3485b3ce3dfb729a2caf764c976d41c3089f5359
159 F20110109_AADCLB abraham_a_Page_48.txt
1c1f799833996261b5a9201cad05af4c
d38671f8d1a4e035030ac787fec0588288b15242
1784 F20110109_AADCKN abraham_a_Page_34.txt
f993a6dcab2fca32f0812bec60a278dc
d9352baf491c50915a36ff42619f8696060eb6a6
1750 F20110109_AADCLC abraham_a_Page_49.txt
ff4dd43ff1205f5b766b7385f96269d7
174c60391ca42cb2686316cc261d6260f0de2d43
1551 F20110109_AADCKO abraham_a_Page_35.txt
f5469f6717de840e0b60f07e0e6f0d68
bd344531c275809583d9d714020c04764a454d48
1678 F20110109_AADCJZ abraham_a_Page_20.txt
bb8ca39125dd99eb6785f3982794a799
954e59a129e361a41b777dcd1e119092c2dd9a25
388 F20110109_AADCLD abraham_a_Page_50.txt
7102e08694c4bc3f884c8fe0234b0432
3902f49f7778320127fc4686508260e7dd812d9e
F20110109_AADCKP abraham_a_Page_36.txt
7e2f84e5ed453837bd38b9942a8e2326
c618d82e50c5f0a5a83ab37d972cc5146eb6504c
2251 F20110109_AADCLE abraham_a_Page_51.txt
b69944553411f44b455b10868a4f4439
a0db9ca776f5a388278f896576a29fd099d5a3cf
2024 F20110109_AADCKQ abraham_a_Page_37.txt
9e145ef81e97bf5ce43cb9ebe33f6ece
5e421945f175fcd891bb6074ec7c2a04a082af4d
1915 F20110109_AADCLF abraham_a_Page_53.txt
7ddc02c7e112c2121abf20a17253652d
1c8542f6581a0a8801c9f25c393459e21de2fcaf
1949 F20110109_AADCKR abraham_a_Page_38.txt
0d1f20d2091543c90d7bf2054d234071
181640eaff057297946b8a5121fff79033c362b6
1870 F20110109_AADCLG abraham_a_Page_54.txt
68697203b9a10db6df8917a1bca61481
502e46b4dace6355a43ea5c21bb3ab87d11263bf
1381 F20110109_AADCKS abraham_a_Page_39.txt
15dea9a1c6380f09af391c4ea9306632
c8421025df9be828685cb7decf66323e67ce3cf1
2165 F20110109_AADCLH abraham_a_Page_55.txt
168aeaba0fcb2e3e6e27c7f95db6866f
dc6ca5cbe4f2647d03fba85ec9fc42780893388c
212 F20110109_AADCKT abraham_a_Page_40.txt
646f7a2deddee35a12a51c7c7c1676c0
f001ad99467e7a4a6a4959f4dcdd97ed78ca21ce
2557 F20110109_AADCLI abraham_a_Page_56.txt
d3b34a90b2608d6106b91e020368815a
3418d63cad130a40073e3dd1b4a59a297355f90f
1622 F20110109_AADCKU abraham_a_Page_41.txt
c088071fab7b61e787b2b6ecd76459d5
0abc1db910511cfaafac322c39cc2faca08768e6
2193 F20110109_AADCLJ abraham_a_Page_57.txt
9159a3b60421ae7b91151370c7bdc77b
cb4bcba9a4ad4f3b2265b09c2b7b5ab33b536440
1995 F20110109_AADCKV abraham_a_Page_42.txt
a04431f4a570841fa1cefdaa52475a4a
5c72f4e92189edc27c242f1c2a8926dd403ad151
2328 F20110109_AADCLK abraham_a_Page_58.txt
f427e1fc0fa8a4c678cdeef5d86c4ee6
b68e32f4261f3180d0eac234090b251988a223a1
1579 F20110109_AADCKW abraham_a_Page_43.txt
214b4b37063359a118c3fafc51ceafa1
ffac74f020f60776e28530eb13a8440e19da7bac
1806 F20110109_AADCLL abraham_a_Page_59.txt
cc58c25a7cb628e5c98aadeb5c3c3021
5a9f5abcabe166d9f2a1e2086cbf3b75293d3798
2048 F20110109_AADCKX abraham_a_Page_44.txt
d19141909bd4d976e46b52642b7affb5
1d510689b705a69eb9f883b9834412064d2808ca
2670 F20110109_AADCLM abraham_a_Page_60.txt
d288fc2af0dd02761481dcf8ce3b9bea
a767850813ae2f816366ba2fe35c1aae9fe06638
1565 F20110109_AADCKY abraham_a_Page_45.txt
06d2e4b667b281af750e809efc5cacea
e1b4c422e1ba6b9b06047474d1cc2559415b55f6
2003 F20110109_AADCMA abraham_a_Page_75.txt
1a7002d19c5aadf9ac596473945c35a4
502c2c2ea8073c09641c00aa4b49d0d7b3d400bf
2590 F20110109_AADCLN abraham_a_Page_61.txt
12928effbfff07e7ba813201e2d51513
2d82301173fc1a79ae55f6acf849de66de768029
1920 F20110109_AADCKZ abraham_a_Page_46.txt
f3103a9cdb1cc282e2d3bbd62c447870
be31fb1c72bf3780978546f92b37f31b6d641451
1067 F20110109_AADCMB abraham_a_Page_76.txt
6b4899512910a2df1f9751d3bece8d7d
6a9f34f63a56b4c808020d67adc2df0da759d047
2170 F20110109_AADCLO abraham_a_Page_62.txt
8bec5abc39e04fdca49bc815da778745
22f2298b0432fcc8640dd43eb2c42fe835335ac7
1016 F20110109_AADCMC abraham_a_Page_77.txt
0b077a423aebf51a1907926eb9797786
9a42fd32823d177ff6b5a092107ee89713eabf63
1650 F20110109_AADCLP abraham_a_Page_63.txt
1eb4f0184dd6c68d04fa144f8d6792da
944a8a474d757f1bc02596b35d62cbccfda10fcf
1465 F20110109_AADCMD abraham_a_Page_78.txt
f2ab826cb2f902e788f8dd379de7f612
2a00015633441f2c6556809f5a1a8bbb443efc22
2109 F20110109_AADCLQ abraham_a_Page_64.txt
0646167fb46c5dab8a0b0f5d4131ff59
958ffc2c89e8d9d78a5c7b67a3a92233a90a4487
1104 F20110109_AADCME abraham_a_Page_79.txt
a17d4b7bb201fb754f6cae84497c0afa
f07273afa8610a374ec30150799d6626896684ac
2654 F20110109_AADCLR abraham_a_Page_65.txt
4f9a8f81f95d9784c49ea390cdc7fd35
302e5c3b6dfe1e6e70a420084d3fbb75ac47260b
2259 F20110109_AADCMF abraham_a_Page_80.txt
4e02829c22bb2448c054986f221d4d92
85781541aa02f90194585732c40681f3608202a6
1534 F20110109_AADCLS abraham_a_Page_66.txt
01b017d692bcfe34e2e1040d653c545c
b3108b0e0cbf6c3ba7c51cb078f165e84cb50fb9
1652 F20110109_AADCMG abraham_a_Page_81.txt
3433a32d7bc76c7375363b7053ad56fa
558f4bd29f6cb6ecf9f2971b7330733e9dc95088
F20110109_AADCLT abraham_a_Page_67.txt
e869e61f9fd32d69e8cc69a97f164d00
e09b40d6a9784527680792e169554b438dca86a1
1600 F20110109_AADCMH abraham_a_Page_82.txt
1f464634c84ca9ee7fc276569f2e9c49
7e721f80541f3c942bc56a68e6ae710857bc232a
3416 F20110109_AADCLU abraham_a_Page_69.txt
3dab7264afae62e155ff1bc69aabed5e
dc1d576cd58e2a0f2e18aeeb713fd9873e3b3f44
1144 F20110109_AADCMI abraham_a_Page_83.txt
f567fdddd35b43acc535f69fbb794bda
42ac35e67b5cf8bb9b6a4b2b5a1149ab6f587389
2229 F20110109_AADCLV abraham_a_Page_70.txt
ab8222bbbdb53baa8c659caa88b2fe66
adccbfbe34ebb30f4048d2d9b88e8d7c167d2671
500 F20110109_AADCMJ abraham_a_Page_84.txt
ba5d91d5ed9c23c2189cf067ce5a4ccd
70b65a00678d75becb6b22ca4bd56363a061ae72
1539 F20110109_AADCLW abraham_a_Page_71.txt
a2f80c2173cd4e3f97dc6d86d8669b20
d120d6279bda22e76b6d1c9276aabca5e3968fde
143 F20110109_AADCMK abraham_a_Page_85.txt
1752912ec669ef926faa0eec7056e7a8
5953801d4fda245474b8705cf15706684cb4a9b9
1546 F20110109_AADCLX abraham_a_Page_72.txt
eef3d79f30690d2baea819d01d018bf8
63949ab4d82318b2e1db5720087bfd7ea1e57ee6
362 F20110109_AADCML abraham_a_Page_86.txt
8e92ce1b74bdbb8692f95a3d93207925
684c85b3a66cbd40703e981baa7ea85b3439878c
2423 F20110109_AADCLY abraham_a_Page_73.txt
9f176359c21813504fec53e74d8abb4d
8acf78609ed27b23b5a1048d73252185581ac646
16648 F20110109_AADCNA abraham_a_Page_62thm.jpg
519175c3e85229540e00b3b90603b584
fd9c8213d0baea5f11242d72220e3c5fc7da0aa8
1732 F20110109_AADCMM abraham_a_Page_88.txt
1243749862e5c91e7feff46d51e111b0
6fb787291b2a53abde8a62d584b61f2844831b65
1810 F20110109_AADCLZ abraham_a_Page_74.txt
f471d839fccea37387b478a9ad725a8b
abef71c14388b1d7c8e0fcd6811ad32c2baa7aab
17282 F20110109_AADCNB abraham_a_Page_50.QC.jpg
ee2a18df538c4b8fc918181c104d22a8
c5cd60fc3b09d7589ad1a00d12b28e207997b1f0
517 F20110109_AADCMN abraham_a_Page_89.txt
f721f02adc24bd34e3082ba7b3b4ebe2
6e25d9ca9faa5b97e73f8cb734adf2fef28f0289
134777 F20110109_AADCNC UFE0001221_00001.xml
f893e5bc0336cf57e2a4775bc39494c1
46441f64111898da010dbc5f77045e6f6646ebe5
6958 F20110109_AADCMO abraham_a_Page_01thm.jpg
fa89160f202c41a832a98a1063d090f5
6f6ab81542cc39cc27e3b9569768ae8d1e272b07
5784 F20110109_AADCND abraham_a_Page_02.QC.jpg
cfa0f22e4b8ebf64cb6d102b245697d8
488cc879ee7fea14b66c46fef085494caaccef1d
1512814 F20110109_AADCMP abraham_a.pdf
45d60e6e1beef8b5f10eb32b799f0bcf
10d85b2d0fc0dc69e418e61d0c474e7d027d02b6
3175 F20110109_AADCNE abraham_a_Page_02thm.jpg
4b411ac9b9370a170d450e151ad7c40d
c87d218b57ed058d9728a079c42d7967ac66eac2
74596 F20110109_AADCMQ abraham_a_Page_15.QC.jpg
2d31e1c97cfe3f668e648080c2707d0f
4e9d9a6634dc5eabc02921aad91953a3e26fbedb
5141 F20110109_AADCNF abraham_a_Page_03.QC.jpg
0d5b2b54564751c9bac0a48ca332cd94
ba216ff15f9de29bc5d34b18f129f151fcc51d69
73574 F20110109_AADCMR abraham_a_Page_54.QC.jpg
8085b76fc469ee5fdc9a9e3e466f6ded
c3a79651d6ef14630e66a510c2957e869e6337c5
2960 F20110109_AADCNG abraham_a_Page_03thm.jpg
8fc0a922dbc9fa0c95a906590ad83e40
3ca2e40c911e6b98a7bed9edda5178c47c2c2283
13616 F20110109_AADCMS abraham_a_Page_64thm.jpg
16d3709514f846d74403f5cc1d73fe05
b5571187cd9e68b8973f8a3687d7f483771a70f1
44195 F20110109_AADCNH abraham_a_Page_04.QC.jpg
fcaebdcdde8c1ffce04ef7602f268440
531262d11654ac8a0c576e6d49939d3c3a76a4c3
19780 F20110109_AADCMT abraham_a_Page_12thm.jpg
88dfcd07d38f2588f1aeb9c01eae93bb
c7283c0a2baba3ea885806bff900fba97167b3bd
13907 F20110109_AADCNI abraham_a_Page_04thm.jpg
de2693c1186d521f644e7826c9cf6265
6a3d10becc0bd4b5e22e9c5066cc98823de08740
9208 F20110109_AADCMU abraham_a_Page_77thm.jpg
a622574609a38e9c75acf8e6077fa27e
e8967137f2ed115ffdb2979fc5b8427afc65b748
81647 F20110109_AADCNJ abraham_a_Page_05.QC.jpg
8be8cca597e1e24bd5a2c6cddecd8e1c
58d0469673456f0f3ef738eb4401c02b62434fc7
16009 F20110109_AADCMV abraham_a_Page_73thm.jpg
a2a450beb99d454daa27d0dc3595fe88
be5e89c97a969d9af1961c57600a5db42ffbe647
41914 F20110109_AADCNK abraham_a_Page_05thm.jpg
e84b8e2786e395ef1d1e0195a8398931
4b0a80b8644cc027ada69afc3a0db7b5aebccf47
77402 F20110109_AADCMW abraham_a_Page_87.QC.jpg
cfe6d8fb6c7660438ad61a6c11b29c56
a4d461bfb26dd449010fd51c60e0a21cb27ac1a3
71120 F20110109_AADCNL abraham_a_Page_06.QC.jpg
fa65f524f80b14b4c71e9faf4a7ca820
208c195fa442a6fb82a30517cdf985973faf089f
20278 F20110109_AADCMX abraham_a_Page_17thm.jpg
6543df220209ad146efd66371d582142
a62b16d8453d48e43bc560e9040b6fc4b699a462
23051 F20110109_AADCOA abraham_a_Page_15thm.jpg
99bdf030ba0d9e45bc98e71690960e3d
678a726f2c0cbdc41e32e9e19d6a587baaa49697
39352 F20110109_AADCNM abraham_a_Page_06thm.jpg
74439823af55975011eb06c5db8d4ba8
3dd77bfbf232c6ce14f04a16ea802503f3d1ce6e
18483 F20110109_AADCMY abraham_a_Page_01.QC.jpg
e870765668307d3a9512f8a18183a4b9
6368c8913ae5ad40aad74cf198049f5b941d9bce
45464 F20110109_AADCNN abraham_a_Page_07.QC.jpg
11b7bf0a5601a9aca7cd667ad2e96825
a8f3687f6381b223b5e3d6f57decd8de5dc04186
34366 F20110109_AADCMZ abraham_a_Page_09thm.jpg
9cacecd6dbd05a9927e648b8e9a7c583
a3e956c087c11fec3218d5ef9cde8bd73e94e944
72018 F20110109_AADCOB abraham_a_Page_16.QC.jpg
a0738674f33ac82f2ddf277c681651f8
183eb2bf4459eb684ebb22e9923f0a91d9a4dbf1
32362 F20110109_AADCNO abraham_a_Page_07thm.jpg
e62b50785bd49af8704e741c91b5f321
d73a4d9912e67e752f56e66849df58bd22ae92be
23088 F20110109_AADCOC abraham_a_Page_16thm.jpg
0046dddade20c05c1f1d57335f3c169b
2ed15706a73cce19929700828bd77980608fe20f
100416 F20110109_AADCNP abraham_a_Page_08.QC.jpg
6a46f0834689fe1045f260051772b5e5
23f4959f23c6ee23ceb3463f109b0b6df1ccd74a
60065 F20110109_AADCOD abraham_a_Page_17.QC.jpg
b27218b0caeb7de35894cc2f36ed5786
bd70a3c07a1c28c20abd98f110105c6f350e5cae
46548 F20110109_AADCNQ abraham_a_Page_08thm.jpg
f2e7e4bffaeeec2a3308f6327bcc85b8
19639d559b3e789d8d8573cec355ac1a2ded039f
80682 F20110109_AADCOE abraham_a_Page_18.QC.jpg
bfa5d8a102494c3fb946336665ae81d0
8998206d0bde752c49bcb96a6b2c043f867d791b
50316 F20110109_AADCNR abraham_a_Page_09.QC.jpg
943a84333941f019d28225ed97a73207
43ab78033f80058f84009e028a339de7453b3625
25938 F20110109_AADCOF abraham_a_Page_18thm.jpg
dd9a9cb1255d1790e7b0d94ae52673c9
7932bba8a2e252c41de6e8797693918b4e14da02
58731 F20110109_AADCNS abraham_a_Page_10.QC.jpg
5d59d479fbae0b6a6681e708d2d87dfc
da54ac9fd5ca8b82606ee4b1e6e80476938da521
18971 F20110109_AADCOG abraham_a_Page_19thm.jpg
0390cff1cd80b7ef20b7cdc6458be974
67903d986da2ecb0c4c09a953264ef1918c96ac3
18940 F20110109_AADCNT abraham_a_Page_10thm.jpg
ad89eddc0636f27a91a829e843be2199
99a308deeb9f2718772d01316ac24d33489e5217
62783 F20110109_AADCOH abraham_a_Page_20.QC.jpg
60919e57c350b4611d0ef4217aef9311
2b58e91691856af318f799a5fce632bef740c7dc
21206 F20110109_AADCNU abraham_a_Page_11.QC.jpg
d4beb783f250c62f211c9509e61b51c6
7f831e423b131638ce204c3b42d431ddd60f455e
21168 F20110109_AADCOI abraham_a_Page_20thm.jpg
e2dd68bc9cb4258f843faa16a83e7167
8e1654d1116f4406714437e6f6829bdd53c25f7b
61760 F20110109_AADCNV abraham_a_Page_12.QC.jpg
33f21f415d63f00b571ed1e42aec6ea7
b245a59b81bb1cbba8eeb4611455419f62517a0f
44319 F20110109_AADCOJ abraham_a_Page_21thm.jpg
fc5ac2f5b79a275f1918d72f016b66da
714b2de748aa011b09cbf145a10b6dde2b6ac043
72527 F20110109_AADCNW abraham_a_Page_13.QC.jpg
22bcd69d366c406d5cd5d1df0beebce1
7202c1554fbf3c0550a459689bece27acddba857
97816 F20110109_AADCOK abraham_a_Page_22.QC.jpg
2170c2f2dc58388d24c3a01714c329f2
20000d6c17730206ec88e133f6369a69f9d87c69
23319 F20110109_AADCNX abraham_a_Page_13thm.jpg
51e80934cc74e144e360ef3318611608
1a06900c379be2704bb35796344a514eb51ae177
47863 F20110109_AADCOL abraham_a_Page_22thm.jpg
22730cb9f6de833f642ddf6d88fee4b0
04dd82a6cb93b46df83157edc018db8c2d839676
74141 F20110109_AADCNY abraham_a_Page_14.QC.jpg
572b80841db89fe9d3b9d928aa8ae846
1e06e265e5fc90783f5750e20a2a3a276f642f07
21018 F20110109_AADCPA abraham_a_Page_30thm.jpg
e0f620be9f303985870b3afdfb67813f
40872f2ffd87932e5c1179f2c120c20f982386f9
87607 F20110109_AADCOM abraham_a_Page_23.QC.jpg
daa11aa555d08fd1ba286525872af5cf
a48273034397f38e5b3ed1e66b92646b06b0d728
22481 F20110109_AADCNZ abraham_a_Page_14thm.jpg
125ba0801f022793ad80227221d41b5e
ad53c8dfb9b5b44c1b4772a233bb16d5d4a201d8
63148 F20110109_AADCPB abraham_a_Page_31.QC.jpg
24b962a7ad9912ca8645f21b081c0530
fcfca34057b3ec460dd6a98bcaf81abab5024d38
44845 F20110109_AADCON abraham_a_Page_23thm.jpg
9a51638cddddfd2a9676c13c2d030c36
6e94e7a67048dc6b87be3c0791c88c78164fe701
21300 F20110109_AADCPC abraham_a_Page_31thm.jpg
0345db63d11c8b7d18c93987480f2900
2358d449806737f41372df62b9ca3a58dbbb08e5
72518 F20110109_AADCOO abraham_a_Page_24.QC.jpg
0d8983b2bd337a777de765d9799c65f5
8a458c4c6c0b0487f68bedb0d5474861b0271700
86182 F20110109_AADCPD abraham_a_Page_32.QC.jpg
734e3cb290c2b6f35d4801d132916ab2
1befd46ad264f27671d4a2ba6440c8467240e93f
22095 F20110109_AADCOP abraham_a_Page_24thm.jpg
9e7349223886546a0df9123d6ce6c568
c93c3d5987d034eecd82e1a9bd61dc994971dee4
45453 F20110109_AADCPE abraham_a_Page_32thm.jpg
9b96d4069ce99f8787b3dd9cdb65650c
72026718c41dbe9405c3489e6880a5d5f3f3a58a
96301 F20110109_AADCOQ abraham_a_Page_25.QC.jpg
291a430ed69ae470a418251f3722cee0
90bd4caa2a8b0c9de2dafc06087df489e64b9bfa
48106 F20110109_AADCPF abraham_a_Page_33.QC.jpg
779f2b2f86e76f2aa59d8220cdae4b87
a2b456acbfa67e20d886e47e7d360c4fb9a99206
48212 F20110109_AADCOR abraham_a_Page_25thm.jpg
32c58fa88d1f5b07d5566317da6e6d86
fbaf21634afcd47c952f9e69446480a289cba56b
89639 F20110109_AADCPG abraham_a_Page_34.QC.jpg
4f4cd1448466faa81e2f6822cb7261e1
dd26091c3026fe87e020bf4f6513474fa4bba8c2
81712 F20110109_AADCOS abraham_a_Page_26.QC.jpg
231f396ceba9012a6e2a46958d09bd55
bbe2165b21db61c967f26e3ff0a4a10799cf7cae
45364 F20110109_AADCPH abraham_a_Page_34thm.jpg
cf24a09bfff5348a174170e3cac9891d
af44de0448f15f5b6b2ecd1572212d7314297721
45217 F20110109_AADCOT abraham_a_Page_26thm.jpg
f779e580867a75cf53a07b7b4fafbbec
4388a0766c1a07341306698a50f0b227e9e3af0a
89921 F20110109_AADCPI abraham_a_Page_35.QC.jpg
0277bec3c96c53add5ea2b980f0ee04b
7f01eb3834841c5bf14b09be44ac3e9a7205f4c7
68162 F20110109_AADCOU abraham_a_Page_27.QC.jpg
7ed726ba212c060aa9cd5a53468dbfdb
0efc32cb4f49c4ee03383d8a6f2ad24796a4320b
47199 F20110109_AADCPJ abraham_a_Page_35thm.jpg
7d8677ce5d4566d12674f8ca469070b4
cc803e2c1b42ca8943c06169359f2fc6fced2054
21408 F20110109_AADCOV abraham_a_Page_27thm.jpg
f4c9e6a956f5f42a82a089d3d1be5fe8
c13f90f7cb4439419039c46bed75703295d46113
90127 F20110109_AADCPK abraham_a_Page_36.QC.jpg
898b9f2b3b081da3dd24d0c9f9ee9d89
d0ae4af223d48b91b80a4bd1a75df8e6262a95a7
69359 F20110109_AADCOW abraham_a_Page_28.QC.jpg
4d74e6d0fcad5fe2130cbee5781691f9
92ebd87c8004f1b95e59fb5321671ddcd0383091
45850 F20110109_AADCPL abraham_a_Page_36thm.jpg
5dccab98d60f24aab7e8b4c8e95e6d5f
32b878532640cda1020b236dfb0a9de25cc38818
62911 F20110109_AADCOX abraham_a_Page_29.QC.jpg
bb88f0eae16c5fe69a5f897c55bf477a
e8842e8102b97a59f5f41a5c8ed4f79020dc8248
44649 F20110109_AADCQA abraham_a_Page_45thm.jpg
1f190fd1eaf61b076dbd1da4757a966b
d664f0aa8f482929e18f7a0f3fb967c5e52e0cf6
78406 F20110109_AADCPM abraham_a_Page_37.QC.jpg
16f4dde4215c87c501acc946e8146f23
5e9cad05021b7d12711dae6f92fb905b5f8b0c88
19337 F20110109_AADCOY abraham_a_Page_29thm.jpg
928acf2bf307f8c41284c36581371ebd
aa3342f960311897fc2139b088d1c10cb949b542
76262 F20110109_AADCQB abraham_a_Page_46.QC.jpg
6cb3cee4091ea28df998bedc02f507f1
c2aab2a7f25316d96a908d5e9f74870520fdce01
75237 F20110109_AADCPN abraham_a_Page_38.QC.jpg
598c7455a03783a666651d876bb72db7
c57c65569c5606bac040ad50472f139ebf97b38e
61287 F20110109_AADCOZ abraham_a_Page_30.QC.jpg
f63d44d8bf8cf6df696c124919c52f69
61a73c616dc97ba358ba22052bded24214afc0dd
24711 F20110109_AADCQC abraham_a_Page_46thm.jpg
3f74b16f71e2db3f0363f6ce3272c920
d5bf69a3b635087d4be67b78e17f8008b3ed5b5d
24957 F20110109_AADCPO abraham_a_Page_38thm.jpg
0cd365f8e853100906217bbff08765e2
5d83006cd9d2939e04c5e00e047ee6b2c575d58f
65983 F20110109_AADCQD abraham_a_Page_47.QC.jpg
483414ea6c09575cb418e9959d85377f
f899fe098e3c9c8e47b5d06eb93ec71c95eff8ed
56531 F20110109_AADCPP abraham_a_Page_39.QC.jpg
986d34b8a885368d05bad2952ddc4aad
0376476767cea1dd29c40812c0c4a426e99d2371
20794 F20110109_AADCQE abraham_a_Page_47thm.jpg
318a7f111442feabe4f2354a63b5c76c
143ae0625ee24d653435ddb019adfb0c7eb7eb98
17994 F20110109_AADCPQ abraham_a_Page_39thm.jpg
e1f4f1124b9dd05adfae005daefc1e88
0930fc4192103211306b0dc2cc01dd224dad6203
8082 F20110109_AADCQF abraham_a_Page_48.QC.jpg
2b074a466a331af116272aa1873613dc
af352e7f64eb44345d7ed4d254ec524b53a0e17f
78370 F20110109_AADCPR abraham_a_Page_40.QC.jpg
652d20eb90ebf1cc74f183bc46f0c11b
24d6d851b29e93ebe6f5289e3ce66545e1cc1d40
3889 F20110109_AADCQG abraham_a_Page_48thm.jpg
8737f36a82192250a3fd53296d3954f4
d77e504600b4ae6f91e651b1b5e1ebb5a2373de4
41672 F20110109_AADCPS abraham_a_Page_40thm.jpg
094bcf019d6b8309fb3dc01754530afb
43c877dcf1cf805e97bc8584a78ce61d42c0171b
64431 F20110109_AADCQH abraham_a_Page_49.QC.jpg
ad9d2c2ed4b3d9e002d2c0a5b07fb1fb
965c3efe6899b859d80d8b25a671618fa579d301
61374 F20110109_AADCPT abraham_a_Page_41.QC.jpg
cb46ff9a44a31c69c93fb45f950d41f0
9ced69d51e14cc400e90cca48c5bb174e1385fbf
20889 F20110109_AADCQI abraham_a_Page_49thm.jpg
501793cd9cbe8f043dca7cf6cf3bed74
3aba56f2b7b17f8fac801c6d4b31812de81601e0
91731 F20110109_AADCPU abraham_a_Page_42.QC.jpg
9db276ec4a4466d787bcf0e6b9bf5f76
3e653b5ead1ef75b5ff8ba89631b4c5e0fe87126
6576 F20110109_AADCQJ abraham_a_Page_50thm.jpg
7622e774d547da0e63e57b21f7902f30
57a99034e494265b1e545beb4c13737da9682f2c
47797 F20110109_AADCPV abraham_a_Page_42thm.jpg
e9c9baaa876783b2720a52ca78a051ba
747cde5ce28879e87ea53b8f611fb693624e828e
53598 F20110109_AADCQK abraham_a_Page_51.QC.jpg
55c6d61fbdaf441f67ea86e111155435
c91ad05339e8f51441c33315d468a40d720c4925
46973 F20110109_AADCPW abraham_a_Page_43thm.jpg
9553957cdc8015b88be4a210a2b992c9
733cfd844180ab94a65c9f685b7b2da5454cb40b
15806 F20110109_AADCQL abraham_a_Page_51thm.jpg
3920476d1534443a95c0f1c5ef9c4706
10f54ff5f4ecaddcb52867597cafd53d2e36f2fe
87030 F20110109_AADCPX abraham_a_Page_44.QC.jpg
c1e3d372c974a5b247f030e61bc33653
96d3fa525dff22c55be20d0877aa27e77aa525ea
38224 F20110109_AADCRA abraham_a_Page_59thm.jpg
94f87884b0e27f9e383b753defddc549
739fcc1ae6a2bd9ce12fed24829b53d08d5ca508


Permanent Link: http://ufdc.ufl.edu/UFE0001221/00001

Material Information

Title: Bandwidth-aware video transmission with adaptive image scaling
Physical Description: Mixed Material
Creator: Abraham, Arun S. ( Author, Primary )
Publication Date: 2003
Copyright Date: 2003

Record Information

Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
System ID: UFE0001221:00001



BANDWIDTH-AWARE VIDEO TRANSMISSION
WITH ADAPTIVE IMAGE SCALING

By

ARUN S. ABRAHAM


A THESIS PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF ENGINEERING

UNIVERSITY OF FLORIDA


2003

Copyright 2003

by

Arun S. Abraham

Dedicated to my Father and Mother.

ACKNOWLEDGMENTS

I would like to express my sincere gratitude to my thesis advisor, Dr. Jonathan C.

L. Liu, for his guidance and encouragement. Without his confidence in me, I would not

have been able to do this thesis. I would like to thank Dr. Douglas D. Dankel II and Dr.

Richard E. Newman for serving on my thesis committee. I would also like to thank Dr.

Ju Wang of the Distributed Multimedia Group for his critical guidance and contributions

towards this research especially regarding the optimal rate-resizing factor section

(Chapter 3).

I would like to thank my parents for always encouraging me towards higher studies

and for all their love and support. Also, I would like to thank my dear wife for her love

and sacrifice while living a student life pursuing higher studies. I would also like to

thank my brother, the first one in the family to attain the master's degree, for his love and

encouragement. And above all, I would like to thank God for making everything possible

for me.

TABLE OF CONTENTS

ACKNOWLEDGMENTS

LIST OF TABLES

LIST OF FIGURES

ABSTRACT

CHAPTER

1 INTRODUCTION
  1.1 Novel Aspects
  1.2 Goals
  1.3 Overview

2 BACKGROUND
  2.1 MPEG-2 Overview
  2.2 Rate Distortion
  2.3 MPEG-2 Rate Control

3 APPROXIMATING OPTIMAL RATE RESIZING FACTOR FUNCTION

4 SYSTEM DESIGN
  4.1 Scaled-PSNR Based Approach
  4.2 Dynamic Adaptive Image Scaling Design

5 EXPERIMENTAL RESULTS
  5.1 Case 1
  5.2 Case 2
  5.3 Further Analysis

6 CONCLUSION
  6.1 Contributions
  6.2 Future Work

APPENDIX

A PERTINENT MPEG-2 ENCODER SOURCE CODE CHANGES
  mpeg2enc.c
  putseq.c
  putbits.c
  Sample Encoder Parameter (PAR) File

B PERTINENT MPEG-2 DECODER SOURCE CODE CHANGES
  mpeg2dec.c
  getbits.c

C MATLAB CODE FOR OPTIMAL RATE-RESIZING FACTOR APPROXIMATION

D CASE 2 TEST PICTURES

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

Table

1. Case 1 Adaptive Image Scaling I-Frame PSNR Data

2. Case 2 Adaptive Image Scaling I-Frame PSNR Data

3. Frame 45 Testing of Sequence mei20f.m2v

4. Frame 45 Testing of Sequence bbc3_120.m2v

LIST OF FIGURES

Figure

1. Programming Model

2. MPEG Frame References

3. Hierarchical Layers of an MPEG Video

4. Group of Pictures

5. R-D Curve Showing the Convex Hull of the Set of Operating Points

6. Bit Allocation of TM5 Rate Control. Bit Rate Is Set to 700 kbps. Video Resolution Is
   720x240. Image Complexity for I, P, B Frames Is Set to Xi=974 kbits, Xp=365 kbits,
   Xb=256 kbits, Respectively.

7. Average Quantization Scale for a GOP, Encoded at 700 kbps

8. GOP Target Buffer Overflow, Encoded at 700 kbps

9. Reducing the Number of MB Before Normal Rate Control and Encoding

10. Effects of Image Scaling

11. The Overall Distortion-Resizing Functions When Different Target Bit Rates Are Given.
    The Video Resolution Is 704x480 (DVD Quality).

12. System Design Components

13. Bandwidth-Adaptive Quantization

14. Scaled-down PSNR Values

15. Scaled-down Quantization Values

16. Adaptive Image Scaling Scenario

17. Case 1 Scaled-PSNR Comparison

18. Case 1 Scaled-Quantization Comparison

19. Case 1 GOP Target Bits Overflow Comparison

20. Case 2 Quantization Comparison

21. Comparison of Scaled-down PSNRs

22. Reference Picture for 3rd I-Frame (PSNR = 49.0, S=1,425,311)

23. 3rd I-Frame Using Original Encoded Picture (PSNR = 20.3, S=91,627)

24. 3rd I-Frame Using Adaptive Image Scaled Picture (PSNR=19.9, S=36,081)

Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Engineering

BANDWIDTH-AWARE VIDEO TRANSMISSION
WITH ADAPTIVE IMAGE SCALING
By

Arun S. Abraham

August 2003

Chair: Jonathan C. L. Liu
Major Department: Computer and Information Science and Engineering

It is essential that MPEG-2/4 video encoding makes the best use of the available

bandwidth and generates the compressed bit-stream with the highest quality possible. With

many types of client machines, it is preferred to have a single format/resolution stored in

the storage system and adaptively transmit the video content with proper resolution for

the targeted client and network.

Conventional approaches use large quantization scales to adapt to low bit rates.

However, the larger quantization scale degrades the image quality severely. We believe

that in addition to adaptive quantization for bit allocation, the image resolution should be

jointly adjustable. The key design factor should be to reduce the total number of

macroblocks per frame. Our goal is thus to investigate the effects of scaling down the video as

an alternative to adapt to bandwidth fluctuations (e.g., 3/4G wireless networks).

We envision that the proposed scheme in addition to the standard TM5 rate control

mechanism would be more effective during low-bandwidth situations. From the results,









we see that during low bandwidth situations, scaling down the image does provide

comparable image quality while greatly reducing bandwidth requirements (i.e., usually

more than 2X). The gain on the video quality can be significantly improved (i.e., up to 2

dB) by re-rendering the image into a smaller resolution with a relatively precise

quantization scale (i.e., usually 50% to 100% less than the conventional encoders).

CHAPTER 1
INTRODUCTION

Streaming video has recently become a more common practice due to rapidly

growing high-speed networks. To provide high quality video images to the end-user, the

underlying network needs to be able to maintain the necessary bandwidth during the

playback session. Ideally, QoS-guaranteed communication networks (e.g., ATM [12] and

IPv6 [3]) should be used for multimedia transport. However, in the near future, the

majority of streaming video applications will still run using best-effort networks (e.g.,

Internet), where the actual bandwidth is characterized by short-term fluctuations (and the

amount can be limited). This observation remains valid when applied to the next

generation wireless networks (3G/4G systems).

For example, cellular phone users are rarely guaranteed optimal bandwidth

availability. As demonstrated in Wang and Liu [17], the per-channel bandwidth for

mobile users is subjected to the cell load and the spreading factor used. Furthermore, the

probability of bandwidth changes for the mobile can be higher than 60% during the

handoff transition.

Thus, without a proper design, playback at the end user can often be delayed

and video quality can suffer varying degrees of jitter when transporting compressed video

through these networks [1]. This is the common experience when commercial video

players (e.g., RealPlayer and Microsoft's Media Player) are used with the current

Internet.









Thus, with the use of best-effort networks, it is essential that video encoding makes

the best use of the available bandwidth and generates the compressed bit-stream with

the highest quality possible. Bandwidth-adaptive video encoding technology, which

dynamically updates the video encoding parameters according to the current network

situation, is expected to deliver better video quality with limited bandwidth.

However, providing a single video quality for all kinds of client machines becomes

a challenging issue. Today's client machines can be conventional PCs, laptop computers,

PDAs, and Information Appliances (IAs). These machines/devices have different display

resolutions and capabilities. Storing different resolutions of the same video content

on the servers would waste storage capacity, since a single video title can

occupy several gigabytes of disk space.

Thus, it is preferred to have a single format/resolution stored in the storage system

and adaptively transmit the video content with proper resolution for the targeted client

and network. To perform this kind of adaptation, a video server needs to dynamically

adjust the video quality to accommodate the changes in the network conditions (e.g.,

sudden drop of bandwidth available).

There are various quality factors that can be adjusted to adapt to a sudden drop in

bandwidth: increasing the quantization factor, decreasing the color fidelity [13], and decreasing the video

dimensions. The problem in its general form can trace its origin back to the classic

rate-distortion theory [2,6].

In general, the rate-distortion theory provides fundamental guidelines to find an

optimal coding scheme for an information source, such that the average loss of data









fidelity can be minimized, where the average bits per symbol of the coding scheme is

given [2].

For a block transform based video coder (e.g., MPEG-2/4 [19]), the key problem is

how to distribute a limited bit budget to the image blocks (e.g., the 16x16 macroblocks in

MPEG-2) to achieve the optimal R-D curve. In a practical video encoder, we need to

decide the quantization scale and other factors for the proper transformed image blocks.

1.1 Novel Aspects

Some related work on this area can be found in the literature [2,4,6,7,13,15].

However, with the exception of Puri and Aravind [15] and Fox et al. [4], their common

focus is on adjusting the quantization parameter of the DCT coefficients based on a

particular rate-distortion model. Furthermore, their models are based on using a fixed

image resolution, which imposes unnecessary limitations on further exploiting the rate-

distortion theory and results in worse video quality when the bit budget is low. Puri and

Aravind [15] discuss various general ways an application can adapt to changes in

resources but do not mention resolution scaling. Fox et al. [4], on the other hand,

provide design principles in application-aware adaptations including resolution scaling as

one of the ways of dealing with network variability. However, Fox et al. [4] do not focus

on performing a thorough quantitative and theoretical analysis of image scaling.

Additionally, no other work that we know of provides experimental results of MPEG-2

encoder using dynamic Adaptive Image Scaling.

When the available data rate is low, a large quantization scale will be used in the

traditional rate control methods [19], to maintain the desired bit rate. The larger

quantization scale degrades the image quality severely and causes perceptual artifacts









(such as blockiness, ringing, and mosquito noise), since the detailed image structure cannot

survive a coarse quantization.

Another level of bit allocation occurs in the group of picture (GOP) level, where

bits are distributed among frames in each GOP. However, due to the imperfect rate

control, the P and B frames at the end of a GOP are often over-quantized to prevent the

buffer overflow. A large variation of image quality is thus observed, especially when the

given bit rate is low.

Therefore, a more precise and effective method for the bit allocation and encoding

scheme should be investigated to provide robust video quality when the network

condition is time-varying. The new method should be effective in improving video quality

both objectively and subjectively. It must be able to react quickly to the changes from

the network and the encoding complexity should be low to suit the real time encoding

requirements.

To this end, a network-probing module as proposed by Kamat et al. [8] and Noble

and Satyanarayanan [13] can be used in the video server to monitor the connection status.

Additional processing overhead is also required at the video server to perform the refined

bit allocation algorithm. Nevertheless, the computational overhead can be affordable

since the CPU resources at the video server are largely under-utilized [10]. When live

encoding is required, we assume that the video server is equipped with programmable

encoding hardware/software that is able to accept different encoding parameters.

The success of the proposed scheme depends on the answer to an immediate

question: how can we use a smaller quantization for each macroblock (i.e., more bits)

while maintaining the overall bit rate requirement for each frame, or each GOP?









The task sounds extremely difficult at first, since increasing bits per macroblock

and reducing the bits per frame can be opposite goals. However, with further study along

this direction, we have found that it is possible to meet both goals under one condition.

The key design factor should be to reduce the total number of macroblocks per frame.

Previous study in adaptive rate control usually assumes that this design factor is to be

fixed all the time.

But this assumption can be challenged by today's advanced video card technology.

Many modern video cards come with advanced image processing capability (e.g.,

dithering and Z-buffering). These capabilities help to deliver better video quality at the

client machines. Thus, though given a smaller resolution (i.e., a reduction of the

macroblocks), the final display at the client machines can be compatible with the original

quality.

It is based on this idea that we proposed a new video encoding scheme, where not

only adaptive quantization is used for bit allocation, but also the image resolution is

adjustable to control the bit rate in a higher level. The proposed scheme also addresses

the unfair bit allocation issue under small bit rate budgets.

It is observed that, with a small bit rate budget, a larger quantization scale Q is

often used, which makes it more erroneous to control the bit allocation. We have found

that the actual bits used for the last few frames usually generated a significant buffer

overflow. Our proposed scheme did eliminate the unfair bit allocation (thus the

corresponding quality degradation).

The choice thus becomes: when the bit budget runs low, we can either down-

sample the image to a smaller resolution and use a rather precise quantization parameter,










or directly adapt to a coarse quantization scale with the original image size. Since low-

speed networks are more common than high-speed networks these days, scaling-down is

perhaps more urgent than scaling-up.

1.2 Goals

Our study is thus to investigate the effects of scaling down the video as an

alternative to adapt to bandwidth fluctuations. We believe that gradually decreasing the

image size would help alleviate potential losses because of the low bandwidth.

Along this line, one major design goal was to retain the integrity of the baseline

functionality of the encoder and decoder even at the loss of performance. We wanted to

first understand the behavior of Adaptive Image Scaling and afterwards focus on

performance.

As shown in Figure 1, the image scaling adaptation was thus added to the encoder

as an outer loop outside the main encoding functionality. Therefore, the image scaling

adaptation loop determines which scale to be used based on the current bit rate. This type

of programming model is also used in Noble and Satyanarayanan [13] to obtain positive

results for making applications adaptive.

Figure 1. Programming Model. The Adaptive Image Scaling loop sits at the client/server
level, wrapped around the Baseline Core Encoding Functionality.
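
The following minimal C sketch illustrates this outer-loop programming model. The
functions probe_current_bitrate() and choose_resizing_factor() are hypothetical
stand-ins for the network-probing module and the factor selection developed in later
chapters; they are not code from the thesis implementation.

    /* Minimal sketch (not the thesis implementation) of the outer image-scaling
     * adaptation loop wrapped around an unchanged core encoding step.          */
    #include <stdio.h>

    /* Hypothetical stand-in for the network-probing module: here it simply
     * simulates a bandwidth drop halfway through the sequence.                 */
    static double probe_current_bitrate(int gop_index)
    {
        return (gop_index < 5) ? 4000000.0 : 700000.0;   /* bits per second */
    }

    /* Hypothetical mapping from the probed bit rate to a resizing factor f in
     * (0,1]; the real mapping is derived in Chapters 3 and 4.                  */
    static double choose_resizing_factor(double bitrate)
    {
        if (bitrate >= 3000000.0) return 1.0;    /* enough bandwidth: full size */
        if (bitrate >= 1500000.0) return 0.75;
        return 0.5;                              /* low bandwidth: scale down   */
    }

    int main(void)
    {
        for (int g = 0; g < 10; g++) {
            double bitrate = probe_current_bitrate(g);
            double f = choose_resizing_factor(bitrate);
            /* resize_frames(g, f); encode_gop(g, f);  -- baseline core encoding */
            printf("GOP %d: bitrate=%.0f bps, resizing factor f=%.2f\n", g, bitrate, f);
        }
        return 0;
    }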

Of course, our study also stresses the importance of providing a user-unaware

adaptation as much as possible (i.e., the size variations should not be noticed by the user).









This is accomplished by specifying the preferred display resolution in the encoded video

stream, a feature defined by MPEG-2. At the decoder side, the decompressed video

images will be re-rendered to the preferred display size, instead of the actual encoded

resolution. For the sake of simplicity, the display size at the client remains unchanged for

the same video sequence to provide a consistent viewing effect.
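
A small sketch of how the coded size and the signalled display size can be kept
separate is shown below. The field names echo the MPEG-2 sequence display extension
(display_horizontal_size / display_vertical_size), but the struct and helper function
are illustrative assumptions, not the reference encoder's actual data structures.

    /* Sketch: keep the preferred display size fixed while the coded size shrinks. */
    #include <stdio.h>

    struct seq_params {
        int horizontal_size;          /* coded (encoded) resolution            */
        int vertical_size;
        int display_horizontal_size;  /* preferred display size signalled to   */
        int display_vertical_size;    /* the decoder for re-rendering          */
    };

    static void apply_resizing(struct seq_params *s, int orig_w, int orig_h, double f)
    {
        s->horizontal_size = (int)(orig_w * f);   /* what is actually encoded   */
        s->vertical_size   = (int)(orig_h * f);
        s->display_horizontal_size = orig_w;      /* what the client re-renders */
        s->display_vertical_size   = orig_h;      /* to, so the size change is  */
    }                                             /* not noticed by the user    */

    int main(void)
    {
        struct seq_params s;
        apply_resizing(&s, 720, 480, 0.5);
        printf("coded %dx%d, displayed %dx%d\n",
               s.horizontal_size, s.vertical_size,
               s.display_horizontal_size, s.display_vertical_size);
        return 0;
    }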

Therefore, a big challenge is: for a given bit rate, how can we determine the best

image scale factor and quantization parameters for encoding the video at the best quality

level? In this thesis, our focus is on the determination of the scale factor and frame-level

bit allocation. For the macroblock level bit allocation, we assume that adaptive

quantization is used as proposed by Gonzales and Viscito [5] and ISO/IEC 13818-2 [19].

Theoretically, there exists a mapping between the current bit rate and the optimal

image-resizing factor that leads to the minimal distortion. The optimal mapping could be

obtained empirically by encoding with all possible resizing factors and comparing the

objective video quality. However, such pre-encoding processing lacks practical value

due to the tremendous computation involved and is impossible in the case of live video.
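
To make that cost concrete, the empirical approach amounts to a loop like the sketch
below, with one full encode-decode-measure pass per candidate resizing factor. The
function encode_gop_at() is a hypothetical stand-in (here a toy quality model) for an
actual encoding run, and the candidate factors are illustrative only.

    /* Illustration of the (impractical) empirical approach: encode the same GOP
     * at every candidate resizing factor and keep the one with the best PSNR.   */
    #include <stdio.h>

    static double encode_gop_at(double f)   /* stand-in: returns PSNR in dB */
    {
        /* Toy quality model just so the sketch runs; a real value would come
         * from actually encoding and decoding the GOP at factor f.           */
        return 30.0 + 10.0 * f - 25.0 * (f - 0.6) * (f - 0.6);
    }

    int main(void)
    {
        double best_f = 1.0, best_psnr = -1.0;
        for (double f = 0.1; f <= 1.0001; f += 0.1) {   /* one encode per factor */
            double psnr = encode_gop_at(f);
            if (psnr > best_psnr) { best_psnr = psnr; best_f = f; }
        }
        printf("best factor %.1f with PSNR %.2f dB\n", best_f, best_psnr);
        return 0;
    }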

We investigated two approaches: the first one assumes a mathematic model

obtained from a variation of classic rate-distortion function, and the second method uses

an implicit rate control based on PSNR feedback. We used the mathematical model as an

approximation for modeling our system. Then the performance results from the PSNR-

based approach are demonstrated for supporting the overall design goal.

We have finished the implementation by introducing an image re-sizing module to

generate the image data on the fly. The resizing factor is calculated every GOP, based on

the current network bandwidth and the image complexity. With the continuous image









resizing, a more accurate rate control can be implemented. Another advantage of the

online image re-sampling is that the video server now only stores the original video

images.

From our experimental results, we have observed that video quality can be

significantly improved (i.e., up to 2 dB) by re-rendering the image into a smaller

resolution with a relatively precise quantization scale (i.e., usually 50% to 100% less than

the conventional encoders).

Specifically, the experimental results show a promising trend for low-bit-rate video

transmission. Using a scaled-down resolution for encoding provides comparable picture

quality. Note that the conventional encoders use a drastically higher quantization scale

and utilize more than double the bandwidth required for approximately the same picture

quality. We thus believe the proposed scheme is suitable for the emerging 3G wireless

networks, which are targeted for multimedia communications using a limited bandwidth.

1.3 Overview

The remainder of this thesis is organized as follows: Chapters 2 and 3 introduce

the theoretical background of rate control theory along with the expected results on the

reduction factor. Chapter 4 explains the basic software design of the proposed encoder

system. Chapter 5 presents detailed experimental results. Chapter 6 concludes this thesis

by discussing the unique contributions and potential future work for this research.














CHAPTER 2
BACKGROUND

This chapter provides essential background information that will help understand

this research. The organization and syntax of an MPEG video will be provided. Also,

theoretical information of what can be done as the rate deteriorates will be discussed.

Finally, this chapter ends by discussing what the baseline encoder does when the bit rate

decreases.

2.1 MPEG-2 Overview

MPEG (Moving Picture Experts Group) is a committee that produces standards to

be used in the encoding and decoding of audio-visual information (e.g., movies and

music). MPEG works with the International Organization for Standardization (ISO) and the

International Electrotechnical Commission (IEC). MPEG-1, MPEG-2, and MPEG-4

are widely used standards generated from MPEG. MPEG is different from JPEG in that

MPEG is primarily for moving pictures while JPEG is only for still pictures.

Moving pictures are generated by decoding and displaying a sequence of still pictures,

usually at a rate of 30 frames per second. To provide optimum compression and user-perceived

quality, MPEG capitalizes on the redundancy of video both from subsequent frames (i.e.,

temporal) and from the neighboring pixels of each frame (i.e., spatial). More specifically,

MPEG exploits temporal redundancy by the prediction of motion from one frame to the

next, while spatial redundancy is exploited by the use of Discrete Cosine Transform

(DCT) for frame compression. To exploit these redundancies, MPEG strongly relies on

syntax.









The three main types of frames in MPEG are (listed in priority): P, B, and I. The I-

frame is used as the reference frame and has no dependency on any other frames. The I-

frame is intra-coded (without dependency on other frames), while the P- and B-frames

are inter-coded (depend on other frames). P-frames (i.e., predicted frames) use only

forward references while B-frames (i.e., bidirectional frames) use both forward and

backward references. Figure 2 depicts the relation between the different frame types.


Figure 2. MPEG Frame References [11]. A repeating pattern of I-, P-, and B-frames, with
forward prediction to P-frames and bidirectional interpolation for B-frames.

An MPEG video has six hierarchical layers (shown in Figure 3). The highest layer

is the Sequence layer, which provides a complete video. The next layer is the Group of


Figure 3. Hierarchical Layers of an MPEG Video [11]. Video Sequence, Group of
Pictures, Picture, Slice, Macroblock, and Block (pixels).









Pictures (GOP) layer (shown in Figure 4), which comprises a full set of the different

frame types consisting of only one I-frame and multiple P- and B-frames. The next layer

is the Picture layer which is a single frame consisting of multiple slices (the Slice Layer).

Each slice can contain multiple macroblocks (the Macroblock Layer), which contain

information about motion and transformation. Each macroblock consists of blocks (the

Block Layer) which are 8x8 values encoded using DCT.

Figure 4. Group of Pictures [2]. A 15-frame GOP (frames 0-14) containing a single
I-frame followed by a repeating pattern of B- and P-frames.
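
For reference, the frame-type pattern of such a GOP can be reproduced with a few lines
of C. The values N = 15 and M = 3 (the spacing between anchor frames) are assumptions
matching Figure 4, not values fixed by the standard, and real encoders also reorder
frames for transmission.

    /* Sketch of how frame types fall out of a typical GOP structure with
     * N = 15 pictures per GOP and M = 3, as in Figure 4.                   */
    #include <stdio.h>

    static char frame_type(int index_in_gop, int m)
    {
        if (index_in_gop == 0)      return 'I';   /* one I-frame per GOP        */
        if (index_in_gop % m == 0)  return 'P';   /* forward-predicted anchors  */
        return 'B';                               /* bidirectional frames       */
    }

    int main(void)
    {
        const int n = 15, m = 3;
        for (int i = 0; i < n; i++)
            putchar(frame_type(i, m));
        putchar('\n');                            /* prints IBBPBBPBBPBBPBB     */
        return 0;
    }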

2.2 Rate Distortion

Video is encoded using various parameters such as the dimensions of the frame, the

color format, and the available bit rate. If the available bit rate is high, then the encoder

can use low compression parameters such as a small quantization scale to provide the

best quality video at a constant bit rate (CBR). Using a small quantization scale, the

frames are encoded precisely and result in larger-sized frames. When the available bit

rate varies (VBR), the encoder must decrease the amount of generated bits by using a

high quantization scale to meet the lower bit budgets. By adapting to the network

variability and providing VBR encoding, the encoder can provide a constant quality

video [14]. If the encoder were bandwidth-unaware, it would generate wasteful

information during low-bandwidth situations [9].









Keeping all the other encoding parameters constant and gradually lowering the bit

rate results in the gradual degradation (i.e., distortion) of the video. Minimizing the

distortion while meeting the budget implied by the bit rate is the general rate-distortion

problem. Choosing the optimal encoding parameters that will provide the best possible

video quality under the circumstances can minimize distortion. An instance of encoding

parameters can be referred to as an operating point [16]. An encoder could try different

operating points to see which one provides the best video quality. The convex hull of all

the set of operating points provides the best rate distortion performance [16]. The goal of

rate-distortion optimization is to find the operating point that will be as close to the curve

generated by the convex hull as shown in Figure 5. For a given rate (R1), the distortion

will vary based on the different encoding parameters used.


Figure 5. R-D Curve Showing the Convex Hull of the Set of Operating Points [16].
Distortion (D) is plotted against rate (R); the operating points lie above the convex hull.

Given the real-time constraints of encoding, the optimal operating point cannot be

computed on the fly; instead, rate control algorithms can be employed to provide the

best operating point using general guidelines to meet the target bit budget.
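
The selection problem can be pictured with the small C sketch below, which picks the
lowest-distortion candidate that fits the rate budget R1. The candidate operating points
and their rate/distortion values are invented numbers purely for illustration.

    /* Sketch: among candidate operating points (each a parameter set with a
     * measured rate and distortion), pick the one with the smallest distortion
     * that still fits the rate budget.                                        */
    #include <stdio.h>

    struct op_point { double rate_kbps; double distortion; const char *params; };

    int main(void)
    {
        struct op_point pts[] = {
            { 500.0, 9.1, "Q=62, full size" },
            { 680.0, 7.4, "Q=40, full size" },
            { 650.0, 5.8, "Q=18, 3/4 size"  },
            { 900.0, 4.2, "Q=12, full size" },   /* over budget */
        };
        double budget = 700.0;                   /* R1 in kbps  */
        int best = -1;
        for (int i = 0; i < 4; i++) {
            if (pts[i].rate_kbps > budget) continue;
            if (best < 0 || pts[i].distortion < pts[best].distortion) best = i;
        }
        if (best >= 0)
            printf("chosen operating point: %s (D=%.1f at %.0f kbps)\n",
                   pts[best].params, pts[best].distortion, pts[best].rate_kbps);
        return 0;
    }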









2.3 MPEG-2 Rate Control

The bit allocation in MPEG-2 consists of two steps: (1) target bit estimation for

each frame in the GOP and (2) determination of reference quantization scales in the

macroblock level for each frame. In the GOP level, bit assignment is based on the frame

type. For instance, an I-frame usually has the highest weight and gets the most share,

while P-frames have the next priority, and B-frames are assigned the smallest portion of

the total bits.
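
As a concrete illustration of step (1), the sketch below evaluates GOP-level frame
targets in the spirit of TM5, using the same complexity values quoted with Figure 6.
The formulas follow the commonly cited TM5 target-bit equations with the constants
Kp = 1.0 and Kb = 1.4, but the block should be read as an illustration rather than the
thesis encoder's actual code.

    /* Sketch of TM5-style GOP-level target bit estimation.  In TM5, R, Np and
     * Nb are updated after each encoded picture and the targets recomputed;
     * this shows a single evaluation at the start of a GOP.                   */
    #include <stdio.h>

    int main(void)
    {
        double bit_rate = 700000.0, picture_rate = 30.0;
        double Xi = 974000.0, Xp = 365000.0, Xb = 256000.0;  /* complexities    */
        double Kp = 1.0, Kb = 1.4;                           /* TM5 constants   */
        int Np = 4, Nb = 10;                    /* remaining P and B frames     */
        double R = bit_rate * 15.0 / picture_rate;  /* bit budget for this GOP  */
        double floor_bits = bit_rate / (8.0 * picture_rate);

        double Ti = R / (1.0 + Np * Xp / (Xi * Kp) + Nb * Xb / (Xi * Kb));
        double Tp = R / (Np + Nb * Kp * Xb / (Kb * Xp));
        double Tb = R / (Nb + Np * Kb * Xp / (Kp * Xb));
        if (Ti < floor_bits) Ti = floor_bits;
        if (Tp < floor_bits) Tp = floor_bits;
        if (Tb < floor_bits) Tb = floor_bits;

        printf("targets: I=%.0f  P=%.0f  B=%.0f bits\n", Ti, Tp, Tb);
        return 0;
    }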

In the frame level, the TM5 rate control algorithm in MPEG-2 controls the output

data rate by adaptive quantization. The DCT coefficients, which consist of an 8x8 block,

are first quantized by a fixed quantization matrix, and then by an adaptive quantization

scale Q. The elements in the fixed quantization matrix are decided according to the

sensitivity of the human visual system to the different AC coefficients and are obtained

empirically based on many experiments. The quantization scale Q serves as an overall

quantization precision.
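
A rough sketch of this two-stage quantization is given below, with a made-up matrix and
toy coefficients. The exact rounding rules and scaling constants of the real MPEG-2
quantizer are omitted, so the block illustrates the idea rather than the standard's
precise arithmetic.

    /* Conceptual sketch: each DCT coefficient is divided by its entry in the
     * fixed quantization matrix and then by the adaptive scale Q.            */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double coeff[8][8];        /* DCT coefficients of one block    */
        int    level[8][8];        /* quantized output                 */
        int    W[8][8];            /* fixed (perceptual) quant matrix  */
        double Q = 31.0;           /* adaptive quantization scale      */

        for (int v = 0; v < 8; v++)
            for (int u = 0; u < 8; u++) {
                coeff[v][u] = 100.0 / (1 + u + v);      /* toy input   */
                W[v][u] = 16 + 2 * (u + v);             /* toy matrix  */
                level[v][u] = (int)round(16.0 * coeff[v][u] / (W[v][u] * Q));
            }

        printf("DC level with Q=%.0f: %d\n", Q, level[0][0]);
        return 0;
    }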

This rate control algorithm performs well when the bit rate budget is high;

however, it might result in very unfair bit allocation under small bit rate budgets. With a

small bit rate budget, a larger quantization scale Q is often used, which makes it more

erroneous to control the bit allocation. An often-observed direct result is that the actual

bits used for the first few frames are greater than the target bits calculated in the GOP

level rate control. This further exacerbates the bit shortage for the rest of the frames in the

same GOP and either causes buffer overflowing (generated bit-stream consumes more

bandwidth than allowed) or sudden decrease of the picture quality. Figure 6

demonstrates such performance degradation for an MPEG-2 encoder using a low bit rate.










The performance degradation of the conventional rate control at low bit rates is not

so surprising, though, since the accuracy of the rate-distortion theory will only be

guaranteed when the bit budget is sufficient and the quantization scale is small. At

low bit rates, the quantization scale is large, which consequently makes the low-order (or

even linear) approximation of the rate-distortion function more erroneous.


Figure 6. Bit Allocation of TM5 Rate Control (Target vs. Actual Bit Count per Frame
Number). Bit Rate Is Set to 700 kbps. Video Resolution Is 720x240. Image Complexity
for I, P, B Frames Is Set to Xi=974 kbits, Xp=365 kbits, Xb=256 kbits, Respectively.

The average quantization scale and the amount of buffer overflow are shown in

Figure 7 and Figure 8 for the above encoding experiments. In Figure 7, it can be seen

that the quantization scale increases as the encoding progresses. Towards the end of the

GOP, the quantization scalar (Q) used for the B-frames increases to 112, which is the

pre-defined maximum allowed value in the encoder. Any Q value higher than 112 is

cut off to maintain a minimum level of viewing quality. This, however, causes severe

buffer overflowing for these frames.

The buffer overflow measurement provides a good indication for the reliability and

effectiveness of a rate control algorithm. The overflow equals zero when the buffer is not

full and equals the accumulated actual bit-count that exceeds the transmission buffer

otherwise. Figure 8 shows the measured buffer overflow for the low bit rate encoding










mentioned above. The size of the (transmission) buffer is set to the target bit budget for a

group of pictures, which is 303333 bits in this test. As the frames in the GOP are encoded,

the encoded bits are temporarily stored in the transmission buffer, which will be sent to

the network when full.
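
The overflow measurement itself is simple to state in code, as in the sketch below. Only
the 303333-bit buffer size comes from the text; the per-frame bit counts are invented so
that the two overflow values quoted in the following paragraph (25888 bits at the fifth
frame and 138411 bits by the end of the GOP) are reproduced.

    /* Sketch of the overflow measurement used in Figure 8: accumulate the
     * actual bits produced per frame and report how far they exceed the
     * transmission buffer (set to the GOP bit budget).                     */
    #include <stdio.h>

    int main(void)
    {
        long actual_bits[15] = { 95000, 52000, 48000, 65000, 69221,  /* hypothetical */
                                 12000, 11000, 11500, 11200, 11000,
                                 11300, 11023, 11200, 11100, 11200 };
        long buffer_size = 303333;        /* GOP target bit budget            */
        long accumulated = 0;

        for (int i = 0; i < 15; i++) {
            accumulated += actual_bits[i];
            long overflow = (accumulated > buffer_size) ? accumulated - buffer_size : 0;
            printf("frame %2d: overflow = %ld bits\n", i, overflow);
        }
        return 0;
    }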

Also in Figure 8, we observe that the buffer does not overflow for the first four

frames. At the fifth frame (a B-frame), the actual accumulated encoding bits exceed the

transmission buffer by 25888 bits. The buffer overflow keeps increasing for the rest of

the frames and reaches the highest level of 138411 bits by the end of GOP.



Figure 7. Average Quantization Scale for a GOP (per Frame Number), Encoded at 700 kbps

Figure 8. GOP Target Buffer Overflow (in bits, per Frame Number), Encoded at 700 kbps

We believe that a new way to control the encoding bit rate should be considered

besides tuning the quantization scalar. A direct but effective method is to reduce the

number of macroblocks (MB) to be encoded. With a smaller number of macroblocks, we









will have a higher bits-per-symbol value and thus a smaller quantization scalar. Figure 9 illustrates the simplified new video encoding scheme.

[Figure 9 diagram: proposed encoder pipeline, in which the number of MB is reduced before normal rate control and encoding.]



Figure 9. Reducing the Number of MB Before Normal Rate Control and Encoding.

The rationale behind this new scheme is that, by reducing the number of macroblocks, the encoder should be able to reduce the information loss due to heavy quantization, which is otherwise inevitable. Nevertheless, we do suffer information loss during the first step, when the number of macroblocks is reduced. There is therefore a tradeoff between these two sources of information loss. We provide the theoretical analysis of this decision process and the empirical method in the rest of this thesis.

Another aspect of reducing the number of MB is how we should select the original

image data in the scaled-down version. For example, we can cut off the image data on

the edges, while only preserving the image in the center; or one may sort the MBs in order of importance (e.g., giving higher importance to MBs with high variance) and skip the MBs at the end of the list. A full discussion of the strategies in this direction is beyond the scope of this thesis.

In the context of this thesis, we assume that the reduction of the macroblock

number is achieved by an image resizing process, which converts the original image to a

new image by proportionally scaling down in both dimensions. The resizing process is

described by the following equations:









$$I'(u',v') = \frac{1}{|A_{u',v'}|}\sum_{p \in A_{u',v'}} I(p), \quad \text{where}$$
$$A_{u',v'} = \left\{(x,y) \in \mathbb{N}^2 : (x-u)^2 + (y-v)^2 \le d_f^2\right\},$$
$$u' = f \cdot u, \quad v' = f \cdot v, \quad \text{and } f \in (0, 1].$$

As shown in Figure 10, $I'(u', v')$ is the pixel value of the resized image at coordinate $(u', v')$, $A_{u',v'}$ represents the corresponding pixels within a radius of $d_f$ in the original image centered at $(u, v)$, and $f$ is the resizing factor. The pixel value in the newly scaled image is the average of the pixels in the neighboring area of the original image defined by the resizing factor $f$ and the coordinate $(u, v)$.
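As a concrete illustration, a minimal sketch of such an averaging-based scale-down for a single 8-bit plane follows; the function name and the box-shaped neighborhood are our own simplifications of the circular neighborhood A_{u',v'} defined above.

/* Sketch: proportional scale-down of one 8-bit plane by averaging the source
 * pixels that map to each destination pixel (a box filter stand-in for the
 * circular neighborhood). Here new_w/new_h are the already-scaled dimensions. */
void resize_plane(const unsigned char *src, int w, int h,
                  unsigned char *dst, int new_w, int new_h)
{
    int u, v, x, y;
    for (v = 0; v < new_h; v++) {
        for (u = 0; u < new_w; u++) {
            /* source block covered by destination pixel (u, v) */
            int x0 = u * w / new_w, x1 = (u + 1) * w / new_w;
            int y0 = v * h / new_h, y1 = (v + 1) * h / new_h;
            long sum = 0, cnt = 0;
            for (y = y0; y < y1; y++)
                for (x = x0; x < x1; x++) {
                    sum += src[y * w + x];
                    cnt++;
                }
            dst[v * new_w + u] = (unsigned char)(cnt ? sum / cnt : 0);
        }
    }
}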


[Figure 10 diagram: the original image of width W is mapped to the resized image of width W'; the neighborhood around (u, v) is averaged into the pixel at (u', v').]

Figure 10. Effects of Image Scaling

The rate distortion theory implies that there is an optimum resizing factor, which

will provide the minimum distortion. The next chapter takes an analytical approach into

finding the optimal resizing factor.















CHAPTER 3
APPROXIMATING OPTIMAL RATE RESIZING FACTOR FUNCTION

In this chapter, we provide an analytical model from which the optimal image size can be derived. The distortion measurements in video encoding are the Mean Square Error

(MSE) and the Peak Signal to Noise Ratio (PSNR). MSE is the average squared error

between the compressed and the original image, whereas PSNR is a measure of the peak

error in dB. In MPEG-2 encoding, each color component (Y, U, V) of the raw image is represented by an 8-bit unsigned integer, so the peak signal value is always less than or equal to 255. The PSNR and MSE are expressed as follows:

$$\mathrm{PSNR} = 10\,\log_{10}\!\left(\frac{255^2}{\mathrm{MSE}}\right) \quad \text{and}$$

$$\mathrm{MSE} = \frac{1}{W \cdot H}\sum_{x=1}^{W}\sum_{y=1}^{H}\bigl(I(x,y) - \hat{I}(x,y)\bigr)^2.$$

The width and height of the image are represented by W and H, respectively; $I(x, y)$ represents the original image and $\hat{I}(x, y)$ the reconstructed image after encoding and decoding. However, the PSNR and MSE distortion measurements are difficult to use in an analytical model. In the following discussion, we use the absolute difference as the measure of distortion.
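Although the analysis below uses the absolute difference, PSNR remains the reported metric in the experiments; for reference, a small sketch of the PSNR/MSE computation on one 8-bit plane is shown here (the function names are ours).

/* Sketch: MSE and PSNR between an original and a reconstructed 8-bit plane. */
#include <math.h>

double plane_mse(const unsigned char *orig, const unsigned char *recon, int w, int h)
{
    long long acc = 0;
    int i, n = w * h;
    for (i = 0; i < n; i++) {
        int d = (int)orig[i] - (int)recon[i];
        acc += (long long)d * d;
    }
    return (double)acc / n;
}

double plane_psnr(double mse)
{
    if (mse <= 0.0)
        return 99.0;                        /* identical planes: report a cap */
    return 10.0 * log10(255.0 * 255.0 / mse);
}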

We assume that the pixels of the video source are zero-mean i.i.d. (independent and identically distributed), with the signal variance of each pixel equal to $\sigma_x^2$. Results in Hang and Chen [6] indicate that the bits per symbol ($b$) versus the distortion caused by the normal encoding process ($D_E$) can be approximated by









$$b(D_E) = \frac{1}{\alpha \log e}\,\log\!\left(\frac{\epsilon^2 \sigma_x^2}{D_E}\right). \tag{1}$$

In (1), $\alpha$ is a constant (1.386) and $\epsilon^2$ is source dependent (usually 1.2 for a Laplacian distribution). Rearranging (1), we have

$$D_E(b) = \epsilon^2 e^{-\alpha b}\,\sigma_x^2. \tag{2}$$

In the case of MPEG-2 encoding, our focus is on the I-frame, which is completely

intra-coded. For P and B frames, the rate distortion model is not applicable due to the

coding gain from motion estimation. Now if we let B be the available bit count for a

GOP, r be the percentage of bits assigned to the I-frame, and f be the image scale

factor, then we have the following relation:

$$rB = f\,W\,H\,b. \tag{3}$$

Equation (3) reflects a simple fact: the total number of bits used to encode an I-frame ($rB$) equals the number of pixels (which is $fWH$ after resizing) times the average bits per symbol, $b$. Substituting (3) into (2), we have

$$D_E = \epsilon^2 e^{-\frac{\alpha r B}{f W H}}\,\sigma_x^2. \tag{4}$$

From (4), we can see that the scale factor (f) is inversely proportional to the available bit

count (B).

Equation (4) only represents the distortion between the pre-encoded image and the

reconstructed image. To describe the complete distortion of the resizing-then-encoding

process, we should quantify the information loss during the resizing process, $D_R$. With the assumption of an i.i.d. distribution for the pixels, we define the image information (complexity) via the cumulative variance of all image pixels. The image complexity before and after resizing is $H(I) = H W \sigma_x$ and $H(I') = f H W \sigma_x$, respectively.









The loss of information from the resizing process can then be expressed as

$$D_R = H(I) - H(I') = (1-f)\,H\,W\,\sigma_x.$$

Now let us define the total distortion $D_T$ as the sum of the distortion caused by normal encoding ($D_E$) and the distortion caused by resizing ($D_R$):

$$D_T = D_R + D_E = \epsilon^2 e^{-\frac{\alpha r B}{f W H}}\,\sigma_x^2 + (1-f)\,H\,W\,\sigma_x. \tag{5}$$

The optimal resizing factor must correspond to the smallest total distortion ($D_T$). Taking the first-order derivative of $D_T$ in (5) with respect to $f$ and setting it to zero, we have

$$\frac{\alpha r B}{f^2 W H}\,\epsilon^2 e^{-\frac{\alpha r B}{f W H}}\,\sigma_x^2 \;-\; H\,W\,\sigma_x \;=\; 0. \tag{6}$$

The solutions of (6) correspond to local minima or maxima of the distortion. To find the optimal resizing factor, these local extrema are substituted into (5), along with the end point f = 1.
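The search can also be carried out numerically; the sketch below simply tabulates (5) over a grid of f values, much as was done to produce Figure 11. The I-frame bit share r, the per-pixel deviation sigma, and the GOP bit budget B are placeholders of our own choosing, not values taken from the thesis.

/* Sketch: tabulate the total distortion D_T of equation (5) over the resizing
 * factor f; the smallest value (compared against the end point f = 1) gives
 * the optimal factor. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double alpha = 1.386, eps2 = 1.2;   /* constants quoted with eq. (1) */
    double W = 704.0, H = 480.0;              /* DVD-quality resolution */
    double r = 0.4, sigma = 30.0;             /* assumed, not from the thesis */
    double B = 1200000.0;                     /* assumed GOP bit budget (bits) */
    double f;

    for (f = 0.05; f <= 1.0 + 1e-9; f += 0.05) {
        double d_enc = eps2 * exp(-alpha * r * B / (f * W * H)) * sigma * sigma;
        double d_res = (1.0 - f) * H * W * sigma;
        printf("f = %.2f   D_T = %g\n", f, d_enc + d_res);
    }
    return 0;
}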

Figure 11 shows the relationship between the distortion and resizing factor based

on (5). The image size is fixed to 704x480, and the curves for different target bit rates

are plotted. When the target bit rate is set to 2400kbps, we observed that the optimal

resizing factor is about 0.64. As the target bit rate decreases, the best resizing factor

moves to the left (i.e., becomes smaller). For instance, the optimal resizing factor at 1800

kbps bit rate is 0.55. This behavior matches well with our prediction. However, the

resizing factor corresponding to the low-end bit rate (e.g., 100kbps) shows that the image

size should be scaled-down by more than 30 times, which is not acceptable in reality. In

fact, since the distortion model of (1) & (2) is valid when the bits per pixel are relatively











sufficient, this model only provides a reasonable prediction when the target bit rate is

adequate.


[Figure 11 plot: resizing factor f (x-axis, 0 to 0.9) versus mean absolute distortion per symbol (y-axis), with one curve per target bit rate.]


Figure 11. The Overall Distortion-Resizing Functions When Different Target Bit Rates
Are Given. The Video Resolution Is 704x480 (DVD Quality).

From the mathematical analysis, we have observed that it is theoretically possible

to find the optimal resizing factor given the bit rate and the image resolution.

Nevertheless, the accuracy of the above analysis depends heavily on the statistical assumptions about each pixel (i.e., that the variance is the same for all pixels). In reality, the i.i.d. assumption is not always applicable. Therefore, we pursued a PSNR-based

approach to investigate the effects of image scaling. The following chapters describe the

experimental design and results of this investigation.















CHAPTER 4
SYSTEM DESIGN

This chapter provides the design rationale for this thesis. Results from initial

testing are presented to provide reasoning for the choices made in the system design.

Experimental assumptions, parameters, and scenarios will also be discussed.

To test Adaptive Image Scaling, software-only versions of the MPEG-2 encoder and decoder were used. Figure 12 shows the system design components. The baseline

for the encoder/decoder codec used in this study was from the MPEG Software

Simulation Group (MSSG) shareware site [21]. The baseline encoder and decoder are

standalone, and were modified to allow streaming video as shown in Figure 12. The

modified encoder sends video streams over the network to the decoder. The individual

frames that are to be encoded are stored on disk in PPM format. The frames are encoded

and decoded on the fly over the network. The server and the client are connected using

TCP/IP.



[Figure 12 diagram: PPM frames on the server are fed to the encoder; the encoded stream is sent over the network to the decoder on the client.]


Figure 12. System Design Components










One of the parameters that can be passed to the encoder is the bit rate. The baseline

encoder does adapt the quantization parameter based on the bit rate. As discussed earlier,

the quantization parameter is bandwidth-adapted per Figure 13 (i.e., as the bit rate

decreases, the quantization parameter increases). For high bit rates, the quantization

parameter is set to the minimum value of 1, and for low bit rates, the quantization

parameter is gradually increased until the maximum value of 112 is reached. The relationship between bit count and quantization scale is nonlinear [2].


[Figure 13 plot: quantization scale (x-axis, 0 to 60) versus bit count (y-axis, up to 1,600,000 bits).]



Figure 13. Bandwidth-Adaptive Quantization

When the encoder detects a drop in the bit rate, the reference frame will be scaled-

down. As explained in the earlier chapters, sending a scaled-down version of the frame requires fewer bits to be transferred over the network, thus lowering the bandwidth requirement. The tradeoff for scaling down is a loss of picture quality, but the quality loss from scaling down should be smaller than the loss incurred without it.

When the baseline decoder receives the scaled-down image, it automatically adjusts

to the new scale. However, as mentioned earlier, to keep the adaptation transparent to the user, the image should only be viewed at a fixed resolution (i.e., that of the reference picture).

This can be achieved by modifying the decoder to only display the scaled image at the










reference size. When scaling up an image, negligible losses in the PSNR value were observed (an average loss of 0.02 dB).

4.1 Scaled-PSNR Based Approach

Since measuring video quality can be subjective [18], we decided to primarily use

the objective metric of PSNR. The baseline encoder provides frame-by-frame PSNR values along with other statistics that were used in our study. The reasons for pursuing a Scaled-PSNR based approach came from initial experimental results, which show that at low bit rates smaller resolutions produce PSNR values as good as or better than those of the original resolution. Figure 14 and Figure 15 show the PSNR and quantization values, respectively, for Frame 45 (the 4th I-Frame), obtained from multiple runs of the baseline encoder using different combinations of bit rates and resolutions. For the initial testing, three different resolutions of the mei20f.m2v [20] sequence were used: 704x240 (f=1.0), 352x240 (f=0.5), and 352x120 (f=0.25). The three sets of resolutions were created and stored on disk prior to running the encoder. The bit rates tested were 10kbs, 100kbs, 300kbs, 500kbs, 1mbs, 1.5mbs, 1.7mbs, 2mbs, 3mbs, 4mbs, 5mbs, 7mbs, 9mbs, 15mbs, 20mbs, 40mbs, and 60mbs.


Figure 14. Scaled-down PSNR Values


[Figure 14 plot: bit rate (x-axis, 1000 to 40,000,000) versus PSNR in dB (y-axis) for f = 1.00, f = 0.50, and f = 0.25.]

























Figure 15. Scaled-down Quantization Values

It can be seen that as the bit rate increases for the different resolutions, the PSNR

values saturate at a certain value when the quantization scale is set to the minimum value

(1). For the f=0.25 scale, the saturation point was seen at the bit rate of 9mbs with a value of 48.8dB (f=0.5, at 20mbs with a value of 48.9dB; f=1.0, at 40mbs with a value of

49.0dB). It can be observed that when the bit rate was more than adequate, the original

resolution had better PSNR values. This shows that, when the bit rate is adequate, image

scaling is not required.

However, as the bit rate decreases from the saturation point, we see that there is a

huge variance between the quantization values used for the different scales. The biggest

difference was seen at 500kbs, where the f=1.0 scale uses the maximum allowed quantization (112) while the f=0.5 scale uses roughly half that amount, and the f=0.25 scale still maintains a low quantization scale. We can see that the f=0.25 scale increases its quantization to half the maximum value only at bit rates of 300kbs or lower, while the f=1.0 scale begins to use the maximum quantization at bit rates below 1mbs. At low bandwidths, the f=1.0 scale does lose information from the use of higher quantization scales.


[Figure 15 plot: bit rate (x-axis) versus quantization scale (y-axis) for f = 1.00, f = 0.50, and f = 0.25.]









From the initial testing, it can be seen that the PSNR value does vary based on the

scale at different bit rates. As a result of the initial testing, we have more quantitative

evidence that for low bit rates, transmission of smaller amounts of data allows less

information loss due to quantization. The initial results show that the scaled-PSNR

approach is only useful in low-bandwidth situations (here, less than Imbs) where the loss

of information from scaling down would be better off than the effect of using

significantly larger quantization on the original image. Additionally, when there is

enough bandwidth, it is better to use the original resolution since a comparably smaller

quantization scale is used. Thus, the scaled-PSNR based approach was chosen to test

Adaptive Image Scaling.

4.2 Dynamic Adaptive Image Scaling Design

In the dynamic Adaptive Image Scaling scheme, the bit rate is monitored and the encoder is notified of any changes. When the encoder detects a change in the bit rate, it must decide whether scaling the image would produce a better picture for the end user. For simplicity and to gather initial results, adjustments are made only at the start of a GOP. Finer granularity (e.g., per-frame monitoring) can be added

at later stages of this research. When the encoder detects a significant change in the

bandwidth, the current image size is scaled-down dynamically in an effort to cope with

the limited resources. For the current research purposes, to test different scales, the scale

factor was specified as a parameter to the encoder. For simplicity, if a new scale is to be

used, the encoder closes the current sequence and restarts a new sequence using the

scaled-down image.

Another design decision was to dynamically scale (i.e., render) the image as the

encoder executes. The execution time for scaling the image was negligible especially









considering the under-utilization of the server CPU [10] as discussed earlier. This

decision reduces storage requirements and allows support for live video.

To focus on the effects of image scaling on the encoding process the issue of real-

time detection of bandwidth changes and reacting to those changes will not be addressed

in this thesis. The contributions in this area [8,13] can be integrated in future works on

Adaptive Image Scaling. For experimental purposes, for every GOP, the initial bit rate is

reduced by a bandwidth reduction rate. The bandwidth reduction rate was used to

simulate dynamic network bandwidth degradation. For example, if the initial bandwidth

was specified to be 700kbs and the bandwidth reduction rate was 50%, the first GOP will

be encoded using 700kbs, the second GOP will be encoded using 350kbs, the third GOP

will use 175kbs, etc.
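A minimal sketch of this per-GOP update follows (the variable names are ours; the modified encoder in Appendix A applies the same multiplicative reduction at each GOP boundary).

/* Sketch: bit rate assigned to a given GOP under simulated bandwidth degradation.
 * With initial_rate = 700000 and reduction = 0.5 this yields 700kbs, 350kbs,
 * 175kbs, ... for successive GOPs. */
double gop_bit_rate(double initial_rate, double reduction, int gop_index)
{
    double rate = initial_rate;
    int i;
    for (i = 0; i < gop_index; i++)   /* gop_index 0 is the first GOP */
        rate *= reduction;
    return rate;
}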

In the current design, when encountering a lower bit rate, if a smaller image does

not yield a better PSNR value, the current image scale is used. In other words, no further

scaling is performed. This was done in the interest of addressing the timing issues of

encoding. As discussed earlier, it is not realistic to expect an encoder to try multiple

image scale factors before proceeding. Thus the encoder must make a quick decision

based on its current scale and its next lower size. Note that the actual scaled size must be

slightly adjusted to ensure that dimensions meet the other encoder constraints such as

macroblock divisibility.
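A minimal sketch of such an adjustment follows; it mirrors the rounding used in the modified putseq.c of Appendix A, where the area scale f shrinks each dimension by the square root of f and the result is rounded up to whole macroblocks (frame pictures assumed).

/* Sketch: adjust a scaled-down frame size so both dimensions are multiples of 16. */
#include <math.h>

void scaled_dimensions(int ref_w, int ref_h, double scale, int *out_w, int *out_h)
{
    int w = (int)(ref_w * sqrt(scale));   /* each dimension shrinks by sqrt(f) */
    int h = (int)(ref_h * sqrt(scale));

    *out_w = 16 * ((w + 15) / 16);        /* round up to a whole macroblock */
    *out_h = 16 * ((h + 15) / 16);
}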

Since the resizing factor changes gradually with only a small difference every time,

the above procedure could require multiple-frame delays to converge to an optimal image

size. To be able to quickly benefit from a sudden recovery of network bandwidth and

also to obtain comparison data, the original reference picture resolution is always









compared to the current and next lower level resolutions. The resolution that yields the

best PSNR value will be used. The original picture resolution is preferred when there is

enough bandwidth.
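A minimal sketch of that per-GOP choice follows; the struct and function names are ours, and the PSNR values are assumed to come from the trial encodes at the three candidate resolutions.

/* Sketch: pick among the reference resolution, the current resolution, and the
 * next lower resolution by their trial PSNR. Ties favour the reference, which
 * keeps the original resolution whenever bandwidth is sufficient. */
typedef struct {
    int w, h;
    double psnr;        /* PSNR of the trial I-frame at this resolution */
} candidate;

candidate pick_resolution(candidate reference, candidate current, candidate lower)
{
    candidate best = reference;
    if (current.psnr > best.psnr)
        best = current;
    if (lower.psnr > best.psnr)
        best = lower;
    return best;
}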

To see the effects of image scaling we added an override parameter to the encoder

to ignore the dynamic rendering results and simply choose the reference picture

resolution at every GOP. In the override mode, the scale factor is used to report potential

resolutions that the encoder could have chosen instead of the original resolution.

An example of the proposed dynamic Adaptive Image Scaling is shown in Figure

16. In this scenario, the encoder dynamically scales down the original image (704x240)

using a factor of f = 0.5. The scaled image is encoded and sent over the network. When the decoder receives the scaled image, it reads the display size and scales the image up to the original size. It should be noted that the input to the encoder is the original

image (i.e., no other resolutions are stored on disk).

The next chapter provides the experimental results from using the dynamic

Adaptive Image Scaling with different bit rates and scale factors. The effects of image

scaling will be quantitatively examined and the resulting objective and user-perceived

quality metrics will be provided.











[Figure 16 diagram: the encoder sends a scaled image due to low bandwidth; the image crosses the network, and the decoder displays the video at a constant resolution.]


Figure 16. Adaptive Image Scaling Scenario














CHAPTER 5
EXPERIMENTAL RESULTS

Our proposed dynamic rendering encoder was tested with various combinations of

the initial bit rate, bandwidth reduction rate, and scale factors. In addition to the

mei20f.m2v sequence used in the initial testing, the bbc3_120.m2v sequence was also tested. Both sequences were downloaded from the MPEG2-Video Test Stream Archive web site [20]. The bbc3_120.m2v sequence has a faster rate of motion and dimensions of 704x288. We generalized the gathered data into two major cases of behavior: Case 1, when the scaled-down image had a higher objective quality, and Case 2, when the scaled-down image had a slightly lower objective quality.

5.1 Case 1

The dynamic rendering test was run with an initial bit rate of 1.5 mbps, Bandwidth

Reduction Rate of 50%, and a Scale Factor of 50%. Using the override flag, the same

test was also conducted to see the effects of not using image scaling.

To compare the effects of image scaling on the rest of the frames, the following

metrics were compared: PSNR, Quantization Parameter, Target Bits, and Actual Bits.

Four GOPs were encoded using Adaptive Image Scaling. When image scaling was used, the PSNR values from the different resolutions at the start of every GOP are listed in Table 1. The resolution with the best PSNR value was chosen at each GOP. Note that in Table 1 the Previous Scale PSNR reflects the scale that was used for the previous GOP (i.e., the current scale).











Table 1. Case 1 Adaptive Image Scaling I-Frame PSNR Data

GOP | Bit Rate  | Scaled-down Resolution | Previous Scale PSNR (dB) | Original PSNR (dB) | Scaled-down PSNR (dB)
1   | 1.5mbs    | 512x192                | N/A                      | 24.6               |
2   | 750kbs    | 352x128                | 23.0                     | 22.0               |
3   | 375kbs    | 256x96                 | 21.1                     | 20.7               |
4   | 187.5kbs  | 176x64                 | 19.4                     | 20.2               |

Figure 17 shows frame-by-frame PSNR values of Normal and Adaptive Image

Scaled Encoding. The PSNR results show that image scaling produces comparable

results. The overall PSNR values for all the frames are relatively close (i.e., our encoder did not lose significant quality even though the bit rate was reduced significantly). All the I-frames (e.g., Frames 0, 15, 30, and 45) even have better PSNR values for the scaled-down version of the frames. The maximum PSNR improvement is in the range of 2 dB.


Figure 17. Case 1 Scaled-PSNR Comparison

By analyzing each frame, it can be seen that the initial frames (0-14, associated with the first GOP) have similar PSNR values, while the frames of the second and third GOP (15-42) have better PSNR values using Adaptive Image Scaling. However, towards the end of the 3rd GOP (43-45), the PSNR values for Adaptive Image Scaling start to fall slightly (though they remain quite comparable except for the final frame).

As shown in Figure 18, the quantization parameter comparison provides further explanation for the PSNR improvement. When image scaling is used, a lower


[Figure 17 plot: frame number (x-axis) versus PSNR in dB (y-axis) for Normal Encoding and Adaptive Image Scaling.]










quantization parameter is required. As the encoding of the frames progresses with lower bit rates, higher quantization values are used in the normal encoding.

[Figure 18 plot: frame number (x-axis) versus quantization parameter (y-axis) for Normal Encoding and Adaptive Image Scaling.]


Figure 18. Case 1 Scaled-Quantization Comparison

We observed that at low bandwidths, a scaled-down image using a lower

quantization parameter would do better than a non-scaled image using a high quantization

parameter. It can be seen that the gap between the quantization parameters required for

the image scaling run and the non-image scaling run increases as the encoding encounters

lower bit rates.

When image scaling is used, the quantization parameter stays relatively stable within a much smaller range: 25 to 60. This is not the case for the normal encoding run, which has a range of 37 to 112. Lower quantization values produce images with better PSNR. Our proposed scheme generally uses quantization parameters 50% to 100% smaller than those of the normal encoding scheme.

Another metric to examine is the GOP target bit count. Figure 19 shows that, using normal encoding, the overflow of the GOP target bits per frame gradually increases. It can be seen that, as the encoding progresses past a certain point (here, after frame number 21), the buffer overflow in the normal encoding gets worse. Note that at the start of every GOP, the target bits of the current GOP need to be reset. We did not observe a similar buffer overflow in our proposed encoder.


[Figure 19 plot: frame number (x-axis) versus GOP target bit overflow (y-axis) for Normal Encoding and Adaptive Image Scaling.]



Figure 19. Case 1 GOP Target Bits Overflow Comparison

For Case 1, the scaled-down resolution was always chosen as shown in Table 1.

This particular case perhaps represents the best scenario (i.e., that the scaled-down

adaptation simply outperformed the normal encoder for all four GOPs). Nevertheless, no

encoder can work perfectly for every kind of video content. The following Case 2

represents this possibility.

5.2 Case 2

Sometimes the Adaptive Image Scaling scheme also needs to consider additional factors. Case 2 was run with an initial bit rate of 700 kbs, a Bandwidth Reduction Rate of 50%, and a Scale Factor of 50%. The Case 2 PSNR values from the different resolutions at the start of every GOP are listed in Table 2. Using the override flag, a test was conducted to see the effects of dynamic rendering with these parameters.

Table 2. Case 2 Adaptive Image Scaling I-Frame PSNR Data

GOP | Bit Rate | Scaled-down Resolution | Previous Scale PSNR (dB) | Original PSNR (dB) | Scaled-down PSNR (dB)
1   | 700kbs   | 512x192                | N/A                      | 22.0               |
2   | 350kbs   | 352x128                | 20.8                     | 20.7               |
3   | 175kbs   | 352x128                | 19.2                     | 20.3               | 19.9
4   | 87.5kbs  | 256x96                 | N/A                      |                    | 19.6












Figure 20 shows the effects of not choosing the scaled-down resolution for the last two GOPs. It can be seen that the Adaptive Image Scaling version had lower quantization until the beginning of the 3rd GOP, from which point both runs produce the same quantization values.


Figure 20. Case 2 Quantization Comparison

The correlation between the image size and the PSNR value can be seen in Figure 21, where each scale on the x-axis corresponds to a decreasing bit rate. It can be observed that at higher resolutions the PSNR of the scaled-down image is about the same as that of the original resolution. The scaled-down PSNR becomes a better indicator only when the image size decreases below the scale of 0.25. Thus the results suggest that, for this particular video content, the gain is only achieved for 3G wireless data transmission.

[Figure 21 plot: scale f, plotted with decreasing bit rates (x-axis), versus PSNR in dB (y-axis), comparing the scaled-down PSNR and the original PSNR.]


Figure 21. Comparison of Scaled-down PSNRs


[Figure 20 plot: frame number (x-axis) versus quantization parameter (y-axis) for Normal Encoding and Adaptive Image Scaling.]









For the example of the 3rd I-Frame, Adaptive Image Scaling strictly used the PSNR values to pick the original resolution over the scaled resolution. The difference between the two PSNR values was small: 20.3dB vs. 19.9dB. However, even accepting that 0.4dB loss relative to the conventional encoder, our proposed encoder would transmit a smaller amount of video data than the conventional one.

Considering PSNR alone, the decisions made at the start of each GOP were as follows. For the first 2 GOPs, the scaled-down resolution was

picked. For the 3rd and 4th GOP, the original picture had the better PSNR and thus it was

chosen.

Case 2 deserves some further analysis, since the decision is no longer clear-cut. It is arguable that the small PSNR variation may be tolerable, since saving bandwidth is always a design plus for the overall network. To show that small variations in the scaled-PSNR still provide comparable pictures, we provided the pictures to several human testers for subjective quality assessment.

The pictures were taken from the right side of the video frames (see Appendix D). The reference picture was encoded with a bit rate of 60 mbps (i.e., highest quality) and as a result used 1425311 bits with a PSNR of 37.3dB. The other two pictures were taken from the 3rd I-Frames of Case 2 above (encoded using a significantly lower bit rate of 175kbs), and as their PSNR values indicate (20.3dB vs. 19.9dB), their visual quality was reported by the testers to be very comparable. Since at these low bit rates the quality of both images suffers, choosing the scaled-down version provides comparable results while saving more than half of the bit requirement. Thus, it is better to choose the image with the slightly lower PSNR value and reduce the bandwidth requirement.










5.3 Further Analysis

From dynamic image scaling we find that for low bit rates, when the PSNR values

of the original and scaled-down frames are close, it is better to encode using the scaled-

down resolution. Based on this finding, we further investigated encoding using different

scales at low (i.e., 700kbs or lower), constant bit rates. The findings for the fourth I-

Frame (45th frame) of both sequences are listed in Table 3 and Table 4. At low

bandwidths, we tested the scaled-down PSNR values for the following scales: 0.25, 0.50

and 0.75.

Table 3. Frame 45 Testing of Sequence mei20f.m2v

Bit Rate | Optimal Scale | Scaled-PSNR Gain | Recommended Scale (Tolerance: 0.5) | Scaled-PSNR Gain
700kbs   | 0.25          | 1.2              | 0.25                               | 1.2
300kbs   | 0.5           | -0.1             | 0.25                               | -0.5
64kbs    | 0.75          | -0.2             | 0.5                                | -0.4

Table 4. Frame 45 Testing of Sequence bbc3_120.m2v

Bit Rate | Optimal Scale | Scaled-PSNR Gain | Recommended Scale (Tolerance: 1.5) | Scaled-PSNR Gain
700kbs   | 0.75          | 0.4              | 0.25                               | -0.4
300kbs   | 0.75          | 0.1              | 0.5                                | -1.0
64kbs    | 0.75          | -0.2             | 0.5                                | -1.6

The results suggest that scaling down to at least the scale of 0.75 would produce results as good as or better than using the original resolution.

In Table 3 and Table 4, the Scaled-PSNR Gain is the gain compared to the original PSNR. The Recommended Scale is the scale that should be used to meet minimum quality requirements, given a tolerance for the scaled-PSNR loss. The exact values vary with the image complexity of the sequence used. We can see that the bbc3_120.m2v sequence requires a higher tolerance to use the recommended scales, and its optimal scale is higher. Sequences having a higher rate of motion and detail will have higher rate-distortion values [16].
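One possible selection rule is to pick the smallest tested scale whose scaled-PSNR loss against the original stays within the tolerance; the thesis does not spell out the exact procedure, so the sketch below is only an illustration with names of our own choosing.

/* Sketch: recommend the smallest scale whose PSNR loss relative to the original
 * resolution stays within a tolerance (in dB). gains[i] is the scaled-PSNR gain
 * of scales[i]; a negative gain is a loss. */
double recommend_scale(const double *scales, const double *gains, int n, double tolerance)
{
    double best = 1.0;              /* fall back to the original resolution */
    int i;
    for (i = 0; i < n; i++) {
        if (gains[i] >= -tolerance && scales[i] < best)
            best = scales[i];
    }
    return best;
}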








The next chapter draws conclusions regarding the results gathered from this

thesis.














CHAPTER 6
CONCLUSION

In this thesis, we presented a study of a novel design and implementation of bandwidth-aware Adaptive Image Scaling for MPEG-2/4 encoding. The strength of the scheme is that it can reduce network traffic during congestion and still obtain comparable quality. From the experimental results, it was observed that using a small PSNR tolerance provided better results than using a zero-tolerance criterion. This work proposes a complementary scheme that integrates well with adaptive quantization and other methods of degrading the video's fidelity.

6.1 Contributions

Even though works such as Fox et al. [4] address resolution scaling as an option for adjusting data fidelity, this thesis has performed a thorough investigation of the effects of MPEG-2 dynamic Adaptive Image Scaling. The most significant contribution of this thesis is the finding that, under low-bandwidth conditions, image scaling can produce video of as good or better quality with significantly fewer bits than using the original resolution. Furthermore, this thesis uniquely provides an independent

theoretical model which, when tested, backs the experimental results. Both the

theoretical and experimental results show that there is an optimal scale factor that will

provide the minimum distortion.

6.2 Future Work

All the VBR experiments that were done for this thesis assumed a decreasing bit

rate to focus on low bandwidth behavior. This thesis can be extended to handle random








bit rate fluctuations. Though a two-pass approach was used to compare the effects of image scaling against the reference, an integrated one-pass approach could be implemented with a rule-based algorithm that determines the appropriate fidelity from factors such as PSNR, bit rate, and scale.





















APPENDIX A
PERTINENT MPEG-2 ENCODER SOURCE CODE CHANGES


(Note: Modified code from MPEG Software Simulation Group [21] is in boldface.)


mpeg2enc.c

/* mpeg2enc.c, main() and parameter file reading */
/* Copyright (C) 1996, MPEG Software Simulation Group. All Rights Reserved. */
/*
* Disclaimer of Warranty

* These software programs are available to the user without any license fee or
* royalty on an "as is" basis. The MPEG Software Simulation Group disclaims
* any and all warranties, whether express, implied, or statuary, including any
* implied warranties or merchantability or of fitness for a particular
* purpose. In no event shall the copyright-holder be liable for any
* incidental, punitive, or consequential damages of any kind whatsoever
* arising from the use of these programs.

* This disclaimer of warranty extends to the user of these programs and user's
* customers, employees, agents, transferees, successors, and assigns.

* The MPEG Software Simulation Group does not represent or warrant that the
* programs furnished hereunder are free of infringement of any third-party
* patents.

* Commercial implementations of MPEG-1 and MPEG-2 video, including shareware,
* are subject to royalty fees to patent holders. Many of these patents are
* general enough such that they are unavoidable regardless of implementation
* design.
*


#include <stdio.h>
#include <stdlib.h>
#define GLOBAL /* used by global.h */
#include "config.h"
#include "global.h"

/* headers assumed for the socket, signal, and string calls used below */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>
#include <unistd.h>
#include <signal.h>
#include <string.h>
#include <errno.h>
#include <sys/wait.h>
#include <fcntl.h>

#define MAXSERVERDATASIZE 80
#define MAXHOSTNAME 80
#define MYPORT 5000 // the port users will be connecting to
#define BACKLOG 5 // how many pending connections queue will hold
#define MAXCLIENTS 4
#define MAXTITLES 10
#define MAXTITLELENGTH 15

/* pointer to a signal handler */










typedef void (*sighandler t)(int);

int clientcount = 0;


/* private prototypes */
static void readparmfile _ANSI_ARGS_((char *fname));
static void readquantmat _ANSI_ARGS_((void));
static int openConnection _ANSI_ARGS_((void));

// control c signal handler
static void CtrlC()
{
printf (": Handling CONTROL-C\n");
printf (": Server is going DOWN!\n");
close(newSockfd);
close(sockfd);

fclose(outfile);
fclose(statfile);
}

// signal handler for forked process termination
static void ChildTerm()
{
clientcount--;
printf(": child has died\n");
printf(": clientcount is %d \n", clientcount);

}



int main(argc,argv)
int argc;
char *argv[];
{
if (argc!=3)
{
printf("\n%s, %s\n",version,author);
printf("Usage: mpeg2encode in.par out.m2v\n");
exit(0);
}

/* read parameter file */
readparmfile(argv[1]);

printf("Reading quantization matrices....\n");
/* read quantization matrices */
readquantmat();

/* open output file */
if (!(outfile=fopen(argv[2],"wb")))
{
sprintf(errortext,"Couldn't create output file %s",argv[2]);
error(errortext);
}

printf("Openning Connection ....\n");
/* open connection */
openConnection();
return 0;
}


static int openConnection()
{
struct sockaddr_in serverAddr; // my address information
struct sockaddr_in clientAddr; // connector's address information
struct hostent *hp;
int count, addrlen, nbytes;
char myname[MAXHOSTNAME+1];











char serverBuffer[MAXSERVERDATASIZE];
int i, selections;
char *bitStream;
int spaceindex, userIndex;

// clientcount = 0;
if ((sockfd = socket(AF_INET, SOCK_STREAM, 0)) == -1)
{
perror("socket");
exit(1);
}
memset(&serverAddr, 0, sizeof(struct sockaddr_in)); // clear our address

if (gethostname(myname, MAXHOSTNAME) == -1) // who are we...
{
perror("gethostname");
exit(1);
}

if ((hp = (struct hostent *)gethostbyname(myname)) == NULL) // get our address info
{
perror("gethostbyname");
exit(-1); // per tutorial -1
}

serverAddr.sin_family = AF_INET;
serverAddr.sin_port = htons(port_number); // short, network byte order
if (bind(sockfd, (struct sockaddr *)&serverAddr, sizeof(serverAddr)) == -1)
{
perror(": bind");
exit(1);
}

printf(": Server is UP at %s, port %d\n",
myname, port_number);

// signal handler for control-c --> close the socket!
signal(SIGINT, CtrlC);

if (listen(sockfd, BACKLOG) == -1)
{
printf(": listen failure %d\n", errno);
perror(": listen");
exit(1);
}

// ignore child process termination
// signal (SIGCHLD, SIG IGN);
signal (SIGCHLD, ChildTerm);

clientcount = 0;
while(1) // main accept() loop
{

addrlen = sizeof(serverAddr);

if (clientcount < MAXCLIENTS)
{
printf(": waiting to accept new connection \n");
if ((newSockfd = accept(sockfd, &serverAddr, &addrlen)) == -1)
{
printf(": accept failure %d\n",errno);
perror(": accept");
exit(l);


}
printf(": got new connection from %s\n",
inet ntoa(clientAddr.sin addr));
switch(fork())
{ /* try to handle connection */
case -1 : /* bad news. scream and die */





























perror(": fork");
close(sockfd);
close(newSockfd);
exit(l);
case 0 : /* we're the child, do something */
close(sockfd); // close the parent socket fd

//1. read request for list of titles
if ((nbytes = read(newSockfd, serverBuffer, MAXSERVERDATASIZE)) < 0)
{
perror(": read");
}
if (nbytes != 19)
{
printf("Error -- could not read request for titles, btyes read is %d\n",

}
printf(": Read request for titles, btyes read is %d\n", nbytes);

printf(": Ready to send bitstream for decoding\n");
init(horizontal size, vertical size);
putseq();
break;
default : /* we're the parent so look for */
close(newSockfd);
clientcount++;
printf(": clientcount is %d \n", clientcount);
continue;


void init(int inputH_size, int inputV_size)
{


int i, size;
static int block counttab[3]
static int firstTime = 1;


{6,8,12};


horizontal size = inputH_size;
vertical size = inputV_size;
imageScaleTestBufCnt = 0;

range checks();
profile and level checks();


/* Clip table */
if (!(Clip=(unsigned char *)malloc(1024)))
Error("Clip[] malloc failed\n");

Clip += 384;

for (i=-384; i<640; i++)
Clip[i] = (i<0) ? 0 : ((i>255) ? 255 : i);


initbits();
init fdct();
init idct();


/* round picture dimensions to nearest multiple of 16 or 32 */
mb width = (horizontal size+15)/16;
mb height = prog seq ? (vertical size+15)/16 : 2*((vertical size+31)/32);
mbheight2 = fieldpic ? mbheight>>l : mbheight; /* for field pictures *
width = 16*mb width;
height =16*mbheight;


(chroma format==CHROMA444) ? width : width>>l;
(chroma format!=CHROMA420) ? height : height>>l;


chrom width
chrornheight











height = fieldpic ? height>>l : height;
width2 = fieldpic ? width< chrom width2 = fieldpic ? chrom width<
block count = block count tab[chroma format-1];

/* clip table */
if (I(clp = (unsigned char *)malloc(1024)))
error("malloc failed\n");
clp+= 384;
for (i=-384; i<640; i++)
clp[i] = (i<0) ? 0 : ((i>255) ? 255 : i);

for (i=0; i<3; i++)
{
size = (i==0) ? width*height : chrom width*chrom height;

if (refHsize == horizontal size && refVsize == vertical size)
refSize[i] = size;
if (IfirstTime)
{
free(newrefframe[i]);
free(oldrefframe[i]);
free(auxframe[i]);
free(neworgframe[i]);
free(oldorgframe[i]);
free(auxorgframe[i]);
free(predframe[i]);
free(tempframe[i]);
free(temp2frame[i]);
}
if (I(newrefframe[i] = (unsigned char *)malloc(size)))
error("malloc failed\n");
if (I(oldrefframe[i] = (unsigned char *)malloc(size)))
error("malloc failed\n");
if (I(auxframe[i] = (unsigned char *)malloc(size)))
error("malloc failed\n");
if (I(neworgframe[i] = (unsigned char *)malloc(size)))
error("malloc failed\n");
if (I(oldorgframe[i] = (unsigned char *)malloc(size)))
error("malloc failed\n");
if (I(auxorgframe[i] = (unsigned char *)malloc(size)))
error("malloc failed\n");
if (I(predframe[i] = (unsigned char *)malloc(size)))
error("malloc failed\n");
if (I(tempframe[i] = (unsigned char *)malloc(size)))
error("malloc failed\n");
if (I(temp2frame[i] = (unsigned char *)malloc(size)))
error("malloc failed\n");


mbinfo = (struct mbinfo *)malloc(mb width*mb height2*sizeof(struct mbinfo));

if (Imbinfo)
error("malloc failed\n");

blocks =
(short (*) [64])malloc(mbwidth*mbheight2*blockcount*sizeof(short [64]));

if ([blocks)
error("malloc failed\n");

// should not recreate a new file on multiple call (just open and append to it!)

/* open statistics output file */
if (firstTime)
{
if (statname[0]=='-')
statfile = stdout;
else if (!(statfile = fopen(statname,"w")))
{












sprintf(errortext,"Couldn't create statistics output file %s",statname);
error(errortext);


}
if (firstTime)
firstTime = 0;


void error(text)
char *text;
{
fprintf(stderr,text);
putc('\n',stderr);
exit(l);


static void readparmfile(fname)
char *fname;
{
int i;
int h,m,s,f;
FILE *fd;
char line[256];
static double ratetab[8]=
{24000.0/1001.0,24.0,25.0,30000.0/1001.0,30.0,50.0,60000.0/1001.0,60.0};
extern int r,Xi,Xb,Xp,d0i,d0p,dob; /* rate control */
extern double avg act; /* rate control */

if (!(fd = fopen(fname,"r")))
{
sprintf(errortext,"Couldn't open parameter file %s",fname);
error(errortext);
}

fgets(id string,254,fd);
fgets(line,254,fd); sscanf(line,"%s",tplorg);
printf("tplorg = %s\n",tplorg);
fgets(line,254,fd); sscanf(line,"%s",tplref);
fgets(line,254,fd); sscanf(line,"%s",iqname);
fgets(line,254,fd); sscanf(line,"%s",niqname);
fgets(line,254,fd); sscanf(line,"%s",statname);
fgets(line,254,fd) ; sscanf(line,"%d",&inputtype);
fgets(line,254,fd); sscanf(line,"%d",&nframes);
fgets(line,254,fd); sscanf(line,"%d",&frame0);
fgets(line,254,fd); sscanf(line,"%d:%d:%d:%d",&h,&m,&s,&f);
fgets(line,254,fd); sscanf(line,"%d",&N);
fgets(line,254,fd); sscanf(line,"%d",&M);
fgets(line,254,fd) ; sscanf(line,"%d",&mpegl) ;
fgets(line,254,fd); sscanf(line,"%d",&fieldpic);
fgets(line,254,fd) ; sscanf(line,"%d",&horizontalsize);
fgets(line,254,fd); sscanf(line,"%d",&vertical size);
refHsize = horizontal size;
refVsize = vertical size;
fgets(line,254,fd); sscanf(line,"%d",&aspectratio);
fgets(line,254,fd); sscanf(line,"%d",&frameratecode);
fgets(line,254,fd); sscanf(line,"%lf",&bit rate);
fgets(line,254,fd); sscanf(line,"%lf",&bwReductionRate);
fgets(line,254,fd); sscanf(line,"%lf",&scaleFactor);
fgets(line,254,fd); sscanf(line,"%lf",&overrideImageScaleResults);
fgets(line,254,fd); sscanf(line,"%lf",&staticMode);
fgets(line,254,fd); sscanf(line,"%d",&port number);
fgets(line,254,fd); sscanf(line,"%d",&vbv buffer size);
fgets(line,254,fd); sscanf(line,"%d",&low delay);
fgets(line,254,fd); sscanf(line,"%d",&constrparms);
fgets(line,254,fd); sscanf(line,"%d",&profile);
fgets(line,254,fd); sscanf(line,"%d",&level);
fgets(line,254,fd); sscanf(line,"%d",&progseq);
fgets(line,254,fd); sscanf(line,"%d",&chroma format);
fgets(line,254,fd); sscanf(line,"%d",&video format);
fgets(line,254,fd); sscanf(line,"%d",&color_primaries);











fgets(line,254,fd); sscanf(line,"%d",&transfercharacteristics);
fgets(line,254,fd); sscanf(line,"%d",&matrix coefficients);
fgets(line,254,fd) ; sscanf(line, "%d",&display_horizontalsize);
fgets(line,254,fd) ; sscanf(line, "%d",&display_verticalsize);
fgets(line,254,fd) ; sscanf(line,"%d",&dc_prec);
fgets(line,254,fd); sscanf(line,"%d",&topfirst);
fgets(line,254,fd); sscanf(line,"%d %d %d",
frame_preddct_tab,frame_pred dct_tab+l,frame_preddct_tab+2);

fgets(line,254,fd); sscanf(line,"%d %d %d",
conceal tab,conceal tab+l,conceal tab+2);

fgets(line,254,fd); sscanf(line,"%d %d %d",
qscale tab,qscale tab+l,qscale tab+2);

fgets(line,254,fd); sscanf(line,"%d %d %d",
intravlc tab,intravlc tab+l,intravlc tab+2);
fgets(line,254,fd); sscanf(line,"%d %d %d",
altscan tab,altscan tab+l,altscan tab+2);
fgets(line,254,fd); sscanf(line,"%d",&repeatfirst);
fgets(line,254,fd); sscanf(line,"%d",&prog frame);
* intra slice interval refresh period */
fgets(line,254,fd); sscanf(line,"%d",&P);
fgets(line,254,fd); sscanf(line,"%d",&r);
fgets(line,254,fd); sscanf(line,"%lf",&avgact);
fgets(line,254,fd); sscanf(line,"%d",&Xi);
fgets(line,254,fd); sscanf(line,"%d",&Xp);
fgets(line,254,fd); sscanf(line,"%d",&Xb);
fgets(line,254,fd); sscanf(line,"%d",&d0i);
fgets(line,254,fd); sscanf(line,"%d",&d0p);
fgets(line,254,fd); sscanf(line,"%d",&d0b);

if (N<1)
error("N must be positive");
if (M<1)
error("M must be positive");
if (N%M 1= 0)
error("N must be an integer multiple of M");

motion data = (struct motion data *)malloc(M*sizeof(struct motion data));
if (Imotion data)
error("malloc failed\n");

for (i=0; i {
fgets(line,254,fd);
sscanf(line,"%d %d %d %d",
&motion data[i] .forw hor f code, &motion data[i] .forw vert f code,
&motion data[i] .sxf, &motiondata[i] .syf);

if (il=0)

fgets(line,254,fd);
sscanf(line,"%d %d %d %d",
&motion data[i].back hor f code, &motion data[i].back vert f code,
&motion data[i] .sxb, &motion data[i] .syb);


fclose(fd)
fclose(fd);











putseq.c

/* putseq.c, sequence level routines */

/* Copyright (C) 1996, MPEG Software Simulation Group. All Rights Reserved. */

/*
* Disclaimer of Warranty

* These software programs are available to the user without any license fee or
* royalty on an "as is" basis. The MPEG Software Simulation Group disclaims
* any and all warranties, whether express, implied, or statuary, including any
* implied warranties or merchantability or of fitness for a particular
* purpose. In no event shall the copyright-holder be liable for any
* incidental, punitive, or consequential damages of any kind whatsoever
* arising from the use of these programs.

* This disclaimer of warranty extends to the user of these programs and user's
* customers, employees, agents, transferees, successors, and assigns.

* The MPEG Software Simulation Group does not represent or warrant that the
* programs furnished hereunder are free of infringement of any third-party
* patents.

* Commercial implementations of MPEG-1 and MPEG-2 video, including shareware,
* are subject to royalty fees to patent holders. Many of these patents are
* general enough such that they are unavoidable regardless of implementation
* design.
*


#include
#include
#include "config.h"
#include "global.h"
#include

void outputMessage()
{
printf(message);
fprintf(statfile,message);
}

int scalelmage(tinfile, toutfile, twidth, theight, fnum, startIndex)
char *tinfile;
char *toutfile;
int twidth, theight, fnum, startIndex;
{
IGC igc;
IImage image,nimage;
char outfile[30];
IFileFormat input format = IFORMATPPM;
IFileFormat output format = IFORMATPPM;
char infile[30];
FILE *fp;
IError ret;
int loop;
int i;

for (i=startIndex;i<(fnum+startIndex);i++)
{
sprintf(infile,"%s%d.%s",tinfile,i,"ppm");
sprintf(outfile,"%s%d.%s",toutfile,i,"ppm");
sprintf(message,"Converting from %s to %s\n",infile,outfile);
outputMessage();
if ( infile )
fprintf ( stderr, "No infile specified. Reading from stdin.\n" );
if ( outfile )
{
strcpy(outfile "out.ppm");










fprintf ( stderr, "No outfile specified. Writing to %s.\n", outfile );
}

/* try and determine file types by extension */
if ( infile )
{
ret = IFileType ( infile, &input format );
if ( ret )
{
fprintf ( stderr, "Input file error: %s\n", IErrorString ( ret ) );
exit ( 1 );
}
}
if ( outfile )
{
ret = IFileType ( outfile, &output format );
if ( ret )
{
fprintf ( stderr, "Output file error: %s\n", IErrorString ( ret ) );
fprintf ( stderr, "Using PPM format.\n" );
}
}

if ( infile )
{
fp = fopen ( infile, "rb" );
if ( fp )
{
perror ( "Error opening input file:" );
exit ( 1 );
}
}
else
fp = stdin;

if ( ( ret = IReadImageFile ( fp, inputformat, IOPTION NONE, &image ) ) )
{
fprintf ( stderr, "Error reading image: %s\n", IErrorString ( ret ) );
exit ( 1 );
}
if ( infile )
close ( fp );
igc=ICreateGC ( );
nimage=ICreateImage(twidth,theight,IOPTIONNONE);
ICopyImageScaled ( image,nimage,igc,0,0,IImageWidth(image),
IImageHeight(image),0,0,twidth,theight);
if ( outfile )
{
fp = fopen ( outfile, "wb" );
if ( fp )
{
perror ( "Cannot open output file: );
exit ( 1 );
}
}
else
fp = stdout;

IWritelmageFile ( fp, nimage, output format, IOPTION INTERLACED );

if ( outfile )
close ( fp );
}
return ( 0 );
}

void putseq()

/* this routine assumes (N % M) == 0 */
int i, j, k, f, fO, pf0, n, np, nb, sxf, syf, sxb, syb;
int ipflag;











FILE *fd;
char name[256];
unsigned char *neworg[3], *newref[3];
static char ipb[5] = {' ','I','P','B' 'D'};

struct snr data snrVals; // Vail = snrvals in ref section
int scaleDir = 0; // -1 = down, 0 = steady, 1 = up
int mb widthTemp, mb heightTemp;
double prev bit rate = bit rate, currScale = 1.0, testScale = 1.0;
char refFrameName[256], currFrameName[256];
char baseRefFrameName[256], baseScaledFrameName[256], tplorgTemp[256],
scaledUpFrameName[256];
char smallEncStoredFName[256], scaledUpSmallEncStoredFName[256];
char smallEncStoredFNameBase[256], scaledUpSmallEncStoredFNameBase[256];
int currHsize = horizontal size, currVsize = vertical size;
int prevHsize = horizontal size, prevVsize = vertical size;
int testHsize = horizontal size, testVsize = vertical size;
int bestYSnrLevel = 0;
float bestYSnr = 0.0, levellYSnr = 0.0, level2YSnr = 0.0, level3YSnr = 0.0,
scldNormalYSnr;;
int firstTimeInLevell = 1;
char tempFileName[30];

strcpy(refFrameName, tplorg);
strcpy(currFrameName, tplorg);
//set Testing Flag = false
imageScaleTesting = 0;
testLevel = 0;
initFlag = 1;
refSnrPass = 0;
strcpy(tplorgTemp,tplorg);
strcpy(baseRefFrameName, strtok(tplorgTemp,"%")); //gets "des" out of "des%d"
sprintf(message,"baseFrameName = %s\n",baseRefFrameName);
outputMessage();


sprintf(message, //put the text in the opposite column
"DDATA,-,Level,-,display#,-,frame,-,Type,-,Dim,-,Area,-,Bit Rate,-,S,-,TargetBits,-
,GOPOverflow,-,Q,-,YSnr,-,LevellSnr,-,LevellS,-,LevellQ,-,Scale\n");
outputMessage();

/* loop through all frames in encoding/decoding order */
for (i=0; i {
pfO = N*((i+(M-1))/N) (M-l); // used to peek if the current frame is an I frame
if (pf0<0)
pf0=0;
sprintf(message,"pf0 = %d\n", pf0);
outputMessage();

// Assume the following unless overridden in the I frame testing:
testLevel = MaxTestLevels + 1; // skip all test levels
imageScaleTesting = 0; // do it for real

bestYSnrLevel = 0;
bestYSnr = 0;
levellYSnr = 0;
level2YSnr = 0;
level3YSnr = 0;

if (pfO == i) //@ every I frame, need to determine to re-scale
{
sprintf(message,"DD Start OF GOP fO = %d------------------------\n",pf0);
outputMessage();
if (i != 0) // execpt for the 1st gop do:
{
prev bit rate = bit rate; // save previously used bitrate
bit rate *= bwReductionRate; // get current bit rate
}


if (bitrate == prevbitrate)











scaleDir = 0; // steady
else if (bitrate > prevbitrate)
scaleDir = 1; // increasing
else // decreasing
scaleDir = -1;

if ((bwReductionRate == 1.0) && (staticMode == 1))
scaleDir = -1; // assume scale down

if ((scaleDir != 0) || (i == 0)) // need to test if adaptive scaling needs to be
done
( // if there is a bandwidth change or if initial I-
frame
if (i != 0)
putseqendO; // close the current sequence
initFlag = 1; // this flag will be set to 0 after initialization
testLevel = 1; // start at base test
imageScaleTesting = 1;
if (i == 0)
scaleDir = -1; // for the first I-frame test to see if scale down is better
than ref.
}
}

// before potential testing, save the current size
prevHsize = currHsize;
prevVsize = currVsize;
testHsize = currHsize;
testVsize = currVsize;
refSnrPass = 0; // this flag should be set when ref SNR is to be done
while (testLevel > 0)
{
imageScaleTestBufCnt = 0;
if ((testLevel == 2) && (prevHsize == refHsize) && (prevVsize == refVsize))
//verify if level 2 is required
testLevel++;

sprintf(message,"--->Level = %d, i = %d, ref SNR pass = %d\n", testLevel, i,
refSnrPass);
outputMessage();

if (imageScaleTesting == 1) // if you enter the loop with testing in mind,
initialiation is required
initFlag = 1;

if (testLevel == 1) // level 1 testing get values for reference
{
currHsize = refHsize;
currVsize = refVsize;
// set to reference frame name & size
strcpy(tplorg, refFrameName);
}
else if (testLevel == 2) // level 2 testing get values for curr scale
{
currHsize = prevHsize;
currVsize = prevVsize;
testHsize = currHsize;
testVsize = currVsize;
}
else if (testLevel == 3) // level 3 testing scale up/down
{
if (scaleDir == 1) // increasing
{
testScale = currScale + currScale scaleFactor;
if (testScale > 1.0) // floor it to the reference size & in this case no need to
run level 3 testing !!!
testScale = 1.0; // POSTPONING since this case is not being dealt w/ in this
THESIS (i.e., assume decreasing)
}
else if (scaleDir == -1) // decreasing
{












testScale = currScale currScale scaleFactor;


}
// need to adjust
currHsize = (int)
currVsize = (int)


ref by the scale
(refHsize sqrt(testScale));
(refVsize sqrt(testScale));


if (currHsize % 2 != 0)
currHsize += 1;
if (currVsize % 2 != 0)
currVsize += 1;


mb widthTemp = (currHsize+15)/16;
mb heightTemp = progseq ? (currVsize+15)/16
currHsize = 16*mbwidthTemp;
currVsize = 16*mb heightTemp;

testHsize = currHsize;
testVsize = currVsize;
sprintf(message,"testScale = %lf, scaled H =
currHsize, currVsize);
outputMessage();


: 2*((currVsize+31)/32);





%d, scaled V = %d\n", testScale,


}
else if (testLevel == MaxTestLevels + 1) //the real run
{
if (imageScaleTesting == 1) // if we are in testing mode, pick the
{
if ((bestYSnrLevel == 1) | | (overrideImageScaleResults == 1))


best SNR level


if (overrideImageScaleResults == 1)
{
sprintf (message,"****OVERRIDE ImageScaline ON\n");
outputMessage();
}
currHsize = refHsize;
currVsize = refVsize;
strcpy(tplorg, refFrameName);
currScale = 1.0; // since reference is chosen, scale is set to 1
}
else if (bestYSnrLevel == 2)
{
currHsize = prevHsize;
currVsize = prevVsize;
//currScale is unchanged
}
else if (bestYSnrLevel == 3)


currHsize =
currVsize =
currScale =


testHsize;
testVsize;
testScale;


}
sprintf (message,"****The best Normal Y SNR is found at Level %d Testing with
value of %3.3g\n", bestYSnrLevel, bestYSnr);
outputMessage();
sprintf (message,"SUMMARY of Y SNRS: Level 1: %3.3g, Level 2: %3.3g, Level 3:
%3.3g\n", levellYSnr, level2YSnr, level3YSnr);
outputMessage();

}
imageScaleTesting = 0;


if ((currHsize != refHsize) (currVsize != refVsize)) //if not ref
{
//construct the name from the current scaled size: hxw%d
sprintf(tplorg,"%dx%d%s",currHsize,currVsize,refFrameName);
strcpy(tplorgTemp,tplorg);
strcpy(baseScaledFrameName, strtok(tplorgTemp,"%"));


if (initFlag == 1)











{
init(currHsize, currVsize);

rcinit seq(); /* initialize rate control */

/* sequence header, sequence extension and sequence display extension */
putseqhdr();
if (Impegl)
{
putseqext();
putseqdispext();
}

/* optionally output some text data (description, copyright or whatever) *
if (strlen(id string) > 1)
putuserdata(idstring);

initFlag = 0; // this flag must be set to 1 on a need basis
}
if (!quiet)
{
fprintf(stderr,"Encoding frame %d ",i);
fflush(stderr);
}
/* fO: lowest frame number in current GOP
*
* first GOP contains N-(M-1) frames,
* all other GOPs contain N frames
*/
f0 = N*((i+(M-1))/N) (M-l);

if (f0<0)
f0=0;

if (i==0 | (i-1)%M==0)
{
/* I or P frame */
for (j=0; j<3; j++)
{
/* shuffle reference frames */
neworg[j] = oldorgframe[j];
newref[j] = oldrefframe[j];
oldorgframe[j] = neworgframe[j];
oldrefframe[j] = newrefframe[j];
neworgframe[j] = neworg[j];
newrefframe[j] = newref[j];
}

/* f: frame number in display order */
f = (i==0) ? 0 : i+M-1;
if (f>=nframes)
f = nframes 1;

if (i==f0) /* first displayed frame in GOP is I */
{
/* I frame */
picttype = ITYPE;
forw hor f code = forw vert f code = 15;
back hor f code = back vert f code = 15;

/* n: number of frames in current GOP

first GOP contains (M-l) less (B) frames
*/
n = (i==0) ? N-(M-1) : N;

/* last GOP may contain less frames */
if (n > nframes-f0)
n = nframes-f0;


* number of P frames *











if (i==0)
np = (n + 2*(M-1))/M 1; /* first GOP */
else
np= (n + (M-1))/M 1;

/* number of B frames */
nb = n -np 1;

rcinit_GOP(np,nb);

putgophdr(f0,i==0); /* set closed GOP in first GOP only */
}
else
{
/* P frame */
picttype = PTYPE;
forw hor f code = motion data[0].forw hor f code;
forw vert f code = motion data[0].forw vert f code;
back hor f code = back vert f code = 15;
sxf = motion data[0].sxf;
syf = motion data[0].syf;
}
}
else
{
/* B frame */
for (j=0; j<3; j++)
{
neworg[j] = auxorgframe[j];
newref[j] = auxframe[j];
}

/* f: frame number in display order */
f = i 1;
pict type = B TYPE;
n = (i-2)%M + 1; /* first B: n=l, second B: n=2, ... */
forw hor f code = motion data[n].forw hor f code;
forw vert f code = motion data[n].forw vert f code;
back hor f code = motion data[n].back hor f code;
back vert f code = motion data[n].back vert f code;
sxf = motion data[n].sxf;
syf = motion data[n].syf;
sxb = motion data[n].sxb;
syb = motion data[n].syb;


temp ref = f fO;
framepreddct = framepred dct_tab[pict_type-l];
q_scale_type = qscale_tab[pict_type-l];
intravlc = intravlc tab[picttype-l];
altscan = altscan tab[picttype-l] ;

fprintf(statfile,"\nFrame %d (#%d in display order):\n",i,f);
fprintf(statfile," picture type=%c\n",ipb[pict type]);
fprintf(statfile," temporal reference=%d\n",temp ref);
fprintf(statfile," framepredframe_dct=%d\n",frame_preddct);
fprintf(statfile," q_scale_type=%d\n",q_scale_type);
fprintf(statfile," intravlcformat=%d\n",intravlc);
fprintf(statfile," alternatescan=%d\n",altscan);

if (pict type!=I TYPE)

fprintf(statfile," forward search window: %d...%d / %d...%d\n",
-sxf,sxf,-syf,syf);
fprintf(statfile," forward vector range: %d...%d.5 / %d...%d.5\n",
-(4< -(4< )


if (pict type==B TYPE)
{











fprintf(statfile," backward search window: %d...%d / %d...%d\n",
-sxb,sxb,-syb,syb);
fprintf(statfile," backward vector range: %d...%d.5 / %d...%d.5\n",
-(4< -(4<

sprintf(name,tplorg,f+frame0);
sprintf(message,"\nReading frame from name = %s\n", name);
outputMessage();
sprintf(message,"Horizontal Size = %d, Vertical Size = %d, Bit Rate = %lf\n",
horizontal size, vertical size, bit rate);
outputMessage();

if ((currHsize != refHsize) (currVsize != refVsize)) //if not ref
{
scalelmage(baseRefFrameName, baseScaledFrameName, currHsize, currVsize, 1,
f+frame0); // for the next i ?? f+frame0???
}

readframe(name,neworg);

sprintf(tempFileName,"neworgL%d%s\0",testLevel,name);
store_ppm_tga(tempFileName,neworg,0,horizontal_size,vertical_size,0);

// FIELD PICTURES ARE NOT SUPPORTED
pict_struct = FRAME_PICTURE;

/* do motion estimation

uses source frames (...orgframe) for full pel search
and reconstructed frames (...refframe) for half pel search
*/

motion_estimation(oldorgframe[0],neworgframe[0],
oldrefframe[0],newrefframe[0],
neworg[0],newref[0],
sxf,syf,sxb,syb,mbinfo,0,0);

predict(oldrefframe,newrefframe,predframe,0,mbinfo);
dct_type_estimation(predframe[0],neworg[0],mbinfo);
transform(predframe,neworg,mbinfo,blocks);

putpict(neworg[0]);

for (k=0; k<mb_height2*mb_width; k++)
if (mbinfo[k].mb_type & MB_INTRA)
for (j=0; j<block_count; j++)
iquant_intra(blocks[k*block_count+j],blocks[k*block_count+j],
dc_prec,intra_q,mbinfo[k].mquant);
else
for (j=0; j<block_count; j++)
iquant_non_intra(blocks[k*block_count+j],blocks[k*block_count+j],
inter_q,mbinfo[k].mquant);

itransform(predframe,newref,mbinfo,blocks);

sprintf(tempFileName,"newrefL%d%s\0",testLevel,name);
store_ppm_tga(tempFileName,newref,0,horizontal_size,vertical_size,0);

snrVals = calcSNR(neworg,newref);
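/* Added note (not in the original source): snrVals holds the distortion of the
locally reconstructed frame (newref) measured against the source frame
(neworg); its Y component is recorded per test level below and also tracked
in bestYSnr/bestYSnrLevel. */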

if (testLevel == 1)
{
level1YSnr = snrVals.Ymse;
}
else if ((testLevel == 2) || (testLevel == 3)) // then we have: small->encoded
{
if (testLevel == 2)
level2YSnr = snrVals.Ymse;
else if (testLevel == 3)
level3YSnr = snrVals.Ymse;

//need to store it to ppm
sprintf(smallEncStoredFName,"smEncSto%s\0",name);
sprintf(smallEncStoredFNameBase,"smEncSto%s\0",baseScaledFrameName);
store_ppm_tga(smallEncStoredFName,newref,0,horizontal_size,vertical_size,0);
//this is small->encoded->stored
}

sprintf(message,
"DDATA,Level,%d,display#,%d,frame,%d,Type,%c,Dim,%dx%d,Area,%d,BitRate,%lf,S,%d,TargetBits,%d,GOPOverflow,%d,Q,%.1f,YSnr,%3.3g,Level1Snr,%3.3g,Level1S,%d,Level1Q,%.1f,Scale,%lf\n",
testLevel, f, i, ipb[pict_type], horizontal_size, vertical_size,
horizontal_size*vertical_size, bit_rate, currLevelS, TargetBits,
gopoverflow, currLevelQ, snrVals.Ymse, level1YSnr, level1S, level1Q,
testScale);
outputMessage();


if (snrVals.Ymse > bestYSnr)
{
bestYSnr = snrVals.Ymse;
bestYSnrLevel = testLevel;
}

stats();

if (testLevel == MaxTestLevels + 1) // we have just run it for real
{
testLevel = 0; // so falsify loop condition
}
else // see if can increment to the next level
{
testLevel++;
refSnrPass = 0;
}
}// end of while (testLevel > 0)
// at this point, any imageScaleTesting should be over (i.e., the flag should
// not be turned on after this point)
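/* Loop control, as implemented above (added comment): testLevel steps through
the candidate image-scale levels (level 1 appears to be the unscaled
reference size, levels 2 and 3 the scaled candidates); level MaxTestLevels+1
is the final "real" encode, after which testLevel is reset to 0 to leave the
while (testLevel > 0) loop. */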

sprintf(name,tplref,f+frame0);
writeframe(name,newref);
}

putseqend();
}











putbits.c

/* putbits.c, bit-level output */

/* Copyright (C) 1996, MPEG Software Simulation Group. All Rights Reserved. */

/*
* Disclaimer of Warranty

* These software programs are available to the user without any license fee or
* royalty on an "as is" basis. The MPEG Software Simulation Group disclaims
* any and all warranties, whether express, implied, or statuary, including any
* implied warranties or merchantability or of fitness for a particular
* purpose. In no event shall the copyright-holder be liable for any
* incidental, punitive, or consequential damages of any kind whatsoever
* arising from the use of these programs.

* This disclaimer of warranty extends to the user of these programs and user's
* customers, employees, agents, transferees, successors, and assigns.

* The MPEG Software Simulation Group does not represent or warrant that the
* programs furnished hereunder are free of infringement of any third-party
* patents.

* Commercial implementations of MPEG-1 and MPEG-2 video, including shareware,
* are subject to royalty fees to patent holders. Many of these patents are
* general enough such that they are unavoidable regardless of implementation
* design.
*
*/


#include <stdio.h>
#include "config.h"
#include "global.h" // added for accessing imageScaleTesting

#define BUFLENGTH 2048
extern int sockfd,newSockfd;
extern FILE *outfile; /* the only global var we need here */

/* private data */
static unsigned char outbfr;
static int outcnt;
static int bytecnt;
static unsigned char buf[BUFLENGTH];
static int bufCnt = 0, bufcounter = 0;
FILE *fp;
FILE *tfp;


/* initialize buffer, call once before first putbits or alignbits */
void initbits()
{
outcnt = 8;
bytecnt = 0;
}


/* write rightmost n (0<=n<=32) bits of val to outfile */
void putbits(val,n)
int val;
int n;
{
int i;
unsigned int mask;
int index = 0, fill = 0;

imageScaleTestBufCnt += n;
mask = 1 << (n-1); /* selects first (leftmost) bit */


for (i=0; i<n; i++)
{










outbfr <<= 1;

if (val & mask)
outbfr |= 1;

mask >>= 1; /* select next bit */
outcnt--;

if (outcnt==0) /* 8 bit buffer full */
{
/* printf("writing to %s\n",outfile); */
putc(outbfr,outfile);
buf[bufCnt++] = outbfr;
if (bufCnt == BUFLENGTH)
{
if (imageScaleTesting != 1)
write(newSockfd,buf, bufCnt);
bufCnt = 0;
}
outcnt = 8;
bytecnt++;
}
}
if (val == 0x1B7L)
{
// fill = BUFLENGTH - bufCnt;
// for (index = 0; index < fill; index++)
// {
//   buf[bufCnt++] = 0;
// }
fill = write(newSockfd,buf, bufCnt);
bufCnt = 0;
imageScaleTestBufCnt = 0;
}
}



/* zero bit stuffing to next byte boundary (5.2.3, 6.2.1) */
void alignbits()
{
if (outcnt!=8)
putbits(0,outcnt);
}

/* return total number of generated bits */
int bitcount()
{
return 8*bytecnt + (8-outcnt);
}
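The modified putbits() above does two things with every completed byte: it still writes the byte to outfile, and it also accumulates it in buf[], flushing buf[] to the network socket either when BUFLENGTH bytes have accumulated or when the sequence end code (0x1B7) is written. The fragment below is only an illustrative, self-contained sketch of that buffer-and-flush pattern; demo_putbyte, demo_fd, and DEMO_BUFLEN are hypothetical names and are not part of the encoder source.

#include <unistd.h>

#define DEMO_BUFLEN 2048            /* mirrors BUFLENGTH above */

static unsigned char demo_buf[DEMO_BUFLEN];
static int demo_cnt = 0;
static int demo_fd = -1;            /* connected socket descriptor, set elsewhere */

/* Buffer one completed byte; flush the buffer to the socket when it is full
   or when the caller signals the end of the stream (sequence end code seen). */
static void demo_putbyte(unsigned char b, int end_of_stream)
{
    demo_buf[demo_cnt++] = b;

    if (demo_cnt == DEMO_BUFLEN || end_of_stream)
    {
        write(demo_fd, demo_buf, demo_cnt);  /* last flush may be a partial buffer */
        demo_cnt = 0;
    }
}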











Sample Encoder Parameter (PAR) File

MPEG-2 Test Sequence, 30 frames/sec
des%d /* name of source files */
reconDes%d /* name of reconstructed images ("-": don't store) */
/* name of intra quant matrix file ("-": default matrix) */
inter.mat /* name of non intra quant matrix file ("-": default matrix) */
statNetDyn700a55.out /* name of statistics file ("-": stdout ) */
2 /* input picture file format: 0=*.Y,*.U,*.V, 1=*.yuv, 2=*.ppm */
48 /* number of frames */
0 /* number of first frame */
00:00:00:00 /* timecode of first frame */
15 /* N (# of frames in GOP) */
3 /* M (I/P frame distance) */
0 /* ISO/IEC 11172-2 stream */
0 /* 0:frame pictures, 1:field pictures */
704 /* horizontal size -- see header file of ppm inputs */
240 /* vertical size -- see header file of ppm inputs */
2 /* aspect ratio information 1=square pel, 2=4:3, 3=16:9, 4=2.11:1 */
5 /* frame rate code 1=23.976, 2=24, 3=25, 4=29.97, 5=30 frames/sec. */
700000.0 /* bit rate (bits/s) total target bitrate budget for 30 frames */
0.5 /* bit rate/bw reduction rate */
0.5 /* scale up/down step factor */
0 /* override image scaling results */
0 /* static mode = 1, dynamic mode = 0 */
8000 /* Port Number */
112 /* vbv buffer size (in multiples of 16 kbit) */
0 /* low delay */
0 /* constrained_parameters_flag */
4 /* Profile ID: Simple = 5, Main = 4, SNR = 3, Spatial = 2, High = 1 */
6 /* Level ID: Low = 10, Main = 8, High 1440 = 6, High = 4 */
0 /* progressive_sequence */
1 /* chroma format: 1=4:2:0, 2=4:2:2, 3=4:4:4 */
2 /* video format: 0=comp., 1=PAL, 2=NTSC, 3=SECAM, 4=MAC, 5=unspec. */
5 /* color_primaries */
5 /* transfer_characteristics */
4 /* matrix_coefficients */
352 /* display_horizontal_size */
120 /* display_vertical_size */
0 /* intra_dc_precision (0: 8 bit, 1: 9 bit, 2: 10 bit, 3: 11 bit) */
1 /* top_field_first */
0 0 0 /* frame_pred_frame_dct (I P B) */
0 0 0 /* concealment_motion_vectors (I P B) */
1 1 1 /* q_scale_type (I P B) */
1 1 1 /* intra_vlc_format (I P B) */
0 0 0 /* alternate_scan (I P B) */
0 /* repeat_first_field */
0 /* progressive_frame */
0 /* P distance between complete intra slice refresh */
0 /* rate control: r (reaction parameter) */
0 /* rate control: avg_act (initial average activity) */
0 /* rate control: Xi (initial I frame global complexity measure) */
0 /* rate control: Xp (initial P frame global complexity measure) */
0 /* rate control: Xb (initial B frame global complexity measure) */
0 /* rate control: d0i (initial I frame virtual buffer fullness) */
0 /* rate control: d0p (initial P frame virtual buffer fullness) */
0 /* rate control: d0b (initial B frame virtual buffer fullness) */


forw_hor_f_code forw_vert_f_code search_width/height
forw_hor_f_code forw_vert_f_code search_width/height
back_hor_f_code back_vert_f_code search_width/height
forw_hor_f_code forw_vert_f_code search_width/height
back_hor_f_code back_vert_f_code search_width/height
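Beyond the stock MSSG fields, the PAR file above carries the entries added for this work (bandwidth reduction rate, scale up/down step factor, an override flag for the image-scaling decision, static/dynamic mode, and the server port). Purely as an illustration of how these extra values group together after parsing, a hypothetical struct is sketched below; the type and field names are invented here and do not correspond to identifiers in the modified encoder.

/* Hypothetical grouping of the added PAR entries (illustrative only). */
typedef struct {
    double bit_rate;           /* total target bit-rate budget (bits/s)   */
    double bw_reduction_rate;  /* bit rate / bandwidth reduction rate     */
    double scale_step;         /* scale up/down step factor               */
    int    override_scaling;   /* override image-scaling results (0 or 1) */
    int    static_mode;        /* 1 = static mode, 0 = dynamic mode       */
    int    port;               /* port number for the bitstream socket    */
} AdaptiveScalingParams;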





















APPENDIX B
PERTINENT MPEG-2 DECODER SOURCE CODE CHANGES


(Note: Modified code from MPEG Software Simulation Group [21] is in boldface.)


mpeg2dec.c

/* mpeg2dec.c, main(), initialization, option processing */

/* Copyright (C) 1996, MPEG Software Simulation Group. All Rights Reserved. */

/*
* Disclaimer of Warranty

* These software programs are available to the user without any license fee or
* royalty on an "as is" basis. The MPEG Software Simulation Group disclaims
* any and all warranties, whether express, implied, or statuary, including any
* implied warranties or merchantability or of fitness for a particular
* purpose. In no event shall the copyright-holder be liable for any
* incidental, punitive, or consequential damages of any kind whatsoever
* arising from the use of these programs.

* This disclaimer of warranty extends to the user of these programs and user's
* customers, employees, agents, transferees, successors, and assigns.

* The MPEG Software Simulation Group does not represent or warrant that the
* programs furnished hereunder are free of infringement of any third-party
* patents.

* Commercial implementations of MPEG-1 and MPEG-2 video, including shareware,
* are subject to royalty fees to patent holders. Many of these patents are
* general enough such that they are unavoidable regardless of implementation
* design.
*
*/


#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <arpa/inet.h>
#include <errno.h>

#define GLOBAL
#include "config.h"
#include "global.h"
#define MAXCLIENTDATASIZE 80

/* private prototypes */
static int video_sequence _ANSI_ARGS_((int *framenum));
static int Decode_Bitstream _ANSI_ARGS_((void));
static int Headers _ANSI_ARGS_((void));
static void Initialize_Sequence _ANSI_ARGS_((void));
static void Initialize_Decoder _ANSI_ARGS_((void));
static void Deinitialize_Sequence _ANSI_ARGS_((void));
static void Process_Options _ANSI_ARGS_((int argc, char *argv[]));

#if OLD
static int Get_Val _ANSI_ARGS_((char *argv[]));
#endif

/* #define DEBUG */

static void Clear_Options();
#ifdef DEBUG
static void Print_Options();
#endif

int main(argc,argv)
int argc;
char *argv[];

{
int ret, code;

Clear_Options();

/* decode command line arguments */
Process_Options(argc,argv);
InitializeMy_Buffer();
#ifdef DEBUG
Print_Options();
#endif

ld = &base; /* select base layer context */

/* open MPEG base layer bitstream file(s) */
/* NOTE: this is either a base layer stream or a spatial enhancement stream */
/* if ((base.Infile=open(Main_Bitstream_Filename,O_RDONLY|O_BINARY))<0)
   {
     fprintf(stderr,"Base layer input file %s not found\n", Main_Bitstream_Filename);
     exit(1);
   } */
if(sockfd != 0)
{
printf("Buffer Initialized.\n");
Initialize_Buffer();
}

if(Show_Bits(8)==0x47)
{
sprintf(Error_Text,"Decoder currently does not parse transport streams\n");
Error(Error_Text);
}

next_start_code();
code = Show_Bits(32);

printf("code = %d\n",code);
switch(code)
{
case SEQUENCE_HEADER_CODE:
break;
case PACK_START_CODE:
System_Stream_Flag = 1;
case VIDEO_ELEMENTARY_STREAM:
System_Stream_Flag = 1;
break;
default:
sprintf(Error_Text,"Unable to recognize stream type\n");
Error(Error_Text);
break;
}


/* lseek(base.Infile, 0l, 0); */











myLseek();
Initialize_Buffer();

/* if(base.Infile!=0) */
/* { */
/*   lseek(base.Infile, 0l, 0); */
/* } */
myLseek();

Initialize_Buffer();

if(Two_Streams)
{
ld = &enhan; /* select enhancement layer context */

/* if ((enhan.Infile =
     open(Enhancement_Layer_Bitstream_Filename,O_RDONLY|O_BINARY))<0)
   {
     sprintf(Error_Text,"enhancment layer bitstream file %s not found\n",
       Enhancement_Layer_Bitstream_Filename);
     Error(Error_Text);
   } */

Initialize_Buffer();
ld = &base;
}

Initialize_Decoder();

ret = Decode_Bitstream();

close(sockfd);

/* if (Two_Streams)
     close(enhan.Infile); */

return 0;
}

/* IMPLEMENTAION specific rouintes */
static void Initialize_Decoder()
{
int i;

/* Clip table */
if (!(Clip=(unsigned char *)malloc(1024)))
Error("Clip[] malloc failed\n");

Clip += 384;

for (i=-384; i<640; i++)
Clip[i] = (i<0) ? 0 : ((i>255) ? 255 : i);

/* IDCT */
if (Reference_IDCT_Flag)
Initialize_Reference_IDCT();
else
Initialize_Fast_IDCT();
}


/* mostly IMPLEMENTAION specific rouintes */
static void Initialize_Sequence()
{
int cc, size;
static int Table_6_20[3] = {6,8,12};

/* check scalability mode of enhancement layer */
if (Two_Streams && (enhan.scalable_mode!=SC_SNR) && (base.scalable_mode!=SC_DP))
Error("unsupported scalability mode\n");

/* force MPEG-1 parameters for proper decoder behavior */
/* see ISO/IEC 13818-2 section D.9.14 */
if (!base.MPEG2_Flag)
{
progressive_sequence = 1;
progressive_frame = 1;
picture_structure = FRAME_PICTURE;
frame_pred_frame_dct = 1;
chroma_format = CHROMA420;
matrix_coefficients = 5;
}
/* round to nearest multiple of coded macroblocks */
/* ISO/IEC 13818-2 section 6.3.3 sequence_header() */
mb_width = (horizontal_size+15)/16;
mb_height = (base.MPEG2_Flag && !progressive_sequence) ? 2*((vertical_size+31)/32)
: (vertical_size+15)/16;

Coded_Picture_Width = 16*mb_width;
Coded_Picture_Height = 16*mb_height;

/* ISO/IEC 13818-2 sections 6.1.1.8, 6.1.1.9, and 6.1.1.10 */
Chroma_Width = (chroma_format==CHROMA444) ? Coded_Picture_Width
: Coded_Picture_Width>>1;
Chroma_Height = (chroma_format!=CHROMA420) ? Coded_Picture_Height
: Coded_Picture_Height>>1;

/* derived based on Table 6-20 in ISO/IEC 13818-2 section 6.3.17 */
block_count = Table_6_20[chroma_format-1];

for (cc=0; cc<3; cc++)
{
if (cc==0)
size = Coded_Picture_Width*Coded_Picture_Height;
else
size = Chroma_Width*Chroma_Height;

if (!(backward_reference_frame[cc] = (unsigned char *)malloc(size)))
Error("backward_reference_frame[] malloc failed\n");

if (!(forward_reference_frame[cc] = (unsigned char *)malloc(size)))
Error("forward_reference_frame[] malloc failed\n");

if (!(auxframe[cc] = (unsigned char *)malloc(size)))
Error("auxframe[] malloc failed\n");

if(Ersatz_Flag)
if (!(substitute_frame[cc] = (unsigned char *)malloc(size)))
Error("substitute_frame[] malloc failed\n");


if (base.scalable_mode==SC_SPAT)
{
/* this assumes lower layer is 4:2:0 */
if (!(llframe0[cc] = (unsigned char
*)malloc((lower_layer_prediction_horizontal_size*lower_layer_prediction_vertical_size)/(cc?4:1))))
Error("llframe0 malloc failed\n");
if (!(llframe1[cc] = (unsigned char
*)malloc((lower_layer_prediction_horizontal_size*lower_layer_prediction_vertical_size)/(cc?4:1))))
Error("llframe1 malloc failed\n");
}
}

/* SCALABILITY: Spatial */
if (base.scalable_mode==SC_SPAT)
{
if (!(lltmp = (short
*)malloc(lower_layer_prediction_horizontal_size*((lower_layer_prediction_vertical_size*vertical_subsampling_factor_n)/vertical_subsampling_factor_m)*sizeof(short))))
Error("lltmp malloc failed\n");
}

#ifdef DISPLAY
if (Output_Type==T_X11)
{
Initialize_Display_Process("");
Initialize_Dither_Matrix();
}
#endif /* DISPLAY */
}

void Error(text)
char *text;
{
fprintf(stderr,text);
exit(1);
}

/* Trace_Flag output */
void Print_Bits(code,bits,len)
int code,bits,len;
{
int i;

for (i=0; i<len; i++)
printf("%d",(code>>(bits-1-i))&1);
}



/* option processing */
static void Process_Options(argc,argv)
int argc; /* argument count */
char *argv[]; /* argument vector */
{
int i, LastArg, NextArg;
struct sockaddr_in serverAddr;
struct hostent *hp;
int nbytes, endoflist, counter;
char *asciiServerAddr;
char clientBuffer[MAXCLIENTDATASIZE];
char *charptr, userChoice;

/* at least one argument should be present */
if (argc != 3)
{
printf("\n%s, %s\n",Version,Author);
printf("Usage: mpeg2decode \n\n");
printf(" or: mpeg2decode standalone out.m2v\n\n");
exit(0);
}

sprintf(sentFile,"sentFile.m2v");
sentFilePtr = fopen(sentFile, "w");
Output_Type = 4;
Output_Picture_Filename = "";


if (strcmp(argv[1],"standalone") == 0)
{
printf ("Got switch \n");
InputSrc = -1;
// open m2v file for reading
if ((sockfd=open(argv[2],O_RDONLY|O_BINARY))<0)
{
printf("ERROR: unable to open reference filename (%s)\n",argv[2]);
exit(1);











}
printf ("Opened %s \n",argv[2]);
}
else
{

InputSrc = 0;
if ((hp=gethostbyname(argv[1])) == NULL)
{
perror("Error getting host IP.");
exit(1);
}

if ((sockfd = socket(AF_INET, SOCK_STREAM, 0)) == -1) {
perror("socket");
exit(1);
}

serverAddr.sin_family = AF_INET; // host byte order

printf("Attempting to connect to server....\n");
memcpy((char *) &serverAddr.sin_addr, (char *) hp->h_addr, hp->h_length);
asciiServerAddr = (char *) inet_ntoa(serverAddr.sin_addr);
printf("server address: %s\n",asciiServerAddr);
serverAddr.sin_port = htons(atoi(argv[2])); // short, network byte order
if (connect(sockfd, (struct sockaddr *)&serverAddr, sizeof(serverAddr)) == -1)
{
perror("connect");
exit(1);
}
}
printf("Connected to server....\n");


printf ("Enter to request bitstream from server.\n");
getchar();

//1. make request for list of titles
if ((nbytes = write (sockfd, "SEND LIST OF TITLES", 19)) < 0)
{
perror("write");
}
if (nbytes != 19)
{
printf("Error -- could not send request for titles, btyes written is %d\n",
nbytes);
}
printf("Send request for bitstream, btyes written is %d\n", nbytes);

//printf ("Enter to begin reading bitstream from server.\n");
// getchar();
}// else InputSrc = socket;
/* force display process to show frame pictures */
if((Output_Type==4 || Output_Type==5) && Frame_Store_Flag)
Display_Progressive_Flag = 1;
else
Display_Progressive_Flag = 0;

#ifdef VERIFY
/* parse the bitstream, do not actually decode it completely */


#if 0
if(Output_Type==-1)
{
Decode_Layer = Verify_Flag;
printf("FYI: Decoding bitstream elements up to: %s\n",
Layer_Table[Decode_Layer]);
}
else
#endif
Decode_Layer = ALL_LAYERS;













#endif /* VERIFY */

/* no output type specified */
if(Output_Type==-1)
{
Output_Type = 9;
Output_Picture_Filename = "";
}


#ifdef DISPLAY
if (Output_Type==T_X11)
{
if(Frame_Store_Flag)
Display_Progressive_Flag = 1;
else
Display_Progressive_Flag = 0;

Frame_Store_Flag = 1; /* to avoid calling dither() twice */
}
#endif
}





#ifdef OLD
/*
this is an old routine used to convert command line arguments
into integers
*/
static int Get_Val(argv)
char *argv[];
{
int val;

if (sscanf(argv[1]+2,"%d",&val) != 1)
return 0;

while (isdigit(argv[1][2]))
argv[1]++;

return val;
}
#endif



static int Headers()
{
int ret;

ld = &base;


/* return when end of sequence (0) or picture
header has been parsed (1) */

ret = Get_Hdr();

if (Two_Streams)
{
ld = &enhan;
if (Get_Hdr()!=ret && !Quiet_Flag)
fprintf(stderr,"streams out of sync\n");
ld = &base;
}

return ret;
}

















static int Decode_Bitstream()
{
int ret;
int Bitstream_Framenum;

Bitstream_Framenum = 0;

for(;;)
{
#ifdef VERIFY
Clear_Verify_Headers();
#endif /* VERIFY */

ret = Headers();

if(ret==1)
{
ret = video_sequence(&Bitstream_Framenum);
}
else
return(ret);
}
}

static void Deinitialize_Sequence()
{
int i;

/* clear flags */
base.MPEG2_Flag=0;

for(i=0;i<3;i++)
{
free(backward_reference_frame[i]);
free(forward_reference_frame[i]);
free(auxframe[i]);

if (base.scalable_mode==SC_SPAT)
{
free(llframe0[i]);
free(llframe1[i]);
}
}

if (base.scalable_mode==SC_SPAT)
free(lltmp);

#ifdef DISPLAY
if (Output_Type==T_X11)
Terminate_Display_Process();
#endif
}


static int video_sequence(Bitstream_Framenumber)
int *Bitstream_Framenumber;
{
int Bitstream_Framenum;
int Sequence_Framenum;
int Return_Value;

Bitstream_Framenum = *Bitstream_Framenumber;
Sequence_Framenum=0;








Initialize_Sequence();

/* decode picture whose header has already been parsed in
Decode_Bitstream() */

Decode_Picture(Bitstream_Framenum, Sequence_Framenum);

/* update picture numbers */
if (!Second_Field)
{
Bitstream_Framenum++;
Sequence_Framenum++;
}

/* loop through the rest of the pictures in the sequence */
while ((Return_Value=Headers()))
{
Decode_Picture(Bitstream_Framenum, Sequence_Framenum);

if (!Second_Field)
{
Bitstream_Framenum++;
Sequence_Framenum++;
}
}

/* put last frame */
if (Sequence_Framenum!=0)
{
Output_Last_Frame_of_Sequence(Bitstream_Framenum);
}

Deinitialize_Sequence();

#ifdef VERIFY
Clear_Verify_Headers();
#endif /* VERIFY */

*Bitstream_Framenumber = Bitstream_Framenum;
return(Return_Value);
}



static void Clear_Options()
{
Verbose_Flag = 0;
Output_Type = 0;
Output_Picture_Filename = " ";
hiQdither = 0;
Output_Type = 0;
Frame_Store_Flag = 0;
Spatial_Flag = 0;
Lower_Layer_Picture_Filename = " ";
Reference_IDCT_Flag = 0;
Trace_Flag = 0;
Quiet_Flag = 0;
Ersatz_Flag = 0;
Substitute_Picture_Filename = " ";
Two_Streams = 0;
Enhancement_Layer_Bitstream_Filename = " ";
Big_Picture_Flag = 0;
Main_Bitstream_Flag = 0;
Main_Bitstream_Filename = " ";
Verify_Flag = 0;
Stats_Flag = 0;
User_Data_Flag = 0;
}












#ifdef DEBUG
static void Print_Options()
{
printf("Verbose_Flag = %d\n", Verbose_Flag);
printf("Output_Type = %d\n", Output_Type);
printf("Output_Picture_Filename = %s\n", Output_Picture_Filename);
printf("hiQdither = %d\n", hiQdither);
printf("Output_Type = %d\n", Output_Type);
printf("Frame_Store_Flag = %d\n", Frame_Store_Flag);
printf("Spatial_Flag = %d\n", Spatial_Flag);
printf("Lower_Layer_Picture_Filename = %s\n", Lower_Layer_Picture_Filename);
printf("Reference_IDCT_Flag = %d\n", Reference_IDCT_Flag);
printf("Trace_Flag = %d\n", Trace_Flag);
printf("Quiet_Flag = %d\n", Quiet_Flag);
printf("Ersatz_Flag = %d\n", Ersatz_Flag);
printf("Substitute_Picture_Filename = %s\n", Substitute_Picture_Filename);
printf("Two_Streams = %d\n", Two_Streams);
printf("Enhancement_Layer_Bitstream_Filename = %s\n",
Enhancement_Layer_Bitstream_Filename);
printf("Big_Picture_Flag = %d\n", Big_Picture_Flag);
printf("Main_Bitstream_Flag = %d\n", Main_Bitstream_Flag);
printf("Main_Bitstream_Filename = %s\n", Main_Bitstream_Filename);
printf("Verify_Flag = %d\n", Verify_Flag);
printf("Stats_Flag = %d\n", Stats_Flag);
printf("User_Data_Flag = %d\n", User_Data_Flag);
}
#endif











getbits.c

/* getbits.c, bit level routines */

/*
* All modifications (mpeg2decode -> mpeg2play) are
* Copyright (C) 1996, Stefan Eckart. All Rights Reserved.
*/

/* Copyright (C) 1996, MPEG Software Simulation Group. All Rights Reserved. */

/*
* Disclaimer of Warranty

* These software programs are available to the user without any license fee or
* royalty on an "as is" basis. The MPEG Software Simulation Group disclaims
* any and all warranties, whether express, implied, or statuary, including any
* implied warranties or merchantability or of fitness for a particular
* purpose. In no event shall the copyright-holder be liable for any
* incidental, punitive, or consequential damages of any kind whatsoever
* arising from the use of these programs.

* This disclaimer of warranty extends to the user of these programs and user's
* customers, employees, agents, transferees, successors, and assigns.

* The MPEG Software Simulation Group does not represent or warrant that the
* programs furnished hereunder are free of infringement of any third-party
* patents.

* Commercial implementations of MPEG-1 and MPEG-2 video, including shareware,
* are subject to royalty fees to patent holders. Many of these patents are
* general enough such that they are unavoidable regardless of implementation
* design.
*
*/


#include <stdio.h>
#include <stdlib.h>

#include "config.h"
#include "global.h"

/* initialize buffer, call once before first getbits or showbits */
char myBuf[1][DECODE_WINDOW_SIZE];
int bufc;
static int bufcounter = 0;
int blocking_readSocket(int s, char *bptr, int buflen)
{
int n = 0, actualRead = 0;
char *myptr = bptr;
printf("actualRead = %d\n", actualRead);
while (actualRead != buflen)
{
n = read(s, myptr, buflen - actualRead);
printf("read %d\n", n);
if (n <= 0)
{
fclose(sentFilePtr);
break;
}
fwrite (myptr, 1, n, sentFilePtr);
myptr += n;
actualRead += n;
}
printf("actualRead = %d\n", actualRead);

return actualRead;
}
void Initialize_Buffer()
{
ld->Incnt = 0;
// ld->Rdptr = ld->Rdbfr + 2048;
ld->Rdptr = ld->Rdbfr + DECODE_WINDOW_SIZE;
ld->Rdmax = ld->Rdptr;

#ifdef VERIFY
/* only the verifier uses this particular bit counter
Bitcnt keeps track of the current parser position with respect
to the video elementary stream being decoded, regardless
of whether or not it is wrapped within a systems layer stream
*/
ld->Bitcnt = 0;
#endif

ld->Bfr = 0;
Flush_Buffer(0); /* fills valid data into bfr */
}


void InitializeMy_Buffer()
{
int i;
bufc = 0;
for(i=0;i<1;i++)
{
// read(sockfd,myBuf[i],2048);
blocking_readSocket(sockfd,myBuf[i],DECODE_WINDOW_SIZE);
printf("%d\n",bufcounter++);
}
}

void myLseek()
{
bufc = 0;
}

void Fill_Buffer()
{
int Buffer_Level;

if (bufc < 1)
{
//memcpy(ld->Rdbfr,myBuf[bufc],2048);
memcpy(ld->Rdbfr,myBuf[bufc],DECODE_WINDOW_SIZE);
bufc++;
//Buffer_Level = 2048;
Buffer_Level = DECODE_WINDOW_SIZE;
}

else
{

// Buffer_Level = read(sockfd,ld->Rdbfr,2048);
Buffer_Level = blocking_readSocket(sockfd,ld->Rdbfr,DECODE_WINDOW_SIZE);
printf("%d\n",bufcounter++);
}

// Buffer_Level = read(ld->Infile,ld->Rdbfr,2048);
ld->Rdptr = ld->Rdbfr;

if (System_Stream_Flag)
// ld->Rdmax -= 2048;
ld->Rdmax -= DECODE_WINDOW_SIZE;


/* end of the bitstream file */
// if (Buffer_Level < 2048)
if (Buffer_Level < DECODE_WINDOW_SIZE)
{
/* just to be safe */
if (Buffer_Level < 0)
Buffer_Level = 0;











/* pad until the next 32-bit word boundary */
while (Buffer_Level & 3)
ld->Rdbfr[Buffer_Level++] = 0;

/* pad the buffer with sequence end codes */
// while (Buffer_Level < 2048)
while (Buffer_Level < DECODE_WINDOW_SIZE)
{
ld->Rdbfr[Buffer_Level++] = SEQUENCE_END_CODE>>24;
ld->Rdbfr[Buffer_Level++] = SEQUENCE_END_CODE>>16;
ld->Rdbfr[Buffer_Level++] = SEQUENCE_END_CODE>>8;
ld->Rdbfr[Buffer_Level++] = SEQUENCE_END_CODE&0xff;
}
close(sockfd); // network (why close fd?)
}
}


/* MPEG-1 system layer demultiplexer */

int Get_Byte()
{
// while(ld->Rdptr >= ld->Rdbfr+2048)
while(ld->Rdptr >= ld->Rdbfr+DECODE_WINDOW_SIZE)
{
//read(ld->Infile,ld->Rdbfr,2048);
// read(sockfd,ld->Rdbfr,2048);
blocking_readSocket(sockfd,ld->Rdbfr,DECODE_WINDOW_SIZE);
printf("%d\n",bufcounter++);
// putchar('.');

// ld->Rdptr -= 2048;
ld->Rdptr -= DECODE_WINDOW_SIZE;
// ld->Rdmax -= 2048;
ld->Rdmax -= DECODE_WINDOW_SIZE;
}
return *ld->Rdptr++;
}

/* extract a 16-bit word from the bitstream buffer */
int Get_Word()
{
int Val;

Val = Get_Byte();
return (Val<<8) | Get_Byte();
}



/* return next n bits (right adjusted) without advancing */

unsigned int Show_Bits(N)
int N;
{
return ld->Bfr >> (32-N);
}



/* return next bit (could be made faster than Get_Bits(1)) */

unsigned int Get_Bits1()
{
return Get_Bits(1);
}


/* advance by n bits */

void Flush_Buffer(N)
int N;
{




int Incnt;

ld->Bfr <<= N;

Incnt = ld->Incnt -= N;

if (Incnt <= 24)
{
if (System_Stream_Flag && (ld->Rdptr >= ld->Rdmax-4))
{
do
{
if (ld->Rdptr >= ld->Rdmax)
Next_Packet();
ld->Bfr |= Get_Byte() << (24 - Incnt);
Incnt += 8;
}
while (Incnt <= 24);
}
else if (ld->Rdptr < ld->Rdbfr+2044)
{
do
{
ld->Bfr |= *ld->Rdptr++ << (24 - Incnt);
Incnt += 8;
}
while (Incnt <= 24);
}
else
{
do
{
// if (ld->Rdptr >= ld->Rdbfr+2048)
if (ld->Rdptr >= ld->Rdbfr+DECODE_WINDOW_SIZE)
Fill_Buffer();
ld->Bfr |= *ld->Rdptr++ << (24 - Incnt);
Incnt += 8;
}
while (Incnt <= 24);
}
ld->Incnt = Incnt;
}

#ifdef VERIFY
ld->Bitcnt += N;
#endif /* VERIFY */
}




/* return next n bits (right adjusted) */

unsigned int Get_Bits(N)
int N;
{
unsigned int Val;

Val = Show_Bits(N);
Flush_Buffer(N);

return Val;
}





















APPENDIX C
MATLAB CODE FOR OPTIMAL RATE-RESIZING FACTOR APPROXIMATION

%Author: Ju Wang, June 2003
function d3=hangd(target_bitrate,W,H,sigma_sqr);
%hangd(700000,704,480,10)

f=(0.001:0.001:1);
epsilong_sqr=1.2 %dependency on X, 1.2 for Laplacian
alpha=1.36 %log 2^e
b=target_bitrate./(4*f*W*H)
d_f=epsilong_sqr^2*sigma_sqr^2*exp(-alpha.*b)
%figure
%hold
%plot(f,d_f)
d2=(1-f)*sigma_sqr;
d3=d_f+d2;
plot(f,d3)
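Read literally, the script above evaluates the total distortion as a function of the resizing factor f: a coding-distortion term that decays with the per-pixel bit budget of the downscaled frame, plus a term for the information discarded by resizing. In the script's own variable names (with the hard-coded constants epsilong_sqr = 1.2 and alpha = 1.36), the plotted quantity is

\[
d_3(f) \;=\; \mathrm{epsilong\_sqr}^{2}\,\mathrm{sigma\_sqr}^{2}\,e^{-\alpha\,b(f)} \;+\; (1-f)\,\mathrm{sigma\_sqr},
\qquad
b(f) \;=\; \frac{\mathrm{target\_bitrate}}{4\,f\,W\,H},
\]

and the approximately optimal resizing factor can then be read from the minimum of the plotted curve. (This display only restates the MATLAB code above; it is not an independent derivation.)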















APPENDIX D
CASE 2 TEST PICTURES


Figure 22. Reference Picture for 3rd I-Frame (PSNR = 49.0, S=1,425,311)











Figure 23. 3rd I-Frame Using Original Encoded Picture (PSNR = 20.3, S=91,627)


Figure 24. 3rd I-Frame Using Adaptive Image Scaled Picture (PSNR=19.9, S=36,081)















LIST OF REFERENCES


[1] Mark Claypool and Jonathan Tanner, "The Effects of Jitter on the Perceptual
Quality of Video," Proceedings of the 7th ACM International Conference on
Multimedia '99, vol. 2, Oct. 30 -Nov. 5, 1999, pp.115-118.

[2] W. Ding and B. Liu, "Rate Control of MPEG Video Coding and Recording by
Rate-Quantization Modeling," IEEE Trans. on Circuits and Systems for Video
Technology, vol.6, no.1, Feb. 1996, pp.12-20.

[3] A. Durand, "Deploying IPv6," IEEE Internet Computing, vol.5, no.1, Feb. 2001,
pp.79-81.

[4] Armando Fox, Steven D. Gribble, Eric A. Brewer, Elan Amir, "Adapting to
Network and Client Variability via On-Demand Dynamic Distillation," In Seventh
International Conference on Architectural Support for Programming Languages
and Operating Systems (ASPLOS VII), Cambridge, MA, Oct.1996, pp. 160-170.

[5] C.A. Gonzales and E. Viscito, "Motion Video Adaptive Quantization in the
Transform Domain," IEEE Transactions on Circuits and Systems for Video
Technology, vol.1, no.4, Dec. 1991, pp.351-361.

[6] Hsueh-Ming Hang and Jiann-Jone Chen, "Source Model for Transform Video
Coder and Its Application-Part I: Fundamental Theory," IEEE Transactions on
Circuits and Systems for Video Technology, vol.7, no.2, April 1997, pp.287-298.

[7] Hsueh-Ming Hang and Jiann-Jone Chen, "Source Model for Transform Video
Coder and Its Application-Part II: Variable Frame Rate Coding," IEEE
Transactions on Circuits and Systems for Video Technology, vol.7, no.2, April
1997, pp.299-311.

[8] Nashnashi Kamat, J. Wang and J.C. Liu, "An Efficient Re-routing Scheme for
Voice over IP," to appear at ICME2003, Baltimore, 2003.

[9] Javed I. Khan, Qiong Gu and Raid Zaghal, "Symbiotic Video Streaming by
Transport Feedback Based Quality-Rate Selection," Proceedings of the 12th IEEE
International Packet Video Workshop 2002, April 2002.

[10] Jonathan C.L. Liu, Jenwei Hsieh, David H.C.Du and Meng-jou Lin, "Performance
of a Storage System for Supporting Different Video Types and Qualities," IEEE
Journal on Selected Areas in Communications, vol.14, no.9, Aug. 1996, pp. 1087-
1097.









[11] Victor Lo, "A Beginners Guide for MPEG-2 Standard," http://www.fh-
friedberg.de/fachbereiche/e2/telekom-labor/zinke/mk/mpeg2beg/beginnzi.htm,
accessed June 2003.

[12] J.M. McManus and K.W. Ross, "Video-on-Demand Over ATM: Constant-Rate
Transmission and Transport," IEEE Journal on Selected Areas in Communications,
vol.14, no.9, Aug. 1996, pp. 1087-1097.

[13] B.D. Noble and M. Satyanarayanan, "Experience with Adaptive Mobile
Applications in Odyssey," Mobile Networks and Applications, vol. 4, 1999, pp.
245-254.

[14] Antonio Ortega, "Variable Bit Rate Video Coding," Compressed Video over
Networks, 2001, pp. 343-382.

[15] A. Puri and R. Aravind, "Motion-Compensated Video Coding with Adaptive
Perceptual Quantization," IEEE Transactions on Circuits and Systems for Video
Technology, vol.1, Dec. 1991, pp. 351-361.

[16] Iain E. G. Richardson, Video Codec Design, John Wiley & Sons Ltd, London,
2002.

[17] J. Wang and J. Liu, "Handoff Algorithms in Dynamic Spreading WCDMA System
Supporting Multimedia Traffic," IEEE Journal of Selected Areas on
Communication, to appear in 2003.

[18] Magda El Zarki, "Video Coding and Quality Issues," CENIC QoS Workshop, Jan.
24, 2002.

[19] ISO/IEC 13818-2, Information Technology - Generic Coding of Moving Pictures
and Audio Information: Video, 1998.

[20] MPEG Elementary Streams, http://www.mpeg.org/MPEG/video.html#video-test-
bitstreams, accessed June 2003.

[21] MPEG Software Simulation Group, MPEG-2 video codec version 1.2,
http://www.mpeg.org/~tristan/MPEG/MSSG/, accessed October 2002.















BIOGRAPHICAL SKETCH

Arun S. Abraham was born in India. He joined the University of Florida in 1990

and received his bachelor's degree in computer and information science and engineering

in 1994. Since then he has worked in the software engineering industry. He came back to

the University of Florida in 2001 to pursue the master's degree.

His research interests include object oriented methodologies, design patterns, and

multimedia.