Citation
Surgical Training with an Augmented Digital Environment (SurgADE): An Adaptable Approach for Teaching Minimally Invasive Surgery Techniques

Material Information

Title:
Surgical Training with an Augmented Digital Environment (SurgADE): An Adaptable Approach for Teaching Minimally Invasive Surgery Techniques
Copyright Date:
2008

Subjects

Subjects / Keywords:
Buffer storage
Digital cameras
Image files
Images
Simulation training
Simulations
Software
Surgeons
Surgical procedures
Surgical specialties

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
Copyright the author. Permission granted to the University of Florida to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Embargo Date:
2/28/2005



Full Text

PAGE 1

SURGICAL TRAINING WITH AN AUGMENTED DIGITAL ENVIRONMENT (SurgADE): AN ADAPTABLE APPROACH FOR TEACHING MINIMALLY INVASIVE SURGERY TECHNIQUES By WILLIAM WILSON A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE UNIVERSITY OF FLORIDA 2004

PAGE 2

Copyright 2004 by William Wilson

PAGE 3

This thesis is dedicated to my guardian angels, William and Edward Gabriel, and to the resolute women who continue to support me throughout every life experience—Deb, Emily, Ellen, and Kristen.

PAGE 4

ACKNOWLEDGMENTS I would like to thank my supervisory committee chair, Dr. Jorg Peters, and committee members, Dr. Benjamin Lok and Dr. Paul Fishwick. They inspired the concept and directed the development process. I sincerely appreciate everyone who contributed to the completion of this thesis: Debra Wilson and Kristen Raulerson, for editing the document; Keith Rahn and Adam Sides, for helping to create the physical filming apparatus; and Minho Kim and David McDonald, for providing programming assistance. During the past two years, I have received vast support from my colleagues and friends in the Digital Arts and Sciences Program, particularly Chirag Kadiwar. With the help of faculty, family, and friends, I have completed a pivotal project. I greatly appreciate God for guiding me throughout the process, and for placing these talented people in my path. iv

PAGE 5

TABLE OF CONTENTS Page ACKNOWLEDGMENTS .................................................................................................iv LIST OF TABLES ...........................................................................................................viii LIST OF FIGURES ...........................................................................................................ix ABSTRACT .........................................................................................................................x CHAPTER 1 INTRODUCTION........................................................................................................1 Simulated Surgery Uses................................................................................................1 Laparoscopic Training..................................................................................................1 Innovative Training Techniques...................................................................................2 The Human Interface Technology Laboratory......................................................2 Medicine Meets Virtual Reality............................................................................2 Virtual Patients and Simulators.............................................................................2 Commercial Training Products.....................................................................................3 Virtual Reality.......................................................................................................4 Training Models....................................................................................................5 2 SurgADE BENEFITS...................................................................................................7 Cost...............................................................................................................................8 
Reproducibility.............................................................................................................8 Training.........................................................................................................................9 3 SurgADE PHYSICAL INTERFACE.........................................................................10 Haptic Feedback.........................................................................................................10 Visual Feedback..........................................................................................................11 Live Video Stream...............................................................................................11 Surgery Video......................................................................................................11 The Physical Apparatus..............................................................................................12 4 SurgADE NECESSARY COMPONENTS................................................................13 v

PAGE 6

5 SurgADE BACKGROUND.......................................................................................14 Loading Images as Textures.......................................................................................14 Loading Video as Textures.........................................................................................16 Windows Media Video........................................................................................16 Audio Video Interleave.......................................................................................16 6 INCORPORATING A LIVE VIDEO STREAM.......................................................18 ImageControl Software...............................................................................................18 PPM Use..............................................................................................................18 Non-PPM Use......................................................................................................18 OpenGL Software.......................................................................................................19 TextureSubImage................................................................................................19 Chroma key technique..................................................................................20 Repositioning the video stream....................................................................20 Buffer Replacement.............................................................................................20 Chroma key technique..................................................................................22 Repositioning the video stream....................................................................23 7 CONCLUSION...........................................................................................................25 
Implications................................................................................................................25 Future Developments..................................................................................................25 Digital Environment Feedback............................................................................26 Controller.....................................................................................................26 Additional digital camera.............................................................................26 Enhanced User-Friendly Interface.......................................................................26 Video Manipulation.............................................................................................27 Evaluation Component........................................................................................27 APPENDIX A SurgADE DEVELOPMENT COMPONENTS..........................................................28 B SurgADE EXECUTABLE COMPONENTS.............................................................29 C SurgADE ADJUSTED CODE SOURCES................................................................30 D SurgADE CODE.........................................................................................................31 Main.cpp.....................................................................................................................31 SetupDevice.cpp.........................................................................................................41 SetupDevice.h.............................................................................................................43 LoadVideo.cpp............................................................................................................44 LoadVideo.h...............................................................................................................46 
LoadTexture.cpp.........................................................................................................46 vi

PAGE 7

LoadTexture.h.............................................................................................................51 LIST OF REFERENCES...................................................................................................52 BIOGRAPHICAL SKETCH.............................................................................................54 vii

PAGE 8

LIST OF TABLES Table page A-1 Programs used to develop SurgADE........................................................................28 A-2 Libraries linked in Visual Studio 6.0.......................................................................28 B-1 Libraries needed to run SurgADE*..........................................................................29 C-1 External code used to develop SurgADE.................................................................30 viii

PAGE 9

LIST OF FIGURES Figure page 1-1 SimSurgery virtual reality surgery products..............................................................5 1-2 Simulab training models.............................................................................................6 3-1 The SurgADE frame.................................................................................................11 3-2 Digital camera and tablet PC....................................................................................11 3-3 The full SurgADE setup...........................................................................................12 5-1 The Initialize function..............................................................................................15 5-2 The Create Texture method......................................................................................17 6-1 OpenGL functions....................................................................................................19 6-2 The GrabImage function..........................................................................................22 6-3 The SurgADE digital interface.................................................................................24 ix

PAGE 10

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

SURGICAL TRAINING WITH AN AUGMENTED DIGITAL ENVIRONMENT (SurgADE): AN ADAPTABLE APPROACH FOR TEACHING MINIMALLY INVASIVE SURGERY TECHNIQUES

By

William Wilson

August 2004

Chair: Jorg Peters
Major Department: Computer and Information Science and Engineering

With minimally invasive surgery (MIS), medical tools are inserted into small incisions in the body. An internal camera then provides an image of the procedure. These intricate procedures require intensive training. Unfortunately, current training methods are generally expensive or ineffective. They emulate the “feel” of surgery, but do not adequately replicate an actual MIS environment. I have created a surgical simulator, SurgADE, that trains the user in a realistic environment. SurgADE displays a still image of a surgical environment or a video of an MIS procedure performed by a skilled surgeon. A trainee replicates the procedure with an apparatus containing surgical forceps. Then, a digital camera films the ends of the forceps and their interactions with synthetic flesh. A modified video stream concurrently shows the surgical tools at an adjustable location in the surgical environment. x

PAGE 11

SurgADE consists of the following components:

- Information capture. A skilled surgeon executes an MIS procedure on video.
- Information review. After viewing the video, the trainee replicates the MIS performed and makes necessary improvements.
- Evaluation. The user assesses competency in performing the MIS procedure.

An economical, portable training device, SurgADE enables trainees to learn diverse procedures with enhanced accuracy, in an environment that replicates MIS. xi

PAGE 12

CHAPTER 1
INTRODUCTION

For the first time [physicians can be trained] in advanced medical skills whilst posing no risk to the patient. [The physician can reach] a pre-defined skill level before ever performing the procedure on a patient. Dr. Anthony Gallagher [ 1 ]

Simulated Surgery Uses

In recent years, Minimally Invasive Surgery (MIS) has become an increasingly viable technique for performing a wide array of medical procedures. Before MIS, traditional procedures required large incisions in a patient in order for the surgeon to view and work with internal organs and tissues. Modern medical technology, however, offers surgeons and patients a far better alternative to invasive techniques via innovative MIS procedures. This less-invasive surgery enables highly trained surgeons to make small incisions, and use a visual aid to guide tools within the body. The most common uses of MIS include laparoscopy (various abdominal procedures); arthroscopy (joint surgery); and, most recently, cardiology (valve replacement and vessel grafting). Procedures may also be performed with surgical robots (that the surgeon manipulates with controllers similar to MIS tools).

Laparoscopic Training

The MIS procedures require extensive training that traditionally involved live animals and human cadavers [ 2 ]. This type of training presents a number of economic and ethical dilemmas, and has significant drawbacks. It must be performed in a surgical setting, and the biological tissue does not truly replicate that of an actual patient 1

PAGE 13

2 undergoing the procedure. This means that, during actual MIS, the surgeon may react inappropriately, based on information learned in “less than ideal” training situations. Innovative Training Techniques As an alternative to this archaic training approach, computer scientists and skilled surgeons are working cooperatively to develop surgical simulator devices to improve MIS training. The Human Interface Technology Laboratory The Human Interface Technology Laboratory (HITLab) [ 3 ] at the University of Washington, for example, develops virtual reality (VR) training that creates a realistic simulation of diverse medical procedures. Current research at the HITLab includes using finite element analysis to simulate suturing, generating an expert surgical assistant to guide a trainee in various procedures, and creating a virtual prototype of medical robotic interfaces that would aid in MIS and microsurgery. Medicine Meets Virtual Reality An annual conference called Medicine Meets Virtual Reality (MMVR) further disseminates the latest medical technology training developments. Studies that have been presented at recent MMVR conferences include applying a virtual environment (VE) to laparoscopic training [ 4 ], and including an angled laparoscope within VE training [ 5 ]. The 2003 conference featured a presentation that surveys current surgery simulation techniques and describes their usefulness for the medical community [ 6 ]. Virtual Patients and Simulators Recent developments in surgical simulation enable trainees to make diagnoses on virtual patients made of plastic, wires, and computer circuits. These “patients” actually

PAGE 14

3 have beating hearts and breathing lungs. They are programmed to simulate diverse medical situations and to respond to a trainee’s medical treatment [ 7 ]. Other newer simulators, such as the Immersion Medical Accutouch and CathSim devices, combine video or virtual images with physical feedback to teach surgery. With these devices, trainees insert needles or surgical tools into a plastic box and feel the sensation of cutting flesh or pushing through organs such as the colon or uterus. Meanwhile, a video screen shows what a doctor would view via ultrasound images [ 7 ]. Experts predict that virtual training may soon become standard for instructing new doctors and testing experienced surgeons seeking re-certification. Simulators are currently widely used for training U.S. military medical staff at community colleges. Fifty percent of the 120 medical schools in the U.S. have already added the Medical Education Technologies Inc. Human Patient Simulator or the Laerdal Medical SimMan to training labs [ 7 ]. Dr. Jeffrey Hammond, surgical professor at Robert Wood Johnson, believes advanced simulators enhance learning better than work with cadavers or animals. Advanced learning by trainees helps justify the cost. However, surgical simulators are still expensive. Scaled-down models cost about $40,000, and high-tech versions, over $200,000. Hammond estimates that creating a basic simulator lab costs at least $600,000, and a top of the line one, $2.5 million [ 7 ]. Commercial Training Products Two simulated training techniques that eliminate the need for biological tissue in laparoscopic training—virtual reality and physical training models—have gained widespread global support from academics, professors, physicians, and researchers.

PAGE 15

4 Virtual Reality Innovative companies such as Mentice and SimSurgery [ 8 ] have developed (and continue to refine) VR training modules. In recent years, these companies have introduced devices that use digital environments replicating the look of organs and soft tissues. Controllers designed to emulate surgical tools simulate the feel of surgery. This software and hardware combination enables the trainee to actually perform “virtual surgery.” SimSurgery specifically developed a product, called SimMentor [ 9 ], to better train users to perform a variety of MIS procedures, without using live animals or cadavers. SimMentor simulates and trains the user in robotic-assisted endoscopic Coronary Artery Bypass Grafting by creating a 3D virtual environment. The system provides physical and visual guides that train the user. It offers both training and interactive modules. It also includes a program that evaluates the user’s overall performance. Although it is one of the best surgical simulator devices on the market, SimMentor has two fundamental weaknesses. First, most medical schools cannot afford the training device, which costs $15,000 for the basic setup alone. This means that, because of limited funding, most aspiring MIS surgeons may never be exposed to this simulator that simplifies training and, ideally, enhances surgical performance. Second, force feedback devices and modifiable digital environments cannot yet replicate the true look and feel of actual surgery. Logically, the trainee may not learn a procedure with utmost accuracy because the digital environment cannot replicate all situations encountered during actual MIS procedures.

PAGE 16

5 A B Figure 1-1. SimSurgery virtual reality surgery products. A) A surgeon using SimCor. B) The SimLap setup. (Source: http://www.simsurgery.no/downloads.html , Last accessed July 20, 2004). Training Models Like Mentice and SimSurgery, Simulab [ 10 ] serves as another company that has made an impressive mark in surgical training through simulation. Simulab’s training devices, unlike SimMentor and its counterparts, use mockups of laparoscopic surgical environments that incorporate artificial tissue. One such device, SimuView, reduces operational costs by using mirrors to eliminate the need for traditional video-endoscopic camera equipment. With another device, SimuVision, a digital camera and computer setup allows the trainee to watch surgical tools interact with synthetic tissue. Simulab’s devices offer an affordable alternative to SimMentor, plus a more physically accurate approach to MIS training. Despite enhanced affordability and accuracy, SimuView and SimuVision lack a number of SimMentor’s benefits. These devices unfortunately become expensive if the user wants to train in various procedures. They require varied organs and tissue samples for different MIS procedures, and cannot easily be adapted to various surgical scenarios. Simulab’s products also lack instructional guidance, and require the user to rely on an outside source to train proper methods and evaluate performance.

PAGE 17

6 A B Figure 1-2. Simulab training models. A) Rigid Suture Box using SimuView reflection technology. B) The SimuVision setup, which uses a digital camera to show the user’s procedure. (Source: http://www.simulab.com/LaparoscopicSurgery.htm , Last accessed July 20, 2004).

PAGE 18

CHAPTER 2
SURGADE BENEFITS

The major drawbacks of current simulation training devices—cost and difficulty in precisely replicating actual surgical situations—have created a critical need for an improved MIS training option. Computer scientists, mechanical engineers, and medical students have helped to develop a surgical simulator device that offers affordability and improved accuracy by using video rather than virtual reality simulation. Surgical Training with an Augmented Digital Environment (SurgADE) addresses the training issues where SimSurgery and Simulab falter. SurgADE has been designed specifically to improve MIS training, and to give the medical community access to an affordable, easily installed system. With SurgADE, the user performs a procedure on synthetic tissue, while a digital camera captures the procedure. The SurgADE system then displays video of an actual surgical procedure, and incorporates only the user’s tools into the scene displayed. The user can then mimic the perfected technique shown in the prerecorded version, performed by a skilled surgeon. The user enjoys the visual benefits of an augmented digital environment, and the physical advantages of working with synthetic tissue. For enhanced training, the surgical video provides a benchmark from which the user can learn. The user can self-evaluate performance during the video, and make adjustments to improve skills. 7

PAGE 19

8

SurgADE offers a unique variety of valuable educational features:

- Prerecorded MIS surgical procedures by skilled surgeons for replication by trainees
- The ability to view the quality and effectiveness of a trainee’s mimicked surgical procedure, providing valuable feedback through self-analysis of instrument trajectories in filmed results
- A flexible model needed to address different anatomies and pathological processes and stages encountered in actual MIS
- Simplified simulator training that encourages trainees and surgeons to improve skills and attempt new and varied MIS techniques
- Lower costs than current training simulators via a video software package that has been refined for use on a variety of computers, including laptops.

Cost

SurgADE costs significantly less than other surgical simulators. The setup consists of the following reasonably priced items:

- Digital camera that costs $30
- MIS apparatus—a prototype, using high-quality materials, costs $50 to construct
- Tablet PC—a laptop or desktop PC can also be used.

SurgADE does not require any expensive controller devices. Instead, actual medical tools (forceps, needles, and sutures) available through most medical schools interact naturally with synthetic tissue. Additionally, users do not need to purchase a variety of synthetic organs, as SurgADE allows the surgical environment to be altered through the software.

Reproducibility

SurgADE’s visual environment shows actual filmed surgery footage in the background. The user may apply the SurgADE system to different MIS procedures, by simply loading in different surgical films. A variety of synthetic tissue samples may be used to simulate different surgical environment textures. However, the tissue shape is

PAGE 20

9 inconsequential. SurgADE’s versatility is further enhanced because medical students can easily acquire video of actual surgical procedures. The students can then use these videos to hone their own skills, and to learn diverse MIS techniques. Training SurgADE offers three key training benefits. First, it provides a live tutorial, by allowing the user to mimic video of properly performed MIS procedures. Second, as the user watches his work in conjunction with the video, he can recognize deviations from the actual procedure. Finally, the user sutures a synthetic texture that can be reviewed with an instructor for further evaluation.

PAGE 21

CHAPTER 3 SURGADE PHYSICAL INTERFACE SurgADE’s mission involves developing a physical setup that interacts appropriately with a digital setup. The original design for the physical setup included an Immersion Microscribe [ 11 ] that would serve as the input device for the system. The Microscribe is an anchored stylus that communicates 3D position and rotation coordinates to the computer. However, the Microscribe alone does not provide physical, haptic feedback. Combining the Microscribe with forceps to work on synthetic flesh limits the freedom of movement necessary to train surgical dexterity. Ideally, a Sensable Phantom [ 12 ] feedback device can be incorporated as an alternative training feature in the future. However, this will emulate other simulators, limit natural movement, and increase costs. Haptic Feedback With SurgADE, working with actual medical tools and synthetic flesh provides the physical feedback needed by the trainee to skillfully learn varied MIS procedures. SurgADE’s haptic feedback offers the benefit of feeling similar to working with real flesh without the need for complex collision detection algorithms that generally lack true realistic feedback. The prototype apparatus provides the user with laparoscopic forceps that are separated from the workspace by a neoprene rubber membrane. The ends of these forceps are used to manipulate the synthetic flesh. Because no device limits the user’s natural desired movement, SurgADE provides an environment that parallels the actual MIS setting. 10

PAGE 22

11 Figure 3-1. The SurgADE frame. Visual Feedback The visual system of SurgADE also parallels actual MIS surgery. Rather than directly viewing the procedure, the user watches his work on a monitor. The technique involved parallels performing tasks using real objects within virtual environments [ 13 ]. Live Video Stream A digital camera records the trainee’s actual work and sends the video stream to the computer screen. This helps to acclimate the user to performing three-dimensional tasks while watching a two-dimensional viewpoint. Surgery Video SurgADE then composites the trainee’s technique over video of an actual MIS procedure performed by a skilled surgeon. The recorded example allows the user to learn proper technique. Compositing allows the user to mimic the procedure and view the digital image of his work over a professional’s, resulting in more practical training. Figure 3-2. Digital camera and tablet PC.
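The compositing idea described above can be illustrated with a per-pixel sketch: if a live-camera pixel is close to the backdrop color, show the surgery video instead, so only the trainee's tools remain over the recorded procedure. The function name, key color, and tolerance below are illustrative assumptions, not values from the thesis:

```cpp
#include <cassert>
#include <cstdlib>

// Decide whether a live-camera pixel matches the backdrop key color,
// within a per-channel tolerance. Pixels that match are replaced by
// the corresponding surgery-video pixel during compositing.
// (Illustrative chroma-key test; the thesis's exact criterion may differ.)
bool isKeyColor(unsigned char r, unsigned char g, unsigned char b,
                unsigned char keyR, unsigned char keyG, unsigned char keyB,
                int tolerance) {
    return std::abs(r - keyR) <= tolerance &&
           std::abs(g - keyG) <= tolerance &&
           std::abs(b - keyB) <= tolerance;
}
```

In the full compositing loop, this test would run once per pixel of each grabbed camera frame before the frame is uploaded as a texture.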

PAGE 23

12

The Physical Apparatus

The prototype for SurgADE uses a laparoscopic setup similar to Simulab’s SimuView rigid box. However, it omits reflection technology. SurgADE’s apparatus frame is a box with a tilted front. The (12” to 13 ”) x 12” x 12” frame is constructed with the following materials:

- Eight 1”, 16-gauge, 90° angle aluminum posts
- Two 1”, 8-gauge aluminum posts
- One 1”, 16-gauge, 70° angle aluminum post for the front bottom
- One 1”, 16-gauge, 110° angle aluminum post for the front top

Quarter-inch thick acrylic frames the front, back, and bottom of the box. One and one half inch holes, covered with neoprene rubber and bolted down with ” x 16-gauge aluminum rings, have been drilled into the front sheet. Various laparoscopic tools enter through the rubber and can manipulate the environment within the apparatus. The digital camera is docked on a base attached with Velcro adjacent to the front window, facing the workspace, and connected to the tablet PC, which can be positioned according to the user’s specifications.

Figure 3-3. The full SurgADE setup.

PAGE 24

CHAPTER 4 SURGADE NECESSARY COMPONENTS In addition to the apparatus, the development of SurgADE software remains integral to this project because it entails intricate computer programming. A detailed description of the required programming can help the user should any “glitches” be encountered during set up. Further, skilled programmers can reference program codes and continue to enhance images and software capabilities in the future. The components required to develop SurgADE have been included in Appendix A. In SurgADE’s present form, the package bundle includes all files required to implement the video simulator. The necessary libraries for execution can be found in Appendix B. The user merely needs to install the Imaging Source IC Imaging Control software [ 14 ] and Microsoft DirectX (8.0 or higher) [ 15 ]. The SurgADE training package includes the trial version of the software. Simplified for installation purposes, the software ensures that the user need not acquire any files other than those provided in the package. The developer has already completed necessary modifications so that the user can easily install SurgADE software in its present format. 13

PAGE 25

CHAPTER 5
SURGADE BACKGROUND

SurgADE loads a medical image into the background of the laptop to emulate an MIS environment. The image may be a still picture or a moving video that adds an extra training component to the package.

Loading Images as Textures

Originally, SurgADE implemented a live video stream over a still background image. This helps familiarize the user with the different look of various MIS environments through digital representations. The code that supports background textures still appears within the program. However, it does not execute when a video texture appears. The basic process for each format involves creating an “Image” pointer that contains size information and an array of GL unsigned bytes. The initialize function creates a blank, square, power-of-two-sized background texture based on the size of the medical image. The background texture is then saved into a buffer and loaded with glTexImage2D. Then, the program saves the texture image into a separate buffer that is loaded with glTexSubImage2D, which does not require power-of-two dimensions. This allows the user to supply a background texture image of any size. Every time the background image changes, the background texture image is saved and decoded (if necessary). 14
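The sizing step described above can be sketched as follows. The helper names are hypothetical, and the actual glTexImage2D/glTexSubImage2D calls are omitted because they require a live OpenGL context; this shows only the buffer-sizing arithmetic that precedes them:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Round a dimension up to the next power of two, as required for the
// base texture allocated with glTexImage2D on pre-2.0 OpenGL hardware.
unsigned nextPowerOfTwo(unsigned n) {
    unsigned p = 1;
    while (p < n) p <<= 1;
    return p;
}

// Allocate a blank RGB buffer for the power-of-two base texture.
// The medical image itself is later written into this texture with
// glTexSubImage2D, which accepts arbitrary (non-power-of-two) sizes.
std::vector<unsigned char> makeBaseTexture(unsigned imgW, unsigned imgH,
                                           unsigned& texW, unsigned& texH) {
    texW = nextPowerOfTwo(imgW);
    texH = nextPowerOfTwo(imgH);
    return std::vector<unsigned char>(static_cast<std::size_t>(texW) * texH * 3, 0);
}
```

For a 640 x 480 background image, this yields a 1024 x 512 base texture, and the 640 x 480 sub-image is uploaded into its corner each time the background changes.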

PAGE 26

15 Figure 5-1. The Initialize function. With a Joint Photographic Experts Group (JPEG) image, the program must decode the data, then flip it vertically. A for-loop flips the image, and reloads only when the background image changes. With a bitmap (BMP) file, the program must decode the data and switch the colors from BGR format to RGB format. With a Portable Pixel Map (PPM), the program need only scan the data. Originally, the developer required a PPM scanner to decode image files captured by the digital camera. Information on required libraries and code sources can be found in Appendix C.
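The two per-format fix-ups described above—flipping decoded JPEG rows vertically and swapping BMP's BGR channel order to RGB—are simple byte-buffer transforms. A minimal sketch with assumed function names, not the thesis's own loops:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Flip an RGB image vertically in place: decoded JPEG rows arrive
// top-to-bottom, while OpenGL textures are stored bottom-to-top.
void flipVertical(std::vector<unsigned char>& img, int w, int h) {
    const int stride = w * 3;  // bytes per RGB row
    for (int row = 0; row < h / 2; ++row)
        std::swap_ranges(img.begin() + row * stride,
                         img.begin() + (row + 1) * stride,
                         img.begin() + (h - 1 - row) * stride);
}

// Swap the blue and red channels in place: BMP files store pixels
// as BGR, but the texture is uploaded as RGB.
void bgrToRgb(std::vector<unsigned char>& img) {
    for (std::size_t i = 0; i + 2 < img.size(); i += 3)
        std::swap(img[i], img[i + 2]);
}
```

A PPM file needs neither transform, which is why the format only has to be scanned.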


Loading Video as Textures

During development, the system developers noted that SurgADE's training capabilities could be enhanced by prerecorded video in the background, serving as a template for the user. Such video could provide valuable MIS training examples.

Windows Media Video

Because the acquired footage was in the Windows Media Video (WMV) format, loading WMV files at first seemed ideal for SurgADE. However, Microsoft provides only limited access to this format, and finding examples of WMV manipulators that would work with SurgADE ultimately proved too difficult a task.

Audio Video Interleave

SurgADE therefore supports Audio Video Interleave (AVI).¹ Information on the required libraries and code sources can be found in Appendix C. SurgADE opens an AVI stream and creates a compatible device context used with a Device Independent Bitmap (DIB) section. This allows the program to grab a frame from the AVI and convert it to a DIB, which is then loaded into an Image with its colors properly reversed. This initial Image can then be used in the same manner as the still background textures discussed previously.

The initialize function also determines the rough milliseconds per frame (MPF) by dividing the duration of the video by the number of frames. The tick count determines the frame speed: the current frame number equals the difference between tick counts divided by the MPF. The program then continues to load new Images into the sub-texture buffer as it runs.

¹ The system running SurgADE must support the codec associated with the AVI file.
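The frame-timing arithmetic described above reduces to two small calculations; the function names below are illustrative, but the scheme follows the Update routine in Appendix D, which also advances one frame past the computed index and wraps back to frame zero at the end of the stream.

```cpp
// Rough milliseconds per frame: total video duration divided by
// the number of frames in the stream.
int roughMpf(int totalMs, int frameCount) {
    return totalMs / frameCount;
}

// Map elapsed milliseconds (accumulated from tick-count differences)
// to a frame index, advancing one frame past the computed index and
// wrapping to frame 0 once the last frame is reached.
int frameForElapsed(int elapsedMs, int mpf, int lastFrame) {
    int frame = elapsedMs / mpf + 1;
    if (frame >= lastFrame)
        frame = 0;
    return frame;
}
```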


Figure 5-2. The Create Texture method.


CHAPTER 6
INCORPORATING A LIVE VIDEO STREAM

A live video stream is necessary to composite the trainee's procedure over the correct procedure. Information about the source code acquired to capture a live video stream is included in Appendix C.

IC Imaging Control Software

SurgADE utilizes the IC Imaging Control software to capture the live video stream. This software creates a grabber object that determines the properties of a USB-connected camera device and allows the user to perform various tasks with the video. SurgADE uses this feature to film the MIS procedure as replicated by the user.

PPM Use

The original SurgADE code allowed the user to take a snapshot of an image and save it into a PPM file. The developers determined how to decode and load PPM images, which could then be reopened and saved into a sub-texture buffer. This process, however, requires repeatedly opening files, which adds extraneous overhead for the program.

Non-PPM Use

As development of SurgADE progressed, the developers discovered a way to eliminate this unnecessary overhead performed by the operating system. Further research showed that the grabber could put data directly into the image buffer. SurgADE creates one grabber object at the beginning of execution; then, every time OpenGL redisplays, it regrabs a frame from the streaming digital video for use with an Image buffer.


Figure 6-1. OpenGL functions. A) The Main function. B) The Display function.

OpenGL Software

SurgADE relies on OpenGL to incorporate the captured video stream into the digital environment.

TextureSubImage

Originally, a sub-texture enabled the video stream to be incorporated into the scene. First, the program creates the square background texture and loads the still (or video) medical image as a sub-texture. The grab image function then stores the video stream data and creates a separate sub-texture buffer. Putting the horizontal and vertical centers into the proper sub-texture parameters places the buffer at the center of the quad.


Chroma key technique

The original process to show only the desired part of the image (the medical tools) also required careful programming to make the images realistic and the mimicked surgery more accurate. The problem is solved by working with the tools over a green screen and "keying" out the green color. After loading the video onto the background, SurgADE's grab image function stores the video stream data. Wherever the image appears green (that is, wherever the green component of a pixel is greater than its red and blue values), the alpha is set to zero (with the background alphas at 255). However, when the sub-textures are placed on the same quad, the clear color shows instead of the medical background texture wherever the alpha of the foreground texture is zero; sub-textures entirely replace the textures they substitute. Incorporating two quads, with the medical background on the rear one and the live stream on the front one, originally solved this. Keeping the sub-texture parameter at the center and moving the front quad simplified the reposition function.

Repositioning the video stream

A necessary component of SurgADE is the ability to reposition the video stream, which allows the user to relocate the virtual workspace when necessary. Repositioning requires adding the adjusted horizontal and vertical positions to the horizontal and vertical offsets in the sub-texture parameters. The program then simply needs to ensure that the horizontal and vertical positions do not exceed the background image boundaries.

Buffer Replacement

The developers soon realized the slowness of the video sub-texture approach, which caused the live stream to flicker. Basically, two sub-textures had to be reloaded along with three different texture buffers: the blank, square background; the medical image; and the video stream.

Other problems ensued. Moving the front quad to reposition the live stream revealed a texturing flaw: the medical image was textured onto both quads. Fixing the problem with separate texture functions proved unnecessarily difficult once a faster method to texture the scene was discovered. Rather than load the video buffer into a second sub-texture, the developer simply alters the buffer storing the medical image with the live video data. SurgADE essentially takes the buffer that stores the still medical image and replaces part of it with the live video data.

However, manipulating two one-dimensional data streams so that their two-dimensional images correspond emerged as another difficult task. Extensive thought eventually enabled SurgADE to display the images properly. First, the system accepts a video stream only if it is smaller than the background image. SurgADE then determines the horizontal and vertical origins (X_o, Y_o) of the video placement by subtracting half of the video size (V_Size) from half of the background image size (BG_Size).

X_o = (BG_SizeX - V_SizeX) / 2    (6-1)
Y_o = (BG_SizeY - V_SizeY) / 2    (6-2)

Next, the system reads the video into the data buffer horizontally, line by line, using a loop. At each horizontal line, the image data stream pointer changes according to the difference in image widths. The pointer (P) is placed at the vertical origin plus the row number (R), multiplied by the background image width; the system adds the horizontal origin at the end.

P = (Y_o + R) * BG_SizeX + X_o    (6-3)
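Equations 6-1 through 6-3 translate directly into index arithmetic. The helpers below are an illustrative sketch (not part of the SurgADE source); the indices are in pixels, so byte offsets into the RGB buffer multiply by 3, as the grabImage code in Appendix D does.

```cpp
// Equations 6-1 and 6-2: top-left origin that centers the video
// inside the background image (video assumed no larger than background).
int originX(int bgW, int vidW) { return (bgW - vidW) / 2; }
int originY(int bgH, int vidH) { return (bgH - vidH) / 2; }

// Equation 6-3: pixel index of the first pixel of video row R inside
// the one-dimensional background buffer of width bgW.
int rowPointer(int xo, int yo, int row, int bgW) {
    return (yo + row) * bgW + xo;
}
```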


To further enhance the speed of the system and support real-time images, SurgADE alters only the section of the background image that the video replaces.

Figure 6-2. The GrabImage function.

Chroma key technique

To implement the chroma key technique under the updated texture method, a segment of the original background image the same size as the live video must be saved. The user covers the workspace with a green cloth. Then, the video stream replaces the background wherever the video image color² falls outside the calibrated cloth color values, that is, wherever

(R_V < R_CMin || R_V > R_CMax) || (G_V < G_CMin || G_V > G_CMax) || (B_V < B_CMin || B_V > B_CMax)    (6-4)

² The camera may need to be calibrated to a proper white balance option, according to individual camera specifications.
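Both keying tests discussed in this chapter can be sketched as per-pixel predicates; the function names are illustrative, not taken from the SurgADE source.

```cpp
// Original key (two-quad alpha approach): a pixel is treated as
// green-screen background when its green component exceeds both
// its red and blue components.
bool isGreenDominant(int r, int g, int b) {
    return g > r && g > b;
}

// Updated key, Equation 6-4: the video pixel replaces the saved
// background segment only when at least one channel falls outside
// the calibrated [min, max] range of the green cloth.
bool replacesBackground(int r, int g, int b,
                        int rMin, int rMax,
                        int gMin, int gMax,
                        int bMin, int bMax) {
    return (r < rMin || r > rMax) ||
           (g < gMin || g > gMax) ||
           (b < bMin || b > bMax);
}
```

The range test is more robust than simple green dominance because the per-channel bounds come from calibrating against the actual cloth under the current lighting (the calCam routine in Appendix D gathers these minima and maxima).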


Otherwise, the segment of the original background remains, making it seem as though the medical tools appear in the prerecorded scene. The temporary segment is reloaded to prevent the video from replacing itself, which would otherwise result in banding.

Repositioning the video stream

Repositioning the stream within the buffer requires the background image to be returned to its original state. A video background, however, eliminates this step, because the buffer is reloaded each time the display function executes. The saved segment of background changes along with the segment that the video stream replaces. The new vertical position (Y_pos) is subtracted from the sum of the vertical origin and the row number, and the new horizontal position (X_pos) is added at the end.

P = (Y_o + R - Y_pos) * BG_SizeX + X_o + X_pos    (6-5)

To guarantee that the stream data does not exceed the boundaries of the background buffer, the horizontal and vertical positions must be clamped according to the sizes of the background and the stream.
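The clamping rule and Equation 6-5 can be sketched as follows. The helper names are illustrative; the clamp mirrors the bounds used by the setxyPos routine in Appendix D, where the offset may shift the centered stream by at most half of the free space on either side.

```cpp
#include <algorithm>

// Clamp a requested stream offset so the video never leaves the
// background buffer: |offset| <= (background size - video size) / 2.
int clampOffset(int pos, int bgSize, int vidSize) {
    int half = (bgSize - vidSize) / 2;
    return std::min(half, std::max(-half, pos));
}

// Equation 6-5: pixel index of video row R after the stream has been
// moved by (xPos, yPos) from its centered origin (xo, yo).
int movedRowPointer(int xo, int yo, int row, int bgW, int xPos, int yPos) {
    return (yo + row - yPos) * bgW + xo + xPos;
}
```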


Figure 6-3. The SurgADE digital interface.


CHAPTER 7
CONCLUSION

SurgADE, in its present form, stands alone among video surgical simulators in providing users with an affordable, adaptable means of effectively teaching minimally invasive surgery (MIS) procedures.

Implications

A video-based simulator, SurgADE offers students a means of viewing diverse MIS procedures performed by highly trained surgeons. Then, via an inexpensive apparatus, the trainee replicates the surgical procedure. Most important, the trainee can perform simulated surgery on synthetic tissue in an environment that replicates actual surgery. The user can therefore learn the procedure more precisely and better prepare for surgery in an actual MIS environment.

Comparatively inexpensive, SurgADE can be used by training institutions despite limited funding. Easy to install and comprehend, it can reach a broader base of trainees. This makes possible more trained surgeons and access to innovative MIS procedures at more sites worldwide.

Future Developments

Given time, the developer will continue to enhance SurgADE. Planned enhancements include digital environment feedback, a more user-friendly interface, further video manipulation, and an improved evaluation component.


Digital Environment Feedback

SurgADE, in its present form, lacks one key component: it cannot manipulate the background environment on the screen. When mimicking actual surgical footage, manipulating the experienced surgeon's prerecorded procedure defies logic and could impair proper training. Using a manipulable digital environment, however, could enhance the still-image version presently offered by the program.

Controller

Conceivably, a controller could be used to manipulate the background environment. Although the Microscribe seems to limit natural movement, a joystick or other manipulation device may support enhanced interactivity and feedback. Because of the additional cost, the inclusion of such a device must be justified by increased natural movement and improved accuracy through manipulation of the background environment.

Additional digital camera

A second digital camera, placed perpendicular to the workspace to determine depth and height, could be added to provide the system with the user's coordinates. The addition of another low-cost camera, however, does not offer the same level of accuracy as a hardware device. Again, SurgADE's use of video as a guide reduces the need for enhanced visual feedback.

Enhanced User-Friendly Interface

Still in the prototype phase, SurgADE lacks some refined features that would help the user benefit more from the system. For example, the user might want to load different pictures or videos while the program is running. Currently, SurgADE allows loading a video or image only at the beginning of the program. A graphical user interface with menu options would address this shortcoming.

Video Manipulation

The current version of SurgADE allows the user to load the video and work during normal play, from the beginning to the end of the video. Ideally, further features would be added to enable the user to manipulate the surgical video, performing tasks such as pause and replay. Another useful feature would enable the user to save the video containing the mimicked procedure within the picture for later review. Regardless of these additional options, SurgADE in its present form effectively allows a trainee to mimic the actual work of experienced surgeons and train effectively within an MIS environment.

Evaluation Component

Finally, SurgADE would greatly benefit from a more advanced evaluation component. Currently, the user can self-evaluate work based upon the precision with which the actual MIS procedure is replicated. Ideally, a feedback component that judges the performance and measures deviations from the desired performance would be added to SurgADE. Even so, the video simulator offers an acceptable means of judging performance through self-evaluation.


APPENDIX A
SURGADE DEVELOPMENT COMPONENTS

Table A-1. Programs used to develop SurgADE
Program                  Program type                                   Function
Windows XP               Operating system                               File management, etc.
Visual Studio 6.0        Development platform                           C++ programming language
Glut for Windows [16]    Libraries, etc.                                OpenGL programming language
IC Imaging Control [14]  External camera device-manipulation software   Capture the digital camera stream
DirectX 8.0 [15]         Video drivers, etc.                            Necessary to run IC Imaging Control software

Table A-2. Libraries linked in Visual Studio 6.0
Library          Function
glut32.lib       OpenGL language support for Visual Studio
glu32.lib        OpenGL language support for Visual Studio
tis_udshl05.lib  IC Imaging Control software support
strmiids.lib     Video color support
jpeg.lib         JPEG decoder support
vfw32.lib        AVI stream decoder support
gdi32.lib        DIB section support


APPENDIX B
SURGADE EXECUTABLE COMPONENTS

Table B-1. Libraries needed to run SurgADE*
Library             Source
glut32.dll          Visual Studio 6.0
mfc42d.dll          Visual Studio 6.0
msvcp60d.dll        Visual Studio 6.0
msvcrtd.dll         Visual Studio 6.0
tis_dshowlib05.dll  IC Imaging Control software
tis_udshl05.dll     IC Imaging Control software

* IC Imaging Control software and DirectX 8.0 (or better) must also be installed.


APPENDIX C
SURGADE ADJUSTED CODE SOURCES

Table C-1. External code used to develop SurgADE
Code type           Code supported                                            Source
Image capture code  SetupDevice.cpp, SetupDevice.h, initializeCam, grabImage  UNC Charlotte Digital Image Processing homework assignment [17]
PPM loader          LoadPPM                                                   Brandeis University Computer Graphics Outline [18]
Bitmap decoder      LoadBMP, getint, getshort                                 University of York Introduction to OpenGL [19]
JPEG decoder        DecodeJPG, LoadJPG                                        Game Tutorials [20]
AVI decoder         LoadVideo.cpp, LoadVideo.h                                Neon Helium Productions [21]


APPENDIX D SURGADE CODE Main.cpp // SurgADE // Author: William Wilson // e-mail: whw4429@hotmail.com #include "TISUDSHL.h" #include "setupdevice.h" #include #include "conio.h" #include "loadtexture.h" #include "loadvideo.h" #ifndef CALLBACK #define CALLBACK #endif int xpos = 0; // screen horizontal position int ypos = 0; // screen vertical position double zpos = -2.5; // screen depth position int streamX = 0; // Video Stream Width int streamY = 0; // Video Stream Height int stop = 0; // Animation Flag int BGtype = 0; // 0 = AVI, 1 = JPG, 2 = BMP, 3 = PPM int frame; // Frame Counter int next; // Used For Animation char textureFile [40]; GLubyte * BGImage; // Specify the number of buffers to be used. #define NUM_BUFFERS 1 using namespace _DSHOWLIB_NAMESPACE; using namespace std; void grabImage(DShowLib::Grabber::tMemBufferPtr buff); void initCam(DShowLib::Grabber::tMemBufferPtr buff); void calCam(DShowLib::Grabber::tMemBufferPtr buff); void keyboard ( unsigned char key, int x, int y ); void initializeCam(); void deleteCam(); void freeMem(); Grabber::tMemBufferCollectionPtr pMemBuffColl; Grabber::tMemBufferPtr pMemBuff; Grabber *grabber; HDC hDC=NULL; // Private GDI Device Context HGLRC hRC=NULL; // Permanent Rendering Context HWND hWnd=NULL; // Holds Our Window Handle HINSTANCE hInstance; // Holds Instance Of Application 31


32 int BGImageWidth = 2; int BGImageHeight = 2; void earlyExit(int code) { printf("Press any key to continue"); getch(); exit(code); } // Create a Blank Background for the base texture void makeBGImage(int width, int height) { int i; while ((width >= BGImageWidth) || (height >= BGImageHeight)) { BGImageWidth *= 2; BGImageHeight *=2; } // read the data BGImage = (GLubyte *) malloc(4 * sizeof(GLubyte) * BGImageHeight * BGImageWidth); for (i = 0; i < (4 * sizeof(GLubyte) * BGImageHeight * BGImageWidth); i++) BGImage[i] = (GLubyte) 100; } int changeBG = 1; Image *pImage; void CreateTexture(const char* strFileName) { // load new Background, if necessary if (changeBG) { // clear previous data, if any if (pImage) { if (pImage->data) free(pImage->data); // Free texture data free(pImage); // Free image structure } // Load the image and store the data switch (BGtype) { case 1: pImage = LoadJPG(strFileName); break; case 2: pImage = LoadBMP(strFileName); break; case 3: pImage = LoadPPM(strFileName); break; } } if(pImage == NULL) { // can't load file, quit! printf("Background cannot be loaded\n\n"); earlyExit(0); } changeBG = 0; }


void CreateTexture(int frameNumber)
{
    // Load the image and store the data
    pImage = GrabAVIFrame(frameNumber);

    if(pImage == NULL) {
        // can't load file, quit!
        printf("Background cannot be loaded\n\n");
        earlyExit(0);
    }
}

int tickCount [2];

void Update(void)
{
    tickCount[1] = GetTickCount();           // Get Tick Count
    next += tickCount[1] - tickCount[0];     // Increase next Based On Timer
    tickCount[0] = tickCount[1];
    frame = next / getMPF();                 // Calculate Current Frame
    frame += 1;
    if (frame >= getLastFrame())             // At Or Past Last Frame
    {
        frame = 0;                           // Reset Frame To Zero (Start)
        next = 0;                            // Reset Animation Timer (next)
    }
}

void animate (void)
{
    if (stop == 0) {
        if (!BGtype)
            Update();
        glutPostRedisplay();
    }
}

int width;
int height;
int centerX, centerY;
int halfPX, halfPY;
int newBG = 1;
int clearBG = 0;

void setxyPos(int x, int y)
{
    // clear textures
    if (BGtype) {
        newBG = 1;
        clearBG = 1;
    }
    changeBG = 1;

    // initialize x and y
    centerX = 2.12 * (x - width/2);
    centerY = 2.12 * (y - height/2);
    halfPX = 0.5 * (pImage->sizeX - streamX);
    halfPY = 0.5 * (pImage->sizeY - streamY);


    if (centerX > halfPX)
        xpos = halfPX;
    else if (centerX < -halfPX)
        xpos = -halfPX;
    else
        xpos = centerX;

    if (centerY > halfPY)
        ypos = halfPY;
    else if (centerY < -halfPY)
        ypos = -halfPY;
    else
        ypos = centerY;
}

bool left_down = false;

void moveObj(int button, int state, int x, int y)
{
    // process only left mouse down hits
    if (button != GLUT_LEFT_BUTTON || state != GLUT_DOWN)
        return;
    left_down = true;
    setxyPos(x, y);
}

void motion(int x, int y)
{
    if (!left_down)
        return;
    setxyPos(x, y);
}

// Load the Background Image/Video File
int initBGImage()
{
    char tempFile [34];
    printf("\nBackground Image Title: ");
    scanf("%s", tempFile);
    sprintf(textureFile, "data/%s", tempFile);

    if (strstr(textureFile, ".avi") || strstr(textureFile, ".AVI")) {
        BGtype = 0;
        return 1;
    }
    if (strstr(textureFile, ".jpg") || strstr(textureFile, ".JPG")) {
        BGtype = 1;
        return 1;
    }
    if (strstr(textureFile, ".bmp") || strstr(textureFile, ".BMP")) {
        BGtype = 2;
        return 1;
    }
    if (strstr(textureFile, ".ppm") || strstr(textureFile, ".PPM")) {
        BGtype = 3;
        return 1;
    }
    return 0;
}


35 // Initialize material property and depth buffer. void init(void) { GLfloat mat_diffuse[] = { 0.7, 0.7, 0.7, 1.0 }; GLfloat mat_specular[] = { 1.0, 1.0, 1.0, 1.0 }; GLfloat mat_shininess[] = { 100.0 }; glClearColor (0.0, 0.0, 0.0, 0.0); glColor3f(1.0,1.0,1.0); glMaterialfv(GL_FRONT, GL_DIFFUSE, mat_diffuse); glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular); glMaterialfv(GL_FRONT, GL_SHININESS, mat_shininess); glEnable(GL_LIGHTING); glEnable(GL_LIGHT0); glEnable(GL_DEPTH_TEST); glEnable(GL_AUTO_NORMAL); glEnable(GL_NORMALIZE); glClearDepth (1.0f); // Depth Buffer Setup glDepthFunc (GL_LEQUAL); // The Type Of Depth Testing glEnable(GL_DEPTH_TEST); // Enable Depth Testing glShadeModel (GL_SMOOTH); // Select Smooth Shading if (!initBGImage()) { // Load the Background Image/Video File printf("Improper File Format.\n\n"); earlyExit(0); } if (!BGtype) { initHDD(); if (!OpenAVI(textureFile)) // Open The AVI File earlyExit(0); CreateTexture(0); } else CreateTexture(textureFile); pMemBuff=pMemBuffColl->getBuffer(0); initCam(pMemBuff); calCam(pMemBuff); makeBGImage(pImage->sizeX, pImage->sizeY); } void display(void) { glClear(GL_COLOR_BUFFER_BIT);// | GL_DEPTH_BUFFER_BIT); // set texture values glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST); glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST); glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP); glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP); glTexImage2D(GL_TEXTURE_2D,0, GL_RGBA, BGImageWidth, BGImageHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, BGImage); // Add the Background Texture if (!BGtype) { CreateTexture(frame); }


36 else CreateTexture(textureFile); // Add the Live Video Stream grabImage(pMemBuff); if (!BGtype) { // allow stream background to be updated newBG = 1; // allow new frame to be updated changeBG = 1; } // Enable texture to be used glEnable(GL_TEXTURE_2D); glBegin(GL_QUADS); glTexCoord2f(0.0, 0.0); glVertex3f(-1, -1, zpos); glTexCoord2f(1.0, 0.0); glVertex3f(1, -1, zpos); glTexCoord2f(1.0, 1.0); glVertex3f(1, 1, zpos); glTexCoord2f(0.0, 1.0); glVertex3f(-1, 1, zpos); glEnd(); glFlush(); glDisable(GL_TEXTURE_2D); glutSwapBuffers(); } void reshape(int w, int h) { glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective (45.0, (GLdouble)w/(GLdouble)h, 2.0, 3.0); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glViewport(0,0,(GLsizei) w,(GLsizei) h); width = w; height = h; } int main(int argc, char** argv) { initializeCam(); glutInit(&argc, argv); glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH); init(); atexit(freeMem); glutInitWindowPosition(100,100); glutInitWindowSize(BGImageHeight, BGImageWidth); glutCreateWindow("SurgADE"); glutReshapeFunc(reshape); glutKeyboardFunc(keyboard); glutMouseFunc (moveObj); glutMotionFunc(motion); glutDisplayFunc(display); glutIdleFunc(animate); glutMainLoop(); return 0; }


37 int maxcol[3]; int mincol[3]; // calibrate the camera/chroma key scene void calCam(DShowLib::Grabber::tMemBufferPtr buff) { int bob; // make sure camera is "warmed up" for (int i = 0; i < 10; i++) BYTE *data = buff->getPtr(); BYTE *data = buff->getPtr(); maxcol[0] = 0; maxcol[1] = 0; maxcol[2] = 0; mincol[0] = 255; mincol[1] = 255; mincol[2] = 255; bob = 0; for(int r = 0; r < streamX * streamY; r++) { // max/min colors if (data[bob] > maxcol[0]) maxcol[0] = data[bob]; if (data[bob+1] > maxcol[1]) maxcol[1] = data[bob+1]; if (data[bob+2] > maxcol[2]) maxcol[2] = data[bob+2]; if (data[bob] < mincol[0]) mincol[0] = data[bob]; if (data[bob+1] < mincol[1]) mincol[1] = data[bob+1]; if (data[bob+2] < mincol[2]) mincol[2] = data[bob+2]; bob += 3; } } SIZE s; GLubyte *tImage; void initCam(DShowLib::Grabber::tMemBufferPtr buff) { s = buff->getSize(); streamX = s.cx; streamY = s.cy; // make sure stream is smaller than background if ((pImage->sizeX < streamX) || (pImage->sizeY < streamY)) { printf ("Video Stream Larger Than Background Image\n"); printf("Background cannot be loaded\n\n"); earlyExit(0); } // allocate memory for temporary image to hold background tImage = (GLubyte *) malloc(4 * sizeof(GLubyte) * streamX * streamY); }


void grabImage(DShowLib::Grabber::tMemBufferPtr buff)
{
    int R;
    int bob = 0;
    int BOB = 0;
    int xOrigin = 0;
    int yOrigin = 0;

    // pixel is stored as BGR
    BYTE *data = buff->getPtr();

    xOrigin = (pImage->sizeX/2 - streamX/2);
    yOrigin = (pImage->sizeY/2 - streamY/2);

    // save background picture into buffer to replace
    if(newBG) {
        for(int r = 0; r < streamY; r++) {
            R = 3 * ((yOrigin + r - ypos) * pImage->sizeX + xOrigin + xpos);
            BOB = 0;
            for( int c = 0; c < streamX; c++) {
                tImage [bob]     = pImage->data[R + BOB];
                tImage [bob + 1] = pImage->data[R + BOB + 1];
                tImage [bob + 2] = pImage->data[R + BOB + 2];
                bob += 3;
                BOB += 3;
            }
        }
        newBG = 0;
    }

    // copy visible parts of video stream to the background image
    bob = 0;
    for(int r = 0; r < streamY; r++) {
        BOB = 0;
        R = 3 * ((yOrigin + r - ypos) * pImage->sizeX + xOrigin + xpos);
        for(int c = 0; c < streamX; c++) {
            pImage->data[R + BOB]     = tImage[bob];
            pImage->data[R + BOB + 1] = tImage[bob + 1];
            pImage->data[R + BOB + 2] = tImage[bob + 2];
            if ( ( (data[bob]   > maxcol[0]) || (data[bob]   < mincol[0])) ||
                 ( (data[bob+1] > maxcol[1]) || (data[bob+1] < mincol[1])) ||
                 ( (data[bob+2] > maxcol[2]) || (data[bob+2] < mincol[2]))) {
                pImage->data[R + BOB]     = data[bob + 2];
                pImage->data[R + BOB + 1] = data[bob + 1];
                pImage->data[R + BOB + 2] = data[bob];
            }
            bob += 3;
            BOB += 3;
        }
    }


    // replace old buffer back to the background if stream moves
    if(clearBG) {
        bob = 0;
        for(int r = 0; r < streamY; r++) {
            R = 3 * ((yOrigin + r - ypos) * pImage->sizeX + xOrigin + xpos);
            BOB = 0;
            for(int c = 0; c < streamX; c++) {
                pImage->data[R + BOB]     = tImage [bob];
                pImage->data[R + BOB + 1] = tImage [bob + 1];
                pImage->data[R + BOB + 2] = tImage [bob + 2];
                bob += 3;
                BOB += 3;
            }
        }
        clearBG = 0;
    }

    glTexSubImage2D(GL_TEXTURE_2D, 0,
                    (BGImageWidth - pImage->sizeX)/2,
                    (BGImageHeight - pImage->sizeY)/2,
                    pImage->sizeX, pImage->sizeY,
                    GL_RGB, GL_UNSIGNED_BYTE, pImage->data);
}

void deleteCam()
{
    grabber->stopLive();
    delete grabber;
}

void initializeCam()
{
    if( !InitLibrary( "IP-1110653689" ) ) {
        fprintf( stderr, "The library could not be initialized ");
        fprintf( stderr, "(invalid license key?).\n");
        earlyExit( 1 );
    }

    grabber = new DShowLib::Grabber();

    if ( !setupDevice( grabber ) ) {
        delete grabber;
        earlyExit( 1 );
    }

    // choose a sink format compatible with the given video format
    DShowLib::tColorformatEnum eSinkColorformat = DShowLib::eRGB24;
    GUID tmpGUID = grabber->getVideoFormat().getColorformat();

    if( tmpGUID == MEDIASUBTYPE_RGB8 )
        eSinkColorformat = DShowLib::eY8;
    else if( tmpGUID == MEDIASUBTYPE_RGB555 )
        eSinkColorformat = DShowLib::eRGB555;
    else if( tmpGUID == MEDIASUBTYPE_RGB565 )
        eSinkColorformat = DShowLib::eRGB565;


40 else if( tmpGUID == MEDIASUBTYPE_RGB24 ) eSinkColorformat = DShowLib::eRGB24; else if( tmpGUID == MEDIASUBTYPE_RGB32 ) eSinkColorformat = DShowLib::eRGB32; else // e.g. if UYVY is specified as the current video format eSinkColorformat = DShowLib::eRGB24; // start the grabber in single snap mode, // with a Colorformat corresponding to the videoformat grabber->setSinkType( FrameGrabberSink( FrameGrabberSink::tFrameGrabberMode::eGRAB, eSinkColorformat )); // Set a MemBufferCollection pMemBuffColl = grabber->newMemBufferCollection( NUM_BUFFERS ); grabber->setActiveMemBufferCollection( pMemBuffColl ); grabber->startLive(false); } void freeMem() { if (pImage) { if (pImage->data) // Free the texture data free(pImage->data); // Free the image structure free(pImage); } if (tImage) // Free the texture data free(tImage); deleteCam(); if (!BGtype) CloseAVI(); } void keyboard ( unsigned char key, int x, int y ) { switch ( key ) { case ' ': stop = !stop; break; case 'c': case 'C': calCam(pMemBuff); break; case 27: exit(0); break; default: break; } }


41 SetupDevice.cpp #include "setupdevice.h" /* The function setupDevice() executes a complete device selection in a DOS box. Usage: setupDevice( grabber ) Parameter : pointer of a class Grabber Generate a list of all video capture devices and open the one the user selects */ bool setupDevice(Grabber *grabber) { int i=0; int input=0; int choice; // Get the list of all available video capture devices Grabber::tVidCapDevListPtr pVidCapDevList = grabber->getAvailableVideoCaptureDevices(); if(pVidCapDevList == 0 || pVidCapDevList->empty()) { printf("No camera device found!\n\n"); return false; } // Iterate the list and print the name of each device printf("Available Grabbers: \n"); for (Grabber::tVidCapDevListPtr::value_type::iterator it = pVidCapDevList->begin(); it != pVidCapDevList->end(); ++it) printf( "[%i] %s\n", i++, it->c_str()); // Ask for which device to open printf( "Your choice: "); input = scanf("%i", &choice); if (choice >= 0 && choice < pVidCapDevList->size()) grabber->openDev( pVidCapDevList->at( choice )); else return false; // Generate a list of all available video norms and // set the one the user selects i=0; input=0; // Determine whether device is capable of setting video norm. if (grabber->isVideoNormAvailableWithCurDev()) { // Query for all available video norms Grabber::tVidNrmListPtr pVidNrmList = grabber->getAvailableVideoNorms(); if (pVidNrmList == 0) { fprintf(stderr, "Error: %s\n", grabber->getLastError().c_str()); return false; }


42 // Iterate the list of available video norms and // print the name of each norm printf("\n\nAvailable video norms:\n"); for (Grabber::tVidNrmListPtr::value_type::iterator nrm_it = pVidNrmList->begin(); nrm_it != pVidNrmList->end(); ++nrm_it) printf("[%i] %s\n", i++, nrm_it->c_str()); // Ask the user for the norm to set printf("Your choice:"); input = scanf("%i", &choice); if (input == 0 || choice < 0 || choice >= pVidNrmList->size()) return false; grabber->setVideoNorm(pVidNrmList->at(choice)); if (grabber->getLastError()) { fprintf(stderr, "Error: ", grabber->getLastError().c_str()); return false; } } // Generate a list of all available video formats and // set the one the user selects. i=0; input=0; // Get the list of all available video formats printf("\n\nAvailable video formats: \n"); Grabber::tVidFmtListPtr pVidFmtList = grabber->getAvailableVideoFormats(); if (pVidFmtList == 0) { if (grabber->getLastError()) fprintf(stderr, "Error: %s\n", grabber->getLastError().c_str()); return false; } // Iterate the list of available video formats // and print the name of each format for (Grabber::tVidFmtListPtr::value_type::iterator fmt_it = pVidFmtList->begin(); fmt_it != pVidFmtList->end(); ++fmt_it) printf("[%i] %s\n", i++, fmt_it->c_str()); // Ask the user for the format to set printf( "Your choice: "); input=scanf( "%i", &choice ); if (input == 0 || choice < 0 || choice >= pVidFmtList->size()) return false; // Set the choosen video format grabber->setVideoFormat( pVidFmtList->at(choice)); if (grabber->getLastError()) { fprintf(stderr, "Error: ", grabber->getLastError().c_str()); return false; }


    i=0;
    input=0;

    // Determine whether the device is capable
    // of setting the input channel
    if (grabber->isInputChannelAvailableWithCurDev()) {
        // Get the list of all available channels
        printf("\n\nAvailable input channels: \n");
        Grabber::tInChnListPtr pInChnList = grabber->getAvailableInputChannels();
        if (pInChnList == 0) {
            if (grabber->getLastError())
                fprintf(stderr, "Error: %s\n", grabber->getLastError().c_str());
            return false;
        }

        // Iterate the list of all available channels
        // and print the name of each channel
        for (Grabber::tInChnListPtr::value_type::iterator chn_it = pInChnList->begin();
             chn_it != pInChnList->end(); ++chn_it)
            printf("[%i] %s\n", i++, chn_it->c_str());

        // Ask the user for the channel to use
        printf("Your choice: ");
        input=scanf( "%i", &choice );
        if (input == 0 || choice < 0 || choice >= pInChnList->size())
            return false;

        // Set the input channel
        grabber->setInputChannel(pInChnList->at(choice));
        if (grabber->getLastError()) {
            fprintf(stderr, "Error: %s\n", grabber->getLastError().c_str());
            return false;
        }
    }
    return true;
}

SetupDevice.h

#ifndef __SETUPDEVICE_H
#define __SETUPDEVICE_H

#pragma warning ( disable : 4786 )  // identifier truncated to '255' characters
                                    // in the debug information

#include "TISUDSHL.h"


44 using namespace _DSHOWLIB_NAMESPACE; // Prototype for the function that queries // the user for all essential grabber settings bool setupDevice( _DSHOWLIB_NAMESPACE::Grabber *grabber ); #endif LoadVideo.cpp #include "loadVideo.h" AVISTREAMINFO psi; // Pointer To A Structure Containing Stream Info PAVISTREAM pavi; // Handle To An Open Stream PGETFRAME pgf; // Pointer To A GetFrame Object BITMAPINFOHEADER bmih; // Header Information For DrawDibDraw Decoding long lastframe; // Last Frame Of The Stream int vidWidth; // Video Width int vidHeight; // Video Height char *pdata; // Pointer To Texture Data int mpf; // Will Hold Rough Milliseconds Per Frame HDRAWDIB hdd; // Handle For Our Dib HBITMAP hBitmap; // Handle To A Device Dependant Bitmap HDC hdc = CreateCompatibleDC(0); // Creates A Compatible Device Context unsigned char* imData = 0; // Pointer To Our Resized Image void initHDD(void) { hdd = DrawDibOpen(); // Grab A Device Context For Our Dib } Image* TextureFrame; bool OpenAVI(LPCSTR szFile) // Opens An AVI File (szFile) { AVIFileInit(); // Opens The AVIFile Library // Open The AVI Stream if (AVIStreamOpenFromFile( &pavi, szFile, streamtypeVIDEO, 0, OF_READ, NULL) !=0) { // An Error Occurred Opening The Stream printf("Failed To Open The AVI Stream\n\n"); return false; } AVIStreamInfo(pavi, &psi, sizeof(psi)); // Reads Information About The Stream Into psi vidWidth=psi.rcFrame.right-psi.rcFrame.left; // Width Is Right Side Of Frame Minus Left vidHeight=psi.rcFrame.bottom-psi.rcFrame.top; // Height Is Bottom Of Frame Minus Top // allocate memory for Image stream TextureFrame = (Image *) malloc (sizeof(Image));


    TextureFrame->data = (GLubyte *) malloc(sizeof(GLubyte) *
                                            vidWidth * vidHeight * 3);

    lastframe = AVIStreamLength(pavi); // The Last Frame Of The Stream

    // Calculate Rough Milliseconds Per Frame
    mpf = AVIStreamSampleToTime(pavi, lastframe) / lastframe;

    bmih.biSize = sizeof(BITMAPINFOHEADER); // Size Of The BitmapInfoHeader
    bmih.biPlanes = 1;                      // Bitplanes
    bmih.biBitCount = 24;                   // Bits Format We Want (24 Bit, 3 Bytes)
    bmih.biWidth = vidWidth;                // Width We Want (The Video Width)
    bmih.biHeight = vidHeight;              // Height We Want (The Video Height)
    bmih.biCompression = BI_RGB;            // Requested Mode = RGB

    hBitmap = CreateDIBSection(hdc, (BITMAPINFO*)(&bmih), DIB_RGB_COLORS,
                               (void**)(&imData), NULL, NULL);
    SelectObject(hdc, hBitmap); // Select hBitmap Into Our Device Context (hdc)

    pgf = AVIStreamGetFrameOpen(pavi, NULL); // Create The PGETFRAME Using Our Request Mode
    if (pgf == NULL)
    {
        // An Error Occurred Opening The Frame
        printf("Failed To Open The AVI Frame\n\n");
        return false;
    }
    return true;
}

long getLastFrame(void)
{
    return lastframe;
}

int getMPF(void)
{
    return mpf;
}

Image *GrabAVIFrame(int frame) // Grabs A Frame From The Stream
{
    LPBITMAPINFOHEADER lpbi; // Holds The Bitmap Header Information

    // Grab Data From The AVI Stream
    lpbi = (LPBITMAPINFOHEADER)AVIStreamGetFrame(pgf, frame);
    // Pointer To Data Returned By AVIStreamGetFrame
    pdata = (char *)lpbi + lpbi->biSize + lpbi->biClrUsed * sizeof(RGBQUAD);

    // Convert Data To Requested Bitmap Format
    DrawDibDraw(hdd, hdc, 0, 0, vidWidth, vidHeight,
                lpbi, pdata, 0, 0, vidWidth, vidHeight, 0);

    TextureFrame->sizeX = vidWidth;
    TextureFrame->sizeY = vidHeight;


    // Reverse colors of image (BGR -> RGB)
    int bob = 0;
    for (int r = 0; r < vidHeight; r++)
    {
        for (int c = 0; c < vidWidth; c++)
        {
            TextureFrame->data[bob]     = imData[bob + 2];
            TextureFrame->data[bob + 1] = imData[bob + 1];
            TextureFrame->data[bob + 2] = imData[bob];
            bob += 3;
        }
    }
    return TextureFrame;
}

void CloseAVI(void) // Properly Closes The Avi File
{
    DeleteObject(hBitmap);       // Delete Device Dependant Bitmap Object
    DrawDibClose(hdd);           // Close The DrawDib Device Context
    AVIStreamGetFrameClose(pgf); // Deallocate GetFrame Resources
    AVIStreamRelease(pavi);      // Release The Stream
    AVIFileExit();               // Release The File
}

LoadVideo.h

#ifndef __LOADVIDEO_H
#define __LOADVIDEO_H

#include <windows.h> // Header File For Windows
#include <vfw.h>     // Header File For Video For Windows
#include "loadtexture.h"

void initHDD(void);
bool OpenAVI(LPCSTR szFile);    // Opens An AVI File (szFile)
long getLastFrame(void);
int getMPF(void);
Image *GrabAVIFrame(int frame); // Grabs A Frame From The Stream
void CloseAVI(void);            // Properly Closes The Avi File

#endif

LoadTexture.cpp

#include "loadTexture.h"

/* JPEG file loader */

// JPEG Decoder
void DecodeJPG(jpeg_decompress_struct* cinfo, tImageJPG *pImageData)
{
    jpeg_read_header(cinfo, TRUE); // Read in the jpeg file header
    jpeg_start_decompress(cinfo);  // Start decompressing jpeg file

    // Get info to read in the pixel data
    pImageData->rowSpan = cinfo->image_width * cinfo->num_components;
    pImageData->sizeX = cinfo->image_width;
    pImageData->sizeY = cinfo->image_height;


    // Allocate memory for the pixel buffer
    pImageData->data =
        new unsigned char[pImageData->rowSpan * pImageData->sizeY];

    // Create an array of row pointers
    unsigned char** rowPtr = new unsigned char*[pImageData->sizeY];
    for (int i = 0; i < pImageData->sizeY; i++)
        rowPtr[i] = &(pImageData->data[i * pImageData->rowSpan]);

    // Extract the pixel data
    int rowsRead = 0;
    while (cinfo->output_scanline < cinfo->output_height)
    {
        // Read in current row of pixels; increase rowsRead count
        rowsRead += jpeg_read_scanlines(cinfo, &rowPtr[rowsRead],
                                        cinfo->output_height - rowsRead);
    }
    delete [] rowPtr; // Delete the temporary row pointers

    jpeg_finish_decompress(cinfo); // Finish decompressing the data
}

// JPEG Loader
Image *LoadJPG(const char *filename)
{
    struct jpeg_decompress_struct cinfo;
    tImageJPG *tempData = NULL;
    Image *pImageData = NULL;
    FILE *pFile;

    // Pass in jpeg file name, and get pointer to tImageJPG structure
    // containing width, height and pixel data. Free data when done.

    // Open a file pointer to jpeg file and check if found and opened
    if ((pFile = fopen(filename, "rb")) == NULL)
    {
        // Display an error message, then return NULL
        printf("File Not Found : %s\n", filename);
        return NULL;
    }

    jpeg_error_mgr jerr;               // Error handler
    cinfo.err = jpeg_std_error(&jerr); // Point to handler address
    jpeg_create_decompress(&cinfo);    // Initialize decompression object
    jpeg_stdio_src(&cinfo, pFile);     // Data source (file pointer)
    tempData = (tImageJPG*)malloc(sizeof(tImageJPG)); // Allocate

    // Decode jpeg file and fill in image data structure to pass back
    DecodeJPG(&cinfo, tempData);
    jpeg_destroy_decompress(&cinfo); // Release memory for jpeg
    fclose(pFile);                   // Close the file pointer that opened the file

    // Flip image upside down (rightside up)
    pImageData = (Image*)malloc(sizeof(Image)); // Allocate
    pImageData->sizeX = tempData->sizeX;
    pImageData->sizeY = tempData->sizeY;
    pImageData->data =
        new unsigned char[tempData->rowSpan * pImageData->sizeY];

    int bob = 0;


    for (int r = tempData->sizeY - 1; r >= 0; r--)
    {
        int R = r * tempData->rowSpan;
        for (int c = 0; c < tempData->rowSpan; c++)
        {
            pImageData->data[R + c] = tempData->data[bob];
            bob++;
        }
    }

    free(tempData->data);
    free(tempData);
    return pImageData; // Return jpeg data
}

/* ppm loader */
Image *LoadPPM(const char *filename)
{
    FILE *file;
    int k, size; // maxcolor, size values
    Image *TextureImage;
    Image *tempImage;
    char cbuf[100];
    char temp;

    // Make sure the file is there.
    if ((file = fopen(filename, "rb")) == NULL)
    {
        printf("File Not Found : %s\n", filename);
        exit(0);
    }

    // Seek through the ppm header, up to the width/height:
    fscanf(file, "%[^\n] ", cbuf);
    if (cbuf[0] != 'P' || cbuf[1] != '6') // P6 binary encoded, P3 ASCII
    {
        printf("%s is not a bin PPM file!\n", cbuf);
        exit(0);
    }
    fscanf(file, "%c", &temp);
    while (temp == '#')
    {
        fscanf(file, "%[^\n] ", cbuf);
        fscanf(file, "%c", &temp);
    }
    ungetc(temp, file);

    tempImage = (Image *) malloc (sizeof(Image));
    fscanf(file, "%lu %lu %d ",
           &tempImage->sizeX, &tempImage->sizeY, &k);

    // Calculate the size (assuming 24 bits or 3 bytes per pixel).
    size = tempImage->sizeX * tempImage->sizeY * 3;

    // Read the data.
    tempImage->data = (GLubyte *) malloc(sizeof(GLubyte) * size);
    fread(tempImage->data, sizeof(GLubyte), size, file);

    // Flip image upside down (rightside up)
    TextureImage = (Image *) malloc (sizeof(Image));
    TextureImage->sizeX = tempImage->sizeX;
    TextureImage->sizeY = tempImage->sizeY;


    TextureImage->data = (GLubyte *) malloc(sizeof(GLubyte) * size);

    int bob = 0;
    for (int r = tempImage->sizeY - 1; r >= 0; r--)
    {
        int R = r * tempImage->sizeX * 3;
        for (int c = 0; c < tempImage->sizeX * 3; c++)
        {
            TextureImage->data[R + c] = tempImage->data[bob];
            bob++;
        }
    }

    free(tempImage->data);
    free(tempImage);
    fclose(file); // Close the file and release the filedes
    return TextureImage;
}

/* BMP file loader */

// getint and getshort are helper functions to load the bitmap byte by byte
static unsigned int getint(FILE *fp)
{
    int c, c1, c2, c3;

    // Get 4 bytes
    c  = getc(fp);
    c1 = getc(fp);
    c2 = getc(fp);
    c3 = getc(fp);

    return ((unsigned int) c) +
           (((unsigned int) c1) << 8) +
           (((unsigned int) c2) << 16) +
           (((unsigned int) c3) << 24);
}

static unsigned int getshort(FILE *fp)
{
    int c, c1;

    // Get 2 bytes
    c  = getc(fp);
    c1 = getc(fp);

    return ((unsigned int) c) + (((unsigned int) c1) << 8);
}

Image* LoadBMP(const char *filename)
{
    Image *image;
    FILE *file;
    unsigned long size;        // Size of the image in bytes.
    unsigned long i;           // Standard counter.
    unsigned short int planes; // Number of planes in image (1)
    unsigned short int bpp;    // Number of bits per pixel (24)
    GLubyte *tempImage;        // Used to change BGR to RGB
    int bob = 0;


    // Make sure the file is there.
    if ((file = fopen(filename, "rb")) == NULL)
    {
        printf("File Not Found : %s\n", filename);
        return 0;
    }

    image = (Image *) malloc(sizeof(Image));

    // Seek through the bmp header, up to the width/height:
    fseek(file, 18, SEEK_CUR);

    // Read the width
    image->sizeX = getint(file);
    // Read the height
    image->sizeY = getint(file);

    // Calculate the size (assuming 24 bits or 3 bytes per pixel).
    size = image->sizeX * image->sizeY * 3;

    // Read the planes
    planes = getshort(file);
    if (planes != 1)
    {
        printf("Planes from %s is not 1: %u\n", filename, planes);
        return 0;
    }

    // Read the bpp
    bpp = getshort(file);
    if (bpp != 24)
    {
        printf("Bpp from %s is not 24: %u\n", filename, bpp);
        return 0;
    }

    // Seek past the rest of the bitmap header.
    fseek(file, 24, SEEK_CUR);

    // Read the data.
    image->data = (GLubyte *) malloc(sizeof(GLubyte) * size);
    tempImage = (GLubyte *) malloc(sizeof(GLubyte) * size);
    if ((image->data == NULL) || (tempImage == NULL))
    {
        printf("Error allocating memory for corrected image data");
        return 0;
    }
    if ((i = fread(tempImage, size, 1, file)) != 1)
    {
        printf("Error reading image data from %s.\n", filename);
        return 0;
    }

    // Reverse colors of image (BGR -> RGB)
    for (int r = image->sizeY - 1; r >= 0; r--)
    {
        for (int c = 0; c < image->sizeX; c++)
        {
            image->data[bob + 2] = tempImage[bob];
            image->data[bob + 1] = tempImage[bob + 1];
            image->data[bob]     = tempImage[bob + 2];
            bob += 3;
        }
    }

    // Close the file and release the filedes
    fclose(file);
    free(tempImage);
    return image;
}
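The byte-by-byte reads in getint and getshort exist because BMP header fields are stored little-endian on disk. The reconstruction can be checked in isolation; this sketch mirrors the same shifts (the helper name leUint32 is illustrative, not part of the thesis sources):

```cpp
#include <cassert>

// Reassemble a 32-bit little-endian value from four bytes, mirroring
// the shift-and-add arithmetic that getint() applies to the BMP width
// and height fields. (leUint32 is an illustrative name only.)
inline unsigned int leUint32(unsigned char c, unsigned char c1,
                             unsigned char c2, unsigned char c3)
{
    return ((unsigned int) c) +
           (((unsigned int) c1) << 8) +
           (((unsigned int) c2) << 16) +
           (((unsigned int) c3) << 24);
}
```

Reading one byte at a time this way keeps the loader independent of the host machine's own byte order.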


LoadTexture.h

#ifndef __LOADTEXTURE_H
#define __LOADTEXTURE_H

#include <windows.h>
#include <gl\gl.h>  // Header File For OpenGL (GLubyte)
#include <stdio.h>
#include "jpeglib.h"

/* Image type contains height, width, and data */
struct Image
{
    unsigned long sizeX;
    unsigned long sizeY;
    GLubyte *data;
};

/* Intermediate jpeg image type: row span, dimensions, and pixel data */
struct tImageJPG
{
    int rowSpan;
    int sizeX;
    int sizeY;
    unsigned char *data;
};

static unsigned int getint(FILE *fp);
static unsigned int getshort(FILE *fp);

void DecodeJPG(jpeg_decompress_struct* cinfo, tImageJPG *pImageData);
Image *LoadJPG(const char *filename);
Image *LoadPPM(const char *filename);
Image *LoadBMP(const char *filename);

#endif
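Both GrabAVIFrame and LoadBMP perform the same channel swap before handing pixels to OpenGL, because DrawDibDraw and BMP files produce BGR-ordered bytes while glTexImage2D with GL_RGB expects RGB. The swap can be sketched in isolation as an in-place variant, which is channel-wise equivalent to the buffer-to-buffer copies above (the function name bgrToRgb is illustrative, not part of the thesis sources):

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Swap the first and third byte of every 3-byte pixel, converting
// BGR-ordered data into the RGB order OpenGL expects for GL_RGB.
// (bgrToRgb is an illustrative name only.)
inline void bgrToRgb(std::vector<unsigned char>& pixels)
{
    for (std::size_t i = 0; i + 2 < pixels.size(); i += 3)
        std::swap(pixels[i], pixels[i + 2]);
}
```

Applying the swap twice returns the original buffer, which also makes the conversion easy to verify.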
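LoadJPG and LoadPPM both reverse the scanline order because JPEG and PPM store the top row first, whereas glTexImage2D treats the first row of the supplied buffer as the bottom of the texture. The flip can be sketched independently of any file format, assuming tightly packed 24-bit pixels as in the loaders above (the function name flipRows is illustrative, not part of the thesis sources):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Copy src into a new buffer with its scanlines in reverse order, as
// the flip loops in LoadJPG and LoadPPM do. Assumes tightly packed
// 24-bit pixels, so each row occupies width * 3 bytes.
// (flipRows is an illustrative name only.)
inline std::vector<unsigned char>
flipRows(const std::vector<unsigned char>& src, int width, int height)
{
    const int rowSpan = width * 3;
    std::vector<unsigned char> dst(src.size());
    for (int r = 0; r < height; ++r)
        std::copy(src.begin() + r * rowSpan,
                  src.begin() + (r + 1) * rowSpan,
                  dst.begin() + (height - 1 - r) * rowSpan);
    return dst;
}
```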


LIST OF REFERENCES

1. “Paradigm Shift in the Way that Doctors Are Trained: Press Release.” Mentice. 10 June 2004. Gothenburg, Sweden: Mentice Corporation. 26 July 2004. <http://www.mentice.com/sch/mentice.nsf/0/A286BD2BD10E4BF5C1256EAF002620F1>

2. Røtnes, Jan Sigurd, et al. “Digital Trainer Developed for Robotic Assisted Cardiac Surgery.” Medicine Meets Virtual Reality 2001. Ed. J.D. Westwood, et al. Vol. 81. Amsterdam: IOS Press, 2001. 424-430.

3. “HITLab Projects.” Human Interface Technology Lab – Research Areas. Seattle: University of Washington. 26 July 2004. <http://www.hitl.washington.edu/projects/>

4. Tendick, Frank, et al. “A Virtual Environment Testbed for Training Laparoscopic Surgical Skills.” Presence: Teleoperators and Virtual Environments 9.3 (2000): 236-255.

5. Eyal, Roy, and Frank Tendick. “Spatial Ability and Learning the Use of an Angled Laparoscope in a Virtual Environment.” Stud Health Technol Inform 81 (2001): 146-152.

6. Liu, Alan, et al. “A Survey of Surgical Simulation: Applications, Technology, and Education.” Presence: Teleoperators and Virtual Environments 12.6 (2003): 599-614.

7. Johnson, Linda. “Doctors Train on Virtual Patients.” Sydney Morning Herald. 23 July 2004. 26 July 2004. <http://www.smh.com.au/articles/2004/07/22/1090464787887.html>

8. “The SimSurgery Technology Suite.” SimSurgery. Oslo: SimSurgery. November 2001. 26 July 2004. <http://www.simsurgery.no/technology.html>

9. Røtnes, Jan Sigurd, et al. “A Tutorial Platform Suitable for Surgical Simulator Training (SimMentor).” Medicine Meets Virtual Reality 2002. Ed. J.D. Westwood, et al. Amsterdam: IOS Press, 2002. 419-425.

10. “Laparoscopic Surgery Training Products & Simulation Models.” Simulab Corporation. Seattle: Simulab. 30 June 2004. <http://www.simulab.com/LaparoscopicSurgery.htm>


11. “MicroScribe Digitizer Overview.” Immersion Corporation. San Jose, CA: Immersion. 30 June 2004. <http://www.immersion.com/digitizer/>

12. “PHANTOM Devices.” SensAble Technologies. Woburn, MA: SensAble. 26 July 2004. <http://www.sensable.com/products/phantom_ghost/phantom.asp>

13. Lok, Benjamin, et al. “Incorporating Dynamic Real Objects into Immersive Virtual Environments.” ACM Transactions on Graphics 22.3 (July 2003): 31-40.

14. “Trial Version Download.” IC Imaging Control. Charlotte, NC: The Imaging Source Europe GmbH. 20 June 2002. 30 June 2004. <http://www.imagingcontrol.com/ic/downloads/trial/>

15. “DirectX 9.0 SDK Update (Summer 2003).” Microsoft Download Center. Redmond, WA: Microsoft Corporation. 26 July 2004. <http://www.microsoft.com/downloads/details.aspx?FamilyID=9216652f-51e0-402e-b7b5-feb68d00f298&displaylang=en>

16. Foshee, Jacob. “GLUT Using with Visual Studio .NET 2003.” CSG Helpdesk. College Station, TX: TAMU Computer Science. 2003. 26 July 2004. <http://helpdesk.cs.tamu.edu/docs/glut_Visual_Studio2003>

17. Shin, Min C. “ITCS 6134/8134 Digital Image Processing – Spring 2004.” College of Engineering. Charlotte, NC: UNC Charlotte. 26 July 2004. <http://www.coe.uncc.edu/~mcshin/dip-g/>

18. Motta, Giovanni. “COSI 155B: Computer Graphics.” Michtom School of Computer Science. Waltham, MA: Brandeis University. 2003. 26 July 2004. <http://www.cs.brandeis.edu/~cs155/>

19. Fletcher, Robert P. “Introduction to OpenGL, Texture Maps.” Computing Services: Programming in OpenGL. York, UK: University of York. 26 July 2004. <http://www.york.ac.uk/services/cserv/sw/graphics/OPENGL/L16.html>

20. Humphrey, Ben. “Texture Mapping Part 3 (JPEG).” GameTutorials. 23 April 2002. 26 July 2004. <http://www.gametutorials.com/Tutorials/OpenGL/OpenGL_Pg2.htm>

21. Molofee, Jeff. “OpenGL Lesson #35.” NeHe Productions. 2001. 26 July 2004. <http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=35>


BIOGRAPHICAL SKETCH

William "Billy" Wilson was born in East Ridge, Tennessee, on January 14, 1979, and spent his formative years in Jacksonville, Florida. Early in life, Billy suffered a tragedy when his father passed away. Psychologists note that people who hold onto good memories often achieve greatness, while those who simply dismiss them do not. Motivated to succeed, Billy graduated from Wolfson High School with the highest GPA in a class of 400.

He entered the Honors Program at the University of Florida, and later transferred to Florida State University (FSU) to study film. He earned a Bachelor of Science degree in computer science, cum laude, from FSU, with a minor in film. As a Phi Beta Kappa member, Billy received graduate assistantship offers that helped him pursue his goal of working in media. He ultimately accepted an assistantship in Digital Arts and Sciences at UF.

Like those touched by tragedy before him, Billy continues his efforts to create a better world. He keeps fresh the memory of his computer wizard father, who inspires him still.

Be courageous. Be brave like your fathers before you. Have faith and go forward.
Thomas Alva Edison