Citation
Distributed Virtual Rehearsals

Material Information

Title:
Distributed Virtual Rehearsals
Creator:
MORA, GEORGE (Author, Primary)
Copyright Date:
2008

Subjects

Subjects / Keywords:
Acting (jstor)
Actors (jstor)
Facial expressions (jstor)
Gestures (jstor)
Questionnaires (jstor)
Rehearsal (jstor)
Shoulder (jstor)
Theater rehearsal (jstor)
Virtual avatars (jstor)
Virtual reality (jstor)

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
Copyright George Mora. Permission granted to University of Florida to digitize and display this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Embargo Date:
12/18/2004
Resource Identifier:
57735028 (OCLC)

Full Text

DISTRIBUTED VIRTUAL REHEARSALS


By

GEORGE MORA


A THESIS PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE

UNIVERSITY OF FLORIDA


2004


Copyright 2004

by

George Mora


To my wife, Maria,
My parents, Jorge and Johanna Mora
And my family and friends
For their constant support and encouragement


ACKNOWLEDGMENTS

I would like to thank my thesis committee chairman, Dr. Benjamin C. Lok, for his

enthusiasm and interest in this project, as well as for keeping me motivated and on track.

I would also like to thank James C. Oliverio for being on my committee and for the

constant support, advice, and opportunities he has provided for me. I also give much

thanks to Dr. Jorg Peters for supporting both my undergraduate and graduate final

projects.

This thesis was completed with the help of several people. My gratitude goes out

to Jonathan Jackson, Kai Bernal, Bob Dubois, Kyle Johnsen, Cyrus Harrison, Andy

Quay, and Lauren Vogelbaum. This thesis would not have been possible without their

help.

I would like to thank my parents, Jorge and Johanna Mora, for always

encouraging me to grow both intellectually and creatively. Finally, I would like to thank

my wife, Maria Mora, for her unending love, support, and understanding.


TABLE OF CONTENTS


page

ACKNOWLEDGMENTS ................................................... iv

LIST OF FIGURES ................................................... vii

ABSTRACT .......................................................... viii

CHAPTER

1 INTRODUCTION ..................................................... 1

    1.1 Motivation ................................................ 1
    1.2 Challenges ................................................ 4
    1.3 Project Goals ............................................. 4
    1.4 Organization of Thesis .................................... 4
    1.5 Thesis Statement .......................................... 5
    1.6 Approach .................................................. 5

2 PREVIOUS WORK .................................................... 7

    2.1 Distributed Performance ................................... 7
    2.2 Virtual Reality ........................................... 8
    2.3 Digital Characters and Avatars ............................ 10

3 APPLICATION ...................................................... 13

    3.1 Scene Design and Experience Development ................... 13
    3.2 Tracking the Actors ....................................... 15
    3.3 Putting It All Together ................................... 17
    3.4 Final Software and Hardware Setup ......................... 18

4 RESULTS .......................................................... 22

    4.1 Description of Studies .................................... 22
    4.2 Reaction from Actors ...................................... 23
    4.3 Results ................................................... 24
        4.3.1 Virtual Reality Can Be Used to Bring Actors Together
              for a Successful Rehearsal .......................... 24
        4.3.2 Lack of Presence Distracted the Actors .............. 26
        4.3.3 Improvements Should Be Made to the System ........... 27

5 CONCLUSION ....................................................... 31

    5.1 Usefulness to the Acting Community ........................ 31
    5.2 Future Work ............................................... 31
    5.3 Future Applications ....................................... 33

APPENDIX: STUDY QUESTIONNAIRES ..................................... 34

    A.1 Co-presence Questionnaire ................................. 34
    A.2 Presence Questionnaire .................................... 37

LIST OF REFERENCES ................................................. 39

BIOGRAPHICAL SKETCH ................................................ 42


LIST OF FIGURES


Figure                                                                page

1-1. Two actors rehearsing in a virtual environment ..................... 2

3-1. A participant wearing the colored felt straps ...................... 16

3-2. A participant testing out the system ............................... 19

3-3. Sample screenshot demonstrating the virtual script system .......... 19

3-4. Data flow for both rendering systems ............................... 20

3-5. Hardware setup for each location ................................... 21

4-1. The location of each actor on the University of Florida campus ..... 23

4-2. Results of the co-presence questionnaire administered during the
     first study ........................................................ 28

4-3. Results of the presence and co-presence questionnaires, second
     study .............................................................. 28

4-4. Results of the presence and co-presence questionnaires, third
     study .............................................................. 29

4-5. Comparison between question averages for the presence
     questionnaire ...................................................... 29

4-6. Comparison between question averages for the co-presence
     questionnaire ...................................................... 30


Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Science

DISTRIBUTED VIRTUAL REHEARSALS

By

George Mora

December 2004

Chair: Benjamin C. Lok
Major Department: Computer and Information Science and Engineering

Acting rehearsals with multiple actors are limited by many factors. Physical

presence is the most obvious, especially in a conversation between two or more

characters. Cost is an obstacle that primarily affects actors who are in different locations.

This cost consists of travel and living expenses. Preparation time is another hindrance,

especially for performances involving elaborate costumes and intricate makeup. Many

recent high-budget motion pictures require that key actors go through several hours of

makeup application to complete their character's look.

Virtual reality can bring actors together to rehearse their scene in a shared

environment. Since virtual reality elicits emotions and a sense of perceived presence

from its users, actors should be able to successfully rehearse in a virtual environment.

This environment can range from an empty space to a fully realized set depending on the

director's imagination and the project's scope.


Actors' movements will be tracked and applied to a digital character, creating a

virtual representation of the character. The digital character will resemble the actor--in

full costume and makeup. In the virtual environment, each actor will see (in real-time)

the character being controlled by their acting partner.

The goal is to show that multiple actors can use a shared virtual environment as

an effective acting rehearsal tool. This project will also demonstrate that actors can hone

their skills from remote locations through virtual reality, and serve as a foundation for

future applications that enhance the virtual acting paradigm.


CHAPTER 1
INTRODUCTION

1.1 Motivation

Acting rehearsal is the process by which actors refine their acting skills and

practice scenes for future public performances. These rehearsals traditionally occur on a

stage with the principal actors and the director physically present. Although costumes

and makeup are not essential until the final few rehearsals (called dress rehearsals), a

functional set is important for determining when and where to move (known as

movement blocking).

There are several variations on the standard rehearsal. During the pre-production

stage, a "read through" or "reading" is scheduled to familiarize the actors with the script

and each other. Typically, actors are physically present in a conference room, although

this can be accomplished through a video or telephone conference. After the

"reading", a blocking rehearsal will help choreograph the actors' movements.

"Blocking" rehearsals usually take place on a stage or practice set, since its dimensions

affect the choreography of a production. "Polishing and Building" rehearsals take up the

majority of the total rehearsal time. During these rehearsals, actors perfect their

performance and work out any major problems. The final rehearsals (dress and technical

rehearsals) involve practicing the performance in full costume and makeup with complete

lighting, sound, and props on a finished set.


Currently, a "reading" is the only rehearsal method which does not need an

actor's physical presence. The "reading" does not require that actors wear

costume/makeup or move on an assembled stage. Therefore it could be performed over

the telephone. One could argue that distributed rehearsals could be easily achieved

through video conferencing. However, the cost and availability of a system that could

deliver satisfying results in terms of video/audio quality, bandwidth, and robustness make

video conferencing a poor choice for effective distributed rehearsals.

Allowing digital characters to represent an actor in a shared immersive virtual

environment increases the number of conditions under which an acting rehearsal can

occur. Physical presence, preparation time, and cost would no longer limit rehearsals.

This would allow multiple actors from anywhere in the world to meet and rehearse a

scene before there are costumes or constructed sets.


Figure 1-1. Two actors rehearsing in a virtual environment. Actor 1 controls the
movements of Character 1 (Morpheus), while Actor 2 controls the movements of
Character 2 (Neo).

Meeting in a virtual space also gives actors the added advantage of

virtual reality interaction. Such interaction includes stereoscopic vision, gaze tracking,

and easy prop and set maintenance. Stereoscopic vision allows the actor to see the acting


partner, set, and props in three dimensions. Gaze tracking changes the display based on

the location of the actor's head and the direction he or she looks. Prop and set

maintenance allow one to move, rotate, and replace any prop or piece of scenery.

Consideration must be given to acting theory since the actor's expressions will be

conveyed through the avatar. The form of expression this thesis focuses on is gestures.

Kinesics encompasses all non-verbal forms of communication, including gestures,

body language, and facial expressions. There are several categories of kinesics:

Emblems: non-verbal messages with a verbal counterpart.

Illustrators: gestures associated with verbal messages.

Affective Displays: gestures or expressions conveying emotion.

Regulators: non-verbal signs that maintain the flow of a conversation.

Adaptors: small changes in composure that subconsciously convey mood [1].

Communication on stage mimics communication in real life. Therefore, the

relationship of kinesics to acting is obvious. Actors must pay special attention to their

movement, gestures, facial expressions, and body language in relation to what they are

saying or doing. Ryan reaffirms the connection between kinesics and acting:

The informal spatial code relates to the movement of the body on stage including

facial expression, gestures, formations of bodies (i.e. patterns, shapes), and

sometimes includes moving set and pieces of furniture.

Ryan also lists several properties of kinesics use on stage:

Gestures can't stand alone.

Gestures can't be separated from the general continuum of motion.

Gesture is the primary mode of ostending (i.e. showing) the body on stage [2].


1.2 Challenges

Actors are accustomed to being physically present with other actors on real sets.

For a virtual environment to be effective in improving acting skills, the actor must

experience a strong sense of being in the same space as the other actor (referred to as a

sense of co-presence).

Some challenges faced when trying to achieve this sense of co-presence include:

Keeping the actor comfortable and as unaware as possible that their movements are
being tracked.

Ensuring high audio quality to simulate hearing the other voice in the same room.

Placing the cameras, projector, and screen in such a way that the actor has a clear
view of the screen while still being accurately tracked.

Providing realistic models and textures for the virtual environment.

Having the character exhibit human-like behavior and expressions.

Ensuring high-speed data transmission between systems.

1.3 Project Goals

This project seeks to enhance the fields of virtual environments research and

acting theory in the following ways:

Demonstrate that digital characters in virtual environments allow for effective
distributed rehearsals.

Provide a prototype system to allow actors to interact with each other virtually in
full costume/makeup across long distances.

1.4 Organization of Thesis

This thesis is organized into the following sections:

Introduction. Specifies the need for this research, obstacles in completing the
project, and the ultimate goals for this thesis.


Previous Work. Describes the research and work that served as both inspiration
and a foundation for this project.

Application. Details the process of creating the application that demonstrates the
ideas presented in this thesis.

Results. Discusses the results of a survey of several actors who tested the system
to rehearse a simple scene.

Conclusion. Summarizes results and lists future work and applications of the
research.

1.5 Thesis Statement

Distributed Virtual Rehearsals can bring actors together from different locations

to successfully rehearse a scene.

1.6 Approach

My approach is to build a system that will allow two or more actors to rehearse a

scene in a virtual world. Digital characters that resemble the actor in full costume and

makeup will represent the actor. The actor's movements will be tracked and will directly

affect the movements of the digital character. The actor will see digital characters

controlled by other actors in remote locations.

Since the data required for rendering the characters and props exists on local

machines, the only information that needs to be sent is the tracking data for each actor's

movements. The tracking data for each body part is contained in a text message

composed of several (three for position data only) floating point numbers. The

information can be transmitted efficiently, which reduces lag.
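
To make the size of this data concrete, here is a minimal sketch of how such a
per-body-part message could be packed and parsed. The field layout and function
names are assumptions for illustration, not the system's actual format.

    #include <cstddef>
    #include <cstdio>

    // Pack one tracked body part into a short text message:
    // a label followed by three floats (position-only data).
    int packPosition(char* buf, std::size_t n, const char* part,
                     float x, float y, float z)
    {
        return std::snprintf(buf, n, "%s %.4f %.4f %.4f", part, x, y, z);
    }

    // Parse the same message on the receiving side; 'part' must
    // point to a buffer of at least 32 characters.
    bool unpackPosition(const char* buf, char* part,
                        float* x, float* y, float* z)
    {
        return std::sscanf(buf, "%31s %f %f %f", part, x, y, z) == 4;
    }

A message of this form is only a few dozen bytes per body part per frame, which
is why the tracking data can be transmitted with very little lag.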

The system for this rehearsal setup (per actor):

Projector-based display system with a large projection screen.

A well-lit room large enough for the actor to perform the scene.


* Two web cameras connected to two PCs.

* One rendering PC.

* High-speed network connecting each system.

* Several different colored straps attached to the actor's body.

* A headset with built-in microphone (wireless if extensive body movement is
required).


CHAPTER 2
PREVIOUS WORK

2.1 Distributed Performance

Distributed performance refers to media-centered events and actions that affect

each other yet occur in different locations. This could range from a simple telephone

call to a complex massive multiplayer online role-playing game (MMORPG).

In the study Acting in Virtual Reality, distributed performance brings several

actors and directors together to rehearse a short play. Each participant interacts with the

others through networked computers and his/her own non-immersive display. Semi-

realistic avatars represent the actors while the director is only heard over the sound

system. The study proved successful in allowing actors to rehearse in a shared virtual

environment. "A performance level was reached in the virtual rehearsal which formed

the basis of a successful live performance, one that could not have been achieved by

learning of lines or video conferencing" [3].

Networked virtual environments are also used in the Collaboration in Tele-

Immersive Environments project at the University of London and the University of North

Carolina at Chapel Hill. This project investigated if virtual environments could facilitate

finishing a collaborative task in virtual reality. The task involved two people carrying a

stretcher along a path and into a building. Their result indicated that realistic interaction

in a virtual environment over a high-speed network--while possible--still suffers from

tracking delays, packet losses, and difficulty sharing


control of objects. "The data suggests that in order to have a sense of being with another

person, it is vital that the system 'works' in the sense that people have an impression of

being able to actually do what they wish to do" [4].

Dancing Beyond Boundaries involves the use of video conferencing over a high-

speed network as a method of distributed performance. This piece used Access Grid

technology and an Internet2 connection to allow dancers and musicians from four

different locations across North and South America to interact with each other. "Thus the

combination of multi-modal information from the four nodes created a 'virtual studio'

that could also be termed a distributed virtual environment, though perhaps not in the

usual sense" [5].

An important aspect of distributed performance is the state of co-presence that is

achieved. Co-presence is the sense of being in the same space with other people.

Distributed collaboration has been shown to be successful since it achieves a high degree of

co-presence. It has been shown that the level of immersion is related to a user's sense of

presence and co-presence. The level of immersion also relates to leadership roles in

group settings [6].

2.2 Virtual Reality

Frederick P. Brooks, Jr. defines a virtual reality experience as "any in which the user

is effectively immersed in a responsive virtual world. This implies user dynamic control

of viewpoint." Effective immersion has been achieved through the use of novel display

and interaction systems [7].

Immersive displays create a sense of presence through multi-sensory stimulation.

Previous examples of these systems include head-mounted displays, CAVE (Cave-like



Automatic Virtual Environment) systems, projector-based displays, and computer

monitors. Several of the goals which inspired the creation of the CAVE system can be

applied to most other immersive display systems:

The desire for higher-resolution color images and a large field of view without
geometric distortion.

The ability to mix VR imagery with real devices (like one's hand, for instance).

The opportunity to use virtual environments to guide and teach others [8].

Effective interaction is as important as a novel display system in creating an

immersive virtual environment. Successful interaction involves allowing the user to

control the view of the environment or objects inside the environment. The degree to

which presence is experienced depends on how well the interface imitates "real world"

interaction. "A defining feature of virtual reality (VR) is the ability to manipulate virtual

objects interactively, rather than simply viewing a passive environment" [9].

Motion tracking provides a realistic means of interacting with a virtual

environment. It can adjust the view of the environment, manipulate objects in the

environment, and trigger visual and aural cues based on gaze and gesture recognition.

Motion tracking is often used in head-mounted display systems where the user's position

and orientation affect what the user sees and hears in the environment. "Although stereo

presentation is important to the three-dimensional illusion, it is less important than the

change that takes place in the image when the observer moves his head. The image

presented by the three-dimensional display must change in exactly the way that the image

of a real object would change for similar motions of the user's head" [10].


There are many commercial motion tracking devices:

Polhemus FASTRAK: uses several electromagnetic coils per tracker to transmit
position and orientation information to a receiver.

InterSense InertiaCube: a small orientation tracker that provides 3 degrees of
freedom (yaw, pitch, and roll) and allows for full 360° rotation about each axis.

HiBall-3100: a wide-area position and orientation tracker that uses hundreds of
infrared beacons on the ceiling and 6 lenses on the HiBall Sensor to track the user.

These commercial solutions are highly accurate and produce very low latency.

However, they are expensive and oftentimes encumber the user [11, 12, 13].

Motion tracking has been applied to avatar movement by tracking colored

straps. In Straps: A Simple Method for Placing Dynamic Avatars in an Immersive Virtual

Environment, colored straps are attached to the user's legs to accurately represent their

movement through an avatar in the virtual environment. The straps system has two major

advantages over other tracking systems:

Freedom: there are no encumbering cables, which reduces system complexity,
accident hazards, and equipment failure.

Simplicity: the colored straps are cheap, easy to create, and contain no moving
parts or electronics [14].

2.3 Digital Characters and Avatars

Inserting digital characters into virtual environments can make the experience

much more realistic to the user. "The user's more natural perception of each other (and

of autonomous actors) increases their sense of being together, and thus the overall sense

of shared presence in the environment" [15].


An avatar is the representation of oneself in a virtual environment. In an ideally

tracked environment, the avatar would follow the user's movements exactly. Slater and

Usoh discuss the influence of virtual bodies on their human counterpart:

The essence of Virtual Reality is that we (individual, group, simultaneously,

asynchronously) are transported bodily to a computer generated environment. We

recognize habitation of others through the representation of their own bodies.

This way of thinking can result in quite revolutionary forms of virtual

communication [16].

Digital character realism affects the amount of immersion experienced by the user

in a virtual environment. This realism is manifested visually, behaviorally, and audibly.

A break in presence (losing the feeling of presence) can occur if someone in a virtual

environment is distracted by a lack of realism in an avatar. In The Impact of Avatar

Realism on Perceived Quality of Communication in a Shared Immersive Virtual

Environment, avatar realism and a conversation-sensitive eye motion model are tested to

determine their effect on presence. "We conclude that independent of head-tracking,

inferred eye animations can have a significant positive effect on participants' responses to

an immersive interaction. The caveat is that they must have a certain degree of visual

realism" [17].

Even without realistic avatars, users can still be greatly affected by other users'

digital characters, as well as their own avatar. Establishing a sense of presence increases

the chances of a participant becoming attached to avatars in the virtual space. Emotional

attachment to avatars was a surprising result of the study Small Group Behavior in a

Virtual and Real Environment: A Comparative Study. "Although, except by inference, the


individuals were not aware of the appearance of their own body, they seemed to generally

respect the avatars of others, trying to avoid passing through them, and sometimes

apologizing when they did so." The avatars used in the study were simple models

associated with a unique color [18].

Digital characters have been successfully integrated into real environments using

computer vision and camera tracking techniques. These characters are used partly as

"virtual teachers" that train factory workers to operate some of the machinery. The

virtual humans pass on knowledge to participants using an augmented reality system.

Although the characters are automated, a training specialist can control them from a

different location via a networked application [19].

Digital character realism has been integrated into the character rendering system

created by Haptek. This system can integrate multiple highly realistic and customizable

characters into a virtual environment. These characters also act realistically (i.e. blinking,

looking around, and shifting weight). The Haptek system allows these characters to be

used as avatars, or as autonomous virtual humans [20].

There are many other commercially available character-rendering systems:

UGS Corp. Jack-usability, performance, and comfort evaluation using digital
characters that are incorporated into virtual environments.

Boston Dynamics DI-Guy-real-time human simulation used in military
simulations created by all branches of the United States Armed Forces.

VQ Interactive BOTizen-online customer support conducted by digital
characters. Characters respond to queries using a text-to-speech engine [21, 22,
23].


CHAPTER 3
APPLICATION

3.1 Scene Design and Experience Development

The first step in creating the distributed rehearsal system was to choose a sample

scene. This scene would determine character design, set design, and the dialogue.

Several factors influenced the decision:

Familiarity: Since the actors would be testing the system without reading the
script beforehand, the scene needed to be immediately accessible to most actors.

Ease of tracking: The system is a prototype; therefore, extensive tracking would
be beyond its scope. Additionally, the acting focuses on kinesics, so gesture
tracking is the only requirement. The scene would involve characters that stay
relatively still, acting primarily with gestures.

Interesting set and characters: Presence is one of the main factors measured
when evaluating virtual environment systems. Incorporating stimulating digital
characters and sets into an environment can help achieve presence.

Several scenes were evaluated using the above criteria. The scenes included the

"balcony scene" from Romeo and Juliet, the "heads and tails" scene from Rosencrantz

and Guildenstern are Dead, and the "red pill, blue pill" scene from The Matrix. The "red

pill, blue pill" scene was chosen because it is a very familiar scene that few actors would

have previously rehearsed.

Once the scene was selected, the characters and set needed to be constructed. The

modeling software 3D Studio Max was used to create the set. The set consisted of a dark



room with a fireplace, two red chairs and a small white table. The two main characters,

Neo and Morpheus, were created using the base Haptek male character model and

adjusting the texture accordingly.

The characters and environments, after being fully modeled and textured, were

then exported into a format that could be incorporated into a graphics engine. The 3DS

file format was chosen and then read into an OpenGL application along with each

character. Lighting was set up to reflect the locations of the physical lights in the scene.

It was important to be able to manipulate the character's skeleton in real-time.

Therefore, each joint in the character needed to be explicitly controlled. Haptek uses a

separate joint for each degree of freedom that exists in a joint. The shoulders, elbows,

and neck each have three joints. For simplicity, two joints were used for the neck, two

for each shoulder, and one for each elbow.
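
As an illustration of this layout, the sketch below models the eight explicitly
controlled rotations; the enumeration and pose structure are hypothetical
stand-ins for this description, not the actual Haptek API.

    #include <cstring>

    // The explicitly controlled rotations described above: two for the
    // neck, two per shoulder, and one per elbow (names are illustrative).
    enum Joint {
        NECK_YAW, NECK_PITCH,
        L_SHOULDER_PITCH, L_SHOULDER_YAW, L_ELBOW_BEND,
        R_SHOULDER_PITCH, R_SHOULDER_YAW, R_ELBOW_BEND,
        JOINT_COUNT
    };

    struct CharacterPose {
        float angleDeg[JOINT_COUNT]; // one rotation per degree of freedom

        void clear() { std::memset(angleDeg, 0, sizeof angleDeg); }
    };
    // Each frame, the tracked angles are written into a CharacterPose and
    // then pushed joint-by-joint to the character's skeleton.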

Special attention went toward developing aspects of the system that would

enhance the user's experience. The actor sat on a cushioned chair in front of a large

surface onto which the scene is projected. The setup was designed to physically immerse

the actors in an environment similar to the one used in the designated scene. Each rendering

system had the option to allow each actor to use his/her head movement to affect his/her

camera's viewpoint. This simulates an actor looking through a large hole in the wall at

the other actor--if the actors tilt their heads to the side, their viewpoints rotate slightly

and allow them to see more of the room on the other side of the wall. The experience

began with a clip from The Matrix that leads into the scene. These were efforts to

increase the sense of presence each actor experiences.


3.2 Tracking the Actors

Setting up the tracking system required two cameras, two PCs (one for each

camera), colored paper or cloth for the straps, and sufficient USB or Firewire cable to

accommodate the setup. The tracking system worked under different lighting conditions

provided adequate training was performed on each camera. Training consisted of acquiring

many pictures of each strap and the background and determining the range of the colors

that belong to each strap. An application provided with the tracking system

accomplished most of this process.
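
A minimal sketch of the per-strap color model such a training pass might
produce, assuming simple inclusive RGB bounds; the structure and names are
illustrative, not the tracker's actual implementation.

    #include <algorithm>

    // Inclusive RGB bounds learned for one strap during training.
    struct ColorRange {
        unsigned char rMin = 255, rMax = 0;
        unsigned char gMin = 255, gMax = 0;
        unsigned char bMin = 255, bMax = 0;

        // Widen the bounds to cover one sample pixel of the strap.
        void train(unsigned char r, unsigned char g, unsigned char b) {
            rMin = std::min(rMin, r); rMax = std::max(rMax, r);
            gMin = std::min(gMin, g); gMax = std::max(gMax, g);
            bMin = std::min(bMin, b); bMax = std::max(bMax, b);
        }

        // True if a camera pixel falls inside the learned range.
        bool matches(unsigned char r, unsigned char g, unsigned char b) const {
            return r >= rMin && r <= rMax &&
                   g >= gMin && g <= gMax &&
                   b >= bMin && b <= bMax;
        }
    };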

Two sets of straps were created for the system. The first set of straps consisted of

colored pieces of paper fastened with tape. These straps were used in the first study and

the participants suggested using a different material because the paper was

"uncomfortable and sometimes distracting." The second set was constructed with

colored pieces of felt to increase the comfort level of each participant. These straps were

fastened with small strips of Velcro. Figure 3-1 shows the second set of straps attached

to a participant.

The tracking system on each PC then transmitted the two-dimensional coordinates

of each strap to a client computer. Tsai's camera calibration algorithm is used to

calibrate each camera and recover extrinsic and intrinsic camera parameters [24].

Calibration was achieved by explicitly mapping two-dimensional sample picture

coordinates to their three-dimensional equivalents. This provided a configuration file for

use in the client computer.
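
For illustration, the sample mapping could be stored as a list of
correspondences like the following; the layout is an assumption, since the
actual configuration file format is not documented here.

    // One calibration sample: a picture-space point paired with its
    // known world-space location, as consumed by Tsai's method.
    struct CalibrationSample {
        float px, py;     // 2D picture coordinates (pixels)
        float wx, wy, wz; // 3D world coordinates
    };
    // Each camera gets its own list of samples; Tsai's algorithm then fits
    // the extrinsic (pose) and intrinsic (focal length, distortion)
    // parameters that best explain the 2D observations of the 3D points.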

Once the rendering system was receiving correct tracking values from the straps

system, these values needed to be appropriately mapped to the digital character's


movements. The system first saved the fixed locations of each shoulder. Then, after

instructing the user to place their hands directly in front of them with their elbows fully

extended, the system determined their arm length and head height (distance from their

neck to their forehead). The system used the actor's arm length, head height, and

shoulder width constants to appropriately displace the digital character's hands and head.


Figure 3-1. A participant wearing the colored felt straps.

Forward shoulder joint animation (vertical movement) was accomplished by

determining the angle of displacement that a line passing from the shoulder to the hand

would create from a central horizontal line. The distance from the shoulder to the hand

determines the amount of elbow bend that is required. For instance, if the hand is arm's

length away from the shoulder, the elbow wouldn't be bent at all. Conversely, if the hand

was located adjacent to the shoulder, the elbow would be fully bent. Finally, shoulder


turn (horizontal) was calculated by determining the angle of displacement the hand would

make from a central vertical line.
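
A sketch of this mapping, assuming a linear interpolation for the elbow and
angles measured in degrees; the vector type and helper names are illustrative.

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    struct ArmAngles { float shoulderPitch, shoulderYaw, elbowBend; };

    ArmAngles mapArm(const Vec3& shoulder, const Vec3& hand, float armLength)
    {
        const float RAD2DEG = 180.0f / 3.14159265f;
        float dx = hand.x - shoulder.x;
        float dy = hand.y - shoulder.y;
        float dz = hand.z - shoulder.z;
        float horiz = std::sqrt(dx * dx + dz * dz);
        float dist  = std::sqrt(dx * dx + dy * dy + dz * dz);

        ArmAngles a;
        // Vertical movement: angle of the shoulder-to-hand line above
        // or below a central horizontal line.
        a.shoulderPitch = std::atan2(dy, horiz) * RAD2DEG;
        // Horizontal turn: angle of the hand from a central vertical line.
        a.shoulderYaw = std::atan2(dx, dz) * RAD2DEG;
        // Elbow: straight at full arm's length, fully bent when the hand
        // is adjacent to the shoulder (180 degrees here is an assumption).
        float t = 1.0f - std::min(dist / armLength, 1.0f);
        a.elbowBend = t * 180.0f;
        return a;
    }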

3.3 Putting It All Together

VRPN was used to connect and transfer text messages between the tracking

system and the rendering system as well as between each rendering system. The tracking

system sends a text message containing the two-dimensional coordinates for each color

detected along with the width and height of the image in pixels. The rendering system

receives these values and, combined with the values from the second tracking system,

uses Tsai's algorithm for recovering the three-dimensional coordinates [24].

Once calibration has finished and the actors are accustomed to using the system,

the tracked data is shared between rendering systems via VRPN text messages. The text

message contains the two angles for each shoulder and for the neck, the bend angle for

the elbow, and the speaking state. The speaking state determines which actor is currently

speaking; this is used with the lip-syncing and virtual script systems.
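
A minimal sketch of packing and parsing that per-frame message; the field order
and the per-arm elbow layout are assumptions, and only the listed contents come
from the description above.

    #include <cstddef>
    #include <cstdio>

    struct PoseMessage {
        float shoulderL[2], shoulderR[2], neck[2]; // two angles each
        float elbowL, elbowR;                      // bend angle per elbow
        int   speaking;                            // 1 if this actor is speaking
    };

    int packPose(char* buf, std::size_t n, const PoseMessage& p)
    {
        return std::snprintf(buf, n, "%f %f %f %f %f %f %f %f %d",
            p.shoulderL[0], p.shoulderL[1], p.shoulderR[0], p.shoulderR[1],
            p.neck[0], p.neck[1], p.elbowL, p.elbowR, p.speaking);
    }

    bool unpackPose(const char* buf, PoseMessage& p)
    {
        return std::sscanf(buf, "%f %f %f %f %f %f %f %f %d",
            &p.shoulderL[0], &p.shoulderL[1], &p.shoulderR[0], &p.shoulderR[1],
            &p.neck[0], &p.neck[1], &p.elbowL, &p.elbowR, &p.speaking) == 9;
    }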

Voice acting is an important aspect of rehearsal. Therefore it was necessary to

implement a system that allowed the actors to transmit their voice to their partner.

Headsets with built-in microphones were used. The headsets had a behind-the-neck

design so they would not interfere with the forehead strap. Voice was transmitted using

DirectPlay.

Instead of using a traditional physical script, a virtual script system allowed the

actors to read their lines without having to look away from the display. This system

displayed the actor's current line on the bottom of the screen when their speaking state is

true. Incorporating the virtual script system introduced the problem of determining when


to proceed with the next line. Originally, the actor would use their foot to press a key that

would trigger the speak state to false and send a message to the remote rendering system

to change its speak state to true. However, this hindered presence and lowered the

comfort level. It was decided to have the system operator, who calibrated the system

and trained the actor to use the system, manually switch the speak state to false when a

line was finished.

3.4 Final Software and Hardware Setup

The final hardware setup used to test the system was composed of the following (for

each location):

3 Dell PCs

2 OrangeMicro USB 2.0 web-cameras

Sony projector

Colored felt straps with Velcro attachments

GE stereo PC headset

Cushioned chair

The participant sat facing a large projection screen. The Sony projector was

placed under each participant's seat. The two web-cameras were each attached to a Dell

PC running the Straps software. These PCs had the VRPN and Imaging Control libraries

installed. The rendering PC connected to the projector ran the rehearsal software. This

PC had the VRPN and Haptek libraries installed. Figure 3-5 shows a diagram of the final

hardware setup used for each study.


Figure 3-2. A participant testing out the system.


Figure 3-3. Sample screenshot demonstrating the virtual script system.


Figure 3-4. Data flow for both rendering systems.

Figure 3-5. Hardware setup for each location.


CHAPTER 4
RESULTS

4.1 Description of Studies

The system was evaluated using three studies, each with two actors. The first

study was conducted before the system was fully operational. The second and third

studies were conducted using the complete system. The aspects of the system that

weren't incorporated into the first study included the introductory movie, head-controlled

viewpoint, and accurate hand tracking.

The participants (4 females and 2 males) ranged in age from 18 to 20. They had

significant acting experience. Before each study, each participant was given a small

tutorial on how the system worked, a brief overview of its limitations, and some time to

see his/her character being manipulated by his/her movements. The participants then

watched the introductory movie and rehearsed the scene provided for them. When the

scripted lines ended, the participants were given time to ad-lib. Each study concluded by

having participants fill out a presence and co-presence questionnaire.

All three sets of participants were given the co-presence questionnaire used in

Collaboration in Tele-Immersive Environments. This questionnaire gauges the degree to

which each participant felt they were "present" in the virtual environment with the other

participant. The last two sets of participants were also given the Slater, Usoh and Steed

(SUS) Presence questionnaire. The SUS questionnaire is commonly used to assess



virtual environment immersion. Along with each questionnaire, participants were asked

to specify their level of computer literacy and their level of experience with virtual

reality. The Appendix contains both questionnaires. An informal verbal debriefing

followed the questionnaires.


Actor 1 located at the

Lab in the CISE Building.












Actor 2 located at the
REVE Polymodal Imrmersive
Theater in the Norman Gymr.



Figure 4-1. The location of each actor on the University of Florida campus.

4.2 Reaction from Actors

The participants from the first study appeared to be initially frustrated with the

inaccurate hand tracking, although with some practice they compensated for it. One

participant used exaggerated gestures to counteract the limited forward/backward

character movement. During the ad-lib portion, the participants spontaneously began a

slapping duel with each other that consisted of one person moving their arm in a slapping

motion and the other moving their head when hit, and vice versa.

The second and third set of study participants quickly became adept at using the

system. They seemed very comfortable working through the scene despite having little



or no virtual reality experience. The introductory movie did not appear to significantly

affect the participants' experience. The ad-lib session flowed seamlessly from the scripted

section. The participants seemed to be highly engrossed in the experience, as evidenced by

the fact that all four participants prolonged the ad-lib session for more than 5 minutes.

4.3 Results

The results from the questionnaires and the debriefing can be organized into the

following three categories:

Virtual Reality can be used to bring actors together for a successful rehearsal.

Lack of presence distracted the actors.

Improvements should be made to the system.

4.3.1 Virtual Reality Can Be Used to Bring Actors Together for a Successful
Rehearsal

The results of the questionnaires indicate that the system was effective in achieving

successful rehearsals. The participants on average felt a stronger sense of co-presence

than standard presence. This is understandable considering the participants had limited

control over their own environment while still having significant interaction with their

partner.

The average responses to the co-presence questionnaire were low for the first

study (only 26% of the responses were above 4.5) yet moderately high for the second and

third studies (60% and 66% of the responses were above 4.5 for the second and third

studies, respectively). There was an average increase of 0.81 in the responses from the

first study to the second and third. This demonstrates that the increased interactivity

included in the system for the second and third studies positively influenced each actor's

experience. The high responses for the second and third studies also indicate that the


participants felt that they could effectively communicate both verbally and gesturally.

The following responses to the debriefing session reaffirm these findings:

"Ultimately, I had fun. There were a few synch issues but we found out ways to
interact with the limited tools at our disposal."

"I felt very connected to the other person and I felt that the acting experience was
very worthwhile. I actually felt like if we were rehearsing a scene we would have
gotten someplace with our exercise."

"It was very easy to feel like you're with another person in the space. One,
because you were talking to them. And two, because you're always conscious of
the fact that they're going to be looking at what you're at least partially doing."

"I started to think of the images on the screen as being another person in the room
with me; it very much seemed like a human figure and I felt as though I were in
their presence."

Several items on the co-presence questionnaire generated interesting results.

Question 4, which asked, "To what extent did you feel embarrassed with respect to what

you believed the other person might be thinking of you?" generated an average score of

1.25 (on a scale of 1 [not at all] to 7 [a great deal]). Questions 6 and 7, which determined

the degree to which each participant felt their progress was hindered by their partner and

vice versa, generated an average score of 1.5 and 1.75, respectively. These low results

are likely a result of the participants having previously worked with each other. This co-

presence questionnaire uses the participant's unfamiliarity with their partner to gauge co-

presence by showing the existence of social phenomena such as shyness and awkward

interaction with the aforementioned questions. Thus, participants familiar with each

other, or those who have acted together before, would probably get low scores on those

questions.

Question 14, which measured the degree to which each participant had a sense

that there was another human being interacting with them (as opposed to just a machine),


generated an average score of 6. This score further supports the system's effectiveness.

Question 15, which determined how happy the participant thought their partner felt,

generated an average score of 7. This question assumes that the participants are strangers

(similar to questions 4, 6, and 7). All of the participants showed obvious signs of

enjoying the experience as evidenced by the average score of 7 (the maximum score) for

this question. Figures 4-2 to 4-6 detail the results of each questionnaire arranged by

study.

4.3.2 Lack of Presence Distracted the Actors

The results of the presence questionnaire that was given to the second and third

study participants were average. Typical presence scoring involves adding a point for

each response of 6 or 7; however, that would give only one participant (ID number 3) a

score above 0. According to Figure 4-5, the average of the responses for both studies

also generates a score of 0. Since the average responses were all between 3 and 5, it can

be said that the participants were only moderately engrossed in the environment. This

affected the experience by distracting the participant. Several participants mentioned the

experience would have been enhanced if they could see a representation of their hands on

the screen. Had each participant's sense of presence been higher, they might have

accepted the reality of acting with the character on the screen as opposed to feeling that

they were physically controlling a character that is acting with the character on the

screen. The following responses from the debriefing session were the basis for these

conclusions:

*"It's kind of like a first-person shooter sort of game where you don't really see
any of yourself; you just see what' s going on. It' s a little bit disorienting."


"I would've really liked to see my character's hands on the screen--so I know
what they're doing."

"It was kind of skeletal but the way it works right now is really good for where it
1S."

"There was a little sense that you were really there (in the virtual environment)
like when you move your head and the camera pans back and forth."

4.3.3 Improvements Should Be Made to the System

The participants suggested a number of areas for improvement. Nearly all

suggested that more body parts be tracked and that interactive facial expressions be

added. One participant from the first study suggested abandoning the gesture tracking for

a system that would aid only in the "blocking" of a scene. The following are the

debriefing responses that dealt with system improvements and the overall idea of the

system:

"For practice out of rehearsal this could work. It all depends on the level of
sophi sti cati on."

"It needs to incorporate more color straps to include the whole body and
hopefully, facial expressions. I like the idea of the opposite image being that of
the character instead of the other actor."

"I would add lots more tracking spots to allow for full body and maybe facial
movements."

"There's a lot more that goes into acting that just moving your arms. To make it
more of an acting experience there would have to be more mobility and
expression."



FIRST STUDY - Co-presence Questionnaire Part 1
ID Number  Literacy  Experience    1    2    3    4    5    6    7
1          3         1             4    5    4    1    4    2    2
2          4         1             3    3    3    1    2    1    1
Average:                          3.5   4   3.5   1    3   1.5  1.5

FIRST STUDY - Co-presence Questionnaire Part 2
ID Number    8    9   10   11   12   13   14   15
1            2    5    1    3    1    5    6    7
2            4    5    4    3    1    6    5    7
Average:     3    5   2.5   3    1   5.5  5.5   7

Figure 4-2. Results of the co-presence questionnaire administered during the first study.

SECOND STUDY - Presence Questionnaire
ID Number    1    2    3    4    5    6
3            6    4    2    1    6    6
4            5    4    5    5    4    5
Average:    5.5   4   3.5   3    5   5.5

SECOND STUDY - Co-presence Questionnaire Part 1
ID Number  Literacy  Experience    1    2    3    4    5    6    7
3          7         2             6    6    5    2    6    1    2
4          4         1             5    5    5    1    5    3    3
Average:                          5.5  5.5   5   1.5  5.5   2   2.5

SECOND STUDY - Co-presence Questionnaire Part 2
ID Number    8    9   10   11   12   13   14   15
3            5    3    2    3    2    7    7    7
4            7    6    5    6    3    4    5    7
Average:     6   4.5  3.5  4.5  2.5  5.5   6    7

Figure 4-3. Results of the presence and co-presence questionnaires administered during
the second study.


THIRD STUDY - Presence Questionnaire
ID Number    1    2    3    4    5    6
5            4    3    5    4    4    5
6            5    4    3    4    3    5
Average:    4.5  3.5   4    4   3.5   5

THIRD STUDY - Co-presence Questionnaire Part 1
ID Number  Literacy  Experience    1    2    3    4    5    6    7
5          6         1             4    4    6    1    3    1    1
6          7         2             5    5    5    1    4    1    1
Average:                          4.5  4.5  5.5   1   3.5   1    1

THIRD STUDY - Co-presence Questionnaire Part 2
ID Number    8    9   10   11   12   13   14   15
5            5    3    3    4    2    4    6    7
6            5    6    5    5    6    5    6    7
Average:     5   4.5   4   4.5   4   4.5   6    7

Figure 4-4. Results of the presence and co-presence questionnaires administered during
the third study.

Presence Questionnaire Summary
Question  2nd Study Average  3rd Study Average  Total Average
1         5.5                4.5                5
2         4                  3.5                3.75
3         3.5                4                  3.75
4         3                  4                  3.5
5         5                  3.5                4.25
6         5.5                5                  5.25
Figure 4-5. Comparison between question averages for the presence questionnaire.


Co-presence Questionnaire Summary
Question  1st Study  2nd Study  3rd Study  Total Average
          Average    Average    Average    (2nd & 3rd Studies)
1         3.5        5.5        4.5        5
2         4          5.5        4.5        5
3         3.5        5          5.5        5.25
4         1          1.5        1          1.25
5         3          5.5        3.5        4.5
6         1.5        2          1          1.5
7         1.5        2.5        1          1.75
8         3          6          5          5.5
9         5          4.5        4.5        4.5
10        2.5        3.5        4          3.75
11        3          4.5        4.5        4.5
12        1          2.5        4          3.25
13        5.5        5.5        4.5        5
14        5.5        6          6          6
15        7          7          7          7

Figure 4-6. Comparison between question averages for the co-presence questionnaire
showing improvement from the first study to the second and third studies.


CHAPTER 5
CONCLUSION

5.1 Usefulness to the Acting Community

It has been shown that virtual environments allow multiple actors to successfully

rehearse scenes without the need to be in makeup or costume. The true usefulness of this

system to the acting community lies in the fact that it can bring actors together from two

remote locations for an engaging acting experience. A fully developed virtual rehearsal

system could save actors a significant amount of time and money. The system, however,

is far from being fully developed.

5.2 Future Work

The distributed virtual rehearsal system has many areas that can be improved.

The depth of an actor's experience in a virtual rehearsal is greatly affected by how

realistic their interaction is. Realistic interaction is achieved by making the digital

characters' movements as life-like as possible.

One main complaint from the study participants was that the character they were

facing lacked expression. Implementing interactive facial expressions would be costly

but would dramatically increase the realism of the experience. In Acting in Virtual

Reality, simple mouse strokes were used to change the character's expression [3],

however, that solution isn't feasible if the actor is to remain wireless (as they are in the

virtual rehearsal system). Another solution would be to incorporate a third web-camera

into the system that would provide images of the actor's face to a PC that could detect


changes in facial expressions. The third and easiest solution would be to give the system

operator control over the character's facial expressions. The drawbacks to this solution

are operator subjectivity and that the operator would have to be within visual range of the

actor.

The other main complaint from the study participants was the limited number of

tracked body parts. More tracked areas would have increased realism, although only 3

tracked areas were needed for the scene. The shoulder straps were used during system

calibration but weren't actively tracked during the rehearsal. Adding shoulder tracking

could have allowed for torso manipulation, which would have been especially useful

when the actors wanted to lean forward.

Orientation tracking, while not specifically mentioned by the study participants,

would have greatly affected character realism. This would allow the characters to look

left and right as well as rotate their hands. Using two colored straps to determine the

direction of the vector that passes through both straps could approximate head orientation

tracking. Hand orientation would be much more difficult since there are several axes of

rotation.
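
A sketch of that approximation, assuming one strap at the front and one at the
back of the head; the vector type and names are illustrative.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Approximate head yaw and pitch from the vector passing through two
    // head straps. Roll cannot be recovered from two points alone.
    void headOrientation(const Vec3& front, const Vec3& back,
                         float& yawDeg, float& pitchDeg)
    {
        const float RAD2DEG = 180.0f / 3.14159265f;
        float dx = front.x - back.x;
        float dy = front.y - back.y;
        float dz = front.z - back.z;
        yawDeg   = std::atan2(dx, dz) * RAD2DEG;
        pitchDeg = std::atan2(dy, std::sqrt(dx * dx + dz * dz)) * RAD2DEG;
    }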

Automated accurate lip-synching is another aspect that would have a significant

effect on the user's sense of presence. For this to work, the actor's audio stream would

need to be analyzed in real-time. This would be difficult to implement and

computationally expensive.

The ideal system would not only track gestures and facial expressions, but also allow

the actor to move freely around the stage. This could be achieved using a modified

CAVE system or a head-mounted display.


5.3 Future Applications

Motion capture systems are typically used to capture an actor's movements and

later add them to a digital character. Virtual rehearsals could be modified to record the

actors' movements as they rehearse their scene. It would then essentially be a real-time

motion capture system. The recorded movements could then be played back for the actor

to review or they could be sent directly to an animation package for the purpose of

rendering a digitally animated movie.

A virtual film director system could also be added to the virtual rehearsal system.

The virtual director could plan out camera angles, arrange and modify props, start and

stop the action, and direct the actors' movements. The director could be represented by a

digital character or simply watch the action from a monitor, speaking through a virtual

speaker.

Distributed virtual performances are another plausible extension of the virtual

rehearsal. This would introduce audience systems into the distributed virtual rehearsal

paradigm. While several actors perform the scene from separate locations, an audience

can watch the action unfold from a third-person point of view. Allowing a director to

control the camera angles would further enhance the experience by providing the

audience with cinematic visuals.


APPENDIX
STUDY QUESTIONNAIRES

A.1 Co-presence Questionnaire

Part A: Personal Information

Your Given ID number
Your Age
Your Gender: O Male  O Female
Occupational status:
    Undergraduate Student O
    Masters Student O
    PhD Student O
    Research Assistant/Fellow O
    Staff (systems, technical) O
    Faculty O
    Administrative Staff O
    Other O
Please state your level of computer literacy on a scale of (1...7):
(never used before) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (a great deal)
Have you ever experienced 'virtual reality' before?
(never used before) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (a great deal)

Part B: Virtual Reality Experience

Please give your assessment as to how well you contributed to the successful
performance of the task.

My contribution to the successful performance of the task was...

(not at all) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (a great deal)

Please give your assessment as to how well the other person contributed to the
successful performance of the task.

The other person's contribution to the task was...

(not at all) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (a great deal)

To what extent were you and the other person in harmony during the course of the
experience?

We were in harmony...

(not at all) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (a great deal)

To what extent did you feel embarrassed with respect to what you believed the other
person might be thinking about you?

I felt embarrassed...

(not at all) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (a great deal)

Think about a previous time when you co-operatively worked together with another
person in order to achieve something similar to what you were trying to achieve
here. To what extent was your experience in working with the other person on this
task today like the real experience, with regard to your sense of doing something
together?

This was like working together with another person in the real world...

(not at all) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (a great deal)

To what extent, if at all, did the other person hinder you from carrying out the task?

The other person hindered me from carrying out this task...

(not at all) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (a great deal)

To what extent, if at all, did you hinder the other person from carrying out the task?

I hindered the other person from carrying out this task...

(not at all) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (a great deal)

Part C: Virtual Reality Experience Continued

Please give your assessment of how well you and the other person together performed
the task.

We performed the task successfully...

(not at all) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (a great deal)

To what extent, if at all, did you have a sense of being with the other person?

I had a sense of being with the other person...

(not at all) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (a great deal)

To what extent were there times, if at all, during which the computer interface
seemed to vanish, and you were directly working with the other person?

There were times during which I had a sense of working with the other person...

(not at all) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (a great deal)

When you think back about your experience, do you remember this as more like
just interacting with a computer or working with another person?

The experience seems to me more like interacting with a person...

(not at all) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (a great deal)

To what extent did you forget about the other person, and concentrate only on doing
the task as if you were the only one involved?

I forgot about the other person...

(not at all) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (a great deal)

During the time of the experience did you think to yourself that you were just
manipulating some screen images with a mouse-like device, or did you have a sense
of being with another person?

During the experience I often thought that I was really manipulating some screen
images...

(not at all) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (a great deal)

Overall rate the degree to which you had a sense that there was another human
being interacting with you, rather than just a machine.

My sense of there being another person was...

(not at all) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (a great deal)

If you had a chance, would you like to meet the other person?

(not at all) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (a great deal)

Assess the mood of the other person on the basis of very depressed to very happy.

The mood of the other person seemed to be happy...

(not at all) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (a great deal)

Please write any additional comments here. Things you could consider are: things
that hindered you or the other person from carrying out the task; what you think of
the person you worked with; any other comments about the experience and your
sense of being there with another person; and what things made you "pull out" and
become more aware of the computer.

A.2 Presence Questionnaire

1. Please rate your sense of being in the environment, on the following scale from 1
to 7, where 7 represents your normal experience of being in a place.

I had a sense of "being there" in the environment...

(not at all) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (a great deal)

2. To what extent were there times during the experience when the environment
was the reality for you?

There were times during the experience when the environment was the reality for me...

(at no time) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (almost all the time)

3. When you think back about your experience, do you think of the environment
more as images that you saw, or more as somewhere that you visited?

The environment seems to me to be more like...

(images that I saw) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (somewhere I visited)

4. During the time of the experience, which was the strongest on the whole, your
sense of being in the environment, or of being elsewhere?

I had a stronger sense of...

(being elsewhere) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (being in the environment)

5. Consider your memory of being in the environment. How similar in terms of the
structure of the memory is this to the structure of the memory of other places you
have been today? By 'structure of the memory' consider things like the extent to
which you have a visual memory of the environment, whether that memory is in
color, the extent to which the memory seems vivid or realistic, its size, location in
your imagination, the extent to which it is panoramic in your imagination, and other
such structural elements.

I think of the environment as a place in a way similar to other places that I've been
today...

(not at all) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (very much so)

6. During the time of the experience, did you often think to yourself that you were
actually in the environment?

During the experience I often thought that I was really existing in the environment...

(not very often) 1 O 2 O 3 O 4 O 5 O 6 O 7 O (very much so)














LIST OF REFERENCES


1. Dahl, S., "Kinesics," Business School, Middlesex University, 2004. Retrieved 14
Mar. 2004.

2. Ryan, D., "Semiotics," School of Arts and Sciences, Australian Catholic University,
2003. Retrieved 14 Mar. 2004.

3. Slater, M., Howell, J., Steed, A., Pertaub, D-P., Garau, M. and Springel, S., "Acting
in Virtual Reality," ACM Collaborative Virtual Environments, CVE'2000, 2000.

4. Mortensen, J., Vinayagamoorthy, V., Slater, M., Steed, A., Lok, B. and Whitton,
M.C., "Collaboration in Tele-Immersive Environments," Proceedings of the Eighth
Eurographics Workshop on Virtual Environments, 2002.

5. Oliverio, J., Quay, A. and Walz, J., "Facilitating Real-time Intercontinental
Collaboration with Emergent Grid Technologies: Dancing Beyond Boundaries,"
Paper from the Digital Worlds Institute, 2001. Retrieved 9 Aug. 2004.

6. Steed, A., Slater, M., Sadagic, A., Tromp, J. and Bullock, A., "Leadership and
Collaboration in Virtual Environments," IEEE Virtual Reality, Houston, March
1999, 112-115.

7. Brooks, F.P., "What's Real about Virtual Reality?" IEEE Computer Graphics and
Applications, Nov./Dec. 1999.

8. Cruz-Neira, C., Sandin, D.J. and DeFanti, T.A., "Surround-Screen Projection-Based
Virtual Reality: The Design and Implementation of the CAVE," Computer
Graphics (SIGGRAPH) Proceedings, Annual Conference Series, 1993.

9. Bowman, D.A. and Hodges, L.F., "An Evaluation of Techniques for Grabbing and
Manipulating Remote Objects in Immersive Virtual Environments," Symposium on
Interactive 3D Graphics, Apr. 1997.

10. Sutherland, I.E., "A Head-mounted Three Dimensional Display," Proceedings of
the AFIPS Fall Joint Computer Conference, Vol. 33, 757-764, 1968.

11. Polhemus, "FASTRAK: The Fast and Easy Digital Tracker," Colchester, VT, 2004.
Retrieved Apr. 2004.

12. InterSense, "InterSense InertiaCube2," Bedford, MA, 2004. Retrieved Apr. 2004.


13. 3rdTech, Inc., "HiBall-3000 Wide Area Tracker and 3D Digitizer," Chapel Hill,
NC, 2004. Retrieved Apr. 2004.

14. Jackson, J., Lok, B., Kim, J., Xiao, D., Hodges, L. and Shin, M., "Straps: A Simple
Method for Placing Dynamic Avatars in an Immersive Virtual Environment," Future
Computing Lab Tech Report FCL-01-2004, Department of Computer Science,
University of North Carolina at Charlotte, 2004.

15. Thalmann, D., "The Role of Virtual Humans in Virtual Environment Technology
and Interfaces," in Frontiers of Human-Centered Computing, Online Communities
and Virtual Environments, Springer, London, 2001, 27-38.

16. Slater, M. and Usoh, M., "Body Centered Interaction in Immersive Virtual
Environments," in N. Magnenat Thalmann and D. Thalmann (eds.) Artificial Life
and Virtual Reality, John Wiley and Sons, New York, 1994, 125-148.

17. Garau, M., Vinayagamoorthy, V., Slater, M., Steed, A. and Brogni, A., "The Impact
of Avatar Realism on Perceived Quality of Communication in a Shared Immersive
Virtual Environment," Equator Annual Conference, 2002.

18. Slater, M., Sadagic, A., Usoh, M. and Schroeder, R., "Small Group Behavior in a
Virtual and Real Environment: A Comparative Study," presented at the BT
Workshop on Presence in Shared Virtual Environments, June 1998.

19. Vacchetti, L., Lepetit, V., Papagiannakis, G., Ponder, M., Fua, P., Magnenat-
Thalmann, N. and Thalmann, D., "Stable Real-Time Interaction Between Virtual
Humans and Real Scenes," Proceedings of 3DIM 2003 Conference, 2003.

20. Haptek Inc., Santa Cruz, California, Sept. 2003. Retrieved 9 Aug. 2004.

21. UGS Corporation, "E-Factory: Jack," Plano, TX, 2004. Retrieved Apr. 2004.

22. Boston Dynamics, "DI-Guy: The Industry Standard in Real-Time Human
Simulation," Cambridge, MA, 2004. Retrieved Apr. 2004.


23. VQ Interactive, Inc., "BOTizen: The Power of Interactivity," Selangor, Malaysia,
2003. Retrieved Apr. 2004.

24. Tsai, R., "A Versatile Camera Calibration Technique for High-Accuracy 3D
Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses," IEEE
Journal of Robotics and Automation, Aug. 1987, 323-344.















BIOGRAPHICAL SKETCH

George Victor Mora was born in Miami, Florida, on March 29th, 1980. He spent the
first 18 years of his life in South Florida. His obsession with art and technology began
at an early age. During high school, he focused his attention on art and computer
science classes. Upon completing high school, he moved to Gainesville, Florida, to
attend the University of Florida.

In August of 2002, George finished his undergraduate degree in computer science. He
returned to the University of Florida the following semester as a graduate student in
the newly formed digital arts and sciences program in the College of Engineering. For
the next two years, George focused on virtual environments and digital media both
through his school work and as an employee of the Digital Worlds Institute. In
December of 2004, George will receive his Master of Science degree in digital arts and
sciences.




Full Text

PAGE 1

DISTRIBUTED VIRTUAL REHEARSALS By GEORGE MORA A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE UNIVERSITY OF FLORIDA 2004

PAGE 2

Copyright 2004 by George Mora

PAGE 3

To my wife, Maria, My parents, Jorge and Johanna Mora And my family and friends For their constant support and encouragement

PAGE 4

ACKNOWLEDGMENTS I would like to thank my thesis committee chairman, Dr. Benjamin C. Lok, for his enthusiasm and interest in this project, as well as for keeping me motivated and on track. I would also like to thank James C. Oliverio for being on my committee and for the constant support, advice, and opportunities he has provided for me. I also give much thanks to Dr. Jorg Peters for supporting both my undergraduate and graduate final projects. This thesis was completed with the help of several people. My gratitude goes out to Jonathan Jackson, Kai Bernal, Bob Dubois, Kyle Johnsen, Cyrus Harrison, Andy Quay, and Lauren Vogelbaum. This thesis would not have been possible without their help. I would like to thank my parents, Jorge and Johanna Mora, for always encouraging me to grow both intellectually and creatively. Finally, I would like to thank my wife, Maria Mora, for her unending love, support, and understanding. iv

PAGE 5

TABLE OF CONTENTS page ACKNOWLEDGMENTS..iv LIST OF FIGURES...vii ABSTRACT.viii CHAPTER 1 INTRODUCTION...1 1.1 Motivation.1 1.2 Challenges. 1.3 Project Goals.4 1.4 Organization of Thesis..4 1.5 Thesis Statement...5 1.6 Approach...5 2 PREVIOUS WORKS..7 2.1 Distributed Performance... 2.2 Virtual Reality... 2.3 Digital Characters and Avatars........ 3 APPLICATION..13 3.1 Scene Design and Experience Development...13 3.2 Tracking the Actors.....15 3.3 Putting It All Together.....17 3.4 Final Software and Hardware Setup 4 RESULTS... 4.1 Description of Studies..22 4.2 Reaction from Actors... 4.3 Results..24 4.3.1 Virtual Reality Used for Successful Rehearsals.....24 4.3.2 Lack of Presence Distracted the Actors......26 v

PAGE 6

4.3.3 Improvements That Should Be Made to the System..27 5 CONCLUSION.......31 5.1 Usefulness to Acting Community....31 5.2 Future Work.31 5.3 Future Applications..33 APPENDIX: STUDY QUESTIONNAIRES....34 A.1 Co-presence Questionnaire.34 A.2 Presence Questionnaire...37 LIST OF REFERENCES...39 BIOGRAPHICAL SKETCH.42 vi

PAGE 7

LIST OF FIGURES Figure page 1-1. Two actors rehearsing in a virtual environment ......2 3-1. A participant wearing the colored felt straps......16 3-2. A participant testing out the system.......19 3-3. Sample screenshot demonstrating the virtual script system...19 3-4. Data flow for both rendering systems.....20 3-5. Hardware setup for each location.......21 4-1. The location of each actor on the University of Florida campus....23 4-2. Results of the co-presence questionnaire administered during the first study...28 4-3. Results of the presence and co-presence questionnairessecond study.....28 4-4. Results of the presence and co-presence questionnairesthird study.....29 4-5. Comparison between question averages for the presence questionnaire 4-6. Comparison between question averages for the co-presence questionnaire...30 vii

PAGE 8

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science DISTRIBUTED VIRTUAL REHEARSALS By George Mora December, 2004 Chair: Benjamin C. Lok Major Department: Computer and Information Science and Engineering Acting rehearsals with multiple actors are limited by many factors. Physical presence is the most obvious, especially in a conversation between two or more characters. Cost is an obstacle that primarily affects actors who are in different locations. This cost consists of travel and living expenses. Preparation time is another hindrance, especially for performances involving elaborate costumes and intricate makeup. Many recent high-budget motion pictures require that key actors go through several hours of makeup application to complete their characters look. Virtual reality can bring actors together to rehearse their scene in a shared environment. Since virtual reality elicits emotions and a sense of perceived presence from its users, actors should be able to successfully rehearse in a virtual environment. This environment can range from an empty space to a fully realized set depending on the directors imagination and the projects scope. viii

PAGE 9

Actors movements will be tracked and applied to a digital character, creating a virtual representation of the character. The digital character will resemble the actorin full costume and makeup. In the virtual environment, each actor will see (in real-time) the character being controlled by their acting partner. The goal is to show that multiple actors can use a shared virtual environment as an effective acting rehearsal tool. This project will also demonstrate that actors can hone their skills from remote locations through virtual reality, and serve as a foundation for future applications that enhance the virtual acting paradigm. ix

PAGE 10

CHAPTER 1 INTRODUCTION 1.1 Motivation Acting rehearsal is the process by which actors refine their acting skills and practice scenes for future public performances. These rehearsals traditionally occur on a stage with the principle actors and the director physically present. Although costumes and makeup are not essential until the final few rehearsals (called dress rehearsals), a functional set is important for determining when and where to move (known as movement blocking). There are several variations on the standard rehearsal. During the pre-production stage, a read through or reading is scheduled to familiarize the actors with the script and each other. Typically, actors are physically present in a conference room, although this can be accomplished through using a video or telephone conference. After the reading, a blocking rehearsal will help choreograph the actors movements. Blocking rehearsals usually take place on a stage or practice set, since its dimensions affect the choreography of a production. Polishing and Building rehearsals take up the majority of the total rehearsal time. During these rehearsals, actors perfect their performance and work out any major problems. The final rehearsals (dress and technical rehearsals) involve practicing the performance in full costume and makeup with complete lighting, sound, and props on a finished set. 1

PAGE 11

2 Currently, a reading is the only rehearsal method which does not need an actors physical presence. The reading does not require that actors wear costume/makeup or move on an assembled stage. Therefore it could be performed over the telephone. One could argue that distributed rehearsals could be easily achieved through video conferencing. However the cost and availability of a system which could deliver satisfying results in terms of video/audio quality, bandwidth, and robustness make video conferencing a poor choice for effective distributed rehearsals. Allowing digital characters to represent an actor in a shared immersive virtual environment increases the number of conditions under which an acting rehearsal can occur. Physical presence, preparation time, and cost would no longer limit rehearsals. This would allow multiple actors from anywhere in the world to meet and rehearse a scene before there are costumes or constructed sets. Figure 1-1. Two actors rehearsing in a virtual environment. Actor 1 controls the movements of Character 1 (Morpheus), while Actor 2 controls the movements of Character 2 (Neo). By allowing actors to meet in a virtual space, there is an added advantage of virtual reality interaction. Such interaction includes stereoscopic vision, gaze tracking, and easy prop and set maintenance. Stereoscopic vision allows the actor to see the acting

PAGE 12

3 partner, set, and props in three dimensions. Gaze tracking changes the display based on the location of the actors head and the direction he or she looks. Prop and set maintenance allow one to move, rotate, and replace any prop or piece of scenery. Consideration must be given to acting theory since the actors expressions will be expressed through the avatar. The form of expression this thesis focuses on is gestures. Kinesics encompasses all non-verbal forms of communicating. These include gestures, body language, and facial expressions. There are several categories of kinesics: Emblems: non-verbal messages with a verbal counterpart. Illustrators: gestures associated with verbal messages. Affective Displays: gestures or expressions conveying emotion. Regulators: non-verbal signs that maintain the flow of a conversation. Adaptors: small changes in composure that subconsciously convey mood [1]. Communication on stage mimics communication in real life. Therefore, the relationship of kinesics to acting is obvious. Actors must pay special attention to their movement, gestures, facial expressions, and body language in relation to what they are saying or doing. Ryan reaffirms the connection between kinesics and acting: The informal spatial code relates to the movement of the body on stage including facial expression, gestures, formations of bodies (i.e. patterns, shapes), and sometimes includes moving set and pieces of furniture. Ryan also lists several properties of kinesics use on stage: Gestures cant stand alone. Gestures cant be separated from the general continuum of motion. Gesture is the primary mode of ostending (i.e. showing) the body on stage [2].

PAGE 13

4 1.2 Challenges Actors are accustomed to being physically present with other actors on real sets. For a virtual environment to be effective in improving acting skills, the actor must experience a strong sense of being in the same space as the other actor (referred to as a sense of co-presence). Some challenges faced when trying to achieve this sense of co-presence include: Keeping the actor comfortable and as unaware that their movements are being tracked. Ensuring high audio quality to simulate hearing the other voice in the same room. Placing the cameras, projector, and screen in such a way that the actor has a clear view of the screen while still being accurately tracked. Providing realistic models and textures for the virtual environment. Having the character exhibit human-like behavior and expressions. Ensuring high-speed data transmission between systems. 1.3 Project Goals This project seeks to enhance the fields of virtual environments research and acting theory in the following ways: Demonstrate that digital characters in virtual environments allow for effective distributed rehearsals. Provide a prototype system to allow actors to interact with each other virtually in full costume/makeup across long distances. 1.4 Organization of Thesis This thesis is organized into the following sections: Introduction. Specifies the need for this research, obstacles in completing the project, and the ultimate goals for this thesis.

PAGE 14

5 Previous Work. Describes the research and work that served as both inspiration and a foundation for this project. Application. Details the process of creating the application that demonstrates the ideas presented in this thesis. Results. Discusses the results of a survey of several actors who tested the system to rehearse a simple scene. Conclusion. Summarizes results and lists future work and applications of the research. 1.5 Thesis Statement Distributed Virtual Rehearsals can bring actors together from different locations to successfully rehearse a scene. 1.6 Approach My approach is to build a system that will allow two or more actors to rehearse a scene in a virtual world. Digital characters that resemble the actor in full costume and makeup will represent the actor. The actors movements will be tracked and will directly affect the movements of the digital character. The actor will see digital characters controlled by other actors in remote locations. Since the data required for rendering the characters and props exists on local machines, the only information that needs to be sent is the tracking data for each actors movements. The tracking data for each body part is contained in a text message composed of several (three for position data only) floating point numbers. The information can be efficiently transmitted which will reduce lag. The system for this rehearsal setup (per actor): Projector-based display system with a large projection screen. A well-lit room large enough for the actor to perform the scene.

PAGE 15

6 Two web cameras connected to two PCs. One rendering PC. High-speed network connecting each system. Several different colored straps attached to the actors body. A headset with built-in microphone (wireless if extensive body movement is required)

PAGE 16

. CHAPTER 2 PREVIOUS WORK 2.1 Distributed Performance Distributed performance refers to media-centered events and actions that affect each other yet occur in different locations. This could range from a simple telephone call to a complex massive multiplayer online role-playing game (MMORPG). In the study Acting in Virtual Reality, distributed performance brings several actors and directors together to rehearse a short play. Each participant interacts with the others through networked computers and his/her own non-immersive display. Semi-realistic avatars represent the actors while the director is only heard over the sound system. The study proved successful in allowing actors to rehearse in a shared virtual environment. A performance level was reached in the virtual rehearsal which formed the basis of a successful live performance, one that could not have been achieved by learning of lines or video conferencing [3]. Networked virtual environments are also used in the Collaboration in Tele-Immersive Environments project at the University of London and the University of North Carolina at Chapel Hill. This project investigated if virtual environments could facilitate finishing a collaborative task in virtual reality. The task involved two people carrying a stretcher along a path and into a building. Their result indicated that realistic interaction in a virtual environment over a high-speed network while possiblestill suffers from tracking delays, packet losses, and difficulty sharing 7

PAGE 17

8 control of objects. The data suggests that in order to have a sense of being with another person, it is vital that the system works in the sense that people have an impression of being able to actually do what they wish to do [4]. Dancing Beyond Boundaries involves the use of video conferencing over a high-speed network as a method of distributed performance. This piece used Access Grid technology and an Internet2 connection to allow dancers and musicians from four different locations across North and South America to interact with each other. Thus the combination of multi-modal information from the four nodes created a virtual studio that could also be termed a distributed virtual environment, though perhaps not in the usual sense [5]. An important aspect of distributed performance is the state of co-presence that is achieved. Co-presence is the sense of being in the same space with other people. Distributed collaboration has shown to be successful since it achieves a high degree of co-presence. It has been shown that the level of immersion is related to a users sense of presence and co-presence. The level of immersion also relates to leadership roles in group settings [6]. 2.2 Virtual Reality Frederick Brooks Sr. defines a virtual reality experience as any in which the user is effectively immersed in a responsive virtual world. This implies user dynamic control of viewpoint. Effective immersion has been achieved through the use of novel display and interaction systems [7]. Immersive displays create a sense of presence through multi-sensory stimulation. Previous examples of these systems include head-mounted displays, CAVE (Cave-like

PAGE 18

9 Automatic Virtual Environment) systems, projector-based displays, and computer monitors. Several of the goals which inspired the creation of the CAVE system can be applied to most other immersive display systems: The desire for higher-resolution color images and a large field of view without geometric distortion. The ability to mix VR imagery with real devices (like one's hand, for instance). The opportunity to use virtual environments to guide and teach others [8]. Effective interaction is as important as a novel display system in creating an immersive virtual environment. Successful interaction involves allowing the user to control the view of the environment or objects inside the environment. The degree to which presence is experienced depends on how well the interface imitates real world interaction. A defining feature of virtual reality (VR) is the ability to manipulate virtual objects interactively, rather than simply viewing a passive environment [9]. Motion tracking provides a realistic means of interacting with a virtual environment. It can adjust the view of the environment, manipulate objects in the environment, and trigger visual and aural cues based on gaze and gesture recognition. Motion tracking is often used in head-mounted display systems where the users position and orientation affect what the user sees and hears in the environment. Although stereo presentation is important to the three-dimensional illusion, it is less important than the change that takes place in the image when the observer moves his head. The image presented by the three-dimensional display must change in exactly the way that the image of a real object would change for similar motions of the users head [10].

PAGE 19

10 There are many commercial motion tracking devices: Polhemus FASTRAK uses several electromagnetic coils per tracker to transmit position and orientation information to a receiver. InterSense InertiaCube small orientation tracker that provides 3 degrees of freedom (yaw, pitch, and roll) and allows for full 360 rotation about each axis. HiBall-3100 wide-area position and orientation tracker. Uses hundreds of infrared beacons on the ceiling and 6 lenses on the HiBall Sensor to track the user. These commercial solutions are highly accurate and produce very low latency. However, they are expensive and oftentimes encumber the user [11, 12, 13]. Motion tracking has been applied to avatar movement through by tracking colored straps. In Straps: A Simple Method for Placing Dynamic Avatars in an Immersive Virtual Environment, colored straps are attached to the users legs to accurately represent their movement through an avatar in the virtual environment. The straps system has two major advantages over other tracking systems: Freedom there are no encumbering cable, which reduces system complexity, accident hazards, and equipment failure. Simplicity the colored straps are cheap, easy to create, and contain no moving parts or electronics [14]. 2.3 Digital Characters and Avatars Inserting digital characters into virtual environments can make the experience much more realistic to the user. The users more natural perception of each other (and of autonomous actors) increases their sense of being together, and thus the overall sense of shared presence in the environment [15].

PAGE 20

11 An avatar is the representation of oneself in a virtual environment. In an ideally tracked environment, the avatar would follow the users movements exactly. Slater and Usoh discuss the influence of virtual bodies on their human counterpart: The essence of Virtual Reality is that we (individual, group, simultaneously, asynchronously) are transported bodily to a computer generated environment. We recognize habitation of others through the representation of their own bodies. This way of thinking can result in quite revolutionary forms of virtual communication [16]. Digital character realism affects the amount of immersion experienced by the user in a virtual environment. This realism is manifested visually, behaviorally, and audibly. A break in presence (losing the feeling of presence) can occur if someone in a virtual environment is distracted by a lack of realism in an avatar. In The Impact of Avatar Realism on Perceived Quality of Communication in a Shared Immersive Virtual Environment, avatar realism and a conversation-sensitive eye motion model are tested to determine their effect on presence. We conclude that independent of head-tracking, inferred eye animations can have a significant positive effect on participants responses to an immersive interaction. The caveat is that they must have a certain degree of visual realism [17]. Even without realistic avatars, users can still be greatly affected by other users digital characters, as well as their own avatar. Establishing a sense of presence increases the chances of a participant becoming attached to avatars in the virtual space. Emotional attachment to avatars was a surprising result of the study Small Group Behavior in a Virtual and Real Environment: A Comparative Study. Although, except by inference, the

PAGE 21

12 individuals were not aware of the appearance of their own body, they seemed to generally respect the avatars of others, trying to avoid passing through them, and sometimes apologizing when they did so. The avatars used in the study were simple models associated with a unique color [18]. Digital characters have been successfully integrated into real environments using computer vision and camera tracking techniques. These characters are used partly as virtual teachers that train factory workers to operate some of the machinery. The virtual humans pass on knowledge to participants using an augmented reality system. Although the characters are automated, a training specialist can control them from a different location via a networked application [19]. Digital character realism has been integrated into the character rendering system created by Haptek. This system can integrate multiple highly realistic and customizable characters into a virtual environment. These characters also act realistically (i.e. blinking, looking around, and shifting weight). The Haptek system allows these characters to be used as avatars, or as autonomous virtual humans [20]. There are many other commercially available character-rendering systems: UGS Corp. Jackusability, performance, and comfort evaluation using digital characters that are incorporated into virtual environments. Boston Dynamics DI-Guyreal-time human simulation used in military simulations created by all branches of the United States Armed Forces. VQ Interactive BOTizenonline customer support conducted by digital characters. Characters respond to queries using a text-to-speech engine [21, 22, 23].

PAGE 22

CHAPTER 3 APPLICATION 3.1 Scene Design and Experience Development The first step in creating the distributed rehearsal system was to choose a sample scene. This scene would determine character design, set design, and the dialogue. Several factors influenced the decision: Familiarity Since the actors would be testing the system without reading the script beforehand, the scene needed to be immediately accessible to most actors. Ease of tracking The system is a prototype; therefore extensive tracking would be beyond its scope. Additionally, the acting focuses on kinesics, so gesture tracking is the only requirement. The scene would involve characters that stay relatively still acting primarily with gestures. Interesting Set and Characters Presence is one of the main factors measured when evaluating virtual environment systems. Incorporating stimulating digital characters and sets into your environment can achieve presence. Several scenes were evaluated using the above criteria. The scenes included the balcony scene from Romeo and Juliet, the heads and tails scene from Rosencrantz and Guildenstern are Dead, and the red pill, blue pill scene from The Matrix. The red pill, blue pill scene was chosen because it is a very familiar scene that few actors would have previously rehearsed. Once the scene was selected, the characters and set needed to be constructed. The modeling software 3D Studio Max was used to create the set. The set consisted of a dark 13

PAGE 23

14 room with a fireplace, two red chairs and a small white table. The two main characters, Neo and Morpheus, were created using the base Haptek male character model and adjusting the texture accordingly. The characters and environments, after being fully modeled and textured, were then exported into a format that could be incorporated into a graphics engine. The 3DS file format was chosen and then read into an OpenGL application along with each character. Lighting was set up to reflect the locations of the physical lights in the scene. It was important to be able to manipulate the characters skeleton in real-time. Therefore, each joint in the character needed to be explicitly controlled. Haptek uses a separate joint for each degree of freedom that exists in a joint. The shoulders, elbows, and neck each have three joints. For simplicity, two joints were used for the neck, two for each shoulder, and one for each elbow. Special attention went toward developing aspects of the system that would enhance the users experience. The actor sat on a cushioned chair in front of a large surface onto which the scene is projected. The setup was designed to physically immerse the actors an environment similar to the one used in the designated scene. Each rendering system had the option to allow each actor to use his/her head movement to affect his/her cameras viewpoint. This simulates an actor looking through a large hole in the wall at the other actorif the actors tilt their heads to the side, their viewpoints rotate slightly and allow them to see more of the room on the other side of the wall. The experience began with a clip from The Matrix that leads into the scene. These were efforts to increase the sense of presence each actor experiences.

PAGE 24

15 3.2 Tracking the Actors Setting up the tracking system required two cameras, two PCs (one for each camera), colored paper or cloth for the straps, and sufficient USB or Firewire cable to accommodate the setup. The tracking system worked under different lighting conditions provided adequate training is performed on each camera. Training consisted of acquiring many pictures of each strap and the background and determining the range of the colors that belong to each strap. An application provided with the tracking system accomplished most of this process. Two sets of straps were created for the system. The first set of straps consisted of colored pieces of paper fastened with tape. These straps were used in the first study and the participants suggested using a different material because the paper was uncomfortable and sometimes distracting. The second set was constructed with colored pieces of felt to increase the comfort level of each participant. These straps were fastened with small strips of Velcro. Figure 3-1 shows the second set of straps attached to a participant. The tracking system on each PC then transmitted the two-dimensional coordinates of each strap to a client computer. Tsais camera calibration algorithm is used to calibrate each camera and recover extrinsic and intrinsic camera parameters [24]. Calibration was achieved by explicitly mapping two-dimensional sample picture coordinates to their three-dimensional equivalents. This provided a configuration file for use in the client computer. Once the rendering system was receiving correct tracking values from the straps system, these values needed to be appropriately mapped to the digital characters

PAGE 25

16 movements. The system first saved the fixed locations of each shoulder. Then, after instructing the user to place their hands directly in front of them with their elbows fully extended, the system determined their arm length and head height (distance from their neck to their forehead). The system used the actors arm length, head height, and shoulder width constants to appropriately displace the digital characters hands and head. Figure 3-1. A participant wearing the colored felt straps. Forward shoulder joint animation (vertical movement) was accomplished by determining the angle of displacement that a line passing from the shoulder to the hand would create from a central horizontal line. The distance from the shoulder to the hand determines the amount of elbow bend that is required. For instance, if the hand is arms length away from the shoulder, the elbow wouldnt be bent at all. Conversely, if the hand was located adjacent to the shoulder, the elbow would be fully bent. Finally, shoulder

PAGE 26

17 turn (horizontal) was calculated by determining the angle of displacement the hand would make from a central vertical line. 3.3 Putting it all Together VRPN was used to connect and transfer text messages between the tracking system and the rendering system as well as between each rendering system. The tracking system sends a text message containing the two-dimensional coordinates for each color detected along with the width and height of the image in pixels. The rendering system receives these values and, combined with the values from the second tracking system, uses Tsais algorithm for recovering the three-dimensional coordinates [24]. Once calibration has finished and the actors are accustomed to using the system, the tracked data is shared between rendering systems via VRPN text messages. The text message contains the two angles for each shoulder and for the neck, the bend angle for the elbow, and the speaking state. The speaking state determines which actor is currently speaking; this is used with the lip-syncing and virtual script systems. Voice acting is an important aspect of rehearsal. Therefore it was necessary to implement a system that allowed the actors to transmit their voice to their partner. Headsets with built-in microphones were used. The headsets had a behind-the-neck design so they would not interfere with the forehead strap. Voice was transmitted using DirectPlay. Instead of using a traditional physical script, a virtual script system allowed the actors to read their lines without having to look away from the display. This system displayed the actors current line on the bottom of the screen when their speaking state is true. Incorporating the virtual script system introduced the problem of determining when

PAGE 27

18 to proceed with the next line. Originally, the actor would use their foot to press a key that would trigger the speak state to false and send a message to the remote rendering system to change its speak state to true. However, this hindered presence and lowered the comfort level. It was decided on to have the system operator, who calibrated the system and trained the actor to use the system, manually switch the speak state to false when a line was finished. 3.4 Final Software and Hardware Setup The final hardware setup used to test the system was composed of the following (for each location): 3 Dell PCs 2 OrangeMicro USB 2.0 web-cameras Sony projector Colored felt straps with Velcro attachments GE stereo PC headset Cushioned chair The participant sat facing a large projection screen. The Sony projector was placed under each participants seat. The two web-cameras were each attached to a Dell PC running the Straps software. These PCs had the VRPN and Imaging Control libraries installed. The rendering PC connected to the projector ran the rehearsal software. This PC had the VRPN and Haptek libraries installed. Figure 3-2 shows a diagram of the final hardware setup used for each study.

PAGE 28

19 Figure 3-2. A participant testing out the system. Figure 3-3. Sample screenshot demonstrating the virtual script system.

PAGE 29

20 Figure 3-4. Data flow for both rendering systems.

PAGE 30

21 Figure 3-5. Hardware setup for each location.

PAGE 31

CHAPTER 4 RESULTS 4.1 Description of Studies The system was evaluated using three studies, each with two actors. The first study was conducted before the system was fully operational. The second and third studies were conducted using the complete system. The aspects of the system that werent incorporated into the first study included the introductory movie, head-controlled viewpoint, and accurate hand tracking. The participants (4 females and 2 males) ranged in age from 18 to 20. They had significant acting experience. Before each study, each participant was given a small tutorial on how the system worked, a brief overview of its limitations, and some time to see his/her character being manipulated by his/her movements. The participants then watched the introductory movie and rehearsed the scene provided for them. When the scripted lines ended, the participants were given time to adlib. Each study concluded by having participants fill out a presence and co-presence questionnaire. All three sets of participants were given the co-presence questionnaire used in Collaboration in Tele-Immersive Environments. This questionnaire gauges the degree to which each participant felt they were present in the virtual environment with the other participant. The last two sets of participants were also given the Slater, Usoh and Steed (SUS) Presence questionnaire. The SUS questionnaire is commonly used to assess 22

PAGE 32

23 virtual environment immersion. Along with each questionnaire, participants were asked to specify their level of computer literacy and their level of experience with virtual reality. The Appendix contains both questionnaires. An informal verbal debriefing followed the questionnaires. Figure 4-1. The location of each actor on the University of Florida campus. 4.2 Reaction from Actors The participants from the first study appeared to be initially frustrated with the inaccurate hand tracking, although with some practice they compensated for it. One participant used exaggerated gestures to counteract the limited forward/backward character movement. During the adlib portion the participants spontaneously began a slapping duel with each other that consisted of one person moving their arm in a slapping motion and the other moving their head when hit and vise versa. The second and third set of study participants quickly became adept at using the system. They seemed very comfortable working through the scene despite having little

PAGE 33

24 or no virtual reality experience. The introductory movie did not appear to significantly affect the participants experience. The adlib session flowed seamlessly from the scripted section. The participants seemed to be highly engrossed in the experienceevidenced by the fact that all four participants prolonged the adlib session for more than 5 minutes. 4.3 Results The results from the questionnaires and the debriefing can be organized into the following three categories: Virtual Reality can be used to bring actors together for a successful rehearsal. Lack of presence distracted the actors. Improvements should be made to the system. 4.3.1 Virtual Reality Can Be Used to Bring Actors Together for a Successful Rehearsal The results of the questionnaires proved that the study was effective in achieving successful rehearsals. The participants on average felt a stronger sense of co-presence than standard presence. This is understandable considering the participants had limited control over their own environment while still having significant interaction with their partner. The average responses to the co-presence questionnaire were low for the first study (only 26% of the responses were above 4.5) yet moderately high for the second and third studies (60% and 66% of the responses were above 4.5 for the second and third studies, respectively). There was an average increase of .81 in the responses from the first study to the second and third. This demonstrates that the increased interactivity included in the system for the second and third studies positively influenced each actors experience. The high responses for the second and third studies also indicate that the

PAGE 34

25 participants felt that they could effectively communicate both verbally and gesturally. The following responses to the debriefing session reaffirm these findings: Ultimately, I had fun. There were a few synch issues but we found out ways to interact with the limited tools at our disposal. I felt very connected to the other person and I felt that the acting experience was very worthwhile. I actually felt like if we were rehearsing a scene we would have gotten someplace with our exercise. It was very easy to feel like youre with another person in the space. One, because you were talking to them. And two, because youre always conscious of the fact that theyre going to be looking at what youre at least partially doing. I started to think of the images on the screen as being another person in the room with me; it very much seemed like a human figure and I felt as though I were in their presence. Several items on the co-presence questionnaire generated interesting results. Question 4, which asked, To what extent did you feel embarrassed with respect to what you believed the other person might be thinking of you? generated an average score of 1.25 (on a scale of 1 [not at all] to 7 [a great deal]). Questions 6 and 7, which determined the degree to which each participant felt their progress was hindered by their partner and vice versa, generated an average score of 1.5 and 1.75, respectively. These low results are likely a result of the participants having previously worked with each other. This co-presence questionnaire uses the participants unfamiliarity with their partner to gauge co-presence by showing the existence of social phenomena such as shyness and awkward interaction with the aforementioned questions. Thus, participants familiar with each other, or those who have acted together before, would probably get low scores on those questions. Question 14, which measured the degree to which each participant had a sense that there was another human being interacting with them (as opposed to just a machine),

PAGE 35

26 generated an average score of 6. This score further supports the systems effectiveness. Question 15, which determined how happy the participant thought their partner felt, generated an average score of 7. This question assumes that the participants are strangers (similar to questions 4, 6, and 7). All of the participants showed obvious signs of enjoying the experience as evidenced by the average score of 7 (the maximum score) for this question. Figures 4-2 to 4-6 detail the results of each questionnaire arranged by study. 4.3.2 Lack of Presence Distracted the Actors The results of the presence questionnaire that was given to the second and third study participants were average. Typical presence scoring involves adding a point for each response of 6 or 7, however that would give only one participant (ID number 3) a score above 0. According to Figure 4-5, the average of the responses for both studies also generates a score of 0. Since the average responses were all between 3 and 5, it can be said that the participants were only moderately engrossed in the environment. This affected the experience by distracting the participant. Several participants mentioned the experience would have been enhanced if they could see a representation of their hands on the screen. Had each participants sense of presence been higher, they might have accepted the reality of acting with the character on the screen as opposed to feeling that they were physically controlling a character that is acting with the character on the screen. The following responses from the debriefing session were the basis for these conclusions: Its kind of like a first-person shooter sort of game where you dont really see any of yourself; you just see whats going on. Its a little bit disorienting.

PAGE 36

27 I wouldve really liked to see my characters hands on the screenso I know what theyre doing. It was kind of skeletal but the way it works right now is really good for where it is. There was a little sense that you were really there (in the virtual environment) like when you move your head and the camera pans back and forth. 4.3.3 Improvements Should Be Made to the System The participants suggested a number of areas for improvement. Nearly all suggested that more body parts be tracked and that interactive facial expressions be added. One participant from the first study suggested abandoning the gesture tracking for a system that would aid only in the blocking of a scene. The following are the debriefing responses that dealt with system improvements and the overall idea of the system: For practice out of rehearsal this could work. It all depends on the level of sophistication. It needs to incorporate more color straps to include the whole body and hopefully, facial expressions. I like the idea of the opposite image being that of the character instead of the other actor. I would add lots more tracking spots to allow for full body and maybe facial movements. Theres a lot more that goes into acting that just moving your arms. To make it more of an acting experience there would have to be more mobility and expression.

PAGE 37

28 FIRST STUDY Co-presence Questionnaire Part 1 ID Number Literacy Experience 1 2 3 4 5 6 7 1 3 1 4 5 4 1 4 2 2 2 4 1 3 3 3 1 2 1 1 Average: 3.5 4 3.5 1 3 1.5 1.5 FIRST STUDY Co-presence Questionnaire Part 2 ID Number 8 9 10 11 12 13 14 15 1 2 5 1 3 1 5 6 7 2 4 5 4 3 1 6 5 7 Average: 3 5 2.5 3 1 5.5 5.5 7 Figure 4-2. Results of the co-presence questionnaire administered during the first study. SECOND STUDY Presence Questionnaire ID Number 1 2 3 4 5 6 3 6 4 2 1 6 6 4 5 4 5 5 4 5 Average: 5.5 4 3.5 3 5 5.5 SECOND STUDY Co-presence Questionnaire Part 1 ID Number Literacy Experience 1 2 3 4 5 6 7 3 7 2 6 6 5 2 6 1 2 4 4 1 5 5 5 1 5 3 3 Average: 5.5 5.5 5 1.5 5.5 2 2.5 SECOND STUDY Co-presence Questionnaire Part 2 ID Number 8 9 10 11 12 13 14 15 3 5 3 2 3 2 7 7 7 4 7 6 5 6 3 4 5 7 Average: 6 4.5 3.5 4.5 2.5 5.5 6 7 Figure 4-3. Results of the presence and co-presence questionnaires administered during the second study.

PAGE 38

29 THIRD STUDY Presence Questionnaire ID Number 1 2 3 4 5 6 5 4 3 5 4 4 5 6 5 4 3 4 3 5 Average: 4.5 3.5 4 4 3.5 5 THIRD STUDY Co-presence Questionnaire Part 1 ID Number Literacy Experience 1 2 3 4 5 6 7 5 6 1 4 4 6 1 3 1 1 6 7 2 5 5 5 1 4 1 1 Average: 4.5 4.5 5.5 1 3.5 1 1 THIRD STUDY Co-presence Questionnaire Part 2 ID Number 8 9 10 11 12 13 14 15 5 5 3 3 4 2 4 6 7 6 5 6 5 5 6 5 6 7 Average: 5 4.5 4 4.5 4 4.5 6 7 Figure 4-4. Results of the presence and co-presence questionnaires administered during the third study. Presence Questionnaire Summary Question 2 nd Study Average 3 rd Study Average Total Average 1 5.5 4.5 5 2 4 3.5 3.75 3 3.5 4 3.75 4 3 4 3.5 5 5 3.5 4.25 6 5.5 5 5.25 Figure 4-5. Comparison between question averages for the presence questionnaire.

PAGE 39

30 Co-presence Questionnaire Summary Question 1 st Study Average 2 nd Study Average 3 rd Study Average Total Average (2 nd & 3 rd Studies) 1 3.5 5.5 4.5 5 2 4 5.5 4.5 5 3 3.5 5 5.5 5.25 4 1 1.5 1 1.25 5 3 5.5 3.5 4.5 6 1.5 2 1 1.5 7 1.5 2.5 1 1.75 8 3 6 5 5.5 9 5 4.5 4.5 4.5 10 2.5 3.5 4 3.75 11 3 4.5 4.5 4.5 12 1 2.5 4 3.25 13 5.5 5.5 4.5 5 14 5.5 6 6 6 15 7 7 7 7 Figure 4-6. Comparison between question averages for the co-presence questionnaire showing improvement from the first study to the second and third studies.

PAGE 40

CHAPTER 5 CONCLUSION 5.1 Usefulness to the Acting Community It has been shown that virtual environments allow multiple actors to successfully rehearse scenes without the need to be in makeup or costume. The true usefulness of this system to the acting community lies in the fact that it can bring actors together from two remote locations for an engaging acting experience. A fully developed virtual rehearsal system could save actors a significant amount of time and money. The system, however, is far from being fully developed. 5.2 Future Work The distributed virtual rehearsal system has many areas that can be improved. The depth of an actors experience in a virtual rehearsal is greatly affected by how realistic their interaction is. Realistic interaction is achieved by making the digital characters movements as life-like as possible. One main complaint from the study participants was that the character they were facing lacked expression. Implementing interactive facial expressions would be costly but would dramatically increase the realism of the experience. In Acting in Virtual Reality, simple mouse strokes were used to change the characters expression [3], however that solution isnt plausible if the actor is to remain wireless (as they are in the virtual rehearsal system). Another solution would be to incorporate a third web-camera into the system that would provide images of the actors face to a PC that could detect 31

PAGE 41

32 changes in facial expressions. The third and easiest solution would be to give the system operator control over the characters facial expressions. The drawbacks to this solution are operator subjectivity and that the operator would have to be within visual range of the actor. The other main complaint from the study participants was the limited number of tracked body parts. More tracked areas would have increased realism, although only 3 tracked areas were needed for the scene. The shoulders straps were used during system calibration but werent actively tracked during the rehearsal. Adding shoulder tracking could have allowed for torso manipulation, which would have been especially useful when the actors wanted to lean forward. Orientation tracking, while not specifically mentioned by the study participants, would have greatly affected character realism. This would allow the characters to look left and right as well as rotate their hands. Using two colored straps to determine the direction of the vector that passes through both straps could approximate head orientation tracking. Hand orientation would be much more difficult since there are several axes of rotation. Automated accurate lip-synching is another aspect that would have a significant effect on the users sense of presence. For this to work, the actors audio stream would need to be analyzed in real-time. This would be difficult to implement and computationally expensive. The ideal system would not only track gestures and facial expressions, but allow the actor to move freely around the stage. This could be achieved using a modified CAVE system or a head-mounted display.

PAGE 42

33 5.3 Future Applications Motion capture systems are typically used to capture an actors movements and later add them to a digital character. Virtual rehearsals could be modified to record the actors movements as they rehearse their scene. It would then essentially be a real-time motion capture system. The recorded movements could then be played back for the actor to review or they could be sent directly to an animation package for the purpose of rendering a digitally animated movie. A virtual film director system could also be added to the virtual rehearsal system. The virtual director could plan out camera angles, arrange and modify props, start and stop the action, and direct the actors movements. The director could be represented by a digital character or simply watch the action from a monitor, speaking through a virtual speaker. Distributed virtual performances are another plausible extension of the virtual rehearsal. This would introduce audience systems into the distributed virtual rehearsal paradigm. While several actors perform the scene from separate locations, an audience can watch the action unfold from a third-person point of view. Allowing a director to control the camera angles would further enhance the experience by providing the audience with cinematic visuals.

PAGE 43

APPENDIX STUDY QUESTIONNAIRES A.1 Co-presence Questionnaire Part A: Personal Information Your Given ID number Your Age Your Gender Male Female Occupational status Undergraduate Student Masters Student PhD Student Research Assistant/Fellow Staff systems, technical Faculty Administrative Staff Other Please state your level of computer literacy on a scale of (1) (never used before) 1 2 3 4 5 6 7 (a great deal) Have you ever experienced virtual reality before? (never used before) 1 2 3 4 5 6 7 (a great deal) Part B: Virtual Reality Experience Please give your assessment as to how well you contributed to the successful performance of the task. My contribution to the successful performance of the task was (not at all) 1 2 3 4 5 6 7 (a great deal) Please give your assessment as to how well the other person contributed to the successful performance of the task. The other persons contribution to the task was (not at all) 1 2 3 4 5 6 7 (a great deal) 34

PAGE 44

35 To what extent were you and the other person in harmony during the course of the experience. We were in harmony (not at all) 1 2 3 4 5 6 7 (a great deal) To what extent did you feel embarrassed with respect to what you believed the other person might be thinking about you? I felt embarrassed (not at all) 1 2 3 4 5 6 7 (a great deal) Think about a previous time when you co-operatively worked together with another person in order to achieve something similar to what you were trying to achieve here. To what extent was your experience in working with the other person on this task today like the real experience, with regard to your sense of doing something together? This was like working together with another person in the real world (not at all) 1 2 3 4 5 6 7 (a great deal) To what extent, if at all, did the other person hinder you from carrying out the task? The other person hindered me from carrying out this task (not at all) 1 2 3 4 5 6 7 (a great deal) To what extent, if at all, did you hinder the other person from carrying out the task? I hindered the other person from carrying out this task (not at all) 1 2 3 4 5 6 7 (a great deal) Part C: Virtual Reality Experience Continued Please give your assessment of how well you and the other person together performed the task. We performed the task successfully (not at all) 1 2 3 4 5 6 7 (a great deal)


To what extent, if at all, did you have a sense of being with the other person?
I had a sense of being with the other person
(not at all) 1 2 3 4 5 6 7 (a great deal)

To what extent were there times, if at all, during which the computer interface seemed to vanish, and you were directly working with the other person?
There were times during which I had a sense of working with the other person
(not at all) 1 2 3 4 5 6 7 (a great deal)

When you think back about your experience, do you remember this as more like just interacting with a computer or working with another person?
The experience seems to me more like interacting with a person
(not at all) 1 2 3 4 5 6 7 (a great deal)

To what extent did you forget about the other person, and concentrate only on doing the task as if you were the only one involved?
I forgot about the other person
(not at all) 1 2 3 4 5 6 7 (a great deal)

During the time of the experience, did you think to yourself that you were just manipulating some screen images with a mouse-like device, or did you have a sense of being with another person?
During the experience I often thought that I was really manipulating some screen images
(not at all) 1 2 3 4 5 6 7 (a great deal)

Overall, rate the degree to which you had a sense that there was another human being interacting with you, rather than just a machine.
My sense of there being another person was
(not at all) 1 2 3 4 5 6 7 (a great deal)


If you had a chance, would you like to meet the other person?
(not at all) 1 2 3 4 5 6 7 (a great deal)

Assess the mood of the other person, on a scale from very depressed to very happy.
The mood of the other person seemed to be happy
(not at all) 1 2 3 4 5 6 7 (a great deal)

Please write any additional comments here. Things you could consider are: things that hindered you or the other person from carrying out the task; what you think of the person you worked with; any other comments about the experience and your sense of being there with another person; and what things made you pull out and become more aware of the computer.

A.2 Presence Questionnaire

1. Please rate your sense of being in the environment, on the following scale from 1 to 7, where 7 represents your normal experience of being in a place.
I had a sense of being there in the environment
(not at all) 1 2 3 4 5 6 7 (a great deal)

2. To what extent were there times during the experience when the environment was the reality for you?
There were times during the experience when the environment was the reality for me
(at no time) 1 2 3 4 5 6 7 (almost all the time)

3. When you think back about your experience, do you think of the environment more as images that you saw, or more as somewhere that you visited?
The environment seems to me to be more like
(images that I saw) 1 2 3 4 5 6 7 (somewhere I visited)


4. During the time of the experience, which was the strongest on the whole: your sense of being in the environment, or of being elsewhere?
I had a stronger sense of
(being elsewhere) 1 2 3 4 5 6 7 (being in the environment)

5. Consider your memory of being in the environment. How similar, in terms of the structure of the memory, is this to the structure of the memory of other places you have been today? By "structure of the memory," consider things like the extent to which you have a visual memory of the environment, whether that memory is in color, the extent to which the memory seems vivid or realistic, its size, its location in your imagination, the extent to which it is panoramic in your imagination, and other such structural elements.
I think of the environment as a place in a way similar to other places that I've been today
(not at all) 1 2 3 4 5 6 7 (very much so)

6. During the time of the experience, did you often think to yourself that you were actually in the environment?
During the experience I often thought that I was really existing in the environment
(not very often) 1 2 3 4 5 6 7 (very much so)


LIST OF REFERENCES

1. Dahl, S., Kinesics, Business School, Middlesex University, 2004. Retrieved 14 Mar. 2004 < http://stephan.dahl.at/nonverbal/kinesics.html >.

2. Ryan, D., Semiotics, School of Arts and Sciences, Australian Catholic University, 2003. Retrieved 14 Mar. 2004 < http://www.mcauley.acu.edu.au/staff/delyse/semiotic.htm >.

3. Slater, M., Howell, J., Steed, A., Pertaub, D-P., Garau, M. and Springel, S., Acting in Virtual Reality, ACM Collaborative Virtual Environments, CVE, 2000.

4. Mortensen, J., Vinayagamoorthy, V., Slater, M., Steed, A., Lok, B. and Whitton, M.C., Collaboration in Tele-Immersive Environments, Proceedings of the Eighth Eurographics Workshop on Virtual Environments, 2002.

5. Oliverio, J., Quay, A. and Walz, J., Facilitating Real-time Intercontinental Collaboration with Emergent Grid Technologies: Dancing Beyond Boundaries, Paper from the Digital Worlds Institute, 2001. Retrieved 9 Aug. 2004 < http://www.dwi.ufl.edu/projects/dbb/media/VSMM_DigitalWorlds.pdf >.

6. Steed, A., Slater, M., Sadagic, A., Tromp, J. and Bullock, A., Leadership and Collaboration in Virtual Environments, IEEE Virtual Reality, Houston, March 1999, 112-115.

7. Brooks, F.P., What's Real about Virtual Reality? IEEE Computer Graphics and Applications, Nov./Dec. 1999.

8. Cruz-Neira, C., Sandin, D.J. and DeFanti, T.A., Surround-Screen Projection-Based Virtual Reality: The Design and Implementation of the CAVE, Computer Graphics (SIGGRAPH) Proceedings, Annual Conference Series, 1993.

9. Bowman, D.A. and Hodges, L.F., An Evaluation of Techniques for Grabbing and Manipulating Remote Objects in Immersive Virtual Environments, Symposium on Interactive 3D Graphics, Apr. 1997.

10. Sutherland, I.E., A Head-mounted Three Dimensional Display, Proceedings of the AFIPS Fall Joint Computer Conference, Vol. 33, 757-764, 1968.

11. Polhemus, FASTRAK: The Fast and Easy Digital Tracker, Colchester, VT, 2004. Retrieved Apr. 2004 < http://www.polhemus.com/FASTRAK/Fastrak Brochure.pdf >.


12. InterSense, InterSense InertiaCube2, Bedford, MA, 2004. Retrieved Apr. 2004 < http://www.isense.com/products/prec/ic2/InertiaCube2.pdf >.

13. 3rdTech, Inc., HiBall-3000 Wide Area Tracker and 3D Digitizer, Chapel Hill, NC, 2004. Retrieved Apr. 2004 < http://www.3rdtech.com/images/hiballdatasheet02v5forweb2.PDF >.

14. Jackson, J., Lok, B., Kim, J., Xiao, D., Hodges, L. and Shin, M., Straps: A Simple Method for Placing Dynamic Avatars in an Immersive Virtual Environment, Future Computing Lab Tech Report FCL-01-2004, Department of Computer Science, University of North Carolina at Charlotte, 2004.

15. Thalmann, D., The Role of Virtual Humans in Virtual Environment Technology and Interfaces, in Frontiers of Human-Centered Computing, Online Communities and Virtual Environments, Springer, London, 2001, 27-38.

16. Slater, M. and Usoh, M., Body Centered Interaction in Immersive Virtual Environments, in N. Magnenat Thalmann and D. Thalmann (eds.) Artificial Life and Virtual Reality, John Wiley and Sons, New York, 1994, 125-148.

17. Garau, M., Vinayagamoorthy, V., Slater, M., Steed, A. and Brogni, A., The Impact of Avatar Realism on Perceived Quality of Communication in a Shared Immersive Virtual Environment, Equator Annual Conference, 2002.

18. Slater, M., Sadagic, A., Usoh, M. and Schroeder, R., Small Group Behavior in a Virtual and Real Environment: A Comparative Study, presented at the BT Workshop on Presence in Shared Virtual Environments, June 1998.

19. Vacchetti, L., Lepetit, V., Papagiannakis, G., Ponder, M., Fua, P., Magnenat-Thalmann, N. and Thalmann, D., Stable Real-Time Interaction Between Virtual Humans and Real Scenes, Proceedings of 3DIM 2003 Conference, 2003.

20. Haptek Inc., Santa Cruz, California, Sept. 2003. Retrieved 9 Aug. 2004 < http://www.haptek.com/ >.

21. UGS Corporation, E-Factory: Jack, Plano, TX, 2004. Retrieved Apr. 2004 < http://www.ugs.com/products/efactory/jack/ >.

22. Boston Dynamics, DI-Guy: The Industry Standard in Real-Time Human Simulation, Cambridge, MA, 2004. Retrieved Apr. 2004 < http://www.bdi.com/content/sec.php?section=diguy >.

23. VQ Interactive, Inc., BOTizen: The Power of Interactivity, Selangor, Malaysia, 2003. Retrieved Apr. 2004 < http://www.botizen.com/ >.


24. Tsai, R., A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses, IEEE Journal of Robotics and Automation, Aug. 1987, 323-344.


BIOGRAPHICAL SKETCH

George Victor Mora was born in Miami, Florida, on March 29, 1980. He spent the first 18 years of his life in South Florida. His obsession with art and technology began at an early age, and during high school he focused his attention on art and computer science classes. Upon completing high school, he moved to Gainesville, Florida, to attend the University of Florida. In August of 2002, George finished his undergraduate degree in computer science. He returned to the University of Florida the following semester as a graduate student in the newly formed digital arts and sciences program in the College of Engineering. For the next two years, George focused on virtual environments and digital media, both through his schoolwork and as an employee of the Digital Worlds Institute. In December of 2004, George will receive his Master of Science degree in digital arts and sciences.