Multi-user experience synthesizer (MUES) : an interactive digital installation

Material Information

Multi-user experience synthesizer (MUES) : an interactive digital installation
Strobel, Garrett
Place of Publication:
Gainesville, Fla.
College of Fine Arts, University of Florida
Publication Date:
2011
Physical Description:
Project in lieu of thesis


Subjects / Keywords:
Animation ( jstor )
Bowls ( jstor )
Cerebellum ( jstor )
Control centers ( jstor )
Music cognition ( jstor )
Music psychology ( jstor )
Musical aesthetics ( jstor )
Musical rhythm ( jstor )
Singing ( jstor )
Treadmills ( jstor )


MUES is an interactive digital arts installation: a space in which the line between the physical and the digital is blurred. The goal of the project was to allow users, without prior training or instruction, to collaboratively generate music and manipulate digital projections by interacting with and exploring the project space, with the purpose of augmenting user communication. Creating music can be considered a form of augmented communication that allows the creator to express emotional content without relying on spoken language; however, the creation of music is a specialized skill that requires many years of dedicated training to master. The MUES installation allows users without prior musical training to generate dynamic musical rhythms, tempo changes, and melodies by interacting and experimenting with the physical objects present in the space. In addition, this interaction alters corresponding visual projections, creating another layer of user feedback and immersion. The project was housed in the University of Florida's Digital Worlds REVE immersive theater on April 8th, 2011. The space consists of 5 contiguous movie-theater-sized screens that served as the backdrop for the project's visual projections. The space also contains a 5.1 surround sound system that created immersive sound for the installation.
General Note:
Digital Arts and Sciences terminal project

Record Information

Source Institution:
University of Florida Institutional Repository
Holding Location:
University of Florida
Rights Management:
All rights reserved by the source institution and holding location.
Resource Identifier:
915060818 ( OCLC )
33603556 ( ALEPH )





Summary of Project Option in Lieu of Thesis Presented to the College of Fine Arts of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Arts

MULTI-USER EXPERIENCE SYNTHESIZER (MUES): AN INTERACTIVE DIGITAL INSTALLATION

By Garrett Strobel
August 2011
Chair: Benjamin DeVane
Major: Digital Arts and Sciences

MUES is an interactive digital arts installation. The installation is a space in which the line between the physical and the digital is blurred. The goal of the project was to allow users, without prior training or instruction, to collaboratively generate music and manipulate digital projections through interacting with and exploring the project space, with the purpose of augmenting user communication. Creating music can be considered a form of augmented communication that allows the creator to express emotional content without relying on spoken language; however, the creation of music is a specialized skill that requires many years of dedicated training to master. The MUES installation allows users without prior musical training to generate dynamic musical rhythms, tempo changes, and melodies through interacting and experimenting with the physical objects present in the MUES


installation. In addition, this interaction alters corresponding visual projections, creating another layer of user feedback and immersion. The project was housed in the University of Florida's Digital Worlds REVE immersive theater on April 8th, 2011. The space consists of 5 contiguous movie-theater-sized screens that served as the backdrop for the project's visual projections. The space also contains a 5.1 surround sound system that created immersive sound for the installation.


Conceptual and Aesthetic Considerations:

Conceptually, MUES seeks to explore the role of immersive installations and interactive digital systems in augmenting the communicative act. MUES allows users to collaboratively generate and alter aural, visual, and haptic stimuli through unstructured exploration of the space. The immersive nature of the installation is achieved through a combination of five large-scale interactive projections as the backdrop, three small-scale projections in the center of the space, and an array of haptic sculptures/components that alter visuals and sound based upon user interaction. The multi-user manipulations and interactions between the unique components coalesce into a unified composition of music and imagery, creating a sensorially saturated and surreal environment for the users to inhabit. The installation engenders a synesthesia between the real and the digital. Each component is not a random noise and imagery generator but a considered metaphor, a novel analog-to-digital instrument that reifies the relationship between the objects and the music and visuals being created. MUES strives for a sense of hyper-interactivity in which the entire space seems to bend and bow to the presence and actions of the users. The Treadmill exposes the association between bodily movement and tempo. Its Astroturf track, and its effects on the projection of the swaying grass, investigate some of the repetitious elements associated with digital systems such as video games, in which the same grass texture can be repeated to suggest a world of changing scenery without the player ever actually moving, much like walking on a treadmill.


The Singing Bowls create harmonies and melodies that possess a correlation between the fluctuation of waves of water and the fluctuations of sound waves causing alterations in pitch. The haptic sensation of touching the water, combined with its effect on altering the projected blob animations, creates another layer of sensorial stimuli between phenomena. The Tangible Sequencer makes real an object found in many digital music systems. It allows untrained musicians the ability to craft rhythmic patterns. The Tangible Sequencer establishes a relationship between visual patterns and rhythm, between color and sound, as well as a relationship between size and loudness, in the same way that a louder sound wave will have greater amplitude. The three projections on the Control Center screens at the center of the space display live video feeds from three webcams mounted at each corner of the triangle. The video feeds are delayed by 60 seconds, causing a dissociation between users and their embodied actions, the theoretical implications of which are discussed later. Aesthetically, MUES employs elements of DIY (Do It Yourself) culture juxtaposed against pervasive and surreal digital projections. The Treadmill, the Singing Bowls, the Tangible Sequencer, and the Control Center are all constructed from lumber, plywood, and materials available at most hardware stores. Their electronics consist of cheap webcams and other easily obtainable parts. The user is aware that high technology is at play somewhere in the space, but it is obscured and never directly accessed. The handmade construction of the physical components is meant to be non-threatening, as opposed to the slick and exclusionary aesthetic utilized by most contemporary technological productions. This non-confrontational aesthetic is meant to invite the user to discover the space through exploration and experimentation.
The saturated and surreal digital projections pervading the space are meant to absorb the user's consciousness into the experience of the space.


Theoretical Framework:

MUES borrows from a diverse array of critical discourses, which I will analyze in turn. First I will establish a framework for the communicative act. The problem of communication has plagued philosophy and critical theory for centuries. The Soviet philosopher and literary critic Mikhail Bakhtin puts forth a theory of language and communication that I believe adequately covers the issues of communication within the scope of the MUES installation.

First Order: For Bakhtin, communication is an event that takes place between living consciousnesses within existing conditions. The event is expressed through the dimensions of language and meaning. However, Bakhtin was adamant in establishing that the evental relation of action and meaning cannot be restrained by and understood within language and the sign, such as in linguistics or semiotics. It is in the event that living consciousness orients its activity, and the orientation of action and thought in this event is achieved through an evaluation that does more than distribute the true from the false (Lazzarato, 2). It is through this evaluation of reality, of the speaking subject, and of utterances that the possibilities of language are realized, and only by a series of encounters between individuals (5). The concept of dialogism holds that meaning is always contested and negotiated between participating consciousnesses. Communication does not consist of the transmission of ready-made information. Information is created in the very process of communication. Information also cannot be understood as being transmitted from one human being to another; instead, it is constructed


between them. In the living word, the message is created for the first time in the process of communication; there is, in fact, no ready-made message. Bakhtin used the term polyphony to describe works of art that aesthetically reflect this plurality of voices rather than a single asserted authorial one. It is in this form of Bakhtinian dialogic communication that MUES participates. Being an immersive installation, the work does not exist without living consciousness being a part of the relational event. Rather than present the user with an authoritatively constructed meaning, critique, or agenda, MUES exists polyphonically: artist and users are both authors. Meaning is constructed dialogically as users navigate and inhabit the space. Through the communicative event and the interaction between user, space, and other users, meaning is constantly redefined. As users alter the space, the space alters them in a constantly relational action exerted upon the senses. It is in this first order that the MUES installation plays a role in augmenting the communicative act.

Second Order: In the second order MUES augments communication in a less metaphysical sense. The installation allows for the generation of music through interaction. As I hope to demonstrate, the creation and perception of music is a form of augmented communication. Music can incorporate, but is distinct from, verbal language in that music does not consist of a collection of phonemes organized into words but a collection of frequencies and beats organized into melodies and rhythms. Music is a human phenomenon in which sound is organized into culturally predefined patterns (Levitin, 14). The musicologist and cognitive psychologist Daniel J. Levitin has spent his entire career studying music and its effects on the brain and mind.


Levitin describes the cerebellum as one of the oldest structures of the brain in evolutionary terms. The cerebellum is found across species and in popular terms is sometimes called the reptilian brain. Early studies into brain physiology revealed that the cerebellum is responsible for motor coordination. Movement amongst most animals can be thought of as a regularly repeating pattern with an oscillatory quality. When humans walk or run they tend to do so at a relatively constant rate. The body settles into a gait, and the cerebellum maintains this motion without conscious thought from higher brain functions (174). Through laboratory experiments Levitin found a strong correlation between listening to music and activation of the cerebellum; this same correlation did not appear when test subjects listened to noise. According to Levitin, the cerebellum is directly involved in tracking the beat. This would explain the involuntary foot tapping sometimes associated with music listening. Perhaps more importantly, Levitin has demonstrated a higher degree of cerebellar activation in test subjects listening to music with familiar patterns versus music that was unfamiliar. Levitin explains this correlation by examining the research of Harvard professor Jeremy Schmahmann, which posits a relationship between the cerebellum and emotion, a linkage previously denied by cognitive psychologists. Schmahmann, among other convincing data, has noted the massive amount of neural connections between the cerebellum and more commonly accepted centers of emotion such as the amygdala, a structure responsible for triggering and remembering emotional events. In addition, Levitin has found direct stimulation of the amygdala when listening to musical patterns over noise (167).


These findings could help explain the relationship between music, emotion, and motion in humans (175). Unlike spoken language, musical communication is not readily subject to evaluations of veracity. It would be difficult for someone accustomed to Western music to say whether the communication of a Mozart or Beethoven symphony is true or false, only how it makes you feel. Physiologically, spoken and written language as signs are processed by higher-order brain structures such as the neocortex and cerebrum, capable of generating abstractions and rationalizations. The emotional dimension of music, being processed by less conscious and unconscious brain structures, is capable of surpassing the fundamental limitations inherent in semiotics and signs. It is in this sense that music is augmented communication. Although every person, barring physical and mental impairments, is capable of understanding and responding to music, not every person learns to generate music. Music, though an augmented form of communication, is still not free from limitations. The creation of music relies on the learned and skilled manipulation of objects (including the vocal cords) to generate patterns of sound distinguishable from noise and spoken language. The MUES installation allows untrained musicians to generate this powerful and augmented form of communication through exploring, manipulating, and interacting with the space. Through the artist's construction of the previously mentioned analog-to-digital haptic sculptures, the interactions are synthesized into music. For example, by playing and experimenting with placing shapes anywhere on the Tangible Sequencer, users can immediately compose repeating rhythmic patterns without a classical understanding of music theory. The Singing Bowls and Treadmill also function to similar effect. It is in this second order that MUES plays a role in augmenting the communicative act.


Third Order

In the third order MUES augments the communicative act through a strategy of contemporary art described by the French curator and art critic Nicolas Bourriaud under the term relational aesthetics. This discourse borrows heavily from ideas presented by Bakhtin several decades earlier, though this is a personal assertion rather than an epistemologically proven lineage. In relational art, meaning is expanded upon collectively, with the audience of the work envisioned as plural rather than as a direct, individual addressee. Relational artworks provide a structure to create community, no matter how brief or utopian this may be. Bourriaud does not consider relational aesthetics as merely a theory of interactive art but as a means of situating contemporary art in the larger culture. In one sense relational art is a response to the virtual relationships of the internet and globalization, which have prompted a yearning for face-to-face communication. Relational aesthetics also reflects a shift in artists' attitudes towards social change: rather than trying to change their environment, artists instead model ways of inhabiting an altered here and now (116-117). The MUES installation exhibits many axioms of the relational aesthetic. The installation creates a novel space ancillary to quotidian reality. The space frames an event which begets face-to-face


communication. However, MUES does diverge to some degree from relational aesthetics in that it seeks to be explicitly non-political and non-critical, though perhaps this is in itself a political act. The hyper-interactivity and open interpretation of the space, with its lack of explicitly created goals, demands that users engage with one another to make sense of their surroundings. As users discover methods to manipulate the various interactive components, they communicate with other users, informing them of their discoveries. Besides the analog-to-digital interactive components, the abstract and surreal projections also facilitate face-to-face communication as users debate interpretations. For some the blobs become clouds; others see them as water droplets. Some users even swear that they are able to see the formation of letters and words in the randomly exploding particles of the hypnagogic collage and try to convince others of this phenomenon. It is in this third order that MUES plays a role in augmenting the communicative act.

Fourth Order

In the fourth order the installation augments the communicative act through a process in which the viewer experiences a loss in the boundaries of the bodily constrained ego (89). This is accomplished in MUES by the immersive interplay between real and digital space: large-scale pervasive projections, surround sound, and the smaller digital projections on the surfaces of the triangular Control Center. The small projections play back live video feed of the space from three webcams situated on each corner of the Control Center. The video is set to play back at a 60-second delay. This video delay allows users to see themselves disassociated from their embodied actions. They can witness themselves walk around the space, or, when approaching one of the interactive components, users can witness those who previously occupied the space in which they now stand.


These out-of-sync projections draw upon Jacques Lacan's account of the mirror stage. Lacan rejected the mirror stage as confirmation of consciousness and instead described it as a willful self-deception, a seduction of identification with the external representation as a complete totality when we are, according to Lacan, in fact fragmentary and incomplete, never wholly understandable to ourselves. Lacan provides the example of an individual standing between the infinite regress of two parallel mirrors, stating that this does not represent any progress into interiority nor confirm the efficacy of self-identity. Instead, the multiplicity of receding reflections serves to destabilize the ego's fragile veneer (Bishop, 80). The delayed video projections further this Lacanian evaluation of the mirror stage, challenging notions of self-identification. Conflict between the perception of the ego and the construction of the other is a limiting factor in achieving communication. This dissolution of the ego and absorption into the space, through the installation's immersive aspects and delayed projections of the users, serves to augment communication among users in the fourth order.

Physical Description of Installation:

Multi-User Experience Synthesizer (MUES) is a large-scale interactive installation. The work consists of five physical components: a treadmill with an Astroturf track; a tangible MIDI sequencer; 3 glass bowls supported by wooden stands; the Control Center; and 5 large and 3 small digital projection surfaces. The first three components are located equidistant from one another around a triangular structure at the center of the installation, which serves as the Control Center of the installation as well as supporting the 3 additional projection surfaces. The installation is located inside the Digital Worlds REVE, a gymnasium-sized space consisting of 5 contiguous theater-sized projection screens and a surround sound system. All


of the components of the installation are interrelated and serve to synthesize a cohesive audio/visual experience through the interactions of multiple simultaneous users.

The Tangible Midi Sequencer allows complex, variegated, and dynamic musical rhythms to be created by the users' manipulation of physical objects. The sequencer consists of a matte black rectangular structure 35 inches tall by 30 inches long and 24 inches wide, a webcam mounted on an armature 30 inches above the table surface, and a collection of square objects varying in size and color. Users, by manipulating the placement of the colored squares, are able to generate rhythmic patterns that directly correspond to the locations of the squares, as well as affect the digital projections on one of the screens of the triangular structure. Red squares correspond to a kick drum, blue squares a snare drum, and yellow squares a hi-hat. In addition, the size of the square placed determines the velocity of the MIDI note, with larger squares being louder and smaller ones being quieter.

The component called the Singing Bowls allows for the creation of the melody and harmony of the installation as well as further manipulation of the digital projections. This component consists of three glass cylindrical bowls 11 inches in diameter by 4 inches deep. Each bowl is placed 2 inches deep into a laser-cut plywood stand 30 inches tall, with a webcam situated 15 inches below the glass bowl. Each bowl is filled with water. When a user disturbs the water, causing waves in the glass bowl, a note is generated from a chord in the key of C, with each of the bowls set to a different note in the triad. The residual waves of the water continue to play the note as well as generate MIDI pitch-bend values based on their speed and position. At any given moment the Singing Bowls are set to a single triad. The pace at which the chords change is proportional to the overall tempo of the installation but will always be within the same key. Lastly, the Singing Bowls control the speed and position of an animation of cloud-like blobby particles being digitally projected onto one of the five large background screens. The velocity of the waves in the bowl determines the speed at which the particles morph and change, and the vertical position of the waves controls the vertical position of the animation on the screens.

The Treadmill component controls the tempo of the sonic dimension of the installation as well as allowing further manipulation of the digital projections. The treadmill is constructed of 2x4 lumber and melamine board; it is 57 inches long by 20 inches wide with an inclined angle of 40 degrees. The track is constructed of a sewn strip of artificial grass, or Astroturf. The rollers are constructed from 2-inch PVC pipe and four 52mm-diameter bearings. The treadmill is able to sense how fast a user is walking upon it and alters the tempo at which the Tangible Midi Sequencer is playing the user-defined rhythm (if any), as well as altering the tempo of the chord progression of the Singing Bowls. The faster a user walks on the treadmill, the more these two parameters increase. The Treadmill also influences the speed at which an animation of 3D waving grass particles plays. The grass animation extends across all five of the large background screens and sways slower when a user walks slower, or faster when the pace increases.

The triangular structure that serves as the Control Center of the installation is also constructed of 2x4s and is 6 feet tall and 7 feet wide on each side. The structure is bolted together, and each face of the equilateral triangle is fitted with a 6x7-foot stretched canvas screen to create 3 rear-projection surfaces in the center of the installation, in addition to the 5 large background screens.
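As a rough illustration of the Singing Bowls mapping described above, the Python sketch below assigns each bowl one note of a triad in the key of C and scales the water's wave speed into a 14-bit MIDI pitch-bend value and its vertical position into a screen row for the blob animation. This is a hypothetical stand-in, not the installation's Isadora patch: the note numbers, scaling constants, and function names are all illustrative assumptions.

```python
# Hypothetical note assignment: one MIDI note per bowl (C4, E4, G4).
BOWL_NOTES = {1: 60, 2: 64, 3: 67}

def pitch_bend(wave_speed):
    """Scale a normalized wave speed in [0, 1] into a 14-bit MIDI
    pitch-bend value; still water maps to 8192 (no bend)."""
    return max(0, min(16383, 8192 + int(wave_speed * 4096)))

def animation_y(wave_vertical, screen_height=1080):
    """Map the wave's vertical position in [0, 1] to the pixel row at
    which the blob animation is drawn on the background screen."""
    return int(wave_vertical * (screen_height - 1))

print(pitch_bend(0.0), pitch_bend(1.0))  # still water vs. fastest wave
print(animation_y(0.5))                  # mid-height wave -> mid-screen
```

The one-sided bend range (8192 upward) is an arbitrary choice here; the actual patch may well bend in both directions.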
The inside of the Control Center houses two of the four computers that make the installation possible, as well as three projectors situated in each corner of the structure to allow for the rear projection on the three


canvas screens at the center of the space. On each corner of the triangle is mounted a webcam, each one recording a different facet of the space. The video of each webcam feeds into one of the three projectors. The video stream to each projector is offset by 60 seconds.

Technical descriptions, challenges, solutions:

In this section I wish to go further into the technical aspects behind the creation of each component of the installation, as well as how the components were connected together. I will go into detail as to the software and hardware used and cover some of the technical challenges and solutions.

To run the installation I chose to use the graphic programming environment Isadora. Isadora was created by digital media artist and director of Troika Ranch, Mark Coniglio, for use in interactive digital performances. The software provides the digital media artist with a plethora of building-block objects, known as Actors in Isadora, which can be linked together in a multitude of ways. Information and data is transmitted and converted between the Actors via this method of linking, similar to a telephone switchboard of the past. I chose to use Isadora for its ability to receive, process, and transmit various forms of data, as well as for its robust video processing capabilities. Isadora was also chosen over other graphic programming environments such as Max/MSP or PD because the computer system at Digital Worlds REVE responsible for powering the large 5-screen projection environment is already equipped with the software. Using Isadora and other graphic programming environments over hand-coding the interactive elements in a lower-level programming language such as Java or C++ is similar to building something out of Lego blocks rather than carving and casting blocks and then fitting them together. This allows the digital media artist to relatively quickly link together diverse kinds of data. However, using such software does


present limitations, as the artist must work with the objects given. The challenge is in finding novel ways to manipulate and connect the objects to create the desired interactivity.

The Tangible Midi Sequencer:

The Tangible Midi Sequencer relies on a fundamental principle of computer vision known as blob tracking. Blob tracking is accomplished by an algorithm designed to analyze a video feed or still image and identify areas of contiguous pixels based on parameters like color or brightness. These areas of contiguous pixels are defined as a blob, whose position and speed the computer can then track (Nobel, 517). Many programming languages, such as Java, Processing, and C++, have external libraries for executing blob tracking. Isadora accomplishes blob tracking through an Actor known as Eyes++. Each instance of Eyes++ in an Isadora patch can track up to sixteen unique blobs. The Tangible Midi Sequencer utilizes three instances of the Eyes++ Actor to track the placement of the physical squares and create the rhythm of the MUES installation. To understand the challenges in creating a Tangible Midi Sequencer, let's first analyze a digital MIDI sequencer, also known as a beat box, a common feature of many digital audio workstations, or DAWs. A digital beat box is essentially a 2D array, with each row representing a different MIDI instrument and each column representing a single beat in the measure. The user of the digital beat box clicks or selects each grid location at which they would like to hear the chosen instrument play a beat. When the beat box is started, the pattern is played continuously as a loop. The difficulty in using blob tracking in Isadora to create a tangible sequencer that any user can approach and begin placing and rearranging physical objects to create a musical rhythm is threefold. One issue


being that whichever physical object is first recognized by the Eyes++ actor is always assigned as blob 1, the second object found as blob 2, and so on. It would be possible to just have blob one always trigger beat one; however, this becomes problematic when the sequencer is designed to be used by participants without prior instructions. For example, if blob one were always assigned to beat one and so on, and a user did not place their objects sequentially from left to right, the physical pattern and the rhythm would be uncorrelated. More importantly, this would not allow for the creation of interesting and complex rhythmic patterns such as:

blob 1: on  off on  off on  on  off off
blob 2: on  off off off off off off off
blob 3: on  on  off off off off off off

To solve this issue I perceptually divided the sequencer surface into eight vertical columns, each representing one beat and 12.5 percent of the total horizontal video image. Each Eyes++ actor was set to track eight objects. Each distinct object has its own output from Eyes++ into its own Blob Decoder actor. The Blob Decoder actor will report the horizontal center of the object that it is tracking as a percentage of the total image. I then took the horizontal center of the object and compared this to each percentage division of the surface. Whichever portion of the total image the blob was found to be in would now return true to its corresponding position in an array list eight items long, known as a Selector actor. For example, if blob 1's horizontal center is found to be between 12.5 and 25 percent of the total horizontal image, and blob 2's horizontal center is found to be between 87.5 and 100 percent of the total horizontal image, then location 2 and location 8 in the Selector actor will be set to true, and a note will sound every time those locations are iterated over. This effectively solves the issue of blob assignment and


object placement, and more importantly allows the user to create complex rhythm patterns, as physical spaces left blank will now be silent. The second issue in using blob tracking to create a Tangible Midi Sequencer of physical objects was in assigning different rows to different MIDI instruments. I first attempted to use the same idea of checking the blob's horizontal center, but now checking its vertical center. I quickly realized this method could not be used in tandem with the horizontal-center check method, as it would require an exponential increase in calculations: every single blob would have to be checked not against 8 plus 8 parameters but against 8 x 8 parameters, then be appropriately assigned to its beat location and instrument. To solve this issue I devised a system of color-coding the physical objects to correspond to different MIDI instruments. The video input stream is divided out into 3 separate streams, a red stream, a blue stream, and a yellow stream, using three instances of Isadora's Chroma Key actor. I set up the Chroma Key so that where it detects red pixels it substitutes pure white pixels into the video stream and replaces everything not red with pure black. This video stream is then fed into the first of the three Eyes++ blob trackers. The same is done for blue and yellow, with each separate stream being fed into a separate Eyes++ actor. Now, with one video input, different colors could be used to represent different MIDI instruments, or different parts of a drum kit such as kick drum, snare, and cymbal. The last challenge in creating the Tangible Midi Sequencer in Isadora was coming up with a mechanism to ensure that all the columns played their notes at the same time in a regularly repeating sequence, the heart of a sequencer. In a lower-level programming language one could use a combination of programming control conventions such as for loops or while loops.
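Before turning to that timing mechanism, the column-mapping logic described earlier can be re-created outside Isadora. The following Python fragment is a hypothetical stand-in for the Blob Decoder/Selector chain (the names and shapes here are assumptions): it quantizes each blob's horizontal center, reported as a fraction of the image width, into one of eight beat columns.

```python
NUM_BEATS = 8

def beat_column(horizontal_center):
    """Quantize a horizontal center in [0.0, 1.0] into a beat column
    0-7; each column spans 12.5% of the image width."""
    return min(int(horizontal_center * NUM_BEATS), NUM_BEATS - 1)

def build_pattern(blob_centers):
    """Given the horizontal centers reported for one color stream,
    return the 8-item on/off list a Selector actor would hold."""
    pattern = [False] * NUM_BEATS
    for center in blob_centers:
        pattern[beat_column(center)] = True
    return pattern

# A square at 15% of the width lands on beat 2; one at 95% lands on beat 8.
print(build_pattern([0.15, 0.95]))
```

Blank columns stay False, which is what lets empty physical space read as silence. Running one such mapping per color stream gives the three instrument rows without any 8 x 8 cross-check.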
Isadora lacks such explicit loop constructs, so an Envelope Generator actor, in conjunction with the Selector actors, was used to function as one.


The Envelope Generator was programmed to sequentially output an integer value from 1 to 8 every half second, until it reaches 8, at which point it starts over from 1. This numerical output is then hooked into the input of every single Selector actor mentioned earlier. This causes all the Selector actors to simultaneously iterate over their eight-item array lists (the 8 beats in the measure), return true if a blob was found to be in that percentage of the screen, and trigger its MIDI note. With only one Envelope Generator operating at the top level of the patch, hooked to all 24 Selector actors, it keeps all the notes (or instruments) found in the same column playing on a simultaneous beat while sequentially iterating over the eight possible beats (or columns) of the measure. Now a user is able to stack instruments (represented by the different colored squares) vertically in the same column/beat, such as a kick drum and a cymbal, and hear them play simultaneously. Here is a table that will hopefully make this clear if it is not already:

Kick:   on  off on  on  off on  off on
Snare:  off on  on  off on  on  on  off
Cymbal: off off on  off off off on  on

The sequencer plays all note-ons of a column simultaneously, moving to the next column every half second. The speed at which the Envelope Generator increments its value in playing the next column of beats was constrained to between 1/2 a second and 1/32 of a second. This input was then attached to a user-controllable input to increment or decrement the tempo of the rhythm, which will be discussed in the section on the Astroturf treadmill.
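The single-counter timing scheme described above can be sketched as a plain Python loop. This is an illustrative simulation, not Isadora patch code: the `patterns` dictionary mirrors a 3 x 8 on/off grid (one row per color-coded instrument), and `send_note` stands in for the MIDI output, both names being assumptions.

```python
import time

# One row per instrument, eight beat columns per measure.
patterns = {
    "kick":   [1, 0, 1, 1, 0, 1, 0, 1],
    "snare":  [0, 1, 1, 0, 1, 1, 1, 0],
    "cymbal": [0, 0, 1, 0, 0, 0, 1, 1],
}

def play_measure(step_seconds=0.5, send_note=print):
    """One pass over the measure: a single counter (like the Envelope
    Generator) visits columns 1..8, firing every instrument whose
    pattern is 'on' in that column, so stacked squares sound together."""
    for column in range(8):
        for instrument, pattern in patterns.items():
            if pattern[column]:
                send_note(instrument, column + 1)  # beat numbers 1-8
        time.sleep(step_seconds)
```

Because one counter drives every row, instruments stacked in the same column (here, all three on beat 3) fire on the same tick; shrinking `step_seconds` raises the tempo, just as the Envelope Generator's rate input does in the patch.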


The loudness of each beat is determined by the size of the square object placed. This is accomplished very straightforwardly by adding the height and width of the object together, multiplying by ten, and sending this value to the velocity input of the Send Midi Note actor. Lastly, the Send Midi Note values were sent out of Isadora to the DAW Ableton Live, running on the same main processing computer, via virtual MIDI ports created with software known as MidiYoke. The audio output was then piped from the processing computer into the REVE's surround sound system via a simple audio cable.

The Treadmill: Creation of the treadmill to control the tempo and the speed of certain digital projections was more straightforward than the creation of the Tangible Midi Sequencer. As mentioned before, I constructed the treadmill from wood, bearings, PVC pipe, and an Astroturf track. To ascertain the speed of the axle I had the idea to use an infrared LED emitter and infrared photodiode detector in combination. Having used an infrared (IR) emitter/detector in the past to measure a pulse rate via a fingertip, I knew that IR was very accurate at detecting black-and-white transitions. When a maximum of IR light is reflected to the detector it returns a high voltage; when little to no IR light is reflected it returns a low voltage. I affixed a paper disc, divided into evenly spaced black and white sections, perpendicular to the back axle of the treadmill. The IR emitter/detector was attached with the sensor pointing squarely at the paper disc, less than a few millimeters away. Finally, the sensor was connected to the analog input of an Arduino microcontroller.


As the axle turns, the sensor sends a higher voltage when it sees white and a lower voltage when it sees black. A small program was written for the Arduino to convert these voltages into numerical values, format the values, and then send them out via a serial USB connection every 10th of a second. The Isadora software allows for communication with serial ports as long as they are configured properly. An Isadora Serial In Watcher actor, with the correct data-parsing pattern defined, reads the values coming in from the Arduino microcontroller and sends a trigger every time the value crosses a predefined threshold. The rapidity of these triggers is counted against a 1-second pulse, effectively extrapolating an average speed for the treadmill. This average speed is then used in many instances of the overall installation. The speed value controls the playback speed of the swaying grass animation by being sent out of the main processing computer as an Open Sound Control (OSC) message via TCP/IP over the local area network to the REVE computer controlling the digital projections on the 5 large screens. The average pace of the treadmill is also used to drive the progression of the Singing Bowls. The most difficult challenges in creating the treadmill were in its physical construction rather than its digital components. Since the entire installation borrows from the DIY (do-it-yourself) aesthetic, all of the physical components and objects were created by the artist. Having never constructed a treadmill before, there were issues such as the Astroturf track ripping during use, which could have been resolved given more time and resources.

The Singing Bowls: The Singing Bowls were the most technically straightforward aspect of the installation and one of the earliest concepts tested. The bowls employed the same method of blob tracking as the Tangible Midi
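The speed-extraction step, counting threshold crossings against a one-second pulse, can be sketched as follows. The threshold value and the number of black/white disc segments are assumptions for illustration; the text does not give the actual figures:

```python
def count_crossings(samples, threshold=512):
    """Count low-to-high transitions of the IR sensor signal, the way
    the Serial In Watcher's trigger fires each time the incoming value
    crosses a predefined threshold. `threshold` of 512 is an assumed
    midpoint of a 10-bit Arduino analog reading, not a value from
    the installation."""
    crossings = 0
    was_high = samples[0] >= threshold
    for value in samples[1:]:
        is_high = value >= threshold
        if is_high and not was_high:
            crossings += 1
        was_high = is_high
    return crossings

def speed_from_crossings(crossings, segments=8, window_s=1.0):
    """Triggers counted over a 1-second window give revolutions per
    second once divided by the number of white sectors passing the
    sensor per revolution. `segments=8` is an assumed disc layout."""
    return crossings / segments / window_s
```

Averaging over a fixed one-second window, rather than timing individual transitions, smooths out jitter in the user's stride at the cost of a slight lag in the reported speed.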


Sequencer, only in a much simpler way. When the user agitates the water in the bowl, the waves create differences in the lighting conditions, and the blob tracker is able to pick up on the largest waves, track their motion, and send a MIDI note value to an Ableton Live MIDI instrument in the same way that the sequencer does. The horizontal motion of the tracked waves is used to generate MIDI pitch bends in the frequency of the MIDI note being played. The vertical motion and velocity of the waves are sent out as OSC messages via TCP/IP over the local area network to the REVE computer, altering the digital projections. The vertical motion of the waves controls the vertical position of the blob-like animation mentioned previously, and the velocity of the waves alters the speed at which the blob animation morphs. The Singing Bowls presented a significant challenge in the interfacing of real-world objects and phenomena to digital outputs. Lighting conditions played a huge role in the effectiveness of the Singing Bowls' ability to function as intended. The lighting conditions under which they were tested were similar to, but not exactly recreated in, the manner in which they were presented in the installation. Even minute changes in lighting conditions could make the whole component function inadequately and require time recalibrating the digital components. In addition, the bottom of each cylindrical bowl was not perfectly flat, and these imperfections created glare and reflections that, if the bowl was moved or rotated, could significantly impact the calibration of the blob tracker.

Conclusion: Ultimately the Multi-User Experience Synthesizer is an exploration into the possibilities for immersive installations and interactive digital systems to augment communicative action. MUES accomplishes this by becoming an event in which dialogic meaning can be negotiated between users; by allowing
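The text does not specify how the waves' horizontal motion is scaled into a pitch bend, only that it drives one. A plausible linear sketch into MIDI's 14-bit pitch-bend range might look like this (the linear mapping and frame width are my assumptions):

```python
def pitch_bend_from_x(center_x, frame_width):
    """Map a tracked wave's horizontal position to a 14-bit MIDI pitch
    bend value (0-16383, with 8192 meaning no bend). A blob at the left
    edge bends fully down, at the right edge fully up. This linear
    mapping is an assumption, not the installation's documented curve."""
    value = int(center_x / frame_width * 16383)
    return max(0, min(16383, value))
```

A sensible refinement would be to bend relative to the bowl's resting blob position rather than the frame edge, so an undisturbed bowl sits at the no-bend center value.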


to foster face to identity with the ego.


Appendix A: Tangible Midi Sequencer






Appendix D: Control Center


Appendix E: Large Projections


Appendix F: Blob Sequencer

Appendix G: Individual blob decoder


Appendix H: Schematic
