
Permanent Link: http://ufdc.ufl.edu/UFE0043935/00001

Material Information

Title: Record for a UF thesis. Title & abstract won't display until thesis is accessible after 2014-05-31.
Physical Description: Book
Language: english
Creator: Udell, Chester J, III
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2012

Subjects

Subjects / Keywords: Music -- Dissertations, Academic -- UF
Genre: Music thesis, Ph.D.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Statement of Responsibility: by Chester J Udell.
Thesis: Thesis (Ph.D.)--University of Florida, 2012.
Local: Adviser: Sain, James P.
Electronic Access: INACCESSIBLE UNTIL 2014-05-31

Record Information

Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2012
System ID: UFE0043935:00001

Full Text

PAGE 1

1 TOWARD INTELLIGENT MUSICAL INSTRUMENTS: NEW WIRELESS MODULAR GESTURAL CONTROL INTERFACES By CHESTER JAMES UDELL III A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSITY OF FLORIDA 2012

PAGE 2

2 2012 Chester James Udell III

PAGE 3

3 To my wife

PAGE 4

4 ACKNOWLEDGMENTS

I thank Dr. James Paul Sain for his continual guidance through this long journey. I also thank Dr. Karl Gugel, who had the sense of adventure to entertain a crazy idea, and the professors who invested considerable time and effort in me, including Dr. Paul Richards, Dr. Paul Koonce, Dr. Silvio Dos Santos, and Dr. Welson Tremura. You all have made a profound impact on my career and on my outlook on music, technology, and life. I am also obliged to Dr. A. Antonio Arroyo and Dr. Arthur Jennings for their advice and input during my doctoral study and dissertation. And finally, I would not have made it this far if not for the continual support and wisdom of my wife. I love you, Monique.

PAGE 5

5 TABLE OF CONTENTS

page

ACKNOWLEDGMENTS ........ 4
LIST OF FIGURES ........ 7
LIST OF OBJECTS ........ 8
LIST OF ABBREVIATIONS ........ 9
ABSTRACT ........ 11

CHAPTER

1 INTRODUCTION ........ 13
    Current Issues for Augmented Music Instruments ........ 15
    Proposed Solutions ........ 17
    Modular ........ 17
    Reversible ........ 17
    Non-invasive ........ 18
    Reconfigurable ........ 18
    Limitations and Scope ........ 19

2 HISTORIC TRAJECTORY ........ 22
    Anatomy of a Musical Instrument ........ 22
    Musical Instruments after Electricity ........ 23
    Early Electronic Instruments (The Primacy of the Keyboard) ........ 24
    Music for Loudspeakers ........ 28
    Computing, From Laboratory to Stage ........ 29
    MIDI and Other Serial Communication Protocols ........ 32

3 ELECTRIC VERSUS ELECTRONIC MUSIC INSTRUMENTS TODAY ........ 34
    Electric VS Electronic ........ 34
    Instrument Taxonomy ........ 35
    Alternative Controllers ........ 36
    Augmented Instruments ........ 39
    Augmentations of the Trombone ........ 44

4 ON MUSICAL GESTURE ........ 49
    Sound, Motion, & Effort ........ 50
    Gesture ........ 52
    Musical Gesture ........ 53

PAGE 6

6
    Instrumental Gesture ........ 54
    Spectromorphology ........ 55
    Gesture & Paralanguage ........ 57

5 INTERFACING WITH THE ACOUSMATIC ........ 58
    Sensor Interfaces ........ 58
    Related Work: Wireless Sensor Interfaces ........ 61
    Current Trends ........ 62
    Single Serving Innovations ........ 63
    Transparency ........ 65
    Accessibility ........ 65
    The Dysfunctions of MIDI ........ 66

6 EMOTION: RECONFIGURABLE MODULAR WIRELESS SENSOR INTERFACES FOR MUSICAL INSTRUMENTS ........ 70
    Overall Design Philosophy ........ 70
    Sensor Nodes ........ 72
    Node Design Considerations ........ 74
    Node Types ........ 75
    The MCU ........ 76
    Addressing Protocol ........ 76
    Radio Specifications ........ 77
    Receiver Hub ........ 79
    Hub Design Considerations ........ 80
    Software Client ........ 81
    Data Input ........ 81
    Data Processing ........ 84
    Data Mapping ........ 85
    Implementation: Augmented Trombone Using eMotion ........ 86

7 CONCLUSIONS AND FUTURE DIRECTIONS ........ 89
    Broader Impacts ........ 90
    Future Directions ........ 90
    Conclusions ........ 94

APPENDIX: MUSICAL SCORE: CAPOEIRISTA FOR FLUTE, BERIMBAU, AND LIVE ELECTRONICS ........ 95

LIST OF REFERENCES ........ 107

BIOGRAPHICAL SKETCH ........ 113

PAGE 7

7 LIST OF FIGURES

Figure        page

3-1  An illustration of where in the process electricity is introduced for electric instruments. ........ 34
3-2  An illustration of where in the process electricity is introduced for electronic instruments. ........ 35
3-3  Based on Miranda and Wanderley's 2006 text on Taxonomy of DMI Types. ........ 36
5-1  Market available wired sensor interfaces and their specifications as of 2011. ........ 60
5-2  Market available wireless sensor interfaces and specifications as of 2011. ........ 62
6-1  System Overview ........ 71
6-2  Sensor Nodes ........ 72
6-3  A comparison of various market available wireless data transceivers. ........ 79
6-4  Software client workflow. ........ 81
6-5  Software client screenshot of sensor modules. ........ 83
6-6  Screenshot of Data processing window. Processing flows from top down. ........ 84
6-7  Screenshot: Patch Bay Window ........ 85
6-8  Alternative Mapping Screenshot ........ 86
6-9  Implementation of the eMotion System on the Bass Trombone. ........ 88
7-1  Workflow of a Novel DDW Software Client. ........ 93

PAGE 8

8 LIST OF OBJECTS

Object        page

A-1  Capoeirista for flute, berimbau, and live electronics ........ 95

PAGE 9

9 LIST OF ABBREVIATIONS

a          Address Bit
A/D, ADC   Analog to Digital Converter
AHRS       Attitude Heading Reference System
CSIR       Council for Scientific and Industrial Research
CSIRAC     Council for Scientific and Industrial Research Automatic Computer
CTIA       International Association for the Wireless Communications Industry
d          Data Bit
DAW        Digital Audio Workstation
DCM        Direction Cosine Matrix
DDW        Digital Data Workstation
DIY        Do it yourself
DMI        Digital Musical Instrument
DOF        Degrees of Freedom
EEPROM     Electrically Erasable Programmable Read Only Memory
EVI        Electronic Valve Instrument
EWI        Electronic Wind Instrument
FSR        Force Sensitive Resistor
GENA_1     General Analog Node, 1 Input
GENA_6     General Analog Node, 6 Inputs
GPIO       General purpose Input/Output
HMSL       Hierarchical Music Specification Language
IMU        Inertial Measurement Unit
IRCAM      Institut de Recherche et Coordination Acoustique/Musique, Paris
IP         Internet Protocol

PAGE 10

10

LED        Light emitting Diode
MAC        Media Access Control
MCU        Microprocessor
MIDI       Musical Instrument Digital Interface
MIT        Massachusetts Institute of Technology
NIME       New Interfaces for Musical Expression
OSC        Open Sound Control
RF         Radio Frequency
RX, PRX    Receive, Primary Receiver
SigCHI     A Special Interest Group for Computer Human Interactivity in the Association for Computing Machinery
STEIM      Studio for Electro-Instrumental Music, Amsterdam
TPE        Trombone Propelled Electronics
TX, PTX    Transmit, Primary Transmitter
UDP        User Datagram Protocol
UI         User Interface
USB        Universal Serial Bus
WSN        Wireless Sensor Network

PAGE 11

11 Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

TOWARD INTELLIGENT MUSICAL INSTRUMENTS: NEW WIRELESS MODULAR GESTURAL CONTROL INTERFACES

By Chester James Udell III

May 2012

Chair: James Paul Sain
Major: Music

This dissertation is dedicated to combining some of the latest ultra low power microprocessor and radio frequency (RF) wireless technology to build a new wireless sensor network (WSN) for musical instruments called eMotion. The hardware is designed to address three current issues in the field of augmented musical instruments: single serving innovation, accessibility, and transparency. This interface implements a unique design approach when compared to other similar music interfaces currently available. To this end, the eMotion hardware will be modular, reversible, non-invasive, and reconfigurable, permitting a musician to interact with computers in live performance using the physical gestures of their instrument. Beginning with a general account of historic developments and technological precedents, the aesthetics of augmented instrument design will be discussed in relief to other design approaches. In response to the issues facing augmented instrument development in the literature, a conceptual framework for the design of the eMotion system will be constructed based on the notions of musical gesture. The second half of the dissertation consists of technical documentation, construction, limitations, and

PAGE 12

12 potential applications of this unique system. An account of implementing this new wireless gestural interface on the bass trombone will also be discussed. Also included is a piece composed by the author for Flute, Berimbau, and Live Electronics. The original version is a piece for live performers and electroacoustic sound for four channels. It is presented here as a stereo reduction along with the MaxMSP software performance program on a data CD.

PAGE 13

13 CHAPTER 1
INTRODUCTION

Human interaction with sounding objects lies behind instrumental gesture. The passage from object experimentation to the creation of a musical instrument involves the increasing refinement of hitting, scraping or blowing, such that sophisticated and detailed techniques of control are consciously elaborated as a result of the performer-listener's conscious exploration of the interactive relationship. Gradually a performance practice evolves. However refined, specialized or rarefied the instrument, its morphologies and associated techniques may become, the primal hit, scrape or blow never fades from the listener's mind. It is a knowledge possessed by everyone, and its sophisticated instrumental guise is something all can appreciate.

Denis Smalley, The Listening Imagination [1]

Throughout history, musical instruments have followed a process of co-evolution with developing technology to become what we know them as today. From sticks and bone, to cat-gut strings, to brass, to the well-tempered pianoforte, to electronic synthesizers, music instruments reflect the major technological breakthroughs of an epoch shaped by culture and the natural human predisposition to interact with sound. As technology becomes more accessible, faster, and smaller, a question is raised: how can acoustic instruments and traditional performance practice be interfaced with present-day technological developments?

Ten years ago, Kim Binsted, in the paper Sufficiently Advanced Technology: Using Magic to Control the World, contended that computers were ubiquitous (this is before the Apple iPod and iTouch emerged on the market) [2]. In this past decade, the saturation of new mobile and personal network technology has significantly shaped the paradigm of what it means to interact with computing. Having become invisibly interwoven into the fabric of our daily routine, these conceptual

PAGE 14

14 developments have only begun to marginally affect conventional music instrument design and performance practice.

This dissertation is dedicated to combining some of the latest ultra low power microprocessor and radio frequency (RF) wireless technology to build a new wireless sensor network (WSN) for musical instruments called eMotion. A WSN is comprised of several individual wireless sensors called nodes that measure environmental conditions including temperature, pressure, ambient light, motion, and sound. This data is wirelessly transmitted to a central hub for monitoring. A special focus is placed on the trombone, for which a means of attaching and removing individual sensor nodes will be devised as might be required for performance with live interactive electronics. This interface implements a unique design approach when compared to other similar music interfaces currently available. The eMotion system will be modular, reversible, non-invasive, and reconfigurable, permitting a musician to interact with computers in live performance using the physical gestures of their instrument. With this technology the performer, improviser, and composer will have the basic building blocks to easily combine or reconfigure unique sensor groupings to control parameters such as: effects processing, algorithmic and generative computer music, and virtual sonic environments, all the while requiring no knowledge of microprocessors or programming. For instance, what if the clarinet (an omnidirectional instrument) could physically localize its sound to a specific area in a concert hall based on the directional orientation of the instrument? Or if the amount of distortion effect on an electric guitar could be controlled not with a foot pedal, but by simply tilting the guitar?
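The kind of gesture-to-parameter mapping imagined here can be stated very compactly in software. The sketch below is purely illustrative and does not come from the eMotion specification: the function name, the 0-45 degree range, and the 0.0-1.0 distortion scale are assumptions chosen only to show how a single tilt reading might be rescaled and clamped before being handed to an effects processor.

```python
def tilt_to_distortion(tilt_deg: float,
                       min_deg: float = 0.0,
                       max_deg: float = 45.0) -> float:
    """Rescale an instrument tilt angle (degrees) to a 0.0-1.0 distortion amount."""
    amount = (tilt_deg - min_deg) / (max_deg - min_deg)
    return max(0.0, min(1.0, amount))  # clamp so extreme tilts stay in range

# A guitar tilted 30 degrees from rest would drive the distortion effect
# at roughly two thirds of its available depth.
print(tilt_to_distortion(30.0))  # 0.666...
```

In practice such a mapping would live inside the software client described in Chapter 6, where the scaling range and the destination parameter can be changed without touching the hardware.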

PAGE 15

15 Current Issues for Augmented Music Instruments

In the late 1970s and early 1980s, when the size and cost of microprocessors and sensors significantly decreased, composers began to experiment with retrofitting musical instruments with this new technology to communicate gestural data to computers. The principle behind augmented instruments is that they maintain their original acoustic properties and, to some degree, extend their traditional performance practice. Historical perspectives, context, pros and cons, and aesthetic issues of instrument augmentation presented in relief with other forms of digital music instrument design are detailed in Chapter 3. The implementation of WSN technology is an attempt to respond to three emergent issues that have been raised in the literature on augmented instruments over these last four decades: single serving innovations, transparency, and accessibility.

Based on an observation of performances for specific augmented instruments (Tod Machover's Hyper Cello, the Mutant Trumpet, and the Metasax, to name a few), it seems they are not only technological extensions of the instruments themselves, but also specific to the composer/performer for whom they are designed. These instruments are often designed to fit the idiosyncratic needs of performers and composers, but as such they have usually remained inextricably tied to their creators [3]. Furthermore, the particular innovative developments these augmented instruments employ have commonly been disseminated only in a limited number of instances, namely conference presentations. This single serving approach to instrument development is not necessarily counterproductive. On the contrary, individual experimentation is a catalyst for a rich diversity of solutions and methods. However, this trend may also be a contributing factor in the

PAGE 16

16 general failure to disseminate such innovations and performance practice to the broader community.

Another major issue facing the design of augmented instruments is the invasiveness of the sensing technology. Instrument augmentation may require physical alterations that might be difficult to undo or are completely irreversible. Additionally, wires tethering the instrument to the computer and the size and weight of the hardware components tend to throw off the balance of the instrument and often interfere with traditional performance practice. Using smaller sensors and microprocessors along with new wireless technology to mitigate the unwieldy nature of this process is a primary goal of this study.

Finally, despite valuable advancements in wireless networking and microprocessors, along with a growing community of developers for embedded devices like the ARDUINO, the accessibility of this technology to the average classically trained musician remains largely out of reach. Currently, the technology is at a point that still requires at least a hobbyist knowledge of microprocessors, programming, and sensors to even begin to experiment with novel interactive systems. This limits the use of these systems almost exclusively to those who can acquire a certain level of technical expertise beyond music making. In a time where an elementary school child can effectively use a cell phone (a fantastically sophisticated piece of technology) without any knowledge of how to fabricate or program one, trends in current conference proceedings for both Computer Music (NIME) and Electrical Engineering (SIGCHI, for example) point towards a growing need for composers to employ (and for classically

PAGE 17

17 trained musicians to use) physical modes of interactivity between an instrument and electroacoustic sound using computers without any expert knowledge [4-7].

Proposed Solutions

To this end, eMotion is a step towards a general purpose solution based on the following design criteria: modular, reversible, non-invasive, and reconfigurable.

Modular

The eMotion system is comprised of individual sensor nodes (a transmitter/sensor pair) and a single universal receiver that collects the sensor transmissions and sends compiled packets to the computer via USB. Each node has its own unique ID (based on its sensor type and instance) and is dedicated to only transmitting the data of its attached sensor. For instance, one node may be dedicated to transmitting sonar data and another node may be dedicated to measuring ambient light. The user is then free to utilize one, both, or neither of these sensor nodes by turning them on or off, attaching or detaching them. Multiple sensor nodes of the same sensor type may also be used. In this manner, one can acquire a collection of general sensor nodes and use a subset of them in any combination as needed for a particular application.

Reversible

The use of eMotion should not require any destructive alterations to the music instrument itself, so that the user can easily revert back to their original acoustic instrument. Removable adhesives and bendable soft materials are being explored for attaching and detaching these sensor nodes.
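The actual addressing protocol and packet format are documented in Chapter 6; the sketch below is only a hypothetical illustration of the modular idea described above, in which every transmission carries a (sensor type, instance) address so the receiving hub can tell any number of interchangeable nodes apart. The type codes, field widths, and four-byte layout shown here are assumptions made for the example, not the eMotion specification.

```python
from dataclasses import dataclass

# Hypothetical sensor-type codes, for illustration only.
NODE_TYPES = {0x01: "sonar", 0x02: "ambient light", 0x03: "accelerometer"}

@dataclass
class NodeReading:
    node_type: int  # which kind of sensor sent the packet
    instance: int   # distinguishes several nodes of the same type
    value: int      # raw sensor sample

def parse_packet(packet: bytes) -> NodeReading:
    """Unpack an assumed 4-byte packet: [type][instance][value high][value low]."""
    if len(packet) != 4:
        raise ValueError("unexpected packet length")
    return NodeReading(packet[0], packet[1], (packet[2] << 8) | packet[3])

# Example: the second ambient-light node reporting a 10-bit reading of 512.
reading = parse_packet(bytes([0x02, 0x02, 0x02, 0x00]))
print(NODE_TYPES[reading.node_type], reading.instance, reading.value)
```

Because the address travels with every packet, the hub does not need to be reconfigured when a node is added, removed, or swapped for another of the same type, which is the property the modular design above relies on.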

PAGE 18

18 Non-invasive

One of the issues plaguing musical instrument augmentation is the network of wires often required. Similar musical interfaces available on the market have only one relatively sizable wireless transmitter to which all sensors must be connected by wires. This issue can be mitigated when each sensor node wirelessly transmits its own data, localizing only short wires to the specific position on the instrument. The sensor nodes are designed to be exceptionally small (the largest prototype node is 1.26 inches in diameter) to minimize weight and clutter on the instrument.

Reconfigurable

The nature of this interface design allows the user to easily reconfigure the placement and combination of individual sensor nodes for any given project and instrument. This enables the user to find the most optimized network combination, sensor-instrument placement, and mapping to meet their unique aesthetic goals without having to redesign the technology itself. The sensor nodes may even be distributed amongst several instruments or non-instrumental performers like dancers to create novel modes of musical interactivity. For instance, a dancer may control the spectral filtering of a flute player through their body movements.

These eMotion sensor nodes can be viewed analogously to a piece of hardware that musicians are generally familiar with: the mute. For brass and string instruments, mutes are placed inside of the bell or on the bridge to alter the sound with interesting effect. These are readily available for musicians to acquire. eMotion sensor nodes may be regarded as an extension of this tradition, where a musician can easily acquire and place these objects on an instrument to extend its sonic and expressive capacities with the aid of the computer and microphone. With these nodes, the composers/performers

PAGE 19

19 can intuitively construct and reconfigure their own customized wireless physical interfaces between instruments and computers. The possibilities and interactive paradigms may reach a new level of flexibility, expression, and accessibility.

Limitations and Scope

The scope of this document is limited to detailing the historic trajectory, conceptual framework, technical development, and implementation of eMotion and a novel software client designed by the author called a Data DAW. Additionally, this document specifies the general aesthetic issues in the fields of augmented instruments and acousmatic music to which eMotion is responding. Although this document focuses primarily on developing a wireless sensor array for the trombone in particular, the broader impact of this research is to create an open system where this technology will be easily transferable to any musical instrument, dancer, or other object in a live performance setting. This sensor data could also be mapped to control stage lighting and video projections in addition to sound, but this is also beyond the scope of the dissertation.

Furthermore, there are classes of technology geared towards reducing the amount of effort (which will be defined in Chapter 4) to perform certain tasks (which is a necessary mantra for increasing quality of life and extending opportunities for the disabled). However, it is not the aim of this research to make instrument performance easier per se. Rather, the agenda is to augment the expressive capacities of the instrument and take full advantage of effort that is naturally exerted by the performer. A musical instrument may be viewed as an extension of the performer's body (hands, arms, fingers, etc.) to make sound. Sensing the physical gestures of the performer and broadcasting this data to computers will enable new indicative relationships between a performer, instrument, and acousmatic sound. To this end, it is

PAGE 20

20 the natural, physical virtuosity of the performer that stems from countless hours spent in a practice room that eMotion will take advantage of.

In Chapter 2, a general account of the historic trajectory and technological precedents leading up to current Digital Musical Instrument (DMI) [8] design types will be presented. Chapter 3 offers a look at DMIs today: the concept of co-evolution between music instruments and technology, and the aesthetics of augmented instrument design in contrast to other classes of digital instruments. The dichotomy between augmented instruments and alternative controllers will briefly be addressed. However, this document is concerned principally with the augmented instrument approach rather than alternative controllers (which are musical controllers that do not emulate traditional instruments for the sake of purposely avoiding traditional performance paradigms). Alternative controller design is a valuable field of exploration, but is well beyond the scope of this document.

Chapter 4 poses the question: what does one mean by the word gesture, and how does it inform this dissertation's focus? Contributing towards this framework will be the perception of cross-modal dissonance in composing and experiencing music for a physical musician with non-physical acousmatic sound. The ideas of Trevor Wishart (gesture as musical paralanguage) and Denis Smalley (spectromorphology and surrogacy in acousmatic sound) will be implemented to construct the point of convergence between physical and non-physical acoustic phenomena.

PAGE 21

21 Sensor interfaces and their technical limitations will be reviewed in Chapter 5, along with figures comparing several different market available products. The inadequacy of the MIDI protocol to encode and represent the intimate and necessary physical gestures of the performer to interact with sound will also be discussed.

The technical development behind eMotion is detailed in Chapter 6, starting with the overall design philosophy. The individual components, such as the microprocessor type, are described along with the reasons behind selecting them over other popular alternatives. Specification tables including data rates, range, and current consumption illustrate the capability and capacity of eMotion for reference. Finally, a client designed in MaxMSP to receive and dynamically map this data in real time to available musical parameters will be detailed. The application of the eMotion system for trombone will also be illustrated.

In the final chapter, the author concludes the dissertation by summarizing how eMotion attempts to address the issues discussed in previous chapters, the potential ramifications for music composition and performance practice, current limitations of the hardware, and proposing future directions for eMotion.

PAGE 22

22 CHAPTER 2
HISTORIC TRAJECTORY

What we want is an instrument that will give us continuous sound at any pitch. The composer and electrician will perhaps have to labor together to get it. At any rate, we cannot keep working in the old school colors. Speed and synthesis are characteristics of our epoch. We need twentieth century instruments to help us realize them in music.

Edgard Varèse, Making Music Modern [9]

Throughout the history of Western music, instruments have been continuously evolving and every period has given rise to new or modified instruments and playing techniques. This chapter details a brief history from the earliest electronic instruments leading up to the development of microcomputers. From there, four taxonomic branches of electronic instruments will be outlined as supported in the current literature, with a particular focus on augmented instruments. General historic accounts for electronic music instruments, especially for the years 1750 to 1960, are many [8], [10-12]. The objective for this chapter is not to thoroughly reiterate the existing literature. The intent is to outline a path that may serve to establish a contextual trajectory and to provide a highlight reel or montage focusing on electronic music interfaces leading up to the development of new technology to be discussed in Chapter 6.

Anatomy of a Musical Instrument

Musical instruments are a specialized form of technology. They extend the expressive capacities of human movement to produce acoustic phenomena beyond what our bodies can make alone. Musical instruments are comprised of two fundamental components: a performance device and a sound generator. The performance device is what the performer physically manipulates to control the sound

PAGE 23

23 (e.g., the valves on a trumpet, bow of a violin, or keys on a clarinet) and will be referred to within the scope of this dissertation as the gestural interface. The sound generator for a given instrument propagates the sound (vibrating strings or air columns, body resonance, shape, and material of the instrument) and it influences qualities like resonance and timbre [10]. For conventional acoustic instruments, the physical interface and sound generator are inextricably related, meaning that the device the performer manipulates and the object that propagates sound are fundamentally one cohesive system. In this manner, the physical gestures performed on an instrument (like buzzing lips, bowing strings, and striking material) directly and visibly produce and shape sound; sound and gesture are one. All musical instruments were bound to this causal gesture-to-sound paradigm, until the first electronic instrument was developed in 1759 [11].

Musical Instruments after Electricity

The ability to harness the power of the electron in the mid-1700s set the stage for a quantum leap in the evolution of musical instruments. The Clavecin Electrique (electric harpsichord), invented by Jean-Baptiste Delaborde, is one of the first electric instruments to mimic the form and performance paradigm of traditional acoustic musical instruments [11]. Unlike its acoustic counterpart, the electric harpsichord generated acoustic tones mediated by electricity. The mechanical separation of performance device (a harpsichord keyboard) from the sound generator (electromagnetically vibrating tines) set the stage for subsequent developments in electronic instrument design and is arguably the first musical instrument of record to have fractured the conventional gesture-to-sound paradigm. For these instruments, physical gesture and the resulting sound is mediated by electrical processes. This disconnect between

PAGE 24

24 gesture and sound has served as the basis for both the prospects and problems in moving forward with electronic instrument design.

Early Electronic Instruments (The Primacy of the Keyboard)

With all the creative potential that comes with the ability to electronically synthesize any tone, timbre, and temperament, the extant literature suggests an interesting limitation consistently imposed on a significant proportion of early electronic instruments [13]. Conventional 12-tone chromatic keyboard interfaces served as the primary control interface of choice. A factor contributing to this trend might be that, although alternative means to control electronic instruments beyond the 12-tone keyboard were well explored during this period, few of these electronic instruments met with any significant level of success (defined by longevity and widespread use).

Thaddeus Cahill's Telharmonium (resembling a complex organ) piped the first Muzak via telephone wire to nearby hotels, restaurants, museums, and resident subscribers. Maurice Martenot premiered his Ondes Martenot at the Paris Opera in 1928. This instrument featured a unique electronic ribbon controller (a looped string with a ring attached to the middle). A performer controlled pitch by placing their finger in the ring and sliding along the length of the ribbon. The left hand could manipulate a variety of controls to vary loudness and timbre. A keyboard diagram was soon added to inform the performer of pitch location and eventually a keyboard controller was fully integrated into the design.

Laurens Hammond introduced the Hammond Organ in 1935, the first commercially successful electric music instrument. Within the first 3 years of production, a Hammond

PAGE 25

25 could be found in 1,750 churches, and Hammond was manufacturing roughly 200 instruments a month by the mid-1930s [14]. The original model A generated sound using 91 rotating tone wheels driven by a synchronous motor and had two 5-octave manuals and a 2-octave pedal board. Electronic oscillators eventually replaced the tone wheels in the 1960s.

Hugh LeCaine, a Canadian scientist and electronic music pioneer, invented the Electronic Sackbut (a sackbut being the medieval ancestor of the trombone) in 1948. Ironically, it did not resemble a brass instrument at all, but it vastly expanded the expressive capacity of the diatonic keyboard controller. Each key was not only sensitive to vertical pressure (to control articulation and amplitude) but also horizontal placement. In this manner, a musician could achieve glissandi spanning an octave in either direction by sliding their finger across the keys, resembling the sliding pitch of a trombone. The instrument could also shape its electronic tones to emulate more natural acoustic phenomena (e.g., the airy tone of a flute or the fuzzy articulations of an oboe).

Despite the prominence of keyboard-controlled electronic instruments in early years, it is important to note that numerous non-keyboard-controlled electronic instruments were developed throughout this period and some achieved notable widespread use. For example, a non-contact method to convert physical gesture into electrical signals emerged in post-revolutionary Russia in 1920. Russian cellist and electrical engineer Leon Theremin unveiled the aetherphone, also known as the theremin, a television-size cabinet with two antennas sensitive to the proximity of human hands (electrical capacitance). With this instrument a performer could control

PAGE 26

26 pitch using the antenna sticking straight up with one hand and amplitude using the looped antenna with the other. Many performances for this instrument were staged. It is interesting to note that the theremin was often used to perform classical literature, including performances of Vocalise and, in one case with the New York Philharmonic, Hungarian Rhapsody no. 1 for four theremins and orchestra. The Trautonium was another non-keyboard instrument, invented in 1928, inspiring Paul Hindemith to write a solo concerto for the instrument [10]. Discrete control over pitch and articulation was possible using the Trautonium. A performer controlled pitch by moving their finger along a length of wire with the right hand and articulated tones by pressing a bar with the left (resembling the right/left hand performance practice of the theremin). The instrument was so successful that Oskar Sala went on to develop the Mixturtrautonium, which produced the bird effects for Hitchcock's The Birds and bell sounds for the 1950s Bayreuth production of Parsifal [12].

In the late 1950s and 60s, Raymond Scott (an accomplished bandleader, pianist, composer, and engineer) designed a variety of electronic music instruments. Some instruments, like the Clavivox and Videola, were controlled primarily with a keyboard. However, as Robert Moog recalled, Scott continually sought more elegant ways of controlling an electronic circuit [15]. Scott used photoresistors in his Circlemachine, an electronic sequencer comprised of lights mounted on spinning wheels inside of wheels. It was autonomous, modulating speed and sound through its own movements.

PAGE 27

27 The Electronium algorithmically generated musical material on its own. An operator could interact with the process using an array of knobs, buttons, sliders, and switches. This led Motown executive Guy Costa to comment on the instrument [15].

The invention of the transistor allowed for smaller and more cost-effective design methods, moving technology away from expensive, fragile, and sizable vacuum tube technology. This enabled Robert Moog to develop Modular Voltage Controlled Synthesizers based on his research presented at the Audio Engineering Society Conference in 1964 [16] and his associations with pioneers like Raymond Scott and Herbert Deutsch [15]. His modular methodology significantly changed the approach to electronic instrument design and is a principle that informs the present document. Studios, engineers, and composers often developed hardware to perform specific and limited tasks, requiring the development of new hardware from scratch each time. Moog's approach was to break down the process of electronically generating sounds into functional building blocks (there were three types: signal generating, signal modifying, and voltage control). The user could then combine and configure these blocks to synthesize a variety of sounds without having to redesign the hardware. Despite the fact that any interface could have been used to control these synthesizers, the keyboard remained the primary performance interface. Ribbon controllers, knobs, patch bays, and sliders were often included as secondary controls. Therefore, despite several early innovations and experiments, the literature suggests

PAGE 28

28 that musicians interacted with early electronic instruments almost exclusively through the conventional keyboard.

Music for Loudspeakers

Developed in parallel with early electronic instruments was the emergence of commercially available sound recordings. While evidence of earlier prototypes exists [17], Thomas Edison is credited with the development of technology capable of recording sound around the year 1877. Recording allowed sound to be disembodied from its original acoustic source and be reproduced via loudspeaker in a separate time and space. As a result, loudspeakers essentially became a universal instrument, capable of representing virtually any acoustic phenomenon. This notion sparked novel methods for composing with and experiencing sound.

Musique concrète started in Paris with the work of Pierre Schaeffer, who described the practice as working directly (concretely) with sound material versus that of working indirectly (or abstractly) with sound using a system of notation, which had to be realized through instrumental or vocal performance. Thus, material for musique concrète was derived exclusively from pre-recorded sound. The medium is often associated with recordings of everyday sounds and objects. Conducive to the aesthetic of collage, the sounds are to be appreciated beyond their original source and context to include their abstract musical properties (i.e., timbre, pitch, and rhythm) and relationships between one sound and another. By the 1950s, electroacoustic music was promoted as a better term to describe a synthesis between the concrete and electronic approaches of working with sound [18]. The relationship between a sound and its unseen source became the basis for the acousmatic movement.

PAGE 29

29 Meanwhile in the United States, Bebe and Louis Barron created one of the earliest electroacoustic music studios in 1948 in New York. The Barrons often crafted novel electronic circuits for projects to create the sounds they needed and eventually amassed a collection of these devices to use for compositional purposes. John Cage hired the Barrons to record source material for Williams Mix in 1952. Also in 1952, Raymond Scott designed perhaps the first multi-track tape machine in the world, capable of recording 7 to 14 parallel tracks on a single reel, which resulted in several patents for magnetic tape technology. Hugh LeCaine later devised a way to mix down six separate tracks of tape in 1955.

Despite the limitations of early technology, composers worked to accommodate live performance situations with electroacoustic music. Some of the earliest examples of compositions for acoustic instruments and acousmatic sound include: Ottorino Respighi's use of a recorded nightingale song in his orchestral work Pines of Rome, John Cage's Imaginary Landscape no. 1 (1939) for phonographs and live instruments, and Milton Babbitt's Vision and Prayer (1961) for tape and soprano. This gave rise to a new compositional aesthetic: pieces for live instruments and fixed media. At this point in time, the burden remained on the musician to chase the tape.

Computing, From Laboratory to Stage

In the 1950s, computers became a significant compositional tool. The earliest instance of a computer explicitly programmed to generate musical tones was the CSIRAC, which played popular melodies as early as 1950 [19]. An Australian composer and resident of the United States, Percy Grainger, collaborated with Burnett Cross to build his Free Music

PAGE 30

30 Machine in the mid-1950s. This was not a computer per se; two large rollers fed four sets of paper into a series of mechanical arms. The arms rolled over the contours cut into the paper and controlled pitch, timbre, and amplitude of eight tone oscillators. Voltage fluctuations corresponded to patterns on the paper, sonifying the graph drawings [11].

Despite these early breakthroughs, the establishment of computer music is largely credited to Max Mathews at Bell Laboratories. While developing methods to convert sound into computer data for playback and for telephone applications, Mathews became inspired by the musical implications. His program, Music I, resulted in an IBM 704 playing 17 seconds of music. Alongside the efforts of many engineers and composers including John Pierce, Milton Babbitt, Jean-Claude Risset, James Tenney, Pril Smiley, and Emmanuel Ghent, Mathews continued to develop his computer music program. Subsequent versions, Music II through Music V, served as predecessors to many computer music programs in use today, including Csound. Later, the Groove system was developed by Mathews in collaboration with F. Richard Moore in 1968. Equipped with potentiometers, joystick, keyboard, and typewriter (for inputting commands), the Groove system allowed a user to record and edit gestural input in real time [20]. It is important to note that only the gestural data was recorded in real time, to later be used as control input for voltage-controlled oscillators and amplifiers of an analog synthesizer.

The RCA Mark II Electronic Music Synthesizer was installed at the Columbia-Princeton Electronic Music Center in 1959. The manner of programming this computer was similar to that used with CSIRAC, punching holes in paper cards using binary

PAGE 31

31 number sequences and then feeding it into the machine. The process was time consuming [10]. Despite the limitations, the possibilities of this system provided unprecedented control over musical elements including timbre, timing, and texture.

In the 1970s, Charles Moore created an interesting hybrid compiled/interpreted programming environment called Forth [21]. The appeal of this approach is the ability to program software using an interactive shell environment. Programming could be done virtually in real time, executing subroutines, testing, and debugging without having to recompile each time. This had an instant appeal to composers, as a moment of inspiration could quickly be realized in sound. It also made algorithmic composition and interactive performance more accessible, as George Lewis notes [22]. Successors of Forth include Hierarchical Music Specification Language (HMSL) [23] and Formula (FORth MUsic LAnguage) [24].

In 1976, Giuseppe di Giugno developed the first all-digital computer-controlled synthesizer with the encouragement of Luciano Berio [25]. The first model, the 4A, allowed a user to program not only the individual control function devices, but also the interconnections between the devices themselves; a kind of virtual patching [26]. The next interactive computer music system developed at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) was comprised of three processors running in parallel (to compute audio), an external

PAGE 32

32 computer (to run programs), and a graphic display. For control input and feedback, sliders, buttons, organ keyboard, joysticks, and a clarinet controller were provided. Developing in tandem with this system at IRCAM in the 1980s was the work of Miller Puckette, who named his programming environment in honor of Max Mathews [26]. Max became the first program designed for live performance and to be widely distributed to a significant user community. Initially used to program the 4X at IRCAM, Max was later refined through collaborations between Puckette and David Zicarelli. This ultimately resulted in the program MaxMSP. Still in wide use today, this graphical programming environment allows users to design their own unique music programs by connecting basic blocks of code together. Even while the program is running, the user can reconfigure and modify the program and immediately observe the results in real time. It has become a significant platform allowing musicians to program and interact with computers on stage.

The capacity and power of computer processing has increased exponentially over time while the size of the hardware has decreased [27]. Computational hardware that once filled up rooms of space can fit on a microchip. This has alleviated many technical performance barriers on stage. Computationally intensive processes like live pitch tracking and gesture analysis can serve as points of interactivity and control in music. Beyond MaxMSP, other significant music programs including Csound and SuperCollider have expanded their function to include real-time applications and support new control interfaces as well.

PAGE 33

33 technology in a given era. Before MIDI and other serial communication protocols developed, gestural controls were still mechanically bound to the physical synthesis device (i.e., one could argue that the controller and the sound generator were still one cohesive instrument by design). Disconnecting the keyboard controller from a synthesizer to use somewhere else was not originally supported in early technology. The inception of music hardware communication protocols dissolved this final tether, modularizing the control mechanism and the sound generator as two separate entities. This paradigm shift resulted in modular control devices for musical instruments. An issue that emerged, however, was that each manufacturer used a proprietary serial communication protocol between controllers and synthesizers. As a result, getting musical controllers and hardware to universally communicate was difficult. The solution was the development of the MIDI protocol, which established an industry standard for musical hardware communication in 1983 [28]. Facilitated by the MIDI protocol, an alternative controller could be readily detached from a synthesizer or computer and reattached to another, or be used to interact with multiple sound-generating devices simultaneously.

Other music-based serial protocols continue to emerge. mLAN (developed by the YAMAHA corporation) was publicly released in 2000 and allows users to communicate not only audio, but also controller information on a single IEEE 1394 cable. Matthew Wright and Adrian Freed developed Open Sound Control (OSC) at CNMAT at UC Berkeley to address the shortcomings of MIDI in 1997. Despite its limitations, MIDI has yet to be completely usurped by these newer, more powerful, and flexible protocols. However, current trends are moving in that direction [29].
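To make the contrast between these protocols concrete, the sketch below builds the raw bytes of a MIDI Control Change message and of a simple OSC message carrying the same gesture value. It is a minimal illustration based on the standard MIDI and OSC 1.0 message layouts; the address "/slide/position", the controller number, and the example values are arbitrary and are not defined by eMotion or the cited literature.

```python
import struct

def midi_control_change(channel: int, controller: int, value: int) -> bytes:
    """A 3-byte MIDI Control Change message; the data value is limited to 7 bits (0-127)."""
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])

def osc_pad(text: str) -> bytes:
    """OSC strings are null-terminated and padded to a multiple of 4 bytes."""
    raw = text.encode("ascii") + b"\x00"
    return raw + b"\x00" * ((4 - len(raw) % 4) % 4)

def osc_float_message(address: str, value: float) -> bytes:
    """An OSC message with one 32-bit float argument: address, ',f' type tag, big-endian float."""
    return osc_pad(address) + osc_pad(",f") + struct.pack(">f", value)

# The same gesture encoded both ways: MIDI quantizes it to one of 128 steps,
# while OSC carries a named address and a full floating-point value.
print(midi_control_change(0, 1, 93).hex())                 # 'b0015d'
print(osc_float_message("/slide/position", 0.7314).hex())  # 24-byte OSC packet
```

The 7-bit data byte visible in the MIDI message is the kind of resolution constraint that newer protocols such as OSC were designed to relax.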

PAGE 34

34 CHAPTER 3
ELECTRIC VERSUS ELECTRONIC MUSIC INSTRUMENTS TODAY

Broadly speaking, there are two major categories of instrument that employ electricity. The distinction between the two is determined by where in the conversion process electricity is employed [30].

Electric VS Electronic

Electric musical instruments retain a natural gesture-sound paradigm where the physical gesture directly initiates an acoustic sound, for example, plucking a string on a guitar or striking a string in a piano with a hammer. In the case of electric instruments, the acoustic sound is converted into electrical signals via a transducer, or pickup, to extend the sonic palette of the instrument through amplification, effects processing, etc. The electric guitar and electric piano are common examples.

Figure 3-1. An illustration of where in the process electricity is introduced for electric instruments.

Electronic musical instruments convert human gesture input into electrical energy at the outset of the process and generate the sound entirely through electronic means. For instance, pressing a key on a MIDI keyboard controller generates an electrical impulse. The resulting electrical impulses may be mapped to control electronic phenomena to generate sound using a loudspeaker. Some examples of electronic

PAGE 35

35 instruments include: the HAMMOND Organ, MOOG Music synthesizers, and the YAMAHA DX7. With notable exceptions like the EWI (electronic wind instrument), EVI (electronic valve instrument), and YAMAHA WX7, most commercially successful electronic instruments today remain oriented towards the conventional keyboard interface.

Figure 3-2. An illustration of where in the process electricity is introduced for electronic instruments.

Instrument Taxonomy

As new technologies and interactive paradigms emerge, composers and musicians continue to experiment with the traditional relationships between gesture and sound. A plethora of new musical controllers with a staggering variety of morphologies and functions have evolved as a result of the natural human predisposition to experiment and explore. Each device is truly unique, meeting the personal aesthetics of its designer. In Digital Musical Instruments: Control and Interaction Beyond the Keyboard, a morphology comparing Digital Music Instruments based on their resemblance to existing acoustic instruments is proposed (Figure 3-3) [8].

PAGE 36

36 Figure 3-3. Based on Miranda and Wanderley's 2006 text on Taxonomy of DMI Types [8].

A comprehensive taxonomy of electric and electronic instruments is beyond the scope of this historical review, but a few useful resources exist that categorize these instruments into families based on varying criteria (including [8] and [31]). However, identifying the fundamental differences between Alternative Controllers and Augmented Instruments will be of significant value here. This distinction is governed principally by two different aesthetics. Those who approach performance in electroacoustic music as a continuation of previous traditions are apt to approach instrument design starting with conventional instruments and performance practice (i.e., augmented instruments). Those who wish to demonstrate that technology affords radical differences from the past are likely to avoid emulating conventional instruments and paradigms (i.e., alternative controllers).

Alternative Controllers

An alternative controller is defined within the scope of this document as: Alternative, a physical interface that facilitates novel modes of interactivity between performer and sound by avoiding conventional instrument models (like the diatonic

PAGE 37

37 keyboard); Controller, the device is a control interface that is to be mapped onto a sound-producing mechanism, and is not the sound-producing mechanism itself. It is important to note that there were electronic instruments dating back to the original theremin that utilized alternative methods to control sound. However, because the control mechanism was inextricably part of the synthesis system by design, these early instruments are not technically controllers. This document may be the first to make a distinction between alternative controllers versus alternatively controlled electronic instruments. The MIDI theremin versus the original analogue theremin is such an example. Controllers may either be built from scratch using a collection of sensors or re-appropriated from other existing objects (like game controllers and smart phones).

Nicolas Collins, in his article, recounts watching a performance of Vespers, a piece by Alvin Lucier where blindfolded performers use sonar instruments to audibly navigate around a performance space. This event retained core elements of live performance but bore little resemblance to a conventional music recital. The point here is that Collins attributes the groundbreaking character of such events to the way they free sound from the limited types of objects sold in music stores and, through this disassociation, prompt new musical discoveries [32].

Robert Boie developed an early alternative controller at Bell Labs under the supervision of Max Mathews. The Radiobaton is a percussion-based instrument with radio transmitters embedded in two mallets. This device broke the mold for alternative

PAGE 38

38 music control in many respects. It was capable of detecting the three-dimensional spatial position of each mallet over a sensor surface [26]. Several composers have written extensively for this controller, including Max Mathews and Richard Boulanger.

The prospect for sculpting sound with hands is intriguing, and a class of glove-based alternative controllers soon emerged. At first experimental and designed only by individuals for particular projects, the glove controller design eventually found its way into the commercial market through the Nintendo Power Glove and the P5. It is interesting to note that the success for these interfaces on the market was short-lived. One of the first documented instances of glove controllers was built at STEIM for composer Michel Waisvisz. The Hands controller looks nothing like gloves, but rather a set of controls ergonomically mounted around his hands and wrists. Laetitia Sonami, after having worked with a pair of dishwashing gloves with magnetic sensors, decided to build one of the first versions of what is now known as the Lady's Glove, which includes microswitches for the fingers and flex sensors for the fingers and wrist, a sonar measuring hand distance from her waist, magnetic sensors, and a pressure pad.

A current trend in the field is the use of game controllers as a means to interactively interface with musical computing. This approach takes advantage of the gestural skills people develop who regularly engage with video games. Controllers also offer unique affordances for complex control of many simultaneous elements. There is also an inherent accessibility (familiarity) for new users to easily engage with sound. Little to no knowledge of hardware is needed to use these in a project: just plug and go. Some of the more popular controllers used are Nintendo Wii motes, perhaps because


39 of the focus on visual gestural elements. Jeffery Stolet has composed several works for the Wii mote, treating it as an extended conductors wand. Flight controllers like the SAITEK X45 also maintain a good compromise between interesting gesture types visible to the audience and a variety of user control options. According to CTIA, an international association for the wi reless telecom industry, 96% of US citizens own a cell phone, growing from a modest 13% in 1995 [33] Composers immediately began composing works for these devices, like Dialtones by Levin and Gibbons where the entire concert is executed through audience participation through use of their ring tones. Most recently, the family of Apple iphone/touch/pad devices along with DROID phone s has made the use of personal wireless communication devices a viable gestural controller in music performance. Apps like TouchOSC by hexler.net facilitate communication of music and sensor data between a wireless handheld device and computer. Displayi ng an array of knobs and sliders the mobile device is transformed into a kind of control surface. Augmented Instruments A general reason for a growing desire to break clean from acoustic instruments and cultural baggage is the existence of an ever widening rift, with slow developing conventional acoustic instruments and performance practice on one side and rapidly developing technologies with novel modes of interactivity, experimentation, and expression on the other. Remarkably the instruments of the symph onic orchestra have maintained their fundamental features since the mid 1800 s despite the accelerating rate of technological development in other aspects of life [34] Conventional instrument fabrication and performance practice represent a thoroughly established tradition that has naturally evolved and undergone refinements over the course of centuries. The


possibilities that emerging technology and electroacoustic music offer to extend the expressive capacities of traditional acoustic instruments are vast. Proponents of designing augmented instruments acknowledge and embrace the baggage of traditional performance practice to extend these well established traditions into the realm and possibilities of computers and acousmatic music. There are several approaches to instrument augmentation, and each instance employs a unique combination of methodologies. These include the use of sensors to collect performance gestures on the instrument (or pitch tracking), and converting the acoustic instrument into a resonant transducer for electronic sounds (i.e. the instrument itself is used as a kind of loudspeaker). The earliest attempts at augmenting conventional instruments sacrificed the original acoustic function of the instrument (e.g. the Synthophone [35] and Trombone Propelled Electronics [36]). A notable criterion that distinguishes augmented instruments from instrument like and instrument inspired controllers is that the original acoustic and mechanical function of the instrument is maintained, preserving and extending traditional performance practice. The following discussion will focus on augmented instruments within the context of extending traditional instruments and performance practice. Some of the earliest examples of instrument augmentation principally employed amplification (e.g. Robert Ashley's Wolfman and Stockhausen's Stimmung (1968)). Extending acoustic instruments beyond amplification became possible when computing became available on stage with the birth of integrated circuits. Experiments with indeterminacy and improvisation using electronic circuits gave rise to early


interactive electronic music in the 1960s. A notable example involves John Cage, David Tudor, Marcel Duchamp, David Behrman, Gordon Mumma, and Lowell Cross using a photoresistor chessboard on stage in 1968. Playing chess generated electronic sonic and visual events [37]. The year before, Gordon Mumma applied these capacities to live instrument performance. Hornpipe established a cybernetic feedback relationship between the horn player, the performance space, and the electronic circuitry. As affordable sensors and microcontrollers emerged on the market in the 1970s and 1980s, retrofitting conventional acoustic instruments with sensor interfaces became more accessible than ever. Computers can be programmed to use the sensor data to influence musical algorithms, signal processing, or other non musical elements such as stage lighting. Tod Machover's hyperinstruments project is the first major project to interface acoustic instruments with computers using microprocessors and sensors while maintaining the acoustic properties of the original instrument. It began in 1986 with the goal of designing expanded musical instruments, using technology to give extra power and finesse to virtuosic performers. Hyperinstruments were designed to augment guitars, keyboards, percussion, strings, and even conducting. A famous application is the hypercello (an acoustic cello with sensors and computer controlled synthesized sounds) played by Yo-Yo Ma in the early 1990s. The hypercello allows the cellist to control an extensive array of sounds through performance nuance. Wrist measurements, bow pressure, position sensors, and left hand fingering position indicators enable the computer to measure, evaluate, and respond to a variety of


aspects of the sound, modifying the synthesized sounds that accompanied the acoustic cello through the performer's own gestures on the instrument [38]. Jonathan Impett developed the Metatrumpet. The goal was to extend the performance practice inherent in playing the trumpet into the realm of continuous control and interaction for computer music. His intent was to free the performer from the constraint of chasing the tape [39]. Instead of closely acting out a script, one could create musical situations that allowed the performer to explore and improvise. The trumpet was interfaced to a computer using a variety of sensors, a STEIM Sensorlab interface, and pitch-to-MIDI conversion hardware. The two-dimensional position of the trumpet was calculated using sonar receivers attached on both sides and underneath the trumpet, along with an assortment of switches. They discovered that breath pressure sensing could not be employed without compromising the playing technique and the acoustic integrity of the instrument. This constraint remains true for brass instruments even today. Instead, Impett and Bongers relied on the other sensors; their values were converted to MIDI, relayed to a computer, and then mapped to musical control variables. Matthew Burtner extended the performance practice of the saxophone using new technology to convert it into an electroacoustic instrument: the Metasaxophone. His first attempt applied a variation of technology from


Gary Scavone at Stanford University and Perry Cook at Princeton to create the MIDI Saxophone [40]. Using a Parallax Inc. Basic Stamp II-SX microprocessor fixed to the bell to convert analog sensor data to MIDI information, Burtner attached force sensitive resistors (FSRs) to the keys and other areas of the saxophone, triggers, and a 2-d accelerometer. He designed a software client in MaxMSP to read in the data and control digital signal processing and synthesis algorithms. His next Metasax version explored and augmented the natural acoustic characteristics of the instrument. Burtner designed a 3-microphone amplification system that attaches to almost any location on the saxophone using flexible tubing. Generally, one mic is designed for a location deep inside the bell while the others are suspended outside of the horn to pick up the widest range of frequencies. With MaxMSP, the audio signals can be mapped to modify the functions of the MIDI data, resulting in complex, multifunctional control over parameters of either the sound or the MIDI control channels [40]. Curtis Bahn mentions in his paper [41] that musical performance in a cultural context has always been inextricably linked to the human body; yet, the body has played only a minor role in electronic music. Physicality, feedback, and gesture, the reintegration of the body in electronic music, are all key to maintaining and extending musical/social traditions within a technological context. To this end, Bahn has worked to use sensing technology to interface traditional instruments and dancers to computers. One of his well known achievements is the SBass (Sensor Bass interface), a 5-string electric bass built by luthier Bill Merchant. Bahn equipped it with a small mouse touch pad under the fingerboard, several slide sensors,


FSRs, potentiometers, a 2-d accelerometer, and several extra buttons. A microcontroller attached to the side of the bass converts all of the sensor data into MIDI information, which is sent to a computer running MaxMSP [42]. This system gives Bahn sensitive gestural control of musical parameters while performing on stage with a computer and other electronics.

Augmentations of the Trombone

Trombonists are a peculiar breed of person, and it is no coincidence that the instrument has been subjected to a plethora of experiments as new technologies emerge. Because the author places particular focus on applying sensor interface technology to the trombone, it is appropriate to cover a few notable examples. Nicolas Collins' Trombone Propelled Electronics expanded the horizon for the technology that could be employed on an acoustic trombone. It is also interesting to note he has never conventionally played the trombone [36]. He used the trombone as a point of departure to create a novel controller to explore territory beyond traditional performance. The first version of TPE began when he interfaced a Commodore 64 personal computer, keyboard, and monitor with Stargate digital reverb hardware to simulate an array of effects (reverb, time stretch, sample/loop). At a time when personal computing still took up significant space, George Lewis once mentioned that the computer was smuggled on stage in the guise of a trombone [36]. Collins attached a dog leash to measure the length of the slide as it moved in and out. A small keypad was also attached to the slide, where pressing keys could be mapped onto any musical parameter. A small speaker driver was attached to the mouthpiece to send the sound of


the Stargate into the resonant body of the trombone, turning the instrument into an acoustic transformer of the digitally processed sounds. Collins performed with this version until 1994, when it was run over by a taxi. Future versions included a STEIM Sensorlab, ultrasonic transducers replacing the dog leash, and an Apple iMac speaker/amplifier to replace the old driver. Strictly speaking, Trombone Propelled Electronics (TPE) does not necessarily fall within the scope of augmented instruments (having set aside conventional trombone technique), but there is still an integral acoustic function, as the instrument serves as a resonant transformer of sound. George Lewis is a trombonist, improviser, and computer music pioneer. Lewis created Voyager, a non hierarchical interactive musical environment, at STEIM in Amsterdam in 1986 [22]. The program interfaces with any instrument capable of sending MIDI data and parses data for up to two performers (though Voyager autonomously generates music without the need for any human performer interaction). For acoustic instruments like the trombone, a pitch-to-MIDI converter is employed. Up to 64 asynchronous layers, or MIDI instruments, generate some form of musical behavior. There is an arbitration layer that decides which layers are present and how they behave. The incoming MIDI data stream is analyzed, and features like average pitch, speed, and tempo are assigned to control parameters of the arbitrator. So while not interacting on a one to one level, the musicians are affecting the musical behavior of the system on multiple, complex levels. As opposed to the above examples where sensors are integral for


parametric control, this interaction takes place exclusively through sound; there are no veto buttons, foot pedals, or physical cues [22]. Lewis's approach stands out in that, where the approaches above extend the human performer into the realm of the computer, Voyager extends the computer into the realm of the human as an autonomous virtual improviser. In 2003, Richard Karpen created a music composition for an amplified trombone equipped with a photoresistor slide [43]. In collaboration with Chad Kirby, they designed a thin, telescoping tube that could be easily attached to the slide to measure the length of its extension. There is a photoresistor in one end of the enclosed tube and a light source on the other. The light is angled in such a manner that the photoresistor receives brighter or dimmer light corresponding to the length the tube is extended by the slide. This data is converted and sent to a computer to interact with digital signal processing. The composition was written for Stuart Dempster. The slide data, which included position, speed of change, and direction of movement, was used to manipulate prerecorded speech and choral samples using just the slide. Working with trombonist and engineer Hilary Jeffery, composer Neal Farwell set out to expand on a virtual performance environment controlled by a sensor equipped trombone mute Jeffery had already designed, called the tromboscillator [44]. The outcome resulted in three components: the uSlide, an ultrasonic distance finder sensor to measure slide position; the eMouth, a small Apple speaker driver coupled to the


mouthpiece so that the trombone plays itself; and the eMute, a hand held loudspeaker that is used like a plunger mute and can function either as a pickup or can drive sound into the bell of the instrument to change its acoustic properties [45]. With the uSlide, the ultrasonic sensor measures time of flight, which can be used to accurately calculate distance (with extremely slight variance based on the temperature of the air). The slide data is mapped to control parameters of the eMouth and eMute. The eMouth drives a synthesized excitation signal into the mouthpiece; in this manner, the virtual lips can actuate the sound of the trombone much like the lips of a human performer. When propped up on stage, the trombone appears to be playing with no aid of an actual performer. The eMute similarly drives sound into the bell of the trombone using an acoustic trombone physical model. This changes the muting characteristics of the horn while playing and also provides tactile feedback that one can feel in the embouchure. The player can then use this haptic sensitivity to effectively tease out different pulsations. A USB controller circuit was connected to the back of the eMute so the performer could switch the function from driver (output) to pickup (input/amplification), and also turn it on and off. In the augmented instrument examples above, the underlying goal has been to create a more causal link between the physical gestures performed on a traditional instrument and electroacoustic music. As stated earlier, a paradigm shift occurred in the relationship between sound and its originating source when electricity became a mediating factor in sound production. The notion of gesture serves as an invaluable link when reconciling the aesthetics of non traditional performance practice. In the following chapter, the concept of musical


performance gesture and the notion of gesture in electroacoustic music will serve as a kind of conceptual base for the significance of the technology presented in Chapter 6.


CHAPTER 4
ON MUSICAL GESTURE

Music controllers and the protocol that supports their communication with synthesis algorithms should be founded on a hierarchical structure with the performance gesture, not the score based note list, as its unit of currency.
Joseph Paradiso, Current Trends in Electronic Music Interfaces [7]

The literature on music and gesture is an ever expanding field, including the ongoing work by Cadoz, Wanderley, Paradiso, Payne, and Godøy/Leman [13], [46-50]. Institutions like IRCAM, MIT, and STEIM are continually extending the horizons of gesture capture technology applied to musical control. Although entire dissertations could be written on this topic alone, the aim of this chapter is only to pull from the gesture sound literature to establish a framework linking non visual (incorporeal) acousmatic music with visual (physical) instrumental performance. Ever since sound and gesture could be mediated by electricity (e.g. any arbitrary gesture may be mapped to produce any sound), the translational relationship between gestures performed on stage and the sound heard by an audience has remained a point of major concern in the field of electroacoustic music. How does one convincingly reconcile acousmatic music (sound for loudspeakers alone) with the traditions of live performance? Musical gesture lies at the epicenter of these two domains. Even in the context of acousmatic music, the listener seeks stable gesture sound relationships to create mental visual associations with sound [1]. Only recently have technologies emerged that allow the empirical capture, analysis, and study of gesture in sufficient detail [50]. Emerging technology is spurring sudden growth in new sensor interfaces and fueling heightened interest in gesture sound relationships (such as the Xbox Kinect contactless motion sensor transforming the gaming industry). The following discussion


offers a conceptual framework behind the technology developed by the author, which is a proposed step toward facilitating a more intimate relationship between physical music performance and the non visual morphological structures of sound.

Sound, Motion, & Effort

The traditional relationship between a performer and their musical instrument is more than simply manual (i.e. physically holding it). The tactile interface of a musical instrument is multi faceted, including breath control, embouchure, key action, lungs, diaphragm, throat, reverberations (of teeth, fingers, hands, and head), posture, pressure, and tension: a complex feedback loop between the sound an instrument makes and the performer's physical engagement with the instrument to produce sound [51]. This intimacy has long been sought after in electronically mediated musical instruments. When one is asked about what technology is, the responses often tend toward identifying a tool that makes performing a certain task easier or more efficient [52]. Even the computer is thought to be primarily an effort saving device. The presence of too much effort in a system may be perceived as an indicative sign of error or inefficiency. Minimizing physical effort is a trend that can be observed in the production of electronic musical instruments as well. Think about the physical conditioning it takes to effectively play a soprano saxophone or clarinet versus the effort required to play a Yamaha keyboard; far less conditioning is demanded when engaging with electronic instruments. The comparison may be analogous to writing with a pencil versus typing with a keyboard (i.e. what is possible to do with a pencil versus a keyboard and how long it takes to gain proficiency at these skills). However, effort may


itself hold a certain fascination. As with the elaborate contraptions of the game Mousetrap, it may be more interesting for an audience to observe someone contending with an incredibly complex instrument (making control as difficult as possible) rather than a rudimentary one. In fact, the notable characteristic of instrument performance (as well as any spectator sport) may fundamentally be the witnessing of the display of conditioned, physical effort. Because of the separation of sound source from the physical interface, designing systems to emulate intuitive gesture sound relationships for electronic instruments (instrument like, instrument inspired, and alternative controllers) is not without complications. This is partially due to the disparity of haptic force feedback in these control surfaces. Haptics refers to the sense of touch (tactation) and the sense of motion and body position (kinesthesia) [53]. While causal and complex haptic relationships exist between a performer and a conventional musical instrument, the touch of a button that triggers a synthesized tone will always be the same irrespective of the properties of the sound [54]. Instead, visual feedback is often displayed on LCD or computer screens. Though it is possible to artificially produce reaction forces like vibrations within these instruments, doing so in a convincing manner has yielded persisting difficulties [55]. This is not to say that ergonomic and expressive alternative controllers have not emerged, but the range of expressiveness and sensitivity in electronic instruments that mimic acoustic (and electric) counterparts has not yet reached a sufficient level of sophistication [54]. On the other hand, conventional musical instruments have been greatly refined over the centuries. The primary motivations have been to increase ranges, accuracy,


and nuance of sound, not to minimize the physical requirement of effort [52]. Performing a conventional musical instrument is the act of mapping complex gestural territories onto a grid of musical parameters (pitch, harmony, articulation, etc). Thus, effort is closely related to expression. Musical instruments serve as a kind of transducer, converting physical gesture into acoustic sound [56]. This attribute is also what makes musical instruments difficult to play well. However, the range of expression makes overcoming this difficulty fulfilling for both the performer and the audience. When compared to the physiological effort required to play a note on the flute and the sensitivity it affords the performer, a simple button press seems inadequate for musical control. To gain proficiency at performing with an instrument (as with learning to write), the difficulty of the task comes from being forced to use the generic capacities of the human motor system in order to master an arbitrary gestural code (as with learning to write with a pencil) [46]. Music performance is fundamentally a conditioned, specialized form of movement. Musicians move to create sound and listeners often move intuitively in response (sometimes in the form of mimicking instruments or dancing). This is the gesture sound paradigm; experiencing music is inextricably linked to experiencing movement.

Gesture

There is no one common definition of the word gesture. The use of the word covers a vast territory depending on its context in different fields including neuroscience, physiology, computer science, engineering, linguistics, semiotics, etc. [46]. The aim here is to focus primarily on the word gesture and its use in music and not to propose an all-encompassing definition of its


meaning within the context of this document. Based on work from Wanderley and Battier (2000), two premises should be defined to set the context for the rest of the chapter [50]. First, the word gesture necessarily makes reference to human corporeality and its physiological behavior, whether this behavior be useful or not, significant or meaningless, expressive or inexpressive, conscious or subconscious, intentional or reflexive, applied or not to a physical object, effective or ineffective, or suggested. Second, phenomena that are not necessarily produced by human bodily behavior also commonly have the word gesture ascribed to them. Take, for instance, natural movements like flux (wind, waves, sand), falls, crumbling, etc. Gesture, associated with these phenomena, carries an anthropomorphic intent. The perception of a gestural sound or gestural contour in sound (i.e. a Mannheim Rocket or Denis Smalley's sonic object) is such an associative metaphor. Gestures identified in sound itself are human physiological associations the listener ascribes to the stimulus. In other words, gesture is intimately tied to human physiological experience.

Musical Gesture

Jensenius and colleagues describe musical gestures as human body movements that go along with sounding music [50]. It should be pointed out that this definition might be too exclusive. The presence of audible music may be sufficient, but not necessary. Sound may evoke mental images of human gesticulation. However, visual stimuli can also evoke thoughts of unheard sound. Take for instance Mark Applebaum's Tlön for three conductors and no musicians. Three people conducting an imaginary piece are kept in time using headphones and a click track. The gestures are still surprisingly musical in nature, evocative of an inaudible music, despite the lack of sound in the room. Leman and Godøy describe musical gesture as being comprised of two levels. The primary plane of gesture is an extension of the human body (physical movement itself) while the secondary plane resides with intention (that which is imagined,


anticipated, or conveyed) [50]. Musical gesture thus involves both movement (the displacement of an object in space) and meaning (a communicative message from sender to recipient) relating to sound (audible or not). Sound may also evoke or have encoded within it notions of physicality (e.g. the galloping sensation of the William Tell Overture). A study by Shove and Repp (1995) suggests that music is essentially audible motion: the listener does not simply hear a horse or a bowing violinist; rather, the listener hears a horse galloping and a violinist bowing [57]. Indeed, performance defines what music essentially is for most people [1]. From this perspective, the perception of music with the impression of physical motion is inextricably connected because of our cultural and social experiences, a kind of generalized stimulus response.

Instrumental Gesture

Instrumental gesture is musical gesture viewed specifically within the context of performing with musical instruments. Wanderley (2000) and Cadoz (1994) describe instrumental gesture as comprised of three interrelated components [46]:

ERGOTIC: Referring to material action, modification, and transformation. No communicative information is encoded.

EPISTEMIC: The perceptual function of gesture, through which knowledge of the object is gained by touch. Notions of virtuosity and effort reside within this plane.

SEMIOTIC: The communication of information, meaning, and intent.

For instrumental gesture, all three functions coexist interdependently. Exertion is applied to a physical object, specific phenomena are produced whose forms and


dynamic evolution can be mastered by the subject, and these phenomena support a communicative function. Similarly, Delalande (1988) posits that when a musician performs, four instrumental gesture types may be identified. However, it is important to note that any instrumental gesture may fit equally well into several categories [50]:

SOUND PRODUCING: Gestures that mechanically produce sound serve to either excite a new sound or to modify a sustained one.

COMMUNICATIVE: Also referred to as semiotic gestures; meaning is encoded and intended as indicative cues to another performer or audience member. This includes foot tapping or bobbing of the head, shoulders, or instrument to indicate tempo, etc.

SOUND FACILITATING OR ANCILLARY: Gestures not directly involved in sound production that still affect the sound to some degree. For the case of a trombone player, the visible inhalation, expansion of the abdomen, posturing of the shoulders, and puckering of the embouchure all influence the moment leading up to the buzzing of the lips to create sound. The phrasing gesture is another type of sound facilitating motor activity. Wanderley has shown that these movements are consistent and reproducible even over long periods of time [58]. Research conducted by Campbell et al. and Quek et al. studying ancillary gestures in clarinetists shows these reproducible motor patterns are integrally connected to the performance of musical phrases and are often related to the movement of the clarinet bell [50]. These gestures may also function as communicative signals, enhanced when displayed (and even exaggerated) amongst virtuoso performers. However, the gestures still persist (albeit less) when a performer is asked to play immobilized, suggesting these gestures are tied to shaping sound and are fundamentally separate from gestures intended purely for communicative purposes [58].

SOUND ACCOMPANYING: Gestures that have no role in sound production or modification. They follow the contour of the music and cover the gamut from complex choreography to simple fist pumping or head bobbing.

Spectromorphology

In 1986, Denis Smalley introduced the term spectromorphology to the field of acousmatic composition. Comprised of four primary components (spectrum typology, morphology, motion, and structuring processes), this laid the theoretical


framework to describe morphological developments in sound spectra over time. His writing on the gestural nature of sound is integral to understanding the complex interrelationships between performance gesture and the morphology of sound. The concept of acousmatic music engages deeply with reinforcing or confounding visual associations: the listener subjectively attempts to fill in the missing information based on gestural and morphological cues heard in the sound. With respect to instrumental performance, Trevor Wishart posits that the morphology of human physiological gesture may be directly translated into the morphology of sound through the use of the vocal apparatus or some instrumental transducer [56]. It is the instrument itself that forms an interesting barrier in the translation from human performance gesture into the morphology of a sound. In this sense, Wishart posits that the vocal apparatus is the most sensitive gesture-to-sound transducer, capable of complex timbre, amplitude, and frequency modulation and articulation. When human utterance is heard in the context of acousmatic music, it is an unmistakably human gesture. Similarly, all wind instruments (dependent on the continuous breathing of the performer) and bowed instruments (where sound is produced through continuous physiological action) are also gesturally sensitive under this conception of a gesturally sensitive transducer. Percussive instruments like drums and pianos are viewed as less gesturally sensitive due to the fact that they lack the ability to modify a sound after excitation.


Gesture & Paralanguage

Espressivo, Cantabile, Molto Agitato. These are arbitrary symbols a composer purposely marks on a score, placing the burden of interpretation on the performer. Most audience members are aware of and can appreciate a performer who can make a single note sound urgent, relaxed, happy, mournful, or regal. Music, like all arts, has a semiotic goal: communication. This communication is not of unadorned data, but of the more important items in the phenomenological garden: feelings, ideas, experiences, longings [27]. In this sense, the communicative/semiotic plane of instrumental gesture and morphology in spectromorphology may be seen as serving an analogous purpose. They both function as the contextual paralanguage that the listener uses to interpret an experience when faced with sound. This idea is essential to the definition of gesture adopted here. Even though the notions of non visual acousmatic music versus the traditions of live instrumental performance seem diametrically opposed, the paralanguage of musical gesture can serve as a common territory that causally relates the two. What may have been a translational disparity between acousmatic sound and physical performance gesture may instead be an opportunity to explore the gestural territory between these two domains. However, current musical protocols like MIDI encode almost exclusively sound producing gestures (pitch, onset, velocity, duration, etc), not communicative gestures. The following chapter will detail the current hardware and issues central to interfacing musical instruments with acousmatic sound.


CHAPTER 5
INTERFACING WITH THE ACOUSMATIC

What was lost in [early] digital technology was the gestural, tactile immediacy of the analog world, so here you have this incredible computing power and just a mouse to control it with.
Peter Otto, Electric Sound [10]

Sensor Interfaces

Traditional acoustic instruments are often employed in compositions for live electroacoustic performance. Historically, sonic media (phonograph, magnetic tape) were fixed, and the burden largely remained on the musician to maintain synchronicity. In this manner, the musician could be influenced by the electroacoustic sound, but not the other way around. Once the medium shifted to a malleable digital data stream and computing became available on stage, real time interactivity between musicians and computers for live performance became a possibility. Hardware was soon developed that enabled musicians to influence electroacoustic sound in a number of ways (e.g. sensing performance gestures or pitch tracking) [59]. Sensor interfaces offered a direct means to map performance gesture onto interactive sound. However, this is far from the only possible method to achieve interactivity with electroacoustic music. Before market accessible microprocessors, composers and instrument designers often reappropriated computers and sensors from other hardware to attach to their instruments (e.g. Mumma's Cybersonics and Collins' Trombone Propelled Electronics). By the mid 1980s, affordable microcontrollers had reached the market (the Motorola 68HC11, for example), allowing instrument designers to more readily design custom hardware.


59 1983 the MIDI protocol was officially established as the industry standard for musical data, and by 1989 STEIM had developed the Sensor Lab (a portable microcomputer that co nverts sensor voltages into MIDI data). These developments enabled musicians to bypass a significant learning curve in hardware design and created an environment ripe for new alternative musical controllers to emerge. For the first time, the interactive la ndscape between music performance gesture and electroacoustic sound became accessible to a wider community of musicians, including those with only moderate levels of engineering experience, who could now purchase program, solder, and construct a unique in terface to meet the aesthetic demands of such projects. Performers today have more options to control sound than ever before and need not limit themselves to the chromatic piano keyboard or the early digital computer keypad and mouse paradigm. As for appro aches with sensor interface design, individuals fall along a spectrum ranging from DIY (do it yourself) to out of the box solutions. In most cases, an approach restricted to either extreme is impractical for broad application. The complex and economical na ture of current technology leads even the most advanced DIY hobbyist to start from off the shelf components at some point (e.g. purchasing microchips, integrated circuits, and other common electrical components). Conversely, many currently available out of the box interfaces, like game controllers, fail to meet the requirements of each unique project without modification. The appeal for DIY is the seemingly endless options to design custom built projects and the flexibility to alter designs as necessary. T he hobbyist market is an ever expanding field with thorough documentation, tutorials, and sample code for almost any design issue. The components often include microcontrollers (i.e. Arduino, Paralax,


60 Propeller, BasicStamp, Teensy, etc.), embedded programm ers, sensors, serial interfaces, and wireless transceivers (zigbee, xbee, IR, and RF). The downside ( just like crafting anything from scratch such as jewelry or clothes) is that it is often more expensive to buy individual components and requires considera ble more time (and risk) to assemble the components by hand. Alternatively, various out of the box solutions have the advantage of mass production (lowering cost) and can save substantial development time. These benefit musicians and composers because the vast knowledge required for designing and programming the hardware can be bypassed to a certain degree. However, one is still often required to understand minimal circuit design and programming. This approach also benefits those who have the requisite eng ineering background, allowing more time for prototyping, testing, and assembly. Exhaustive lists of available sensor interfaces with manufacturers, specifications, and prices already exist including [6] [60] but a few notable s ystems worth mentioning here include: Teabox by Electrotap, MIDIsense (LadyAda), MicroSystem (I cubeX), Phidgets, Eobody (Eowave), Arduino, and Gluion. Figure 5 1 details the technical specifications for the above mentioned interfaces as of 2011. Figure 5 1. Market available wired sensor interfaces and their specifications as of 2011.


These systems generally consist of a single, pre-programmed microprocessor that converts a given number of analog sensor inputs into digital data. The processor then compiles the sensor data using any number of protocols (e.g. MIDI, OSC, etc) and sends it to a computer via USB, MIDI, or other wired serial connection. Generally, these devices offer quick and easy sensor interface solutions with minimal hardware programming. Many versions also have General Purpose I/O pins and analog outputs to control motors, lights, and other devices. However, applications involving these systems require at least one wire tether from the interface to a computer. While this may not be an issue for installation art, this is not ideal for a musician (or dancer) who desires to move freely. Another challenge is the size of the hardware unit itself and the cabling between it and the many sensors, making this setup obtrusive to the performer. In some cases the cabling is even embraced as part of the visual aesthetic [8]. Having to alter one's performance technique to contend with obtrusive technology has fueled the development of wireless based sensor interfaces. However, these devices do not include the sensors themselves. This may require the user to purchase them individually (significantly increasing cost) or design their own.

Related Work: Wireless Sensor Interfaces

Hardware for wireless communication and the protocols that facilitate it have become increasingly accessible in recent years (e.g. Bluetooth, 802.11, UDP, etc). Notable examples of these systems include the WiSe Box, Wi-microDig (I-CubeX), Sensemble (MIT), MIDITron Wireless, EcoMotes,


62 and La Kitchen (IRCAM discontinued). Specifications, as of 2011, are shown in Figure 5 2. Figur e 5 2. Market available wireless sensor interfaces and specifications as of 2011. The primary advantage of these units is the severing of the umbilical tether between the sensor interface and the computer, allowing the musician or dancer to move freely. Ho wever there are disadvantages associated with the current ly available wireless systems as well. For example, cable is still required to connect the sensor circuits to the central processor, the hardware of many systems is too sizable to be inconspicuous or light and while wireless units may be worn as a belt pack, they do not easily attach to an instrument. T herefore, the risk of disconnected sensors and restricted spatial freedom of the performer may still persist. Current Trends As mentioned in Chapter 1, three emergent issues endure across the full range of market available sensor interface solutions introduced over the last four decades: single serving innovations, lack of transparency, and limited accessibility. While a single solution may not exist to address these complex matters, bringing the technology to a usable and accessible state for classically trained musicians would be an important step in the right direction.


Single Serving Innovations

Charles Dodge mentions that one of the impediments to writing enduring computer music is the relatively short lifetime of most computer and electronic instruments [59]. Consider the life cycle of instruments and musical repertoire before electronic music (sometimes spanning centuries) compared to the life of the average computer operating system. The lifetime of the average augmented instrument is shorter still, often undergoing significant alterations from one project to the next. Furthermore, each sensor interface developed using the hardware described above invariably becomes one of a kind and idiosyncratic to individual aesthetics and projects. Based on the literature for specific augmented instruments (the Hypercello, Mutant Trumpet, and Metasax, to name a few), it seems these instruments are not only technological extensions of the originals, but are specific to the composer/performer for whom they are intended. For example, Wanderley and Orio posit that such instruments have generally been designed by and for individual performers and composers, but as such they have usually remained inextricably tied to their creators [3]. There is also little established methodology from the field of HCI (human computer interactivity) to evaluate just how effective one method is compared to another. The particular innovative developments that these augmented instruments employ are commonly disseminated only in a limited number of instances, namely conference presentations. This single serving approach to instrument development is not necessarily counter productive. On the contrary, individual experimentation is a catalyst for a diversity of solutions. This trend may even usher in a new era of personally unique musical instruments designed to evolve and adapt to the individual. However, the ad hoc and volatile methodologies may also be a contributing factor in the


general failure to proliferate such innovations and performance practice to the broader community. One solution may be to modularize the intricacies of instrument augmentation into chunks. Chunking describes how the human mind handles information as expertise develops in complexity, grouping multi-step processes into a single routine (like waving hello or tying a shoe) [61]. One can then assemble and combine these chunks to perform tasks of greater complexity. This is a move towards the assembly line, or the IKEA approach to furniture design (quality notwithstanding, it allows one to build a complex office desk in your living room without a wood shop or experience). In electronic music history, sound engineers like Bebe Barron and Raymond Scott had to design custom circuits to address project specific issues, a worthwhile yet inefficient mode of working that was later addressed when Robert Moog modularized synthesizer components. Max and the more recent Scratch from MIT are programming environments that take a modular approach to code, where users can quickly program complex games and other media by interconnecting blocks or objects. If applied to sensor interface design, this may preserve the flexibility for individuals to assemble sensor blocks to meet unique aesthetic demands, while reducing the complexity of lower level design methods. This approach could also aid the evaluation of these methods by observing how efficient each sensor is at addressing specific tasks and noting common trends that emerge


amongst a community of users. For example, if one observes a significant demand for placing accelerometer blocks on guitars, perhaps one day guitar manufacturers may integrate them into the standard design.

Transparency

As mentioned previously, the display of wires may well lend itself to a technically appealing visual aesthetic. The tradeoff is the need for a performer to significantly alter their playing style to contend with shifting weight, decreased flexibility, and the risk of disconnecting wired sensor interfaces. It would be of great value to eliminate the need for wires entirely. Systems have been developed that utilize the architecture of wireless mesh networking. A wireless transmitter is used for each individual sensor in the network, whereby all sensors can wirelessly communicate to a central receiver, or to each other. For systems like FireFly, developed at Carnegie Mellon University, each node comes preprogrammed to transmit light, audio, temperature, humidity, and acceleration [62]. Several of these can be deployed to monitor areas of an environment. The unprecedented flexibility these systems offer is conducive to industrial, military, swarm robotics, and environmental applications. However, a system has yet to be designed specifically for music performance applications such as augmenting acoustic instruments.

Accessibility

Despite (or perhaps as a result of) the phenomenal progress of technology, the knowledge required to design musical sensor interfaces spans a territory well beyond that of music making. New interfaces are designed almost exclusively by persons who embody a unique combination of backgrounds including circuit design, digital systems,


wireless networks, data protocols, microcontroller and computer programming, as well as having some degree of musicianship. A classically trained musician with the desire to engage with these paradigms must overcome a significant technical learning curve. Furthermore, the time spent at the workbench tediously developing hardware extends the latency between the initial musical inspiration and the final product, greatly affecting the creative process. The following analogy may help illustrate the issue: the CTIA wireless association reports that 80% of U.S. teenagers own a cell phone [63], yet none of them have the capacity to build and program one completely from scratch. This fact does not prevent teens from effectively using the hardware, often outperforming adults (even those who know how to fabricate or program phones) in common tasks like texting. This begs the question: with the technology at our fingertips, why is it necessary for musicians to design their own hardware to accomplish even rudimentary interaction with computers? The problem extends beyond acquiring a level of technical facility. When one purchases components one by one for specific projects, the cost quickly adds up. Often, what is made with the components is particular to a specific project. When a new project is begun, more of the same components are purchased again. A system designed specifically to be modular and reconfigurable by the user would solve many of these issues, but has yet to emerge.

The Dysfunctions of MIDI

The very popularity of MIDI based systems testifies to their utility, and yet much is wrong with it.
F. Richard Moore, The Dysfunctions of MIDI [29]


Market available sensor interfaces often communicate to a computer using a MIDI connection, which could in principle be routed directly to any other MIDI device. However, nearly all of these interfaces are connected directly to a computer. Thus, the user is also forced to purchase a MIDI interface, increasing the complexity and total price of the system. It is also important to note that for the tables listed in this chapter, the resolution column in each table refers to the bit depth of the A/D hardware only (the process where the analog sensor voltage is converted into a digital representation). However, it is the data protocol that determines the ultimate bit resolution when data reaches the computer. For instance, MIDI only allows a 7-bit resolution for control data (a range of 0-127). Most A/D hardware offers at least 10-bit resolution (a range from 0-1023). The MIDI protocol is a kind of bottleneck at the end of the data pipeline. MIDI was established as a standard partially for its low cost, but at the cost of speed and data transfer. While proven effective for studio situations, its usefulness is limited for many types of music and performance methods [59]. The pitch/velocity/duration oriented nature of MIDI is almost exclusively a byproduct of the piano keyboard based control interfaces that have traditionally dominated the industry [29]. Furthermore, each data parameter is independent from the others. This may be appealing for the engineer or composer desiring unprecedented control. However, the parameters of acoustic musical instruments are interdependent. For example, a saxophone player can change pitch and timbre not only by depressing keys, but also with air pressure and by bending the embouchure (even environmental conditions, such as the air temperature of the room, might be considered).
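To make the resolution bottleneck concrete, the following is a minimal illustrative sketch (not drawn from any particular interface's firmware) of how a 10-bit sensor reading is typically compressed into a 7-bit MIDI control change value, discarding the three least significant bits of gestural detail in the process:

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical example: squeezing a 10-bit ADC reading (0-1023)
   into a 7-bit MIDI control change value (0-127). */
int main(void)
{
    uint16_t adc = 517;              /* raw 10-bit sensor reading       */
    uint8_t  cc  = adc >> 3;         /* keep the top 7 bits: 517 -> 64  */

    /* Any reading from 512 to 519 collapses onto the same CC value,
       so 8 distinguishable sensor positions become 1 MIDI step.        */
    uint8_t midi_msg[3] = { 0xB0, 1, cc };  /* control change, channel 1, CC#1 */

    printf("ADC %u -> CC %u (message B0 %02X %02X)\n",
           adc, cc, midi_msg[1], midi_msg[2]);
    return 0;
}
```

Even before protocol latency is considered, roughly seven-eighths of the distinguishable positions in the raw sensor stream are lost in this conversion.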


Other recent protocols, including OSC and raw USB/serial, are gaining momentum, addressing some of these problems by offering higher resolution, lower latency, and better control. Of the four types of musical gesture (sound producing, semiotic, ancillary, and accompanying) suggested by Delalande (discussed in Chapter 4), only sound producing gestures are reliably encoded with the MIDI protocol [50], limiting the gestural exchange that takes place between the performer, musical instrument, and the audience to one fourth its potential. The context, nuance, and semiotics encoded in the other three gesture types (communicative, ancillary, and sound accompanying) are absent from the data stream known as the MIDI event list. To be fair, a portion of the gesture sound disconnect in musical data does not originate with the limitations of MIDI itself, but from the context of the system that came before: Western musical notation. As with MIDI, the three salient features of this notational tradition (pitch, velocity/amplitude, and duration) do not inherently prescribe gesture, which remains notationally the most elusive aspect of musical notation [56]. While this holds true for Western notation, it should be acknowledged that there are forms of contemporary notation that do engage with effective use of gesture, like that of Helmut Lachenmann. Why, then, has Western notation endured over the course of its history while MIDI remains problematic for many live performance settings? The critical difference between these two systems occurs during the phase when information is sonified. It is expected (under conventional notions) that a human performer will interpret a musical score. Here, the performer fills in a great deal of missing information based on previous experience, training, performance practice, and


creativity. The continuation of this practice suggests at least some subjective degree of success. Therefore, a perfect analogy might imply that musical data (MIDI events) will be interpreted by a computer or other electronic device in the same way. However, under most circumstances, the device instead expects that the data provided is complete and that there is nothing more to fill in. The fundamental fault might not lie with the cartographer (the one who creates the musical score in western notation or MIDI), but rather with the interpretive abilities of the map reader. Incredible work is being accomplished in neural networks and machine learning, and perhaps this point will someday become moot when computers, like people, can bring previous experiences to bear on interpretation. Until then, it falls to musical data protocols to reach beyond the limitations of MIDI to also include the communicative/semiotic, ancillary, and accompanying gesture information to create more sensitive relationships between musicians and computers. Moore observes that musical instruments must respond in consistent ways that are well matched to the psycho-physiological capabilities of highly practiced performers [29]. A performer must receive consistent aural and tactile feedback from a musical instrument; otherwise, the instrumentalist has no hope of learning how to perform on it expressively. One can turn this statement on its head to say that computers must receive consistent aural and tactile (gestural) feedback from the performer/instrument, or else there is no hope of translating critical gestural nuance into sound.


CHAPTER 6
EMOTION: RECONFIGURABLE MODULAR WIRELESS SENSOR INTERFACES FOR MUSICAL INSTRUMENTS

Any sufficiently advanced technology is indistinguishable from magic.
Arthur C. Clarke, Profiles of the Future [64]

Overall Design Philosophy

The eMotion system is an all purpose wireless sensor mesh network that a user can implement to quickly and intuitively assemble wireless sensor nodes for control of and interaction with computer processes, including sound and visual effects. The novelty of this system with regard to all other human/computer interfaces on the market resides in the network architecture. Each embodiment of this network is distinctive from user to user depending on the unique aesthetic and technical demands of the individual or project. Although sensor configurations and mapping may vary widely from one use to the next, the core hardware/software itself need not be redesigned. The purpose of this chapter is to provide a technical description of the hardware and software developed for this dissertation. (The content of this chapter is protected by U.S. patent pending.) The design philosophy adheres to four general principles: the system must be transparent, reversible, reconfigurable, and modular. Figure 6-1 illustrates the overall system flow. There are three major hardware components to the system: a computer running a software client, a central hub receiver connected to the computer, and a collection of wireless sensor nodes. The user acquires a set of sensor nodes and places them on the desired sensing locations (an


instrument, dancer, environment, etc.) depending on their aesthetic and technical needs.

Figure 6-1. System Overview. The system flow (from left to right): Wireless nodes, each with specialized sensors, are placed by the user at the desired sensing locations (instrument, body, or environment). A central hub receives all node transmissions and echoes the raw packets to the attached computer. A software client allows the user to re-assign IDs, calibrate, and apply processing effects to the incoming streams of sensor data. The client also allows the user to graphically map desired data streams to control musical or visual parameters during performance and convert the data into musical protocols like MIDI and OSC.

Nodes independently transmit data wirelessly to the central hub. Next, the hub parses the data streams based on an addressing protocol developed by the author and sends the streams via USB to its attached computer. Then, a software client on the computer enables the user to process, mix, and map the data streams to interact with computer processes. The computer software client allows the user to convert sensor data into industry standard MIDI and OSC protocols, or web based UDP. This enables the user to interact with virtually any hardware or software regardless of location. Each component of the system will be broken down and explained below.
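As a rough illustration of the receiving end of this pipeline (function and buffer names here are invented for the sketch and are not the actual eMotion client code), a program can walk the stream of ID/value byte pairs echoed by the hub and route each reading by node ID:

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch only: the hub is assumed to echo each node's payload over USB
   serial as (node ID, value) byte pairs, as described in the text.      */

static void route_sensor_value(uint8_t node_id, uint8_t value)
{
    /* In a real client this is where calibration, smoothing, and the
       user's mapping (e.g. to a MIDI CC or an OSC address) would occur. */
    printf("node %u -> %u\n", node_id, value);
}

static void parse_hub_stream(const uint8_t *buf, size_t len)
{
    /* Walk the buffer two bytes at a time: [id][value][id][value]... */
    for (size_t i = 0; i + 1 < len; i += 2)
        route_sensor_value(buf[i], buf[i + 1]);
}

int main(void)
{
    /* Example stream: node 3 (e.g. an FSR) and node 7 (e.g. a sonar node). */
    uint8_t stream[] = { 3, 120, 7, 64, 3, 118 };
    parse_hub_stream(stream, sizeof stream);
    return 0;
}
```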


Sensor Nodes

Figure 6-2. Sensor Nodes. Each sensor node consists of a single sensor type, an ADC, a processor, flash memory, power source, and 2.4 GHz wireless RF transceiver. The data packet transmitted by each node consists of a list of sensor values and a unique node ID.

Each node transmits only the data of its attached sensor. For example, one node may be dedicated to transmitting sonar data and another node may be dedicated to measuring ambient light. The user is then free to utilize one, both, or neither of these sensor nodes by turning them on or off, attaching or detaching them. Multiple sensor nodes of the same sensor type may also be used (for example, a sonar matrix array comprised of a group of sonar nodes may be constructed). In this manner, one can acquire a collection of general sensor nodes and use a subset of them in any combination as needed for a particular application. When one sensor is turned off (in the event of battery depletion or user preference), the other sensors on the network remain unaffected. The nature of this modular interface design allows the user to easily reconfigure the placement and combination of individual sensor nodes for any given project and instrument. This enables the user to find the most optimized sensor combination, sensing locations, and mapping scheme to meet their unique aesthetic goals without having to redesign or reprogram the hardware itself.
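The sending side of the same ID/value scheme can be sketched as follows. This is a simplified, assumed version of a single-channel node's main loop (the names, the scaling of the 10-bit reading to one byte, and the stand-in drivers are illustrative, not the actual firmware); it also shows the transmit-on-change behavior described later for the GENA node types:

```c
#include <stdio.h>
#include <stdint.h>

#define NODE_ID 0x07u   /* hard-coded, unique per node (illustrative value) */

/* Stand-ins for the MCU's ADC and radio drivers so the sketch runs on a PC;
   on the actual hardware these would be peripheral/driver calls.           */
static uint16_t adc_read(void)               { static uint16_t v = 500; return v += 3; }
static void rf_send(const uint8_t *p, int n) { printf("tx: id=%u value=%u (%d bytes)\n", p[0], p[1], n); }

int main(void)
{
    uint8_t last = 0xFF;
    for (int i = 0; i < 8; i++) {                   /* a few iterations of the node's sampling loop */
        uint8_t value = (uint8_t)(adc_read() >> 2); /* 10-bit reading scaled to a single byte */
        if (value != last) {                        /* broadcast only when the reading changes */
            uint8_t packet[2] = { NODE_ID, value };
            rf_send(packet, (int)sizeof packet);
            last = value;
        }
    }
    return 0;
}
```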


The sensor nodes may even be distributed amongst several instruments or non instrumental performers, such as dancers, to create novel modes of interactivity. For instance, a dancer may control the spectral filtering of a flute player through their body movements. Each sensor node consists of a single sensor type, an ADC, a processor, flash memory, power source, and a 2.4 GHz wireless RF transceiver. Each node also includes a red and a green status LED to provide the user with visual feedback on the transmission stream. For each data packet the node transmits, the red LED is configured to blink if the node is out of range of the receiver or does not get an indication that the receiver is on. The green LED blinks when the sensor node is within range of a receiver and working properly. The wireless data packet transmitted by each node consists of data byte pairs: the unique node ID and the sensor value. The ID byte is hard coded into each sensor and will be presented in further detail below. Fulfilling the other three design philosophies, the nodes themselves are:

TRANSPARENT: The physical size of each node is exceptionally small compared to similar devices on the market (the largest eMotion prototype node is 1.26 inches in diameter). Because each node is wireless, the usual tangle of wires on the instrument or body can be eliminated completely. This transparency is intended to mitigate the burden on the performer, allowing him to perform as conventionally or dynamically as he would have prior to the modification.

REVERSIBLE: In order to avoid mechanical and other non-reversible alterations to musical instruments or performers utilizing the wireless sensor network, the sensor nodes can be temporarily fixed to an instrument, body, or other surface in a non-permanent manner, as was done for particular implementations of this prototype system. This enables the user to easily revert back to their original acoustic instrument or physical self.

RECONFIGURABLE: The user may rearrange the sensor placement and combination at any time (even during live performance) and switch sensors on and off with no adverse effects to the overall system.


Node Design Considerations

The design considerations for the sensor nodes evolved over time as the author experimented with various methods for programming the embedded devices. Originally, the aim was to develop intelligent or smart sensor nodes, meaning that a number of sophisticated processes would be embedded in each device (e.g. dynamic sensor calibration, peer to peer networking, dynamic ID assignment, filtering/integration of data, etc). After experimentation, the author opted to streamline the sensor nodes and strip the program logic down to only the bare essentials, placing the major computations (like auto-calibration and filtering) on the software client, where power consumption and processor speeds were not an issue. The advantages of stripped down sensor nodes are vast. Fewer computations per cycle means less power consumption with faster data transmit cycles. The nodes communicate only with the hub transceiver and do not communicate with each other (although the modules may be configured for peer to peer communication in the future), simplifying the protocol. The addressing scheme changed from dynamic to static, requiring less initialization logic and ensuring that nodes appear to be the same device from one use to the next. This is the equivalent of using a hard coded MAC address instead of a dynamic IP. If more than one hub is in use, how does a node know which hub it should transmit to? When a hub is fabricated, it gets a hard coded random RF channel. The nodes also come programmed with a factory default channel. The user can press a pin hole button on a node, which causes it to scan RF channels to synchronize with the closest hub. Once synchronized, the correct channel is saved in memory for the next time the node is turned on.
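A simplified sketch of that pairing behavior might look like the following (the channel count, function names, and stub drivers are invented for illustration; the real firmware details are not reproduced here). The node tries each RF channel in turn, keeps the first one on which a hub acknowledges, and persists it for the next power-up:

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define NUM_CHANNELS 16u        /* assumed channel count, purely illustrative */

/* PC-side stand-ins for the radio and flash drivers used on the real node. */
static uint8_t current_channel, saved_channel = 0;
static void rf_set_channel(uint8_t ch)     { current_channel = ch; }
static bool rf_ping_hub(void)              { return current_channel == 11; } /* pretend a hub answers on channel 11 */
static void flash_save_channel(uint8_t ch) { saved_channel = ch; }

/* Called when the user presses the pin-hole sync button. */
static uint8_t sync_to_nearest_hub(void)
{
    for (uint8_t ch = 0; ch < NUM_CHANNELS; ch++) {
        rf_set_channel(ch);
        if (rf_ping_hub()) {
            flash_save_channel(ch);   /* remember the channel for the next power-up */
            return ch;
        }
    }
    return saved_channel;             /* no hub found: fall back to the stored channel */
}

int main(void)
{
    printf("synchronized to channel %u\n", sync_to_nearest_hub());
    return 0;
}
```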

PAGE 75

75 Node Types

Because each node is dedicated to transmitting the data of its attached sensor, each node type varies in the number of analog inputs, transmit rates, and number of data streams, and it may incorporate an I2C interface (a common 2-wire multi-master serial bus) in addition to analog inputs.

GENA_1: General Analog, one input/output. This node type has a sensor with one analog sensor channel. The node is configured to transmit the data of the embedded sensor at a rate of up to 1 kHz (1,000 times per second). This is separate from the radio transmit rate, which moves data at a speed of 1 megabit per second. To save battery power and bandwidth on the network, this node type transmits new data only when the sensor value has changed. This ensures that the network does not get bogged down and the nodes broadcast only when necessary.

GENA_6: General Analog, six inputs/outputs. This node type is similar to GENA_1, except that it handles up to six analog sensor inputs, like 3-axis accelerometers and gyroscopes. In this case, if one value changes on any channel, all values are transmitted on the network. This maintains synchronicity for all axes of sensor measurement.

IMU: Inertial Measurement Unit, six inputs/outputs. This node type is equipped with a single chip that contains a 3-axis accelerometer and a 3-axis gyro to measure inertial acceleration and tilt. A special kind of Direction Cosine Matrix (DCM) filtering [65] is used to estimate the 3-dimensional orientation of an object. Although the DCM algorithms could easily be embedded on the node, only the raw sensor values are transmitted by the device. Advantages include fewer calculations for the node, resulting in higher transmit rates and significantly lower power consumption. A secondary advantage is the ability to painlessly update filtering algorithms and firmware when needed on the software client without updating the hardware itself. However, the IMU works on a principle known as dead reckoning, in that the unit only measures displacement from a specified origin. The minor errors in the filtering eventually compound, which results in wandering on the yaw axis (the accelerometer offers no yaw reference for a device resting on a flat plane). The unit is still exceptionally useful for orientation sensing and cheaper to produce than the more elegant AHRS node described below. Transmissions are at 50 Hz and run on an internal timer interrupt, which is the update frequency the DCM filter expects on the software client.

AHRS: A full, low-power attitude heading reference system. This builds on the IMU structure to further include an I2C 3-axis magnetometer and outputs an additional three values: mag_x, mag_y, mag_z. This adds three dimensional magnetic

PAGE 76

76 orientation to the filter, which eliminates the wandering inherent in the IMU. The wandering is eliminated because magnetic orientation is a kind of discrete data (i.e., geographic magnetic variation and hard-iron interference aside, magnetic coordinates are roughly dependable and repeatable). Adding discrete values to the relative accelerometer and gyroscope values eliminates the wandering inherent in a dead-reckoning system. The cost to produce this unit is slightly higher, but a reliable yaw axis is invaluable for many implementations. The transmit rate is 50 Hz (the frequency of the filter update on the software client). The result is a wander-free three-dimensional representation of an object's orientation at the sensing point. At the time of the writing of this document, the AHRS described above is significantly more cost-effective (on the order of 60%) and user-friendly than other similar AHRS devices available on the market.

The MCU

The selection of a microcontroller was dictated by (in order of importance) power consumption, required peripherals, cost, bit depth, clock speed, and memory. After working with the Texas Instruments line of ultra-low-power devices, the author selected the MSP430F2272 ultra-low-power MCU as the central controller. The data sheet with specifications is located on the Texas Instruments website [66].

Addressing Protocol

Due to the modular nature of the sensor network, where individual nodes may be added or removed at any time, an addressing protocol was needed so the client could reliably handle and reconfigure the dynamic node architecture. The original intent was to have the client dynamically assign an ID number to each node based on the order in which they are turned on (this scheme is used in many popular wireless controller systems for gaming). The problem with this method is repeatability. Many users will likely want to preserve a specific configuration and mapping of sensor nodes after deriving the desired setup. Order-based ID assignment forces the

PAGE 77

77 user to turn on the devices in a particular order each time. This may be acceptable when turning on four game controllers, but can be aggravatingly tedious when dealing with a larger network of sensor nodes. Instead, a 6-bit-long Instance ID is generated at compile time (the time the processor is programmed) during the fabrication stage of the hardware. It is a permanent number that is transmitted each time the node sends a packet of data and essentially tags each incoming data stream with its specific device origin. The Nordic nRF24L01+ RF radios are capable of receiving six simultaneous data pipes, so the receiver is capable of recognizing up to 64 possible instances for each of its six different sensor types. The protocol can be easily expanded in firmware to extend the addressing space if needed. The ability to determine the sensor type and the unique instance of each node on the network allows the user to preserve configurations and mappings for specific nodes. In this manner, data is sent from each node to the receiver in address/data byte pairs of the form aaaa aadd dddd dddd, where 'a' denotes an address bit and 'd' a data bit. The Instance ID is integrated into the address byte space, allowing 10-bit resolution for sensor values (0-1023) rather than a 7-bit resolution (0-127).
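As a worked illustration of this byte-pair layout, the sketch below packs a 6-bit instance ID and a 10-bit sensor value into two bytes and unpacks them again on the receiving side. It is an interpretation of the aaaa aadd dddd dddd layout described above, not a copy of the eMotion firmware.

#include <stdint.h>
#include <assert.h>

/* Pack a 6-bit ID (0-63) and a 10-bit value (0-1023) into the
   aaaa aadd dddd dddd byte pair described above.                */
static void pack_pair(uint8_t id, uint16_t value, uint8_t out[2])
{
    out[0] = (uint8_t)((id << 2) | ((value >> 8) & 0x03)); /* 6 address bits + 2 high data bits */
    out[1] = (uint8_t)(value & 0xFF);                       /* 8 low data bits                   */
}

/* Recover the ID and value on the hub / software-client side. */
static void unpack_pair(const uint8_t in[2], uint8_t *id, uint16_t *value)
{
    *id    = in[0] >> 2;
    *value = (uint16_t)((in[0] & 0x03) << 8) | in[1];
}

int main(void)
{
    uint8_t pair[2], id;
    uint16_t value;

    pack_pair(0x2A, 837, pair);      /* hypothetical node 42 reporting a value of 837 */
    unpack_pair(pair, &id, &value);
    assert(id == 0x2A && value == 837);
    return 0;
}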

PAGE 78

78 Radio Specifications

Three kinds of market-available wireless transceiver hardware protocols were considered: RF, Bluetooth, and XBee/ZigBee. An exhaustive description of each protocol is beyond the scope of this document; however, there are vast resources available that detail the history and specifications of each protocol [67-69]. The author explicitly chose to use RF (radio frequency) transceivers over the other available market options for two primary reasons: power consumption and range. Although streaming audio was not a primary goal, the author selected a radio that would still have the capacity to stream PCM standard audio rates for future upgrades. A standard requirement across all transceivers was an operating voltage of 3.3 V (the same level as the node microprocessor). Additionally, all transceivers support point-to-point, point-to-multipoint, and peer-to-peer network configurations and reside within the 2.4 GHz ISM (Industrial, Scientific, Medical) short-range frequency band, a commonly used band for wireless equipment including cordless phones, microwave ovens, and wireless LAN. Although the 2.4 GHz band is utilized by a wide range of devices, cross-talk and interference between these devices can be mitigated through address masking, which creates unique addressing protocols that pick out only the intended device. The transceivers shown in the figure below all have special protocols that address the risk of interference, including channel hopping, auto-acknowledge, and address masking. Note that the specifications in the list are taken from the device-specific datasheets and often do not reflect actual values in testing situations [68], [70]. For budgetary reasons, the author tested the nRF24L01+ and TI CC2500 devices only; the rest of the specifications should be used for reference purposes only. It should be noted that some XBee and Bluetooth modules include ADC and GPIO hardware, eliminating the need for a separate processor, making these devices extremely appealing. The ultimate decision to select the Nordic transceiver came down

PAGE 79

79 to power consumption and transmit range, even with the requirement for an external processor compared to Bluetooth.

Figure 6-3. A comparison of various market-available wireless data transceivers.

The fully designed prototype eMotion nodes oscillate between 5 and 15 mA current consumption for standby and TX modes, respectively. With a lithium-ion rechargeable battery of 110 mA-hours, each node can run about 10 hours between charges. Charge time is roughly 2 minutes for a depleted battery. The line-of-sight range is roughly 100 m, suitable for any stage performance situation.

Receiver Hub

A single data hub is used to receive the data streams transmitted by the sensor nodes and send each data stream to the computer. If multiple computers are to have access to the sensor data, a single computer with the hub may act as a networked host and share the data with the other computers using the software client. Multiple data hubs may also be used. The data hub includes an RF transceiver with antenna, an asynchronous receiver/transmitter (UART), a processor, an indicator LED, and a USB interface for connecting to a computer.

PAGE 80

80 Hub Design Considerations

The hub uses the same MCU (MSP430) and radio (Nordic RF transceiver) as the nodes to maximize compatibility. It is interfaced to the computer by a UART, which is bridged to USB at TTL serial levels using a chip by FTDI (TTL-232R-3V3 PCB) [71]. This allows the computer to access the hub as a serial device on its USB port. The single indicator LED blinks when a transmission was successfully received, acknowledged, and sent to the serial port. The hub hardware runs at 8 MHz, eight times faster than the transmitter nodes. This ensures that the hub can handle the incoming data at a faster rate and alleviate potential network bottlenecking.

The hub waits in standby until a node transmits data. Once a transmission is received, the radio's Enhanced ShockBurst protocol initiates an auto-acknowledge (ACK) routine. This is all handled by the radio hardware (all ACK computations happen off of the hub processor, requiring no extra computations). The hub radio sends an ACK message to the specific node that transmitted the data, then toggles an onboard LED to indicate a successful transaction with that node. Because different nodes send different packet lengths (from one up to nine pairs of data bytes) due to the varying kinds of sensors, the hub parses the received data according to the hard-coded sensor type. Once the data/address pairs are parsed, the hub sends the bytes to the host computer via USB. The software client then has access to the raw serial data streams.
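The hub's main loop can be pictured roughly as follows. The radio and serial calls (radio_packet_ready(), radio_read(), usb_serial_write(), led_toggle()) are hypothetical placeholders, and the per-type pair counts are illustrative; only the overall receive, parse, and forward structure reflects the description above.

#include <stdint.h>
#include <stdbool.h>

extern bool    radio_packet_ready(uint8_t *pipe);           /* a node transmitted on this pipe  */
extern uint8_t radio_read(uint8_t *buf, uint8_t max_len);   /* returns payload length in bytes  */
extern void    usb_serial_write(const uint8_t *buf, uint8_t len);
extern void    led_toggle(void);

/* Pairs of (address, data) bytes expected per sensor type, indexed by radio pipe.
   Illustrative values: GENA_1 = 1 pair, GENA_6 = 6, IMU = 6, AHRS = 9, ...        */
static const uint8_t pairs_per_type[6] = { 1, 6, 6, 9, 1, 1 };

void hub_loop(void)
{
    uint8_t pipe, buf[32];

    for (;;) {
        if (!radio_packet_ready(&pipe) || pipe > 5)
            continue;                       /* stay in standby until a node transmits */

        uint8_t len      = radio_read(buf, sizeof buf);
        uint8_t expected = 2 * pairs_per_type[pipe];

        if (len == expected) {              /* the ACK was already sent by the radio hardware */
            usb_serial_write(buf, len);     /* forward the address/data pairs to the client   */
            led_toggle();                   /* indicate a successful transaction              */
        }
    }
}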

PAGE 81

81 Software Client

Data Input

Figure 6-4. Software client workflow. There are three major processes in the client program flow. A) Raw serial address/data byte pairs are sent directly to the client and placed in a global buffer. B) The user processes the incoming data streams, which may include distortion, filtering, gesture recognition, or analysis. C) Raw and processed data streams are sent to a graphic UI for the user to assign to control/interact with musical or visual parameters, or to other computers or programs using the UDP, OSC, or MIDI output options.

Once the hub sends the address/data pairs to the computer, the software client places all raw data into a global buffer that can be accessed by virtually any program that receives data on a serial port (Max/MSP, Csound, SuperCollider, etc.). There are three major processes in the client program flow. A) Raw serial address/data byte pairs are sent directly to the client and placed in a global buffer. B) Each sensor type comes with its own processing, which may include floating-point calibration, filtering, gesture recognition, or analysis. C) Raw and processed data streams are sent to a graphic UI for the user to assign to control/interact with musical or visual parameters, or to other computers, mobile devices, or programs using the UDP, OSC, or MIDI output options. The client environment was chosen for its ease and speed in designing interactive user interfaces and for its access to OpenGL and JavaScript. The client's architecture is similar to the hardware in that all components are broken down into

PAGE 82

82 modules that perform specific tasks. For each active sensor on the network, a corresponding module is activated. Each module handles several important functions, including the following:

Data display: Each module graphically displays the raw incoming data of its corresponding sensor. If the user creates multiple data buses during processing, each processed data bus is also graphically displayed.

Auto-calibrate: Sensors are going to output different ranges of data from one use to the next depending on a number of factors, including environment, battery power, user, and varying physical placement. The software client, however, expects a total range of data between 0.0 and 1.0. A button on the UI allows the user to run an auto-calibrate routine on the incoming data. Calibration uses this linear mapping function: y = (x - xmin) / (xmax - xmin) * (ymax - ymin) + ymin, where ymin and ymax are the permanent lower and upper boundaries (0.0 and 1.0) and xmin and xmax are the movable boundaries of the incoming data. Pressing the calibrate button erases the xmin and xmax values. When the user extends the sensor across the expected range on startup, the xmin/xmax values adjust to reflect the actual range for that moment and continue to adjust the X boundary values during use. Whatever the actual sensor range, the linear mapping function ensures the software client receives the expected floating-point values between 0.0 and 1.0, regardless of extraneous and unpredictable factors. Originally, all sensor nodes were programmed with a physical button and an auto-calibrate routine in hardware. This resulted in the user having to physically press a small button on each sensor node; depending on the node placement and performance situation, this method quickly became ungainly. Instead, placing the routine within the software modules allows the user, or a technician with the computer off stage, to simply click all of the buttons on the screen to calibrate the whole network if desired, even in mid-performance.
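The following is a minimal sketch of how such a running auto-calibration can be implemented: it tracks the moving xmin/xmax of one stream and applies the linear mapping above so the client always sees values between 0.0 and 1.0. The struct and function names are illustrative assumptions rather than the client's actual code.

#include <stdio.h>

/* Running calibration state for one sensor stream. */
typedef struct {
    float xmin, xmax;     /* movable boundaries of the incoming data      */
    int   primed;         /* 0 until the first sample after "calibrate"   */
} Calib;

/* Pressing the calibrate button erases the old boundaries. */
static void calib_reset(Calib *c) { c->primed = 0; }

/* Map a raw reading into 0.0-1.0, widening the boundaries as new
   extremes arrive (the linear mapping described in the text).      */
static float calib_map(Calib *c, float x)
{
    const float ymin = 0.0f, ymax = 1.0f;

    if (!c->primed) { c->xmin = c->xmax = x; c->primed = 1; }
    if (x < c->xmin) c->xmin = x;
    if (x > c->xmax) c->xmax = x;
    if (c->xmax == c->xmin) return ymin;   /* avoid division by zero */

    return (x - c->xmin) / (c->xmax - c->xmin) * (ymax - ymin) + ymin;
}

int main(void)
{
    /* Example: raw sonar readings (arbitrary units) normalized on the fly. */
    Calib c; calib_reset(&c);
    float raw[] = { 212.0f, 260.0f, 318.0f, 240.0f };
    for (int i = 0; i < 4; i++)
        printf("%.1f -> %.2f\n", raw[i], calib_map(&c, raw[i]));
    return 0;
}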

PAGE 83

83 The following is a breakdown of each software module for the prototype.

Hub Input: This module polls the serial port for all incoming sensor data at 50 Hz. The fastest motor-reflex reaction latency for humans rests somewhere between 10 and 15 Hz, and the standard frame rate for cinema is 24 frames per second (fps); thus, a 50 Hz gesture-capture resolution is more than sufficient for this system. The data modules can pull specific data from the stream. The Hub module detects new sensors on the network by combining the device ID and unique instance ID into a unique serial number, and it saves the list of devices in a file. If the serial number is novel, the Hub module automatically generates a popup window visualizing the data with its corresponding device and instance information.

GENA_1: The user can open this module when a sonar, force-sensitive resistor, or other GENA_1 device appears on the network. This module pulls only the data with GENA_1-type addresses from the global data stream. An instance of this module is generated for each GENA_1 device on the network. The raw streams are displayed and can be accessed by the user for further processing and mapping.

GENA_6: This module pulls only the data with GENA_6-type addresses from the global data stream. Each raw stream is displayed and can be accessed by the user for further processing and mapping.

IMU Module: This module should be opened by the user to access IMU devices on the network. OpenGL is used to render a 3-dimensional visual representation of the IMU device, providing orientation feedback to the user. The raw data of each axis from the accelerometer and gyroscope is also displayed and can be accessed by the user.

AHRS Module: This module is the same as the IMU module above except that it also receives magnetometer data and uses a slightly different algorithm to visualize the orientation of the object.

Figure 6-5. Software client screenshot of sensor modules. The client detects new devices as they appear on the network and automatically generates an instance of the corresponding module using the unique instance ID. The interface visualizes the data and gives the user options to process and assign each data stream. A) represents a sonar module instance and B) illustrates an IMU module.
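One way to picture the Hub module's device detection is sketched below: each incoming pair is reduced to a (type, instance) key, and the first time a key is seen, a corresponding module is created. The key layout and function names are assumptions made for illustration; only the detect-and-instantiate behavior mirrors the description above.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define MAX_TYPES     6
#define MAX_INSTANCES 64

/* true once a module window has been opened for this (type, instance). */
static bool module_open[MAX_TYPES][MAX_INSTANCES];

/* Hypothetical hook that builds the popup/module UI for a new device. */
static void spawn_module(uint8_t type, uint8_t instance)
{
    printf("new device detected: type %u, instance %u\n", type, instance);
}

/* Called for every address/data pair pulled from the serial stream. */
void on_sensor_pair(uint8_t type, uint8_t instance, uint16_t value)
{
    (void)value;  /* the value itself is routed on to the module's display */

    if (type < MAX_TYPES && instance < MAX_INSTANCES &&
        !module_open[type][instance]) {
        module_open[type][instance] = true;   /* novel serial number        */
        spawn_module(type, instance);         /* auto-generate its module   */
    }
}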

PAGE 84

84 Data Processing

A user may modify the raw sensor data before using it at the output. For instance, the user may want the data stream from the sonar to have inverted values before mapping it to control a parameter. Each module allows the user to open up a processing window. Various kinds of processes are available, like filtering, gesture analysis, and even physical interactive models. The user may access data at any point in the process chain using data bus outputs. These data buses are stored in a list by the client as potential data outputs, which can be mapped by the user to control musical or visual parameters in the mapping window.

Figure 6-6. Screenshot of the data processing window. Processing flows from the top down.
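As a small illustration of such a chain, the sketch below inverts a normalized stream and then smooths it with a one-pole low-pass filter, exposing both stages as data buses. This is a generic example of the kind of processing described, not code from the client.

#include <stdio.h>

/* One processing chain: invert a 0.0-1.0 stream, then low-pass it.
   Each stage's output is a "data bus" that could be mapped downstream. */
typedef struct {
    float smooth;   /* state of the one-pole low-pass filter  */
    float alpha;    /* 0..1, smaller = heavier smoothing       */
} Chain;

static void chain_init(Chain *c, float alpha) { c->smooth = 0.0f; c->alpha = alpha; }

static void chain_process(Chain *c, float in, float *bus_inverted, float *bus_filtered)
{
    *bus_inverted = 1.0f - in;                                  /* bus 1: inversion */
    c->smooth    += c->alpha * (*bus_inverted - c->smooth);     /* bus 2: low-pass  */
    *bus_filtered = c->smooth;
}

int main(void)
{
    Chain c; chain_init(&c, 0.25f);
    float samples[] = { 0.10f, 0.80f, 0.85f, 0.20f };
    for (int i = 0; i < 4; i++) {
        float inv, lp;
        chain_process(&c, samples[i], &inv, &lp);
        printf("in %.2f  inverted %.2f  filtered %.2f\n", samples[i], inv, lp);
    }
    return 0;
}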

PAGE 85

85 Data Mapping

A visual matrix mapping scheme resembling a patch bay provides a user-configurable method of connecting and disconnecting (i.e., patching) data streams to parameter outputs in a one-to-one, one-to-many, many-to-one, or many-to-many setup. One data stream may control multiple parameters; in addition, multiple streams may be used to affect a single parameter (multi-modal control).

Figure 6-7. Screenshot: Patch Bay Window. Displays data outputs (vertical axis) and the available list of output control parameters. Connections are made by clicking on the points where the respective outputs and inputs intersect.
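A patch bay of this kind reduces to a boolean routing matrix, as in the sketch below: each clicked intersection sets one cell, and a parameter's value is derived from every stream patched into it (here simply averaged). The averaging rule and the names used are illustrative assumptions.

#include <stdbool.h>
#include <stdio.h>

#define NUM_STREAMS 8    /* data buses available for mapping       */
#define NUM_PARAMS  8    /* musical/visual parameters to control   */

static bool patch[NUM_STREAMS][NUM_PARAMS];   /* true = connection made */

/* Clicking an intersection toggles a connection. */
static void toggle_patch(int stream, int param) { patch[stream][param] = !patch[stream][param]; }

/* Derive parameter values from the current stream values (0.0-1.0).
   Multiple streams patched to one parameter are averaged here; any
   combining rule (max, sum, product) could be substituted.          */
static void apply_matrix(const float streams[NUM_STREAMS], float params[NUM_PARAMS])
{
    for (int p = 0; p < NUM_PARAMS; p++) {
        float sum = 0.0f; int n = 0;
        for (int s = 0; s < NUM_STREAMS; s++)
            if (patch[s][p]) { sum += streams[s]; n++; }
        if (n > 0) params[p] = sum / n;       /* unpatched parameters keep their value */
    }
}

int main(void)
{
    float streams[NUM_STREAMS] = { 0.2f, 0.9f }, params[NUM_PARAMS] = { 0 };
    toggle_patch(0, 3);                /* stream 0 -> parameter 3                  */
    toggle_patch(1, 3);                /* stream 1 -> parameter 3 (multi-modal)    */
    apply_matrix(streams, params);
    printf("parameter 3 = %.2f\n", params[3]);
    return 0;
}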

PAGE 86

86 Additionally, a Digital Audio Workstation (DAW) style method was explored for making these assignments: from a side toolbar, a user selects from a list of available data streams. The data is visualized in the center and can subsequently be assigned to control a parameter by choosing from a list in the right-side toolbar. Either of these methods allows the user to experiment with the most optimal and intuitive mapping scheme for a project in real time. The outputs available for mapping can be virtually any software or device. For example, musical processes, FX processes, video projections, and lights can be controlled by the mapped data.

Figure 6-8. Alternative mapping screenshot. Like a DAW, the user can create data tracks and make assignments from menus.

Implementation: Augmented Trombone Using eMotion

Figure 6-9 illustrates the particular implementation of the eMotion prototype on the author's instrument of formal training: the bass trombone. Sensor nodes are reversibly fixed to the trombone A at strategic sensing locations. The first sensor node B, a force-sensitive resistor (FSR), may be placed on the valve-shifting mechanism of the trombone, or at a position where the left hand would make contact when a musician is playing the instrument. The function of the FSR in this location is twofold: to detect when the trigger is depressed (on/off) and to indicate after-touch pressure (squeezing the mechanism). A second FSR sensor node C is placed at a position where the right hand would make contact when a musician is playing the instrument (e.g., on or near the main slide hand grip). The objective is to make the points of contact between the musician and the instrument sensitive to tactile pressure, which can be mapped onto intuitive sonic phenomena. For example, the average

PAGE 87

87 pressure detected could be mapped to the amplitude of effects processing; for example, the performer could control the amount of distortion/overdrive applied to their signal by squeezing the instrument. A third sensor node D, a sonar, is placed on a fixed part of the main slide near the mouthpiece to measure how far the slide is extended. A lightweight dowel E can be attached to provide a detectable object for the sonar node to continuously detect. The dowel is attached onto the mobile part of the slide a particular distance from the mounted position of the sonar sensor node D. The sonar device attached to the node is an XL-MaxSonar-EZ3, operating at 3.3 V levels. The particular spacing between the sonar D and the dowel E depends on the minimum distance the sonar can detect (20 cm for this sensor). As the slide extends, the sensor values increase with a 1 cm (0.39 in) resolution out to a 765 cm total length. The sonar software module allows the user to auto-calibrate this range to the actual slide length. Data is low-pass filtered and linearly mapped to a 0.0-1.0 range.

A fourth sensor node F, a six-degree-of-freedom (6DOF) IMU, is placed on or near the bell. The IMU is the 6DOF SparkFun Razor board and includes a 3-axis accelerometer and 3-axis gyroscope. When attached to the instrument, it measures overall instrument motion; from the inertial data alone, however, the exact 3-dimensional posture of the device cannot be exactly determined. An IMU is calibrated at a position known to the user. The device proceeds to measure inertial forces and calculates the position relative to the original known origin. Although the inertial sensor values are useful in their own right, the exact posture of the device is reliable for only a short time due to wandering and accumulative error. To maintain an

PAGE 88

88 exact sense of 3-dimensional orientation, discrete sensors must be used in conjunction with the relative sensors. In this case, a 3-axis magnetometer is included with the IMU to provide a full Attitude Heading Reference System (AHRS).

Figure 6-9. Implementation of the eMotion System on the Bass Trombone. A single data hub G is used to receive the data transmitted by the sensor nodes and transfer the data to an attached computer H via USB. The software client parses and processes the incoming data, which is subsequently mapped by the user to control various musical and visual parameters.
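To tie the implementation together, the sketch below shows one entirely hypothetical mapping a user might configure for this trombone setup: normalized slide position sets a filter cutoff, average FSR pressure sets an overdrive amount, and bell tilt sets a stereo pan. None of these assignments is prescribed by the system; they stand in for whatever the composer chooses in the mapping window.

#include <stdio.h>

/* Normalized (0.0-1.0) streams arriving from the calibrated sensor nodes. */
typedef struct {
    float slide;        /* sonar: main-slide extension             */
    float left_fsr;     /* FSR at the valve/trigger mechanism      */
    float right_fsr;    /* FSR at the slide hand grip              */
    float bell_tilt;    /* AHRS pitch of the bell, normalized      */
} TromboneStreams;

/* Hypothetical synthesis/effects parameters to be driven by the mapping. */
typedef struct {
    float filter_cutoff_hz;
    float overdrive_amount;   /* 0 = clean, 1 = fully driven */
    float pan;                /* 0 = left, 1 = right         */
} FxParams;

static void map_trombone(const TromboneStreams *in, FxParams *out)
{
    out->filter_cutoff_hz = 200.0f + in->slide * 4800.0f;            /* 200 Hz - 5 kHz    */
    out->overdrive_amount = 0.5f * (in->left_fsr + in->right_fsr);   /* average pressure  */
    out->pan              = in->bell_tilt;                           /* bell aim -> pan   */
}

int main(void)
{
    TromboneStreams s = { 0.40f, 0.70f, 0.30f, 0.55f };
    FxParams fx;
    map_trombone(&s, &fx);
    printf("cutoff %.0f Hz, drive %.2f, pan %.2f\n",
           fx.filter_cutoff_hz, fx.overdrive_amount, fx.pan);
    return 0;
}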

PAGE 89

89 CHAPTER 7
CONCLUSIONS AND FUTURE DIRECTIONS

Could an instrument become intelligent, and adapt in an automated manner to the preferences of a particular musician, and modify itself in response to what it learns?
Ken Jordan, Sound Unbound [15]

This dissertation presents a new technology developed by the author for augmenting traditional musical instruments with the most up-to-date ultra-low-power microprocessor and RF wireless technology. More precisely, the system is a modular, reversible, non-invasive, and reconfigurable wireless sensor mesh network combined with a graphical user interface. A musician (or dancer, conductor, etc.) is able to wirelessly connect a variety of small individual wireless nodes, where each performs a particular sensing task (e.g., orientation, acceleration, distance, or pressure). This gives the user the unprecedented ability to choose particular combinations of these nodes and place them on desired sensing locations depending on the unique demands of a given project. Nodes may also be recombined, rearranged, removed, remapped, and turned on or off in real time with no adverse effects to the system itself. Though the technology does not propose to completely solve the variety of aesthetic and technical issues in the field of augmented musical instruments, this system embodies a significant step in a positive direction. Called eMotion, this system essentially allows someone with no knowledge of microprocessors, analog circuit design, or programming (requisite knowledge to perform similar tasks without this system) to intuitively control interactive sound, algorithmic music, lights, or any number of effects using the gestures of their musical instrument. The eMotion system is also useful for engineers and hobbyists with technical

PAGE 90

90 backgrounds to quickly and easily deploy wireless sensing technology in projects, saving considerable development time. However, the author acknowledges that there are multitudes of methods to interact with digital media beyond instrument augmentation and wireless sensing technology (including alternative controllers), and this system is not intended to supplant them.

Broader Impacts

Although developed to respond to current trends in musical instrument augmentation, the capacity of the eMotion system has ramifications that span well beyond making music or other forms of digital media. Some other potential areas include home automation; environmental, urban, and industrial sensor monitoring; alternative methods of access and interaction for the disabled; robotics; and personal area networking technologies (e.g., health and sport monitoring).

Future Directions

The first prototype developed by the author took place over a number of years (from 2008 to 2011). The intellectual property has been registered with the University of Florida Office of Technology Licensing, and the Mk II version of the prototype is already underway. The Mk II prototype will address a number of improvements, including the following:

Miniaturization: Surface-mount components and double-sided circuit boards will decrease the size by a minimum of 33%. Further miniaturization will result from employing the latest MEMS sensing technology. For example, the IMU described in the previous chapter had 3 ICs (integrated circuits) to measure 3 axes of acceleration and 3 axes of rotation. Currently, a single IC that is smaller than any of the previous ones is being used, which can manage all 6 axes of sensing.

Charging: Nodes will connect with each other to interface with a charging station (built into the receiver hub) when not in use.

PAGE 91

91 Battery indicator: A battery indicator IC will be integrated into the sensor node design to inform the user when the battery needs charging.

Improved mounting hardware: A non-abrasive, weak adhesive strip with an adaptor is being explored as a means to reversibly attach sensors to desired sensing locations. Each sensor node will have a special groove etched into the back of the housing that couples with the adhesive adapter. Hook-loop fasteners have been successfully deployed, but repeated use may wear the fasteners over time, decreasing their effectiveness. A flexible, non-slip rubber may also be adapted to the housing of the sensor nodes, allowing the user to bend the nodes to conform to the surface of their instrument.

Furthermore, a novel software client based on Digital Audio Workstation (DAW) models will be developed for processing and mapping sensor data. The user interface extends the existing software module system described earlier by taking advantage of a generic layout most musicians are familiar with: the DAW. However, instead of audio signals (or in addition to them), this processing system utilizes and visualizes the incoming data from the sensor nodes of the wireless sensor network. Examples of DAWs used in audio production include Apple Logic Pro, Avid Pro Tools, Cakewalk Sonar, and Steinberg Cubase. Accordingly, a similar multi-track interface can be used to further the usability of the wireless sensor network and processing system. Using this model, a user interface for a computer-based Digital Data Workstation (DDW) is under development, having a familiar layout with transport controls (e.g., play, pause, and record), track controls, a data mixer, and a multi-track waveform display of data streams. In addition, the user can save processing and mapping settings for a given session to a file for later access. The settings can be saved as a session or as specified presets within a session. This allows the user not only to explore the most optimal processing and mapping of their network, but to save settings for later access as well. The user can change network configuration presets (and thus the behavior of the sensor network) in mid-performance. By setting

PAGE 92

92 presets, the user can change from one discrete state to another, or progressively tween (a kind of cross-fade from one state to another) from one preset to the next. The unique IDs of incoming data streams are saved and compared to detect when a new node is activated. If a novel address ID is detected, the software client prompts the user. For example, when the performer turns on a new sonar node, the client detects the novel address; using the addressing protocol, the software client prompts the user that a new sonar node is active, and a new track streaming live data appears in the DDW UI window. The sensor type, instance number, and live data track stream are all displayed in the DDW front end.

A data mixer is under development to handle the Processing and Mixing stages of the program flow. Each raw data track has a corresponding track in the data mixer. Here, the user can create additional data buses (similar to auxiliary sends on an audio mixer) for that track and then assign a number of processes to each of those buses. The processed data buses are visualized within the main UI console and appear, in order, below the raw data inside of its original track. Each bus may be mapped to control user-defined parameters. The slider for each mixer track differs from an audio slider in that there is a min slider and a max slider. Like adjusting the gain on an audio track, the min and max sliders correspond to the ymin and ymax variables in the calibration linear function. This gives the user unprecedented flexibility to ensure the sensor data rests within a desired range of values for each track (the default is floating-point values between 0.0 and 1.0).
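The preset "tween" described above can be thought of as a per-parameter linear interpolation between two saved states, as in the sketch below. The preset structure and the single interpolation control are assumptions made for illustration.

#include <stdio.h>

#define NUM_PARAMS 4

/* A preset is simply a snapshot of every mapped parameter value. */
typedef struct { float value[NUM_PARAMS]; } Preset;

/* Tween between two presets: t = 0.0 gives preset a, t = 1.0 gives preset b,
   and intermediate t values cross-fade every parameter simultaneously.       */
static void tween(const Preset *a, const Preset *b, float t, Preset *out)
{
    for (int i = 0; i < NUM_PARAMS; i++)
        out->value[i] = a->value[i] + t * (b->value[i] - a->value[i]);
}

int main(void)
{
    Preset calm  = { { 0.10f, 0.20f, 0.00f, 0.50f } };
    Preset dense = { { 0.90f, 0.75f, 1.00f, 0.20f } };
    Preset now;

    tween(&calm, &dense, 0.25f, &now);    /* a quarter of the way into the change */
    for (int i = 0; i < NUM_PARAMS; i++)
        printf("param %d = %.2f\n", i, now.value[i]);
    return 0;
}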

PAGE 93

93 As an alternative to the patch-bay matrix scheme, a more intuitive (or at least more engaging) method was demonstrated by Usman Haque for a project called Reconfigurable House V.2 [72]. For this project, users could reconfigure sensor mappings to control various actuation outputs by using an intuitive representation of floating icons. Sensors were represented by one set of icons and outputs were represented by another set. The user could drag these icons around using a touch screen. If sensor and output icons are within proximity of each other, a connection is formed; when dragged apart, the connection is broken. In this manner, the user is free to arrange the sensors and control outputs in one-to-one, one-to-many, and many-to-many groupings by arranging icons on a screen.

Figure 7-1. Workflow of a Novel DDW Software Client.

In the more distant future, the author hopes to develop the technology on a number of levels. This includes (1) interactive instructional video games for traditional musical instruments (e.g., a virtual lessons instructor), (2) exploring alternative methods to power the sensor nodes, (3) continuing to refine gesture-recognition algorithms, and (4) integrating this technology fully into traditional musical

PAGE 94

94 instrument fabrication (e.g., musical instruments purchased with this technology already embedded).

Conclusions

History has shown that continued engagement with technological innovation is integral to the evolution of musical instrument design and performance practice. As mentioned previously, the instruments of the symphonic orchestra have remained largely unchanged since the mid-1800s [34]. Conventional musical instruments must continue to evolve to take advantage of current technological capabilities if they are to remain in the contemporary dialog of stylistic progress. Trends in technology point in this direction: imagine a time when instruments purchased off the shelf still function as they always have, but also carry the capabilities of the smart phones in our pockets. We have the capacity today.

PAGE 95

95 APPENDIX
MUSICAL SCORE: CAPOEIRISTA FOR FLUTE, BERIMBAU, AND LIVE ELECTRONICS
Object A-1 (.wav file, 77 MB)

[Pages 96-106: musical score pages of Capoeirista; the notation contains no machine-readable text.]
PAGE 107

107 LIST OF REF ERENCES [1] Contemporary Music Review vol. 13, no. 2, p. 77, 1996. [2] New York, NY, USA, 2000, pp. 205 206. [3] Computer Music Jo urnal vol. 26, no. 3, pp. 62 76, Nov. 2011. [4] H. Proceedings of NIME 2009. [5] Procee dings of the 2001 conference on New interfaces for musical expression 2001. [6] Cost Music Computer Music Modeling and Retrieval vol. 3902, R. Kronland Martinet, T. Voinier, and S. Ystad, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006, pp. 123 129. [7] Journal o f New Music Research vol. 32, p. 2003, 2003. [8] E. R. Miranda and M. Wanderley, New Dig ital Musical Instruments: Control And Interaction Beyond the Keyboard 1st ed. A R Editions, Inc., 2006. [9] C. J. Oja, Making Music Modern: New York in the 1920s Oxford University Press, 2003. [10] J. Chadabe, Electric Sound: The Past and Promise of El ectronic Music Prentice Hall, 1996. [11] http://120years.net/. [Accessed: 13 Jul 2009]. [12] Perspectives of New Music vol. 7, no. 1, pp. 32 6 5, Oct. 1968. [13] http://web.media.mit.edu/~joep/SpectrumWeb/SpectrumX.html. [Accessed: 20 Jun 2011]. [14] Grove Music Online 2007.

PAGE 108

108 [15] P. D. Miller Sound Unbound: Sampling Digital Music and Culture The MIT Press, 2008, pp. 181 201. [16] Audio Enginee ring Society Convention 16 vol. 13, no. 3, pp. 200 206, Jul. 1965. [17] http://www.loc.gov/rr/record/nrpb/registry/nrpb 2010reg.html. [18] Grove Music Online [19] P. Doornbusch, Common Ground, 2005. [20] Computer iversity, Aug 2003. [21] L. Brodie and Brodie, Starting Forth 2nd ed. Prentice Hall, 1987. [22] Leonardo Music Journal vol. 10, pp. 33 39, 2000. [23] L. Polansky, P. Burk, an Perspectives of New Music vol. 28, no. 2, pp. 136 178, Jul. 1990. [24] computer mus Computer vol. 24, no. 7, pp. 12 21, Jul. 1991. [25] Oct 1997. [Online]. Available: http://www.nytimes.com/1997/10/21/a rts/visions music like wheels space electronic challenges posed berio s new.html?pagewanted=all&src=pm. [Accessed: 02 Mar 2012]. [26] P. Manning, Electronic and computer music Oxford University Press US, 2004. [27] R. Kurzweil, The Age of Spiritual Mach ines: When Computers Exceed Human Intelligence Penguin (Non Classics), 2000. [28] D. M. Huber and R. E. Runstein, Modern Recording Techniques, Seventh Edition 7th ed. Focal Press, 2009. [29] Computer Music Journ al vol. 12, no. 1, pp. 19 28, Apr. 1988.

PAGE 109

109 [30] B. Schrader, Introduction to Electroacoustic Music Longman Higher Education, 1982. [31] Nov 2008. [Onli ne]. Available: http://vipre.uws.edu.au/tiem/?page_id=676. [Accessed: 25 Aug 2010]. [32] Pervasive Computing (IEEE) vol. 7, no. 3, pp. 32 38, Sep 2008. [33] 2010. [Online]. Available: http://www.ctia.org/advocacy/research/index.cfm/aid/10323. [Accessed: 08 Sep 2011]. [34] So cial Studies of Science vol. 34, no. 5, pp. 649 674, Oct. 2004. [35] http://www.softwind.com/. [Accessed: 15 Aug 2010]. [36] 2009. [37] Leonardo Music Journal vol. 9, pp. 35 42, Jan. 1999. [38] A Progress Report 1987 Laboratory, Jan 1992. [39] Meta Proceedings of the International Computer Music Conference San Francisco: International Computer Music Association, 1994, pp. 147 150. [40] for a ne Organised Sound vol. 7, no. 2, pp. 201 213, 2002. [41] Proceedings of the International Computer Music Conference vol. 2001 p. 44 -51, 2001. [42] sensor bass interface [Online]. Available: http://www.arts.rpi.edu/crb/Activities/sbass.htm. [Accessed: 10 Sep 2011].

PAGE 110

110 [43] Anterior View of an Interior with Reclining Trombonist: The Conservation of Energy 2003. [Online]. Available: http://dxarts.washington.edu/dxdev/profile_research.php?who=karpen&project=ant erior. [Accessed: 09 Sep 2011]. [44] http://www.mti.dmu.ac.uk/~jrich/kreepa/tromb.html. [Accessed: 20 Sep 2011]. [45] acoustic interventions for the Proceedings of the 2006 conference on New interfaces for musical expression 2006. [46] Reprint from Gestural Control of Music Ircam Centre Pompidou, 2000. [47] SMC Conference 2009 2009. [48] U se Technologies for Electronic Music Controllers: A NIME Proceedings pp. 228 234, 2003. [49] Organised Sound vol. 7, no. 3, pp. 295 304, 2002. [50] R. I. Gody and M. Leman, Musical Gestures: Sound, Movement, and Meaning 2009. [51] Contemporary Music Review vol. 25, no. 1 2, pp. 151 162, Feb. 2006. [52] STEIM [ texts ] 16 Feb 2010. [Online]. Available: http://www.steim.org/steim/texts.php?id=3. [Accessed: 23 Aug 2011]. [53] P. R. Cook, Music, Cognition, and Computerized Sound: An Introduction to Psychoacoustics The MIT Press, 2001. [54] ctronic Arts. Interaction Theory and Interfacing Techniques for Real Trends In Gestural Control p. 41 -70, 2000. [55] Musical Expression: Borrowing Tools f Computer Music Journal vol. 26, no. 3, pp. 62 76, 2002. [56] T. Wishart, On Sonic Art (Contemporary Music Studies) Routledge, 1996.

PAGE 111

111 [57] The P ractice of Performance: Studies in Musical Interpretation Cambridge University Press, 2005, pp. 55 83. [58] of the Journal of New Music Research vol. 34, no. 1, p. 97, 2005. [59] C. Dodge and T. A. Jerse, Computer Music: Synthesis, Composition, and Performance 2nd ed. Schirmer, 1997. [60] line]. Available: http://www.snm.ethz.ch/Main/HomePage. [Accessed: 18 Oct 2011]. [61] D. T. Willingham, Cognition: The Thinking Animal (Value Pack w/MySearchLab) 3rd ed. Prentice Hall, 2009. [62] Synchronized Real CRC Press Book Chapter Nov. 2006. [63] 2008. [Online]. Available: http://ctia.org/media/press/body.cfm/prid/1774. [Accessed: 08 Sep 2011]. [64] A. C. Clarke, Profiles of the Future: An Inquiry into the Limits of the Possible Rev Sub. Henry Holt & Co, 1984. [65] Apr 2010. [Online]. Available : http://code.google.com/p/imumargalgorithm30042010sohm/. [66] 2008. [67] W. Hayward, Introduction to radio frequency design American Radio Relay League, 1994. [68] Digi International Inc., ee PRO RF Modules 802.15.4 v1.xEx 2009. [69] Bluetooth Technology 101 2011. [Online]. Available: http://www.bluetooth.com/Pages/Fast Facts.aspx. [Accessed: 23 Aug 2011]. [70] Aug 2009. [71] 232R

PAGE 112

112 [72] Reconfigurable House 2006. [Online]. Available: http://www.haque.co.uk/reconfigurablehouse.php. [Accessed: 02 Aug 2009].

PAGE 113

113 BIOGRAPHICAL SKETCH

From the ancient cypress swamps of Wewahitchka, FL, Chester Udell studied Music Technology and Digital Art at Stetson University. He earned a Master of Music degree in composition at the University of Florida in 2008 and a Ph.D. in music composition with outside studies in electrical engineering at the University of Florida in the spring of 2012. Some of his honors include: SEAMUS/ASCAP Student Commission Competition 2010 first prize, Prix Destellos 2011 nominee, and finalist for the Sound in Space 2011 International Composition Competition. His music can be heard on the SEAMUS and Summit record labels and is also featured in Behavioural Processes, a peer-reviewed scientific publication.