Citation
A Prototype Method for Parameterized Soundscape Mapping

Material Information

Title:
A Prototype Method for Parameterized Soundscape Mapping
Creator:
Bettcher, Adam Dale
Place of Publication:
[Gainesville, Fla.]
Publisher:
University of Florida
Publication Date:
2014
Language:
English
Physical Description:
1 online resource (238 p.)

Thesis/Dissertation Information

Degree:
Master's (M.S.A.S.)
Degree Grantor:
University of Florida
Degree Disciplines:
Architecture
Committee Chair:
SIEBEIN, GARY W
Committee Co-Chair:
ZWICK, PAUL D
Committee Members:
GOLD, MARTIN ARNOLD
Graduation Date:
8/9/2014

Subjects

Subjects / Keywords:
Acoustic data ( jstor )
Acoustic noise ( jstor )
Audio recordings ( jstor )
Brakes ( jstor )
Genetic mapping ( jstor )
Maps ( jstor )
Modeling ( jstor )
Sound ( jstor )
Sound pressure ( jstor )
Sound propagation ( jstor )
Architecture -- Dissertations, Academic -- UF
mapping -- noise -- planning -- sound -- soundscape
Genre:
bibliography ( marcgt )
theses ( marcgt )
government publication (state, provincial, territorial, dependent) ( marcgt )
born-digital ( sobekcm )
Electronic Thesis or Dissertation
Architecture thesis, M.S.A.S.

Notes

Abstract:
The objective of this thesis was to investigate a method of determining acoustic suitability and preference across a geographic region that accounts for the makeup of sounds (the 'soundscape') when evaluating the sonic environment. Specific objectives included establishing a method of determining acoustic suitability and developing a way to map that suitability across a region. This was accomplished by combining existing methods of predictive noise mapping and soundscape analysis to create a map database of parameters, derived from site-measured data, that describe the aural landscape. The database was then used to calculate acoustic preference for a sample evaluator. The results showed that a comprehensive system for mapping multiple soundscape-based parameters enhanced the descriptive quality of sound maps and, compared with level-based sound mapping alone, could describe substantially varied preferences for different listeners. ( en )
General Note:
In the series University of Florida Digital Collections.
General Note:
Includes vita.
Bibliography:
Includes bibliographical references.
Source of Description:
Description based on online resource; title from PDF title page.
Source of Description:
This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Thesis:
Thesis (M.S.A.S.)--University of Florida, 2014.
Local:
Adviser: SIEBEIN, GARY W.
Local:
Co-adviser: ZWICK, PAUL D.
Statement of Responsibility:
by Adam Dale Bettcher.

Record Information

Source Institution:
UFRGP
Rights Management:
Copyright Bettcher, Adam Dale. Permission granted to the University of Florida to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Classification:
LD1780 2014 ( lcc )



Full Text

PAGE 1

A PROTOTYPE METHOD FOR PARAMETERIZED SOUNDSCAPE MAPPING

By

ADAM DALE BETTCHER

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE IN ARCHITECTURAL STUDIES

UNIVERSITY OF FLORIDA

2014

PAGE 2

© 2014 Adam Dale Bettcher

PAGE 3

This is dedicated to people who take on large things. Go large, or go home!

PAGE 4

ACKNOWLEDGMENTS

I am indebted to many people for the work of this long project. In particular, Gary Siebein provided much support, guidance, encouragement, and aid during the development of this thesis. Paul Zwick was also helpful in providing information, and his classes provided the basis of inspiration for this thesis. Martin Gold has likewise been helpful.

Invaluable professional guidance was also provided by others outside the committee. A debt is owed to Gordon Hempton, who discussed the idea of 'calibrated listeners' in addition to 'calibrated equipment' and offered many insights. The National Park Service's Wilderness Soundscape Management Program was also very helpful, particularly Damon Joyce, Kurt Fristrup, and Dan Mennitt.

My studiomates, friends, and family also provided support. Particular thanks go to Sang Bum Park, Jennifer Nelson, Adam Galatioto (who was also my roommate for a time), Sangbong Shin, Lucky Tsaih, and Jose Garrido. My wife, Lisa Fan, is owed a particular debt of gratitude for supporting me and sticking with me while work continued on this project; my mother and father have also been very supportive. Special thanks go out to my sister, and especially to my aunt Terrie for quartering me in Seattle during the summer of 2010; the time we were able to spend together was invaluable. The rest of my family is also thanked.

PAGE 5

TABLE OF CONTENTS

ACKNOWLEDGMENTS .... 4
LIST OF TABLES .... 7
LIST OF FIGURES .... 8
LIST OF ABBREVIATIONS AND DEFINITIONS .... 16

CHAPTER

1 INTRODUCTION .... 22

2 LITERATURE REVIEW .... 24
  Methods of Evaluating and Mapping Sound .... 24
  The Effects of Noise on Humans .... 25
    Community Annoyance .... 27
    Effects on Sleep .... 28
    Discussion of Effects of Noise on Humans .... 28
  Deterministic Noise Mapping .... 29
    Variables in Noise Mapping .... 29
    Weather (ISO 9613-2, CONCAWE, Harmonoise) .... 31
  Soundscape Method .... 32
    An Overview of the Soundscape Method .... 33
    Formal Methods of Documenting Soundscape .... 35
    Discussion of Soundscape Literature .... 38
  Review of Different Methods of Addressing Noise .... 39

3 METHODOLOGY AND RESULTS .... 41
  Overview of Process .... 41
  Sound Data Processing .... 42
    Measured Data .... 43
    Calibrated Listener .... 47
    Sound Event Tags .... 48
    Parameters to Calculate for Each Sound Class .... 49
    Parameters per Sound Event Tag .... 51
    Sound Class Parameters per Measurement Period .... 53
    Global Sound Class Parameters .... 54
    Sound Taxonomy .... 60
  Spatial Data .... 61
    Spatial Data Processing .... 61
    Locations of Measurement Taking .... 61
    Sound Source Locations .... 62

PAGE 6

    Physical Site Properties .... 63
    Predictive Sound Level Maps .... 64
    Method of Computing Spatially and Temporally Varying Parameters .... 66
    Spatial Analysis of Spatially and Temporally Varying Parameters .... 66
    Sound Class Parameter Surfaces .... 67
  Evaluator Data .... 67
    Evaluator Data Processing .... 67
    Evaluator Parameters .... 69
    Evaluator Criteria Maps .... 74
    Evaluator Weighting Maps .... 75
    Acoustic Suitability .... 76
  Review of Process .... 77

4 CONCLUSIONS .... 90
  Discussion of Output .... 90
  Possible Improvements to Process .... 92
    Data Collection and Analysis .... 92
    Spatial Data .... 92
    Evaluator Data .... 93
  Review of Conclusions .... 93
  Final Review and Summary .... 94

APPENDIX

A PROCESSED SOUND DATA .... 96
B CALCULATED SPATIAL DATA .... 143
C SUITABILITY CALCULATIONS .... 181

LIST OF REFERENCES .... 234
BIOGRAPHICAL SKETCH .... 238

PAGE 7

LIST OF TABLES

3-1 Visitation times analyzed for each measurement location. .... 77
3-2 List and description of taxonometric sound classes derived from data. .... 78
3-3 List of physical site layers. .... 78
3-4 Description and percentage contribution to value for sleep sound valuation, 10PM-7AM (75% of residential valuation). .... 79
3-5 Description and calculation method for household activities valuation, 8AM and 5PM-10PM (25% of residential valuation). .... 79
A-1 Visitation times for Location 1, Afternoon. .... 97
A-2 Sound events for Location 1, Afternoon. .... 97
A-3 Visitation times for Location 1, Morning. .... 99
A-4 Sound events for Location 1, Morning. .... 100
A-5 Visitation times for Location 6, Afternoon. .... 102
A-6 Events for Location 6, Afternoon. .... 102
A-7 Visitation times for Location 6, Morning. .... 103
A-8 Visitation times for Location 6, Morning. .... 104
A-9 Visitation times for Location 7, Afternoon. .... 106
A-10 Events for Location 7, Afternoon. .... 106
A-11 Visitation times for Location 7, Morning. .... 108

PAGE 8

LIST OF FIGURES

3-1 Binary yes/no decision-making process for valuing an aural environment based on a single-number rating of a sound pressure level measurement. .... 79
3-2 Binary yes/no decision-making process for valuing an aural environment based on a single-number rating of a sound level map. .... 80
3-3 Proposed prototype method of valuing an aural environment, which uses soundscape analysis and mapping techniques to produce a range of ratings for evaluators. .... 80
3-4 Google Earth (Google Corporation, 2010) 3-D image of the studied area. The yellow line is approximately ten miles long; 'L1' corresponds to the 'urban' typology, 'L6' to the 'rural' typology, and 'L7' to the 'wilderness' typology. .... 81
3-5 Map showing typologies, along with the measurement location for each typology (marked with an 'X'). .... 82
3-6 Photograph of measurement apparatus. (Photo courtesy of Adam Bettcher.) .... 83
3-7 Frequency response of the Zoom H2 Handy Portable Stereo Recorder, according to manufacturer data. (Zoom Corporation, 2010) .... 83
3-8 A screenshot from the sound tagging program. This program allowed the user to define the frequency and time domains of individual sound events, assigned a sound class to each individual event, and then computed statistical information about level, occurrence, and percentage time audible for each sound class. .... 84
3-9 A graphical representation of the splining process. .... 84
3-10 Sound taxonomy of the study area derived from analysis of the six measurement periods. The taxonomy shows linkages between the sound and object where applicable, and also shows linkages between types of objects based on location. .... 85
3-11 Chart of how Americans spent their day in 2008. (Carter et al., 2009) .... 86
3-12 Valuations of sounds for a residential user during the 'sleep' period. .... 87
3-13 Valuations of sounds for a residential user during the 'household activities' period. .... 88
3-14 Acoustic suitability map for a residential evaluator. .... 89

PAGE 9

A-1 Location 1 tag data, afternoon. .... 96
A-2 Location 1 tag data, morning. .... 98
A-3 Location 6 tag data, afternoon. .... 101
A-4 Location 6 tag data, morning. .... 103
A-5 Location 7 tag data, afternoon. .... 105
A-6 Location 7 tag data, morning. .... 107
A-7 Data sheet for Sound 1 (Automobile). .... 110
A-8 Data sheet for Sound 2 (Automobile acceleration). .... 111
A-9 Data sheet for Sound 3 (Semi truck). .... 112
A-10 Data sheet for Sound 4 (Car start). .... 113
A-11 Data sheet for Sound 5 (Vehicle moving over bump). .... 114
A-12 Data sheet for Sound 6 (Unlock Beep). .... 115
A-13 Data sheet for Sound 7 (Car Horn). .... 116
A-14 Data sheet for Sound 8 (Keys jingle). .... 117
A-15 Data sheet for Sound 9 (Brake squeak). .... 118
A-16 Data sheet for Sound 10 (Compression brake). .... 119
A-17 Data sheet for Sound 11 (Car door close). .... 120
A-18 Data sheet for Sound 12 (Distant traffic). .... 121
A-19 Data sheet for Sound 13 (Moped acceleration). .... 122
A-20 Data sheet for Sound 14 (Engine idle). .... 123
A-21 Data sheet for Sound 15 (Skateboard). .... 124
A-22 Data sheet for Sound 16 (Voices). .... 125
A-23 Data sheet for Sound 17 (Whistle). .... 126
A-24 Data sheet for Sound 18 (Sneeze). .... 127
A-25 Data sheet for Sound 19 (Footstep). .... 128

PAGE 10

A-26 Data sheet for Sound 20 (Cough). .... 129
A-27 Data sheet for Sound 21 (Music). .... 130
A-28 Data sheet for Sound 22 (Birds). .... 131
A-29 Data sheet for Sound 23 (Insect chirp). .... 132
A-30 Data sheet for Sound 24 (Insect buzz). .... 133
A-31 Data sheet for Sound 25 (Seagull). .... 134
A-32 Data sheet for Sound 26 (Frog). .... 135
A-33 Data sheet for Sound 27 (Things falling from trees). .... 136
A-34 Data sheet for Sound 28 (Wind). .... 137
A-35 Data sheet for Sound 30 (Hammering). .... 138
A-36 Data sheet for Sound 31 (Lawn mower). .... 139
A-37 Data sheet for Sound 32 (Mechanical fan). .... 140
A-38 Data sheet for Sound 33 (Mechanical squeak). .... 141
A-39 Data sheet for Sound 34 (Chain rattle). .... 142
B-1 Road input used for predictive sound map model. (WSDOT, 1996) (WSDOT, 2003) .... 143
B-3 Forested input used for predictive sound map model. .... 145
B-4 Contour areas input used for predictive sound map model. (UW Geomorphological Research Group, 2000) .... 146
B-5 Water areas used for predictive sound map model. (WSDOT, 2004) .... 147
B-6 Predictive sound pressure level maps for Sound 1 (Automobile). .... 148
B-7 Predictive sound pressure level maps for Sound 2 (Automobile acceleration). .... 149
B-8 Predictive sound pressure level maps for Sound 3 (Semi truck). .... 150
B-9 Predictive sound pressure level maps for Sound 4 (Car start). .... 151
B-10 Predictive sound pressure level maps for Sound 5 (Vehicle moving over bump). .... 152

PAGE 11

B-11 Predictive sound pressure level maps for Sound 6 (Unlock Beep). .... 153
B-12 Predictive sound pressure level maps for Sound 7 (Car Horn). .... 154
B-13 Predictive sound pressure level maps for Sound 8 (Keys jingle). .... 155
B-14 Predictive sound pressure level maps for Sound 9 (Brake squeak). .... 156
B-15 Predictive sound pressure level maps for Sound 10 (Compression brake). .... 157
B-16 Predictive sound pressure level maps for Sound 11 (Car door close). .... 158
B-17 Predictive sound pressure level maps for Sound 12 (Distant traffic). .... 159
B-18 Predictive sound pressure level maps for Sound 13 (Moped acceleration). .... 160
B-19 Predictive sound pressure level maps for Sound 14 (Engine idle). .... 161
B-20 Predictive sound pressure level maps for Sound 15 (Skateboard). .... 162
B-21 Predictive sound pressure level maps for Sound 16 (Voices). .... 163
B-22 Predictive sound pressure level maps for Sound 17 (Whistle). .... 164
B-23 Predictive sound pressure level maps for Sound 18 (Sneeze). .... 165
B-24 Predictive sound pressure level maps for Sound 19 (Footstep). .... 166
B-25 Predictive sound pressure level maps for Sound 20 (Cough). .... 167
B-26 Predictive sound pressure level maps for Sound 21 (Music). .... 168
B-27 Predictive sound pressure level maps for Sound 22 (Birds). .... 169
B-28 Predictive sound pressure level maps for Sound 23 (Insect chirp). .... 170
B-29 Predictive sound pressure level maps for Sound 24 (Insect buzz). .... 171
B-30 Predictive sound pressure level maps for Sound 25 (Seagull). .... 172
B-31 Predictive sound pressure level maps for Sound 26 (Frog). .... 173
B-32 Predictive sound pressure level maps for Sound 27 (Things falling from trees). .... 174
B-33 Predictive sound pressure level maps for Sound 28 (Wind). .... 175
B-34 Predictive sound pressure level maps for Sound 30 (Hammering). .... 176
B-35 Predictive sound pressure level maps for Sound 31 (Lawn mower). .... 177

PAGE 12

B-36 Predictive sound pressure level maps for Sound 32 (Mechanical fan). .... 178
B-37 Predictive sound pressure level maps for Sound 33 (Mechanical squeak). .... 179
B-38 Predictive sound pressure level maps for Sound 34 (Chain rattle). .... 180
C-1 Residential acoustic suitability map. .... 181
C-2 Summary of day-based evaluation. .... 182
C-3 Leq calculation maps per sound, day. A: Automobile. B: Automobile acceleration. C: Semi truck. D: Car start. .... 183
C-4 Leq calculation maps per sound, day. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Car start. .... 184
C-5 Night Leq input maps for calculation. A: Brake squeak. B: Compression brake. C: Car door close. D: Distant traffic. .... 185
C-6 Night Leq input maps for calculation. A: Moped acceleration. B: Engine idle. C: Skateboard. D: Voices. .... 186
C-7 Night Leq input maps for calculation. A: Brake squeak. B: Compression brake. C: Car door close. D: Distant traffic. .... 187
C-8 Night Leq input maps for calculation. A: Whistle. B: Sneeze. C: Footstep. D: Cough. .... 188
C-9 Night Leq input maps for calculation. A: Music. B: Birds. C: Lawn mower. D: Mechanical fan. .... 189
C-10 Night Leq input maps for calculation. A: Mechanical fan. B: Mechanical squeak. C: Chain rattle. D: Estimated Leq, all anthropogenic sounds (night). .... 190
C-11 Night Leq input maps for calculation. A: Insect chirp. B: Insect buzz. C: Seagull. D: Frog. .... 191
C-12 Night Leq input maps for calculation. Total night Leq, all natural sounds. .... 192
C-13 Anthropogenic Leq levels over 35dB. .... 193
C-14 Count of natural events exceeding 10dB above the overall Leq during night hours. .... 194
C-15 Count of anthropogenic sounds exceeding 10dB above night Leq. A: Semi truck. B: Semi truck. C: Car start. D: Car start. .... 195
C-16 Count of anthropogenic sounds exceeding 10dB above night Leq. B: Compression brake. C: Car door close. D: Distant traffic. .... 196

PAGE 13

C-17 Count of anthropogenic sounds exceeding 10dB above night Leq. A: Brake squeak. B: Compression brake. C: Car door close. D: Distant traffic. .... 197
C-18 Count of sounds exceeding 10dB above night Leq. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Car start. .... 198
C-19 Count of sounds exceeding 10dB above night Leq. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Car start. .... 199
C-20 Count of sounds exceeding 10dB above night Leq. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Car start. .... 200
C-21 Count of sounds exceeding 10dB above night Leq. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Car start. .... 201
C-22 Count of sounds exceeding 10dB above night Leq. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Car start. .... 202
C-23 Count of sounds exceeding 10dB above night Leq. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Car start. .... 203
C-24 Count of sounds exceeding 10dB above night Leq. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Car start. .... 204
C-25 Count of anthropogenic sounds exceeding 10dB above night Leq. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Car start. .... 205
C-26 Count of sounds exceeding 10dB above night Leq. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Car start. .... 206
C-27 Leq calculation maps per sound, day. A: Automobile. B: Automobile acceleration. C: Semi truck. D: Car start. .... 207
C-28 Leq calculation maps per sound, day. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Keys jingle. .... 208
C-29 Leq calculation maps per sound, day. A: Brake squeak. B: Compression brake. C: Car door close. D: Distant traffic. .... 209
C-30 Leq calculation maps per sound, day. A: Moped acceleration. B: Engine idle. C: Skateboard. D: Voices. .... 210
C-31 Leq calculation maps per sound, day. A: Whistle. B: Sneeze. C: Footstep. D: Cough. .... 211
C-32 Leq calculation maps per sound, day. A: Music. B: Birds. C: Insect chirp. D: Insect buzz. .... 212

PAGE 14

C-33 Leq calculation maps per sound, day. A: Seagull. B: Frog. C: Things falling from trees. D: Wind. .... 213
C-34 Leq calculation maps per sound, day. A: Low frequency wind interference. B: Hammering. C: Lawn mower. D: Mechanical fan. .... 214
C-35 Leq calculation maps per sound, day. A: Mechanical fan. B: Mechanical squeak. C: Chain rattle. D: Est. Leq Natural Sounds (Night). .... 215
C-36 Number of events 10dB or more above Leq, night. A: Automobile. B: Automobile acceleration. C: Semi truck. D: Car start. .... 216
C-37 Number of events 10dB or more above Leq, day. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Keys jingle. .... 217
C-38 Number of events 10dB or more above Leq, day. A: Brake squeak. B: Compression brake. C: Car door close. D: Distant traffic. .... 218
C-39 Number of events 10dB or more above Leq, night. A: Moped acceleration. B: Engine idle. C: Skateboard. D: Voices. .... 219
C-40 Number of events 10dB or more above Leq, day. A: Whistle. B: Sneeze. C: Footstep. D: Cough. .... 220
C-41 Number of events 10dB or more above Leq, day. A: Music. B: Birds. C: Insect chirp. D: Insect buzz. .... 221
C-42 Number of events 10dB or more above Leq, day. A: Seagull. B: Frog. C: Things falling from trees. D: Wind. .... 222
C-43 Number of events 10dB or more above Leq, day. A: Low frequency wind interference. B: Hammering. C: Lawn mower. D: Mechanical fan. .... 223
C-44 Number of events 10dB or more above Leq, day. A: Mechanical fan. B: Mechanical squeak. C: Chain rattle. D: Total number of events 10dB or more above Leq. .... 224
C-45 Areas where day Leq exceeds 65dB. .... 225
C-46 Weighting due to the percent time audible of desirable (<40dB) natural sound. .... 226
C-47 Weighting due to number of anthropogenic sounds exceeding 10dB over night Leq. .... 227
C-48 Weighting due to the percent time audible of natural sounds. .... 228
C-49 Weighting due to the count of the number of natural sound events exceeding 40dB. .... 229


C-50  Weighting due to the Leq of night sounds exceeding 35dB.  230
C-51  Weighting due to number of anthropogenic sounds exceeding 10dB over day Leq.  231
C-52  Weighting due to Leq exceeding 65dB.  232
C-53  Summary of night-based evaluation.  233


LIST OF ABBREVIATIONS AND DEFINITIONS

Aural Environment: The totality of sound in a given place, including time dependence and the interaction of acoustic impulses with surroundings.
1/1 Center Octave Frequency Bands: As defined by ANSI S1.11, precisely defined sound spectrum bandwidths centered around predefined midband frequencies. This standard breaks up the range of sound frequencies audible to a normal person into predefined sets for analysis.
1/3 Center Octave Frequency Bands: As defined by ANSI S1.11, a further subdivision of the audible sound spectrum into narrower sound spectrum bandwidths centered on predefined midband frequencies. Each 1/3 center octave frequency band occupies approximately a third of the bandwidth of a 1/1 center octave frequency band.
Acoustic Suitability: The aural qualities that are right, needed, or appropriate for a given evaluator.
ANSI: American National Standards Institute.
Anthropogenic: Of, relating to, or resulting from the influence of human beings on nature.
Base Layer: Raster dataset containing spatial data that is used as an input for an algorithm.
Calibrated [baseline]: Sound pressure data recorded by a listening device, referenced to a physical baseline (usually the threshold of human hearing) so that it can be used in an accurate and exact way.
Calibrated Listener: An individual trained to analyze sound recordings and data to identify sound events in an accurate and exact way.
dB: Decibel.
[Sound] Energy Measurements: A physical measurement of sound pressure, power, or intensity, or the resulting level from this measurement.
Evaluator: A listener who judges and values whether an aural environment is beneficial, neutral, or undesirable.
GIS: Geographic Information System, a computer-based method of processing, analyzing, and displaying spatial data.
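As an illustration of the ANSI S1.11 definitions above, the exact midband frequencies can be generated from the base-ten octave ratio G = 10^(3/10). The following sketch is illustrative only: the band ranges chosen here are arbitrary, and the standard's nominal (rounded) band labels such as 31.5 Hz or 63 Hz are not derived.

```python
# Octave frequency ratio per the base-ten definition in ANSI S1.11: G = 10^(3/10) ~ 1.9953
G = 10 ** 0.3

def octave_centers(n_below=5, n_above=4):
    """Exact 1/1-octave midband frequencies (Hz) around the 1 kHz reference."""
    return [1000 * G ** n for n in range(-n_below, n_above + 1)]

def third_octave_centers(n_below=17, n_above=13):
    """Exact 1/3-octave midband frequencies (Hz) around the 1 kHz reference.
    Each step is G^(1/3), so three 1/3-octave bands span one 1/1-octave band."""
    return [1000 * G ** (n / 3) for n in range(-n_below, n_above + 1)]
```

Successive 1/3-octave centers differ by a factor of G^(1/3), which is why each 1/3 band occupies approximately a third of the bandwidth of its parent 1/1 band, as the glossary notes.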


Global sound parameter: A property inherent to a sound or sound class that does not, or is assumed not to, vary across the study area.
Globally Spatially Varying Sound Parameters: A property inherent to a sound or sound class that is assumed to vary across the study area.
Kriging: A method of calculating spatial variation based on a fixed set of points across a 2D or 3D region.
L10: The sound pressure level exceeded by 10% of the sound energy of a given sample.
LXX: The sound pressure level exceeded by XX% of the sound energy of a given sample.
L90: The sound pressure level exceeded by 90% of the sound energy of a given sample.
Length of Event: The amount of time a sound event is being produced or heard.
Leq: Equivalent sound pressure level average.
Sound level parameter: A number representing the ability of an object of a sound class to produce acoustic impulses, or the measurement of the acoustic impulses that the sound class does produce.
Line Source: In acoustic modeling, a sound source assumed to be best represented in the model by a line segment.
Lmax: The maximum sound pressure level recorded by a device during a predefined measurement period.
Lmin: The minimum sound pressure level measured by a device during a predefined measurement period.
Lp: Sound pressure level.
Lw: Sound power level.
Name sound parameter: A categorization property of sound, where a sound event is identified and given a name to relate it to other sounds or events that may occur.
Natural ambient: As defined by the National Park Service, the statistical measurement relating to the percent time audible of only non-anthropogenic sounds.
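The statistical descriptors above (Leq, L10, L90, LXX) can be illustrated with a short sketch. Note that it uses the conventional reading of L_XX as the level exceeded for XX percent of the measurement time, applied to a sequence of sampled Lp values; this is an illustration, not the thesis's measurement code.

```python
import math

def leq(levels_db):
    """Equivalent (energy-average) level of a set of Lp samples in dB.
    Levels are averaged on an energy basis, not arithmetically."""
    mean_energy = sum(10 ** (L / 10) for L in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

def l_exceeded(levels_db, xx):
    """L_XX: the level exceeded for XX percent of the sampled period
    (conventional percent-of-time reading)."""
    ordered = sorted(levels_db, reverse=True)
    k = max(0, min(len(ordered) - 1, round(xx / 100 * len(ordered)) - 1))
    return ordered[k]
```

Because Leq is an energy average, a few loud samples dominate it: a record of 60 dB for 90% of the time and 80 dB for 10% of the time yields an Leq near 70 dB, well above the arithmetic mean of 62 dB.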


Number of Events per Hour: A calculated or estimated measurement of the number of discrete sound events per hour.
Occurrence sound parameter: A property of sound relating to the frequency or likelihood of its occurrence.
Percent Time Audible: Per fixed time period, the percentage of time a sound is audible on a site. May be related to the amount of time a sound is being produced, but it is beyond the scope of this thesis to relate the two.
Point Source: In acoustic modeling, a sound source assumed to be best represented in the map by a single point.
Predictive Noise Mapping / Modeling: A type of acoustic model that creates a predictive map of the sound energy present on a site based on source and physical parameter inputs.
Preference: The rating of sounds by an evaluator; higher preference normally means a more positive rating.
PTA: Percent time audible.
Reference Lp: 2×10⁻⁵ Pascals, the theoretical threshold of human hearing expressed in sound pressure.
Reference Lw: 10⁻¹² Watts, the theoretical threshold of human hearing expressed in sound power.
Residential: Relating to activities typically performed in a home.
Segmented model: A way of representing and analyzing data that varies over time and/or space by carving the space in which the data occurs into discretely delineated regions or groups for analysis purposes.
Shapefile: A type of data file used by GIS for analysis purposes.
Single number metric/measurement: The use of one number to provide simplified information about a range of conditions or data.
Single spectrum: The use of one spectrum to provide simplified acoustic information about a range of data or conditions represented by that data.
Sound Class Parameter: A property inherent to a grouping of sounds.
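The two reference quantities above anchor the decibel scales used throughout the thesis. A minimal sketch of the conversions, using the standard 20·log10 form for pressure and 10·log10 form for power:

```python
import math

P_REF = 2e-5    # Pa, reference sound pressure (theoretical threshold of hearing)
W_REF = 1e-12   # W, reference sound power

def sound_pressure_level(p_pa):
    """Lp in dB re 20 micropascals."""
    return 20 * math.log10(p_pa / P_REF)

def sound_power_level(w_watts):
    """Lw in dB re 1 picowatt."""
    return 10 * math.log10(w_watts / W_REF)
```

A pressure equal to the reference gives 0 dB by construction; a 1 Pa pressure corresponds to roughly 94 dB, and a 1 W source has a sound power level of 120 dB.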


Sound class: A grouping of related sounds.
Sound Levels: An energy or power measurement of sound, referenced to the threshold of human hearing. Used as a general term to describe physical measurement of sound.
Sound event: A discretely identifiable sonic occurrence.
Sound power: The ability of a sound source to generate acoustical impulses, measured in watts.
Sound Power Level: A logarithmic scale of sound power, referenced to the threshold of human hearing.
Sound Pressure: The variation in medium density present in a cross-sectional area of that medium, measured in Pascals.
Sound Pressure Level: A logarithmic scale of sound pressure, referenced to the threshold of human hearing.
Sound tag: In this thesis, a bounding figure drawn on a spectrogram to denote the time and frequency domain of audibility of a sound event.
Sound taxonomy: The structuring of all sounds on a site into a hierarchy showing relationships between different sound classes.
Soundscape: The totality of the aural environment, including listeners, sources, time, space, and the way that the paths between sources and listeners affect sound.
Spectrogram: A type of spectrum chart that displays acoustic data on time, frequency, and sound level axes.
Speech intelligibility: The ability of speech to be understood by listeners, based on a percentage of words that are able to be correctly discerned by a person of reasonable hearing.
Splining: A method of interpolating data between measured points on a graph by connecting them with a line.
Statistically Derived [sound levels]: Sound level measurements that result from statistical analysis of sound samples measured by a recording device over a defined period of time.
Study area: A geographic area where the soundscape was measured, modeled, and evaluated.
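The Spectrogram and Sound tag entries above describe a time-frequency view of a recording on which events are outlined. A minimal short-time Fourier transform sketch shows how such a chart can be computed; the frame length, hop, and window choices here are illustrative defaults, not the analysis settings used in this thesis.

```python
import numpy as np

def spectrogram(signal, sample_rate, frame_len=1024, hop=512):
    """Magnitude spectrogram in dB: rows are time frames, columns are
    frequency bins. A minimal short-time-FFT sketch with a Hann window."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    mags = np.abs(np.fft.rfft(frames, axis=1))
    levels = 20 * np.log10(np.maximum(mags, 1e-12))  # floor avoids log(0)
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)
    times = np.arange(len(frames)) * hop / sample_rate
    return times, freqs, levels
```

A sound tag then amounts to a rectangle in (times, freqs) coordinates enclosing the region where an event's energy is visible above the background.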


Suitability: The appropriateness of an environment for a particular purpose; normally a range.
Surface: A set of raster data used in GIS processing.
Time Centroid: The midpoint of a range of time, such as the range of time defined by a sound event's audibility or a measurement period's duration.
Typology: A geographic region defined by expected similarity in characteristics.
Valuation: A number applied to a parameter to represent its suitability to an evaluator.
.wav: File extension used for a Wave sound file.
Weight [relative]: A number applied to a parameter, a set of parameters, or a suitability to describe its relative importance in comparison to other parameters.
Wilderness: A broad class of evaluator in which it is desirable for anthropogenic sounds to be minimized. Related to the National Park Service's concept of differentiation between anthropogenic and natural sounds.


Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science in Architectural Studies

A PROTOTYPE METHOD FOR PARAMETERIZED SOUNDSCAPE MAPPING

By Adam Dale Bettcher

August 2014

Chair: Gary W. Siebein
Co-Chair: Paul D. Zwick
Major: Architecture

The objective of this thesis was to investigate a method of determining acoustical suitability and preference across a geographic region that accounted for the importance of the makeup of sounds ('soundscape') when evaluating the sonic environment. Objectives included determining a method of establishing acoustic suitability, and developing a way to map this suitability across a region. This was accomplished through combining existing methods in predictive noise mapping and soundscape analysis to create a map database of relevant parameters that described the aural landscape from site-measured data. This database was then used to calculate acoustic preference for a sample evaluator. The results showed that the development of a comprehensive system of mapping multiple soundscape-based parameters enhanced the descriptive quality of sound maps, with the possibility of describing substantially varied preferences for different listeners when compared to level-based sound mapping alone.


CHAPTER 1
INTRODUCTION

Deterministically calculated, energy-based noise mapping exists as a standard method of mapping aural environments. This method was used as a basis for a more robust mapping method that computed additional source-based factors such as duration, probability of occurrence, and source type to produce sound maps that documented valuations of different sonic elements by listeners. These extra parameters enhanced the sound map's ability to describe aural qualities in a sonic environment beyond describing the physical phenomenon of sound. This extra description overcame some limitations in current sound mapping methods in describing qualitative elements.

Attempts to document where people might find 'good' and 'bad' aural environments have been the subject of noise studies and mapping since the dawn of noise regulations (Thompson, 2002). Early noise measurement methods using objective energy measures and statistics provided information used to write the first noise regulations, and similar methods have been used since to describe different environments. As needs for description evolved, different methods of weighting and analyzing sound energy were used to enhance these descriptions. This practice has evolved into the modern technology of the deterministically computed noise map, in which average time-weighted sound pressure levels are predicted and mapped across a given area.

This interpretation of data derived from calculated long-term average sound pressure levels relied on the assumption that certain long-term averages will cause certain effects. There was much research that had been done in correlating energy levels of sound with effect on humans, and much more that had been done on


correlating different source-based and psychometric-based criteria with effects (Berglund et al., 1999) (Fletcher and Harvey, 1971) (Jeon et al., 2011) (Li et al., 2012) (Miller, 1971) (Yu and Kang, 2008). Most of these measures were energy based or relied on some kind of ratio of frequency content or time domain position. They were based on the implicit assumption that long-term averages convey the most important information about the aural environment. Additional research suggested that the content, source, and other unmeasured parameters of the sound also had significant effects (Jeon et al., 2011) (Yu and Kang, 2009).

Soundscape approaches gave a wider range of descriptors for 'good' and 'bad' aural environments. Inherent in the soundscape approach was the idea that source is a critical factor in a landscape of sound comprised entirely of discrete sources. Human perception of the source is intrinsic to the soundscape method, and had much promise for correlating sonic stimulus and perceptual or behavioral response. However, the soundscape method as described in the literature was not as straightforward as energy-based measures and metrics, due to this influence of perceptual qualities and the difficulty of breaking down a sonic landscape into its constituent parts.

This thesis proposed a parameterized method of sound mapping, in which various qualities of sounds were described, statistically computed from different types of source data, and mapped using deterministic propagation noise mapping methods as a base layer. These parameters provided the descriptive language that was used by a soundscape's evaluator to describe and map their acoustic needs. By including information such as sound source, frequency of occurrence, and percentage of time


present throughout a day along with sound levels, a more complete picture of aural suitability was generated as compared to a map of raw noise levels.

CHAPTER 2
LITERATURE REVIEW

Methods of Evaluating and Mapping Sound

When trying to evaluate an aural environment for suitability, science traditionally attempted to connect a measured property of sound with an effect. Usually, data representing some quality of a site (total sound energy, surveyed impressions) were collected and then correlated with reactions to this noise. This data was compared against similar data collected from other sites to form overall assumptions about the relationship between sound and human reaction to sound. When possible and relevant, this information could be mapped over a region to provide a quick, easy means to make decisions about the aural environment over a large area. Sound physically propagates in a deterministic and measurable fashion, and noise maps accounted for this when documenting the spreading of sound from a source through an environment, forming paths to receivers.

Two significant schools of thought were reviewed, each having insight on what data to collect and how to collect the data to produce these decisions and maps. The soundscape school of thought analyzed an aural environment as a composition of discrete sounds, and sought to analyze these individual sounds through perceived sonic qualities. The traditional method, by contrast, sought to analyze sound quantitatively through energy measurements, statistical calculations of energy measurements, and observations of human behavior and preference relative to these statistical calculations of energy measurements.
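The parameterized mapping idea introduced in Chapter 1 implies a weighted overlay of raster layers: each parameter surface is converted to a per-cell valuation and combined by relative weight. The sketch below illustrates that computation only; the layer names, valuation functions, and weights are hypothetical placeholders, not the thesis's actual parameter set.

```python
import numpy as np

def suitability(layers, valuations, weights):
    """Weighted overlay: each raster layer is mapped to a 0-1 valuation per
    cell, then combined by normalized relative weight into one suitability
    surface. Inputs are illustrative, not the thesis's parameter set."""
    total = np.zeros_like(next(iter(layers.values())), dtype=float)
    wsum = sum(weights.values())
    for name, raster in layers.items():
        total += (weights[name] / wsum) * valuations[name](raster)
    return total

# Hypothetical 2x2 study area: prefer quieter cells and more natural sound.
layers = {
    "day_leq": np.array([[55.0, 70.0], [62.0, 48.0]]),      # dB
    "natural_pta": np.array([[80.0, 10.0], [40.0, 90.0]]),  # percent time audible
}
valuations = {
    "day_leq": lambda L: np.clip((65.0 - L) / 30.0, 0, 1),  # quieter is better
    "natural_pta": lambda p: p / 100.0,                     # more is better
}
weights = {"day_leq": 2.0, "natural_pta": 1.0}
score = suitability(layers, valuations, weights)
```

Swapping the valuation functions or weights models a different evaluator over the same measured layers, which is the point of separating the parameter surfaces from the preference calculation.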


The Effects of Noise on Humans

There was a large extant body of research that attempted to connect sound levels to particular effects on human behavior, perception, physiology, and other factors. In this body of pre-existing research, correlations between the effect and the stimulus were documented through the collection of data, typically either through laboratory tests or through surveys combined with physical measurements. The intent of the studies varied widely; documented study intents included hearing loss, physiological effects, effects on mental states, effects on sleep, changes in medication for sleep, community annoyance, effects on speech intelligibility in acoustically sensitive settings, and other symptoms. These effects had been correlated with varying success to various sound levels, sometimes with additional information describing the characteristics of the sounds.

Hearing loss: For steady-state noises, hearing loss had been shown as a function of level over time, dependent on frequency. Hearing loss may be temporary or permanent (Miller, 1971). An increased chance of permanent hearing loss may occur through high levels of exposure over a long period of time, through repeated exposure of this nature for months or years, or through sudden impulsive sounds (Miller, 1971). This was measured through studies of people who work in noisy environments (Miller, 1971), and through laboratory (Fletcher and Harvey, 1971) and field experiments on animals (Dufour, 1980).

Effects on speech intelligibility: Speech intelligibility was reduced by noise intrusion. While people had excellent ability to distinguish individual noises irrespective of level, the presence of 'masking' sounds could obscure the meaning of speech and thereby cause an increased level of effort to determine the content of the speech.


The 'masking' noise's effect on speech intelligibility was highly dependent on many environmental factors and on the frequency content of the masking sound itself (Drullman, 1995). Additionally, a large number of other listener-dependent factors had a significant effect on speech intelligibility, such as age and hearing loss. These effects had typically been observed in laboratory settings, and were extrapolated from those settings into other environments (Levitt and Webster, 1998).

Physiological effects: Several studies attempted to draw a correlation between noise and effects on human physiology (Berglund et al., 1999) (Miller, 1971). Many of these studies were conducted in laboratory conditions; a subject was exposed to noise and correlations were drawn from the results (Berglund et al., 1999) (Miller, 1971). Site-based studies had also been conducted that attempted to map noise and correlate the exposure of residences with physiological effects, but this had been shown to be somewhat problematic due to issues of variable isolation; residents may have experienced exposure to noise at other locations beyond the home, and there were many factors beyond noise that could be correlated with the physiological effects cited in the laboratory studies (Berglund et al., 1999) (Miller, 1971).

In general, a number of negative psychological and physiological effects had been associated with increased exposure to elevated noise levels. The factors that influenced these physiological effects included time, length, and relative and absolute level (Miller, 1971). The studies documented exposure to particular non-meaningful noise and related these levels to different health effects. The EPA review found that the response of voluntary musculature, the slower response of smooth muscles, and the even slower response of the neuroendocrine system resulted in persistent changes of non-auditory bodily functions


and aggravation of disease conditions; noise exposure had been linked to these effects. Additional effects included activation of the 'startle' response, increased tension, and a reduction in the orienting reflex that decreased with adaptation (Miller, 1971).

Community Annoyance

Some of the earliest studies conducted on noise attempted to draw a correlation between measured sound levels and levels of complaints (Thompson, 2002) (Schultz, 1978). In general, surveys were conducted of the population to determine the level of annoyance (a categorical variable, with categories ranging from 'not annoyed' to 'highly annoyed') among the general population. The area surveyed was then measured at different locations to determine the change in equivalent average sound level over the area. Complaints were then mapped against levels to determine a correlation (Schultz, 1978).

Community annoyance depended on the type of noise source that dominated the area in question. As different sources had unique frequencies of occurrence, loudness levels, and tonal characteristics, the single-number equivalent average level may not have contained enough information to explain the variance in the levels. For example, SELs and NEFs were used in proximity to airports to draw a closer correlation between noise levels and complaints. Despite this, airplane noise had many properties that strongly affected opinion that were not captured by average levels (Kroesen et al., 2008). Additionally, some mechanical sources such as windmills were found to be annoying due to rapidly varying tonal characteristics that were averaged away using a strict long-term average, and due to low frequency content that was not captured by more common models of noise annoyance (Janssen et al., 2011). Thus, the 'dominant' noise was usually noted when connecting community annoyance or probability of complaint with sound levels.
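Dose-response studies of this kind typically fit a monotonic curve between a long-term average level and the percentage of respondents reporting themselves 'highly annoyed'. A generic logistic sketch of that shape follows; the midpoint and slope values are illustrative placeholders only, not the fitted coefficients from Schultz (1978) or any other specific survey.

```python
import math

def percent_highly_annoyed(level_db, midpoint=72.0, slope=6.0):
    """Generic logistic dose-response: percentage of survey respondents
    'highly annoyed' as a function of a long-term average level in dB.
    The midpoint (level at 50% highly annoyed) and slope are illustrative
    placeholders, not values fitted in any published survey."""
    return 100.0 / (1.0 + math.exp(-(level_db - midpoint) / slope))
```

The curve rises monotonically with level, which captures the broad finding above; the surrounding discussion explains why a single such curve underfits sources (aircraft, windmills) whose annoyance depends on properties that the average level does not carry.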


Effects on Sleep

A primary concern of noise at night was the effect on an individual's sleeping pattern. Noise and its effect on sleep had been measured by laboratory sleep studies, by questionnaires, and by analysis of correlated factors such as the consumption of sleep aids at pharmacies in noisy areas (Berglund et al., 1999). Listeners were not necessarily aware of their wakening, so directly questioning the exposed population may have been less effective than more rigorous means (such as laboratory tests) or less direct measures (such as the sales of pharmaceutical sleep aids).

Both steady-state noise and impulsive noises had been shown to interrupt sleep; additional properties of noise that influenced sleep, such as the vocal content of spoken sound, had also been shown to exist. Continuous masking noise may have enhanced sleep; however, the removal of a masking noise source may have caused a marked increase in the perceived loudness of the 'masked' sounds. This suggested that consistency was an important characteristic of the soundscape when determining the effects of sound on sleep (Pollack, 1991).

Discussion of Effects of Noise on Humans

The collective effects of noise on humans and the environment were noticeable, but sometimes difficult to generalize; additionally, determining numerically accurate relationships between noise and effect was problematic due to variability among listeners and the practical difficulty of isolating variables for study. Nevertheless, there was enough existing research to indicate basic relationships between noise and effects. In general, an increase in noise corresponded with an increase in negative effects. Other important factors included the content of the noise, the relationship of the noise to the


backdrop of other sounds also occurring, and the valuations and activities of the receiver of the noise.

Deterministic Noise Mapping

The attempt to determine the effect of sound on humans went hand in hand with developing predictive models for noise levels. This was aided by the fact that sound propagates in a mathematically predictable and deterministic way through media. Noise mapping was defined as a series of calculations that attempted to determine this propagation over a large area; implicit in this mapping was the correlation between the predictive map and the effect of the mapped noise on humans. There were a number of existing methods of mapping noise. Several important and commonly used methods included ISO 9613-2, CONCAWE, Harmonoise, and FHWA TNM.

Variables in Noise Mapping

Most noise mapping methods utilized a series of attenuation values subtracted from the initial measured Lp of the sound source in order to determine the level at a particular receiver point. This varied and depended on a number of factors. Typical important information incorporated in a noise map included the following (ISO 9613-2, 1996) (Manning, 1982) (US Department of Transportation, 1998) (Harmonoise, 2002) (Harmonoise, 2004) (Harmonoise, 2005):

Geometrical divergence (ISO 9613-2, CONCAWE): Geometrical divergence accounted for attenuation due to distance, with no other effects included. This was constant over distance, irrespective of site. This was the only method of attenuation considered constant in this manner.

Atmospheric absorption (FHWA TNM post-1998, ISO 9613-2, CONCAWE, Harmonoise): Attenuation of sound due to the atmosphere was also a factor. This was dependent on temperature and humidity, and was consistent across all models. Harmonoise allowed for differences in temperature gradients vertically across the atmosphere, which rendered it the most sophisticated method of calculating atmospheric attenuation; refraction through the atmosphere was


accounted for by using a parabolic equation model outside the region close to the source.

Ground effect (FHWA TNM, ISO 9613-2, CONCAWE, Harmonoise): Attenuation due to ground absorption and reflection was also included in most models. This was typically modeled by providing the ground with a 'reflective' or 'absorptive' coefficient, which allowed the calculating algorithm to either incorporate or ignore reflections from the ground over a given region.

Barriers (ISO 9613-2, CONCAWE, FHWA TNM): Attenuation due to barriers was accounted for by means of a calculation of the insertion loss of the barrier; factors included the relative height of the barrier in relation to the source, the length of the barrier, the material of the barrier, and the diffraction of sound over the top of the barrier. CONCAWE calculated barrier attenuation by Maekawa's method of calculating the Fresnel number and computing attenuation based on barrier height and the respective locations of the source and receiver. Relevant wind and temperature gradient modifiers were also applied, along with a recommendation to recalculate ground effects to account for their reduction by barriers. The ISO standard expanded these basic variables by reacting to diffraction; it provided different equations to account for different diffraction conditions over the edges of barriers.

Reflection from surfaces (ISO 9613-2, Harmonoise, FHWA TNM): The reflection of sounds off of barriers had an effect on the usefulness of the barrier, particularly if both sides of the road were lined with noise barriers. The more recent noise models were able to account for these reflections in their calculations; FHWA TNM's focus was primarily on accounting for reflections from parallel barriers, and it accounted for the absorption coefficient of barrier surfaces.

Diffraction over surfaces (Harmonoise, ISO 9613-2): Diffraction over surfaces was accounted for in ISO 9613-2 and Harmonoise.
These models provided different equations for broad categories of barriers that were known to have different edge diffraction conditions.

Topography (ISO 9613-2, FHWA TNM, Harmonoise): The topographical layout of the landscape also influenced the propagation of sound over a region.

Heights of source and receiver in relation to topography (CONCAWE): The ground effect was altered by the grazing angle of the sound in CONCAWE. This modeling method provided a series of attenuation curves for the ground effect based on the height of the source and receiver.

Screening by obstacles (ISO 9613-2, CONCAWE, Harmonoise): Attenuation through various other objects that were not strictly considered barriers was also included in some noise models. ISO 9613-2 accounted for foliage, industrial sites, and houses as regions. FHWA TNM also accounted for building areas and


'tree zones', with attenuation tied to the maximum impedance established in ISO 9613-2. CONCAWE accounted for screening by other equipment and by vegetation, which could underestimate attenuation at low frequencies; data suggested that different forests produce different attenuation. Harmonoise used a boundary element method (BEM) model to compute sound propagation over complex obstacles.

Weather (ISO 9613-2, CONCAWE, Harmonoise): Weather, as represented by noise models, was generally recognized as a complicated phenomenon that could result in a large amount of variation over distance in predicting the propagation of sound. All the models acknowledged the effect of weather on noise prediction in some manner. This acknowledgement ranged from a provision of different, broad classification systems for weather types that potentially affected propagation, to an acknowledgment that the model accounted for only one type of condition (usually the condition which was best for propagation, i.e., the worst-case scenario). The accuracy with which weather models predicted outdoor noise propagation depended largely on the level of detail present in the model, as well as the extent to which the model took into account other effects (Attenborough et al., 1994). Accordingly, the predictive models for weather used in various standards had some correlation to the level of complexity inherent in the modeling method.

ISO 9613-2 assumed conditions favorable to propagation, equivalent to a well-developed moderate ground-based temperature inversion. The resulting noise map thus represented a 'worst case' scenario.

CONCAWE used six different classifications of weather types when applying correction factors for meteorological effects in the propagation of sound. The model acknowledged that different regions may have had different regulatory procedures when accounting for wind in noise calculations (Manning, 1981), and thus the model used the classification types as a means of accounting for these differences.
The weather correction was calculated from experimentally derived attenuation curves.

FHWA TNM's original 1978 version ignored attenuation due to meteorological conditions. This was primarily due to the temporary effects that atmospheric


attenuation would have on the propagation of sound; for a conservative estimate, no atmospheric attenuation was applied (FHWA, 1978).

Harmonoise used 25 different meteorological profiles to compute the propagation of sound. The meteorological profiles were used to compute logarithmic-linear sound speed profiles that defined the speed of sound as a function of height. This richness of descriptors resulted from the European attempt to allow several different existing national systems to work within an international framework. The end result was that a richer palette of possible attenuation was created when compared with earlier models; this was made possible by the more advanced computer technology available in the early 21st century as compared to the end of the 20th century.

Method of calculation: ISO 9613-2, CONCAWE, and FHWA TNM all assumed a specific receiver position, and calculated a specific path between the source and receiver. In contrast, Harmonoise used a series of propagation planes and raytracing techniques to compute the propagation of sound, which was closer to the room acoustics model of computing the propagation of sound. Ray tracing, boundary element method, and parabolic equation (PE) models were all possible within the Harmonoise model. PE accounted for atmospheric refraction, but did not allow for complicated geometry such as curves and downward-tilting barriers; raytracing and boundary element method calculations were allowed as alternatives, but were limited to two-dimensional outputs due to computing power issues.

End result: The noise map typically calculated levels using a time-weighted average. The method of computing attenuation curves was relatively stable and produced reasonable approximations of the effect of the environment on sound's propagation; most variability stemmed from the weather, and from variability within the sound itself.
When factoring out this variability, all reviewed prediction models provided a suitable means of predicting the propagation of sound over a region. Soundscape Method One basic definition of 'soundscape' described the concept as a method of evaluating and manipulating sound that dealt with sounds as discrete events that occurred in defined surroundings and were reacted to by listeners. This approach was conceptually different from an averaged energy characteristic of a site, and encompassed many different approaches to sound, beginning with compositional methods. The concept had evolved since


first being introduced in the 1970s, and was an emerging field of study among acousticians. The development of practical applications and methods of documenting soundscapes from a scientific standpoint was in an early stage of exploration. An Overview of the Soundscape Method The soundscape, for the purpose of this thesis, was defined as the totality of discrete sound events occurring in a given area over time. The first systematic use of the term 'soundscape' in this manner was proposed by Murray Schafer in 'The Tuning of the World'. He proposed viewing all sounds as elements in a composition; he described the artistic, compositional, and informational effects of sound. He proposed placing the elements of a soundscape into a series of categories that were analogous to texture (the sound mark, the signal sound, and the like) and gave a means of spatial understanding of sound through the concept of the acoustic horizon. (Schafer, 1993) His ideas about the soundscape were important because they proposed a hierarchy of the sounds that comprised a soundscape, and divided them into an acoustical palette that could be used as a basis for evaluation and design. This theoretical groundwork for describing the soundscape was conceptually different from measuring sound energy and drawing correlations between sound levels and human reaction to sound. From an engineering standpoint, the soundscape as a composition was also described in mathematical as well as artistic terms. Sound was considered a type of information, and was described and documented using related informational terms such as signal-to-noise ratio, intelligibility, and high fidelity versus low fidelity. (Truax, 1985) The observations on the soundscape were also referenced, from more of an engineering perspective, in the works of Barry Truax. Truax defined sound as a type of communication, and applied communication science to the concept. For example, he


differentiated between 'high fidelity' environments in which discrete events were easily heard, and contrasted these to 'low fidelity' environments in which individual sounds were not easily heard due to the existence of masking background sound. (Truax, 1985) This approach lent itself to the use of quantitative scientific methodologies in the study of soundscape. Because sound was a space-occupying physical phenomenon, the space in which it occurred was acknowledged as having a significant effect on the qualities of sound, particularly qualities relating to human interpretation of the sound. Thus, the physical properties of the surroundings were considered an important influence on the soundscape. For example, the long reverberation times of medieval concert halls gave rise to the slow, monotonic Gregorian chant for increased intelligibility of the music. (Blesser and Salter, 2006) Barry Blesser argued that the soundscape was fundamentally a co-evolution among the physical environment, the sounds that occupied that environment, and the perceptions that tied these factors together. This was shown in the evolution of the interaction between sound, listener, and environment in the development of classical music; for example, earlier chamber pieces were written for a less reverberant, smaller environment with fewer audience members than later symphonic pieces. (Blesser and Salter, 2006) Thus, there was a strong locational and physical component to soundscape; the sound and its containing chamber coexisted and co-evolved. Independent listeners and artists also dealt strongly with the concept of 'soundscape,' and had particular valuations of sonic environments. Gordon Hempton described individuals who were specially trained to evaluate soundscapes and identify the sounds which occurred in


them. (Hempton, 2009) He suggested that listening to identify all the elements of a sonic environment was a potentially teachable skill that was not in common practice among contemporary Western people. (Hempton, 2009) As an audio ecologist, he argued that noise pollution was not directly related to sound levels and their relationship to human reaction, but instead to the content of the sonic environment, thus necessitating the identification and evaluation of individual sounds, their behaviors, and their qualities. (Hempton, 2009) This audio ecological approach to noise pollution was squarely within the framework of earlier soundscape theorists and writers. In summary, the intellectual and theoretical framework of soundscape theory involved breaking down the sonic environment into its individual components: the individual sonic events themselves, the spaces that contained these sounds, and the valuation of these sounds by listeners. Specially trained listening was necessary to discern all the sounds, which had a significant descriptive and qualitative component; at the same time, it was possible to document these events and describe them in quantitative scientific terms. This framework, while less straightforward than the quantitative engineering approach of sound analysis, had the potential to significantly enhance the work of acoustical engineers who attempted to describe sound's effect on people. This framework had been explored in a variety of ways by several researchers seeking practical applications of soundscape study. Formal Methods of Documenting Soundscape Contemporary methods of evaluating the soundscape typically involved correlating user preference with a pre-recorded or directly experienced aural environment. As such, a systematic way to map these preferences directly over a region was not found in the literature. Correlations had been drawn between preference


and psychoacoustic methods, sound levels could be directly mapped, and there had been some methods that attempted to document a shift in soundscape along a 'sound walk'. These efforts, while important for documenting soundscape, did not provide a method to systematically map acoustical qualities on the scale of the noise map, although there had been some attempts to introduce such a procedure. The quantification and documentation of soundscapes was an area of emerging research. Attempts had been made to document soundscapes by recording existing conditions binaurally and testing listeners' reactions in a lab; consistencies in vocabularies were found across listeners when describing these recordings in lab settings, which suggested a common basis for describing different soundscapes. (Axelsson et al., 2010) The placement of these sounds in context was quite important, however, as much research suggested that visual and other sensory aspects of a space played a significant role in determining perceptions of sonic space, sometimes greater than that of the sound itself. (Jeon et al., 2011) (Li et al., 2012) (Yu, 2009) The sound walk was one method used to record listener perception in context. In a sound walk, listeners were directed along an itinerary, whether in groups or individually, and asked to identify acoustical characteristics of the spaces through which they passed. (Hong et al., 2011) Resulting observations from the sound walk were seen as a part of a tool palette for an urban designer, as a way of connecting sound and space. (Venot and Semidor, 2006) When combined with signs requesting listeners to be quiet, the sound walk method was successful in managing and improving visitor aural experience in US national parks. (Stack et al., 2010)


Attempts to correlate raw sound levels with acoustic comfort met with mixed success when studying soundscapes, such as the low correlation coefficient between acoustic comfort and sound level shown by several studies (Nilsson, 2007) (Yu, 2009); there was a noted need for different types of classifications of different sites due to additional factors beyond raw sound pressure level (Yu, 2009). Other studies found a correlation between acoustical properties of soundscape and 'feel' of space, and determined the contribution of various architectural metrics to each of these 'feelings'; these perceptions were influenced by visual and other sensory phenomena as part of a synthetic whole. (Jeon et al., 2011) Parameters based on characteristics of the individuals who listened to and evaluated soundscapes were also frequently analyzed; Yu discovered significant correlations between long-term environmental exposure to sound and evaluation of the aural environment, while finding some correlation between demographic factors and preferences in soundscape evaluation, though correlation factors were not high (Yu, 2009). This research suggested that the qualities of the physical environment in which the sounds occurred were significant in soundscape, as well as the past experiences of the listeners who were evaluating a soundscape and the type of soundscape being evaluated. Actual mapping of the soundscape remained a frontier to be developed, particularly over scales larger than a room or a single outdoor space. Kang was able to map acoustical parameters in an outdoor space using indoor room acoustics software (Kang, 2005); the framework laid out by Blesser established these site characteristics as important to the overall soundscape. Servinge et al. proposed a GIS-based method to demonstrate the concept of acoustical 'preference' on a map. (Servinge et al., 1999)


This method correlated vehicle noise in urban areas with negatively rated soundscapes, and assumed a linkage between lower levels of vehicular noise and higher preference. These results were shown over time, in a video format (Kang and Servign, 1999). This particular soundscape mapping method as described had its limitations; it relied on the evaluation of a single source and did not map sound as accurately as existing noise mapping methods. However, its connection of sound to preference was a strength of this model, as was its ability to document the change in the behavior of sound over time. As part of its Quiet Parks Initiative, the National Park Service was also actively developing a series of metrics to deal with the issue of 'soundscape'. To manage the noise impacts of air tours, this organization was tasked with documenting the 'natural ambient' across the various national park facilities to measure the impact of human-generated noise on natural environments. This led to several different attempts to develop metrics for 'soundscape' that allowed the impact of some sounds to be rated differently than that of other sounds; in general, this 'natural ambient' was related to the percentage of time that manmade sounds were audible in a natural environment. (Rapoza et al., 2008) The degree to which manmade noise impacted the natural ambient varied depending on a study area's purpose within a park (NPS, 2006), and may have included manmade and historically important sounds (NPS, 2000). This variation guided the qualitative method of determining what noise was 'valued' and what noise was 'not valued'. Discussion of Soundscape Literature The soundscape approach was closer to that of musical composition than engineering, with a focus on identifying reactions to the sonic environment and its content rather than quantitative measurements. With its concern for the quality and


content of ambient sound, the researchers and composers who used the soundscape approach were working with material that was not necessarily easy or obvious to measure without human intervention and interpretation. In particular, situations where the content of the background sound was as important as (or more important than) the sound level were able to benefit greatly from a soundscape approach. Various attempts at using engineering metrics to address and access the compositional qualities of a soundscape found some success, such as those of the National Park Service. Attempts to quantify the fundamentally qualitative elements of the soundscape suggested some convergence between the two approaches. 'Soundscape mapping' precedents existed within this area of convergence, due to their need to take into account both the physical properties of sound propagation and the evaluative method of source identification and composition. In this combined approach, the relative strength of each method (predictive power for engineering methods, descriptive power for soundscape methods) had the potential to be harnessed to provide a more accurate and robust method of evaluating the effect of sounds on humans. Emerging research in soundscapes made attempts to gather data to make more informed design and policy decisions about how to address noise control concerns, and attempted to expand on this noise-based method of evaluation by also accounting for desirable sounds, such as in the NPS natural sounds program. Review of Different Methods of Addressing Noise Despite their power, noise maps had some limitations in predictive and descriptive abilities. In addition to accommodating simplifications of real-world physical conditions for calculation purposes, many studies using the soundscape method suggested that the underlying numbers noise maps calculated did not necessarily


capture many of the important impacts on listeners that were caused by noise. These limitations were less indicative of problems in the prediction of the physical spread of sound than of limitations in correlating these sound levels to human reaction and preference. Some of the limitations in the level-based method of evaluation were reduced or eliminated through a soundscape approach, which took into account the source, the length of time the sound occurred, and other factors of sound by breaking down the environment into its component parts. While the soundscape method showed great promise for addressing the limitations of quantitative and statistical methods of describing an aural environment for evaluation purposes, a method of systematically mapping these characteristics had not yet been developed. A hybrid of traditional methods for the calculation of the physical properties of sound with the descriptive characteristics used in soundscape methodologies was thought to address some of the above issues with existing noise maps while expanding the possible applications of a noise map. A proposed 'hybrid' solution was the subject of this thesis.
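One way to picture such a hybrid is to score a location's per-class soundscape parameters against an evaluator's stated preferences. The sketch below is purely illustrative: the sound classes, parameter values, and weighting formula are invented assumptions, not the method this thesis develops.

```python
# Hypothetical per-class parameters at one map location, and one
# evaluator's preference weights (all values invented for illustration):
location = {"automobiles": {"level_db": 65.0, "pct_audible": 40.0},
            "birds":       {"level_db": 48.0, "pct_audible": 25.0}}
prefs = {"automobiles": -1.0, "birds": +0.5}  # negative = disliked class

def suitability(loc: dict, weights: dict) -> float:
    """Toy suitability score: the preference-weighted audibility of each
    sound class, scaled by how loud that class is at the location."""
    score = 0.0
    for cls, params in loc.items():
        w = weights.get(cls, 0.0)  # unknown classes contribute nothing
        score += w * params["pct_audible"] * params["level_db"] / 100.0
    return score

print(round(suitability(location, prefs), 1))  # -26.0 + 6.0 = -20.0
```

The point of the sketch is only that a level map alone cannot produce such a score; it requires the additional per-class parameters that the following chapter sets out to measure and map.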


CHAPTER 3 METHODOLOGY AND RESULTS For most precedent methods of non-soundscape noise study, acoustic suitability was determined from the sound energy on a site. These methods resulted in a binary yes-no comparison (Figure 3-1), based on an assumed human reaction that correlated to this level of acceptable sound energy. Predictive noise mapping methods expanded this basic binary method of determining suitability with predictive maps of sound energy on a site. (Figure 3-2) By contrast, the soundscape method of evaluation relied on many factors beyond sound energy to make a more nuanced and descriptive evaluation of an aural environment. Thus, a mapping method for soundscape methods needed to accommodate the expanded range of descriptors and reactions used by this method. It was thus required to measure and map parameters beyond this sound energy level. A specific implementation of the soundscape mapping process, conducted in the city of Port Angeles, Washington and nearby Olympic National Park (Figure 3-3), was used to explore the techniques, limitations, and benefits of producing these soundscape maps. A direct process from raw data calculation to evaluator acoustic suitability maps was developed and used to produce soundscape maps for a specific location. The maps were calculated from a limited sample of data to demonstrate the procedure. Overview of Process The proposed method of soundscape mapping incorporated existing predictive noise measurement and mapping methods to evaluate a sonic environment from a


soundscape approach, based on its sounds and their behaviors. This type of sound mapping required documenting the sonic environment, and then calculating and mapping the relevant parameters related to the behavior of individual sound events described above. This method had four distinct phases, described below: soundscape documentation, sound data processing, spatial data processing, and acoustic suitability determination and mapping. The overall flowchart of data, processing methods, and outputs was illustrated in Figure 3-4, and generally was as follows:

Soundscape documentation: The documentation of the sonic environment was done by taking sound recordings, sound level measurements, and observer notes that documented the behaviors and qualities of a soundscape made of individual sound events.

Sound data processing: The relevant descriptive parameters for these sounds were extracted from the field-measured data and organized into a taxonomy of sound that described the sonic environment in terms of its component sounds.

Spatial data processing: These parameters were then mapped using existing predictive techniques for sound level, enhanced with techniques developed to describe other parameters of the behavior of the sound.

Acoustic suitability determination and mapping: The mapped parameters were used to estimate acoustic suitability by mathematically translating an evaluator's preferences onto the predictive map.

Sound Data Processing The goal of collecting data from the site was to provide a meaningful description of the sounds and their behaviors in a study area for evaluation. A list of the types of data to collect was developed to produce information about the types of sounds that occurred on the site. Important descriptive parameters of sound that could be calculated from the collected data were developed; these parameters described where the sound would occur, when, and for what duration and level. Statistical processing


methods of individual sounds, recordings, and their sound energy were then developed to convert the data into descriptive parameters. The end result was a taxonomic list of the sounds present in the study area and their descriptive parameters. This collection of data was used in a predictive and explanatory model that allowed the mapping of these parameters over space. Measured Data To develop an understanding of how the soundscape changed across a geographic area during a typical day, a series of acoustical measurements were taken over the course of August, 2010. Measurements from three different sites during two different time periods per site were used to compute the parameters for each sound class. The data consisted of calibrated overall A-weighted sound level measurements, recorded sounds taken simultaneously with the calibrated sound level measurements, calibrated octave band equivalent sound pressure level measurements of individual events, and notes about sound events taken during the recording period, including the times the measurements were taken and notes on the measurement locations and periods. The purpose of this data was to identify and record individual sound events at different places and times for soundscape analysis of the whole site. Sound recording (.wav). The sound recording consisted of a series of digitally stored 16-bit numbers obtained 44,100 times a second. Each number represented a sample on a sound waveform; the number of bits determined the possible range of these numbers, from positive to negative. The sampling rate determined the theoretical upper frequency range of the sound, which at the Nyquist rate is 22,050 Hz. These data were then played back on headphones, and standard digital signal processing methods were used to analyze them.
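The sampling relationships described above can be sketched directly; the event bounds in the example are hypothetical sample indices, not values from the field data.

```python
SAMPLE_RATE_HZ = 44_100  # samples per second, as used for the field recordings
BIT_DEPTH = 16           # bits per sample, setting the numeric range

# The Nyquist rate puts the theoretical upper frequency limit at half
# the sampling rate:
nyquist_hz = SAMPLE_RATE_HZ // 2
print(nyquist_hz)  # 22050

def samples_to_seconds(start_sample: int, end_sample: int,
                       rate_hz: int = SAMPLE_RATE_HZ) -> float:
    """Duration of a tagged span, converted from sample indices."""
    return (end_sample - start_sample) / rate_hz

# A hypothetical event spanning 88,200 samples lasts two seconds:
print(samples_to_seconds(44_100, 132_300))  # 2.0
```

The same sample-to-seconds conversion reappears later in the chapter when event lengths are computed from sound tags.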


Calibrated sound energy measurements. Measurements of overall A-weighted sound levels were taken to provide a physical measure of sound energy on a site, expressed in base units of sound pressure level (dB, relative to a reference sound pressure of 2×10^-5 Pa). (ANSI S1.4, 2006) These measurements were flat weighted, either with integration occurring every minute or with single data points taken every 1/8 second (125 ms). The calibrated overall A-weighted sound energy measurements provided a baseline guide for the sound pressure level present on the site. Calibrated measurements were then linked to data derived from the sound recording, such as source type, frequency of occurrence, and percentage of time audible, to provide a way to connect data measurements to a physical property of sound. Times of measurement taking. The start and end time of the measurement period were recorded. As the soundscape of a site varied over the course of the day, the parameters to be calculated were also likely to change considerably over time. The inclusion of the time period in which the recordings were taken gave a grounding point to the measurement sample, so that inferences could be made from patterns across the data over the course of a day. All times of events within the measurement period were subsequently determined by linking the point's sample to a particular time period. Measurement locations and periods: The intent of the measurement point and time period selection was to gather enough information about the behavior of sound over space and time to make inferences about other times and places. This information served as the basis for analysis to extrapolate information about the behavior of sound for each typology. With this limited information, a very rough picture of the behavior of sound over the course of a day in each typology was inferred.
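The decibel scale referenced here can be sketched as follows. The pressure and level values in the example are illustrative; the 94 dB figure reflects the common 1 Pa calibrator tone, not anything measured in this study.

```python
import math

P_REF_PA = 2e-5  # reference pressure near the threshold of hearing

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level re 2x10^-5 Pa: 20*log10(p/p0)."""
    return 20.0 * math.log10(pressure_pa / P_REF_PA)

def leq_db(levels_db: list[float]) -> float:
    """Energy-average (equivalent) level of a series of dB readings,
    as used for minute-integrated measurements: dB values are converted
    back to relative energy, averaged, then converted to dB again."""
    mean_energy = sum(10 ** (level / 10.0) for level in levels_db) / len(levels_db)
    return 10.0 * math.log10(mean_energy)

print(round(spl_db(1.0), 1))                 # 94.0, the usual 1 Pa calibrator level
print(round(leq_db([60.0, 60.0, 70.0]), 1))  # 66.0: the loud reading dominates
```

The energy-averaging step is why a single loud event can dominate an integrated reading, which is part of the motivation for the occurrence parameters introduced later in the chapter.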


The study area was divided into three typologies: urban, rural, and wilderness. The boundary of the urban area was estimated from the Clallam County parcel map and aerial photos by visual observation, with areas of noticeable collections of buildings and clustered lots identified as urban. The boundary of the wilderness area was determined by the limits of the national forest and park land, and the rural area was defined as the land area that was neither wilderness nor urban. (Figure 3-5) Behavioral conditions of sound, including the number of events per hour and the percentage of time audible, were assumed to be homogenous within each typology for urban, rural, and wilderness areas. One measurement point was selected for each typology (Figure 3-5). These measurement points were selected based on their accessibility and how representative the immediate surroundings appeared to be of conditions in the rest of the typology. During the measurement phase, each measurement point was visited and measured at two distinctly different time periods (e.g., night versus day) to document the change of the behavior of sound over time (Table 3-1). For each measurement period at each site, five to thirty minutes of simultaneous audio recordings and calibrated measurements were taken. Differences in the soundscape over time were captured at each measurement point due to the temporal spacing of the measurement periods. Data collected: The soundscape mapping method required the documentation of both the physical properties of sound energy and data that allowed sound events to be identified and analyzed. To document both the sound energy and the source types, calibrated sound level measurements were taken simultaneously with audio recordings


of the sound. To further supplement this data, notes were taken in the field on which sound events were occurring at the time of observation, including the direction and location of the event where possible. The data, along with the sound event tags discussed in C: Sound tags below, can be found in Appendix A. Apparatus and procedure: The calibrated measurements were taken by a Rion NL-32 sound level meter (Figure 3-6). The digital recordings were taken by a Zoom H2 Handy Portable Stereo Recorder, a device that records in stereo or 'surround' sound using two to four condenser microphones with a frequency response most accurate above 40 Hz (Figure 3-7). Both were mounted five feet above grade on a tripod. The sound level meter was calibrated both before and after the measurement series with a pistonphone calibrator, with a variance of less than 1 dB observed between the starting and ending calibrations. A windscreen was used on each device to minimize wind interference. At each site, a single location for placing the apparatus was selected, and a consistent orientation for the apparatus was chosen. At the beginning of each measurement period, the apparatus was set up and the recording devices were switched on. The recording device was started and the time was noted; a countdown was initiated, and then the calibrated device was started upon the countdown to provide a link between the calibrated measurement and the recorded measurement. During the recording, the operator moved to a consistent distance of two to eight feet from the apparatus to minimize interference. The operator then recorded observations about the events occurring on the site in a notebook. After the pre-determined period of recording, the operator noted the time the measurement stopped, and halted both devices. The


operator then continued on to the next site, until all sites and time periods were measured. Calibrated Listener Manual tagging of sounds was an important portion of the procedure for producing data about individual sound classes, and also determined which events fell into which sound classes. Thus, the person who was processing the data had a significant effect on the outcome of the results, particularly when compared to more rigid and physically determined data. This was due in part to the fact that perception was an important factor in determining an evaluator's rating of an aural environment; numeric data did not necessarily capture many elements of perception, and an overseer for the process of defining and classifying sounds was important in allowing human perception to influence the process of applying meaning to the data in a way that was similar to how the end evaluator might perceive sound. One example of the necessity of incorporating human perception as an element in soundscape analysis was the way that a chirping bird might be described. Sounds were often composites of different elements; bird calls comprised several individual 'chirps' separated by intervals in which the source was not generating sound. If the identification of the sound were limited only to the 'chirps', the percentage of time audible would be far lower than if the identification of the sound encompassed the periods of inactivity in between the individual chirps; a listener may thus have perceived the bird call as only the chirps, or included the noiseless space between the chirps, depending on the application. This decision was ultimately based on the reason that the bird chirp might be evaluated; for example, listening to a sustained bird call may have been important to an ornithologist, while determining peak levels of birds might have


been important when overall noise levels were a concern. The decision as to how the sound was tagged influenced parameters such as the length of overall time heard and the sound level statistics. To accommodate these possible variations in the processing of sound data, explicit directions needed to be developed for the listener to follow. General guidelines were thus established with regard to sound tagging to encourage consistency in documentation. The sound tagging was done by a single listener using a consistently understood interpretation of the guidelines. The guidelines were as follows:

- The 'start sample' of the tag was estimated by the point at which the sound first became audible, and the 'end sample' was estimated at the point at which the sound was no longer audible.
- The 'lower frequency' and 'upper frequency' were determined by the points at which the sound event was no longer visible on the spectrogram.
- Sounds with a repetitive, impulsive nature (such as footsteps) were tagged as a single unit, as the percentage of time audible was not accurately captured by recording only the impulse, due to the lingering impression of each sound on the operator or listener (particularly when followed by another, similar sound).
- Sounds with substantial variation and pauses (such as voices and birdsong) were tagged as a single unit until a notable pause.
- A limited number of sound classes were used as descriptors; if events were only slightly qualitatively different and could be classified similarly, a single sound class was used (e.g., 'old cars' and 'cars' were both classified under 'automobiles'; similarly, all bird species except seagulls were classified as 'birds').
- Complete and near-complete breaks in continuous sounds (such as fan noise) were treated as breaks between two distinct events rather than as one continuously occurring event.
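The grouping rules above, under which repetitive impulses count as one unit until a notable pause, can be approximated programmatically. The half-second pause threshold and the event times below are assumptions for illustration, not values drawn from the tagging guidelines.

```python
def merge_events(events: list[tuple[float, float]],
                 pause_s: float = 0.5) -> list[tuple[float, float]]:
    """Merge (start_s, end_s) spans of same-class events whose gaps are
    shorter than pause_s, mirroring the guideline that repetitive
    impulses form a single unit until a notable pause."""
    merged: list[tuple[float, float]] = []
    for start, end in sorted(events):
        if merged and start - merged[-1][1] < pause_s:
            # Gap too short to count as a pause: extend the previous event.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def percent_time_audible(events: list[tuple[float, float]],
                         recording_s: float) -> float:
    """Total merged event time as a share of the recording length."""
    total = sum(end - start for start, end in merge_events(events))
    return 100.0 * total / recording_s

# Three footsteps 0.2 s apart collapse into one 0.7 s event; a sound
# 3 s later stays separate:
steps = [(0.0, 0.1), (0.3, 0.4), (0.6, 0.7), (3.7, 4.0)]
print(merge_events(steps))                           # [(0.0, 0.7), (3.7, 4.0)]
print(round(percent_time_audible(steps, 60.0), 2))   # 1.67
```

As the bird-chirp discussion above notes, the choice of pause threshold directly changes the percentage-of-time-audible statistic, which is why a single calibrated listener applied one consistent interpretation.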
Sound Event Tags The sound tag was used to connect a given range of data to a specific sound class. A 'sound tag' was a data structure containing the limits of the sound's presence in


sound file data in the time and frequency domains. These sound tags were generated by the calibrated listener while listening to the sound recording data. The goal of this process was to break down the data and recordings into their component events. The end result of this portion of the procedure was a list of individual sound events along with their time and frequency domain ranges. This dividing of the data into time and frequency domains allowed a series of descriptive parameters to be calculated from the sound data for each defined acoustic event. Individual sound events were tagged using a computer program written for the purpose of creating sound event tags. The program allowed a user to see the relationship between time, frequency, and level using a spectrogram, and to demarcate individual sound events using a starting and ending point (Figure 3-8). Each individual sound event was then classified according to the type of event it described (the event's sound class). The bounding boxes created during the sound tagging process provided the lower and upper frequency limits and time domain limits describing each individual sound class event, allowing the data to be broken down into events for further analysis. These sound event tags, along with the recorded data and information about the physical properties of each site, were placed in Appendix A. Parameters to Calculate For Each Sound Class The parameters of sound class, sound level, length of time per event, number of events per hour, and percentage of time audible per hour were calculated to provide a mix of quantitative, source-based, temporal, and qualitative descriptors for potential evaluators to use. These parameters were capable of describing rudimentary behavior and properties of sound for evaluators that would place importance on the type of sound being heard, when it was being heard, where it was being heard, how often it was being


heard, the level of sound being heard, and how long the sound events were. This list of parameters allowed the maps to document variation in the performance of sound classes over space and time. Additionally, selecting the class of sound as a parameter aided in breaking up an aural environment into its component parts. Sound level parameters. As discussed earlier, most predictive sound methods calculated the propagation of sound using sound power levels in 1/1 center frequency octave bands. Sound power levels produced by each source in the source class were roughly determined by providing an average estimate of the distance to each source type that occurred over the course of the measurement. These measurements were tied to a calibrated baseline by the procedure outlined in Section G below, and sound power levels per source were estimated from these sound pressure levels by the process described in Section G below. When tied to a calibrated baseline, these statistics were used to describe the physical sound energy parameters of the event. Occurrence parameters: In addition to level, parameters relating to the occurrence and duration of the sound were used to obtain insight into how the aural environment changed over time. Important descriptors of sound included the percentage of time audible (calculated as the total time of occurrence of the event divided by the length of time of the recording), the number of events, and the length of each event in seconds. Parameters related to the presence or activation of a sound source varied over both time and space. Thus, variation was calculated for each location and for each hour over the course of the day. Determination of hourly parameter variation over time was done by a splining method, while the determination of parameter variation over

PAGE 51

space was done by segmenting the geographic area into typologies and applying the calculated parameter from each location across the range of the typology. The parameters that varied over both space and time were the percent time audible per hour for each sound class during a recording period and the number of events per hour of each sound class. From these, information about the behavior of the sound beyond level could be generated.

The aforementioned occurrence parameters were based on existing metrics for calculating occurrence statistics used in source-based analysis of sound. The US National Park Service defines a method for computing the 'natural ambient', which is calculated from the percentage of time that anthropogenic sounds are audible, and which assigns a continuous equivalent sound pressure level derived from this percentage (e.g., if no non-natural intrusions are observed for 75% of the time on a site, the natural ambient sound level is considered to be the L75, and all sounds above this level are marked as human-generated); this method suggests that 'percentage of time audible' should be an important metric (Rapoza et al., 2008). Airport noise impacts were based on the number of overflights, which suggests that the 'number of occurrences of each event' (n) is an important consideration in describing sound (Raney and Cawthron, 1998). Finally, 'length of time per event' gave a range of how long, in seconds, each individual event occurred, which was a helpful parameter for further describing differences in sounds, and was used in conjunction with n to approximate the percentage of time audible.

Parameters per Sound Event Tag

After the tagging process, there existed a list of all events that the listener identified, along with the time and frequency domain bounds of these events. The bounds served as a window on the overall data for each sound event, so that events
could then be analyzed for variations in time and over space. Extracting the data from this window provided the duration of the event in seconds, the list of the range of octave bands available (Hz), and the sound level in the octave frequency bands (dB). These parameters were calculated in the following way:

Length: Each sound event tag had its length calculated in terms of samples. The sample rate of the wave file could be used to convert the length from samples into seconds. The length in samples was computed by subtracting the start sample from the end sample to obtain the total number of sound samples per event. The samples were then divided by the sampling rate of 44,100 samples per second to yield the number of seconds.

Octave frequency band levels: The segment of the wave file isolated by the time domain was run through an ANSI octave band filter (ANSI S1.11, 2004; Ellis, 2004). This octave band filter provided a series of uncalibrated equivalent sound pressure levels for each octave band in the tag's time domain; the reference sound pressure level is the microphone's own base level, according to the wave file standard. While this does not capture a sound pressure level relative to the reference sound pressure level at the threshold of human hearing, it does capture the variation in sound pressure level per octave band found in the wave file; this variation was later tied to a reference sound pressure level that had been captured simultaneously by the calibrated measuring device.

Sound class: The sound class information was provided during the tagging process. This allowed each sound event to be identified as a type of source, which is
helpful when grouping and classifying events for calculating class-wide parameters and for placement into the sound taxonomy.

Two parameters were calculated for each individual sound tag: the uncalibrated sound pressure level per octave center frequency band and the length of time of the event. The sound class of each event, and the location and time period during which the event occurred, were also noted. These data served as the basis for computing parameter values for each sound class.

Sound Class Parameters per Measurement Period

The first phase of analysis of each individual sound tag produced a list of sound tags with time bounds (relative to the sample rate of the wave recording), frequency bounds, and an associated sound class. From this list, parameter values were calculated from the relationship of the sound events to their existence in time. These parameters included the number of events in each sound class per hour and the percentage of time audible per sound class per hour. Each hour of the day at each location was provided with a value for both parameters through a splining method. The data for each measurement period was condensed into a single representative point for use in this splining method. In this way, the change in the soundscape over the course of a day could be monitored and described quantitatively as a sum of its component parts. Interpolation points for each parameter were calculated using the following methods:

Number of events per hour: For each sound class, the number of events per hour was calculated for each recording period by counting the number of events and dividing this number by the length of time of the measurement period.

nrate_rec = n_rec / t_rec
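As a concrete illustration, the two occurrence parameters can be sketched in Python. The list of (start_sample, end_sample) pairs and the constants below are hypothetical stand-ins for the tagging program's output, not the thesis code:

```python
# A minimal sketch of the per-recording occurrence parameters for
# one sound class in one measurement period. Tag bounds are given
# in samples, matching the tagging procedure described in the text.
SAMPLE_RATE = 44_100  # samples per second, per the recordings

def events_per_hour(n_events: int, t_rec_seconds: float) -> float:
    """nrate_rec = n_rec / t_rec, expressed as events per hour."""
    return n_events / (t_rec_seconds / 3600.0)

def percent_time_audible(tags, period_samples: int) -> float:
    """Fraction of the period audible, with overlapping tag
    intervals merged so shared samples are counted only once."""
    merged = []
    for start, end in sorted(tags):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return sum(e - s for s, e in merged) / period_samples
```

For example, two tags spanning samples 0-100 and 50-200 of a 400-sample period give a percent time audible of 0.5, because the 50 overlapping samples are counted only once.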
Percentage of time audible per hour: For each sound class, the percentage of time audible was calculated by summing the total length of samples for each sound class, subtracting the calculated overlap of samples within the sound class, and dividing this number by the total number of samples of the measurement period. Overlap was calculated by comparing the starting and ending samples of each sound tag and summing the total number of samples during which two or more events were occurring.

PTA_rec = (t_class - overlap_class) / t_rec

This procedure resulted in a list of the number of events per hour and the percentage of time audible for each sound class during each recording period. Inferences about how these parameters changed over time were made by using a simple spline method, described further in G: Global sound class parameters below. For splining parameters, time centerpoints for each measurement period were used as the x values of the spline points; these were calculated by adding half the length of the recording time to the start time of the measurement period. The rate of change in the parameter values between the time centerpoints was used for the splining.

t_centerpoint = t_start + (t_rec / 2)

Global Sound Class Parameters

At this point in the calculation procedure, data existed for each individual sound event tag and for each individual sound class during each measurement period. In addition to the time-dependent and location-dependent parameters of percentage of time audible per hour and number of events per hour, parameters were calculated that were time-independent and location-independent. The source sound power level of each sound class and the length of time per event were analyzed for all times and
locations. The variation in these parameters was calculated to globally document the behavior of each sound class. The parameters described above for each sound tag and measurement period were used for further analysis of global behavior. Using this dataset as a base, information about the behavior and qualities of sound was calculated for the entire study geographic area. Each sound class was described by calculating quartiles of the length of time per event over the time period, and through spline interpolation of the number-per-hour and percent-time-audible parameters at each site. These data served as the quantitative base of the sound taxonomy. The data for each sound class was placed in Appendix B.

Tag-based parameters: The tag-based parameters of estimated sound power level per octave center frequency band and length of time per event were calculated for individual sound event tags. The sound event tags were separated by sound class, and global parameters were calculated for each individual sound class based on the information from these accumulated sound tags. These data gave insight into the inferred qualities and behaviors of individual sound events within a sound class. The intent of this type of data was to produce a descriptive library of the sounds that occurred on the site.

Length of time per event: Global data on the length of time per event for each sound class was calculated by averaging the length of time from all sound event tags of matching sound classes. Quartiles of the length of time for each sound class were also calculated. The quartiles provided information on the variation in the length of each sound class event. This parameter was used to describe the behavior of individual events per sound type across the site.
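The quartile summary just described can be sketched with the standard library. The event lengths below are hypothetical example values, not measured data:

```python
# Quartiles of event lengths (seconds) for one sound class, using
# the standard library's statistics module.
import statistics

def length_quartiles(lengths_s):
    """Return (Q1, Q2, Q3) of the per-event lengths."""
    return statistics.quantiles(lengths_s, n=4)

lengths = [1.2, 2.0, 2.4, 3.1, 3.5, 4.8, 6.0, 9.9]  # hypothetical
q1, q2, q3 = length_quartiles(lengths)
```

The spread between Q1 and Q3 gives a quick sense of how variable the duration of a sound class's events is, which is the role the quartiles play in the taxonomy.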
Sound power level per octave center frequency band: Sound power level per octave center frequency band was calculated from the uncalibrated octave center frequency sound pressure levels calculated in E: Parameters per sound event tag and from estimated average distances to the sound source in question observed on the site. Sound power level per octave band per sound class was not a straightforward average, nor were the levels in the sound event file calibrated to any real-world condition; to address these issues, a rough calibration procedure was developed, as discussed below.

Calibration procedure between the wave-based sound tag data and the calibrated measurement data: Data from the .WAV files was not inherently calibrated to a particular reference sound pressure level; instead, as noted earlier, values in wave files were single sample points from an input source, represented as floating point values relative to the baseline state of the microphone. While this captured the variation of pressure from the sound source, the measurement was not tied to a particular sound pressure level baseline, and a calibration procedure was necessary.

A simple calibration procedure for the sound file data was devised by comparing the uncalibrated sound pressure measurements contained in the recording file with the Leq calculated from the calibrated sound pressure level recorded during the same period. The first step in the calibration procedure was to calculate the relative level difference between the calibrated and uncalibrated measurements. To calibrate the overall measurement, the averaged sound pressure of the portion of the calibrated file that was measured during the sound recording period was compared to the average absolute value of the sound pressure stored in the sound recording period's file. The
start and end samples of the sound tag were linked to the corresponding times. The average sound pressure of both the calibrated file and the uncalibrated recording file was calculated through the following summation of sample points:

p_calc = 10^(Lp_sampleCalibratedMeasurement / 10)
p_calc = (p_calc1 + p_calc2 + ... + p_calcn) / n
p_meas = 10^(Lp_sampleWAV / 10)
p_meas = (p_meas1 + p_meas2 + ... + p_measn) / n
p_cal = p_meas - p_calc
Lp_class = 10 * log(p_calc + p_cal)

The second step was to separate out the sound pressure levels from each sound tag, filter out third-octave band data for future weighting, and remove all extraneous time and frequency band data from each tag. Each tag's samples were run through an ANSI-compatible octave band filter that extrapolated third-octave center frequency band data for the range of the tag's sound data (Ellis, 2004). The highest and lowest upper and lower frequency bounds, respectively, from each sound event for each sound class were used to set the limits on the third-octave center frequency bands. All values in frequency bands that fell outside these ranges were discarded. The octave band center sound pressure level was calculated by logarithmically summing the third-octave band frequency data that fell within the frequency bounds of each octave band, as in the equation below.

Lp_octave = 10 * log(10^(Lp_thirdoct1 / 10) + 10^(Lp_thirdoct2 / 10) + 10^(Lp_thirdoct3 / 10))

The difference between calibrated and uncalibrated measurements in each octave band was added to all octave sound pressure values for all sound classes,
resulting in sound pressure levels with some relation to a reference sound pressure level. The difference in sound pressure per third-octave center frequency band was calculated between the calibrated individual event measurements and the average uncalibrated sound pressure. The sound pressure per octave center frequency band was then calculated by adding the calibrated level from the first step to the uncalibrated level in the sound event tag for each sound class, using all sound event tags matching the sound class in question. An A-weighting curve was then applied to the octave band data to provide a base source number for calculation that related to the frequency response of the human ear (ANSI S1.4, 2006).

Sound power level was calculated from this resulting sound pressure level by use of a sound pressure level to sound power level (Lp-Lw) conversion. The distance used for each sound class was an estimate of the average distance between the recording device and the sound source. The sound power level was calculated using a simple outdoor Lp-Lw conversion, resulting in inferred sound power levels for each octave center frequency band for each sound class. It should be noted that this calculation procedure was simplified by assuming a uniformly radiating sound source and a consistent environmental quality.

Lw_class = Lp + 20 * log(d_avg) + 8 (Piercy and Daigle, 1998)

Spatially varying measurement period-based parameters: The number of events per hour and the percentage of time audible per hour were derived from the relationship of tag properties to the overall measurement period. Due to the limited sample size at each site, the values derived from each measurement period were used as the base points in a spline function to document variation in these parameters over
the course of a day at each site. This spline function interpolated hourly points based on the calculated values of the time-variant parameters of percentage of time audible and number of events per hour. The measurement periods' calculated points formed the base calculation points of a spline function that was written to calculate the hourly values of the percent time audible and number-per-hour parameters for each sound class. The y value of each spline point was taken from the list of calculated interpolation points for each sound class for each measurement period. The rate of change between two adjacent points was then calculated to determine how these values change over time. Hourly points were interpolated by adding this rate of change to the start value of each parameter while subtracting the fractional difference between the centerpoint of the time and the nearest hour. This process is shown graphically in Figure 3-10.

mParam_recs = (paramRate_rec2 - paramRate_rec1) / (t_centerpoint2 - t_centerpoint1)
paramRate_hr = param_rec + mParam_recs * paramHr_rec - mParam_recs * (t_centerpoint - prevhour)

The result of this calculation gave a value for the percentage of time audible and the number of events per hour for each sound class at each hour at each measurement location. These temporally varying parameters provide information about how the sound at each location changes over time, particularly in regard to its presence and its duration. When combined with the time- and location-invariant source parameters of level per octave band and length of time per event, a profile of the behavior and properties of each sound class was developed. These profiles were used as the basis for the sound taxonomy.
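The hourly interpolation step can be sketched minimally by treating the segment between two measurement period centerpoints as linear, which is consistent with the single rate of change used between points. The times and values below are hypothetical:

```python
# Piecewise-linear interpolation between two spline points
# (t_centerpoint, value), with times given as hours of the day.
def interpolate_hourly(t1, v1, t2, v2, hour):
    m = (v2 - v1) / (t2 - t1)  # rate of change between centerpoints
    return v1 + m * (hour - t1)

# e.g. percent time audible rises from 10% at a 9:30 centerpoint
# to 18% at a 13:30 centerpoint; the interpolated value at noon:
print(interpolate_hourly(9.5, 10.0, 13.5, 18.0, 12))  # 15.0
```

Repeating this for every hour between the first and last centerpoints at a site yields the hourly parameter values used in the maps.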
Sound Taxonomy

The calculation of the global parameters listed above provided a list of all the sounds on the site, as well as quantitative descriptions of their levels and behaviors. However, the analysis of individual sound classes alone did not provide a view of how the sounds could be meaningfully related in a soundscape. These relationships were considered important in determining linkages between sounds, and an aid in locating related sounds in space. To document these relationships, a sound taxonomy was produced.

The sound taxonomy was a list of all the sound classes present on the site, along with their calculated parameters, with linkages denoted between individual sounds in their larger environment. The taxonomy was organized by grouping sound classes into larger groups based on their type and location. This organization eased both the process of evaluation and the process of locating the sound classes in space. Evaluators could more easily define their preferences in terms of larger groups of sound classes, as opposed to individually defining smaller groups of sound classes.

Sounds were classified according to how their source objects were located and the function that those source objects served. This grouping of sounds made it easier to share spatial data among related sources, such as those related to transportation. For example, automobile-based motion has a number of activities and corresponding sounds related to the presence of the automobile, including automobile passing, car horns, engine idling, and brake squeaks. All sources relating to automobile movement were grouped under the 'automobile' taxonomy. Automobile movement sounds, in turn, were grouped under the 'transportation' taxonometric class, as transportation objects are typically found in very specific areas and require specific infrastructure to function as intended. A total of six overall sound taxonometric classes were found; this list, as well as a discussion of their
rationale, can be found in Table 3-2 below. The sound taxonomy itself (Figure 3-11) applied to the whole site.

Spatial Data

Spatial Data Processing

Spatial data were used to translate the range of information provided by the sound taxonomy described above into maps of how the parameters vary across space. This was accomplished by assembling a database of physical site conditions, mapping the spatially and temporally varying parameters identified earlier, and mapping sound level data using predictive sound mapping methods. The final result of this method was a series of GIS surfaces and databases, each surface containing information about a sound class parameter. These surfaces and databases were used as base layers to compute acoustic suitability.

Locations of Measurement Taking

Noting the location of the initial measurement sites was important when calculating spatially varying parameters, as spatially varying parameters collected from each measurement location were used to estimate the behavior of sound across the entire study area. As discussed in the previous section, data from three measurement locations were used to capture sound information. These locations were chosen based on the typology definitions discussed above. The measurement points were geographically located by assigning them coordinates; these were estimated by viewing a map of the study area with a high-density aerial photograph underlay, estimating where the measurement points occurred, and placing points at these locations to anchor them in maps. These points were given the parameters as properties, which were then transferred to their larger typologies.
Sound Source Locations

Each sound class in the sound taxonomy was located in space throughout the geographic area. Predictive noise models required information on the location of sound sources to predict the spread of sound on the site; thus, the locations of sound sources and the characteristics of their surrounding environment were important. Most predictive models for sound propagation allow several different types of sources, which allowed for modeling of the differences in the ways that sounds are generated on sites. A source type for each sound class was selected based on the sound's behavior and location, and spatial data about source locations was documented for use in a predictive sound model.

For each sound class, a GIS file was developed that included information about the potential source locations. Where sources followed a defined path or occurred in a specific physical region, location information for these sound classes was placed in the file using a vector format. This location data was derived from site observations, inferences made from aerial photographs and GIS-based parcel data, and additional data made available by state, federal, and local government agencies. The values for the sound power levels calculated above were provided to each sound source location as additional parameters for each object within the file. A map of the source locations and base data for each sound class can be found in Appendix B, along with an explanation of why each sound class was located where it was.

Depending on the source type, a point, line, or area source was used. The point source was used for fixed sources and routes along which fixed sources occurred, such as permanent mechanical noise systems. The line source was used to represent sources that existed on consistent tracks or as lines, such as automobiles. The area
source was used to represent sources that could occur at any particular point over a geographic area, such as birds or human voices. The range of the data was drawn, and the sound power level from the sound taxonomy was placed in the attributes of the file to represent the sound power spectrum. While this method assumes that all sources occur everywhere at once, one at a time, the probability of hearing a given sound is separated out as a different parameter; thus, the percentage of time audible was combined with the raw levels of the predictive sound map to produce an hourly Leq.

Physical Site Properties

To calculate a noise map, information about the nature of the path of sound was important to note in addition to information about the sources of the sound. As discussed in the literature review, there were a variety of standard predictive models for noise. Each predictive model took a different variety of factors into account in calculating the physical propagation of sound. Thus, the physical site properties to include in the model were dependent on the type of predictive modeling method used.

ISO 9613-2 was used as the predictive model to calculate the spread of sound for the level parameters. This standard method of calculation took into account information about geometrical divergence, atmospheric absorption, ground effect, barriers, reflection from surfaces, screening by obstacles, and weather. Of these effects, several required additional information about the site beyond the relative locations of sources. Additional site data was thus required to calculate the ground effect, barriers, reflection from surfaces, and screening by obstacles.

Data about the site properties were collected from publicly available GIS datasets from state, federal, and county databases. These datasets consisted of digital elevation
models, parcel data, aerial photographs, field-measured environmental data, and recorded physical properties of infrastructure and other manmade systems. This information was used in combination with field observations and generalized assumptions to produce layers whose geometry represented built-up areas, vegetated areas, areas of exposed water, elevation shifts, un-vegetated areas, and major barriers. A list of the final physical site property layers, descriptions of their derivation, and a list of the input layers from government datasets used to derive the physical layers are described in Table 3-2 below (Washington Dept. of Ecology, 1999); the maps were located in Appendix A. This information was held constant across all generated predictive models used for the project.

Predictive Sound Level Maps

To calculate the physical spread of sound from sources across a landscape, a predictive sound map was generated for each sound class by use of a computer model of the site and each individual source. Each map represented the sound pressure level of a sound class if a single sound event were to occur over all points simultaneously; although this was not anticipated as a realistic condition, the maximum potential sound pressure level could be combined with statistical information about a sound's behavior to describe more realistic conditions in which sounds occur for limited durations at specific locations. Each map consisted of a raster dataset spanning the extents of the site and using 100-meter cells, with each cell containing a value for the predicted sound level at the centerpoint of the cell. Each raster dataset was re-imported into the GIS map for analysis by the evaluators. The outputs of these maps can be found along with the extended sound class list in Appendix B.
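To make the per-cell level idea concrete, here is a toy sketch that keeps only the geometrical divergence term of ISO 9613-2 (an attenuation of 20*log10(d) + 11 for a point source). The grid size, source position, and sound power level are hypothetical, and the real maps were computed in CadnaA with the full set of attenuation terms:

```python
# A toy predictive-level grid for a single point source, using only
# geometrical divergence; all other ISO 9613-2 terms are omitted.
import math

def receiver_level(lw_db, source_xy, receiver_xy):
    """Receiver Lp = Lw - (20*log10(d) + 11) for distance d in m."""
    d = math.dist(source_xy, receiver_xy)
    return lw_db - (20 * math.log10(d) + 11)

# A 3x3 grid of 100 m cells near a 90 dB source at the origin:
grid = [[round(receiver_level(90, (0, 0), (x * 100, y * 100)), 1)
         for x in range(1, 4)] for y in range(1, 4)]
```

Each cell value is the level a receiver at that cell's centerpoint would see if the source were emitting continuously, which mirrors the "single event occurring everywhere at once" interpretation of the maps.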
The standard used for the sound propagation model was ISO 9613-2, with calculations performed in the CadnaA software package. This method was chosen due to the widespread use of the standard, a balance between complexity of inputs and simplicity of computation, the lack of detailed weather information collected during the measurement periods described above, and the inclusion of data that could easily be derived from GIS-based modeling methods. The output from these calculations accounted for weather variability by assuming conditions most favorable to the propagation of sound, and should thus give results that indicate the widest possible reach of the sound source (ISO 9613-2, 1996).

To produce the maps, data was provided to the software to calculate an individual noise map for each sound class. The physical property layers were imported into GIS; relevant values, such as elevation for contour lines, were applied to objects from data stored in the attributes. These layers were held constant for all map calculations. Each sound class location layer was imported one at a time, the calculated sound power levels were applied to the geometry, and the model was calculated for each sound class layer. The output grid was calculated to represent a receiver height of one meter above the ground. The resulting raster map was exported to GIS, and the sound class location layer was removed in preparation for the next sound class level map to be calculated.

These predictive sound level maps indicated the average sound pressure level of a single sound event occurring everywhere at once. This parameter is representative of the maximum possible receiver sound pressure level for each sound event. The level maps represented the maximum acoustic space that a single event can occupy in
the absence of all other events. Additional parameters and comparisons were able to modify this space to allow for a richer description of the behavior of each sound class.

Method of Computing Spatially and Temporally Varying Parameters

After the predictive map was constructed, the spatially and temporally varying parameters other than sound pressure level were calculated from data in the sound taxonomy for each site measurement point. For this simplified process, spatially varying parameters were assumed to remain constant across each geographic typology. A GIS layer consisting of vector area shapes was used to define the limits of each typology. Hourly data about the number of events per hour and the percentage of time audible for each sound class were created as attributes, with the parameters of each taxonomy attached to each shape. While not entirely accurate or nuanced (for example, applying each parameter as a constant across a geographic area results in drastic shifts in parameters at the boundaries that would not be observed in reality), the method does roughly show the behavioral differences in sound between the geographic typologies.

Spatial Analysis of Spatially and Temporally Varying Parameters

For each sound class, the shapefile of each geographic typology was used as the basis for generating the raster surfaces for the spatially and temporally varying parameters. Surfaces were generated for each hour for each sound class for the percentage of time audible and number of events per hour parameters. Surface values were assigned from the parameters contained in the attribute information attached to each typological area surface. The imported sound level raster layer was used as the base for determining the cell size and location of the raster grid for these surfaces, to ease calculations involving both parameters. Thus, at every point where a level
measurement has been assigned, all other parameters are also assigned to the corresponding location for each sound class.

Sound Class Parameter Surfaces

Upon completion of the spatial analysis of the spatially and temporally varying parameters, a series of surfaces was obtained that contained parameter values for each raster cell for each sound class. Conceptually, these surfaces described the potential sound level and documented the behavior of different sounds across the site. All surface values aligned in a consistent grid size and location to allow for calculation efficiency. With the parameter and sound level information serving as the underlying database, simple surface math was applied to the dataset to calculate information about the behavior and level of sound on the site. These calculations then served as the basis for acoustic suitability calculations derived from sample evaluator data. All sound class parameter data and locations for each sound class were placed in Appendix B of this report.

Evaluator Data

Evaluator Data Processing

The next step in the development of the suitability map was to translate the quantitative parameters into numerical values that represented the impact of each parameter on the end valuation of the soundscape. When the acoustic parameters were calculated and mapped, a series of descriptive base layers were generated that described the aural environment of the study geographic area. These base layers mathematically represented several aspects of the 'landscape' of sound as it shifts over space and time. The layers showed spatial variation in sound pressure level, hourly event occurrence, and event duration for each source. However, these maps by
68 themselves were devoid of meaning without understanding how the parameters related to the perception of the sonic environment . A process of calculating 'acoustic suitability' was developed and applied to the data to translate environmental parameters into human value, and thus into a map of human value of the sonic environment. To develop maps of acoustic suitability, the calcul ated parameters (or combinations of parameters) were then converted into human valued data by applying numerical values representing how each parameter influences the human perception of sound. First, a description of the activity was created and sounds we re assigned a weight based on desirable, undesirable, or neutral. Then, weights were applied to the duration of each event, and what time the event occurred. The weighted parameter surfaces were then summed to provide a surface that incorporates the total weight of each parameter surface. This surface was indicative of acoustic suitability. Not all acoustic environments were likely to be universally acceptable to all people at all times. What people perceived as important in a sonic environment was highly dependent on the activity being performed, time, and even the person listening. This was done for one theoretical group of evaluators to demonstrate the process. This concept was based on the Land Use Conflict Identification Strategy (LUCIS) used in urban planning. In the LUCIS method, an underlying landscape was analyzed for suitability by different uses; different landscape characteristics were given different weighting factors, the weighting factors for each use were averaged using a weighting scale fo r each factor, and areas of suitability (and conflict between uses) were identified. The analytical concepts of landscape analysis and weighting factors were extrapolated to the acoustic data to determine acoustic suitability. (Carr and Zwick , 2007)
Evaluator Parameters

Parameter values were estimated for residential acoustic suitability. Evaluator criteria were selected from a review of the literature regarding the effect of sound on people. This list was then expressed in terms of the values represented in the sound class parameter surfaces, to produce a translational series of evaluator parameters. These values consisted of a weighted series of criteria expressing likes and dislikes for different sounds, levels of sound, and other sonic characteristics at different times of day. The relative importance of each evaluator parameter to the overall valuation was then determined by the relative weight of each criterion. Through this translation, acoustic suitability for each evaluator was expressed in terms of parameters that could be mapped.

Basic parameters: From the data calculated up to this point in the process, several values were derived for important measurements. One of the most important of these values was the hourly Leq. Important data relating to the calculation of this parameter included the sound level of each event, the percent time audible of each event, the number of events per hour, and the length of time for each event. Assuming all data was normally distributed, the Leq for an individual sound for an hour was calculated using the equations below. This assumed that all sounds were evenly distributed across the hour when calculating overlapping events (represented by the variable O_est, which assumed an even distribution of sounds and was estimated from the number of events n_events, the average length of events in the sound class length_avg, and the total number of seconds in the measurement period sec_total). Grouped sets of sounds could thus be averaged to obtain an Leq for a set of sounds, represented as Leq_set.
Leq_sound = 10*log10(PTA_time * 10^(L_sound/10) * O_est)

O_est = (n_events * length_avg) / sec_total, or 1, whichever is greater

Leq_set = 10*log10(10^(Leq_sound1/10) + 10^(Leq_sound2/10) + ... + 10^(Leq_soundN/10))

The residential evaluator was representative of residential land use, which encompassed a number of activities that were assumed to be located in a home. Each of these activities had its own acoustic likes and dislikes, and each activity was assumed to occur at different times of day in the residence. Because different activities were not assumed to be conducted simultaneously, time was an important factor in the weighting applied to each of the acoustic parameters. Previous national research produced data about how people spend their day. The federal government conducted telephone surveys of household members over the age of 15. Survey participants were asked to report information on time spent per day doing a predefined list of activities (BLS, 2013); this data is graphed in a chart below (Cox et al., 2009). Specific activities documented by this survey were assumed to be tied to the home, specifically sleep, television watching, and household activities.

Sleep: According to Figure 3-11, at least 50% of the population was asleep between 10PM and 7AM in 2008 (Cox et al., 2009). Several factors that influenced sleep included the duration of sound, level of sound above background noise, overall sound level, and the type of sound (Miller, 1971). Impulsive sounds, as well as background sound levels over a certain level, were also seen as harmful to sleep (Berglund et al., 1999). Natural sounds were also considered more desirable than anthropogenic sounds despite the latter's potential for sound masking, as soundscape
research indicates that natural sounds may be perceived as more pleasant in controlled conditions (Axelsson et al., 2010). Therefore, the presence of audible natural sound on the site was considered a positive. The surface used to represent this valuation was denoted as V_NatPTA. Impulsive sounds (t <= 1 s) were considered disruptive to sleep (Berglund et al., 1999). Therefore, the count of sound events exceeding 10dB above the total Leq was considered disruptive and assigned a negative value. The surface used to represent this valuation was denoted as V_ImpSounds. The World Health Organization establishes a baseline Leq of 35dBA as a threshold background sound level for sleep indoors (Berglund et al., 1999). To account for open windows and outdoor sleeping, cells where the calculated Leq derived from anthropogenic sounds exceeded 35dB were valued lower than areas where this level was not exceeded. The surface used to represent this valuation was denoted as V_AnthroLeq. Additionally, the number of anthropogenic events exceeding 35dBA was counted negatively to account for the impact of individual sounds rather than the average represented by the Leq, and also the lower value of individual anthropogenic sounds. This weight was intended to capture the impact of discrete events exceeding the level specified by the WHO, in addition to the background levels. The surface used to represent this valuation was denoted as V_AnthroEvents. Because of the previously referenced pleasantness of natural sounds, the average level of natural sound was not rated as negatively in the weighting as the anthropogenic sounds. The surface used to represent this valuation was denoted as V_NatLeq. The surfaces V_NatPTA, V_ImpSounds, V_AnthroLeq, V_AnthroEvents, and V_NatLeq were thus calculated for valuation of sound during sleep hours.
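The hourly Leq aggregation defined under 'Basic parameters' above can be sketched in code. This is a minimal illustration rather than the thesis's GIS implementation; the function names (o_est, leq_sound, leq_set) are chosen here for clarity and are not from the original text.

```python
import math

def o_est(n_events: int, length_avg: float, sec_total: float = 3600.0) -> float:
    """Overlap estimate: fraction of the period covered by events,
    floored at 1 per the source equation ('or 1, whichever is greater')."""
    return max(n_events * length_avg / sec_total, 1.0)

def leq_sound(pta_time: float, level_db: float, overlap: float) -> float:
    """Hourly Leq of one sound class from its percent time audible (0-1),
    event sound level in dB, and overlap estimate."""
    return 10.0 * math.log10(pta_time * 10.0 ** (level_db / 10.0) * overlap)

def leq_set(levels_db: list[float]) -> float:
    """Energetic (logarithmic) sum of several per-class Leq values."""
    return 10.0 * math.log10(sum(10.0 ** (level / 10.0) for level in levels_db))

# Example: two sound classes audible 50% and 10% of the hour at 60 and 70 dB.
l1 = leq_sound(0.5, 60.0, o_est(6, 300.0))   # 6 events of 300 s each
l2 = leq_sound(0.1, 70.0, o_est(2, 180.0))
total = leq_set([l1, l2])
```

Because the set-level Leq is an energetic sum, the combined level always exceeds the loudest individual class; two equal 60 dB classes, for instance, combine to roughly 63 dB.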
Each raster cell was assigned a number based on the values of its parameters. The equations below were used to calculate each cell of the raster surface. The new number values assigned through these equations represented a range of possibilities for valuations, with lower numbers representing less suitable conditions and higher numbers representing more suitable
conditions. The summary of these sounds, levels, and their weights is found in Table 3-4. These valuations are shown on the site sound taxonomy in Figure 3-12. To assign the values listed above, it was necessary to calculate additional values from the sound taxonomy parameters and sound pressure level maps that had previously been calculated. The values listed above were calculated in the following manner:

n_ImpSounds: This variable was the number of impulsive sounds at each cell. An impulsive sound was defined as a sound with a median sound pressure level in the raster cell with the potential to exceed 10dB above the L_eqTotal value of the cell (Berglund et al., 1999).

n_AnthroEvents: This variable is the total count of anthropogenic events exceeding 35dB between the hours of 10PM and 7AM at each cell. This was calculated by summing the number of events per hour for each hour between 10PM and 7AM for all events
defined as anthropogenic on the sound taxonomy, in all cells where that sound class's sound level exceeded 35dB.

PTA_LongestNat: This variable is the greatest percent time audible of a single natural sound class as defined in the sound taxonomy at each cell.

L_eqNat: This variable is the Leq of all natural sounds as defined in the sound taxonomy from 10PM to 7AM, using the Leq calculation method as defined in 'Basic parameters' above.

L_eqAnthro: This variable is the Leq of all anthropogenic sounds as defined in the sound taxonomy from 10PM to 7AM, using the Leq calculation method as defined in 'Basic parameters' above.

L_eqTotal: This variable is the Leq of all sounds as defined in the sound taxonomy from 10PM to 7AM, using the Leq calculation method as defined in 'Basic parameters' above.

Household activities: These are general activities associated with being home, and include cooking, preparing for work, socializing, and other activities that do not fall into a particular category and take place in the home; Figure 3-11 suggested these activities occur primarily at 8AM and between the hours of 5 and 10PM (Cox et al., 2009). Areas with exterior noise levels over 65dB were considered undesirable, as per the HUD noise regulation (The Environmental Planning Division, 1985). As per the noise regulation, Leq levels between 65dB and 75dB were considered likely to be unacceptable under normal circumstances, and levels over 75dB were considered unacceptable. The surface used to represent this was denoted as V_AllLeq. Impulsive events 10dB over the daytime Leq were also considered undesirable. The surface used to calculate this valuation was denoted as V_AnthroEvents. The surfaces V_AnthroEvents and V_AllLeq were thus calculated for valuation of daytime household activities. The equations below were used to calculate each cell of these raster surfaces.
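The HUD-derived level banding cited above can be illustrated with a small scoring function. This is a hedged sketch of the banding only, not the thesis's actual cell equations; the function name value_all_leq is hypothetical, and the 0-9 output range follows the valuation scale used elsewhere in this chapter.

```python
def value_all_leq(leq_db: float) -> int:
    """Score daytime exterior Leq per the HUD bands cited above:
    below 65dB acceptable (9), 65-75dB likely unacceptable under
    normal circumstances (4), above 75dB unacceptable (1)."""
    if leq_db <= 65.0:
        return 9
    if leq_db <= 75.0:
        return 4
    return 1
```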
The values assigned through these equations represented a range of possibilities for valuations, with lower numbers representing less suitable conditions and higher numbers representing more suitable conditions. The summary of these sounds,
levels, and their weights is found in Table 3-5. These valuations are shown on the site sound taxonomy in Figure 3-13. The above equations required calculations of the variables n_AnthroEvents and L_eqAll. These variables were calculated in the following way:

n_AnthroEvents: This variable is the total count of anthropogenic events exceeding 35dB during the hours of 8AM and 5PM to 10PM at each cell. This was calculated by summing the number of events per hour for each of these hours for all events defined as anthropogenic on the sound taxonomy, in all cells where that sound class's sound level parameter exceeded 10dB above that cell's L_eqAll.

L_eqAll: The Leq was calculated for all sounds occurring over a range including the hour of 8AM and the hours from 5PM to 10PM. The level was calculated from the sound taxonomy parameters and sound level maps using the method documented above.

Evaluator Criteria Maps

As described above, each evaluator had a series of equations that used the calculated acoustic parameters as a base for determining valuation of sound. Once the acoustic needs for each evaluator were described as criteria and expressed in terms of the sound class parameters, it was possible for the sound class parameter surfaces to be used to calculate the presence or absence of evaluator criteria. Applying the equations listed above to the base maps resulted in a series of maps for each evaluator that translated the base parameters into acoustic criteria that were relevant for each evaluator. These revalued maps consisted of an array of raster cells containing 0-9 values that indicated a particular parameter's suitability for the evaluator, with higher
values representing greater suitability due to that parameter. These values were a direct translation of physical properties of the site into perceptual values of the site. The criteria maps were then used as input for overall acoustic suitability once relative weights were applied to each parameter.

Evaluator Weighting Maps

To account for the relative importance of individual criteria in the evaluator's total description of acoustic suitability, each criterion was scaled relative to other criteria to express its relative importance in the overall acoustic suitability for each evaluator. This was done by multiplying the evaluator parameter maps by the weighting percentages established in the Evaluator Parameters. The 0-9 integer value present in each cell of each evaluator parameter value map was multiplied by the percentage value for each criterion established in the Evaluator Parameters. The weighted average of all the evaluator weighting maps gave the acoustic suitability map. The total valuation surface was calculated by applying weighting factors to each surface in the Sleep suitability surface set and the Household Events suitability surface set, then summing the resulting surfaces in each set, applying weights to the resulting Sleep surface and the Household Events surface, and summing the weighted Sleep and Household Events surfaces to produce the final suitability surface. The value distribution of the Sleep surface was weighted heavily (70%) to account for the existing research on the effects of sound level and type on sleep, with a minority of the weight (30%) placed on valuation stemming from newer source-based research. The level-based valuation was further broken down to represent the significant impact on sleep of impulsive sounds (50% of total), as well as the anthropogenic Leq threshold established by the World Health Organization (20% of total).
The soundscape-based valuation was further broken down to represent the presence of natural sounds and their beneficial effect (20% of total), with the count of anthropogenic sounds augmenting the negative impact of elevated
anthropogenic Leq (8%) and a minor negative valuation for overly loud natural environments (2%). The value distribution of the Household Events surface was weighted with a majority of the valuation (70%) dependent on the long-established criteria for average daytime sound level, and a minority of the valuation (30%) derived from the suitability arising from newer research on preference for natural sounds. The value distribution of the total Residential suitability matrix was weighted toward sleep (75% of the value) as compared to household events (25% of the value). This accommodated both the greater number of hours ascribed to the 'sleep' portion of the day as compared to the 'household events' portion of the day (nine hours and six hours, respectively), as well as the increased health risks stemming from a lack of sleep compounded with those present from elevated sound levels (Berglund et al., 1999). The Residential acoustic suitability for each raster cell was therefore calculated using the following sequence of equations:

Sleep Valuation (V_Sleep) = 0.20 * V_NatPTA + 0.50 * V_ImpSounds + 0.20 * V_AnthroLeq + 0.08 * V_AnthroEvents + 0.02 * V_NatLeq

Household Events Valuation (V_HEvents) = 0.30 * V_AnthroEvents + 0.70 * V_AllLeq

Total Residential Valuation (V_Residential) = 0.75 * V_Sleep + 0.25 * V_HEvents

Acoustic Suitability

Acoustic suitability was finally determined by summing the weighted parameter value maps for each evaluator. This provided a relative single-number representation of the acoustic suitability at each location. The calculated acoustic suitability map is found in Figure 3-14. Inferences about the appropriateness of an aural environment could be made from the acoustic suitability. In general, the map showed that for the residential user, the 'urban area' was projected to be more suitable than the wilderness areas and areas near the roads, which in turn were less suitable than areas far from roads and not in the wilderness.
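The per-cell weighting sequence above can be sketched as a short function. This is an illustrative sketch with hypothetical parameter names; the thesis computed these values as raster surface math in GIS rather than per-cell Python.

```python
def residential_suitability(v_nat_pta: float, v_imp_sounds: float,
                            v_anthro_leq: float, v_anthro_events_night: float,
                            v_nat_leq: float, v_anthro_events_day: float,
                            v_all_leq: float) -> float:
    """Combine 0-9 criterion valuations for one raster cell using the
    weights given in the text: sleep criteria (75% of total) and
    household-events criteria (25% of total)."""
    v_sleep = (0.20 * v_nat_pta + 0.50 * v_imp_sounds + 0.20 * v_anthro_leq
               + 0.08 * v_anthro_events_night + 0.02 * v_nat_leq)
    v_hevents = 0.30 * v_anthro_events_day + 0.70 * v_all_leq
    return 0.75 * v_sleep + 0.25 * v_hevents
```

Because the weights within each set sum to 1.0, a cell scoring the maximum 9 on every criterion scores 9 overall, preserving the 0-9 valuation scale through the aggregation.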
Review of Process

In summary, an acoustic suitability map based on soundscape criteria was constructed from data and processing methods that perceptually, quantitatively, and qualitatively described an overall aural environment. These maps were produced by taking specific types of site measurements at different locations across a study region, analyzing measurable data to determine important acoustical parameters, developing values for each parameter based on the listener's valuation of sonic quality that could be documented through measurable data and its analysis, and mapping all of those things onto the site. The soundscape approach of decomposing the landscape of sound into discrete elements for evaluation was critical to the functioning of this process. In this way, the physical properties of the sound on the site were translated into listener-experienced valuation of the sound, with a far more robust way of describing and evaluating sound than a single-number criterion method such as Ldn.

Table 3-1. Visitation times analyzed for each measurement location.

Location | Time analyzed
L1 Urban | Morning: 1:19AM-1:44AM
L1 Urban | Afternoon: 12:17PM-12:22PM
L6 Rural | Morning: 2:44AM-2:54AM
L6 Rural | Afternoon: 3:45PM-3:55PM
L7 Wilderness | Morning: 6:40AM-7:00AM
L7 Wilderness | Afternoon: 3:44PM-3:59PM
Table 3-2. List and description of taxonomic sound classes derived from data.

Sound Class | Description | Sounds contained
Transportation Auto | Auto-generated noise assumed to be on roads | 1 Automobile, 2 Automobile acceleration, 3 Semi truck, 4 Car start, 5 Vehicle moving over bump, 6 Unlock beep, 7 Car horn, 8 Keys jingle, 9 Brake squeak, 10 Compression brake, 11 Car door close, 12 Distant traffic, 13 Moped acceleration, 14 Engine idle
Transportation Non-Auto | Non-auto-generated transportation noise | 15 Skateboard
Anthropogenic | Noises emanating from humans | 16 Voices, 17 Whistle, 18 Sneeze, 19 Footstep, 20 Cough, 21 Music
Natural | Non-human animal life sounds | 22 Birds, 23 Insect chirp, 24 Insect buzz, 25 Seagull, 26 Frog
Environmental | Sounds caused by non-animal natural events | 27 Things falling from trees, 28 Wind, 29 Low-frequency wind interference
Activity | Sounds related to activities with sound-generating devices | 30 Hammering, 31 Lawn mower
Mechanical | Sounds caused by machines | 32 Mechanical fan, 33 Mechanical squeak, 34 Chain rattle

Table 3-3. List of physical site layers.

Model element | Layer | Source
Roads | Roads_meter.shp | Washington State GIS
Building areas | URBAN_meter.shp | Washington State GIS
Forested areas | Forest_meter.shp | Washington State GIS; trace over aerial photographs
Contour lines (100m elevation change) | PA_Contour_712_meter.shp to PA_Contour_914_meter.shp | US Geological Survey
Water | Coastclip.shp | Washington State GIS
NPS lodge at Hurricane Ridge (L7) | Located by researcher | Aerial photograph
Table 3-4. Description and percentage contribution to value for sleep sound valuation, 10PM-7AM (75% of residential valuation).

Sound Class | Description | Valuation | Weight
Natural | Percent time audible | >75%: 9; >50%: 8; >25%: 7; else: 5 | 20%
All | Event exceeds 10dB above local Leq | 1 event total: 7; 1 event/hour: 5; 5 events/hour: 1; else: 9 | 50%
Anthro. | Leq exceeds 35dB | Leq > 35: 4; Leq > 45: 1; else: 9 | 20%
Anthro. | Sound events exceed 35dB | 1 event total: 7; 1 event/hour: 5; 5 events/hour: 1; else: 9 | 8%
Natural | Leq exceeds 40dB where PTA > anthro PTA | Leq > 35: 4; Leq > 45: 1; else: 9 | 2%

Table 3-5. Description and calculation method for household activities valuation, 8AM and 5PM-10PM (25% of residential valuation).

Sound Class | Description | Valuation | Weight
Anthro | Level 10dB above ambient | No events: 9; 1 event/day: 7; 1 event/hour: 4; 5 events/hour: 1 | 30%
All | Leq > 65dB | Leq < 65dB: 9; Leq 65-75dB: 4; Leq > 75dB: 1 | 70%

Figure 3-1. Binary yes-no decision-making process for valuing an aural environment based on a single-number rating of a sound pressure level measurement.
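The banded valuation rules in Tables 3-4 and 3-5 can be sketched as threshold functions that translate a parameter value into a 0-9 score. This is an illustrative reading of the tables, not code from the thesis; the function names are hypothetical, and the interpretation of the tables' 'else' rows as the low-exposure case is an assumption.

```python
def value_natural_pta(pta: float) -> int:
    """Score natural percent time audible (0-1) per Table 3-4:
    >75%: 9, >50%: 8, >25%: 7, otherwise 5."""
    if pta > 0.75:
        return 9
    if pta > 0.50:
        return 8
    if pta > 0.25:
        return 7
    return 5

def value_event_rate(events_per_hour: float, total_events: int) -> int:
    """Score event counts per the event rows of Table 3-4: best (9)
    when no events occur, worst (1) at five or more events per hour."""
    if total_events == 0:
        return 9          # assumed reading of the table's 'else' row
    if events_per_hour >= 5:
        return 1
    if events_per_hour >= 1:
        return 5
    return 7              # at least one event over the whole period
```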
Figure 3-2. Binary yes-no decision-making process for valuing an aural environment based on a single-number rating of a sound level map.

Figure 3-3. Proposed prototype method of valuing an aural environment, which uses soundscape analysis and mapping techniques to produce a range of ratings for evaluators.
Figure 3-4. Google Earth (Google Corporation, 2010) 3-D image of the studied area. The yellow line is approximately ten miles long; 'L1' corresponds to the 'urban' typology, 'L6' to the 'rural' typology, and 'L7' to the 'wilderness' typology.
Figure 3-5. Map showing typologies, along with the measurement location for each typology (marked with an 'X').
Figure 3-6. Photograph of measurement apparatus. (Photo courtesy of Adam Bettcher.)

Figure 3-7. Frequency response of the Zoom H2 Handy Portable Stereo Recorder, according to manufacturer data. (Zoom Corporation, 2010)
Figure 3-8. A screenshot from the sound tagging program. This program allowed the user to define the frequency and time domains of individual sound events, assign a sound class to each individual event, and then compute statistical information about level, occurrence, and percentage time audible for each sound class.

Figure 3-9. A graphical representation of the splining process.
Figure 3-10. Sound taxonomy of the study area derived from analysis of the six measurement periods. The taxonomy shows linkages between the sound and object where applicable, and also shows linkages between types of objects based on location.
Figure 3-11. Chart of how Americans spent their day in 2008. (Carter et al., 2009)
Figure 3-12. Valuations of sounds for a residential user during the 'sleep' period.
Figure 3-13. Valuations of sounds for a residential user during the 'household activities' period.
Figure 3-14. Acoustic suitability map for a residential evaluator.
CHAPTER 4
CONCLUSIONS

The acoustic suitability map produced using the soundscape method created a much different range of data and types of evaluating criteria than level-based maps of sound energy. This increased range of descriptors was used to infer level-based valuations of acoustic suitability. The measurement and data processing methods described in the 'Results' section were not considered an ultimate implementation of the system, but instead a specific implementation with limited resources and time, offered as a sample way of processing the data to produce the map.

Discussion of Output

As mentioned previously, an acoustic suitability map was constructed for a Residential evaluator. The Residential acoustic suitability map showed higher residential acoustic suitability in the urban areas, lower acoustic suitability in rural areas, and some acoustic suitability in wilderness areas and places near highways and roads because of the presence of more sounds in these locations. This map was compared to existing research on the expected suitability for residential use for each typology. In general, 'urban' areas will have greater numbers and durations of loud, man-made sounds than natural areas (Berglund et al., 1999). Natural areas with fewer human uses, as well as areas far from human uses, will have a greater number of natural sounds, fewer anthropogenic sounds, and less overall duration or audibility over time of man-made sounds (Hempton, 1999). Rural areas were expected to exist somewhere in between these two types of areas, as the density of human activities was observed to be lower than in urban areas but greater than in wilderness areas. Thus, residential suitability was expected to be higher in rural areas and wilderness areas than
urban areas due to significant valuation of low levels of sound at night combined with a component that included some source-based valuation of the sound. The Residential map appeared to correspond somewhat with existing soundscape research. Suitability for residential uses tended to increase with natural sounds and views (Li et al., 2012); therefore, the wilderness and rural areas were theoretically more suitable for residential use than urban areas. In this calculation, the urban area was more suitable than the wilderness area and areas near the road, which in turn were more suitable than the rural area. One possible cause for this was the increase in wind noise in the natural areas compared to rural and urban areas, which tended to raise overall sound levels. As wind noise may have served as a masking sound, it was likely that an adjustment to the weighting criteria would have accounted for the greater valuation of steady masking sounds compared to other sounds with levels over 40dBA. The map possessed sharp differences in levels of suitability that roughly corresponded to the typology boundaries. This suggested a sharp bounding in the behavior and spread of sound. As sound tended to propagate outdoors in a more gradated manner, even with barriers in place (ISO, 1996), it was assumed that these sharp boundaries were artifacts of the limited number of sampling points in the study. Additional measurement points may have revealed a more gradual shift in suitability between typologies, although even maps with sharp differences in suitability were likely usable for some planning purposes.
Possible Improvements to Process

Data Collection and Analysis

The data collection and analysis procedure successfully took measured site conditions, analyzed them to describe the behavior of sound, and produced maps of this behavior. The data that was collected allowed a greater range of descriptors of sound than traditional single-number weighted measurements. This greater range of descriptors allowed for a more nuanced way of evaluating soundscapes, which theoretically allowed for a higher rate of accuracy and correlation between acoustic stimulus and evaluator response. Possible refinements include a greater number of measurement locations, technological advancement in automatic source separation, greater variability in determining the shape of sound event tags, additional parameters such as statistical information or spectral sound levels, and the collection of increasingly accurate site, weather, and sound data.

Spatial Data

The processing procedure for the sound data resulted in both physical properties (levels) and behavioral properties (probability of occurrence per time of day and length of event) for each type of sound. A series of mapping procedures were used to convert these parameters into spatial parameters by noting where each sound occurred (the physical properties) and how the sounds changed over space and time (the behavioral properties). GIS was used to map the behavioral properties of sound, and specialized noise mapping software was used to compute the physical spread of sound from mapped source locations. The separation of these parameters into different underlying maps was critical in allowing different evaluators to evaluate the same sonic
environment with different criteria, so in this way the process of computing the change in parameters over space was successful. The processing procedure for the data was heavily simplified for the purposes of this thesis, and so several refinements could be applied. An increased number of data collection points would have allowed for a more sophisticated method of mapping the change of spatially varying and temporally varying parameters across the site. A more refined predictive method could have been used to compute sound, which would have required additional refinements to the input for physical site properties. Finally, a decrease in cell size would have allowed for greater precision in providing values, with the tradeoff of longer computation time. Together, these improvements may have resulted in a more accurate, more natural method of converting the measured parameters into maps.

Evaluator Data

The evaluator methods were primarily derived from educated inferences made after a literature review of the effects of sound on people, such as those compiled by the World Health Organization and the US Environmental Protection Agency. While it was beyond the scope of this thesis to produce evaluator data based on survey conditions, a survey of evaluators is one possible means of refinement. Additionally, analysis of variables to reduce extraneous or ineffective variables had been conducted in other land use surveys (Carr and Zwick, 2007); future processes could use similar methods for individual evaluators when developing their suitability maps.

Review of Conclusions

The translation of the site's sound environment into listener values was a linear process from data collection to preference results, based on soundscape-based source
and qualitative data in addition to the physical projection of sound. The steps of this process involved the collection of data, the establishment of behavioral and physical parameters that described the behavior of sound on the site, the computation of these parameters from available data to describe sounds and their behavior, the mapping of these data over a region, and the creation of the final preference maps based on these mapped parameters. Additional future improvements to the system included improvements in data collection, refinement and automation in data processing, the use of additional data measuring points and a greater length of time for data collection, the addition of a survey component, and validation of the method through site measurements.

Final Review and Summary

A soundscape approach to noise mapping can provide richer descriptors of site characteristics. By separating individual sounds out of recorded background data, determining simple parameters to describe their characteristics and behavior, and establishing criteria for people to analyze these sounds, a layer of qualitative information (as well as quantitative descriptors of qualitative information) can be added to sound maps to produce greater accuracy in correlating aural environment with preference. This particular measurement method outlined a possible strategy to incorporate these qualities into a systematic measurement process by synthesizing and enhancing existing techniques. Additional studies using this method, as well as corroboration with additional field data and evaluator preferences, are necessary to determine the validity of this method in the real world. Important factors when building the quantitative model of the soundscape included the domains in which each model's descriptor varied. In this thesis, some
parameters' variability was analyzed independent of time and space, while others had variability described in the time domain or in the spatial domain. Determining how to document the variance of selected parameters was likely to be a fundamental question that will impact the accuracy of the resulting model of sound's performance across the site. Baseline assumptions about how parameters may vary were identified as an area of further study and refinement with further iterations of this procedure. Further study and caution were necessary in the process of translating qualitative soundscape descriptors into quantitative valuations of parameters. The relative weights of various descriptive criteria required further study, as did combinations of various descriptors or perhaps the criteria themselves. The process of constructing a soundscape model from measured data produced sufficient documentation of the underlying calculations to identify areas of further refinement. The mapping portion of this process produced a series of data that deconstructed acoustic data measured across the study geographic area into its sonic components located in space. The evaluation portion of this process produced a series of criteria-based maps that reconstructed these sonic components into listener perception of sound. Through the documentation created by these processes, further areas of refinement in the processes of this breakdown and reconstruction were identified to enhance understanding of acoustical suitability and preference.
APPENDIX A
PROCESSED SOUND DATA

Figure A-1. Location 1 tag data, afternoon.
Table A-1. Visitation times for Location 1, Afternoon.

Location: L1 (Urban, town downtown)
Date: 08/06/10
Time Period: Noon
Start time: 12:17:00 PM
End time: 12:23:00 PM
Cal. file: AU1_0409
Wave file: STE_000_2
Ambient sounds: Distant traffic

Table A-2. Sound events for Location 1, Afternoon.

Time | Event
12:17:00 PM | Count off
12:17:42 PM | One van passes
12:17:49 PM | Basket @ boutique store dragged
12:18:26 PM | Man walks by, stops
12:18:50 PM | Short conversation
12:19:09 PM | Keys, cyclist passes
12:19:19 PM | Seagull
12:19:26 PM | Seven cars and one motorcycle pass
12:19:42 PM | One car passes
12:19:46 PM | Pedestrian footsteps walking
12:20:01 PM | Motorcycle idles
12:21:25 PM | People walk by and stop at store (child, woman, ten maze)
12:21:35 PM | Pedestrian in flip flops
12:22:09 PM | Twelve shuffly steps
12:22:21 PM | Loud car with music passes
12:22:36 PM | Man walks by with cane and keys
12:22:56 PM | Relative quiet; 'beep' audible

Figure A-2. Location 1 tag data, morning.

Table A-3. Visitation times for Location 1, Morning.
Location: L1 (Urban, town downtown)
Date: 08/07/10
Time Period: Midnight to wee hours (~3 AM)
Start time: 01:19:00 AM
End time: 12:23:00 PM
Cal. file: AU2_04_15
Wave file: STE_006_2
Ambient sounds: Mechanical drone down arcade; another very tonal sound of unspecified location; distant voices

Table A-4. Sound events for Location 1, Morning.
01:19:00 AM  Start
01:21:46 AM  Bird, or voice (apartment above or possibly down street)
01:22:40 AM  One van passes
01:23:31 AM  Distant chirping night birds
01:23:58 AM  Sneeze 200' to east
01:24:22 AM  Footsteps
01:24:29 AM  Voices
01:25:09 AM  Westbound cars heard through arcade
01:25:13 AM  Distant car noise
01:25:18 AM  High-pitched brake-like noise
01:26:15 AM  Faint voice through arcade
01:26:45 AM  Truck turns onto 101 from waterfront two blocks away
01:27:07 AM  Large van and taxi pass
01:27:56 AM  Dragging sound to east
01:28:12 AM  SUV passes
01:28:19 AM  Pickup truck passes
01:28:30 AM  Police car passes; pedestrian with plastic bag walks on other side of street
01:28:49 AM  Cyclist passes
01:28:56 AM  Car passes with thumpy bass
01:29:02 AM  Couple passes on foot
01:29:20 AM  Car passes
01:29:27 AM  Loud music, then car passes
01:29:55 AM  One car passes
01:30:13 AM  One motorcycle passes
01:30:50 AM  Rattly taxi passes
01:31:14 AM  Rattly dump truck passes
01:31:50 AM  Barely perceptible distant bass sound, plane-like or truck-like
01:32:17 AM  Dump truck accelerates up hill to east
01:33:16 AM  Loud car noise heard through arcade
01:33:42 AM  Shuffly-sounding footsteps
01:33:54 AM  Distant amplified voice, indeterminate location
01:34:15 AM  Moped-like rev
01:34:16 AM  Building 'click'
01:35:10 AM  Cars rev one block east
01:35:40 AM  One car revs one block east
01:36:14 AM  Distant quack or amplified voice
01:36:35 AM  Honk
01:36:44 AM  Chainy sound, far west
01:36:54 AM  Distant traffic passes to west
01:37:11 AM  One car revs left one block to east
01:37:16 AM  Seagulls
01:37:32 AM  Semi/dump truck accelerates
01:37:56 AM  Semi/dump truck passes
01:38:11 AM  One car passes
01:38:23 AM  Distant wind-chime sound
01:38:34 AM  Truck accelerates eastbound on US 101

Figure A-3. Location 6 tag data, afternoon.

Table A-5. Visitation times for Location 6, Afternoon.
Location: L6 (Countryside, houses +5 ac.)
Date: 08/10/10
Time Period: Mid-afternoon
Start time: 03:45:00 PM
End time: 03:55:00 PM
Cal. file: AU1_0421
Wave file: STE_011
Ambient sounds: Distant traffic, weed whacker, construction noise, tweeting birds

Table A-6. Events for Location 6, Afternoon.
03:45:14 PM  Wind in broadleaf trees
03:45:27 PM  Low-frequency traffic noise from south
03:46:15 PM  Ambient
03:46:40 PM  Weed whacker stops
03:47:22 PM  Person putting weed whacker away
03:48:20 PM  Several different kinds of birds
03:49:00 PM  Distant traffic, some high-altitude plane noise
03:50:00 PM  Impact sources from maintenance people at nearby house
03:52:50 PM  Car passes
03:52:59 PM  Stomach rumble (ignore)
03:53:19 PM  Mail collection (boxes open, close)
03:54:10 PM  Car turns down gravel road

Figure A-4. Location 6 tag data, morning.

Table A-7. Visitation times for Location 6, Morning.
Location: L6 (Countryside, houses +5 ac.)
Date: 08/11/10
Time Period: Wee hours of morning
Start time: 02:44:00 AM
End time: 02:54:00 AM
Cal. file: AU2_04_30
Wave file: STE_020
Ambient sounds: Mechanical drone from house to west; flowing stream to east; distant cars, occasionally

Table A-8. Events for Location 6, Morning.
02:44:25 AM  I enter my car
02:45:45 AM  Louder ebb and flow of distant traffic
02:46:26 AM  Rev of motorcycle from Port Angeles below
02:48:52 AM  Very distant semi truck revs to west
02:50:15 AM  Faint chirp of birds

Figure A-5. Location 7 tag data, afternoon.

Table A-9. Visitation times for Location 7, Afternoon.
Location: L7-1 (National park front-country visitor center parking lot)
Date: 08/14/10
Time Period: Afternoon
Start time: 03:44:00 PM
End time: 03:59:00 PM
Cal. file: AU2_04_39
Wave file: STE_002_3
Ambient sounds: Autos, voices, doors closing, honking

Table A-10. Events for Location 7, Afternoon.
03:46:40 PM  Deer, 150' to east
03:47:52 PM  Visitor center generator audible
03:48:03 PM  Semi truck pulls into parking lot, makes delivery
03:49:20 PM  Prominent human voice
03:51:00 PM  Semi truck leaves parking lot
03:51:31 PM  Car alarm beeping @ 50'
03:53:18 PM  Insect sounds in grass
03:54:35 PM  Footsteps
03:55:02 PM  Prominent generator noise
03:57:54 PM  Footsteps
03:58:33 PM  Prominent generator noise
03:58:52 PM  Wind in grasses, generator

Figure A-6. Location 7 tag data, morning.

Table A-11. Visitation times for Location 7, Morning.
Location: L7-1 (National park front-country visitor center parking lot)
Date: 08/15/10
Time Period: Early morning
Start time: 06:40:00 AM
End time: 07:05:00 AM
Cal. file: AU2_0243
Wave file: STE_006_3

Table A-12. Events for Location 7, Morning.
06:42:00 AM  Car pulls up and turns down dirt road to L8
06:43:55 AM  Pickup truck pulls up and turns down dirt road to L8
06:45:14 AM  Distant voices
06:45:23 AM  Cars heard from over ridge (101)
06:45:34 AM  Roar from south
06:46:17 AM  Distinctive car sound from over ridge; insect buzzing
06:46:30 AM  Birds
06:47:30 AM  Traffic over ridge to north (101)
06:48:13 AM  Car pulls out
06:48:53 AM  People being active in cars (voices)
06:49:22 AM  Departing car pulls around curve
06:49:57 AM  Car leaves parking lot
06:50:20 AM  Music coming from car near visitor center; roar from mountains (note time delay between seeing and hearing event)
06:51:53 AM  Last car is leaving visitor center; is stopped
06:52:31 AM  Ranger car pulls into parking lot
06:53:00 AM  Second ranger pickup truck pulls into parking lot
06:53:43 AM  A few seconds of almost all sound halting
06:54:24 AM  Ranger unlocking visitor center; birds; car comes around bend
06:55:19 AM  Car moves down dirt road to L8

Figure A-7. Data sheet for Sound 1, Automobile.
Figure A-8. Data sheet for Sound 2, Automobile acceleration.
Figure A-9. Data sheet for Sound 3, Semi truck.
Figure A-10. Data sheet for Sound 4, Car start.
Figure A-11. Data sheet for Sound 5, Vehicle moving over bump.
Figure A-12. Data sheet for Sound 6, Unlock beep.
Figure A-13. Data sheet for Sound 7, Car horn.
Figure A-14. Data sheet for Sound 8, Keys jingle.
Figure A-15. Data sheet for Sound 9, Brake squeak.
Figure A-16. Data sheet for Sound 10, Compression brake.
Figure A-17. Data sheet for Sound 11, Car door close.
Figure A-18. Data sheet for Sound 12, Distant traffic.
Figure A-19. Data sheet for Sound 13, Moped acceleration.
Figure A-20. Data sheet for Sound 14, Engine idle.
Figure A-21. Data sheet for Sound 15, Skateboard.
Figure A-22. Data sheet for Sound 16, Voices.
Figure A-23. Data sheet for Sound 17, Whistle.
Figure A-24. Data sheet for Sound 18, Sneeze.
Figure A-25. Data sheet for Sound 19, Footstep.
Figure A-26. Data sheet for Sound 20, Cough.
Figure A-27. Data sheet for Sound 21, Music.
Figure A-28. Data sheet for Sound 22, Birds.
Figure A-29. Data sheet for Sound 23, Insect chirp.
Figure A-30. Data sheet for Sound 24, Insect buzz.
Figure A-31. Data sheet for Sound 25, Seagull.
Figure A-32. Data sheet for Sound 26, Frog.
Figure A-33. Data sheet for Sound 27, Things falling from trees.
Figure A-34. Data sheet for Sound 28, Wind.
Figure A-35. Data sheet for Sound 30, Hammering.
Figure A-36. Data sheet for Sound 31, Lawn mower.
Figure A-37. Data sheet for Sound 32, Mechanical fan.
Figure A-38. Data sheet for Sound 33, Mechanical squeak.
Figure A-39. Data sheet for Sound 34, Chain rattle.

APPENDIX B
CALCULATED SPATIAL DATA

Figure B-1. Road input used for predictive sound map model (WSDOT, 1996; WSDOT, 2003).

Figure B-2. Building areas input used for predictive sound map model.
Figure B-3. Forested input used for predictive sound map model.
Figure B-4. Contour areas input used for predictive sound map model (UW Geomorphological Research Group, 2000).
Figure B-5. Water areas used for predictive sound map model (WSDOT, 2004).
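The predictive sound pressure level maps that follow estimate receiver levels from source levels and the spatial inputs above. As a hedged sketch of only the geometric-divergence term (the full models cited in the references, such as ISO 9613-2 and HARMONOISE, add ground effect, screening by buildings and terrain, and air absorption on top of this):

```python
import math

def receiver_level(source_lw_db, distance_m):
    """Estimate received SPL from a point source via geometric divergence only.

    Uses the spherical-spreading term A_div = 20*log10(d) + 11 (d in metres,
    as in ISO 9613-2); real predictive models add ground, barrier, and air
    absorption corrections to this baseline.
    """
    return source_lw_db - (20 * math.log10(distance_m) + 11)

# A 100 dB sound power source: doubling distance drops the level by ~6 dB.
print(round(receiver_level(100, 10), 1))  # 69.0
print(round(receiver_level(100, 20), 1))  # 63.0
```

Applied per grid cell and per source, this kind of calculation is what produces a distinct level map for each of the 33 catalogued sounds.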

Figure B-6. Predictive sound pressure level maps for Sound 1, Automobile.
Figure B-7. Predictive sound pressure level maps for Sound 2, Automobile acceleration.
Figure B-8. Predictive sound pressure level maps for Sound 3, Semi truck.
Figure B-9. Predictive sound pressure level maps for Sound 4, Car start.
Figure B-10. Predictive sound pressure level maps for Sound 5, Vehicle moving over bump.
Figure B-11. Predictive sound pressure level maps for Sound 6, Unlock beep.
Figure B-12. Predictive sound pressure level maps for Sound 7, Car horn.
Figure B-13. Predictive sound pressure level maps for Sound 8, Keys jingle.
Figure B-14. Predictive sound pressure level maps for Sound 9, Brake squeak.
Figure B-15. Predictive sound pressure level maps for Sound 10, Compression brake.
Figure B-16. Predictive sound pressure level maps for Sound 11, Car door close.
Figure B-17. Predictive sound pressure level maps for Sound 12, Distant traffic.
Figure B-18. Predictive sound pressure level maps for Sound 13, Moped acceleration.
Figure B-19. Predictive sound pressure level maps for Sound 14, Engine idle.
Figure B-20. Predictive sound pressure level maps for Sound 15, Skateboard.
Figure B-21. Predictive sound pressure level maps for Sound 16, Voices.
Figure B-22. Predictive sound pressure level maps for Sound 17, Whistle.
Figure B-23. Predictive sound pressure level maps for Sound 18, Sneeze.
Figure B-24. Predictive sound pressure level maps for Sound 19, Footstep.
Figure B-25. Predictive sound pressure level maps for Sound 20, Cough.
Figure B-26. Predictive sound pressure level maps for Sound 21, Music.
Figure B-27. Predictive sound pressure level maps for Sound 22, Birds.
Figure B-28. Predictive sound pressure level maps for Sound 23, Insect chirp.
Figure B-29. Predictive sound pressure level maps for Sound 24, Insect buzz.
Figure B-30. Predictive sound pressure level maps for Sound 25, Seagull.
Figure B-31. Predictive sound pressure level maps for Sound 26, Frog.
Figure B-32. Predictive sound pressure level maps for Sound 27, Things falling from trees.
Figure B-33. Predictive sound pressure level maps for Sound 28, Wind.
Figure B-34. Predictive sound pressure level maps for Sound 30, Hammering.
Figure B-35. Predictive sound pressure level maps for Sound 31, Lawn mower.
Figure B-36. Predictive sound pressure level maps for Sound 32, Mechanical fan.
Figure B-37. Predictive sound pressure level maps for Sound 33, Mechanical squeak.
Figure B-38. Predictive sound pressure level maps for Sound 34, Chain rattle.

APPENDIX C
SUITABILITY CALCULATIONS

Figure C-1. Residential acoustic suitability map.

Figure C-2. Summary of day-based evaluation.
Figure C-3. Leq calculation maps per sound, day. A: Automobile. B: Automobile acceleration. C: Semi truck. D: Car start.
Figure C-4. Leq calculation maps per sound, day. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Car start.
Figure C-5. Night Leq input maps for calculation. A: Brake squeak. B: Compression brake. C: Car door close. D: Distant traffic.
Figure C-6. Night Leq input maps for calculation. A: Moped acceleration. B: Engine idle. C: Skateboard. D: Voices.
Figure C-7. Night Leq input maps for calculation. A: Brake squeak. B: Compression brake. C: Car door close. D: Distant traffic.
Figure C-8. Night Leq input maps for calculation. A: Whistle. B: Sneeze. C: Footstep. D: Cough.
Figure C-9. Night Leq input maps for calculation. A: Music. B: Birds. C: Lawn mower. D: Mechanical fan.
Figure C-10. Night Leq input maps for calculation. A: Mechanical fan. B: Mechanical squeak. C: Chain rattle. D: Estimated Leq, all anthropogenic sounds (night).
Figure C-11. Night Leq input maps for calculation. A: Insect chirp. B: Insect buzz. C: Seagull. D: Frog.
Figure C-12. Night Leq input maps for calculation. Total night Leq, all natural sounds.
Figure C-13. Anthropogenic Leq levels over 35 dB.
Figure C-14. Count of natural events exceeding 10 dB above the overall Leq during night hours.
Figure C-15. Count of anthropogenic sounds exceeding 10 dB above night Leq. A: Semi truck. B: Semi truck. C: Car start. D: Car start.
Figure C-16. Count of anthropogenic sounds exceeding 10 dB above night Leq. B: Compression brake. C: Car door close. D: Distant traffic.
Figure C-17. Count of anthropogenic sounds exceeding 10 dB above night Leq. A: Brake squeak. B: Compression brake. C: Car door close. D: Distant traffic.
Figure C-18. Count of sounds exceeding 10 dB above night Leq. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Car start.
Figure C-19. Count of sounds exceeding 10 dB above night Leq. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Car start.
Figure C-20. Count of sounds exceeding 10 dB above night Leq. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Car start.
Figure C-21. Count of sounds exceeding 10 dB above night Leq. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Car start.
Figure C-22. Count of sounds exceeding 10 dB above night Leq. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Car start.
Figure C-23. Count of sounds exceeding 10 dB above night Leq. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Car start.
Figure C-24. Count of sounds exceeding 10 dB above night Leq. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Car start.
Figure C-25. Count of anthropogenic sounds exceeding 10 dB above night Leq. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Car start.
Figure C-26. Count of sounds exceeding 10 dB above night Leq. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Car start.
Figure C-27. Leq calculation maps per sound, day. A: Automobile. B: Automobile acceleration. C: Semi truck. D: Car start.
Figure C-28. Leq calculation maps per sound, day. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Keys jingle.
Figure C-29. Leq calculation maps per sound, day. A: Brake squeak. B: Compression brake. C: Car door close. D: Distant traffic.
Figure C-30. Leq calculation maps per sound, day. A: Moped acceleration. B: Engine idle. C: Skateboard. D: Voices.
Figure C-31. Leq calculation maps per sound, day. A: Whistle. B: Sneeze. C: Footstep. D: Cough.
Figure C-32. Leq calculation maps per sound, day. A: Music. B: Birds. C: Insect chirp. D: Insect buzz.
Figure C-33. Leq calculation maps per sound, day. A: Seagull. B: Frog. C: Things falling from trees. D: Wind.
Figure C-34. Leq calculation maps per sound, day. A: Low-frequency wind interference. B: Hammering. C: Lawn mower. D: Mechanical fan.
Figure C-35. Leq calculation maps per sound, day. A: Mechanical fan. B: Mechanical squeak. C: Chain rattle. D: Estimated Leq, natural sounds (night).
Figure C-36. Number of events 10 dB or more above Leq, night. A: Automobile. B: Automobile acceleration. C: Semi truck. D: Car start.
Figure C-37. Number of events 10 dB or more above Leq, day. A: Vehicle moving over bump. B: Unlock beep. C: Car horn. D: Keys jingle.
Figure C-38. Number of events 10 dB or more above Leq, day. A: Brake squeak. B: Compression brake. C: Car door close. D: Distant traffic.
Figure C-39. Number of events 10 dB or more above Leq, night. A: Moped acceleration. B: Engine idle. C: Skateboard. D: Voices.
Figure C-40. Number of events 10 dB or more above Leq, day. A: Whistle. B: Sneeze. C: Footstep. D: Cough.
Figure C-41. Number of events 10 dB or more above Leq, day. A: Music. B: Birds. C: Insect chirp. D: Insect buzz.
Figure C-42. Number of events 10 dB or more above Leq, day. A: Seagull. B: Frog. C: Things falling from trees. D: Wind.
Figure C-43. Number of events 10 dB or more above Leq, day. A: Low-frequency wind interference. B: Hammering. C: Lawn mower. D: Mechanical fan.
Figure C-44. Number of events 10 dB or more above Leq, day. A: Mechanical fan. B: Mechanical squeak. C: Chain rattle. D: Total number of events 10 dB or more above Leq.
Figure C-45. Areas where day Leq exceeds 65 dB.
Figure C-46. Weighting due to the percent time audible of desirable (<40 dB) natural sound.
Figure C-47. Weighting due to the number of anthropogenic sounds exceeding 10 dB over night Leq.
Figure C-48. Weighting due to the percent time audible of natural sounds.
Figure C-49. Weighting due to the count of natural sound events exceeding 40 dB.
Figure C-50. Weighting due to the Leq of night sounds exceeding 35 dB.
Figure C-51. Weighting due to the number of anthropogenic sounds exceeding 10 dB over day Leq.
Figure C-52. Weighting due to Leq exceeding 65 dB.
Figure C-53. Summary of night-based evaluation.
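The weighting maps above are combined into a single suitability surface per evaluation. As a hedged illustration of the weighted-overlay idea used in LUCIS-style suitability modeling (the scores and weights below are hypothetical, not the thesis's values):

```python
def suitability(criteria_scores, weights):
    """Weighted overlay: combine per-criterion scores (e.g. reclassified
    1-9 raster values) into a single suitability value at one map cell.

    Applied cell by cell across each weighting map, this yields a
    suitability surface; all numbers here are illustrative only.
    """
    assert len(criteria_scores) == len(weights)
    return sum(s * w for s, w in zip(criteria_scores, weights)) / sum(weights)

# Hypothetical cell: quiet at night (8), few loud events (7), audible nature (9)
print(suitability([8, 7, 9], [0.5, 0.25, 0.25]))  # 8.0
```

Changing the weight vector is what lets the same parameter database describe substantially different preference maps for different listeners.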

LIST OF REFERENCES

ANSI S1.11 (2004). Specification for Octave, Half-Octave, and Third-Octave Band Filter Sets.

ANSI S1.4 (2006). Specification for Sound Level Meters.

Attenborough, K., Taherzadeh, S., Bass, H.E., Di, X., Raspet, R., Becker, G.R., Gundesen, A., Chrestman, A., Daigle, G.A., L'Esperance, A., Gabillet, Y., Gilbert, K.E., Li, Y.L., White, M.J., Naz, P., Noble, J.M., and van Hoof, H.A.J.M. (1995). "Benchmark cases for outdoor sound propagation models," J. Acoust. Soc. Amer. 97, 173-191.

Axelsson, O., Nilsson, M., and Berglund, B. (2010). "A principal components model of soundscape perception," J. Acoust. Soc. Amer. 128, 2836-2846.

Berglund, B., Lindvall, T., and Schwela, D. (editors) (1999). Guidelines for Community Noise (World Health Organization, London).

Blesser, B., and Salter, L.R. (2007). Spaces Speak, Are You Listening? Experiencing Aural Architecture (MIT Press, Cambridge, MA).

Carr, M.H., and Zwick, P.D. (2007). Smart Land-Use Analysis: The LUCIS Model (ESRI Press, Redlands, CA).

Carter, S., Cox, A., Quealy, K., and Schoenfeld, A. (2009). July 31, 2009.

Cox, A., Carter, S., Quealy, K., and Schoenfeld, A. (2009). July 31, 2009.

Drullman, R. (1995). "Temporal envelope and fine structure cues for speech intelligibility," J. Acoust. Soc. Amer. 97, 585-592.

Dufour, P.A. (1980). Effects of Noise on Wildlife and Other Animals: Review of Research Since 1971 (US Environmental Protection Agency Office of Noise Abatement and Control, Washington, DC).

Ellis, D. (2004). Spectrograms: Constant-Q (log-frequency) and conventional (linear) [Computer program] (http://www.ee.columbia.edu/ln/labrosa/matlab/sgram/).

Feiberg, A., and Genuiet, K. (2011). "…: Need for (automatic) source …," 161st meeting of the Acoustical Society of America (Seattle, WA).

Fletcher, J.L., and Harvey, M.J. (1971). Effects of Noise on Wildlife and Other Animals (US Environmental Protection Agency Office of Noise Abatement and Control, Washington, DC).

Google Corporation (2010). Google Earth [Version 5.2, computer software] (Mountain View, CA).

HARMONOISE (2002). Reference Model Task 2.4, Choice of Basic Sound Propagation Models, Deliverable 14 of the Harmonoise project (Harmonoise, European Union).

HARMONOISE (2004). Reference Model, Description of the Reference Model, Deliverable 16 of the Harmonoise project (Harmonoise, European Union).

HARMONOISE (2005). Final Technical Report, final public version (Harmonoise, European Union).

Hempton, G. (2009). One Square Inch of Silence (Free Press, New York, NY).

Hong, J.Y., Lee, P.J., and Jeon, J.Y. (2011). Paper presented at the 161st meeting of the Acoustical Society of America (Seattle, WA).

ISO 9613-2 (1996). Acoustics – Attenuation of sound during propagation outdoors – Part 2: General method of calculation.

Janssen, S.A., Vos, H., Eisses, A., and Pedersen, E. (2011). "A comparison between exposure-response relationships for wind turbine annoyance and annoyance due to other noise sources," J. Acoust. Soc. Amer. 130, 3746-3753.

Jeon, J.Y., Lee, P.J., Hong, J.Y., and Cabrera, D. (2011). "Non-auditory factors affecting urban soundscape evaluation," J. Acoust. Soc. Amer. 130, 3761-3770.

Kang, J. (2005). "Numerical modeling of the sound fields in urban squares," J. Acoust. Soc. Amer. 117, 3695-3706.

Kang, M., and Servigne, S. (1999). "Animated cartography for urban soundscape information," GIS '99: Proceedings of the 7th ACM International Symposium on Advances in Geographic Information Systems, 116-121 (Association for Computing Machinery, New York, NY).

Kroesen, M., Molin, E.J., and van Wee, B. (2008). "Testing a theory of aircraft noise annoyance: A structural equation analysis," J. Acoust. Soc. Amer. 123, 4250-4260.

Levitt, H., and Webster, J. (1998). In Handbook of Acoustical Measurements and Noise Control, 3rd ed., edited by C. Harris (Acoustical Society of America, Woodbury, NY).

Li, H.N., Chau, C.K., Tse, M.S., and Tang, S.K. (2012). "On the study of the effects of sea views, greenery views, and personal characteristics on noise annoyance perception at homes," J. Acoust. Soc. Amer. 131, 2131-2140.

Manning, C.J. (1981). The Propagation of Noise from Petroleum and Petrochemical Complexes to Neighboring Communities (CONCAWE, Den Haag).

Marsh, K.J. (1982). "The CONCAWE model for calculating the propagation of noise from open-air industrial plants," Applied Acoustics 15, 411-428.

Miller, J.D. (1971). Effects of Noise on People (US Environmental Protection Agency Office of Noise Abatement and Control, Washington, DC).

Nilsson, M.E. (2007). In Proceedings of Inter-Noise 2007, Institute of Noise Control Engineering (Istanbul, Turkey).

Piercy, J.E., and Daigle, G.A. (1998). In Handbook of Acoustical Measurements and Noise Control, 3rd ed., edited by C. Harris (Acoustical Society of America, Woodbury, NY).

Pollack, C.P., and Fay, T.H. (editors) (1991). Noise and Health (New York Academy of Medicine, New York, NY).

Raney, J.P., and Cawthron, J.M. (1998). In Handbook of Acoustical Measurements and Noise Control, 3rd ed., edited by C. Harris (Acoustical Society of America, Woodbury, NY).

Rapoza, A., MacDonald, J., Hastings, A., Scarpone, C., Lee, C., and Fleming, G. (2008). Ambient Computation Methods in Support of the National Parks Air Tour Management Act (US Department of Transportation, Cambridge, MA).

Schafer, R.M. (1977). The Soundscape: Our Sonic Environment and the Tuning of the World (Destiny Books, USA).

Schultz, T.J. (1978). "Synthesis of social surveys on noise annoyance," J. Acoust. Soc. Amer. 64, 377-405.

Servigne, S., Laurini, R., Kang, M., and Li, K.J. (1999). "First specifications of an information system for urban soundscape," ICMCS '99: Proceedings of the IEEE International Conference on Multimedia Computing and Systems, 2, 262-266 (IEEE Computer Society, Washington, DC).

Stack, D.W., Peter, N., Manning, R.E., and Fristrup, K.M. (2011). "Reducing visitor noise levels at Muir Woods National Monument using experimental management," J. Acoust. Soc. Amer. 129, 1375-1380.

Thompson, E. (2002). The Soundscape of Modernity: Architectural Acoustics and the Culture of Listening in America, 1900-1933 (The MIT Press, Cambridge, MA).

Truax, B. (1984). Acoustic Communication (Ablex Publishing Corporation, Norwood, NJ).

US Bureau of Labor Statistics (BLS) (2013). American Time Use Survey User's Guide: Understanding ATUS 2003 to 2011 (Washington, DC).

US Department of Housing and Urban Development (HUD) (1985). The Noise Guidebook: A Reference Document for Implementing the Department of Housing and Urban Development's Noise Policy (HUD, Washington, DC).

US Department of Transportation (1998). FHWA Traffic Noise Model (FHWA TNM) Technical Manual (US Department of Transportation Research and Special Programs Administration, Cambridge, MA).

US National Park Service (NPS) (2000). Director's Order #47: Soundscape Preservation and Noise Management (Washington, DC).

NPS (2006). National Park Service Management Policies (Washington, DC).

UW Geomorphological Research Group (2000). Washington 10-meter DEMs [GIS raster file] (Seattle, WA).

Venot, F., and Semidor, C. (2006). In Proceedings of the 23rd International Conference on Passive and Low Energy Architecture (Geneva, Switzerland).

Washington Department of Ecology (1999). Washington State County Boundaries [GIS vector file] (Olympia, WA).

Washington State Department of Transportation GIS Implementation Team (WSDOT) (2004). Major Shorelines of Washington State, with Trimmed Coastline [GIS vector file] (Olympia, WA).

WSDOT (1996). Washington State County Series, Local Roads [GIS vector file] (Olympia, WA).

WSDOT (2003). Washington State County Series, State Routes [GIS vector file] (Olympia, WA).

Yu, L., and Kang, J. (2008). J. Acoust. Soc. Amer. 123, 772-783.

Yu, L., and Kang, J. (2009). "Modeling subjective evaluation of soundscape quality in urban open spaces: An artificial neural network approach," J. Acoust. Soc. Amer. 126, 1163.

Zoom Corporation (2010). Built-in Mic, Polar Pattern/Frequency Response [for the H2].

BIOGRAPHICAL SKETCH

Mr. Bettcher was originally on a path to become an architect, a career ambition he had held from an early age. As a young child living in cities scattered across the United States, he grew to appreciate maps, cities, and architecture; the latter interest was cemented during an extended period of do-it-yourself renovations at his family home while living in Knoxville, Tennessee. This coincided with a developing interest in computer programming, science, and drawing. Various career paths seemed open to incorporating all of these interests, but architecture stood out at the forefront. After completing two degrees in architecture (a Bachelor of Design in Architecture and a professional Master of Architecture, both at the University of Florida) and working in the field, however, he realized that architects don't necessarily work so clearly at the intersection of science and design. When hired as an acoustical consultant after the Master of Architecture, he enjoyed working in a field that more explicitly combined science and analysis with design in ways that weren't immediately obvious but still had a significant (if subconscious) impact on the way people interact with their spaces. During the Great Recession, he took the opportunity to return for another degree in acoustics, a way of more clearly practicing architecture in the manner he saw as best suited to his perspectives.