Citation
Atlas: A Service-Oriented Sensor and Actuator Network Platform to Enable Programmable Pervasive Computing Spaces

Material Information

Title:
Atlas: A Service-Oriented Sensor and Actuator Network Platform to Enable Programmable Pervasive Computing Spaces
Creator:
KING, JEFFREY C. ( Author, Primary )
Copyright Date:
2008

Subjects

Subjects / Keywords:
Atlases ( jstor )
Computer programming ( jstor )
Ethernet ( jstor )
Firmware ( jstor )
Narrative devices ( jstor )
Personal computers ( jstor )
Sensors ( jstor )
Servomotors ( jstor )
Signals ( jstor )
Ubiquitous computing ( jstor )

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
Copyright Jeffrey C. King. Permission granted to the University of Florida to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Embargo Date:
11/30/2007
Resource Identifier:
660020694 ( OCLC )

Full Text

PAGE 1

ATLAS: A SERVICE-ORIENTED SENSOR AND ACTUATOR NETWORK PLATFORM TO ENABLE PROGRAMMABLE PERVASIVE COMPUTING SPACES

By

JEFFREY C. KING

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2007

PAGE 2

© 2007 Jeffrey C. King

PAGE 3

To my parents, Sharon and Wayne King, who taught me the love of poking and prodding the world, demanding its secret knowledge, but never becoming frustrated to find some things are inscrutable, wondrously so

– and –

To my grandmother, Barbara Ryther: would all the promise of independent living within this text come true, I'd still want you living with us

PAGE 4

ACKNOWLEDGMENTS

My enduring thanks and appreciation go to my advisor, Dr. Abdelsalam (Sumi) Helal, for bringing me into his fold, and for sharing his ability to clarify, motivate, and excite students.

I, of course, thank the many friends in the Mobile and Pervasive Computing Lab with whom I have spent so much time these years: James Russo, Dr. Hisham Zabadani, Youssef Kaddoura, Steve Vander Ploeg, Erwin Jansen, Steven Pickles, Raja Bose, Hen-I Yang, Ed Koush, Dr. Bessam Abdulrazak, and Dr. Shinyoung Lim. James and Steve V.P. deserve special note for their incredible early work on the then-unnamed sensor platform. The same goes for Steven Pickles, who produced our first working sensor platform and who always showed the saintly patience necessary when electrical engineers are asked to demystify their hardware mojo for computer scientists. Raja Bose and Hen-I Yang deserve special note. Mr. Bose's speed and talent for magnificent design and implementation are stunning to behold, and the Atlas project would be nowhere without him. Mr. Yang has provided the perfect solid, reasoned, and experienced grounding for ideas that threatened to spiral out of control, and of the work performed in this lab, our collaborations have been the most satisfying.

PAGE 5

TABLE OF CONTENTS

ACKNOWLEDGMENTS 4
LIST OF TABLES 8
LIST OF FIGURES 9
ABSTRACT 11

CHAPTER
1 INTRODUCTION 12
    Pervasive Computing 12
    Deploying Pervasive Computing Environments 12
        Matilda Smart House 13
        Gator Tech Smart House 14
        Failures of Integrated Pervasive Computing Spaces 15
    Programmable Pervasive Spaces 16
        Sensor Network Platforms 17
        Atlas Platform 18
    Organization of Dissertation 18
2 RELATED WORK 20
    Sensor Networks 20
    Middleware for Sensor Networks and Pervasive Computing 22
3 OVERVIEW OF THE ATLAS PLATFORM 27
    Sensors and Actuators for Pervasive Spaces 27
    Middleware for Programmable Pervasive Spaces 28
    Atlas Platform 29
4 ATLAS PLATFORM HARDWARE 33
    Processing Layer 33
    Communication Layer 35
        Wired Ethernet 35
        WiFi 37
        ZigBee 37
        Universal Serial Bus 38
        Serial Port 38
        Bluetooth 38
        Patch Antenna 38

PAGE 6

    Device Interface Layer 39
        Analog Sensors 39
        Digital Sensors 39
        Actuators 40
        General Purpose Input/Output 41
        Device Interfacing and IEEE 1451 41
        Wireless Interface Layer 41
    Other Layers 42
5 ATLAS PLATFORM FIRMWARE 47
    uIP Firmware 47
    Direct Firmware 47
    TinyOS Firmware 48
        TinyOS Base 49
        Atlas Platform Definition 51
        Atlas Application 54
6 ATLAS MIDDLEWARE 62
    Open Services Gateway Initiative Middleware 62
    Atlas Middleware Architecture 64
        Network Manager 64
        Configuration Manager 65
        Bundle Repository 66
        Atlas Developer Application Programming Interface 67
    Application Development Using Atlas 67
        Atlas Service Authoring Tool 68
        Communication Modules 69
    Performance Evaluation 70
        Experiment Setup 70
        Atlas Middleware Scalability 71
        Performance under Zero-Load Data Streams 73
7 ATLAS TRUST MODEL 82
    Security and Privacy in Sensor Networks 82
    Attack Model for the Atlas Platform 82
        Remote Attacks 82
        Neighboring Attacks 83
        Internal Attacks 83
        Interception Attacks 83
        Insertion Attacks 84
        Denial of Service Attacks 84
    Attack Prevention 84
        Fundamental Security 85
        Preventing Interception and Insertion Attacks 85

PAGE 7

    Pervasive Noise 87
        Overview 87
        Noise Model 88
        Pervasive Noise Implementation 90
8 CASE STUDIES 93
    Gator Tech Smart House 93
        Smart Blinds 93
        Atlas-Based Smart Floor 94
        Smart Front Door 95
        Results 96
    Purdue NILE-PDT 96
9 CONCLUSIONS AND FUTURE WORK 100
    Hardware Roadmap 100
    Middleware Roadmap 101
        Data Processing 101
        Distributed Middleware Servers 101
        Alternate Service Frameworks 102
APPENDIX: EXAMPLE NESC SOURCE CODE USED IN ATLAS FIRMWARE 103
    Interface 103
    Interface Provider 103
    Interface User 103
    Configuration Wiring 103
LIST OF REFERENCES 104
BIOGRAPHICAL SKETCH 110

PAGE 8

LIST OF TABLES

2-1 Comparison of existing sensor network platforms 24
5-1 NesC interfaces defined in the Atlas firmware 56
6-1 Methods provided by the Atlas Developer Application Programming Interface 75
6-2 Primary functionalities provided by Atlas Service Authoring Tool 75

PAGE 9

LIST OF FIGURES

1-1 In-lab Matilda Smart House 19
1-2 Gator Tech Smart House 19
2-1 Crossbow MICA2 and MICA2DOT motes 24
2-2 Telos mote, with serial and Universal Serial Bus support 25
2-3 Phidgets 8/8/8 Interface Kit 25
2-4 Cork Cube platform 26
3-1 Middleware architecture for programmable pervasive spaces 31
3-2 Atlas-specific architecture for programmable pervasive spaces 31
3-3 Software stack architecture of the Atlas platform 32
4-1 Three-layered Atlas node 42
4-2 Atlas Processing Layer 43
4-3 Atlas Wired Ethernet Communication Layer 43
4-4 Atlas WiFi Communication Layer 43
4-5 Atlas ZigBee Communication Layer 44
4-6 Atlas Programming and Debugging Board 44
4-7 Atlas Universal Patch Antenna 44
4-8 Atlas Analog Sensor – 8 Device Interface Layer 45
4-9 Atlas Analog Sensor – 32 Device Interface Layer 45
4-10 Atlas Digital Contact Sensor Device Interface Layer 45
4-11 Atlas Servo Device Interface Layer 46
4-12 Atlas General Purpose Input/Output Device Interface Layer 46
5-1 Communication path for data transmitted from Atlas node to middleware 59
5-2 Communication path for data transmitted from Atlas middleware to node 60

PAGE 10

5-3 Component and module diagram of TinyOS-based Atlas firmware 61
6-1 Adding a new Atlas node to the network 76
6-2 Knopflerfish OSGi framework running the Atlas middleware 77
6-3 Atlas node web configuration page 77
6-4 Atlas repository web configuration page 78
6-5 Atlas Eclipse IDE plugin 78
6-6 Screenshot of the AtlasSim application 79
6-7 AtlasSim node configuration window 79
6-8 Processor load vs. number of sensors 80
6-9 Memory usage vs. number of sensors 80
6-10 Processor load vs. number of sensors 81
6-11 Memory usage vs. number of sensors 81
8-1 Servo motor attached to the Smart Blinds 97
8-2 Tile of the Smart Floor, with Atlas node 98
8-3 Atlas node connected to 32 pressure sensors 98
8-4 Graphical display of location tracker service 99
8-5 Connecting the electronic deadbolt to the Atlas platform 99
8-6 Private-Door Duo electronic door opener 99

PAGE 11

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

ATLAS: A SERVICE-ORIENTED SENSOR AND ACTUATOR NETWORK PLATFORM TO ENABLE PROGRAMMABLE PERVASIVE COMPUTING SPACES

By

Jeffrey C. King

May 2007

Chair: Abdelsalam (Sumi) Helal
Major: Computer Engineering

Pervasive computing environments such as smart spaces require a mechanism to easily integrate numerous, heterogeneous sensors and actuators into the system and to manage and use them. However, available sensor network platforms are inadequate for this task. The goals and requirements for a smart space are very different from those of the typical sensor network application. Specifically, we found that the manual integration of devices must be replaced by a scalable, plug-and-play mechanism. The space should be assembled programmatically by software developers, not hardwired by engineers and system integrators. This allows for cost-effective development, enables extensibility, and simplifies change management. We found that in a smart space, computation and power are readily available and connectivity is stable and rarely ad-hoc. Our deployment of a smart house (an assistive environment for seniors) guided us in designing Atlas, a new, commercially available, service-oriented sensor and actuator platform that enables self-integrative, programmable pervasive spaces. We present the design and implementation of the Atlas hardware and middleware components, its salient characteristics, and several case studies of projects using Atlas.

PAGE 12

CHAPTER 1
INTRODUCTION

Pervasive Computing

From the first Electronic Numerical Integrator and Computer (ENIAC) to the latest home Personal Computers (PCs), users have been interacting with computers as distinct objects. Yet, when we sit down at these machines, we are rarely interested in just "using the computer." Instead, we probably have some specific need, such as evaluating a formula, sending an instant message to a friend, or looking up the recipe for tonight's dinner. We are forced to focus attention on leading the computer through the necessary steps to provide a desired service, instead of focusing on how we wish to use the service. Even through the advent of Mobile Computing, as computing technology has become increasingly distributed, from a single PC per family to multiple computers per user, including laptops, Personal Digital Assistants (PDAs), and cell phones, the user's attention has remained focused on the device used to accomplish a task, not the task itself.

Pervasive Computing, however, is a paradigm where these tasks and services are integrated into the environment. There will be many computers available throughout the physical space, but the technology will be calm [1], in that the computers are not distinct objects demanding attention, and will be effectively invisible to the user [2].

The last few years have seen an increased interest in the field of pervasive computing. The recent advent of sensor networks and pinhead-size computers has allowed researchers to begin implementing the vision of ubiquitous and pervasive computing.

Deploying Pervasive Computing Environments

The Pervasive and Mobile Computing Laboratory at the University of Florida has been researching and prototyping "assistive environments" – pervasive computing spaces designed to

PAGE 13

increase independence and improve the quality of life for older people and people with special needs. The research and development activities have been conducted in close collaboration with researchers from the University of Florida's School of Public Health and Health Professions. The primary objective is to create a usable and acceptable technology that supports "aging in place" and reduces the impact of age-related cognitive and motor impairments. While the objective was well understood, achieving it using existing technologies and systems proved to be infeasible.

Matilda Smart House

The Matilda Smart House (Fig. 1-1) was our first attempt at creating assistive environments for elder persons. Housed inside the Pervasive Computing Lab and occupying over 500 square feet, it consisted of a front door, a kitchen area, a living room area, a small bedroom, and a tiny bathroom. This mock-up house was a platform to experiment with accessible appliance control, indoor location tracking, and cognitive assistant concepts and applications. We integrated dozens of sensors, actuators, appliances, and other components, including contact and motion sensors, cameras, ultrasonic transceivers, X10 home automation modules and controllers, several microprocessor-based controllers, a microwave oven, an entertainment system, mobile phones, multi-purpose monitors, and a home PC.

The integration process in the Matilda Smart House was much more complex than expected. It was labor-intensive and ad-hoc, and repetitive work had to be performed for each entity (device or application) added to the space. The learning impedance associated with some entities required non-trivial know-how and detailed product specifications and schematics. Configuration also presented another challenge, as every entity needed to be configured individually, and detailed, low-level documentation had to be kept up to date. Demonstrations of the Matilda Smart House and its applications drew wide acclaim, but it became evident to us that the smart house was almost unchangeable and unmanageable. Reconfiguring entities or replacing

PAGE 14

components is difficult, and applications were loaded with hard-coded artifacts that explicitly reflected the ad-hoc integration decisions made. Delivering a technology that can be customized to individual users, or that can adapt to users' changing needs as they progress through the aging stages, was unrealizable using the existing tools. In essence, we need a sound core technology to reduce pervasive space "hacking," and to make smart homes and other pervasive computing environments a viable technology for the market.

Gator Tech Smart House

While the design, system integration, and application development and maintenance stages of constructing the Matilda Smart House were all extremely difficult, our focus group studies and the academic, commercial, and general audience interest in the project marked the potential of the technology. Our experience with the Matilda house led to our creation of the Gator Tech Smart House (GTSH) project [3], a 2,500 square foot, free-standing house located in the Oak Hammock Continuous Care Retirement Community [4] in Gainesville, Florida. Funded by donations from alumni and corporate sponsors, the GTSH (Fig. 1-2) was our second attempt at creating assistive environments for seniors. It serves as the test bed in which we deployed and tested various technologies developed in our lab.

Given the chance to start fresh, we drew additional goals and objectives from the lessons learned from the Matilda Smart House. As indicated, the key problem to resolve was the issue of scalability, both in initial development and continuous manageability. In addition, the GTSH study includes many live-in sessions during which elder and disabled persons would be staying in the house over a period of time. Acceptance and usability issues, therefore, also became a central focus, and we needed a way to quickly adapt the environment for different residents.

PAGE 15

Failures of Integrated Pervasive Computing Spaces

Building a pervasive computing environment requires deploying and connecting many components. A smart space will typically include a multitude of sensors, such as Radio Frequency Identification (RFID) tags, location transceivers or beacons using electromagnetic or ultrasonic wave signals, pressure, contact, motion, moisture, temperature, vibration, chemical, and light sensors. The space could also contain a range of power-line-controlled devices, such as lamps, televisions, fans, and radios. A smart home might include networked appliances such as special microwaves, refrigerators, or ovens. Output devices could deliver messages, reminders, or instructions to the resident, and can range in complexity from small flashing lights (as in cognitive assistance applications that provide simple cues to the user) to speakers relaying audio data (live, pre-recorded, or text-to-speech) to televisions and monitors displaying text or video (either in full-screen or as an overlay on other media). Special-purpose equipment, such as medical devices, security systems, electronic door openers, and motorized window blinds, may be present. In addition, a variety of user interfaces (e.g., remote controls, wireless tablets or PDAs, touch-screen panels on the walls, microphone arrays, or cell phones) could provide interaction with the environment. All of these components must then be physically linked via hardware and logically linked by software to provide a wide range of services to the users.

Anyone building a pervasive computing space quickly realizes what a laborious process this ad-hoc system integration is. Just choosing the appropriate devices to incorporate into the space is hard enough; obtaining sample devices is difficult, and product reviews dealing specifically with smart spaces are few and far between. Smart space developers must research each device's characteristics and operation, determining how to configure it and interface with it. The device selection process is followed by the time-consuming and error-prone connection phase, in which the numerous and heterogeneous devices are linked into the Input/Output (I/O)

PAGE 16

ports of the computers that will operate the smart space. Special care must be taken to ensure that no conflicting hardware resources are shared with incompatible devices, requiring complete and specific knowledge and documentation of both the low-level (interconnection) and high-level (application) components of the system.

After this process of installing the physical devices and connections is complete, creating the services that make use of the devices is no easier. System developers still require complete knowledge of the system, including all the low-level details of the devices, such as the proper voltages and signals to query and control the device, and the meaning of any signals returned. The developers must also know how the devices are connected, taking into consideration any special routines needed to access devices on separate networks, or to prevent conflict on shared resources.

Unlike traditional software development, the existing tools make it infeasible to build a pervasive computing environment in small, modular pieces. Adding or removing one device, or changing the behavior of one service, requires revisiting all the others. Exhaustive and repetitive regression testing is necessary to guard against errors or indeterminate behavior caused by conflicting requests of devices or of the connection resources used to link them. These are the failures of integrated pervasive computing environments.

Programmable Pervasive Spaces

Given these failures, and drawing upon our experience with the Matilda House, our goal during the creation of the Gator Tech Smart House was to develop models, methodologies, and processes for creating programmable pervasive spaces [5]. This is a concept in which a smart space exists, in addition to its physical entity, as a runtime environment and a software library [6]. Service discovery and gateway protocols and frameworks (such as the Open Services Gateway Initiative [7,8]) automatically integrate system components using a generic middleware that maintains a service definition for each sensor and actuator in the space. Programmers

PAGE 17

assemble services into composite applications using various programming models [9, 10], tools, and features of the middleware. Such a space should eliminate the need for system integration and the hard-coding of physical artifacts by supporting plug-and-play technology. It should decouple application development from the physical world, providing a service-oriented view of devices in the environment. This service-oriented architecture should support a sound programming model that will allow software developers to create services easily using familiar languages and Integrated Development Environments (IDEs).

Sensor Network Platforms

Building a programmable pervasive space first requires some standard way of organizing the numerous and heterogeneous sensors and actuators into the system. It is clear that sensor network technology will be a foundation of these spaces. A sensor network is a special kind of computer network. The nodes of this network are generally small, low-power computers that are connected to one or more sensors. However, the major sensor network platforms that are commercially available were not designed specifically for pervasive spaces. While these platforms are still useful for gathering data in the space, they also bring significant limitations.

First, the name "sensor network" itself implies a major limitation: the platforms are only able to integrate and operate sensors. A pervasive space will need to gather information from sensors, but it will also need to manipulate devices. A sensor network platform for pervasive spaces should be able to host actuators in addition to sensors.

Second, sensor network nodes generally form ad-hoc networks. The classic example of a traditional sensor network application is habitat monitoring [11]. Nodes are dispersed over a large, remote area, and must run unattended for long periods of time without external power or reliable network connectivity. The nodes must form their own network, either to replicate data across the system or to have information "hop" from one node to another until an external

PAGE 18

network is found. This fragile network must also adapt as new nodes are added, or more commonly, as existing nodes fail due to depleted batteries or environmental hazards. However, this ad-hoc networking is not appropriate in a pervasive space, where a more reliable and ordered network structure is needed.

Third, most sensor networks were designed to operate in isolation. A particular sensor network deployment will have one purpose, running one application. Additionally, because the traditional sensor network runs in isolation, the nodes must perform all application computation. Because the nodes are low-power devices, this requires building applications that run in a distributed manner, breaking the processing into small, self-contained units. Beyond the increased difficulty of developing distributed computing applications, pervasive spaces will run a large number and variety of services, making deployment on existing sensor network platforms infeasible. Fortunately, a pervasive space will, by definition, be rich with computational resources, but the underlying sensor network platform must take advantage of this capability.

Atlas Platform

None of the available sensor network platforms are fully adequate for the development of pervasive spaces. We seek to change this with the creation of the Atlas platform. Atlas, the sensor and actuator platform described in this dissertation, is the basic building block for programmable pervasive spaces. Atlas provides physical nodes for connecting various heterogeneous devices, a system for translating those devices into software services, a system for maintaining a library of device services and their interfaces, and a runtime environment for accessing services and composing applications.

Organization of Dissertation

Chapter 2 of this dissertation presents related work in the fields of sensor networks and middleware for pervasive computing. Chapter 3 provides an overview of the Atlas sensor and

PAGE 19

actuator network platform. Chapter 4 presents the hardware details of the physical Atlas platform nodes. Chapter 5 describes the firmware that operates the Atlas nodes. Chapter 6 presents the Atlas middleware architecture. Chapter 7 describes the Atlas trust model, a novel system for security and privacy management in sensor and actuator networks designed for pervasive computing environments. Chapter 8 demonstrates the Atlas platform in use through several case studies. Chapter 9 presents the current conclusions of the Atlas project and discusses the likely future work for that platform.

Figure 1-1. In-lab Matilda Smart House.

Figure 1-2. Gator Tech Smart House.

PAGE 20

CHAPTER 2
RELATED WORK

Sensor Networks

There has been a dramatic increase during the past three years in the number of sensor platforms in development or commercially available. The most visible of these has been the Mote family [12, 13, 14], developed by the University of California at Berkeley as part of the Smart Dust [15, 16] project. The final goal of the Smart Dust project is a complete sensor node package (sensors, power, communication, and control hardware) in one cubic millimeter. While that form factor is not yet available, the Berkeley team has released several iterations of their platform. Motes are available commercially from Crossbow Technology [17]. Crossbow offers several versions of the platform, such as the MICAz and MICA2 (similar in size to a pack of cigarettes), and the MICA2DOT (roughly the size of six stacked quarters). These platforms (Fig. 2-1) include an integrated processing and communication module and offer limited modularity in the form of daughter cards, containing different sensor arrays, which can be plugged into the platform. Other versions lack this modularity. For example, Telos [18, 19] (shown in Fig. 2-2), as developed by the Smart Dust team, is a completely integrated platform based on the TI MSP430 microcontroller. It offers higher performance and consumes less power than other Mote platforms, but comes at a higher cost, and the available sensors are integrated into the device and cannot be changed by users.

Many groups are working with Motes either as the basis for other projects or to further the sensor platform itself. Intel and Berkeley have worked together on iMote [20], a Bluetooth-enabled version of the wireless sensor node. College of the Atlantic collaborated with Berkeley to use wireless sensor networks for habitat monitoring on Great Duck Island [11].

PAGE 21

Motes are currently the de facto standard platform for sensor networks. Although the Mote was primarily developed for use in wireless ad-hoc networks for applications such as remote monitoring, researchers in many unrelated areas have used the Mote, primarily for its commercial availability and its ability to integrate numerous sensors into a system.

Phidgets [21], developed by the University of Calgary, is another widely used, commercially available platform (Fig. 2-3). The Phidgets support a large variety of sensors and actuators. They allow rapid application development and are extremely easy to use. But the Phidgets are not fully modular, and they only support communication to a Windows desktop computer via Universal Serial Bus (USB), which leads to scalability problems.

Some groups have worked on creating a more modular sensor network platform. The Cube [22], developed by University College Cork (Fig. 2-4), and MASS [23], a Sandia National Laboratory project, have modular architectures allowing users to rapidly develop applications and reconfigure platforms as necessary. Other sensor network platforms, such as NIMS [24], XYZ [25], and Eco [26], were designed for specific applications: environmental monitoring (NIMS, XYZ) and health monitoring (Eco).

The Smart-Its [27], developed jointly by Lancaster University and the University of Karlsruhe, offer some features that could facilitate the development of pervasive spaces. They have a somewhat modular hardware design and a template-based software design process, which allows rapid application development. But the Smart-Its platform is still not completely modular, with an integrated processing and communication board. Furthermore, devices connected through Smart-Its are constrained to a single application (running on the Smart-It hardware). This does not allow for service-rich environments in which applications can be developed using service composition. The feature set of these platforms is summarized in Table 2-1.

PAGE 22

None of the available sensor network platforms are fully adequate for the development of pervasive spaces. Most of the platforms focus only on sensors, and barely touch upon the issue of actuators. In a pervasive space, actuators play as important a role as sensors, as actuators are used to influence the space. NIMS and XYZ make use of actuators, but only for the specific purpose of making the platforms mobile. Phidgets support a large number of actuators, but are constrained by scalability issues and a fixed hardware configuration.

Additionally, none of these platforms have the capability to automatically represent their connected devices as software services to programmers and users. Instead, programmers must write distributed applications that query hard-coded resources to access the devices connected to the platform. Except for the larger number of devices supported, this is no better than connecting sensors and actuators directly to the Input/Output (I/O) ports of a computer. It is a development method that does not scale as more devices and services are added to a smart space.

Middleware for Sensor Networks and Pervasive Computing

As mentioned previously, the majority of ubiquitous computing research involving implemented systems has consisted of pilot projects to demonstrate that pervasive computing is usable [28], and these pilot projects represent ad-hoc, specialized solutions that are not easy to replicate. In order to ease the development of programmable pervasive spaces, effort has been placed into developing a variety of middleware solutions to aid developers. A common theme in many of these solutions is the notion of context. Context-aware computing is a paradigm in which applications can discover and take advantage of contextual information. This could be the temperature, the location of the user, the activity of the user, etc. Context is any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and application themselves.

PAGE 23

The Context Toolkit [29, 30] provides a set of Java objects that address the distinction between context and user input. The context consists of three abstractions: widgets, aggregators, and interpreters. Context widgets encapsulate information about a single piece of context, aggregators combine a set of widgets together to provide higher-level widgets, and interpreters interpret both of these.

In the Gaia project, an application model known as MPACC is proposed for programming pervasive spaces [31], and includes five components with distinct critical functionalities in pervasive computing applications. The model provides the specification of operation interfaces, the presentation dictates the output and presentations, the adaptor converts between data formats, the controller specifies the rules and application logic, and the coordinator manages the configurations.

The EQUIP Component Toolkit (ECT) [32], part of the Equator project, takes an approach more similar to distributed databases. Representing entities in the smart space with components annotated with name-value pair properties and connections, ECT provides a convenient way to specify conditions and actions of the applications, as well as strong support in authoring tools, such as graphic editors, capability browsers, and scripting capabilities.

The SOCAM architecture [33] is a middleware layer that makes use of ontology and predicates. There is an ontology describing the domain of interest. By making use of rule-based reasoning, we can then create a set of rules to infer the status of an entity of interest. The SOCAM research shows that ontology can be used as the basis of reasoning engines, but such engines are not suitable for smaller devices due to the computational power required.

PAGE 24

CoBrA [34] is a broker-centric agent architecture. At the core of the architecture is a context broker that builds and updates a shared context model that is made available to appropriate agents and services.

Table 2-1. Comparison of existing sensor network platforms.

Feature                            MICA2        Telos          Cube         MASS              Smart-Its
Microcontroller                    ATmega128L   TI MSP430      ATmega128L   Cygnal C8051F125  PIC 18F6720
Program memory                     128K         48K            128K         128K              128K
Volatile memory (RAM)              4K           2K             4K           2K                2K
Non-volatile memory (EEPROM)       512K         128K           4K           n/a               n/a
Communication protocols supported  RF           IEEE 802.15.4  RF           RF                RF
Service frameworks supported       None         None           None         None              None
Ad-hoc network support             Yes          Yes            Yes          Yes               Yes
Modular                            Partial      No             Partial      Yes               Partial
Commercially available             Yes          Yes            No           No                Yes

Figure 2-1. Crossbow MICA2 and MICA2DOT motes.

PAGE 25

Figure 2-2. Telos mote, with serial and Universal Serial Bus support.

Figure 2-3. Phidgets 8/8/8 Interface Kit.

PAGE 26

Figure 2-4. Cork Cube platform.

PAGE 27

CHAPTER 3
OVERVIEW OF THE ATLAS PLATFORM

Sensors and Actuators for Pervasive Spaces

Our lab has thoroughly investigated [35, 36] how to develop robust, maintainable, and service-rich pervasive spaces. During this research, we presented a formal model of pervasive spaces, which first required identifying the key entities in such a space. To summarize, a space consists of living beings and objects. The living beings interact with each other and with the objects. In a pervasive space, the living beings are users, and we can divide the objects into two categories: passive objects and active objects.

Passive objects are "dumb" objects that cannot be queried or controlled by the smart space. At best, passive objects may be recognized by the space, but only users can manipulate them. Passive objects therefore are not key entities in a smart space. Active objects, however, can provide information to, or be manipulated by, the smart space. Active objects are key entities.

Active objects are further divided into two classes: sensors and actuators. Sensors provide information about a particular domain, supplying data to the system about the current state of the space. Sensors only provide measurement; they cannot directly alter the state of the space. Actuators are the active objects that alter the space. They activate devices that perform certain functions.

Sensors and actuators are the foundation of a pervasive space, as they provide the means for gathering information about the state of the space and for controlling devices that can modify the state of the space. We therefore require a platform to connect numerous and heterogeneous sensors and actuators to the services and applications that will monitor and control the space.

Connecting sensors and actuators to applications implies more than simply physically coupling these devices to a computer platform (although this is certainly important, as we wish to

PAGE 28

employ far more devices than could be connected to the limited Input/Output (I/O) ports of a single machine or even a small cluster). Connecting devices with applications means providing some mechanism for the applications to make use of devices and services directly, instead of accessing some I/O resource on a machine that happens to be wired to a particular device. Beyond dealing with resource allocations, connecting applications and devices means eliminating the need for those applications to know the low-level information (voltages, control codes, etc.) needed to drive the devices.

To solve this problem, we require a network-enabled, service-oriented platform that can "convert" the various sensors and actuators to software services. The required sensor platform would be responsible for obtaining this representation and for managing the services in such a way that applications are easily able to obtain and use the services and associated knowledge. Realizing this, we were able to design the architecture for programmable pervasive spaces shown in Fig. 3-1.

Middleware for Programmable Pervasive Spaces

In our architecture, the physical layer contains passive and active objects. Through sensors and actuators, active objects are captured into the smart space for observation and control.

The platform node layer, implemented by Atlas, contains all the sensor and actuator platform nodes in the environment. These nodes automatically integrate the sensors and actuators (and hence their respective active objects) from the layer beneath and export their service representations to the layers above.

The service layer, which resides above the platform layer, holds the registry of the software service representations of all sensors and actuators connected to the platform nodes. The service layer, which typically runs on a centralized, full-fledged server, also contains the service discovery, composition, and invocation mechanisms for applications to locate and make use of

PAGE 29

particular sensors or actuators. The service layer contains a context-management extension as well as a knowledge representation and storage extension, both of which are necessary to build programmable pervasive spaces.

Finally, the application layer sits at the top and consists of the execution environment that provides access to the software library of sensors, actuators, and other services. It also contains the actual applications and composed services that monitor and control elements of the pervasive space.

Atlas Platform

Following this architecture, we created the Atlas sensor and actuator platform to address the inadequacies of existing platforms for creating programmable pervasive spaces. Atlas is a combination of modular hardware, firmware running on the hardware, and a software middleware that provides services and an execution environment. Together these components allow virtually any kind of sensor, actuator, or other device to be integrated into a network of devices, all of which can be queried or controlled through an interface specific to that device, and facilitate the development of applications that use the devices.

The Atlas-specific architecture for programmable pervasive computing spaces is shown in Fig. 3-2. As with the generic architecture in Fig. 3-1, the Physical Layer consists of the phenomena we wish to sense, the devices we wish to control, and the sensors and actuators that can perform these tasks. The Atlas Platform Layer consists of the nodes that integrate the sensors and actuators into a programmable network. The Plug-and-Play firmware provides the tools for this integration, as well as for management of the nodes and various devices once the network is established. The firmware is able to do this through collaboration with the Atlas Services running in the Service Layer. The Service Layer runs inside the Open Services Gateway Initiative (OSGi) framework, and consists of the core Atlas Services and a runtime environment for the software

PAGE 30

representation of sensors and actuators. The Service Layer also includes a set of Communication Modules to link Atlas services with applications, and optional support mechanisms for context-oriented development. The Application Layer hosts the end-user applications that make use of an Atlas deployment. These applications may be running in the same OSGi framework as the Service Layer, in which case they are able to use OSGi-provided service discovery and collaboration mechanisms to interact with the sensors and actuators. If an application is external to the framework, it will interact with the deployment using the Communication Modules.

Fig. 3-3 shows the layered software stack architecture of the Atlas node and the full-fledged server that hosts the service framework. The Atlas driver runs on the Atlas node. On power-up, it registers the associated sensor or actuator services on the framework server. Optionally, a processing agent could be dynamically loaded onto the Atlas node to allow for on-node processing (such as data filtering).

The following three chapters will examine the details of the Atlas nodes (hardware), Plug-and-Play operating system (firmware), and service and application framework (middleware).
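To make this service-oriented view concrete, the sketch below shows, in plain OSGi terms, how a sensor attached to an Atlas node might be exposed as a service when its driver starts, and how an application in the same framework could then discover and invoke it. This is an illustrative sketch only: the interface, class, and property names (PressureSensorService, AtlasPressureSensorBundle, device.type, and so on) are hypothetical placeholders, not the actual Atlas middleware API; only the org.osgi.framework calls are standard OSGi. Chapter 6 describes the real mechanisms (the Network Manager, Configuration Manager, and Bundle Repository) that automate this registration rather than requiring a hand-written bundle per device.

```java
// Illustrative sketch only: hypothetical service and property names, not the real Atlas classes.
import java.util.Hashtable;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

// Service-oriented view of one device: applications program against this
// interface instead of I/O ports, voltages, or control codes.
interface PressureSensorService {
    int readPressure(); // latest reading from the attached sensor
}

// Bundle the middleware might activate when an Atlas node announces a pressure sensor.
class AtlasPressureSensorBundle implements BundleActivator {
    public void start(BundleContext ctx) {
        // Properties let applications discover the device by type or location,
        // not by hard-coded wiring details.
        Hashtable<String, String> props = new Hashtable<String, String>();
        props.put("device.type", "pressure");
        props.put("device.location", "floor-tile-3");

        PressureSensorService svc = new PressureSensorService() {
            public int readPressure() {
                // A real driver would query the node over its Communication Layer here.
                return 0;
            }
        };
        ctx.registerService(PressureSensorService.class.getName(), svc, props);
    }

    public void stop(BundleContext ctx) {
        // Services registered by this bundle are unregistered by the framework automatically.
    }
}

// An application running in the same OSGi framework locates and uses the service.
class BlindsController {
    void closeBlindsIfOccupied(BundleContext ctx) {
        ServiceReference ref = ctx.getServiceReference(PressureSensorService.class.getName());
        if (ref == null) {
            return; // no such sensor registered yet
        }
        PressureSensorService sensor = (PressureSensorService) ctx.getService(ref);
        if (sensor.readPressure() > 100) {
            // ... invoke an actuator service here (e.g., a servo driving the blinds)
        }
        ctx.ungetService(ref);
    }
}
```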

PAGE 31

Figure 3-1. Middleware architecture for programmable pervasive spaces.

Figure 3-2. Atlas-specific architecture for programmable pervasive spaces.

PAGE 32

Figure 3-3. Software stack architecture of the Atlas platform.

PAGE 33

CHAPTER 4
ATLAS PLATFORM HARDWARE

Each Atlas node is a modular hardware device composed of stackable, swappable layers, with each layer providing specific functionality. The modular design and easy, reliable quick-connect system allow users to change node configurations on the fly. A basic Atlas node (Fig. 4-1) consists of three layers: the Processing Layer, the Communication Layer, and the Device Interface Layer.

Processing Layer

The Processing Layer (Fig. 4-2) is responsible for the main operation of the Atlas node. Our design is based around the Atmel ATmega128L [37] microcontroller. The ATmega128L can run at between 2 and 8 MHz, and includes 128 KB of Flash program memory, 4 KB of static random access memory (SRAM), a 4 KB electrically erasable programmable read-only memory (EEPROM) for non-volatile storage, and an 8-channel 10-bit analog-to-digital converter (ADC). Using its internal oscillator, the chip provides two 8-bit timers, two 16-bit timers, and 8 Pulse Width Modulation (PWM) channels. Two serial Universal Synchronous/Asynchronous Receiver/Transmitters (USART or UART), and support for the Serial Peripheral Interface (SPI) [38] and Inter-Integrated Circuit (I2C) [39] protocols, allow the ATmega128L to communicate with and control many external devices. The microcontroller can operate at a core voltage between 2.7 and 5.5 V.

We chose the ATmega128L microcontroller for its low power consumption, plethora of data pins, ample program space, and the readily available tools and information resources both from the company and the microcontroller's user base. In addition to the ATmega128L, the Atlas Processing Layer also includes 32 KB of expanded SRAM for on-node services such as the Atlas firmware or end-user filters, queries, and applications. The ATmega128L supports up to 64 KB

PAGE 34

of external memory, so this 32 KB SRAM is mapped into the microcontroller's memory address space and requires no special handling by the user. A real-time clock supplements the internal oscillator of the ATmega128L, allowing for accurate timing. This clock can also be used to have the microcontroller wake from a sleep state at specified intervals, supporting future work in extreme low-power operation. Voltage regulators on the Processing Layer convert an up-to-12 V incoming direct current (DC) signal to the 3.3 V used throughout the Atlas hardware.

Since Atlas is designed principally as the building block of pervasive computing spaces such as smart homes, where wired power is readily available, the Processing Layer includes headers for wired power coming from standard DC converters. The dual-head design allows Atlas nodes to be daisy-chained together, permitting many nodes to operate without wasting electrical outlets in a given space. The number of nodes that can be chained to a single power supply depends on the number and types of devices connected to the platforms. In the case of the Gator Tech Smart House's Smart Floor (see Chapter 8), each node is connected to 32 piezoresistive force sensors, and 15 Atlas nodes can be daisy-chained from a single outlet.

The dual-head design also allows nodes to drive actuators that require more power than the 3.3 V that could be sent through the microcontroller's data pins and the Atlas layer bus. The second head can be used with a pass-through cable to provide the unregulated power supply directly to the compatible actuator Interface Layer, supporting actuators up to 12 V.

For situations where wired power is undesirable, the headers are compatible with various Atlas battery packs, allowing for untethered node operation. Battery power is an obvious choice in many sensor network deployments, where applications often require long-lived, unattended, low-duty-cycle sensing in situations such as habitat monitoring or battlefield intrusion detection

Even in the case of smart homes, sensors may be needed in locations where running a power line would be inconvenient.

The choice between wired and battery power does affect the capability of a node. In most cases, a battery-operated node will not be able to drive actuators, as those devices would either drain the battery too quickly or fail to operate at all. The power source can also affect the choice of Communication Layer, as a high-power medium such as WiFi [40] would be less useful than a low-power medium such as ZigBee [41].

Communication Layer

For a sensor and actuator network platform to be useful, users must be able to access the data being produced by sensors, and must be able to send commands to the actuators. With Atlas, data transfer over a network is handled by the Communication Layer. Several options are currently available:

- Wired 10/100 Base-T Ethernet
- IEEE 802.11b (WiFi)
- ZigBee
- USB
- RS232
- Bluetooth

Wired Ethernet

The original Atlas Wired Ethernet Communication Layer (Fig. 4-1), the first Communication Layer produced, made use of the Cirrus Logic Crystal LAN CS8900a network interface controller (NIC) [42] and a standard Registered Jack 45 (RJ45) connector for basic 10 Base-T networking. Light emitting diodes (LEDs) on the Interface Layer indicate power, connectivity, and local area network (LAN) activity status.

Wired Ethernet is important in situations requiring high-speed data access over an extremely reliable connection. For example, the Gator Tech Smart House uses Wired Ethernet


36 Atlas nodes for critical systems such as location/fa ll detection. It is idea l for applications where nodes are situated in areas shielded from ra dio frequency (RF) communication, as in many industrial settings, or for deployments where jammi ng from benign or malicious external signals may be an issue. Wired Ethernet is also preferab le in high-security settings where snooping must be detected, as splicing the Ethe rnet cable produces a change in the impedance of the wires that can be sensed [43]. As the Atlas platform developed, we found th at the original Wired Ethernet solution became problematic. Because the CS8900a-based layer only provided the NIC and connection port, the Atlas firmware had to control the lowlevel operation of the NIC, and had to implement a Transmission Control Protocol / Internet Pr otocol (TCP/IP) stack. After looking at Open Source options for 8-bit microcont roller TCP/IP stacks, we found that the full implementations would require too much memory, and the partial im plementations would lack several features we wished to use (such as multicast). We sett led on uIP [44], which provides a partial TCP implementation and very limited UDP. We were able to take this and, with an Open Source uIPbased driver for the CS8900a that we modified for the Atlas platform, produce a working Wired Ethernet solution. uIP, however, was not as stable as we would have liked. Additionally, having to control the NIC from the firmware reduced the number of data pins from the microcont roller that we could use for sensors and actuators. Fi nally, the architecture imposed by uIP made it difficult to create a single firmware that would work with any Atlas Communication Layer. The current Atlas Wired Ethernet Communica tion Layer (Fig. 4-3) uses the LANTRONIX XPort [45]. Unlike the CS8900a, the XPort is an in tegrated Ethernet device, meaning the module includes its own microcontroller, which operates th e Ethernet transceiver and runs a full TCP/IP


37 stack. The XPort provides 10/100 Mb networking, a nd includes higher-level protocols such as DHCP, HTTP, and SMTP. LEDs on the module itself indicate power status and LAN connectivity / activity. Because the XPort is an integrated solution, Atlas is able to control the device, and send and receive da ta, using simple commands. WiFi The WiFi Communication Layer (Fig. 4-4) is based on the DPAC WLNB-AN-DP101 Airborne Wireless LAN Module [46], providing 802.11b connectivity to the Atlas platform. Like the XPort, the DPAC module is an integrated de vice, with its own microc ontroller to operate the device and implement the network protocols. Also like the Wired Ethernet Layer, the WiFi Layer, which provides connection speeds up to 11 Mb, is appropriate for situations requi ring high-speed data access. 802.11b devices are typically rated for a range of 50 m, though th e exact range depends on antennas used and environmental effects. WiFi is not a low-power standard. The WiFi Communication Layer is best used when wired power is available but wire d Ethernet is not. Battery operat ion is possible, but an extended life is possible only with very infrequent transmissions. ZigBee The ZigBee Communication Layer (Fig. 4-5) us es the Cirronet ZigBee module [47]. This module is based on the IEEE 802.15.4 standard fo r low-power wireless networking. Atlas nodes using ZigBee communication are the best choice for untethered, extendedlife, battery-operated applications. ZigBee-based Atlas nodes can func tion exactly like other Atlas nodes by means of the Cirronet ZigBee Gateway, or can form an ad -hoc mesh network for non-pervasive-computing sensor network scenarios.


38 Universal Serial Bus The Universal Serial Bus (USB) Communica tion Layer allows Atlas nodes to connect directly to the middleware com puter using its USB ports. The USB layer is primarily used for secure programming and configuration of nodes – b ecause the node is connected directly to the computer, information such as secret keys can be passed to the node with out fear of the data being compromised. Serial Port Several Atlas Communication Layers provi de serial port communication. The most common is the Atlas Programming and Debugging Board (Fig. 4-6). The Programming Board allows for complete testing and programming of Atlas nodes. All pins from the ATmega128L are exposed, headers are available for Joint Test Action Group (JTAG) [48], 6-pin Atmel In-System Programmer (ISP) [49], and 10-pin ISP devices, and standard seri al headers are available for both of the ATmega128’s internal serial UARTs. Bluetooth The Bluetooth Communication Layer was a prototype design built on top of the USB Communication Layer. It has been tested in a robotics experime nt performed by the lab, but is not regularly used. Patch Antenna The Atlas Patch Antenna (Fig. 4-7) is not a Communication Layer itse lf, but is used in conjunction with either the WiFi or ZigBee layers. The omni-dir ectional antenna operates in the mid 2.4 GHz Instrumentation, Scientific, and Medical (ISM) band, with a narrow bandwidth extending to 2.485 GHz, naturally filtering out-of-band interference. The flat form factor allows it to be mounted over the Communication Layer us ed, protecting the underly ing layers, and since


39 the patch antenna is rigidly fixed to the Atlas node, it is much more reliable than the typical whip antenna. Device Interface Layer The Interface Layer is used to connect the various sensors and act uators to the Atlas platform. Interface Layers are available for a variet y of analog and digital sensors, actuators, and general purpose input and output. Device-integrat ed Interface Layers are also being developed. Analog Sensors Two Device Interface Layers are available for analog sensors. Each board accepts standard 3-wire analog sensors (the wires correspond to the refere nce (maximum) voltage, ground (minimum voltage), and the signal). The first Interface Layer, the 8-sensor board (Fig. 4-8), supports up to eight analog sensors. Each sensor is connected direc tly to one of the eight analogto-digital converter (ADC) channels on th e ATmega128L. This Interface Layer uses Molex/Waldom C-Grid polarized headers on the connection terminals, preventing users from plugging in sensors the wrong way. The second analog sensor Inte rface Layer, the 32-sensor board (Fig. 4-9) supports 32 sensors. Four 8-to-1 multiplexers on this Interface Layer are used to share the first four ADC channels with the 32 sensors. The control lines for the multiplexers are connected to the Atlas layer interconnect bus an d are controlled by the firmware. Du e to space limitations, this board does not include polarized headers, so users must be careful to plug sensors in using the correct orientation. The reference, ground, and signal pin rows are labeled on the board to help users correctly connect their devices. Digital Sensors The Digital Contact Interface La yer (Fig. 4-10) supports up to 16 contact or other two-pin digital sensors. Despite being designed for dig ital sensors, this Interface Layer uses the


40 ATmega128L’s ADC, leaving the digital data pins available for other applications. Digital sensors produce a value of either 0 or 1. This is read by the ADC as 0 or 1023 (the maximum value of the 10-bit ADC). The Atlas firmware, re cognizing the Digital Cont act Interface Layer, translates ADC values of 0-511 as 0 and 512-1023 as 1. Since two-wire digital sensors can be plugged into the layer in either orientation, this board does no t include polarized headers. Actuators The Servo Interface Layer (Fig. 4-11) allows si x servo motors to be controlled by the Atlas platform using Pulse Width Modulation (PWM). Se rvos are positionable motors with some range of motion, usually 180 degrees. They are controlled digi tally; the input to a se rvo is either 0 or 1. With servos, the value on the input line does not directly set its position. Instead, while the input is 0, the servo remains at its current position. To change the servo’s position, a value of 1 is sent for some specific time. The length of time (the “width ”) that this “pulse” of 1 is held determines the new position. For example, setting the inpu t to 1 for 0.5 ms might set the servo to its minimum position, while a 2.5 ms pulse sets it to the maximum, and a 1.5 ms pulse puts it in the middle. While an analog control might seem more intuitive, the digital PWM requires less power, and allows the servo to be located furt her from the Atlas node, as the noise and signal attenuation in a long wire is less of an issue with digital signals. The Servo Interface Layer also includes the dual-head power connect ors. Most servos require at least 7 V, more than the ATmega 128L can provide. Servos can either use a passthrough cable from the Processing Layer, or can be directly connected to a separate direct current (DC) power supply. Other simple two-wire actuators, such as LED s, can be controlled using the Servo Interface Layer. However, since this is not intuitive, a nd LEDs do not require the direct power connection, a separate Interface Layer will be available for them.
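To make the pulse-width arithmetic concrete, the sketch below maps a desired servo angle to a pulse width using the example timings given above (0.5 ms for the minimum position, 2.5 ms for the maximum). It is written in Java purely as an illustration of the math; on a real node this logic runs in the firmware, and the exact timing endpoints depend on the particular servo.

    /**
     * Illustration of the servo pulse-width mapping described above.
     * The 0.5 ms / 2.5 ms endpoints and the 180-degree range are the example
     * values from the text; real servos vary, and the Atlas firmware performs
     * this computation on the node rather than in Java.
     */
    public final class ServoPulse {

        private static final double MIN_PULSE_MS = 0.5;   // assumed minimum-position pulse
        private static final double MAX_PULSE_MS = 2.5;   // assumed maximum-position pulse
        private static final double RANGE_DEGREES = 180.0;

        /** Linearly map an angle in [0, 180] degrees to a pulse width in milliseconds. */
        public static double pulseWidthMs(double angleDegrees) {
            if (angleDegrees < 0 || angleDegrees > RANGE_DEGREES) {
                throw new IllegalArgumentException("angle out of range: " + angleDegrees);
            }
            double fraction = angleDegrees / RANGE_DEGREES;
            return MIN_PULSE_MS + fraction * (MAX_PULSE_MS - MIN_PULSE_MS);
        }

        public static void main(String[] args) {
            // 0 -> 0.5 ms, 90 -> 1.5 ms, 180 -> 2.5 ms, matching the example in the text.
            for (double angle : new double[] {0, 90, 180}) {
                System.out.printf("%5.1f degrees -> %.2f ms pulse%n", angle, pulseWidthMs(angle));
            }
        }
    }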

A second actuator Interface Layer being produced now is the triode for alternating current (TRIAC) board. The TRIAC Layer allows Atlas to be wired into a household circuit (such as for overhead lights or fans, or electrical outlets in general) and control the current through it. This allows the platform to replace X10 home automation modules and similar devices.

General Purpose Input/Output

The General Purpose Input/Output (GPIO) Interface Layer (Fig. 4-12) allows users to connect any device to the Atlas platform without requiring a customized Interface Layer. It also allows for a mix of analog and digital sensors and actuators to be controlled by a single Atlas node. The GPIO Layer exposes all the ADC channels and other data pins, as well as power and ground, from the ATmega128L. Users may need to add routines to the Atlas firmware to handle special devices, and this layer contains headers for both JTAG and ISP devices, used to write new programs or data to the microcontroller.

Device Interfacing and IEEE 1451

IEEE 1451 [50] is an upcoming standard for smart sensors and actuators. This standard defines a universal interface at the connection terminal (e.g., where Atlas currently uses polarized headers or simple pins). In addition to the basic form factor of this connection, 1451 also specifies that the connection must contain a Transducer Electronic Data Sheet (TEDS). A 1451-compatible platform must be able to read the TEDS, which contains information defining what the connected device is and how to control or use it. This fits perfectly with Atlas's Plug-and-Play approach to sensor and actuator networks. However, as the standard is still fairly new and has seen almost no industry adoption, Atlas does not support IEEE 1451 at this time.

Wireless Interface Layer

The Wireless Interface Layer is a new concept being developed in the lab by another student, Ed Koush. In this scenario, sensors would be connected to a new platform, comprised of


42 an ultra-low-power microcontroller and wireless transceiver. These platforms would not operate on wired power or batteries; th ey use power scavenging techniques to store energy from sources such as solar power, radio frequency energy, or vibrations. These pl atforms would not be powerful enough to interact directly with the Atlas middleware. Instead, they would communicate with Atlas nodes running the Wireless Interface Layer. In essence, each low-power node is treated just like a sensor connected to an Atlas node. Other Layers Atlas is not limited to three layers. Additional layers could be added to provide extra processing power for computational expensive en cryption or digital sign al processing. Nodes could act as bridges between networks by stac king two or more Communication Layers. A data storage layer could be added to support nodes that run for ex tended periods without network connectivity. The power scavenging techniques me ntioned above could be implemented for the main Atlas platform. This is the versatility of Atlas’s modular, stackable architecture. Figure 4-1. Three-layered Atlas node, with a 6x3-wire Analog Sensor Interface Layer and a Wired Ethernet Communication Layer.


43 Figure 4-2. Atlas Processi ng Layer. ATmega128L micr ocontroller in center. Figure 4-3. Atlas Wired Ethernet Communica tion Layer with LANTRONIX XPort module. Figure 4-4. Atlas WiFi Communication La yer with DPAC Airborne module.


44 Figure 4-5. Atlas ZigBee Communication Layer with Cirronet ZigBee module. Figure 4-6. Atlas Programming and Debugging Boar d with dual serial communication ports. Figure 4-7. Atlas Universal Patch Antenna.


45 Figure 4-8. Atlas Analog Sensor – 8 Device Interface Layer. Figure 4-9. Atlas Analog Sensor – 32 Device Interface Layer. Figure 4-10. Atlas Digital Contact Sensor Device Interface Layer.


46 Figure 4-11. Atlas Servo Device Interface Layer. Figure 4-12. Atlas General Purpose Input/Output Device Interface Layer. Programming headers center left and right.


47 CHAPTER 5 ATLAS PLATFORM FIRMWARE uIP Firmware As mentioned in previous chapters, the first version of the Atlas platform used the Wired Ethernet Communication Layer based on the Crys tal LAN CS8900a network interface controller (NIC). Wired Ethernet was the easiest and most useful network medium to use in our smart house. However, using the CS8900a necessitated either writing or fi nding both a Transmission Control Protocol / Internet Prot ocol (TCP/IP) stack and a driver for the NIC. After reviewing several Open Source TCP/IP stacks for embedde d systems, we chose uIP, a light-weight implementation. Although uIP did not come with a driver for the CS8900a, referencing the drivers that were included and the data sheet for our NIC, we were able to get Ethernet communication working. While uIP was a beneficial springboard for wo rking with the Atlas platform, its limits became apparent as we began to add new Comm unication Layers. The uIP application structure made it difficult to circumvent the uIP internal s when a different Communication Layer was in use. Given this, and the occasional instability ex hibited by the TCP/IP stack, we moved to a new firmware base. Direct Firmware The move to a direct firmware was made while integrating the DPAC Airborne WiFi module into the Atlas platform. Because the module is an integrated device, with its own microcontroller, Atlas did not need to run its own TCP/IP stack or have a complex driver for the device. Instead, the DPAC module was connect ed to one of the Universal Asynchronous Receive/Transmit ports (UAR T1) of the ATmega128L on the Atlas Processing Layer.

Controlling the module and sending and receiving data was simplified to writing and reading messages from the serial port.

Since the code was entirely ours, adding new features to (and debugging) the firmware became much easier. We were able to create a unified firmware for all sensors and actuators by moving towards a Web-based node configuration tool (see Chapter 6). Separate versions of the firmware existed for the different Communication Layers, but all other node variations were handled by the configuration system. Using this system, we were able to add WiFi support with the DPAC module, as well as ZigBee support using the Cirronet module. We also found an integrated solution for Wired Ethernet, the LANTRONIX XPort. We replaced our CS8900a design with the XPort, which we found to be much more stable.

TinyOS Firmware

While the Direct Firmware approach worked for a while, it too began to show signs of problems. First, we were still maintaining separate firmware code bases for each Communication Layer. Second, as we began adding advanced features to the Atlas platform, such as filtering, query processing, and security layers, it became difficult to coordinate the various tasks the firmware had to perform. Finally, while our primary target for the Atlas platform has always been pervasive computing spaces, where the provided infrastructure makes ad-hoc networking an unnecessary complication, the fact that our sensor platform could not do mesh networking at all was seen as a weakness.

We attacked all three problems by beginning a third version of the Atlas firmware. This time, the code would be based on the TinyOS [51, 52] 2.0 source code. TinyOS is a light-weight, component-based, event-driven OS specifically designed for sensor networks. Like the Motes, it was invented at Berkeley as part of the Smart Dust project. It is written in nesC [53, 54], a C-language variant that uses traditional imperative-style coding for low-level implementation of

modules, usually corresponding to the functional blocks of specific hardware, and a declarative-style wiring language for connecting modules together at a high level to form applications.

While the scheduling model provided by TinyOS is simplistic, it is enough to resolve the issues we had in the previous firmware with trying to coordinate an increasing task list. Additionally, the modular coding style allows us to maintain a single code base for all Communication Layers. And because TinyOS is the operating system for the entire Mote family and related devices, there are several Open Source ad-hoc networking algorithms that will now be easy to port to the Atlas platform.

Moving to a TinyOS-based firmware required two main steps. First, the underlying TinyOS code had to be ported to run on the Atlas hardware. Second, the "application" side of the Atlas firmware, responsible for initializing the Communication Layer, establishing a connection to the middleware, configuring nodes, querying sensors, controlling actuators, and transmitting data, had to be implemented in nesC and linked into the final TinyOS program.

Since TinyOS is distributed with a platform definition for the Mica2 family of Motes, which use the same ATmega128L microcontroller as Atlas, porting the underlying system was not difficult. Mainly this involved changing clock rates and baud rates, and eliminating code for devices that are integrated into the Micas but are not provided or needed by Atlas.

Developing the Atlas application took far longer. This was almost entirely new code, as Atlas uses a common firmware to query and control all devices, with device-specific issues handled at a higher level in the middleware, whereas the Motes require users to develop their applications entirely on the low-level, distributed nodes.

TinyOS Base

The TinyOS kernel offers a simple, two-tier scheduler. The lower level consists of non-preemptive, first-in-first-out processes called tasks [54]. Each task runs to completion before the

next is scheduled, and the microcontroller can enter a sleep state if no tasks are waiting. From the nesC perspective, a task is a single function that takes no parameters and returns no value, and is annotated with the "task" keyword. Although one task cannot interrupt another, the TinyOS scheduler will preempt tasks to execute an upper-tier process, called an event [54]. Events are processes that handle system signals. These signals may be caused by hardware interrupts (timers, data, etc.) or software interrupts (nesC "signal" statements). Events (or, more accurately, event handlers), like tasks, are implemented as void functions in nesC, though they can take parameters.

TinyOS and applications built against it are composed of nesC modules. Modules are similar to objects in Object-oriented Programming (OOP), though by default the modules are singletons and multiple instances cannot be created. A module is defined by the interfaces it uses, and the interfaces it provides.

Interfaces in nesC contain two kinds of structures: commands and events [54]. Commands are functions that the module provides (again, just as in OOP). Events, as mentioned above, are signals that the module can send out into the system. A module that provides a particular interface must implement every command listed by that interface. A module that uses an interface must implement an event handler for every event listed by that interface. If more than one module uses the same interface, each module must implement an event handler, and all of the handlers will be called if the event is signaled, though TinyOS provides no guarantees about the order in which the event handlers will execute. Both commands and events are, by default, synchronous, meaning they cannot be interrupted by other events. If commands or events are explicitly defined (in the interface) as asynchronous, however, then any implementation can be preempted.


51 Beyond the scheduler and programming model, the TinyOS core provides a software abstraction of the ATmega128L hardware. This includes module wrappers around components such as the hardware timers, seri al ports, and general data pins. Atlas Platform Definition In addition to the core system, TinyOS applic ations are built against a platform definition. TinyOS ships with platform definitions for the Mica family of motes (Mica, Mica2, Mica2Dot, and Micaz [55]), Telos, and the Intel mote, among others. Each pl atform definition must include a hardware.h file that links in the correct microcontro ller driver, and defines the clock speed and baud rate for the device. The platform definition also requires a PlatformC.nc nesC module. This module must exist, as it is used by the TinyOS ke rnel, and contains any initialization routines that must run when the de vice is booted. Finally, a .platform file is required by the build system, and consists of a series of include statements, lis ting the directories that contain source code for the platform. Atlas is a modular architecture, and the plat form definition is larger than the typical TinyOS project. In addition to the required files mentioned above, th e Atlas definition contains a number of new modules. PlatformSerial0C.nc and PlatformSerial1C.nc expose and isolate the UART0 and UART1 connections of the microcontroller. SerialCommC.nc is a generic module, which, in nesC, indicates that multiple instances ca n be created. It defines an interface for serial communication, and provides support for buffere d, multi-character writing to a UART. The module also provides both synchronous (blo cking) and asynchronous (event-driving, nonblocking) reading and writ ing. The Atlas platform cr eates two instances of SerialCommC , binding one to PlatformSerial0C and the other to PlatformSerial1C . Other features of the microcontroller are ab stracted by modules in the Atlas platform definition. Several modules are used to provide an interface to the ATmega128L’s hardware


52 counters, supporting 128 virtual timers [56] and alarms from the device’s three physical counters. The Atlas includes support for both even t-driven and busy-waiting (blocking) timers. The platform definition also includes modules to support the non-volatile memory of the microcontroller (where node configurations are stored), and the analog-to-digi tal converter and digital data pins of the device (used to read sensor data and control actuators). In addition to the microcontroller features, the Atlas platform de finition contains the modules that manage the sw appable Communication Layers. AtlasCommC.nc is the abstract communication model that is exposed to higher levels of th e Atlas application. AtlasCommC implements a number of communica tion-related commands, such as “init” (sets th e node’s IP address and network settings, and initializes th e Communication Layer), “c onnect” (establishes a TCP/IP connection with the provided IP address a nd port number), and “send” (transmits a string over the established connection). AtlasCommC also defines events, such as “receivedByte” (automatically signaled when an incoming byte is received from the TCP/IP connection) and “commDisconnect” (signaled when the TCP/IP c onnection is dropped). Any module that uses AtlasCommC (such as the main Atlas application modul e) must implement the handlers for these events. As an abstract module, the commands defined by AtlasCommC are not fully implemented by that module. Instead, it makes use of severa l concrete modules. As of now, these are AtlasCommXportC.nc , AtlasCommDpacC.nc , and AtlasCommCirronetC.nc . These are the modules for the wired Ethernet, 802.11b WiFi, and ZigBee Communication Layers. Each of these modules implements the same commands as AtlasCommC . Therefore, when an Atlas application calls AtlasCommC.send(“Hello”) , AtlasCommC first detects which Communication


53 Layer is in use. If it is the Ethernet layer, AtlasCommC calls AtlasCommXportC.send(“Hello”) . If the WiFi module is in use, AtlasCommDpacC.send(“Hello”) will be called instead. In this architecture, an outgoing byte transmitte d from an application is relayed first to AtlasCommC , then passed to the appr opriate concrete Communica tion Module. The concrete module sends the byte (and any nece ssary control codes) through the SerialCommC that wraps PlatformSerial1C . This data is transmitted by the ATmega128L over UART1 to the Communication Layer hardware, which sends the byte over the TCP/IP connection. Incoming data from the TCP/IP connection passes th rough the Communication Layer hardware, to UART1, to PlatformSerial1C , to the SerialCommC . The data is then relayed directly to the abstract AtlasCommC , avoiding redundant event handler implementation in the various concrete modules. AtlasCommC sends the incoming byte to the current Communication Module. The concrete module determines if the byte is a cont rol command to the hardware. If so, it consumes the byte and performs the operation. Othe rwise, it sends the byte back to AtlasCommC (by signaling an event), which then passes the byte up to the top-level application (also by signaling an event). Fig. 5-1 and Fig. 5-2 provide diagrams of this data flow for data sent by the node and by the middleware. This data marshalling scheme results in the abstract AtlasCommC needing only read access to UART1, and the concrete modules needing only write access. To avoid duplicate event handlers and possible conflict, the CommBridgeC.nc module wraps the UART1 SerialCommC . CommBridgeC provides two interfaces: CommBridgeControl , which wraps the UART-reading events, and CommBridgeModule , wrapping the UART-writing commands. AtlasCommC then uses the CommBridgeControl interface, while the concrete modules use CommBridgeModule . The Atlas platform definition links CommBridgeC as the interface provider for the abstract and

concrete modules, but these modules will only have access to the structures specified by the interface they use.

The platform definition also includes the AtlasDebugC module, a wrapper around the SerialCommC instance for PlatformSerial0C. It provides access to the UART-writing commands while hiding the UART-reading events. This provides applications with a simple module to use for writing debugging and testing messages to the serial port without having to implement event handlers for incoming data (which would not appear on a write-only connection).

These modules provide all the necessary functionality to operate the Atlas node hardware. They do not, however, query sensors, control actuators, or link the nodes into the larger Atlas framework. The platform definition provides a base that facilitates the development of applications that will run on the nodes. Any kind of application could be created, such as a simple number cruncher or a stand-alone, ad-hoc sensor network service. The Plug-and-Play operating system that manages connected devices and communicates with the Atlas middleware is just another application, though obviously it is critical in terms of enabling programmable pervasive computing spaces.

Atlas Application

Unlike the large platform definition, the Atlas application consists of only three modules. AtlasConfigC.nc contains routines for reading and writing the configuration data for each node. AtlasConfigC uses the AtlasEEPROMC.nc module from the platform definition to access the 4 KB non-volatile memory provided by the ATmega128L. The basic Atlas configuration file contains the unique node identification string and IP settings for the node and the middleware server, including options such as Dynamic Host Configuration Protocol (DHCP) support and the WiFi Service-Set Identifier (SSID). This file also contains the device map, an array indicating the types of devices connected to the platform. Similarly, the service driver references for the various attached devices are also stored.
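For illustration, the sketch below shows one plausible way the middleware could represent this information after a node uploads its configuration during the handshake. The class and field names are hypothetical, not part of the Atlas API, and the on-node copy lives in the ATmega128L's EEPROM rather than as a Java object.

    import java.util.List;

    /**
     * Hypothetical middleware-side view of a node configuration, mirroring the
     * fields described in the text. Names are illustrative only; the actual
     * representation is defined by the Atlas middleware, and the node itself
     * stores this data in EEPROM through AtlasConfigC.
     */
    public class NodeConfiguration {

        private final String nodeId;                 // unique node identification string
        private final String nodeAddress;            // IP settings for the node
        private final String middlewareAddress;      // IP address of the middleware server
        private final boolean useDhcp;               // Dynamic Host Configuration Protocol option
        private final String wifiSsid;               // Service-Set Identifier (WiFi layer only)
        private final int[] deviceMap;               // one entry per connector: type of attached device
        private final List<String> driverReferences; // service driver bundles for the attached devices

        public NodeConfiguration(String nodeId, String nodeAddress, String middlewareAddress,
                                 boolean useDhcp, String wifiSsid, int[] deviceMap,
                                 List<String> driverReferences) {
            this.nodeId = nodeId;
            this.nodeAddress = nodeAddress;
            this.middlewareAddress = middlewareAddress;
            this.useDhcp = useDhcp;
            this.wifiSsid = wifiSsid;
            this.deviceMap = deviceMap;
            this.driverReferences = driverReferences;
        }

        public String getNodeId() { return nodeId; }
        public int[] getDeviceMap() { return deviceMap; }
        public List<String> getDriverReferences() { return driverReferences; }
    }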

The main Atlas application is defined in the AtlasC.nc module. AtlasC uses the TinyOS Boot interface, and provides the event handler that is called once TinyOS has booted and is ready to run the application. The application uses AtlasConfigC to load the node configuration file. AtlasC uses this configuration and the AtlasCommC module to initialize the Communication Layer and establish a TCP/IP connection to the middleware. The application then performs a handshake with the server, and uploads the service bundle references for attached devices. This protocol is shown in Fig. 5-3.

After the node has registered with the middleware framework, it loops, waiting for commands. Incoming transmissions are forwarded to and parsed by the CommandProcessorC.nc module. This module encapsulates the assembling, CRC verification, and recognition of messages, breaking valid messages down into commands and parameters. Recognized commands are executed through callback functions in AtlasC.

Some commands perform one-time actions. For example, the node can receive a command to turn on a particular actuator, or to produce one reading from a sensor. Other commands require repeated execution. The primary command of this type is Subscribe, which instructs the node to send continuous readings from a particular sensor (or to transmit over some interval, or to transmit any time the value changes by more than some threshold, etc.). These commands result in new tasks being posted, allowing for repeated, encapsulated execution that will not block the command-recognition process (as this process is initiated by an event, and event processes can interrupt tasks). Because the current task must run to completion before another task can start, repeated execution is not performed with loops inside the task definition. Rather,


56 each task ends by checking if more work is require d. If so, the task is reposted. This allows other waiting tasks a chance to run. The module and component diagram of the comple te firmware architecture is provided in Fig. 5-3. Table 5-1 describes the Atlas Interf aces defined in the firmware, and Appendix A provides an example of the nesC code as used in the Atlas firmware. Table 5-1. NesC Interfaces de fined in the Atlas firmware. Interface Structure Type Description CommandRunner runPing command Executes 'PING' command from middleware. runRequestConfigDetailcommand Executes 'REQUEST_CONFIG_DETAIL' command from middleware. runRequestConfig command Executes 'REQUEST_CONFIG' command from middleware. runReset command Executes 'REBOOT' command from middleware. runWriteConfig command Executes 'WRITE_CONFIG' command from middleware. runFakeData command Execute 'FAKE_DATA' command from middleware. CommandProcessor initCipher command Initializes the incoming and outgoing RC4 ciphers. encryptByte command Encrypt byte using outgoing cipher. decypherByte command Decrypt byte using incoming cipher. processCommand command Parse and execute command string. AtlasComm init command Forward initialize command to attached communication module. connect command Forward connect command to attached communication module. reset command Forward reset command to attached communication module. send command Forward send command to attached communication module. sendAsync command Forward sendAsync command to attached communication module. sendByte command Forward sendByte command to attached communication module.


57 Table 5-1. Continued. Interface Structure Type Description AtlasComm commReady event Signals application when communication module completes initialization. commDisconnect event Signals application when TCP/IP connection is broken. sendComplete event Signals application after last character of string transmitted via sendAsync is sent. receivedByte event Signaled when character is received by communication module. AtlasCommMod init command Init ialize communication module. connect command Create TCP/IP connection to provided address and port. reset command Reset communication module. send command Transmit string over TCP/IP connection (blocking). sendAsync command Transmit string over TCP/IP connection (non-blocking). sendByte command Transmit single byte over connection (blocking). completeSend command Notifies communication module that a non-blocking write has completed. doReceiveByte command Receive character from physical communication layer via abstract communication module. commReady event Signaled after concrete module completes initialization. commDisconnect event Signaled when TCP/IP connection is broken. sendComplete event Signaled when last character of string transmitted via sendAsync is sent. receivedByte event Signaled when byte received from communication layer should be delivered to the application (not a control command to the communication module).


58 Table 5-1. Continued. Interface Structure Type Description CommBridgeControl sendComplete ev ent Signaled when asynchronous write has completed, for read-only access to communication layer. receivedByte event Signaled when character arrives from physical communication layer, for read-only access to layer. CommBridgeModule send command Wrapper on blocking write command, for write-only access to communication layer. sendAsync command Wrapper on non-blocking write command, for write-only access to communication layer. sendByte command Wrapper on blocking, singlecharacter write command, for write-only access to communication layer. AtlasDebug write command Blocking write command to debug port of node hardware. writeAsync command Non-blocking write command to debug port of node hardware. writeByte command Blocking, single-byte write command to debug port of node hardware. SerialComm send command Non-blocking write command to ATmega128 UART device. sendBlocking command Blocking write command to ATmega128 UART device. sendByteBlocking command Blocking, single-byte write command to ATmega128 UART device. sendComplete event Signaled when non-blocking UART write command completes. receivedByte event Signaled when UART hardware fires interrupt for incoming character. AtlasEEPROM readBlock command Reads specified block of memory from ATmega128 EEPROM. writeBlock command Writes data to specified block of ATmega128 EEPROM. DelayTimer delayMilli command Blocks processing for specified number of milliseconds.


59 Figure 5-1. Communication path for data transmitted from Atlas node to middleware.


60 Figure 5-2. Communication path for data transmitted from Atlas middleware to node.


61 Figure 5-3. Component and module diagra m of TinyOS-based Atlas firmware.

CHAPTER 6
ATLAS MIDDLEWARE

As mentioned in the introduction, while "middleware" usually refers to a framework sitting between an operating system and an application layer, programmable pervasive spaces require a middleware between the physical world and an application layer. Thus Atlas nodes and the firmware that operates them are all considered part of our Atlas middleware. That said, the majority of our middleware framework operates on a stand-alone personal computer.

Open Services Gateway Initiative Middleware

The Open Services Gateway Initiative (OSGi) is a Java-based framework that provides a runtime environment for dynamic, transient service modules known as bundles. It provides functionalities such as life cycle management as well as service registration and discovery that are crucial for scalable composition and maintenance of applications using bundles. Designed to be the "universal middleware," OSGi enables service-oriented architectures, where decoupled components are able to dynamically discover each other and collaborate. OSGi is synergistic with pervasive computing, and is a key component of the Atlas middleware, hosting the majority of the software modules.

OSGi bundles are small programs consisting of three main source components and a descriptive Manifest. The source components are the interface, the implementation and the OSGi activator. The interface represents a service contract, which describes the external behavior of and available services provided by the bundle. A bundle can provide different services by offering multiple interfaces. The implementation realizes the behavior defined by the interface. The Activator implements an OSGi-specific interface that binds the otherwise regular Java classes to the OSGi framework, which manages the life cycle of the bundle. The Manifest is a file that is both machine- and human-readable, which specifies properties of the bundle such as version, author, and imported and exported packages, and is used to identify bundles and resolve dependencies.
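To make this anatomy concrete, the following minimal sketch shows the three source components of a bundle: a service interface, an implementation, and an Activator that registers the service when the framework starts the bundle. The TemperatureSensor service is a hypothetical example rather than an actual Atlas bundle; BundleActivator, BundleContext, and registerService are standard OSGi API.

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceRegistration;

    /** Hypothetical service contract: the "interface" component of the bundle. */
    interface TemperatureSensor {
        double readCelsius();
    }

    /** The "implementation" component, realizing the behavior defined by the interface. */
    class SimulatedTemperatureSensor implements TemperatureSensor {
        public double readCelsius() {
            return 22.5; // placeholder reading, for illustration only
        }
    }

    /**
     * The "Activator" component: binds the otherwise regular Java classes to the
     * OSGi framework, which calls start() and stop() to manage the bundle's life cycle.
     */
    public class Activator implements BundleActivator {

        private ServiceRegistration registration;

        public void start(BundleContext context) {
            // Register the implementation under the interface name so that other
            // bundles can discover it through the OSGi service registry.
            registration = context.registerService(
                    TemperatureSensor.class.getName(),
                    new SimulatedTemperatureSensor(),
                    null);
        }

        public void stop(BundleContext context) {
            registration.unregister();
        }
    }

The Manifest (not shown) names the Activator class and declares the packages the bundle imports and exports, which is how the framework identifies the bundle and resolves its dependencies.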

Each sensor or actuator is represented in the Atlas middleware as an individual OSGi device service. This is a natural choice, because the life cycle management capability of the OSGi framework is designed to handle the dynamic nature of a typical smart space, where devices come and go as the space evolves. The OSGi framework also provides a discovery service, which allows applications to find and use other existing services. But unlike other discovery services such as Jini and Universal Plug-and-Play (UPnP), which are designed for distributed environments, OSGi provides a single centralized runtime environment.

The applications and services are also represented as bundles in the OSGi framework. These bundles' dependencies on the basic device services are administered by the life cycle management and service discovery capabilities of the framework, enabling the composition and activation of more complex applications and services. All the common services supported in the Atlas middleware are also implemented as bundles running on the OSGi framework.

Using the Atlas platform, sensors and actuators are connected to Atlas nodes. The nodes are powered on, and connect to the middleware server. At this point, if the nodes are not already configured, a Web-based configuration tool allows users to select any node in the network and assign the devices that are connected to it. Assigning the devices accomplishes two things. First, the node configures itself to handle the device properly: analog sensors will cause the node to periodically sample the analog-to-digital converter, servos will cause the node to listen for commands from the middleware telling it to change the servo's position, etc. Second, the assignment tells the middleware to instantiate a new copy of the appropriate driver bundle for that device, and map it to the Atlas node to which it is connected. Then any commands sent by the driver bundle will reach the device, and any data produced by the device will reach the bundle.

The Atlas platform thus turns physical sensors and actuators into software services. The driver bundle that represents a sensor or actuator runs in the Atlas middleware. Using OSGi's service discovery features, other applications running in the middleware can obtain references to the sensor and actuator bundles and call the methods they provide.

Atlas Middleware Architecture

In addition to the services and execution environment provided by OSGi, the Atlas Middleware consists of several special bundles running in the framework. At a minimum, four components are needed to bring new Atlas nodes online: Network Manager, Configuration Manager, Bundle Repository, and the Atlas Developer Application Programming Interface (API).

Network Manager

The Atlas Manager bundle contains several classes that are instantiated and started when the bundle activates. The first of these is Network Manager. Network Manager creates a new Network Listener, which by default is a Transmission Control Protocol (TCP) server, listening for connections on the configured Atlas port (usually 7000). Network Listeners for the Universal Serial Bus (USB) network will also be created. When an Atlas node comes online, it performs some negotiation with the middleware by contacting the Network Listener. As shown in Fig. 6-1(1), after the initial handshake, the Network Manager spawns a Communicator Thread that will exclusively handle all the network communications with this particular node from then on. A Node Service Handler (NSH) is also created, which registers the various devices connected to this node as OSGi services (as shown in Fig. 6-1(2)) and handles the routing of commands and data between the service bundles and their respective devices. Applications are then able to locate and use these services provided by the new devices (Fig. 6-1(3)).
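The sketch below illustrates both ends of this arrangement under stated assumptions: publishing a device service when a node's devices are registered, and an application later discovering that service through the framework and calling it. The PressureSensor interface, the class names, and the "atlas.node.id" property are hypothetical stand-ins, not actual Atlas API; registerService, getServiceReference, and getService are standard OSGi calls.

    import java.util.Hashtable;

    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceReference;

    /** Hypothetical device service contract for a pressure sensor driver bundle. */
    interface PressureSensor {
        int readRaw();        // latest raw reading from the physical sensor
        String getNodeId();   // identifier of the Atlas node hosting the sensor
    }

    /**
     * Producer side: something like the Node Service Handler could publish one
     * service per connected device. The property name is illustrative only.
     */
    class DeviceRegistrar {
        void publish(BundleContext context, PressureSensor driver, String nodeId) {
            Hashtable<String, Object> props = new Hashtable<String, Object>();
            props.put("atlas.node.id", nodeId);
            context.registerService(PressureSensor.class.getName(), driver, props);
        }
    }

    /**
     * Consumer side: an application bundle locates the device service through the
     * framework's discovery mechanism and calls its methods directly.
     */
    class PressureMonitorApp {
        void run(BundleContext context) {
            ServiceReference ref = context.getServiceReference(PressureSensor.class.getName());
            if (ref == null) {
                return; // no pressure sensor service registered yet
            }
            PressureSensor sensor = (PressureSensor) context.getService(ref);
            System.out.println("Node " + sensor.getNodeId() + " reads " + sensor.readRaw());
            context.ungetService(ref);
        }
    }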

The Network Manager is responsible for keeping track of all the nodes in the network, both existing and newly joined, and the various device services offered. When a node joins the network, it handshakes with the Network Manager, then uploads its configuration. At this point, control is passed to the Configuration Manager, which registers the device services and activates the node. The devices are then ready to be used by applications and other services.

Network Viewer, a part of the Atlas Web Configuration and Administration Tool, provides a front-end to the Network Manager, allowing users to view the current network status. It displays the list of active nodes and a short summary of their connection statistics and histories. By clicking on a particular node, users are able to view details such as configuration parameters, the properties of the sensors or actuators connected, and the services offered.

Configuration Manager

The Configuration Manager bundle is used to provide a web-based configuration interface, much like that of a residential router, to all the Atlas nodes currently registered in the network. The service also provides tools for the Network Manager to verify the configuration data sent during the initial handshake as a node connects. The Configuration Manager encapsulates all the methods for recording and manipulating node settings. When a new node uploads its configuration file, the Network Manager passes it on to the Configuration Manager, which then parses the file and accesses the Bundle Repository to get the references for the service bundles required by the connected devices. It then loads these service bundles into the OSGi framework, thereby registering the different device services associated with the node.

The Configuration Manager allows a user to view and modify node configurations through the Atlas Web Configuration and Administration Tool. Through this web interface a user is able


66 to modify various node settings and select devi ces that are to be connected to a node. This enables nodes to be programmed over the netw ork without requiring them to be attached to microcontroller programming hardware . After the new settings are sent to the node, it reboots and the new devices automatically become ava ilable as services in the OSGi framework. Configuration Manager is en tirely the Master’s thesis work of Raja Bose. Bundle Repository The Repository bundle manages the device driver bundles. It stores a generic bundle for each particular type of sensor or actuator, and allows Network Manager to instantiate and start a new copy each time one of those particular devices comes online. The Bundle Repository eliminates the need for each node to locally store bundles. Instead, the Configuration Manager retrieves references to the bundles from the B undle Repository when new nodes join. It enables nodes with limited storage to support a larger numb er and greater variety of devices. This also simplifies the process of keepi ng service bundles up-to-date. U pdated bundles can be placed in the Bundle Repository, and nodes that provide this service will au tomatically use this latest version when the node comes online. The Atlas Web Configuration and Administra tion Tool provides a front-end to the Bundle Repository, enabling users to view and modify its contents. It lists the available service bundles, the physical devices they repres ent and other details such as the version numbers and dates uploaded. Users are able to add, delete and update service bundles, and s ynchronize with other repositories. The local Repository in one middleware also is able to synchronize with a remote Repository in another. Our lab maintains a globa l repository at SensorPl atform.org. We hope to promote an Open Source community-like effort around the development and maintenance of device driver bundles. This is a critical step in creating programmable pervasive spaces. A

community effort to develop a public library of sensor and actuator drivers will reduce the time people spend reinventing the wheel, having to research a device's characteristics and operation. If the device is already in the library, the bundle can be downloaded and used in any smart space without modification, and without having to learn what low-level voltages and signals are necessary to use the sensor or actuator.

Atlas Developer Application Programming Interface

The Atlas Developer Application Programming Interface (API) provides interfaces and base classes (shown in Table 6-1) for third parties to develop device and application service bundles on top of Atlas.

Programmers wishing to write their own service bundles for sensors or actuators are required to extend the AtlasService base class provided by the API. This class provides low-level functionality common to all the service bundles. It also hides many system-level details, allowing programmers to concentrate on writing code specific to the device. The Developer API promotes the proliferation of device service bundles. This, combined with the Bundle Repository, encourages community-based development covering a wide range of new sensors and actuators.

Using the AtlasClient interface to develop pervasive applications promotes standardized, streamlined interactions between the application and the middleware. Its unified interface allows for rapid development of complex applications over a large set of widely diverse devices.
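As a sketch of what a third-party driver might look like, the example below extends a stub standing in for the AtlasService base class. The stub and its onReading callback are assumptions made so the example is self-contained; the actual methods of AtlasService and AtlasClient are defined by the Developer API (Table 6-1) and are not reproduced here.

    /**
     * Hypothetical stand-in for the AtlasService base class. The real class is
     * provided by the Atlas Developer API; the single callback shown here is an
     * assumption made purely so this sketch compiles on its own.
     */
    abstract class AtlasService {
        /** Assumed to be called by the middleware when the node forwards a new raw reading. */
        protected abstract void onReading(int rawValue);
    }

    /**
     * Sketch of a device service bundle for a force sensing resistor. Only the
     * device-specific logic lives here; connection handling, routing, and other
     * system-level details are hidden by the base class.
     */
    public class PressureSensorService extends AtlasService {

        private volatile int lastRaw;

        @Override
        protected void onReading(int rawValue) {
            lastRaw = rawValue; // 10-bit ADC value, 0 to 1023
        }

        /** Device-specific convenience method exposed to applications. */
        public double getForceFraction() {
            return lastRaw / 1023.0;
        }
    }

An application written against the AtlasClient interface would then locate such a service through the framework and call getForceFraction() without needing to know which node or Interface Layer the physical sensor is attached to.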

Application Development Using Atlas

Atlas was designed to work with many different programming models. As Fig. 3-2 showed, applications can make direct use of the service bundles running in the middleware, or more complicated models can be plugged into the basic service framework. For example, a Context Module could provide a layer of abstraction over the raw services, allowing for a context-driven programming model, where knowledge and behavior are explicitly defined using context transitions, instead of being buried inside separate, potentially conflicting applications.

Atlas Service Authoring Tool

The Atlas middleware includes a simple and easy-to-use service authoring (programming) tool that extends the capability of the popular open-source integrated development environment (IDE) Eclipse in the form of a plug-in. The Atlas Service Authoring Tool, created by another student in the lab, Hen-I Yang, allows programmers to quickly browse the sensors, actuators and other services available in a pervasive environment, use them to develop new services and applications or modify existing ones remotely, and deploy them back to the pervasive space. Table 2 lists the functional requirements of the tool, which we established based on our experience with the Gator Tech Smart House. The software employs a visual "design by selection" approach for service design, where programmers can drag-and-drop services from the list and the tool automatically packages the necessary components and creates the handles that can be invoked to access the services. The tool also provides source code templates so programmers are free to concentrate on implementing the application logic rather than the details of writing OSGi bundles.

To design and develop a new application, programmers simply perform the following procedure:

1. Connect to a remote pervasive environment that is based on the Atlas middleware. Alternatively, connect to a stand-alone Atlas Bundle Repository.
2. Edit the manifest file for the application (if necessary).
3. Browse and select from the list of sensors, actuators and services those entities that will be used in the application.
4. Add application logic implementation to the source code generated by the tool.
5. Test and debug locally.
6. Deploy the application back to the pervasive environment with a single touch of a button.

The Atlas Service Authoring Tool supports remote development and deployment; Eclipse and the Atlas plug-in are intended to be installed on a machine other than the centralized Atlas middleware server. We believe this powerful feature is the catalyst for future growth of pervasive computing. As these environments become more prevalent, it will be infeasible to have programmers visit the numerous or inaccessible locations where applications will be deployed. In a smart retirement community of 100 smart homes, the central office should be able to push any number of new third-party services out to their members without visiting each house. This is easily accomplished with the Atlas Service Authoring Tool.

Communication Modules

Instead of implementing a single communication manager that oversees all the communications within and connected to the Atlas middleware, the Atlas middleware includes an array of communication modules for effective communication in a heterogeneous environment. Given the number and diversity of the entities in a pervasive environment, it is unrealistic to expect one single communication protocol to work for the vast number of diverse entities from different vendors. Until a publicly agreed standard has been adopted for pervasive computing environments, we offer various communication modules conforming to each of the different protocols.

Each communication module provides a basic carrier for messages based on a particular standard or proprietary protocol. The modules are implemented only to serve as carriers that deliver and receive messages; they do not check the validity of the content or its conformance to any higher-level choreography. All the services in the environment can invoke one of these communication modules to conduct internal or external communications.
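As a minimal sketch of such a carrier, the example below delivers an opaque message to an external endpoint over HTTP and returns only the response code, without interpreting the content. The class and method names are hypothetical, and the example uses only the standard java.net.HttpURLConnection API; a real Atlas communication module would additionally be packaged and registered as an OSGi bundle.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    /**
     * Illustrative carrier-only communication module: it delivers a message over
     * HTTP and reports the response status, leaving validation and higher-level
     * protocol concerns to the services that use it. Names are hypothetical.
     */
    public class HttpCarrierModule {

        /** Deliver an opaque message to an external endpoint via HTTP POST. */
        public int deliver(String endpointUrl, String message) throws Exception {
            HttpURLConnection connection =
                    (HttpURLConnection) new URL(endpointUrl).openConnection();
            connection.setRequestMethod("POST");
            connection.setDoOutput(true);

            byte[] body = message.getBytes(StandardCharsets.UTF_8);
            try (OutputStream out = connection.getOutputStream()) {
                out.write(body); // the module does not inspect or validate the content
            }
            int status = connection.getResponseCode();
            connection.disconnect();
            return status;
        }
    }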

Currently, the Atlas middleware supports the following communication modules. For internal communications, services can use the OSGi wiring API for inter-bundle communication. This module instantiates a producer-consumer relation between data generators and users. For external communications, the Atlas middleware currently supports three modules realizing three different protocols: the Telex network (TTY) client, the hypertext transport protocol (HTTP) client, and the web service interface module. These modules allow the services and entities residing in the server to communicate with non-Atlas-based sensors and actuators, as well as with external services and systems such as existing business applications.

For incoming communications, in addition to the TTY console and HTTP server console that already come with the OSGi reference implementation by default, we have also built a service that awaits incoming messages and multiplexes them to different service bundles based on the identifiers associated with each message.

Performance Evaluation

To investigate the performance and scalability of the Atlas middleware, several experiments were designed and conducted. In particular, we ran the Atlas middleware on a low-end desktop computer, injected multiple sensor device services into the middleware, and sent simulated sensor readings from a separate desktop computer. The first experiment examined the issue of scalability and investigated how the number of sensors affects the resource usage of the middleware. The second experiment explored the performance of the middleware as a typical data-consuming application is connected to an increasing number of data streams. We describe in the following subsections the setup and results obtained from these experiments.

Experiment Setup

The experiments were conducted with the Atlas middleware running on a Pentium 4 1.70 GHz/256 KB cache Dell Dimension 8100 machine with 256 MB of memory. The operating


71 system used was Ubuntu 6.10. The Java SDK 5.0 Update 8 was used to run Knopflerfish 1.3.5, an open-source OSGi framework implementation. The AtlasSim utility (Fig. 6-6) is used to te st the Atlas middleware by simulating a network of Atlas nodes. The simulator is able to spawn multiple nodes, each in its own thread. Only the communication with the middlew are server is simulated, not the hardware components themselves. Each node spawned can be individua lly customized at any time, specifying the number and type of device serv ices provided, the data sent, an d the sampling frequency (Fig 67). The main application thread uses a single tim er to step through the simulation. AtlasSim uses the default communication medium (wired Ethe rnet, WiFi, etc.) provided by the machine to reach the specified middleware server host. The maximum number of nodes that can be spawned depends on the speed, memory, and network stack of the simulation machine. AtlasSim was written in Delphi for Windows, and can be compiled using Borland Kylix for Linux. For both the experiments, we simulated a fo rce sensing resistor from Interlink. The InterlinkPressureSensor device service bundle is initiated an d activated to continuously forward readings from the simulated sensors at the rate of one reading per second. Atlas Middleware Scalability In this experiment, we investigated the scal ability of the middlewar e, in terms of the maximum number of sensor device services th at can be supported while running on a low-end desktop machine with limited resources. We further monitored the performance impact on processor load and memory usage as we simulated more and more sensor devices. In this experiment, we conti nuously instantiated and activated new sensor device services in the middleware, and observed the impact of the number of instantiated sensors on the performance of the centralized server hosting the middleware. For baseline comparison, we observed that before we star t the OSGi framework on the server, the user-level central
processing unit (CPU) load was 1.3%, the system-level CPU load was 0%, and the amount of memory in use was 215.42 MB. Once the OSGi platform was initiated and before any sensor device services were activated, the user-level and system-level CPU loads both remained unchanged, but the amount of memory in use jumped to 243.44 MB.

In Fig. 6-8 we observe that both the system-level and user-level CPU loads increase linearly as the number of connected sensors increases, although the user-level CPU load climbs at a much faster rate than the system-level CPU load. This is because the device services run on top of the OSGi framework at the user level, so the user-level CPU load grows in proportion to the total number of sensors. The data stream from each simulated sensor is delivered through the network, resulting in a much smaller, but also linearly proportional, growth in system-level CPU load. The experiment stops at 500 sensors because the system becomes sluggish and unresponsive, making it difficult to add more sensors; this demonstrates that 500 is the maximum number of sensors the Atlas middleware can support when each Atlas node is connected to only a single sensor.

Fig. 6-9 shows how memory usage is affected by the number of sensors connected. As soon as the first sensor is connected and activated, memory usage jumps from 243.44 MB to between 251 and 252 MB. Memory usage then fluctuates within this range as more and more sensors are connected. Most interesting is that a typical pattern of thrashing is observed at around 300 sensors. Memory usage goes down from there, and system response becomes slower. Although the CPU load keeps increasing (as shown in Fig. 6-8), performance in terms of processing sensor readings does not improve. Memory usage continues to decline due to worsening thrashing until the system finally stops responding at 500 sensors.
In a variation of this experiment we simulated multiplexing Atlas nodes. We observed that when no application is interacting with these sensor device services, the Atlas middleware could support up to 133 Atlas sensor nodes, with each node connected to 32 sensors, before the CPU in the centralized server overloaded and the system thrashed. In other words, the Atlas middleware, when running on a low-end, resource-limited testbed PC with capabilities similar to typical set-top boxes or access points, can support up to 4256 sensors at a given time. In a typical smart house setting (such as the Gator Tech Smart House), this kind of scalability far exceeds the requirements of typical applications and services provided in a pervasive computing environment. Hence, the Atlas middleware is scalable with respect to a wide class of pervasive computing applications.

Performance under Zero-Load Data Streams

In this experiment, we set up a simple application that discovers all the basic sensor device services available in the system and subscribes to their data streams. This application does not perform any useful operations on the data gathered; it only checks the timestamp when new data arrives. The idea of this experiment is to capture the application-middleware interaction overhead resulting from instantiating and connecting to sensor data streams, which lets us investigate how many sensors a single typical application can support. In essence, this experiment examines the consumer side of the middleware (applications), whereas the experiments in the previous section examine the producer side (sensors).

For the baseline comparison of this experiment, after the OSGi platform and the application service bundle were initiated, and before any sensor device service was activated, the user-level and system-level CPU loads were 2% and 0.3%, slightly higher than without the application, and the amount of memory in use jumped to 251.64 MB, about 3% more than without the application.
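To make the setup concrete, the following is a minimal sketch of such a zero-load consumer, written as an OSGi bundle activator. The AtlasService and AtlasClient names follow the developer API summarized in Table 6-1; the interface stubs, method signatures, and class name used here are assumptions for illustration, not the actual Atlas middleware code.

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

/* Assumed minimal shapes of the Table 6-1 interfaces, for illustration only. */
interface AtlasClient  { void receivedData(String data); }
interface AtlasService { void Subscribe(AtlasClient client); }

/*
 * Zero-load consumer: discovers every sensor device service, subscribes to
 * its data stream, and does nothing with each reading except note its
 * arrival time.
 */
public class ZeroLoadSubscriber implements BundleActivator, AtlasClient {

    private volatile long lastArrival;

    public void start(BundleContext context) throws Exception {
        // Discover all registered sensor device services.
        ServiceReference[] refs =
                context.getServiceReferences(AtlasService.class.getName(), null);
        if (refs == null) {
            return;                     // no sensors registered yet
        }
        for (int i = 0; i < refs.length; i++) {
            AtlasService sensor = (AtlasService) context.getService(refs[i]);
            sensor.Subscribe(this);     // request the continuous data stream
        }
    }

    public void stop(BundleContext context) { }

    // Called by a device service bundle when a reading arrives.
    public void receivedData(String data) {
        lastArrival = System.currentTimeMillis();  // timestamp only, no processing
    }
}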
In Fig. 6-10, we overlay the plots of CPU load versus number of sensors for two scenarios: (1) when there is no application present, and (2) when an application is present and subscribing to continuous data streams from all the sensors. Observe that in the absence of an application, the CPU load increases slowly as the number of sensors increases, and barely reaches 30% with 500 active sensors. In contrast, when an application subscribes to the sensor data streams, the CPU load rises sharply and saturates when the number of sensors exceeds 75, but the system remains responsive until the number of sensors reaches 300.

The CPU load increases sharply because, in addition to the extra load created by each added sensor, the application must handle the data streams coming from all the existing sensors as well as the data from the new sensor. As the number of sensors increases, the application consumes more and more computing resources to handle the constant incoming streams of data. The CPU load reaches close to 100% at around 75 sensors, then drops back and hovers around 97% until the system stops responding at around 300 sensors. The system is able to support even more nodes after the CPU load peaks at 75 because the data streams from the sensors experience longer delays: more threads must be scheduled for the sensors, and longer time slices are needed for the subscribing application to handle all the incoming data streams. Data readings are queued, and some are eventually dropped due to timeouts and insufficient queue space before they can be processed by the application.

The addition of the application service does not have any significant impact on memory usage, as shown by the two highly overlapping trend lines (one with the application running, one without) in Fig. 6-11.

In addition to monitoring the CPU load and memory usage, we also measured two other important characteristics of the plug-and-play process: device detection time and data channel
initialization time. The device detection time describes the time from when the device service registers with the Atlas middleware to when the application detects its existence. We observed a typical device detection time of 1 ms, although 23 ms was also occasionally recorded, likely due to preemptive framework activities. The data channel initialization time covers the period during which the application transmits the data subscription message, waits for the request to reach and be processed by the node, and finally receives back the first data reading. The average data channel initialization time is 113.33 ms, with a large standard deviation of 195.97 ms. On average, it takes 117.77 ms for an application to detect, start interacting with, and begin receiving data from a sensor.

Table 6-1. Methods provided by the Atlas Developer Application Programming Interface
receivedData (AtlasClient): Data handler called by the service bundle when data arrives
addProperty (AtlasService): Add a property pair (key, value) to be associated with this service
removeProperty (AtlasService): Remove a property associated with this service
getProperties (AtlasService): Get all properties associated with this service
sendCommand (AtlasService): Send control commands (used by a service associated with an actuator)
getData (AtlasService): Pull data from the sensor associated with this service
Subscribe (AtlasService): Request data stream from sensor
Unsubscribe (AtlasService): Halt data stream from sensor
isSubscriber (AtlasService): Check if application is receiving data stream from a particular sensor
dataHandler (AtlasService): Data handler called by the middleware when data arrives for a particular service; must be implemented by the device service developer

Table 6-2. Primary functionalities provided by the Atlas Service Authoring Tool
Browse available entities and services: local machine, remote machine
Provide information on OSGi bundles: local bundles, remote bundles
Design by selection: local bundles
Semi-automatic environment setup: local machine, remote machine
Seamless integration with Eclipse tools (code completion, syntax highlighting, debugging, etc.): local IDE
Deploy new bundles back to smart space: local machine, remote machine
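As a usage illustration of the pull and actuator methods in Table 6-1, the sketch below reads a light level and closes the blinds when it passes a threshold. The interface stub is an assumed minimal shape, and the command string and threshold are hypothetical; only the method names getData and sendCommand come from the table.

/* Assumed minimal stub of the Table 6-1 service interface. */
interface AtlasService {
    String getData();                   // pull one reading from the sensor
    void sendCommand(String command);   // drive the associated actuator
}

public class ClosedLoopExample {
    /** Close the blinds whenever the measured light level passes a threshold. */
    public static void adjust(AtlasService lightSensor, AtlasService blinds) {
        int level = Integer.parseInt(lightSensor.getData().trim());
        if (level > 800) {               // hypothetical brightness threshold
            blinds.sendCommand("close"); // hypothetical command format
        }
    }
}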
Figure 6-1. Adding a new Atlas node to the network. 1) Node turns on, contacts middleware, is assigned a new Communicator and Node Service Handler. 2) Node receives configuration, middleware instantiates and starts a device service bundle for each device connected to the node. 3) Node begins processing, querying and transmitting sensor values, receiving and executing actuator commands. Middleware maintains mapping between device services and nodes, and applications and device services.
Figure 6-2. Knopflerfish OSGi framework running the Atlas middleware.
Figure 6-3. Atlas node web configuration page.
Figure 6-4. Atlas repository web configuration page.
Figure 6-5. Atlas Eclipse IDE plugin.
Figure 6-6. Screenshot of the AtlasSim application.
Figure 6-7. AtlasSim node configuration window.
Figure 6-8. Processor load (%) vs. number of sensors (user-level and system-level CPU load).
Figure 6-9. Memory usage (KB) vs. number of sensors.
Figure 6-10. Processor load (%) vs. number of sensors, with and without an application service. Blue line is with no application service running, pink line is with an application subscribing to sensor data.
Figure 6-11. Memory usage (%) vs. number of sensors, with and without an application service. Blue line is with no application service running, pink line is with an application subscribing to sensor data.
CHAPTER 7
ATLAS TRUST MODEL

Security and Privacy in Sensor Networks

For all the benefits sensor (and actuator) networks provide, especially in the realm of pervasive computing, there is one issue in particular that can seriously hinder their acceptance: trust. Will users trust that this giant, computerized information-gathering device can be secured so that third parties cannot access their personal data? Will the system respect the users' privacy, or even have a concept of what privacy is? A pervasive computing environment must be trustworthy; otherwise no user service, regardless of its usefulness, can be employed in a real-world scenario. To understand how to make a smart space trustworthy, we must first model the potential attacks against the system.

Attack Model for the Atlas Platform

Attacks on the trustworthiness of an Atlas network can be categorized by their physical proximity to the space, by their logical proximity to the various architectural layers of the space, and by the manner in which they degrade the space. There are three relevant physical proximity categories when dealing with an Atlas deployment: remote, neighboring, and internal. There are also three degradation categories: interception, insertion, and denial of service.

Remote Attacks

This attack category primarily consists of traditional computer security vulnerabilities. In spaces such as smart homes, the machine that hosts the Atlas middleware service framework will often have an always-on, broadband Internet connection. It can therefore be exposed to the same worms, viruses, and other malware that afflict regular home computers.

Additionally, Atlas networks may be relaying sensitive information to a trusted third party. This information could be intercepted at the network level, or be released by the third party,
intentionally or unintentionally. These are the remote attacks that do not require direct access to the Atlas deployment.

Neighboring Attacks

Neighboring attacks are those that originate from a position physically close to, but not inside, the smart space. The attacker has access to the communication within the Atlas network, but does not have physical access to the node hardware. In general, this involves an Atlas deployment that makes use of wireless Communication Layers (such as WiFi or ZigBee); neighboring attacks occur within range of the wireless signal. However, wired Atlas networks are also potentially vulnerable. For example, if a smart house uses power-line communication, where data is modulated onto the electrical wiring of the house, the network will be visible from any exterior sockets the house provides. Depending on the wiring from the utility company, the network may even be visible from the interior sockets of other nearby homes.

Internal Attacks

Internal attacks involve the inside of a space being compromised, for example, when a burglar has broken into a home. This obviously raises serious security concerns beyond the computer system, but in terms of the Atlas platform, an internal attack means that the nodes running in a space may have been physically compromised. Cipher keys or stored data may be read from the node, the firmware may be modified, or the node hardware itself might have been modified or replaced with other devices. Similarly, the middleware server controlling the network may also be compromised.

Interception Attacks

Interception attacks involve an unauthorized third party gaining access to the data produced by the Atlas network. The easiest vector for an interception attack is monitoring (or "sniffing") wireless traffic from a Neighboring attack position. An Internal attack position would
allow Interception of more secure network technologies, such as wired Ethernet. Successful Internal or Remote attacks against the middleware server could expose all data being produced by the deployment. Data being relayed to authorized third parties is also vulnerable to Remote Interception, through either technical or social attacks.

Insertion Attacks

The placement of unauthorized data or commands into the Atlas network is an Insertion attack. This could involve an attacker masquerading as the middleware, sending commands to a node to disrupt the operation of its connected devices. Alternatively, it could involve masquerading as a node, sending false sensor data to the middleware to alter the behavior of running services. Insertion attacks also include faking transmissions from the middleware to external applications that make use of the available Communication Modules in the framework, and vice versa.

Denial of Service Attacks

A Denial of Service (DoS) attack prevents some component of the Atlas deployment from functioning correctly or at full capacity. This may also involve an Insertion attack, for example, if fake "reboot" commands are continuously sent to a node. Denial of Service also includes techniques such as jamming, which prevents nodes within some area from sending or receiving data. In the domain of smart spaces, this category also includes attacks such as cutting power to the system, or creating a scenario in which nodes are forced to maintain a high duty cycle, depleting their batteries.

Attack Prevention

As previously mentioned, smart spaces must be trustworthy if they are to be commercially viable and acceptable to users. To be trustworthy, there must be, where possible, systems in place to identify and prevent or minimize the attacks modeled above. While at some point user
behavior will trump whatever safety mechanisms are in place, Atlas can be built upon a secure foundation and provide tools and support to help users make correct choices.

Fundamental Security

The machine that hosts the Atlas middleware service framework is the critical, authoritative central hub of an Atlas deployment. Common-sense security practices must be in place for this computer. Though the Open Services Gateway Initiative (OSGi) middleware can run on any operating system that offers a Java Runtime Environment, we traditionally use a modern, server-oriented Linux distribution. These are configured with the minimum necessary services, and the tools and practices to further secure the operating system are well documented. Services should run with as few privileges as necessary, and their access to unneeded system components should be denied. Security auditing of all services, including the Atlas middleware, is necessary to identify and eliminate potential Remote and Internal attack vectors, such as buffer overrun errors that allow arbitrary code execution. Strong passwords are necessary both for administrator access to the middleware configuration and for the machine itself.

If the server is connected to the Internet, a software firewall must be in place, and a hardware firewall is also recommended. Security updates to the operating system, OSGi implementation, Atlas middleware, and any other services running on the machine should be automatically downloaded and applied.

Preventing Interception and Insertion Attacks

Interception and Insertion are two attacks common to all sensor networks. Data encryption and signing are the standard solutions in conventional computer systems, but have posed problems in the domain of sensor networks [57]. First, the choice of cryptographic scheme is limited by the low computational power of the hardware nodes. Common ciphers such as the Advanced Encryption Standard (AES) [58] or Rivest-Shamir-Adleman (RSA) [59] are infeasible on microcontrollers
such as the ATmega128L. Second, because other sensor platforms rely on ad-hoc networks, where communication occurs only from node to node, each node must share a unique cipher key with every other node. With minimum key sizes of 256 bytes and 100 nodes, this requires 2.56 MB (256 × 100² bytes), far exceeding the standard 4 KB of non-volatile memory offered by most platforms. While additional storage could be added, this increases the cost of each node. Techniques to reduce the necessary storage are available, such as limiting the key sharing to small, intersecting cliques of nodes [60], but this exacerbates the already complex ad-hoc routing algorithms.

Fortunately, the infrastructure-based network topology of the Atlas platform solves this key-sharing problem. Nodes communicate only with the middleware server, meaning each node only needs to store the one unique key it shares with the middleware.

Atlas uses Rivest Cipher 4 (RC4) [61], a computationally inexpensive stream cipher. The 256-byte key is used to initialize an array containing a permutation of the values 0 to 255 (inclusive), which is used as the cipher generator for the algorithm. Each byte of a data stream is processed individually, encoding (or decoding) the data and evolving the permutation array. Each node maintains two of these arrays, and the middleware maintains two for each node that has registered. On each side, one array is used for incoming data and the other for outgoing data. This is necessary to prevent synchronization errors when the node and the middleware transmit data at the same time. Additionally, the reliability of the TCP connection ensures that the paired incoming-outgoing arrays on the node and middleware remain consistent.

Encryption does not begin until after a node has fully registered. This prevents ciphertext of known plaintext from being available to an attacker. With this encryption in place, the damage from Interception attacks is minimized. Not only is a brute-force attack against the
encryption generally infeasible, much of the data produced by a sensor network has a short lifespan, and will be useless long before it could be compromised. Similarly, Insertion attacks are prevented: since only the node and the middleware know the shared key, a message encrypted with that key is essentially signed.

Pervasive Noise

Although encryption solves many of the problems identified in our attack model, it cannot operate until the node and middleware agree upon a key. Key distribution is a major topic in sensor networks [62]. The node hardware is not powerful enough to run common, secure key negotiation and exchange algorithms such as Diffie-Hellman [63]. Yet it is infeasible to manually program the keys for thousands of sensor nodes. While Atlas deployments in smart spaces will generally contain fewer nodes than ad-hoc sensor network projects, an environment such as a smart home will be managed by a regular person, not a trained engineer. Someone who has problems programming a video cassette recorder (VCR) will not have an easy time generating and uploading a cipher key for each new device he or she brings into the house. While residents could schedule visits from a "Smart House Certified" technician (similar to the technical support crews at electronics retailers, not expensive engineers) to authorize new devices, this would prevent users from immediately enjoying their purchases. To address this, we created the Pervasive Noise concept, a novel technique that allows unauthenticated devices to participate (with limits) in an existing Atlas system.

Overview

In any system, noise is essentially an unwanted signal that competes with or obscures a desired signal. Pervasive Noise is a preventative measure against Interception attacks, and allows users to run unauthenticated nodes, transmitting plaintext messages, while protecting their privacy. This system assumes a mixed network; that is, an existing Atlas deployment that
contains some authenticated nodes. Pervasive Noise takes advantage of the centralized, authoritative Atlas middleware, allowing secure, encrypted collaboration with the authenticated nodes. Periodically, these nodes are told to masquerade as the unauthenticated nodes in the system. Masquerading as an unauthenticated node involves sending a plaintext transmission, the value of which was provided by the middleware in an encrypted message.

As the middleware knows what value to expect from this "fake" transmission, it is able to discard the message when it arrives. However, to any attacker attempting to intercept data from the network, this masquerade is indistinguishable from a real transmission. It is noise, obscuring the true data produced by the unauthenticated node and protecting the privacy of the user.

Although the ideal Pervasive Noise system would constantly flood the network with masqueraded data, this is not practical in a real-world deployment. The authenticated nodes cannot spend all their duty cycles on fake transmissions: each has a number of real sensors or actuators to query or control, and nodes operating on batteries would lose power quickly if they were always transmitting. Instead, for each data stream produced by an unauthenticated device, we use statistical information, provided by the bundle author for the particular device, to minimize the transmissions necessary to protect the privacy of Atlas's users.

Noise Model

In the Pervasive Noise model, we are essentially concerned with two categories of nodes: authenticated and unauthenticated. Authenticated nodes have been given, via some mechanism, a key that is shared with the Atlas middleware. Unauthenticated nodes have no key. A = {a1, a2, ..., ai} is the set of authenticated nodes, and U = {u1, u2, ..., uj} the set of unauthenticated nodes. Though ideally any authenticated node can be used in the masquerade, for practical reasons we subcategorize them into AN = {an1, an2, ..., anx}, the set of nodes placed specifically to
act as noise generators, AP = {ap1, ap2, ..., apy}, the set of regular nodes operating on wired power, and AB = {ab1, ab2, ..., abz}, the set of regular nodes operating on battery power.

At any given time, a particular node in A may have some portion of its bandwidth available to transmit noise. As all sensor data transmission times are approximately equal, the amount available depends on the number of real sensors that node has to query, and the rate at which it queries them. For example, a node that has two sensor subscriptions at an interval of five seconds can transmit more noise than a node with four sensor subscriptions at an interval of five seconds, but both would be more available than a node with one sensor but a subscription interval of one tenth of a second. We define the function τ(i) to be the total subscription interval (in seconds) of node ai ∈ A, and η(i) the number of sensors subscribed. Function α(i) is the availability, in messages per second, of node ai:

α(i) = (τ(i) - η(i)·δ) / (τ(i)·δ)

Here, δ is the time for one sampling and transmission.

If a ∈ A is to successfully masquerade as u ∈ U, the value a transmits cannot be completely random. If u is a temperature sensor inside the house, and a sends a value of -30 F, it will easily be detected as noise. Node a should transmit a value that is plausible but obscures the true reading. Determining a plausible value requires information provided by the author of the service driver bundle that corresponds to the particular device. First is the noise variance, or the range of plausible values given a known, valid reading. Function ν(j) is the noise variance of uj ∈ U. The driver bundle must also provide information about the required accuracy for the device, encapsulated by function κ(j). This defines an accuracy threshold beyond which applications that use data from the device will behave improperly.
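As a worked example of the availability function as reconstructed above, the following sketch evaluates α(i) for the three example nodes just mentioned. The per-sample time δ = 0.05 s is an assumed value chosen only to make the comparison concrete; it is not a measured figure.

/*
 * Availability (noise messages per second) of an authenticated node:
 *   alpha(i) = (tau(i) - eta(i) * delta) / (tau(i) * delta)
 */
public class NoiseAvailabilityExample {

    static double availability(double tau, int eta, double delta) {
        return (tau - eta * delta) / (tau * delta);
    }

    public static void main(String[] args) {
        double delta = 0.05;                              // assumed sampling time (s)
        System.out.println(availability(5.0, 2, delta));  // 2 sensors every 5 s   -> 19.6 msg/s
        System.out.println(availability(5.0, 4, delta));  // 4 sensors every 5 s   -> 19.2 msg/s
        System.out.println(availability(0.1, 1, delta));  // 1 sensor every 0.1 s  -> 10.0 msg/s
    }
}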
Nodes in A are able to transmit as necessary. Nodes in U, however, are throttled to r messages per second. This allows nodes in A sufficient time to perform their regular tasks in addition to producing noise. The goal of Pervasive Noise is to produce, during each period of valid transmission by a node in U (1/r), noise that is plausible (within ν(j) of applicable historical data) but that yields a deviation larger than κ(j). The deviation for uj is:

Δ(j) = √( (1/(n+1)) · Σi≤n (xi - uj)² )

where n is the number of noise transmissions, xi is the value of the ith noise transmission, and uj here denotes the true reading of the unauthenticated node. Noise is therefore generated until Δ(j) > κ(j), or the availability of nodes from A is depleted. The total availability needed to mask one unauthenticated node is n·r, and the maximum number of unauthenticated nodes that can be supported is:

( Σai∈A α(i) ) / (n·r)

Pervasive Noise Implementation

Pervasive Noise required additions to both the Atlas node firmware and the middleware bundles. On the node side, support was added for the Fake_Data command. This is the command sent to authenticated nodes when they are to masquerade as an unauthenticated node. The command provides three parameters: the node ID of the unauthenticated node, the channel number of the device, and the value to transmit. A node will only execute this command if it receives it in an encrypted message.

Firmware code was also added to throttle the sampling and transmission rate if the node does not have a key, or if its key does not match the middleware's. While the transmission rate of valid data is throttled, unauthenticated nodes will also transmit noise. In this case, the noise is the regular sensor data message scrambled with a temporary key. The purpose of this is
to make the node appear, to a third party, like an authenticated node. The middleware will automatically drop these messages, as it will not have a matching key for that node.

Management of the Pervasive Noise system is handled in the Atlas Manager bundle in the middleware. When a node comes online and the driver bundles for its connected devices are loaded, Atlas Manager reads the noise variance and accuracy parameters from each bundle's manifest file. If the node has an associated key stored on the server, the RC4 arrays for incoming and outgoing streams are initialized.

Atlas Manager then spawns a PervasiveNoiseManager thread object for the node, associating it with the TCPNetworkReader thread object that maintains the TCP/IP connection between the middleware and that node. The Pervasive Noise Manager serves several purposes. First, it is used to immediately discard noise transmissions, rather than processing them higher in the stack. Noise transmissions from unauthenticated nodes (which are encrypted with a temporary key that the middleware does not have) are detected because the middleware is expecting a plaintext transmission, and will not find the correct identification bits in the header of the message. Noise transmissions from authenticated nodes are also easily detected: the Pervasive Noise Manager records the Fake_Data commands that it sends out, and when a transmission comes in, the node ID, channel, and value are checked against this list. If a matching entry is found, it is removed from the list and the message is dropped. If the unauthenticated node happens to send, during a valid transmission, the same value that has been assigned to a noise transmission, the message may be dropped, but the middleware will then keep the "noise" value, and no data is lost.

As mentioned above, the Pervasive Noise Manager is also responsible for sending the Fake_Data commands to authenticated nodes. It coordinates with the Atlas Manager to select a
node that is available for noise production. Availability involves the number of tasks being performed by the node and the timing demands of those tasks. Availability also involves the physical proximity of the authenticated node to the unauthenticated one. In the current implementation, only nodes in the same room as the unauthenticated node are chosen to produce noise. This ensures that the masquerading nodes will have signal strengths similar to the real node; substantially varied signal strength could be used by an attacker to filter out the noise transmissions.

The Pervasive Noise Manager chooses values for the noise transmissions by applying the noise variance function to a running average of the true values from the unauthenticated node. When the thread object is first instantiated, it chooses a random bias for the variance function. Therefore some nodes will produce noise values above the average, some below, and some will straddle it. This prevents attackers from simply averaging the noise to obtain approximate values.

While Pervasive Noise is designed primarily to prevent Interception attacks, a side effect of this system does provide some protection from Insertion attacks. Communication with authenticated nodes is protected by means of encryption, but the middleware is protected from Insertion attacks posing as unauthenticated nodes by means of the transmission rate throttling. The Pervasive Noise Manager maintains a timestamp of the last received message for each unauthenticated node. If an attacker is inserting messages into the Atlas network, the Pervasive Noise Manager will see that messages from that node are arriving faster than allowed, and will then drop all messages from that node for some period of time.
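The following condensed sketch shows the two filtering duties described above: discarding masqueraded readings that the manager itself requested, and rate-limiting unauthenticated nodes. The PervasiveNoiseManager and Fake_Data names come from the text, but the class layout, data structures, and the one-message-per-second throttle are assumptions, and the real implementation blacklists an offending node for a period of time rather than dropping single messages.

import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

public class PervasiveNoiseFilter {

    private static final long MIN_INTERVAL_MS = 1000; // assumed throttle: r = 1 msg/s

    // (nodeId, channel, value) tuples of Fake_Data commands already issued.
    private final Set<String> pendingNoise = new CopyOnWriteArraySet<String>();
    // Last accepted message time per unauthenticated node.
    private final Map<Integer, Long> lastAccepted = new ConcurrentHashMap<Integer, Long>();

    /** Record a Fake_Data command sent to an authenticated node. */
    public void noteFakeData(int nodeId, int channel, String value) {
        pendingNoise.add(nodeId + ":" + channel + ":" + value);
    }

    /** Returns true if an incoming reading should be dropped. */
    public boolean shouldDrop(int nodeId, int channel, String value, boolean authenticated) {
        // 1. Masqueraded readings we requested ourselves are noise: drop them.
        if (pendingNoise.remove(nodeId + ":" + channel + ":" + value)) {
            return true;
        }
        if (authenticated) {
            return false; // encrypted traffic from authenticated nodes passes through
        }
        // 2. Unauthenticated nodes are rate-limited to guard against Insertion attacks.
        long now = System.currentTimeMillis();
        Long last = lastAccepted.get(nodeId);
        if (last != null && now - last < MIN_INTERVAL_MS) {
            return true; // arriving faster than allowed
        }
        lastAccepted.put(nodeId, Long.valueOf(now));
        return false;
    }
}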
CHAPTER 8
CASE STUDIES

Gator Tech Smart House

Since the Atlas platform was not available until late 2005, the original implementation of the Gator Tech Smart House (GTSH) used other sensor platforms (such as Phidgets) and automation technologies (such as X10 modules). In preparation for the first in a series of experiments with live-in residents, we are currently migrating the existing services and applications inside the GTSH to the Atlas platform. Additionally, we are creating new services made possible by the new platform. The following case studies provide details of our experience.

Smart Blinds

One of the major aims of the GTSH is to allow its resident to control various household devices, such as the window blinds, using voice commands or a simple touch-screen, interactive GUI. Our plan was to deploy a system which would not only allow the resident to operate the blinds without requiring physical interaction but also allow the house to control them automatically to adjust ambient lighting.

Each window shade in the house is connected to a Hi-Tec HS-322HD Deluxe Servo (Fig. 8-1) with an output torque of 3 kg/cm, which allows the smart house to open and close the blinds. The servos are connected to Atlas nodes via the 6-way servo board, allowing a single node to control up to 6 servos.

The WindowBlinds service was implemented using the generic actuator service bundle. This service bundle translates the high-level commands provided by the end-user application into the low-level instructions required by the Atlas node to control the servos. It allows an application to control individual blinds and also to control multiple blinds as a single entity.
Atlas-Based Smart Floor

In a smart home geared towards providing an assistive environment for seniors, locating the residents and keeping track of their whereabouts is of paramount importance. Indoor location tracking systems provide information about the resident's location, daily activities, and room preferences, and also help in detecting emergencies like falls. In addition, it is equally essential that such a system be neither intrusive nor require special attention from the resident to operate effectively. Keeping these things in mind, it was decided to install a Smart Floor in the GTSH to provide unencumbered indoor location tracking using pressure sensors located beneath floor tiles. Kaddoura et al. [64] describe such a system, where a pressure sensor is centrally placed underneath each square-foot block of the floor and is able to detect a footstep on any part of that block. This system not only provides nearly 100% coverage over its area of deployment but is also relatively inexpensive compared to other similar location tracking systems in use today.

The Gator Tech Smart House has a residential-grade raised floor consisting of floor tiles measuring one square foot each. The process of deploying the piezoelectric pressure sensors was the same as described in [64].

The approach taken by Kaddoura et al. had the pressure sensors connected to Phidgets 8/8/8 Interface Kits, which can only support a maximum of 8 sensors. For the second iteration of the Smart Floor, we used the Atlas platform (Fig. 8-2) together with its 32-way analog sensor board (Fig. 8-3), which supports 32 two-wire analog sensors. In this manner we were able to deploy the Smart Floor throughout a large section of the house (over 2000 sq. ft.) using only ten Atlas nodes, which improved the cost-effectiveness of the system. We also made use of the onboard filtering capability of the platform to only transmit sensor data if there is a change in
readings beyond a user-defined threshold. This prevents the Smart Floor from flooding the entire sensor network in the house.

The Smart Floor service was implemented using the generic analog sensor service bundle mentioned in Sec. 3.6. Applications, such as the Location Tracker (Fig. 8-4), access the Smart Floor sensor readings simply by subscribing as a listener to the dispatchPacket event produced by the bundle.

Smart Front Door

We created an intelligent front door in the GTSH to support elderly and disabled residents. The door facilitates access to the house by means of a keyless entry system using Radio Frequency Identification (RFID) badges and an automatic door opening, closing, and locking mechanism. Voice control of the door also allows the resident to grant access to visitors without having to move to the foyer.

The door is fitted with a Mi-K F01P deadbolt locking mechanism (Fig. 8-5) offering both electronic operation for keyless entry and conventional operation using keys. The deadbolt turning mechanism is driven by a standard 5 V direct current motor connected to an Atlas node using a motor driver board. For opening and closing the door we use the Private-Door Duo door opener (Fig. 8-6) because it allows both automatic and manual operation.

The Front Door service bundle coordinates the functioning of the door lock and the door-opening mechanisms. It provides services that allow application developers both to lock/unlock the door and to open/close it if necessary. It also makes sure that conflicting commands issued to the Atlas node controlling the door are not executed, for example, opening a door that is locked. The lock/unlock commands issued by this bundle are executed by the Atlas node controlling the locking mechanism, while the door open/close commands are issued as X10 commands to the door opener.
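A minimal sketch of the conflict check performed by the Front Door service bundle is given below, assuming a simple boolean deadbolt state. The class and method names are hypothetical; only the refuse-to-open-while-locked behavior is taken from the description above.

public class FrontDoorGuard {

    private boolean locked = true;   // tracked deadbolt state

    public synchronized void lock()   { /* issue lock command to the Atlas node */ locked = true; }
    public synchronized void unlock() { /* issue unlock command to the Atlas node */ locked = false; }

    /** Open the door only if it is not locked; conflicting commands are refused. */
    public synchronized boolean open() {
        if (locked) {
            return false;            // opening a locked door is a conflicting command
        }
        // issue the X10 "open" command to the door opener here
        return true;
    }
}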
Using this service bundle, one of the co-authors developed an application which allows residents keyless entry into the Smart House via RFID badges and also allows the resident to operate the front door by issuing simple voice commands from anywhere inside the house.

Results

Live-in trials began in the GTSH on March 24, 2006 [65]. The subjects' activities are being monitored and logged for analysis by our collaborators in the Department of Occupational Therapy. Writing the logging application was a straightforward task because Atlas provides a homogeneous interface to the plethora of hardware devices installed in the house. Like other GTSH applications, the logger is a bundle running in our middleware framework. It can subscribe to events from various service bundles and from other applications to record data produced in the house.

Purdue NILE-PDT

The NILE-PDT (Phenomena Detection and Tracking) system was developed by the Indiana Database Center at Purdue University to detect and track environmental phenomena (gas clouds, oil spills, etc.). They required a platform that would allow their system to sample data streams from many different sensors. Additionally, NILE-PDT needed to control the data streams by altering the sampling rate of the sensors using feedback algorithms, a mechanism that required uniform interfacing with every sensor in the network. The Atlas platform was a perfect match for NILE-PDT.

In addition to providing a uniform interface to heterogeneous sensors, Atlas also offers a plug-and-play development model, even for applications written outside our framework. NILE-PDT had been in development for years, and it was almost fully implemented before Atlas was available. Other conflicts arose during this collaboration, such as NILE-PDT using User
Datagram Protocol (UDP) for communication (the current Communication Layer for Atlas uses the Transmission Control Protocol, or TCP) or the device drivers for the sensors providing raw data readings (NILE-PDT expects time-stamped data). We expect these types of conflicts will be common when groups use a third-party platform with their existing applications. We created a proxy system in the framework to resolve these issues.

Using this interface, the NILE-PDT developers were able to create a proxy in our framework that formed the bridge between the sensor services and the NILE-PDT engine. Our middleware allows external applications to upload and register these proxy services into the framework. The NILE-PDT team was able to implement their system without having any knowledge of the internal workings of the Atlas platform. A collaborative paper [66] and demonstration based on this research were presented at the Very Large Data Bases (VLDB) 2005 conference.

Figure 8-1. Servo motor attached to the Smart Blinds.
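The sketch below outlines the kind of bridging such a proxy performs: it adds the timestamp NILE-PDT expects and relays each reading over UDP, while Atlas itself communicates with its nodes over TCP. The class name, host and port handling, and message format are assumptions, not part of the published interface.

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class NilePdtProxy {

    private final DatagramSocket socket;
    private final InetAddress nileHost;
    private final int nilePort;

    public NilePdtProxy(String host, int port) throws IOException {
        this.socket = new DatagramSocket();
        this.nileHost = InetAddress.getByName(host);
        this.nilePort = port;
    }

    /** Called with each raw reading received from an Atlas device service. */
    public void forward(String sensorId, String rawReading) throws IOException {
        // NILE-PDT expects time-stamped data; the raw Atlas reading carries none.
        String message = System.currentTimeMillis() + "," + sensorId + "," + rawReading;
        byte[] payload = message.getBytes("UTF-8");
        socket.send(new DatagramPacket(payload, payload.length, nileHost, nilePort));
    }
}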
Figure 8-2. Tile of the Smart Floor, with Atlas node.
Figure 8-3. Atlas node connected to 32 pressure sensors.
Figure 8-4. Graphical display of location tracker service.
Figure 8-5. Connecting the electronic deadbolt to the Atlas platform.
Figure 8-6. Private-Door Duo electronic door opener.
CHAPTER 9
CONCLUSIONS AND FUTURE WORK

The Atlas platform is functional and is commercially available to researchers and other groups. It can be obtained from Pervasa, Inc. (a University of Florida startup) at www.pervasa.com. Interest from both industry and academia indicates that Atlas has great value. But we have only begun to position Atlas as a true building block of programmable pervasive spaces. Concepts such as the service-oriented sensor and actuator platform, programming models, and mobile sensor islands are among the many useful components that offer a basic framework for the field. But it will take much more experience for our understanding of sensor networks to mature. We need to develop new programming models and languages, and IDEs and other associated tools. As the field continues to experiment, we will find new ways in which pervasive computing can work symbiotically with other research. Continuing architectural improvements in speed, miniaturization, and power consumption will allow us to pack more intelligence into more spaces and devices. Standards will emerge that will facilitate the seamless sharing of data and services among spaces. The next version of Atlas is already in development. This section provides an overview of the major changes and improvements that we have planned.

Hardware Roadmap

We are investigating two combined Power/Communication Layers using power-line communication and power-over-Ethernet. Other power-related development includes energy harvesting designs using resources such as high-frequency radio waves, solar power, or vibration.

The stackable 2x2" form factor has worked well for most projects, but we are also investigating a more compact design for applications that require a smaller footprint. Extreme miniaturization is not planned because working with tiny nodes (programming, installing, etc.) is difficult.
We are also developing pre-configured Device Connection Layers. These layers come with integrated sensor arrays, allowing users to deploy an Atlas network out of the box, without requiring any configuration of the hardware.

Middleware Roadmap

Data Processing

While the primary goal for the Atlas project was developing a sensor and actuator platform to enable programmable pervasive spaces, the platform is an appropriate tool for researchers and groups working in other fields. As mentioned in Sec. 6.2, we are collaborating with Purdue University on the Nile-PDT project, which involves reading streams of data from sensors. We are working to enhance the data streaming capabilities of the platform. This involves expanding the data processing functionality both onboard Atlas nodes and inside the service framework. Some of the capabilities being developed are data filtering, data aggregation, and query processing.

Distributed Middleware Servers

The current release of the Atlas platform assumes a single computer will be running the middleware framework that hosts the software-service representations of connected devices. However, in an extremely large or densely packed environment, so many services running on a single computer could result in poor performance. Additionally, a single pervasive space could cover many geographically dispersed areas. We are developing a distributed version of the middleware to solve these issues. This new architecture allows a hierarchical grouping of middleware servers, each of which can connect to Atlas nodes and other servers, feeding information to a parent server.
Alternate Service Frameworks

The Atlas platform is focused primarily on an Open Services Gateway Initiative (OSGi) framework. However, we are in the process of porting our middleware to the Microsoft .NET framework. An alpha version of this framework exists, where devices are represented as Web Services.
APPENDIX
EXAMPLE NESC SOURCE CODE USED IN ATLAS FIRMWARE

Interface

interface SampleInterface {
  /* commands are implemented by modules that PROVIDE the interface */
  command uint8_t returnByte(uint8_t b);
  /* declared 'async' so it can be called inside an event handler */
  async command void init();
  /* events are implemented by modules that USE the interface */
  event void hardwareEvent();
}

Interface Provider

module Provider {
  provides interface SampleInterface;
}
implementation {
  uint8_t base;

  /* implementation of the interface's commands */
  command uint8_t SampleInterface.returnByte(uint8_t b) {
    return (b + base);
  }

  async command void SampleInterface.init() {
    /* command is async, must ensure thread safety */
    atomic {
      base = 0;
    }
  }

  /* default implementation for the interface's event (if no user is wired) */
  default event void SampleInterface.hardwareEvent() {
    return;
  }
}

Interface User

module User {
  uses interface SampleInterface;
}
implementation {
  /* application function */
  uint8_t runZero() {
    return call SampleInterface.returnByte(0);
  }

  /* implementation of the interface's event */
  event void SampleInterface.hardwareEvent() {
    call SampleInterface.init();
    runZero();
    call SampleInterface.returnByte(1);
  }
}

Configuration Wiring

configuration SampleApplicationC {}
implementation {
  components Provider, User;
  User.SampleInterface -> Provider.SampleInterface;
}
LIST OF REFERENCES

[1] M. Weiser and J. Brown, "Designing Calm Technology," PowerGrid Journal, no. 1, 1996, pp. 1.
[2] K. Lyytinen and Y. Yoo, "Issues and Challenges in Ubiquitous Computing," Communications of the ACM, no. 45, 2002, pp. 62.
[3] A. Helal, "Gator Tech Smart House," http://www.icta.ufl.edu/gt.htm, January 2005, cited February 2007.
[4] Oak Hammock IT, "Oak Hammock Retirement Community," http://www.oakhammock.org/, January 2006, cited February 2007.
[5] A. Helal et al., "Gator Tech Smart House: A Programmable Pervasive Space," IEEE Computer, vol. 38, no. 3, 2005, pp. 50.
[6] S. Helal, "Programming Pervasive Spaces," IEEE Pervasive Computing, vol. 4, no. 1, 2005, pp. 84.
[7] C. Lee, D. Nordstedt and A. Helal, "OSGi for Pervasive Computing," Standards, Tools and Best Practice Department, IEEE Pervasive Computing, vol. 2, no. 3, 2003.
[8] D. Maples and P. Kriends, "The Open Services Gateway Initiative: An Introductory Overview," IEEE Computer Magazine, vol. 39, no. 12, 2001, pp. 110.
[9] H. Yang, E. Jansen, and A. Helal, "A Comparison of Two Programming Models for Pervasive Computing," Ubiquitous Networks and Enablers to Context Aware Services, Int'l Symp. on Apps. and the Internet, Phoenix, January 2006.
[10] H. Yang, J. King, A. Helal, and E. Jansen, "A Context-driven Model for Programming Pervasive Spaces," Int'l Conf. on Smart Homes and Health Telematics, Nara, June 2007.
[11] A. Mainwaring, J. Polastre, R. Szewczyk, D. Culler and J. Anderson, "Wireless Sensor Networks for Habitat Monitoring," 1st ACM Int'l Workshop on Wireless Sensor Networks and Applications, 2002, pp. 88.
[12] J. Hill and D. Culler, "Mica: A Wireless Platform for Deeply Embedded Networks," IEEE Microprocessors, vol. 22, no. 6, 2002, pp. 12.
[13] M. Horton, D. Culler, K. Pister, J. Hill, R. Szewczyk, and A. Woo, "MICA, The Commercialization of Microsensor Motes," Sensors, vol. 19, no. 4, 2002, pp. 40.
[14] J. Polastre, R. Szewczyk, C. Sharp, and D. Culler, "The Mote Revolution: Low Power Wireless Sensor Network Devices," Hot Chips 16: A Symposium on High Performance Chips, 2004.
[15] K. Pister, "Smart Dust," http://robotics.eecs.berkeley.edu/~pister/SmartDust/, 2001, cited February 2007.
[16] J. Kahn, R. Katz, and K. Pister, "Mobile Networking for Smart Dust," ACM/IEEE Int'l Conf. on Mobile Computing and Networking, Seattle, 1999.
[17] Crossbow, Inc., "Wireless Sensor Network Products," http://www.xbow.com/Products/Wireless_Sensor_Networks.htm, 2007, cited February 2007.
[18] J. Polastre, R. Szewczyk, and D. Culler, "Telos: Enabling Ultra-Low Power Wireless Research," 4th Int'l Conf. on Information Processing in Sensor Networks, Los Angeles, 2005.
[19] Moteiv, "TelosMote Sky," http://www.moteiv.com/products/tmotesky.php, 2006, cited February 2007.
[20] L. Nachman, R. Kling, J. Huang and V. Hummel, "The Intel Mote Platform: A Bluetooth-based Sensor Network for Industrial Monitoring," 4th Int'l Conf. on Information Processing in Sensor Networks, Los Angeles, 2005.
[21] S. Greenberg and C. Fitchett, "Phidgets: Easy Development of Physical Interfaces through Physical Widgets," Proc. of 14th ACM Symp. on User Interface Software and Technology, 2001, pp. 209.
[22] B. O'Flynn et al., "The Development of a Novel Miniaturized Modular Platform for Wireless Sensor Networks," 4th Int'l Conf. on Information Processing in Sensor Networks, Los Angeles, 2005.
[23] N. Edmonds, D. Stark and J. Davis, "MASS: Modular Architecture for Sensor Systems," 4th Int'l Conf. on Information Processing in Sensor Networks, Los Angeles, 2005.
[24] R. Pon et al., "Networked Infomechanical Systems: A Mobile Embedded Networked Sensor Platform," 4th Int'l Conf. on Information Processing in Sensor Networks, Los Angeles, 2005.
[25] D. Lymberopoulos and A. Savvides, "XYZ: A Motion-enabled Power Aware Sensor Node Platform for Distributed Sensor Network Applications," 4th Int'l Conf. on Information Processing in Sensor Networks, Los Angeles, 2005.
[26] C. Park, J. Liu and P. Chou, "Eco: An Ultra-compact Low-power Wireless Sensor Node for Real-time Motion Monitoring," 4th Int'l Conf. on Information Processing in Sensor Networks, Los Angeles, 2005.
[27] H. Gellerson, G. Kortuem, A. Schmidt and M. Beigl, "Physical Prototyping with Smart-Its," IEEE Pervasive Computing, vol. 3, no. 3, 2004, pp. 74.
[28] G. Chen and D. Kotz, "A Survey of Context-Aware Mobile Computing Research," Technical Report TR2000-381, Dept. of Computer Science, Dartmouth College, 2000.
[29] A. Dey and G. Abowd, "The Context Toolkit: Aiding the Development of Context-Aware Applications," Workshop on Software Engineering for Wearable and Pervasive Computing, Limerick, 2000.
[30] M. Huebscher and J. McCann, "Adaptive Middleware for Context-aware Applications in Smart-homes," Proc. of the 2nd Workshop on Middleware for Pervasive and Ad-hoc Computing, 2004, pp. 111.
[31] M. Román, C. Hess, R. Cerqueira, A. Ranganathan, R. Campbell and K. Nahrstedt, "Gaia: A Middleware Infrastructure to Enable Active Spaces," IEEE Pervasive Computing, vol. 1, no. 4, 2002, pp. 74.
[32] C. Greenhalgh, S. Izadi, J. Mathrick, J. Humble and I. Taylor, "ECT: A Toolkit to Support Rapid Construction of UbiComp Environments," System Support for Ubiquitous Computing Workshop, Tokyo, 2004.
[33] T. Gu, H. Pung and D. Zhang, "A Service-oriented Middleware for Building Context-aware Services," Journal of Network and Computer Applications, no. 28, 2005, pp. 1.
[34] H. Chen, T. Finin, A. Joshi, F. Perich, D. Chakraborty and L. Kagal, "Intelligent Agents Meet the Semantic Web in Smart Spaces," IEEE Internet Computing, vol. 8, no. 6, 2004, pp. 69.
[35] A. Helal, W. Mann and C. Lee, "Assistive Environments for Individuals with Special Needs," Smart Environments, 2005, pp. 361.
[36] A. Helal et al., "Assistive Environments for Successful Aging," Proc. of 1st Int'l Conf. on Smart Homes and Health Telematics, 2003, pp. 104.
[37] Atmel Corp., "ATmega128L Data Sheet," http://www.atmel.com/dyn/resources/prod_documents/doc2467.pdf, 2005, cited February 2007.
[38] Motorola, Inc., "SPI Block Guide," http://www.freescale.com/files/microcontrollers/doc/ref_manual/S12SPIV3.pdf, February 2003, cited February 2007.
[39] Philips, Inc., "I2C Overview," http://www.nxp.com/products/interface_control/i2c/, 2006, cited February 2007.
[40] IEEE Working Group, "Setting the Standards for Wireless LANs," http://www.ieee802.org/11/, February 2007, cited February 2007.
[41] ZigBee Alliance, "ZigBee Alliance Home Page," http://www.zigbee.org/en/index.asp, February 2007, cited February 2007.
[42] Cirrus Logic, Inc., "CS8900A," http://www.cirrus.com/en/products/pro/detail/P46.html, September 2004, cited February 2007.
[43] Panel discussion, "From sensor networks to intelligence," M. Chandy (moderator), 1st Int'l Conf. on Distributed Computing in Sensor Systems, Los Angeles, 2005.
[44] A. Dunkels, "Full TCP/IP for 8-Bit Architectures," 1st Int'l Conf. on Mobile Applications, Systems and Services, San Francisco, 2003.
[45] LANTRONIX, Inc., "Xport - Embedded Ethernet, Embedded Device Server, Serial To Ethernet," http://www.lantronix.com/device-networking/embedded-device-servers/xport.html, August 2006, cited February 2007.
[46] DPAC Technologies, Inc., "AB Wireless Device Server Module," http://www.dpactech.com/docs/wireless_products/AB%20wireless%20device%20server%20module.pdf, August 2006, cited February 2007.
[47] Cirronet, Inc., "Zigbee Products and Radio Solutions," http://www.cirronet.com/zigbee.htm, 2006, cited February 2007.
[48] JTAG Technologies, "Boundary-Scan (JTAG) Test and In-System Programming Solutions," http://www.jtag.com/main.php, February 2007, cited February 2007.
[49] Atmel Corporation, "AVR ISP In-System Programmer," http://www.atmel.com/dyn/products/tools_card.asp?tool_id=2726, March 2004, cited February 2007.
[50] K. Lee, "IEEE 1451: A Standard in Support of Smart Transducer Networking," IEEE Instrumentation and Measurement Technology Conf., Baltimore, 2000.
[51] P. Levis, S. Madden, D. Gay, J. Polastre, R. Szewczyk, K. Whitehouse, A. Woo, J. Hill, M. Welsh, E. Brewer, and D. Culler, "TinyOS: An Operating System for Sensor Networks," Ambient Intelligence, W. Weber, J. Rabaey, and E. Aarts (Eds.), Springer-Verlag, New York, NY, 2004, pp. 115.
[52] TinyOS Community, "TinyOS," http://www.tinyos.net/, February 2007, cited February 2007.
[53] D. Gay, P. Levis, R. Behren, M. Welsh, E. Brewer, and D. Culler, "The nesC Language: A Holistic Approach to Networked Embedded Systems," ACM Conf. on Programming Language Design and Implementation, Uppsala, 2003.
[54] D. Gay, P. Levis, D. Culler and E. Brewer, "nesC 1.1 Language Reference Manual," http://www.tinyos.net/tinyos-1.x/doc/nesc/ref.pdf, May 2003, cited February 2007.
[55] Crossbow, Inc., "MICAz 2.4 GHz Wireless Module," http://www.xbow.com/Products/productsdetails.aspx?sid=101, 2007, cited February 2007.
[56] P. Levis, S. Madden, D. Gay, J. Polastre, R. Szewczyk, A. Woo, E. Brewer, and D. Culler, "The Emergence of Networking Abstractions and Techniques in TinyOS," USENIX NSDI, San Francisco, 2004.
[57] S. Avancha, J. Undercoffer, A. Joshi, and J. Pinkston, "Security for Wireless Sensor Networks," Wireless Sensor Networks, 2004, pp. 253.
[58] J. Daemen and V. Rijmen, The Design of Rijndael: AES - the Advanced Encryption Standard, Springer-Verlag, New York, 2002.
[59] W. Stallings, Cryptography and Network Security, Principles and Practices, 4th ed., Prentice Hall, Upper Saddle River, NJ, 2006, pp. 268.
[60] A. Wacker, M. Knoll, T. Heiber and K. Rothermel, "Sensornet Services: A New Approach for Establishing Pairwise Keys for Securing Wireless Sensor Networks," Proc. of 3rd Int'l Conf. on Embedded Networked Sensor Systems, 2005, pp. 27.
[61] W. Stallings, Cryptography and Network Security, Principles and Practices, 4th ed., Prentice Hall, Upper Saddle River, NJ, 2006, pp. 189-193.
[62] H. Chan, A. Perrig and D. Song, "Key Distribution Techniques for Sensor Networks," Wireless Sensor Networks, Kluwer Academic, New York, 2004, pp. 277.
[63] W. Stallings, Cryptography and Network Security, Principles and Practices, 4th ed., Prentice Hall, Upper Saddle River, NJ, 2006, pp. 298.
[64] Y. Kaddoura, J. King and A. Helal, "Cost-precision Tradeoffs in Unencumbered Floor-based Indoor Location Tracking," 3rd Int'l Conf. on Smart Homes and Health Telematics, Magog, 2005.
[65] R. Bose, J. King, H. El-zabadani, S. Pickles and A. Helal, "Building Plug-and-Play Smart Homes Using the Atlas Platform," 4th Int'l Conf. on Smart Homes and Health Telematics, Belfast, 2006.
[66] M. Ali et al., "NILE-PDT: A Phenomenon Detection and Tracking Framework for Data Stream Management Systems," Very Large Data Bases Conf., Trondheim, 2005.
BIOGRAPHICAL SKETCH

Any wild or improper behavior from Jeffrey King should be excused, as he was, literally, born in a zoo. Specifically, the New York Zoological Society (Bronx Zoo), where in 1979 both of his parents were working as keepers. With as many gorillas as humans for playmates, it makes sense then that Jeff has always had a strong affinity for the natural world. Growing up, his household was something of a menagerie, with the requisite cats and dogs, as well as fish, turtles, small birds, snakes, and rabbits, with the occasional wild deer, turkey, hawk, or fox investigating the forested yard for any treats that might have been left for them. While obtaining his Ph.D. in computer engineering from the University of Florida, Jeff always balanced his interest in technology with hobbies in music, painting, literature, and the culinary arts and with his love of the outdoors, spending weekends (or weekdays if lucky!) hiking, backpacking, kayaking, skiing, playing golf, football, Ultimate Frisbee, or any sport or recreation under the sun or moon. His dream is to compete in the Eco-Challenge Expedition Race (and not just to meet the Playboy Bunny team), and he anxiously awaits the return of the contest. Until then, and now that he has completed his studies, Jeff intends to work for Pervasa, Inc., a University of Florida tech startup, following former members of UF's Mobile and Pervasive Computing Lab, commercializing technology such as smart homes to improve the lives of our aging population. But once a year, he will always be found somewhere on the 2,200-mile Appalachian Trail, which he and nine college friends are hiking in pieces. At their current pace, this should keep Jeff occupied well into his eighties.