Citation
The usability of graphical user interfaces of mobile computing devices designed for construction foremen: icons and pre-defined text lists compared

Material Information

Title:
The usability of graphical user interfaces of mobile computing devices designed for construction foremen: icons and pre-defined text lists compared
Creator:
Qu, Tan ( Dissertant )
Hinze, Jimmie W. ( Thesis advisor )
Hasell, Mary J. ( Thesis advisor )
Jones, Pierce ( Reviewer )
Akers, Ronald ( Reviewer )
Wetherington, Leon E. ( Reviewer )
Place of Publication:
Gainesville, Fla.
Publisher:
University of Florida
Publication Date:
Copyright Date:
2006
Language:
English

Subjects

Subjects / Keywords:
Computer icons ( jstor )
Computer technology ( jstor )
Error rates ( jstor )
Information search ( jstor )
Mobile devices ( jstor )
Personal computers ( jstor )
Search time ( jstor )
Statistical discrepancies ( jstor )
Usability ( jstor )
User satisfaction ( jstor )
Design, Construction, and Planning thesis, Ph.D.
Dissertations, Academic -- UF -- Design, Construction, and Planning

Notes

Abstract:
Field documentation by construction foremen traditionally has been done through the use of pen and paper. The drawbacks of the traditional method and the need to computerize the field documentation process have long been recognized by researchers of construction management. Mobile computing devices provide an excellent hardware platform for addressing this need. Unfortunately, past research efforts and technological developments in this area have not provided solutions with good usability. This study examined past research from a usability point of view and focused on the graphical user interface usability aspect of the problem. The inefficiency associated with the data input method through stylus and touch sensitive screen was examined. The focus of the study was on construction foremen, but other participants in the construction industry were also included as a basis of comparison. The study investigated the experience of the research participants with computers, personal digital assistants (PDA's) and other touch sensitive screen devices. The study evaluated the usability properties of icons and pre-determined text lists as potential candidates for automated data entry on mobile computing devices in the construction field. The views of participants on the standardization of the content of field documentation, the importance of quick data entry in the field, and the inefficiency associated with a stylus writing data input method were explored. Thirty-five construction foremen employed by sitework contractors, 37 construction professionals, and 28 university students were selected to complete a specially designed computer visual search game that consisted of an icon visual search interface and a text visual search interface. Each subject completed 14 visual search tasks in each interface. Results showed that foremen and construction professionals performed visual search tasks faster with icons than with pre-determined text lists. Study results also showed comparable levels of data input accuracy and good satisfaction ratings when the icon interface was compared with the text interface. The results also suggested a strong positive correlation between task completion time and task errors (fewer errors when task times were short). A strong negative correlation was noted between the construction experience of the research participants and task errors; i.e., participants with less experience made more errors.
Subject:
automated, computing, construction, foremen, graphical, icons, mobile, sitework, usability
General Note:
Title from title page of source document.
General Note:
Document formatted into pages; contains 220 pages.
General Note:
Includes vita.
Thesis:
Thesis (Ph. D.)--University of Florida, 2006.
Bibliography:
Includes bibliographical references.
General Note:
Text (Electronic thesis) in PDF format.

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
Copyright Qu, Tan. Permission granted to the University of Florida to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Embargo Date:
7/24/2006
Resource Identifier:
003589386 ( alephbibnum )
496615206 ( OCLC )

Full Text

THE USABILITY OF GRAPHICAL USER INTERFACES OF MOBILE COMPUTING
DEVICES DESIGNED FOR CONSTRUCTION FOREMEN:
ICONS AND PRE-DEFINED TEXT LISTS COMPARED

By

TAN QU

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2006

Copyright 2006

by

Tan Qu

This dissertation is dedicated to my family for their loving support over the years to finish this important chapter of my academic goals. My wife Wei Sun, who also received her Ph.D. at the University of Florida, has been a wonderful counselor and gave me moral support whenever I needed it. This dissertation is especially dedicated to my son, Tan, who is coping every day with the learning disabilities associated with his autism. His inquisitiveness and persistence in learning have taught me a completely new perspective on the meaning and the privilege of higher education. He was the very first and the most enthusiastic reviewer of the icons that were designed for this study. This dissertation is also dedicated to my daughter Victoria, who has been the joy of my life and a constant inspiration with her loving energy. Last but not least, this dissertation is dedicated to Isabella, my infant daughter, who unconditionally shares her love.

ACKNOWLEDGMENTS

I am grateful to many individuals for their support in this research effort. Without their guidance and assistance, this study would not have been possible.

My supervisory committee was an excellent source of direction, both during the stage of preparing a feasible and sound proposal for the study and throughout the actual research and dissertation writing phases. I am much indebted to my committee chair, Dr. Jimmie Hinze, who exemplified true scholarship and mentorship and guided me through each step of the study. Dr. Hinze devoted countless hours from his busy schedule to critiquing the study design and data analyses. His detailed review of the working drafts was extremely beneficial. Dr. Pierce Jones provided many valuable suggestions and directions in refining the research apparatus. His knowledge and expertise in computer visual communications proved indispensable in this area. Dr. Ronald Akers provided the needed scrutiny of the statistical methods used in the study and offered many good suggestions. Dr. Mary Jo Hasell provided considerable assistance during the initial proposal development and offered many thoughtful insights. Her encouragement to me in finishing the Ph.D. program was much appreciated. Dr. Leon Wetherington gave helpful advice in refining the survey questionnaire and the experiment apparatus, and he provided a very practical perspective to this study.

Many other individuals who contributed to this study must also be acknowledged. Paul Ridilla, my best friend and a very knowledgeable construction management consultant, provided great support in completing this research effort through his candid and practical views drawn from more than 50 years of experience in the construction industry. I cannot forget to mention Jimmy Flores and the many other foremen, construction professionals, and students at the M. E. Rinker, Sr. School of Building Construction of the University of Florida who participated in the study. I cannot name each one of them here, but their valuable time in completing the study was much appreciated. I would also like to thank the individuals at the Florida Department of Transportation and the principals of the firms that participated in this study.

TABLE OF CONTENTS

ACKNOWLEDGMENTS

LIST OF TABLES

LIST OF FIGURES

ABSTRACT

CHAPTER

1 INTRODUCTION

    Problem Statement
        Field Information Documentation in Construction
        Problems Associated with Paper-based Documentation Method
        Computerizing Field Information Documentation
    Research Objectives

2 LITERATURE REVIEW

    Human-Computer Interface/Interaction (HCI), Graphic User Interface (GUI), and Usability
    Past Research Examined from a HCI and Usability Perspective
    Foremen and Their Role in the Information Communication Process
    Graphic User Interface on Pen-Based Mobile Computing Devices
    Icons, Signs and Symbols: A Brief Historical Review
    Signs, Symbols and Icons in Construction and the Possibility of Using Icons as Automated Data Entry in Graphic User Interface
    Icons vs. Pre-defined Text
        Effect from Interface Implementation Differences
        Visual Appeal Factor Associated with Iconic Interfaces
        Abstract vs. "Concrete" Icons and Icons as Computer Command vs. as Information Units
        Subject Characteristics
    Summary

3 RESEARCH METHODOLOGY

    Research Questions
        Do Construction Foremen Perform Computer Tasks Faster Using Icons Than Using Predefined Text Lists or Vice Versa?
        Do Construction Foremen Experience Fewer Errors Using Icons or Pre-defined Text Lists?
        Do Construction Foremen Have a Preference Between Predefined Text Lists and Icons?
        What Is the Ranking Order of the Above Three Usability Aspects from the Point of View of Construction Foremen?
        What Are the Views of Construction Foremen About the Concept of Icon-based Mobile Field Documentation Applications?
        What Is the General Knowledge and Experience of Construction Foremen on Mobile Computing Devices?
        What Percentage of the Information in Current Field Documentation Do Foremen Think Can Be Standardized for Use with the "Click and Select" Concept?
    Samples
    Methods
        Visual Searching Task Experiment
        Apparatus/Materials
            Icon training session
            Icon visual search test
            Test platform
        Icons and Pre-defined Text Lists
        Data collection method
        Response Variables
        Visual Search Game Design Considerations
        Sample Icon-based Mobile Equipment Usage Documentation Application
    Procedures
    Research Hypotheses
        Task Completion Time
        Task Errors
        User Satisfaction
    Survey Questionnaire Design
        Foremen Demographics
        Foremen's Experience with Touch Sensitive Screen Devices and Mobile Computing Devices
        Foremen's View on Standardization of the Content of Field Documentation
        Foremen's Preference Between Icons and Pre-defined Text Lists
        Foremen's View about Icon-based Field Information Documentation Tools

4 PILOT STUDY

    Icon Design and Recognition Quality Testing
    Test Platform Difference Study
        Data
        Hypotheses Testing
            Hypotheses testing about variances of the populations on Fujitsu and Non-Fujitsu platforms
            Hypotheses testing about the difference between the means of the data collected on the Fujitsu and Non-Fujitsu platforms
    Icon Learning Curve Analysis
        Learning Curve Regression
        Long Term Effect of Icon Training
        Number of Training Sessions Required for the Final Study
        Establishing Training Session Time Baseline
    Lessons Learned During the Pilot Study
        Experimental environment
        Verbal instructions

5 RESULTS AND DISCUSSIONS

    Sample Demographics
        Age
        Education
        Construction Experience
        Foreman's Crew Size
        Foremen Categorizations
        Occupations of the Construction Professionals
        Student Status
        Computer Experience
    Experiences of the Research Subjects with Touch Sensitive Screen Devices (TSSD's)
    Experiences of the Research Subjects with Personal Digital Assistants (PDA's)
    Views of the Research Subjects about the Efficiency of the Data Entry Mechanism by Handwriting Recognition
        Foremen and PDA Efficiency
        Construction Professionals
        Students
        Cross-groups
    The Views of Subjects on the Importance of Quick Data Entry on Mobile Computing Devices
    The Views of Foremen and Construction Professionals about the Standardization of the Field Documentation Content
    The Views of Foremen and Construction Professionals about the Percentage of the Field Documentation Content That Could Be Standardized
    Satisfaction Ratings of the Subjects with the Icon Visual Search Game and Text Visual Search Game
        Hypothesis Testing on Subjects' Satisfaction Ratings with the Icon Visual Search Game and Text Visual Search Game
            Wilcoxon matched pairs signed rank test
            Paired difference t-test
    Ranking Order of the Three Usability Factors (Task Time, Task Errors, and Satisfaction Level)
        Ranking Order of the Three Usability Factors by Foremen
        Ranking Order of the Three Usability Factors by Construction Professionals
        Ranking Order of the Three Usability Factors by Students
    The Views of Subjects about the Icon-based Field Documentation Systems on Mobile Computing Devices
    Readiness of the Foremen to Use Field Documentation Systems on Mobile Computing Devices
    Visual Search Game Results Analyses and Hypotheses Testing
        Average Task Time
        Average Task Instruction Reading Time
        Average Task Search Time
        Task Errors
    Error Reduction in Training Sessions
    One-Way ANOVA (Analysis of Variance) of Visual Search Game Results and Subject Types
    Correlation Analysis between Construction Experience and the Average Icon Search Time
    Correlation Analysis between Construction Experience and the Task Errors
    Correlation Analysis of the Average Task Search Time and Task Errors
    One-Way ANOVA (Analysis of Variance) of Visual Search Task Time of Foremen with Computer Usage as Factor Levels

6 SUMMARY, CONCLUSIONS AND RECOMMENDATIONS

    Summary
        Are Computer Tasks Performed Faster When Using Icons Than When Using Predefined Text Lists?
        Are Textual Instructions Processed Faster Than the Iconic Instructions?
        Are Icons Located Faster Than Text?
        Errors with Icons Versus Errors with Pre-defined Text Lists
        Preferences of Pre-defined Text Lists Versus Icons
        Ranking Order of the Three Usability Factors
        Views about Using Icon-based Mobile Field Documentation Applications
        Views about the Standardization of Information Contained in Field Documentation
        Experience of Foremen with Mobile Computing Technologies
    Conclusions
    Research Limitations
    Recommendations
    Future Research Recommendations
        Other Sectors of the Construction Industry and Other Geographical Areas
        Intelligent Data Validation in Data Input Process
        Modeling of the Cognitive Activities of the Visual Search Process Through the Use of Eye-tracking Technologies

APPENDIX

A PILOT STUDY RESULTS DATA

B FINAL STUDY RESULT DATA

C SURVEY QUESTIONNAIRE

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

3-1 Icons and Pre-defined Text Lists Used in the Visual Search Tests
4-1 Icon Recognition Quality Testing Results
4-2 Icon Recognition Evaluation Results Organized by Evaluator
4-3 Example of Excluding Outliers in the Computation of the Average Task Time
4-4 Correlation Between the Mean Task Time and Task Errors on the Fujitsu Platform
4-5 Correlation Between the Mean Task Time and Task Errors on the Non-Fujitsu Platform
4-6 Platform Difference Study Sample Variance F Values
4-7 Platform Difference Study t-Test for Equality of Means
4-8 Platform Difference Study Sample Group Statistics
4-9 Learning Rate on the Average Task Time Per Icon for Each Training Session
4-10 Group Statistics of Learning Rates
4-11 Session Time Data Statistics
5-1 Mean and Median Ages of the Foremen, Construction Professionals, and Students
5-2 Mean and Median Construction Experience Durations of the Foremen, Construction Professionals, and Students
5-3 Levene Test of Homogeneity of Variances on the Total TSSD's Scores
5-4 LSD Test of the Means of Total TSSD's Scores
5-5 Levene Test of Homogeneity of Variances on the Ratings of Foremen with and without PDA Experience
5-6 One-Way ANOVA of the Means of the Numeric Ratings of Foremen with PDA Experience and Foremen without PDA Experience
5-7 Levene Test of Homogeneity of Variances on the Ratings of Construction Professionals with and without PDA Experience
5-8 One-Way ANOVA of the Means of the Numeric Ratings of Construction Professionals with and without PDA Experience
5-9 Levene Test of Homogeneity of Variances on the Ratings of Students with and without PDA Experience
5-10 One-Way ANOVA of the Means of the Numeric Ratings of Students with and without PDA Experience
5-11 LSD Test of the Means of Numeric Ratings by Foremen, Construction Professionals, and Students with Prior PDA Use Experience
5-12 Percentages of the Field Documentation Content that Could Be Standardized as Estimated by Foremen and Construction Professionals
5-13 Wilcoxon Signed Ranks - Satisfaction Rating Differences between the Icon Visual Search Game and the Text Visual Search Game
5-14 Wilcoxon Signed Ranks Test Statistics - Satisfaction Rating Differences between the Icon Visual Search Game and the Text Visual Search Game
5-15 Paired Samples Differences t-Test Statistics - Subjects' Satisfaction Ratings with the Icon Visual Search Game and the Text Visual Search Game
5-16 Paired Samples t-Tests - Importance Ratings of the Foremen on Shorter Task Time, Fewer Task Errors and Higher User Satisfaction
5-17 Paired Samples t-Tests - Importance Ratings of the Construction Professionals on Shorter Task Time, Fewer Task Errors and Higher User Satisfaction
5-18 Paired Samples t-Tests - Students' Importance Ratings on Shorter Task Time, Fewer Task Errors and Higher User Satisfaction
5-19 LSD Test Results - Responses of Foremen vs. Their Computer Usage
5-20 Paired Samples t-Tests - Average Task Time in the Icon User Interface vs. Text User Interface
5-21 Paired Samples t-Tests - Average Task Instruction Reading Time in the Icon User Interface vs. Text User Interface
5-22 Paired Samples t-Tests - Average Task Search Time in the Icon User Interface vs. Text User Interface
5-23 Paired Samples t-Tests - Mean Task Errors in the Icon User Interface vs. Text User Interface
5-24 Correlation Between Training Session Errors and Construction Experience
5-25 Levene Homogeneity of Variance Tests on the Visual Search Game Results Between Foremen, Construction Professionals, and Students
5-26 One-Way ANOVA of the Average Task Time, Average Task Instruction Reading Time, and Average Task Search Time by Subject Types
5-27 Post-Hoc LSD Test Results on Task Search Time in the Icon Visual Search Game
5-28 Tamhane's T2 Test on the Task Errors in the Icon Visual Search Game and Text Visual Search Game - Subject Type as Factor Levels
5-29 Correlation Analysis on the Construction Experience and Icon Search Time
5-30 Correlation Analysis between the Construction Experience and Icon Search Errors
5-31 Correlation Analysis between the Construction Experience and Text Search Errors
5-32 Correlation Analysis between the Icon Search Time and Icon Search Errors
5-33 Correlation Analysis between the Text Search Time and Text Search Errors
5-34 Levene Homogeneity of Variance Tests - Task Time of Foremen with Different Computer Usage
5-35 One-Way ANOVA of the Task Time of Foremen - Computer Usage as Factor Levels
A-1 Average Task Time for Each Training Session
A-2 Number of Errors for Each Training Session
A-3 Subject 1 (Homebuilder Superintendent) Icon Training Session Data
A-4 Subject 2 (Engineer) Icon Training Session Data
A-5 Subject 3 (Framing Foreman) Icon Training Session Data
A-6 Session Time for Each Training Session
B-1 Foremen Demographics
B-2 Construction Professionals Demographics
B-3 Student Demographics
B-4 Foremen's Experience with Common Touch Sensitive Screen Devices
B-5 Construction Professionals' Experience with Common Touch Sensitive Screen Devices
B-6 Students' Experience with Common Touch Sensitive Screen Devices
B-7 Foremen's Experience with PDA's
B-8 Construction Professionals' Experience with PDA's
B-9 Student Subjects' Experience with PDA's
B-10 Foremen's Ratings of the Efficiency of the Data Entry Mechanism by Stylus Handwriting on Mobile Computing Devices
B-11 Construction Professionals' Ratings of the Efficiency of the Data Entry Mechanism by Stylus Handwriting on Mobile Computing Devices
B-12 Students' Ratings of the Efficiency of the Data Entry Mechanism by Stylus Handwriting on Mobile Computing Devices
B-13 Foremen Subjects' Ratings of the Importance of Being Able to Input Data Quickly on Mobile Computing Devices
B-14 Construction Professionals' Ratings of the Importance of Being Able to Input Data Quickly on Mobile Computing Devices
B-15 Student Subjects' Ratings of the Importance of Being Able to Input Data Quickly on Mobile Computing Devices
B-16 Foremen's View about Whether Most Content of Their Field Documentation Could Be Standardized
B-17 Construction Professionals' View about Whether Most Content of the Construction Foremen's Field Documentation Could Be Standardized
B-18 Foremen's Estimate of the Percentage of the Information in Their Field Documentation That Could Be Standardized
B-19 Construction Professionals' Estimate of the Percentage of the Information in Construction Foremen's Documentation That Could Be Standardized
B-20 Foremen's Satisfaction Ratings with the Icon Visual Search Game and the Text Visual Search Game
B-21 Construction Professionals' Satisfaction Ratings with the Icon Visual Search Game and the Text Visual Search Game
B-22 Student Subjects' Satisfaction Ratings with the Icon Visual Search Game and the Text Visual Search Game
B-23 Foremen's Importance Ratings on Shorter Task Time, Fewer Task Errors and Higher User Satisfaction
B-24 Construction Professionals' Importance Ratings on Shorter Task Time, Fewer Task Errors and Higher User Satisfaction
B-25 Students' Importance Ratings on Shorter Task Time, Fewer Task Errors and Higher User Satisfaction
B-26 Foremen's Views About Whether the Icon-based Field Documentation Systems Would Help Them Do Their Jobs
B-27 Construction Professionals' Views About Whether the Icon-based Field Documentation Systems Would Help Foremen Do Their Jobs
B-28 Student Subjects' Views About Whether the Icon-based Field Documentation Systems Would Help Foremen Do Their Jobs
B-29 Foremen Subjects' Average Task Time, Average Task Instruction Reading Time, Average Task Search Time, and Average Task Errors
B-30 Construction Professionals' Average Task Time, Average Task Instruction Reading Time, Average Task Search Time, and Average Task Errors
B-31 Students' Average Task Time, Average Task Instruction Reading Time, Average Task Search Time, and Average Task Errors

LIST OF FIGURES

2-1 Illustration of the Evolution of the Human-Computer Communication Process
3-1 Extended Stages of the Information Processing Model (Preece et al. 1994)
3-2 Sample Screenshot of the Icon Training Session
3-3 Sample Screenshot of the Icon Visual Search Session
3-4 Screenshot of the Text Visual Search Session
3-5 Visual Search Response Variable Definitions
3-6 Main Screen of the Sample Icon-based Field Documentation Application (shown running on a Handspring Treo 270)
3-7 Equipment Selection Screen
3-8 Scraper Selection Screen
3-9 Scraper Time Information Entry Screen
3-10 Scraper Work Production Input Screen
3-11 Schematic Diagram of the Human Eye, with the Fovea at the Bottom (courtesy of Wikipedia, http://en.wikipedia.org/wiki/Optic_fovea, February 7, 2006)
4-1 Mean Task Time and Search Errors Observed in the Platform Difference Study
4-2 Learning Effect of the Mean Average Task Time
4-3 Mean Learning Rate Scatter Plot
4-4 Average Instruction Reading Time, Search Time, and Task Time (Subject 1)
4-5 Task Errors (Subject 1)
4-6 Average Instruction Reading Time, Search Time, and Task Time (Subject 2)
4-7 Task Errors (Subject 2)
4-8 Average Instruction Reading Time, Search Time, and Task Time (Subject 3)
4-9 Task Errors (Subject 3)
5-1 Age Group Distributions of the Research Subjects
5-2 Education Levels of the Research Subjects
5-3 Construction Experience of the Research Subjects
5-4 Crew Sizes of Foremen
5-5 Foremen Specializations
5-6 Occupations of the Construction Professionals
5-7 Computer Use Experience of Foremen
5-8 Subjects' Experience with Common TSSD's
5-9 Experience of Research Subjects with PDA Devices
5-10 Efficiency Ratings of Foremen on the Stylus Writing Method on PDA Devices
5-11 Efficiency Ratings of Construction Professionals on the Stylus Writing Method on PDA Devices
5-12 Stylus Writing Input Method Efficiency Ratings by Foremen, Construction Professionals, and Students Who Had PDA Use Experience
5-13 The Importance Ratings on Being Able to Enter Information on Mobile Computing Devices Quickly
5-14 Responses of Foremen and Construction Professionals on Whether the Content of the Field Documentation Could Be Standardized
5-15 Satisfaction Ratings of Foremen on the Icon Visual Search Game and Text Visual Search Game
5-16 Satisfaction Ratings of Construction Professionals on the Icon Visual Search Game and Text Visual Search Game
5-17 Satisfaction Ratings of Students on the Icon Visual Search Game and Text Visual Search Game
5-18 Subjects' Equivalent Numeric Satisfaction Ratings on the Icon Visual Search Game and Text Visual Search Game
5-19 Views of Subjects About Whether the Icon-based Field Documentation Systems Would Help Foremen Do Their Jobs
5-20 Responses of Foremen on Whether They Would Use a Field Documentation System on Mobile Computing Devices
5-21 Mean Average Task Time Observed on the Icon Interface vs. the Text Interface for Each Sample
5-22 Mean Average Task Instruction Reading Time Observed during the Icon Visual Search Game vs. the Text Visual Search Game
5-23 Mean Average Task Search Time Observed during the Icon Visual Search Game vs. the Text Visual Search Game
5-24 Mean Task Errors Observed during the Icon Visual Search Game vs. the Text Visual Search Game
5-25 Task Errors Observed during the Icon Training Sessions

Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

THE USABILITY OF GRAPHICAL USER INTERFACES OF MOBILE COMPUTING
DEVICES DESIGNED FOR CONSTRUCTION FOREMEN:
ICONS AND PRE-DEFINED TEXT LISTS COMPARED

By

Tan Qu

May 2006

Chair: Jimmie W. Hinze
Cochair: Mary J. Hasell
Major Department: Design, Construction, and Planning

Field documentation by construction foremen traditionally has been done through the use of pen and paper. The drawbacks of the traditional method and the need to computerize the field documentation process have long been recognized by researchers of construction management. Mobile computing devices provide an excellent hardware platform for addressing this need. Unfortunately, past research efforts and technological developments in this area have not provided solutions with good usability.

This study examined past research from a usability point of view and focused on the graphical user interface usability aspect of the problem. The inefficiency associated with the data input method through stylus and touch sensitive screen was examined. The focus of the study was on construction foremen, but other participants in the construction industry were also included as a basis of comparison. The study investigated the experience of the research participants with computers, personal digital assistants (PDA's) and other touch sensitive screen devices.

The study evaluated the usability properties of icons and pre-determined text lists as potential candidates for automated data entry on mobile computing devices in the construction field. The views of participants on the standardization of the content of field documentation, the importance of quick data entry in the field, and the inefficiency associated with a stylus writing data input method were explored.

Thirty-five construction foremen employed by sitework contractors, 37 construction professionals, and 28 university students were selected to complete a specially designed computer visual search game that consisted of an icon visual search interface and a text visual search interface. Each subject completed 14 visual search tasks in each interface. Results showed that foremen and construction professionals performed visual search tasks faster with icons than with pre-determined text lists. Study results also showed comparable levels of data input accuracy and good satisfaction ratings when the icon interface was compared with the text interface. The results also suggested a strong positive correlation between task completion time and task errors (fewer errors when task times were short). A strong negative correlation was noted between the construction experience of the research participants and task errors; i.e., participants with less experience made more errors.

CHAPTER 1
INTRODUCTION

The value and importance of information acquisition, transfer, organization, and utilization are well accepted in the construction industry. "In a profound sense, the management of a construction project is about managing the project information flow" (Winch 2002, p. 339). A construction project from inception to completion involves a multitude of varied participants, and the whole construction process generates vast amounts of information. Effectively managing such an immense volume of information to ensure its accuracy and availability in a timely manner is crucial to the successful completion of any project (Cox et al. 2002).

Problem Statement

A construction project is a unique, complex, custom-built response to a client's needs. (Russell 1993)

It is not only a process whereby information from the participants in the form of building or site plans, specifications, construction schedules, and various other documents is implemented, but also a process where new information is created. This process takes on physical and time dimensions and often generates a mammoth amount of information of varying interest to the various participants. As the time dimension grows, the volume of the information also increases, providing new data for the spatial, time, resource, and cost variables of the project.

Field Information Documentation in Construction

Many aspects of the construction process require accurate documentation of site conditions, including progress, quality, quantity, change, conflicts, and as-built information on the project. Documentation, communication, and analysis of construction field data are beneficial to all participants of a construction project (Hwang et al. 2003). For example, field data are needed for the project owners to verify and approve construction payment requests. Engineers and architects rely on field data to verify their design assumptions and improve the designs. Contractors require up-to-date field information to have a good understanding of the project status. In the construction industry, where disputes and litigation are almost commonplace, accurate documentation not only minimizes the possibility of disputes and claims, but also facilitates construction innovations and improvements (Liu 2000). The importance and legal ramifications of accurate vs. poorly documented construction information are well cited by the practitioners and academicians in the construction industry (Kangari 1995, O'Brien 1998).

Field information documentation is especially important to contractors. Russell (1993) pointed out that the collection of field information is important to

* Record the values of various context variables (weather conditions and work-force parameters) that are helpful in explaining reasons behind the current status of a project.

* Assess the current status of activities, extra work orders, and back charges in terms of active state (postponed, started, ongoing, idle, and finished), work scope completed, and problems encountered and their immediate consequences (man-hours and/or time lost).

* Measure resource consumption rates and their allocation to ongoing activities.

Besides having these functions, information collected in the field is often kept by contractors as historical data for preparing future estimates and schedules (Fayek et al. 1998).

Problems Associated with Paper-based Documentation Method

Traditionally, field information documentation has been done through the use of paper forms. This practice has remained largely unchanged over the years for the majority of the construction industry (McCullouch and Gunn 1993, Fayek et al. 1998, Cox et al. 2002). With the paper-based documentation method, information is manually entered in notebooks or pre-printed forms. These notebooks (sometimes called "logs") and forms are periodically sent to the main offices for top management review and for archival purposes. Sometimes the pre-printed forms are further processed by copying desired information from multiple forms into one form, or even into a computer spreadsheet.

Unfortunately, such systems are based on a large number of paper documents and have numerous drawbacks, especially when the need arises for accessing and retrieving the information that has been collected. Fayek et al. (1998) identified some of the problems with paper documentation as follows:

* Inconsistent procedures for collecting data on different types of resources (labor, equipment, materials, and subcontractors);

* Inaccurate assignment of hours to cost codes;

* Lack of data on site conditions, schedule progress, and problems associated with activities, which leads to cost and schedule overruns;

* Multiple entry of the same data;

* Lack of timely feedback on project performance.

With these deficiencies, it is difficult to obtain timely information on potential problems with schedules, resources, and safety issues and to initiate the appropriate corrective actions. Incomplete, inadequate, and inaccurate documentation, as a result of poor recordkeeping, is often considered inferior evidence in litigation or arbitration proceedings (Kangari 1995).

The use of inaccurate information in project bidding and project resource allocation often results in significant economic consequences that are manifested as construction delays and business losses (Cox et al. 2002). As in any competitive industry, nothing could be more devastating to construction companies than making important decisions based on unreliable information.

Computerizing Field Information Documentation

With the apparent problems associated with the paper-based documentation method, the need for and importance of computerizing the field information collection process have long been recognized in the construction industry (Russell 1993, McCullouch 1993, Condreay 1997, Elzarka et al. 1997, Liu 2000, Cox et al. 2002, Hwang et al. 2003). As an industry-wide practice, however, the use of computer technology in fulfilling this need has not yet become a reality.

Computer use in the offices of construction companies is no longer considered a "high-tech" privilege that only a select few can access. The use of desktop computers and desktop applications is an essential part of business operations that include accounting, word processing, project estimating, project scheduling, and e-mail communications. However, computer use by field personnel for documenting field data is still not a common practice in the construction industry. Communications between field personnel and office management, to a large extent, continue to be verbal communications through the use of telephones (wired or wireless) and two-way radios. Although this part of the communication channel between the field and the office has been greatly improved by the advancement of wireless communication technologies, the transient nature of verbal communications leaves few means by which the information can be conveniently stored and retrieved.

This obstacle in computerizing construction field communications is perceived in the industry to be due to various forms of barriers (Toole 1998, Davis and Songer 2003, Flood et al. 2003). Flood et al. summarized the barriers as follows:

1. Lack of application development: various computing models and concepts have been developed through years of research. However, realizing and fine-tuning a concept into a workable application often requires financial and time commitments for research and development that are not readily available.

2. Institutional and individual barriers: these include old beliefs and resistance to change and to the adoption of new technologies; lack of understanding of the potential of a tool; lack of resource commitment to its proper implementation; concerns about possible legal ramifications in the use of a new technology; and lack of confidence in the integrity of the output from a new technology.

3. Quality issues such as "user-friendliness" and "integrity of the software": these include issues such as the ease with which an application can be learned by its users, the ease with which output and results can be interpreted, the convenience of data input, the convenience with which the application can be tailored to work for each specific problem, etc.

The "barriers" or problems described by Flood et al. are inter-related. For example, the institutional barriers exist because, although computer technologies advance rapidly, there has not yet been any stabilized system of solutions that fully considers the differing characteristics of various potential construction field users. To clarify the point, the lack of a unified operating system standard in mobile computing has resulted in many different mobile computing devices being available commercially, and these technologies cannot be considered stabilized because they are under constant patching, upgrading, and refinement. Second, existing software applications for mobile computing devices to be used for construction field documentation are scarce, and often the end-user characteristics and working environment were not considered when they were developed. All these factors, along with the old beliefs, have made it difficult for construction companies to invest in these technologies for their field personnel. Since construction companies have not universally adopted mobile computing technologies for their field supervisors, there has been little enthusiasm from software developers to address this application.

The third category of barriers, discussed above, seems to be the root problem. The ability to conveniently input data in the construction field has been a challenge and a driving force for the research in computerizing the construction field documentation process. Many ideas and directions exist for providing solutions for this need. However, there have not been any studies taking a system usability point of view in examining the problems. Usability refers to how easily a system can be learned and used by its intended end users, how fast the users can complete the required tasks, how prone the system is to errors, and how much the users like to use the system. Among the major aspects of system usability, hardware usability issues are generally addressed by the computer industry on a continuing basis, while the software usability issues constitute the primary interest to researchers in the construction industry. This study will undertake a system usability approach to examine the problems existing in the computer software interfaces designed for use in the construction industry and evaluate possible alternative solutions.
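
To make these usability dimensions concrete, the following is a minimal sketch of how task completion time, task errors, and a post-session satisfaction rating might be recorded and summarized for one subject. It is illustrative only and is not drawn from the dissertation; all names and values are hypothetical.

    # Minimal sketch (hypothetical): recording the three usability measures
    # discussed above for one subject completing a set of tasks.
    from dataclasses import dataclass, field
    from statistics import mean

    @dataclass
    class TaskRecord:
        task_id: int
        completion_time_s: float  # seconds from task start to correct selection
        errors: int               # incorrect selections made before success

    @dataclass
    class SubjectSession:
        subject_id: str
        interface: str            # "icon" or "text"
        satisfaction: int         # e.g., a rating collected after the session
        tasks: list = field(default_factory=list)

        def mean_task_time(self) -> float:
            return mean(t.completion_time_s for t in self.tasks)

        def total_errors(self) -> int:
            return sum(t.errors for t in self.tasks)

    # Hypothetical usage:
    session = SubjectSession("F01", "icon", satisfaction=4)
    session.tasks += [TaskRecord(1, 3.2, 0), TaskRecord(2, 4.8, 1)]
    print(session.mean_task_time(), session.total_errors())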

Research Objectives

With respect to the third category of the problem as summarized by Flood et al. (2003), existing research efforts have mainly focused on the hardware aspect of the issue. Research in the past has mostly revolved around exploring commercially available mobile computing devices and their suitability for construction field information communications. Small-sized mobile computing devices equipped with touch sensitive screens have now been accepted as a basic platform for computer use in the construction field environment; however, the graphical user interface aspect has not been extensively investigated.

This study will introduce the concept of the usability of the computer graphical user interface into construction research and use this concept to provide a new perspective on how past research on mobile computing in construction has progressed. The problems related to the inefficiency of the pen/stylus handwriting input method on mobile computing devices will be examined. Existing studies on alternative automated data collection technologies to augment the pen/stylus data input method for mobile computing systems in construction will also be reviewed and discussed.

As the main objective of this study, icons (graphical or illustrative representations of concepts or items) will be investigated as a possible alternative mechanism for automated data entry on mobile computing systems. This study will focus on construction foremen as the real field information providers and on the validity of icons as the main mechanism in the graphical user interfaces designed for them. From the usability approach, icons and pre-defined text lists will be compared to evaluate their relative effectiveness and efficiency in construction field data input processes. A user interface experiment with the participation of sitework construction foremen in Central Florida will be conducted to determine which mechanism results in better usability, e.g., shorter task times, fewer user errors, and higher user satisfaction. The priority order assigned by foremen to these three important usability factors (task completion time, task errors, and user satisfaction) will be surveyed. The effect of foremen demographics on the resulting data will also be analyzed and discussed.

The experience of construction foremen with mobile computing devices will be explored. This study will also investigate, through face-to-face interviews, their opinions about using icon-based mobile documentation tools. As potential end users, they will be asked questions related to the computerization of field documentation.














CHAPTER 2
LITERATURE REVIEW

This chapter will discuss some general concepts related to human-computer

interaction/interface (HCI), graphical user interface (GUI), icons, and usability theories.

Past research on computerizing construction field communications will be reviewed from

a system usability perspective. Construction foremen and their role in the information

communication process on construction sites will be examined. Limitations associated

with the pen/stylus handwriting-based data input method will be discussed as well. This

chapter will also provide a brief historical review of icons, signs and symbols. In the later

part of this chapter, the concept of using icons as an automated data entry mechanism in

graphical user interfaces designed for construction foremen will be discussed.

Human-Computer Interface/Interaction (HCI), Graphic User Interface (GUI), and
Usability

Barker (1989) informally defined a human-computer interface (HCI) as a

mechanism which facilitates the flow of information between a computer and a human.

The Association for Computing Machinery's Special Interest Group on Computer-

Human Interaction (ACM SIGCHI) described human-computer interaction as a field with

intertwined roots in computer graphics, operating systems, human factors, ergonomics,

industrial engineering, cognitive psychology, and computer system engineering.

Redmond-Pyle and Moore (1995) stated that in typical information systems and office

systems the human-computer interface includes the following:

* The parts of the computer hardware that the user interacts with, e.g., screen,
keyboard, mouse, on/off switch, etc.

* The images or data that are visible on the screen, e.g., windows, menus, messages,
help screens.

* User documentation such as manuals and reference cards.

The second component in Redmond-Pyle and Moore's definition of the HCI

structure is often referred to as the graphic user interface (GUI). The GUI provides the

uppermost presentation layer for the communication (visual input and output) between

the user and the computer.

The term "usability," in simple words, defines how usable a product or system is

when it is put to use by the users to perform the intended activities or tasks. In other

words, a product with high usability is easier to learn and use than a product with low

usability. It is therefore easy to understand that a product with low usability is less likely

to be accepted by its intended customers or users.

The definitions most often cited by researchers in the usability field are those of

Shackel (1990) and Nielsen (1994). The two definitions share many common

aspects; the main components and characteristics of their definitions are

summarized as follows:

* Effectiveness: for a specified range of tasks and group of users in a particular
environment, how effectively can the tasks be performed using the interface? What
are the frequency and seriousness of the user errors? This is sometimes referred to
as "productivity" or efficiency of use once the system has been learned, as it
includes how fast the user can correctly perform tasks.

* Learnability and retention of knowledge and skills learned: how much training and
how much practice do users require before they become proficient with the system?
If use is intermittent, how much relearning time do users need to re-gain the
required knowledge and skills to use the system?

* Flexibility: to what extent is the interface still effective if there are changes in the
task or environment?

* Attitude or subjective user satisfaction: do people who use the system find it
stressful and frustrating, or do they find it rewarding to use, and feel a sense of
satisfaction? Do users like the system?

Since the 1980's, usability has been widely recognized as an important software

quality attribute alongside technical aspects such as functionality, internal consistency,

and reliability. Most of the major information technology companies maintain their own usability

divisions to investigate potential usability pitfalls in their products and systems before

they are released to the market. Usability engineering is crucial for computer-related

businesses to survive in today's customer/user-driven market, where user acceptance

is critical to success when launching new products or systems. To the end users, a system

with good usability can help improve their productivity, reduce the quantity or frequency

of user errors, and require less training for those who will use the new system (Redmond-

Pyle and Moore 1995).

Past Research Examined from an HCI and Usability Perspective

Complex technical systems do not evolve fully formed, but rather in fits and starts
as the combination of technical possibility and economic advantage encourages
localized development. (Winch 2002, p. 341)

From a retrospective point of view, past research in the construction industry on

computerizing field information communication has mainly focused on usability issues of

the hardware and functionality aspects of the Human-Computer Interface. The approach

adopted by most researchers consisted of taking the technologies and computing devices

commercially available and evaluating their appropriateness and functionalities in various

types of field information documentation/communication tasks. Examples include

research on pen computers (McCullouch 1993, Coble and Kibert 1994, Songer et al.

1995, Elzarka et al. 1997, Liu 2000), research on bar code technology (Coble and Elliott

1995, Condreay 1997), research on Radio Frequency Identification (RFID) technology

(McCullouch 1991, Jaselskis and El-Misalami 2000), research on wireless

communication technology used in conjunction with handheld PC's (De La Garza and

Howitt 1997), and more recently research on pocket PC's (Repass et al. 2000, Bowden et

al. 2002, Cox et al. 2002, Williams 2003). In addition to these adaptive approaches in

finding the ideal computing device suitable for construction field needs, there are also

some innovative research studies such as Digital Hardhat (Liu 1997, a system employing

a hardhat-mounted video camera and pen computer that is capable of capturing textual,

sound, and pictorial information) and Gator Communicator (Alexander 1996, a handheld

computer prototype that is based on the OS-9000 real-time operating system and includes

a global positioning system (GPS) receiver, digital compass, digital stereo camera, and digital two-

way wireless radio functions).

While these research efforts provided many valuable insights and lessons as to the

characteristics of the ideal mobile computing platform that would be suitable for

construction field settings, the graphic user interface or software aspect of the system

usability has unfortunately often been neglected. The characteristics of an effective

graphical user interface for field users were seldom considered. It should be recognized

that usability of the graphical user interfaces has considerable importance. A good

example to illustrate such a point was a study conducted by Tektronix Laboratories on the

effect of user interface design upon user productivity (Bailey et al. 1988). In that study, a

Tektronix 11000 series laboratory oscilloscope was compared to its predecessor 7000

series. The 7000 series interface was a dedicated physical control system while the 11000

system employed a rich graphical user interface that included icons, popup menus,

assignable controls and a touch panel. The study results showed that the 11000 series had

a 77% performance gain over the 7000 series, and the researchers attributed this gain to

the better cognitive support for strategy selection and recall of operational details

associated with the 11000 series's user interface.

Foremen and Their Role in the Information Communication Process

Recognizing the users and their particular needs is the first step in the process of

successful usability engineering. There are various groups of existing and potential field

computer users on construction sites. For general contractors and construction managers,

the field personnel are typically project superintendents, field engineers, and, on some

larger projects, project managers. For self-performed work, general contractors are

similar to subcontractors or specialty contractors in that their field personnel also include

construction workers and foremen.

According to the Household Data Annual Averages statistics for 2002 released

by the Bureau of Labor Statistics of the U.S. Department of Labor, there were 6.774

million workers employed in the U.S. construction industry. A foreman in the

construction industry usually supervises from two to more than twenty workers, with a crew

size of six to eight workers being most typical (Borcherding 1977a, Elliott 2000). Based

on this ratio, it can be estimated that there are approximately one million

foremen in the U.S. construction industry. Therefore, research on foremen and

computerizing their documentation tasks can be significant in improving

computer use and possibly improving productivity and product quality in the

construction industry.
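As a rough check on this estimate, the arithmetic can be sketched as follows (a back-of-the-envelope calculation in Python, assuming a typical crew of about seven workers per foreman, the midpoint of the six-to-eight range cited above):

    # Back-of-the-envelope estimate of the number of U.S. construction foremen,
    # based on the 2002 BLS workforce figure and typical crew sizes cited above.
    construction_workers = 6_774_000  # BLS Household Data Annual Averages, 2002
    typical_crew_size = 7             # midpoint of the six-to-eight-worker range

    estimated_foremen = construction_workers / typical_crew_size
    print(f"Estimated foremen: {estimated_foremen:,.0f}")  # about 968,000, i.e., roughly one million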

Foremen are key individuals on the construction site. Research work on foremen

and their roles in the construction process occurs primarily in literature published in the

1970's and 1980's, with a few studies in the 1990's. For example, Borcherding, who

perhaps contributed most to the research work related to construction foremen, defined

foremen as the "key link between management and individual workmen" (Borcherding

1977a). In an effort to identify and clarify the functions and information needs of various

construction management personnel, Tenah (1986) defined a

foreman as one who "organizes and coordinates employees engaged in a specific craft or

function on a construction project; reads and interprets drawings, blue prints, and

specifications; allocates, assigns and inspects work; administers union agreements and

safety enforcement; hires and trains employees." Hinze and Kuechenmeister (1981) stated

that foremen, as first-line supervisors, are responsible for directing, guiding, and

managing crew members to achieve quality workmanship within budget and on schedule.

Senior (1996) observed that efficient foremen devote a substantial proportion of their time

to planning the job.

With the challenging characteristics of their job and a busy work schedule, foremen

often devote most of their attention to field problem solving, issuing work orders to their

crews, coordinating with other contractors, and performing other functions of their job

responsibilities. Field documentation such as daily field activity reports, accident

investigations, daily safety reviews, and other company internal report forms (Coble and

Baker 1993) are often relegated to the bottom of their priority list. As a result, these field

documentation tasks are either completed haphazardly or deferred until their

schedules allow time for such activities. Consequently, these field documentation

efforts often contain incomplete or inaccurate information, which makes it

difficult for management to fully exploit their value. As previously discussed, management

often relies on the information collected in the field for making essential business

decisions such as preparing a bid for a new project or allocating manpower and

equipment resources among ongoing projects. Although rectifying the deficiencies in the

information collected in the field is clearly desirable, management often sacrifices this as

a trade-off for a smoothly running project that is on schedule and within budget. This

dilemma has been long recognized in the construction industry (Borcherding 1977a,

Coble and Baker 1993).

Coble and Baker (1993) stated "construction foremen are clearly the missing link to

fully computerizing a construction company." Coble (1994) further pointed out that in

order to successfully computerize their work, the research effort must take into consideration

the foremen's background, characteristics, and job concerns. It was generally believed that the

majority of construction foremen have no education beyond high school, and this was

supported in a study conducted at Stanford University (Borcherding 1977) and another

study conducted at the University of Florida (Elliott 2000). Elliott's study also indicated a

mean foreman age of 40.0 and an average of 9.5 years of experience for the construction

foremen included in the study sample (N=119). In the construction industry, foremen

typically advance to their position through many years of experience, rising from craft

workers in crews to positions of leadership. Foremen must be willing to

accept responsibility, possess the ambition to lead others, and have the desire to achieve goals

(Borcherding 1977a). The feeling of threatened job security, diminished social status, or

reduced self-esteem is usually understood as the driving force behind individual

resistance to the changes brought forth by new technologies, and it was considered a

factor in foremen's resistance towards the idea of using computers in their realm (Coble

1994). This paradigm seems to have changed somewhat in recent years with the

increasing indispensability of computers in society, as indicated in

Elliott's study. In fact, 79.9% of the foremen Elliott surveyed indicated that handheld

computing systems may have the potential to help them do their jobs. While this

possible trend is encouraging, the fundamental characteristics of construction foremen as

being efficient and productive individuals will still require handheld/mobile computing

systems designed for them to be efficient and easy to use. Unfortunately, as previously

mentioned, this area historically has not made much progress.

Graphic User Interface on Pen-Based Mobile Computing Devices

Most research on mobile computing systems in the construction field revolves

around the basic concept of using pen and touch sensitive screens as the main input

platform regardless of their sizes/categories (e.g., tablet PC's, palmtop computers, pocket

PC's, etc.) or operating systems. It is widely accepted that in the construction field the

use of a physical keyboard is not practical for mobile users such as construction foremen

on a busy and rugged construction site (Coble et al. 1996, Alexander et al. 1997). Yet

manual data entry through a pen (sometimes called a stylus) is neither faster nor more

reliable than using a keyboard. To input a character the user has to make a series of

hand strokes with the stylus across the touch sensitive screen, and extensive

practice is required for a user to become proficient with a stylus. A few research studies in the

construction industry on pen computing technologies have recognized this limitation

(e.g., Rojas and Songer 1996, Bowden et al. 2002). This problem of inconvenient data

entry is inherent in the use of a pen or stylus (Masui 1998).

As a result, alternative automated data entry technologies such as bar codes, radio

frequency identification, etc., as previously discussed, were explored by some

construction researchers to compensate for the manual data entry limitations associated with

pen/stylus technology. In particular, the advancement of speech recognition technologies

in recent years has drawn construction researchers' interest to exploring the

potential uses of speech recognition technologies as an automated data entry method on

construction sites. Sunkpho and other researchers at Carnegie Mellon University explored

such technologies and have prototyped a framework for developing audio-centric

(namely speech recognition) interfaces in field data collection applications (Sunkpho et al.

2000, Sunkpho and Garrett 2003). Speech technologies hold great potential in providing

automated data entry in computer applications, as they are considered one of the "natural"

communication mechanisms between humans and computers, and, as a general rule,

speaking is faster than typing or writing. However, this technology has its limitations in

the construction field as well. First, noise interference on the construction site is a major

problem for the reliability of the data entry (speech input) process, and this problem is

unfortunately inherent in the construction environment and cannot be eliminated.

Secondly, as Sunkpho and others recognized, integrating a speech interface into

an application is not a trivial feat, as this is a complex technology (Sunkpho and Garrett

2003). Moreover, speech recognition is not the most efficient method for actuating

computer commands in graphical user interfaces. Querying and database

manipulation on the collected voice data are even more complicated tasks.

Experts in the construction research field have accepted that predefined drop-down

menus and text lists in the graphic user interface may be a more efficient and easy-to-

implement method to automate the data entry process in the construction field. Many

researchers believe a substantial portion of the information documented in the field is

repetitive from project to project and can easily be standardized (e.g., McCullouch 1993,

Rojas and Songer 1996, Cox et al. 2002, Bowden et al. 2002). Using a pen/stylus to click

and select items in the graphic user interface is a relatively quick and effortless process;

therefore, the user effort in performing such computer tasks is trivial.
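To make the "click and select" concept concrete, the following minimal sketch (with hypothetical field names and list contents) shows how a standardized report entry could be assembled entirely from pre-defined choices, with no handwriting or typing:

    # Minimal sketch of "click and select" data entry: the user taps items in
    # pre-defined lists and the application assembles a standardized record.
    # The field names and list contents are hypothetical examples.
    ACTIVITIES = ["Clearing", "Excavating", "Laying pipe", "Backfilling", "Paving"]
    WEATHER = ["Clear", "Cloudy", "Rain", "Windy"]

    def build_report_entry(activity_index: int, weather_index: int, crew_size: int) -> dict:
        """Assemble a daily report entry from the selected list positions."""
        return {
            "activity": ACTIVITIES[activity_index],  # chosen with a single pen tap
            "weather": WEATHER[weather_index],
            "crew_size": crew_size,                  # picked from a numeric selector
        }

    # e.g., the foreman taps "Laying pipe" and "Clear" and selects a crew of six:
    print(build_report_entry(2, 0, 6))
    # {'activity': 'Laying pipe', 'weather': 'Clear', 'crew_size': 6}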

As a user industry in information technologies, the construction industry has not

studied graphical user interfaces in as much detail as the computer industry has.

This is probably a result of the unique nature of the construction industry that is not

generally understood by those in the computer industry. Yet the graphic user interface can

play an important role in determining the overall usability of a computer system. For

example, older adults usually have a difficult time using the graphical user interfaces

designed for average users as a result of the normal effects of aging, including some

decline in cognitive, perceptual, and other abilities. Studies have found that using area cursors

(larger-sized cursors) and sticky icons (a feature that eases the selection process)

can improve their performance in basic selection tasks (Worden et al. 1997). In addition,

even different operating system user interfaces on similar types of personal digital

assistant (PDA) devices can result in significantly different user performance (Teresa et

al. 2001).

Icons, Signs and Symbols: A Brief Historical Review

Before words there were sounds and intonation, before writing there were symbols.
Speech splintered into different languages, different symbols developed into
various writing systems. Writing systems separated into the symbolic and the
phonetic, but symbolic iconographies persisted from earliest writing to the present
day. Only the symbols changed. As the computer replaced the pen and the brush, so
iconography, with today's symbols, prepares for tomorrow. (Sassoon and Gaur 1997, p. 63)

Icons, signs and symbols exist everywhere in our lives and workspaces. Because of

their communicative power, icons are used in a wide variety of situations to inform

people about particular conditions or to give instruction. For example, symbols or

pictographs are widely used in the Olympic games to depict various sports; they are used

on product packaging cartons and in instructional manuals to inform people how to

properly handle, transport, store and use products; they are used in public places such as

airports and train stations worldwide to provide directions and identification of important

facilities (e.g., luggage claim areas, telephone booths, currency exchanges, escalators,

etc.); they are used in equipment instrumentation such as the instrument clusters in

automobiles to indicate malfunctions and warnings (e.g., low fuel reserve, engine

malfunction, etc.) when illuminated; they are used on the roadways to alert drivers of

road conditions, allowable speeds, recreational interest areas, general service facilities at

exits, etc.; they are used at workplaces to caution of safety perils, hazardous materials and

required safety equipment and measures, etc. This list can go on and on.

The general philosophy behind using icons, signs and symbols instead of

character-based representations is that they are more intuitive and effective in conveying

the intended information. In fact, the use of pictorial representations by humans to

communicate non-verbally dates back to primitive times (Sassoon and Gaur 1997). In

early times, the need for written communication was simple and character-based written

languages did not exist. Yet, our ancestors used pictographs carved on rocks and other

objects to document information or communicate intellectual thoughts between one

another. Later when the need for written communication became more sophisticated and

more specific, the pictographs gradually broke down into smaller information units and

by the use of conventions they evolved into today's written languages, which are

totally based on abstract characters or radicals (radicals are used in Chinese, Japanese and

other Asian written languages). In his book "The Alphabet: An Account of the Origin and

Development of Letters," Taylor (1991) illustrated how the picture of the

owl was conventionalized into today's letter "M." In the old Egyptian language the name

of the owl was mulak. The picture of the owl is believed to have been primarily used as

an ideogram to denote the bird itself, secondly as a phonogram standing for the name of

the bird. It then became a syllabic sign used to express the sound mu, the first syllable of

the name, until ultimately it was employed simply to denote m, the initial sound of that

syllable. In his book "The Icon Book: Visual Symbols for Computer Systems and

Documentation," Horton (1994) also listed similar illustrations on how the letters "A"

and "0" have evolved from the ancient Egyptian hieroglyph, Sinai script, Moabite stone,

and early Phoenician to Greek and Roman characters. In Chinese, words denoting the

objects such as the sun, moon, and mountains also evolved from early

graphic representations of these objects. Therefore, in one sense, the use of signs,

symbols and icons in today's society may be regarded as an effort to reverse engineer

the evolution of human written communication.

The twentieth century has seen quite a few systemized research efforts in

developing visual communication systems using signs and symbols. Otto Neurath (1882-

1945) developed a method of visual presentation of statistical information as an

educational medium using pictograms that later became well known as the "International

System of Typographic Picture Education" (ISOTYPE). The basic principle of the

ISOTYPE system is that each symbol represents both a topic and a designated quantity,

and symbols can be "compounded" (McLaren 2000). For example, 'man' + mining' =

mine worker (McLaren 2000) or 'shoes' + 'factory' = shoe factory (Horton 1994). In the

1960's Charles Bliss developed a system called "Semantography" which consists of an

"alphabet" of 100 fundamental symbols that can be juxtaposed or superimposed to

represent even richer concepts. The fundamental set of symbols includes numbers,

mathematical symbols and simple geometric shapes and many of these shapes are easily

recognizable because they are abstractions of familiar objects or are already used

internationally (Horton 1994). The International Organization for Standardization (ISO)

and the International Electrotechnical Commission (IEC) also developed approximately

1,450 standardized symbols for international use. These are compiled respectively in ISO

7000 'Graphical Symbols for Use on Equipment: Index and Synopsis' and IEC 417

'Graphical Symbols for Use on Equipment: Index, Survey and Compilation of the Single

Sheets' (McLaren 2000).

Icons, and other terms including signs, symbols, signets, ideograms, index,

phonograms, and pictograms/pictographs are closely related and are often confusing to

ordinary people. From the semiotics (defined as "the science of signs," Eco 1976) point

of view, Marcus (2003) summarized the definitions of these terms as follows (p. 38):

* Signs: perceivable (or conceivable) objects that convey "meaning."

* Symbols: signs that have meaning by convention and are often abstract, like the
letters of this sentence or the national flag.

* Icons: Signs that are self-evident, "natural," or "realistic" for a particular group of
interpreters, like a photograph of a person, a "realistic" painting, or a right-pointed
arrow to indicate something should move to or is located to the right.

* Index: a special semiotics term for signs that are linked by the cause-and-effect in
space and time, like a photograph representing a scene, or a fingerprint on the
coffee mug at the scene of the crime.

* Ideograms: symbols that stand for ideas or concepts, for example, the letter "i"
standing for "information," help desk," or "information available."

* Phonograms: symbols that stand for sounds, for example, the letter "s."

* Pictogram: an icon (or sometimes symbol) that has clear pictorial similarities with
some object, like the person or men's room sign that (for some interpreters) appears
to be a simplified drawing of a (specifically, male) human being.

Despite such seemingly detailed linguistic delineations of these terms, the nuances

between icons and other terms and the significance of the nuances often diminish when

they are used in various disciplines. The interchangeable use of some of the terms is

common in today's society where graphically enriched software application user

interfaces are flourishing. In computer applications, the term icon can refer to almost

anything, not just easy-to-recognize pictographs but also abstract images or

symbols that can be totally unrelated but arbitrarily assigned to represent certain

computer commands. More interestingly, there are also studies on audible icons or

"earcons" which have taken the definition of icons to a new dimension (Brewster et al.

1993).

The use of icons in computer graphical user interfaces was incorporated early in the

design of Xerox's 8010 "Star" Office Workstation (Bewley et al. 1983) and has become

a main component of software applications, allowing the user to easily navigate through

the programs. The motivation for using icons in computer graphical user interfaces is

similar to that in other applications (e.g., public information displays, equipment labeling, traffic

controls, etc.): to facilitate the communication process between the human and the

computer. As shown in Figure 2-1, in the early days of computer technology, users of

computers communicated with them by means of simple 'binary state' switches, buttons

and numeric (octal or hexadecimal) keypads. As interface technology improved, this

mode of interaction was superseded by the use of QWERTY keyboards which enabled

the construction of command line interfaces. As the complexity of these grew, they

became more difficult to learn and remember. The introduction of the graphical user

interface in the Xerox 'Star' workstation and later in Microsoft Windows took away the

complexity of the command line interface through the use of 'dialogue boxes.' With the

graphical user interface, the need for a user to type is substantially reduced. Instead, users

use a mouse to point to objects (such as icons and pictures) on the screen to execute desired

commands. In some sense, much of the "ease to use" of a computer system often depends

upon the power of the metaphors embedded in the end-user interfaces. Since icons are

more visually distinctive than abstract words, it is generally believed that it is easier to

identify an icon than a word from a group of screen objects in a graphical user interface.

Icons can represent a considerable amount of information in very little space and space is

often at a premium on computer display screens (Hemenway 1982).


[Figure: a diagram of the evolution of human-computer communication, from switches, buttons, and keypads, to command line interfaces, to dialogue boxes, to graphical user interfaces, and finally to icon interfaces and iconic languages.]

Figure 2-1. Illustration of the Evolution of Human-Computer Communication Process

Signs, Symbols and Icons in Construction and the Possibility of Using Icons as
Automated Data Entry in Graphic User Interface

Signs and symbols are widely used in the construction industry. Signs based on

pictorial symbols are commonly used on construction sites to convey various safety warnings

and messages. Construction plans by nature are graphical representations of the

construction process of buildings via conventionally accepted symbols and rules. For

example, in site utility plans, straight or curved lines stand for various types of pipes with

the size information either directly noted near the lines or indirectly noted by means of a

pipe schedule. Different symbols are used to show various fittings or structures (e.g., gate

valves, bends, fire hydrants, backflow preventers, sanitary manholes, etc.). This system is

also used in drawings for virtually all other trades (e.g., plumbing, fire sprinklers, HVAC,

electrical, etc.). Construction foremen, whose main job functions include reading the

construction plans and then issuing work orders to their crews, have considerable

experience working with symbol-based graphical communication systems in this regard.

Using the "click and select" concept as previously discussed, Coble and Elliott

(1996) proposed the idea of using icons as the basic means not only for computer

commands but also for data entry in the graphic user interfaces designed for construction

field users. Unfortunately this idea never got to the stage of being implemented into a

working system and therefore was never tested in real settings to assess its usability.

Coble B. (1997) at the University of Florida tested 56 icons with 59 respondents

consisting primarily of construction project managers, superintendents, foremen and field

engineers for icon recognition response; 41 icons were successfully matched to their

descriptions by the respondents with a 90% or better concurrence rate. These results

indicate that if designed properly, icons can be used for automated data entry in the

graphic user interface designed for construction foremen.

Both icons and pre-defined text have the potential benefit of reducing the data input

effort by construction foremen as the intended end users. Therefore it would be

interesting to know if there is a difference between these two options in terms of

usability. A usability comparison between icons and pre-defined text lists in the graphic

user interface needs to be studied in order to validate or invalidate the concept of using

icons to automate the data entry process in the mobile computing systems designed for

construction foremen.

Icons vs. Pre-defined Text

Existing empirical studies have been equivocal on the issue of whether there is a

difference in terms of task completion time and user errors between textual

representations and iconic representations in the computer user interfaces. A few earlier

studies suggested that iconic representations offer little or no performance gain

over textual representations. For example, Rohr and Keppel (1984) compared icons and

text as computer commands in word processing and reported no improvement for icons

over text in terms of task completion time and error rates. Kacmar (1989) compared text,

icons, and a text+icon combination in matching programming concepts and labels and

found combined labels most accurate, with no difference among the three mechanisms in terms

of task time. Whiteside et al. (1985) did a comparison of different interface design

approaches and their effect on different types of computer users (novice, transfer, and

expert). Whiteside et al. (1985) found there was no significant performance improvement

for iconic interfaces and that novice and transfer users actually performed worse with

them. Egido and Patterson (1988) studied the effects of icons on navigation through a

catalogue and the study results showed the search time for icons was slower than text or

text plus labels. The study results by Egido and Patterson also indicated that icon users

took fewer steps but spent more time on each step than those with labels. Benbasat and

Todd (1993) conducted an experimental investigation under two factor levels where icons

versus text and direct manipulation versus menu-based were paired into four different

interface types. Benbasat and Todd concluded that there was no difference between the

icon and text-based interfaces for the time taken to complete the task and the number of

errors made. On the other hand, a more recent study by Staggers and Kobus (2000)

indicates that an icon-based graphical user interface yields shorter response times, fewer errors, and

higher user satisfaction than a text-based user interface. In Staggers and Kobus's study, 98

randomly selected male and female nurses completed 40 tasks using a text-based

interface and an icon-based graphical interface. Overall, nurses had a significantly faster

response time (P<0.01) and fewer errors (P<0.01) using the graphical interface than the

text-based interface. The icon-based graphical user interface was also rated significantly

higher for satisfaction than the text-based interface, and the graphical user interface was

faster to learn (P<0.01). Given these seemingly contradicting conclusions of previous

studies, a reliable statistical inference could not be drawn that there is no difference

between icon and text-based interfaces for construction foremen in terms of task

completion time, number of errors, and level of user satisfaction. The reasons are further

discussed below.

Effect From Interface Implementation Differences

Many of the earlier empirical studies did not control for the effect of interface

implementation differences on the study results. Factors such as font size, icon size,

spacing, and layout might have influenced the results but were not counterbalanced

to minimize their effects. Therefore these study results were not

totally conclusive.









Visual Appeal Factor Associated With Iconic Interfaces

Visual appeal refers to the phenomenon that users tend to spend more time on icon-

based user interfaces because of their visual attractiveness. Therefore, if the visual appeal

factor was not counterbalanced in a study, the results cannot conclusively show whether

the longer task time associated with an icon-based user interface resulted from longer

processing and recognition time or from the visual appeal of the

iconic interface. Many earlier studies did not take this factor into consideration.

Abstract Vs. "Concrete" Icons And Icons As Computer Command Vs. As
Information Units

Icons in existing empirical studies are generally abstract and used for

denoting computer commands. Such icons are often abstract in

concept, arbitrarily assigned to a particular computer command, and require extensive

usage for a user to learn the association. The icons of interest in this study are

"concrete" icons, meaning they are on the less abstract end of the scale and are

primarily used as information units. There is therefore a clearer association between the

icons and the objects/activities they represent.

Subject Characteristics

The subjects included in many of the earlier studies were often college students or

people who had considerable computer experience. The specific advantages and

disadvantages associated with icons may vary from novice users to intermediate users to

expert users. Therefore, advantages associated with icon-based interfaces may not be as

remarkable to expert users as to novice users. Previous studies have addressed little in

this area.








Summary

Computerization of the field documentation tasks of construction foremen has not

become a prevalent practice to date. Of the past efforts, few have focused

on the usability of the graphical user interface of the data collection systems designed for

construction foremen. In particular, no study has compared icons and pre-determined text

lists as automated data input mechanisms for construction foremen.














CHAPTER 3
RESEARCH METHODOLOGY

This chapter introduces the research questions that this study attempted to address.

It also discusses the methods used to accomplish the research objectives stated in the

previous chapters. The chapter is organized in the following sections: (1) research

questions, (2) methods, (3) sample selection criteria and techniques, (4) study design, (5)

survey questionnaire design, and (6) statistical procedures for analysis of the results.

As stated in the previous chapters, it has not been determined whether icons have

better usability than pre-defined text lists in the graphical user interfaces designed for

construction foremen. This was the main question that this study attempted to answer. In

the field of cognitive psychology, humans are characterized as information processors -

everything that is sensed (sight, hearing, touch, smell and taste) is considered as

information that the mind processes (Preece et al. 1994). According to information

processing theory, information enters and exits the human mind through a series of ordered

processing stages (Lindsay and Norman 1977). As summarized in Figure 3-1,

information from the environment is encoded into some form of internal representation in

Stage 1; in Stage 2 the internal representation of the stimulus is compared with

memorized representations that are stored in the brain; in Stage 3 a response is

formulated to the encoded stimulus; when an appropriate match is made the process

passes on to stage 4, which deals with the organization of the response and the necessary

action (Preece et al. 1994). Based on this theory, it can be conjectured that the human

brain would process text and graphic information differently in these four stages and the









difference would be dependent on the predominant information processing mode

(graphical or textual) that one is accustomed to. Larkin and Simon (1987) pointed out that

textual and pictorial information differs in terms of the effort associated with making

inferences. Jacob (1995) stated that the problem of human-computer interaction could be

viewed as two powerful information processors (human and computer) attempting to

communicate with each other via a narrow-bandwidth, highly constrained interface.

Therefore to address the human-computer interaction problem, more natural and more

convenient means need to be provided for users and computers to exchange information

easily and reliably. Addressing this problem can also help the researchers understand

better whether construction foremen, as information processors, process graphic-

based information faster and more accurately than text-based information. The answer

may depend on their extensive experience working directly in the field and dealing with

construction plans as highly graphic-based communication media.

[Figure: a flow diagram of the extended information processing model: input/stimuli pass through 1. Encoding, 2. Comparison, 3. Response Selection, and 4. Response Execution to produce output/responses, with attention and memory supporting the stages.]

Figure 3-1. Extended Stages of the Information Processing Model (Preece et al. 1994).

Determining whether foremen process icons better than text is important to the

information technology sector providing IT solutions to the construction industry because

the code implementing the graphical user interface typically accounts for 40-

90% of the entire program in today's software applications (Chalmers 2003). It

takes a great amount of time and effort to develop quality icons and often requires many









trial and error processes and refinements before finalizing an icon that perfectly serves

the design intent. Therefore it is important first to know whether or not icons can actually

improve the usability of the graphical user interfaces designed for construction foremen,

otherwise the time and effort invested in designing icons and implementing an iconic user

interface are not guaranteed to reap the intended benefits.

Research Questions

In the computer field, the usability of a system is typically measured by collecting

and analyzing the following data: time required for using the system to complete a given

task; number of errors and type of errors experienced by using the system to perform the

task; time required to learn the system to perform the task; retention quality of the

knowledge learned to use the system; and the user's subjective assessments of the system

(Chin et al. 1988, Roberts and Engelbeck 1989, Jeffries et al. 1991, Nielsen and Philips

1993).
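These measures can be pictured as a simple per-task record that is aggregated into summary statistics; the sketch below is illustrative only, with assumed field layouts and sample values:

    # Illustrative aggregation of the usability measures listed above.
    # The record layout and sample values are assumptions for this sketch.
    from statistics import mean

    task_records = [
        # (task completion time in seconds, error count, error type or None)
        (4.2, 0, None),
        (6.8, 1, "identification"),
        (3.9, 0, None),
    ]

    times = [t for t, _, _ in task_records]
    errors = [e for _, e, _ in task_records]

    print(f"Mean task time: {mean(times):.1f} s")
    print(f"Error rate: {sum(errors) / len(task_records):.2f} errors per task")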

From the usability point of view, there are several questions of primary interest in

this study; they are discussed below.

Do Construction Foremen Perform Computer Tasks Faster Using Icons Than Using
Predefined Text Lists Or Vice Versa?

More specifically, do construction foremen tend to find the correct choice faster

using icons or pre-defined text lists? User tasks in computer graphical user interfaces

usually take two basic steps: first locating the correct screen target (e.g., button, menu

item, etc.), which can be the most time-consuming step, and then performing the desired action

on the chosen component. One salient trend in the human-computer interaction research

field in recent years has focused on studying the visual searching or location learning

aspect of user tasks in the graphical user interfaces to gain more understanding about the

cognitive models of human-computer interaction and subsequently to find ways to

improve the usability of the human-computer interface (Salvucci 1999, Byrne et al. 1999,

Ehret 2002, Hornof and Halverson 2003). Being able to quickly find the screen objects

can reduce a user's task time, errors, and frustration (Ehret 2002). Investigating this

question is especially meaningful to construction researchers because although the

timesaving in visual searching or location learning for each individual task may appear to

be small, the aggregated effect can be significant in the entire software application over a

sustained period of time. For construction foremen, computer systems need to be efficient

and effective to use. Therefore, any effort to achieve this goal is significant in the process

of computerizing the field documentation tasks of construction foremen.

Do Construction Foremen Experience Fewer Errors Using Icons Or Pre-Defined
Text Lists?

The frequency and seriousness of the errors in user computer tasks also constitute an

important aspect of the usability of a system. It is easy to understand that if users

experience more errors on one system, they are likely to become more easily

frustrated with that system than with a competing system. The frustration could in

turn lower their motivation to use the system. If provided with choices, users would

naturally reject the error-prone system and adopt the one with fewer errors. User errors in

graphical user interface generally can be grouped into the following three categories: 1)

identification errors (observed errors are clearly the results of incorrect identification), 2)

selection errors (observed errors are accidental mouse/pen selection errors although the

user has identified the correct choice), 3) experimenter interventions, both subject

initiated and experimenter initiated. Although it is desirable to analyze all types of user

errors, this study will specifically focus on the identification type of user errors, as these

types of errors are directly related to how the interfaces are implemented. Selection errors

as a result of pen/touch screen sensitivity and pen-using skills are considered

hardware related and therefore would generally have equal effects on icon-based user

interface and text-based user interface (provided the icon-represented screen objects and

text-represented screen objects are comparable in size). Selection errors will not be

explored in this study.
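The three-way error taxonomy described above can be expressed as a simple classification so that, during analysis, identification errors can be separated from the excluded categories; the names in the sketch below are assumptions:

    # Sketch of the three-way user error taxonomy described above.
    from enum import Enum

    class UserError(Enum):
        IDENTIFICATION = "wrong target identified"   # analyzed in this study
        SELECTION = "accidental pen/mouse slip"      # hardware related; excluded
        INTERVENTION = "subject- or experimenter-initiated intervention"  # excluded

    logged = [UserError.IDENTIFICATION, UserError.SELECTION, UserError.IDENTIFICATION]

    # Keep only the identification errors, as this study does.
    identification_errors = [e for e in logged if e is UserError.IDENTIFICATION]
    print(f"Identification errors: {len(identification_errors)} of {len(logged)} logged")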

Do Construction Foremen Have A Preference Between Predefined Text Lists And
Icons?

Using a 7-point Likert ranking scale, what are typical foremen satisfaction ratings

with the icon interface and the text interface (1. Very Dissatisfied; 2. Dissatisfied; 3.

Slightly Dissatisfied; 4. No Opinion; 5. Slightly Satisfied; 6. Satisfied; 7. Very Satisfied)?

Users' satisfaction with a system is not only closely related to the efficiency and efficacy

of the system, which is directly translated into task time and user errors, but is also affected

by the psychological effect (disorientation, anxiety, etc.) and the cognitive load (how

much mental effort is required) of the user interface. Computer anxiety and computer-

related anxiety were estimated to affect 30% of the United States workforce (Logan 1994).

Computer related distress is commonly believed to yield increases in mistakes,

debilitating thoughts, self-depreciating thoughts, irrational beliefs and absenteeism

(Ramsay 1997). Rozell and Garden (2000) further noted that researchers have observed

that motivation, or level of effort, is one of the primary variables affecting individual

performance in general and computer-related performance in particular. Recognizing and

developing systems towards user preference and user satisfaction is the key in today's

user-driven information technology market.









While the above three questions comprise the main focus of this study, the

following questions are also to be explored.

What Is The Ranking Order Of The Above Three Usability Aspects From The Point
Of View Of Construction Foremen?

Which aspect do they perceive as the most important? Text-based user interface

and icon-based user interface both have their advantages and disadvantages. Although it

is desirable to have a system that is far superior to its competition in all aspects, reality

shows it is not always the case. Therefore, when making system selections, it is important

to know which factors are viewed as most important to construction foremen.

What Are The Views Of Construction Foremen About The Concept Of The Icon
Based Mobile Field Documentation Applications?

In other words, how do construction foremen as the end users perceive the icon

based mobile documentation applications for automating the field documentation

process? Would they perceive that this kind of application would help them do their jobs

better?

What Is The General Knowledge And Experience Of Construction Foremen On
Mobile Computing Devices?

How do they perceive the difficulty and inefficiency associated with handwriting

input using pen/stylus on current mobile computing devices? Elliott's study (2000)

indicated that 84.0% of the construction foremen in his sample (N=119) did not use

computers to perform any part of their jobs, but 50.4% of the sampled foremen did use

computers in their homes. Elliott's study also showed 79.9% of the foremen responded

positively that they thought mobile computing devices would help them do their jobs.

Mobile computing devices such as the personal digital assistants (PDA's) are

commonplace nowadays, and affordability no longer seems to be the issue as it was a few









years ago. They are more widely used, not only for work-related tasks, but also as a tool

for better organization of personal business. Therefore, the knowledge and experience of

foremen with mobile computing devices need to be investigated to understand their

exposure to this type of technology. Difficulty associated with handwriting input using

pen/stylus has long been perceived by the researchers as a roadblock in the

implementation process of mobile computing devices in the construction field. However

it is not known whether construction foremen as the end users have the same perception.

Therefore this issue needs to be explored.

What Percentage Of The Information In Current Field Documentation Do Foremen
Think Can Be Standardized For Use With The "Click And Select" Concept?

There always exist non-standard and miscellaneous information items in foremen's

field documentation. But is the percentage of the items that can potentially be

standardized significant enough to justify the use of automated information entry either

by icons or pre-determined text lists?

Last, how do construction foremen demographics (age, construction experience,

computer use experience, etc.) affect the results for the first three research questions? Are

there any correlations between these user parameters and the observed user performance

variables in terms of task time and error rates?
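A correlation check of the kind contemplated here could be sketched as follows (Pearson's r is one reasonable choice of statistic; the variable names and data values are hypothetical):

    # Sketch of a demographics-versus-performance correlation check.
    from statistics import correlation  # Pearson's r; Python 3.10+

    years_experience = [2, 5, 9, 12, 20]  # per-subject construction experience
    task_errors = [6, 4, 3, 2, 1]         # per-subject identification error counts

    r = correlation(years_experience, task_errors)
    print(f"Pearson r = {r:.2f}")  # a negative r would mean more experience, fewer errors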

Samples

Although it was desirable to include all the foremen from all construction trades in

the sample universe, the study was specifically focused on construction foremen in the

sitework trades (e.g., clearing and excavating, underground utilities, and paving) in the

greater Orlando area in the state of Florida. This delimitation of the sample universe was

important to minimize the effect of potential external factors on the study results, such as

1) subjects' individual differences in knowledge of trade-specific construction activities,

procedures, and equipment, i.e., construction foremen in the sitework trades may not be

very familiar with knowledge pertaining to the mechanical or electrical trades, and 2)

the geographical location factor (although the influence of this factor is likely very

small, as the construction workforce in the U.S. is generally very mobile).

The Blue Book, a business directory widely used in the U.S. construction industry,

was used to identify the potential sitework contractors in the greater Orlando area that

could be included in the study. The management personnel of these firms were contacted

to seek their permission to solicit the participation of their foremen in the study. When a

foreman agreed to participate in the study, the visual searching experiment was arranged

and conducted in an indoor environment (e.g., contractors' main offices, and job site

trailers) and the questionnaire survey was given subsequently.

The inclusion criteria for this study were as follows:

* Firm management agreed to allow their foremen to be included in the study

* The foreman was willing to voluntarily participate in the study

* The foreman was able to readily read and write in English fluently

* The foreman had normal vision or corrected vision and was able to read the 14 pt
text without any difficulties when seated 12 to 18 inches in front of the experiment
apparatus.

Although only the foremen sample was initially planned, two additional samples

were taken in the final study phase. The initial sample included thirty-five foremen who

were selected from eight different sitework construction companies. A total of twelve

companies were contacted but eight actually participated in the final study. These eight

companies were not the same ones that participated in the pilot study phase. The second

sample included thirty-seven subjects whose professions were closely related to civil

engineering/sitework construction. This sample included twelve project managers, five

superintendents, four project engineers, one construction inspector, one construction

estimator, eight civil engineers, four CAD technicians employed by civil engineering

firms, and two construction management consultants. Subjects in this sample were

employed by seventeen different firms, with six subjects employed by the state

transportation department and thirty-one by private companies. Additionally, these

subjects were located in the U.S. except for one who was in the U.K. The subject from

the U.K. was a research consultant in the field of mobile computing technologies for the

construction industry. The third sample included twenty-six graduate students and two

undergraduate students in the School of Building Construction at the University of

Florida. These students were selected because they would likely be in various

construction supervision/management positions after graduating from the university.

Their views about the research questions were of interest as well.

Methods

Two methods were employed in this study to collect data to answer the above

stated research questions. The first method was essentially a computer visual search game

that each subject was required to play. The computer game contained the code to track

various user interface events (screen targets pressed, mouse cursor locations, time stamps

for various events, etc.) that provided the quantitative data to answer the first two

research questions (user task completion time and error rate). Such information was

recorded in a simple text format data file that was then imported to a spreadsheet

application for data formatting and initial processing.
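Although the actual instrumentation code of the game is not reproduced here, event tracking of the kind described can be sketched as follows (a minimal, hypothetical logger writing one tab-delimited text line per user interface event, a format a spreadsheet can import directly):

    # Minimal sketch of user interface event logging; the file name, field
    # layout, and event names are assumptions for illustration.
    import time

    def log_event(logfile, event: str, target: str, x: int, y: int) -> None:
        """Append one tab-delimited line: timestamp, event, target, cursor x/y."""
        logfile.write(f"{time.time():.3f}\t{event}\t{target}\t{x}\t{y}\n")

    with open("session_log.txt", "w") as f:
        log_event(f, "screen_shown", "training_session_1", 0, 0)
        log_event(f, "target_pressed", "icon_excavator", 312, 188)
    # The resulting text file imports directly into a spreadsheet for processing.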

For the more qualitative questions (Research questions 3 through 8), a survey

questionnaire was used to obtain the subjects' views/answers to these questions. The

survey questionnaire also included a section for the subjects' demographic information,

their knowledge and experience in mobile computing devices equipped with touch

sensitive screens, their subjective evaluations of the experimental apparatus, and other

research questions.

For research question #5, a sample icon-based application operating on the Palm

OS for documenting equipment usage was demonstrated to the subjects. The sample

equipment usage tracking application was designed to work with the stylus and the touch

sensitive screen and required no typing or handwriting to input the key information. A

user would only need to use the stylus to select different icons to navigate between the

screens and to input the equipment usage information (equipment number, operating

hours, idle time, downtime (if any), and the quantities of the work completed). A more

detailed discussion of the sample icon-based equipment usage tracking application can

be found in the later part of the chapter. After the icon application demonstration, the

subjects were then asked to describe their views about the icon-based mobile field

documentation application. Data collected through the computer experiment and survey

questionnaires were imported into a statistics program for analysis.
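The information captured by the demonstrated application can be pictured as a small record assembled entirely from stylus selections; the sketch below is illustrative only, with field names mirroring the items listed above:

    # Illustrative record for the icon-based equipment usage tracking demonstration;
    # the class and field names are assumptions that mirror the items listed above.
    from dataclasses import dataclass

    @dataclass
    class EquipmentUsage:
        equipment_number: str
        operating_hours: float
        idle_time_hours: float
        downtime_hours: float  # 0.0 if no downtime occurred
        work_quantity: str     # quantity of work completed, e.g., "350 LF of pipe"

    # All values entered by tapping icons and selectors with the stylus:
    entry = EquipmentUsage("EX-07", 6.5, 1.0, 0.0, "350 LF of pipe")
    print(entry)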

Visual Searching Task Experiment

The visual searching task experiment was designed to collect the data for a user's

task time and error rate response variables under the two different factor levels (icons or

pre-defined text lists). The visual searching task required a subject to identify and select

either an icon screen object or a pre-defined text screen object from a group of screen

objects to match the instruction given at the top of the screen. The instruction was given

in a different format from the screen objects, i.e., textual instruction for icon interface and

iconic instruction for textual interface. Each visual search task basically consisted of









three steps: reading the instruction, locating an icon or pre-defined text item that

matched the instruction, and selecting the correct screen target. The computer game

tracked the time used for reading the instruction and the time for locating and selecting

the screen objects for each search task.

Apparatus/Materials

The apparatus used in the visual searching task experiment was a custom-developed

icon/text matching computer game. The computer game was tested in a pilot study phase

and underwent several iterations and refinements to incorporate the findings learned

during the pilot study phase. The computer game recorded users' mouse movements and

actions on the screen during each visual search game session. The recorded information

provided data on the time and user error variables for each visual searching task. The

visual search game included three icon-training sessions, one text-icon visual search

session, and one icon-text visual search session. Each subject was required to complete

three icon-training sessions before the text-icon visual search session or the icon-text

visual search session could begin. The order of the text-icon visual search session and

the icon-text visual search session in each game was randomly determined by the code

in the computer game. Three training sessions were given to each subject as the pilot

study (see Chapter 4) showed this to be needed for a test subject to adequately learn the

icons. In the event that a subject's overall time for any icon training session was longer

than 91 seconds (a threshold determined in the pilot study, within which 90% of the

tested subjects were able to complete a session), a dialog box would pop up on the screen to

alert the test subject that the session time exceeded the baseline and that they needed to

endeavor to do better in the next training session. In addition, an elapsed time meter was









also displayed at the corner of the computer screen to remind the test subject of the time

and to motivate them to complete the game quickly.
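
The feedback logic just described can be sketched in a few lines of Python (a minimal illustration; the 91-second threshold is from the pilot study, but the function and variable names are hypothetical, not taken from the study's actual program):

    # Hypothetical sketch of the training-session feedback described above.
    # The 91,000 ms threshold is the pilot-study baseline; names are illustrative.
    SESSION_TIME_LIMIT_MS = 91_000  # 91 seconds

    def show_dialog(message: str) -> None:
        print(message)  # stand-in for the pop-up dialog box

    def end_of_training_session(session_elapsed_ms: int) -> None:
        """Warn the subject if the session exceeded the pilot-study baseline."""
        if session_elapsed_ms > SESSION_TIME_LIMIT_MS:
            show_dialog("Your session time exceeded the baseline. "
                        "Please try to complete the next training session more quickly.")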

Icon training session

Figure 3-2 shows the screenshot of a typical icon training session. Fifteen icons

were displayed in 3 rows and 5 columns. The locations of the icons were randomly

determined by the code in the computer program. In a typical icon training session, a

textual instruction for the target screen object was displayed near the top of the screen.

When the screen was completely displayed, the clock would start counting the elapsed

time. The test subjects were required to read the textual instruction and then try to find

the correct icon matching the textual description, e.g., "Excavator Laying Pipe" as shown

in Figure 3-2. When a correct icon was selected, that particular icon would be removed

from the screen and then the next visual search task would begin. If an incorrect icon

were selected, a dialog box would pop up on the screen to prompt the test subject to retry.

A total of 4 retries were allowed before that visual search task was called unsuccessful.

Each test subject was required to complete three icon training sessions before being

allowed to move on to the subsequent text-icon or icon-text visual search test sessions.

Figure 3-2. Sample Screen Shot of the Icon Training Session

Icon visual search test

After completing three icon-training sessions, each test subject was considered to

have acquired the knowledge on the corresponding matching relationships between the

icons and the text descriptions. The test subjects were then given either the icon visual

search test or the text visual search test based on the random number generated by the

computer (icon visual search test first if the random number was odd, and vice

versa). Figure 3-3 shows a typical icon visual search test interface. The icon visual search

test screen is essentially identical to the icon-training interface except that after each

successful match the full screen would be re-generated with all the icons at completely

different locations from the previous visual search task. Figure 3-4 shows a typical text

visual search interface. The text visual search interface utilized the same principle as the











icon visual search interface (screen re-generated after each visual search task). In the text

visual search interface, the target instruction was given in icon format with screen objects

consisting of a pre-determined text description list. Fourteen text objects were used in the

text visual search game and they were organized in 7 rows and 2 columns.


Figure 3-3. Sample Screenshot of the Icon Visual Search Session

Figure 3-4. Screenshot of the Text Visual Search Session

Test platform

The computer game was designed to run on any computer with Windows 2000 or

a later Microsoft Windows operating system and supporting at least an 800-pixel by

600-pixel screen resolution. The computer game was designed with the capability to

capture the system time stamps accurate to one millisecond (1/1000 second). Data

obtained in the pilot study phase (see Chapter 4) showed the differences in the results

from tests conducted on various computer platforms were not significant at a confidence

coefficient of 95% (α = 0.05). The computers used in this study were a Fujitsu Stylistic

3400 pen tablet PC and an IBM 600E Laptop computer.









Icons and Pre-defined Text Lists

Icons used in the computer game were designed to depict various construction

activities/operations. Thirty-five icons were initially designed with guidance from a

university professor in the construction management research field. After the preliminary

icon recognition testing and pilot study, fifteen icons with the highest recognition success

rates were selected and used in the computer game for this study. These icons and their

corresponding text descriptions are listed in Table 3-1.

Table 3-1. Icons and Pre-defined Text Lists Used in the Visual Search Tests
#   Pre-defined Text List (icon images not reproduced here)
1   3-Wheel Steel Roller Compacting
2   Traffic Roller Compacting Asphalt
3   Asphalt Paving Operation
4   Dozer Grading Dirt
5   Mobilizing Equipment
6   Excavator Backfilling Trench









Table 3-1. Continued
#   Pre-defined Text List (icon images not reproduced here)
7   Excavator Excavating Trench
8   Excavator Installing Structure
9   Excavator Laying Pipe
10  Excavator Loading Truck
11  Pouring Concrete
12  Loader Moving Dirt
13  Loader Moving Pipe
14  Material Delivery
15  Motor Grader Fine Grading









Data collection method

The tools used to evaluate system usability in the computer industry have changed

greatly over the last two decades. When the field of usability was first formed, simple

prototyping tools such as HyperCard were often used to create the scenario task user

interfaces. Primitive data collection methods such as paper and pencil were often used to

log user event information and the test administrator's observations. As the usability

discipline has gained more importance in the computer industry, the usability study

tools also have become more sophisticated to explore deeper usability issues in computer

hardware and software products. The usability divisions in most major information

technology companies have dedicated usability testing labs that are well equipped to

allow the evaluators to observe and analyze the potential users' task behaviors in a

controlled environment. The study platforms, whether simulated or in actual product form,

are often assisted by special computer programs that can capture the user interface

events (e.g., time spent on each task, time paused, time that a subject stayed on a

particular user interface object) and store this information in an event log file that can be

retrieved later and analyzed in detail. One example of such sophisticated usability study

tools used in recent years is the eye-tracking system (e.g., Salvucci 1999, Byrne et al.

1999, Hornof and Halverson 2003) that can record the participant's eye movement

information during a task scenario. Eye-tracking can identify the patterns of visual

activities that subjects exhibit while interacting with computer graphical user interfaces.

Such tools are highly desirable in most usability studies. However, the complexity and

costs associated with procuring, setting up, calibrating, and operating them often make it

prohibitive to use an eye-tracking system on small-scale and incidental research

projects.









Nonetheless, the main feature of these laboratory usability tools is no more than

tracking and collecting the user interface event data such as menu selections, keystrokes,

cursor location and movement, etc. The same concept is used in the data collection

method designed in this study. As discussed before, the visual search game was

programmed to capture the time stamps and mouse/pen event information and log the

collected data in a tab-delimited text file that could be imported into a Microsoft Excel

spreadsheet program for initial data formatting and processing.
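
As a rough illustration of this logging approach (not the study's actual code; the event names, file name, and layout here are assumptions), each user interface event can be appended to a tab-delimited text file with a millisecond time stamp:

    # Illustrative sketch of tab-delimited event logging; names are assumed.
    import time

    LOG_PATH = "visual_search_log.txt"  # hypothetical file name

    def log_event(event: str, detail: str = "") -> None:
        """Append one user-interface event with a millisecond time stamp."""
        timestamp_ms = int(time.time() * 1000)
        with open(LOG_PATH, "a") as f:
            f.write(f"{timestamp_ms}\t{event}\t{detail}\n")

    # Events mirroring those described in the text:
    log_event("instruction_displayed", "Excavator Laying Pipe")
    log_event("cursor_entered_panel")
    log_event("object_selected", "Excavator Loading Truck")  # incorrect attempt
    log_event("object_selected", "Excavator Laying Pipe")    # correct target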

Response Variables

The visual search game captured and recorded the system time stamps at the

following user interface events: screen displayed/elapsed time meter started, search

instruction displayed, mouse cursor enters the search object panel, and screen object

selected. It also recorded information such as the target object names and the name of

each screen object selected.

As shown in Figure 3-5, the instruction reading time (t_reading) was derived as the

difference between the time stamp "screen displayed/search instruction displayed" and

the time stamp "mouse cursor entered the search object panel." Similarly, the time used

to search for the target object was obtained by subtracting the time stamp "mouse cursor

entered the search object panel" from the time stamp "Nth screen object selected" for

that particular search task. The number of search errors was counted as N−1, namely the

number of attempts before the correct target was selected on the Nth try. The session time

was also obtained by finding the difference between the time stamp "elapsed time meter

started" and the time stamp of the last selected object in the session.











Figure 3-5. Visual Search Response Variable Definitions. [Timeline diagram: the search instruction is displayed; the mouse cursor enters the search panel; the first target is selected; subsequent targets are selected if the previous choice was incorrect. The instruction reading time spans from instruction display to the cursor entering the panel; the search time spans from panel entry to selection of the correct target; the incorrect selections in between are the search errors.]
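
The derivations above can be sketched in Python as follows (a minimal illustration assuming the time stamps and selection sequence described in the text; the names are hypothetical, not from the study's program):

    # Sketch: deriving reading time, search time, and errors for one task.
    def task_response_variables(t_instruction_displayed, t_cursor_entered_panel,
                                selection_times, selections, target):
        """Return (reading_time_ms, search_time_ms, errors) for one search task."""
        reading_time = t_cursor_entered_panel - t_instruction_displayed
        n = selections.index(target) + 1       # the Nth selection is the correct one
        search_time = selection_times[n - 1] - t_cursor_entered_panel
        errors = n - 1                         # attempts before the correct target
        return reading_time, search_time, errors

    # Example: one wrong attempt before the correct target is found.
    print(task_response_variables(
        1000, 2500, [4000, 5200],
        ["Excavator Loading Truck", "Excavator Laying Pipe"],
        "Excavator Laying Pipe"))              # -> (1500, 2700, 1)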

Visual Search Game Design Considerations

Special considerations were taken in the design of the visual searching experiment

and are discussed below.

Instruction format. Verbal instructions were initially contemplated in the pilot

study phase with the intent of precluding the potential bias that might exist in the user

interface for the pre-defined text lists in situations where study participants might attempt

to locate the choice by word matching. However, it became evident that this was not a

concern that needed to be addressed; on the contrary, verbal instructions could introduce

"noise" into the result data from other factors, such as instructor accent and

environmental/background distractions. As a result, the verbal instruction mode was

dropped from the later versions of the visual search game. Instead, the iconic interface

was designed with textual instruction and the pre-determined text list interface was

designed with iconic instruction. This modification in instruction format/mode also

facilitated comparing the time used for reading textual instruction with the time used for

reading iconic instruction to see whether there was a difference in processing the iconic

information and textual information by the test subjects.









Randomly-sequenced screen objects layout scheme. In order to reduce any bias

that might exist when participants tried to locate the screen object (icon or pre-defined

text list) by remembering its location in the previous task screen, the layout sequences of

the screen objects in all the experiment graphical user interfaces were randomly re-

assigned for each visual search task.

Font size. The text in the pre-defined text user interface was set in 14-point Arial,

which is approximately 0.15 inch in height when shown on the Fujitsu Stylistic 3400

screen. As found in the pilot study, 14-point Arial text was deemed adequate for most

people when the Fujitsu Stylistic 3400 screen was held approximately 12 to 18 inches

in front of the eyes.

Colors and contrast. Previous research by Nasanen and Ojanpaa (2003) showed

that as the level of contrast or sharpness increased, the search time, the number of eye

fixations per search, and the fixation duration decreased. As colors and contrast are not of

particular interest in this study, the screen objects (text lists and icons) in this study were

designed in monochrome (the highest contrast) to eliminate or minimize the potential

effect of the color and contrast factors on the study results.

Size of the icons. Prior research (Lindberg and Nasanen 2003) found that the size

of icons has a strong effect on the speed of icon processing in the human vision system.

Lindberg and Nasanen's study showed that icons smaller than 0.7° of visual angle resulted

in significantly longer search times. As the study of the size of icons was not within the

scope of this study, icons used in this study were designed to have a visual angle

significantly greater than the 0.7° observed by Lindberg and Nasanen. Icons used in this

study were designed at 64 pixels by 64 pixels, which is approximately 0.67 inch by 0.67









inch when shown on the Fujitsu Stylistic 3400 screen. This icon size translates to visual

angles of 3.19° and 2.13° when the test subjects are seated 12 and 18 inches, respectively,

in front of the screen.
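
These angles can be checked with the standard visual-angle formula, angle = 2·arctan(s/2d), where s is the icon size and d is the viewing distance; the short computation below is assumed to be the one behind the reported figures:

    # Reproducing the reported visual angles for a 0.67-inch icon.
    import math

    def visual_angle_deg(size_in, distance_in):
        """Visual angle (in degrees) subtended by an object of a given size."""
        return math.degrees(2 * math.atan(size_in / (2 * distance_in)))

    print(round(visual_angle_deg(0.67, 12), 2))  # ~3.2 degrees at 12 inches
    print(round(visual_angle_deg(0.67, 18), 2))  # ~2.13 degrees at 18 inches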

Repeated Measures. In order to maximize the information obtained from each

study participant, repeated measures were used, as the residual effects between the two

user interfaces are generally believed to be negligible. Each participant completed 14 visual

searching tasks (matching icons or text to the specified tasks) in the icons user interface

and 14 tasks in the pre-defined text lists user interface during the test sessions.

Sample Icon-Based Mobile Equipment Usage Documentation Application

To obtain the foremen's views on icon-based field information documentation

tools, a sample icon-based construction equipment timesheet application running on the

Palm OS was developed for the study. Equipment usage is a piece of information commonly

tracked by almost all sitework contractors. This information is typically used to prepare

billings and also to measure productivity. Often foremen have to write on a pre-

printed paper form to record the equipment use time and the activities that the equipment

is used for. Figures 3-6 to 3-10 show the major screenshots of the icon based mobile

application. To document equipment usage time information, a foreman would select the

equipment timesheet icon in the first screen (Figure 3-6). For example, to document the

equipment usage on scrapers, the foreman would select the scrapers icon in the second

screen, as shown in Figure 3-7 (which shows various types of construction equipment). The third

screen (Figure 3-8) shows all scrapers that had been mobilized onto the project site. The

foreman then would select the particular scraper for which the time information was to be

logged. In the fourth screen (Figure 3-9), the foreman would enter the equipment hour









meter reading, equipment operating time, idle time and downtime information by clicking

the icons and the soft keypad displayed on the touch sensitive screen. In the fifth screen

(Figure 3-10), the foreman would enter the completed work information by also selecting

appropriate icons. The application was designed in such a way that the foremen would

not have to write with stylus or use the hard keypad and combination keys to enter the

above information.


















Figure 3-6. Main Screen of the Sample Icon-based Field Documentation Application
(shown running on Handspring Treo 270 Model)































Figure 3-7. Equipment Selection Screen


Figure 3-8. Scraper Selection Screen































Figure 3-9. Scraper Time Information Entry Screen


Figure 3-10. Scraper Work Production Input Screen









Procedures

Based on the experience from the pilot study phase, it was found that the total time

required for each subject to complete the visual search game and fill out the questionnaire

survey needed to be limited to ten minutes or less. Otherwise, the interest of potential

subjects, and of their employing firms, in participating tended to be very low. The final

version was designed to be completed within the ten-minute time frame, with six to eight

minutes allotted for the visual search game and two to three minutes for filling out the

survey forms. Generally the manager of a company would first be contacted to obtain the

permission to interview the foremen and also to obtain their assistance in making

arrangements for the interviews. The actual interviews were usually held in conjunction

with the companies' weekly or monthly meeting at the home offices but some were held

at the jobsite offices.

Once a potential subject had agreed to participate in the study, a short introduction

was provided on the visual search game and the survey. For the subjects who had never

used a computer or mouse before, the test administrator provided additional guidance

through the first training session to ensure they could efficiently use the mouse and

understood the game protocols. As stated before, the subject started with either the icon

visual search test or the text visual search test based on the random number generated by

the computer code. At the end of the visual search experiment, the subjects were asked to

complete the questionnaire survey. Subjects either verbally gave their answers to the

questions that were recorded on the survey form by the experiment administrator or filled

out the questionnaire by themselves.









Research Hypotheses

From the research questions 1, 2 and 3 discussed earlier in this chapter, the

following hypotheses were formulated:

Task Completion Time

Regarding the task completion time between icon-based user interface and text-

based user interface, the null hypothesis and the alternative hypothesis are stated as

follows:

H10: There is no difference in the task completion time for icon-based user

interface and text-based user interface.

H1a: There is a difference in the task completion time for icon-based user

interface and text-based user interface.

For the purpose of this study, a meaningful difference between the task completion

time for icon-based user interface and text-based user interface on the per-task level was

defined as 1,000 milliseconds (or one second). This number was selected because, according

to theories in the physiology and psychology of eye movements, visual acuity is

not distributed uniformly across the visual field (Jacob 1995). Instead, the highest acuity

is concentrated in the fovea, which covers approximately one degree of the field of view. As

shown in Figure 3-11, the fovea is a spot located near the rear center of the human eye that is

responsible for the sharpest central vision. Outside the fovea, peripheral visual acuity

ranges from 15 to 50 percent of that of the fovea. Peripheral vision is generally

inadequate to see an object clearly; therefore, in order to see an object (e.g., a word or an

icon) clearly, one must move the eyeball to make that object appear directly on the fovea.

During a typical visual search process, when the target appears in the peripheral vision,

the eyes make sudden movements (called saccades, typically 30-120 milliseconds) to









make the target appear in the foveal vision range, and then a fixation (a period of relative

stability during which an object can be viewed) follows. Fixations typically last between

200 and 600 milliseconds, and there is also a 100-300 millisecond delay before the saccade

occurs. Summing these components (100-300 ms delay, 30-120 ms saccade, and 200-600 ms

fixation), a complete fixation process is estimated to take 330-1,020 milliseconds.

Therefore, 1,000 milliseconds (translating to one to three fixation periods) could be used

as a suitable meaningful difference. A significance level of 0.05 (α = 0.05) was chosen

for this hypothesis.
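
As an illustration only (the study's actual analyses are reported in later chapters), a paired t test in scipy is one standard way to test this hypothesis on repeated-measures task times; the data below are made-up placeholders, not study results:

    # Illustrative paired t test of H1 on per-subject mean task times (ms).
    from scipy import stats

    icon_times = [3100, 2900, 3500, 2700, 3300]  # hypothetical per-subject means
    text_times = [4300, 4000, 4600, 3600, 4500]  # hypothetical per-subject means

    t_stat, p_value = stats.ttest_rel(icon_times, text_times)
    mean_diff = sum(i - t for i, t in zip(icon_times, text_times)) / len(icon_times)

    # A difference is treated as meaningful only if p < 0.05 and the observed
    # mean difference also exceeds the 1,000 ms criterion defined above.
    print(t_stat, p_value, mean_diff)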


Figure 3-11. Schematic Diagram of the Human Eye, With the Fovea at the Bottom. [Labeled anatomical diagram; the printed labels did not survive text extraction.] Courtesy of Wikipedia, http://en.wikipedia.org/wiki/Opticfovea, February 7, 2006









Task Errors

Regarding the number of identification errors between icon-based user interface

and text-based user interface, the null hypothesis and the alternative hypothesis are stated

as follows:

H20: There is no difference in number of identification errors for icon-based

user interface and text-based user interface.

H2a: There is a difference in the number of identification errors for icon-based

user interface and text-based user interface.

The meaningful difference in identification errors was defined as 1 (one error). As

factors such as screen brightness, screen object sizes, color, contrast, and unfamiliarity with the

icons were muted to the extent that their effects on the resultant data were minimal, the

number of identification errors was expected to decrease significantly. In the pilot study,

the maximum number of identification errors in all icon visual search tests and text visual

search tests was two. A significance level of 0.05 (a = 0.05, for Type I error) was chosen

for this hypothesis.

User Satisfaction

Research question #3 concerns the satisfaction rating of the icon-based interface as

compared to the text-based interface. The response to this question would generally be an

ordinal variable. However, if the rankings of the satisfaction rating scale can be evenly

placed on a -1 to +1 scale, the response variable then could be treated as a numeric

variable and therefore more information would be available from the resultant data. With

this treatment, the rating "Not at all" was assigned a value of -1.0, "Did not like it" -0.67,

"Slightly disliked it" -0.33, "No opinion" 0, "Liked it a little" +0.33, "Liked it"

+0.67, and "Liked it very much" +1.0. The null hypothesis and the alternative hypothesis

were formulated as follows:

H30: There is no difference in construction foremen's satisfaction rating for

icon-based user interface and text-based user interface.

H3a: There is a difference in construction foremen's satisfaction rating for icon-

based user interface and text-based user interface.

The meaningful difference in foremen's satisfaction rating was defined as 0.165

(one-half step on the satisfaction rating scale). A significance level of 0.05 (α = 0.05)

was also chosen for this hypothesis.
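
A small sketch of this ordinal-to-numeric treatment (the scale values are those given above; the dictionary form itself is only an illustration):

    # Mapping the 7-step satisfaction ratings onto an evenly spaced -1..+1 scale.
    SATISFACTION_SCORES = {
        "Not at all": -1.0,
        "Did not like it": -0.67,
        "Slightly disliked it": -0.33,
        "No opinion": 0.0,
        "Liked it a little": 0.33,
        "Liked it": 0.67,
        "Liked it very much": 1.0,
    }

    # One full step is 0.33, so half a step (the meaningful difference) is 0.165.
    print(SATISFACTION_SCORES["Liked it"] - SATISFACTION_SCORES["No opinion"])  # 0.67, two steps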

Survey Questionnaire Design

Research questions 4 through 9 as stated earlier in the chapter were to be answered

from the information collected through the survey questionnaire. The questionnaire used

in this study was designed to facilitate an organized and consistent method of gathering

data during personal interviews. Questions pertinent to the research were developed and

then refined in the pilot study. For many questions, a Likert scale or semantic differential

scale was deemed appropriate and scaled answers were developed. Several variations of

the Likert scale were used and are listed as follows:

* Agreement Scale:
  1 strongly disagree; 2 disagree; 3 slightly disagree; 4 no opinion; 5 slightly agree; 6 agree; 7 strongly agree

* Importance Scale:
  1 not important at all; 2 of little importance; 3 fairly important; 4 important; 5 very important

* Efficiency Scale:
  1 very inefficient; 2 inefficient; 3 slightly inefficient; 4 no opinion; 5 slightly efficient; 6 efficient; 7 very efficient

* Satisfaction Scale:
  1 not at all; 2 did not like it; 3 slightly disliked it; 4 no opinion; 5 liked it a little; 6 liked it; 7 liked it very much

There were also open-ended questions in the survey questionnaire because the

possible answers to some of the questions could not be anticipated. The answers to the

open-ended questions were sorted and grouped in the results analysis phase.

Foremen Demographics

The first section of the questionnaire gathered demographic information about the

individual participant and the participant's employer. This included the company name,

years in business, the foreman's specific trade, the duration of the foreman's construction

experience, the foreman's average crew size, the foreman's education level, and the

foreman's age.

Foremen's Experience with Touch Sensitive Screen Devices and Mobile Computing
Devices

The survey questionnaire was also intended to obtain general information of the

construction foremen's experience and use of touch sensitive screen devices and mobile

computing devices. Many of today's touch-sensitive systems in use already incorporate

icons to some extent. A foreman's experience with such systems might have

an effect on the foreman's stated preference between a text-based interface and

an icon-based interface.

* Which of the following touch-sensitive screen devices have you used? (check all
that apply)









a. ATM Machines

b. Information Kiosks

c. Store checkout services

d. Other, please specify

* Have you ever used a mobile computing device (for example, Palm Pilot or pocket
PC's)? (Yes/No)

* Do you use a mobile computing device for work or for your own personal
business? (Yes/No)

* If "Yes," what do you mainly use it for?

a. Work b. Personal Business c. Both

* If "Yes," how much time do you use it for on a weekly basis?

minutes

* How efficient do you think it is to enter the field information on computers using
the stylus writing method? [Efficiency Scale]

* How important do you think it is to be able to enter the field information on
computers in a quick and efficient manner? [Importance Scale]

Foremen's View on Standardization of the Content of Field Documentation

Foremen were asked whether they thought the information content of their typical

field documentation could be standardized. If the answer was "Yes," then they were

asked to estimate a percentage of the amount that could be standardized.

* Do you think most information in your field documentation can be standardized on
the computer so you can pick and choose on the computer screen? [Agreement
Scale]

* If the answer is "Yes," what is the percentage of information that you think can be
standardized? %

Foremen's Preference Between Icons and Pre-defined Text List

As satisfaction is one of the important factors in usability, foremen's satisfaction ratings with the text-

based interface and the icon-based interface were assessed. Foremen were asked how much









they liked the icon visual search game and the text visual search game using a 7-step

Likert scale as previously mentioned. This study was more interested in knowing which

interface had a higher user satisfaction and to what extent the difference varied.

* How much did you like the icon game? [Satisfaction Scale]

* How much did you like the text game? [Satisfaction Scale]

* Please rank the importance of the following three usability factors ("1" being the
lowest and "10" being the highest):

a. shorter task completion time

b. fewer errors

c. satisfaction

Foremen's View about Icon-based Field Information Documentation Tools

After the visual search game, the subjects were shown how to use the sample icon-

based mobile equipment usage documentation application. The subjects were then asked

their opinions as to whether they thought the icon-based mobile computing system could

help them better fulfill their field documentation responsibilities.

* Do you think the icon-based mobile computer tools like the one shown to you
would help you do your daily log? [Agreement Scale]

Please comment on the answer:

* If you were given an icon-based mobile computer tool just as the one shown to you
for your field documentation, would you use it?

a. Yes

b. No, please explain reason

As the survey questionnaire was originally designed for foremen, it was used for

the "other construction professionals" sample and the subjects were asked to fill out the

survey as best applicable to them. A shortened and modified questionnaire survey form

was used for the student sample, with most questions the same as those on the foremen

questionnaire.














CHAPTER 4
PILOT STUDY

This chapter documents the preliminary studies conducted during the process of

designing the visual search game. The chapter is organized into the following sections:

icon design and recognition quality testing, sample size preliminary estimation for

hypothesis testing, visual search game initial testing, test platform effect study, and icon

learning curve analysis.

Icon Design and Recognition Quality Testing

Icons included in this study were designed in the Axialis AX-Icons 4.5 program.

A total of 35 icons (see Table 4-1), each 64 pixels by 64 pixels, were developed to represent

various sitework construction activities and operations. These icons were first evaluated

by a university professor in the construction management research field and went through

several iterations before recognition quality testing by other construction-related

professionals was conducted. In the icon recognition quality-testing phase, a total of 18

persons whose professions were directly related to sitework construction participated in

the evaluation:

* 6 construction foremen

* 3 equipment operators

* 2 superintendents for a sitework contractor

* 1 project engineer

* 2 superintendents for general contractors

* 3 construction inspectors for a civil engineering firm









* 1 construction surveyor

A printout sheet with the 35 icons was shown to the 18 participants and they were

asked to identify the construction activity that each icon represented. If a participant's

description of the icon matched the design intent then the recognition of that icon was

deemed successful. All 18 participants completed the recognition quality evaluation and

the results are shown in Table 4-1. The recognition success rate for an icon was defined

as the number of evaluators who successfully identified that particular icon at the

verbal prompt, divided by the total number of evaluators and expressed as a percentage.

Table 4-2 shows the number of icons

successfully recognized by each evaluator.

Table 4-1. Icon Recognition Quality Testing Results (icon images not reproduced here)
Icon No.   Icon Description                Recognition Success Rate
1          Excavator Excavating Trench     16/18 (or 88.89%)
2          Excavator Laying Pipe           17/18 (or 94.44%)
3          Excavator Setting a Structure   15/18 (or 83.33%)
4          Excavator Loading Truck         17/18 (or 94.44%)
5          Pouring Concrete                17/18 (or 94.44%)
6          Dozer Clearing Trees            16/18 (or 88.89%)









Table 4-1. Continued
Icon No.   Icon Description                  Recognition Success Rate
7          Excavator Clearing Trees          14/18 (or 77.78%)
8          Dump Truck Unloading Materials    17/18 (or 94.44%)
9          Flat Belly Pan Loading Material   10/18 (or 55.56%)
10         Peddle Pan Loading Material       8/18 (or 44.44%)
11         Dozer Cutting Trench              7/18 (or 38.89%)
12         Dozer Grading Dirt                17/18 (or 94.44%)
13         Motor Grader Fine Grading         17/18 (or 94.44%)
14         Survey and Layout                 15/18 (or 83.33%)
15         Maintenance of Traffic            17/18 (or 94.44%)
16         Loader Grading Dirt               13/18 (or 72.22%)
17         Loader Moving Dirt                16/18 (or 88.89%)









Table 4-1. Continued
Icon No.   Icon Description                          Recognition Success Rate
18         Loader Moving Pipe                        17/18 (or 94.44%)
19         Mixer Mixing Subgrade                     9/18 (or 50.00%)
20         Box Blade Grading Dirt                    16/18 (or 88.89%)
21         Single-drum Roller Compacting             16/18 (or 88.89%)
22         Maintenance of Traffic                    17/18 (or 94.44%)
23         Self-elevating Scraper Loading Material   15/18 (or 83.33%)
24         Double-drum Roller Compacting Asphalt     17/18 (or 94.44%)
25         Paving Asphalt                            17/18 (or 94.44%)
26         Traffic Roller Compacting Asphalt         17/18 (or 94.44%)
27         3-Wheel Steel Roller Compacting           17/18 (or 94.44%)
28         Small Double Roller Compacting Asphalt    17/18 (or 94.44%)









Table 4-1. Continued
Icon No.   Icon Description               Recognition Success Rate
29         Plate Tamp Compacting Dirt     17/18 (or 94.44%)
30         Broom Tractor Sweeping         15/18 (or 83.33%)
31         Excavator Backfilling Trench   17/18 (or 94.44%)
32         Mobilize Equipment             17/18 (or 94.44%)
33         Material Delivery              17/18 (or 94.44%)
34         Construction Accident          16/18 (or 88.89%)
35         Dewatering Operation           15/18 (or 83.33%)









Table 4-2. Icon Recognition Evaluation Results Organized by Evaluator
Evaluator #   Evaluator Job Function   # of Icons Successfully Recognized
1 Construction Inspector 30
2 Construction Inspector 32
3 Earthwork Foreman 35
4 Earthwork Foreman 33
5 Construction Surveyor 32
6 Earthwork Foreman 31
7 Equipment Operator 29
8 Equipment Operator 30
9 Earthwork Foreman 34
10 Construction Inspector/P.E. 29
11 Equipment Operator 33
12 General Superintendent 29
13 Superintendent 33
14 Project Engineer 30
15 Project Superintendent (General Contractor) 32
16 Underground Utilities Foreman 32
17 Earthwork Foreman 34
18 Project Superintendent (General Contractor) 31

Based on the icon recognition quality test results, fifteen icons with relatively high

recognition success rates were selected from the 35 icons and used in the visual search

game computer program.

Test Platform Difference Study

As it was likely that the visual search game would be administered on different

computer platforms during the final testing phase, it was important to know whether

different computer platforms could cause differences in the test results. To answer this

question, a test platform difference study was conducted to investigate the potential

influence of the platform differences.

Data

Two independent samples with 15 subjects per sample were drawn from a local

civil engineering firm. Each subject completed five (5) icon-training sessions. The









subjects completed 15 visual search tasks in each session. None of the subjects had

seen the icons or the visual search game before the test. The first sample test

was taken on a Fujitsu 3400 Tablet PC. The second sample test was taken on the various

types of computers that the test subjects used daily. The results for the average task time

and task errors from the icon-training sessions are shown in Table A-1 in Appendix A.

The individual task time was defined as the time used by a subject to read the textual

instruction and subsequently find the corresponding icon. The average task time for each

session was defined as the average of the individual task times in that session. To reduce

the bound of error in the results, the two highest and two lowest task times were excluded

and only the remaining eleven (11) observations were used to compute the average time

for each session. For example, Table 4-3 illustrates how Subject 1's average task time in

Session 3 was calculated. The two lowest individual task time observations, 500 ms and 1,407

ms, and the two highest observations, 13,172 ms and 10,859 ms, were considered

outliers and were not included in the computation of the average search task time for

Session 3. The average task time for Session 3 was computed with the remaining eleven

individual task time observations. Based on this method, the average task time by Subject

1 in Session 3 was computed to be 3,966 milliseconds.

outliers was mainly to reduce the bound of error due to either excessively long or short

incidental task times. The excessively long task times usually occurred where the visual

search process was halted on one particular icon with which the subject had great

difficulty. The excessively short task times typically occurred when the mouse cursor was

right on the target icon when the next task started. The shortest task time also frequently

occurred with the last visual search task where only one icon remained on the screen.









Table 4-3. Example of Excluding Outliers in the Computation of the Average Task Time
Search Task #   Task Time (in Milliseconds)
1    6,937
2    10,859
3    3,156
4    2,156
5    13,172
6    3,031
7    4,953
8    6,172
9    3,625
10   5,562
11   1,407
12   3,281
13   1,578
14   3,172
15   500
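
A minimal sketch of this trimming rule applied to the Table 4-3 values (drop the two highest and two lowest observations and average the remaining eleven):

    # Average task time with the two highest and two lowest values excluded.
    task_times_ms = [6937, 10859, 3156, 2156, 13172, 3031, 4953, 6172,
                     3625, 5562, 1407, 3281, 1578, 3172, 500]  # Table 4-3

    trimmed = sorted(task_times_ms)[2:-2]   # drops 500, 1407 and 10859, 13172
    average = sum(trimmed) / len(trimmed)   # eleven remaining observations
    print(round(average))                   # 3966 ms, as reported for Session 3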

Figure 4-1 shows the mean task time and search errors in each of the five icon

training sessions for the Fujitsu sample and the Non-Fujitsu sample. As is evident in Figure

4-1, the mean task time decreased as the training sessions progressed, and the task errors

decreased in a similar fashion on both the Fujitsu and Non-Fujitsu platforms.

Pearson correlation coefficients between the mean task time and task errors on the

Fujitsu platform and the Non-Fujitsu platform, as shown in Tables 4-4 and 4-5, indicate that

the correlation is significant at the 0.01 level on both platforms. In other words, task time and

task errors were highly correlated: longer task times were generally associated with more task

errors, while shorter task times were associated with fewer errors.






















Figure 4-1. Mean Task Time and Search Errors Observed in the Platform Difference Study. [Line chart: x-axis, Session No. (1 through 5); plotted series: Fujitsu task time, Fujitsu session errors, Non-Fujitsu task time, and Non-Fujitsu session errors.]

Table 4-4. Correlation Between the Mean Task Time and Task Errors on the Fujitsu
Platform
Fujitsu Task Time   Fujitsu Task Errors
Fujitsu Task Time Pearson Correlation 1 0.999
Sig. (2-tailed) 0.01
N 5 5
Fujitsu Task Errors Pearson Correlation 0.999 1
Sig. (2-tailed) 0.01
N 5 5

Table 4-5. Correlation Between the Mean Task Time and Task Errors on the Non-Fujitsu
Platform
Non-Fujitsu Non-Fujitsu
Task Time Task Errors
Non-Fujitsu Task Time Pearson Correlation 1 0.998
Sig. (2-tailed) 0.01
N 5 5
Non-Fujitsu Task Errors Pearson Correlation 0.998 1
Sig. (2-tailed) 0.01
N 5 5
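
As a sketch, these correlations can be reproduced with scipy from the five Fujitsu per-session means reported in Table 4-7 (Sample 1 columns, rounded here):

    # Pearson correlation between mean task time and task errors across sessions.
    from scipy import stats

    fujitsu_task_time = [6912, 4168, 3864, 3679, 3479]    # T1-T5 means (ms)
    fujitsu_task_errors = [5.60, 2.33, 1.73, 1.67, 1.40]  # E1-E5 means

    r, p = stats.pearsonr(fujitsu_task_time, fujitsu_task_errors)
    print(round(r, 3), p)  # r ~ 0.999 with p < 0.01, consistent with Table 4-4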









Hypotheses Testing

In determining the potential differences in study results that might be caused by

different computer platforms, the hypotheses about the differences between the means of

the two populations and the hypotheses about the variances of the two populations were

tested.

Hypotheses testing about variances of the populations on Fujitsu and Non-Fujitsu
platforms

It was important to compare the variability in the results data from the Fujitsu and

Non-Fujitsu platforms. This was because, for small samples (N < 30), the

assumption of equal variance would be required to calculate the pooled sample variance

to test the hypotheses about the means from these two different populations.

With a confidence level of 95% (α = 0.05), the following pair of hypotheses was stated for each response variable X, where X denotes the average task time in each of the five icon training sessions (T1 through T5) and the task errors in each of the five sessions (E1 through E5):

H0: σ²(X)Fujitsu = σ²(X)Non-Fujitsu

H1: σ²(X)Fujitsu ≠ σ²(X)Non-Fujitsu

If H0 is accepted, equal variance likely exists in that variable between the Fujitsu platform and the Non-Fujitsu platform; the test platform is then unlikely to be a factor that causes different variability in the test results, and it is appropriate to calculate the pooled sample variance from S(X)Fujitsu and S(X)Non-Fujitsu. If H0 is rejected, there is insufficient evidence to support that equal variance exists in that variable between the two platforms; the test platform is then likely a factor that causes different variability in the test results, and it is not appropriate to calculate the pooled sample variance.


To test the above-stated hypotheses, F values (F = S1²/S2²; note that Population 1 was

denoted as the population providing the larger sample variance) for each set of

hypotheses were calculated and listed in Table 4-6. The F(α/2) (α/2 = 0.025) value with n1−1

(15−1 = 14) degrees of freedom for the numerator and n2−1 (15−1 = 14) degrees of freedom

for the denominator is also shown in Table 4-6.









Table 4-6. Platform Difference Study - Sample Variance F Values
Variable   S1²              S2²              F = S1²/S2²   F(0.025) (n1−1 = 14, n2−1 = 14)
T1         47,779,430.471   34,221,720.004   2.119         2.983
T2         17,371,668.271   19,017,158.084   1.603         2.983
T3         14,929,980.804   14,415,690.240   1.101         2.983
T4         13,533,569.440   13,647,113.640   1.364         2.983
T5         12,102,513.284   11,557,280.160   2.099         2.983
E1         31.360           27.738           1.795         2.983
E2         5.444            9.000            1.340         2.983
E3         3.004            5.138            1.446         2.983
E4         2.778            3.484            1.655         2.983
E5         1.960            2.778            1.111         2.983


As shown in Table 4-6, the calculated F values for T1, T2, T3, T4, T5, E1, E2, E3, E4,

and E5 are all less than the F(0.025) value. Therefore, at a confidence level of 95% (α =

0.05), the H0 hypotheses for T1, T2, T3, T4, T5, E1, E2, E3, E4, and E5 cannot be rejected.

This led to the conclusion that, at a confidence level of 95% (α = 0.05), different computer

test platforms (Fujitsu or Non-Fujitsu computers) did not introduce differences in the

variability of the average task time and task errors for each of the five icon training

sessions. Therefore, it could be assumed with confidence that data collected on different

computer platforms would have equal variances.
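
A sketch of this equality-of-variance check (the F test as described above; the sample variances passed in below are hypothetical, while the critical value matches the (14, 14) degrees-of-freedom case in Table 4-6):

    # Two-tailed F test for equality of variances at alpha = 0.05.
    from scipy import stats

    def equal_variance_retained(s1_sq, s2_sq, n1, n2, alpha=0.05):
        """Return True if H0 (equal population variances) cannot be rejected."""
        if s1_sq < s2_sq:  # Population 1 carries the larger sample variance
            s1_sq, s2_sq = s2_sq, s1_sq
            n1, n2 = n2, n1
        f_value = s1_sq / s2_sq
        f_crit = stats.f.ppf(1 - alpha / 2, n1 - 1, n2 - 1)  # ~2.98 for (14, 14)
        return f_value < f_crit

    # Hypothetical variances; any F below the 2.983 critical value retains H0.
    print(equal_variance_retained(20_000_000.0, 12_000_000.0, 15, 15))  # True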

Hypotheses testing about the difference between the means of the data collected on
the Fujitsu and Non-Fujitsu platforms

It is important not only to analyze the variability of the data collected on the Fujitsu

and Non-Fujitsu platforms but also to compare the means of the data from these two

independent samples. If different platforms do result in different means, then the data in

the final study should all be collected on the same platform to avoid the unwanted bias in

the data that may exist because of the platform difference factor. For this, the following

hypotheses were stated with a confidence level of 95% (α = 0.05):











For each response variable X (the average task time T1 through T5 and the task errors E1 through E5 for each session), the hypothesis pair was:

H0: μ(X)Fujitsu − μ(X)Non-Fujitsu = 0

H1: μ(X)Fujitsu − μ(X)Non-Fujitsu ≠ 0

If H0 is accepted, the sample evidence is not sufficient to support the conclusion that there is a difference between the means of that variable for the Fujitsu population and the Non-Fujitsu population; the test platform is then unlikely to be a factor that causes a difference in test results. If H0 is rejected, there is a difference between the means of that variable collected on the Fujitsu platform and the Non-Fujitsu platform; the test platform is then likely to be a factor that causes a difference in test results.


As the sample sizes of the Fujitsu sample and the Non-Fujitsu sample were both less

than 30, the t distribution with 28 degrees of freedom (n1+n2−2 = 15+15−2 = 28, n1 = 15,

n2 = 15) was used to develop the critical values for the test. The following assumptions

were made for the test: 1) the Fujitsu population and the Non-Fujitsu population both have

normal distributions; 2) the population variances in the Fujitsu population and the Non-Fujitsu

population are equal. Pooled estimates of the population variances were calculated from

the variances of the Fujitsu sample and the Non-Fujitsu sample. For α = 0.05, t(α/2) with 28

degrees of freedom is 2.048. Critical values (lower value: 0 − 2.048 × S_pooled; upper value:

0 + 2.048 × S_pooled) for the test were also calculated, and the resultant data are listed in Table

4-7.

Table 4-7. Platform Difference Study t Test for Equality of Means
(Sample 1: Fujitsu Platform; Sample 2: Non-Fujitsu Platform)
Variable, Sample 1 Mean, Sample 2 Mean, Difference of Sample Means, S1, S2, Pooled Standard Error (S1²/n1 + S2²/n2)^0.5, Lower Critical Value, Upper Critical Value
T1 6,912.27 5,849.93 1,062.33 3,199.987 2,198.060 1,002.377 -2,052.868 2,052.868
T2 4,167.93 4,360.87 -192.93 1,084.059 1,372.435 451.572 -924.820 924.820
T3 3,863.93 3,796.80 67.13 746.550 711.389 266.259 -545.299 545.299
T4 3,678.80 3,694.20 -15.40 1,050.892 899.898 357.229 -731.606 731.606
T5 3,478.87 3,399.60 79.27 1,012.110 698.636 317.538 -650.318 650.318
El 5.60 5.27 0.33 4.763 3.555 1.535 -3.143 3.143
E2 2.33 3.00 -0.67 3.063 2.646 1.045 -2.140 2.140
E3 1.73 2.27 -0.53 1.387 1.668 0.560 -1.147 1.147
E4 1.67 1.87 -0.20 1.759 2.264 0.740 -1.516 1.516
E5 1.40 1.67 -0.27 1.056 1.113 0.396 -0.811 0.811

As shown in Table 4-7, the observed sample mean differences for T1, T2, T3, T4, T5,
E1, E2, E3, E4, and E5 were all located between the lower and upper critical values.
Therefore, at a confidence level of 95% (significance level α = 0.05), the H0 hypotheses
for T1, T2, T3, T4, T5, E1, E2, E3, E4, and E5 cannot be rejected.
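The same decision can be reached by computing the test statistic itself. A short
sketch (again illustrative, using the T1 summary statistics from Table 4-7 and
SciPy's pooled-variance test from summary data):

    from scipy import stats

    # Pooled-variance two-sample t test from the T1 row of Table 4-7
    result = stats.ttest_ind_from_stats(
        mean1=6912.27, std1=3199.987, nobs1=15,   # Sample 1: Fujitsu platform
        mean2=5849.93, std2=2198.060, nobs2=15,   # Sample 2: Non-Fujitsu platform
        equal_var=True,                           # pooled variances, df = 28
    )
    print(result.statistic)  # ~1.06, inside the (-2.048, 2.048) acceptance region
    print(result.pvalue)     # ~0.30 > 0.05, so H0 cannot be rejected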

Based on the results from the platform difference study, there is not sufficient
evidence to reject the hypotheses that the data collected on the different platforms do
not differ in terms of the population means and population variances. Therefore, the
risk of introducing bias into the results by administering the visual search game on
different computers can be considered low.
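The equality-of-variances half of that conclusion was examined earlier with
sample-variance F ratios (Table 4-6). For completeness, a minimal sketch of that check
for T1, under the same α = 0.05 with (14, 14) degrees of freedom (illustrative values
from Table 4-7; SciPy assumed available):

    from scipy import stats

    # F ratio of the T1 sample variances vs. the two-tailed critical F
    s1, s2 = 3199.987, 2198.060
    f_ratio = s1**2 / s2**2                       # ~2.12 (larger variance on top)
    f_crit = stats.f.ppf(1 - 0.05 / 2, 14, 14)    # ~2.98
    print(f_ratio < f_crit)  # True: no evidence the population variances differ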




Full Text

PAGE 1

THE USABILITY OF GRAPHICAL USER INTERFACES OF MOBILE COMPUTING DEVICES DESIGNED FOR CONSTRUCTION FOREMEN: ICONS AND PRE-DEFINED TEXT LISTS COMPARED By TAN QU A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLOR IDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSITY OF FLORIDA 2006

PAGE 2

Copyright 2006 by Tan Qu

PAGE 3

This dissertation is dedicated to my fam ily for their loving su pport over the years to finish this important chapter of my academic goals. My wife Wei Sun, who also received her Ph.D. at the University of Florida, has been a wonderful counselor and gave me moral support whenever I needed it. This dissertati on is especially dedicated to my son, Tan, who is coping every day with the learning di sabilities associated with his autism. His inquisitiveness and persistence in learning ha s taught me a completely new perspective about the meaning and the privilege of the hi gher education. He was the very first and the most enthusiastic reviewer of the icons that were designed for this study. This dissertation is also dedicated to my daughter Victoria, who has been the joy of my life and who was the constant inspiration to me with her loving energy. Last but not the least, this dissertation is dedicated to Isabella, my infant daughter, who unconditionally shares her love.

PAGE 4

iv ACKNOWLEDGMENTS I am grateful to many individuals for thei r support in this research effort. Without their guidance and assistance, this study would not have been possible. My supervisory committee was an excellent source of direction, both during the stage of preparing a feasible and sound proposal for the st udy and throughout the actual research and dissertation writing phases. I am much indebted to my committee chair Dr. Jimmie Hinze, who has exemplified his true scholarship and mentorship and guided me through individual steps of the study. Dr. Hi nze has devoted countless hours from his busy schedule in critiquing the study design and data analyses. His detailed review of the working drafts was extremely beneficial. Dr. Pierce Jones provided many valuable suggestions and directions in refining the research apparatus. His knowledge and expertise in computer visual communicati ons proved indispensable in this area. Dr. Ronald Akers provided the needed scrutiny in statistical methods used in the study and offered many good suggestions. Dr. Mary Jo Hasell provided considerable assistance during the initial pr oposal development and offere d many thoughtful insights. Her encouragement to me in finishing the P h.D. program was much appreciated. Dr. Leon Wetherington gave helpful advice in refini ng the survey questionnaire and the experiment apparatus, and he provided a very practical perspec tive to this study. Many other individuals who contributed to this study must also be acknowledged. Paul Ridilla, my best friend, also a ve ry knowledgeable construction management consultant, has provided great support in completing this research effort through his

PAGE 5

v candid and practical views from his over 50 years of experience in the construction industry. I cannot forget to mention Jimmy Flores and many other foremen, construction professionals, and students at the M. E. Ri nker, Sr. School of Bu ilding Construction of the University of Florida for their particip ation in the study. I cannot name each one of them here but their valuable time in comp leting the study was much appreciated. I also would like to give thanks to the individuals at the Florida Department of Transportation and the principals of the firms th at participated in this study.

PAGE 6

vi TABLE OF CONTENTS page ACKNOWLEDGMENTS.................................................................................................iv LIST OF TABLES.............................................................................................................xi LIST OF FIGURES.........................................................................................................xvi ABSTRACT.....................................................................................................................xi x CHAPTER 1 INTRODUCTION........................................................................................................1 Problem Statement........................................................................................................1 Field Information Documentation in Construction...............................................2 Problems Associated with Pape r-based Documentation Method..........................3 Computerizing Field Information Documentation................................................4 Research Objectives......................................................................................................7 2 LITERATURE REVIEW.............................................................................................9 Human-Computer Interface/In teraction (HCI), Graphic User Interface (GUI), and Usability....................................................................................................................9 Past Research Examined from A HCI and Usability Perspective..............................11 Foremen and Their Role in the In formation Communication Process.......................13 Graphic User Interface on Pen-Based Mobile Computing Devices...........................16 Icons, Signs and Symbols – A Brief Historical Review.............................................18 Signs, Symbols and Icons in Construction and the Possibility of Using Icons as Automated Data Entry in Graphic User Interface...................................................23 Icons vs. Pre-defined Text..........................................................................................25 Effect From Interface Im plementation Differences............................................26 Visual Appeal Factor Associ ated With Iconic Interfaces....................................27 Abstract Vs. “Concrete” Icons And Ic ons As Computer Command Vs. As Information Units.............................................................................................27 Subject Characteristics........................................................................................27 Summary.....................................................................................................................28

PAGE 7

vii 3 RESEARCH METHODOLOGY...............................................................................29 Research Questions.....................................................................................................31 Do Construction Foremen Perform Comput er Tasks Faster Using Icons Than Using Predefined Text Lists Or Vice Versa?...................................................31 Do Construction Foremen Experience Fewer Errors Using Icons Or PreDefined Text List?...........................................................................................32 Do Construction Foremen Have A Preference Between Predefined Text Lists And Icons?.......................................................................................................33 What Is The Ranking Order Of The A bove Three Usability Aspects From The Point Of View Of Construction Foremen?...............................................34 What Are The Views Of Construction Foremen About The Concept Of The Icon Based Mobile Field Docu mentation Applications?.................................34 What Is The General Knowledge And Experience Of Construction Foremen On Mobile Computing Devices?.....................................................................34 What Percentage Of The Information In Current Field Documentation Do Foremen Think Can Be Standardized For Use With The “Click And Select” Concept?..............................................................................................35 Samples.......................................................................................................................3 5 Methods......................................................................................................................37 Visual Searching Task Experiment.....................................................................38 Apparatus/Materials.....................................................................................39 Icon training session.....................................................................................40 Icon visual search test..................................................................................41 Test platform................................................................................................43 Icons and Pre-defined Text Lists..................................................................44 Data collection method.................................................................................46 Response Variables......................................................................................47 Visual Search Game Design considerations.................................................48 Sample Icon-Based Mobile Equipmen t Usage Documentation Application......50 Procedures...................................................................................................................54 Research Hypotheses..................................................................................................55 Task Completion Time........................................................................................55 Task Errors..........................................................................................................57 User Satisfaction..................................................................................................57 Survey Questionnaire Design.....................................................................................58 Foremen 
Demographics.......................................................................................59 Foremen’s Experience with Touch Se nsitive Screen Devices and Mobile Computing Devices..........................................................................................59 Foremen’s View on Standardization of the Content of Field Documentation....60 Foremen’s Preference Between Icons and Pre-defined Text List.......................60 Foremen’s View about Icon-based Fiel d Information Documentation Tools.....61 4 PILOT STUDY...........................................................................................................63 Icon Design and Recogniti on Quality Testing............................................................63 Test Platform Difference Study..................................................................................68

PAGE 8

viii Data......................................................................................................................68 Hypotheses Testing.............................................................................................72 Hypotheses testing about variances of the populations on Fujitsu and Non-Fujitsu platforms...............................................................................72 Hypotheses testing about the differen ce between the means of the data collected on the Fujitsu a nd Non-Fujitsu platforms.................................76 Icon Learning Curve Analysis....................................................................................81 Learning Curve Regression.................................................................................81 Long Term Effect of Icon Training.....................................................................86 Number of Training Sessions Required for the Final Study...............................91 Establishing Training Session Time Baseline.....................................................91 Lessons Learned During the Pilot Study.............................................................92 Experimental environment...........................................................................93 Verbal instructions.......................................................................................93 5 RESULTS AND DISCUSSIONS...............................................................................94 Sample Demographics................................................................................................94 Age......................................................................................................................94 Education.............................................................................................................95 Construction Experience.....................................................................................96 ForemanÂ’s Crew size...........................................................................................97 Foremen Categorizations.....................................................................................98 Occupations of the Construction Professionals...................................................99 Student Status....................................................................................................100 Computer Experience........................................................................................100 Experiences of the Research Subjects with Touch Sensitive Screen Devices (TSSDÂ’s)...............................................................................................................101 Experiences of the Research Subjects w ith Personal Digital Assistants (PDAÂ’s)....104 Views of The Research Subjects about the Efficiency of the Data Entry Mechanism by Handwriting Recognition.............................................................105 Foremen and PDA Efficiency...........................................................................106 Construction Professionals................................................................................107 Students.............................................................................................................109 Cross-groups......................................................................................................110 The Views of Subjects on the Importance of Quick Data Entry on Mobile Computing 
Devices...............................................................................................112 The Views of Foremen and Construction Pr ofessionals about the Standardization of the Field Documentation Content.....................................................................113 The Views of Foremen and Construction Pr ofessionals about th e Percentage of the Field Documentation Content That Could be Standardized............................114 Satisfaction Ratings of the Subjects with the Icon Visu al Search Game and Text Visual Search Game..............................................................................................115 Hypothesis Testing on SubjectsÂ’ Satisf action Ratings with the Icon Visual Search Game and Text Visual Search Game.................................................118 Wilcoxon matched pairs signed rank test...................................................119 Paired Difference t -test...............................................................................120

PAGE 9

ix Ranking Order of the Three Usability F actors (Task Time, Task Errors, and Satisfaction Level)................................................................................................121 Ranking Order of the Three Usability Factors by Foremen..............................121 Ranking Order of the Three Usability F actors by Construction Professionals.122 Ranking Order of the Three Usab ility Factors by Students..............................123 The Views of Subjects about the Icon-b ased Field Documentation Systems on Mobile Computing Devices..................................................................................124 Readiness of the Foremen to Use Field Documentation Systems on Mobile Computing Devices...............................................................................................126 Visual Search Game Results Analyses and Hypotheses Testing..............................127 Average Task Time...........................................................................................127 Average Task Instruction Reading Time...........................................................129 Average Task Search Time...............................................................................131 Task Errors........................................................................................................133 Error Reduction in Training Sessions.......................................................................135 One-Way ANOVA (Analysis of Variance) of Visual Search Game Results and Subject Types........................................................................................................137 Correlation Analysis between Construc tion Experience and the Average Icon Search Time..........................................................................................................140 Correlation Analysis between Constructi on Experience and the Task Errors..........141 Correlation Analysis of the Average Ta sk Search Time and Task Errors................142 One-Way ANOVA (Analysis of Variance) of Visual Search Task Time of Foremen with Computer Usage as Factor Levels.................................................143 6 SUMMARY, CONCLUSIONS AND RECOMMENDATIONS............................145 Summary...................................................................................................................145 Are Computer Tasks Performed Faster When Using Icons Than When Using Predefined Text Lists?...................................................................................145 Are Textual Instructions Processed Fa ster Than The Iconi c Instructions?.......146 Are Icons Located Faster Than Text?...............................................................146 Errors With Icons Versus Errors With Pre-Defined Text List..........................146 Preferences of Pre-defined Text Lists Versus Icons..........................................147 Ranking Order of the Three Usability Factors..................................................147 Views about Using Icon Based Mobile Field Documentation Applications.....147 Views about the Standardization of Information Contained in Field Documentation...............................................................................................147 Experience of Foremen with Mobile Computing Technologies.......................148 Conclusions...............................................................................................................148 Research 
Limitations................................................................................................149 Recommendations.....................................................................................................150 Future Research Recommendations.........................................................................151 Other Sectors of the Construction In dustry and Other Geographical Areas.....151 Intelligent Data Validation in Data Input Process.............................................151 Modeling of the Cognitive Activities of the Visual Search Process Through the Use of Eye-tracking Technologies...........................................................152

PAGE 10

x APPENDIX A PILOT STUDY RESULTS DATA..........................................................................153 B FINAL STUDY RESULT DATA............................................................................159 C SURVEY QUESTIONNAIRE.................................................................................190 LIST OF REFERENCES.................................................................................................192 BIOGRAPHICAL SKETCH...........................................................................................200

PAGE 11

xi LIST OF TABLES Table page 3-1 Icons and Pre-defined Text Lists Used in the Visual Search Tests..........................44 4-1 Icon Recognition Quality Testing Results...............................................................64 4-2 Icon Recognition Evaluation Resu lts Organized by Evaluator................................68 4-3 Example of Excluding Outliers in the Computation of the Average Task Time.....70 4-4 Correlation Between the Mean Task Ti me and Task Errors on the Fujitsu Platform....................................................................................................................71 4-5 Correlation Between the Mean Task Time and Task Errors on the Non-Fujitsu Platform....................................................................................................................71 4-6 Platform Difference Study – Sample Variance F Values.........................................76 4-7 Platform Difference Study – t Test for Equality of Means......................................80 4-8 Platform Difference Study Sample Group Statistics.............................................81 4-9 Learning Rate on the Average Task Ti me Per Icon for Each Training Session......84 4-10 Group Statistics of Learning Rates...........................................................................85 4-11 Session Time Data Statistics....................................................................................92 5-1 Mean and Median Ages of the Fo remen, Construction Professionals, and Students....................................................................................................................94 5-2 Mean and Median Construction Experience Durations of the Foremen, Construction Professionals and Students.................................................................96 5-3 Levene Test of Homogeneity of Variances on the Total TSSD’s Scores..............103 5-4 LSD Test of the Means of Total TSSD’s Scores...................................................104 5-5 Levene Test of Homogeneity of Vari ances on the Ratings of Foremen with and without PDA Experience........................................................................................107

PAGE 12

xii 5-6 One-Way ANOVA of the Means of the Numeric Ratings of Foremen with PDA Experience and Foremen without PDA experience...............................................107 5-7 Levene Test of Homogeneity of Va riances on the Ratings of Construction Professionals with and without PDA Experience..................................................108 5-8 One-Way ANOVA of the Means of the Numeric Ratings of Construction Professionals with and without PDA experience...................................................109 5-9 Levene Test of Homogeneity of Vari ances on the Ratings of Students with and without PDA Experience........................................................................................109 5-10 One-Way ANOVA of the Means of the Numeric Ratings of Students with and without PDA experience........................................................................................110 5-11 LSD Test of the Means of Nume ric Ratings by Foremen, Construction Professionals, and Students with Prior PDA Use Experience................................111 5-12 Percentages of the Field Documentation Content that Could be Standardized As Estimated by Foremen and Construction Professionals.........................................114 5-13 Wilcoxon Signed Ranks –Satisfaction Rating Differences between the Icon Visual Search Game and the Text Visual Search Game........................................119 5-14 Wilcoxon Signed Ranks Test Statistics–S atisfaction Rating Differences between the Icon Visual Search Game and the Text Visual Search Game..........................120 5-15 Paired Samples Differences t -Tests Statistics– Subjects’ Satisfaction Ratings with the Icon Visual Search Ga me and the Text Visual Search.............................121 5-16 Paired Samples t -Tests Importance Ratings of the Foremen on Shorter Task Time, Fewer Task Errors and Higher User Satisfaction........................................122 5-17 Paired Samples t -Tests Importance Ratings of the Construction Professionals on Shorter Task Time, Fewer Task Errors and Higher User Satisfaction..............123 5-18 Paired Samples t -tests – Students’ Importance Ratings on Shorter Task Time, Fewer Task Errors and Higher User Satisfaction...................................................124 5-19 LSD Test Results – Responses of Foremen vs. Their Computer Usage................125 5-20 Paired Samples t -Tests –Average Task Time in the Icon User Interface vs. Text User Interface.........................................................................................................129 5-21 Paired Samples ttests –Average Task Time in th e Icon User Interface vs. Text User Interface.........................................................................................................131 5-22 Paired Samples t -tests –Average Task Search Time in the Icon User Interface vs. Text User Interface.................................................................................................133

PAGE 13

xiii 5-23 Paired Samples t -Tests – Mean Task Errors in the Icon User Interface vs. Text User Interface.........................................................................................................135 5-24 Correlation Between Trai ning Session Errors and Construction Experience........136 5-25 Levene Homogeneity of Variance Test s on the Visual Search Game Results Between Foremen, Construction Professionals, and Students...............................137 5-26 One-way ANOVA of the Average Task Time, Average Task Instruction Reading Time, and Average Task Search Time – By Subject Types....................138 5-27 Post-Hoc LSD Test Results on Task S earch Time in the Icon Visual Search Game......................................................................................................................139 5-28 Tamhane's T2 Test on the Task Errors in The Icon Visual Search Game And Text Visual Search Game – S ubject Type As Factor Levels.................................140 5-29 Correlation Analysis on the Construc tion Experience and Icon Search Time.......141 5-30 Correlation Analysis between the C onstruction Experience and Icon Search Errors......................................................................................................................141 5-31 Correlation Analysis between the C onstruction Experience and Text Search Errors......................................................................................................................142 5-32 Correlation Analysis between the Icon Search Time and Icon Search Errors.......142 5-33 Correlation Analysis between the Text Search Time and Text Search Errors.......143 5-34 Levene Homogeneity of Variance Tests – Task Time of Foremen with Different Computer Usage.....................................................................................................143 5-35 One-way ANOVA of the Task Time of Foremen – Computer Usage as Factor Levels.....................................................................................................................144 A-1 Average Task Time for Each Training Session......................................................153 A-2 Number of Errors fo r Each Training Session.........................................................154 A-3 Subject 1(Homebuilder Superint endent) Icon Training Session Data...................155 A-4 Subject 2 (Engineer) Ic on Training Session Data..................................................156 A-5 Subject 3 (Framing Foreman) Icon Training Session Data....................................157 A-6 Session Time for Each Training Session................................................................158 B-1 Foremen Demographics.........................................................................................159

PAGE 14

xiv B-2 Construction Profe ssionals Demographics.............................................................160 B-3 Student Demographics...........................................................................................161 B-4 ForemenÂ’s experience with comm on touch sensitive screen devices....................162 B-5 Construction profession alsÂ’ experience with common touch sensitive screen devices....................................................................................................................163 B-6 StudentsÂ’ experience with comm on touch sensitive screen devices......................164 B-7 ForemenÂ’ experience with PDAÂ’s..........................................................................165 B-8 Construction Professional sÂ’ experience with PDAÂ’s.............................................166 B-9 Student SubjectsÂ’ Experience with PDAÂ’s.............................................................167 B-10 ForemenÂ’s Ratings of the Efficiency of the Data Entry Mechanism by Stylus Handwriting on Mobile Computing Devices.........................................................168 B-11 Construction Profession alsÂ’ Ratings of the Effici ency of the Data Entry Mechanism by Stylus Handwriting on Mobile Computing Devices......................169 B-12 StudentsÂ’ Ratings of the Efficiency of the Data Entry Mechanism by Stylus Handwriting on Mobile Computing Devices.........................................................170 5-13 Foremen SubjectsÂ’ Ratings of the Im portance of Being Able to Input Data Quickly on Mobile Computing Devices.................................................................171 B-14 Construction Professional sÂ’ Ratings of the Importan ce of Being Able to Input Data Quickly on Mobile Computing Devices........................................................172 B-15 Student SubjectsÂ’ Ratings of the Impo rtance of Being Able to Input Data Quickly on Mobile Computing Devices.................................................................173 B-16 ForemenÂ’s View about Whether Most Content of Their Field Documentation Could be Standardized...........................................................................................174 B-17 Construction ProfessionalsÂ’ View about Whether Most Content of the Construction ForemenÂ’s Field Docu mentation Could be Standardized.................175 B-18 ForemenÂ’s Estimate of the Percentage of the Information in Their Field Documentation Could be Standardized..................................................................176 B-19 Construction ProfessionalsÂ’ Estimate of the Percentage of the Information in Construction ForemenÂ’s Documenta tion that Could be Standardized...................177 B-20 ForemenÂ’s Satisfaction Ratings with Icon Visual Search Game and the Text Visual Search Game...............................................................................................178

PAGE 15

xv B-21 Construction ProfessionalsÂ’ Satisfacti on Ratings with Icon Visual Search Game and Text Visual Search Game................................................................................179 B-22 Student SubjectsÂ’ Satisfaction Ratings w ith Icon Visual Search Game and Text Visual Search Game...............................................................................................180 B-23 ForemenÂ’s Importance Ratings on Shor ter Task Time, Fewer Task Error and Higher User Satisfaction........................................................................................181 B-24 Construction ProfessionalsÂ’ Importa nce Ratings on Shorter Task Time, Fewer Task Error and Higher User Satisfaction...............................................................182 B-25 StudentsÂ’ Importance Ratings on Shor ter Task Time, Fewer Task Error and Higher User Satisfaction........................................................................................183 B-26 ForemenÂ’s Views About Whether the Icon-based Field Documentation Systems Would Help Do Their Jobs.....................................................................................184 B-27 Construction ProfessionalsÂ’ View s About Whether the Icon-based Field Documentation Systems Would Help Foremen Do Their Jobs.............................185 B-28 Student SubjectsÂ’ Views About Whethe r the Icon-based Field Documentation Systems Would Help Fo remen Do Their Jobs.......................................................186 B-29 Foremen SubjectsÂ’ Average Task Ti me, Average Task Instruction Reading Time, Average Task Search Time, and Average Task Errors................................187 B-30 Construction ProfessionalsÂ’ Averag e Task Time, Average Task Instruction Reading Time, Average Task Search Time, and Average Task Errors.................188 B-31 StudentsÂ’ Average Task Time, Aver age Task Instruction Reading Time, Average Task Search Time, and Average Task Errors..........................................189

PAGE 16

xvi LIST OF FIGURES Figure page 2-1 Illustration of the Evolution of Human-Computer Comm unication Process...........23 3-1 Extended Stages of the Information Processing Model (Preece at al. 1994)...........30 3-2 Sample Screen Shot of the Icon Training Session...................................................41 3-3 Sample Screenshot of the Icon Visual Search Session............................................42 3-4 Screenshot of the Text Visual Search Session.........................................................43 3-5 Visual Search Response Variable Definitions.........................................................48 3-6 Main Screen of the Sample Icon-b ased Field Documentation Application (shown running on Handspring Treo 270 Model).................................................51 3-7 Equipment Selection Screen.....................................................................................52 3-8 Scraper Selection Screen..........................................................................................52 3-9 Scraper Time Information Entry Screen..................................................................53 3-10 Scraper Work Production Input Screen....................................................................53 3-11 Schematic Diagram of the Human Eye, With the Fovea at the Bottom Courtesy from Wikipedia, http://en.wikipedia.o rg/wiki/Optic_fovea, February 7, 2006.......56 4-1 Mean Task Time and Search Errors Ob served in the Platform Difference Study...71 4-2 Learning Effect of the Mean Average Task Time....................................................82 4-3 Mean Learning Rate Scatter Plot.............................................................................86 4-4 Average instruction reading time, s earch time, and task time (Subject 1)...............87 4-5 Task Errors (Subject 1)............................................................................................88 4-6 Average instruction reading time, s earch time, and task time (Subject 2)...............88 4-7 Task Errors (Subject 2)............................................................................................89

PAGE 17

xvii 4-8 Average instruction reading time, s earch time, and task time (Subject 3)...............89 4-9 Task Errors (Subject 3)............................................................................................90 5-1 Age Group Distributions of the Research Subjects..................................................95 5-2 Education Levels of the Research Subjects..............................................................96 5-3 Construction Experience of the Research Subjects..................................................97 5-4 Crew Sizes of Foremen............................................................................................98 5-5 Foremen Specializations..........................................................................................99 5-6 Occupations of the Construction Professionals........................................................99 5-7 Computer Use Experience of Foremen..................................................................101 5-8 SubjectsÂ’ experience with common TSSDÂ’s..........................................................103 5-9 Experience of Research Subjects with PDA Devices............................................105 5-10 Efficiency Ratings of Foremen on th e Stylus Writing Method on PDA Devices..107 5-11 Efficiency Ratings of Construction Professionals on the Stylus Writing Method on PDA Devices.....................................................................................................108 5-12 Stylus Writing Input Method Efficien cy Ratings by Foremen, Construction Professionals, and Students W ho Had PDA Use Experience................................111 5-14 The Importance Ratings on Being Able to Enter Information on Mobile Computing Devices Quickly..................................................................................113 5-14 Responses of Foremen and Construction Professionals on Whether The Content of The Field Documentati on Could be Standardized.............................................114 5-15 Satisfaction Ratings of Foremen on th e Icon Visual Search Game and Text Visual Search Game...............................................................................................116 5-16 Satisfaction Ratings of Construction Professionals on the Icon Visual Search Game and Text Visual Search Game.....................................................................116 5-17 Satisfaction Ratings of Students on th e Icon Visual Search Game and Text Visual Search Game...............................................................................................117 5-18 SubjectsÂ’ Equivalent Numeric Satisf action Ratings on the Icon Visual Search Game and Text Visual Search Game.....................................................................118

PAGE 18

xviii 5-19 Views of Subjects About Whether the Icon-based Fi eld Documentation Systems would Help Foremen Do Their Jobs......................................................................125 5-20 Responses of Foremen on Whether They Would Use a Field Documentation System on Mobile Computing Devices..................................................................127 5-21 Mean Average Task Time Observed on the icon interface vs. text interface for each sample............................................................................................................128 5-22 Mean Average Task Instruction Readi ng Time Observed during the Icon Visual Search Game vs. the Text Visual Search Game.....................................................130 5-23 Mean Average Task Search Time Obse rved during the Icon Visual Search Game vs. the Text Visual Search Game...........................................................................132 5-24 Mean Task Errors Observed during th e Icon Visual Search Game vs. the Text Visual Search Game...............................................................................................134 5-25 Task Errors Observed du ring the Icon Training Sessions......................................136

PAGE 19

xix Abstract of Dissertation Pres ented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy THE USABILITY OF GRAPHICAL USER INTERFACES OF MOBILE COMPUTING DEVICES DESIGNED FOR CONSTRUCTION FOREMEN: ICONS AND PRE-DEFINED TEXT LISTS COMPARED By Tan Qu May 2006 Chair: Jimmie W. Hinze Cochair: Mary J. Hasell Major Department: Design, Construction, and Planning Field documentation by construction fore men traditionally has been done through the use of pen and paper. The drawbacks of the traditional method and the need to computerize the field documentation process have long been recogni zed by researchers of construction management. Mobile computing devices provide an excellent hardware platform for addressing this need. Unfort unately, the past research efforts and technological developments in this area ha ve not provided soluti ons with good usability. This study examined past research from a usability point of view and focused on the graphical user interface usab ility aspect of the problem. The inefficiency associated with the data input method though stylus and touch sensitive screen was examined. The focus of the study was on construction foremen, but other participants in the construction industry were also included as a basis of comparison. The study investigated the

PAGE 20

xx experience of the research participants with computers, personal digital assistants (PDAÂ’s) and other touch sens itive screen devices. The study evaluated the usability properties of icons and pre-determined text lists as potential candidates for au tomated data entry on mobile computing devices in the construction field. The views of participants on the standardization of the content of the field documentation, importance of quick data entry in the field, and the inefficiency associated with a stylus writing da ta input method were explored. Thirty-five construction foremen em ployed by sitework contractors, 37 construction professionals, and 28 university students were selected to complete a specially designed computer visual search game that consisted of an icon visual search interface and a text visual search interface. E ach subject completed 14 visual search tasks in each interface. Results showed foremen and construction professionals performed visual search tasks faster with icons than w ith pre-determined text lists. Study results also showed comparable levels of accuracy of da ta input and also good satisfaction ratings when using the icon interface when compared with the text interface. The results also suggested a strong positive corr elation between the task comp letion time and task errors (fewer errors when task times were s hort). A strong negative correlation was noted between the construction experien ce of the research participant and the task errors; i.e., participants with less experience made more errors.

PAGE 21

1 CHAPTER 1 INTRODUCTION The value and importance of informati on acquisition, transfer, organization, and utilization are well accepted in the construction industr y. “In a profound sense, the management of a construction project is a bout managing the project information flow” (Winch 2002, pp. 339). A construction projec t from inception to completion involves a multitude of varied participants and the whole construction process generates vast amounts of information. Effectively managing such an immense volume of information to ensure its accuracy and availability in a timely manner is crucial to the successful completion of any project (Cox et al. 2002). Problem Statement A construction project is a unique, complex, custom-built response to a client’s needs. (Russell 1993) It is not only a process whereby informati on from the participants in the form of building or site plans, specifications, construction schedules, and various other documents are implemented, but also a proce ss where new information is created. This process takes on physical and time dimensi ons and often generates a mammoth amount of information of varying interest to the various participants. As the time dimension grows, the volume of the information also in creases, providing new data for the spatial, time, resource and cost variables of the project.

PAGE 22

2 Field Information Document ation in Construction Many aspects of the construction process require accurate docum entation of site conditions, including progress, quality, qua ntity, change, conflicts, and as-built information on the project. Documentation, communication, and analysis of construction field data are beneficial to all participants of a construc tion project (Hwang et al. 2003). For example, field data are needed for th e project owners to verify and approve construction payment requests. Engineers and ar chitects rely on field data to verify their design assumptions and improve the designs Contractors require up-to-date field information to have a good understanding of the project status. In the construction industry, where disputes and litigation ar e almost commonplace, accurate documentation not only minimizes the possibility of disputes and claims, but also facilitates construction innovations and improvements (Liu 2000). The importance and legal ramifications of accurate vs. poorly documented constructi on information are well cited by the practitioners and academicians in the c onstruction industry (Kangari 1995, OÂ’Brien 1998). Field information documentation is especi ally important to contractors. Russell (1993) pointed out that the collection of field information is important to Record the values of various context va riables (weather condi tions and work-force parameters) that are helpful in explaini ng reasons behind the current status of a project. Assess the current status of activities, extr a work orders, and back charges in terms of active state (postponed, started, ongoi ng, idle, and finished), work scope completed, and problems encountered and their immediate consequences (manhours and/or time lost). Measure resource consumption rates and their allocation to ongoing activities

PAGE 23

3 Besides having these functions, information collected in the field is often kept by contractors as historical data for preparing future estimates and schedules (Fayek et al. 1998). Problems Associated with Paper-based Documentation Method Traditionally, field information documen tation has been done through the use of paper forms. This practice remains the same with few changes over the years for the majority of the construction industry (McCullouch and Gunn 1993, Fayek et al. 1998, Cox et al. 2002). With the paper-based documentation method, information is manually entered in notebooks or pre-printed forms. These notebooks (sometimes called “logs”) and forms are periodically sent to the main offices for top management review and for archival purposes. Sometimes the pre-printe d forms can be further processed by copying desired information from multiple forms in to one form and even into a computer spreadsheet. Unfortunately, such systems are based on a large number of paper documents and have numerous drawbacks, especially when the need arises for accessing and retrieving the information that has been collected. Fa yek et al. (1998) identified some of the problems with paper documentation as follows: Inconsistent procedures for collecting data on different types of resources (labor, equipment, materials, and subcontractors); Inaccurate assignment of hours to cost codes; Lack of data on site conditions, schedule progress, and problems associated with activities which lead to co st and schedule overruns; Multiple entry of the same data; Lack of timely feedback on project performance.

PAGE 24

4 With these deficiencies, it is difficult to obtain timely information on potential problems with schedules, resources and safe ty issues and to initiate the appropriate corrective actions. Incomplete/inadequate a nd inaccurate documentation, as a result of poor recordkeeping, are often considered infe rior evidence documents in litigation or arbitration procedures (Kangari, 1995). The use of inaccurate information in projec t bidding and project resource allocation often results in significant economic conseque nces that are manife sted as construction delays and business losses (Cox et al. 2002) As in any competitive industry, nothing could be more devastating to construction companies than making important decisions on unreliable information. Computerizing Field Info rmation Documentation With the apparent problems associated with the paper-based documentation method, the need and importance to computeriz e the field information collection process have long been recognized in the constr uction industry (Russell 1993, McCulloch 1993, Condreay 1997, Elzarka et al. 1997, Liu 2000, Cox et al. 2002, Hwang et al. 2003). As an industry-wide practice, the use of computer technology in fulf illing this need has not yet come to reality. Computer use in the offices of the cons truction companies is no longer considered “high-tech” business “things” that only a few privileged ones can have access to. The use of desktop computers and desktop applications is an essential part of business operations that include accounting, word processing, pr oject estimating, proj ect scheduling, and email communications. However, computer use by field personnel for documenting field data is still not a common practice in the construction industry. Communications between field personnel and office management, to a large extent, continue to be verbal

PAGE 25

5 communications through the use of telephones (wired or wireless) and two-way radios. Although this part of the communication cha nnel between the field and the office has been greatly improved by the advancement of wireless communication technologies, the transient nature of verbal communications leaves few means by which the information can be conveniently stored and retrieved. This obstacle in computeriz ing construction field communi cations is perceived in the industry to be due to various forms of barriers (Toole 1998, Davis and Songer 2003, Flood et al. 2003). Flood et al. summarized the barriers as follows: 1. Lack of application development: various computing models and concepts have been developed through years of research. However realizing a nd fine-tuning that concept into a workable application ofte n requires financial and time commitments for research and development th at are not readily available. 2. Institutional and individual barriers: these include old beliefs and resistance to change and to the adoption of new tec hnologies; lack of understanding of the potential of a tool; lack of resource commitment to its proper implementation; concerns about possible legal ramifications in the use of a new technology; and lack of confidence in the integrity of the output from a new technology. 3. Quality issues as “user-friendliness” and “i ntegrity of the software:” these include issues such as the ease with which an a pplication can be learned by its users, the ease of which output and results can be inte rpreted, the convenience of data input, the convenience with which the applicati on can be tailored to work for each specific problem, etc. The “barriers” or problems described by Fl ood et al. are inter-related. For example, the institutional barriers exist because although the computer technologies advance rapidly there has not yet been any stabilized sy stem of solutions that fully considered the differing characteristics of vari ous potential construction field users. To clarify the point, the lack of a unified operating system standa rd in the mobile computing technologies has resulted in many different mobile computing devices that are availa ble commercially and these technologies cannot be c onsidered as stabilized as th ey are under constant patch,

PAGE 26

6 upgrade and refinement. Second, existing softwa re applications for mobile computing devices to be used for construction field doc umentation are scarce and often the end-user characteristics and working environment was not considered when they were developed. All these factors along with the old beliefs have made it difficult for construction companies to invest in these technologies fo r their field personnel. Since construction companies have not universally adopted the m obile computing technologies for their field supervisors, there has been little enthusiasm fr om the software developers to address this application. The third category of barriers, discussed a bove, seems to be the root problem. The ability to conveniently input data in the construction fiel d has been a challenge and a driving force for the research in computer izing the construction field documentation process. Many ideas and directions exist in providing solutions for this need. However, there have not been any studies taking on the system usability point of view in examining the problems. Usability refers to how easily a system can be learned and used by its intended end users, how fast the users can co mplete the required tasks, how much the system is prone to errors, and how much the users like to use the system. Among the major aspects of the system usability, hardware usability issues are generally addressed by the computer industry on a continuing basi s while the software usability issues constitute the primary interest to research ers in the construction industry. This study will undertake a system usability approach to exam ine the problems existing in the computer software interfaces designed for use in the construction industry and evaluate possible alternative solutions.

PAGE 27

7 Research Objectives With respect to the third category of th e problem as summarized by Flood et al. (2003), existing research efforts have mainly focused on the hardware aspect of the issue. Research in the past has mostly revolved around the approach of exploring commercially available mobile computing devices and their suitability for construction field information communications. Small-sized m obile computing devices equipped with touch sensitive screens have now been accepted as a basic platform for computer use in the construction field environm ent; however, the graphical user interface aspect has not been extensively investigated. This study will introduce the concept of the us ability of the computer graphical user interface into the construction research worl d and use this concept to provide a new perspective on how past research on mobile computing in constr uction has progressed. The problems related to the inefficiency of the pen/stylus handw riting input method on mobile computing devices will be examined. Existing studies on alternative automated data collection technologies to augment the pen/stylus data input method for mobile computing systems in construction will also be reviewed and discussed. As the main objective of this study, icons (g raphical or illustrative representations of concepts or items) as a possible altern ative mechanism for automated data entry on mobile computing systems will be investig ated. This study will focus on construction foremen as the real field information provide rs and the validity of icons as the main mechanism in the graphical user interfaces designed for them. From the usability approach, icons and pre-defined text lists will be compared in evaluating their relative effectiveness and efficiency in construction fi eld data input processe s. A user interface experiment with the participation from site work construction foremen in Central Florida

PAGE 28

8 will be conducted to determine which mechanis m results in better usability, e.g., shorter task times, fewer user errors and higher us er satisfaction. The prio rity order assessment by foremen for these three important usability factors (task completi on time, task errors, and user satisfaction) will be surveyed. The effect of foremen demographics on the resulting data will also be analyzed and discussed. The experience of construction foremen with mobile computing devices will be explored. This study will also investigate thr ough face-to-face interv iews their opinions about using icon-based mobile documentation to ols. As potential end users, they will be asked questions related to the comp uterization of field documentation.


CHAPTER 2
LITERATURE REVIEW

This chapter will discuss some general concepts related to human-computer interaction/interface (HCI), graphical user interface (GUI), icons, and usability theories. Past research on computerizing construction field communications will be reviewed from a system usability perspective. Construction foremen and their role in the information communication process on construction sites will be examined. Limitations associated with the pen/stylus handwriting-based data input method will be discussed as well. This chapter will also provide a brief historical review of icons, signs and symbols. In the later part of this chapter, the concept of using icons as an automated data entry mechanism in graphical user interfaces designed for construction foremen will be discussed.

Human-Computer Interface/Interaction (HCI), Graphic User Interface (GUI), and Usability

Barker (1989) informally defined a human-computer interface (HCI) as a mechanism which facilitates the flow of information between a computer and a human. The Association for Computing Machinery's Special Interest Group on Computer-Human Interaction (ACM SIGCHI) described human-computer interaction as a field with intertwined roots in computer graphics, operating systems, human factors, ergonomics, industrial engineering, cognitive psychology, and computer system engineering. Redmond-Pyle and Moore (1995) stated that in typical information systems and office systems the human-computer interface includes the following:

The parts of the computer hardware that the user interacts with, e.g., screen, keyboard, mouse, on/off switch, etc.
The images or data that are visible on the screen, e.g., windows, menus, messages, help screens.

User documentation such as manuals and reference cards.

The second component in Redmond-Pyle and Moore's definition of the HCI structure is often referred to as the graphic user interface (GUI). The GUI provides the uppermost presentation layer for the communications (visual input and output) between users and computers.

The term "usability," in simple words, defines how usable a product or system is when it is put to use by users to perform the intended activities or tasks. In other words, a product with high usability is easier to learn and use than a product with low usability. It is therefore easy to understand that a product with low usability is less likely to be accepted by its intended customers or users.

The definitions most cited by researchers in the usability world are from Shackel (1990) and Nielsen (1994). Shackel's definition of usability and Nielsen's definition share many common aspects, and the main components and characteristics of their definitions are summarized as follows:

Effectiveness: for a specified range of tasks and group of users in a particular environment, how effectively can the tasks be performed using the interface? What are the frequency and seriousness of the user errors? This is sometimes referred to as "productivity" or efficiency of use once the system has been learned, as it includes how fast the user can correctly perform tasks.

Learnability and retention of knowledge and skills learned: how much training and how much practice do users require before they become proficient with the system? If use is intermittent, how much relearning time do users need to regain the required knowledge and skills to use the system?

Flexibility: to what extent is the interface still effective if there are changes in the task or environment?
Attitude or subjective user satisfaction: do people who use the system find it stressful and frustrating, or do they find it rewarding to use and feel a sense of satisfaction? Do users like the system?

Since the 1980's, usability has been widely recognized as an important software quality alongside technical aspects such as functionality, internal consistency, reliability, etc. Most of the major information technology companies maintain their own usability divisions to investigate potential usability pitfalls in their products and systems before they are released to the market. Usability engineering is crucial for computer-related businesses to survive in today's customer/user-driven market, where user acceptance is critical to success when launching new products or systems. For end users, a system with good usability can help improve their productivity, reduce the quantity or frequency of user errors, and require less training for those who will use the new system (Redmond-Pyle and Moore 1995).

Past Research Examined from an HCI and Usability Perspective

Complex technical systems do not evolve fully formed, but rather in fits and starts as the combination of technical possibility and economic advantage encourages localized development. (Winch 2002, p. 341)

From a retrospective point of view, past research in the construction industry on computerizing field information communication has mainly focused on usability issues of the hardware and functionality aspects of the human-computer interface. The approach adopted by most researchers consisted of taking commercially available technologies and computing devices and evaluating their appropriateness and functionalities in various types of field information documentation/communication tasks. Examples include research on pen computers (McCullouch 1993, Coble and Kibert 1994, Songer et al. 1995, Elzarka et al. 1997, Liu 2000), research on bar code technology (Coble and Elliott 1995, Condreay 1997), research on Radio Frequency Identification (RFID) technology
(McCullouch 1991, Jaselskis and El-Misalami 2000), research on wireless communication technology used in conjunction with handheld PC's (De La Garza and Howitt 1997), and more recently research on pocket PC's (Repass et al. 2000, Bowden et al. 2002, Cox et al. 2002, Williams 2003). In addition to these adaptive approaches to finding the ideal computing device suitable for construction field needs, there are also some innovative research studies such as the Digital Hardhat (Liu 1997, a system employing a hardhat-mounted video camera and pen computer that is capable of capturing textual, sound, and pictorial information) and the Gator Communicator (Alexander 1996, a handheld computer prototype based on the OS-9000 real-time operating system that includes a global positioning system (GPS) receiver, digital compass, digital stereo camera, and digital two-way wireless radio functions).

While these research efforts provided many valuable insights and lessons as to the characteristics of the ideal mobile computing platform for construction field settings, the graphic user interface or software aspect of system usability has unfortunately often been neglected. The characteristics of an effective graphical user interface for field users were seldom considered. It should be recognized that the usability of the graphical user interface has considerable importance. A good example illustrating this point is a study conducted by Tektronix Laboratories on the effect of user interface design upon user productivity (Bailey et al. 1988). In that study, a Tektronix 11000 series laboratory oscilloscope was compared to its predecessor 7000 series. The 7000 series interface was a dedicated physical control system, while the 11000 system employed a rich graphical user interface that included icons, popup menus, assignable controls and a touch panel. The study results showed that the 11000 series had
a 77% performance gain over the 7000 series, and the researchers attributed the performance gain to the better cognitive factors of strategy selection and recall of operational details associated with the 11000 series's user interface.

Foremen and Their Role in the Information Communication Process

Recognizing the users and their particular needs is the first step in the process of successful usability engineering. There are various groups of existing and potential field computer users on construction sites. For general contractors and construction managers, the field personnel are typically project superintendents, field engineers, and often project managers on some larger projects. For self-performed work, general contractors are similar to subcontractors or specialty contractors in that their field personnel include construction workers and foremen.

According to the Household Data Annual Averages statistics for 2002 released by the Bureau of Labor Statistics of the U.S. Department of Labor, there are 6.774 million workers employed in the U.S. construction industry. A foreman in the construction industry usually supervises from two to more than twenty workers, with a crew size of six to eight workers being most typical (Borcherding 1977a, Elliot 2000). Based on this ratio (roughly one foreman per seven workers, or 6.774 million / 7, which is approximately 0.97 million), it can be estimated that there are approximately one million foremen in the U.S. construction industry. Therefore, research on foremen and computerizing their documentation tasks can have significance in improving computer use and possibly improving productivity and product quality in the construction industry.

Foremen are key individuals on the construction site. Research work on foremen and their roles in the construction process occurs primarily in literature published in the 1970's and 1980's, with a few studies in the 1990's. For example, Borcherding, who
perhaps contributed most to the research work related to construction foremen, defined foremen as the "key link between management and individual workmen" (Borcherding 1977a). In an effort to identify and clarify the functions and information needs of various construction management personnel, Tenah (1986) defined the primary function of the foreman as one who "organizes and coordinates employees engaged in a specific craft or function on a construction project; reads and interprets drawings, blue prints, and specifications; allocates, assigns and inspects work; administers union agreements and safety enforcement; hires and trains employees." Hinze and Kuechenmeister (1981) stated that foremen, as first-line supervisors, are responsible for directing, guiding, and managing crew members to achieve quality workmanship within budget and on schedule. Senior (1996) observed that efficient foremen devote a substantial proportion of their time to planning the job.

With the challenging characteristics of their job and a busy work schedule, foremen often devote most of their attention to field problem solving, issuing work orders to their crews, coordinating with other contractors, and performing the other functions of their job responsibilities. Field documentation such as daily field activity reports, accident investigations, daily safety reviews, and other company internal report forms (Coble and Baker 1993) is often relegated to the bottom of their priority list. As a result, these field documentation tasks are either completed haphazardly or deferred to whenever their schedules allow time for such activities. Consequently, these field documentation efforts often contain incomplete or inaccurate information, which makes it difficult for management to fully exploit their value. As previously discussed, management often relies on the information collected in the field for making essential business
decisions such as preparing a bid for a new project or allocating manpower and equipment resources among ongoing projects. The deficiency in the information collected in the field, though clearly desirable to rectify, is often sacrificed by management as a trade-off for a smooth running project that is on schedule and within budget. This dilemma has long been recognized in the construction industry (Borcherding 1977a, Coble and Baker 1993).

Coble and Baker (1993) stated that "construction foremen are clearly the missing link to fully computerizing a construction company." Coble (1994) further pointed out that in order to successfully computerize them, the research effort must take into consideration the foremen's background, characteristics, and job concerns. It was generally believed that the majority of construction foremen have no education beyond high school, and this was supported in a study conducted at Stanford University (Borcherding 1977) and another study conducted at the University of Florida (Elliott 2000). Elliott's study also indicated a mean foreman age of 40.0 and an average of 9.5 years of experience for the construction foremen included in the study sample (N=119). In the construction industry, foremen typically advance to their position through many years of experience, from craft workers in crews to positions of leadership, primarily as foremen. Foremen must be willing to accept responsibility, possess the ambition to lead others, and have the desire to achieve goals (Borcherding 1977a).

The feeling of threatened job security, diminished social status, or reduced self-esteem is usually understood as the driving force behind individual resistance to the changes brought forth by new technologies, and was considered a factor in foremen's resistance towards the idea of using computers in their realm (Coble 1994). This paradigm seems to have changed somewhat in recent years with the
increasing indispensability of computers in society, as is somewhat indicated in Elliott's study. In fact, 79.9% of the foremen Elliott surveyed indicated that handheld computing systems may have the potential to help them do their jobs. While this possible trend is encouraging, the fundamental characteristics of construction foremen as efficient and productive individuals will still require the handheld/mobile computing systems designed for them to be efficient and easy to use. Unfortunately, as previously mentioned, this area historically has not seen much progress.

Graphic User Interface on Pen-Based Mobile Computing Devices

Most research on mobile computing systems in the construction field revolves around the basic concept of using a pen and a touch sensitive screen as the main input platform, regardless of the devices' sizes/categories (e.g., tablet PC's, palmtop computers, pocket PC's, etc.) or operating systems. It is widely accepted that in the construction field the use of a physical keyboard is not practical for mobile users such as construction foremen on a busy and rugged construction site (Coble et al. 1996, Alexander et al. 1997). Yet manual data entry through a pen (sometimes called a stylus) is not a faster or more reliable method than using a keyboard either. To input a character the user has to make a series of hand strokes with the stylus across the touch sensitive screen, and it requires extensive practice for a user to become proficient with a stylus. A few research studies in the construction industry on pen computing technologies have recognized this limitation (e.g., Rojas and Songer 1996, Bowden et al. 2002). This problem of inconvenient data entry is inherent with such use of a pen or stylus (Masui 1998).

As a result, alternative automated data entry technologies such as bar codes, radio frequency identification, etc., as previously discussed, were explored by some construction researchers to augment the manual data entry limitations associated with
pen/stylus technology. In particular, the advancement of speech recognition technologies in recent years drew construction researchers' interest to this area to explore the potential uses of speech recognition technologies as an automated data entry method on construction sites. Sunkpho and other researchers at Carnegie Mellon University explored such technologies and prototyped a framework for developing audio-centric (namely speech recognition) interfaces in field data collection applications (Sunkpho et al. 2000, Sunkpho and Garrett 2003). Speech technologies hold great potential for providing automated data entry in computer applications, as they are considered one of the "natural" communication mechanisms between humans and computers, and as a general rule speaking is faster than typing or writing. However, this technology has its limitations in the construction field as well. First, noise interference on the construction site is a major problem for the reliability of the data entry (speech input) process, and this problem is unfortunately inherent to the construction environment and cannot be eliminated. Second, as Sunkpho and others recognized, integrating a speech interface into an application is not a trivial feat, as this is a complex technology (Sunkpho and Garrett 2003). Moreover, speech recognition is not the most efficient method for actuating computer commands in graphical user interfaces. Querying and database manipulation of the collected voice data are even more complicated tasks.

Experts in the construction research field have accepted that predefined drop-down menus and text lists in the graphic user interface may be a more efficient and easy-to-implement method to automate the data entry process in the construction field. Many researchers believe a substantial portion of the information documented in the field is repetitive from project to project and can easily be standardized (e.g., McCullouch 1993,
Rojas and Songer 1996, Cox et al. 2002, Bowden et al. 2002). Using a pen/stylus to click and select items in the graphic user interface is a relatively quick and effortless process; the user effort in performing such computer tasks is therefore trivial.

As a user industry in information technologies, the construction industry has not studied graphical user interfaces in much detail compared to the computer industry. This is probably a result of the unique nature of the construction industry, which is not generally understood by those in the computer industry. Yet the graphic user interface can play an important role in determining the overall usability of a computer system. For example, older adults usually have a difficult time using the graphic user interfaces designed for average users, as a result of the normal effects of aging, including some decline in cognitive, perceptual and other abilities. Studies have found that using area cursors (larger sized cursors) and sticky icons (a feature of icons that eases the selection process) can improve their performance in basic selection tasks (Worden et al. 1997). In addition, even different operating system user interfaces on similar types of personal digital assistant (PDA) devices can result in significantly different user performance (Teresa et al. 2001).

Icons, Signs and Symbols – A Brief Historical Review

Before words there were sounds and intonation, before writing there were symbols. Speech splintered into different languages, different symbols developed into various writing systems. Writing systems separated into the symbolic and the phonetic, but symbolic iconographies persisted from earliest writing to the present day. Only the symbols changed. As the computer replaced the pen and the brush, so iconography, with today's symbols, prepares for tomorrow. (Sassoon, R. and Gaur, A. 1997, p. 63)

Icons, signs and symbols exist everywhere in our lives and workspaces. Because of their communicative power, icons are used in a wide variety of situations to inform people about particular conditions or to give instruction. For example, symbols or
pictographs are widely used in the Olympic games to depict various sports; they are used on product packaging cartons and in instructional manuals to inform people how to properly handle, transport, store and use products; they are used in public places such as airports and train stations worldwide to provide directions and identification of important facilities (e.g., luggage claim areas, telephone booths, currency exchanges, escalators, etc.); they are used in equipment instrumentation, such as the instrument clusters in automobiles, to indicate malfunctions and warnings (e.g., low fuel reserve, engine malfunction, etc.) when illuminated; they are used on the roadways to alert drivers to road conditions, allowable speeds, recreational interest areas, general service facilities at exits, etc.; and they are used at workplaces to caution against safety perils and hazardous materials and to indicate required safety equipment and measures. This list can go on and on. The general philosophy of using icons, signs and symbols instead of character-based representations is that they are more intuitive and effective in conveying the intended information.

In fact, the use of pictorial representations by humans to communicate non-verbally dates back to primitive times (Sassoon and Gaur 1997). In early times, the need for written communication was simple and character-based written languages did not exist. Yet our ancestors used pictographs carved on rocks and other objects to document information or communicate intellectual thoughts with one another. Later, when the need for written communication became more sophisticated and more specific, the pictographs gradually broke down into smaller information units and, through the use of conventions, evolved into today's written languages, which are totally based on abstract characters or radicals (radicals are used in Chinese, Japanese and other Asian written languages). In his book "The Alphabet: An Account of the Origin and
Development of Letters," Taylor (1991) illustrated how the picture of the owl was conventionalized into today's letter "M." In the old Egyptian language the name of the owl was mulak. The picture of the owl is believed to have been used primarily as an ideogram to denote the bird itself, and secondly as a phonogram standing for the name of the bird. It then became a syllabic sign used to express the sound mu, the first syllable of the name, until ultimately it was employed simply to denote m, the initial sound of that syllable. In his book "The Icon Book: Visual Symbols for Computer Systems and Documentation," Horton (1994) also provided similar illustrations of how the letters "A" and "O" evolved from the ancient Egyptian hieroglyph, Sinai script, Moabite stone, and early Phoenician to Greek and Roman characters. In Chinese, words denoting objects such as the sun, moon, and mountains also evolved from early graphic representations of these objects. Therefore, in one sense, the use of signs, symbols and icons in today's society may be regarded as an effort to reverse engineer the evolution of human written communication.

The twentieth century has seen quite a few systematized research efforts in developing visual communication systems using signs and symbols. Otto Neurath (1882-1945) developed a method of visual presentation of statistical information as an educational medium using pictograms, which later became well known as the "International System of Typographic Picture Education" (ISOTYPE). The basic principle of the ISOTYPE system is that each symbol represents both a topic and a designated quantity, and symbols can be "compounded" (McLaren 2000). For example, 'man' + 'mining' = mine worker (McLaren 2000), or 'shoes' + 'factory' = shoe factory (Horton 1994). In the 1960's Charles Bliss developed a system called "Semantography" which consists of an
"alphabet" of 100 fundamental symbols that can be juxtaposed or superimposed to represent even richer concepts. The fundamental set of symbols includes numbers, mathematical symbols and simple geometric shapes, and many of these shapes are easily recognizable because they are abstractions of familiar objects or are already used internationally (Horton 1994). The International Organisation for Standardisation (ISO) and the International Electrotechnical Commission (IEC) also developed approximately 1,450 standardized symbols for international use. These are compiled respectively in ISO 7000 'Graphical Symbols for Use on Equipment – Index and Synopsis' and IEC 417 'Graphical Symbols for Use on Equipment – Index, Survey and Compilation of the Single Sheets' (McLaren 2000).

Icons and other terms, including signs, symbols, signets, ideograms, index, phonograms, and pictograms/pictographs, are closely related and often confusing to ordinary people. From the semiotics (defined as "the science of signs," Eco 1976) point of view, Marcus (2003) summarized the definitions of these terms as follows (p. 38):

Signs: perceivable (or conceivable) objects that convey "meaning."

Symbols: signs that have meaning by convention and are often abstract, like the letters of this sentence or the national flag.

Icons: signs that are self-evident, "natural," or "realistic" for a particular group of interpreters, like a photograph of a person, a "realistic" painting, or a right-pointed arrow to indicate something should move to or is located to the right.

Index: a special semiotics term for signs that are linked by cause-and-effect in space and time, like a photograph representing a scene, or a fingerprint on the coffee mug at the scene of the crime.

Ideograms: symbols that stand for ideas or concepts, for example, the letter "i" standing for "information," "help desk," or "information available."

Phonograms: symbols that stand for sounds, for example, the letter "s."
Pictogram: an icon (or sometimes symbol) that has clear pictorial similarities with some object, like the person on a men's room sign that (for some interpreters) appears to be a simplified drawing of a (specifically, male) human being.

Despite such seemingly detailed linguistic delineations of these terms, the nuances between icons and the other terms, and the significance of those nuances, often diminish when the terms are used in various disciplines. The interchangeable use of some of the terms is common in today's society, where graphically enriched software application user interfaces are flourishing. In computer applications, icons can refer to anything: not just easy-to-recognize pictographs, but also abstract images or symbols which can be totally unrelated but arbitrarily assigned to represent certain computer commands. More interestingly, there are also studies on audible icons or "earcons," which have taken the definition of icons to a new dimension (Brewster et al. 1993).

The use of icons in computer graphical user interfaces was incorporated early in the design of Xerox's 8010 "Star" Office Workstation (Bewley et al. 1983), and icons have since become a main component in software applications, allowing the user to easily navigate through programs. The motivation for using icons in computer graphical user interfaces is similar to that in other applications (e.g., public information displays, equipment labeling, traffic controls, etc.): to facilitate the communication process between the human and the computer. As shown in Figure 2-1, in the early days of computer technology, users of computers communicated with them by means of simple 'binary state' switches, buttons and numeric (octal or hexadecimal) key pads. As interface technology improved, this mode of interaction was superseded by the use of QWERTY keyboards, which enabled the construction of command line interfaces. As the complexity of these grew, they became more difficult to learn and remember. The introduction of the graphical user
interface in the Xerox 'Star' workstation and later in Microsoft Windows took away the complexity of the command line interface through the use of 'dialogue boxes.' With the graphical user interface, the need for a user to type is substantially reduced. Instead, users use the mouse to point to objects (such as icons and pictures) on the screen to execute desired commands. In some sense, much of the "ease of use" of a computer system often depends upon the power of the metaphors embedded in the end-user interfaces. Since icons are more visually distinctive than abstract words, it is generally believed that it is easier to identify an icon than a word from a group of screen objects in a graphical user interface. Icons can also represent a considerable amount of information in very little space, and space is often at a premium on computer display screens (Hemenway, 1982).

Figure 2-1. Illustration of the Evolution of the Human-Computer Communication Process. [Figure: human and computer linked by an evolving interface, progressing from switches, buttons and keypads, through command line interfaces and dialogue boxes, to graphical user interfaces with icons and iconic languages.]

Signs, Symbols and Icons in Construction and the Possibility of Using Icons as Automated Data Entry in the Graphic User Interface

Signs and symbols are widely used in the construction industry. Signs based on pictorial symbols are commonly used on construction sites to convey various safety warnings
and messages. Construction plans by nature are graphical representations of the construction process of buildings via conventionally accepted symbols and rules. For example, in site utility plans, straight or curved lines stand for various types of pipes, with the size information either directly noted near the lines or indirectly noted by means of a pipe schedule. Different symbols are used to show various fittings or structures (e.g., gate valves, bends, fire hydrants, backflow preventers, sanitary manholes, etc.). This system is also used in drawings for virtually all other trades (e.g., plumbing, fire sprinklers, HVAC, electrical, etc.). Construction foremen, whose main job functions include reading the construction plans and then issuing work orders to their crews, therefore have considerable experience working with symbol-based graphical communication systems.

Using the "click and select" concept as previously discussed, Coble and Elliott (1996) proposed the idea of using icons as the basic means not only for computer commands but also for data entry in the graphic user interfaces designed for construction field users. Unfortunately this idea never reached the stage of being implemented in a working system and therefore was never tested in real settings to assess its usability. Coble B. (1997) at the University of Florida tested 56 icons for icon recognition response with 59 respondents consisting primarily of construction project managers, superintendents, foremen and field engineers; 41 icons were matched successfully to their descriptions by the respondents with a 90% or better concurrence rate. These results indicate that, if designed properly, icons can be used for automated data entry in the graphic user interfaces designed for construction foremen.

Both icons and pre-defined text have the potential benefit of reducing the data input effort required of construction foremen as the intended end users. Therefore it would be
interesting to know if there is a difference between these two options in terms of usability. A usability comparison between icons and pre-defined text lists in the graphic user interface needs to be studied in order to validate or invalidate the concept of using icons to automate the data entry process in the mobile computing systems designed for construction foremen.

Icons vs. Pre-defined Text

Existing empirical studies have equivocated on the issue of whether there is a difference in terms of task completion time and user errors between textual representations and iconic representations in computer user interfaces. A few earlier studies suggested there is little or no performance gain of iconic representations over textual representations. For example, Rohr and Keppel (1984) compared icons and text as computer commands in word processing and reported no improvement for icons over text in terms of task completion time and error rates. Kacmar (1989) compared text, icons and text+icon combinations in matching programming concepts and labels and found the combined labels most accurate, with no difference among the three mechanisms in terms of task time. Whiteside et al. (1985) compared different interface design approaches and their effects on different types of computer users (novice, transfer, and expert). Whiteside et al. (1985) found there was no significant performance improvement for iconic interfaces, and novice and transfer users actually performed worse with iconic interfaces. Egido and Patterson (1988) studied the effects of icons on navigation through a catalogue, and the study results showed the search time for icons was slower than for text or text plus labels. The results by Egido and Patterson also indicated that icon users took fewer steps but spent more time on each step than those with labels. Benbasat and Todd (1993) conducted an experimental investigation under two factor levels where icons
versus text and direct manipulation versus menu-based were paired into four different interface types. Benbasat and Todd concluded that there was no difference between the icon and text-based interfaces in the time taken to complete the task and the number of errors made. On the other hand, a more recent study by Staggers and Kobus (2000) indicates that an icon-based graphical user interface yields shorter response times, fewer errors and higher user satisfaction than a text-based user interface. In Staggers and Kobus's study, 98 randomly selected male and female nurses completed 40 tasks using a text-based interface and an icon-based graphical interface. Overall, nurses had a significantly faster response time (P<0.01) and fewer errors (P<0.01) using the graphical interface than the text-based interface. The icon-based graphical user interface was also rated significantly higher for satisfaction than the text-based interface, and the graphical user interface was faster to learn (P<0.01). Given these seemingly contradicting conclusions of previous studies, a reliable statistical inference could not be drawn as to whether there is a difference between icon and text-based interfaces for construction foremen in terms of task completion time, number of errors, and level of user satisfaction. The reasons are further discussed below.

Effect From Interface Implementation Differences

Many of the earlier empirical studies did not preclude the effect of interface implementation differences on the study results. Factors such as font size, icon size, spacing and layout might have influenced the study results but were not counterbalanced to minimize their effects. Therefore these study results were not totally conclusive.
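As an illustration only (all values are hypothetical and not drawn from any of the cited studies), the following Python sketch shows the kind of control these earlier studies lacked: a single shared layout specification applied to both conditions, so that the icon condition and the text condition differ only in the representation being compared.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LayoutSpec:
    """Presentation factors to hold identical across interface conditions."""
    target_w_px: int   # width of each selectable screen object
    target_h_px: int   # height of each selectable screen object
    spacing_px: int    # gap between adjacent screen objects
    font_pt: int       # size of any text shown to the subject

# One shared specification applied to both conditions, so font size,
# object size, spacing and layout cannot confound the comparison.
SHARED_LAYOUT = LayoutSpec(target_w_px=96, target_h_px=72,
                           spacing_px=20, font_pt=14)

icon_condition = {"representation": "icons", "layout": SHARED_LAYOUT}
text_condition = {"representation": "text list", "layout": SHARED_LAYOUT}
assert icon_condition["layout"] == text_condition["layout"]
```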


Visual Appeal Factor Associated With Iconic Interfaces

Visual appeal refers to the phenomenon in which users tend to spend more time on icon-based user interfaces because of their visual attractiveness. If the visual appeal factor was not counterbalanced in a study, the results may not conclusively show whether a longer task time associated with an icon-based user interface resulted from longer processing and recognition time or from the visual appeal of the iconic interface. Many earlier studies did not take this factor into consideration.

Abstract Vs. "Concrete" Icons and Icons as Computer Commands Vs. as Information Units

Icons in existing empirical studies are generally abstract icons used for denoting computer commands. Icons used for computer commands are often abstract in concept and arbitrarily assigned to a particular computer command, and they require extensive usage for a user to acquire the association. The icons of interest in this study are "concrete" icons, meaning they are on the less abstract end of the scale and are primarily used as information units. There is therefore a clearer association between the icons and the objects/activities represented.

Subject Characteristics

The subjects included in many of the earlier studies were often college students or people who had considerable computer experience. The specific advantages and disadvantages associated with icons may vary among novice, intermediate, and expert users. Advantages associated with icon-based interfaces may therefore not be as remarkable to expert users as to novice users. Previous studies have addressed little in this area.


Summary

Computerization of the field documentation tasks of construction foremen has not become a prevalent practice to date. Of the efforts made in the past, little focus has been placed on the usability of the graphical user interfaces of the data collection systems designed for construction foremen. A comparison of the use of icons and pre-determined text lists as automated data input mechanisms, for construction foremen in particular, was nonexistent.


CHAPTER 3
RESEARCH METHODOLOGY

This chapter introduces the research questions that this study attempted to address. It also discusses the methods used to accomplish the research objectives stated in the previous chapters. The chapter is organized in the following sections: (1) research questions, (2) methods, (3) sample selection criteria and techniques, (4) study design, (5) survey questionnaire design, and (6) statistical procedures for analysis of the results.

As stated in the previous chapters, it has not been determined whether icons have better usability than pre-defined text lists in the graphical user interfaces designed for construction foremen. This was the main question that this study attempted to answer. In the field of cognitive psychology, humans are characterized as information processors: everything that is sensed (sight, hearing, touch, smell and taste) is considered information that the mind processes (Preece et al. 1994). Under the information processor theory, information enters and exits the human mind through a series of ordered processing stages (Lindsay and Norman 1977). As summarized in Figure 3-1, information from the environment is encoded into some form of internal representation in Stage 1; in Stage 2 the internal representation of the stimulus is compared with memorized representations that are stored in the brain; in Stage 3 a response is formulated to the encoded stimulus; and when an appropriate match is made the process passes on to Stage 4, which deals with the organization of the response and the necessary action (Preece et al. 1994). Based on this theory, it can be conjectured that the human brain would process text and graphic information differently in these four stages and the
difference would depend on the predominant information processing mode (graphical or textual) that one is accustomed to. Larkin and Simon (1987) pointed out that textual and pictorial information differ in terms of the effort associated with making inferences. Jacob (1995) stated that the problem of human-computer interaction could be viewed as two powerful information processors (human and computer) attempting to communicate with each other via a narrow-bandwidth, highly constrained interface. Therefore, to address the human-computer interaction problem, more natural and more convenient means need to be provided for users and computers to exchange information easily and reliably. Addressing this problem can also help researchers better understand whether construction foremen, as information processors, may process graphic-based information faster and more accurately than text-based information. The answer may depend on their extensive experience working directly in the field and dealing with construction plans as highly graphic-based communication media.

Figure 3-1. Extended Stages of the Information Processing Model (Preece et al. 1994). [Figure: input or stimuli pass through attention into four stages, 1. Encoding, 2. Comparison, 3. Response Selection, and 4. Response Execution, supported by memory and ending in output or response.]

Determining whether foremen process icons better than text is important to the information technology sector providing IT solutions to the construction industry, because the computer programming needed to implement the graphical user interface typically accounts for 40-90% of the entire program code in today's software applications (Chalmers 2003). It takes a great amount of time and effort to develop quality icons, and often requires many
trial and error processes and refinements before finalizing an icon that perfectly serves the design intent. Therefore it is important first to know whether or not icons can actually improve the usability of the graphical user interfaces designed for construction foremen; otherwise, the time and effort invested in designing icons and implementing an iconic user interface are not guaranteed to reap the intended benefits.

Research Questions

In the computer field, the usability of a system is typically measured by collecting and analyzing the following data: the time required for using the system to complete a given task; the number and types of errors experienced when using the system to perform the task; the time required to learn the system to perform the task; the retention quality of the knowledge learned to use the system; and the user's subjective assessments of the system (Chin et al. 1988, Roberts and Engelbeck 1989, Jeffries et al. 1991, Nielsen and Philips 1993). From the usability point of view, there are several questions of primary interest in this study, and they are discussed below.

Do Construction Foremen Perform Computer Tasks Faster Using Icons Than Using Predefined Text Lists Or Vice Versa?

More specifically, do construction foremen tend to find the correct choice faster using icons or pre-defined text lists? User tasks in computer graphical user interfaces usually take two basic steps: first locating the correct screen target (e.g., button, menu item, etc.), which can be the most time-consuming step, and then performing the desired action on the chosen component. One salient trend in the human-computer interaction research field in recent years has focused on studying the visual search or location learning aspect of user tasks in graphical user interfaces to gain more understanding about the
cognitive models of human-computer interaction and subsequently to find ways to improve the usability of the human-computer interface (Salvucci 1999, Byrne et al. 1999, Ehret 2002, Hornof and Halverson 2003). Being able to quickly find the screen objects can reduce a user's task time, errors, and frustration (Ehret 2002). Investigating this question is especially meaningful to construction researchers because, although the time saving in visual search or location learning for each individual task may appear to be small, the aggregated effect across an entire software application over a sustained period of time can be significant. For construction foremen, computer systems need to be efficient and effective to use. Therefore, any effort toward this goal is significant in the process of computerizing the field documentation tasks of construction foremen.

Do Construction Foremen Experience Fewer Errors Using Icons Or Pre-Defined Text Lists?

The frequency and seriousness of the errors in user computer tasks also account for an important aspect of the usability of a system. It is easy to understand that if users experience more errors on one system, they are likely to become more easily frustrated with that system than with any competing system. The frustration could in turn lower their motivation to use the system. If provided with choices, users would naturally reject the error-prone system and adopt the one with fewer errors. User errors in graphical user interfaces can generally be grouped into the following three categories: 1) identification errors (observed errors that are clearly the result of incorrect identification), 2) selection errors (accidental mouse/pen selection errors that occur although the user has identified the correct choice), and 3) experimenter interventions, both subject initiated and experimenter initiated. Although it is desirable to analyze all types of user errors, this study will specifically focus on the identification type of user errors, as these
types of errors are directly related to how the interfaces are implemented. Selection errors resulting from pen/touch screen sensitivity and pen-using skills are considered hardware related and would therefore generally have equal effects on the icon-based user interface and the text-based user interface (provided the icon-represented screen objects and the text-represented screen objects are comparable in size). Selection errors will not be explored in this study.

Do Construction Foremen Have A Preference Between Predefined Text Lists And Icons?

Using a 7-step Likert ranking scale, what are typical foremen satisfaction ratings with the icon interface and the text interface (1. Very Dissatisfied; 2. Dissatisfied; 3. Slightly Dissatisfied; 4. No Opinion; 5. Slightly Satisfied; 6. Satisfied; 7. Very Satisfied)? Users' satisfaction with a system is not only closely related to the efficiency and efficacy of the system, which translate directly to task time and user errors, but is also affected by the psychological effects (disorientation, anxiety, etc.) and the cognitive load (how much mental effort is required) of the user interface. Computer anxiety and computer related anxiety were estimated to affect 30% of the United States workforce (Logan 1994). Computer related distress is commonly believed to yield increases in mistakes, debilitating thoughts, self-depreciating thoughts, irrational beliefs and absenteeism (Ramsay 1997). Rozell and Garden (2000) further noted that researchers have observed that motivation, or level of effort, is one of the primary variables affecting individual performance in general and computer-related performance in particular. Recognizing and developing systems towards user preference and user satisfaction is the key in today's user-driven information technology market.
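To make the three measures concrete, the following minimal Python sketch (not part of the study apparatus; the record structures and field names are hypothetical) shows how task completion time, identification errors, and Likert satisfaction ratings might be summarized per interface from logged data of the kind described in this study.

```python
from statistics import mean

# Hypothetical per-task records: one dict per visual search task.
# "interface" is "icon" or "text"; "search_ms" is the time to locate
# and select the target; "id_errors" counts incorrect identifications.
tasks = [
    {"interface": "icon", "search_ms": 2140, "id_errors": 0},
    {"interface": "icon", "search_ms": 3390, "id_errors": 1},
    {"interface": "text", "search_ms": 2890, "id_errors": 0},
    {"interface": "text", "search_ms": 4120, "id_errors": 2},
]

# Hypothetical 7-point Likert satisfaction ratings, one per subject.
satisfaction = {"icon": [6, 7, 5], "text": [5, 4, 6]}

for ui in ("icon", "text"):
    subset = [t for t in tasks if t["interface"] == ui]
    avg_time = mean(t["search_ms"] for t in subset)   # measure 1: task time
    errors_per_task = sum(t["id_errors"] for t in subset) / len(subset)  # measure 2
    avg_sat = mean(satisfaction[ui])                  # measure 3: satisfaction
    print(f"{ui}: {avg_time:.0f} ms mean search time, "
          f"{errors_per_task:.2f} identification errors per task, "
          f"{avg_sat:.1f}/7 mean satisfaction")
```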


While the above three questions comprise the main focus of this study, the following questions are also to be explored.

What Is The Ranking Order Of The Above Three Usability Aspects From The Point Of View Of Construction Foremen?

Which aspect do they perceive as the most important? Text-based user interfaces and icon-based user interfaces both have their advantages and disadvantages. Although it is desirable to have a system that is far superior to its competition in all aspects, reality shows this is not always the case. Therefore, when making system selections, it is important to know which factors are viewed as most important by construction foremen.

What Are The Views Of Construction Foremen About The Concept Of Icon-Based Mobile Field Documentation Applications?

In other words, how do construction foremen as the end users perceive icon-based mobile documentation applications for automating the field documentation process? Would they perceive that this kind of application would help them do their jobs better?

What Is The General Knowledge And Experience Of Construction Foremen With Mobile Computing Devices?

How do they perceive the difficulty and inefficiency associated with handwriting input using a pen/stylus on current mobile computing devices? Elliott's study (2000) indicated that 84.0% of the construction foremen in his sample (N=119) did not use computers to perform any part of their jobs, but 50.4% of the sampled foremen did use computers in their homes. Elliott's study also showed that 79.9% of the foremen responded positively that they thought mobile computing devices would help them do their jobs. Mobile computing devices such as personal digital assistants (PDA's) are commonplace nowadays, and affordability no longer seems to be the issue it was a few
years ago. They are more widely used, not only for work-related tasks, but also as a tool for better organization of personal business. Therefore, the knowledge and experience of foremen with mobile computing devices need to be investigated to understand their exposure to this type of technology. The difficulty associated with handwriting input using a pen/stylus has long been perceived by researchers as a roadblock in the implementation of mobile computing devices in the construction field. However, it is not known whether construction foremen as the end users share the same perception. Therefore this issue needs to be explored.

What Percentage Of The Information In Current Field Documentation Do Foremen Think Can Be Standardized For Use With The "Click And Select" Concept?

There always exist non-standard and miscellaneous information items in foremen's field documentation. But is the percentage of items that can potentially be standardized significant enough to justify the use of automated information entry, either by icons or by pre-determined text lists?

Last, how do construction foremen demographics (age, construction experience, computer use experience, etc.) affect the results for the first three research questions? Are there any correlations between these user parameters and the observed user performance variables in terms of task time and error rates?

Samples

Although it was desirable to include foremen from all construction trades in the sample universe, the study was specifically focused on construction foremen in the sitework trades (e.g., clearing and excavating, underground utilities, and paving) in the greater Orlando area in the state of Florida. This declaration of the sample universe was important to minimize the effect on the study results of potential external factors such as
1) subjects' individual knowledge differences on trade-specific construction activities, procedures, and equipment (i.e., construction foremen in the sitework trades may not be very familiar with knowledge pertaining to the mechanical or electrical trades), and 2) the geographical location factor (although the influence from this factor is likely very small, as the construction workforce in the U.S. is generally very mobile). The Blue Book, a business directory widely used in the U.S. construction industry, was used to identify the potential sitework contractors in the greater Orlando area that could be included in the study. The management personnel of these firms were contacted to seek their permission to solicit the participation of their foremen in the study. When a foreman agreed to participate in the study, the visual search experiment was arranged and conducted in an indoor environment (e.g., contractors' main offices and job site trailers) and the questionnaire survey was given subsequently. The inclusion criteria for this study were as follows:

Firm management agreed to allow their foremen to be included in the study.

The foreman was willing to voluntarily participate in the study.

The foreman was able to read and write English fluently.

The foreman had normal or corrected vision and was able to read 14 pt text without any difficulty when seated 12 to 18 inches in front of the experiment apparatus.

Although only the foremen sample was initially planned, two additional samples were taken in the final study phase. The initial sample included thirty-five foremen who were selected from eight different sitework construction companies. A total of twelve companies were contacted, but eight actually participated in the final study. These eight companies were not the same ones that participated in the pilot study phase. The second sample included thirty-seven subjects whose professions were closely related to civil
engineering/sitework construction. This sample included twelve project managers, five superintendents, four project engineers, one construction inspector, one construction estimator, eight civil engineers, four CAD technicians employed by civil engineering firms, and two construction management consultants. Subjects in this sample were employed by seventeen different firms, with six subjects employed by the state transportation department and thirty-one by private companies. These subjects were located in the U.S. except for one who was in the U.K.; the subject from the U.K. was a research consultant in the field of mobile computing technologies for the construction industry. The third sample included twenty-six graduate students and two undergraduate students in the School of Building Construction at the University of Florida. These students were selected because they would likely hold various construction supervision/management positions after graduating from the university. Their views about the research questions were of interest as well.

Methods

Two methods were employed in this study to collect the data needed to answer the above stated research questions. The first method was essentially a computer visual search game that each subject was required to play. The computer game contained code to track various user interface events (screen targets pressed, mouse cursor locations, time stamps for various events, etc.) that provided the quantitative data to answer the first two research questions (user task completion time and error rate). This information was recorded in a simple text format data file that was then imported into a spreadsheet application for data formatting and initial processing.

For the more qualitative questions (research questions 3 through 8), a survey questionnaire was used to obtain the subjects' views/answers to these questions. The
survey questionnaire also included sections for the subjects' demographic information, their knowledge of and experience with mobile computing devices equipped with touch sensitive screens, their subjective evaluations of the experimental apparatus, and other research questions.

For research question #5, a sample icon-based application operating on the Palm OS for documenting equipment usage was demonstrated to the subjects. The sample equipment usage tracking application was designed to work with the stylus and the touch sensitive screen and required no typing or handwriting to input the key information. A user would only need to use the stylus to select different icons to navigate between the screens and to input the equipment usage information (equipment number, operating hours, idle time, downtime (if any), and the quantities of work completed). A more detailed discussion of the sample icon-based equipment usage tracking application can be found later in this chapter. After the icon application demonstration, the subjects were asked to describe their views about the icon-based mobile field documentation application. Data collected through the computer experiment and survey questionnaires were imported into a statistics program for analysis.

Visual Searching Task Experiment

The visual searching task experiment was designed to collect the data for a user's task time and error rate response variables under the two different factor levels (icons or pre-defined text lists). The visual searching task required a subject to identify and select either an icon screen object or a pre-defined text screen object from a group of screen objects to match the instruction given at the top of the screen. The instruction was given in a different format from the screen objects, i.e., textual instructions for the icon interface and iconic instructions for the textual interface. Each visual search task basically consisted of
three steps: reading the instruction, locating an icon or pre-defined text item that matched the instruction, and selecting the correct screen target. The computer game tracked the time used for reading the instruction and the time for locating and selecting the screen objects for each search task.

Apparatus/Materials

The apparatus used in the visual searching task experiment was a custom-developed icon/text matching computer game. The computer game was tested in a pilot study phase and underwent several iterations and refinements to incorporate the findings from the pilot study. The computer game recorded users' mouse movements and actions on the screen during each visual search game session. The recorded information provided data on the time and user error variables for each visual searching task.

The visual search game included three icon-training sessions, one text-to-icon visual search session, and one icon-to-text visual search session. Each subject was required to complete the three icon-training sessions before the text-to-icon visual search session or the icon-to-text visual search session could begin. The order of the text-to-icon visual search session and the icon-to-text visual search session in each game was randomly determined by the code in the computer game. Three training sessions were given to each subject because the pilot study (see Chapter 4) showed this to be needed for a test subject to adequately learn the icons. In the event that a subject's overall time for any icon training session was longer than 91 seconds (a threshold level determined in the pilot study, within which 90% of tested subjects were able to complete the sessions), a dialog box would pop up on the screen to alert the test subject that the session time was longer than the baseline and that they needed to endeavor to do better in the next training session. In addition, an elapsed time meter was

PAGE 60

40 also displayed at the corner of the computer screen to remind the test subject of the time and to motivate them to complete the game quickly. Icon training session Figure 3-2 shows the screenshot of a t ypical icon training se ssion. Fifteen icons were displayed in 3 rows and 5 columns. The locations of the icons were randomly determined by the code in the computer pr ogram. In a typical icon training session, a textual instruction for the target screen objec t was displayed near th e top of the screen. When the screen was completely displayed, the clock would start counting the elapsed time. The test subjects were required to read the textual instructi on and then try to find the correct icon matching the textual descripti on, e.g., “Excavator Laying Pipe” as shown in Figure 3-2. When a correct icon was select ed, that particular icon would be removed from the screen and then the next visual se arch task would begin. If an incorrect icon were selected, a dialog box would pop up on the sc reen to prompt the te st subject to retry. A total of 4 retries were allowed before that visual search task was called unsuccessful. Each test subject was require d to complete three icon trai ning sessions before being allowed to move on the subsequent text icon or icon text visual search test sessions.
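The game's original source code is not reproduced in this document. Purely as an illustration of the session logic just described, the following Python sketch models one icon-training session in console form; all names (run_training_session, answer_for, MAX_RETRIES) are illustrative assumptions, not the study's actual implementation.

    import random
    import time

    MAX_RETRIES = 4           # retries allowed before a task is recorded as unsuccessful
    SESSION_THRESHOLD_S = 91  # pilot-study baseline for an icon-training session

    def run_training_session(icons, answer_for):
        """Simplified analogue of one icon-training session.

        icons      -- list of 15 icon names, laid out (conceptually) in 3 rows x 5 columns
        answer_for -- callable returning the subject's selection for a given instruction
        """
        remaining = icons[:]           # correctly matched icons are removed from the screen
        random.shuffle(remaining)      # icon locations randomly determined for the session
        log = []
        session_start = time.perf_counter()

        for instruction in icons:      # one search task per icon
            errors = 0
            while True:
                choice = answer_for(instruction, remaining)
                if choice == instruction:
                    remaining.remove(choice)   # correct icon disappears from the screen
                    break
                errors += 1
                if errors > MAX_RETRIES:       # task called unsuccessful after 4 retries
                    break
            log.append((instruction, errors))

        elapsed = time.perf_counter() - session_start
        if elapsed > SESSION_THRESHOLD_S:      # baseline alert described in the text
            print("Session time exceeded the 91 s baseline; try to do better next session.")
        return log, elapsed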

Figure 3-2. Sample Screen Shot of the Icon Training Session

Icon visual search test
After completing three icon-training sessions, each test subject was considered to have acquired the knowledge of the matching relationships between the icons and the text descriptions. The test subjects were then given either the icon visual search test or the text visual search test based on the random number generated by the computer (icon visual search test first if the random number was odd, and vice versa). Figure 3-3 shows a typical icon visual search test interface. The icon visual search test screen is essentially identical to the icon-training interface except that after each successful match the full screen was re-generated with all the icons at completely different locations from the previous visual search task. Figure 3-4 shows a typical text visual search interface. The text visual search interface followed the same principle as the icon visual search interface (screen re-generated after each visual search task). In the text visual search interface, the target instruction was given in icon format, with the screen objects consisting of a pre-determined text description list. Fourteen text objects were used in the text visual search game, organized in 7 rows and 2 columns.

Figure 3-3. Sample Screenshot of the Icon Visual Search Session

Figure 3-4. Screenshot of the Text Visual Search Session

Test platform
The computer game was designed to run on any computer with Windows 2000 or a later Microsoft Windows operating system supporting at least an 800-pixel by 600-pixel screen resolution. The computer game was designed with the capability to capture system time stamps accurate to one millisecond (1/1000 second). Data obtained in the pilot study phase (see Chapter 4) showed that the differences in the results from tests conducted on various computer platforms were not significant at a confidence level of 95% (α = 0.05). The computers used in this study were a Fujitsu Stylistic 3400 pen tablet PC and an IBM 600E laptop computer.
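The dissertation does not show how the game captured its millisecond time stamps. The sketch below, written in Python for illustration only (the actual game was a compiled Windows program, and the class and event names here are assumptions), shows one plausible way to log user-interface events with millisecond-level resolution to the tab-delimited format described later in this chapter.

    import time

    class EventLogger:
        """Log user-interface events with millisecond-resolution timestamps."""

        def __init__(self, path):
            self.f = open(path, "w")
            self.t0 = time.perf_counter()   # high-resolution monotonic clock

        def log(self, event, detail=""):
            ms = round((time.perf_counter() - self.t0) * 1000)  # elapsed ms since start
            self.f.write(f"{ms}\t{event}\t{detail}\n")          # tab-delimited record

    logger = EventLogger("session_log.txt")
    logger.log("screen_displayed")
    logger.log("instruction_displayed", "Excavator Laying Pipe")
    logger.log("cursor_entered_panel")
    logger.log("object_selected", "Excavator Laying Pipe")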

Icons and Pre-defined Text Lists
Icons used in the computer game were designed to depict various construction activities/operations. Thirty-five icons were initially designed with guidance from a university professor in the construction management research field. After the preliminary icon recognition testing and pilot study, the fifteen icons with the highest recognition success rates were selected and used in the computer game for this study. These icons and their corresponding text descriptions are listed in Table 3-1.

Table 3-1. Icons and Pre-defined Text Lists Used in the Visual Search Tests (the icon image accompanying each entry is shown in the original)
#    Pre-defined Text List
1    3-wheel Steel Roller Compacting
2    Traffic Roller Compacting Asphalt
3    Asphalt Paving Operation
4    Dozer Grading Dirt
5    Mobilizing Equipment
6    Excavator Backfilling Trench
7    Excavator Excavating Trench
8    Excavator Installing Structure
9    Excavator Laying Pipe
10   Excavator Loading Truck
11   Pouring Concrete
12   Loader Moving Dirt
13   Loader Moving Pipe
14   Material Delivery
15   Motor Grader Fine Grading

Data collection method
The tools used to evaluate system usability in the computer industry have changed greatly over the last two decades. When the field of usability was first formed, simple prototyping tools such as HyperCard were often used to create the scenario-task user interfaces, and primitive data collection methods such as paper and pencil were often used to log user event information and the test administrator's observations. As the usability discipline has gained more importance in the computer industry, usability study tools have also become more sophisticated in order to explore deeper usability issues in computer hardware and software products. The usability divisions of most major information technology companies have dedicated usability testing labs that are well equipped to allow evaluators to observe and analyze potential users' task behaviors in a controlled environment. The study platforms, whether simulated or in actual product form, are often assisted by special computer programs that can capture user interface events (e.g., time spent on each task, time paused, time that a subject stayed on a particular user interface object) and store this information in an event log file that can be retrieved and analyzed in detail later. One example of such sophisticated usability study tools used in recent years is the eye-tracking system (e.g., Salvucci 1999, Byrne et al. 1999, Hornof and Halverson 2003), which can record a participant's eye movement information during a task scenario. Eye-tracking can identify the patterns of visual activity that subjects exhibit while interacting with computer graphical user interfaces. Such tools are highly desirable in most usability studies. However, the complexity and costs associated with procuring, setting up, calibrating, and operating them often make it prohibitive to use an eye-tracking system on small-scale and incidental research projects.

Nonetheless, the main function of these laboratory usability tools is no more than tracking and collecting user interface event data such as menu selections, keystrokes, cursor location and movement, and the like. The same concept was used in the data collection method designed for this study. As discussed before, the visual search game was programmed to capture the time stamps and mouse/pen event information and to log the collected data in a tab-delimited text file that could be imported into a Microsoft Excel spreadsheet for initial data formatting and processing.

Response Variables
The visual search game captured and recorded the system time stamps at the following user interface events: screen displayed/elapsed time meter started, search instruction displayed, mouse cursor entered the search object panel, and screen object selected. It also recorded information such as the target object names and the name of each screen object selected. As shown in Figure 3-5, the instruction reading time (t_reading) was derived as the difference between the time stamp "screen displayed/search instruction displayed" and the time stamp "mouse cursor entered the search object panel." Similarly, the time used to search for the target object was obtained by subtracting the time stamp "mouse cursor entered the search object panel" from the time stamp "Nth screen object selected" for that particular search task. The number of search errors was counted as N-1, namely the number of attempts made before the correct target was selected on the Nth try. The session time was obtained by finding the difference between the time stamp "elapsed time meter started" and the time stamp of the last selected object in the session.
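To make these derivations concrete, here is a minimal Python sketch of computing the three response variables from one task's event records. The record layout (tuples of milliseconds, event name, and detail) is an assumption for illustration; the original log format is not reproduced in this document.

    def task_response_variables(events):
        """Compute (instruction reading time, search time, errors) for one task.

        events -- list of (timestamp_ms, event_name, detail) tuples, with one
                  "object_selected" event per selection attempt.
        """
        t_instruction = next(t for t, e, _ in events if e == "instruction_displayed")
        t_enter_panel = next(t for t, e, _ in events if e == "cursor_entered_panel")
        selections = [t for t, e, _ in events if e == "object_selected"]

        t_reading = t_enter_panel - t_instruction      # instruction reading time
        search_time = selections[-1] - t_enter_panel   # up to the Nth (correct) selection
        errors = len(selections) - 1                   # N-1 incorrect attempts
        return t_reading, search_time, errors

    # Example: one task with two wrong picks before the correct one (errors = 2)
    log = [(0, "instruction_displayed", "Excavator Laying Pipe"),
           (850, "cursor_entered_panel", ""),
           (2100, "object_selected", "Excavator Loading Truck"),
           (3300, "object_selected", "Loader Moving Pipe"),
           (4050, "object_selected", "Excavator Laying Pipe")]
    print(task_response_variables(log))   # (850, 3200, 2)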

Figure 3-5. Visual Search Response Variable Definitions. (The figure is a timeline: the search instruction is displayed, the instruction reading time t_reading runs until the mouse cursor enters the search panel, the search time runs from that point to the Nth target selection, another selection follows each incorrect choice, and the number of search errors is N-1.)

Visual Search Game Design Considerations
Special considerations were taken into account in the design of the visual searching experiment; they are discussed below.

Instruction format. Verbal instructions were initially contemplated in the pilot study phase with the intent of precluding the potential bias that might exist in the user interface for the pre-defined text lists, where study participants might attempt to locate the choice by word matching. However, it became evident that this was not a concern that needed to be addressed; on the contrary, verbal instructions could introduce "noise" into the result data from other factors, such as instructor accent and environmental/background distractions. As a result, the verbal instruction mode was dropped from the later versions of the visual search game. Instead, the iconic interface was designed with textual instructions and the pre-determined text list interface was designed with iconic instructions. This modification in instruction format/mode also facilitated comparing the time used for reading textual instructions with the time used for reading iconic instructions, to see whether the test subjects processed iconic and textual information differently.

Randomly-sequenced screen object layout. To reduce any bias that might arise from participants locating a screen object (icon or pre-defined text item) by remembering its location on the previous task screen, the layout sequences of the screen objects in all the experimental graphical user interfaces were randomly reassigned for each visual search task.

Font size. The text in the pre-defined text user interface was set in 14-point Arial, which is approximately 0.15 inch in height when shown on the Fujitsu Stylistic 3400 screen. In the pilot study, 14-point Arial was deemed adequate for most people when the Fujitsu Stylistic 3400 screen was held approximately 12 to 18 inches in front of the eyes.

Colors and contrast. Previous research by Nasanen and Ojanpaa (2003) showed that with increased levels of contrast or sharpness, the search time, the number of eye fixations per search, and the fixation duration all decreased. As colors and contrast were not of particular interest in this study, the screen objects (text lists and icons) were designed in monochrome (the highest contrast) to eliminate or minimize the potential effect of the color and contrast factors on the study results.

Size of the icons. Prior research (Lindberg and Nasanen 2003) found that the size of icons has a strong effect on the speed of icon processing in the human visual system. Lindberg and Nasanen's study showed that icons subtending less than a 0.7° visual angle resulted in significantly longer search times. As icon size was not within the scope of this study, the icons used here were designed to subtend a visual angle significantly greater than the 0.7° threshold observed by Lindberg and Nasanen. The icons were designed at 64 pixels by 64 pixels, which is approximately 0.67 inch by 0.67 inch when shown on the Fujitsu Stylistic 3400 screen. This icon size translates to visual angles of approximately 3.19° and 2.13°, respectively, when a test subject is seated 12 to 18 inches in front of the screen.
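The visual angles quoted above follow from the standard formula θ = 2·arctan(s / 2d) for an object of size s viewed at distance d; the short Python check below reproduces the 3.19° and 2.13° figures.

    import math

    def visual_angle_deg(size_in, distance_in):
        """Visual angle (degrees) subtended by an object of given size at a viewing distance."""
        return math.degrees(2 * math.atan(size_in / (2 * distance_in)))

    print(visual_angle_deg(0.67, 12))  # ~3.19 degrees at 12 inches
    print(visual_angle_deg(0.67, 18))  # ~2.13 degrees at 18 inches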

Repeated Measures
To maximize the information obtained from each study participant, repeated measures were used, as the residual effects between the two user interfaces are generally believed to be negligible. Each participant completed 14 visual searching tasks (matching icons or text to the specified tasks) in the icon user interface and 14 tasks in the pre-defined text list user interface during the test sessions.

Sample Icon-Based Mobile Equipment Usage Documentation Application
To obtain the foremen's views on icon-based field information documentation tools, a sample icon-based construction equipment timesheet application running on the Palm OS was developed for the study. Equipment usage is a piece of information commonly tracked by almost all sitework contractors. This information is typically used to prepare billings and also to measure productivity. Foremen often have to write on pre-printed paper forms to record equipment use time and the activities for which the equipment was used. Figures 3-6 to 3-10 show the major screenshots of the icon-based mobile application. To document equipment usage time information, a foreman would select the equipment timesheet icon on the first screen (Figure 3-6). For example, to document the usage of scrapers, the foreman would select the scrapers icon on the second screen, shown in Figure 3-7 (which displays various types of construction equipment). The third screen (Figure 3-8) shows all scrapers that had been mobilized onto the project site. The foreman would then select the particular scraper for which the time information was to be logged. On the fourth screen (Figure 3-9), the foreman would enter the equipment hour meter reading, equipment operating time, idle time, and downtime information by clicking the icons and the soft keypad displayed on the touch sensitive screen. On the fifth screen (Figure 3-10), the foreman would enter the completed work information, also by selecting appropriate icons. The application was designed in such a way that the foremen would not have to write with the stylus or use the hard keypad and combination keys to enter the above information.

Figure 3-6. Main Screen of the Sample Icon-based Field Documentation Application (shown running on a Handspring Treo 270)

Figure 3-7. Equipment Selection Screen

Figure 3-8. Scraper Selection Screen

Figure 3-9. Scraper Time Information Entry Screen

Figure 3-10. Scraper Work Production Input Screen

Procedures
Based on the experience from the pilot study phase, it was found that the total time required for each subject to complete the visual search game and fill out the questionnaire survey needed to be limited to ten minutes or less; otherwise, the interest in participating among potential subjects, and their employing firms, tended to be very low. The final version was designed to be completed within the ten-minute time frame, with six to eight minutes allotted for the visual search game and two to three minutes for filling out the survey forms. Generally, the manager of a company would first be contacted to obtain permission to interview the foremen and also to obtain assistance in making arrangements for the interviews. The actual interviews were usually held in conjunction with the companies' weekly or monthly meetings at the home offices, but some were held at jobsite offices.

Once a potential subject had agreed to participate in the study, a short introduction was provided on the visual search game and the survey. For subjects who had never used a computer or mouse before, the test administrator provided additional guidance through the first training session to ensure that they could efficiently use the mouse and understood the game protocols. As stated before, each subject started with either the icon visual search test or the text visual search test, based on the random number generated by the computer code. At the end of the visual search experiment, the subjects were asked to complete the questionnaire survey. Subjects either gave their answers verbally, to be recorded on the survey form by the experiment administrator, or filled out the questionnaire themselves.

Research Hypotheses
From research questions 1, 2, and 3 discussed earlier in this chapter, the following hypotheses were formulated.

Task Completion Time
Regarding the task completion time for the icon-based user interface versus the text-based user interface, the null hypothesis and the alternative hypothesis are stated as follows:

H1_0: There is no difference in the task completion time for the icon-based user interface and the text-based user interface.
H1_a: There is a difference in the task completion time for the icon-based user interface and the text-based user interface.

For the purpose of this study, a meaningful difference between the task completion times for the icon-based and text-based user interfaces at the per-task level was defined as 1,000 milliseconds (one second). This number was selected because, according to theories in the physiology and psychology of eye movements, visual acuity is not distributed uniformly across the visual field (Jacob 1995). Instead, the highest acuity is concentrated on the fovea, which covers approximately one degree of the field of view. As shown in Figure 3-11, the fovea is a spot located near the rear center of the human eye that is responsible for the sharpest central vision. Outside the fovea, peripheral vision acuity ranges from 15 to 50 percent of that of the fovea. Peripheral vision is generally inadequate for seeing an object clearly; therefore, in order to see an object (e.g., a word or an icon) clearly, one must move the eyeball so that the object falls directly on the fovea. During a typical visual search process, when the target appears in the peripheral vision, the eyes make sudden movements (called saccades, typically 30-120 milliseconds) to bring the target into the foveal vision range, and then a fixation (a period of relative stability during which an object can be viewed) follows. Fixations typically last between 200 and 600 milliseconds, and there is also a 100-300 millisecond delay before the saccade occurs. It is estimated that a complete fixation process could therefore take 330-1,020 milliseconds, so 1,000 milliseconds (one to three fixation periods) could be used as a suitable meaningful difference. A significance level of 0.05 (α = 0.05) was chosen for this hypothesis.

Figure 3-11. Schematic Diagram of the Human Eye, With the Fovea at the Bottom. Courtesy of Wikipedia, http://en.wikipedia.org/wiki/Optic_fovea, February 7, 2006
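The 330-1,020 millisecond range is simply the sum of the component ranges quoted above (pre-saccade delay + saccade + fixation); the trivial Python check below makes the arithmetic explicit.

    # Component ranges (ms) from the eye-movement figures cited above
    saccade_delay = (100, 300)  # delay before the saccade begins
    saccade = (30, 120)         # duration of the saccade itself
    fixation = (200, 600)       # duration of the fixation

    low = saccade_delay[0] + saccade[0] + fixation[0]    # 330 ms
    high = saccade_delay[1] + saccade[1] + fixation[1]   # 1,020 ms
    print(low, high)  # 330 1020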

Task Errors
Regarding the number of identification errors for the icon-based user interface versus the text-based user interface, the null hypothesis and the alternative hypothesis are stated as follows:

H2_0: There is no difference in the number of identification errors for the icon-based user interface and the text-based user interface.
H2_a: There is a difference in the number of identification errors for the icon-based user interface and the text-based user interface.

The meaningful difference in identification errors was defined as 1 (one error). As factors such as screen brightness, screen object sizes, color, contrast, and unfamiliarity with the icons were muted to the extent that their effects on the resultant data would be minimal, the number of identification errors was expected to decrease significantly. In the pilot study, the maximum number of identification errors in all icon visual search tests and text visual search tests was two. A significance level of 0.05 (α = 0.05, for Type I error) was chosen for this hypothesis.

User Satisfaction
Research question #3 concerns the satisfaction rating of the icon-based interface as compared to the text-based interface. The response to this question would normally be an ordinal variable. However, if the rankings of the satisfaction rating scale can be evenly placed on a -1 to +1 scale, the response variable can be treated as a numeric variable, and therefore more information would be available from the resultant data. With this treatment, the rating "Not at all" was assigned a value of -1.0, "Did not like it" -0.67, "Slightly disliked it" -0.33, "No opinion" 0, "Liked it a little" +0.33, "Liked it" +0.67, and "Liked it very much" +1.0.
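This ordinal-to-numeric treatment can be expressed as a simple lookup. The mapping below is exactly the one defined above; the dictionary-based coding itself is only an illustrative assumption, not the study's analysis script.

    # Even placement of the 7-step satisfaction scale on [-1, +1] (step = 1/3)
    SATISFACTION_SCORE = {
        "Not at all":           -1.0,
        "Did not like it":      -0.67,
        "Slightly disliked it": -0.33,
        "No opinion":            0.0,
        "Liked it a little":    +0.33,
        "Liked it":             +0.67,
        "Liked it very much":   +1.0,
    }

    responses = ["Liked it", "Liked it very much", "No opinion"]
    scores = [SATISFACTION_SCORE[r] for r in responses]
    print(sum(scores) / len(scores))   # mean satisfaction rating, here ~0.557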

The null hypothesis and the alternative hypothesis were formulated as follows:

H3_0: There is no difference in construction foremen's satisfaction ratings for the icon-based user interface and the text-based user interface.
H3_a: There is a difference in construction foremen's satisfaction ratings for the icon-based user interface and the text-based user interface.

The meaningful difference in foremen's satisfaction rating was defined as 0.165 (1/2 step on the satisfaction rating scale). A significance level of 0.05 (α = 0.05) was also chosen for this hypothesis.

Survey Questionnaire Design
Research questions 4 through 9, as stated earlier in the chapter, were to be answered from the information collected through the survey questionnaire. The questionnaire used in this study was designed to facilitate an organized and consistent method of gathering data during personal interviews. Questions pertinent to the research were developed and then refined in the pilot study. For many questions, a Likert scale or semantic differential scale was deemed appropriate and scaled answers were developed. Several variations of the Likert scale were used, as listed below:

Agreement Scale: 1 strongly disagree, 2 disagree, 3 slightly disagree, 4 no opinion, 5 slightly agree, 6 agree, 7 strongly agree
Importance Scale: 1 not important at all, 2 of little importance, 3 fairly important, 4 important, 5 very important

Efficiency Scale: 1 very inefficient, 2 inefficient, 3 slightly inefficient, 4 no opinion, 5 slightly efficient, 6 efficient, 7 very efficient
Satisfaction Scale: 1 not at all, 2 did not like it, 3 slightly disliked it, 4 no opinion, 5 liked it a little, 6 liked it, 7 liked it very much

There were also open-ended questions in the survey questionnaire, because the possible answers to some of the questions could not be anticipated. The answers to the open-ended questions were sorted and grouped in the results analysis phase.

Foremen Demographics
The first section of the questionnaire gathered demographic information about the individual participant and the participant's employer. This included the company name, years in business, the foreman's specific trade, the duration of the foreman's construction experience, the foreman's average crew size, the foreman's education level, and the foreman's age.

Foremen's Experience with Touch Sensitive Screen Devices and Mobile Computing Devices
The survey questionnaire was also intended to obtain general information on the construction foremen's experience with and use of touch sensitive screen devices and mobile computing devices. Many of today's touch sensitive systems already incorporate icons to some extent, and a foreman's experience with such systems might affect the foreman's stated preference between a text-based interface and an icon-based interface.

Which of the following touch-sensitive screen devices have you used? (check all that apply)

a. ATM machines
b. Information kiosks
c. Store checkout services
d. Other, please specify

Have you ever used a mobile computing device (for example, a Palm Pilot or Pocket PC)? (Yes/No)

Do you use a mobile computing device for work or for your own personal business? (Yes/No)

If "Yes," what do you mainly use it for?
a. Work
b. Personal business
c. Both

If "Yes," how much time do you use it on a weekly basis? ____ minutes

How efficient do you think it is to enter field information on computers using the stylus writing method? [Efficiency Scale]

How important do you think it is to be able to enter field information on computers in a quick and efficient manner? [Importance Scale]

Foremen's View on Standardization of the Content of Field Documentation
Foremen were asked whether they thought the information content of their typical field documentation could be standardized. If the answer was "Yes," they were then asked to estimate the percentage of the content that could be standardized.

Do you think most information in your field documentation can be standardized on the computer so you can pick and choose on the computer screen? [Agreement Scale]

If the answer is "Yes," what is the percentage of information that you think can be standardized? ____ %

Foremen's Preference Between Icons and Pre-defined Text Lists
As satisfaction is one of the important factors in usability, foremen's satisfaction ratings for the text-based interface and the icon-based interface were assessed. Foremen were asked how much they liked the icon visual search game and the text visual search game, using the 7-step Likert scale mentioned previously. This study was particularly interested in knowing which interface had the higher user satisfaction and how large the difference was.

How much did you like the icon game? [Satisfaction Scale]
How much did you like the text game? [Satisfaction Scale]
Please rank the importance of the following three usability factors ("1" being the lowest and "10" being the highest):
a. shorter task completion time
b. fewer errors
c. satisfaction

Foremen's View about Icon-based Field Information Documentation Tools
After the visual search game, the subjects were shown how to use the sample icon-based mobile equipment usage documentation application. The subjects were then asked whether they thought the icon-based mobile computing system could help them better fulfill their field documentation responsibilities.

Do you think icon-based mobile computer tools like the one shown to you would help you do your daily log? [Agreement Scale]
Please comment on the answer:
If you were given an icon-based mobile computer tool just like the one shown to you for your field documentation, would you use it?
a. Yes
b. No, please explain reason

As the survey questionnaire was originally designed for foremen, it was also used for the "other construction professionals" sample, and those subjects were asked to fill out the survey as best applied to them. A shortened and modified questionnaire survey form was used for the student sample, with most questions the same as on the foremen questionnaire.

CHAPTER 4
PILOT STUDY

This chapter documents the preliminary studies conducted during the process of designing the visual search game. The chapter is organized into the following sections: icon design and recognition quality testing, sample size preliminary estimation for hypothesis testing, visual search game initial testing, test platform effect study, and icon learning curve analysis.

Icon Design and Recognition Quality Testing
Icons included in this study were designed in the Axialis AX-Icons 4.5 program. A total of 35 icons (see Table 4-1), each 64 pixels by 64 pixels, were developed to represent various sitework construction activities and operations. These icons were first evaluated by a university professor in the construction management research field and went through several iterations before a recognition quality test was conducted with other construction-related professionals. In the icon recognition quality-testing phase, a total of 18 persons whose professions were directly related to sitework construction participated in the evaluation:

6 construction foremen
3 equipment operators
2 superintendents for a sitework contractor
1 project engineer
2 superintendents for general contractors
3 construction inspectors for a civil engineering firm
1 construction surveyor

A printout sheet with the 35 icons was shown to the 18 participants, who were asked to identify the construction activity that each icon represented. If a participant's description of the icon matched the design intent, the recognition of that icon was deemed successful. All 18 participants completed the recognition quality evaluation, and the results are shown in Table 4-1. The recognition success rate for an icon was defined as the percentage of evaluators who successfully identified that particular icon at the verbal prompt, out of the total number of evaluators. Table 4-2 shows the number of icons successfully recognized by each evaluator.

Table 4-1. Icon Recognition Quality Testing Results (the icon images shown in the original are not reproduced)
Icon No.  Icon Description                           Recognition Success Rate
1    Excavator Excavating Trench               16/18 (88.89%)
2    Excavator Laying Pipe                     17/18 (94.44%)
3    Excavator Setting a Structure             15/18 (83.33%)
4    Excavator Loading Truck                   17/18 (94.44%)
5    Pouring Concrete                          17/18 (94.44%)
6    Dozer Clearing Trees                      16/18 (88.89%)
7    Excavator Clearing Trees                  14/18 (77.78%)
8    Dump Truck Unloading Materials            17/18 (94.44%)
9    Flat Belly Pan Loading Material           10/18 (55.56%)
10   Peddle Pan Loading Material               8/18 (44.44%)
11   Dozer Cutting Trench                      7/18 (38.89%)
12   Dozer Grading Dirt                        17/18 (94.44%)
13   Motor Grader Fine Grading                 17/18 (94.44%)
14   Survey and Layout                         15/18 (83.33%)
15   Maintenance of Traffic                    17/18 (94.44%)
16   Loader Grading Dirt                       13/18 (72.22%)
17   Loader Moving Dirt                        16/18 (88.89%)
18   Loader Moving Pipe                        17/18 (94.44%)
19   Mixer Mixing Subgrade                     9/18 (50.00%)
20   Box Blade Grading Dirt                    16/18 (88.89%)
21   Single-drum Roller Compacting             16/18 (88.89%)
22   Maintenance of Traffic                    17/18 (94.44%)
23   Self-elevating Scraper Loading Material   15/18 (83.33%)
24   Double-drum Roller Compacting Asphalt     17/18 (94.44%)
25   Paving Asphalt                            17/18 (94.44%)
26   Traffic Roller Compacting Asphalt         17/18 (94.44%)
27   3-Wheel Steel Roller Compacting           17/18 (94.44%)
28   Small Double Roller Compacting Asphalt    17/18 (94.44%)
29   Plate Tamp Compacting Dirt                17/18 (94.44%)
30   Broom Tractor Sweeping                    15/18 (83.33%)
31   Excavator Backfilling Trench              17/18 (94.44%)
32   Mobilize Equipment                        17/18 (94.44%)
33   Material Delivery                         17/18 (94.44%)
34   Construction Accident                     16/18 (88.89%)
35   Dewatering Operation                      15/18 (83.33%)

Table 4-2. Icon Recognition Evaluation Results Organized by Evaluator
Evaluator #  Evaluator Job Function                        # of Icons Successfully Recognized
1    Construction Inspector                        30
2    Construction Inspector                        32
3    Earthwork Foreman                             35
4    Earthwork Foreman                             33
5    Construction Surveyor                         32
6    Earthwork Foreman                             31
7    Equipment Operator                            29
8    Equipment Operator                            30
9    Earthwork Foreman                             34
10   Construction Inspector/P.E.                   29
11   Equipment Operator                            33
12   General Superintendent                        29
13   Superintendent                                33
14   Project Engineer                              30
15   Project Superintendent (General Contractor)   32
16   Underground Utilities Foreman                 32
17   Earthwork Foreman                             34
18   Project Superintendent (General Contractor)   31

Based on the icon recognition quality test results, the fifteen icons with relatively high recognition success rates were selected from the 35 icons and used in the visual search game computer program.

Test Platform Difference Study
As it was likely that the visual search game would be administered on different computer platforms during the final testing phase, it was important to know whether different computer platforms could cause differences in the test results. To answer this question, a test platform difference study was conducted to investigate the potential influence of platform differences.

Data
Two independent samples, with 15 subjects per sample, were drawn from a local civil engineering firm. Each subject completed five (5) icon-training sessions, with 15 visual search tasks in each session. None of the subjects had seen the icons or the visual search game before the test. The first sample was tested on a Fujitsu 3400 tablet PC; the second sample was tested on the various types of computers that the test subjects used daily. The results for the average task time and task errors from the icon-training sessions are shown in Table A-1 in Appendix A. The individual task time was defined as the time used by a subject to read the textual instruction and subsequently find the corresponding icon. The average task time for each session was defined as the average of the individual task times in that session. To reduce the bound of error in the results, the two highest and two lowest task times were excluded, and only the remaining eleven (11) observations were used to compute the average time for each session. For example, Table 4-3 illustrates how Subject 1's average task time in Session 3 was calculated. The two lowest individual task time observations, 500 ms and 1,407 ms, and the two highest observations, 13,172 ms and 10,859 ms, were considered outliers and were not included in the computation of the average search task time for Session 3. The average task time for Session 3 was computed from the remaining eleven individual task time observations; based on this method, the average task time for Subject 1 in Session 3 was 3,966 milliseconds. The reason for excluding the outliers was mainly to reduce the bound of error due to either excessively long or excessively short incidental task times. Excessively long task times usually occurred when the visual search process stalled on one particular icon with which the subject had great difficulty. Excessively short task times typically occurred when the mouse cursor happened to be right on the target icon when the next task started; the shortest task time also frequently occurred on the last visual search task, when only one icon remained on the screen.
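A minimal sketch of this trimmed-average computation, run on Subject 1's Session 3 task times (listed in Table 4-3 below), reproduces the 3,966-millisecond figure.

    def trimmed_average(task_times_ms, n_trim=2):
        """Average task time after dropping the n_trim highest and n_trim lowest values."""
        kept = sorted(task_times_ms)[n_trim:-n_trim]   # middle 11 of 15 observations
        return sum(kept) / len(kept)

    # Subject 1, Session 3 task times (ms) from Table 4-3
    times = [6937, 10859, 3156, 2156, 13172, 3031, 4953, 6172,
             3625, 5562, 1407, 3281, 1578, 3172, 500]
    print(round(trimmed_average(times)))   # 3966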

Table 4-3. Example of Excluding Outliers in the Computation of the Average Task Time
Search Task #   Task Time (in Milliseconds)
1     6,937
2     10,859
3     3,156
4     2,156
5     13,172
6     3,031
7     4,953
8     6,172
9     3,625
10    5,562
11    1,407
12    3,281
13    1,578
14    3,172
15    500

Figure 4-1 shows the mean task time and search errors in each of the five icon-training sessions for the Fujitsu sample and the Non-Fujitsu sample. As is evident in Figure 4-1, the mean task time decreased as the training sessions progressed, and the task errors decreased in a similar fashion on both the Fujitsu and Non-Fujitsu platforms. Pearson correlation coefficients between the mean task time and task errors on both the Fujitsu platform and the Non-Fujitsu platform, shown in Tables 4-4 and 4-5, indicate that the correlation is significant at the 0.01 level on both platforms. In other words, task time and task errors were highly correlated: longer task times were generally associated with more task errors, and shorter task times with fewer errors.

Figure 4-1. Mean Task Time and Search Errors Observed in the Platform Difference Study (line chart of mean task time, in 1,000 milliseconds, and number of session errors versus session number 1-5, with separate series for the Fujitsu and Non-Fujitsu samples)

Table 4-4. Correlation Between the Mean Task Time and Task Errors on the Fujitsu Platform (N = 5)
Fujitsu Task Time vs. Fujitsu Task Errors: Pearson correlation = 0.999, Sig. (2-tailed) = 0.01

Table 4-5. Correlation Between the Mean Task Time and Task Errors on the Non-Fujitsu Platform (N = 5)
Non-Fujitsu Task Time vs. Non-Fujitsu Task Errors: Pearson correlation = 0.998, Sig. (2-tailed) = 0.01
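As a cross-check on the reported coefficients, the Pearson correlation can be recomputed from the five Fujitsu session means and mean errors reported later in Table 4-7. The sketch below (requiring Python 3.10+ for statistics.correlation) is only an illustration; the study itself used SPSS.

    from statistics import correlation  # Python 3.10+

    # Fujitsu sample: session means of average task time (ms) and of task errors,
    # sessions 1-5, as reported in Table 4-7
    mean_task_time = [6912.27, 4167.93, 3863.93, 3678.80, 3478.87]
    mean_task_errors = [5.60, 2.33, 1.73, 1.67, 1.40]

    r = correlation(mean_task_time, mean_task_errors)
    print(round(r, 3))   # ~0.999, matching Table 4-4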

Hypotheses Testing
In determining the potential differences in study results that might be caused by different computer platforms, hypotheses about the differences between the means of the two populations and hypotheses about the variances of the two populations were tested.

Hypotheses testing about variances of the populations on Fujitsu and Non-Fujitsu platforms
It was important to compare the variability in the result data from the Fujitsu and Non-Fujitsu platforms because, for small samples (N < 30), the assumption of equal variances is required in order to calculate the pooled sample variance used to test the hypotheses about the means of the two populations. With a confidence level of 95% (α = 0.05), the following pair of hypotheses was stated for each of the ten sample variables: the average task time in each of the five sessions (T1 through T5) and the task errors in each of the five sessions (E1 through E5). For each variable X:

H0: σ²(X, Fujitsu) = σ²(X, Non-Fujitsu)
H1: σ²(X, Fujitsu) ≠ σ²(X, Non-Fujitsu)

If H0 is accepted for a variable, equal variance likely exists between the Fujitsu and Non-Fujitsu platforms for that variable; the test platform is unlikely to be a factor causing different variability in the test results, and it is appropriate to calculate a pooled sample variance from the two sample standard deviations (e.g., s(T1, Fujitsu) and s(T1, Non-Fujitsu)). If H0 is rejected, there is insufficient evidence to support the existence of equal variances for that variable; the test platform is likely a factor causing different variability in the test results, and it is not appropriate to calculate a pooled sample variance.

To test the above hypotheses, the F value (F = S1²/S2², where Population 1 denotes the population providing the larger sample variance) was calculated for each pair of hypotheses and is listed in Table 4-6. The critical value F(α/2) (α/2 = 0.025), with n1 - 1 (15 - 1 = 14) degrees of freedom for the numerator and n2 - 1 (15 - 1 = 14) degrees of freedom for the denominator, is also shown in Table 4-6.
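Before turning to Table 4-6, here is a minimal sketch of this two-sided equal-variance F test, assuming scipy is available. The variance values in the usage line are hypothetical, for illustration only, and are not taken from Table 4-6.

    from scipy.stats import f

    def equal_variance_f_test(var_a, var_b, n_a=15, n_b=15, alpha=0.05):
        """Two-sided F test of H0: equal population variances.

        The larger sample variance goes in the numerator, as in the text, so the
        statistic is compared against the upper critical value F(alpha/2).
        """
        s1_sq, s2_sq = max(var_a, var_b), min(var_a, var_b)
        f_stat = s1_sq / s2_sq
        f_crit = f.ppf(1 - alpha / 2, n_a - 1, n_b - 1)  # ~2.98 for (14, 14) df
        return f_stat, f_crit, f_stat < f_crit           # True -> cannot reject H0

    # Hypothetical sample variances from two 15-subject samples
    print(equal_variance_f_test(2.5e7, 1.4e7))   # (~1.786, ~2.979, True)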

Table 4-6. Platform Difference Study - Sample Variance F Values
Sample Variable   S1²              S2²              F = S1²/S2²   F(0.025) (n1-1 = 14, n2-1 = 14)
T1   47,779,430.471   34,221,720.004   2.119   2.983
T2   17,371,668.271   19,017,158.084   1.603   2.983
T3   14,929,980.804   14,415,690.240   1.101   2.983
T4   13,533,569.440   13,647,113.640   1.364   2.983
T5   12,102,513.284   11,557,280.160   2.099   2.983
E1   31.360           27.738           1.795   2.983
E2   5.444            9.000            1.340   2.983
E3   3.004            5.138            1.446   2.983
E4   2.778            3.484            1.655   2.983
E5   1.960            2.778            1.111   2.983

As shown in Table 4-6, the calculated F values for T1 through T5 and E1 through E5 are all less than the F(0.025) critical value. Therefore, at a confidence level of 95% (α = 0.05), the H0 hypotheses for T1 through T5 and E1 through E5 cannot be rejected. This led to the conclusion that, at a confidence level of 95% (α = 0.05), the different computer test platforms (Fujitsu or Non-Fujitsu computers) did not introduce differences in the variability of the average task times and task errors for the five icon-training sessions. Therefore, it can be assumed with confidence that data collected on different computer platforms would have equal variances.

Hypotheses testing about the difference between the means of the data collected on the Fujitsu and Non-Fujitsu platforms
It is important not only to analyze the variability of the data collected on the Fujitsu and Non-Fujitsu platforms but also to compare the means of the data from these two independent samples. If different platforms do result in different means, then the data in the final study should all be collected on the same platform to avoid the unwanted bias that might exist because of the platform difference factor. For this, the following hypotheses were stated with a confidence level of 95% (α = 0.05):

For each variable X in {T1, ..., T5, E1, ..., E5}:

H0: μ(X, Fujitsu) - μ(X, Non-Fujitsu) = 0
H1: μ(X, Fujitsu) - μ(X, Non-Fujitsu) ≠ 0

If H0 is accepted for a variable, the sample evidence is not sufficient to support the conclusion that there is a difference between the means of that variable (the session's average task time, or its task errors) for the Fujitsu and Non-Fujitsu populations; the test platform is unlikely to be a factor that causes a difference in test results. If H0 is rejected, there is a difference between the means of the data collected on the Fujitsu and Non-Fujitsu platforms for that variable, and the test platform is likely to be a factor that causes a difference in test results.

As the sizes of the Fujitsu sample and the Non-Fujitsu sample were both less than 30, the t distribution with 28 degrees of freedom (n1 + n2 - 2 = 15 + 15 - 2 = 28; n1 = 15, n2 = 15) was used to develop the critical values for the test. The following assumptions were made for the test: 1) the Fujitsu population and the Non-Fujitsu population both have normal distributions; and 2) the population variances of the Fujitsu and Non-Fujitsu populations are equal. Pooled estimates of the population variances were calculated from the variances of the Fujitsu and Non-Fujitsu samples. For α = 0.05, t(α/2) with 28 degrees of freedom is 2.048. The critical values for each test (lower value 0 - 2.048·SE, upper value 0 + 2.048·SE, where SE is the standard error of the difference between the sample means; with equal sample sizes this equals the pooled-variance standard error) were also calculated, and the resultant data are listed in Table 4-7.

Table 4-7. Platform Difference Study - t Test for Equality of Means (Sample 1: Fujitsu Platform; Sample 2: Non-Fujitsu Platform)
Variable  Sample 1 Mean  Sample 2 Mean  Difference of Means  S1         S2         SE = (S1²/n1 + S2²/n2)^0.5  Critical Lower  Critical Upper
T1        6,912.27       5,849.93       1,062.33             3,199.987  2,198.060  1,002.377                   -2,052.868      2,052.868
T2        4,167.93       4,360.87       -192.93              1,084.059  1,372.435  451.572                     -924.820        924.820
T3        3,863.93       3,796.80       67.13                746.550    711.389    266.259                     -545.299        545.299
T4        3,678.80       3,694.20       -15.40               1,050.892  899.898    357.229                     -731.606        731.606
T5        3,478.87       3,399.60       79.27                1,012.110  698.636    317.538                     -650.318        650.318
E1        5.60           5.27           0.33                 4.763      3.555      1.535                       -3.143          3.143
E2        2.33           3.00           -0.67                3.063      2.646      1.045                       -2.140          2.140
E3        1.73           2.27           -0.53                1.387      1.668      0.560                       -1.147          1.147
E4        1.67           1.87           -0.20                1.759      2.264      0.740                       -1.516          1.516
E5        1.40           1.67           -0.27                1.056      1.113      0.396                       -0.811          0.811

As shown in Table 4-7, the observed sample mean differences for T1 through T5 and E1 through E5 all fall between the lower and upper critical values. Therefore, at a confidence level of 95% (significance level α = 0.05), the H0 hypotheses for T1 through T5 and E1 through E5 cannot be rejected. Based on the results of the platform difference study, there is not sufficient evidence to reject the hypotheses that data collected on the different platforms do not differ in terms of the population means and population variances. Therefore, the risk of introducing bias into the results by administering the visual search game on different computers can be considered low.
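A sketch of the pooled two-sample t test used above, again assuming scipy is available; the usage line takes the T1 row of Table 4-7 as input. With equal sample sizes, the pooled standard error equals (S1²/n1 + S2²/n2)^0.5, the quantity tabulated above.

    import math
    from scipy.stats import t

    def pooled_t_test(mean1, mean2, s1, s2, n1=15, n2=15, alpha=0.05):
        """Two-sided pooled-variance t test of H0: mu1 - mu2 = 0."""
        sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)  # pooled variance
        se = math.sqrt(sp2 * (1 / n1 + 1 / n2))                      # std. error of difference
        t_crit = t.ppf(1 - alpha / 2, n1 + n2 - 2)                   # 2.048 for 28 df
        diff = mean1 - mean2
        return diff, se, t_crit * se, abs(diff) < t_crit * se        # True -> cannot reject H0

    # Session 1 average task time (T1) from Table 4-7
    print(pooled_t_test(6912.27, 5849.93, 3199.987, 2198.060))
    # -> (~1062.3, ~1002.4, ~2052.9, True): difference within the critical bounds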

Icon Learning Curve Analysis

Learning Curve Regression
To obtain a general idea of how learning effects took place in the icon-training sessions, the data from the platform difference study were further analyzed with a learning curve regression. As indicated in the platform difference study, different platforms did not appear to generate differences in the means and variances of the two independent samples; hence the data from both samples were merged, and the descriptive statistics for the merged sample are listed in Table 4-8.

Table 4-8. Platform Difference Study Sample Group Statistics
     N    Minimum   Maximum   Mean      Std. Deviation
T1   30   2333      13510     6381.10   2750.947
T2   30   2443      7430      4264.40   1219.127
T3   30   2679      5315      3830.37   717.312
T4   30   2239      6402      3686.50   961.329
T5   30   2100      6185      3439.23   855.440

It can be seen from Table 4-8 that the minimum observed average task times for each session did not vary greatly, while the maximum observed average task times exhibited great variability, with a general decreasing trend as the session number increased. The means of the average task times from Session 1 to Session 5 exhibit a similar decreasing trend. The scatter plot of the mean average task times is shown in Figure 4-2, with an estimated logarithmic trend line calculated with the least squares method superposed.

82 y = -1778.9Ln(x) + 6023.6 R2 = 0.9019 0.00 1000.00 2000.00 3000.00 4000.00 5000.00 6000.00 7000.00 0123456 Session NumberMilliseconds Mean Average Task Time Log. (Mean Average Task Time) Figure 4-2. Learning Effect of the Mean Average Task Time The concept of the learning curve was in troduced to the airc raft industry in 1936 when T. P. Wright published an artic le in the February 1936 issue of the Journal of the Aeronautical Science Wright described a basic theory for obtaining cost estimates based on the repetitive production of airplane asse mblies. Since then, learning curves (also known as progress functions) have been appl ied to many types of work, from simple tasks to complex projects as manufacturing a Space Shuttle (NASA website, 2005). There are various models for learning curve analysis. For the Wright learning curve, the underlying hypothesis is that the di rect effort (time) necessary to complete a unit of production will decrease by a cons tant percentage each time the production quantity is doubled. For example, if the rate of improvement is 20% between doubled quantities, then the learning percent would be 80%. A learning curve analysis program developed by J. Gambatese, entitled Cons truction Scheduling and Productivity Impacts (CSPI) program, predicts the time to perform a particular unit of work (individual units) as:


T_N = K_t × N^s

Where:
T_N = effort required to complete the Nth unit
N = unit number
K_t = constant (theoretically K_1 = T_1)
s = slope parameter or improvement rate (a negative value), s = log(phi)/log(2)
phi = rate of improvement (generally based on doubled quantities; the log 2 implies doubled units). If phi = 0.80, then the second unit is completed with 80% of the effort of the first unit, and the 4th unit would require 64% of the effort of the first unit.

The CSPI program uses the least squares fit method to calculate s and K. The data for the mean of the average task time from the merged sample were entered in the CSPI program, and the computed results were s = -0.374 and K = 6020. With s known, phi was computed to be 77.18%. Therefore, using the CSPI program, the learning curve equation for the average icon match task time is T_N = 6020 × N^(-0.374).

Wright's learning curve and the CSPI program are both based on the assumption of a constant rate of improvement. With this assumption and a sufficiently large N, the formula can mathematically produce a T_N that is not realistically attainable. For example, with the equation derived above, for N = 40 the predicted T_40 is 1,516.5 ms, which is already less than any observed individual task time. For N = 1,000, T_1000 would be 455.4 ms, a value almost impossible for an average person to attain.
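The same fit can be sketched outside the CSPI program: taking logarithms turns T_N = K·N^s into a straight line, so s and K can be recovered with least squares. numpy is assumed, and the session means again come from Table 4-8.

```python
import numpy as np

# Session means from Table 4-8, treated as unit times T_1 .. T_5.
units = np.array([1, 2, 3, 4, 5])
times = np.array([6381.10, 4264.40, 3830.37, 3686.50, 3439.23])

# log(T_N) = log(K) + s*log(N): a straight line in log-log space.
s, log_k = np.polyfit(np.log(units), np.log(times), deg=1)
k = np.exp(log_k)

# Learning percentage: phi = 2**s (effort ratio between doubled quantities).
phi = 2 ** s

print(f"s = {s:.3f}, K = {k:.0f}, phi = {phi:.2%}")
# Agrees closely with the CSPI result reported in the text:
# s = -0.374, K = 6020, phi = 77.18%.

# Extrapolation shows why a constant improvement rate breaks down;
# values are close to the 1,516.5 ms and 455.4 ms cited in the text
# (small gaps come from rounding s to three decimals).
print(f"T_40 = {6020 * 40 ** -0.374:.1f} ms, "
      f"T_1000 = {6020 * 1000 ** -0.374:.1f} ms")
```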


It is logical to conjecture that, as N increases, T_N should approach a number that represents the physical limit of the test subject. Therefore, the assumption of a constant rate of improvement may not fully address the factor of human physical learning limits in the learning curve equations. For this reason, the learning curve equation is generally accepted as being applicable when the time to produce one unit is high and the total number of units produced is low. Learning curves exhibit a greater gradient at the beginning of the curve, and the gradient gradually decreases as N increases, as illustrated in the learning curve of the icon training sessions.

If LR_N = (T_(N-1) − T_N)/T_(N-1) is denoted as the learning rate in Session N, the task time T_N can be expressed as T_N = T_(N-1)(1 − LR_N). The learning rate is a function of the session number N, i.e., LR_N = f(N). The data in Table A-1 can be transposed into a new group of learning rate data, as shown in Table 4-9.

Table 4-9. Learning Rate on the Average Task Time per Icon for Each Training Session (LR(N-1)-N denotes the learning rate from Session N-1 to Session N)

Subject No.   Platform      LR1-2     LR2-3     LR3-4     LR4-5
1             Non-Fujitsu   0.3169    0.0794    0.0270    0.2848
2             Non-Fujitsu   -0.0596   -0.0837   0.1642    0.0621
3             Non-Fujitsu   -0.1105   0.3315    0.1398    0.2830
4             Non-Fujitsu   0.1044    0.1670    0.0037    -0.1339
5             Non-Fujitsu   0.3186    -0.2005   0.2904    -0.2826
6             Non-Fujitsu   0.3897    0.3139    -0.0755   0.0564
7             Non-Fujitsu   0.5291    -0.0028   0.1216    -0.0696
8             Non-Fujitsu   0.3485    0.1521    -0.1520   0.0512
9             Non-Fujitsu   0.1800    0.0311    0.0022    0.2812
10            Non-Fujitsu   0.3738    -0.0295   0.1976    -0.1058
11            Non-Fujitsu   -0.0137   0.2399    -0.1916   0.0862
12            Non-Fujitsu   -0.0402   0.1836    -0.3936   0.1087
13            Non-Fujitsu   0.4583    -0.0577   0.2879    -0.0982
14            Non-Fujitsu   0.3520    0.1317    -0.2690   0.1555
15            Non-Fujitsu   0.0487    0.1794    0.1458    0.1759
16            Fujitsu       0.2339    0.0012    0.0235    0.0383
17            Fujitsu       0.2216    0.1077    0.0034    0.1441
18            Fujitsu       0.3532    -0.1058   0.3632    0.0050
19            Fujitsu       0.3257    0.1128    0.1275    -0.1706
20            Fujitsu       0.0766    -0.0039   0.1591    0.0729


Table 4-9. Continued

Subject No.   Platform   LR1-2     LR2-3     LR3-4     LR4-5
21            Fujitsu    0.5867    0.0512    0.1363    -0.3809
22            Fujitsu    0.5284    0.1030    -0.3969   0.2626
23            Fujitsu    0.4341    0.1987    0.2256    -0.0050
24            Fujitsu    0.1242    0.1016    0.0819    -0.2624
25            Fujitsu    0.4624    -0.0209   -0.0991   0.0683
26            Fujitsu    0.4407    0.1165    -0.0030   0.1782
27            Fujitsu    0.2753    -0.1993   0.2208    -0.0105
28            Fujitsu    0.4562    0.1306    -0.0350   0.1755
29            Fujitsu    0.4840    0.2376    0.1065    0.1798
30            Fujitsu    0.1596    -0.0105   -0.1890   0.2731

The above learning rate data were entered in SPSS (Version 13.0), and the group statistics were computed and summarized in Table 4-10.

Table 4-10. Group Statistics of Learning Rates

        N    Minimum   Maximum   Mean       Std. Deviation
LR1-2   30   -0.1105   0.5867    0.278620   0.1932216
LR2-3   30   -0.2005   0.3315    0.075197   0.1332881
LR3-4   30   -0.3969   0.3632    0.034110   0.1913727
LR4-5   30   -0.3809   0.2848    0.047443   0.1755864

It can be seen from Table 4-10 that the minimum, maximum, and mean of the learning rates over the sessions still exhibit the general decreasing trend, while the negative values of the minimum learning rates suggest the increasing difficulty of improving on the previous sessions. The scatter plot of the mean learning rates over each session is shown in Figure 4-3, with an estimated logarithmic trend line calculated with the least squares method superposed.
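A minimal sketch of the transposition from task times to learning rates, and of the Table 4-10 style summary; the five task times below are illustrative placeholders, not an actual subject's data.

```python
import numpy as np

def learning_rates(task_times):
    """LR_N = (T_(N-1) - T_N) / T_(N-1) for consecutive sessions."""
    t = np.asarray(task_times, dtype=float)
    return (t[:-1] - t[1:]) / t[:-1]

# Illustrative average task times for one subject over Sessions 1-5 (ms);
# the real per-subject values are in Table A-1 of the appendix.
print(learning_rates([6400.0, 4500.0, 4100.0, 3900.0, 3700.0]))
# -> approx. [0.2969 0.0889 0.0488 0.0513]  (LR1-2, LR2-3, LR3-4, LR4-5)

# Given a 30 x 5 array `times` of per-subject session times, the Table 4-10
# summary would be:
#   rates = np.apply_along_axis(learning_rates, 1, times)   # 30 x 4
#   rates.min(0), rates.max(0), rates.mean(0), rates.std(0, ddof=1)
```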


Figure 4-3. Mean Learning Rate Scatter Plot (logarithmic trend line: y = -0.1757·ln(x) + 0.2485, R² = 0.8518)

It can be seen in Figure 4-3 that when the session number exceeds 4, the logarithmic learning rate trend line could cross the X-axis and the learning rate could become negative (solving 0.2485 − 0.1757·ln(N) = 0 gives N ≈ 4.1). In other words, as session numbers increase, the learning rate may approach zero and transition to the reverse direction (the Nth session requiring more time to complete than the (N−1)th session). Therefore, it can be inferred that the additional improvement may not be remarkable when the number of training sessions exceeds 4.

Long Term Effect of Icon Training

To obtain a general glimpse of the long-term effect of the learning curve in the icon training process, three subjects were solicited, each of whom completed twenty icon-training game sessions (300 icon visual search tasks in total). Among the three subjects, one was a superintendent for a regional homebuilder, one was a professional engineer working for the state transportation department, and the third was a foreman


for a local framing contractor. These three subjects were not part of the sample for the platform difference study.

Table A-3 in Appendix A shows the session errors, average task time, average instruction reading time, and average search time for each session that Subject 1 (the homebuilder's superintendent) achieved in the 20 icon training game sessions. Each icon training game session included 15 icon visual search tasks, so each subject completed 300 icon visual search tasks. Of the 15 time observations for each icon training session, the 2 shortest and the 2 longest were excluded from the final calculation, and the averages were computed using the remaining 11 sets of data.

Figure 4-4 shows the scatter plot of Subject 1's average task time, average instruction reading time, and average search time. Figure 4-5 shows the scatter plot of Subject 1's task errors.

Figure 4-4. Average instruction reading time, search time, and task time (Subject 1)


Figure 4-5. Task Errors (Subject 1)

Similarly, the data for the icon training games for Subject 2 and Subject 3 are shown in Table A-4 and Table A-5. Scatter plots of these data are shown in Figures 4-6, 4-7, 4-8, and 4-9.

Figure 4-6. Average instruction reading time, search time, and task time (Subject 2)


Figure 4-7. Task Errors (Subject 2)

Figure 4-8. Average instruction reading time, search time, and task time (Subject 3)


Figure 4-9. Task Errors (Subject 3)

From Figures 4-4, 4-6, and 4-8, it can be seen that the average task time and the average search time have a general decreasing trend over the 20 sessions, but they also have localized cycles that vary from 3 to 7 sessions. Within each of these cycles, the average task time or the average search time first decreases, then gradually transitions to increase at the end of the cycle, and then decreases again when the next cycle starts. Although three groups of data are not adequate to provide a true representation of the long-term effect of the icon training process, it is possible that a concentrated repetitive training program may not produce a consistent rate of improvement. This may be due to the decaying factors of fatigue and loss of interest over time. Therefore, the learning rate function previously discussed, LR_N = f(N), could be further refined to include a decay constant k_D as LR_N = f(N, k_D). Since the regression of the learning rate function is not the focus of this study, it will not be discussed in depth here.

It can also be seen from the aforementioned scatter plots that the repetitive training over the 20 sessions provided little or no improvement on the instruction reading time.


This might be explained by the fact that the recognition/comprehension of textual information is an acquired knowledge/skill for each of the test subjects; this skill has undergone many learning sessions over the years and has reached its learning curve limit. The number of errors per session for each of the three test subjects was not significant and did not appear to exhibit any patterns. This may be explained by the fact that all three subjects had considerable construction experience and were familiar with the construction operations/activities represented by the icons.

Number of Training Sessions Required for the Final Study

One of the important questions in the pilot study phase was to determine roughly how many training sessions would be required for a subject to adequately master the icon-text description correspondence. This information was needed to ensure the total time allotted for each study subject was realistically attainable. The learning rate data derived in Table 4-10 were used for this analysis. The basic principle used to determine the required number of training sessions was to select the number at which the gains in the learning rate slowed down and stabilized. More specifically, the criterion was set for when the learning rate dropped below 10%. As shown in Table 4-10, the mean learning rates in Sessions 2, 3, 4, and 5 were 0.2786, 0.0752, 0.0341, and 0.0474. Most of the learning gains took place in Session 2, and the learning rate dropped below 10% as soon as the training sessions reached the third round. Therefore, three rounds of training sessions would very likely be adequate for an average subject to master the icons.
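A small sketch of that stopping rule, using the mean learning rates from Table 4-10; the 10% criterion and session indexing follow the text.

```python
# Mean learning rates from Table 4-10 (gain of Session N over Session N-1).
mean_rates = {2: 0.2786, 3: 0.0752, 4: 0.0341, 5: 0.0474}

# Stopping rule from the text: training is adequate at the first session
# whose learning rate falls below 10%.
required = next(n for n, rate in sorted(mean_rates.items()) if rate < 0.10)
print(f"Learning rate first drops below 10% at Session {required}")  # Session 3
```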


Establishing Training Session Time Baseline

Another objective of the pilot study was to obtain a general idea of the upper threshold limit of the session time that 90% of the population would be able to achieve within three training sessions. The session time data from the platform study samples were used for this purpose and are shown in Table A-6 in Appendix A. As previously discussed, the platform difference did not appear to have caused any difference in the means and variances of the data collected on the Fujitsu and Non-Fujitsu platforms. Therefore, the session time data collected on both platforms were joined, and the resulting descriptive statistics are shown in Table 4-11.

Table 4-11. Session Time Data Statistics

      N    Minimum   Maximum   Mean         Std. Deviation
ST1   30   47,615    246,765   123,572.23   44,766.980
ST2   30   40,244    221,020   83,512.17    33,879.883
ST3   30   35,340    100,757   65,806.63    15,519.586
ST4   30   32,029    113,553   62,626.97    17,618.954
ST5   30   32,451    94,376    60,107.37    15,349.895

Based on the assumption that the session time data from the above sample (N=30) conform to a normal distribution, the upper threshold limit with a 90% confidence coefficient for the session time in Session 3 would be 65,807 + 1.645 × 15,520 = 91,337 ms. Twenty-nine of the 30 (96.7%) sampled subjects completed Session 3 in less than 91,337 ms. This upper threshold limit was used as a baseline in the icon visual search game to prompt a faster visual search when a subject's session time was lower than this value. However, it should be noted that this value was not an absolute criterion for determining whether a subject had adequately mastered the icons, because some individuals are by nature "slow performers" and yet achieve great accuracy in their visual search tasks.
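A sketch of the baseline computation under the stated normality assumption; scipy's normal quantile is used in place of the tabulated 1.645 (the z value leaving 5% in the upper tail, i.e., the upper bound of a two-sided 90% interval).

```python
from scipy.stats import norm

# Session 3 statistics from Table 4-11 (milliseconds).
mean_st3, sd_st3 = 65807, 15520

# z value with 5% in the upper tail, matching the 1.645 used in the text.
z = norm.ppf(0.95)                    # ~1.6449

upper_threshold = mean_st3 + z * sd_st3
print(f"{upper_threshold:,.0f} ms")   # ~91,336; the text's 91,337 rounds z to 1.645
```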


Lessons Learned During the Pilot Study

Two main issues were identified during the pilot study, and the solutions to these issues were incorporated into the final study design.

Experimental environment

Although the experimental apparatus (a Fujitsu Stylistic 3400 pen tablet PC) used in this study was equipped with an indoor/outdoor viewable screen, it was still difficult to see in the outdoor environment under some circumstances. Coupled with other distracting elements on construction sites (noise from equipment and workers, and interruptions from radio contact), the test results can be very unreliable. With one exception, the data obtained in the pilot study were all collected indoors. Study participants need to give their undivided attention during the test. The test would be best administered in an indoor environment, where screen readability and noise/distractions have a minimal effect on the study results.

Verbal instructions

During the pilot study, it was also observed that if word-for-word verbal instructions were given, the time for a subject to find the correct target on the text interface tended to be significantly less, because subjects tended to search for the target by an exact word-matching strategy instead of first processing the instruction and then finding the object that matched its meaning. In addition, other factors associated with the mode of verbal instruction (e.g., the test administrator's accent, tone of voice, etc.) could be problematic. Therefore, verbal instructions were eliminated from the final study.


CHAPTER 5
RESULTS AND DISCUSSIONS

This chapter presents the results of the study and discusses the statistical analysis of the findings. The research hypotheses stated in Chapter 3 are also tested here.

Sample Demographics

Demographic information for each sample group was collected through a questionnaire survey. This included age, years of construction experience, education level (or academic status for the student sample), craft of foreman, crew size (not applicable to the construction professionals and student samples), and occupation.

Age

Thirty-four foremen provided information on their age. The mean age of the foremen sample was 40.8 years and the median age was 40.5 years. This appears to be consistent with the findings of Elliott's study (2000), which reported a foreman median age of 40.0 years. Thirty-three of the 37 construction professionals gave their age; the mean age in this sample was 38.2 years, with a median age of 34.0 years. Twenty-seven of the 28 students in the third sample responded to the age question; the mean age of the students was 28.4 years, with a median age of 27.0 years (see Table 5-1).

Table 5-1. Mean and Median Ages of the Foremen, Construction Professionals, and Students

                                           Mean Age (Years)   Median Age (Years)
Foremen Sample (N=34)                      40.8               40.5
Construction Professionals Sample (N=33)   38.2               34.0
Student Sample (N=27)                      28.4               27.0


The ages of the foremen and the construction professionals appear to be somewhat evenly distributed across the major working age categories, while the age distribution of the students was concentrated in the 20 to 30 years age group (see Figure 5-1). Over 82% of the foremen were over 30 years old, and more than half (52.94%) were over 40 years old. Over 66% of the construction professionals were over 30 years old. By comparison, the majority (70.4%) of the students were between 20 and 30 years old.

Figure 5-1. Age Group Distributions of the Research Subjects

Education

The education levels of the three samples are in sharp contrast. Most (73.5%) of the foremen surveyed in this study had no formal education beyond high school. This was comparable to 64.3% in Borcherding's study in 1977 and 61.3% in Elliott's study at the University of Florida in 2000. The majority (75.8%) of the construction professionals had a college level of education, with three having attended graduate school. By contrast, the students had the most advanced education, with 92.9% enrolled in graduate school (see Figure 5-2).


Figure 5-2. Education Levels of the Research Subjects

Construction Experience

The mean construction experience of the foremen was 19.6 years with a median of 20 years. This was almost 10 years more than the mean construction experience (9.5 years) reported in Elliott's study (2000). The mean construction experience of the construction professionals was 15.1 years with a median of 9.5 years. By comparison, the mean construction experience of the students was 1.7 years with a median of 0.8 years (see Table 5-2).

Table 5-2. Mean and Median Construction Experience Durations of the Foremen, Construction Professionals, and Students

                             Mean (Years)   Median (Years)
Foremen                      19.6           20.0
Construction Professionals   15.1           9.5
Students                     1.7            0.8

The construction experience duration group distributions varied greatly between the foremen, the construction professionals, and the students. The foremen had the most construction experience, with 85.3% of the respondents having over 10 years of construction experience. Over half (55.9%) of the foremen had over 20 years of


construction experience. The extent of construction experience among the construction professionals was somewhat mixed and did not appear to have any discernible pattern. Project superintendents tended to have more years of experience when compared to others, such as CAD technicians, whose jobs tended to be more limited to an office environment. By comparison, the student sample had the least construction experience, with over half (60.7%) having one year or less of construction experience (see Figure 5-3).

Figure 5-3. Construction Experience of the Research Subjects

Foreman's Crew Size

Thirty of the 35 foremen provided information on their typical crew size. The mean crew size that the foremen supervised was 13, with a median of 5.5. This was comparable to the mean crew size (14.8) reported in Elliott's study. The mean might be higher than typical because of the high percentage of general foremen included in the sample; when general foremen are excluded from the analysis, the mean crew size was 5.8. The majority of the foremen (21 out of 30, or 70%) supervised a crew of 10


or fewer workers (see Figure 5-4). Seven of the 8 general foremen supervised more than 20 workers.

Figure 5-4. Crew Sizes of Foremen

Foremen Categorizations

Of the 35 foremen, 7 (20.6%) were earthwork foremen, 14 (41.2%) were underground utility foremen, 2 (5.9%) were paving foremen, 8 (23.5%) were general foremen, and 3 (8.8%) were "other" types of foremen (e.g., surveying, erosion and sediment control, and concrete construction) (see Figure 5-5).


Figure 5-5. Foremen Specializations

Occupations of the Construction Professionals

The subjects in the construction professionals sample included 12 project managers, 8 civil engineers, 5 superintendents, 4 project engineers, 4 CAD technicians, 2 construction consultants, 1 construction estimator, and 1 construction inspector (see Figure 5-6).

Figure 5-6. Occupations of the Construction Professionals


Student Status

The majority (92.9%, or 26 of the 28) of the students were graduate students; the other two were undergraduate students. The majority (26 out of 28, or 92.9%) of the students were majoring in building construction. While participation was voluntary, the students who were asked to participate were enrolled in classes conducted in the M. E. Rinker, Sr. School of Building Construction at the University of Florida.

Computer Experience

Subjects were also asked if they used a computer at work and at home. Of the 35 foremen, 2 (5.7%) used computers both at work and at home, 15 (42.9%) used computers only at home, and 18 (51.4%) did not use computers at all. Of the two foremen who used computers at work, one was a general foreman and the other was an underground utility foreman with responsibilities for preparing job estimates. Overall, the foremen in this survey did not use computers for work, but several (42.9%) used computers at home (see Figure 5-7). All of the 37 construction professionals and all of the 28 students stated that they used computers at work/school and at home (see Figure 5-7).


Figure 5-7. Computer Use Experience of Foremen

Based on the demographic information obtained from the survey, the following characteristics can be generalized for the three samples:

- Subjects in the foremen sample were generally in the older age group (>30 years old). A high percentage of the sample did not have education beyond the high school level. The majority of the sample had extensive construction experience (>20 years). Most subjects in the foremen sample can be considered novice or beginner-level computer users.

- Subjects in the construction professionals sample were more evenly mixed in age, and most had received a college-level education. The length of construction experience in this group was mixed when compared to the foremen group. The subjects in this sample can generally be considered proficient computer users.

- Subjects in the student sample were primarily in the younger age group (20 to 30 years old) and had achieved a relatively high education level (graduate school). The subjects generally had relatively little construction experience (less than 1 year). The student subjects generally had considerable computer experience and can be considered advanced computer users.

Experiences of the Research Subjects with Touch Sensitive Screen Devices (TSSD's)

Subjects were asked if they had ever used ATM machines, information kiosks, store self-checkout devices, or other common devices equipped with touch-


sensitive screens. Although these TSSD's are in much larger formats than typical mobile computing devices, the basic concepts and interaction mechanisms are the same. Prior experience of foremen with using TSSD's would certainly facilitate the process of training them to use mobile field documentation systems.

When asked if they had experience with ATM machines, information kiosks, self-checkout devices, and similar devices, subjects responded with either "Yes" or "No." If a subject had used a particular device in the past, a score of 1 was given; otherwise, 0 was entered for the subject for that particular device. Each subject's scores on the three common TSSD's were then added up to obtain a total experience score, which provides a good indication of each subject's overall experience with these touch-sensitive devices.

Thirty-four of the 35 foremen, all 37 construction professionals, and 27 of the 28 students responded to the question regarding their experience with TSSD's. Over 85% of the foremen had used ATMs, 38.2% had used information kiosks, and 73.5% had used store self-checkout devices. Of the construction professionals, 91.9% had used ATMs, 78.4% had used information kiosks, and 86.5% had used store self-checkout devices. All students had used ATMs, 59.3% had used information kiosks, and 81.5% had used store self-checkout devices (see Figure 5-8).
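A minimal sketch of the scoring scheme just described; the yes/no answers below are made-up values for one illustrative respondent.

```python
# Each yes/no answer is coded 1/0; the total TSSD experience score is the
# sum (0 to 3). The answers below are illustrative, not a real respondent's.
devices = ("atm", "kiosk", "self_checkout")
respondent = {"atm": True, "kiosk": False, "self_checkout": True}

total_score = sum(int(respondent[d]) for d in devices)
print(total_score)  # 2

# Sample means reported in the text: foremen 1.97, construction
# professionals 2.56, students 2.41 (maximum possible score: 3.0).
```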


Figure 5-8. Subjects' Experience with Common TSSD's

The mean total TSSD scores of the foremen, construction professionals, and students were 1.97, 2.56, and 2.41, respectively (of a possible maximum score of 3.0). The Levene test of homogeneity of variances showed the total TSSD scores of the three samples had equal variances at a significance level of 0.05 (see Table 5-3). A Least Significant Difference (LSD) test was conducted to compare the means of these scores. The foremen sample had the lowest mean TSSD score, and the differences when compared with the construction professionals and students were statistically significant (at a significance level of 0.05). There was no statistically significant difference between the mean TSSD experience scores of the construction professionals and the students (see Table 5-4).

Table 5-3. Levene Test of Homogeneity of Variances on the Total TSSD Scores

Levene Statistic   df1   df2   Sig.
0.414              2     95    0.662


Table 5-4. LSD Test of the Means of the Total TSSD Scores

(I) Category | (J) Category | Mean Difference (I-J) | Std. Error | Sig. | 95% CI Lower | 95% CI Upper
Foremen | Construction Professionals | -0.59698 | 0.19586 | 0.003 | -0.9858 | -0.2082
Foremen | Students | -0.43682 | 0.21252 | 0.043 | -0.8587 | -0.0149
Construction Professionals | Foremen | 0.59698 | 0.19586 | 0.003 | 0.2082 | 0.9858
Construction Professionals | Students | 0.16016 | 0.20867 | 0.445 | -0.2541 | 0.5744
Students | Foremen | 0.43682 | 0.21252 | 0.043 | 0.0149 | 0.8587
Students | Construction Professionals | -0.16016 | 0.20867 | 0.445 | -0.5744 | 0.2541

Experiences of the Research Subjects with Personal Digital Assistants (PDA's)

Participants were asked about their experience with PDA's and whether or not they used PDA's for work or personal purposes. Thirty-four of the 35 foremen responded to these questions, and 11 (32.4%) stated that they had used PDA's (see Figure 5-9). Of the 11 foremen, 5 used PDA's solely for work-related activities, 4 used PDA's solely for personal business, and 2 used PDA's for both work and personal business. The average PDA use time of the foremen was 3.70 hours per week.

Of the 37 construction professionals, 20 (54.1%) stated that they had used PDA's before. Of these 20, 2 (10%) had used PDA's solely for work, 7 (35.0%) had used them solely for personal business, and 11 (55.0%) had used them for both work-related needs and personal business. The average PDA use time of the construction professionals was 4.53 hours per week, with a median of 2.00 hours per week.

Twenty-seven of the 28 students responded to the PDA questions, and 12 (44.4%) stated that they had used PDA's before. Since the use of PDA's for school purposes or


other personal agendas could all be categorized as personal use, the study did not distinguish between the two. The average PDA use time of the student subjects was 6.67 hours per week, and the median was 2.50 hours per week.

The construction professionals had the highest percentage (54.05%) of PDA use experience, while 32.35% of the foremen had PDA use experience. The foremen sample had the lowest PDA user percentage compared with the construction professionals and students. Because the sample sizes in the study were fairly small, the differences in the percentages of PDA users shown may not be statistically significant. The data did suggest that at least some of the foremen had experience with PDA's.

Figure 5-9. Experience of Research Subjects with PDA Devices

Views of the Research Subjects about the Efficiency of the Data Entry Mechanism by Handwriting Recognition

Participants were asked to rate the efficiency of the data input mechanism using the stylus handwriting recognition method. For the subjects who had never used a PDA before, a demonstration showed how alphanumeric information is entered on a mobile


computing device by using a stylus to write on a PDA's touch-sensitive screen. As noted in Chapter 3, the subjects were asked to use a 7-step modified Likert scale (1 being "very inefficient" and 7 being "very efficient") to express their ratings. The lowest rating ("very inefficient") was assigned a value of 1 and the highest rating ("very efficient") was assigned a value of 7.

Foremen and PDA Efficiency

Thirty-four of the 35 foremen responded to the question about PDA efficiency. Of the 11 foremen who had actually used PDA's, 3 rated the stylus writing input method "very inefficient," 2 rated it "inefficient," 2 rated it "slightly inefficient," 3 rated it "efficient," and 1 rated it "very efficient" (see Figure 5-10). The mean of the equivalent numeric ratings by these 11 foremen was 3.45 (between "slightly inefficient" and "no opinion"). For the 23 foremen who had no PDA use experience, the mean numeric rating was 4.96 (close to "slightly efficient"). The Levene test of homogeneity of variances on the ratings from the foremen with PDA experience (N=11) and the foremen without PDA experience (N=23) indicated the variances in these two groups were equivalent at a significance level of 0.05 (see Table 5-5). A one-way ANOVA showed the difference between the mean numeric ratings of these two groups was statistically significant at the 0.05 level (see Table 5-6). Therefore, prior PDA use experience may have played a role in the foremen's opinions about the reduced efficiency of the stylus writing input method.
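A sketch of the two tests with scipy (the study used SPSS); the two rating arrays are illustrative stand-ins for the N=11 and N=23 foremen groups, not the actual survey data, chosen only so the group means roughly match those reported above.

```python
from scipy.stats import levene, f_oneway

# Illustrative 1-7 efficiency ratings; stand-ins, not the study's raw data.
with_pda = [2, 2, 3, 3, 3, 3, 4, 4, 4, 5, 5]                     # N=11, mean ~3.45
without_pda = [3, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5,
               5, 5, 5, 5, 6, 6, 6, 6, 7]                        # N=23, mean ~5.0

# Levene test: are the two group variances equal?
lev_stat, lev_p = levene(with_pda, without_pda)

# One-way ANOVA: do the group means differ?
f_stat, anova_p = f_oneway(with_pda, without_pda)

print(f"Levene p = {lev_p:.3f} (p > 0.05 -> variances treated as equal)")
print(f"ANOVA  p = {anova_p:.4f} (p <= 0.05 -> means differ)")
```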


Figure 5-10. Efficiency Ratings of Foremen on the Stylus Writing Method on PDA Devices

Table 5-5. Levene Test of Homogeneity of Variances on the Ratings of Foremen with and without PDA Experience

Levene Statistic   df1   df2   Sig.
2.120              1     32    0.16

Table 5-6. One-Way ANOVA of the Means of the Numeric Ratings of Foremen with and without PDA Experience

                 Sum of Squares   df   Mean Square   F       Sig.
Between Groups   16.787           1    16.787        4.207   0.05
Within Groups    127.684          32   3.990
Total            144.471          33

Construction Professionals

As previously noted, 20 of the 37 construction professionals had PDA use experience. The mean of the equivalent numeric ratings by these 20 construction professionals was 5.0 ("slightly efficient"). For the 17 construction professionals who had no PDA use experience, the mean of the equivalent numeric ratings was 4.82 (close to "slightly efficient"). The Levene test of homogeneity of variances on the ratings from


the construction professionals with PDA experience (N=20) and those without PDA experience (N=17) indicated the variances in these two groups were not statistically different at a significance level of 0.05 (see Table 5-7). A one-way ANOVA showed the difference between the mean numeric ratings of these two groups was not statistically significant at a significance level of 0.05 (see Table 5-8). Therefore, there was not sufficient evidence to support the assumption that prior PDA use experience made a difference in the opinions of construction professionals on the efficiency of the stylus writing input method.

Figure 5-11. Efficiency Ratings of Construction Professionals on the Stylus Writing Method on PDA Devices

Table 5-7. Levene Test of Homogeneity of Variances on the Ratings of Construction Professionals with and without PDA Experience

Levene Statistic   df1   df2   Sig.
0.087              1     35    0.77


Table 5-8. One-Way ANOVA of the Means of the Numeric Ratings of Construction Professionals with and without PDA Experience

                 Sum of Squares   df   Mean Square   F       Sig.
Between Groups   0.286            1    0.286         0.108   0.74
Within Groups    92.471           35   2.642
Total            92.757           36

Students

As previously discussed, 11 of the 26 students had previously used PDA's. For these 11 students, the mean of the equivalent numeric ratings was 4.0, indicating an almost neutral position on this question. For the 15 students who had no PDA use experience, the mean of the equivalent numeric ratings was 4.67 (between "no opinion" and "slightly efficient"). The Levene test of homogeneity of variances on the ratings of the students with PDA experience (N=11) and those without PDA experience (N=15) indicated the variances in these two groups were not statistically different at the 0.05 level (see Table 5-9). A one-way ANOVA showed the difference between the mean numeric ratings of these two groups was not statistically significant (see Table 5-10). Therefore, there was not sufficient evidence to support the assumption that prior PDA use experience made a difference in the opinions of students on the efficiency of the stylus writing input method.

Table 5-9. Levene Test of Homogeneity of Variances on the Ratings of Students with and without PDA Experience

Levene Statistic   df1   df2   Sig.
0.086              1     24    0.77


Table 5-10. One-Way ANOVA of the Means of the Numeric Ratings of Students with and without PDA Experience

                 Sum of Squares   df   Mean Square   F       Sig.
Between Groups   2.821            1    2.821         0.875   0.36
Within Groups    77.333           24   3.222
Total            80.154           25

Cross-groups

Over 63% of the foremen with PDA use experience rated the stylus writing input method toward the inefficient end of the scale, while only 15% of the construction professionals with PDA use experience shared the same view. The views of the students with PDA use experience were more evenly divided, with 54.5% regarding the stylus writing input method as inefficient and 45.5% rating it as efficient (see Figure 5-12). The foremen with PDA use experience had the lowest mean numeric rating (3.45), followed by the students with PDA use experience (4.0) and the construction professionals with PDA use experience (5.0). The difference between the mean numeric ratings of the foremen and the construction professionals with PDA experience was statistically significant at a significance level of 0.05. The differences of the foremen vs. the students, and of the students vs. the construction professionals, were not statistically significant (see Table 5-11).


Figure 5-12. Stylus Writing Input Method Efficiency Ratings by Foremen, Construction Professionals, and Students Who Had PDA Use Experience

Table 5-11. LSD Test of the Means of Numeric Ratings by Foremen, Construction Professionals, and Students with Prior PDA Use Experience

(I) Category | (J) Category | Mean Difference (I-J) | Std. Error | Sig. | 95% CI Lower | 95% CI Upper
Foremen | Construction Professionals | -1.545 | 0.692 | 0.03 | -2.95 | -0.14
Foremen | Students | -0.545 | 0.787 | 0.49 | -2.14 | 1.05
Construction Professionals | Foremen | 1.545 | 0.692 | 0.03 | 0.14 | 2.95
Construction Professionals | Students | 1.000 | 0.692 | 0.16 | -0.40 | 2.40
Students | Foremen | 0.545 | 0.787 | 0.49 | -1.05 | 2.14
Students | Construction Professionals | -1.000 | 0.692 | 0.16 | -2.40 | 0.40

The data suggest that foremen with prior PDA experience perceived the stylus writing input method as slightly inefficient. The firsthand PDA use experience of the foremen may have played a role in their opinions about how efficient the stylus writing method would be for documenting information in the construction field. Without firsthand experience, the inefficiency associated with the stylus writing input method


might not be obvious to the respondents. The prior PDA use experience, however, did not seem to influence the views of the construction professionals and the students. In addition, computer skills and the time available for PDA use could also be factors influencing the subjects' responses to this question. As previously discussed, foremen are in general novice computer users. The time available in their daily schedule for paperwork and any computer-related tasks is far less than that of the construction professionals and students, who are generally proficient or advanced computer users and whose daily schedules include a significant amount of time allocated to computer activities. Therefore, what constitutes an efficient data input method for construction foremen might not be the same as for construction professionals and students.

The Views of Subjects on the Importance of Quick Data Entry on Mobile Computing Devices

Study participants were asked to rate the importance of being able to input data quickly on mobile computing devices. As noted in Chapter 3, the subjects were asked to use a 5-step modified Likert scale (1 being "not important at all" and 5 being "very important") to express their ratings. Subjects from all three samples rated the importance of quick data entry in the construction field highly. Over 85% of the subjects in all three samples (97.0% of the foremen, 100% of the construction professionals, and 88.5% of the students) rated quick data entry as "fairly important" to "very important" (see Figure 5-13). Therefore, the importance of quick data entry on mobile computing devices in the construction field seemed to be equally recognized by the foremen, construction professionals, and students.


Figure 5-13. The Importance Ratings on Being Able to Enter Information on Mobile Computing Devices Quickly

The Views of Foremen and Construction Professionals about the Standardization of the Field Documentation Content

Subjects were asked if they agreed that the content of the construction foremen's field documentation could be standardized. Their responses were entered on a 7-step modified Likert scale (1 being "strongly disagree" and 7 being "strongly agree"). The students were not surveyed on this question because of their limited construction experience. Over 70% of the foremen and 85% of the construction professionals agreed or strongly agreed that the content of the construction field documentation could be standardized (see Figure 5-14).


Figure 5-14. Responses of Foremen and Construction Professionals on Whether the Content of the Field Documentation Could be Standardized

The Views of Foremen and Construction Professionals about the Percentage of the Field Documentation Content That Could be Standardized

Subjects were also asked how much of the construction foremen's field documentation they thought could be standardized. The student sample was not surveyed on this question for the reason previously discussed. The mean percentages estimated by the foremen and the construction professionals were 75.6% and 80.3%, respectively (see Table 5-12).

Table 5-12. Percentages of the Field Documentation Content that Could be Standardized, As Estimated by Foremen and Construction Professionals

                                     Mean    Median   Minimum   Maximum
Foremen (N=30)                       75.6%   77.5%    40%       100%
Construction Professionals (N=30)    80.3%   80.0%    50%       100%

This finding indicates that the majority of the field documentation by foremen has the potential to be standardized. The high percentage of information that could be standardized also means that the productivity gains on field documentation tasks could be


more noticeable. It also justifies the ongoing research efforts and the costs of developing innovative technologies to automate the field documentation process.

Satisfaction Ratings of the Subjects with the Icon Visual Search Game and Text Visual Search Game

At the end of the visual search computer game, the participants were asked how they liked the icon visual search game (i.e., the icon user interface) and the text visual search game (i.e., the text user interface). Subjects were asked to rate their satisfaction on a 7-step modified Likert scale, with the lowest rating being "not at all" and the highest rating being "liked it very much." As discussed in Chapter 3, the rankings of the satisfaction rating scale were evenly placed on a -1 to +1 scale so the ratings (an ordinal variable) could be converted to an equivalent numeric variable. The rating "Not at all" was assigned a value of -1.0, "Did not like it" -0.67, "Slightly disliked it" -0.33, "No opinion" 0, "Liked it a little" +0.33, "Liked it" +0.67, and "Liked it very much" +1.0.

Over 90% of the foremen provided favorable ratings on the icon visual search game, and about the same percentage provided favorable ratings on the text visual search game (see Figure 5-15). Over 85% of the construction professionals provided favorable ratings on the icon visual search game, and 78.4% provided favorable ratings on the text visual search game (see Figure 5-16). Less than 60% of the students provided favorable ratings on the icon visual search game, and 51.8% provided favorable ratings on the text visual search game (see Figure 5-17).
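A sketch of the ordinal-to-numeric conversion described above; the mapping values are the ones stated in the text, while the sample responses are illustrative.

```python
# Conversion from the 7-step satisfaction scale to the -1..+1 numeric scale
# used in the analysis (values as stated in the text).
SATISFACTION_SCALE = {
    "Not at all": -1.0,
    "Did not like it": -0.67,
    "Slightly disliked it": -0.33,
    "No opinion": 0.0,
    "Liked it a little": +0.33,
    "Liked it": +0.67,
    "Liked it very much": +1.0,
}

# Illustrative responses (not actual survey data):
responses = ["Liked it", "Liked it very much", "No opinion", "Liked it"]
numeric = [SATISFACTION_SCALE[r] for r in responses]
print(sum(numeric) / len(numeric))  # mean equivalent numeric rating
```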


Figure 5-15. Satisfaction Ratings of Foremen on the Icon Visual Search Game and Text Visual Search Game

Figure 5-16. Satisfaction Ratings of Construction Professionals on the Icon Visual Search Game and Text Visual Search Game


Figure 5-17. Satisfaction Ratings of Students on the Icon Visual Search Game and Text Visual Search Game

The mean numeric satisfaction ratings of the foremen on the icon visual search game and the text visual search game were 0.59 and 0.52, respectively. The mean numeric satisfaction ratings of the construction professionals on the icon visual search game and the text visual search game were 0.64 and 0.40, respectively. The mean numeric satisfaction ratings of the students on the icon visual search game and the text visual search game were 0.64 and 0.31, respectively (see Figure 5-18). It can be seen that the mean numeric satisfaction ratings from all three samples on the icon visual search game appear to be higher than those for the text visual search game. The difference between the numeric satisfaction ratings on the icon game and the text game was more noticeable in the construction professionals sample and the student sample than in the foremen sample. Overall, the subjects from all three samples appeared to be satisfied with the icon visual search game while being only slightly satisfied with the text visual search game. This indicates that icons could be a good solution for automating the data input process in the construction field.


Figure 5-18. Subjects' Equivalent Numeric Satisfaction Ratings on the Icon Visual Search Game and Text Visual Search Game (foremen: P>0.05, N.S.S.; construction professionals: P<0.01; students: P>0.05, N.S.S.)

Hypothesis Testing on Subjects' Satisfaction Ratings with the Icon Visual Search Game and Text Visual Search Game

Research Question 3, as stated in Chapter 3, sought to determine if there was a difference between the foremen's satisfaction rating with the icon interface and their satisfaction rating with the text interface. The null hypothesis and the alternative hypothesis were formulated as follows:

H30: There was no difference in subjects' satisfaction ratings for the icon-based user interface and the text-based user interface.

H3a: There was a difference in subjects' satisfaction ratings for the icon-based user interface and the text-based user interface.

A confidence level of 95% was chosen for this hypothesis test (α = 0.05). The meaningful difference, as stated earlier in Chapter 3, was defined as 1/2 step on the numeric satisfaction rating scale, i.e., 0.165. The same hypotheses were also tested on the construction professionals sample and the student sample. The hypotheses were tested


with Wilcoxon matched-pairs signed-rank tests on the subjects' semantic differential ratings and paired difference t-tests on the equivalent numeric ratings, based on the assumption that the populations were normal.

Wilcoxon matched pairs signed rank test

The hypothesis test results for the three samples were not the same. The attained significance levels for the foremen (p=0.05) and the students (p=0.18) were at or greater than the desired significance level (α=0.05); therefore, the null hypothesis H30 cannot be rejected for these samples. The attained significance level for the construction professionals (p=0.01) was less than the desired significance level (α=0.05); therefore, the null hypothesis H30 for the construction professionals can be rejected and the alternative hypothesis H3a can be accepted (see Table 5-13 and Table 5-14).

Table 5-13. Wilcoxon Signed Ranks – Satisfaction Rating Differences between the Icon Visual Search Game and the Text Visual Search Game

Sample (TextGameSatisfaction – IconGameSatisfaction) | N | Mean Rank | Sum of Ranks
Foremen: Negative Ranks(a) | 8 | 5.63 | 45.00
Foremen: Positive Ranks(b) | 2 | 5.00 | 10.00
Foremen: Ties(c) | 24 | |
Foremen: Total | 34 | |
Construction Professionals: Negative Ranks(a) | 15 | 8.37 | 125.50
Construction Professionals: Positive Ranks(b) | 1 | 10.50 | 10.50
Construction Professionals: Ties(c) | 21 | |
Construction Professionals: Total | 37 | |
Students: Negative Ranks(a) | 7 | 6.86 | 48.00
Students: Positive Ranks(b) | 4 | 4.50 | 18.00
Students: Ties(c) | 16 | |
Students: Total | 27 | |
a. TextGameSatisfaction < IconGameSatisfaction
b. TextGameSatisfaction > IconGameSatisfaction
c. TextGameSatisfaction = IconGameSatisfaction


Table 5-14. Wilcoxon Signed Ranks Test Statistics – Satisfaction Rating Differences between the Icon Visual Search Game and the Text Visual Search Game

Sample                       TextGameSatisfaction – IconGameSatisfaction
Foremen                      Z = -1.941(a); Asymp. Sig. (2-tailed) = 0.05
Construction Professionals   Z = -3.038(a); Asymp. Sig. (2-tailed) = 0.01
Students                     Z = -1.348(a); Asymp. Sig. (2-tailed) = 0.18
a. Based on positive ranks.

Paired Difference t-test

The attained significance levels for the foremen (p=0.05) and the students (p=0.24) were at or greater than the desired significance level (α=0.05). In addition, the measured difference means for the foremen sample (0.0691) and the student sample (0.1000) were both less than the meaningful difference stated earlier in Chapter 3 (1/2 step on the numeric satisfaction rating scale, i.e., 0.165). This leads to the acceptance of the null hypothesis H30; i.e., there is no detectable difference in the satisfaction ratings of the foremen and the students for the icon-based and text-based user interfaces. The attained significance level for the construction professionals (p<0.01) was less than the desired significance level (α=0.05), and the measured difference mean (0.2438) was greater than the meaningful difference (0.165); therefore, the null hypothesis H30 can be rejected (see Table 5-15).
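A sketch of the two paired tests with scipy (the study used SPSS); the rating arrays below are illustrative stand-ins on the -1..+1 scale, not the study's data.

```python
from scipy.stats import ttest_rel, wilcoxon

# Illustrative paired numeric satisfaction ratings (icon vs. text game) for
# ten subjects on the -1..+1 scale; stand-ins, not the study's data.
icon = [0.67, 1.00, 0.67, 0.33, 0.67, 0.67, 1.00, 0.33, 0.67, 0.67]
text = [0.67, 0.67, 0.33, 0.33, 0.33, 0.67, 0.67, 0.33, 0.33, 0.67]

# Paired-difference t-test on the equivalent numeric ratings.
t_stat, t_p = ttest_rel(icon, text)

# Wilcoxon matched-pairs signed-rank test on the same pairs; zero
# differences are dropped by default, and scipy may fall back to a normal
# approximation when the nonzero differences tie.
w_stat, w_p = wilcoxon(icon, text)

print(f"paired t-test: t = {t_stat:.3f}, p = {t_p:.3f}")
print(f"Wilcoxon:      W = {w_stat:.1f}, p = {w_p:.3f}")
```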


The results suggest that the foremen and the students liked the icon visual search game and the text visual search game about equally and did not have a strong preference for the iconic interface over the text interface in the visual search games. Construction professionals, on the other hand, had a higher satisfaction rating for the icon interface than for the text interface. However, considering that the interfaces used in the visual search games were intentionally simplified, the same conclusion might not extend to real-world applications, where the user interfaces are much more sophisticated and complicated.

Table 5-15. Paired Samples Differences t-Tests Statistics – Subjects' Satisfaction Ratings with the Icon Visual Search Game and the Text Visual Search Game

Sample | Pair | Mean | Std. Deviation | Std. Error Mean | 95% CI Lower | 95% CI Upper | t | df | Sig. (2-tailed)
Foremen | Icon – Text | 0.0691 | 0.19848 | 0.03404 | -0.00017 | 0.13841 | 2.030 | 33 | 0.05
Construction Professionals | Icon – Text | 0.2438 | 0.44219 | 0.07270 | 0.09635 | 0.39122 | 3.353 | 36 | 0.01
Students | Icon – Text | 0.1000 | 0.43320 | 0.08337 | -0.07137 | 0.27137 | 1.199 | 26 | 0.24

Ranking Order of the Three Usability Factors (Task Time, Task Errors, and Satisfaction Level)

Subjects were asked to rank the importance of the three usability factors (shorter task time, fewer task errors, and higher user satisfaction) individually, using a 1 to 10 scale ("1" being the lowest and "10" being the highest). As the hypothesis test results for the three samples differed, they are discussed separately below.

Ranking Order of the Three Usability Factors by Foremen

Foremen gave an average importance rating of 7.767 to shorter task time, 7.900 to fewer task errors, and 8.200 to higher user satisfaction. Since the foremen's importance ratings on these three usability factors were all very close, paired samples t-tests with a significance level of 0.05 (α = 0.05) were conducted to see if differences existed between the means of the ratings. The attained significance levels (0.54, 0.50, and 1.00, respectively) were all greater than the desired significance level of 0.05 (see Table 5-16). Therefore, the sample data did not provide sufficient evidence to reject the hypothesis that there were no differences between the mean importance ratings for shorter task time,


fewer errors, and higher user satisfaction. It can be stated that foremen ranked all three usability factors with equal importance.

Table 5-16. Paired Samples t-Tests – Importance Ratings of the Foremen on Shorter Task Time, Fewer Task Errors, and Higher User Satisfaction

Pair | Mean | Std. Deviation | Std. Error Mean | 95% CI Lower | 95% CI Upper | t | df | Sig. (2-tailed)
Pair 1: TaskTime – Errors | -0.2759 | 2.3739 | 0.4408 | -1.1789 | 0.6271 | -0.626 | 28 | 0.54
Pair 2: TaskTime – Satisfaction | -0.2759 | 2.1530 | 0.3998 | -1.0948 | 0.5431 | -0.690 | 28 | 0.50
Pair 3: Errors – Satisfaction | 0.0000 | 2.2991 | 0.4269 | -0.8745 | 0.8745 | 0.000 | 28 | 1.00

Ranking Order of the Three Usability Factors by Construction Professionals

The average importance ratings on shorter task time, fewer task errors, and higher user satisfaction by the construction professionals were 8.139, 9.556, and 8.389, respectively. Similarly, to test the hypothesis that there were no differences between these importance ratings, paired samples t-tests with a significance level of 0.05 (α = 0.05) were conducted. The attained significance level (p=0.34) for the paired samples t-test on shorter task time vs. higher user satisfaction was greater than the desired significance level of 0.05; therefore, the sample data did not provide sufficient evidence to reject the hypothesis that there were no differences between these two importance ratings. However, the attained significance levels of the paired samples t-tests on shorter task time vs. fewer errors (p<0.01) and fewer errors vs. higher user satisfaction (p<0.01) were lower than the desired significance level. This indicates that differences exist between the importance ratings for shorter task time and fewer errors, as well as between the importance ratings for fewer errors and higher user satisfaction.


Table 5-17. Paired Samples t-Tests – Importance Ratings of the Construction Professionals on Shorter Task Time, Fewer Task Errors, and Higher User Satisfaction

Pair | Mean | Std. Deviation | Std. Error Mean | 95% CI Lower | 95% CI Upper | t | df | Sig. (2-tailed)
Pair 1: TaskTime – Errors | -1.4167 | 1.4015 | 0.2336 | -1.8909 | -0.9425 | -6.065 | 35 | 0.01
Pair 2: TaskTime – Satisfaction | -0.2500 | 1.5561 | 0.2593 | -0.7765 | 0.2765 | -0.964 | 35 | 0.34
Pair 3: Errors – Satisfaction | 1.1667 | 1.6987 | 0.2831 | 0.5919 | 1.7414 | 4.121 | 35 | 0.01

The mean importance rating difference for Pair 1 (TaskTime – Errors) was -1.4167, which indicates that the importance rating for shorter task time was lower than the importance rating for fewer errors. The mean importance rating difference for Pair 3 (Errors – Satisfaction) was 1.1667, which indicates that the importance rating for higher user satisfaction was lower than the importance rating for fewer errors. Thus, the construction professionals ranked "fewer errors" as the most important usability factor, with "shorter task time" and "higher user satisfaction" being of less but equal importance.

Ranking Order of the Three Usability Factors by Students

Students gave an average importance rating of 7.556 to shorter task time, 8.148 to fewer task errors, and 7.630 to higher user satisfaction. The attained significance levels (p=0.14, 0.89, and 0.36, respectively) for the paired samples t-tests were all greater than the desired significance level of 0.05. Therefore, the sample data did not provide sufficient evidence to reject the hypothesis that there were no differences between the mean importance ratings for shorter task time, fewer errors, and higher user satisfaction. It can be stated that the students ranked all three usability factors with equal importance.


Table 5-18. Paired Samples t-Tests – Students' Importance Ratings on Shorter Task Time, Fewer Task Errors, and Higher User Satisfaction

Pair | Mean | Std. Deviation | Std. Error Mean | 95% CI Lower | 95% CI Upper | t | df | Sig. (2-tailed)
Pair 1: TaskTime – Errors | -0.5926 | 2.0050 | 0.3859 | -1.3857 | 0.2006 | -1.536 | 26 | 0.14
Pair 2: TaskTime – Satisfaction | -0.0741 | 2.6592 | 0.5118 | -1.1260 | 0.9779 | -0.145 | 26 | 0.89
Pair 3: Errors – Satisfaction | 0.5185 | 2.8605 | 0.5505 | -0.6130 | 1.6501 | 0.942 | 26 | 0.36

The Views of Subjects about the Icon-based Field Documentation Systems on Mobile Computing Devices

A sample icon-based construction equipment timesheet application running on Palm OS (described in Chapter 3) was demonstrated to the subjects during the survey. Following the demonstration, the subjects were asked whether they thought icon-based mobile documentation applications like the one shown to them would help construction foremen better fulfill their field documentation responsibilities. Subjects were asked to respond to this question using a 7-step modified Likert scale (1 being "strongly disagree" and 7 being "strongly agree"). The majority of the foremen, construction professionals, and student subjects (84.85%, 86.11%, and 81.4%, respectively) responded favorably that the icon-based field documentation systems would help foremen do their jobs (see Figure 5-19).


Figure 5-19. Views of Subjects About Whether the Icon-Based Field Documentation Systems Would Help Foremen Do Their Jobs (response percentages for Foremen, N=33; Construction Professionals, N=36; Students, N=27)

Responses from the foremen were further analyzed against their computer usage. The LSD (Least Significant Difference) test showed that the differences between the responses of the foremen who did not use a computer, those who used a computer only at home, and those who used computers both at home and at work were not statistically significant at a significance level of 0.05 (see Table 5-19).

Table 5-19. LSD Test Results – Responses of Foremen vs. Their Computer Usage

(I) UseComputer                            (J) UseComputer  Mean Diff. (I-J)  Std. Error  Sig.   95% CI Lower  95% CI Upper
0 (Did not use a computer)                 1                 0.205            0.362       0.576  -0.53          0.95
0 (Did not use a computer)                 2                -0.833            0.742       0.270  -2.35          0.68
1 (Used a computer only at home)           0                -0.205            0.362       0.576  -0.95          0.53
1 (Used a computer only at home)           2                -1.038            0.756       0.180  -2.58          0.51
2 (Used computers at home and at work)     0                 0.833            0.742       0.270  -0.68          2.35
2 (Used computers at home and at work)     1                 1.038            0.756       0.180  -0.51          2.58

Subjects who provided unfavorable assessments to the question were also asked to comment on their answers. Although the unfavorable responses account for a small


percentage of each sample, the information they provided was valuable for understanding the reasoning behind their disapproval of, or their reservations about, the icon system. The feedback is summarized below:

- Potential errors associated with using computers. This includes not only the unintended errors resulting from the foremen's computer skills, but also the errors introduced by incorrectly coding tasks due to the lack of better places to record miscellaneous information and task items not covered by the standard choices.
- The amount of information that can be covered under the standard choices.
- The learning curve and training effort required to use the system proficiently.
- Individual resistance to changes in the way of doing things. This was particularly evident for the foremen who were reaching their 50's and were not willing to change their ways of doing things. These foremen, usually with many years of experience, have been able to perform their jobs well without having to use a computer, so they do not see the need to use one now. This does not necessarily mean that they lack the ability to learn and use the system; it is simply a matter of lacking interest and motivation to make changes.

Readiness of the Foremen to Use Field Documentation Systems on Mobile Computing Devices

Foremen were asked whether, if they were given a mobile field documentation tool like the one demonstrated to them, they would use it. Three (8.6%) foremen responded "No" and 33 (91.4%) responded "Yes". Of the 28 foremen who agreed that icon-based field documentation systems would help them do their jobs, 27 stated that they would use such a system. It is interesting to see that of the 5 foremen who did not think the icon system would help do their jobs, 4 actually responded that they would give the system a try (see Figure 5-20). Perhaps this indicates an increased open-mindedness of the foremen towards new technologies that have the potential to make their jobs easier and more productive. Historically, the construction research world viewed foremen's rejection of computer technologies as the result of a perceived threat to their job security. This factor may be diminishing with the ever-increasing use of computers in their daily lives


through the use of the Internet at home by themselves or family members, or through the use of other electronics with embedded computing technologies such as smart phones.

Figure 5-20. Responses of Foremen on Whether They Would Use a Field Documentation System on Mobile Computing Devices

Visual Search Game Results Analyses and Hypotheses Testing

As stated in Chapter 3, one of the main objectives of this study was to find out whether differences existed in usability, in terms of task completion time and user errors, between the icon interface and the text interface. The visual search game kept track of the subjects' screen event information and the system time stamps for these user screen events. The visual search game results provided the raw data for computing the average task time, average task instruction reading time, average task search time, and the number of search errors observed during the icon visual search session and the text visual search session.
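To make the derivation of these measures concrete, the sketch below shows one plausible way to compute them from per-task timestamps. The event-log structure and field names are hypothetical illustrations, not the actual logging format of the visual search game.

    # Hypothetical per-task event log: millisecond timestamps for when the
    # instruction appeared, when the search screen appeared, and when the
    # target was successfully selected. Field names are illustrative only.
    tasks = [
        {"instruction_shown": 0,    "search_started": 1620, "target_selected": 5890},
        {"instruction_shown": 7000, "search_started": 8550, "target_selected": 13100},
    ]

    reading_times = [t["search_started"] - t["instruction_shown"] for t in tasks]
    search_times  = [t["target_selected"] - t["search_started"] for t in tasks]
    task_times    = [t["target_selected"] - t["instruction_shown"] for t in tasks]

    def avg(values):
        return sum(values) / len(values)

    print(f"average task time:    {avg(task_times):.0f} ms")
    print(f"average reading time: {avg(reading_times):.0f} ms")
    print(f"average search time:  {avg(search_times):.0f} ms")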


Average Task Time

The means of the average task time observed in the icon visual search session and the text visual search session for the foremen were 5,952 milliseconds and 7,344 milliseconds, respectively. The corresponding means for the construction professionals were 5,718 milliseconds and 6,762 milliseconds, and for the students 6,485 milliseconds and 7,084 milliseconds. It appears that the average task times observed on the icon interface were generally shorter than the average task times observed on the text interface (see Figure 5-21).

Figure 5-21. Mean Average Task Time Observed on the Icon Interface vs. the Text Interface for Each Sample (Foremen, N=33; Construction Professionals, N=37; Students, N=24)

To test whether there were significant differences between the average task times observed on the icon interface and the text interface, the following null hypothesis and alternative hypothesis were established:

H10: There was no difference in the task completion time for the icon-based user interface and the text-based user interface.
H1a: There was a difference in the task completion time for the icon-based user interface and the text-based user interface.


The results from the paired differences t-test showed the differences were statistically significant for the foremen (p < 0.01) and the construction professionals (p < 0.01). In addition, the means of the paired differences for the foremen (1,391.7 milliseconds) and the construction professionals (1,044.6 milliseconds) were greater than the meaningful difference (1,000 milliseconds) stated in Chapter 3. Therefore, the null hypothesis H10 can be rejected and the alternative hypothesis H1a can be accepted. It can be stated that there was a difference in the average task times between the icon user interface and the text interface for the foremen and the construction professionals. The attained significance level for the students (p = 0.37), on the other hand, was greater than the desired significance level (α = 0.05) (see Table 5-20). This led to the acceptance of the null hypothesis H10, which means the sample data did not provide sufficient evidence to reject the hypothesis that there was no difference in the average task time between the icon interface and the text interface for the students.

Table 5-20. Paired Samples t-Tests – Average Task Time in the Icon User Interface vs. Text User Interface

Sample (Icon Interface – Text Interface)  Mean      Std. Dev.  Std. Error  95% CI Lower  95% CI Upper  t       df  Sig. (2-tailed)
Foremen                                   -1,391.7  1,519.9    264.6       -1,930.6      -852.7        -5.260  32  0.01
Construction Professionals                -1,044.6  1,581.8    260.0       -1,572.0      -517.2        -4.017  36  0.01
Students                                    -599.1  3,223.9    658.1       -1,960.4       762.3        -0.910  23  0.37
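Note that the decision rule for H1 combines statistical significance with the practical threshold from Chapter 3: the null hypothesis is rejected only when p falls below α and the mean paired difference exceeds the 1,000-millisecond meaningful difference. A minimal sketch of this dual criterion follows; the timing arrays are hypothetical placeholders.

    # Dual criterion for H1: statistical significance (p < alpha) plus the
    # 1,000 ms "meaningful difference" threshold. Timing data are hypothetical.
    from scipy import stats

    icon_times = [5200, 6100, 5800, 5400, 6300, 5900]  # ms, per-subject averages
    text_times = [7000, 7400, 6900, 6800, 7600, 7100]

    t_stat, p_value = stats.ttest_rel(icon_times, text_times)
    mean_diff = sum(i - t for i, t in zip(icon_times, text_times)) / len(icon_times)

    alpha, meaningful_ms = 0.05, 1000.0
    reject_h10 = (p_value < alpha) and (abs(mean_diff) > meaningful_ms)
    print(f"mean diff = {mean_diff:.1f} ms, p = {p_value:.4f}, reject H10: {reject_h10}")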


Average Task Instruction Reading Time

The means of the average task instruction reading time observed in the icon visual search session and the text visual search session for the foremen were 1,629 milliseconds and 2,134 milliseconds, respectively. The corresponding means for the construction professionals were 1,509 milliseconds and 1,769 milliseconds, and for the students 1,118 milliseconds and 1,699 milliseconds. It appears that the average task instruction reading times observed on the icon interface were shorter than those observed on the text interface (see Figure 5-22).

Figure 5-22. Mean Average Task Instruction Reading Time Observed during the Icon Visual Search Game vs. the Text Visual Search Game (Foremen, N=33; Construction Professionals, N=37; Students, N=24)

To determine whether the differences between the average task instruction reading times observed on the icon interface and the text interface were significant, the following null hypothesis and alternative hypothesis were established:

H20: There was no difference in the task instruction reading time for the icon-based user interface and the text-based user interface.
H2a: There was a difference in the task instruction reading time for the icon-based user interface and the text-based user interface.


The results from the paired differences t-test showed that the attained significance levels for all three samples (foremen p = 0.01, construction professionals p = 0.01, and students p = 0.01) were lower than the desired significance level (α = 0.05). This led to a rejection of the null hypothesis and an acceptance of the alternative hypothesis that there indeed was a difference in the task instruction reading time between the icon visual search game and the text visual search game. As discussed in Chapter 3, the instructions were given in text mode during the icon visual search session and in icon mode during the text visual search session. Considering that the means of the paired differences (icon interface – text interface) were negative values, it is interesting to see that the subjects actually spent less time reading the textual instructions in the icon user interface than reading the iconic instructions in the text user interface.

Table 5-21. Paired Samples t-Tests – Average Task Instruction Reading Time in the Icon User Interface vs. Text User Interface

Sample (Icon Interface – Text Interface)  Mean    Std. Dev.  Std. Error  95% CI Lower  95% CI Upper  t       df  Sig. (2-tailed)
Foremen                                   -505.6  778.6      135.5       -781.6        -229.5        -3.730  32  0.01
Construction Professionals                -261.3  570.3       93.7       -451.5         -71.2        -2.787  36  0.01
Students                                  -581.3  1,031.4    210.5       -1,016.8      -145.7        -2.761  23  0.01

Average Task Search Time

The means of the average task search time observed in the icon visual search session and the text visual search session for the foremen were 4,189 milliseconds and 4,963 milliseconds, respectively. The corresponding means for the construction professionals were 4,143 milliseconds and 4,808 milliseconds. The means of the average


task search times observed in the icon visual search session and the text visual search session for the students were 5,341 milliseconds and 5,056 milliseconds (see Figure 5-23). It appears that the mean task search times observed in the icon visual search game for the foremen and the construction professionals were less than the corresponding task search times in the text visual search game, while the opposite was true for the students.

Figure 5-23. Mean Average Task Search Time Observed during the Icon Visual Search Game vs. the Text Visual Search Game (Foremen, N=33; Construction Professionals, N=37; Students, N=24)

Similarly, in order to test whether the differences between the average task search times observed on the icon interface and the text interface were statistically significant, the following null hypothesis and alternative hypothesis were established:

H30: There was no difference in the task search time for the icon-based user interface and the text-based user interface.
H3a: There was a difference in the task search time for the icon-based user interface and the text-based user interface.


The results from the paired differences t-test showed that the differences for the foremen (p = 0.01) and the construction professionals (p = 0.01) were statistically significant. This led to a rejection of the null hypothesis and an acceptance of the alternative hypothesis that there indeed was a difference between the task search times in the icon visual search game and the text visual search game for the foremen and the construction professionals. Considering that the means of the paired differences (icon interface – text interface) were negative values, it can be said that the foremen and the construction professionals had shorter visual search times on the icon user interface than on the text user interface. On the other hand, the attained significance level for the students (p = 0.56) showed that the difference was not statistically significant; therefore, the null hypothesis cannot be rejected for the students.

Table 5-22. Paired Samples t-Tests – Average Task Search Time in the Icon User Interface vs. Text User Interface

Sample (Icon Interface – Text Interface)  Mean    Std. Dev.  Std. Error  95% CI Lower  95% CI Upper  t       df  Sig. (2-tailed)
Foremen                                   -773.5  1,452.7    252.9       -1,288.6      -258.3        -3.058  32  0.01
Construction Professionals                -665.6  1,544.7    254.0       -1,180.7      -150.6        -2.621  36  0.01
Students                                   284.7  2,337.5    477.1        -702.3       1,271.7        0.597  23  0.56

Task Errors

The means of the task errors observed during the icon visual search game and the text visual search game for the foremen were 0.58 and 0.85, respectively. For the construction professional sample, the means were 1.86 and 1.81, respectively. The means of the task errors observed during the icon visual search game and the text visual


search game for the students were 5.00 and 3.00 (see Figure 5-24). The foremen made slightly fewer errors in the icon game than in the text game, while the students made more errors in the icon visual search game. The construction professionals appeared to make about the same number of errors in the icon game as in the text game. Overall, the foremen made considerably fewer errors than the construction professionals and the students.

Figure 5-24. Mean Task Errors Observed during the Icon Visual Search Game vs. the Text Visual Search Game (Foremen, N=33; Construction Professionals, N=37; Students, N=24)

In order to test whether the differences in the number of errors observed on the icon interface and the text interface were statistically significant, the following null hypothesis and alternative hypothesis were tested:

H40: There was no difference in the number of task errors for the icon-based user interface and the text-based user interface.
H4a: There was a difference in the number of task errors for the icon-based user interface and the text-based user interface.


The results from the paired differences t-test showed that the differences in the number of errors were not statistically significant (foremen p = 0.24, construction professionals p = 0.85, and students p = 0.27). Therefore, there was insufficient evidence to support the hypothesis that the number of task errors differed between the icon visual search game and the text visual search game.

Table 5-23. Paired Samples t-Tests – Mean Task Errors in the Icon User Interface vs. Text User Interface

Sample (Icon Interface – Text Interface)  Mean   Std. Dev.  Std. Error  95% CI Lower  95% CI Upper  t       df  Sig. (2-tailed)
Foremen                                   -0.27  1.31       0.23        -0.74          0.19         -1.200  32  0.24
Construction Professionals                 0.05  1.68       0.28        -0.51          0.61          0.195  36  0.85
Students                                   1.5   6.5        1.3         -1.3           4.3           1.127  23  0.27

Error Reduction in Training Sessions

Results showed the task errors decreased significantly (p < 0.01) from the first training session to the second training session, while the error reduction from the second training session to the third training session was not remarkable (see Figure 5-25). The numbers of task errors in the first two training sessions were also strongly correlated (p < 0.01) with the construction experience of the participants (see Table 5-24); i.e., the more construction experience a participant had, the fewer errors were likely to occur during the training sessions.


Figure 5-25. Task Errors Observed during the Icon Training Sessions (mean errors per session for Foremen, Construction Professionals, and Students)

Table 5-24. Correlation Between Training Session Errors and Construction Experience

Variable Pair                                             Pearson Correlation  Sig. (2-tailed)  N
Training Session 1 Errors vs. Construction Experience     -0.453               0.01             85
Training Session 2 Errors vs. Construction Experience     -0.298               0.01             85
Training Session 3 Errors vs. Construction Experience     -0.211               0.05             85
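The coefficients in Table 5-24 (and in Tables 5-29 through 5-33 below) are Pearson product-moment correlations. A minimal sketch of computing such a coefficient and its two-tailed significance is shown below; the experience and error values are hypothetical placeholders.

    # Pearson correlation between construction experience and task errors.
    # The paired observations below are hypothetical placeholders.
    from scipy import stats

    experience_years = [25, 20, 15, 38, 5, 31, 12, 6]
    task_errors      = [1, 2, 3, 0, 8, 1, 4, 6]

    r, p_value = stats.pearsonr(experience_years, task_errors)
    # A negative r indicates that more experience goes with fewer errors.
    print(f"r = {r:.3f}, p (2-tailed) = {p_value:.3f}")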


One-Way ANOVA (Analysis of Variance) of Visual Search Game Results and Subject Types

To determine whether the foremen, the construction professionals, and the students performed differently in the visual search game, a one-way ANOVA of the visual search game results was conducted. The results of the Levene homogeneity of variance tests on the observations of task times showed that the task times in all three groups had equal variances. The attained significance levels (p < 0.01) of the Levene homogeneity of variance tests on the task errors showed that the differences in the task error variances between the three groups were statistically significant (see Table 5-25).

Table 5-25. Levene Homogeneity of Variance Tests on the Visual Search Game Results Between Foremen, Construction Professionals, and Students

Game                     Measure                           Levene Statistic  df1  df2  Sig.
Icon Visual Search Game  Average Task Time                  2.591            2    91   0.08
                         Average Instruction Reading Time   2.528            2    91   0.09
                         Average Search Time                2.653            2    91   0.08
                         Task Errors                       20.822            2    91   0.01
Text Visual Search Game  Average Task Time                  1.128            2    91   0.33
                         Average Instruction Reading Time   0.174            2    91   0.84
                         Average Search Time                0.971            2    91   0.38
                         Task Errors                       10.837            2    91   0.01

The one-way ANOVA results showed that the differences in the average task time and the average task instruction reading time between the three samples were not statistically significant (see Table 5-26). This led to an acceptance of the null hypotheses that the foremen, the construction professionals, and the students performed equally on their average task time and average task instruction reading time.
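Because the F test assumes homogeneous variances, the Levene test is run first and the one-way ANOVA is applied only where that assumption holds. A minimal sketch of this sequence follows; the per-group task times are hypothetical placeholders.

    # Levene test for equal variances, then a one-way ANOVA across the three
    # subject groups. All timing values are hypothetical placeholders.
    from scipy import stats

    foremen       = [5900, 6100, 5700, 6000, 5800]  # ms, per-subject averages
    professionals = [5700, 5600, 5900, 5800, 5750]
    students      = [6500, 6400, 6600, 6450, 6550]

    levene_stat, levene_p = stats.levene(foremen, professionals, students)
    if levene_p > 0.05:  # equal-variance assumption not rejected
        f_stat, anova_p = stats.f_oneway(foremen, professionals, students)
        print(f"Levene p = {levene_p:.2f}; ANOVA F = {f_stat:.3f}, p = {anova_p:.3f}")
    else:
        # With unequal variances the F test is not appropriate; a robust
        # post-hoc test such as Tamhane's T2 is used instead.
        print(f"Levene p = {levene_p:.2f}: unequal variances; F test not appropriate")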


The attained significance level (p = 0.03) of the one-way ANOVA F test showed that the differences in the average task search times in the icon visual search game among the foremen, the construction professionals, and the students were statistically significant. Therefore, the null hypothesis that the foremen, the construction professionals, and the students performed equally on the icon visual search time can be rejected. The one-way ANOVA post-hoc LSD test showed that the differences between the foremen/construction professionals and the students were statistically significant. The difference between the foremen and the construction professionals was not statistically significant (see Table 5-27).

Table 5-26. One-way ANOVA of the Average Task Time, Average Task Instruction Reading Time, and Average Task Search Time – Subject Type as Factor Levels

Icon Visual Search Game                         Sum of Squares   df  Mean Square     F      Sig.
  Average Task Time – Between Groups              8,665,548.482   2   4,332,774.241  1.207  0.30
  Average Task Time – Within Groups             326,644,841.997  91   3,589,503.758
  Average Task Time – Total                     335,310,390.479  93
  Instruction Reading Time – Between Groups       3,828,279.157   2   1,914,139.578  2.721  0.07
  Instruction Reading Time – Within Groups       64,012,323.748  91     703,432.129
  Instruction Reading Time – Total               67,840,602.904  93
  Task Search Time – Between Groups              24,761,133.574   2  12,380,566.787  3.792  0.03
  Task Search Time – Within Groups              297,085,150.980  91   3,264,671.989
  Task Search Time – Total                      321,846,284.553  93
Text Visual Search Game
  Average Task Time – Between Groups              5,933,930.329   2   2,966,965.164  0.522  0.59
  Average Task Time – Within Groups             516,968,573.884  91   5,680,973.339
  Average Task Time – Total                     522,902,504.213  93
  Instruction Reading Time – Between Groups       3,367,207.347   2   1,683,603.674  1.327  0.27
  Instruction Reading Time – Within Groups      115,470,659.462  91   1,268,908.346
  Instruction Reading Time – Total              118,837,866.809  93
  Task Search Time – Between Groups                 962,704.004   2     481,352.002  0.110  0.89
  Task Search Time – Within Groups              396,794,469.410  91   4,360,378.785
  Task Search Time – Total                      397,757,173.415  93


Table 5-27. Post-Hoc LSD Test Results on Task Search Time in the Icon Visual Search Game

(I) Category                (J) Category                Mean Diff. (I-J)  Std. Error  Sig.  95% CI Lower  95% CI Upper
Foremen                     Construction Professionals      46.496        432.624     0.92    -812.86       905.85
Foremen                     Students                    -1,151.576        484.724     0.02  -2,114.42      -188.73
Construction Professionals  Foremen                        -46.496        432.624     0.92    -905.85       812.86
Construction Professionals  Students                    -1,198.072        473.563     0.01  -2,138.75      -257.40
Students                    Foremen                      1,151.576        484.724     0.02     188.73     2,114.42
Students                    Construction Professionals  1,198.072         473.563     0.01     257.40     2,138.75

As indicated by the Levene homogeneity of variance test results, the equal-variance assumption was not appropriate for the task errors in the icon visual search game and the text visual search game, and therefore the F test could not be used. A Tamhane's T2 test was conducted for these variables. The differences between the task errors of the foremen and the task errors of the construction professionals and the students were statistically significant (p < 0.05). The mean task errors of the foremen were lower than those of the construction professionals and significantly lower than those of the students in both user interfaces. The difference in the task errors between the construction professionals and the students was not statistically significant (see Table 5-28).


Table 5-28. Tamhane's T2 Test on the Task Errors in the Icon Visual Search Game and the Text Visual Search Game – Subject Type as Factor Levels

Dependent Variable        (I) Category                (J) Category                Mean Diff. (I-J)  Std. Error  Sig.  95% CI Lower  95% CI Upper
Task Errors in Icon Game  Foremen                     Construction Professionals  -1.289            0.487       0.03  -2.49         -0.09
                          Foremen                     Students                    -4.341            1.301       0.01  -7.68         -1.00
                          Construction Professionals  Foremen                      1.289            0.487       0.03   0.09          2.49
                          Construction Professionals  Students                    -3.052            1.359       0.09  -6.50          0.40
                          Students                    Foremen                      4.341            1.301       0.01   1.00          7.68
                          Students                    Construction Professionals   3.052            1.359       0.09  -0.40          6.50
Task Errors in Text Game  Foremen                     Construction Professionals  -0.962            0.383       0.05  -1.91         -0.01
                          Foremen                     Students                    -2.568            1.039       0.06  -5.24          0.10
                          Construction Professionals  Foremen                      0.962            0.383       0.05   0.01          1.91
                          Construction Professionals  Students                    -1.606            1.090       0.39  -4.37          1.16
                          Students                    Foremen                      2.568            1.039       0.06  -0.10          5.24
                          Students                    Construction Professionals   1.606            1.090       0.39  -1.16          4.37

Correlation Analysis between Construction Experience and the Average Icon Search Time

The one-way ANOVA results in Table 5-26 and the post-hoc LSD test results in Table 5-27 showed that the means of the average icon search times of the foremen and the construction professionals were shorter than those of the students. As previously discussed, the mean construction experience of the foremen and the construction professionals (19.5 years and 15.1 years, respectively) was much greater than that of the students (1.7 years). The Pearson correlation analysis on the icon search time and construction experience, however, did not suggest that a linear correlation existed between these two variables (see Table 5-29). A nonlinear association might exist between these two variables, but this issue is beyond the scope of this study.


Table 5-29. Correlation Analysis on Construction Experience and Icon Search Time

Variable Pair                                           Pearson Correlation  Sig. (2-tailed)  N
Construction Experience vs. Icon Average Search Time    0.036                0.75             86

Correlation Analysis between Construction Experience and the Task Errors

The Tamhane's T2 test results on task errors in Table 5-28 suggested that the mean of the task errors by the foremen differed from those of the construction professionals and the students. The Pearson bivariate correlation analyses on the task errors and construction experience showed a significant and fairly strong negative correlation between construction experience and task errors in both the icon visual search game and the text visual search game (see Table 5-30 and Table 5-31). In other words, the more construction experience a subject had, the fewer visual search errors were likely to be made.

Table 5-30. Correlation Analysis between Construction Experience and Icon Search Errors

Variable Pair                                     Pearson Correlation  Sig. (2-tailed)  N
Construction Experience vs. Icon Search Errors    -0.287               0.01             86


Table 5-31. Correlation Analysis between Construction Experience and Text Search Errors

Variable Pair                                     Pearson Correlation  Sig. (2-tailed)  N
Construction Experience vs. Text Search Errors    -0.225               0.04             86

Correlation Analysis of the Average Task Search Time and Task Errors

As discussed in Chapter 3, the task time was defined as the total amount of time used by a subject to successfully locate a target (icon or text) on the screen, and it also included the "penalty" time incurred (pauses and retrials) due to search errors. It is plausible to think there might be an association between the average task search time and task errors. The Pearson correlation analyses on the average icon search time vs. icon search errors and the average text search time vs. text search errors showed strong positive correlations between the average task search times and task errors (see Table 5-32 and Table 5-33). That is, longer task times were generally associated with more task errors.

Table 5-32. Correlation Analysis between Icon Search Time and Icon Search Errors

Variable Pair                                       Pearson Correlation  Sig. (2-tailed)  N
Icon Average Search Time vs. Icon Search Errors     0.581                0.01             94
(Cross-products: 401,354.447; covariance: 4,315.639. Icon search time: sum of squares 321,846,284.553, variance 3,460,712.737; icon search errors: sum of squares 1,484.553, variance 15.963.)


Table 5-33. Correlation Analysis between Text Search Time and Text Search Errors

Variable Pair                                       Pearson Correlation  Sig. (2-tailed)  N
Text Average Search Time vs. Text Search Errors     0.364                0.01             94
(Cross-products: 214,087.138; covariance: 2,302.012. Text search time: sum of squares 397,757,173.415, variance 4,276,958.854; text search errors: sum of squares 867.713, variance 9.330.)

One-Way ANOVA (Analysis of Variance) of the Visual Search Task Time of Foremen with Computer Usage as Factor Levels

Results were analyzed to see whether the task times of the foremen differed in relation to their experience with computer usage. The foremen's computer usage was categorized as one of the following: 0 – did not use a computer at all; 1 – used a computer only at home; 2 – used computers both at home and at work. The results of the Levene homogeneity of variance tests on the observations of task times showed that the task times in all three computer usage categories had equal variances (see Table 5-34). The one-way ANOVA results showed there were no statistically significant differences in the task times among the foremen who had different experience with computer usage (see Table 5-35).

Table 5-34. Levene Homogeneity of Variance Tests – Task Time of Foremen with Different Computer Usage

Measure           Levene Statistic  df1  df2  Sig.
IconAverTaskTime  1.192             2    30   0.32
TextAverTaskTime  0.798             2    30   0.46


Table 5-35. One-way ANOVA of the Task Time of Foremen – Computer Usage as Factor Levels

IconAverTaskTime                 Sum of Squares   df  Mean Square    F      Sig.
  Between Groups                   8,178,038.571   2  4,089,019.285  2.181  0.13
  Within Groups                   56,253,038.399  30  1,875,101.280
  Total                           64,431,076.970  32
TextAverTaskTime
  Between Groups                   1,294,661.285   2    647,330.642  0.166  0.85
  Within Groups                  116,769,787.685  30  3,892,326.256
  Total                          118,064,448.970  32


CHAPTER 6
SUMMARY, CONCLUSIONS AND RECOMMENDATIONS

Summary

This study focused on the graphical user interface usability aspect of the computerization of field documentation by construction foremen. Two potential automated data input methods on mobile computing devices were compared through a computer visual search game conducted with 35 foremen, 37 construction professionals, and 28 construction students.

Are Computer Tasks Performed Faster When Using Icons Than When Using Predefined Text Lists?

The study results showed that the foremen performed computer tasks faster on the icon interface than on the text interface. The task time consisted of the time spent reading the search game instructions and the task search time. The results showed that the foremen, on average, completed individual tasks almost 1.5 seconds (1,392 milliseconds) faster on the icon interface than on the text interface; i.e., the text-based task time was about 20% longer than the icon-based task time. The construction professionals also performed the tasks faster (almost 15% faster) with icons than with text. The time difference represents a significant amount of time saving over the life expectancy of a computer application that could potentially be used by construction foremen. Among the student participants, no discernible difference was noted between the text-based and the icon-based tasks.


Are Textual Instructions Processed Faster Than Iconic Instructions?

It is interesting to see that the study results showed the foremen and the other subjects (construction professionals and students) read textual instructions faster than iconic instructions. This indicates that text has an advantage over icons in the initial information processing stage. Icons, when used as the direct instructions for computer tasks, can be problematic, as the multiple meanings associated with them can make it ambiguous for readers to decipher the design intention; therefore, icons can take more time to process or evaluate. A user may have to go through several information-processing cycles to decide on the intent of iconic instructions, while textual instructions may take only one information-processing cycle. Note that the textual instructions were associated with the icon search tasks and the iconic instructions were associated with the text search tasks.

Are Icons Located Faster Than Text?

When looking at the time used for the actual search portion of a computer task, excluding the time used for reading instructions, icons possess an advantage over text. The study results showed that the foremen and the construction professionals used less time to locate icons than to locate text. The results show that recognizable figures (icons) are identified faster than pure text. The search times for the students showed no discernible advantage of icons over text, possibly because the students (with limited experience) were not as familiar with the large construction equipment depicted in the icons.

Errors with Icons Versus Errors with Pre-defined Text Lists

Study results showed that the number of errors the participants (foremen, construction professionals, and students) made on the icon interface was not different from the number of errors they made on the text interface. Overall, it was noted that the foremen made


relatively fewer errors. In fact, errors were found to be fewer in number as construction experience increased. Results also showed that as the training sessions progressed, the task errors decreased.

Preferences for Pre-defined Text Lists Versus Icons

Results showed that the construction professionals had high satisfaction ratings with the icon interface, but the foremen and the students expressed no clear preference for either icons or predefined text lists.

Ranking Order of the Three Usability Factors

The foremen and the students viewed the three usability factors (shorter task time, fewer errors, and higher user satisfaction) as equally important. The construction professionals, on the other hand, ranked "fewer errors" as the most important usability factor, with "shorter task time" and "higher user satisfaction" being of equal but slightly less importance.

Views about Using Icon-Based Mobile Field Documentation Applications

Research participants generally agreed that the icon-based mobile field documentation system (introduced to the participants) would help foremen do their jobs. In fact, most foremen indicated they would use such a system if one were provided to them. The positive feedback, especially from the foremen, suggests that icon-based mobile documentation tools may be readily accepted by construction supervisors.

Views about the Standardization of Information Contained in Field Documentation

The participating foremen and construction professionals (students' views were not sought) generally agreed that most construction field documentation could be standardized. Standardization is associated with rapid data input, and this is generally regarded as a valuable feature. The results indicated that a large amount of the information


can potentially be standardized for quick and easy entry (pick-and-choose action) on mobile computing devices, while information input through keyboard and stylus handwriting (standard input) can be cumbersome.

Experience of Foremen with Mobile Computing Technologies

The number of PDA users among foremen is relatively small, but it has increased substantially in recent years (based on Elliott's study in 2000). This increase is perhaps attributable to the fact that the use of mobile computing technologies in society has become more prevalent. The use of conventional computer technologies (desktop computers and laptop computers) for documenting field work among foremen has remained low (5.7%) and is lower than the 16.0% reported in the Elliott study. The use of computers by foremen in their homes (48.6%) seemed to remain unchanged from the extent of use (50.4%) reported in the Elliott study. It was noted that the icon task times were not noticeably affected by the participants' experience with the use of computers or other related technologies.

Conclusions

Icons hold great potential to be an efficient and effective data input mechanism for construction foremen in the construction field. The effort required for implementing, training on, and learning the iconic data input system can be minimal. The foremen in the study did not have any previous knowledge of the icons used in the test interfaces, and many had very limited computer experience. Yet these foremen were able to learn to use the icons and the iconic user interface effectively in a relatively short amount of time. They performed computer tasks as fast as the more proficient computer users, such as the construction professionals and the university students, and they made few errors. Given the positive views of the foremen towards the icons and iconic interfaces, the implementation


of icon-based data input systems on mobile computing devices should not encounter much individual resistance from foremen as the end users.

The design of graphical user interfaces for construction foremen must take into consideration the usability factors associated with foremen's characteristics and their working environment in order to provide a truly user-friendly and effective system. In general, based on all study participants, minimizing errors and task time, while maintaining user satisfaction, should all be goals when developing a graphical user interface. Icon-based data input systems on mobile computing devices can make foremen more productive and reduce the errors associated with redundant data input in the information flow process on construction sites. In fact, the foremen made fewer errors on the iconic interface compared to the text interface. Their extensive field experience and knowledge of the graphic nature of construction activities give construction foremen a unique advantage in using icons as an automated data input mechanism.

With the computerization of the field documentation of construction foremen, field information can be easily input and reliably retrieved by the other participants in the construction process for timely and accurate feedback on construction progress. This in turn leads to better decisions on project management and cost control and makes the overall construction process more productive and profitable.

Research Limitations

This study revealed findings that contribute to the improvement of the usability of graphical user interfaces on mobile computing devices designed for construction foremen. Despite this, some research limitations should be noted.


Since this research project was not funded and the interviews were conducted by the researcher, the number of foremen surveyed was limited by the researcher's resources and time. The foremen sample size (N=35) was adequate to test the research hypotheses in the study but in general is a small representation of the general population. The foremen included in this study were sampled from contractors performing sitework construction in Central Florida. Therefore, the results may not be generalized with confidence to other sectors of the construction industry or to other geographical areas.

Recommendations

Too often, when evaluating and implementing two competing alternative approaches, a misconception is that one must be far superior to the other, that one should be used in all scenarios, and that the other should be rejected entirely. However, as found in this study, icons and pre-determined text lists each have their own advantages and disadvantages in different aspects of a visual search task. It is logical in this case for these two data input methods to co-exist in a graphical user interface. The optimal combination in the design of graphical user interfaces for construction foremen would be to use text for the computer task instructions and to use icons as choice items for faster selection.

Although the interfaces used in the study were intentionally simplified and may not match the exact format of real-world applications, the errors in real-world applications are likely to be fewer, because users generally would have more opportunities to practice and get past the learning curve.

As indicated in the study, the foremen performed as well as the more experienced computer users, such as the construction professionals and the university students, in task completion time, made fewer errors in the visual search tasks, and had relatively high


user satisfaction ratings on both the icon interface and the text interface. Thus, the lack of experience of a foreman with the use of computers should not be a factor when considering the implementation and use of such data input technology on construction sites. Most foremen were receptive to the equipment timesheet application that was introduced to them. Therefore, implementing field documentation tools based on mobile computing technologies should not require extensive effort in training and in the actual use of the systems, as previously thought. As mobile computing technologies such as PDA's become more mature and affordable, companies should become more familiar with the technologies that are readily available and start to "computerize" foremen. There is a missing link in the computerized information flow on construction sites.

Future Research Recommendations

Other Sectors of the Construction Industry and Other Geographical Areas

As previously discussed, this study focused on foremen in sitework construction. It is recommended that similar studies be conducted with foremen from other sectors of the construction industry, with representation from a wide distribution of crafts or trades. Studies could also be performed in other geographical areas. With a large percentage of non-English-speaking workers on some construction sites, the scope of the study should be extended to other languages. The advantage of icons in a cross-language system should be examined to see whether the error reduction is significant.

Intelligent Data Validation in the Data Input Process

One of the inherent drawbacks of the pen-and-paper-based data input method is that if a user inadvertently makes an error during the data input process, the error may not be caught until it has been passed on to the next recipients and discovered some time later. By then, data correction may be highly unreliable due to the time that has elapsed. An


intelligent data validation feature is imperative for computerized field documentation tools. For example, when a user enters the daily equipment timesheet information and accidentally enters 77 (hours) instead of 7 (hours), the application should recognize that the data entered are out of the norm and query the user for verification. This applies not only to numeric information but also to textual information, such as descriptions of the work performed that need to be congruent with the construction schedule. Research on establishing such a data validation model is recommended.

Modeling of the Cognitive Activities of the Visual Search Process Through the Use of Eye-Tracking Technologies

Although the modeling of the cognitive activities of the subjects during the visual search sessions was not part of the scope of this study, it would be an interesting research direction and an additional contribution to the understanding of human-computer interaction in interfaces designed for mobile computing devices. Fitts' Law, commonly referred to in the experimental psychology and human-computer interaction research fields, is a model of human movement that predicts the time required to rapidly move from a starting position to a final target area as a function of the distance to the target and the size of the target. Although the visual search game computer program used in this study kept track of the cursor (x,y) locations on the screen at various user events, the (x,y) position records did not always correspond to the focal points of the subjects' eyes. With current eye-tracking technologies becoming more reliable and their setup costs decreasing, it is recommended that research be conducted on the cognitive activities that may differ between the icon user interface and the text user interface during a visual search process.
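For reference, the most common statement of Fitts' Law described above is the Shannon formulation, in which the movement time MT to a target at distance D with width W grows logarithmically with the distance-to-width ratio; the constants a and b are fitted empirically for each device and task.

    % Shannon formulation of Fitts' Law: predicted movement time MT to a
    % target at distance D with width W; a and b are empirically fitted.
    MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)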


APPENDIX A
PILOT STUDY RESULTS DATA

Table A-1. Average Task Time for Each Training Session (Tn Denotes the Average Task Time for the Nth Training Session)

Subject No.  Test Platform  T1 (ms)  T2 (ms)  T3 (ms)  T4 (ms)  T5 (ms)
1            Fujitsu        4224     3236     3232     3156     3035
2            Fujitsu        4711     3667     3272     3261     2791
3            Fujitsu        6167     3989     4411     2809     2795
4            Fujitsu        5311     3581     3177     2772     3245
5            Fujitsu        4415     4077     4093     3442     3191
6            Fujitsu        13224    5466     5186     4479     6185
7            Fujitsu        10834    5109     4583     6402     4721
8            Fujitsu        7977     4514     3617     2801     2815
9            Fujitsu        5122     4486     4030     3700     4671
10           Fujitsu        5776     3105     3170     3484     3246
11           Fujitsu        6724     3761     3323     3333     2739
12           Fujitsu        3371     2443     2930     2283     2307
13           Fujitsu        7545     4103     3567     3692     3044
14           Fujitsu        13510    6971     5315     4749     3895
15           Fujitsu        4773     4011     4053     4819     3503
Mean (Fujitsu)              6912.27  4167.93  3863.93  3678.80  3478.87
Std. Deviation (Fujitsu)    3199.987 1084.059 746.550  1050.892 1012.110
Std. Error Mean (Fujitsu)   826.233  279.903  192.758  271.339  261.326
16           Non-Fujitsu    6307     4308     3966     3859     2760
17           Non-Fujitsu    2333     2472     2679     2239     2100
18           Non-Fujitsu    6242     6932     4634     3986     2858
19           Non-Fujitsu    3658     3276     2729     2719     3083
20           Non-Fujitsu    4993     3402     4084     2898     3717
21           Non-Fujitsu    12174    7430     5098     5483     5174
22           Non-Fujitsu    7626     3591     3601     3163     3383
23           Non-Fujitsu    5834     3801     3223     3713     3523
24           Non-Fujitsu    5716     4687     4541     4531     3257
25           Non-Fujitsu    5575     3491     4138     3594     2884
26           Non-Fujitsu    4728     4793     3643     4341     3967
27           Non-Fujitsu    3802     3955     3229     4500     4011
28           Non-Fujitsu    6590     3570     3776     2689     2953
29           Non-Fujitsu    6177     4003     3476     4411     3725
30           Non-Fujitsu    5994     5702     4679     3997     3294
Mean (Non-Fujitsu)          5849.93  4360.87  3796.80  3694.20  3399.60
Std. Deviation (Non-Fujitsu) 2198.060 1372.435 711.389 899.898  698.636
Std. Error Mean (Non-Fujitsu) 567.537 354.361 183.680  232.353  180.387


Table A-2. Number of Errors for Each Training Session (En Denotes the Number of Errors for the Nth Training Session)

Subject No.  Test Platform  E1     E2     E3     E4     E5
1            Fujitsu        1      0      0      1      1
2            Fujitsu        0      0      1      2      1
3            Fujitsu        7      0      2      1      1
4            Fujitsu        8      1      2      2      1
5            Fujitsu        3      0      1      0      1
6            Fujitsu        14     5      1      1      1
7            Fujitsu        8      5      2      7      2
8            Fujitsu        13     5      5      1      4
9            Fujitsu        12     11     3      3      2
10           Fujitsu        0      1      0      0      0
11           Fujitsu        5      0      0      1      1
12           Fujitsu        7      3      3      1      1
13           Fujitsu        1      2      1      0      0
14           Fujitsu        1      1      2      2      3
15           Fujitsu        4      1      3      3      2
Mean (Fujitsu)              5.60   2.33   1.73   1.67   1.40
Std. Deviation (Fujitsu)    4.763  3.063  1.387  1.759  1.056
Std. Error Mean (Fujitsu)   1.230  0.791  0.358  0.454  0.273
16           Non-Fujitsu    7      6      4      4      2
17           Non-Fujitsu    2      2      1      0      0
18           Non-Fujitsu    4      2      3      2      3
19           Non-Fujitsu    8      0      2      3      2
20           Non-Fujitsu    1      0      2      0      2
21           Non-Fujitsu    9      6      1      2      2
22           Non-Fujitsu    9      3      2      1      2
23           Non-Fujitsu    2      1      2      1      2
24           Non-Fujitsu    2      2      0      0      0
25           Non-Fujitsu    10     9      11     6      1
26           Non-Fujitsu    2      3      2      2      1
27           Non-Fujitsu    4      2      0      1      2
28           Non-Fujitsu    3      0      1      1      0
29           Non-Fujitsu    4      3      4      9      4
30           Non-Fujitsu    12     6      4      1      1
Mean (Non-Fujitsu)          5.27   3.00   2.27   1.87   1.67
Std. Deviation (Non-Fujitsu) 3.555 2.646  1.668  2.264  1.113
Std. Error Mean (Non-Fujitsu) 0.918 0.683 0.431  0.584  0.287


Table A-3. Subject 1 (Homebuilder Superintendent) Icon Training Session Data

Session No.  Session Errors  Average Task Time (ms)  Average Instruction Reading Time (ms)  Average Search Time (ms)
1            1               4,462                   1,593                                  2,425
2            1               3,591                   1,139                                  2,186
3            1               3,148                   1,319                                  1,716
4            0               2,833                   1,428                                  1,223
5            1               3,337                   1,461                                  1,720
6            1               3,045                   1,182                                  1,869
7            0               3,006                   1,376                                  1,684
8            0               2,864                   1,275                                  1,429
9            0               3,095                   1,129                                  1,751
10           0               2,739                   1,187                                  1,429
11           0               2,582                   1,092                                  1,524
12           0               2,807                   1,086                                  1,541
13           0               3,363                   1,748                                  1,515
14           0               2,794                   1,188                                  1,505
15           0               2,582                   957                                    1,561
16           0               2,471                   948                                    1,530
17           0               2,667                   938                                    1,726
18           0               2,747                   1,086                                  1,566
19           0               2,760                   1,095                                  1,628
20           0               2,856                   1,174                                  1,589


Table A-4. Subject 2 (Engineer) Icon Training Session Data

Session No.  Session Errors  Average Task Time (ms)  Average Instruction Reading Time (ms)  Average Search Time (ms)
1            0               5597                    719                                    4850
2            0               3580                    788                                    2717
3            0               3435                    968                                    2236
4            0               3061                    817                                    2280
5            0               2717                    850                                    1891
6            0               2489                    587                                    1745
7            1               2941                    925                                    1946
8            0               3015                    889                                    1923
9            0               2976                    950                                    1953
10           0               2495                    849                                    1583
11           0               2240                    752                                    1473
12           0               2701                    943                                    1599
13           1               3019                    867                                    2145
14           0               2819                    948                                    1858
15           0               2295                    970                                    1202
16           0               2032                    866                                    1205
17           0               2251                    924                                    1396
18           0               2138                    719                                    1426
19           0               2428                    809                                    1585
20           0               2356                    726                                    1776


Table A-5. Subject 3 (Framing Foreman) Icon Training Session Data

Session No.  Session Errors  Average Task Time (ms)  Average Instruction Reading Time (ms)  Average Search Time (ms)
1            0               7096                    1039                                   5704
2            1               4659                    534                                    4053
3            1               3986                    258                                    3560
4            1               3429                    369                                    2893
5            0               3612                    1019                                   2417
6            1               5047                    868                                    3706
7            0               3311                    713                                    2553
8            0               4159                    579                                    3520
9            1               5563                    541                                    4671
10           1               3338                    585                                    2646
11           1               3155                    1101                                   1763
12           3               3283                    1260                                   1729
13           0               2966                    1348                                   1587
14           0               3937                    1011                                   2874
15           1               4288                    1621                                   2505
16           0               4773                    878                                    3769
17           0               3030                    890                                    1942
18           0               3130                    1427                                   1762
19           0               3386                    752                                    2615
20           1               3971                    1077                                   2839


Table A-6. Session Time for Each Training Session (STn Denotes the Session Time for the Nth Training Session)

Subject No.  Test Platform  ST1 (ms)  ST2 (ms)  ST3 (ms)  ST4 (ms)  ST5 (ms)
1            Fujitsu        118248    76920     69561     63202     49515
2            Fujitsu        47615     40244     35340     32029     32451
3            Fujitsu        129979    120551    86206     66933     59352
4            Fujitsu        98642     58969     44688     50188     60157
5            Fujitsu        84610     65978     79898     51905     76825
6            Fujitsu        214750    221020    76240     106390    89910
7            Fujitsu        125634    80293     51435     48545     54966
8            Fujitsu        103000    77345     59750     60202     59812
9            Fujitsu        91775     78845     69853     71094     56634
10           Fujitsu        100443    87395     100757    62093     52415
11           Fujitsu        77999     83954     62714     66182     66854
12           Fujitsu        81961     65246     51085     78365     65496
13           Fujitsu        122773    64826     58787     47581     45077
14           Fujitsu        161403    87163     60598     79349     72286
15           Fujitsu        117035    111659    86956     65485     60279
16           Non-Fujitsu    114785    58154     51694     50973     40086
17           Non-Fujitsu    89388     59876     50483     55560     51084
18           Non-Fujitsu    154722    71784     67286     57542     53788
19           Non-Fujitsu    98171     67938     54108     43703     51113
20           Non-Fujitsu    87355     70802     87206     61538     56901
21           Non-Fujitsu    246765    99112     81688     68068     94376
22           Non-Fujitsu    176724    113894    81497     113553    88938
23           Non-Fujitsu    148003    74728     63221     51894     53286
24           Non-Fujitsu    105592    118774    70391     62741     71953
25           Non-Fujitsu    104080    68678     48930     54529     50592
26           Non-Fujitsu    144218    62289     52286     51033     42892
27           Non-Fujitsu    105923    47989     51354     38746     39557
28           Non-Fujitsu    115556    68849     66916     56812     61879
29           Non-Fujitsu    224693    129466    85924     80276     85062
30           Non-Fujitsu    115325    72624     67347     82298     59685


APPENDIX B
FINAL STUDY RESULT DATA

Table B-1. Foremen Demographics

Subject No.  Age (years)  Education     Construction Experience (years)  Crew Size  Type of Foreman              Company Years in Business
1            44           High School   25                               50         General Foreman              12
2            41           High School   25                               4          Paving Foreman               12
3            38           High School   20                               5          Paving Foreman               12
4            40           High School   20                               1          Earthwork Foreman            20
5            44           High School   25                               5          Earthwork Foreman            20
6            41           High School   15                               4          Underground Utility Foreman  20
7            59           High School   38                               3          Earthwork Foreman            20
8            35           High School   15                               5          Underground Utility Foreman  20
9            44           College       31                               12         Earthwork Foreman            20
10           48           High School   12                               3          Earthwork Foreman            20
11           22           College       5                                5          Earthwork Foreman            20
12           43           High School   .                                .          Underground Utility Foreman  20
13           28           High School   6                                2          Underground Utility Foreman  20
14           33           High School   13                               .          Underground Utility Foreman  20
15           29           College       11                               .          Underground Utility Foreman  20
16           50           High School   20                               2          Other Foreman                20
17           29           High School   5                                2          Other Foreman                20
18           54           High School   35                               20         General Foreman              30
19           32           College       7                                25         General Foreman              30
20           54           College       33                               65         General Foreman              30
21           52           High School   35                               30         General Foreman              35
22           32           College       11.5                             4          Other Foreman                18
23           53           College       26                               .          General Foreman              18
24           62           College       30                               6          Underground Utility Foreman  18
25           50           High School   32                               3          Underground Utility Foreman  18
26           37           High School   22                               18         Underground Utility Foreman  18
27           38           High School   20                               48         General Foreman              23
28           24           College       4                                10         Underground Utility Foreman  23
29           46           High School   21                               30         General Foreman              25
30           32           High School   14                               2          Earthwork Foreman            25
31           36           High School   20                               7          Underground Utility Foreman  22
32           31           High School   13                               7          Underground Utility Foreman  22
33           46           High School   26                               7          Underground Utility Foreman  22
34           39           High School   15                               7          Underground Utility Foreman  22


Table B-2. Construction Professionals Demographics

Subject No.  Age (years)  Education        Construction Experience (years)  Profession
1            30           Graduate School  8                                Consultant
2            32           High School      0                                CAD Technician
3            25           College          2                                Civil Engineer
4            33           College          11                               Civil Engineer
5            36           College          13                               Civil Engineer
6            30           .                1                                Civil Engineer
7            27           College          7                                CAD Technician
8            51           College          .                                CAD Technician
9            21           College          .                                CAD Technician
10           26           College          4                                Civil Engineer
11           .            .                3.5                              Civil Engineer
12           49           College          32                               Project Manager
13           56           College          34                               Civil Engineer
14           34           College          9                                Civil Engineer
15           .            .                .                                Project Manager
16           45           College          13                               Project Manager
17           .            .                .                                Project Manager
18           60           High School      40                               Project Manager
19           41           High School      20                               Project Manager
20           51           College          25                               Project Manager
21           46           College          28                               Consultant
22           49           Graduate School  28                               Project Manager
23           45           College          23                               Project Manager
24           23           Graduate School  0                                Project Engineer
25           25           College          1                                Project Engineer
26           26           College          6                                Project Engineer
27           27           College          4                                Project Engineer
28           45           College          5                                Project Manager
29           43           College          20                               Superintendent
30           74           College          66                               Superintendent
31           31           High School      10                               Inspector
32           25           College          2                                Estimator
33           52           College          30                               Superintendent
34           51           High School      30                               Superintendent
35           .            College          .                                Superintendent
36           27           College          5                                Project Manager
37           25           College          3                                Project Manager


Table B-3. Student Demographics

Subject No.  Age (years)  Academic Status        Major                  Construction Experience (years)
1            26           Graduate Student       Building Construction  1
2            27           Graduate Student       Building Construction  4
3            29           Graduate Student       Building Construction  0.6
4            28           Graduate Student       Building Construction  5
5            24           Graduate Student       Building Construction  0
6            24           Graduate Student       Building Construction  0.8
7            30           Graduate Student       Building Construction  0.1
8            40           Graduate Student       Building Construction  0
9            26           Graduate Student       Building Construction  1
10           .            Graduate Student       Building Construction  0
11           24           Graduate Student       Building Construction  0.9
12           23           Graduate Student       Building Construction  0.4
13           22           Graduate Student       Building Construction  0.3
14           22           Undergraduate Student  Building Construction  0
15           28           Undergraduate Student  Building Construction  0.4
16           30           Graduate Student       Building Construction  0.5
17           40           Graduate Student       Building Construction  1
18           25           Graduate Student       Building Construction  1
19           28           Graduate Student       Building Construction  0
20           35           Graduate Student       Building Construction  5
21           44           Graduate Student       Building Construction  1
22           30           Graduate Student       Building Construction  0
23           24           Graduate Student       Building Construction  7.5
24           24           Graduate Student       Building Construction  0.7
25           34           Graduate Student       Building Construction  10
26           28           Graduate Student       Other                  2
27           25           Graduate Student       Building Construction  0.3
28           27           Graduate Student       Other                  0


Table B-4. Foremen's Experience with Common Touch Sensitive Screen Devices

Subject No.  ATM  Information Kiosks  Store Self Checkout  Total Experience Score
1            1    0                   1                    2
2            1    0                   1                    2
3            1    0                   1                    2
4            0    0                   0                    0
5            0    0                   1                    1
6            1    0                   1                    2
7            1    0                   0                    1
8            1    1                   1                    3
9            1    0                   1                    2
10           1    1                   1                    3
11           1    1                   1                    3
12           1    0                   0                    1
13           1    0                   1                    2
14           1    0                   0                    1
15           0    0                   0                    0
16           1    0                   1                    2
17           1    0                   1                    2
18           1    0                   1                    2
19           1    1                   1                    3
20           1    0                   1                    2
21           1    0                   1                    2
22           1    1                   1                    3
23           0    0                   0                    0
24           1    1                   1                    3
25           1    1                   1                    3
26           1    1                   0                    2
27           1    1                   1                    3
28           1    1                   1                    3
29           1    1                   1                    3
30           1    1                   1                    3
31           1    0                   0                    1
32           0    1                   0                    1
33           1    0                   1                    2
34           1    0                   1                    2
35           .    .                   .                    .
"1" – yes; "0" – no


Table B-5. Construction Professionals' Experience with Common Touch Sensitive Screen Devices

Subject No.  ATM  Information Kiosks  Store Self Checkout  Total Experience Score
1            1    1                   1                    3
2            1    1                   1                    3
3            1    1                   1                    3
4            1    1                   1                    3
5            1    1                   1                    3
6            1    1                   1                    3
7            1    1                   1                    3
8            1    1                   1                    3
9            1    1                   1                    3
10           1    1                   1                    3
11           1    0                   1                    2
12           0    1                   1                    2
13           1    0                   1                    2
14           1    1                   1                    3
15           1    0                   1                    2
16           1    1                   1                    3
17           1    1                   1                    3
18           1    1                   1                    3
19           0    0                   0                    0
20           1    1                   1                    3
21           1    1                   1                    3
22           1    1                   1                    3
23           1    1                   0                    2
24           1    1                   1                    3
25           1    1                   1                    3
26           1    0                   1                    2
27           1    0                   1                    2
28           1    1                   1                    3
29           1    0                   1                    2
30           0    0                   0                    0
31           1    1                   1                    3
32           1    1                   1                    3
33           1    1                   0                    2
34           1    1                   0                    2
35           1    1                   1                    3
36           1    1                   1                    3
37           1    1                   1                    3
"1" – yes; "0" – no


Table B-6. Students' Experience with Common Touch Sensitive Screen Devices

Subject No.  ATM  Information Kiosks  Store Self Checkout  Total Experience Score
1            1    0                   1                    2
2            1    1                   1                    3
3            1    0                   0                    1
4            1    0                   1                    2
5            1    1                   1                    3
6            1    0                   1                    2
7            1    1                   0                    2
8            1    0                   0                    1
9            1    0                   0                    1
10           .    .                   .                    .
11           1    1                   1                    3
12           1    1                   1                    3
13           1    1                   1                    3
14           1    0                   1                    2
15           1    0                   1                    2
16           1    1                   1                    3
17           1    1                   1                    3
18           1    0                   1                    2
19           1    1                   1                    3
20           1    1                   1                    3
21           1    1                   1                    3
22           1    1                   1                    3
23           1    0                   1                    2
24           1    1                   1                    3
25           1    1                   1                    3
26           1    0                   0                    1
27           1    1                   1                    3
28           1    1                   1                    3
"1" – yes; "0" – no


Table B-7. Foremen's experience with PDA's

Foreman Subject No.  Use PDA  Use PDA for Work  Use PDA for Personal Use  Average Weekly PDA Use Time (Hours)
 1                   Yes      Yes               No                        3
 2                   No       No                No                        0
 3                   No       No                No                        0
 4                   No       No                No                        0
 5                   No       No                No                        0
 6                   No       No                No                        0
 7                   No       No                No                        0
 8                   No       No                No                        0
 9                   No       No                No                        0
10                   No       No                No                        0
11                   Yes      No                Yes                       4
12                   No       No                No                        0
13                   No       No                No                        0
14                   Yes      Yes               Yes                       3
15                   No       No                No                        0
16                   No       No                No                        0
17                   Yes      Yes               No                        5
18                   No       No                No                        0
19                   No       No                No                        0
20                   Yes      No                Yes                       5
21                   No       No                No                        0
22                   Yes      Yes               No                        3
23                   No       No                No                        0
24                   No       No                No                        0
25                   Yes      No                Yes                       2
26                   No       No                No                        0
27                   Yes      Yes               No                        4
28                   Yes      Yes               Yes                       7
29                   Yes      Yes               No                        1
30                   Yes      No                Yes                       .
31                   No       No                No                        0
32                   No       No                No                        0
33                   No       No                No                        0
34                   No       No                No                        0


Table B-8. Construction Professionals' experience with PDA's

Construction Professional Subject No.  Use PDA  Use PDA for Work  Use PDA for Personal Use  Average Weekly PDA Use Time (Hours)
 1                                     Yes      Yes               Yes                       4
 2                                     Yes      Yes               Yes                       2
 3                                     Yes      Yes               Yes                       1
 4                                     Yes      Yes               Yes                       .
 5                                     Yes      Yes               Yes                       5
 6                                     No       No                No                        0
 7                                     No       No                No                        0
 8                                     No       No                No                        0
 9                                     Yes      Yes               Yes                       3
10                                     Yes      Yes               Yes                       2
11                                     Yes      No                Yes                       1
12                                     No       No                No                        0
13                                     No       No                No                        0
14                                     Yes      Yes               No                        3
15                                     Yes      Yes               No                        1
16                                     No       No                No                        0
17                                     No       No                No                        0
18                                     No       No                No                        0
19                                     No       No                No                        0
20                                     No       No                No                        0
21                                     No       No                No                        0
22                                     Yes      Yes               Yes                       20
23                                     Yes      No                Yes                       2
24                                     Yes      No                Yes                       2
25                                     Yes      No                Yes                       2
26                                     Yes      No                Yes                       5
27                                     Yes      Yes               Yes                       1
28                                     Yes      Yes               Yes                       15
29                                     No       No                No                        0
30                                     No       No                No                        0
31                                     Yes      No                Yes                       2
32                                     No       No                No                        0
33                                     No       No                No                        0
34                                     No       No                No                        0
35                                     No       No                No                        0
36                                     Yes      No                Yes                       5
37                                     Yes      Yes               Yes                       10


Table B-9. Student Subjects' Experience with PDA's

Student Subject No.  Use PDA  Average Weekly PDA Use Time (Hours)
 1                   No       0
 2                   Yes      3
 3                   No       0
 4                   Yes      21
 5                   Yes      1
 6                   No       0
 7                   No       0
 8                   No       0
 9                   No       0
10                   .        .
11                   Yes      2
12                   No       0
13                   No       0
14                   No       0
15                   Yes      1
16                   Yes      3
17                   Yes      20
18                   No       0
19                   No       0
20                   No       0
21                   Yes      1
22                   No       0
23                   No       0
24                   Yes      2
25                   Yes      8
26                   Yes      16
27                   Yes      2
28                   No       0


Table B-10. Foremen's Ratings of the Efficiency of the Data Entry Mechanism by Stylus Handwriting on Mobile Computing Devices

Foreman Subject No.  Rating on PDA Handwriting Data Input Method  Equivalent Numeric Rating
 1                   Efficient                                    6
 2                   Efficient                                    6
 3                   Inefficient                                  2
 4                   Inefficient                                  2
 5                   Very efficient                               7
 6                   Very efficient                               7
 7                   Very efficient                               7
 8                   No opinion                                   4
 9                   Slightly efficient                           5
10                   Very inefficient                             1
11                   No opinion                                   4
12                   Efficient                                    6
13                   Slightly inefficient                         3
14                   Inefficient                                  2
15                   Very efficient                               7
16                   No opinion                                   4
17                   Efficient                                    6
18                   Very inefficient                             1
19                   Efficient                                    6
20                   Efficient                                    6
21                   Efficient                                    6
22                   Slightly inefficient                         3
23                   Efficient                                    6
24                   Inefficient                                  2
25                   Very inefficient                             1
26                   Efficient                                    6
27                   Efficient                                    6
28                   Very efficient                               7
29                   No opinion                                   4
30                   .                                            .
31                   No opinion                                   4
32                   Efficient                                    6
33                   Inefficient                                  2
34                   Very inefficient                             1
35                   Efficient                                    6
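Tables B-10 through B-12 report each verbal rating alongside its numeric equivalent: the seven response options map onto a 1-7 scale with "No opinion" at the midpoint. The same label-to-number approach underlies the five-point importance scale (Tables B-13 through B-15) and the seven-point agreement scale (Tables B-16 and B-17). Below is a minimal Python sketch of this coding; the dictionary name is illustrative.

    # Mirrors the "Equivalent Numeric Rating" column of Tables B-10 to B-12.
    EFFICIENCY_SCALE = {
        "Very inefficient": 1,
        "Inefficient": 2,
        "Slightly inefficient": 3,
        "No opinion": 4,
        "Slightly efficient": 5,
        "Efficient": 6,
        "Very efficient": 7,
    }

    ratings = ["Efficient", "Efficient", "Inefficient", "Inefficient", "Very efficient"]
    numeric = [EFFICIENCY_SCALE[label] for label in ratings]   # -> [6, 6, 2, 2, 7]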


Table B-11. Construction Professionals' Ratings of the Efficiency of the Data Entry Mechanism by Stylus Handwriting on Mobile Computing Devices

Construction Professional Subject No.  Rating on PDA Handwriting Data Input Method  Equivalent Numeric Rating
 1                                     Inefficient                                  2
 2                                     No opinion                                   4
 3                                     No opinion                                   4
 4                                     Very efficient                               7
 5                                     Efficient                                    6
 6                                     Very inefficient                             1
 7                                     Efficient                                    6
 8                                     Very efficient                               7
 9                                     Slightly efficient                           5
10                                     No opinion                                   4
11                                     Slightly inefficient                         3
12                                     Efficient                                    6
13                                     Efficient                                    6
14                                     No opinion                                   4
15                                     Very efficient                               7
16                                     No opinion                                   4
17                                     No opinion                                   4
18                                     Efficient                                    6
19                                     No opinion                                   4
20                                     Efficient                                    6
21                                     Very inefficient                             1
22                                     Very efficient                               7
23                                     Very efficient                               7
24                                     Efficient                                    6
25                                     Efficient                                    6
26                                     Efficient                                    6
27                                     No opinion                                   4
28                                     Efficient                                    6
29                                     Efficient                                    6
30                                     No opinion                                   4
31                                     Slightly efficient                           5
32                                     Slightly efficient                           5
33                                     Efficient                                    6
34                                     No opinion                                   4
35                                     Efficient                                    6
36                                     Slightly inefficient                         3
37                                     No opinion                                   4


Table B-12. Students' Ratings of the Efficiency of the Data Entry Mechanism by Stylus Handwriting on Mobile Computing Devices

Student Subject No.  Rating on PDA Handwriting Data Input Method  Equivalent Numeric Rating
 1                   Inefficient                                  2
 2                   Slightly inefficient                         3
 3                   Efficient                                    6
 4                   Inefficient                                  2
 5                   Efficient                                    6
 6                   Efficient                                    6
 7                   No opinion                                   4
 8                   No opinion                                   4
 9                   Efficient                                    6
10                   No opinion                                   4
11                   Efficient                                    6
12                   Efficient                                    6
13                   Efficient                                    6
14                   Inefficient                                  2
15                   Inefficient                                  2
16                   Slightly efficient                           5
17                   Very efficient                               7
18                   Efficient                                    6
19                   Slightly inefficient                         3
20                   Efficient                                    6
21                   Slightly inefficient                         3
22                   No opinion                                   4
23                   Very efficient                               7
24                   Slightly inefficient                         3
25                   Inefficient                                  2
26                   Slightly efficient                           5
27                   No opinion                                   4
28                   Inefficient                                  2


Table B-13. Foremen Subjects' Ratings of the Importance of Being Able to Input Data Quickly on Mobile Computing Devices

Foreman Subject No.  Importance Rating     Equivalent Numeric Rating
 1                   Important             4
 2                   Important             4
 3                   Important             4
 4                   Fairly important      3
 5                   Important             4
 6                   Important             4
 7                   Very important        5
 8                   Very important        5
 9                   Very important        5
10                   Fairly important      3
11                   Important             4
12                   Fairly important      3
13                   Of little importance  2
14                   Fairly important      3
15                   Important             4
16                   Important             4
17                   Very important        5
18                   Important             4
19                   Very important        5
20                   Very important        5
21                   Very important        5
22                   Very important        5
23                   Important             4
24                   Important             4
25                   Very important        5
26                   Important             4
27                   Fairly important      3
28                   Important             4
29                   Very important        5
30                   Very important        5
31                   Important             4
32                   Fairly important      3
33                   Important             4
34                   Very important        5


Table B-14. Construction Professionals' Ratings of the Importance of Being Able to Input Data Quickly on Mobile Computing Devices

Construction Professional Subject No.  Importance Rating  Equivalent Numeric Rating
 1                                     Very important     5
 2                                     Fairly important   3
 3                                     Very important     5
 4                                     Very important     5
 5                                     Important          4
 6                                     Very important     5
 7                                     Important          4
 8                                     Important          4
 9                                     Very important     5
10                                     Important          4
11                                     Fairly important   3
12                                     Very important     5
13                                     Important          4
14                                     Fairly important   3
15                                     Very important     5
16                                     Important          4
17                                     Very important     5
18                                     Very important     5
19                                     Fairly important   3
20                                     Important          4
21                                     Very important     5
22                                     Very important     5
23                                     Important          4
24                                     Very important     5
25                                     Very important     5
26                                     Very important     5
27                                     Important          4
28                                     Very important     5
29                                     Very important     5
30                                     Very important     5
31                                     Important          4
32                                     Very important     5
33                                     Important          4
34                                     Important          4
35                                     Very important     5
36                                     Very important     5
37                                     Important          4


Table B-15. Student Subjects' Ratings of the Importance of Being Able to Input Data Quickly on Mobile Computing Devices

Student Subject No.  Importance Rating     Equivalent Numeric Rating
 1                   Important             4
 2                   Very important        5
 3                   Very important        5
 4                   Of little importance  2
 5                   Very important        5
 6                   Fairly important      3
 7                   Important             4
 8                   Important             4
 9                   Very important        5
10                   .                     .
11                   Very important        5
12                   Important             4
13                   Important             4
14                   Very important        5
15                   Not important at all  1
16                   Fairly important      3
17                   Fairly important      3
18                   Important             4
19                   Very important        5
20                   Important             4
21                   Fairly important      3
22                   Very important        5
23                   Very important        5
24                   Important             4
25                   Very important        5
26                   Important             4
27                   .                     .
28                   Of little importance  2


Table B-16. Foremen's View about Whether Most Content of Their Field Documentation Could Be Standardized

Foreman Subject No.  Agreement Rating   Equivalent Numeric Rating
 1                   Slightly agree     5
 2                   Agree              6
 3                   Agree              6
 4                   Agree              6
 5                   Slightly agree     5
 6                   Agree              6
 7                   Strongly agree     7
 8                   Strongly agree     7
 9                   Agree              6
10                   Slightly agree     5
11                   Agree              6
12                   Slightly disagree  3
13                   No opinion         4
14                   Strongly agree     7
15                   Agree              6
16                   Agree              6
17                   Strongly agree     7
18                   No opinion         4
19                   Agree              6
20                   Agree              6
21                   Agree              6
22                   Strongly agree     7
23                   Agree              6
24                   Slightly agree     5
25                   Slightly agree     5
26                   Slightly agree     5
27                   Agree              6
28                   Agree              6
29                   Strongly agree     7
30                   Strongly agree     7
31                   Agree              6
32                   Slightly agree     5
33                   Agree              6
34                   Agree              6


Table B-17. Construction Professionals' View about Whether Most Content of the Construction Foremen's Field Documentation Could Be Standardized

Construction Professional Subject No.  Agreement Rating   Equivalent Numeric Rating
 1                                     Strongly agree     7
 2                                     Slightly agree     5
 3                                     Agree              6
 4                                     Agree              6
 5                                     Strongly agree     7
 6                                     Agree              6
 7                                     Strongly agree     7
 8                                     Agree              6
 9                                     Agree              6
10                                     Strongly agree     7
11                                     Slightly disagree  3
12                                     Strongly agree     7
13                                     Agree              6
14                                     Agree              6
15                                     Agree              6
16                                     Agree              6
17                                     No opinion         4
18                                     Agree              6
19                                     Agree              6
20                                     Agree              6
21                                     Strongly agree     7
22                                     Agree              6
23                                     Agree              6
24                                     Agree              6
25                                     Agree              6
26                                     Agree              6
27                                     Agree              6
28                                     Agree              6
29                                     Strongly agree     7
30                                     Disagree           2
31                                     Slightly agree     5
32                                     Agree              6
33                                     Agree              6
34                                     Agree              6
35                                     Agree              6
36                                     Strongly agree     7
37                                     Strongly agree     7


Table B-18. Foremen's Estimate of the Percentage of the Information in Their Field Documentation That Could Be Standardized

Foreman Subject No.  Percentage That Can Be Standardized (%)
 1                   40
 2                   60
 3                   50
 4                   50
 5                   60
 6                   70
 7                   100
 8                   90
 9                   60
10                   70
11                   90
12                   40
13                   100
14                   80
15                   90
16                   80
17                   100
18                   .
19                   85
20                   60
21                   100
22                   75
23                   80
24                   75
25                   50
26                   70
27                   60
28                   85
29                   100
30                   100
31                   .
32                   .
33                   100
34                   .
35                   .


Table B-19. Construction Professionals' Estimate of the Percentage of the Information in Construction Foremen's Documentation That Could Be Standardized

Construction Professional Subject No.  Percentage That Can Be Standardized (%)
 1                                     90
 2                                     85
 3                                     75
 4                                     65
 5                                     60
 6                                     70
 7                                     80
 8                                     .
 9                                     90
10                                     100
11                                     50
12                                     80
13                                     .
14                                     .
15                                     .
16                                     70
17                                     .
18                                     90
19                                     70
20                                     90
21                                     90
22                                     80
23                                     100
24                                     90
25                                     60
26                                     90
27                                     70
28                                     80
29                                     80
30                                     .
31                                     80
32                                     70
33                                     100
34                                     75
35                                     90
36                                     90
37                                     .


Table B-20. Foremen's Satisfaction Ratings with the Icon Visual Search Game and the Text Visual Search Game

Foreman       Icon Visual Search Game            Text Visual Search Game
Subject No.   Satisfaction Rating     Numeric    Satisfaction Rating     Numeric
 1            Liked it                 0.67      Liked it                 0.67
 2            Liked it                 0.67      Liked it                 0.67
 3            Liked it                 0.67      Liked it                 0.67
 4            Liked it very much       1.00      Liked it very much       1.00
 5            Liked it                 0.67      Liked it                 0.67
 6            Liked it                 0.67      Liked it                 0.67
 7            Liked it very much       1.00      Liked it very much       1.00
 8            Liked it a little        0.33      Liked it a little        0.33
 9            Liked it                 0.67      Liked it                 0.67
10            Liked it a little        0.33      Liked it a little        0.33
11            No opinion               0.00      No opinion               0.00
12            Did not like it         -0.67      Did not like it         -0.67
13            Liked it                 0.67      Liked it                 0.67
14            Liked it a little        0.33      Liked it a little        0.33
15            Liked it                 0.67      Liked it                 0.67
16            Liked it                 0.67      Liked it a little        0.33
17            Liked it                 0.67      Liked it                 0.67
18            Liked it                 0.67      Liked it                 0.67
19            Liked it very much       1.00      Liked it a little        0.33
20            Liked it a little        0.33      Liked it a little        0.33
21            Liked it very much       1.00      Liked it                 0.67
22            Liked it                 0.67      Liked it                 0.67
23            Liked it a little        0.33      Liked it                 0.67
24            Liked it                 0.67      Liked it                 0.67
25            Liked it                 0.67      Liked it                 0.67
26            Liked it                 0.67      Liked it a little        0.33
27            Liked it                 0.67      Liked it a little        0.33
28            Liked it very much       1.00      Liked it                 0.67
29            Liked it very much       1.00      Liked it                 0.67
30            No opinion               0.00      No opinion               0.00
31            Liked it                 0.67      Liked it                 0.67
32            Liked it                 0.67      Liked it a little        0.33
33            Liked it a little        0.33      Liked it a little        0.33
34            Liked it                 0.67      Liked it very much       1.00
Mean                                   0.5894                             0.5203
Median                                 0.6700                             0.6700
Std. Deviation                         0.34021                            0.32165
Minimum                               -0.67                              -0.67
Maximum                                1.00                               1.00
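The numeric equivalents in Tables B-20 through B-22 code the seven-point semantic differential in steps of one third, from -1.00 ("not at all") through 0 ("no opinion") to 1.00 ("liked it very much"); the summary rows then report the mean, median, and standard deviation of those coded values. The Python sketch below shows one way to reproduce that computation; the names are illustrative, and the standard deviation is assumed to be the sample (n-1) form typical of statistical packages.

    import statistics

    # Coding of the semantic differential used in Tables B-20 to B-22.
    SATISFACTION_SCALE = {
        "Not at all": -1.00,
        "Did not like it": -0.67,
        "Slightly disliked it": -0.33,
        "No opinion": 0.00,
        "Liked it a little": 0.33,
        "Liked it": 0.67,
        "Liked it very much": 1.00,
    }

    def summarize(labels):
        """Mean, median, and sample standard deviation of the coded ratings."""
        values = [SATISFACTION_SCALE[label] for label in labels]
        return (statistics.mean(values),
                statistics.median(values),
                statistics.stdev(values))

    print(summarize(["Liked it", "Liked it very much", "No opinion"]))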


Table B-21. Construction Professionals' Satisfaction Ratings with the Icon Visual Search Game and the Text Visual Search Game

Construction            Icon Visual Search Game                  Text Visual Search Game
Professional            Semantic Differential                    Semantic Differential
Subject No.             Rating                 Numeric           Rating                 Numeric
 1                      Liked it very much      1.00             No opinion              0.00
 2                      Liked it very much      1.00             Liked it                0.67
 3                      Liked it                0.67             Liked it                0.67
 4                      Liked it                0.67             Liked it a little       0.33
 5                      Liked it                0.67             Liked it                0.67
 6                      Liked it                0.67             Did not like it        -0.67
 7                      Liked it                0.67             Liked it a little       0.33
 8                      Liked it very much      1.00             Liked it                0.67
 9                      Liked it a little       0.33             Liked it a little       0.33
10                      Liked it a little       0.33             Liked it a little       0.33
11                      Liked it                0.67             Liked it                0.67
12                      Liked it                0.67             Liked it a little       0.33
13                      Liked it                0.67             Liked it a little       0.33
14                      Liked it                0.67             Liked it                0.67
15                      Liked it                0.67             Liked it                0.67
16                      Liked it                0.67             Liked it                0.67
17                      Liked it                0.67             Liked it                0.67
18                      Liked it                0.67             Slightly disliked it   -0.33
19                      Liked it                0.67             Liked it a little       0.33
20                      No opinion              0.00             No opinion              0.00
21                      No opinion              0.00             No opinion              0.00
22                      Liked it                0.67             Liked it                0.67
23                      Liked it                0.67             Liked it                0.67
24                      Liked it                0.67             Liked it                0.67
25                      Liked it a little       0.33             Not at all             -1.00
26                      Liked it                0.67             Liked it                0.67
27                      Liked it                0.67             Liked it                0.67
28                      Liked it                0.67             Liked it                0.67
29                      Liked it                0.67             Liked it a little       0.33
30                      Liked it a little       0.33             Slightly disliked it   -0.33
31                      No opinion              0.00             Liked it                0.67
32                      Liked it very much      1.00             Liked it very much      1.00
33                      Liked it                0.67             Liked it                0.67
34                      Liked it very much      1.00             Slightly disliked it   -0.33
35                      Liked it                0.67             Liked it                0.67
36                      Liked it very much      1.00             Liked it                0.67
37                      Liked it very much      1.00             Liked it very much      1.00
Mean                                            0.6414                                  0.3976
Median                                          0.6700                                  0.6700
Std. Deviation                                  0.26568                                 0.45116
Minimum                                         0.00                                   -1.00
Maximum                                         1.00                                    1.00


Table B-22. Student Subjects' Satisfaction Ratings with the Icon Visual Search Game and the Text Visual Search Game

Student       Icon Visual Search Game                  Text Visual Search Game
Subject       Semantic Differential                    Semantic Differential
No.           Rating                 Numeric           Rating                 Numeric
 1            Liked it                0.67             No opinion              0.00
 2            Liked it                0.67             No opinion              0.00
 3            Liked it                0.67             Liked it a little       0.33
 4            Liked it a little       0.33             Liked it                0.67
 5            Liked it                0.67             Liked it                0.67
 6            Slightly disliked it   -0.33             Liked it a little       0.33
 7            Slightly disliked it   -0.33             Slightly disliked it   -0.33
 8            Slightly disliked it   -0.33             Slightly disliked it   -0.33
 9            No opinion              0.00             Not at all             -1.00
10            No opinion              0.00             Liked it                0.67
11            Liked it                0.67             Liked it                0.67
12            Liked it                0.67             Liked it                0.67
13            Liked it                0.67             Liked it a little       0.33
14            Liked it                0.67             Liked it                0.67
15            No opinion              0.00             No opinion              0.00
16            Did not like it        -0.67             Did not like it        -0.67
17            Liked it very much      1.00             Liked it very much      1.00
18            Liked it                0.67             Liked it                0.67
19            Liked it                0.67             Liked it                0.67
20            Liked it                0.67             Liked it                0.67
21            Slightly disliked it   -0.33             Slightly disliked it   -0.33
22            Liked it                0.67             Slightly disliked it   -0.33
23            Liked it                0.67             Liked it                0.67
24            Slightly disliked it   -0.33             No opinion              0.00
25            No opinion              0.00             No opinion              0.00
26            Liked it                0.67             Did not like it        -0.67
27            .                       .                .                       .
28            No opinion              0.00             No opinion              0.00
Mean                                  0.6414                                   0.3107
Median                                0.6700                                   0.6700
Std. Deviation                        0.26568                                  0.47149
Minimum                               0.00                                    -0.67
Maximum                               1.00                                     1.00


Table B-23. Foremen's Importance Ratings on Shorter Task Time, Fewer Task Errors, and Higher User Satisfaction

                     Importance Ratings
Foreman Subject No.  Shorter Task Time  Fewer Task Errors  Higher User Satisfaction
 1                    8.0                5                  7
 2                    8.0               10                 10
 3                    8.0               10                 10
 4                    8.0                3                 10
 5                    9.0                9                 10
 6                    7.0                9                  8
 7                   10.0               10                 10
 8                   10.0               10                  5
 9                   10.0               10                 10
10                    5.0               10                  6
11                    9.0               10                 10
12                    9.0                6                  5
13                    .                  .                 10
14                    5.0                .                  .
15                    9.0                5                  5
16                    8.0                8                 10
17                    6.0               10                  7
18                    4.0               10                 10
19                   10.0               10                 10
20                    8.0               10                  5
21                    5.0                5                  5
22                   10.0               10                 10
23                    5.0                5                  7
24                    7.0                7                  8
25                    7.0                5                  7
26                    .                  .                  .
27                    9.0               10                 10
28                    2.0                1                  3
29                    9.0               10                 10
30                   10.0               10                 10
31                    8.0                8                  8
32                    .                  .                  .
33                   10.0               10                 10
34                    .                  1                  .
35                    .                  .                  .
Mean                  7.767              7.900              8.200
Median                8.000             10.000             10.000
Std. Deviation        2.0957             2.8690             2.2034
Minimum               2.0                1.0                3.0
Maximum              10.0               10.0               10.0


Table B-24. Construction Professionals' Importance Ratings on Shorter Task Time, Fewer Task Errors, and Higher User Satisfaction

Construction Professional   Importance Ratings
Subject No.                 Shorter Task Time  Fewer Task Errors  Higher User Satisfaction
 1                          10.0                9                  9
 2                           8.0                9                  9
 3                           8.0               10                 10
 4                           8.0               10                  7
 5                           6.0                9                  7
 6                           7.0                9                 10
 7                           5.0               10                  6
 8                           8.0               10                  8
 9                           7.0               10                  8
10                           9.0               10                  8
11                           8.0               10                  8
12                          10.0               10                 10
13                           8.0               10                  8
14                           8.0                9                  8
15                          10.0               10                 10
16                          10.0               10                 10
17                           8.0               10                  8
18                           7.0                8                  7
19                           8.0                9                  8
20                           9.0               10                 10
21                           9.0               10                  8
22                           8.0               10                 10
23                          10.0               10                  5
24                          10.0               10                  6
25                           8.0               10                  7
26                           8.0                9                 10
27                           .                  .                  .
28                           8.0                9                  9
29                           8.0                9                  8
30                           2.0                8                  2
31                           9.0               10                 10
32                           8.0                9                  9
33                           8.0               10                 10
34                          10.0               10                 10
35                           9.0                8                  9
36                           8.0               10                 10
37                           8.0               10                 10
Mean                         8.139              9.556              8.389
Median                       8.000             10.000              8.500
Std. Deviation               1.5520             0.6522             1.7611
Minimum                      2.0                8.0                2.0
Maximum                     10.0               10.0               10.0


Table B-25. Students' Importance Ratings on Shorter Task Time, Fewer Task Errors, and Higher User Satisfaction

                     Importance Ratings
Student Subject No.  Shorter Task Time  Fewer Task Errors  Higher User Satisfaction
 1                    8.0                8                  9
 2                    8.0               10                  8
 3                    7.0                5                  8
 4                    7.0               10                  5
 5                   10.0                9                  9
 6                    9.0                4                  7
 7                    8.0                9                  6
 8                    8.0               10                  6
 9                    .                  .                  .
10                    7.0                7                  8
11                   10.0               10                 10
12                    1.0                5                 10
13                    9.0                9                  6
14                   10.0               10                 10
15                    7.0                9                 10
16                    7.0               10                  3
17                    8.0                8                  8
18                    8.0                7                  8
19                    8.0                9                  6
20                    8.0                8                  8
21                    5.0                9                  6
22                    9.0                9                 10
23                    7.0                8                  8
24                    1.0                2                  6
25                    6.0                8                  8
26                   10.0                7                  8
27                   10.0               10                 10
28                    8.0               10                  5
Mean                  7.556              8.148              7.630
Median                8.000              9.000              8.000
Std. Deviation        2.2758             2.0700             1.8636
Minimum               1.0                2.0                3.0
Maximum              10.0               10.0               10.0


Table B-26. Foremen's Views About Whether the Icon-based Field Documentation Systems Would Help Them Do Their Jobs

Foreman Subject No.  Agreement Rating   Equivalent Numeric Rating
 1                   Slightly agree     5
 2                   Agree              6
 3                   Agree              6
 4                   Agree              6
 5                   Slightly agree     5
 6                   Agree              6
 7                   Strongly agree     7
 8                   Strongly agree     7
 9                   Slightly agree     5
10                   No opinion         4
11                   Agree              6
12                   Agree              6
13                   Agree              6
14                   .                  .
15                   Agree              6
16                   Agree              6
17                   Agree              6
18                   No opinion         4
19                   Agree              6
20                   Agree              6
21                   Agree              6
22                   Slightly disagree  3
23                   No opinion         4
24                   Slightly agree     5
25                   Agree              6
26                   Agree              6
27                   Slightly agree     5
28                   Strongly agree     7
29                   Strongly agree     7
30                   Strongly agree     7
31                   Agree              6
32                   Slightly agree     5
33                   No opinion         4
34                   Agree              6
35                   .                  .


Table B-27. Construction Professionals' Views About Whether the Icon-based Field Documentation Systems Would Help Foremen Do Their Jobs

Construction Professional Subject No.  Agreement Rating   Equivalent Numeric Rating
 1                                     No opinion         4
 2                                     Agree              6
 3                                     Strongly agree     7
 4                                     Strongly agree     7
 5                                     Agree              6
 6                                     Agree              6
 7                                     Strongly agree     7
 8                                     Agree              6
 9                                     No opinion         4
10                                     Slightly agree     5
11                                     Agree              6
12                                     Strongly agree     7
13                                     Slightly agree     5
14                                     .                  .
15                                     Agree              6
16                                     Agree              6
17                                     Slightly agree     5
18                                     Agree              6
19                                     Agree              6
20                                     Agree              6
21                                     Slightly agree     5
22                                     Strongly agree     7
23                                     Agree              6
24                                     Strongly agree     7
25                                     Slightly agree     5
26                                     Slightly agree     5
27                                     Slightly agree     5
28                                     Agree              6
29                                     Agree              6
30                                     Slightly disagree  3
31                                     No opinion         4
32                                     Strongly agree     7
33                                     Strongly agree     7
34                                     Agree              6
35                                     Agree              6
36                                     No opinion         4
37                                     Strongly agree     7


Table B-28. Student Subjects' Views About Whether the Icon-based Field Documentation Systems Would Help Foremen Do Their Jobs

Student Subject No.  Agreement Rating   Equivalent Numeric Rating
 1                   Strongly agree     7
 2                   Agree              6
 3                   Agree              6
 4                   Slightly agree     5
 5                   Agree              6
 6                   Disagree           2
 7                   Slightly agree     5
 8                   Slightly disagree  3
 9                   Disagree           2
10                   Agree              6
11                   Agree              6
12                   Strongly agree     7
13                   Slightly agree     5
14                   Slightly agree     5
15                   Agree              6
16                   Slightly agree     5
17                   Agree              6
18                   Agree              6
19                   Slightly agree     5


Table B-29. Foremen Subjects' Average Task Time, Average Task Instruction Reading Time, Average Task Search Time, and Average Task Errors in the Icon Visual Search Session and the Text Visual Search Session

Columns give, for the icon visual search test and then the text visual search test: average task time, average instruction reading time, average task search time, and task errors.

Subject No.   Task Time  Instr. Time  Search Time  Errors  |  Task Time  Instr. Time  Search Time  Errors
 1              6,986      1,117        5,761      0       |    5,868      1,385        4,572      0
 2              4,744      1,014        3,407      0       |    7,139      1,038        6,142      0
 3              4,689      1,181        3,317      0       |    4,617      1,711        2,828      0
 4              6,881      4,096        2,254      0       |   10,199      6,161        2,996      0
 5              5,902      1,475        4,566      0       |    8,963      3,000        5,811      1
 6              6,736      2,752        4,080      0       |    8,912      4,839        3,273      0
 7              9,060      1,679        7,194      0       |    9,299      3,270        5,861      2
 8              4,450      2,022        2,048      0       |    5,705      2,308        3,077      0
 9              5,510      1,007        4,587      1       |    7,177      1,538        5,548      1
10              6,651        964        5,763      0       |    5,026        918        4,056      1
11              3,959        706        3,280      0       |    5,718      1,085        4,555      0
12              8,741        652        7,920      0       |   10,417      1,630        8,586      2
13              5,769      1,788        3,818      1       |    9,466      2,375        6,522      1
14              6,040        628        5,404      0       |   10,212        875        9,012      1
15              5,837      2,326        3,237      0       |    8,072      2,195        5,678      1
16              4,206      1,310        2,883      2       |    6,472      2,168        4,459      0
17              4,837      1,818        3,165      1       |    4,375      1,084        3,821      2
18              5,601      2,186        3,462      1       |    6,209      1,971        4,088      1
19              4,693      2,841        1,360      0       |    4,548      3,160        1,105      0
20              6,106      2,031        3,888      2       |    7,438      2,178        5,150      1
21              9,076      1,646        6,381      2       |    7,265      1,787        5,050      2
22              4,507        720        3,699      1       |    5,938        953        4,968      3
23              6,275      1,687        4,644      6       |    7,231      2,731        4,291      1
24              .          .            .          .       |    .          .            .          .
25              .          .            .          .       |    .          .            .          .
26              6,141      2,050        4,012      0       |    8,597      4,179        3,673      2
27              4,366      1,629        2,758      0       |    5,509      2,058        2,964      0
28              4,380        979        3,327      0       |    6,234      1,362        4,952      1
29              6,862      2,847        3,558      0       |    7,335      1,305        5,662      1
30              4,122      1,393        2,841      0       |    5,152      1,635        3,085      0
31              5,118      1,578        3,296      0       |    6,764      1,789        4,481      1
32              7,660      1,635        6,052      0       |   12,199      1,611       10,593      1
33              7,492      2,266        5,440      1       |    8,276      3,469        4,620      1
34              7,076      1,074        5,793      1       |    7,452      1,174        5,860      0
35              5,944        649        5,045      0       |    8,558      1,488        6,425      1
Mean            5,952      1,629        4,189      0.58    |    7,344      2,134        4,963      0.85
Std. Error        247        135          261      0.204   |      334        208          329      0.138
Median          5,902      1,629        3,818      0.00    |    7,231      1,787        4,620      1.00
Std. Deviation  1,419        775        1,501      1.173   |    1,921      1,195        1,888      0.795
Minimum         3,959        628        1,360      0       |    4,375        875        1,105      0
Maximum         9,076      4,096        7,920      6       |   12,199      6,161       10,593      3


Table B-30. Construction Professionals' Average Task Time, Average Task Instruction Reading Time, Average Task Search Time, and Average Task Errors in the Icon Visual Search Session and the Text Visual Search Session

Columns give, for the icon visual search test and then the text visual search test: average task time, average instruction reading time, average task search time, and task errors.

Subject No.   Task Time  Instr. Time  Search Time  Errors  |  Task Time  Instr. Time  Search Time  Errors
 1              4,716        897        3,949       2      |    6,167      1,206        4,720      1
 2              7,922        821        7,042       4      |   10,229      1,014        8,980      4
 3              4,550      1,861        2,766       0      |    5,835      2,398        3,387      2
 4              7,655      4,147        3,652       2      |    6,726      2,104        4,863      2
 5              3,913      1,143        2,655       0      |    4,840      1,299        3,411      2
 6              6,443      1,502        4,900       3      |   10,665        945        9,588      3
 7              4,293        637        3,478       0      |    4,129        761        3,440      0
 8              4,366      1,153        3,261       0      |    5,473      1,578        3,425      0
 9              5,716        768        5,014       4      |    5,645      1,278        4,459      1
10              3,250        423        2,836       0      |    3,424        512        2,863      0
11              6,055      3,916        1,856       0      |    6,140      4,011        1,471      0
12              6,659      1,761        4,945       0      |    8,215      1,688        6,515      1
13              7,817      2,824        4,457       0      |   12,414      3,441        7,804      1
14              5,053        878        4,235       2      |    8,251      1,205        7,099      5
15              5,377      1,022        4,388       6      |    9,744      1,288        7,910      4
16              4,014        323        3,701       2      |    8,278        689        7,551      6
17              5,156        262        4,801      13      |    6,874      1,442        5,261      8
18              6,150      2,278        3,591       1      |    7,902      2,009        5,900      1
19              5,107      2,172        2,911       0      |    7,192      3,165        3,856      0
20              5,896      1,366        4,593       0      |    5,840      1,864        3,783      0
21              7,613      2,685        4,800       5      |    5,833      2,950        2,395      1
22              6,397      1,094        5,088       1      |    7,413      2,059        5,392      0
23              4,656        730        3,750       1      |    4,106      1,279        2,779      0
24              4,974        505        4,505       4      |    5,546      1,027        4,558      4
25              6,393        383        6,015       8      |    7,871        905        6,677      8
26              4,137      1,442        2,725       1      |    4,262      1,859        2,129      1
27              4,537      1,090        3,305       1      |    4,465      1,298        2,991      0
28              7,402      1,254        6,217       0      |    7,025      1,082        5,278      1
29              5,563      1,554        3,938       1      |    5,908      1,537        3,899      2
30             12,378      2,740        9,433       1      |   12,146      3,860        8,702      3
31              4,677      2,622        1,848       0      |    4,569      2,748        1,673      0
32              6,191        541        5,583       1      |    5,555        776        4,677      0
33              5,385      1,570        3,822       1      |    5,544      1,993        3,439      1
34              6,608      2,268        4,037       0      |    7,955      1,514        5,985      1
35              5,147      1,765        3,499       3      |    8,208      2,821        5,201      3
36              4,052      2,941          829       0      |    4,096      3,322          778      0
37              5,342        479        4,851       2      |    5,725        559        5,065      1
Mean            5,717.84   1,508.57     4,142.59    1.86   |    6,762.43   1,769.89     4,808.22   1.81
Std. Error        270.187    162.211      252.994   0.442  |      362.624    155.049      358.080  0.357
Std. Deviation  1,643.486    986.692    1,538.902   2.689  |    2,205.756    943.128    2,178.118  2.171
Minimum         3,250        262          829       0      |    3,424        512          778      0
Maximum        12,378      4,147        9,433      13      |   12,414      4,011        9,588      8


Table B-31. Students' Average Task Time, Average Task Instruction Reading Time, Average Task Search Time, and Average Task Errors in the Icon Visual Search Session and the Text Visual Search Session

Columns give, for the icon visual search test and then the text visual search test: average task time, average instruction reading time, average task search time, and task errors.

Subject No.   Task Time  Instr. Time  Search Time  Errors  |  Task Time  Instr. Time  Search Time  Errors
 1              5,535        930        4,586       9      |    8,455        731        7,647      10
 2              5,583      1,188        4,419       0      |    6,399      1,331        5,016       2
 3             15,827      2,252       13,892      26      |    9,875      2,172        7,811       1
 4              6,952        702        6,279       3      |    7,149        953        6,231       1
 5              3,862        866        2,970       0      |    2,820      1,854          797       0
 6              4,322      1,164        3,130       2      |    4,196      1,457        2,757       2
 7              4,816      1,027        3,224       0      |    4,180      2,256        1,467       2
 8              8,847      1,020        7,759       2      |    9,886      1,799        8,182       1
 9              .          .            .           .      |    .          .            .           .
10              .          .            .           .      |    .          .            .           .
11              4,341        619        3,491       1      |    4,284      1,937        1,801       1
12              7,601        753        6,789       9      |    9,589      1,174        8,420      10
13              6,022      1,966        3,330       1      |   18,045      6,724        5,895       2
14              5,372        830        4,159       0      |    6,256      1,225        5,114       6
15              6,245      1,257        5,119       8      |   10,238      2,116        6,870       0
16              9,178      1,493        7,855      13      |    6,827      2,147        4,543       2
17              .          .            .           .      |    .          .            .           .
18              5,382      1,043        4,477       5      |    5,866      1,188        4,727       5
19              6,272        286        5,948       0      |    7,277      1,324        6,096       0
20             10,124      2,659        7,948       2      |    6,042      3,688        1,926       0
21              7,249        641        6,855       8      |    5,816        703        5,101       5
22              3,586        685        2,884       0      |    4,672      1,018        3,704       4
23              .          .            .           .      |    .          .            .           .
24              5,141        540        4,447       2      |    5,052        784        4,488       1
25              6,285      1,605        4,537      10      |    8,012      1,648        6,426       3
26              8,246      2,450        5,511       3      |    7,336      1,440        5,860       0
27              5,896        475        6,048      14      |    7,851        560        7,266      23
28              2,956        377        2,519       0      |    3,895        549        3,198       1
Mean            6,485      1,118        5,341       5      |    7,084      1,699        5,056       3
Std. Error        546.69     132.72       503.19    1.29   |      636.67     261.61       450.31    1.03
Median          5,959        975        4,562       2      |    6,613      1,386        5,108       2
Std. Deviation  2,678.22     650.17     2,465.12    6.30   |    3,119.02   1,281.61     2,206.06    5.05
Minimum         2,956        286        2,519       0      |    2,820        549          797       0
Maximum        15,827      2,659       13,892      26      |   18,045      6,724        8,420      23
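The summary rows of Tables B-29 through B-31 (mean, standard error of the mean, median, standard deviation, minimum, and maximum) are computed per column over the subjects with complete data; subjects marked "." are excluded. A minimal Python sketch of such a column summary follows, assuming the sample (n-1) standard deviation; the function name and data layout are illustrative.

    import statistics
    from math import sqrt

    def column_summary(values):
        """Summary statistics for one column; None marks a missing subject."""
        observed = [v for v in values if v is not None]   # drop "." rows
        n = len(observed)
        return {
            "mean": statistics.mean(observed),
            "std_error_of_mean": statistics.stdev(observed) / sqrt(n),
            "median": statistics.median(observed),
            "std_deviation": statistics.stdev(observed),
            "minimum": min(observed),
            "maximum": max(observed),
        }

    # Icon-test average task times for the first five foreman subjects in Table B-29:
    print(column_summary([6986, 4744, 4689, 6881, 5902]))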


APPENDIX C
SURVEY QUESTIONNAIRE

Date of Survey:                    Survey Index #:

SECTION 1 – Foremen's Demographic Information

1. Foreman's Name:
2. Age:
3. Education:
4. Years of Construction Experience:
5. Number of workers under supervision:
6. Type of Foreman:
   a. earthwork foreman
   b. underground utilities foreman
   c. paving foreman
   d. other (please specify)
7. Company Name:
8. Company Specializations (check all that apply):
   a. earthwork
   b. underground utilities
   c. paving
9. Years in Business: ____ years

SECTION 2 – Foremen's Experience with Touch Sensitive Screen Devices and Mobile Computing Devices

9. Which of the following touch-sensitive screen devices have you ever used? (check all that apply)
   a. ATM machines / information kiosks / store checkout services
   b. Other, please specify
10. Have you used a Personal Digital Assistant (PDA) such as a Palm Pilot or Pocket PC?
    a. Yes   b. No
    If "Yes," did you use it for work or personal business?
    a. work   b. personal business   c. both
    If "Yes," how much time did you use it on a weekly basis? ____ hours
11. How efficient do you think it is to enter field information on computers using the stylus writing method?
    (very inefficient / inefficient / slightly inefficient / no opinion / slightly efficient / efficient / very efficient)
12. How important is it to be able to enter field information on computers in a quick and efficient manner?
    (not important at all / of little importance / fairly important / important / very important)
13. Do you agree that most of your field documentation could be standardized for input on the computer screen?
    (strongly disagree / disagree / slightly disagree / no opinion / slightly agree / agree / strongly agree)
14. Please give the percentage of the field information that you think could be standardized: ____ %

SECTION 3 – Visual Search Test

15a. How much did you like the icon game?
     (not at all / did not like it / slightly disliked it / no opinion / liked it a little / liked it / liked it very much)
15b. How much did you like the text game?
     (not at all / did not like it / slightly disliked it / no opinion / liked it a little / liked it / liked it very much)
16. Please rate (from 1 to 10, with 10 being highest) the importance of each of the following, concerning data input:
    a. short task completion time
    b. few errors
    c. satisfaction
17. (Demonstrate to the study participant a sample Palm OS based construction equipment timesheet/productivity documentation tool.) Do you think the icon system (like the one shown to you) would help you do your job better?
    (strongly disagree / disagree / slightly disagree / no opinion / slightly agree / agree / strongly agree)


    Please comment on your answer:
18. If you were given an icon system like the one shown to you for data input, would you use it?
    a. Yes   b. No (please explain reason)
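Each completed questionnaire yields one record mixing demographics, binary device-experience indicators, and coded Likert responses. The Python sketch below is one hypothetical way such a record could be represented for analysis; every field name and type here is an assumption, not the study's actual data layout.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ForemanResponse:
        # Illustrative record for one coded questionnaire (names are assumed).
        subject_no: int
        age: Optional[int] = None                        # Section 1, item 2
        experience_years: Optional[float] = None         # Section 1, item 4
        device_experience: dict = field(default_factory=dict)  # item 9, yes/no per device
        uses_pda: bool = False                           # item 10
        stylus_efficiency: Optional[int] = None          # item 11, coded 1-7
        quick_entry_importance: Optional[int] = None     # item 12, coded 1-5
        standardization_agreement: Optional[int] = None  # item 13, coded 1-7
        percent_standardizable: Optional[float] = None   # item 14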


BIOGRAPHICAL SKETCH

Tan Qu was born in Guinan, a small town in Qinghai Province, People's Republic of China. He completed his Bachelor of Science degree in civil engineering at the Huazhong University of Science and Technology (HUST) in Wuhan, China, in July 1993. Upon graduation from HUST, Mr. Qu went to Harbin University of Architecture and Engineering (now merged with Harbin Institute of Technology (HIT)) to pursue his master's degree in construction management and economics. He received his Master of Science degree in construction management in April 1996.

In August 1996 he came to the U.S. to pursue his Ph.D. at the College of Design, Construction and Planning of the University of Florida. After finishing the course work of the Ph.D. program in 1998, he worked for a local general contractor in the Orlando, FL, area for several years while carrying out his research project. Mr. Qu is currently working for an ENR-listed engineering firm to broaden his professional experience in the construction industry.