VISUAL SCANNING STRATEGIES ACCOMPANY THE ACQUISITION OF PERCEPTUAL EXPERTISE

By

ALLISON N. CARR

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2018
© 2018 Allison N. Carr
To my mother and father, Brian and Sheila Carr
ACKNOWLEDGMENTS

This research was conducted with support from ARI grant W5J9CQ-11-C-0047 awarded to Dr. Lisa Scott. The views, opinions, and/or findings contained in this thesis are those of the authors and should not be construed as an official Department of the Army position, policy, or decision. I thank David Sheinberg for helping with the initial stages of stimulus creation, Melody Buyukozer Dawkins and Charisse Pickron for help finalizing the stimulus set, and James Calabro and Matthew Mollison for technical and programming assistance. I thank Ryan Barry-Anwar, Travis Jones, Charisse Pickron, Eswen Fava, Hillary Hadley, and members of the Brain, Cognition and Development Lab for relevant discussion, additional stimulus development, and research assistance. I would also like to thank Dr. Lisa Scott for continued support throughout this project, as well as Dr. Natalie Ebner and Dr. Jeffrey Farrar for being members of my committee.
TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
LIST OF ABBREVIATIONS
ABSTRACT

CHAPTER

1 INTRODUCTION
    Perceptual Expertise
    Impact of Image Manipulations on Perceptual Processing
    Visual Strategy Use and Perceptual Processing

2 MATERIALS AND METHODS
    Participants
    Stimuli
    Procedure
        Training
        Pre- and Post-test Assessments: Serial Matching Task
        Pre- and Post-test Assessments: Eye-tracking Procedure
    Data Analysis

3 RESULTS
    Image Manipulation
        Pre-test and Post-test Accuracy
        Eye Tracking
            Dwell time
            Average fixation duration
            Fixation count
            Scanning patterns
    Gaze-Contingent Viewing Manipulation
        Pre-test and Post-test Accuracy
        Eye Tracking
            Dwell time
            Average fixation duration
            Fixation count

4 DISCUSSION
    Perceptual Expertise
    Impact of Image Manipulations on Perceptual Processing
    Visual Strategy Use and Perceptual Processing
    Limitations and Future Research
    Conclusion

LIST OF REFERENCES

BIOGRAPHICAL SKETCH
LIST OF TABLES

2-1 Participant demographics
3-1 Pre- and post-test means and standard deviations by manipulation
3-2 Pre- and post-test dwell time means and standard deviations by manipulation
3-3 Pre- and post-test fixation duration means and standard deviations by manipulation
3-4 Pre- and post-test fixation number means and standard deviations by manipulation
3-5 Pre- and post-test similarity score means and standard deviations by manipulation
LIST OF FIGURES

2-1 Examples of novel stimuli and image manipulations
2-2 Training task and serial matching task
3-1 Mean discrimination accuracy difference from pre-test to post-test (post-test minus pre-test) across the four image manipulation conditions
3-2 Fixation results for image manipulation analysis
3-3 Similarity scores across training conditions and image manipulations
3-4 Mean discrimination accuracy difference from pre-test to post-test (post-test minus pre-test) across the three gaze-contingent mask manipulation conditions
3-5 Fixation results for gaze-contingent viewing condition analysis
LIST OF ABBREVIATIONS

HSF  High spatial frequency
LSF  Low spatial frequency
Sub  Subordinate
Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

VISUAL SCANNING STRATEGIES ACCOMPANY THE ACQUISITION OF PERCEPTUAL EXPERTISE

By

Allison N. Carr

May 2018

Chair: Lisa Scott
Major: Psychology

Research examining the acquisition of perceptual expertise suggests that subordinate-level training improves perceptual discrimination over basic-level training. However, it was previously unclear whether increased perceptual discrimination is accompanied by changes in visual strategy use. In the current study, adults (n = 28) completed 6 training sessions with 2 families of computer-generated novel objects. Participants were trained with 5 unique species within each of the two families. One family was trained at the subordinate level and the other family was trained at the basic level. Before and after training, discrimination accuracy and visual fixation data were measured while participants discriminated trained and untrained object exemplars. To examine the impact of visual features and viewing strategies, image color, spatial frequency, and gaze-contingent viewing condition (peripheral, full, foveal) were manipulated across blocks. Consistent with previous reports, results showed that discrimination accuracy increased from pre-test to post-test for the subordinate-trained family but not for the basic-trained family. This was consistent across all image manipulations and viewing conditions. Training also resulted in increased fixation duration and decreased fixation number; however, these changes occurred after both
subordinate- and basic-level training. Finally, within subjects, visual fixation patterns became increasingly consistent after subordinate-, but not basic-, level training. These findings suggest that increases in discrimination after subordinate-level training and accompanying changes in visual fixation patterns reflect the acquisition of perceptual expertise.
CHAPTER 1
INTRODUCTION

Perceptual expertise is characterized by increased proficiency to discriminate, identify, and recognize exemplars within visual categories (Tanaka & Taylor, 1991; Scott, 2011; Scott, Tanaka, & Curran, 2009). Perceptual expertise is used in a wide range of professions spanning a variety of health and safety domains. For example, visual expertise is required for a radiologist discriminating between scans with and without disease, forensic scientists matching fingerprints to a crime scene, and TSA agents matching passport photos to faces. Past research on perceptual expertise suggests that learning at the subordinate level enhances discrimination accuracy and neural responses (Scott, Tanaka, Sheinberg, & Curran, 2006; 2008; Jones et al., in revision) and has identified color and spatial frequency information as important visual features for perceptual experts (Hagen, Vuong, Scott, Curran, & Tanaka, 2014; 2016). Perceptual expertise research provides a knowledge base for designing training protocols to increase visual discrimination performance across a wide range of skilled professions that require perceptual expertise.

In the current investigation, participants were trained to discriminate between novel stimuli at the basic and subordinate category levels. Stimulus features, including color and spatial frequency, as well as the viewing condition, were manipulated. Behavioral performance during a discrimination task was assessed, and visual strategy use was examined through eye tracking. The current investigation had four main goals. First, the principal goal of the study was to examine changes in visual strategies that accompanied the acquisition of expertise. Previous research suggests that changes in visual fixations reflect how a stimulus is being processed (e.g., Henderson, 2011;
Rayner & Pollatsek, 1981; Henderson & Choi, 2015; Rayner, 2009). This implies that eye tracking would be a useful tool for examining visual strategy changes associated with the acquisition of perceptual expertise. Second, the current study examined how these changes in visual strategies impacted the development of expert-like holistic processing through the use of a gaze-contingent viewing manipulation. Third, the present study aimed to determine the extent to which surface features like color and spatial frequency impacted discrimination performance and visual fixations before and after training. Lastly, the current study sought to replicate past expertise training studies that used real-world stimuli such as birds and cars (Scott et al., 2006; 2008), and to extend this research through the use of novel, computer-generated stimuli. The use of novel objects eliminated the possibility that prior experience with a stimulus, gained through exposure in the natural environment, could confound any observed effects. Here I will first review past research on perceptual expertise, followed by findings on visual strategy use, holistic processing, and the impact of image manipulations on perceptual processing, and I will end with the research methods and analyses.

Perceptual Expertise

Work on perceptual expertise has utilized a variety of methods and stimuli, and has included both real-world and laboratory-trained experts (e.g., Johnson & Mervis, 1997; Scott et al., 2006; 2008; Hagen et al., 2014; 2016). Individuals with bird (Johnson & Mervis, 1997; Tanaka & Taylor, 1991) and car expertise (Gauthier, Skudlarski, Gore, & Anderson, 2000) show recognition advantages relative to novices. Perceptual expertise training also leads to improvements in recognition and discrimination (e.g., Gauthier & Tarr, 1997; Tanaka & Taylor, 1991) as well as enhanced
neural processing (Tanaka & Curran, 2001; Gauthier, Curran, Curby, & Collins, 2003; Scott et al., 2006; 2008; Jones et al., in revision).

Becoming a visual expert requires a qualitative shift in processing, moving from observation of only basic features to more detailed, domain-specific information (Tanaka & Taylor, 1991). For this to occur there must be a shift from basic- to subordinate-level categorization, where basic-level categories are general categories of objects (e.g., general categories of birds or cars) and subordinate-level categories are comparatively more specific (e.g., White-necked raven or Honda Civic) (Johnson & Mervis, 1997). With this shift, experts are able to quickly utilize subordinate-level information for object discrimination (Tanaka & Curran, 2001). Johnson and Mervis (1997) examined discrimination of birds in real-world bird experts compared to bird novices and found that bird experts were able to recognize and discriminate between birds at the basic level of categorization more quickly than novices. Experts also showed an advantage over novices when the birds were identified at the subordinate level, suggesting that expertise requires the ability to utilize subordinate-level category information for successful recognition (Johnson & Mervis, 1997). Previous training studies also highlight the importance of learning visual exemplars at more specific, subordinate levels to increase perceptual expertise (Tanaka, Curran, & Sheinberg, 2005; Scott et al., 2006; 2008; Jones et al., in revision).

Similar to the present investigation, perceptual expertise has been trained using novel objects (Gauthier, Williams, Tarr, & Tanaka, 1998; Rossion, Gauthier, Goffaux, Tarr, & Crommelinck, 2002; Gauthier & Tarr, 1997). The use of novel objects in expertise training studies is critical for determining whether training is effective in promoting visual
expertise in the absence of prior experience. Novel objects called "Greebles" have previously been used to answer this question (Gauthier et al., 1998; Rossion et al., 2002; Gauthier & Tarr, 1997). In one study, participants were trained to discriminate between different Greebles (Gauthier et al., 1998). There were two separate genders of Greebles and, from this set, each Greeble had its own individual name. Participants were trained on both the gender and the individual names. Results suggest that after training, participants could identify Greebles equally at both the gender and individual level, indicating an increase in expert-level discrimination (Gauthier et al., 1998). This previous finding was the first to suggest that expertise training is effective in improving discrimination of completely novel objects.

Training studies have also utilized real-world objects and animals. Tanaka et al. (2005) trained bird novices to discriminate between different species of birds for two weeks. In this study, participants learned to classify 10 species of wading birds and 10 species of owls. Participants were trained at either the subordinate level, where they learned to discriminate between each different species, or at the basic level. During training, participants were presented with a picture of each bird along with either its species-level or basic-level label, depending on training condition. After two weeks of training, participants were tested on their ability to discriminate between the different species of birds. They were tested on the bird exemplars they were trained on, other exemplars from the trained bird species, or new species of wading birds or owls. Participants trained at the subordinate level were found to be better at discriminating
between the different bird species, highlighting the importance of subordinate-level training for increasing visual discrimination (Tanaka et al., 2005). Participants trained at the subordinate level also generalized their learning to the novel exemplars and novel species, suggesting that strategies used for subordinate-level discrimination are transferrable to never-before-seen species within the same category (Tanaka et al., 2005).

Scott et al. (2006) replicated this result, finding improved discrimination after subordinate- but not basic-level training, and also discovered accompanying neural changes. In this study, participants once again learned to classify wading birds and owls at the basic or subordinate level, and then they were tested on discrimination of the trained bird exemplars, untrained exemplars of the trained species, or exemplars from new species. ERPs were also recorded during the discrimination tests. Results showed that a specific ERP component, the N170, was increased in amplitude after both subordinate- and basic-level training, suggesting the N170 is modulated by experience with a stimulus. In contrast, N250 amplitude only increased after subordinate-level training, suggesting dissociable processes for subordinate-level learning (Scott et al., 2006).

A separate training study examined the effect of basic- and subordinate-level categorization on the discrimination of different car exemplars, as well as the extent to which training-related behavioral and neural changes persisted over time (Scott et al., 2008). Similar to the studies with birds, participants were trained to classify different classes of cars at either the basic or subordinate level. At the basic level, participants learned to classify three types of cars (sedans, SUVs, or antiques), while at
the subordinate level, participants learned to classify different models of each car type (e.g., Honda CR-V, Toyota RAV4). During training, participants either categorized the cars at the basic or subordinate level or passively viewed the stimuli in an exposure-only condition. Participants received 8 days of training over a two-week period, and discrimination performance was tested before training, immediately after training, and one week post-training. ERPs were recorded during all the pre- and post-training discrimination tests. Results suggested that the basic and exposure-only conditions did not lead to improvements in discrimination of cars immediately after training or after the one-week delay. In contrast, subordinate-level training led to improved discrimination of the car stimuli immediately after training; this improvement was also present one week after training, suggesting that the effects of subordinate-level training persist for at least one week (Scott et al., 2008). ERP results aligned with past perceptual expertise research using ERPs (Scott et al., 2006): N170 amplitude increased after any exposure to the stimuli (after the exposure-only condition, basic-level training, and subordinate-level training), but N250 amplitude only increased after subordinate-level training, further differentiating subordinate-level learning from basic-level and exposure learning in both behavior and the brain (Scott et al., 2008). Together, research with bird and car stimuli suggests that subordinate-level training leads to improved discrimination of real-world exemplars and is accompanied by neural changes suggesting different learning mechanisms for basic- versus subordinate-level processing (Scott et al., 2006; 2008; Tanaka et al., 2005).
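In the training studies above, learning is indexed by the change in discrimination performance from pre-test to post-test. In same/different matching tasks of this kind, discrimination is commonly summarized with the signal detection measure d′: the z-transformed hit rate minus the z-transformed false-alarm rate. The sketch below is a generic illustration of that computation, not the specific analysis reported in any of the studies cited; the function name and the optional log-linear correction are my own choices.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, n_trials=None):
    """Signal detection sensitivity for a same/different task:
    d' = z(hit rate) - z(false-alarm rate).

    If n_trials is given, a log-linear correction keeps rates of
    exactly 0 or 1 from producing infinite z-scores."""
    if n_trials is not None:
        hit_rate = (hit_rate * n_trials + 0.5) / (n_trials + 1)
        fa_rate = (fa_rate * n_trials + 0.5) / (n_trials + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# 80% hits with 20% false alarms yields moderate sensitivity;
# chance performance (hits == false alarms) yields d' of 0.
print(round(d_prime(0.8, 0.2), 2))  # prints 1.68
print(d_prime(0.5, 0.5))            # prints 0.0
```

Training-related improvement would then be the post-test d′ minus the pre-test d′, computed separately for each training condition.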
Research with Greebles suggests that perceptual expertise training also extends to a novel object class (Gauthier et al., 1998). However, studies with Greebles did not differentiate between the contributions of basic- and subordinate-level processing. The current study bridges this gap by assessing the differential impact of basic-level versus subordinate-level training on discrimination of novel stimuli. It is predicted that subordinate-level training, but not basic-level training, will lead to increased discrimination of novel objects.

The majority of research on perceptual expertise training has focused on behavioral discrimination of stimuli and accompanying neural changes, but little work has examined the extent to which visual fixations change during the acquisition of perceptual expertise. An important distinction between basic and subordinate levels of categorization involves how visual information is processed (Schyns & Oliva, 1994; Kimchi, 1992). Basic-level features are processed automatically and involve only coarse visual information, while subordinate-level features involve finer visual information (Schyns & Oliva, 1994; Kimchi, 1992). Experts not only extract subordinate-level visual information from a stimulus, but they do so immediately upon viewing the stimulus, whereas novices tend to first view objects based on basic-level visual characteristics, such as the shape of the stimulus (Tanaka & Taylor, 1991; Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976). Subordinate-level, expert-like categorization utilizes high spatial frequency visual information (fine details), while basic-level categorization utilizes lower spatial frequency information (coarse visual details) (Collin & McMullen, 2005). Because visual information is used differently at different levels of categorization, it is predicted that changes in the use of visual information during the acquisition of
perceptual expertise will be accompanied by changes in visual fixations as well as in the use of visual features such as color and spatial frequency.

Impact of Image Manipulations on Perceptual Processing

Visual surface information, like color and texture, provides important details used in the recognition and discrimination of objects (Tanaka & Presnell, 1999; Therriault, Yaxley, & Zwaan, 2009; Bramão, Faísca, Magnus Petersson, & Reis, 2012). In one study, stimuli that were strongly associated with a particular color (like red for a strawberry or orange for a pumpkin) were presented in their expected color, in an unexpected color, or in grey scale (Therriault et al., 2009). The expected-color stimuli were named the most quickly, followed by the grey-scale stimuli. The unexpected-color stimuli (e.g., an orange strawberry) resulted in the greatest impairment in identification speed. These findings highlight the important role that color plays in object recognition (Therriault et al., 2009). In a subsequent study, the impact of color information on objects with less of an association with a single color was examined (Tanaka & Presnell, 1999). Objects were considered either high color diagnostic (HCD), and were associated with a specific color, or low color diagnostic (LCD), and not associated with a specific color. Color information was found to be important only for the recognition of HCD objects and had no effect on LCD objects (Tanaka & Presnell, 1999).

Color information has also been examined in relation to expertise. Hagen et al. (2014) examined how color differentially impacted discrimination of bird stimuli in both bird experts and bird novices. The birds were presented in the expected color, an unexpected color, or grey scale. First, the experts and novices were asked to categorize birds at the basic level. During this task, both the novices and experts used color information for categorization. Experts were also asked to categorize the birds at the
subordinate level. Experts' subordinate-level discrimination was better when the color information was consistent with the expected color of the object, compared to when the object appeared in an unexpected color or in grey scale (Hagen et al., 2014). This evidence suggests that color is an important factor for expert-level discrimination and aids expert performance. It is important to note, however, that this may depend on the extent to which the stimuli are color diagnostic (Tanaka & Presnell, 1999). Birds are often identified by their color, so they are high color diagnostic, whereas for identification of car makes and models, color is low diagnostic. Therefore, the facilitation of expert-level discrimination observed with the bird stimuli reported by Hagen et al. (2014) may not extend to low color diagnostic stimuli.

Spatial frequency (SF) information, including the internal features, shape, and spatial resolution of a stimulus, is another feature found to influence real-world expertise. Manipulation of SF information within an image provides a method to isolate both the fine-grained visual details (high SF information) and the global shape of the stimuli (low SF information) (Rosch et al., 1976). Recognition of objects at the basic level requires processing of the global form (low SF), while subordinate-level recognition requires processing of internal object details (high SF) (Rosch et al., 1976; Tanaka & Taylor, 1991). When bird images were filtered over a range of low spatial frequencies, bird experts categorized common birds at the basic level more quickly and more accurately than novices (Hagen et al., 2016). In the same study, Hagen et al. (2016) also filtered out different spatial frequencies while asking real-world bird experts to discriminate between birds at the subordinate level. Expert recognition was fastest with a middle range of SFs visible. As such, the authors
hypothesize that the middle range of SFs (around 8–32 cpi) is used efficiently by experts and contains information important for subordinate-level recognition (Hagen et al., 2016).

The present investigation extends the examination of visual surface features (color, SF) for perceptual experts and examines the role of color and SF during the acquisition of perceptual expertise for a novel object class. The goal is to examine whether color and spatial frequency manipulations impact basic-level or subordinate-level learning of novel objects.

Visual Strategy Use and Perceptual Processing

Perceptual experts tend to process objects within their domain of expertise more holistically than novices and recognize each object as a whole, instead of focusing on individual parts (Bukach, Gauthier, & Tarr, 2006; Gauthier et al., 1998; Gauthier & Tarr, 2002; Gauthier et al., 2003). Both expert face discrimination (Tanaka & Farah, 1993; Van Belle, De Graef, Verfaillie, Busigny, & Rossion, 2010) and discrimination of objects within a domain of expertise (Gauthier & Tarr, 1997) utilize holistic processing. For both faces and objects of expertise, discrimination of like exemplars involves observation of both first-order and second-order configurations of features (Maurer, Le Grand, & Mondloch, 2002). First-order configurations involve information about the basic shape of an object, while second-order configurations involve details about small deviations from this basic shape that are unique to that exemplar (Diamond & Carey, 1986; Gauthier & Tarr, 1997; Maurer et al., 2002). Encoding of these second-order details requires holistic processing of the exemplar (Diamond & Carey, 1986).

Holistic processing has been measured using a variety of tasks and stimulus manipulations (Evans, Georgian-Smith, Tambouret, Birdwell, & Wolfe, 2013; Gosselin & Schyns, 2001; Martini, McKone, & Nakayama, 2006; McKone, 2004; Young, Hellawell,
& Hay, 1987). However, it can also be examined by limiting the viewing window of the stimulus through the use of a gaze-contingent window/mask (van Diepen, De Graef, & Van Rensbergen, 1994; for the origin of the methodology, see Rayner, 1975). Gaze-contingent windows/masks restrict the extent to which a viewer can see a stimulus by placing a window on the image being viewed. The window can restrict viewing to only the center of the gaze, or it can mask the center and leave only a peripheral view. The foveal viewing condition restricts viewing to the center of the gaze, only allowing the viewer to process one feature at a time. This ensures local or featural processing of the image (Van Belle, De Graef, Verfaillie, Rossion, & Lefevre, 2010). In contrast, the peripheral viewing condition blocks the center of the gaze and forces the viewer to look at the periphery of the object instead, leading to a more holistic approach in visual processing (Van Belle et al., 2010). Van Belle et al. (2010) used gaze-contingent masking while participants matched upright and inverted faces with a foveal view, a peripheral view, or with the full image visible. When the foveal view prevented the face from being processed holistically, there was a much larger impairment in face recognition performance than in the full-image or peripheral-view conditions. This impairment in face recognition with a foveal view suggests holistic processing is an important component of high-level, expert object recognition. These results also suggest that the gaze-contingent paradigm is a useful tool for measuring featural or holistic processing strategies. In the current investigation, the gaze-contingent masking paradigm was utilized to assess how basic- and subordinate-level training of novel objects impacts holistic processing. The use of the foveal view,
impairing holistic processing, should impair discrimination performance at times when expert-like holistic processing strategies are utilized.

Eye-tracking measures can provide both quantitative information about visual fixations and qualitative information about viewing strategies. Visual fixations involve a 100- to 300-millisecond gaze at a stimulus, or a portion of a stimulus, where information is gathered (Rayner, 1998). It is common to define a fixation by a lower threshold of 100 to 200 ms, with a 200-ms threshold being more conservative and accounting for variability in eye-tracking equipment (Salvucci & Goldberg, 2000). Fixations are essential for processing of visual information and are generally more prevalent in informative areas of a stimulus (Loftus & Mackworth, 1978). Viewing strategies can also be examined by measuring scan paths, which are sequences of fixations on a stimulus that reflect how that stimulus is being processed (Noton & Stark, 1971).

Research examining the visual fixations of expert radiologists and medical professionals demonstrates that experts may exhibit specific fixation patterns that reflect their greater level of expertise (Drew, Evans, Vo, Jacobson, & Wolfe, 2012; Kundel & La Follette, 1972; Fox, Law, & Faulkner-Jones, 2017; Krupinski, Graham, & Weinstein, 2013). Namely, they are faster and require fewer fixations to respond during a task. Kundel and La Follette (1972) examined the visual fixations of trained radiologists, as well as untrained novice participants, while they viewed chest radiographs in search of abnormalities. While novices showed no consistent fixation strategies, expert radiologists had concise visual strategies involving fewer fixations and movement of the eyes directly to regions of interest containing an abnormality. Krupinski et al. (2013)
found similar results; they examined how resident trainees visually analyze images of breast biopsies across the years of their training. As training progressed, visual strategies became more efficient, with overall fewer fixations, and fixations directed towards medically salient areas of the images. Past research outside the field of perceptual expertise has also informed our understanding of the impact of experience on visual fixations and the extent to which cognitive task demands impact visual fixations. Research on scene viewing suggests that changes in fixations are a direct result of what is being viewed, and that fixations are longer and fewer when cognitive processing is more difficult (i.e., if the scene is more complex or densely cluttered) (Henderson, 2011; Rayner & Pollatsek, 1981; Henderson & Choi, 2015; Rayner, 2009). Longer fixation durations have also been found when images were of a lower quality, and during memorization tasks (Loftus, 1985; Mills, Hollingworth, Van der Stigchel, Hoffman, & Dodd, 2011). Additionally, during reading, when text is more difficult, fixation duration increases and fixation number decreases (Rayner, 2009). This effect extends to the difficulty of the font being read, where a more complex font yields longer fixations (Rayner, Reichle, Stroud, Williams, & Pollatsek, 2006). Together, these findings suggest that when the task or the image being viewed is more complex, fixation durations are longer and fewer in number. In the context of perceptual expertise, it is predicted that with training, fixations should decrease in number and increase in duration. However, it is unclear whether this pattern will occur after both subordinate and basic level training, or just for subordinate trained objects. This prediction is consistent with past research with radiologists and medical residents (Kundel & La Follette, 1972; Krupinski et al., 2013).
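Because the fixation counts and durations discussed above depend on how raw gaze samples are segmented into fixations, it may help to see the dispersion-threshold idea from Salvucci and Goldberg (2000) expressed in code. The following is a minimal illustrative sketch, not the algorithm used by any particular eye tracker or by the present study; the sampling rate, minimum duration, and dispersion limit are assumed example values.

```python
# Illustrative dispersion-threshold (I-DT style) fixation detection,
# after Salvucci & Goldberg (2000). The 500 Hz rate, 200 ms minimum
# duration, and 35 px dispersion limit are assumed example values.

def detect_fixations(samples, hz=500, min_dur_ms=200, max_disp_px=35):
    """Return (start_index, end_index, duration_ms) for each fixation.

    `samples` is a list of (x, y) gaze positions recorded at `hz` Hz.
    A fixation is a run of samples at least `min_dur_ms` long whose
    dispersion (x range + y range) stays within `max_disp_px`.
    """
    min_len = int(min_dur_ms * hz / 1000)  # samples in the minimum window
    fixations, i, n = [], 0, len(samples)
    while i + min_len <= n:
        window = list(samples[i:i + min_len])
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_disp_px:
            # Grow the window until dispersion would exceed the limit.
            j = i + min_len
            while j < n:
                xs.append(samples[j][0])
                ys.append(samples[j][1])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_disp_px:
                    xs.pop()
                    ys.pop()
                    break
                j += 1
            fixations.append((i, j, (j - i) * 1000 / hz))
            i = j
        else:
            i += 1
    return fixations
```

Fixation count is then the length of the returned list, and average fixation duration is the mean of the third element, which is how the quantitative measures above are derived from the raw gaze stream.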
In addition to fixation information, the strategy used to scan a scene or image can provide information about processing. In one study, when participants viewed scenes with the goal of memorizing the entire image, there was broader scanning of the images compared to a target search task without memorization (Castelhano, Mack, & Henderson, 2009). Broader scan paths were also used during viewing of complex scenes compared to simpler images, and emotional compared to neutral scenes (Bradley, Houbova, Miccoli, Costa, & Lang, 2011). Together, these previous findings suggest that when attentional demands are greater, visual stimuli are scanned more broadly. Analysis of visual fixation patterns can provide quantitative information about where participants are looking, but can also measure the consistency of the strategies across trials or with training. The ScanMatch Matlab Toolbox (Cristino, Mathot, Theeuwes, & Gilchrist, 2010) is a program that inputs fixation locations and durations and recodes them into a sequence of letters, retaining fixation information. The program then computes the similarity of the scan paths using this fixation information. The ScanMatch program outputs similarity scores that show the degree to which participants develop consistent scanning strategies, with higher similarity scores interpreted as more consistent scanning strategies (Cristino et al., 2010). Madsen, Larson, Loschky, and Rebello (2012) utilized ScanMatch to examine scanning strategies during the completion of physics problems. Results showed the ScanMatch similarity scores were high within the group of participants who answered the problems correctly and within the group of participants who answered the problems incorrectly. The correct answer group consistently looked at problem relevant information, while the incorrect answer group consistently looked at
novice, problem irrelevant information (Madsen et al., 2012). When the overall looking strategy for the correct answer group was compared to the incorrect answer group, similarity scores were much lower, suggesting that the correct and incorrect answer groups had different looking strategies. Overall, the ScanMatch Toolbox is an excellent tool to examine the emergence of visual strategy use accompanying the acquisition of perceptual expertise. The current investigation directly examines changes in visual strategies accompanying the acquisition of perceptual expertise by measuring fixation and scan path information before and after basic and subordinate level object training. It is predicted that more variable visual strategies will be used after training at the basic versus the subordinate level. More specifically, it is expected that subordinate level training should lead to a decrease in fixation number and an increase in fixation duration compared to basic level training. Subordinate level training is also predicted to produce increasingly consistent looking strategies within subjects, leading to an increase in similarity scores after subordinate, but not basic, level training. If these predictions are supported, it would suggest that subordinate level training leads to the emergence of consistent changes in visual fixations that accompany the acquisition of perceptual expertise. The current investigation trained participants with two families of novel objects at the basic or subordinate level and measured pre- and post-training discrimination and visual fixations, while manipulating image color and spatial frequency information as well as the viewing window (using the gaze contingent masking paradigm). Behavioral performance during a discrimination task was assessed, and visual strategy use was examined through eye tracking. The current investigation had four main goals. First, the
principal goal of the study was to examine changes in visual strategies that accompanied the acquisition of expertise. Second, the current study examined how these changes in visual strategies impacted the development of expert like holistic processing through the use of a gaze contingent viewing manipulation. Third, the present study aimed to determine the extent to which surface features like color and spatial frequency impacted discrimination performance and visual fixations before and after training. Lastly, the current study sought to replicate past expertise training studies that used real world stimuli such as birds and cars (Scott et al., 2006; 2008), and extended this research through the use of novel, computer generated stimuli. Based on previous research, we predict that accuracy on the discrimination task will increase after subordinate, but not basic, level training, and that this learning will generalize to unseen exemplars. We also predict that eye tracking measures will show an increase in fixation duration and a decrease in fixation number, as well as the emergence of consistent scanning strategies, after subordinate, but not basic, level training. We expect the gaze contingent viewing manipulation to reveal that, when holistic processing is not possible (as in the foveal view condition), performance on the discrimination task will be impaired. Lastly, we hypothesize that within the image manipulations, the color images will yield significantly higher accuracy than the greyscale, HSF, and LSF images. Results will contribute to our understanding of the visual strategies and features that are important for the acquisition of expertise.
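Discrimination accuracy in designs of this kind is indexed by the signal detection measure d', the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch of the standard computation follows; the log-linear correction for hit or false-alarm rates of 0 or 1 is one common convention, assumed here for illustration rather than taken from this study.

```python
# Sketch of the signal detection measure d' for a same/different task:
# d' = z(hit rate) - z(false-alarm rate). The log-linear correction
# (adding 0.5 to each cell count) is an assumed convention for avoiding
# infinite z-scores when a raw rate is exactly 0 or 1.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

For example, 45 hits and 5 false alarms out of 50 trials each gives a d' of roughly 2.5, while chance performance (equal hit and false-alarm rates) gives a d' of 0.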
CHAPTER 2
MATERIALS AND METHODS

Participants

Participants included 36 adults, but 8 were lost due to poor or missing eye tracking data. The remaining 28 participants (age range: 18 to 31; M = 22.64, SD = 3.37) were students at the University of Massachusetts Amherst. Based on self report, participants included 17 females and 11 males; 71.4% were White, 25.0% Asian or Pacific Islander, and 3.6% Black or African American (ethnic makeup: 96.4% not Hispanic or Latino, 3.6% Hispanic or Latino). See Table 2-1 for more participant demographic information. Participants were paid $15 per hour for pre- and post-training assessments and $10 per hour for each of the 6 behavioral training sessions. Participants received a $25 bonus for completing all sessions. All participants provided written consent and all procedures were approved by the IRB at the University of Massachusetts Amherst.

Stimuli

The stimuli included 240 novel, computer generated objects. Stimuli were generated and edited using Modo (Luxology, LLC). Stimuli were created to form two families of objects (Family A and Family S). Each family included 10 unique species, each containing 12 exemplars (see Figure 2-1A). All objects were cropped and scaled to fit within a frame of 500 by 500 pixels and presented on a grey background. Images measured 17 cm x 17 cm and were presented at a visual angle of approximately 13.85 degrees horizontally and vertically. For training, all images were presented in full color. To examine the effect of stimulus features such as color and spatial frequency on discrimination, stimuli were
presented across four different image manipulation conditions for the pre- and post-test assessments. These image manipulations included: full color images, greyscale images, high spatial frequency images (HSF; > 8 cycles per image, cpi), and low spatial frequency images (LSF; < 8 cpi) (see Figure 2-1B). To examine visual strategy use during discrimination, stimuli were also presented in one of three gaze contingent mask conditions during the pre- and post-test assessments. These included 1) a full image condition, where the entire image was in view, 2) a peripheral view condition, where only the outer edges of the image were in view, and 3) a foveal view condition, where only the center of the image was in view and the area outside the gaze center was blocked from view (see Figure 2-1C). The foveal view condition ensured featural processing, while the peripheral view condition ensured holistic processing. The gaze contingent mask was 104 pixels horizontally and 156 pixels vertically, making up approximately 21% of the screen horizontally and 31% of the screen vertically. During training, all objects were presented as the full image.

Procedure

The experiment consisted of 6 training sessions (about 1.5 hours per session) over a 2-3 week period, for a total of 9 hours, plus 1 pre-training and 1 post-training assessment (approximately 2 hours each). During the pre- and post-training assessments, participants completed a serial matching task, and accuracy (d') and visual fixation data were measured. During the training, participants were presented with full color images and were asked to determine which species they belonged to using numbers on a keyboard. Feedback was provided after each trial.
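The HSF and LSF manipulations correspond to high- and low-pass filtering at a cutoff of 8 cycles per image. As a rough sketch of how such a cutoff can be applied (a hard radial mask in the Fourier domain; the actual stimulus filters may have used a different, e.g. smoother, mask shape):

```python
# Sketch of spatial-frequency filtering at a cutoff in cycles per image
# (cpi), using a hard radial mask in the Fourier domain. Illustrative
# only; the filter shape used for the actual stimuli is assumed.
import numpy as np

def sf_filter(image, cutoff_cpi=8, keep="low"):
    """Keep frequencies at or below (keep='low') or above (keep='high')
    `cutoff_cpi` in a 2-D greyscale image array."""
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    fy = np.fft.fftshift(np.fft.fftfreq(h)) * h  # vertical cpi per bin
    fx = np.fft.fftshift(np.fft.fftfreq(w)) * w  # horizontal cpi per bin
    radius = np.hypot(fy[:, None], fx[None, :])  # radial frequency
    mask = radius <= cutoff_cpi if keep == "low" else radius > cutoff_cpi
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))
```

With this scheme, an LSF image retains only coarse luminance structure (global shape), while an HSF image retains only fine detail (edges and texture), which is what makes the two conditions informative about featural versus holistic processing.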
Training

Participants completed six days of training on 5 species per family using a naming task (Scott et al., 2006; 2008) (Figure 2-2A). Participants were trained using a subset of 6 exemplars per species per family. Fifteen participants completed subordinate level training with novel object Family A and basic level training with Family S, while 13 participants completed subordinate level training with Family S and basic level training with Family A. Participants completed a total of 25 blocks and a total of 900 trials during each session. In each block, participants were asked to label between one and five species per family using numbers. Exemplars were randomized across blocks between sessions, and all trained exemplars were presented during each session. The difficulty of the training was manipulated between subjects. Participants were assigned to one of three training difficulty conditions: increasing difficulty, decreasing difficulty, or randomized difficulty. Difficulty was manipulated via the number of species participants were required to name across blocks in a single session. In each training block, participants were asked to label objects from between one and five species per family. In the increasing difficulty condition, sessions began with blocks consisting of one species per family and gradually increased to five; in the decreasing difficulty condition, sessions began with blocks consisting of five species per family and gradually decreased to one; and in the randomized difficulty condition, the number of species that participants were asked to label randomly varied. As no significant differences in performance were found due to the difficulty manipulation, this manipulation was collapsed across and not included as a factor in any further analyses.
During each trial, participants viewed a single object for 1000 ms before being prompted to respond with the proper subordinate level (species) label. Participants had 2000 ms to respond before the next trial began. For the subordinate trained family, participants were asked to label the species, and at the beginning of each block, participants were shown the possible label choices (between one and five). For the basic trained family, participants were asked to always respond to exemplars by pressing the spacebar on the keyboard. Feedback was shown if participants answered incorrectly. The remaining 5 species within each family were untrained and used during the pre- and post-test assessments to examine generalization of learning.

Pre- and Post-test Assessments: Serial Matching Task

Before and after training, participants completed a serial matching task (see Figure 2-2B). During this task, discrimination accuracy (d'), total fixation duration, average fixation duration, and fixation count were measured. Participants completed a total of 1,440 trials, divided into blocks of 480 trials for each of the three gaze contingent mask manipulations (full image, peripheral view, and foveal view). Image manipulation was also a within subjects factor, so trials were divided equally across the four color and spatial frequency image manipulations (120 trials per condition per block). Additionally, trained exemplars, untrained exemplars of trained species, and novel species stimuli were included in order to measure generalization of learning (180 trials per image type per block). The same stimuli were included in pre- and post-training assessments, but were randomly ordered and paired. During image presentation, participants were seated 90 cm away from an LCD monitor. On each test trial,
a fixation cross was first presented in the center of the screen for 300 ms. The first of two objects was presented for 3000 ms, followed by another fixation cross for 300 ms. The second object was then presented for 3000 ms. After the second object was presented, a response screen appeared, asking the participant to press a key indicating whether the two presented objects were from the same or from a different species. Within a trial, the first object reflected the encoding phase and recall of the species, and the second image reflected the decision making phase, where same or different species was chosen. Same species trials were always different exemplars from the same species, while different species trials were always exemplars from two separate species. Participants were given up to 15000 ms to answer before beginning the next trial. Trials were divided equally across image types within each training condition (trained, untrained exemplars from trained species, new exemplars from untrained species), each color or spatial frequency manipulation, each gaze contingent mask condition, and each family, resulting in 10 trials per specific condition. This design increased statistical power and allowed examination of both successful discrimination between species ("different" species trials) and generalization across exemplars ("same" species trials) as measures of expertise.

Pre- and Post-test Assessments: Eye-tracking Procedure

An EyeLink 1000 remote camera eye tracker (SR Research Ltd, Mississauga, Ontario, CA) was used to record participants' visual fixations while they viewed the objects presented on a 17 inch LCD monitor. Fixation location and duration were recorded with an average accuracy of 0.5 degrees and a sampling rate of 500 Hz using a 35mm lens and a 940nm infrared illuminator. A fixation was defined by a conservative lower
threshold of 200 ms to account for variability in eye tracking equipment (Salvucci & Goldberg, 2000). Allowable head movement without accuracy reduction was approximately 22 x 18 x 20 cm (horizontally x vertically x depth). The arm mount gaze tracking range was approximately 32 degrees horizontally and 25 degrees vertically. Noise due to head movements was minimized through the use of a head stabilizer with a chin rest sitting approximately 90 cm from the screen. An eye track was recovered within 2 ms of losing the track.

Data Analysis

All behavioral and fixation data were analyzed in SPSS using MANOVAs. To examine the unique effects of the image manipulations and gaze contingent viewing manipulations, separate analyses were conducted. First, the impact of the image manipulation on discrimination performance was evaluated for the full image stimuli (no gaze contingent mask). Next, the effect of the gaze contingent mask was examined using only the full color stimuli. Training level (basic or subordinate), test phase (pre- or post-test), and generalization condition (trained, untrained stimuli from trained species, and new species) were included as within subject factors for both analyses. For all analyses, paired sample t tests were used to follow up significant interactions, and both corrected (using the Bonferroni method) and uncorrected p values were reported. Only data on the second image in each trial was analyzed. To examine qualitative differences in visual fixations from pre- to post-test and across different conditions, scanning patterns were analyzed using the ScanMatch Matlab Toolbox (Cristino et al., 2010). Fixation locations were determined by dividing each image into a grid of 21 horizontal x 17 vertical bins. Letters were assigned to each bin on the grid. For each 100 ms that a fixation was located on the grid, the letters
corresponding to that bin on the grid were added to a sequence, creating a letter sequence representing all fixations from each trial. Sequences for each trial within a condition were then compared, and a similarity score was computed for each pair of trials. A higher similarity score reflected more similar patterns of visual fixations across trials, within each participant. Detailed analysis information is included in the results section.
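The encoding scheme described above (a 21 x 17 grid over the image, one symbol per 100 ms of fixation) can be sketched as follows. This is an illustrative simplification: the grid size and bin duration come from the Methods, but the similarity function below is a generic sequence matcher standing in for ScanMatch's Needleman-Wunsch alignment with a spatial substitution matrix (Cristino et al., 2010), so the scores it produces are not ScanMatch scores.

```python
# Sketch of the scan-path encoding described in the Methods: a 21 x 17
# grid over a 500 x 500 px image, one grid-cell symbol per 100 ms of
# fixation. The similarity function is a generic stand-in for
# ScanMatch's Needleman-Wunsch alignment; scores are illustrative only.
from difflib import SequenceMatcher

GRID_W, GRID_H, BIN_MS = 21, 17, 100

def encode_scanpath(fixations, img_w=500, img_h=500):
    """Turn (x, y, duration_ms) fixations into a symbol sequence."""
    seq = []
    for x, y, dur in fixations:
        col = min(int(x * GRID_W / img_w), GRID_W - 1)
        row = min(int(y * GRID_H / img_h), GRID_H - 1)
        # One symbol per 100 ms bin, so longer fixations weigh more.
        seq.extend([row * GRID_W + col] * max(1, int(dur // BIN_MS)))
    return seq

def similarity(seq_a, seq_b):
    """0-1 similarity between two encoded scan paths."""
    return SequenceMatcher(None, seq_a, seq_b).ratio()
```

Two trials with identical scan paths score 1.0, and trials that visit entirely different grid cells score near 0, which is the intuition behind interpreting higher similarity scores as more consistent scanning strategies.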
Table 2-1. Participant demographics

Measure           Value
Age               M = 22.64, SD = 3.37
Race              71.4% White; 25.0% Asian or Pacific Islander; 3.6% Black or African American
Ethnicity         96.4% not Hispanic or Latino; 3.6% Hispanic or Latino
College GPA       M = 3.31, SD = 0.46
High School GPA   M = 3.66, SD = 0.53
SAT Score         M = 1755.53, SD = 293.36
Field of Study    46.4% natural sciences; 25.0% public health; 21.4% arts and humanities; 7.1% business
Figure 2-1. Examples of novel stimuli and image manipulations. A) Stimuli were two families (Family A, Family S) of novel, computer generated objects. The trained species were labeled 1-5. Species 6-10 were untrained and used to test generalization of learning. An example of an object in each species is shown. 12 exemplars were used for each species. B) For the serial matching task, images were shown either in full color, greyscale, with HSF information visible, or with LSF information visible. C) For the serial matching task, images were shown in one of three gaze contingent viewing conditions, including the peripheral view, full image, or foveal view.
Figure 2-2. Training task and serial matching task. A) Each training trial consisted of the presentation of one image for 1000 ms, followed by a response screen prompting the participant to press a key corresponding to the correct species number. B) Each serial matching task trial began with a central fixation cross, followed by the presentation of one species exemplar, another fixation cross, and finally a second exemplar. Participants were then prompted to press a key to indicate whether the two objects were from the same or different species.
CHAPTER 3
RESULTS

Image Manipulation Pre-test and Post-test Accuracy

Change in d' was analyzed using a 2 x 2 x 3 x 4 MANOVA, with two levels of test phase (pre-test, post-test), two levels of training (basic, subordinate), three levels of generalization condition (trained exemplars, untrained exemplars of trained species, new species exemplars), and four levels of image manipulation (full color, greyscale, HSF, LSF). There was a significant main effect of test phase, F(1, 27) = 17.41, p < 0.001, ηp² = 0.39, such that post-test d' (M = 1.45, SD = 0.08) was significantly greater than pre-test d' (M = 1.17, SD = 0.08). There was an additional main effect of generalization condition, F(2, 26) = 5.73, p = 0.01, ηp² = 0.31. d' for trained exemplars (M = 1.36, SD = 0.07) and untrained exemplars of trained species (M = 1.37, SD = 0.08) was significantly greater than d' for new species exemplars (M = 1.20, SD = 0.07; t(27) = 3.24, p < 0.01, p < 0.01 corrected; t(27) = 3.37, p < 0.01, p < 0.01 corrected). There was a significant main effect of image manipulation, F(3, 25) = 6.90, p < 0.01, ηp² = 0.45, such that there was a greater d' for full color images (M = 1.41, SD = 0.06) relative to the LSF (M = 1.21, SD = 0.08; t(27) = 4.18, p < 0.001, p < 0.05 corrected) and HSF images (M = 1.27, SD = 0.08; t(27) = 3.22, p < 0.01, p < 0.05 corrected). d' for greyscale images (M = 1.35, SD = 0.08) was also greater than for LSF images (t(27) = 3.10, p = 0.01, p < 0.05 corrected). These main effects were qualified by an interaction between test phase and training level, F(1, 27) = 10.85, p < 0.01, ηp² = 0.29. See Figure 3-1 to view this interaction, and Table 3-1 for means and standard deviations. This interaction was due
to a significant increase in d' from pre-test to post-test for stimuli trained at the subordinate level (t(27) = 6.10, p < 0.001, p < 0.001 corrected). No increase was found for stimuli trained at the basic level (t(27) = 1.36, p = 0.18, p > 0.05 corrected). d' for the basic and subordinate training levels did not differ at pre-test (t(27) = 1.61, p = 0.12, p > 0.05 corrected) but was significantly higher for the subordinate training level than the basic training level at post-test (t(27) = 2.46, p = 0.01, p < 0.05 corrected).

Eye Tracking

Three measures of visual fixations were examined. Total fixation duration during the trial (dwell time, ms), average duration of each fixation (ms), and fixation count were analyzed separately in response to the second image of the serial matching task. Fixation data was analyzed only for the area of each object and directly surrounding areas, not for the entire screen. Scanning patterns for the second image were also examined to measure qualitative differences in looking strategies across the entire screen. As the second image reflects the discrimination decision making portion of the trial, only eye tracking data for the second image was analyzed. A 2 x 2 x 3 x 4 factor MANOVA with two levels of test phase (pre-test, post-test), two levels of training (basic, subordinate), three levels of generalization condition (trained exemplars, untrained exemplars of trained species, new species exemplars), and four levels of image manipulation (full color, greyscale, HSF, LSF) was conducted for each measure.

Dwell time. There were no significant dwell time main effects or interactions for image manipulation. See Figure 3-2A and Table 3-2 for means and standard deviations.
Average fixation duration. There was a significant main effect of test phase, F(1, 27) = 6.85, p = 0.01, ηp² = 0.17, such that there was an increase in average fixation duration from pre-test (M = 328.63, SD = 18.32) to post-test (M = 401.32, SD = 34.81). See Figure 3-2B to view this main effect, and Table 3-3 for means and standard deviations. No other main effects or interactions were found.

Fixation count. There was a significant main effect of test phase, F(1, 27) = 7.44, p = 0.01, ηp² = 0.22, such that the number of fixations decreased from pre-test (M = 8.83, SD = 0.45) to post-test (M = 7.75, SD = 0.46). See Figure 3-2C to view this main effect, and Table 3-4 for means and standard deviations. There was also a significant main effect of generalization condition, F(2, 26) = 5.58, p = 0.01, ηp² = 0.30, such that the trained exemplars (M = 8.23, SD = 0.41; t(27) = 3.21, p < 0.01, p < 0.05 corrected) and untrained exemplars of trained species (M = 8.28, SD = 0.40; t(27) = 2.28, p = 0.03, p > 0.05 corrected) were fixated fewer times than the new untrained species exemplars (M = 8.37, SD = 0.41).

Scanning patterns. To examine qualitative differences in visual fixations from pre- to post-test and across training and image manipulation conditions, scanning patterns were analyzed using the ScanMatch Matlab Toolbox (Cristino et al., 2010). See the Data Analysis section of the Methods for more information on the ScanMatch Toolbox. A 2 x 2 x 3 x 4 factor MANOVA with two levels of test phase (pre-test, post-test), two levels of training (basic, subordinate), three levels of generalization condition (trained exemplars, untrained exemplars of trained species, new species exemplars)
and four levels of image manipulation (full color, greyscale, HSF, LSF) was conducted using the within subjects similarity scores. There was a significant main effect of test phase, F(1, 27) = 20.18, p < 0.001, ηp² = 0.43, where similarity scores increased from pre-test (M = 0.64, SD = 0.01) to post-test (M = 0.68, SD = 0.01). There was also a significant main effect of training level, F(1, 27) = 27.04, p < 0.001, ηp² = 0.50, where similarity scores were higher for the subordinate level training condition (M = 0.67, SD = 0.01) than the basic level training condition (M = 0.64, SD = 0.01). These main effects were qualified by a significant interaction between test phase and training level, F(1, 27) = 8.45, p = 0.01, ηp² = 0.24. See Figure 3-3 to view this interaction, and Table 3-5 for means and standard deviations. This interaction was due to a significant increase in similarity scores from pre-test to post-test after subordinate level training (t(27) = 5.36, p < 0.001, p < 0.001 corrected). No change was found in similarity scores after basic level training (t(27) = 1.90, p = 0.07, p > 0.05 corrected). Similarity scores for the basic and subordinate training levels did not differ at pre-test (t(27) = 1.70, p = 0.10, p > 0.05 corrected), but were significantly higher for the subordinate training level than the basic training level at post-test (t(27) = 5.89, p < 0.001, p < 0.001 corrected).

Gaze Contingent Viewing Manipulation Pre-test and Post-test Accuracy

A 2 x 2 x 3 x 3 factor MANOVA, with two levels of test phase (pre-test, post-test), two levels of training (basic, subordinate), three levels of generalization condition (trained exemplars, untrained exemplars of trained species, new species exemplars), and three levels of gaze contingent viewing manipulation (foveal view, full image (no mask), peripheral view) was run for d'. There was a significant main effect of test phase, F(1, 27) = 23.49, p < 0.001, ηp² = 0.47, with post-test (M = 1.53, SD = 0.10) being
higher than pre-test (M = 1.16, SD = 0.07). There was an additional main effect of generalization condition, F(2, 26) = 4.16, p = 0.03, ηp² = 0.24. d' for untrained exemplars of trained species (M = 1.40, SD = 0.09) was significantly greater than d' for new species exemplars (M = 1.26, SD = 0.08; t(27) = 2.90, p = 0.06, p < 0.05 corrected). These effects were qualified by a significant test phase by training level interaction, F(1, 27) = 8.86, p = 0.01, ηp² = 0.25. See Figure 3-4 to view this interaction, and Table 3-1 for means and standard deviations. This interaction was due to a significant increase in d' from pre-test to post-test for stimuli trained at the subordinate level (t(27) = 9.74, p < 0.001, p < 0.001 corrected). No increase was found for stimuli trained at the basic level (t(27) = 1.40, p = 0.17, p > 0.05 corrected). d' for the basic and subordinate training levels did not differ at pre-test (t(27) = 1.61, p = 0.19, p > 0.05 corrected) but was significantly higher for the subordinate training level than the basic training level at post-test (t(27) = 2.46, p = 0.01, p < 0.05 corrected).

Eye Tracking

Total fixation time (ms), average fixation duration (ms), and fixation count were once again analyzed for the second image in each trial using a 2 x 2 x 3 x 3 factor MANOVA with three levels of gaze contingent mask manipulation (foveal view, full image (no mask), peripheral view), two levels of training (basic, subordinate), three levels of generalization condition (trained exemplars, untrained exemplars of trained species, new species exemplars), and two levels of test phase (pre-test, post-test).

Dwell time. There were no significant main effects or interactions across viewing conditions. See Figure 3-5A and Table 3-2 for means and standard deviations.
Average fixation duration. There was a significant main effect of gaze contingent viewing condition, F(2, 26) = 14.60, p < 0.001, ηp² = 0.48, such that the average fixation durations for the full image condition (M = 361.90, SD = 23.99; t(27) = 5.17, p < 0.001, p < 0.001 corrected) and the peripheral view condition (M = 376.80, SD = 22.16; t(27) = 3.73, p < 0.001, p < 0.01 corrected) were longer than the foveal view condition (M = 284.44, SD = 10.82). There was also a marginally significant interaction between test phase and gaze contingent viewing condition, F(2, 26) = 2.97, p = 0.06, ηp² = 0.16. This trend was due to increased average fixation duration from pre-test (M = 324.30, SD = 18.63) to post-test (M = 399.50, SD = 38.43) for the full image condition only. See Figure 3-5B to view this trend, and Table 3-3 for means and standard deviations.

Fixation count. There was a significant main effect of gaze contingent mask, F(2, 26) = 12.96, p < 0.001, ηp² = 0.50, due to a greater number of fixations for the foveal view condition (M = 9.46, SD = 0.36) than the peripheral view (M = 7.80, SD = 0.45; t(27) = 4.82, p < 0.001, p < 0.01 corrected) and full image conditions (M = 8.32, SD = 0.40; t(27) = 3.96, p < 0.001, p < 0.01 corrected). There was also a significant main effect of generalization condition, F(2, 26) = 4.42, p = 0.02, ηp² = 0.25, such that the new species stimuli (M = 8.58, SD = 0.35) were fixated more times than the trained exemplars (M = 8.46, SD = 0.36; t(27) = 2.57, p = 0.02, p < 0.05 corrected). There was also a significant test phase by gaze contingent mask interaction, F(2, 26) = 4.12, p = 0.03, ηp² = 0.24, showing a decrease in fixation number from pre-test (M = 8.89, SD = 0.48) to post-test (M = 7.75, SD = 0.44) for the full image condition only (t(27) = 2.53, p = 0.01, p < 0.05 corrected).
See Figure 3-5C to view this interaction, and Table 3-4 for means and standard deviations.
Table 3-1. Pre- and post-test d' means and standard deviations by manipulation

Manipulation           Training      Pre-test M (SD)   Post-test M (SD)
Color                  Basic         1.30 (0.08)       1.43 (0.11)
                       Subordinate   1.21 (0.09)       1.72 (0.12)
Greyscale              Basic         1.22 (0.11)       1.42 (0.11)
                       Subordinate   1.15 (0.11)       1.60 (0.10)
HSF                    Basic         1.23 (0.11)       1.27 (0.12)
                       Subordinate   1.10 (0.10)       1.48 (0.12)
LSF                    Basic         1.14 (0.10)       1.28 (0.13)
                       Subordinate   0.98 (0.11)       1.43 (0.11)
Full image (no mask)   Basic         1.30 (0.08)       1.43 (0.11)
                       Subordinate   1.21 (0.09)       1.72 (0.12)
Peripheral view        Basic         1.23 (0.11)       1.30 (0.15)
                       Subordinate   1.12 (0.10)       1.69 (0.14)
Foveal view            Basic         1.11 (0.11)       1.46 (0.13)
                       Subordinate   0.97 (0.11)       1.60 (0.12)

Note: * indicates a significant change from pre-test to post-test.
Table 3-2. Pre- and post-test dwell time (ms) means and standard deviations by manipulation

Manipulation           Training      Pre-test M (SD)    Post-test M (SD)
Color                  Basic         2346.03 (59.66)    2376.88 (67.78)
                       Subordinate   2405.88 (47.83)    2414.83 (60.94)
Greyscale              Basic         2395.21 (53.87)    2334.91 (75.79)
                       Subordinate   2418.03 (61.61)    2379.99 (69.64)
HSF                    Basic         2415.68 (42.51)    2323.00 (80.15)
                       Subordinate   2428.78 (45.53)    2359.05 (73.82)
LSF                    Basic         2360.79 (68.98)    2372.17 (73.17)
                       Subordinate   2402.38 (64.75)    2388.41 (74.64)
Full image (no mask)   Basic         2346.03 (59.66)    2376.88 (67.78)
                       Subordinate   2405.89 (47.83)    2414.83 (60.94)
Peripheral view        Basic         2376.51 (61.95)    2330.44 (68.95)
                       Subordinate   2350.30 (53.12)    2325.81 (66.90)
Foveal view            Basic         2388.43 (43.55)    2387.17 (44.21)
                       Subordinate   2456.77 (42.46)    2425.37 (42.01)

Note: * indicates a significant change from pre-test to post-test.
Table 3-3. Pre- and post-test fixation duration means and standard deviations by manipulation

Manipulation          Training       Pre-test M (SD)    Post-test M (SD)
Color                 Basic          303.88 (14.70)     398.01 (49.00)
                      Subordinate    339.28 (24.78)     385.59 (37.03)
Greyscale             Basic          324.88 (21.48)     390.77 (42.23)
                      Subordinate    328.54 (25.07)     387.25 (35.61)
HSF                   Basic          335.82 (23.81)     355.03 (31.17)
                      Subordinate    342.16 (25.88)     361.24 (28.86)
LSF                   Basic          303.25 (13.16)     398.08 (44.15)
                      Subordinate    324.88 (18.28)     394.38 (24.78)
Full image (no mask)  Basic          303.88 (14.70)     398.01 (49.00)
                      Subordinate    339.28 (24.78)     385.59 (37.03)
Peripheral view       Basic          386.67 (32.49)     367.99 (23.62)
                      Subordinate    339.28 (24.78)     385.59 (37.03)
Foveal view           Basic          273.83 (12.59)     279.99 (17.23)
                      Subordinate    286.25 (14.37)     289.79 (14.19)

Note: * indicates a significant change from pre-test to post-test.
Table 3-4. Pre- and post-test fixation number means and standard deviations by manipulation

Manipulation          Training       Pre-test M (SD)   Post-test M (SD)
Color                 Basic          8.98 (0.47)*      7.75 (0.50)*
                      Subordinate    8.80 (0.51)*      7.76 (0.44)*
Greyscale             Basic          8.99 (0.52)*      7.85 (0.53)*
                      Subordinate    8.83 (0.53)*      7.84 (0.50)*
HSF                   Basic          8.78 (0.44)*      7.89 (0.48)*
                      Subordinate    8.74 (0.46)*      7.44 (0.42)*
LSF                   Basic          8.88 (0.36)*      7.60 (0.49)*
                      Subordinate    8.80 (0.51)*      7.76 (0.44)*
Full image (no mask)  Basic          8.98 (0.44)*      7.75 (0.45)*
                      Subordinate    8.80 (0.51)*      7.76 (0.44)*
Peripheral view       Basic          7.80 (0.44)       7.82 (0.50)
                      Subordinate    7.87 (0.50)       7.72 (0.45)
Foveal view           Basic          9.52 (0.40)       9.54 (0.42)
                      Subordinate    9.48 (0.37)       9.29 (0.40)

Note: * indicates a significant change from pre-test to post-test.
Table 3-5. Pre- and post-test similarity score means and standard deviations by manipulation

Manipulation   Training       Pre-test M (SD)   Post-test M (SD)
Color          Basic          0.61 (0.01)       0.64 (0.02)
               Subordinate    0.64 (0.01)       0.70 (0.01)
Greyscale      Basic          0.62 (0.01)       0.65 (0.01)
               Subordinate    0.65 (0.01)       0.70 (0.01)
HSF            Basic          0.64 (0.01)       0.64 (0.01)
               Subordinate    0.64 (0.01)       0.70 (0.01)
LSF            Basic          0.63 (0.01)       0.65 (0.01)
               Subordinate    0.63 (0.01)       0.70 (0.01)

Note: * indicates a significant change from pre-test to post-test.
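The similarity scores in Table 3-5 index how consistent a subject's scan paths were from pre- to post-test. The thesis cites the ScanMatch method (Cristino et al., 2010), which handles temporal binning and graded substitution costs; the sketch below shows only the core idea of that family of measures, a normalized edit distance between fixation sequences coded as strings of region-of-interest (ROI) labels. The ROI sequences are hypothetical.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two ROI-label strings, via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """1.0 for identical scan paths, approaching 0.0 for maximally different ones."""
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

# Pre-test vs. post-test fixation sequences for one hypothetical subject.
print(similarity("ABCDA", "ABCDA"))  # identical paths -> 1.0
print(similarity("ABCDA", "ABDDA"))  # one substituted fixation -> 0.8
```

On this 0-1 scale, a within-subject increase from pre- to post-test (as in the subordinate-trained rows of Table 3-5) indicates that the subject's scan paths became more consistent with one another.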
Figure 3-1. Mean discrimination accuracy (d′) difference from pre-test to post-test (post-test minus pre-test) across the four image manipulation conditions for stimuli trained at the basic or subordinate level. For each bar, the black line represents the mean difference, the boxes represent the 95% confidence interval of the difference, and each data point is marked with a dot. Significant changes from pre-test to post-test are indicated with a black arrow.
Figure 3-2. Fixation results for the image manipulation analysis. A) There was no change in dwell time from pre- to post-test in any of the image manipulation conditions. B) There was a significant increase in average fixation duration (in ms) from pre- to post-test across all image manipulations. C) There was a significant decrease in fixation number from pre- to post-test across all image manipulations. Black lines represent means, boxes represent the 95% confidence interval, and arrows represent main effects.
Figure 3-3. Similarity scores across training conditions and image manipulations. For the scan path analysis, similarity scores increased from pre- to post-test after subordinate-level training, but not basic-level training. Black lines represent means, boxes represent the 95% confidence interval, and arrows represent main effects.
Figure 3-4. Mean discrimination accuracy (d′) difference from pre-test to post-test (post-test minus pre-test) across the three gaze-contingent mask manipulation conditions for stimuli trained at the basic or subordinate level. For each bar, the black line represents the mean difference, the boxes represent the 95% confidence interval of the difference, and each data point is marked with a dot. Significant changes from pre-test to post-test are indicated with a black arrow.
Figure 3-5. Fixation results for the gaze-contingent viewing condition analysis. A) There was no change in dwell time from pre- to post-test in any of the gaze-contingent mask conditions. B) There was a marginally significant increase in average fixation duration (in ms) from pre- to post-test for the full-image condition only. C) There was a significant decrease in fixation number from pre- to post-test in the full-image condition only. Black lines represent means, boxes represent the 95% confidence interval, and significant changes from pre-test to post-test are indicated with a black arrow.
CHAPTER 4
DISCUSSION

The goal of the current study was to examine the differential impact of basic- and subordinate-level training on behavioral discrimination of novel object exemplars and on accompanying changes in visual strategies (as measured by eye tracking). Accuracy (d′) and eye-tracking measures were collected during pre- and post-test serial matching tasks to assess the impact of training on discrimination and visual fixations. The pre- and post-test assessments also included an image manipulation, in which the color and spatial frequency of the images were altered, and a gaze-contingent mask manipulation, which restricted viewing to foveal or peripheral information in order to probe featural and holistic processing strategies. Through the use of untrained and new exemplars, the serial matching task also measured whether learning generalized to unseen exemplars. Participants were trained to discriminate six species across two families of novel objects, with one family trained at the basic level and the other at the subordinate level. The subordinate-trained family was presented with specific, species-level labels, and the basic-trained family was presented with only a family-level label. Overall, examination of accuracy on the discrimination test showed that accuracy (measured by d′) improved from pre- to post-test after subordinate-level, but not basic-level, training. This improvement was consistent across the color and spatial frequency image manipulations, as well as the generalization conditions. Examination of visual fixations showed that, although there was no overall change in dwell time, there was a decrease in fixation number and an increase in average fixation duration from pre- to post-test across all four image manipulations. Importantly, this did not differ based on the level at which the objects were trained, suggesting that changes in fixations were due to
exposure to the stimuli through the training process rather than the specific type of training. Additionally, examination of visual fixation patterns showed that consistent patterns emerged within subjects after subordinate-, but not basic-level, training.

Perceptual Expertise

The current increases in d′ after subordinate-, but not basic-level, training are consistent with previous research (Tanaka et al., 2005; Scott et al., 2006; 2008). In these studies, participants were trained to discriminate real-world objects (i.e., birds, cars) and were better at discriminating these objects after subordinate-level training than after basic-level training. Through the use of trained exemplars, untrained exemplars of trained species, and new species, the current study also examined how learning generalized to unseen stimuli. Although discrimination performance was better for trained exemplars than for untrained and new exemplars, there was an improvement in discrimination performance from pre- to post-test after subordinate training for the trained, untrained, and new exemplars, suggesting that increased discrimination transferred from learned exemplars to untrained exemplars within trained species and to new-species exemplars. This is consistent with Scott et al. (2006), who found that after subordinate-level training, participants were better able to discriminate between trained bird species, as well as untrained birds within trained species and new species of birds. Interestingly, a similar study using car exemplars found a slightly different effect: subordinate-level training led to generalization of learning to untrained exemplars of trained car types, but not to new car types (Scott et al., 2008). These differences in transfer of learning could be due to the level of structural similarity between the exemplars within a species. High within-category similarity leads to more effective transfer of knowledge (Posner & Keele,
1968; Homa & Vosburgh, 1976), and stimuli such as the novel Sheinbugs and different species of birds have a high level of likeness and structural similarity compared to different models of cars, which may vary widely in appearance (e.g., the size and shape of a truck compared to a Volkswagen Beetle). This could explain why training with stimuli that have confined category characteristics, such as birds or Sheinbugs, generalized to other exemplars, while training with more diverse categories, such as cars, did not lead to the same degree of generalization of learning.

Impact of Image Manipulations on Perceptual Processing

Although the results of the current study aligned with past perceptual expertise research, the impact of image manipulations involving color and spatial frequency information did not. In the current study, the increases in d′ observed after subordinate-level training were consistent across the color and spatial frequency manipulations. In contrast, Hagen et al. (2014) found that color facilitated expert-level discrimination, leading to improved discrimination relative to greyscale stimuli. The level of color diagnosticity of the stimuli may explain this difference in results. Previous experience with birds may give birds high color diagnosticity, such that they are strongly associated with specific colors. The stimuli in the current study would have low color diagnosticity, since they are novel and had not previously been associated with particular colors. Because of this, color may not have been as important a factor in discriminating between the novel stimuli, leading to equal discrimination performance in the color and greyscale conditions on the serial matching task. Hagen et al. (2016) also found that spatial frequency can impact discrimination of bird exemplars, with a mid-range of spatial frequencies being optimal for discrimination. In addition, past research exploring real-world object recognition has
found that internal features (high spatial frequency information) are important for subordinate-level discrimination (Collin & McMullen, 2005). It is important to note that the HSF range in the current study (>8 cpi) encompassed both the mid-range and HSF ranges used by Hagen and colleagues (2016). Based on those ranges, it was expected that the HSF condition in the current study would lead to increased discrimination after subordinate-level training. Instead, no differences in discrimination across the spatial frequency manipulations were observed for the novel objects. Since discrimination performance was consistently high after subordinate-level training across both the high and low spatial frequency manipulations, this poses the question of why novel objects and real-world objects are differentially affected by image manipulations. Past work has shown that expertise requires a depth of knowledge built over long-term experience, education, and training (Starbuck, 1992; Sveiby, 1997). Real-world experts have a depth of knowledge and level of experience within their domains of expertise that our subordinate-level-trained Sheinbug experts do not. In fact, participants in the current study had no knowledge of the objects beyond their names and perceptual details. The lack of contextual details and background knowledge about these novel objects may lead to differences in processing, such that real-world experts utilize color and spatial frequency information differently than our laboratory-trained Sheinbug experts. Future research could examine changes in the processing of novel objects when more extensive experience and contextual knowledge about the objects (e.g., their function, their location) is available.

Visual Strategy Use and Perceptual Processing

The gaze-contingent mask manipulation also led to unexpected results. The results showed that subordinate-level training led to improvements in discrimination
across the full-image, peripheral-view, and foveal-view gaze-contingent mask manipulations. Past research on perceptual expertise suggests that experts process objects within their domains of expertise holistically, while novices process the individual features of the object (Bukach et al., 2006). This previous work suggests that, when viewing is limited to only the center of gaze, the holistic processing strategies characteristic of expert-like processing would not be possible due to the limitations imposed by the mask, and discrimination performance would decrease. Surprisingly, our results showed that the foveal-view condition produced improvements in d′ after subordinate-level training similar to those in the full-image and peripheral-view conditions. This suggests that when a holistic processing strategy is not possible and featural processing strategies are utilized instead (as in the foveal condition), subordinate-level labels can compensate. Featural processing, in combination with the use of subordinate-level labels, can still facilitate improvements in discrimination of novel objects similar to those that occur when expert-like, holistic processing is possible.

Lastly, eye-tracking measures were collected with the goal of observing the acquisition of perceptual expertise through changes in visual strategies. Instead, for dwell time, average fixation duration, and fixation number, visual fixations were not differentially impacted by level of training. We observed a decrease in fixation number and an increase in fixation duration from pre-test to post-test across all image manipulations and across both the basic and subordinate training conditions. Research on scene viewing suggests that changes in fixations are a direct result of what is being viewed, and that fixations are longer and fewer when cognitive processing is more difficult (i.e., if the scene is more complex or densely cluttered) (Henderson, 2011;
Rayner & Pollatsek, 1981; Henderson & Choi, 2015; Rayner, 2009). The decrease in the number of fixations and the increase in fixation duration observed in the current study could be explained by an increase in the difficulty of processing from pre- to post-test, although this still does not explain why there were no differences in fixations across the two training conditions. Measures of fixations alone may not be useful tools for understanding how the changes in processing that accompany changes in behavioral discrimination performance emerge.

Although these quantitative measures of visual fixation did not relate to behavior, the current study did find qualitative changes in visual fixations that may account for changes in processing associated with the acquisition of perceptual expertise. Results showed that consistent scanning strategies developed within subjects from pre- to post-test after subordinate-, but not basic-level, training. This was constant across all color and spatial frequency image manipulations and generalization conditions, suggesting that changes in scanning strategies showed a pattern of improvement similar to the changes in behavioral performance after subordinate-level training.

Limitations and Future Research

The current study utilized a fairly homogeneous population of students at the University of Massachusetts Amherst, with a majority of the subjects being Caucasian. A common limitation of studies utilizing convenience samples of local students is the generalizability of the results. The use of categorization and grouping also carries a limitation regarding how results will translate across cultures. The United States and European cultures are generally defined as more individualistic, with a focus on independent identity and personal goals, while Eastern nations have collectivist cultures, where identity is defined in relation to a group and group attitudes and goals (Triandis, 1989; Hofstede, 1980). While within cultures and
across different regions of single nations (e.g., rural vs. urban) adherence to collectivist and individualist group norms varies based on the degree of conformity to group ideas, group norms are still prevalent as a national culture (Jetten, Postmes, & McAuliffe, 2002; Georgas, 1989). In general, collectivist cultures tend to work to support the goals and norms of their own group (the in-group) and are callous toward, or even distrustful of, members from outside their group (the out-group) (Triandis, 1972). Although categorization and discrimination of objects have not been studied in relation to collectivist and individualist cultures, the grouping of the stimuli in the current study into different sets (individual species in the subordinate-trained family and a larger family-level group in the basic-trained family) could be viewed differently across individualist and collectivist cultures. Future research could examine how differences in views on groups may impact basic- and subordinate-level training in the acquisition of perceptual expertise.

For the visual strategy analyses, the current study looked only at fixation and scan pattern information from the second object presented during the test phase. The first image represented the encoding of the image and retrieval of the species labels for each exemplar, while the second object represented the decision phase, in which the participant decided whether the two exemplars were from the same or a different species. Future research could examine whether there are additional differences in visual strategies during viewing of the first image, related to encoding, that accompany the acquisition of expert-level behavioral performance.

Although all stimuli utilized in the current study were novel, computer-generated objects, the word "species" was presented to each participant on several different occasions in directions on how to
complete each phase of the study, implying that the novel objects were living things. Research by Caramazza and Shelton (1998) suggests that there are separate semantic representations of living and non-living things, supported by separate neural systems representing the living and non-living domains. Living things also have more shared attributes (e.g., eyes, a tail) and are categorized to a larger degree using visual features, while non-living things have more distinctive features and are generally categorized more by function (Warrington & McCarthy, 1987; Farah & McClelland, 1991; Garrard, Ralph, Hodges, & Patterson, 2001). These differences in processing strategies for living and non-living things suggest that the pre-experimental labeling of the stimuli as living things may have influenced the way the exemplars were processed. Future research could examine whether differences in behavior or visual strategies emerge when it is unspecified whether the stimuli are living or non-living, or when they are explicitly categorized within the non-living domain.

Categorization at both the perceptual and conceptual level involves placing a stimulus into a group based on its features (Bruner, 1957). Information about the visual features of a stimulus can be collected through sensory perception or through a conceptual understanding of the stimulus based on prior knowledge (Murphy, 2002). Prior conceptual knowledge is important in identifying the features of an object that are most perceptually important for successful categorization; in this way, once a conceptual understanding is attained, it may be difficult to separate the contributions of perceptual and conceptual knowledge to categorization (Murphy, 2002). Ballester, Patris, Symoneaux, and Valentin (2008) even suggested that experts rely more on
mental representations and concepts than on sensory perception when making categorization and discrimination decisions. Past expertise-training studies with birds and cars (Scott et al., 2006; 2008) did not dissociate the contributions of perceptual expertise training and prior conceptual knowledge (i.e., the colors and physical features of the birds and cars) to improved discrimination performance. In the current study, novel objects were used to eliminate the use of prior conceptual knowledge about the stimuli, so that categorization would be learned only through the perception of the features of the objects.

Two models, the prototype model and the exemplar model, attempt to explain how categorization is learned through perception without the contributions of prior knowledge. According to the prototype model, there is an ideal prototype for each category to which all exemplars are compared (Rosch & Mervis, 1975). If an exemplar is highly similar to the prototype, it is more likely to be placed in that category than if it is less similar. According to the exemplar model, instead of comparison with a single ideal prototype, all known exemplars are compared based on similarity and then placed into categories accordingly (Medin & Schaffer, 1978). Our findings on learning of the novel exemplars in the current study are more consistent with the exemplar model. As more species and exemplars were introduced, participants' representations of each category evolved to accommodate the new exemplars. Still, both the prototype and exemplar models focus only on the perception of features and do not account for any conceptual knowledge. Although participants did not have any conceptual knowledge of the novel stimuli before beginning the study, with the continued presentation of exemplars across 9 hours of
training, it is likely that they began to build a knowledge base about the different species of exemplars as the study progressed. A new model, building on the prototype and exemplar models but incorporating learned conceptual knowledge, may more accurately characterize the learning of novel objects. Even with the use of novel objects, it is still difficult to dissociate the contributions of perceptual and conceptual knowledge to the acquisition of expertise. Future research could continue to dissect this relationship, examining to what extent conceptual learning contributes to perceptual expertise.

Conclusion

In summary, the results of this study contribute four main points to our understanding of perceptual expertise. First, through examination of changes in visual strategies from pre- to post-training, we found that, although dwell time, fixation duration, and fixation number did not differ across training conditions, consistent scanning strategies developed after subordinate-, but not basic-level, training. Second, through the use of the gaze-contingent mask manipulation, this study found that when a holistic processing strategy is not possible (as in the foveal-view condition), subordinate-level labels still facilitate improvements in discrimination performance. Third, color and spatial frequency manipulations did not impact discrimination performance for novel images; improvements in performance occurred across all image manipulations. Lastly, this was the first study to examine the differential impact of basic- versus subordinate-level training on discrimination of novel stimuli. It extends previous research by showing that subordinate-level training improves discrimination of novel stimuli similarly to real-world stimuli, while basic-level training does not.
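The prototype and exemplar models contrasted in this chapter can be illustrated with a small sketch. The 2-D feature vectors and category names below are hypothetical stand-ins for stimulus features; this is an illustration of the two model families (Rosch & Mervis, 1975; Medin & Schaffer, 1978), not the thesis's stimuli or analyses.

```python
import math

def prototype_classify(item, categories):
    """Prototype model: compare the item to each category's mean (its prototype)."""
    prototypes = {
        name: tuple(sum(dim) / len(dim) for dim in zip(*exemplars))
        for name, exemplars in categories.items()
    }
    return min(prototypes, key=lambda name: math.dist(item, prototypes[name]))

def exemplar_classify(item, categories):
    """Exemplar model: sum similarity to every stored exemplar in each category."""
    def summed_similarity(exemplars):
        # Similarity decays exponentially with distance, as in context-model accounts.
        return sum(math.exp(-math.dist(item, e)) for e in exemplars)
    return max(categories, key=lambda name: summed_similarity(categories[name]))

# Hypothetical feature vectors for two "species".
categories = {
    "species_a": [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)],
    "species_b": [(3.0, 3.0), (3.1, 2.8), (2.9, 3.2)],
}
item = (1.1, 1.0)
print(prototype_classify(item, categories), exemplar_classify(item, categories))
```

The key difference is what is stored: the prototype model keeps only a category average, while the exemplar model keeps every encountered instance, which is why it can, as described above, accommodate new exemplars as category representations evolve.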
LIST OF REFERENCES

Ballester, J., Patris, B., Symoneaux, R., & Valentin, D. (2008). Conceptual vs. perceptual wine spaces: Does expertise matter? Food Quality and Preference, 19, 267–276.

Bradley, M. M., Houbova, P., Miccoli, L., Costa, V. D., & Lang, P. J. (2011). Scan patterns when viewing natural scenes: Emotion, complexity, and repetition. Psychophysiology, 48, 1544–1553.

Bramão, I., Faísca, L., Petersson, K. M., & Reis, A. (2012). The contribution of color to object recognition. In Advances in Object Recognition Systems (pp. 73–88).

Bruner, J. S. (1957). On perceptual readiness. Psychological Review, 64, 123–152.

Bukach, C. M., Gauthier, I., & Tarr, M. J. (2006). Beyond faces and modularity: The power of an expertise framework. Trends in Cognitive Sciences, 10, 159–166.

Caramazza, A., & Shelton, J. R. (1998). Domain-specific knowledge systems in the brain: The animate-inanimate distinction. Journal of Cognitive Neuroscience, 10, 1–34.

Castelhano, M. S., Mack, M. L., & Henderson, J. M. (2009). Viewing task influences eye movement control during active scene perception. Journal of Vision, 9, 1–15.

Collin, C. A., & McMullen, P. A. (2005). Subordinate-level categorization relies on high spatial frequencies to a greater degree than basic-level categorization. Perception & Psychophysics, 67, 354–364.

Cristino, F., Mathôt, S., Theeuwes, J., & Gilchrist, I. D. (2010). ScanMatch: A novel method for comparing fixation sequences. Behavior Research Methods, 42, 692–700.

Diamond, R., & Carey, S. (1986). Why faces are not special: An effect of expertise. Journal of Experimental Psychology: General, 115, 107–117.

Drew, T., Evans, K., Võ, M., Jacobson, F., & Wolfe, J. M. (2013). Informatics in radiology: What can you see in a single glance and how might this guide visual search in medical images? Radiographics, 33, 263–274.

Evans, K. K., Georgian-Smith, D., Tambouret, R., Birdwell, R. L., & Wolfe, J. M. (2013). The gist of the abnormal: Above-chance medical decision making in the blink of an eye. Psychonomic Bulletin & Review, 20, 1170–1175.

Farah, M. J., & McClelland, J. L. (1991). A computational model of semantic memory impairment: Modality specificity and emerging category specificity. Journal of Experimental Psychology: General, 120, 339–357.
Fox, S. E., Law, C. C., & Faulkner-Jones, B. E. (2017). Quantitative gaze assessment for pathology education. Manuscript submitted for publication.

Garrard, P., Ralph, M. A., Hodges, J. R., & Patterson, K. (2001). Prototypicality, distinctiveness, and intercorrelation: Analyses of the semantic attributes of living and nonliving concepts. Cognitive Neuropsychology, 18, 125–174.

Gauthier, I., Curran, T., Curby, K. M., & Collins, D. (2003). Perceptual interference supports a non-modular account of face processing. Nature Neuroscience, 6, 428–432.

Gauthier, I., Skudlarski, P., Gore, J. C., & Anderson, A. W. (2000). Expertise for cars and birds recruits brain areas involved in face recognition. Nature Neuroscience, 3, 191–197.

Gauthier, I., & Tarr, M. J. (1997). Becoming a "Greeble" expert: Exploring mechanisms for face recognition. Vision Research, 37, 1673–1682.

Gauthier, I., & Tarr, M. J. (2002). Unraveling mechanisms for expert object recognition: Bridging brain activity and behavior. Journal of Experimental Psychology: Human Perception and Performance, 28, 431–446.

Gauthier, I., Williams, P., Tarr, M. J., & Tanaka, J. W. (1998). Training "greeble" experts: A framework for studying expert object recognition processes. Vision Research, 38, 2401–2428.

Georgas, J. (1989). Changing family values in Greece: From collectivist to individualist. Journal of Cross-Cultural Psychology, 20, 80–91.

Gosselin, F., & Schyns, P. G. (2001). Bubbles: A technique to reveal the use of information in recognition tasks. Vision Research, 41, 2261–2271.

Hagen, S., Vuong, Q. C., Scott, L. S., Curran, T., & Tanaka, J. W. (2014). The role of color in expert object recognition. Journal of Vision, 14, 1–13.

Hagen, S., Vuong, Q. C., Scott, L. S., Curran, T., & Tanaka, J. W. (2016). The role of spatial frequency in expert object recognition. Journal of Experimental Psychology: Human Perception and Performance, 42, 413–422.

Henderson, J. M. (2011). Eye movements and scene perception. In S. P. Liversedge, I. D. Gilchrist, & S. Everling (Eds.), The Oxford handbook of eye movements (pp. 593–606). New York: Oxford University Press.

Henderson, J. M., & Choi, W. (2015). Neural correlates of fixation duration during real-world scene viewing: Evidence from fixation-related (FIRE) fMRI. Journal of Cognitive Neuroscience, 27, 1137–1145.
Hofstede, G. (1980). Culture's consequences. Beverly Hills, CA: Sage.

Homa, D., & Vosburgh, R. (1976). Category breadth and the abstraction of prototypical information. Journal of Experimental Psychology: Human Learning and Memory, 2, 322–330.

Jetten, J., Postmes, T., & McAuliffe, B. J. (2002). "We're all individuals": Group norms of individualism and collectivism, levels of identification and identity threat. European Journal of Social Psychology, 32, 189–207.

Johnson, K., & Mervis, C. (1997). Effects of varying levels of expertise on the basic level of categorization. Journal of Experimental Psychology: General, 126, 248–277.

Jones, T., Hadley, H., Cataldo, A., Arnold, E., Curran, T., Tanaka, J. W., & Scott, L. S. (in revision). Neural and behavioral effects of subordinate-level training of novel objects across manipulations of color and spatial frequency. European Journal of Neuroscience.

Kimchi, R. (1992). Primacy of holistic processing and global/local paradigm: A critical review. Psychological Bulletin, 112, 24–38.

Krupinski, E. A., Graham, A. R., & Weinstein, R. S. (2013). Characterizing the development of visual search expertise in pathology residents viewing whole slide images. Human Pathology, 44, 357–364.

Kundel, H. L., & La Follette, P. S. (1972). Visual search patterns and experience with radiological images. Radiology, 103, 523–528.

Loftus, G. (1985). Picture perception: Effects of luminance on available information and information extraction rate. Journal of Experimental Psychology: General, 114, 342–356.

Loftus, G., & Mackworth, N. H. (1978). Cognitive determinants of fixation location during picture viewing. Journal of Experimental Psychology: Human Perception and Performance, 4, 565–572.

Madsen, A. M., Larson, A. M., Loschky, L. C., & Rebello, N. S. (2012). Differences in visual attention between those who correctly and incorrectly answer physics problems. Physical Review Physics Education Research, 8, 1–9.

Martini, P., McKone, E., & Nakayama, K. (2006). Orientation tuning of human face processing estimated by contrast matching in transparency displays. Vision Research, 46, 2102–2109.

Maurer, D., Grand, R. L., & Mondloch, C. J. (2002). The many faces of configural processing. Trends in Cognitive Sciences, 6, 255–260.
McKone, E. (2004). Isolating the special component of face recognition: Peripheral identification and a Mooney face. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 181–197.

Medin, D. L., & Schaffer, M. M. (1978). Context theory of classification learning. Psychological Review, 85, 207–238.

Mills, M., Hollingworth, A., Van der Stigchel, S., Hoffman, L., & Dodd, M. D. (2011). Examining the influence of task set on eye movements and fixations. Journal of Vision, 11, 1–15.

Murphy, G. L. (2002). The big book of concepts. Cambridge, MA: MIT Press.

Noton, D., & Stark, L. (1971). Scanpaths in eye movements during pattern perception. Science, 171, 308–311.

Posner, M. I., & Keele, S. W. (1968). On the genesis of abstract ideas. Journal of Experimental Psychology, 77, 353–363.

Rayner, K. (1975). The perceptual span and peripheral cues in reading. Cognitive Psychology, 7, 65–81.

Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124, 372–422.

Rayner, K. (2009). Eye movements and attention in reading, scene perception, and visual search. The Quarterly Journal of Experimental Psychology, 62, 1457–1506.

Rayner, K., & Pollatsek, A. (1981). Eye movement control during reading: Evidence for direct control. Quarterly Journal of Experimental Psychology, 33A, 351–373.

Rayner, K., Reichle, E. D., Stroud, M. J., Williams, C. C., & Pollatsek, A. (2006). The effect of word frequency, word predictability, and font difficulty on the eye movements of young and older readers. Psychology and Aging, 21, 448–465.

Rosch, E., & Mervis, C. B. (1975). Family resemblances: Studies in the internal structure of categories. Cognitive Psychology, 7, 573–605.

Rosch, E., Mervis, C. B., Gray, W. D., Johnson, D. M., & Boyes-Braem, P. (1976). Basic objects in natural categories. Cognitive Psychology, 8, 382–439.

Rossion, B., Gauthier, I., Goffaux, V., Tarr, M. J., & Crommelinck, M. (2002). Expertise training with novel objects leads to left-lateralized facelike electrophysiological responses. Psychological Science, 13, 250–257.

Salvucci, D. D., & Goldberg, J. H. (2000). Identifying fixations and saccades in eye-tracking protocols. In Proceedings of the Eye Tracking Research and Applications Symposium (pp. 71–78). New York: ACM Press.
Schyns, P. G., & Oliva, A. (1994). From blobs to boundary edges: Evidence for time- and spatial-scale-dependent scene recognition. Psychological Science, 5, 195-200.
Scott, L. S. (2011). Face perception and perceptual expertise in adult and developmental populations. In G. Rhodes, J. Haxby, M. Johnson, & A. Calder (Eds.), Handbook of Face Perception. Oxford, UK: Oxford University Press.
Scott, L. S., Tanaka, J. W., & Curran, T. (2009). Degrees of expertise. In I. Gauthier, M. Tarr, & D. Bub (Eds.), Perceptual expertise: Bridging brain and behavior (pp. 107-138). Oxford: Oxford Scholarship Online.
Scott, L. S., Tanaka, J. W., Sheinberg, D. L., & Curran, T. (2006). A reevaluation of the electrophysiological correlates of expert object processing. Journal of Cognitive Neuroscience, 18, 1453-1465.
Scott, L. S., Tanaka, J. W., Sheinberg, D. L., & Curran, T. (2008). The role of category learning in the acquisition and retention of perceptual expertise: A behavioral and neurophysiological study. Brain Research, 1210, 204-215.
Starbuck, W. H. (1992). Learning by knowledge-intensive firms. Journal of Management Studies, 29, 713-740.
Sveiby, K. E. (1997). The new organizational wealth: Managing and measuring knowledge-based assets. San Francisco, CA: Berrett-Koehler Publishers.
Tanaka, J. W., & Curran, T. (2001). A neural basis for expert object recognition. Psychological Science, 12, 43-47.
Tanaka, J. W., Curran, T., & Sheinberg, D. L. (2005). The training and transfer of real-world perceptual expertise. Psychological Science, 16, 145-151.
Tanaka, J. W., & Farah, M. J. (1993). Parts and wholes in face recognition. The Quarterly Journal of Experimental Psychology, 46, 225-245.
Tanaka, J. W., & Presnell, L. M. (1999). Color diagnosticity in object recognition. Perception & Psychophysics, 1140-1153.
Tanaka, J. W., & Taylor, M. (1991). Object categories and expertise: Is the basic level in the eye of the beholder? Cognitive Psychology, 23, 457-482.
Therriault, D. J., Yaxley, R. H., & Zwaan, R. A. (2009). The role of color diagnosticity in object recognition and representation. Cognitive Processing, 10, 335-342.
Triandis, H. C. (1972). The analysis of subjective culture. New York, NY: Wiley.
Triandis, H. C. (1989). The self and social behavior in differing contexts. Psychological Review, 96, 506-520.
Van Belle, G., De Graef, P., Verfaillie, K., Busigny, T., & Rossion, B. (2010). Whole not hole: Expert face recognition requires holistic perception. Neuropsychologia, 48, 2620-2629.
Van Belle, G., De Graef, P., Verfaillie, K., Rossion, B., & Lefevre, P. (2010). Face inversion impairs holistic perception: Evidence from gaze-contingent stimulation. Journal of Vision, 10, 1-13.
Van Diepen, P. M. J., De Graef, P., & Van Rensbergen, J. (1994). On-line control of moving masks and windows on a complex background using the ATVista videographics adapter. Behavior Research Methods, Instruments, & Computers, 26, 454-460.
Warrington, E. K., & McCarthy, R. A. (1987). Categories of knowledge: Further fractionations and an attempted integration. Brain, 110, 1273-1296.
Young, A. W., Hellawell, D., & Hay, D. C. (1987). Configurational information in face perception. Perception, 16, 747-759.
BIOGRAPHICAL SKETCH
Allison Carr was born in Alpharetta, GA, and attended North Gwinnett High School, graduating in 2012. She then attended Emory University from 2012 to 2016, graduating summa cum laude with a Bachelor of Science in neuroscience and a Bachelor of Arts in dance. After graduation, she attended the University of Florida and received a Master of Science in psychology. She is now continuing to work toward a PhD in psychology, doing research on perceptual learning and expertise.