The multiaperture optical (MAO) system based on the apposition principle


Material Information

Title:
The multiaperture optical (MAO) system based on the apposition principle
Physical Description:
xiii, 140 leaves : ill., photos ; 28 cm.
Language:
English
Creator:
Lin, Shih-Chao, 1955-
Publication Date:

Subjects

Subjects / Keywords:
Synthetic apertures   ( lcsh )
Genre:
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )

Notes

Thesis:
Thesis (Ph. D.)--University of Florida, 1988.
Bibliography:
Includes bibliographical references.
Statement of Responsibility:
by Shih-Chao Lin.
General Note:
Typescript.
General Note:
Vita.

Record Information

Source Institution:
University of Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001115088
notis - AFL1803
oclc - 19879193
sobekcm - AA00004814_00001
System ID:
AA00004814:00001

Full Text















THE MULTIAPERTURE OPTICAL (MAO) SYSTEM
BASED ON THE APPOSITION PRINCIPLE












By

SHIH-CHAO LIN


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF
THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY



UNIVERSITY OF FLORIDA


1988























To my parents

Mr. Chung-Liang Lin

&

Mrs. Yang Li-Shuang Lin



























ACKNOWLEDGMENTS


The author would like to express his deepest

appreciation and gratitude to Dr. Richard T. Schneider, the

chairman of his supervisory committee, for his guidance and

support in this research and for the faith and friendship he

showed toward the author throughout this academic endeavor.

His deepest appreciation also goes to Dr. Edward E. Carroll

for his guidance, encouragement and friendship. Sincere

thanks are also extended to the other members of the

supervisory committee, Dr. William H. Ellis, Dr. Tom I-P

Shih and Dr. Gerhard Ritter.

Special note should be made of the valuable support

from Dr. Neil Weinstein. Dr. Weinstein not only gave the

author moral support but also helped the author overcome his

language problem.

The author will be forever indebted to his parents, Mr.

Chung-Liang Lin and Mrs. Yang Li-Shuang Lin for their

unfailing faith and encouragement. Without them the author

would never have been able to complete all his academic

endeavors.
















TABLE OF CONTENTS


ACKNOWLEDGMENTS . . . iii

LIST OF TABLES . . . vi

LIST OF FIGURES . . . vii

ABSTRACT . . . xii

CHAPTER

1 INTRODUCTION

    1.1 Introduction
    1.2 Vision Systems
    1.3 Literature Survey on Multiaperture
        Optical Systems

2 THE INSECT EYES

    2.1 Optical Part

        2.1.1 Cornea
        2.1.2 Crystalline Cone
        2.1.3 Crystalline Tract

    2.2 Sensory Part

        2.2.1 The Sensor
        2.2.2 The Nerve Connection

    2.3 Image Formation

3 OPTICAL STUDIES

    3.1 Light Horn
    3.2 Optics of Light Horns

        3.2.1 Experimental Setup
        3.2.2 The Ray Tracing Program

    3.3 Results of the Optical Studies

        3.3.1 Vertical Cross Section . . . 25
        3.3.2 Wall Pattern . . . 41
        3.3.3 Horizontal Cross Section . . . 55

    3.4 Applications . . . 58

        3.4.1 Vertical Cross Section . . . 58
        3.4.2 Wall Pattern . . . 65

    3.5 Summary and Discussion . . . 95

4 LIGHT COLLECTION OF THE LIGHT HORN . . . 105

    4.1 Intensity Concentration . . . 106
    4.2 Comparison of Light Horns and Lenses . . . 112

5 THE MULTIAPERTURE OPTICAL SYSTEM DESIGN . . . 116

    5.1 The System Design . . . 116

        5.1.1 MAO Mask . . . 118
        5.1.2 Detector Board . . . 120
        5.1.3 Memory Board and Processor Board . . . 124

    5.2 The Performance of the MAO Device . . . 125
    5.3 Discussions . . . 130

6 SUMMARY AND CONCLUSIONS . . . 132

REFERENCES . . . 138

BIOGRAPHICAL SKETCH . . . 140
















LIST OF TABLES


Table Page

3.1 Dimensions of experimental devices . . . 21

3.2 The distance-polar angle relation of light
    horn used . . . 54

3.3 Matrices of point sources at different locations . . . 62

3.4 Matrix of a point source image on the 5 X 5
    light horn array . . . 66
















LIST OF FIGURES


Figure Page

2.1 The ommatidium of insect eye.
    a) Photopic eye; b) Scotopic eye . . . 8

2.2 The cross section of rhabdom.
    a) Photopic eye; b) Scotopic eye . . . 12

2.3 The microvilli . . . 13

3.1 Schematic of the MAO device eyelet . . . 17

3.2 The experimental setup . . . 19

3.3 Three different modes of generating the
    pattern. a) Vertical cross section;
    b) Wall of cylinder; c) Horizontal slice . . . 22

3.4 The diagram of the law of reflection . . . 24

3.5 Photo, vertical cross section image pattern,
    parallel beam--parallel to axis . . . 26

3.6 Two dimensional ray tracing for light horn;
    parallel beam--parallel to axis . . . 28

3.7 Photo, vertical cross section image pattern,
    parallel beam--parallel to axis--yellow
    side right of axis, red side left of axis . . . 29

3.8 Photo, vertical cross section image pattern,
    parallel beam--5 degrees off axis . . . 31

3.9 Photo, vertical cross section image pattern,
    parallel beam--11 degrees off axis . . . 32

3.10 Computational pattern, vertical cross section
    image pattern, parallel light source,
    parallel to axis . . . 34

3.11 Computational pattern, vertical cross section
    image pattern, parallel light source,
    5 degrees off axis . . . 35









3.12 Computational pattern, vertical cross section
    image pattern, parallel light source,
    11 degrees off axis . . . 37

3.13 Computational pattern of the parabolic
    light horn, vertical cross section image
    pattern, parallel light source, parallel
    to axis . . . 38

3.14 Computational pattern of the parabolic
    light horn, vertical cross section image
    pattern, parallel light source, 5 degrees
    off axis . . . 39

3.15 Computational pattern of the parabolic
    light horn, vertical cross section image
    pattern, parallel light source, 11 degrees
    off axis . . . 40

3.16 Vertical cross sections across cylinder;
    a) 33 units b) 48 units c) 86 units away
    from the exit aperture . . . 42

3.17 Ray pattern impinging on cylinder wall for
    object on axis . . . 44

3.18 Photo, wall pattern, parallel light source,
    parallel to axis . . . 45

3.19 Ray pattern impinging on cylinder wall for
    object one degree off axis . . . 46

3.20 Ray pattern impinging on cylinder wall for
    object two degrees off axis . . . 47

3.21 Distance of center of bands from "Zero" band
    vs. the off-axis angle . . . 49

3.22 Angular resolution as a function of
    polar angle . . . 51

3.23 Angular resolution vs. light horn length . . . 53

3.24 The computational pattern which would appear
    on the horizontal slice . . . 56

3.25 Photo, pattern on horizontal slice . . . 57

3.26 A 5 X 5 detector array arrangement . . . 59

3.27 Vertical cross section pattern projected on a
    5 X 5 detector array and its possible
    matrices . . . 60








3.28 Computational pattern for multiple light horns
    arranged on a 3 X 3 array . . . 63

3.29 Computational pattern for multiple light horns
    arranged on a 5 X 5 array . . . 64

3.30 Fiber shaped detector . . . 67

3.31 The correlation of the image information
    among three adjacent eyelets . . . 69

3.32 Detector tube with ring shape detectors . . . 70

3.33 Intensity distribution on the cylinder wall;
    object on the axis, 1000 units distance . . . 72

3.34 Intensity distribution on the cylinder wall,
    object 0.34 degree off axis . . . 73

3.35 Intensity distribution on the cylinder wall,
    object 0.52 degree off axis . . . 74

3.36 Characteristic band position of the objects,
    object 1000 units away from entrance
    aperture . . . 75

3.37 The triangular grid . . . 76

3.38 The object space covered by the eyelet
    system . . . 77

3.39 The image pattern created by the Addition
    method, object point at node (1,1) . . . 80

3.40 The image pattern created by the Addition
    method, object point at node (5,5) . . . 81

3.41 The image pattern created by the Addition
    method, object point at node (1,26) . . . 82

3.42 The image pattern created by the Addition
    method, object point at node (17,17) . . . 83

3.43 The image pattern created by the Multiplication
    method, object point at node (1,1) . . . 86

3.44 The image pattern created by the Multiplication
    method, object point at node (5,5) . . . 87

3.45 The image pattern created by the Multiplication
    method, object point at node (1,26) . . . 88









3.46 The image pattern created by the Multiplication
    method, object point at node (17,17) . . . 89

3.47 The image pattern created by the Cross
    Correlation with Addition method,
    object point at node (1,1) . . . 91

3.48 The image pattern created by the Cross
    Correlation with Addition method,
    object point at node (5,5) . . . 92

3.49 The image pattern created by the Cross
    Correlation with Addition method,
    object point at node (1,26) . . . 93

3.50 The image pattern created by the Cross
    Correlation with Addition method,
    object point at node (17,17) . . . 94

3.51 The image pattern created by the Cross
    Correlation with Multiplication method,
    object point at node (1,1) . . . 96

3.52 The image pattern created by the Cross
    Correlation with Multiplication method,
    object point at node (5,5) . . . 97

3.53 The image pattern created by the Cross
    Correlation with Multiplication method,
    object point at node (1,26) . . . 98

3.54 The image pattern created by the Cross
    Correlation with Multiplication method,
    object point at node (17,17) . . . 99

3.55 The image pattern created by the Multiplication
    method, object points at nodes (1,1) and
    (17,17) . . . 100

3.56 The image pattern created by the Multiplication
    method, object points at nodes (1,26) and
    (17,17) . . . 101

4.1 Concentration ratio curves of conical light
    horns with different cone angles for an
    on-axis point source at 1000 units
    distance . . . 107

4.2 The concentration ratio of a parabolic light
    horn at different distances away from
    the exit aperture with an on-axis point
    source at 1000 units away from entrance . . . 108

4.3 Comparison of concentration ratios between
    a light horn and a lens . . . 110

4.4 The concentration ratio curves of the
    off-axis point source . . . 111

4.5 The light collection of a lens . . . 114

5.1 The multiaperture optical (MAO) device . . . 117

5.2 Arrangement of detectors on optic RAM.
    a) detector location; b) detector in use
    (marked by "X") . . . 121

5.3 Detector arrangement underneath the light
    hole . . . 123

5.4 Performance of the system using the mask
    with cylindrical holes . . . 126

5.5 Performance of the system using the mask
    with conical holes . . . 127

5.6 The result pattern of Figure 5.4 after
    the clean-up . . . 128

5.7 The result pattern of Figure 5.5 after
    the clean-up . . . 129















Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the degree of Doctor of Philosophy

THE MULTIAPERTURE OPTICAL (MAO) SYSTEM
BASED ON THE APPOSITION PRINCIPLE

By

Shih-Chao Lin

April, 1988

Chairman: Dr. Richard T. Schneider
Major Department: Nuclear Engineering Sciences


Automation freed mankind from repetitive, boring labor

and/or labor requiring an instantaneous response. When

applied as robotics it could even free mankind from

dangerous labor such as handling radioactive material. For

a robot or an automated system a vision device has proven to

be an important element.

Almost all artificial vision systems are similar in

design to the human eye with its single large lens system.

In contrast, the compound eye of an insect is much smaller

than the human eye. Therefore, it is proposed to imitate

the insect eye in order to develop a small viewing device

useful in robotic design.

The basic element of the multiaperture optical system

described here is a non-imaging light horn. The optical










studies on the non-imaging light horn (a simulated insect

eye eyelet) have shown that this device may

produce images when several horns are used together in an

array. The study also shows that with several non-imaging

devices the position of an object point light source can be

determined very easily.

One possible realization of a multiaperture optical

system design based on the apposition principle is proposed

and discussed. The multiaperture optical system proposed is

a small, low cost device with digital image processing.















Chapter 1

INTRODUCTION



1.1 Introduction

Machines freed mankind from labor which required great

strength or long endurance. Automation freed mankind from

repetitive, boring labor and/or labor requiring an
instantaneous response. Robotics finally frees mankind from
dangerous labor and from tasks which humans are either
incapable of performing or unwilling to endure because they
require specialized abilities, albeit with low levels of
intelligence.

Early machines needed only very primitive sensors or no

sensor at all. Automation needed more sophisticated

sensors. A photocell rather than an electrical contact may

have been used. Logically one may expect then that robotics

will require even more sophisticated sensors--one of them

being vision rather than just the detection of light.

One such vision system may be a television camera

attached to suitable digitizing equipment. The digitizing

equipment is required since the information provided by the

TV camera is not viewed by a human observer. Instead, it

has to be interfaced with specialized intelligence which has

been programmed into the main computer of the robotic









system. This constitutes the major difference between

automation and robotics. Therefore, the question arises,

what kind of vision system is easiest to interface with a

computer?

Images can be easily evaluated by a computer if they

are made available in the form of digitized picture elements

(pixels). Therefore, vision systems with digitization

ability are preferred when a computer is chosen to do the

recognition process.

There seem to be two principal preferred designs for

vision systems to be found in nature. These are (1) the

single lens eye found in vertebrates and (2) the

multiaperture eye found in arthropods (the "insect eye").

Most optical instruments have been patterned after the

vertebrate eye. This raises the question: is this design

superior to the insect eye or only considered to be so by

optical designers?

It is the objective of this dissertation to investigate

whether the insect eye is indeed generally inferior to a

single lens eye, or if there are special applications where

the insect eye may be more suitable than the single lens

eye. If the second case is true, the proof should be

offered in the form of a description of such a superior

device.

1.2 Vision Systems

The difference between the two vision systems consists

in the number of apertures used to form the image. For this








reason one needs to investigate the difference between

single aperture optics (SAO) and multiple aperture optics

(MAO). This dissertation is concerned with some of the

unique features of multiple aperture optics.

Therefore, let us describe the insect eyes, albeit in

somewhat oversimplified terms. In the overall MAO system

there could be three possible ways of extracting an image

from an object. These three possible ways are as follows:

1. Each eyelet collects only one pixel and the

resulting overall image is a mosaic (the so-called

apposition eye).

2. The lens of each eyelet projects a fairly large

image onto the retina and all these images are

superimposed precisely (the so-called superposition

eye).

3. Each lens projects a small image onto the retina,

the individual images do not overlap and together

they form the total image.
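The apposition case (option 1) can be sketched in a few lines: each eyelet reports a single brightness value for its own viewing direction, and the overall image is simply the mosaic of those readings. The following is a toy Python illustration of that idea (the scene values and eyelet directions are invented for the example, not taken from the dissertation):

```python
# Toy apposition-eye model: each eyelet contributes exactly one pixel,
# namely the scene brightness along its own viewing direction, and the
# overall image is the mosaic of those single-pixel readings.

# A 4 x 4 "scene" of brightness values (0 = dark, 9 = bright).
scene = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [9, 9, 0, 0],
    [9, 9, 0, 0],
]

# A 2 x 2 array of eyelets, each aimed at one (row, col) of the scene.
eyelet_directions = [[(0, 1), (0, 2)],
                     [(2, 1), (2, 2)]]

# The mosaic image: one sampled pixel per eyelet.
mosaic = [[scene[r][c] for (r, c) in row] for row in eyelet_directions]
print(mosaic)
```

The resolution of such a mosaic is set by the number of eyelets and their angular spacing, which is exactly the question the dissertation takes up.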

For the artificial multiaperture optics system, one

cannot possibly worry about focusing of many eyelets or

adjusting them so that all of the individual images are

superimposed correctly. Therefore, the second option, the

superposition eye, is not a desirable choice. Similarly,

the focusing and the overlapping problems of the small

individual images are the reasons to reject the third

option. The first option, the apposition eye, removes the

focusing problem but raises the question of whether or not








acceptable resolving power can be achieved. The answer to

this question is one of the major subjects of this
dissertation. From behavioral studies it is known that

insects have remarkably acute vision and are capable of

resolving even small, rapidly moving distant objects at low

ambient light levels. For the designer, who intends to

build a camera based on the design principles of the insect

eye, it would be very helpful to know how such "super

resolution" is achieved. In this dissertation an answer is

proposed.

In an effort to understand the optics of the insect

eye, a simple model consisting of a hollow cone with a

reflecting wall attached to a non-reflecting cylindrical

section was selected for analysis. This cone, similar to

the crystalline cone of the insect eye, is suggested as a

model for the eyelet of MAO devices. In Chapter 3 the

studies of the geometrical optics of these artificial

multiaperture optical elements will be discussed. The light

concentration of these optical elements will also be

discussed in Chapter 4.

If one were to assume that each eyelet acquires only

one pixel, one would have to conclude (using a conventional

approach) that the insect has a very poor resolution system.

The question that arises is: why would anybody want a small

camera which has poor resolution? The answer is that maybe

one does not want to take pictures which are intended to be

viewed by a human observer with this camera. Instead, it








could be a camera which recognizes objects and reports the

presence of the object to the main computer of a vehicle or

a robot. For example, if such a camera were the size of a

postage stamp, it could be fitted into the "hand" of a robot

and could make the task of picking up certain objects much

easier. If the recognition scheme could be hardwired

into the detector array, the restrictions on the motion of

the robot would be reduced. Therefore, the recognition cannot

be too complex. Also any preprocessing by optical means

would be very beneficial.

1.3 Literature Survey on Multiaperture Optical Systems

An early insect eye model was studied by Schneider and
Long.2 They constructed an insect eye model with 100

eyelets. Each individual eyelet consisted of two lenses,

one aperture stop and an optical fiber bundle. The end of

each fiber was attached to one photosensitive detector which

was connected to an amplifier in order to obtain signals

which were strong enough for analysis. A computer was used

to study the resulting signals. A computer program

reconstructed the image pattern and the image was displayed

on a video terminal. Although it was an early model of a

multiaperture optical device, Schneider and Long were able

to conclude that the multiaperture optical system could have

inherent digitization and large field of view abilities.

They concluded that the multiaperture optical device can

have "small depth of the structure," which means a thin

device with a large field of view.








Kao3 has presented this first generation mechanical

insect eye in much detail. He discussed three different

models: a one-eyelet, a seven-eyelet and an 89-eyelet model.

The computer system used here was an HP-85 microcomputer

with an HP6942A multiprogrammer analog-to-digital converter.

With this system, the image was converted to a digital

pattern and analyzed. The recognition technique was

discussed in his studies.

The multiaperture optical systems in the earlier

studies were quite primitive. They were large, their
mechanisms were not much different from that of the human
eye, and optical function studies were not done. Basically,

those models combined several shrunken single lens eyes into

a large array to form a semi-compound eye.

The present study builds on these earlier works. Two

insect eye models were built and studied. Computer ray

tracing programs were developed to simulate the path of

light in the insect eye. Unexpected patterns were obtained

which showed how insect eyes may produce an image. The

optics study led to the multiaperture optical device which

is discussed in Chapter 5. Finally, the studies will be

summarized and discussed in Chapter 6.
















CHAPTER 2

THE INSECT EYES



Since the insect eye is used as a model for the MAO

device, it is helpful to review the anatomy of the insect

eye here briefly. According to Chapman, most adult insects

have a pair of compound eyes bulging out, one on each side

of the head. This provides for a wide field of view,

essentially in all directions. Each compound eye has up to

10,000 eyelets which are known as ommatidia. Each

ommatidium is believed to be a non-imaging optical system.

Each ommatidium consists of approximately 30 cells, is

about tens of micrometers in diameter, and is hundreds of

micrometers in length. Functionally speaking, the

ommatidium consists of two parts (as shown in Figure 2.1):

an optical part and a sensory part. The optical part

collects the light and forms the spatial pattern for

recognition. The sensory part analyzes the pattern and is

capable of perceiving the image of the scene.

2.1 Optical Part

The basic optical system of the ommatidium consists of

two lenses, (1) a cornea (which is a biconvex lens) and (2)

a conical optical element. Some insects also have a


















Figure 2.1 The ommatidium of insect eye.
a) Photopic eye; b) Scotopic eye








wave-guide-like crystalline tract at the end of the conical

crystalline cone.

2.1.1 Cornea

Chapman states that the cornea of the insect eye

consists of two corneagen cells, usually forming a biconvex

corneal lens at the outer end of the ommatidium. The lens

is transparent and colorless. It is also a cuticular

surface, often thick and solid, which can protect the soft

tissue of the insect eye.

According to Meyer-Rochow, the diameter of the corneal

lens of most insects falls between 25 and 35 micrometers.

Unlike the diameter, the thickness of the cornea varies

drastically from 4% up to 20% of the total length of the

ommatidium.

2.1.2 Crystalline Cone

The crystalline cone of the ommatidium usually consists

of four cells, known as the Semper cells. The cone is

transparent with a surface like a paraboloid. Hausen6

studied the optical properties of the crystalline cone. He

concluded that the cone has a length of about 42 μm. The

distal end is slightly curved. Measurements in his studies

showed that the index of refraction can be approximated as a

parabolic function. At the center axis, the index has the

highest value of about 1.50, while at the edge it has a

value of 1.383. From Snell's Law, one can easily see that

this change in the index of refraction would cause the light

at the edge of the crystalline cone to be totally reflected








and not transmitted to the neighboring eyelets. This is

like the gradient-index lenses which are now being
manufactured. Thus, the light going into the crystalline cone

is either transmitted to the rhabdom or reflected back out.
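The total-reflection condition can be illustrated numerically. Treating the edge of the crystalline cone as a simple interface between the two quoted indices (a rough approximation, since the actual cone has a continuously graded index), Snell's law gives the critical angle beyond which light is totally reflected:

```python
import math

# Indices of refraction quoted above for the crystalline cone:
# about 1.50 on the center axis, about 1.383 at the edge.
n_axis = 1.50
n_edge = 1.383

# Snell's law: total internal reflection occurs once the angle of
# incidence exceeds theta_c = arcsin(n_edge / n_axis).
theta_c = math.degrees(math.asin(n_edge / n_axis))
print(f"critical angle ~ {theta_c:.1f} degrees")
```

Rays striking the boundary more steeply than this are transmitted out; shallower rays are trapped and guided toward the exit, which is the behavior the reflecting-wall light horn imitates.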

2.1.3 Crystalline Tract

Most entomologists believe that the two main categories

of the insect eyes can be described as either the

apposition eye or the superposition eye. Goldsmith and

Bernard7 (p. 169) believed that the more suitable names for

these classifications are photopic and scotopic eyes,

respectively. As discussed by them, the scotopic (super-

position or clear zone) eyes are capable of adaptation to

variations in light intensity. Therefore, the scotopic eye

is also known as the dark-adapted eye.

The crystalline tract occurs only in the scotopic eyes.

It is located between the crystalline cone and the rhabdom.

The optical function of the tract is like that of a
waveguide. It plays an important role in the adaptation of
the scotopic eye to light.

2.2 Sensory Part

2.2.1 The Sensor

Goldsmith and Bernard7 (p. 182) reported that for the

photopic eyes, there is a hose-like structure attached at

the exit of the crystalline cone. This attached structure

is the sensory element and is called the rhabdom. The

scotopic eye is similar to the photopic eye with the








exception that there is a wave-guide, the crystalline tract,

positioned between the cone and the rhabdom.

The rhabdom consists of retinula cells. Normally there

are seven or eight retinula cells in an ommatidium. Near

the ommatidial axis, the retinula cells are differentiated

to form the rhabdomeres. Therefore, most of the eyelets

contain seven or eight rhabdomeres. Hence, the sensory

part of the ommatidium is called the rhabdom. A cross

section through the rhabdom of the photopic eye is shown in

Figure 2.2.a and the cross section of the scotopic eye is

shown in Figure 2.2.b. For the scotopic eyes, pigment is

located close to the exit of the crystalline cone at low

intensity levels and moved halfway down the crystalline

tract at higher light levels. Goldsmith and Bernard7 also

state that the pigment granules within retinular cells 1 to

6 (see Figure 2.2.b) migrate laterally to the rhabdomeres,

when in the light-adapted state, but the pigment granules do

not migrate within the two central cells.

In the rhabdomere the light sensitive elements are the

microvilli--these are tiny tubes typically having a diameter
of less than one micrometer.7 Layers of these tubes are

oriented in an alternating crossing pattern, more or less

perpendicular to the longitudinal axis of the rhabdomere as

indicated in Figure 2.3. Mazokhin-Porshnyakov concluded

that the visual pigments are disposed on the surface of the

tubules.









Figure 2.2 The cross section of rhabdom.
a) Photopic eye; b) Scotopic eye

Figure 2.3 The microvilli









2.2.2 The Nerve Connection

As was seen in Figure 2.2, six rhabdomeres are

arranged around a seventh (and possibly an eighth) at the

center. According to Goldsmith and Bernard, each retinular

cell has one nucleus and one axon. The six outer retinular

cells synapse in a single cartridge, and in the case of

the scotopic eye these six cells do migrate laterally in the

light-adapted state. The one or two central cells have

different connections than the surrounding six cells.

2.3 Image Formation

The mosaic theory, which is believed to be the theory

governing insect vision, was proposed by Müller8 in 1826.

The mosaic theory assumed that each eyelet of the insect eye

is only capable of a limited field of view which does not

overlap with the fields of the adjacent eyelets that make up

the compound eyes. Each ommatidium contributes only one

point out of the total image pattern. Hence, under the

mosaic theory, one eyelet of the facet eye is a non-imaging

optical device.

As mentioned earlier, the rhabdom consists of seven or
eight rhabdomeres as the sensing elements. Kuiper9 observed

the rhabdomeres by illuminating the rhabdom of Apis

mellifera. He found that only the individual rhabdomeres

were illuminated, but not the rhabdom as a whole unit.

Furthermore, the arrangement of the rhabdomeres in the

rhabdom is in a symmetrical, radial pattern. This indicates

that each eyelet of an apposition eye could contribute more








than just one pixel to the total image although this

ommatidium could still be a non-imaging optical element.

















CHAPTER 3

OPTICAL STUDIES



3.1 Light Horn

In order to understand the function of the ommatidium

as an optical element, models were built to simulate it.

Like the ommatidium, each model (seen in Figure 3.1)

consists of a light horn (a hollow cone with reflecting

walls)--simulating the crystalline cone--attached to a

non-reflecting cylindrical section--simulating the rhabdom

of the insect eye.

As mentioned in Chapter 2, the crystalline cone of the

insect is transparent but has a refractive index larger than

the surrounding medium which causes the light entering the

crystalline cone to be totally internally reflected. In the

models, total reflection was replaced by regular (specular)

reflection. The simpler of the two designs, the cone with

reflecting walls, was selected in this study because it can

be manufactured more easily.

Light horn studies were started long ago. Initially,

they caught the attention of optical scientists because

light horns seem to circumvent the Second Law of
Thermodynamics.

Figure 3.1 Schematic of the MAO device eyelet

Obviously, one would think that by making the

exit aperture small enough, one should be able to obtain an

illumination density larger than the intensity of the light

source. However, this is not the case, as will become clear

in the analysis presented below.

In the modern age, the light horn was studied as a

possible concentrator for solar energy. Williamson10

commented that although the light horn can be designed to

transmit images, its principal usefulness is found in the

transmission of the maximum amount of energy rather than the

possible image forming potential of the light horn. Welford

and Winston11 concluded that the "ideal concentrator" would
have walls shaped like a paraboloid of revolution, similar to the

conical wall of an insect eye.

3.2 Optics of Light Horns

In the present dissertation the emphasis is on the

image formation rather than on the energy concentration

phenomena. Light horn optics were studied in two parts:

experimentally and theoretically (ray tracing).

3.2.1 Experimental Setup

Figure 3.2 shows the experimental setup. It consists

of a source of parallel light, a photographic shutter, a

light horn and a film holder (or camera). In contrast to

normal picture taking techniques, there is no lens between

the light source and the image.

In the experimental setup two different light horns

were used, one with a large cone angle and the other with a
















































































small cone angle.

Figure 3.2 The experimental setup

The dimensions of these light horns are

shown in Table 3.1. Since only the optical properties are

of interest here, all the dimensions are quoted in units

relative to the radius of the entrance aperture (i.e., one

unit represents the entrance aperture radius of the light

horn). (Note: For different size light horns, dimensions

will all scale linearly.)
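As a small worked example of this normalization (using the Device 1 values from Table 3.1; the variable names are ours, chosen for the illustration), converting to entrance-aperture-radius units is a single division:

```python
# Normalize light-horn dimensions to entrance-aperture-radius units,
# as done in Table 3.1. Values below are Device 1, in millimeters.
entrance_radius_mm = 52.83
dims_mm = {
    "entrance_radius": 52.83,
    "exit_radius": 11.18,
    "horn_length": 154.94,
}

# One "unit" is the entrance aperture radius, so divide through by it.
dims_units = {name: round(mm / entrance_radius_mm, 3)
              for name, mm in dims_mm.items()}
print(dims_units)
```

Because all dimensions scale linearly, the same unit-based geometry describes any physical size of light horn.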

Because of the difficulty in manufacturing a parabolic

light horn, only conical light horns were used in the

experiment (however, the parabolic light horn was studied by

ray tracing).

The patterns generated could be observed in three

different modes (see Figure 3.3) depending on where the

camera (detector) was placed. The images were:

(1) on vertical cross sections of the cylinder (at

various distances from the exit of the light horn),

(2) on the walls of the cylinder,

(3) on horizontal slices through the cylinder (at

various elevations).

For the vertical cross sections, the film is held perpen-

dicular to the axis of the light horn. For the wall pattern

the film can be wrapped around the cylinder or the image

pattern can be observed directly on the interior surface of

the cylinder. In this dissertation, the wall pattern is

determined by taking pictures of the side of the frosted

glass cylindrical tube. Photos of these three positions of

the camera were used to support the results of ray tracing.






















Table 3.1 Dimensions of experimental devices


                              Device 1        Device 2

Entrance Aperture Radius:     52.83 mm        9.18 mm
Exit Aperture Radius:         11.18 mm        7.35 mm
Light Horn Length:            154.94 mm       174.50 mm
Cylinder Radius:              --              9.50 mm
Cylinder Length:              --              1219.20 mm


































Figure 3.3 Three different modes of generating the pattern.
a) Vertical cross section; b) Wall of cylinder;
c) Horizontal slice

3.2.2 The Ray Tracing Program

For light horns of the sizes that are of interest here,

diffraction effects play only a minor role and therefore

this study was only concerned with geometrical optics. The

inner surface of the light horn is assumed to be a totally

reflective surface and to obey the law of reflection. For

computer ray tracing, it is useful to write a program based

on the vector form (by components) of this law of

reflection. Welford and Winston gave the vector equation

of the law of reflection as


    r_r = r_i - 2(r_i . n)n                              (3.1)

where r_i is the unit vector of the incident ray, r_r is the
unit vector of the reflected ray and n is the unit normal
vector of the surface (see Figure 3.4).

When Welford and Winston11 studied the non-imaging
concentrator, they indicated that some rays were returned
back out through the entrance aperture if the incident rays
made too large an angle with the optical axis. For a light
horn with a large cone opening angle, not all of the
incident rays passed through the exit aperture of the light
horn and only part of the light beam contributed to the
final pattern. Therefore a viewer observing in front of the
light horn entrance aperture can still observe a shining
reflection even though there is no light source behind the
light horn. Mazokhin-Porshnyakov1 mentioned that on the
surfaces of many insect eyes a "wandering" spot is found.








































Figure 3.4 The diagram of the law of reflection









This "wandering" spot is a black spot on a shining
background, and its location changes with the direction of
observation. It is the so-called "pseudopupil." The fact
that the light beams are partly reflected and partly passed
through could be the reason why the pseudopupil is found in
insect eyes.

The vector method can be used to calculate the

trajectories of the light beams and the resulting image on

the film or the detectors. The study was restricted to the

following conditions: 1. The light horn was of conical

shape and hollow. 2. The exit aperture of the light horn

has a smaller radius than the entrance aperture.
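The ray tracing described above can be sketched in a few
lines of modern code. The following is a minimal two
dimensional version (in the spirit of the two dimensional
ray tracing of Figure 3.6), assuming a hollow cone with
perfectly reflecting walls; all names and dimensions are
illustrative and not those of the original program.

```python
import math

def reflect(r, n):
    # Law of reflection, eq. (3.1): r_r = r_i - 2(r_i . n) n
    d = r[0] * n[0] + r[1] * n[1]
    return (r[0] - 2 * d * n[0], r[1] - 2 * d * n[1])

def trace_cone_2d(y0, angle_deg, r_in=1.0, r_out=0.5, length=5.0):
    """Trace one ray through a hollow 2-D conical horn (entrance
    half-height r_in at x = 0, exit half-height r_out at x = length).
    Returns (exit_height, reflections); exit_height is None when the
    ray is turned back out through the entrance aperture."""
    k = (r_in - r_out) / length               # wall slope
    nn = math.hypot(k, 1.0)
    n_up = (k / nn, 1.0 / nn)                 # normal of wall y =  r_in - k x
    n_dn = (-k / nn, 1.0 / nn)                # normal of wall y = -(r_in - k x)
    a = math.radians(angle_deg)
    x, y, dx, dy = 0.0, y0, math.cos(a), math.sin(a)
    for bounce in range(100):                 # safety bound on reflections
        hits = []
        if dx > 1e-12:
            hits.append(((length - x) / dx, "exit", None))
        if dx < -1e-12:
            hits.append((-x / dx, "back", None))
        den = dy + k * dx                     # upper wall intersection
        if abs(den) > 1e-12:
            hits.append(((r_in - k * x - y) / den, "wall", n_up))
        den = dy - k * dx                     # lower wall intersection
        if abs(den) > 1e-12:
            hits.append(((k * x - r_in - y) / den, "wall", n_dn))
        t, kind, n = min((h for h in hits if h[0] > 1e-9), key=lambda h: h[0])
        x, y = x + t * dx, y + t * dy
        if kind == "exit":
            return y, bounce
        if kind == "back":
            return None, bounce
        dx, dy = reflect((dx, dy), n)
    return None, 100
```

An axial ray passes without reflection, a marginal paraxial
ray exits after a single reflection on the opposite side of
the axis, and a ray at a large angle is turned back out of
the entrance, as Welford and Winston observed.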

3.3 Results of the Optical Studies

3.3.1 Vertical Cross Section

When a parallel incoherent light source was mounted

paraxially with the axis of the light horn, an interesting

pattern was found on the image side (shown in Figure 3.5).

The image pattern in this case is a set of concentric rings

around a central disk. It was unexpected that these

concentric rings should have very sharp edges and that there

is no light between the rings. As will be shown below,

these concentric rings are not a diffraction pattern but can

be explained with geometrical optics alone. Nevertheless,

it is most astonishing that an empty cone should produce

such a sharp structure.

In order to understand the reason why the parallel beam

could construct this sharp image-like structure, a two























































Figure 3.5 Photo, vertical cross section image pattern,
parallel beam--parallel to axis








dimensional ray tracing is shown in Figure 3.6. It shows an

individual light horn and four mirror images. Assume a

parallel light beam (or a point light source at infinity)

enters the light horn parallel to the optical axis. The

rays bounded by lines 1-1' form the non-reflected light beam
which produces the central disk of the image pattern. The
light beam bounded by lines 1-2 is reflected once and
projected to the opposite side of the axis of the light
horn. The light bounded by lines 1-2 and 1'-2' forms an
annulus which constitutes the first bright ring. Similarly,
the rays between 2-3 and 2'-3' form the second bright ring.
This is why there is a sharp edge and a dark space between
the center disk and the first ring.

To prove that this interpretation is correct, a color

filter was added to the white light source. It consisted of

two halves, a yellow and a red one, whereby the dividing

line was located on a diameter of the light horn aperture.

The result is shown in Figure 3.7. Although the color is
not shown in this dissertation, the grey tone of this
picture still indicates the result. As predicted from the

interpretation above, the center disk is divided into a

yellow half (right half) and a red half (left half), the

dividing line being the diameter of the center disk. The

first ring is also divided into a yellow half and a red half

along the same dividing line. In contrast to the center

disk, the right side of the first ring is red. For the











































Figure 3.6 Two dimensional ray tracing for light horn;
parallel beam--parallel to axis






















































Figure 3.7 Photo, vertical cross section image pattern,
parallel beam--parallel to axis--yellow side
right of axis, red side left of axis








second ring, the colors are reversed again, yellow to the

right and red to the left.

To study the effect when the light source is not on the
axis of the light horn, the parallel light source was set
at an angle of 5 degrees to the axis of the light horn. Figure

3.8 shows the image in this case. The central disk is

displaced slightly compared to the first case (Figure 3.5).

The rings of this case split into twin pairs and they are no

longer perfect rings. There is a sharp cutout which

occurred on the brighter part of the twins. It is believed

that this cutout was reflected to the opposite side to form

the "twin." Figure 3.9 shows the case when the light source

is moved still further off-axis (11 degrees). The central

disk almost disappears. The first ring and its twin turn

into crescents and appear on the same side. The central

disk disappears when the angle is larger than 12 degrees.

While these results are certainly unexpected and

interesting, one can also draw some practical conclusions

from them. If one placed one detector at the center of the

light horn and one at the edge of the first ring, the

combination of the illumination on the detectors could be

used to detect the off-axis angle of the point source at

infinity, at least for 0, 5 and 12 degrees. For example, if

both detectors show strong illumination, the object must be

off-axis by less than 5 degrees. If just the center

detector detects the brightness of the source and the

detector on the ring edge does not detect the source, then the























































Figure 3.8 Photo, vertical cross section image pattern,
parallel beam--5 degrees off axis










Figure 3.9 Photo, vertical cross section image pattern,
parallel beam--11 degrees off axis








object is in between 5 and 12 degrees. If both detectors

see nothing, then this shows that the source is off axis by

more than 12 degrees.

Therefore, although the light horn is a non-imaging

optical device, it is still capable of more than just simply

detecting the presence or absence of an object within its

field of view (FOV).
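The two-detector scheme just described amounts to a small
decision table. A sketch (the detector names are
illustrative, and only the three cases given above are
covered):

```python
def classify_off_axis(center_lit, ring_lit):
    """Coarse polar-angle classification from two thresholded
    detectors: one at the pattern center and one at the edge of the
    first ring (booleans: illumination above threshold)."""
    if center_lit and ring_lit:
        return "within about 5 degrees of the axis"
    if center_lit:
        return "between about 5 and 12 degrees off axis"
    if not ring_lit:
        return "more than about 12 degrees off axis"
    return "not covered by the three cases in the text"
```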

The above study was done with a light horn of certain

specific dimensions (light horn 1, see Table 3.1). To make

sure that a generic effect was discovered, a second light

horn with different dimensions was used. Similar results

were obtained. The only differences between results of the

two light horns are the size and the number of rings. The

number of rings depends on the length of the light horn and

the light horn opening angle. In general, only the

unreflected light rays and singly reflected rays are

important; the doubly reflected rays appear only at large

polar angles where poor resolution destroys their

usefulness. The relationship between the maximum cone
length and the cone opening angle resulting in only one ring
and the central disk is shown as equation 3.2:

    L = Ai tan(2a) cot(a) / (tan(a) + tan(2a))           (3.2)
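Equation 3.2 can be evaluated numerically; the short sketch
below assumes that a in the formula is the cone half-angle
and Ai the entrance aperture radius.

```python
import math

def max_horn_length(a_i, alpha_rad):
    # Eq. (3.2): L = Ai tan(2a) cot(a) / (tan(a) + tan(2a))
    t1, t2 = math.tan(alpha_rad), math.tan(2 * alpha_rad)
    return a_i * t2 / t1 / (t1 + t2)

# For a one degree cone and a unit entrance radius, the single-ring
# condition allows a horn roughly 38 units long.
L = max_horn_length(1.0, math.radians(1.0))
```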


Figure 3.10 shows the results of the ray tracing for

light which is parallel to the axis of the light horn. As

can be seen, the obtained pattern agrees very nicely with

the photograph (Figure 3.5). Figure 3.11 shows the case of













































Figure 3.10 Computational pattern, vertical cross
section image pattern, parallel light
source, parallel to axis





























































Figure 3.11 Computational pattern, vertical cross
section image pattern, parallel light
source, 5 degrees off axis








parallel light 5 degrees off axis. Again it agrees with the

photograph (Figure 3.8), even the wedge in the first ring

shows up exactly the same. Figure 3.12 (corresponding to

the photograph, Figure 3.9) shows the case which is 11

degrees off axis. Notice the similarity with the experi-

mental result; i.e., the central disk almost disappears.

The parabolic light horn was also studied mathe-

matically to examine the vertical cross section pattern.

The ray tracing result of the light source in front of the

light horn at the axis with the vertical cross section taken

at three units down the cylinder is shown in Figure 3.13.

Similar to the cone-shaped light horn (Figure 3.5), the

image of the parabolic light horn consists of a central
disk and one ring-like annulus. The difference between
these two (Figures 3.10 and 3.13) is that the cone-shaped
light horn can have more than one ring but the parabolic
light horn has only one ring-like annulus. In both cases
the ring or the ring-like annulus has a sharp edge at the
inner boundary. When the light source is 5 degrees off axis
(Figure 3.14), similar to Figure 3.11, the center disk moved
aside and the ring-like annulus changed its shape. For the
light source located 11 degrees off axis, the image pattern
is shown in Figure 3.15. Again it is similar to Figure
3.12. This indicates that the parabolic light horn and the
cone-shaped light horn have similar properties.

In the cases discussed above, the patterns were taken

at a fixed location, the outlet of the light horn. The
























Figure 3.12 Computational pattern, vertical cross
            section image pattern, parallel light
            source, 11 degrees off axis















































Figure 3.13 Computational pattern of the parabolic light
horn, vertical cross section image pattern,
parallel light source, parallel to axis
























































Figure 3.14 Computational pattern of the parabolic light
horn, vertical cross section image pattern,
parallel light source, 5 degrees off axis


































































Figure 3.15 Computational pattern of the parabolic light

horn, vertical cross section image pattern,

parallel light source, 11 degrees off axis














question arises: what if the location of the sensors was

moved as in the scotopic eye? Figure 3.16 shows typical

patterns of the number 1 light horn (see Table 3.1) at three

different vertical cross sections along the length of the

cylinder when the object point is at infinity on the axis.

The image is approximately in focus in the case shown in

Figure 3.16.(a), which is 33 units down the cylinder from

the exit aperture. (Again, one unit is the radius of the

entrance aperture.) Larger and larger rings are produced

further down the axis (e.g., Figure 3.16.(b) is at 48 units)

until the reflected rays begin to impinge on the cylinder

wall as shown in Figure 3.16.(c), at 86 units. From Figure

3.16, one can draw the conclusion that when the sensor

location is moved further away from the light horn, the size

of the ring becomes larger and larger, but the size of the

central disk stays almost the same. It is also found that

the intensity of the light on the rings is higher when the

film location moves closer to the light horn. This might be

a reason why the pigment granules of the scotopic eye move

closer to the crystalline cone and migrate to the center

under conditions of darkness.

3.3.2 Wall Pattern

When the light beam passes through the light horn, it

projects an image on the wall of the attached cylinder. The

image pattern of this case is a three dimensional pattern.

To simplify the analysis one may imagine that the cylinder












Figure 3.16 Vertical cross section image patterns at
            three locations down the cylinder: (a) 33
            units, (b) 48 units, (c) 86 units








has been cut along the top and then unrolled and flattened.

The radius of the cylinder is taken to be 1.0 unit.

The wall pattern of an on-axis object point source at

infinity appears to be a uniform band around the cylinder

wall superimposed on a weak and uniform background. Figure

3.17 shows the computational results in this pattern of rays

impinging on the wall. The uniform background, produced by

the unreflected rays, is not shown on this plot. Here the

abscissa represents the distance down the cylinder while the

ordinate represents the angle around the axis. Figure 3.18

is the luminous wall pattern of the light source observed
down the cylinder. To show the image clearly on the
cylinder wall, a frosted glass tube was attached to the exit
aperture of the light horn to simulate the cylindrical part
of the non-imaging device. (The center of the pattern is on
the bottom of the tube and the photograph was taken from the
side.) In this case, the tube used to take the photos of
the wall pattern was too long to remain completely rigid and
it bent at the unsupported part where the band should
occur. Therefore the photo shows only part of the band.
The gap in the band is caused by the laser. Excluding the
nonuniform phenomena caused by the material and the laser
beam, the experimental photo and the computational result
agree with each other.

The wall pattern from the computational results for an

object point one degree off the optical axis is shown in

Figure 3.19. A somewhat distorted band is found between 34































Figure 3.17 Ray pattern impinging on cylinder wall
for object on axis






















































Figure 3.18 Photo, wall pattern, parallel light source,
parallel to axis



































Figure 3.19 Ray pattern impinging on cylinder wall
            for object one degree off axis








and 47 units down the cylinder, and the unreflected rays

form a more intense background, producing the horizontal

elliptical pattern at the center. Although the band of this

case is no longer uniformly distributed, the band location

and the band width were still clearly shown. Figure 3.20

is the wall pattern for an object point two degrees off the

optical axis.

Comparing Figures 3.19 and 3.20, one can clearly see

that when the object point moves off-axis, the band location

moves closer to the light horn exit aperture. This

indicates that the distance of the band could be a measure

of the polar angle of the object point. Of course this

measurement has to be taken from a reference point. The

band caused by an on-axis object, as shown in Figure 3.17,

could be the reference or "zero" band. The width of the

bands produced on the cylinder wall is a measure of the

polar angular resolution of the device, while the distance

of a certain observed band from the "zero" band is a measure

of the polar angle of a certain object point. Figure 3.21

plots the distance of the center of various bands from the

"zero" band as a function of off-axis polar angle for

several different light cones. The center point (Bc) and

the width (Br) of the "zero" band can be found by the

following equations (symbols as in Figure 3.1):


    Bc = [(Dc + Ae/2 + Ai/2) cot(2a + γi) - L] / 2       (3.3)





































Figure 3.20 Ray pattern impinging on cylinder wall
            for object two degrees off axis




















(Curves shown for exit aperture radii Ae = 0.9, 0.8 and
0.7; abscissa: polar angle of the object in radians.)




Figure 3.21 Distance of center of bands from "Zero" band
vs. the off-axis angle







50

    Br = [L - (Ai - Ae) cot(2a + γi) / 2] / 2            (3.4)



where Dc is the diameter of the cylinder and γi is the angle
between the incident ray and the axis of the light horn.
The distance, Db, between the "zero" band and an off-axis
angle band (one reflection only) would be


    Db = (Dc + Ai/2 + Ae/2)(cot(2a) - cot(2a + γi))/2.   (3.5)



In each case, the radius of the entrance aperture of the

light horn is 1.0 unit.

If two object points are present at different polar

angles, they may be distinguished if their respective bands

do not overlap much. The band width, Br, may thus be

converted to polar angles and the angular resolution plotted

as a function of polar angle as in Figure 3.22. The value

plotted is the full-width of the bands converted to a polar

angle, in radians. The angular resolution improves as the

light horn exit aperture approaches the size of the entrance

aperture. The fraction (F) of the total light incident on

the horn which is reflected to form bands is



    F = (R0^2 - R1^2) / R0^2                             (3.6)



where R0 is the entrance aperture radius, and R1 is the exit

aperture radius. Hence, the resolution improves at the

expense of the efficiency as one might expect.
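These band and efficiency relations can be collected into a
small routine. The sketch below follows eqs. (3.3) through
(3.6); the signs inside the brackets are an assumption,
since they were partly lost in reproduction. Angles are in
radians and lengths in units of the entrance aperture
radius.

```python
import math

def cot(x):
    return 1.0 / math.tan(x)

def band_center(d_c, a_i, a_e, horn_len, alpha, gamma_i):
    # Eq. (3.3): Bc = [(Dc + Ae/2 + Ai/2) cot(2a + gi) - L] / 2
    return ((d_c + a_e / 2 + a_i / 2) * cot(2 * alpha + gamma_i)
            - horn_len) / 2

def band_width(a_i, a_e, horn_len, alpha, gamma_i):
    # Eq. (3.4): Br = [L - (Ai - Ae) cot(2a + gi) / 2] / 2
    return (horn_len - (a_i - a_e) * cot(2 * alpha + gamma_i) / 2) / 2

def band_shift(d_c, a_i, a_e, alpha, gamma_i):
    # Eq. (3.5): Db = (Dc + Ai/2 + Ae/2)(cot(2a) - cot(2a + gi)) / 2
    return ((d_c + a_i / 2 + a_e / 2)
            * (cot(2 * alpha) - cot(2 * alpha + gamma_i)) / 2)

def reflected_fraction(r0, r1):
    # Eq. (3.6): F = (R0^2 - R1^2) / R0^2
    return (r0 ** 2 - r1 ** 2) / r0 ** 2
```

For the 0.8-unit exit aperture quoted above,
reflected_fraction(1.0, 0.8) gives 0.36, the 36 percent
figure, and band_shift vanishes for an on-axis ray
(gamma_i = 0), as it must for the "zero" band.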




















Figure 3.22 Angular resolution as a function of
            polar angle








The unreflected light patterns (for example, the

central elliptical patterns in Figures 3.19 and 3.20) could

conceivably be used to distinguish objects at different

azimuthal angles. However, for the geometries studied here

the azimuthal angular resolution would be very poor. More

precise azimuthal angular information can be obtained by

cross correlation between several eyelets. This will be

discussed later.

As far as the polar angular resolution is concerned it

is of interest to see how it varies with the length of the

light horn when the entrance and exit apertures are kept

constant. Figure 3.23 shows the results for an entrance

aperture radius of 1.0 unit, an exit aperture radius of 0.8

unit, cylinder radius of 0.8 unit, and various horn lengths

from 5.0 to 60.0 units. The comparison is made for an

object point one degree off axis. Optimum resolution

appears at a length of about 20.0 units.

The polar angular resolution of the device using wall

patterns is thus limited by (1) the width of the bands, (2)

the presence of background from the unreflected rays, and

(3) the distortion of the bands for off-axis object points.

Nevertheless, a number of angular bands may be

distinguished. For example, using light horn device 2, the

data indicate that this device might be fitted with sensor

rings inside the cylinder as shown in the Table 3.2.

If this horn-cylinder combination were used as a simple

collimator, its light acceptance would be characterized by


























Figure 3.23 Angular resolution vs. light horn length






















Table 3.2 The distance-polar angle relation of light
          horn used


    Distance from light horn        Polar angle range
    (units of entrance radius)      (milliradians)

         80. - 72.                    0.0 to  1.5
         72. - 61.                    1.5 to  4.5
         61. - 49.                    4.5 to  8.0
         49. - 42.                    8.0 to 13.5
         42. - 29.                   13.5 to 24.0
         29. - 15.                   24.0 to 50.0
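Read as a lookup table, Table 3.2 maps a band position
directly to a polar-angle bin. A sketch (distances in
entrance-radius units, angles in milliradians; shared
boundary values are assigned here to the smaller-angle bin):

```python
# (band position range, polar angle range in milliradians), per Table 3.2
BANDS = [
    ((72.0, 80.0), (0.0, 1.5)),
    ((61.0, 72.0), (1.5, 4.5)),
    ((49.0, 61.0), (4.5, 8.0)),
    ((42.0, 49.0), (8.0, 13.5)),
    ((29.0, 42.0), (13.5, 24.0)),
    ((15.0, 29.0), (24.0, 50.0)),
]

def polar_angle_range(distance):
    """Return the polar angle range for a band observed at the given
    distance from the light horn, or None outside the table."""
    for (lo, hi), angles in BANDS:
        if lo <= distance <= hi:
            return angles
    return None
```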








the half-angle of the 1 x 19 unit collimator, i.e. about

0.05 radian. Thus, use of the reflection bands on the

cylinder wall enables finer resolution and possible imaging

within a single collimator tube. If the light horn with an

exit aperture of 0.95 unit were used, about 20
distinguishable angular ranges would be obtained. However, the

reflecting surface area would drop from 36 percent of the

total entrance aperture to about 10 percent and the length

of the cylinder would need to be increased from about 150

units to over 300 units.

3.3.3 Horizontal Cross Section

As we have seen, both the vertical cross section and

the wall pattern could be used to improve the resolution of

the non-imaging optical device. It is interesting to know

what the horizontal cross section (seen as in Figure 3.3.c)

pattern would be and whether it could be used to improve

resolution or not. Such a pattern was computed for the

plane located 1.0 unit above the optical axis and is shown

in Figure 3.24 (the object point was on axis). In this case

the pattern appears as a parabolic band on a uniform

background. Figure 3.25 shows the photograph pattern

corresponding to the calculation shown in Figure 3.24 where

the observed fine structure is caused by the laser and

should be considered as an artifact. Of course the

photographs show more detail than the ray tracings predict,

which is to be expected since multiple reflections and

diffraction effects were ignored. Although the parabolic



















Figure 3.24 Computational pattern on the horizontal
            slice 1.0 unit above the optical axis,
            object point on axis






















































Figure 3.25 Photo, pattern on horizontal slice








band can be used (similar to the wall pattern band) to

define the polar angle, the location of the parabolic band

changes too dramatically as the elevation of the plane

varies. Therefore, it is suggested not to use this

information, since the wall pattern can do the job nicely

already.

3.4 Applications

3.4.1 Vertical Cross Section

As discussed in Section 3.3.1, the vertical cross

sectional pattern of a point (or parallel) light source

produced by the light horn is a disk and several rings. If this

pattern is projected on a 5 X 5 detector array, as shown in

Figure 3.26, the number of rays falling on each detector,

in connection with a threshold setting, can be used to

identify the location of the object point. Figure 3.27

(a,b,c) shows the overlapping pattern of the 0, 5 and 11

degrees cases, respectively. The detector responses can be

used to analyze the edge of an object boundary. If the

detector response is set to be "1" when the number of rays

is larger than the threshold setting and to be "0" when it

is less than the threshold setting, the response of the

detector array is used to construct the matrix which

indicates the location of the point object (or edge of an

object). Figure 3.27 (d,e,f) includes the possible matrices

that correspond to Figure 3.27 (a,b,c) respectively.
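The thresholding step can be sketched directly; the count
grid below is invented for illustration only.

```python
def response_matrix(ray_counts, threshold):
    """Binarize a detector-array ray-count grid, as described in the
    text: 1 where the number of rays exceeds the threshold, else 0."""
    return [[1 if n > threshold else 0 for n in row] for row in ray_counts]

# hypothetical 5 X 5 ray counts for an on-axis point source
# (central disk plus one bright ring):
counts = [
    [0, 9, 9, 9, 0],
    [9, 2, 2, 2, 9],
    [9, 2, 8, 2, 9],
    [9, 2, 2, 2, 9],
    [0, 9, 9, 9, 0],
]
matrix = response_matrix(counts, 5)
```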

For this application the detector array does not

necessarily have to be exactly a 5 X 5 array; more detectors
















Figure 3.26 A 5 X 5 detector array arrangement




















Figure 3.27 Overlapping patterns of the 0, 5 and 11
            degrees cases (a,b,c) and the corresponding
            response matrices (d,e,f)








would give better resolution. As discussed earlier, the

higher order rings are produced by several reflections. The

intensities of the higher order rings are not as large as

the intensity of the first ring and therefore might not be

sufficient for detection. Consequently, it is suggested

that the length of the light horn be chosen so that only one

ring and the disk be produced in the zero degree case. In

this case a 3 X 3 detector array or, (as in the rhabdomeres

of the insect eye), six on a circle and one or two at the

center is sufficient to resolve a relatively small polar

angle difference especially when the field of view is made

small. Also, the depth of the device could be held to a

minimum when the number of rings is minimal. By doing so,

not only could the size of the device be reduced but also
the amount of information to be analyzed. Table 3.3

lists the matrices of a point source at different locations

which could be used to determine the polar angle of the

object point.

Having the results of the single light horn optical

study available, it is desirable to expand the study to

multiple light horns. Figure 3.28 shows the resulting

computational pattern of a centered object point in front of

9 light horns arranged on a 3 by 3 array. In this case, the

displacement of the light horn apices were set to be four

units (two times the entrance diameter). Figure 3.29 shows

the pattern of 25 light horns. A 3 by 3 detector array is

placed behind each light horn, the resulting matrix is shown
















Table 3.3 Matrices of point sources at different
          locations

(Note: The point sources are 1000 units away from the
light horn.)

     0:   [1, 1, 1]
    10:   [1, 1, 1]
    20:   [0, 1, 0]
    30:   [1, 0, 1]
    60:   [1, 0, 1]
    90:   [0, 0, 0]














































Figure 3.28 Computational pattern for multiple light
horns arranged on a 3 X 3 array





















































Figure 3.29 Computational pattern for multiple light horns
            arranged on a 5 X 5 array








in Table 3.4. It can be seen in Figure 3.27 that some of

the rays strike the detector array of the neighboring light

horns if the off-axis angle is not equal to zero. Thus a

cylinder-like divider, similar to the pigment cells of

scotopic compound eye, is suggested to prevent the

overlapping of the image patterns of neighboring light

horns.

3.4.2 Wall pattern

As described earlier, the light sensitive elements of

the insect eye are the layers of the microvilli which divide

the cylindrical shaped rhabdom into a multitude of pixels;

albeit in the direction of the cylinder axis rather than

perpendicular to the axis as one would expect for a focal

plane array. The microvilli-containing rhabdom is a long
cylinder rather than just the couple of layers of microvilli
shown in Figure 2.3. This suggests a possible application of
the wall patterns. The fiber-shaped detector which was
designed by Schneider2 (shown in Figure 3.30) could be a
detector for this application.

Based on the studies of the wall pattern, the position

of the singly reflected band can be used as a measure of the

off-axis polar angle for object points in the field of view

of the light horn. However, it seems unlikely that more

information than just the polar angle can be determined with

a single horn-cylinder combination. At any specific polar

angle, the patterns are quite insensitive to the azimuthal






















Table 3.4 Matrix of a point source image on the 5 X 5
light horn array.


1 0 0 0 1 1 0 0 0 1 0 0 0 1 1 0 0 0 1
0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0
0 0 0 1 1 0 0 1 0 0 0 1 0 0 1 1 0 0 0
0 0 1 1 1 0 0 1 0 1 0 1 0 0 1 1 1 0 0
1 0 1 1 1 1 1 0 0 1 0 0 1 1 1 1 1 0 1
1 0 0 0 1 1 0 1 1 1 1 1 0 1 1 0 0 0 1
0 0 0 0 1 0 0 1 1 0 1 1 0 0 1 0 0 0 0
0 1 1 1 0 1 1 0 0 0 0 0 1 1 0 1 1 1 0
0 0 0 0 0 1 1 0 1 1 1 0 1 1 0 0 0 0 0
1 0 0 1 1 1 0 0 1 1 1 0 0 1 1 1 0 0 1
0 0 0 0 0 1 1 0 1 1 1 0 1 1 0 0 0 0 0
0 1 1 1 0 1 1 0 0 0 0 0 1 1 0 1 1 1 0
0 0 0 0 1 0 0 1 1 0 1 1 0 0 1 0 0 0 0
1 0 0 0 1 1 0 1 1 1 1 1 0 1 1 0 0 0 1
1 0 1 1 1 1 1 0 0 1 0 0 1 1 1 1 1 0 1
0 0 1 1 1 0 0 1 0 1 0 1 0 0 1 1 1 0 0
0 0 0 1 1 0 0 1 0 0 0 1 0 0 1 1 0 0 0
0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0
1 0 0 0 1 1 0 0 0 1 0 0 0 1 1 0 0 0 1






















Figure 3.30 Fiber shaped detector








angle. The construction of an image would require at least

three adjacent horn-cylinder combinations with overlapping

fields of view. Figure 3.31 shows the two dimensional area

which is defined by overlapping the fields of view of three

adjacent light horns. With a honeycomb shaped arrangement

the light horns could cover all of object-space.

If ring shaped detectors could be made and arranged to

form a detector tube as shown in Figure 3.32, these

detectors would not only detect the presence of light but

would act as number of single channel analyzers. An

individual single channel analyzer would measure the

intensity of light which strikes on the inner surface of the

ring shaped detector. The total output of the detector tube

is then an intensity distribution along the length of the

wall of the cylinder. The intensity distributions (of the

three adjacent eyelets) may be correlated to reconstruct the

image perceived by the three eyelet system.
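One simple way such a correlation could be realized is
suggested by the geometry: if each eyelet's band position is
first converted to a radial offset of the object point in
the object plane, three offsets from three eyelets fix the
point. The sketch below is ordinary trilateration, an
assumption about how the eyelet outputs might be combined
rather than a method stated here:

```python
def trilaterate(p1, r1, p2, r2, p3, r3):
    """Locate a point in the object plane from its radial offsets
    r1, r2, r3 about three eyelet axes p1, p2, p3 (2-D points), by
    solving the linearized circle-intersection equations."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1      # zero when the eyelet axes are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

The requirement that the three eyelet axes not be collinear
matches the triangular (honeycomb) arrangement described
above.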

Since a detector cannot distinguish the singly

reflected beam from other light beams, all of the light

beams--instead of just the singly reflected beams--would

contribute to the intensity distribution of the wall

pattern. However, the length of the horn element may be

chosen to eliminate all rays reflected more than n times,

where n is any arbitrary integer, n = 0, 1, 2, ...

Through ray tracing, the intensity distributions (along

the length of the cylinder wall) at different polar angles

were created and several of them are shown in Figures






























Figure 3.31 The two dimensional area defined by the
            overlapping fields of view of three
            adjacent light horns











Figure 3.32 The detector tube formed by ring shaped
            detectors








3.33-3.35. (Note: The light horns used here are 20 units long

with a one degree cone angle. The object plane is assumed

to be 1000 units away from the entrance aperture of these

horn-cylinder combinations.) Among these intensity

distributions, it is found that the maximum is located at

different longitudinal positions for different polar angles.

Similar to the position of the singly reflected rays, the

location of the peak depends also on the off-axis polar

angle. This relation between the peak positions and the

polar angles of the objects is shown in Figure 3.36. The

intensity at this characteristic position, the position

where the peak is located, indicates the possible

contribution to the image pattern from the particular polar

angle. Therefore this characteristic position is also used

to construct the image.
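The characteristic-position step reduces to finding the
argmax of a sampled intensity distribution and looking up
the corresponding polar angle. A sketch (the sample
distribution and the peak-to-angle pairs below are invented
placeholders, not data from Figure 3.36):

```python
def peak_position(intensity):
    """Index of the maximum of an intensity distribution sampled
    along the length of the cylinder wall."""
    return max(range(len(intensity)), key=intensity.__getitem__)

def polar_angle_from_peak(intensity, peak_table):
    """Map the characteristic (peak) position to a polar angle using
    a calibration table of (position, angle) pairs: nearest entry."""
    p = peak_position(intensity)
    return min(peak_table, key=lambda pa: abs(pa[0] - p))[1]
```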

In order to simplify the analysis, both the object and

image spaces were put on the identical triangular grid, each

sampled in identical two-dimensional arrays with the node

point arrangement as shown in the Figure 3.37. Although

three eyelets could be arranged to see an area larger than

the triangular area formed by the axes of the three eyelets,

for simplicity the three eyelets were arranged to have

overlapping fields of view only to such a degree to just

cover this triangular area. Any object area other than that

covered by this triangle may be covered by other eyelets in

the array (see Figure 3.38). Hence the image space is also

limited to this triangular area.
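Scanning the node points of such a triangular grid can be sketched as follows. The indexing convention here (row i, column j, with rows shrinking by one node as i grows) is an assumption chosen to mimic corner labels such as (1,1) and (51,1) in Figure 3.37; the dissertation's exact convention may differ.

```python
def triangle_nodes(n_rows):
    """Yield (i, j) node indices of a triangular grid with n_rows rows,
    where row i contains n_rows - i + 1 nodes."""
    for i in range(1, n_rows + 1):
        for j in range(1, n_rows - i + 2):
            yield (i, j)

# A small 4-row example; the full grid in the text would use 51 rows.
nodes = list(triangle_nodes(4))
print(nodes)  # rows shrink by one node per row, forming a triangle
```

With 51 rows this enumeration visits every node of the triangular image space exactly once, which is all the scanning step requires.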





































Figure 3.37 The triangular grid (labeled nodes: (1,1),
(1,26), (1,51), (17,17), (51,1))


















































Figure 3.38 The object space covered by the eyelet
system.








A source point in object space produces an illuminated

pattern on the detection surfaces of the three horn-cylinder

combinations. Point sources at any location on a ring

centered on the optical axis have the same polar angle and

therefore produce only one set of illuminated patterns.

It is not possible to locate the position of the object

point directly since its relation to the polar angle is

ambiguous. When constructing an image from these detector

patterns, an object point (i,j), of intensity I_ij, is mapped
into an irregular area (point spread function) around image

point (i,j). This point spread function is not spatially

invariant (i.e., not isoplanatic). However, one can pick an image

point (i,j) and calculate the probability that the tube

patterns are caused by a source point at the corresponding

object point (i,j). The resulting probability distribution

on the image space is then an "image" of the point (i,j) of

the object space.

Four methods were used to construct the image patterns:

Addition, Multiplication, Cross Correlation with Addition

(CCA) and Cross Correlation with Multiplication (CCM). All

of them use the same grid system and the definition of these

operations are discussed below.

For a known object point the intensity distributions on

the three eyelets, f^1, f^2 and f^3, can easily be found,

because the polar angle to the three eyelets can be calcu-

lated. To construct the image pattern, all the nodes on the

triangle were scanned. One can then search for the








characteristic position of the image point by checking the

intensity distribution along the detector tube in order to

determine the contribution of light from the object point to

this particular point. Therefore, when tracing the point

source on the image space, this intensity is placed on the

particular ring which is related to this particular polar

angle. Thus, for every image node point (i,j) the relative
polar angles, θ^1, θ^2, θ^3, and the characteristic

positions, L_c^1, L_c^2, L_c^3, to the light horns were found.

The intensity values--f^n(L_c^n), where n stands for 1, 2 or 3--
at the characteristic positions were extracted from the

distribution curves. Since the intensity on the node point

stems from various eyelets, it is reasonable to apply a

superposition law to the intensities and therefore a

reasonable way to reconstruct the image has been found. The

sum of the intensities at the characteristic positions,

f^n(L_c^n), from the distributions of all three eyelets were

assigned to the node and then the intensity of the image

point (i,j) could be expressed as



I_ij^image = f^1(L_c^1) + f^2(L_c^2) + f^3(L_c^3)        (3.7)
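The Addition method of Eq. (3.7) amounts to summing, for each image node, the intensities read at the three characteristic positions. A minimal sketch follows; the data layout (one sampled intensity list per eyelet plus one characteristic-position index per eyelet) and all numeric values are illustrative assumptions.

```python
def addition_image_intensity(eyelet_distributions, characteristic_positions):
    """Sum f^n(L_c^n) over the three eyelets, as in Eq. (3.7)."""
    return sum(f[lc] for f, lc in zip(eyelet_distributions,
                                      characteristic_positions))

# Toy sampled distributions along the three detector tubes.
f1 = [0.1, 0.6, 0.2]
f2 = [0.3, 0.2, 0.5]
f3 = [0.4, 0.1, 0.1]

# Characteristic positions L_c^1, L_c^2, L_c^3 for one image node.
print(addition_image_intensity([f1, f2, f3], [1, 2, 0]))  # 0.6 + 0.5 + 0.4
```

Repeating this for every node of the triangular grid yields the intensity patterns used as templates in Figures 3.39-3.42.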


This Addition method was used to construct the

intensity patterns created by projection of all the nodes

into the object space and these patterns can be used as a

template to identify the position of the objects. Figures

3.39-3.42 are four intensity patterns which represent the





























POINT SOURCE AT NODE (1,1)

[Plot and intensity-range legend not reproduced]

Addition method


Figure 3.39 The image pattern created by the Addition
method, object point at node (1,1)
















POINT SOURCE AT NODE (5,5)

[Plot and intensity-range legend not reproduced]

Addition method


Figure 3.40 The image pattern created by the Addition
method, object point at node (5,5)





























POINT SOURCE AT NODE (1,26)

[Plot and intensity-range legend not reproduced]

Addition method


Figure 3.41 The image pattern created by the Addition
method, object point at node (1,26)



















POINT SOURCE AT NODE (17,17)

[Plot and intensity-range legend not reproduced]

Addition method


Figure 3.42 The image pattern created by the Addition
method, object point at node (17,17)








possible image of an object point. Because it is

impractical to indicate the exact value of the intensity

at the various nodes on the figures, these intensities are

shown in ranges which are characterized by different

patterns. Only the intensity values larger than half of the

maximum value are shown on the figures. The intensity

values of less than half of the maximum do not make a

significant contribution to the total image. Moreover, the

position where the intensity falls to half the maximum

indicates the resolution of the system for that particular

object point. From these patterns it can be seen that the

highest intensity value occurs over an area which includes the

position of the object point. Thus one can conclude that

the Addition method gives good results for identifying the

location of the object point. However, one must remember

that the intensity distribution is not a delta function,

but rather a point spread function with a finite area.
Hence, the relationship between L_c^n and θ_ij^n is only

approximate. The detector intensity f^n represents only the

probability that there is a source point at the object

point (i,j).

Although the Addition method generated a maximum at the

location of the object point, the intensity differences

between the point and its neighbors were found to be small

or even zero and therefore indistinguishable. Source points

closer together than about one third of the side of the

triangle would probably not be distinguishable. Therefore,








the Multiplication method is also suggested. The Multipli-

cation method finds the possible intensities at the node

point from all three intensity distributions, calculates

the product of those three values, and assigns this product

value to the node point; this can be expressed as


I_ij^image = f^1(L_c^1) · f^2(L_c^2) · f^3(L_c^3)        (3.8)
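The Multiplication method of Eq. (3.8) differs from Addition only in combining the three intensities by a product, which suppresses nodes where any one eyelet responds weakly and so sharpens the peak. The sketch below mirrors the Addition sketch; the data layout and values are again illustrative assumptions.

```python
import math

def multiplication_image_intensity(eyelet_distributions,
                                   characteristic_positions):
    """Product f^1(L_c^1) · f^2(L_c^2) · f^3(L_c^3), as in Eq. (3.8)."""
    return math.prod(f[lc] for f, lc in zip(eyelet_distributions,
                                            characteristic_positions))

f1 = [0.1, 0.6, 0.2]
f2 = [0.3, 0.2, 0.5]
f3 = [0.4, 0.1, 0.1]

# A node where all three eyelets respond strongly scores far higher
# than one where only two do, making the peak more distinct.
print(multiplication_image_intensity([f1, f2, f3], [1, 2, 0]))  # 0.6 · 0.5 · 0.4
```

This behavior matches the observation in the text that the intensity ratios are much larger under multiplication, as at node (1,1) in Figure 3.43.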

Figures 3.43-3.46 show the results of the Multiplication

method. Although there is still no sharp image point in

some of the cases, the intensity ratios are much larger and

the location of the object point is more distinct, such as

at the node point (1,1) in Figure 3.43. Source points

closer together than about one-fifth of the side of the

triangle would probably not be distinguishable.

As mentioned, the intensity distributions along the

cylinder wall that result from a given object point can be

found when the point object position is known. One can also

construct the intensity distribution functions along the

cylinder, Φ^n(r,z), for each light horn, where r stands for

the distance of the image point from the axis of the light

horn n (n = 1, 2 or 3), and z is the distance from the exit

aperture. Thus, the integral (with respect to z) of the

product of the functions f^n(z) and Φ^n(r,z) is the

probability, P^n(r), of the image point occurring on the ring

of radius r centered at the axis of light horn n. Since

the intensity distribution was generated by the ring-shaped

detector, it is a discrete function instead of a continuous one.
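Because the ring-shaped detector samples the distributions at discrete positions along z, the z-integral for P^n(r) reduces to a sum over rings. The following minimal sketch assumes f^n and Φ^n are sampled at the same ring positions; the function name and all values are illustrative, not taken from the dissertation.

```python
def ring_probability(f_n, phi_n_at_r):
    """Discrete analogue of P^n(r) = ∫ f^n(z) · Φ^n(r,z) dz,
    with both functions sampled at the same ring positions z."""
    return sum(f * phi for f, phi in zip(f_n, phi_n_at_r))

f_n = [0.2, 0.7, 0.1]          # detected intensity per ring
phi_n_at_r = [0.0, 0.5, 0.5]   # distribution function at a fixed radius r
print(ring_probability(f_n, phi_n_at_r))  # 0.7·0.5 + 0.1·0.5
```

Evaluating this sum for every radius r and every light horn n gives the probability distribution over rings from which the image is assembled.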















POINT SOURCE AT NODE (1,1)

[Plot and intensity-range legend not reproduced]

Multiplication method


Figure 3.43 The image pattern created by the
Multiplication method, object point
at node (1,1)


















POINT SOURCE AT NODE (5,5)

[Plot and intensity-range legend not reproduced]

Multiplication method


Figure 3.44 The image pattern created by the
Multiplication method, object point
at node (5,5)