Permanent Link: http://ufdc.ufl.edu/UF00090908/00001
Material Information
Title: The distributed compartment model for resource management and access control
Physical Description: Book
Creator: Greenwald, Steven Jon

Record Information
Bibliographic ID: UF00090908
Volume ID: VID00001
Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
Resource Identifier: alephbibnum - 002018874; oclc - 32769134











THE DISTRIBUTED COMPARTMENT MODEL FOR RESOURCE
MANAGEMENT AND ACCESS CONTROL














By

STEVEN JON GREENWALD


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
UNIVERSITY OF FLORIDA


1994

































Copyright 1994

by

Steven Jon Greenwald
























This work is dedicated to the three people who have had the most impact on my
life. First, to my parents, Edith and Marvin Greenwald, who never failed to offer
their support and encouragement. Second, to my sweetheart, Laura Corriss, who
never failed to be there when I needed her, and who encouraged me when I needed
it the most. Without the influence all three of these remarkable people had on me,
I doubt this work would ever have been done.













ACKNOWLEDGEMENTS


I wish to give special thanks to my Ph.D. committee chairman, advisor, and
friend, Dr. Richard E. "Nemo" Newman-Wolfe, for the guidance he has given me
during the four years I have spent at the University of Florida as a doctoral student.
His influence on every facet of my doctoral student life is profound. This
dissertation was unquestionably shaped and influenced for the better by his
incredibly valuable suggestions and contributions.

The other members of my Ph.D. committee also deserve thanks for spending their
valuable time on me. Each was so important that I present them in alphabetical order.
Dr. Paul Avery of the Physics Department, as my external (to the Computer and
Information Sciences Department) faculty member, offered a different viewpoint than
the CIS department, which was greatly appreciated. Dr. Manuel Bermudez provided
me with much important guidance, not only concerning my dissertation research but
also about "learning the ropes." Dr. Ted Johnson had a knack for instantly putting
things in a perspective that I found invaluable and which should greatly improve the
quality of my future research. Lastly, Dr. Joe Wilson deserves thanks for providing
(among other things) some "spoilers" during my qualifying exam that helped direct
me to areas of research I might not have encountered otherwise.

Some special people who deserve thanks are Dr. Steve Thebaut, for being my
first supervisory committee chairman, and Dr. Randy Chow, for being my second
(and then helping me out by substituting for Ted Johnson at the last minute for
my doctoral defense). John Bowers, the department graduate secretary, was always
extremely helpful, and no doubt made a difficult situation easier.


















TABLE OF CONTENTS


ACKNOWLEDGEMENTS

KEY TO SYMBOLS

ABSTRACT

CHAPTERS

1 INTRODUCTION
    1.1 Overview
    1.2 Definitions
    1.3 Problem Statement
        1.3.1 Introduction
        1.3.2 Security Problems with Distributed Systems
        1.3.3 Comments
    1.4 Dissertation Organization

2 SURVEY OF RELEVANT WORK
    2.1 Introduction
    2.2 Access Matrix Model
        2.2.1 Introduction
        2.2.2 Description
        2.2.3 Comments
    2.3 The Bell-LaPadula Model
        2.3.1 Introduction
        2.3.2 Military Message Experiment
        2.3.3 Air Force Data Services Center Multics
        2.3.4 Kernelized Secure Operating System
        2.3.5 Guard
        2.3.6 Comments
    2.4 Other Information Flow Models
        2.4.1 Introduction
        2.4.2 Description
        2.4.3 Comments
    2.5 Military Message System
        2.5.1 Introduction
        2.5.2 Description
        2.5.3 Comments
    2.6 Andrew
        2.6.1 Introduction
        2.6.2 Description
        2.6.3 Comments
    2.7 The ADMIRAL Model
        2.7.1 Introduction
        2.7.2 Description
        2.7.3 Comments
    2.8 IX
        2.8.1 Introduction
        2.8.2 Description
        2.8.3 Comments
    2.9 Amoeba
        2.9.1 Introduction
        2.9.2 Description
        2.9.3 Comments
    2.10 Other Work
    2.11 Conclusions

3 THE DISTRIBUTED COMPARTMENT MODEL PHILOSOPHY
    3.1 Introduction
    3.2 Distributed Handles
    3.3 Distributed Compartments
    3.4 Conclusions

4 THE DISTRIBUTED COMPARTMENT MODEL
    4.1 Introduction
    4.2 The Standard Model
        4.2.1 Components
        4.2.2 Secure State Invariants
        4.2.3 The Rules of Operation
    4.3 Aftermath

5 DISCUSSION OF THE DISTRIBUTED COMPARTMENT MODEL
    5.1 Introduction
    5.2 Comparison of the Distributed Compartment Model to BLP
        5.2.1 Introduction
        5.2.2 Similarities
        5.2.3 Differences
        5.2.4 Conclusions
    5.3 Alternatives to the Standard Distributed Compartment Model
        5.3.1 Introduction
        5.3.2 Second Model
        5.3.3 Third Model
        5.3.4 Fourth Model
        5.3.5 Fifth Model
        5.3.6 Conclusions
    5.4 Implementation Issues
        5.4.1 Introduction
        5.4.2 Software Design
        5.4.3 Software Implementation
    5.5 Usage Examples
        5.5.1 Electronic Mail Between Discoms
        5.5.2 "Simple" File Creation
        5.5.3 "Complex" File Creation
        5.5.4 Replicated Fault Tolerant Files
        5.5.5 CPU Resource Access
        5.5.6 Distributed Conferencing
        5.5.7 Grading Projects
        5.5.8 Paper Collaboration
        5.5.9 Location Transparency
    5.6 Future Research
    5.7 Conclusions

6 CONCLUSIONS

APPENDICES

A MATHEMATICAL NOMENCLATURE
    A.1 Set Notation
    A.2 Relations

B AN OVERVIEW OF THE BELL-LAPADULA MODEL
    B.1 Introduction
    B.2 Overview
        B.2.1 Descriptive Capability
        B.2.2 General Mechanisms
        B.2.3 Specific Solutions
    B.3 Conclusion

REFERENCES

BIOGRAPHICAL SKETCH














KEY TO SYMBOLS


Symbol      Description

|           such that
iff         if and only if
⇒           then, implies
∧           logical and
∨           logical or
¬           logical not
∪           union
∩           intersection
∅           empty set
∀           for all, the universal quantifier
∃           there exists some, the existential quantifier
∄           there does not exist some
∈           is an element of the referenced set
∉           is not an element of the referenced set
⊆           contained in or equal to, subset
⊈           not contained in or equal to, not a subset
⊂           contained in but not equal to, proper subset
→           maps
↦           maps to, for the element of a set
⟼           maps to, for a set
Πₙ          projection: returns the nth element of a tuple
⊣           governs
⊢           sires
⊩           rules
αₙ          n-accesses
ρₙ          n-requests
T, τ        rule set, element of a rule set
υ           usage function (for a resource)
σ           designates an unused, sanitized resource
¬σ          designates an unused, unsanitized resource
Δ           discom function (for a resource)













Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

THE DISTRIBUTED COMPARTMENT MODEL FOR RESOURCE
MANAGEMENT AND ACCESS CONTROL

By

Steven Jon Greenwald

August 1994

Chairman: Dr. R. E. Newman-Wolfe
Major Department: Computer and Information Sciences


Given present trends in distributed computer systems, the standard model of
security used on most distributed systems is outmoded. This older model dates back
to simpler times and is based on the idea of a centrally managed system (usually a
mainframe or minicomputer system). Even modern networked computer environments
are usually centrally managed systems using this older model.

This older model is based on the idea that there is a central managing authority,
called the system administration, that is ultimately responsible for the management
of computer security. This management is usually done with some form of
discretionary access control method, where each user is granted (or denied) privileges
and resources depending on the security policies enforced at that particular system.
The system administration, among other things, manages the system resources,
creates and destroys user accounts, and grants and revokes user privileges. This model
is typified by an operating system such as UNIX.

This model introduces several difficulties when working in a distributed computing
environment. The scope of this dissertation covers the problem areas of resource
allocation and access control. The solution proposed herein is a "Distributed
Compartment" model consisting of two major components. First, "Distributed
Handles" are a method for user identification and access control. Second,
"Distributed Compartments" are a method for allowing users to manage resources
within a distributed system across computer system administrative boundaries
without many of the restraints of the old model. A formal security model is presented
that defines these concepts and further refines them into a state transition model.

The formal axiomatic model presented consists of component sets and their
members. There is a set of binary relations used to partially order the sets and
specify operations on sets. The model defines a secure system state, and rules of
operation provide secure state transitions from one secure system state to another.
There is a set of secure system state invariants that each rule must satisfy in order
to maintain a secure system state. Each rule is proven secure within the model.
















CHAPTER 1
INTRODUCTION

1.1 Overview

The security model currently used on most distributed systems is an old one,
dating back to simpler times when most computer systems were centralized. This
model is based on the idea that there is a central managing authority, called the
system administration, that is ultimately responsible for the management of computer
security [33]. In this model, the system administration, among other things, manages
the system resources, creates and destroys user accounts, and grants and revokes user
privileges. This model is typified by an operating system such as UNIX [2].

This model introduces some difficulties when working in a distributed computing
environment. The solution proposed here is a new model consisting of two
components: distributed handles and distributed compartments. This new model, the
Distributed Compartment Model, is a new paradigm for the management of resources
and the control of user access on distributed computer systems. The Distributed
Compartment Model was specifically created to rectify some of the problems and
deficiencies of the current models of access control [13].

It is my hope that the Distributed Compartment Model will be implemented
in a distributed computing system environment, such as a UNIX-based distributed
computer system.










1.2 Definitions

The following definitions are provided to help characterize the problem. In
general, they correspond to their usual meanings in the computing community. There
may be considerable overlap between some of these definitions and those defined
elsewhere, such as at the National Computer Security Center [29].

Computer System: A collection of one or more computers and their software
that is managed as a discrete unit. A particular computer system being used may
also be referred to as a host.

HostID: A unique identifier for a computer system.

System Administration: The agent responsible for maintaining a particular
computer system. That computer system may be composed of just one computer, or
may be a network of computers. For the purposes of this dissertation, the system
administration is the supreme authority for a computer system.

User: A discrete entity, usually human, that uses computing resources. Users
are atomic (i.e., they may not be subdivided).

UserID: An operating-system-dependent user identification label. This is a
well-known concept in operating systems. Under a typical operating system, the
system administration must manage userIDs in a centralized fashion. Any requests
for new userIDs must be made to the system administration.

Password: A string that a user must provide at login time in order to
authenticate a userID and gain access to a computer system. In certain cases this
string may be null (i.e., no password is needed for access).

Standard Access Control Model (SACM): A common method of computer
system access control, consisting of users who are each provided by the system
administration with a unique userID, and who require a password to gain access to
the computer system.

Distributed System: A collection of computer systems that can communicate
with each other. A distributed system may be managed by different system
administrations.

Distributed Collaboration: Work performed by more than one user using a
distributed system.

Groupware: Software designed to facilitate distributed collaboration.


1.3 Problem Statement

1.3.1 Introduction

Most of the computer systems in use are based on an old, centralized method
of security. The research this dissertation describes is specifically concerned with
the management of system resources and the management of access control in a
distributed computing environment. Several systems already exist to facilitate
security in a distributed environment, such as Andrew [37]; however, they still have
elements of the old centralized methods. Some of these systems are surveyed later.

1.3.2 Security Problems with Distributed Systems

SACM presents the following problems when working in a distributed
environment.











1. UserIDs are often duplicated across name-space domains in a distributed
system. For example, two different users may have the same userID on two different
computer systems within a distributed system. This presents a problem when using
groupware: how can each user be unambiguously identified? Currently, the only way
would be to append a host computer system identifier to each userID. For example,
with a userID of 'sjg' and a host of 'cis.ufl.edu' we could use the Internet method of
'sjg@cis.ufl.edu'. This is cumbersome in many cases.

2. Location transparency may not be possible. In an application where location
transparency is a goal, using a userID and hostID combination is unacceptable.
For mobile users who change hosts often, the combination of userID and hostID
fails to uniquely identify the user. For example, the user with the userID 'sjg'
may be identified as 'sjg@limpkin.cis.ufl.edu' at one time, and as
'sjg@chameleon.cis.ufl.edu' at another. This means that using the combination of
userID and hostID as a means of identification results in multiple aliases for the
same user. When this is combined with the possibility of two users in different
name-space domains having the same userID, serious identification problems may
result (e.g., is 'sjg@limpkin.cis.ufl.edu' the same user as 'sjg@buvax.barry.edu'?).
Another problem is that one user may have two (or more) different userIDs at
different locations, causing another identification problem (e.g., is
'greenwald@buvax.barry.edu' the same user as 'sjg@ufl.edu'?).

3. Unique user identifiers based on userID and hostID combinations may be
redundant to groupware collaborators. In many cases of collaboration the users do
not even care about which computer systems their colleagues are using. For example,
two researchers at different universities collaborating on the same paper are not
particularly interested in cumbersome host computer names; they are only interested
in collaborating with one another.

4. There exists a "weak link in the chain" effect: the security of the entire
distributed system depends upon the security of the individual computer systems
being used within a heterogeneous name-space. One lax system administration can
compromise an entire distributed system by allowing access to unauthorized users,
sharing of userIDs, etc. As a result, the system with the weakest security sets the
maximum security quality of the entire distributed system.

5. In many installations the system administration is reluctant to permit a single
user to have multiple userIDs (this is not a criticism of system administrations; they
often have good reasons for this policy). This makes it difficult for users to test and
use groupware. This point is important because, for many groupware applications,
a particular user may need to assume different roles. For example, a user may wish
to simultaneously assume the roles of "professor" and "chairman" for a particular
groupware session. In a groupware system such as the first version of the Distributed
Conferencing System (DCS) [32], the only way to allow this was to change certain
UNIX environment variables. This is an extremely insecure method, since there is
no operating system access control over the modification of environment variables
in UNIX [2]. However, the constraints imposed upon the developers of DCS Version 1
unfortunately mandated this unsafe practice.

6. It may be difficult to share resources with users on other computer systems
without getting permission from the system administrations involved. For example,
two users subject to different system administrations who wish to share a file with
each other may find it impossible without using cumbersome methods (e.g., the File
Transfer Protocol, electronic mail) that are unsuited for real-time applications.

7. Foreign user accounts are often necessary to work around the previous
problem. This places a management burden on the system administration because it
has to manage users from a foreign environment. In addition, there is the very
serious difficulty of the system administration initially verifying the identity of these
foreign users, who are often not physically present on site.


1.3.3 Comments

The above problems result because most of the security paradigms in use are
outmoded. They are based on the assumption of a centralized access control
mechanism dating from the days when centralized time-sharing mainframes
dominated the field. This naturally resulted in centralized management of system
resources and in the implicit condition of location dependency for users and
resources. These conditions were not seen as problems because the security systems
were designed for single, stand-alone systems.

1.4 Dissertation Organization

This dissertation is organized as follows. Chapter two is a survey of the relevant
work in the field. Chapter three is a description of the distributed compartment
philosophy. Chapter four is a description of the formal security model which resulted
from the philosophy of chapter three. Chapter five is a discussion of the Distributed
Compartment Model, specifically concerning the similarities and differences between
the standard model and the Bell-LaPadula model, alternate models, implementation
details, examples of how the model can be used, and some issues regarding future
research. Chapter six concludes the dissertation. In addition, appendix A is a
description of some of the mathematics used in the model, and appendix B is a brief
overview of the Bell-LaPadula model.
















CHAPTER 2
SURVEY OF RELEVANT WORK

2.1 Introduction

This chapter surveys some of the work in the field of distributed system security
that is relevant to the research described in this dissertation.

I first consider an old method, the Access Matrix Model. Next I review the
Bell-LaPadula model, one of the most influential information flow models, along
with systems that are either based on it or have been heavily influenced by it: the
Military Message Experiment, Multics, the Kernelized Secure Operating System, and
finally Guard. Then I examine Denning's information flow model. I then consider
the Military Message System, an ambitious project in message-based security.
Andrew, a system currently in use at Carnegie Mellon University, is reviewed next.
Then I consider the ADMIRAL model, a server system based on a distributed
computing system. Next comes a review of IX, a disappointing experiment in
multilevel secure UNIX. I next review Amoeba, an object-based distributed operating
system. Finally, I conclude with reviews of other, miscellaneous work.

2.2 Access Matrix Model

2.2.1 Introduction

The access matrix model is based on an operating system view of security (as
opposed to the military-based models that follow). It was originally described by
Lampson [22] and further refined by Denning [11], Graham [17], and Harrison, Ruzzo,
and Ullman (commonly referred to as "HRU") [19]. The model is simple and general,
and is widely used [23]. It is still a topic of research, and many modifications of it
exist, such as Sandhu's typed access matrix model [36].

2.2.2 Description

The access matrix model has the following three components:

1. A set of passive objects: for example, files, devices, and other
operating-system-defined entities.

2. A set of subjects that may actively manipulate the objects. Subjects are
themselves composed of two things: a process and a domain. A domain is a set of
constraints determining how subjects may access objects. Subjects may also be
objects at particular times (i.e., passive and operated upon by other subjects).

3. A set of rules governing how the subjects may actively manipulate the passive
objects.

The access matrix is a two-dimensional array with each subject occupying a row
and each object occupying a column. Each row-column entry defines the access
rights between that subject and object. Access rights are such things as read, write,
append, etc. The access matrix defines the protection state of the system.

All accesses to objects by subjects are enforced by a reference monitor mechanism
that consults the access matrix and enforces it. Any improper access attempt should
be rejected by the reference monitor.
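The matrix and reference monitor just described can be sketched in a few lines. This illustration is not from the dissertation; the subjects, objects, and rights are hypothetical examples.

```python
# Minimal sketch of an access matrix mediated by a reference monitor.
# Subjects, objects, and rights below are hypothetical.

class ReferenceMonitor:
    def __init__(self):
        # The access matrix: rows are subjects, columns are objects.
        # Stored sparsely as a dict keyed by (subject, object).
        self.matrix = {}

    def grant(self, subject, obj, right):
        self.matrix.setdefault((subject, obj), set()).add(right)

    def check(self, subject, obj, right):
        # Every access is mediated here; a missing entry means no access.
        return right in self.matrix.get((subject, obj), set())

rm = ReferenceMonitor()
rm.grant("alice", "file1", "read")
rm.grant("alice", "file1", "write")
rm.grant("bob", "file1", "read")

print(rm.check("alice", "file1", "write"))  # True
print(rm.check("bob", "file1", "write"))    # False: improper access rejected
```

Note that the check function embodies the complete-mediation idea: no access proceeds except through it.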

2.2.3 Comments

Since the model does not specify what the particular access rules are, it is
extremely flexible, and because of that it has been applied quite widely [33]. However,
this very flexibility makes it quite difficult to verify the security of the system without
examining the entire matrix. In addition, strict implementations of the matrix
usually result in a very sparse matrix. Because of this, the model is usually
implemented as one of the following:

1. the capability list, where each subject is provided with a list of objects that
the subject may access, along with the access modes allowed for those objects;

2. the access control list, where each object is provided with a list of subjects
that may access that object, along with the access rights those subjects have for
that object;

3. combinations of the preceding two lists.
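These two list forms are simply the row and column projections of the same sparse matrix, as the following sketch shows (the names and rights are hypothetical, not drawn from the dissertation).

```python
# Sketch: deriving capability lists (per-subject rows) and access control
# lists (per-object columns) from one sparse access matrix.
# Subjects, objects, and rights are hypothetical.

matrix = {
    ("alice", "file1"): {"read", "write"},
    ("bob", "file1"): {"read"},
    ("alice", "printer"): {"append"},
}

def capability_list(subject):
    # Row projection: everything this subject may access, with modes.
    return {obj: rights for (s, obj), rights in matrix.items() if s == subject}

def access_control_list(obj):
    # Column projection: every subject that may access this object.
    return {s: rights for (s, o), rights in matrix.items() if o == obj}

print(sorted(capability_list("alice")))      # ['file1', 'printer']
print(sorted(access_control_list("file1")))  # ['alice', 'bob']
```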

This model is suited to a wide variety of applications and is quite prevalent in
many computer systems, but it does not correspond to common military security
requirements, which demand a multilevel security approach.

Another problem with this model is that it is typically implemented by allowing
the owner of an object (e.g., a file could be an object) to grant and revoke access to
that object for other users. This makes it even more difficult to manage the system.

2.3 The Bell-LaPadula Model

2.3.1 Introduction

The Bell-LaPadula model (BLP) [6] is one of the most influential models in the
design of computer security systems [24]. It is an example of an access control model
and is concerned with the rights of subjects and how they access objects within the
system. The model is an abstraction of a multilevel secure computer system and
does not concern itself with any of the applications on the (hypothetical) system.
The basic format of the model is to define a set of axioms and properties that, when
enforced, prevent applications from violating the security of the system.

The model is composed of subjects, which are entities that can initiate actions,
and objects, which are passive and are acted upon by subjects. Both of these have
security levels. The security level of a subject is said to be its "clearance," and that
of an object is said to be its "classification."

Two axioms are of particular relevance to the research described later in this
dissertation: the *-property (pronounced "star property"), which prohibits a subject
from "writing down" to an object that has a lower security level than the subject,
and the simple security rule, which does not allow a subject read access to an object
whose level the subject's clearance does not "dominate" (level i dominates level j if
i ≥ j). These two axioms may also be referred to as "no write down" and "no
read up."
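The dominance relation and the two axioms can be sketched as follows. This is a minimal illustration, not from the dissertation; it assumes linearly ordered levels (BLP levels are in general only partially ordered), and the level names are hypothetical examples.

```python
# Sketch of BLP's simple security rule ("no read up") and *-property
# ("no write down") over linearly ordered levels. Names are hypothetical.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def dominates(level_i, level_j):
    # Level i dominates level j if i >= j.
    return LEVELS[level_i] >= LEVELS[level_j]

def may_read(subject_clearance, object_classification):
    # Simple security rule: the subject's clearance must dominate
    # the object's classification.
    return dominates(subject_clearance, object_classification)

def may_write(subject_clearance, object_classification):
    # *-property: the object's classification must dominate the
    # subject's clearance, so information never flows downward.
    return dominates(object_classification, subject_clearance)

print(may_read("SECRET", "CONFIDENTIAL"))   # True
print(may_read("CONFIDENTIAL", "SECRET"))   # False: no read up
print(may_write("SECRET", "CONFIDENTIAL"))  # False: no write down
print(may_write("SECRET", "TOP SECRET"))    # True
```

Reading and writing at the subject's own level satisfy both rules, since dominance is reflexive.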

It has been noted that systems that strictly enforce BLP are impractical, due to
users' need to violate the *-property. A typical example is a user who needs to
extract a lower-level (e.g., UNCLASSIFIED) paragraph from a higher-level (e.g.,
CONFIDENTIAL) document and then use it in another lower-level document [24].
This does not violate our intuitive sense of security; it seems to be a permissible
thing to do from our human perspective, but it is strictly prohibited in BLP.

Because of this limitation, a special set of subjects called trusted subjects are

included in the model. These subjects are explicitly allowed to violate the *-property

because it is assumed that they will never violate the security of the system. Bell

and LaPadula state that these trusted subjects "are those subjects not constrained by

the *-property" [6, page 64]. They make the assumption that such trusted subjects

"must be shown not to consummate the undesirable transfer of high level information

that *-property constraints prevent untrusted subjects from making" [6, page 64].

Formally, a trusted subject can read an object at a particular security level and write











to a different object at a lower security level. Significantly, the model places no

restrictions on the behavior of such a trusted subject's violation of the *-property.

Systems have been designed to implement the trusted subject feature. For ex-

ample, KSOS [26] has trusted processes that are permitted to violate the *-property.

The problem is that such unconstrained processes make it very difficult to determine the actual security policy that is being administered. In effect, another layer of complexity is added, since the trusted subjects must themselves be verified to be secure.

In projects that have actually implemented BLP's axioms without trusted sub-

jects, it was found that the model was overly restrictive [24]. Because of this, trusted

subjects, in some form, have been present in a number of significant security projects.

A brief description of these projects now follows.

2.3.2 Military Message Experiment

The Military Message Experiment (MME) was an evaluation of the usefulness

of an interactive message system in an actual military operations environment [41].

The message system developed was called SIGMA and was used by a large number

of military personnel. Technically, MME was only a simulation of a secure system, because it ran on top of an insecure operating system (TENEX), but it was designed as if it were actually running on a security kernel. A security kernel is a small, theoretically tamper-proof system that is supposed to enforce a particular security policy.

Initially, it was decided that SIGMA would strictly adhere to BLP without trusted subjects. This decision contributed to several of the following problems:


1. The *-property created problems for some of the message system users who

needed to lower the security level of certain information. People make mistakes,

and a user might create a message at a high security level and afterward decide

that the message should have been at a lower level. Strict enforcement of the
*-property does not allow this "write-down" of information.


2. It was found that some of the messages in SIGMA were of a multilevel nature.

For example, a paragraph in a high security level message might, itself, be of

a lower security level. BLP makes no allowance for this, so the entire message

must be treated at the highest security level present in the message. Later on,

a user might wish to extract the paragraph, but be prevented because even

though the user has the original security level of the paragraph, the model

forces the paragraph to have the higher security level of the entire message.

3. The developers found that there was no provision in BLP for application de-

pendent security rules. In their particular case, they had a military security

rule where certain users could have release authority and could therefore invoke

the release operation. The details of the release operation are not important (it

was needed to certify that certain military organizations originated the message

being released). The important point was that such operations are not part of

the original Bell-LaPadula model and must be defined outside it.


To solve the first problem, the SIGMA developers used trusted processes. They

also noted that trusted processes helped to a certain extent with the third problem.

However, this made the exact security policy that SIGMA was enforcing difficult for

users to understand. This was not clearly understood initially and led to a serious

problem: SIGMA's designers required user confirmations of actions taken by the

trusted processes. They assumed this would add to the security of the system. It

turned out that by adding the trusted processes the security policy became muddled

in the minds of some users. Many users did not fully understand what the confirma-

tions were for, and just issued them automatically! Obviously, this is not acceptable,

and illustrates one of the problems with trusted subjects.

2.3.3 Air Force Data Services Center Multics

Multics is the operating system that inspired the development of UNIX. In the mid-1970s, the Air Force Data Services Center (AFDSC) Multics system was modified to include the Access Isolation Mechanism (AIM)

[24] and the resulting system is referred to as Multics-AIM. Multics-AIM enforces

BLP with "trusted functions." The trusted functions, when invoked, go through an

operating system "gate" that is designed to enforce access control on segments of

objects, to allow security officers to review user requests to downgrade the security

level of objects, and to provide other "privileged" operations.

While the system works and is considered a success, there are difficulties that

arise from its strict adherence to BLP. When users wish to transfer information to other users at different levels, they must log on and off repeatedly to change their security level. Receiving mail is a particular problem, since a user at a lower security level is not notified that he has received mail at a higher level until he logs on at the level at which the mail was sent. This resulted in a set of special trusted functions that system administrators were allowed to use to avoid the inconvenience of repeated logins.

2.3.4 Kernelized Secure Operating System

The Kernelized Secure Operating System (KSOS) [26] was a security kernel system

running a UNIX compatible interface. It was originally intended to strictly enforce the

axioms of BLP for user programs, but the developers realized that strict enforcement

was incompatible with its functional requirements in certain circumstances. For

example, a user was allowed to reduce the security level of a file that he owned.

This required special software to support such privileges, software that was outside the domain of the kernel and that violated the axioms of BLP.

2.3.5 Guard

Guard [42] is a system that allows a human operator to monitor and sanitize

the queries and responses between database systems operating at different security

levels. Whenever there is a potential violation of BLP, the human operator reviews

the situation interactively, and is allowed to violate BLP (at least in one version), by

allowing information from the system with the higher security level to be downgraded

and passed to the system with the lower level. In effect, the human operator becomes

the trusted process in this system.

2.3.6 Comments

It is clear that the axioms and properties of BLP are overly restrictive in "real

world" applications. This requires the use of trusted subjects that, unfortunately,

are not well defined in the model and therefore have the potential to become too permissive. With no formal specification of what a trusted subject can and cannot do, the developers of a system using the model are left on their own as to what is

allowable. These criticisms led to the development of such systems as the Military

Message System (a description of which follows later).

2.4 Other Information Flow Models

2.4.1 Introduction

While BLP is technically an information flow model, it occupies a special place in

the field of computer security due to its emphasis on access control. Other information

flow models are more general than BLP in that they are interested mainly in the flow

of information from one object to another. Access control models such as BLP are

more concerned with the exercise of security rights by subjects and how those rights

are applied to objects. Information flow models differ significantly from the access

matrix model in that they can even be applied to the variables in a computer program

(as opposed to the larger objects of the access matrix model).
Denning's Lattice Model of Secure Information Flow [10] is perhaps the best known example of an information flow model, although much other work has been done in this field, such as Cuppens' analysis of authorized and prohibited information flows [9]. Denning's model will be used as the example in the remainder of this section, although there are other lattice models, such as that of Wu, Fernandez, and Zhang [43].

2.4.2 Description

Denning's flow model is a simple lattice structure with five components:

1. a set of information objects (e.g., files);

2. a set of processes that are the active agents responsible for the flow of informa-

tion;

3. a set of security classes that are disjoint;

4. class combining operators that operate on information from two classes and

specify the resulting class of information generated;

5. the flow relation, which specifies whether information is allowed to flow from any one security class to another.
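The five components above can be sketched on a powerset lattice, where security classes are sets of category names, the flow relation is subset ordering, and the class-combining operator is set union (the least upper bound). This is my own illustration; all names are invented.

```python
# Illustrative sketch of Denning's lattice components on a powerset
# lattice. Security classes are frozensets of category names; the flow
# relation is subset ordering, and the class-combining operator is set
# union (the least upper bound). All names here are mine.

def may_flow(class_a: frozenset, class_b: frozenset) -> bool:
    """Flow relation: information may flow from A to B iff B dominates A."""
    return class_a <= class_b

def combine(class_a: frozenset, class_b: frozenset) -> frozenset:
    """Class-combining operator: the class of information derived from both."""
    return class_a | class_b

nato = frozenset({"NATO"})
nuclear = frozenset({"NUCLEAR"})
both = combine(nato, nuclear)     # information derived from both sources

assert may_flow(nato, both)       # flowing up to the combined class is legal
assert not may_flow(both, nato)   # flowing back down is not
```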

Denning also makes a distinction between static binding in which objects are

bound to a security class regardless of their contents, and dynamic binding where the

security class of an object can change depending on its contents. Denning maintains

that static binding is the most desirable for security reasons. For example, in static

binding, an object might be classified as SECRET even though it contains only lower-

level information. This is not the case in dynamic binding, where the security level

of the object would change to match the highest level of its contents (see the section

on IX later in this chapter for an example of a dynamic binding system).

2.4.3 Comments

The advantage of the information flow models is that, in theory, it is easier to

verify the security of a system than with other models. Flow proofs can be used to

demonstrate that a given set of security assertions hold in a program at a particular

place.

However, the flow models appear to me to be rather narrow in scope. Some form

of access control such as an access matrix is probably going to be needed in any "real

world" implementation. In addition, the programming language emphasis may not

be wide enough for all applications. The primary value of the flow models appears

to be in proving that covert storage channels [31] do not exist in the other models.

For example, the information contained in program return codes when a request is

denied is not addressed in BLP. Since the main applicability of the information flow

models seems to be at a lower level than the problem statement of this dissertation,

the information flow models do not solve the problem (nor even address the issues).

2.5 Military Message System

2.5.1 Introduction

The Military Message System (MMS) security model has the goal of defining an

"integrated security model that captures the security policy that a military message

system must enforce, without mentioning the techniques or mechanisms used to im-

plement the system or to enforce the policy" [24, page 205]. As can be seen from the

goal statement, it is a message based model.

One of the primary goals of this model is to allow users to understand the policies

of a message based security system. Other goals are to help the designers of future

military message systems and to facilitate certification of those systems.

2.5.2 Description

The distinguishing concept of MMS is that of a user role. A role is defined as:
The job a user is performing, such as downgrader, release, distributor and
so on. A user is always associated with at least one role at any instant,
and the user can change roles during a session. To act in a given role,
the user must be authorized for it. Some roles may be assumed by only
one user at a time (e.g., distributor). With each role comes the ability to
perform certain operations. [24, page 206]

In addition, the authors define the military idea of a security container that is

defined as:
A multilevel information structure. A container has a classification and
may contain objects (each with its own classification) and/or other con-
tainers. In most MMS family members, message files and messages are
containers. Some fields of a message (such as the Text field) may be
containers as well. [24, page 206]

It should be noted that devices are also containers. Containers are distinguished

from objects in that objects are atomic: they are single-level units of information.

Therefore an object may not contain another object, and can not be multilevel.

Objects can be as simple as a particular field in a message (the example given in [24]

is the date-time group of a message).

There are many other definitions present in the model, most of them correspond-

ing to well-known ideas in the areas of multilevel security (e.g., clearance, user, op-

eration, ID, etc.). The interested reader is referred to Landwehr, Heitmeyer, and

McLean [24] for more details on this.

The importance of all of this from my perspective regarding this dissertation is

in the area of how a user views the operation of MMS. Users gain access through

logging in by providing a UserID and passing some form of system authentication.

They may then perform operations depending on the particular roles for which the

user is authorized (usually viewing or modifying objects or containers). Needless to

say, the system enforces the security model (the authors are vague as to how this

is actually to be done, instead assuming that the model will somehow be correctly

implemented). Significantly, no provision is made for the auditing of users. This

is a deliberate omission by the authors; they did not forget about it, but chose to set the issue aside after making cursory note of it.

In order to avoid the problems with having vaguely defined trusted subjects, the

model makes the following assumptions.


1. There exists a System Security Officer (SSO) who manages the initial clearances

and classifications, and sets the user roles appropriately.

2. Users must then enter the correct classification of the object or container when

they are accessing the information (i.e., creating, changing the contents, or

reclassifying).

3. A user may define access sets within a particular classification, which is a set of

triples composed of users or roles, operations, and operands that are associated

with entities. This results in the ability to create a "need-to-know" capability

for other users.

4. The user must properly control information extracted from special containers

that have the property of Container Clearance Required (CCR). Containers with

this property are restricted to users with the appropriate security clearance.

Essentially, CCR requires that a user must have at least the maximum clearance

required for any member of the container.
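The access sets of assumption 3 can be sketched as a lookup over (user-or-role, operation, operand) triples attached to an entity. The structure and names below are my own illustration, not the authors' formalism.

```python
# Hypothetical sketch of an MMS-style access set: each entity carries a
# set of (user_or_role, operation, operand) triples, and a request is
# granted only if the requesting user, or one of the user's current
# roles, appears in a matching triple. All names are invented.

AccessTriple = tuple[str, str, str]   # (user_or_role, operation, operand)

def permitted(access_set: set[AccessTriple], user: str, roles: set[str],
              operation: str, operand: str) -> bool:
    subjects = {user} | roles
    return any((s, operation, operand) in access_set for s in subjects)

# A "need-to-know" access set for one entity (names invented).
entity_acl = {("alice", "view", "message-42"),
              ("releaser", "release", "message-42")}

assert permitted(entity_acl, "alice", set(), "view", "message-42")
assert permitted(entity_acl, "bob", {"releaser"}, "release", "message-42")
assert not permitted(entity_acl, "bob", set(), "view", "message-42")
```

Note how the role mechanism enters the check: bob cannot release the message as himself, but can while acting in the releaser role.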


The rationale behind these rules is that, when there is no other source of information about the security of entities, the user is assumed to provide that information correctly. If

this were all there was to the model, it would be obviously unacceptable since users

would probably run riot through a system, transferring classified material to lower

levels. However, this is prevented by the use of user roles. For example, a user can only perform an operation on an entity if the user or his current role appears in the entity's access set with the appropriate operations and operands. Another restriction

is that there is a special role of downgrader without which a user can not downgrade

the security level of any entity. Messages can not be released unless the user has the

role of release. There are other assertions similar to these that the authors assume

will make the model effective.

2.5.3 Comments

The MMS model has been used almost without change by the messaging project

Diamond [14] which is a distributed multimedia document project, and reportedly

has been adapted for use in document preparation and bibliographic systems [1].

I am extremely impressed with the work done on this system. However, while this

model appears good within its context, it does not address all the problems noted in

the problem statement in chapter one. Some specific problems not addressed follow.


1. There is no location transparency. This was not even an issue in the design of

MMS.

2. The exact organization of containers is deliberately left vague. Presumably this

is so that application developers will not be hampered with rigid definitions.

However, this seems to raise the same objections that trusted subjects cause

with the Bell-LaPadula model (i.e., the organization of the containers may be

overly permissive).

3. Only a user with the role of a System Security Officer can set the roles and

clearances of users. This is not a drawback in the context of the MMS model,

but it is possible that this may place a burden on the System Security Officers,

causing bureaucratic problems and delays.

4. The model does not specify how users are initially created (presumably by some

centralized system administration, although that is just speculation by me).


2.6 Andrew

2.6.1 Introduction

Andrew is a distributed computing environment that is a joint project of Carnegie

Mellon University (CMU) and IBM Corporation [37]. It has been under development

since 1983, and is expected to eventually encompass over 5,000 workstations at the

CMU campus. The development of security mechanisms for Andrew has been a major

issue in its development. Its major use is as an information sharing system that uses

a distributed file system as its mechanism.

Andrew is no longer an experimental system, having actually been implemented.

It continues to evolve and grow. At the point of its inauguration in late 1986 there

were over 400 workstations serving about 1,200 active users, with a file system storing

15 gigabytes on 15 servers. Due to the large scale of the system, the developers

realized that the typical academic laissez-faire approach to security would not be

practical.

Andrew is considered mature and robust and is in regular use at CMU.

2.6.2 Description

Andrew is composed of two components. Virtue is a set of workstations, while

Vice is a collection of servers and local area networks. Each Virtue workstation runs

the UNIX 4.3 BSD operating system. In Virtue, a distributed file system that spans

all the workstations is the primary data-sharing mechanism. The distributed file

system appears as a single large subtree of the local file system. A process called Venus, which runs on each workstation, manages access to files in this shared name space by finding the files on the individual Vice servers, caching them locally, and then emulating UNIX file system semantics.

There are many levels of security in Andrew that are not within the scope of

this dissertation (such as the details of how the Vice servers are physically secured).

What is of interest is how the system is managed in terms of group work.

It is common in Andrew that a group of workstations be used by the same pool

of users (e.g., located in the same laboratory). It becomes the joint responsibility

of those users to ensure the integrity of the security of those workstations (both

hardware and software). The developers accept this because there are only a few files

stored locally on the workstations (for initialization purposes mainly).

The protection domain in Andrew is composed of Users and Groups. A user is

an entity that can authenticate itself to Vice (among other things). A group is a

set of other groups and users associated with a user called the Owner. The name

of the owner becomes the prefix for the group owned. There is a special user called

"System" which is omnipotent, corresponding to the UNIX superuser.

Membership in a group can be inherited and the "IsAMemberOf" relation holds

between a user or group X and a group G if and only if X is a member of G. The

reflexive, transitive closure of this relation for X is a subset of protection domains

called its Current Protection Subdomain (CPS), the set of all groups that X is a member of, directly or indirectly. The result is that a member of a group inherits

all the privileges of any ancestral groups. This was done to simplify management of

privileges due to the scale of Andrew.
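The CPS can be sketched as a graph closure over the IsAMemberOf relation; the following is a minimal illustration of my own, with invented names.

```python
# Sketch of Andrew's Current Protection Subdomain (CPS): the reflexive,
# transitive closure of IsAMemberOf for an entity X, i.e. X itself plus
# every group X belongs to directly or indirectly. Names are invented.

def cps(entity: str, is_member_of: dict[str, set[str]]) -> set[str]:
    closure, frontier = {entity}, [entity]
    while frontier:
        current = frontier.pop()
        for group in is_member_of.get(current, set()):
            if group not in closure:
                closure.add(group)
                frontier.append(group)
    return closure

# alice belongs to owner:staff, which is itself a member of owner:all, so
# alice inherits the privileges of both groups.
membership = {"alice": {"owner:staff"}, "owner:staff": {"owner:all"}}
assert cps("alice", membership) == {"alice", "owner:staff", "owner:all"}
```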

Additionally, "group accounts" and "project accounts" are collective accounts

shared by multiple users. The reasons given for these are significant:

1. obtaining an individual entry for each human user may involve excessive ad-

ministrative overheads;

2. the identities of users who are collaborating in a group may not be known a priori;

3. the protection mechanisms of Andrew make it easier to create just one "pseu-

douser" and specify the protection policies for that pseudouser than for multiple

users.

Finally, since the inauguration of Andrew, it was found necessary to enhance its

functionality by adding support for multiple Cells. A cell is essentially a "completely

autonomous Andrew system with its own protection domain, authentication, file

servers, and system administrators" [37, page 277]. Cells are noted as complicating

the security mechanisms of Andrew.

2.6.3 Comments

I feel that the inheritance mechanism of Andrew is inappropriate. I understand

why the designers incorporated it: to make administration easier. However, it seems

potentially dangerous. If the privileges of a higher-level group change, all the users underneath it will inherit the changes. This could be disastrous in the event of a management mistake or a security attack.

I also feel that the need for pseudousers underscores the problems with distributed

system security that I mentioned earlier in the problem statement in chapter one,

regarding the difficulty with system administrations. Although the designers are

strongly opposed to pseudousers, they are forced to permit these collective entities.

Significantly, Satyanarayanan notes that collective pseudousers are created because

of the bureaucratic difficulties with creating groups:

We conjecture that [the creation of collective entities] is primarily because
the addition of a new user is cumbersome at present. In addition, groups
can only be created and modified by system administrators. [37, page
253]


The designers seek to remedy this situation by implementing a protection server

that will allow users to create and manipulate groups themselves, instead of hav-

ing to rely on system administrators. I can well understand the need for this (the

Distributed Compartment Model described in this dissertation contains something

similar) but I feel that this is potentially very dangerous when combined with inher-

itance. What is to prevent a user who has acquired great privileges from creating

groups and adding users? It seems like a serious mistake. Time will tell.

2.7 The ADMIRAL Model

2.7.1 Introduction

Project ADMIRAL is a collaborative project carrying out research into the use and

management of high performance networks. Stepney and Lord [39] have developed

a formal model for an access control system for ADMIRAL that allows computing

facilities from different system administrations to communicate with each other. Ad-

ministrators are allowed to retain control of their own subnets. Although this model

is not necessarily specific to project ADMIRAL, to avoid confusion I will refer to it

as "the ADMIRAL model."

The basic idea behind the ADMIRAL model is that users can log in to a dis-

tributed computing system and make service requests to any part of the system

without having to identify themselves further. All access control decisions are han-

dled automatically after the initial logging in procedure and are transparent to the

user.

The design of the ADMIRAL model was started in the mid 1980's at GEC Re-

search, at the Marconi Research Centre in the United Kingdom.

2.7.2 Description

The ADMIRAL model is most concerned with the frustration that users and

administrators face in a network composed of several "autonomous access control

systems" that have to interact. Users must repeatedly log in and out, and adminis-

trators have to maintain additional access control information.

A system based on the ADMIRAL model will have the following properties.


1. Autonomous administrations are supposed to be able to work with each other,

but they still retain control over their own facilities.

2. A user's access to specific services can be controlled, even if the user and the

service are under different administrations. This control is supposed to be

transparent to users, unless they try to access restricted services not available

for their use.

3. An administrator can make use of another administrator's facilities, as long as

they both agree in advance.

4. Multiple levels of security are available to users and administrations. Users and

administrators can insist on particular levels for particular operations.

The ADMIRAL model is based on a client-server model. Principals make requests of clients, which handle them by passing them on to a server. Authorities provide the control

over the privileges of the principals (e.g., "JOHN has READ access to FILEx").

Authorities act as intermediaries between the rights of the principals to access a

server. Authorities communicate with each other and trust statements that other

authorities make. Statements consist of a record of the issuing authority, a principal,

a server, and the requests the principal has permission to make of the server.

An entire client-server cycle is known as a transaction and takes place in the

following steps.


1. A principal makes a request for some service via a client.

2. The client makes the request on the principal's behalf to the Client's Local

Authority (CLA). The CLA holds cached statements about the principal and

server that are obtained from its own store.

3. If the statements needed are not cached, the CLA contacts other trusted au-

thorities via the network.

4. The request is passed on to the Server's Local Authority (SLA) which then

checks the access rights using its cached statements that are obtained from its

own store.

5. If the statements needed are not cached, the SLA contacts other trusted au-

thorities via the network to obtain the statements.

6. If all the access conditions have been met, the request is passed on to the server

for processing.

7. Further exchange of data may occur at this point after processing.
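The seven-step transaction can be sketched as a pair of authority checks backed by statement caches. This is my own schematic, with invented names, and it elides the issuing-authority field of a statement.

```python
# Schematic of an ADMIRAL-style transaction. Each authority caches
# "statements" (reduced here to a (principal, server, request) triple)
# and falls back to other trusted authorities, modeled as a local store,
# on a cache miss. All names are invented.

Statement = tuple[str, str, str]      # (principal, server, request)

class Authority:
    def __init__(self, trusted_statements: set[Statement]):
        self.cache: set[Statement] = set()
        self.trusted_statements = trusted_statements

    def authorizes(self, stmt: Statement) -> bool:
        if stmt in self.cache:                 # steps 2 and 4: cached?
            return True
        if stmt in self.trusted_statements:    # steps 3 and 5: ask the network
            self.cache.add(stmt)
            return True
        return False

def transaction(cla: Authority, sla: Authority, stmt: Statement) -> str:
    # Step 6: the request reaches the server only if both authorities agree.
    return "processed" if cla.authorizes(stmt) and sla.authorizes(stmt) else "denied"

stmt = ("JOHN", "FILEx", "READ")
cla, sla = Authority({stmt}), Authority({stmt})
assert transaction(cla, sla, stmt) == "processed"
assert transaction(cla, sla, ("JOHN", "FILEx", "WRITE")) == "denied"
```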

From this it can be observed that the ADMIRAL model is a type of remote pro-

cedure call system. Obviously, if the principal does not have the proper permissions,

the request should be denied.

2.7.3 Comments

The ADMIRAL model is a simple system. The main problem I see with it is how

they can implement "Trust." The authors note that this caused a lot of problems

when they attempted to implement the model. I suspect that this is the reason that

it is not yet fully implemented. It was impossible for them to formalize exactly what

"Trust" is. For example, they had assumed that Trust had to be transitive

(if A trusts B, and B trusts C, then A trusts C). When they formalized this, they

realized "that it would be very easy for all authorities to end up trusting all the

others, making the concept of Trust useless" [39, page 592]. So they removed that

property. The authors are vague on how they finally solved this problem (I suspect

they really haven't completely solved it, but are still experimenting with different

methods).
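The collapse the authors observed can be seen in a toy computation of transitive trust: closing even a short chain of pairwise trust links makes authorities trust others they never explicitly agreed to trust. The sketch below is my own illustration.

```python
# Toy computation showing why transitive Trust collapses: taking the
# transitive closure of a short chain of pairwise trust links makes
# authorities trust others they never explicitly agreed to trust.

def transitive_closure(trusts: set[tuple[str, str]]) -> set[tuple[str, str]]:
    closure = set(trusts)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

links = {("A", "B"), ("B", "C"), ("C", "D")}
closed = transitive_closure(links)
assert ("A", "D") in closed   # A now trusts D, with no explicit agreement
```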

2.8 IX

2.8.1 Introduction

I debated whether or not to include a section on IX in this dissertation. On the

one hand, it is an actual implementation of a multilevel secure UNIX-like system

by AT&T Bell Laboratories [27]. On the other hand, it was a failure in the "real

world" for reasons that the authors don't detail clearly (but which I can guess). If

the authors had detailed the reasons why IX failed more clearly, I would consider it

a more valuable experiment. When I finished debating with myself, I decided to give

an overview of IX and why I think it failed.

2.8.2 Description

IX was designed and built at AT&T Bell Labs as an experimental multilevel

secure version of the UNIX operating system. It supports document classification

with mandatory access control, where classified input must yield classified output.

The IX model differs from BLP primarily by violating the *-property (although the authors do not state this explicitly).

Every entity in IX has a label associated with it that describes its security clas-

sification. Users can have access only to entities they are cleared for. However, the

authors of IX chose to use dynamic binding: the "labels of processes or files may

adjust automatically during computation to guarantee that outputs are classified at

least as high as the inputs from which they derive" [27, page 673]. Data transfers are

allowed only in the direction of increasing labels.

IX has many other features, but for me, the concept of labels is the most im-

portant. IX tracks data flows by the use of labels. Every exchange of data must be

labeled. However, because IX uses dynamic binding, labels must be checked every

time a data transfer takes place, not just at the beginning of a transaction. This

takes a lot of processing time.
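Dynamic binding can be sketched as a label join performed on every transfer. The names below are invented, and labels are reduced to integers for illustration.

```python
# Sketch of IX-style dynamic binding: on every data transfer, the
# destination's label floats upward to the maximum of the two labels, so
# outputs are classified at least as high as their inputs. Labels are
# modeled as integers; names are invented.

def transfer(labels: dict[str, int], src: str, dst: str) -> None:
    """Move data from src to dst, raising dst's label if necessary."""
    labels[dst] = max(labels[dst], labels[src])

labels = {"secret_file": 2, "scratch_file": 0}
transfer(labels, "secret_file", "scratch_file")
assert labels["scratch_file"] == 2   # the low-level file is now "contaminated"
```

The same sketch shows the "label creep" problem discussed in the next subsection: one accidental transfer permanently raises the destination's label.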

2.8.3 Comments

According to the authors, IX has not "stood the test of abuse outside the labora-

tory" [27, page 691]. I believe this was because of a problem with dynamic binding systems called "label creep," which occurs when high-level data are accidentally placed in low-level files. This "contaminates" the file with no hope of

revocation. Obviously, a user would be reluctant to use a system where he might lose

all his data by a simple mistake.

2.9 Amoeba

2.9.1 Introduction

Amoeba is an object-based distributed operating system developed at the Free University and the Centre for Mathematics and Computer Science in Amsterdam

[28]. The authors report that Amoeba is one of the fastest distributed systems of

which they have knowledge. It is a long term project, lasting over 10 years.

2.9.2 Description

Amoeba consists of four hardware components: a processor pool, workstations, servers (all connected through a local area network), and a gateway. The idea

behind the processor pool is that the number of processors should exceed the number

of users, providing increased performance (a user can be allocated more than one

processor for parallel computations) and fault tolerance. Workstations are used only

as user interfaces (i.e., they only execute processes that manage the user interface).

The servers are traditional file servers, print servers, etc. The gateways are to other

Amoeba systems that can be accessed over a wide area network.

The Amoeba software is object-based, and is a client-server type system. Every

object is protected by a "capability" which is "the set of operations that the holder

may carry out on the object" [28, pages 45-46].

Amoeba's model makes heavy use of remote procedure calls and threads (light-

weight processes). There is a special server dedicated to finding the location of

objects.

Security in Amoeba is a variation of the commonly criticized "security through

obscurity" method [15]. A client request is addressed to a particular server's port.

The authors say that "knowledge of a port is taken by the system as prima facie evidence that the sender has a right to communicate with the service" [28, page 48].

This is not as bad as it could be, since a simple cryptographic system is used to make

it difficult to determine the proper port number for various services.
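One plausible way to harden this scheme, sketched here in the spirit of Amoeba's approach, is to derive the publicly addressed port from a private one with a one-way function. The use of SHA-256 and the port widths below are my own choices for illustration, not details of Amoeba itself.

```python
# Hypothetical sketch of hardening "security through obscurity" with a
# one-way function: a server listens on a private port, clients address a
# public port derived from it, and inverting the derivation is
# computationally hard. SHA-256 and the 6-byte width are my own choices.

import hashlib

def public_port(private_port: bytes) -> bytes:
    """One-way function: deriving the public port from the private one is
    easy, but recovering the private port requires inverting the hash."""
    return hashlib.sha256(private_port).digest()[:6]

private = b"server-secret-port"
public = public_port(private)

# Knowledge of the correct port is then taken as evidence of the right
# to communicate with the service.
assert public_port(private) == public
assert public_port(b"guess") != public
```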

2.9.3 Comments

Amoeba seems more concerned with the protection of objects than with their

management, and this seems to be a serious drawback from the perspective of the

problem statement of this dissertation. For example, the authors state "a system

manager or project coordinator cannot hand out capabilities explicitly to every user

who may access a shared public object" [28, page 49]. On the other hand, a user can

do this with his own objects. Essentially, the designers of Amoeba have developed a traditional resource management and access control policy on a distributed system. This is a great accomplishment, but it does not change the underlying SACM.

2.10 Other Work

I reviewed other work that had little relevance to this dissertation, but nonetheless

deserves some passing mention.

Benson, Akyildiz, and Appelbe [7] are concerned with the differences between

sequential security models and concurrent security models and developed a security

model they termed the "Centralized-Parallel-Distributed model" [7, page 183]. Their

main emphasis is on concurrent systems.

Glasgow and MacEwen [16] describe their work on a multilevel secure system

called Snet. Their main interest is in information flow, and a specification language

called "Lucid" which they use to specify distributed systems and prove that formal

model components are consistent. They claim that Lucid is intermediary between a

formal model and an actual system implementation of a multilevel secure system.

Hanushevsky [18] describes the IEEE Mass Storage Reference Model's security.

The model is concerned with protection components, which consist of authentication,










authorization, enforcement, and auditing. While there was some discussion of the

problems of authorization in a varied name-space, this paper did not seem particularly

relevant to the topic of this dissertation.

2.11 Conclusions

The work reviewed ranged from spectacular successes to dismal failures. However,

my primary interests were the lessons learned

from both the successes and the failures. They confirmed the items mentioned in

the problem statement in chapter one, and illustrated the need for a new paradigm,

which would address the problems of these "traditional" security systems.

The next chapter is about my solution to the problem statement, and some of the

philosophical considerations that were involved in creating that solution.















CHAPTER 3
THE DISTRIBUTED COMPARTMENT MODEL PHILOSOPHY

3.1 Introduction

The philosophical justifications and a narrative description of the Distributed

Compartment Model are presented here first, before the actual formal security policy

model in the chapter that follows. This is done as an aid to understanding the formal

security model in the following chapter. The development of the formal security

policy model was driven by the following philosophy, so it is only natural to present

these ideas first, in an informal way. The reader interested in the development of

formal security policy models in general is referred to the National Computer Security

Center's overview of the modeling process [30]. The philosophy that follows can be

thought of as an organizational security policy.

I would like to emphasize that the philosophy behind the solution to the problem

statement in chapter one was not motivated by the traditional view of a multilevel

secure, categorized system, such as that presented in the famous "Orange Book" of

the Department of Defense [12]. I believe that this traditional view of "security"

can be a stumbling block to some people who try to understand the philosophy

behind the Distributed Compartment Model.

The solution to the problem statement of chapter one has two parts. The first

is Distributed Handles, a means for user identification and access control. The sec-

ond is Distributed Compartments, a method for allowing users to manage resources

within a distributed system across computer system boundaries with a measure of

independence from any system administrations. It will be seen that the Distributed










Compartment Model violates BLP insofar as the *-property is violated. The simple

security rule is retained.

3.2 Distributed Handles

The central concept of distributed handles is that for groupware applications

userIDs should be eliminated as a means of identification and access control.

The proposed solution is the concept of a distributed handle, that the groupware

application uses as an identifier for users. A user joining a groupware session is

queried for a handle that is unique to that application (later we shall see how this

uniqueness is guaranteed), and is then verified by a groupware security manager.

This keeps user access to that application as separate as possible from the operating

system. Verification can be an entirely independent operation (e.g., user knowledge,

physical attributes, possession of security objects) using a method such as Kerberos

[38], [15]. Passwords will probably be the most common authentication method.
Under this method, an individual user would first need to gain access to a par-

ticular computer system in the distributed system through SACM by having a valid

userID and password. The user would then need a valid distributed handle and would

then need to be validated by the groupware's access control security in order to be

allowed access to the application.
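As an informal illustration of this two-stage sequence (not part of the model itself; every identifier and credential below is invented), the SACM login and the groupware's independent handle verification can be sketched as follows:

```python
# Hypothetical credential stores for the two independent stages.
SACM_ACCOUNTS = {"sjg": "os-password"}           # host userID -> password
GROUPWARE_HANDLES = {"Referee 2": "app-secret"}  # handle -> verifier

def sacm_login(user_id: str, password: str) -> bool:
    """Stage 1: conventional operating-system (SACM) authentication."""
    return SACM_ACCOUNTS.get(user_id) == password

def groupware_admit(handle: str, proof: str) -> bool:
    """Stage 2: handle verification, kept separate from the OS account."""
    return GROUPWARE_HANDLES.get(handle) == proof

def join_session(user_id: str, password: str, handle: str, proof: str) -> bool:
    # Both stages must succeed; neither stage consults the other's secrets.
    return sacm_login(user_id, password) and groupware_admit(handle, proof)
```

The point of the sketch is only the separation: the groupware security manager validates the handle without reference to the userID that the SACM accepted.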

This approach has several advantages that follow.


1. Security dependencies are reduced. Security is not entirely dependent on an

operating system, system administration, bureaucrats, etc. As things stand

now, even one lax system administration can disrupt security in groupware

applications. Some examples of these security policy violations follow.











Users can be permitted to share accounts with one another. Under this

situation it is impossible to verify the identity of the actual user by the

use of userIDs.

Users who have no authorization to use a particular groupware application

may be given permission to access the files used.

Users who need access to a particular application, yet lack the permission

to access the application, may have to wait a long time for the (usually

overworked) system administration to grant the proper permission.

However, it should be noted that it is impossible to protect against poor oper-

ating system security. For example, a distributed operating system could allow

the interception of keystrokes from a workstation over a computer network.

This could not be prevented by this model.

2. Handles can be more descriptive than userIDs. For example, a userID of "sjg"

does not convey much information. With handles there could be a more de-

scriptive name such as "Steve," "Greenwald," "Third Programmer," "Referee

2," etc.

3. Multiple handles can be permitted for the same user. The advantages of allow-

ing this follow.

The testing of groupware applications becomes much easier. One user can

easily simulate many users by having several handles.

Anonymity is possible.

Multiple roles for individual users becomes possible. Different handles can

be used for different user roles. A user needing to change roles just needs

to use the appropriate distributed handle.











Intervention by the system administration is limited. The system admin-

istration does not have to be concerned with creating multiple accounts

for the same user.


4. Binding of users to roles becomes possible since more than one user may share

the same distributed handle. This allows multiple users to share the same role.

5. Security mechanisms can be implemented relatively independently of any un-

derlying operating system. Keeping security matters within the particular

groupware application facilitates development of whatever higher-level oper-

ating system paradigm is wanted, independent of the actual operating system.

For example, a new operating system could be built while using an old one,

incorporating separate user accounts. Testing of experimental security mecha-

nisms becomes easier.

6. Management of handles can be made part of the groupware application, al-

lowing different security methods to be implemented. For example, security

can be partitioned in a hierarchical manner with different people maintaining

the handles of their own compartments (this idea will be discussed later in the

distributed compartment section).


One area specifically not covered is access control. There are a variety of access

control methods available (e.g., passwords, physical attributes, possession of objects).

Specific access control methods were not an area of research for me, since I feel that

this is another problem entirely, and is separate from the Distributed Compartment

Model. However, one important point is that the system administration will not be

responsible for the access control of distributed handles. That will be the responsi-

bility of the particular groupware application. This frees the system administration











from the burden of managing the handles, and frees the groupware managers from

the necessity of having to access the system administration every time maintenance

is needed for access control.

Distributed Compartments, a description of which follows, is the designated plat-

form for the access control and administration of distributed handles.

3.3 Distributed Compartments

A distributed compartment (also called a discom) is a logical group of objects that

is not restricted to a single physical computer system. Objects consist of such things

as files, hardware devices, programs, users, and subdiscoms [6], [33], [10]. A discom is

conceptually similar to a standard hierarchical directory structure, however, it does

not necessarily reside on a single computer system. The users of discoms gain access

via distributed handles.

A root discom is called an empire discom. An empire discom must have a unique

identifying name in the particular name-space domain being used (more on this later).

Discoms have users called subjects. Each discom must have at least one sub-

ject called a governor. Governors have the maximum privileges for the discom they

govern. Other subjects may have lesser privileges.

The privileges of a discom consist of at least 24 operations called the initial priv-

ileges:


1. create a new object;

2. destroy an existing object;

3. modify an existing object by adding or removing resources from it;


4. merge two existing objects into a single object;











5. split an existing object into two objects;

6. create a child discom;

7. destroy a child discom;

8. merge two child discoms into a single child discom;

9. split a child discom into two child discoms;

10. destroy an existing empire;

11. merge two existing empires into a single empire;

12. split an existing empire into two empires;

13. create a new subject;

14. destroy an existing subject;

15. create a new privilege;

16. destroy a non-initial privilege;

17. create a governor from an existing subject;

18. rescind a governorship by converting the governor to a non-governor subject;

19. add a resource to the resource pool of a discom;

20. remove a resource from the resource pool of a discom;

21. grant a privilege to a subject;

22. rescind a privilege from a subject;











23. make a subject a member of a child discom;

24. remove a subject as a member of a child discom.


This combination of subjects, objects, and privileges, makes it possible to create

a system similar to an access control matrix in the Distributed Compartment Model.
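The access-control-matrix flavor of this combination can be pictured with a small sketch (the names and the two sampled privileges are illustrative, not the model's notation): the action set is a set of (subject, privilege, object) triples, and a request succeeds only if its triple is present.

```python
# Two of the 24 initial privileges, numbered as in the list above.
CREATE_OBJECT, DESTROY_OBJECT = 1, 2

# Action set: which subject may apply which privilege to which object.
action_set = {
    ("governor", CREATE_OBJECT, "report.txt"),
    ("governor", DESTROY_OBJECT, "report.txt"),
    ("member", CREATE_OBJECT, "report.txt"),
}

def may(subject: str, privilege: int, obj: str) -> bool:
    """A request is allowed iff its triple appears in the action set."""
    return (subject, privilege, obj) in action_set
```

Reading the triples row-by-subject recovers the familiar access control matrix: each subject's row lists the (privilege, object) pairs it holds.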

The Distributed Compartment Model has a set of secure state invariants, consist-

ing of axioms and properties which formalize it. Their informal definitions follow.


Genesis Axiom This simply states that the initial state of the system is secure.

Divine Right Axiom A subject can create an empire discom only if given that privi-

lege by the administrators of the system administrations involved. This is the

only area where the system administration need be involved in the management

of discoms. The rationale for this is that the system administration is ultimately

responsible for the use of its system. It should be given the right to restrict the

creation of empire discoms.

Temporal Axiom A subject may only access an object with the same time index as

the subject. The rationale for this is to prohibit "time travel" so that (for

example) a subject at the current time is not allowed to modify an object in

the past.

Usage Property If a subject is currently accessing an object, it either accessed the

object before the present time, or it requested access of the object before the

present time. The rationale for this is to prevent subjects from accessing objects

without requesting that access through the confines of the model.











Creator Property The creator of a discom automatically becomes a governor of that

discom. The rationale for this is that if this property was not present, then it

would be possible to create discoms that were inaccessible to everyone.

Government Property The governor of a discom may grant and revoke privileges to

non-governor subjects of that discom. The rationale for this is that someone

has to grant privileges or nothing will get done. The purpose of the model is

to eliminate, as much as possible, involvement by the system administration.

If all subjects had the power to grant privileges, then anarchy would be the

result. Therefore only governors or those subjects they give this privilege to

may have this special status.

Cordon Property Discoms may never intersect with other discoms. The rationale for

this is that if discoms intersected with one another, then information could flow

between them in unrestricted ways, possibly violating the Nova property and

the Ceiling property (defined later).

Nova Property A non-governor subject may only access a descendant discom if made

a member of that discom by a governor of an ancestor discom (conditional down

access). This is a clear violation of the BLP *-property ("no write down"), hence

the name (a nova is an exploding star). The rationale for this is that governors

control their resources and may allocate them as they wish.

Demesne Property The governor of a discom always has unrestricted access to de-

scendant discoms. The rationale for this is that a governor, by definition,

"owns" the resources of the discom. Since any descendant discoms that exist

are part of the resources of a discom, the governor should not be prohibited

from any access to them. In addition, if a governor could not always access











descendant discoms, completely autonomous discoms could result, creating a

potential need for the intervention of the system administrations involved, if

the governor ever needed access again after the autonomous discoms prohibited

access to him.

Ceiling Property A subject may not access an ancestor discom without being a sub-

ject of that discom. This is not just a restatement of the BLP simple security

rule ("no read up"). The Ceiling property does not allow any access at all of an

ancestor discom without membership. The rationale for this is that the gover-

nors of the ancestor discoms are allowed to manage their resources as they see

fit. They may not wish even write-only access from descendant discoms since

that might use up resources (e.g., disk space) and cause (for example) a denial

of service problem, or a covert storage channel [31].

Sanitation Property Resources that are unused must contain the sanitized value for

their type. The rationale for this is to prevent information from being inadver-

tently disclosed through the reuse of resources.
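Two of these invariants lend themselves to a mechanical check, sketched here informally (the data layout is assumed for illustration): the Cordon property requires that distinct discoms share no resources, and the Sanitation property requires that every unused resource hold the sanitized value for its type.

```python
SANITIZED = None  # stand-in for the sanitized value of a resource type

def cordon_holds(discom_resources: dict) -> bool:
    """Cordon property: no resource appears in more than one discom's pool."""
    seen = set()
    for pool in discom_resources.values():
        if seen & pool:          # overlap with an earlier discom's pool
            return False
        seen |= pool
    return True

def sanitation_holds(resource_values: dict, in_use: set) -> bool:
    """Sanitation property: every resource not in use is sanitized."""
    return all(value == SANITIZED
               for resource, value in resource_values.items()
               if resource not in in_use)
```

A real enforcement mechanism would run such checks on every state transition; here they only make the two properties concrete.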


It can be seen from the above description that a distributed compartment is

actually a groupware application, with access to the discoms by distributed handles.

Once again, management of the distributed compartments is not done by the local

system administration, but ultimately by the individual users who are governors of

empire discoms (by the Divine Right axiom).

3.4 Conclusions

The combination of distributed handles and distributed compartments is a rea-

sonable solution to the problem statement of chapter one. Combining these two ideas

reveals the following areas of concern.











1. How should system resources be allocated within the discoms? It should be

possible to implement a system where the resources of a discom are restricted

according to a subject's privileges within that discom (with something analo-

gous to an access control matrix). For example, limiting the CPU time of a

discom member might prove to be a very valuable thing in a real-world appli-

cation.

2. What is the best way to manage the distribution of the resources of the discoms

over a distributed computer system?

3. What is the best way to manage the name-space that will occur with this

system? Since each discom is essentially a separate groupware application, it

would be desirable if each discom had its own name-space. This would allow

unique distributed handles within each discom.


These concerns motivated some of the details present in the Distributed Com-

partment Model in the following chapter.
















CHAPTER 4
THE DISTRIBUTED COMPARTMENT MODEL

4.1 Introduction

There is actually more than one Distributed Compartment Model, all very similar

to one another. The first model is termed the Standard Model, which I consider to

be the most desirable model in terms of applicability to the real world. The models

which follow the Standard Model (in the next chapter) are experiments involving

slight changes to some of the attributes of the Standard Model. All the models take

the form of a formal security policy model [30, page 133] in the tradition of BLP.

All the sets used in the models contain a finite number of elements, with the

exception of the time set (defined later).

The notation used for set superscripts, subscripts and accents is generally con-

sistent throughout the model. Subscripts, superscripts, and accents may be omitted

when the information they would impart is contextual, and therefore redundant. The

terms "world," "empire," and "distributed compartment" which follow, are all defined

later. A "hat" is used to designate a set of a world (e.g., B̂ means set B of a

world). A superscript is used to designate a set of an empire (e.g., B^j means set B

of empire j). A subscript is used to designate a set of a distributed compartment

(e.g., B_i means set B of distributed compartment i). For the elements of a set,

subscripts are used as indices (e.g., b_k means b is the kth element of some set). If an

item has a double subscript, then the first subscript is used to indicate a distributed

compartment, and the second is an index (e.g., b_{i,k} means the kth element of a set in

distributed compartment i).











All the sets (with the exception of the time set) and elements are bound to some

element of the time set (defined later). This allows the contents of a set to change

from time to time. If this were not done, the resulting system would be completely

static and of no practical use. In order to designate a set (or the element of a set)

at a particular time t, we just place the time in parentheses after the name of the set

(e.g., B(t) is set B at time t). The time index may be omitted when the information
it would impart is not needed (e.g., all the sets in a statement are bound to the same

time).

Additionally, for a set B in distributed compartment D_i, in empire E^j, in a world

W, the following apply.


1. B ≡ B_i = {b_1, b_2, ..., b_n} is a set in a distributed compartment.

2. For empire E^j,

   B^j = ∪ B_i (union over i)

   if B_i is a set within a distributed compartment, and is the set of all sets B_i

   otherwise:

   B^j = {B_1, B_2, ..., B_n}.


3. For world W,

   B̂ = ∪ B^j (union over j)

   if B^j is a set within an empire, and is the set of all sets B^j otherwise:


   B̂ = {B^1, B^2, ..., B^n}.
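This layering of sets can be mirrored concretely (structure assumed purely for illustration): one set per discom, their union per empire, and the union over all empires for the world.

```python
# B_i for each discom, grouped by empire (invented example data).
empire_1 = {"D1": {"a", "b"}, "D2": {"c"}}   # discom sets within E^1
empire_2 = {"D3": {"d"}}                      # discom sets within E^2
world = [empire_1, empire_2]

# B^1: union of the discom-level sets across empire E^1.
B_j = set().union(*empire_1.values())

# B-hat: union of the empire-level sets across the whole world.
B_hat = set().union(*(set().union(*e.values()) for e in world))
```

The same pattern applies to any of the model's sets (objects, labels, action sets, and so on) as they are aggregated upward.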










4.2 The Standard Model

4.2.1 Components

The following definitions of the elements of the Standard Model consist of the

name and semantics of the set, followed by the elements of the set.

System Administration (administration) This is an autonomously maintained set of

resources within a world. The set of all administrations in world W is

Â = {A^1, A^2, ..., A^n}.

A particular administration is a set of resources:

A^k = {r_1, r_2, ..., r_l}.


The administration function A is a mapping of model components to adminis-

trations. For example, A(o) returns the set of administrations that belong to

the resources of object o.

User This is an entity within a single system administration that uses resources.

Users may be thought of as representing human beings or processes, however,

users do not span system administration boundaries.

Resources These are atomic units of computer systems that can be used to perform

computations, communications, and data storage/retrieval. Examples include

blocks of disk storage, peripherals, allocations of CPU time, etc. Resources can

be composed into objects. The Ith resource for administration Ak is denoted by

ri. A resource pool is a collection of resources, not necessarily from the same

system administration. For example, the set of resources that belong to discom

i is:


= r', r', r }.










In the case of resources, the administration function maps the system adminis-

tration that the resource belongs to:

A(r_l^k) = A^k.

The value of a resource is determined by the type of the resource (resource

types are not defined within this model), however, resources have a sanitized

value for their type which is used in conjunction with the sanitation property

(defined later). We assume that resources have some mapping to an undefined

value space, of which sanitized (or unsanitized) values are a subset.

The usage function T is used to determine whether a particular resource in a

resource pool is in use by an object or not. If the resource is unused, it is also

used to determine whether the resource is sanitized or not. T(r, t) = α (α

denotes "sanitized") if the resource is not being used at time t and is sanitized;

T(r, t) = ¬α if the resource is not being used at time t and is unsanitized.

Otherwise the usage function returns the particular object using the resource.

The discom function Δ is used to determine the discom to which a resource is

currently pooled. For example, the assertion that D_i is the discom of resource

r_n:

Δ(r_n) = D_i,

is equivalent to

r_n ∈ R_i.
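An informal sketch of the usage function may help (marker values and the allocation table are assumptions of this illustration, not the model's definitions): for a resource at a given time it returns the object using it, or a sanitized/unsanitized marker when the resource is idle.

```python
# Stand-ins for the sanitized / unsanitized markers of a resource type.
SANITIZED, UNSANITIZED = "alpha", "not-alpha"

def usage(resource: str, t: int, allocations: dict, sanitized: set):
    """Usage function sketch.

    allocations maps (resource, t) -> object when the resource is in use;
    sanitized is the set of idle resources holding their sanitized value.
    """
    obj = allocations.get((resource, t))
    if obj is not None:
        return obj  # in use at time t: return the object using it
    return SANITIZED if resource in sanitized else UNSANITIZED
```

The time argument matters: the same resource can be in use at one instant and idle (sanitized or not) at the next.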

System Administrator (administrator) This is the supreme authority for a system

administration.

Object This is a set of resources that may be accessed by subjects. Objects are

passive and acted upon, and may span several administrations. Examples are










files, passivated subjects, CPU time, and child discoms. An object belongs to

exactly one discom. For example, the set of all objects in discom Di is:


O_i = O_i^j = {o_1, o_2, ..., o_n}.


The set of sets of all objects for empire E^j is:

O^j = ∪ O_i (union over i).
A given object, on, is a set of zero or more resources from the resource pool of

the discom to which it belongs (these resources may be from various adminis-

trations), for example:

o_n = {r_1^a, r_2^b, ..., r_m^c}.

The administration function A(o) is a mapping of objects to sets of adminis-

trations, defined as follows for A(o), o ∈ O_i:

A(o) = ∪ {A(r_l) | r_l ∈ o}.

It is included so that there is a convenient way to determine the system admin-

istrations that belong to an object's resources.

Distributed Compartment (discom) This is an object that is a 5-tuple associating a

set of subjects Si, a set of objects Oi, a set of privileges Pi, an action set ASi

(all defined later), and a resource pool Ri. A discom is not necessarily restricted

to a single, physical, computer system. For example:


D_i = D_i^j = (S_i, O_i, P_i, AS_i, R_i).

Discoms do not intersect with one another. Non-intersection means that for

(D_i ≠ D_j):


1. O_i ∩ O_j = ∅;

2. P_i ∩ P_j = ∅;

3. AS_i ∩ AS_j = ∅;

4. R_i ∩ R_j = ∅.


Note that the subject sets may or may not intersect, since subjects can range

all over an empire: S_i ∩ S_j ≠ ∅ is allowed in some cases (S_i ∩ S_j = ∅

when discoms D_i and D_j happen to have no subjects in common).

The set containing all the discoms in an empire (defined later) is denoted by

D3.

The set containing all the discoms in a world (defined later) is denoted by D.

The following projections apply to any discom:


1. Π_1(D_i) = S_i is the set of all the subjects in a discom;

2. Π_2(D_i) = O_i is the set of all the objects in a discom;

3. Π_3(D_i) = P_i is the set of all the privileges in a discom;

4. Π_4(D_i) = AS_i is the action set of a discom;

5. Π_5(D_i) = R_i is the resource pool of a discom.
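The 5-tuple and its projections can be pictured with a small sketch (field names are my own, chosen for readability; the projections become plain field accesses):

```python
from typing import NamedTuple

class Discom(NamedTuple):
    subjects: frozenset    # S_i
    objects: frozenset     # O_i
    privileges: frozenset  # P_i
    action_set: frozenset  # AS_i
    resources: frozenset   # R_i

# An invented example discom with one subject, one object, two privileges.
d = Discom(subjects=frozenset({"s1"}),
           objects=frozenset({"o1"}),
           privileges=frozenset({1, 2}),
           action_set=frozenset(),
           resources=frozenset({"r1"}))
```

Here `d.subjects` plays the role of the first projection of D_i and `d.resources` the fifth; nothing beyond the tuple structure itself is being claimed.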


Since a discom is an object, and objects can span several administrations, a

discom can span several system administrations. The administration function

A(D) is a mapping of discoms to administrations, defined as follows for A(D_i):


A(D_i) = ∪ {A(r_l) | r_l ∈ R_i}.











Empire This is a partially ordered set of discoms:

E^j = (D^j, ⊢+),

where

D^j ⊆ D̂.

D^j is ordered as a set of rooted, directed trees within a world with each empire

having the following properties:


1. there is one root discom, D_E^j, called the empire discom, that has no

predecessors and from which there is a path to every other discom in the

empire;

2. each discom other than D_E^j has exactly one predecessor.


Formally, a set of empires within a world W containing the set of discoms D̂

must satisfy the following properties:


1. (E^i ≠ E^j) ⇒ (D^i ∩ D^j = ∅);

2. (Π_k(D_i) ∩ Π_k(D_j) = ∅) for k = 1, 2, 3, 4, 5;

3. the empire relation ⊢+ must apply (defined later).


Since an empire is composed of discoms, it can span several system adminis-

trations. The administration function A(E) is a mapping of empires to admin-

istrations, defined as follows for A(E^j):

A(E^j) = ∪ A(D_i) (union over i).
Empire Relation This is a binary relation used to create an empire, denoted by ⊢+

and pronounced "rules." The empire relation is the transitive closure of the

"sires" relation (⊢). The sires relation applies only to a parent-child relation

(e.g., D_a ⊢ D_b means D_b is the child of D_a). The sires relation has the following

properties for an empire E^j:


1. asymmetry: (D_a ⊢ D_b) ⇒ ¬(D_b ⊢ D_a);

2. irreflexivity: ¬(D_a ⊢ D_a) ∀ (D_a ∈ E^j);

3. single parenthood: (D_a ⊢ D_b) ⇒ ∀ (D_c ≠ D_a), ¬(D_c ⊢ D_b).


The rules relation is the transitive closure of sires. It has the property of

((D_a ⊢+ D_b) ∧ (D_b ⊢+ D_c)) ⇒ (D_a ⊢+ D_c).

In addition, there are no cycles allowed for the rules relation:

(D_a ⊢+ D_b) ⇒ ∀ (D_c ⊢+ D_a), ¬(D_b ⊢+ D_c).
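The sires and rules relations can be modeled informally with a parent map (an illustration under my own encoding, not the formal definition): storing each discom's unique parent makes single parenthood structural, sires a direct lookup, and rules an ancestor walk, i.e. the transitive closure.

```python
# child -> unique parent; the dict shape itself enforces single parenthood.
parent = {"D2": "D1", "D3": "D1", "D4": "D2"}

def sires(a: str, b: str) -> bool:
    """D_a sires D_b: b is a direct child of a."""
    return parent.get(b) == a

def rules(a: str, b: str) -> bool:
    """Transitive closure of sires: walk b's ancestor chain looking for a."""
    cur = parent.get(b)
    while cur is not None:
        if cur == a:
            return True
        cur = parent.get(cur)
    return False
```

Acyclicity corresponds to the ancestor walk always terminating; a cycle in the parent map would make the rules relation ill-founded, which is exactly what the no-cycles condition forbids.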


Empire discom This is the discom D_E^j that is the upper bound of an empire (i.e.,

the root of the empire tree of discoms). This should not be confused with an
empire.

World This is a forest of empires (i.e., an unordered set whose members are empires).
A world is identified by some method that can not be specified in this model.

Worlds should be uniquely identified, but there is no way to guarantee this
within the scope of this model.

W = Ŵ = {E^1, E^2, ..., E^n}.


Subject This is an active entity that can perform operations on objects by the use

of applications. Subjects are partially ordered by the governs relation. A











subject belongs to exactly one empire, but can belong to multiple discoms. The

set of all subjects in empire E^j is:

S^j = {s_1, s_2, ..., s_n}.

The set of all subjects in discom D_i ∈ E^j is:

S_i = S_i^j ⊆ S^j.

A subject is a tuple of the form:


s_k = (D_i, E^j, o_k),

where D_i is the subject's upper discom, E^j is the empire in which the subject is

contained, and o_k ∈ O_i is the default object which the subject will become when

objectified/passivated (this object also contains all the resources the subject
uses in its upper discom). Subjects may be passivated so that they can be

accessed by other subjects. Note that in the rules that follow, whenever a

subject is used in the place where an object is required, it is assumed that the

subject is passivated into its default object. The upper discom is the highest

discom in the empire which a subject may ever access, in accordance with the

Ceiling property (see later). A subject is a member of all discoms between any

arbitrary discom of which he is a member and its upper discom. The upper

discom must rule all other discoms of which the subject is a member.

The following projections apply to any subject:


1. Π_1(s_k) = D_i is the upper discom of the subject;

2. Π_2(s_k) = E^j is the empire of the subject;


3. Π_3(s_k) = o_k is the default object of the subject.










Label This is a string created from some world specific alphabet with a world specific

maximum and minimum length. The image of the handle mapping function

(defined later) of the subjects in discom D_i is:

L_i = L_i^j ⊆ L^j.

The set of all labels in use in empire E^j is:

L^j = ∪ L_i (union over i).
Handle This is the method of subject identification. A handle is a label that results

from a mapping of a subject to a set of labels within an empire. The set of all

handles in empire E^j is:

H^j = {h_1, h_2, ..., h_n}.

The set of all handles in discom D_i is:

H_i = H_i^j ⊆ H^j.


Handle Function The handle function H(s) is a mapping of subjects to labels, defined

as follows:


1. H : S → 2^L where L is the set of all possible labels in world W;

2. every subject must have at least one label: H : s ↦ (2^L − {∅});

3. subjects can share the same labels: H(s_i) ∩ H(s_j) ≠ ∅ is allowed for i ≠ j.


For any subject s_i we can say that H(s_i) is the set of handles for s_i.
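The handle function can be pictured as a mapping from subjects to non-empty sets of labels (the subjects and labels below are invented). Note how two subjects sharing a label supports the shared-role binding described in the previous chapter:

```python
# H: subject -> set of labels (handles). All data is illustrative.
H = {
    "s1": {"Steve", "Referee 2"},
    "s2": {"Referee 2"},          # s1 and s2 share a role handle
}

def handles(subject: str) -> set:
    """Return H(s), enforcing that every subject has at least one label."""
    labels = H[subject]
    assert labels, "every subject must have at least one label"
    return labels
```

Uniqueness of handles is scoped to the discom's name-space, so the same label in two different discoms poses no conflict.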

Privileges These are the actions a subject can perform upon an object within a

particular discom. Privileges are generic and are not restricted to a particular











subject or object within a discom. The set of privileges in discom D_i is:


P_i = {p_1, p_2, ..., p_24, ..., p_n}.

The set of all sets of privileges for empire E^j is:


P^j = {P_1, P_2, ..., P_n}.

Each privilege is a tuple of the form:


p_i = (i, D),

where i is the index of the privilege, and D is the discom of the privilege. The

index of a privilege corresponds to an action on a type of object.

As an example, the first privilege, which creates an object from resources in

discom Di is not the same privilege as the one which creates an object from

resources in discom Dj.

The first privileges (starting with p_1) of each discom are called the initial priv-

ileges and are reserved for the following:


1. create object;

2. destroy object;

3. modify object;

4. merge objects;

5. split object;

6. create discom;

7. destroy discom;

8. merge discoms;











9. split discom;

10. destroy empire;

11. merge empires;

12. split empires;

13. create subject;

14. destroy subject;

15. create new privilege;

16. destroy non-initial privilege;

17. create governor;

18. rescind governorship;

19. add resource to resource pool;

20. remove resource from resource pool;

21. grant privilege to subject;

22. rescind privilege from subject;

23. make subject member of child discom;

24. remove subject as member of child discom.


The indices of the initial privileges are the same for all discoms, because they

refer to the same types of objects. Note that it would appear that some of the

initial privileges do not have an object to act upon since actions are initiated

by subjects using a privilege upon an object (defined later). However, each

one of these privileges does in fact have an object to act upon. For example,

the create discom privilege has the current discom as its object and results in











creating a child discom. Granting privileges to a subject uses the action set as

the object (i.e., the action set is modified).

Request Relation This is a binary relation used to indicate when a particular sub-

ject requests access to a particular object, and is denoted by ρ_l. For example,

(s_k ρ_l o_m) (pronounced "s sub k l-requests o sub m").

Access Relation This is a binary relation used to indicate that a particular sub-

ject accesses a particular object, and is denoted by α_l. A tuple of the form

(s_k α_l o_m) (pronounced "s sub k l-accesses o sub m") may be added to the

current subject access set (defined later) of a subject in discom D_i, CSA_{i,k}(t +

1), iff (s_k, (p_l, o_m)) ∈ AS_i(t) and s_k is accessing o_m using p_l at time t + 1.

If a subject is no longer accessing an object, then the previous tuple must be

removed from that subject's current subject access set. For example: CSA_{i,k}(t +

1) = CSA_{i,k}(t) − {(s_k α_l o_m)}.

Governor This is the set of discom subjects that always have the maximum privileges

for that discom and any descendants of that discom. For discom Di, ∀ (p ∈

Pi, o ∈ Oi, g ∈ Gi) ⇒ (g, (p, o)) ∈ ASi. A governor is also the governor of
any descendant discoms. The set of all governors in discom Di is:

Gi = {g1, g2, ..., gn} ⊆ Si.


Oligarch This is a governor of an empire discom. An oligarch governs an entire
empire. The set of all oligarchs for empire Ej is:

Ωj = {ω1, ω2, ..., ωn}.


Another way of stating this is to say that Ωj = Gi when Di is the empire discom of Ej.
PO Matrix This is a conceptual set consisting of the Cartesian product of the privilege
set and the object set within a particular discom. For discom Di:

POi = (Pi × Oi).


SPO Matrix This is a conceptual set consisting of the Cartesian product of the subject

set and the PO Matrix within a particular discom. For discom Di:

SPOi = (Si × POi).


Action Set This is a subset of the SPO Matrix for discom Di, (ASi ⊆ SPOi), that

consists of tuples of the form (sk, (pl, om)) and is used to determine if subject

sk may access object om using privilege pl. This set can be thought of as being
a list where each entry is a subject with the privileges and objects the subject

can use. In addition, the action set of empire Ej is:

ASj = ⋃i ASi.

The action set of world W is:

AS = ⋃j ASj.

The following projections apply to any action set's SPO tuple:

1. Π1(SPO ∈ ASk) = s is the subject of the SPO tuple;

2. Π2(SPO ∈ ASk) = (p, o) is the privilege-object tuple of the SPO tuple.

Capabilities Set This is a conceptual subset that is the second projection of the action
set for a particular subject in a particular discom Di:


CSi,k ⊆ {(p, o) | ∃x ∈ ASi, sk = Π1(x) ∧ (p, o) = Π2(x)}.


This can be thought of as isomorphic to a standard capabilities list, since it

consists of tuples of the form (p, o).

In addition, a change to the standard nomenclature (as noted earlier) is made

to some sets such as the capabilities set: CSi,k indicates the capabilities set of

subject sk in discom Di.
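The action set and the capabilities-set projection can be sketched in Python as plain sets of nested tuples. This is only an illustrative sketch; the subject, privilege, and object names below are hypothetical, and the model itself is set-theoretic rather than tied to any implementation.

```python
# Illustrative sketch: an action set AS_i as a set of SPO tuples
# (subject, (privilege, object)), the two projections, and the derived
# capabilities set CS_{i,k} for one subject.

AS_i = {
    ("s1", ("p1", "o1")),   # s1 may use p1 on o1
    ("s1", ("p3", "o2")),
    ("s2", ("p2", "o1")),
}

def proj1(spo):
    """First projection: the subject of an SPO tuple."""
    return spo[0]

def proj2(spo):
    """Second projection: the (privilege, object) tuple."""
    return spo[1]

def capabilities(action_set, subject):
    """CS_{i,k}: all (p, o) pairs the subject may use in this discom."""
    return {proj2(x) for x in action_set if proj1(x) == subject}

def may_access(action_set, subject, privilege, obj):
    """s_k may access o_m using p_l iff (s_k, (p_l, o_m)) is in AS_i."""
    return (subject, (privilege, obj)) in action_set
```

Because the capabilities set is a pure projection of the action set, it need not be stored separately, matching the model's treatment of it as conceptual.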

Time Set This is a set of time indices used to identify discrete moments in a world:

T = {0, 1, ..., t, ...}.

The time set is required because the system being modeled is not static. The

system can be thought of as being single-stepped from one time index to a

greater (later) time index [6]. As a convenience, we set t0 = 0.

Unless a set B(t) is explicitly changed by a rule (defined later), then B(t + 1) =

B(t).

Current Subject Access This is a set of the form:

CSAi,j(t) = {(sj ℜk ol)}.

Such a relation means that subject sj in discom Di is k-accessing object ol

at time t. If a subject is not accessing any object at a particular time, then

CSAi,j(t) = ∅.

Current Access Set This is the set of all the current subject accesses within discom

Di at time t:

CASi(t) = ⋃k CSAi,k(t).

The set of all the current subject accesses within empire Ej at time t is:

CASj(t) = ⋃i CASi(t).

The set of all the current subject accesses within world W at time t is:

CAS(t) = ⋃j CASj(t).
Subject Access Request This is a set of the form:

SARi,k(t) = {(sk ρl om)}.

Such a relation in SARi,k(t) means that subject sk in discom Di is requesting l-

access to object om at time t. There can be only one request per subject in the

subject access request set (i.e., |SARi,k(t)| ≤ 1). This is done to help simplify the

model (for example, to eliminate any potential deadlock problems). If a subject

is not requesting access to any object at a particular time, then SARi,k(t) = ∅.

The triple is a request for access, not access itself.

Access Request Set This is the set of all the current subject access requests within discom Di

at time t:

ARSi(t) = ⋃k SARi,k(t).

The set of all the current subject access requests within empire Ej at time t is:

ARSj(t) = ⋃i ARSi(t).

The set of all the current subject access requests within world W at time t is:

ARS(t) = ⋃j ARSj(t).

The purpose of the ARS is to provide a way for the system to request access

at time t and get a response to that request at time t′, t′ > t.

Governs Relation This is a binary relation (denoted by ⊲), relative to each discom:

⊲ ⊆ (Si × Si) for discom Di. It is used to create a lattice of all the subjects

in all discoms. In accordance with the lattice principle, subjects are partially
ordered under ⊲ (a relation is a partial ordering relation if it is reflexive, an-
tisymmetric, and transitive). If a is a governor of a discom, then a has the
maximum privileges for that discom. If a ⊲ b (pronounced "a governs b") then
a may do at least everything that b may do (i.e., a has all of the rights that b
has) and may also destroy b:

(a ⊲ b) ⇒ (CSi,b ⊆ CSi,a).

This relation has the following properties for discom Di = (Si, Oi, Pi, ASi, Ri)
with sl, sm, sn ∈ Si:

1. transitivity: ((sl ⊲ sm) ∧ (sm ⊲ sn)) ⇒ (sl ⊲ sn);

2. antisymmetry: ((sl ⊲ sm) ∧ (sm ⊲ sl)) iff (sl = sm), and likewise ((sl ⊲
sm) ∧ (sl ≠ sm)) ⇒ ¬(sm ⊲ sl) (governors do not govern each other);

3. reflexivity: (sl ⊲ sl) ∀ (sl ∈ Si);

4. non-comparability: subjects do not have to be comparable, so for two
subjects sl, sm : (¬(sl ⊲ sm) ∧ ¬(sm ⊲ sl)) is possible;

5. subjectivity: every non-oligarch subject must be governed by a governor:
∀ s ∉ Ωj, ∃ s′, (s′ ⊲ s);

6. least upper bound (LUB): every pair of non-oligarch subjects has a least
upper bound: ∀ ((sl ∉ Ωj) ∧ (sm ∉ Ωj)), ∃ w, ((w ⊲ sl) ∧ (w ⊲ sm)).

System State This is a tuple consisting of a world's current access set and a world's
action set for a particular time t:

SS(t) = (CAS(t), AS(t)).


The set of all system states is Σ.
Secure System State This is a system state that satisfies the secure state invariants (defined

later). The set of all secure system states is denoted by Σ*.

Rules of Operation This is a set of secure state transition constraints that transform

a system state SS(t) to SS(t′), t′ > t:


T = {τ1, τ2, ..., τn}.


It should be emphasized that system states are time variant (i.e., the state of

the system varies with time). The only way that a system state can change is

by the application of a rule.

A rule is a function causing a state transition from one system state to another

system state:

τk : Σ × 2^ρ → Σ,

where ρ is the set of all requests. All rules are designed to be secure state

preserving: τk : Σ* × 2^ρ → Σ*. A state transition is composed of a system state

along with an access request set which is transformed into another system state:


τk : (SS, ARS) ↦ SS′,


or,

τk(SS, ARS) = SS′.


In order to simplify the model, rules are applied one at a time to one subject

request at a time. Since the model is designed for use with distributed systems,

we make the simplifying assumption that all requests are serialized within the

particular domain upon which the rule operates (e.g., a discom, an empire,

etc.).











In addition, the operation of rules is atomic: all the steps of a rule must be

executed, or none of them. This prevents the security of the system from

being defeated by interrupting a rule's transition from one secure system state

to another before the rule completes, and possibly exploiting a security hole

which could result.

Secure State Preserving For any SS(t), if SS(t) is a secure system state, and

τi(SS(t), ARS(t)) = SS(t′), t′ > t, where SS(t′) is also a secure system state,

then τi is secure state preserving.
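The requirement/operation structure of a rule, its atomicity, and the secure-state-preserving check can be sketched as follows. This is a toy stand-in, not the model's actual rules: the state dictionary, the example requirement, and the invariant are all hypothetical.

```python
# Illustrative sketch of atomic rule application: a rule checks its
# requirements, then performs its operations on a copy of the state,
# so a failed step never leaves a half-modified (possibly insecure)
# state behind.

import copy

def apply_rule(state, request, requirements, operations, invariants):
    """Return the successor state, or the unchanged state if the rule
    cannot be applied (requirements false or an operation fails)."""
    if not all(req(state, request) for req in requirements):
        return state                      # rule cannot be applied
    candidate = copy.deepcopy(state)      # atomicity: work on a copy
    try:
        for op in operations:
            op(candidate, request)
    except Exception:
        return state                      # all steps or none
    # Secure-state preserving: the invariants must still hold.
    assert all(inv(candidate) for inv in invariants)
    return candidate

# Toy example: a "create object" rule adds a name to an object set.
state = {"objects": set()}
reqs = [lambda s, r: r not in s["objects"]]
ops = [lambda s, r: s["objects"].add(r)]
invs = [lambda s: isinstance(s["objects"], set)]
new_state = apply_rule(state, "o1", reqs, ops, invs)
```

Working on a deep copy is one simple way to realize the all-or-nothing behavior the model requires; a real system might instead use transactions or locking within the serialized domain.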

System This is all sequences of system states with some initial system state SS(t0)

which satisfies the secure system state invariants for all subsequent system

states.


4.2.2 Secure State Invariants

The secure state invariants are the properties we want the model to have. In order

for the system to be secure, these properties must always be in effect, which is why

they are termed "invariant." They are related to the traditional idea of invariance

used in the field of program verification in that if they hold before a rule is executed,

they should hold after it is executed. Here, "invariance" is used in the same sense

that it is used in BLP [6].


Genesis Axiom The initial system state of the system, SS(to), is secure.

Divine Right Axiom A subject can create an empire discom only if given that priv-

ilege by the administrators of the system administration(s) involved. This is

the only area where the system administration needs to be involved in the

management of discoms.










Temporal Axiom A subject may only access an object with the same time index as
the subject:

∀ (a, b, t, t′, j, l), ((sa(t) ∈ Sj(t)) ∧ (ob(t′) ∈ Oj(t′)) ∧ (sa(t) ℜl ob(t′))) ⇒

(t = t′).

Usage Property If a subject accessing an object does not release the object, it still
accesses it. If a subject did not request an object, it can not access it:

∀ (a, l, b, t > t0), (sa(t) ℜl ob(t)) ⇒

((sa(t − 1) ℜl ob(t − 1)) ∨ (sa(t − 1) ρl ob(t − 1))).

In other words, if a subject is currently accessing an object, it either accessed
the object before the current time, or it requested access to the object before
the current time.

Creator Property The creator of a discom automatically becomes a governor of that
discom:

((si ∈ Π1(Dj(t))) ∧ (si ℜ6 Dk(t))) ⇒

((Dj(t+1) ↓ Dk(t+1)) ∧ (si ∈ Π1(Dk(t+1))) ∧ (si ∈ Gk(t+1))).

Equivalently:

∀ si ∈ Π1(Dj(t)), ((si ℜ6 Dk(t)) ∧ (Dj(t + 1) ↓ Dk(t + 1))) ⇒ si ∈ Gk(t + 1).

Government Property The governor of a discom may grant and revoke any privileges
to non-governor subjects of that discom:

((gi ∈ Π1(Dj(t))) ∧ (sk ∉ Gj(t))) ⇒ (gi ℜ21 sk(t)),

and,


((gi ∈ Π1(Dj(t))) ∧ (sk ∉ Gj(t))) ⇒ (gi ℜ22 sk(t)).










Cordon Property Discoms may never intersect with other discoms:

∀ (Di(t) ≠ Dj(t)), (Di(t) ∩ Dj(t) = ∅).


Nova Property A non-governor subject may only access a descendant discom if made
a member of that discom by a governor of an ancestor discom (conditional down
access):
((t′ > t) ∧ (Dj(t) ↓+ Dk(t)) ∧ (si ∈ Π1(Dj(t)))) ⇒

(si ∈ Π1(Dk(t′)) iff ∃ gl ∈ Gj(t), (gl ℜ23 si)).

Demesne Property The governor of a discom always has unrestricted access to de-
scendant discoms:

((si ∈ Gj(t)) ∧ (Dj(t) ↓+ Dm(t))) ⇒ (si ∈ Gm(t)).


Ceiling Property A subject may not access an ancestor discom without being a sub-
ject of that discom:

((si ∈ Sk(t)) ∧ (Dj(t) ↓+ Dk(t)) ∧ (si(t) ℜl ol ∈ Oj(t))) ⇒ (si ∈ Π1(Dj(t))).

Note that this implies that a subject may only access discoms that are either
its upper discom or ruled by its upper discom.

Sanitation Property Resources that are unused must contain the sanitized value for
their type:
∀ r, (∀ o ∈ O(t), r ∉ o) ⇒ (Υ(r, t) = σ).
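The usage function and the Sanitation property can be sketched as a mapping from resources to either the object they compose or a sanitized marker. All names below (the resources, objects, and the SIGMA marker) are illustrative assumptions, not the model's notation.

```python
# Illustrative sketch of the usage function: each resource maps either
# to the object it currently composes or to the sanitized value for
# its type (modeled here as a single marker SIGMA).

SIGMA = "sanitized"

usage = {"r1": "o1", "r2": "o1", "r3": SIGMA, "r4": SIGMA}
objects = {"o1": {"r1", "r2"}}   # object -> resources composing it

def sanitation_holds(usage, objects):
    """Every resource not composing some object must be sanitized."""
    in_use = set().union(*objects.values()) if objects else set()
    return all(usage[r] == SIGMA for r in usage if r not in in_use)
```

A resource left with stale contents after its object is destroyed would make this check fail, which is exactly the condition the rules' sanitization operations are designed to prevent.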

4.2.3 The Rules of Operation

Rules take the form of a set of requirements followed by a set of operations. If
the requirements are not all true, the rule cannot be applied. Each rule starts at
some initial time t and concludes at some time greater than t. This is because a rule

may take several discrete time steps to complete. Each type of privilege access has a

corresponding rule.

The secure state invariants will be shown to be truly invariant under these rules

through theorems and their proofs following the individual rules. That is, if all the

requirements are true, then the application of a rule to a secure state will result in a

secure state.

If a set is not one of the sets that are in the discom tuple (i.e., S, O, P, AS, R) or

a member of those sets, then it is conceptual and need not be explicitly maintained.

This also applies to the functions used (e.g., the usage function), which will sometimes

be mentioned when necessary.

A group of lemmas is now introduced prior to the rules and theorems. These

lemmas will be used in the theorems which prove the rules of operation are secure

state preserving.

Lemmas

Lemma 1 (Usage lemma): If a rule τ is such that any object of the rule is not

being accessed by any subject when the rule begins at initial time t, and the subject

requests access to the object at initial time t, and at time t′, t′ > t, the subject

accesses the object, and at time t″, t < t′ < t″, where t″ is the final time of the rule,

the subject releases the object, then the Usage property holds.

Proof: The Usage property is concerned only with the proper request/access of

an object by a subject. If the following conditions occur during a rule τk for a subject

si ∈ Sj and an object o ∈ Oj at times t, t′, and t″ such that t < t′ < t″ in discom Dj:


1. (si ρl o) ∈ ARSj(t);

2. (si ℜl o) ∈ CSAj,i(t′);

3. (si ℜl o) ∉ CSAj,i(t″);


then the Usage property holds.

Q. E. D.


Lemma 2 (Creator lemma): If a rule τ is such that a discom is not created, then

the Creator property holds. For example, if k ≠ 6, 8, 9, and k ≤ 24, and SS(t) ∈ Σ*,

then τk does not cause a violation of the Creator property.

Proof: The Creator property is concerned only with the creation of a discom. If a

rule τ is applied to SS(t) ∈ Σ* where no discom is created, then the Creator property

vacuously holds.

Q. E. D.


Lemma 3 (Government lemma): If a rule τ does not affect the actions of a gover-

nor, then the Government property holds. For example, if k ≠ 17, 18, and k ≤ 24,

and SS(t) ∈ Σ*, then τk does not violate the Government property.

Proof: The Government property is concerned only with the actions of governors

in granting and revoking privileges to non-governor subjects in a discom. If a rule τk

does not affect the capabilities set of any governor regarding that governor's subject-

affecting privileges, then the Government property must hold. The only rules τk, k ≤

24, that have this effect are τ17 and τ18.

Q. E. D.










Lemma 4 (Cordon lemma): If a rule τ does not cause any object set O, privilege

set P, action set AS, or resource set R to add any new elements, then it can't possibly

make any discoms intersect, and the Cordon property holds.

Proof: Since discom non-intersection is defined for discoms Di and Dj where i ≠ j as

follows:

1. Oi ∩ Oj = ∅;

2. Pi ∩ Pj = ∅;

3. ASi ∩ ASj = ∅;

4. Ri ∩ Rj = ∅,

then in general,

((A1 ∩ A2 = ∅) ∧ (A1′ ⊆ A1) ∧ (A2′ ⊆ A2)) ⇒ (A1′ ∩ A2′ = ∅).

Therefore if rule τ does not add elements to any Oi, Pi, ASi, or Ri, then it must be

of the form ((A1′ ⊆ A1) ∧ (A2′ ⊆ A2)) ⇒ (A1′ ∩ A2′ = ∅), and it can not possibly cause a discom intersection, and therefore the

Cordon property holds.

Q. E. D.
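The set fact at the heart of the Cordon lemma, that disjoint sets stay disjoint when a rule only shrinks them, can be spot-checked on arbitrary example sets (the elements below are hypothetical):

```python
# Spot-check of the set fact used in the Cordon lemma: if A1 and A2
# are disjoint and a rule only removes elements (A1' ⊆ A1, A2' ⊆ A2),
# the resulting sets are still disjoint.

A1 = {1, 2, 3}
A2 = {4, 5, 6}
A1_prime = {1, 3}        # a rule deleted element 2
A2_prime = {5}           # a rule deleted elements 4 and 6

premise = (A1 & A2 == set()) and A1_prime <= A1 and A2_prime <= A2
conclusion = A1_prime & A2_prime == set()
```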


Lemma 5 (Nova lemma): If SS(t) ∈ Σ* and if a rule τ begins at time t and ends

at time t′, t′ > t, and ∀ j, (Sj(t′) − Gj(t′)) ⊆ (Sj(t) − Gj(t)), then the Nova property

holds.

Proof: The Nova property is concerned only with a non-governor subject accessing

a descendant discom. If a rule is applied to SS(t) ∈ Σ* such that a new subject is

not added to any discom's subject set, then the Nova property holds, because if a new

subject is not added to the subject set then there can not be any new privileges that

the (non-existent) subject can use in a discom.

Q. E. D.


Lemma 6 (Demesne lemma): If SS(t) ∈ Σ* and a rule τ does not affect the access of

governors of the descendants of a discom, then the Demesne property holds. That is,

if ∀ j, Gj(t) = Gj(t′) when τk(SS(t), ARS(t)) = SS(t′), then the Demesne property

holds.

Proof: The Demesne property is concerned only with a governor's ability to access

a descendant discom: ((Dj ↓+ Di) ∧ (ga ∈ Gj)) ⇒ (ga ∈ Gi). If a rule is applied to

SS(t) ∈ Σ* such that no change is made to any governor set in a descendant discom,
then the Demesne property vacuously holds.

Q. E. D.


Lemma 7 (Ceiling lemma): If SS(t) ∈ Σ* and a rule τ does not affect how a

subject accesses an ancestor discom, then the Ceiling property holds. That is, if

∀ i, a, CSi,a(t) = CSi,a(t′), t′ > t, for sa when τk(SS(t), ARS(t)) = SS(t′), or the

change to CSi,a(t′) does not affect the subject's access of an ancestor discom, then

the Ceiling property holds.

Proof: The Ceiling property is concerned only with a subject not being allowed

to access an ancestor discom without also being a subject of that ancestor discom:

∀ a, i, j, ((sa ∈ Si) ∧ (Dj ↓+ Di) ∧ (sa accesses Dj)) ⇒ (sa ∈ Π1(Dj)). If a rule is applied to

SS(t) ∈ Σ* such that no subject access is made to an ancestor discom, then the

Ceiling property vacuously holds.

Q. E. D.











Lemma 8 (Sanitation lemma): If SS(t) ∈ Σ* and a rule τ does not affect any

resources, then the Sanitation property holds. That is, if Υ(rj, t) = Υ(rj, t′) ∀ j

when τk(SS(t), ARS(t)) = SS(t′) ∧ (t′ > t), then the Sanitation property holds.

Proof: The Sanitation property is concerned with unused resources containing the

sanitized value for their type. If a rule is applied to SS(t) ∈ Σ* such that no resources

are affected: Υ(rj, t) = Υ(rj, t′), t′ > t, then the Sanitation property vacuously
holds.

Q. E. D.

τ1: Object Creation Rule

When a creator subject sc in discom Di requests the creation of a new object on at time

t, then the following must occur.


1. Requirement: sc(t) must have the "create object" privilege p1 in its capabilities

set: (p1, ASi) ∈ CSi,c(t).

2. Requirement: The access request set ARSi(t) must contain a tuple of the form

(sc ρ1 (on, ASi)).

3. Requirement: on(t) must not exist: on ∉ Oi(t).

4. Requirement: on(t + 1) must be composed of unused resources r ∈ Ri(t) such

that Υ(r, t) = σ:

∀ rj, rj ∈ on(t + 1) ⇒ (rj ∈ Ri(t)) ∧ (Υ(rj, t) = σ).


5. Operation: CSAi,c(t + 1) = CSAi,c(t) ∪ {(sc ℜ1 (on, ASi))}.

6. Operation: When the previously unused resources have been allocated to on(t +

1), the usage function Υ must return that object for each resource that now

composes on(t + 1). For example, if on(t + 1) is to be composed of resource ra
then Υ(ra, t + 1) = on(t + 1).

7. Operation: on(t + 1) must be added to the set Oi(t + 1) : Oi(t + 1) = Oi(t) ∪

{on(t + 1)}.

8. Operation: The action set of all the governors of Di must be updated to allow
each governor the maximum access to on(t + 1):

ASi(t + 1) = ASi(t) ∪ {(gj, (pl, on)) | ∀ gj ∈ Gi(t + 1), l = 2, 3, 4, 5}.


9. Operation: CSAi,c(t + 2) = CSAi,c(t + 1) − {(sc ℜ1 (on, ASi))}.

10. Operation: ARSi(t + 2) = ARSi(t) − {(sc ρ1 (on, ASi))}.
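The requirement and operation steps above can be walked through on a toy discom. This sketch is illustrative only: the field names mirror the discom tuple (S, O, AS, G, R) plus a usage map, time indexing is implicit in the order of mutations, and the ARS/CSA bookkeeping steps are omitted for brevity.

```python
# Illustrative walk-through of the object-creation rule on a toy discom.

SIGMA = "sanitized"

discom = {
    "S": {"s_c", "g1"},
    "O": set(),
    "AS": {("s_c", ("p1", "AS"))},   # s_c holds the create-object privilege
    "G": {"g1"},
    "R": {"r1", "r2"},
    "usage": {"r1": SIGMA, "r2": SIGMA},
}

def create_object(d, creator, new_obj, resources):
    # Requirements: privilege held, object new, resources unused (sanitized).
    assert "p1" in {p for (s, (p, _)) in d["AS"] if s == creator}
    assert new_obj not in d["O"]
    assert all(d["usage"][r] == SIGMA for r in resources)
    # Operations: allocate the resources to the new object, add the
    # object, and give every governor privileges p2..p5 over it.
    for r in resources:
        d["usage"][r] = new_obj
    d["O"].add(new_obj)
    for g in d["G"]:
        for p in ("p2", "p3", "p4", "p5"):
            d["AS"].add((g, (p, new_obj)))

create_object(discom, "s_c", "o_n", {"r1"})
```

Note how operation (8) is what keeps the Demesne property cheap to check later: governors get maximum access to each new object at creation time rather than on demand.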

Theorem 1 (Object Creation Theorem): τ1 is secure state preserving.
Proof: Assume that SS(t) ∈ Σ* contains a discom Di which contains a subject sc
which requests the creation of a new object on at time t.

1. Usage property: by lemma 1 from (2), (5), and (9).

2. Creator property: by lemma 2.

3. Government property: by lemma 3.

4. Cordon property:

by (7) the object set Oi has increased: ∀ j ≠ i, Oj(t + 1) = Oj(t) and
on ∉ Oj(t) since ∄on(t) (by (3)) and Oi(t) ∩ Oj(t) = ∅ (by the assumption
that SS(t) is a secure system state) and Oi(t + 1) − Oi(t) = {on}, therefore

Oi(t + 1) ∩ Oj(t + 1) = (Oi(t) ∪ {on}) ∩ Oj(t) = ∅;

Pi is not affected: Pi(t) = Pi(t + 1);

by (8), ASi does not intersect any other action set: ∀ j ≠ i, ASi(t) ∩

ASj(t) = ∅, and ASj(t + 1) = ASj(t), and ∄on(t) (by (3)), and on is the only

object in {Π2(Π2(x)) | x ∈ ASi(t + 1) − ASi(t)}, so on ∉ {Π2(Π2(x)) | x ∈ ASj(t)},
therefore ∀ j ≠ i, ASi ∩ ASj = ∅;

Ri is not affected: Ri(t) = Ri(t + 1);

therefore the Cordon property holds.

5. Nova property: by lemma 5.

6. Demesne property: the only change to the privileges of the governors of Di is by

(8), and this change is an addition to their privileges, and does not affect their
access to descendant discoms: if (g ∈ Gi(t)) ∧ (Di(t) ↓+ Dj(t)) ⇒ (g ∈ Gj(t)),

then g ∈ Gj(t + 1). Therefore the Demesne property holds.

7. Ceiling property: by lemma 7.

8. Sanitation property:

By (4) all resources are initially unused: Υ(r, t) = σ at time t;

by (6) all modified resources are assigned to the new object on : Υ(r, t +
1) = on(t + 1);

therefore the Sanitation property holds.

Q. E. D.

τ2: Object Destruction Rule

When a destroyer subject sd in discom Di requests the destruction of an existing
object oe at time t, then the following must occur.


1. Requirement: sd(t) must have the "destroy object" privilege p2 in its capabilities
set: (p2, oe) ∈ CSi,d(t).

2. Requirement: the access request set ARSi(t) must contain a tuple of the form

(sd ρ2 oe).

3. Requirement: the object oe must exist at time t : oe ∈ Oi(t).

4. Requirement: the current access set CASi(t) must not contain any triples which
contain oe (i.e., no subjects may be accessing the object at the time of destruc-
tion): (sa ℜb oe) ∉ CASi(t) ∀ a, b.

5. Operation: CSAi,d(t + 1) = CSAi,d(t) ∪ {(sd ℜ2 oe)}.

6. Operation: all the resources of oe must be sanitized, such that the usage function
Υ at time t + 1 must return σ for each resource that previously composed oe :
(r ∈ oe(t)) ⇒ (Υ(r, t + 1) = σ).

7. Operation: oe must be deleted from Oi(t + 1) : Oi(t + 1) = Oi(t) − {oe}.

8. Operation: the action set ASi(t + 1) must be modified to delete all tuples
which contain the destroyed object oe : ASi(t + 1) = ASi(t) − {(s, (p, oe)) | s ∈

Si(t), p ∈ Pi(t + 1)}.

9. Operation: CSAi,d(t + 2) = CSAi,d(t + 1) − {(sd ℜ2 oe)}.

10. Operation: ARSi(t + 2) = ARSi(t) − {(sd ρ2 oe)}.

Theorem 2 (Object Destruction Theorem): τ2 is secure state preserving.
Proof: Assume that SS(t) ∈ Σ* contains a discom Di which contains a subject sd
which requests the destruction of an existing object oe at time t.

1. Usage property: by lemma 1 from (2), (5), and (9).

2. Creator property: by lemma 2.

3. Government property: by lemma 3.

4. Cordon property: Oi shrinks, Pi does not change, ASi shrinks, and Ri does not

change. Therefore by lemma 4 the Cordon property holds.

5. Nova property: by lemma 5.

6. Demesne property: by lemma 6.

7. Ceiling property: by lemma 7.

8. Sanitation property: by (6), all of the resources which made up the object that

was deleted, oe, are sanitized and returned to the resource pool Ri of discom

Di : (r ∈ oe(t)) ⇒ (Υ(r, t + 1) = σ), therefore the Sanitation property holds.

Q. E. D.

τ3: Object Modification Rule

This encompasses addition (τ3A) and subtraction (τ3S) of resources from objects,

and not a change of data within the object (e.g., changing the contents of a memory

cell).

τ3A: when a modifier subject sm in discom Di requests the addition of a resource

ra to an existing object oe at time t, then the following must occur.

1. Requirement: sm(t) must have the "modify object" privilege p3 in its capabili-

ties set: (p3, oe) ∈ CSi,m(t).

2. Requirement: the access request set ARSi(t) must contain a tuple of the form

(sm ρ3 (ra, oe)).

3. Requirement: ra must exist in the resource pool Ri(t) of discom Di(t) : ra ∈

Ri(t).

4. Requirement: the object oe must exist at time t : oe ∈ Oi(t).

5. Requirement: the resource to be added must be sanitized: Υ(ra, t) = σ.

6. Operation: CSAi,m(t + 1) = CSAi,m(t) ∪ {(sm ℜ3 (ra, oe))}.

7. Operation: the usage function for the resource must return the object:

Υ(ra, t + 1) = oe.

8. Operation: ra is added to oe(t + 1) : oe(t + 1) = oe(t) ∪ {ra}.

9. Operation: CSAi,m(t + 2) = CSAi,m(t + 1) − {(sm ℜ3 (ra, oe))}.

10. Operation: ARSi(t + 2) = ARSi(t) − {(sm ρ3 (ra, oe))}.

Theorem 3A (Object Modification Theorem A): τ3A is secure state preserving.
Proof: Assume that SS(t) ∈ Σ* contains a discom Di which contains a subject sm

which requests the addition of a resource ra to an existing object oe at time t.

1. Usage property: by lemma 1 from (2), (6), and (9).

2. Creator property: by lemma 2.

3. Government property: by lemma 3.

4. Cordon property: There is no change to Oi, Pi, ASi, or Ri, therefore by lemma

4 the Cordon property holds.

5. Nova property: by lemma 5.

6. Demesne property: by lemma 6.

7. Ceiling property: by lemma 7.

8. Sanitation property: by (5), the resource which is merged with the object oe is
sanitized. By (7) and (8) the unsanitized resource belongs to oe : (Υ(ra, t) =

σ) ∧ (oe(t + 1) = oe(t) ∪ {ra}) ∧ (Υ(ra, t + 1) = oe), therefore the Sanitation
property holds.

Q. E. D.
τ3S: when a modifier subject sm in discom Di requests the subtraction of a resource
ra from an existing object oe at time t, then the following must occur.

1. Requirement: sm(t) must have the "modify object" privilege p3 in its capabili-
ties set: (p3, oe) ∈ CSi,m(t).

2. Requirement: the access request set ARSi(t) must contain a tuple of the form

(sm ρ3 (ra, oe)).

3. Requirement: the object oe must exist at time t : oe ∈ Oi(t).

4. Requirement: ra must exist in the resource pool Ri(t) of discom Di : ra ∈ Ri(t).

5. Requirement: Υ(ra, t) = oe.

6. Operation: CSAi,m(t + 1) = CSAi,m(t) ∪ {(sm ℜ3 (ra, oe))}.

7. Operation: Υ(ra, t + 1) = σ.

8. Operation: ra is subtracted from oe(t + 1) : oe(t + 1) = oe(t) − {ra}.

9. Operation: CSAi,m(t + 2) = CSAi,m(t + 1) − {(sm ℜ3 (ra, oe))}.

10. Operation: ARSi(t + 2) = ARSi(t) − {(sm ρ3 (ra, oe))}.











Theorem 3S (Object Modification Theorem S): τ3S is secure state preserving.

Proof: Assume that SS(t) ∈ Σ* contains a discom Di which contains a subject sm

which requests the subtraction of a resource ra from an existing object oe at time t.


1. Usage property: by lemma 1 from (2), (6), and (9).

2. Creator property: by lemma 2.

3. Government property: by lemma 3.

4. Cordon property: There is no change to Oi, Pi, ASi, or Ri, therefore by lemma

4 the Cordon property holds.

5. Nova property: by lemma 5.

6. Demesne property: by lemma 6.

7. Ceiling property: by lemma 7.

8. Sanitation property: by (7), the resource being subtracted from oe must be

sanitized: (oe(t + 1) = (oe(t) − {ra})) ⇒ (Υ(ra, t + 1) = σ). Therefore the

Sanitation property holds.


Q. E. D.

τ4: Object Merging Rule

When a subject sm in discom Di requests the merging of two objects oa and ob

into a third object oc at time t, then the following must occur.


1. Requirement: sm(t) must have the "merge objects" privilege p4 in its capabili-

ties set: (p4, oa), (p4, ob) ∈ CSi,m(t).

2. Requirement: the access request set ARSi(t) must contain a tuple of the form

(sm ρ4 (oa, ob, oc)).

3. Requirement: the objects to be merged, oa and ob, must exist in Oi(t) : oa, ob ∈
Oi(t).

4. Requirement: the object which will be the result of the requested merge, oc,
must not exist at time t : oc ∉ Oi(t).

5. Requirement: there must be only one triple in the current access set CASi(t + 1)
that contains oa and ob, and it must be (sm ℜ4 (oa, ob, oc)) (i.e., no other subject
is accessing them).

6. Operation: CSAi,m(t + 1) = CSAi,m(t) ∪ {(sm ℜ4 (oa, ob, oc))}.

7. Operation: the new object oc(t + 1) must be created and added to the object
set: Oi(t + 1) = Oi(t) ∪ {oc(t + 1)}.

8. Operation: the resources of the two objects to be merged, oa(t) and ob(t), must
be deleted from oa(t + 1) and ob(t + 1) and moved to oc(t + 1), resulting in
oa(t + 1) = ∅ and ob(t + 1) = ∅.

9. Operation: the resources of the new object oc(t + 1) must be equal to the
resources of the old objects oa(t) and ob(t): oc(t + 1) = oa(t) ∪ ob(t).

10. Operation: the old objects oa(t + 1) and ob(t + 1) must be deleted from Oi(t + 2) :

Oi(t + 2) = Oi(t + 1) − {oa(t + 1), ob(t + 1)}.

11. Operation: the action set ASi(t + 1) must be modified to delete all tuples which
contain the deleted objects oa and ob : ASi(t + 1) = ASi(t) − {(s, (p, o)) | (o =
oa) ∨ (o = ob), p ∈ Pi(t)}.

12. Operation: the action set ASi(t + 2) must be modified to include the new object
oc : ASi(t + 2) = ASi(t + 1) ∪ {(s, (p, oc)) | ∃s ∈ Si(t), p ∈ Pi(t) such that

((s, (p, oa)) ∈ ASi(t) or (s, (p, ob)) ∈ ASi(t))}.

13. Operation: CSAi,m(t + 2) = CSAi,m(t + 1) − {(sm ℜ4 (oa, ob, oc))}.

14. Operation: ARSi(t + 2) = ARSi(t) − {(sm ρ4 (oa, ob, oc))}.

Theorem 4 (Object Merging Theorem): τ4 is secure state preserving.
Proof: Assume that SS(t) ∈ Σ* contains a discom Di which contains a subject sm
which requests the merging of two objects oa and ob into a third object oc at time t.

1. Usage property: by lemma 1 from (2), (6), and (13).

2. Creator property: by lemma 2.

3. Government property: by lemma 3.

4. Cordon property:

by (7) the object set Oi has increased: ∀ j ≠ i, Oj(t + 2) = Oj(t) and
oc ∉ Oj(t) since ∄oc(t) (by (4)) and Oi(t) ∩ Oj(t) = ∅ (by the assumption
that SS(t) is a secure system state) and Oi(t + 2) − Oi(t) = {oc}, therefore
Oi(t + 2) ∩ Oj(t + 2) = ∅;

Pi is not affected: Pi(t + 2) = Pi(t);

lemma 4 applies to ASi at time t + 1 since nothing was added to it;

by (12) ASi(t + 2) does not intersect any other action set: ∀ j ≠ i, ASi(t +
2) ∩ ASj(t + 2) = ∅, since ASj(t + 2) = ASj(t), and ∄oc(t) (by (4)), and
oc is the only object in {Π2(Π2(x)) | x ∈ ASi(t + 2) − ASi(t)}, so oc ∉ {Π2(Π2(x)) | x ∈
ASj(t)};

Ri is not affected: Ri(t + 2) = Ri(t);

therefore the Cordon property holds.

5. Nova property: by lemma 5.

6. Demesne property: by lemma 6.

7. Ceiling property: by lemma 7.

8. Sanitation property: by lemma 8.

Q. E. D.

τ5: Object Splitting Rule

When a subject ss in discom Di requests the splitting of an object oc into two

other objects oa and ob at time t, then the following must occur.

1. Requirement: ss must have the "split object" privilege p5 in its capabilities

set: (p5, oc) ∈ CSi,s(t).

2. Requirement: the access request set ARSi(t) must contain a tuple of the form

(ss ρ5 (oc, oa, ob)).

3. Requirement: the object to be split, oc, must exist in Oi(t) : oc ∈ Oi(t).

4. Requirement: the objects that are the result of the split, oa and ob, must not
exist in Oi(t) : oa ∉ Oi(t) ∧ ob ∉ Oi(t).

5. Requirement: the current access set CASi(t) must contain only one triple with

oc in it, and it must be of the form (ss ℜ5 oc) : CASi(t) = {(ss ℜ5 oc)}.

6. Operation: CSAi,s(t + 1) = CSAi,s(t) ∪ {(ss ℜ5 (oc, oa, ob))}.

7. Operation: the new objects oa and ob are created and added to Oi(t + 1) :
Oi(t + 1) = Oi(t) ∪ {oa(t + 1), ob(t + 1)}.

8. Operation: the resources of the split object oc(t) must be allocated in any
manner chosen by ss among the two new objects oa(t + 1) and ob(t + 1), and
the resources must be subtracted from oc(t + 1) (i.e., oc(t + 1) = ∅), so that
oa(t + 1) ∪ ob(t + 1) = oc(t) and oa(t + 1) ∩ ob(t + 1) = ∅.

9. Operation: ss must release access to the object oc that was split from the
current access set: CASi(t + 1) = CASi(t) − {(ss ℜ5 oc)}.

10. Operation: the old object oc(t) must be deleted from Oi(t + 2) : Oi(t + 2) =

Oi(t + 1) − {oc(t + 1)}.

11. Operation: the action set ASi(t + 1) must be modified to delete all tuples which
contain the split object oc(t + 1) : ASi(t + 1) = ASi(t) − {(s, (p, oc)) | s ∈

Si(t), p ∈ Pi(t)}.

12. Operation: the action set ASi(t + 2) must be modified to include the new ob-
jects oa(t + 2) and ob(t + 2) : ∀ d, e, ((sd, (pe, oc)) ∈ ASi(t)) ⇒ ((sd, (pe, oa)) ∈
ASi(t + 2)) ∧ ((sd, (pe, ob)) ∈ ASi(t + 2)).

13. Operation: CSAi,s(t + 2) = CSAi,s(t + 1) − {(ss ℜ5 (oc, oa, ob))}.

14. Operation: ARSi(t + 2) = ARSi(t) − {(ss ρ5 (oc, oa, ob))}.
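The resource allocation in operation (8) can be sketched directly: the splitter chooses any partition of the old object's resources, subject to the union and disjointness constraints. The resource names below are hypothetical example data.

```python
# Illustrative sketch of operation (8) of the object-splitting rule:
# the resources of o_c are divided between the two new objects so
# that their union is exactly o_c's old resources and they overlap
# in nothing.

o_c = {"r1", "r2", "r3"}

def split_object(resources, left):
    """Split `resources` into (o_a, o_b), where o_a gets `left`."""
    o_a = set(left)
    o_b = resources - o_a
    # The constraints the rule imposes on any chosen allocation:
    assert o_a | o_b == resources and o_a & o_b == set()
    return o_a, o_b

o_a, o_b = split_object(o_c, {"r1"})
```

Because every resource ends up in exactly one of the two new objects, the usage function stays well-defined, which is what the Sanitation part of Theorem 5 relies on.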

Theorem 5 (Object Splitting Theorem): 75 is secure state preserving.
Proof: Assume that SS(t) E E contains a discom Di which contains a subject s,
which requests the splitting of an object o, into two other objects o, and ob at time
t.










1. Usage property: by lemma 1 from (2), (6), and (13).

2. Creator property: by lemma 2.

3. Government property: by lemma 3.

4. Cordon property:

by (7) the object set 0O has increased: V j # i, Oj(t + 2) = Oj(t) and
oa, ob i Oj(t) since oa(t), and 4ob(t) (by (4)) and Oi(t) n O(t) = 0 (by
the assumption that SS(t) is a secure system state) and Oi(t+2) O(t) =

{oa, Ob} therefore Oi(t + 2) n Oj(t + 2) = 0;

Pi is not affected: Pi(t + 2) = Pi(t);

AS; at time t+1 is not affected since nothing was added to it: ASi(t+l) =
ASi(t);

by (12) ASi(t+2) does not intersect any other action set: Vj 7 i, ASi(t+
2) n ASj(t + 2) = 0, and ASj(t + 2) = ASj(t), and /oa(t), Ob(t) (by

(4)), and oa, Ob = { 11 2( 2()) x E ASi(t + 2) ASi(t)}, so oa, Ob
{ 11( 2(x)) I x E ASj(t)}, therefore j V i, ASi n ASj = 0;

Ri is not affected: Ri(t + 2) = Ri(t);

therefore the Cordon property holds.

5. Nova property: by lemma 5.

6. Demesne property: by lemma 6.


7. Ceiling property: by lemma 7.











8. Sanitation property: by (8) the resources of the split object o_c(t) are allocated between the two new objects o_a(t+1) and o_b(t+1), and the resources must be subtracted from o_c(t+1): ∀ r ∈ R_i, (T(r,t) = σ) ⇒ (T(r,t+1) = σ), (T(r,t) ∈ O_i − {o_c}) ⇒ (T(r,t+1) = T(r,t)), (T(r,t) = o_c) ⇒ ((T(r,t+1) = o_a) ∨ (T(r,t+1) = o_b)). Therefore the Sanitation property holds.


Q. E. D.
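The Cordon argument recurs in every one of these proofs: after a rule fires, the object, privilege, action, and resource sets of distinct discoms must remain pairwise disjoint. A hypothetical checker for that invariant (the dictionary layout and set encodings are illustrative assumptions, not the model's notation):

```python
from itertools import combinations

def cordon_holds(discoms):
    """True iff the object (O), privilege (P), action (AS) and resource (R)
    sets of every pair of distinct discoms are pairwise disjoint."""
    return all(
        not (a[key] & b[key])
        for a, b in combinations(discoms, 2)
        for key in ("O", "P", "AS", "R")
    )
```

A rule is Cordon-preserving exactly when this predicate, true of the discoms before the rule fires, is still true afterward.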

τ₆: Discom Creation Rule

When a creator subject s_c in discom D_i requests the creation of a new discom D_j at time t, then the following must occur.

1. Requirement: s_c(t) must have the "create discom" privilege p₆ in its capabilities set: (p₆, D_i) ∈ CS_{i,c}(t).

2. Requirement: the access request set ARS_i(t) must contain a tuple of the form (s_c p₆ (D_i, D_j)).

3. Requirement: the discom to be created, D_j, must not exist at time t: ∄D_j(t).

4. Operation: CSA_{i,c}(t+1) = CSA_{i,c}(t) ∪ {(s_c ℜ₆ (D_i, D_j))}.

5. Operation: the new discom D_j(t+1) must be a child of D_i(t+1): D_i(t+1) ⊢ D_j(t+1).

6. Operation: s_c ∈ D_i(t) must become the first subject s₁ ∈ S_j(t+1) of the new discom D_j(t+1).

7. Operation: all the governors g_k ∈ G_i(t) must become subjects of S_j(t+1): G_i(t) ⊆ S_j(t+1). These governors are known as the initial subjects of D_j.

8. Operation: s_c must become the first governor g₁ of the new discom D_j(t+1): s_c = g₁ ∈ G_j(t+1). The rest of the initial subjects also become governors of D_j(t+1): ∀ g ∈ G_i(t), g ∈ G_j(t+1).

9. Operation: O_j(t+1) = ∅.

10. Operation: P_j(t+1) must contain only the initial privileges.

11. Operation: AS_j(t+1) = ∅ since there are no objects: O_j(t+1) = ∅.

12. Operation: R_j(t+1) must be allocated a set of unused resources from the parent discom's resource pool such that R_i(t) = R_i(t+1) ∪ R_j(t+1) and R_i(t+1) ∩ R_j(t+1) = ∅ and ∀ r ∈ R_j(t+1), T(r,t) = σ.

13. Operation: CSA_{i,c}(t+2) = CSA_{i,c}(t+1) − {(s_c ℜ₆ (D_i, D_j))}.

14. Operation: ARS_i(t+2) = ARS_i(t) − {(s_c p₆ (D_i, D_j))}.
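Read together, operations 5–12 construct the child discom from the parent's governors plus the creator, and from a sanitized slice of the parent's resource pool. A Python sketch of that construction (the dictionary layout, the tag map standing in for T, and INITIAL_PRIVILEGES are all illustrative assumptions; for simplicity the sketch grants the child every sanitized resource, where the rule only requires some disjoint subset):

```python
INITIAL_PRIVILEGES = {"p_initial"}   # hypothetical placeholder for the initial privileges

def create_discom(parent, creator, tag):
    """Build a child discom per operations 5-12: the parent's governors
    plus the creator become both subjects and governors; the child gets
    a disjoint, sanitized slice of the parent's resource pool."""
    granted = {r for r in parent["R"] if tag[r] == "sanitized"}   # op 12
    parent["R"] -= granted        # keeps R_i(t+1) and R_j(t+1) disjoint
    return {
        "S": parent["G"] | {creator},      # ops 6-7: the initial subjects
        "G": parent["G"] | {creator},      # op 8: all initial subjects govern
        "O": set(),                        # op 9
        "P": set(INITIAL_PRIVILEGES),      # op 10
        "AS": set(),                       # op 11: no objects, no actions
        "R": granted,                      # op 12
    }
```

Note how the sketch makes the Government property immediate: the child's subject and governor sets are built as the same union, so S_j(t+1) = G_j(t+1) by construction.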

Theorem 6 (Discom Creation Theorem): τ₆ is secure state preserving.
Proof: Assume that SS(t) ∈ Σ contains a discom D_i which contains the subject s_c which requests the creation of a child discom D_j at time t.

1. Usage property: by lemma 1 from (2), (4), and (13).

2. Creator property:

by (6) s_c ∈ Π₁(D_i(t));

by (1) and (2) (s_c ℜ₆ D_j);

by (5) D_i(t+1) ⊢ D_j(t+1);

by (6) s_c = (s₁ ∈ S_j(t+1));

by (7) and (8) s_c = (g₁ ∈ G_j(t+1));

therefore the Creator property holds.

3. Government property:

by (7) the governors g_k ∈ G_i(t) are members of S_j(t+1);

by (8) S_j(t+1) = G_j(t+1);

therefore the Government property holds.

4. Cordon property:

by (9) O_j(t+1) = ∅, therefore ∀ k ≠ j, O_k(t+1) ∩ O_j(t+1) = ∅;

by (10) P_j(t+1) contains only the initial privileges, which are bound to discom D_j(t+1) by definition, therefore ∀ k ≠ j, P_j(t+1) ∩ P_k(t+1) = ∅;

by (11) AS_j(t+1) = ∅, therefore ∀ k ≠ j, AS_j(t+1) ∩ AS_k(t+1) = ∅;

by (12) R_i(t+1) ∩ R_j(t+1) = ∅ and ∀ k, k ≠ j, k ≠ i, R_k(t+1) ∩ R_j(t+1) = ∅, since R_k(t+1) = R_k(t), R_k(t) ∩ R_i(t) = ∅ by the assumption that SS(t) is a secure system state, and R_j(t+1) ⊆ R_i(t) (this is also true for R_i(t+1));

therefore the Cordon property holds.

5. Nova property: this property applies only after the creation of a discom; however, when D_j(t+1) is created, S_j(t+1) = G_j(t+1) ∪ {s_c}, therefore the Nova property holds.

6. Demesne property:

by (7) the members of G_i(t) become subjects of S_j(t+1);

by (8) S_j(t+1) = G_j(t+1);

therefore the Demesne property holds.

7. Ceiling property: by (11) the action set of D_j(t+1) is empty, therefore the subjects of D_j(t+1) cannot access D_i unless they are already members of D_i. Additionally, D_j has no children. Therefore the Ceiling property holds.

8. Sanitation property: by (12) R_j(t+1) = R_i(t) − R_i(t+1); therefore the Sanitation property holds.

Q. E. D.

τ₇: Discom Destruction Rule

When a governor g_d in a discom D_j which sires a discom D_i requests the destruction of discom D_i at time t, then the following must occur.

1. Requirement: g_d must have the "destroy discom" privilege p₇ in its capabilities set: (p₇, D_j) ∈ CS_{j,d}(t).

2. Requirement: the access request set ARS_j(t) must contain a tuple of the form (g_d p₇ D_i).

3. Requirement: the discom D_i(t) to be destroyed must rule no other discoms (i.e., it must have no descendants): ∀ k ≠ i, ¬(D_i(t) ⊢⁺ D_k(t)).

4. Operation: CSA_{j,d}(t+1) = CSA_{j,d}(t) ∪ {(g_d ℜ₇ D_i)}.

5. Operation: the action set AS_i(t+1) of discom D_i(t+1) must be emptied: AS_i(t+1) = ∅.

6. Operation: the subject set S_i(t+1) of discom D_i(t+1) must be emptied: S_i(t+1) = ∅.

7. Operation: the privilege set P_i(t+1) of discom D_i(t+1) must be emptied: P_i(t+1) = ∅.

8. Operation: every object in D_i(t+1) must be emptied by returning the resources of every object in D_i(t+1) to the resource pool of the parent discom, D_j(t+1), in a sanitized state: ∀ o ∈ O_i(t), ∀ r ∈ o, ((T(r,t+1) = σ) ∧ (A(r) = D_j(t+1))). This means that ∀ o ∈ O_i(t+1), o(t+1) = ∅.

9. Operation: all the (now empty) objects in O_i(t+2) must be deleted: O_i(t+2) = ∅.

10. Operation: all the members of the tuple of discom D_i(t+3) must be deleted, causing D_i(t+3) to cease to exist.

11. Operation: CSA_{j,d}(t+3) = CSA_{j,d}(t+2) − {(g_d ℜ₇ D_i)}.

12. Operation: ARS_j(t+3) = ARS_j(t) − {(g_d p₇ D_i)}.
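Operations 5–10 unwind the doomed discom in stages: empty its action, subject, and privilege sets, sanitize every resource and return it to the parent's pool, then discard the now-empty objects. A compressed Python sketch of that teardown (the tag and alloc maps stand in for the model's T and A functions; the dictionary layout is an illustrative assumption):

```python
def destroy_discom(parent, discom, tag, alloc):
    """Destroy a childless discom per operations 5-10: sanitize every
    resource, hand it back to the parent's pool, and empty the tuple."""
    assert not discom["children"], "requirement 3: no descendants allowed"
    for resources in discom["O"].values():   # op 8: sanitize and return
        for r in resources:
            tag[r] = "sanitized"             # T(r, t+1) = sanitized
            alloc[r] = parent                # A(r) = the parent discom
            parent["R"].add(r)
    discom["AS"] = set()                     # op 5
    discom["S"] = set()                      # op 6
    discom["P"] = set()                      # op 7
    discom["O"] = {}                         # ops 8-9: objects emptied, deleted
```

The childlessness check up front mirrors requirement 3: destroying a discom with descendants would orphan them, which the model forbids.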

Theorem 7 (Discom Destruction Theorem): τ₇ is secure state preserving.
Proof: Assume that SS(t) ∈ Σ contains a governor g_d in discom D_j that sires a discom D_i, and g_d requests the destruction of discom D_i at time t.

1. Usage property: by lemma 1 from (2), (4), and (11).

2. Creator property: by lemma 2.

3. Government property: by lemma 3.

4. Cordon property: by (8) every object in D_i(t+1) must be emptied by returning the resources of every object in D_i(t+1) to the resource pool of the parent discom, D_j(t+1), in a sanitized state: ∀ o ∈ O_i(t), ∀ r ∈ o, ((T(r,t+1) = σ) ∧ (A(r) = D_j(t+1))). Therefore the Cordon property holds.











5. Nova property: by lemma 5.

6. Demesne property: by lemma 6.

7. Ceiling property: by lemma 7.

8. Sanitation property: by (8) all resources from the discom to be deleted, D_i, must be sanitized before being returned to the resource pool R_j of the parent discom D_j: ∀ o ∈ O_i(t), ∀ r ∈ o, T(r,t+1) = σ. Therefore the Sanitation property holds.


Q. E. D.

τ₈: Discom Merging Rule

There are various ways to merge discoms. It was decided that only the merging of two sibling discoms would be allowed at any one time, in order to ensure that the secure state invariants (e.g., the Ceiling property) would not be violated. The rule now follows.

When a governor subject (g_m ∈ G_k) in the parent discom D_k of two sibling discoms D_i and D_j (i.e., D_k ⊢ D_i and D_k ⊢ D_j) requests that the siblings be merged at time t, then the following must occur.


1. Requirement: g_m ∈ G_k(t) must have the "merge discoms" privilege p₈ in its capabilities set: (p₈, D_k) ∈ CS_{k,m}(t).

2. Requirement: the access request set ARS_k(t) must contain a tuple of the form (g_m p₈ (D_i, D_j, D_l)), which indicates that discoms D_i and D_j are to be merged into discom D_l such that D_k will sire D_l when the rule has finished.

3. Requirement: discom D_l must not exist at time t: ∄D_l(t).

4. Requirement: the two sibling discoms to be merged, D_i(t) and D_j(t), must be children of discom D_k(t): (D_k(t) ⊢ D_i(t)) ∧ (D_k(t) ⊢ D_j(t)).

5. Operation: CSA_{k,m}(t+1) = CSA_{k,m}(t) ∪ {(g_m ℜ₈ (D_i, D_j, D_l))}.

6. Operation: a new discom, D_l, is created at time t+1 with all the members of its tuple empty: D_l = (S_l = ∅, O_l = ∅, P_l = ∅, AS_l = ∅, R_l = ∅), such that D_k ⊢ D_l.

7. Operation: S_l(t+2) = S_i(t+1) ∪ S_j(t+1).

8. Operation: G_l(t+2) = G_i(t+1) ∪ G_j(t+1).

9. Operation: O_l(t+2) = O_i(t+1) ∪ O_j(t+1). Since all children discoms are objects in their parent discom, this means that any children discoms of the two discoms being merged are automatically merged as well.

10. Operation: P_l(t+2) = {(k, D_l) | ∃p ∈ P_i(t+1) ∪ P_j(t+1), k = Π₁(p)}.

11. Operation: AS_l(t+2) = {(s, (p, o)) | ∃x ∈ AS_i(t+1) ∪ AS_j(t+1), s = Π₁(x), p = (Π₁(Π₁(Π₂(x))), D_l), o = Π₂(Π₂(x))}.

12. Operation: R_l(t+2) = R_i(t+1) ∪ R_j(t+1).

13. Operation: discoms D_i(t+2) and D_j(t+2) and any of their contents are deleted: ∄D_i(t+2), ∄D_j(t+2).

14. Operation: CSA_{k,m}(t+2) = CSA_{k,m}(t+1) − {(g_m ℜ₈ (D_i, D_j, D_l))}.

15. Operation: ARS_k(t+2) = ARS_k(t) − {(g_m p₈ (D_i, D_j, D_l))}.
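Operations 6–12 amount to taking unions of the siblings' components while re-binding every privilege and action tuple to the new discom D_l. A Python sketch under the same tuple encoding as the earlier sketches (the dictionary layout and names are illustrative assumptions):

```python
def merge_discoms(d_i, d_j, new_id):
    """Merge two sibling discoms per operations 6-12: union the subject,
    governor, object and resource sets, and re-bind every privilege
    (k, D) -> (k, new_id) and every action tuple to the merged discom."""
    return {
        "S": d_i["S"] | d_j["S"],                                 # op 7
        "G": d_i["G"] | d_j["G"],                                 # op 8
        "O": d_i["O"] | d_j["O"],                                 # op 9
        "P": {(k, new_id) for (k, _) in d_i["P"] | d_j["P"]},     # op 10
        "AS": {(s, ((k, new_id), o))                              # op 11
               for (s, ((k, _), o)) in d_i["AS"] | d_j["AS"]},
        "R": d_i["R"] | d_j["R"],                                 # op 12
    }
```

The re-binding in operations 10 and 11 is what makes the merged privileges and actions belong to D_l rather than to the deleted siblings; duplicate privileges from the two siblings collapse into one.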

Theorem 8 (Discom Merging Theorem): τ₈ is secure state preserving.
Proof: Assume that SS(t) ∈ Σ contains a governor subject (g_m ∈ G_k) in the parent discom D_k of two sibling discoms D_i and D_j at time t who requests the merging of the siblings.

1. Usage property: by lemma 1 from (2), (5), and (14).

2. Creator property:

by (6) a new discom D_l(t+2) is created;

by (1) and (2) D_l was created by a governor g_m ∈ G_k where D_k ⊢ D_l;

by (8) g_m automatically becomes a governor of D_l;

therefore the Creator property holds.

3. Government property: since SS(t) is a secure system state, every g ∈ G_i has access to D_i (likewise for D_j) at time t; then by (8) g ∈ G_l, and by (11) g has access to D_l(t+2). Therefore the Government property holds.

4. Cordon property: a merge of two discoms is a forced intersection which, nevertheless, does not violate the Cordon property because a single, new discom is created and D_i(t+2) and D_j(t+2) are deleted. This means:

∀ a ≠ l, O_a(t+2) = O_a(t+1) = O_a(t), and O_a(t) ∩ O_i(t) = ∅ and O_a(t) ∩ O_j(t) = ∅ by the assumption that SS(t) is a secure system state, and O_l(t+2) = O_i(t) ∪ O_j(t), therefore O_a(t+2) ∩ O_l(t+2) = ∅;

a similar argument is used for P_l, AS_l, and R_l because they are all composed of components from D_i and D_j which did not intersect at time t, by the assumption that SS(t) is a secure system state, therefore they do not intersect now;

therefore the Cordon property holds.

5. Nova property: by lemma 5.

6. Demesne property: by lemma 6.

7. Ceiling property: by lemma 7.

8. Sanitation property: by lemma 8.


Q. E. D.

τ₉: Discom Splitting Rule

Discom splitting presents the problem that any child discoms of the discom to be split must be allocated in some way to the new discoms which result from the split (i.e., no child discoms may be orphaned). Two methods of performing the split were considered (there are others that were not considered). First, the discom can be split into two sibling discoms. Second, the discom can be split such that one of the two discoms which result from the split is made a child of the other. There seemed no practical reason for the latter, so the former method was chosen. The rule now follows.

When a governor subject (g_s ∈ G_k) in the parent discom D_k of the discom D_i to be split (i.e., D_k ⊢ D_i) requests that D_i be split into two discoms D_i and D_j at time t, then the following must occur.


1. Requirement: g_s(t) must have the "split discom" privilege p₉ in its capabilities set: (p₉, D_k) ∈ CS_{k,s}(t).

2. Requirement: the access request set ARS_k(t) must contain a tuple of the form (g_s p₉ (D_i, D_j)), where D_i is the discom to be split, and (D_i, D_j) is a tuple of the two discoms that will result from the split (i.e., the split will result in D_k ⊢ D_i and D_k ⊢ D_j).

3. Requirement: D_i(t) must exist, and must be sired by D_k(t): D_k(t) ⊢ D_i(t).

4. Operation: CSA_{k,s}(t+1) = CSA_{k,s}(t) ∪ {(g_s ℜ₉ (D_i, D_j))}.

5. Operation: a new discom, D_j, is created at time t+1 and is sired by D_k: D_k(t+1) ⊢ D_j(t+1). The elements of the tuple of D_j(t+1) are initially empty: D_j(t+1) = (S_j = ∅, O_j = ∅, P_j = ∅, AS_j = ∅, R_j = ∅).

6. Operation: the elements of S_i(t) must be split between S_i(t+2) and S_j(t+2) so that S_i(t) = S_i(t+2) ∪ S_j(t+2). Note that S_i(t+2) ∩ S_j(t+2) ≠ ∅ is permitted, and also G_i(t) ⊆ S_i(t+2) ∩ S_j(t+2).

7. Operation: the elements of O_i(t) must be split between O_i(t+2) and O_j(t+2) so that O_i(t) = O_i(t+2) ∪ O_j(t+2) and O_i(t+2) ∩ O_j(t+2) = ∅.

8. Operation: the privileges of P_i(t) must be duplicated among P_i(t+2) and P_j(t+2) so that:

P_i(t+2) = P_i(t);

P_j(t+2) = {(k, D_j) | k ∈ Π₁(P_i(t))}.

9. Operation: the elements of AS_i(t) must be duplicated among AS_i(t+2) and AS_j(t+2) so that:

AS_i(t+2) = AS_i(t) − {(s, (p, o)) | s ∉ S_i(t+2) ∨ o ∉ O_i(t+2)};

AS_j(t+2) = {(s, (p, o)) | ∃x ∈ AS_i(t), s = Π₁(x) ∈ S_j(t+2), p = (Π₁(Π₁(Π₂(x))), D_j), o = Π₂(Π₂(x)) ∈ O_j(t+2)}.



