Title: Scheduling garbage collection of JavaVM on embedded real-time systems
Permanent Link: http://ufdc.ufl.edu/UF00100788/00001
 Material Information
Title: Scheduling garbage collection of JavaVM on embedded real-time systems
Physical Description: Book
Language: English
Creator: Goh, Okehee, 1967-
Publisher: State University System of Florida
Place of Publication: Florida
Publication Date: 2001
Copyright Date: 2001
 Subjects
Subject: Embedded computer systems   ( lcsh )
Garbage collection (Computer science)   ( lcsh )
Computer and Information Science and Engineering thesis, M.S   ( lcsh )
Dissertations, Academic -- Computer and Information Science and Engineering -- UF   ( lcsh )
Genre: government publication (state, provincial, territorial, dependent)   ( marcgt )
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )
 Notes
Summary: ABSTRACT: Because Java's portability and development productivity allow new applications to be deployed easily, Java has become an attractive language for embedded systems, whose service requirements have recently grown more diverse. Garbage collection not only relieves programmers of the burden of preventing memory-management errors but also makes memory usage more efficient, a further advantage for embedded systems with resource constraints. However, the unpredictable execution time of garbage collection is one of the obstacles to using Java in embedded systems, which usually have time constraints. To overcome this difficulty, garbage collection must provide a predictable execution time. Once its execution time is bounded, garbage collection can be scheduled to guarantee the schedulability of real-time systems and to minimize memory usage. To form the basis for bounding the execution time of garbage collection, we analyze which factors determine the execution time of garbage collection and how garbage collection affects application execution. Based on this analysis, we investigate the schedulability of real-time tasks that use automatic memory management systems.
Thesis: Thesis (M.S.)--University of Florida, 2001.
Bibliography: Includes bibliographical references (p. 72-74).
System Details: System requirements: World Wide Web browser and PDF reader.
System Details: Mode of access: World Wide Web.
Statement of Responsibility: by Okehee Goh.
General Note: Title from first page of PDF file.
General Note: Document formatted into pages; contains xi, 75 p.; also contains graphics.
General Note: Vita.
 Record Information
Bibliographic ID: UF00100788
Volume ID: VID00001
Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
Resource Identifier: oclc - 47889812
alephbibnum - 002729347
notis - ANK7111












SCHEDULING GARBAGE COLLECTION OF JAVAVM ON EMBEDDED REAL-TIME SYSTEMS















By

OKEHEE GOH


A THESIS PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE

UNIVERSITY OF FLORIDA


2001




























Copyright 2001

by

Okehee Goh























To My Parents















ACKNOWLEDGMENTS

I'd like to express my deep gratitude to my advisor, Dr. Yann-Hang Lee, for his continuous guidance, advice, and support throughout the course of my M.S. study. I would like to thank my supervisory committee members, Dr. Douglas D. Dankel and Dr. Jonathan C. L. Liu, for their kind support.

I would like to especially thank my parents, as well as my siblings, for their unconditional love. I also deeply thank Prof. S. Y. Min, who helped and guided me when I began my studies late.

I thank H. Choi, who has always remained a good friend. I thank the members of the Real-Time Systems Lab, Daeyoung, Yoonmee, Youngjoon, and Yan, for their sincere friendship and research inspiration. I would like to express my gratitude to Vidya for the precious time she spent proofreading. I wish them all success in their studies and a bright future.

















TABLE OF CONTENTS


page

ACKNOWLEDGMENTS ............................................................ iv

LIST OF TABLES ............................................................ vii

LIST OF FIGURES ........................................................... ix

ABSTRACT .................................................................. x

CHAPTERS

1 INTRODUCTION ............................................................ 1

2 AUTOMATIC MEMORY MANAGEMENT SYSTEM ..................................... 5

   Introduction ........................................................... 5
   Basic Garbage Collection Techniques
   Reference Counting
   Mark-Sweep Collection
   Copying Collection ..................................................... 9
   Generational Garbage Collection ........................................ 10
   Incremental Garbage Collection ......................................... 11

3 REAL-TIME SYSTEMS ....................................................... 15

   What Is a Real-Time System? ............................................ 15
   Scheduling Algorithms for Periodic Tasks ............................... 16
   Scheduling Aperiodic and Sporadic Jobs in Priority-Driven Systems ..... 18
   Schedulability Test .................................................... 21

4 JAVA AND REAL-TIME SYSTEMS .............................................. 24

   Trends in Embedded Systems ............................................. 24
   Shortcomings of Java for Real-Time Embedded Systems .................... 25
   Real-Time Extension for the Java Platform .............................. 27
   Introduction of Java VM for Embedded Systems ........................... 29
   KVM .................................................................... 29
   ChaiVM ................................................................. 30
   JBED ................................................................... 31

5 RELATED WORK FOR REAL-TIME GARBAGE COLLECTION ........................... 33

   Baker's Incremental Copying Collection ................................. 33
   H/W-Supported Real-Time Garbage Collection ............................. 34
   Real-Time Non-Copying Garbage Collection ............................... 35
   Hard Real-Time Garbage Collection in Jamaica VM ........................ 36
   Scheduling a Garbage Collector without Interrupting Hard Real-Time Tasks 36
   Scheduling a Garbage Collector Using a Sporadic Server ................. 38
   Summary of Real-Time Garbage Collection ................................ 39

6 BEHAVIOR OF INCREMENTAL GC ON EMBEDDED SYSTEMS .......................... 40

   Implementation of Incremental GC for JavaVM ............................ 40
   Review of KVM .......................................................... 40
   Implementation of Incremental GC on KVM ................................ 41
   Limitations of Incremental GC's Implementation ......................... 45
   Measurement of GC Behavior ............................................. 46
   Specification of Three Test Applications ............................... 46
   Behavior of Non-Incremental GC and Incremental GC ...................... 47
   Garbage Collection Execution Time ...................................... 54
   Mutator's Overhead due to GC ........................................... 56

7 SCHEDULABILITY TEST OF REAL-TIME TASK SET USING GC ...................... 57

   Scheduling Background for GC ........................................... 58
   Our Approaches to GC Scheduling ........................................ 59
   Schedulability Test .................................................... 60
   Schedulability Test Examples ........................................... 61
   Summary of Schedulability Test ......................................... 69

8 CONCLUSION .............................................................. 71

REFERENCES ................................................................ 72

BIOGRAPHICAL SKETCH ....................................................... 75
















LIST OF TABLES


Table                                                                   Page

1. Sample Task Set for UB Test ............................................ 22

2. Sample Task Set for RT Test ............................................ 22

3. List of Java Bytecodes that Need a Write Barrier ....................... 43

4. Specification of Three Test Applications for GC Experiment ............. 47

5. Characteristics of Non-Incremental Mark-Sweep Garbage Collection with
   Varying Heap Size ...................................................... 48

6. Characteristics of Incremental Mark-Sweep Garbage Collection with
   Varying Heap Size ...................................................... 50

7. Regressed Parameters for Garbage Collection Time per Cycle ............. 55

8. Symbols ................................................................ 62

9. Example Task Set 1 ..................................................... 62

10. Example Task Set 1's Status after Triggering a GC Cycle ............... 63

11. Example Task Set 1 Reflecting the Mutator Overhead and GC ............. 64

12. Size of Reserved Memory Required in Example Task Set 1 ................ 65

13. Example Task Set 2 .................................................... 66

14. Example Task Set 2's Status after Triggering a GC Cycle ............... 66

15. Example Task Set 2 Reflecting the Mutator Overhead and GC ............. 67

16. Size of Reserved Memory Required in Example Task Set 2 ................ 68

17. Example Task Set 3 .................................................... 68

18. Example Task Set 3's Status after Triggering a GC Cycle ............... 69

19. Example Task Set 3 Reflecting the Mutator Overhead and GC ............. 69

20. Summary of Schedulability Test of Three Examples ...................... 70










LIST OF FIGURES


Figure                                                                  Page

1. Copying from Fromspace to Tospace ...................................... 10

2. Black-White Pointer .................................................... 13

3. Sporadic Server's Budget Replenishment ................................. 20

4. Baker's Incremental Copying Algorithm's Tospace Layout ................. 34

5. Operation of Non-Incremental GC and Incremental GC ..................... 42

6. Pause Time of a GC Cycle with Varying Heap Size ........................ 49

7. Total Time Elapsed for GC with Varying Heap Size ....................... 49

8. Average Scanning Count for Free Object with Varying Heap Size .......... 52

9. Overhead of Mutators due to GC ......................................... 52

10. Number of WB Checks according to GC Invoke Count ...................... 54















Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Science

SCHEDULING GARBAGE COLLECTION OF JAVAVM
ON EMBEDDED REAL-TIME SYSTEMS

By

Okehee Goh

May 2001



Chairman: Dr. Yann-Hang Lee
Major Department: Computer and Information Science and Engineering

Because Java's portability and development productivity allow new applications to be deployed easily, Java has become an attractive language for embedded systems, whose service requirements have recently grown more diverse. Garbage collection not only relieves programmers of the burden of preventing memory-management errors but also makes memory usage more efficient. This is a further advantage for embedded systems, which have resource constraints.

However, the unpredictable execution time of garbage collection is one of the obstacles to using Java in embedded systems, which usually have time constraints. To overcome this difficulty, garbage collection must provide a predictable execution time. Once its execution time is bounded, garbage collection can be scheduled to guarantee the schedulability of real-time systems and to minimize memory usage.

To form the basis for bounding the execution time of garbage collection, we analyze which factors determine the execution time of garbage collection and how garbage collection affects application execution. Based on this analysis, we investigate the schedulability of real-time tasks that use automatic memory management systems.














CHAPTER 1
INTRODUCTION

Recently, the market for embedded devices has extended to a variety of consumer and business products, including mobile phones, pagers, PDAs, set-top boxes, process controllers, office printers, and network devices. These devices are required to provide more intelligent services and flexible updates of functionality. If they are connected to a network, program upgrades for new services must be conducted easily, in an automatic and transparent manner. As examples of such services, smart cards can upgrade to a new encryption algorithm, and PDAs can download a new cyber stock-market trading program over the wireless Internet.

Java was originally designed for embedded system software, even though it has been popularized as a Web development language thanks to its suitability for flexible, portable, distributed applications. The features of Java that support high portability and productivity match the needs of recent embedded systems, which must offer new applications as quickly as the market demands.

Java has two limitations that restrict its use as a programming language in embedded systems with real-time constraints: limited runtime performance due to interpretation overhead, and non-deterministic runtime behavior due to dynamic linking and automatic garbage collection. Just-In-Time (JIT) or Ahead-of-Time compilation, together with ROMizing, which links the necessary library classes with the JavaVM, enhances the runtime performance. However, garbage collection's non-deterministic behavior remains a major obstacle to using the Java language in real-time systems.









Since dynamic memory management is not normally deterministic, real-time systems that need predictable execution times mostly use static memory management, which allocates all memory during application initialization so that no further memory management is needed at runtime. However, static memory management uses memory inefficiently, and dynamic constructs such as recursive method calls and linked lists cannot be used, because the total amount of memory must be statically determinable. This inefficient memory usage is especially problematic for embedded systems, which have resource constraints.

Manual memory management lets programmers control allocation and deallocation, but it places another burden on them: they must guard against memory-related errors such as dangling pointers, which access memory that no longer exists, and memory leaks, which waste memory.

Automatic memory management, or garbage collection (GC for short), is a useful mechanism to relieve programmers of the burden of these errors. It is also important for a fully modular programming language, since it avoids unnecessary inter-module dependence in terms of information encapsulation.

However, if a garbage collector's execution time is unpredictable and unbounded, a programming language that relies on automatic garbage collection cannot be applied to embedded systems. Much research has tried to reduce the pause time due to garbage collection or to enhance its performance. For example, incremental garbage collection algorithms distribute and hide the time spent on garbage collection throughout the execution of the application. While it reduces the pause time caused by garbage collection, simple incremental GC is not sufficient for real-time systems, because its unbounded execution time cannot guarantee the deadlines of real-time applications.

Beyond simple incremental GC, many algorithms that avoid memory fragmentation and provide exactness have been introduced to make both the execution time of GC and the amount of free memory predictable [WIL93, SIE99]. Based on this predictability, Henriksson [HEN98] and Kim et al. [KIM99] integrate garbage collection with real-time scheduling.

The goal of this research is to schedule real-time applications that use GC without violating the applications' real-time constraints on embedded real-time systems. Scheduling real-time tasks requires analysis of the worst-case execution time and, when GC is used, of the memory requirement as well. For this goal, we need to find the factors that determine the execution time of GC and the overhead GC imposes on applications, and to examine the behavior of GC. To aid us in this work, we implemented an incremental GC and obtained experimental data while executing it on a microprocessor. Based on these results, we performed a schedulability test of real-time applications that use GC by adopting one of the known scheduling algorithms. This test, together with the analysis of required memory, shows how GC affects the schedulability of real-time applications.

This thesis is composed of eight chapters. The first chapter presents the motivation for and an introduction to this research. The second chapter describes the fundamentals of garbage collection. The third chapter describes the requirements of real-time systems, scheduling algorithms, and the schedulability testing of real-time systems. The fourth chapter presents current trends in embedded systems, the real-time extensions for Java, and several JavaVM products targeting embedded systems. The fifth chapter discusses the characteristics of real-time garbage collection. The sixth chapter presents the implementation and behavior of incremental garbage collection. The seventh chapter describes considerations for scheduling a GC and the schedulability of a real-time task set using GC. The eighth chapter concludes this research.














CHAPTER 2
AUTOMATIC MEMORY MANAGEMENT SYSTEM


Introduction

Garbage collection is the automatic reclamation of computer storage. Garbage is defined as memory objects¹ that are no longer used by running applications. When a memory request cannot be satisfied, the memory allocation routine triggers a garbage collection to reclaim space as necessary. An explicit call to a "memory deallocator" is unnecessary because calls to the garbage collector (collector, in short) are implicit in the calls to the memory allocator; the allocator invokes the garbage collector as required to free up the space it needs.

¹ Unless otherwise specified, "objects" in the following chapters means "memory objects."

Manual memory management, in which the programmer keeps track of whether memory is in use, can cause programming errors such as dangling pointers or memory leaks. Garbage collection helps prevent these problems and relieves programmers of the burden of memory management. It is especially important for fully modular programming languages, to avoid unnecessary inter-module dependence, and for object-oriented languages, to preserve encapsulation.

An ideal garbage collector would reclaim garbage objects (garbage, in short) as soon as they are no longer used. However, it is hard to determine when an object is used for the last time, so garbage collection generally uses a "liveness" (accessibility) criterion to decide whether objects are garbage, even though this standard is somewhat conservative. The criterion is explained in terms of the "root set" and "reachability." At the moment garbage collection is invoked, active variables are considered live. Typically these include statically allocated global and module variables, local variables in the activation records of the active stacks, and variables in registers; together they form the "root set." Heap objects reachable from the root set are considered "live objects," because the running program can access them through its active variables. In addition, any heap object reachable from a live object is also live. Any object that is not reachable from the root set is treated as garbage.

Conservatism is one of the common terms used in explaining garbage collection algorithms; it describes conditions that fall short of exactness. In its first usage, if the collector does not maintain information about reference pointers, it must rely on a guess, rather than exact information, to decide whether a value stored in an object is a reference type or a numeric type. For example, if the value lies within the range of heap memory addresses, it is treated as a pointer reference [KVM00]. The second usage of conservatism concerns when garbage is reclaimed. Baker's incremental copying algorithm [BAK78] considers objects allocated during an incremental garbage collection cycle to be live; those objects cannot be reclaimed until the cycle after their death. Baker's
algorithm will be introduced in more detail in Chapter 5.
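The first usage of conservatism above can be sketched in a few lines of Java. This is an illustrative simulation, not KVM's actual implementation: the heap bounds and method name are assumptions, and machine words are modeled as `long` values.

```java
// Sketch of conservative pointer identification: a word is treated as a
// possible reference if its value falls inside the heap's address range.
// HEAP_BASE, HEAP_TOP, and looksLikePointer are illustrative names.
public class ConservativeScan {
    static final long HEAP_BASE = 0x1000;   // assumed lower heap bound
    static final long HEAP_TOP  = 0x8000;   // assumed upper heap bound

    // Returns true if the word could be a pointer into the heap and must
    // therefore be treated, conservatively, as keeping its target live.
    static boolean looksLikePointer(long word) {
        return word >= HEAP_BASE && word < HEAP_TOP;
    }

    public static void main(String[] args) {
        System.out.println(looksLikePointer(0x2000)); // in range: kept live
        System.out.println(looksLikePointer(42));     // small integer: ignored
    }
}
```

Note the inherent imprecision: an ordinary integer that happens to fall in the heap range is also treated as a pointer, which is exactly the "guess" the text describes.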

This chapter surveys several garbage collection algorithms: reference counting, mark-sweep, and copying as basic algorithms, and incremental and generational garbage collection as common approaches to reducing the pause time of tracing collectors [JON94, WIL94]. Garbage collection algorithms reflecting real-time requirements are introduced in Chapter 5.


Basic Garbage Collection Techniques

There are two basic approaches to finding "live" objects in memory: "reference counting" and "tracing." Reference-counting garbage collectors keep a count of the reference pointers to each object and use that count as a local approximation of the object's true liveness. Tracing garbage collectors determine liveness by traversing the pointers that the program itself can traverse. Tracing garbage collection algorithms include mark-sweep, copying, and other variants.

Reference Counting

Each object typically has a header field describing the object, including a reference count. Each time a reference to the object is created, its count is incremented; when an existing reference to the object is eliminated, the count is decremented. The garbage collector reclaims the memory occupied by any object whose reference count reaches zero. One advantage of reference counting is its simple implementation. Another is that it can satisfy "real-time" requirements comparatively easily: its incremental nature guarantees that memory-management operations never halt the executing program for long. The main problem with this algorithm is that it cannot reclaim cyclic structures. If the pointers in a group of objects form a cycle, their reference counts can never reach zero, even though the program no longer refers to them. There is also an efficiency problem: the cost of reference counting is generally proportional to the amount of work done by the running program.
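The cycle problem can be illustrated with a minimal reference-counting sketch in Java. The class and method names (`RcObject`, `addRef`, `dropRef`) are illustrative; real collectors keep the count in the object header, not in a field.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal reference-counting sketch. Two objects that point at each
// other keep each other's count above zero, so the cycle is never
// reclaimed even after the program drops its last reference to it.
public class RefCount {
    static int reclaimed = 0;   // how many objects were ever freed

    static class RcObject {
        int count = 0;
        List<RcObject> refs = new ArrayList<>();

        void addRef(RcObject target) {       // creating a reference
            refs.add(target);
            target.count++;
        }
        void dropRef(RcObject target) {      // eliminating a reference
            refs.remove(target);
            if (--target.count == 0) {       // count hit zero: reclaim,
                reclaimed++;                 // cascading to its children
                for (RcObject child : new ArrayList<>(target.refs))
                    target.dropRef(child);
            }
        }
    }

    public static void main(String[] args) {
        RcObject root = new RcObject();
        RcObject a = new RcObject(), b = new RcObject();
        root.addRef(a);
        a.addRef(b);
        b.addRef(a);            // a and b now form a cycle
        root.dropRef(a);        // a's count drops from 2 to 1, never to 0
        System.out.println(reclaimed);   // 0: the cycle leaks
    }
}
```

After `root.dropRef(a)`, neither `a` nor `b` is reachable from the program, yet both counts remain one: the leak the text describes.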









Mark-Sweep Collection

The collection operation of a mark-sweep collector is composed of two phases: marking and sweeping. The marking phase marks every live object reachable, directly or indirectly, from the root set, distinguishing live objects from all others; it is performed by traversing the heap starting from the root set. The sweeping phase reclaims the garbage objects left unmarked by the marking phase. The first problem with this algorithm is that it is difficult to handle objects of varying sizes without fragmentation. The second is that the cost of a collection is proportional to the size of the heap. Nevertheless, many Java VM products adopt the mark-sweep collection algorithm because it is simple and, compared with copying or generational collection algorithms, well suited to systems with small memories. Algorithm 1 describes the mark-sweep collection algorithm.

A mark-compact collection algorithm, a variant of mark-sweep, remedies the fragmentation and allocation problems of mark-sweep collection. After the marking phase, live objects are compacted by moving them until they are all contiguous. The sweep phase then includes the additional work of calculating the addresses to which objects are to be moved, sliding objects down so that each is adjacent to a live neighbor, and updating the pointers from other objects to the moved objects.




Algorithm 1. Mark-Sweep Garbage Collection.

New() =
    if free_pool is empty
        marksweep()
    newcell = allocate()
    return newcell

marksweep() =
    for R in Roots
        mark(R)
    sweep()

mark(N) =
    if mark_bit(N) == unmarked
        mark_bit(N) = marked
        for M in Children(N)
            mark(*M)

sweep() =
    N = Heap_bottom
    while N < Heap_top
        if mark_bit(N) == unmarked
            free(N)
        else
            mark_bit(N) = unmarked
        N = N + size(N)
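The two phases of Algorithm 1 can be rendered as a small runnable Java sketch. The heap is modeled here as a list of nodes rather than a contiguous address range, and all names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Runnable sketch of Algorithm 1's marking and sweeping phases.
public class MarkSweep {
    static class Node {
        boolean marked = false;
        List<Node> children = new ArrayList<>();
    }

    // Marking phase: traverse from a root, marking every reachable node.
    static void mark(Node n) {
        if (!n.marked) {
            n.marked = true;
            for (Node child : n.children) mark(child);
        }
    }

    // Sweeping phase: unmarked nodes are garbage and are removed;
    // marked nodes are unmarked again, ready for the next cycle.
    static int sweep(List<Node> heap) {
        int freed = 0;
        for (int i = heap.size() - 1; i >= 0; i--) {
            if (!heap.get(i).marked) { heap.remove(i); freed++; }
            else heap.get(i).marked = false;
        }
        return freed;
    }

    public static void main(String[] args) {
        List<Node> heap = new ArrayList<>();
        Node root = new Node(), live = new Node(), dead = new Node();
        root.children.add(live);        // live is reachable from root
        heap.add(root); heap.add(live); heap.add(dead);
        mark(root);                     // root set = {root}
        System.out.println(sweep(heap)); // 1: only "dead" is reclaimed
    }
}
```

Resetting the mark bit during the sweep, as in Algorithm 1, saves a separate pass over the heap before the next collection.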

Copying Collection


In a copying collection, memory is divided into two semi-spaces, "fromspace" and "tospace." The running program allocates memory objects in one semi-space (the current semi-space, or tospace). When a memory allocation request cannot be satisfied in the current semi-space because space has run short, the program stops and a copying collector is called to reclaim the space occupied by garbage objects. The current semi-space becomes "fromspace" and the other space is switched to "tospace" (flipping); henceforth, allocation requests are satisfied in "tospace." All of the live data in "fromspace" are copied to "tospace." For objects reachable along multiple paths, a slightly more involved mechanism is needed so that the same object is not copied to "tospace" multiple times: when an object is transported to "tospace," a "forwarding pointer" is installed in the old version of the object. The forwarding pointer not only signifies that the old object is obsolete but also indicates where to find the new copy.

One advantage of a copying collector is that its compacting nature avoids memory fragmentation. Its most immediate cost is the use of two semi-spaces, which doubles the space compared with a non-copying collector. The overhead of a copying collection is proportional to the number of live objects, so this algorithm is attractive when the ratio of live objects to garbage is low.
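A minimal Java sketch of the copying idea follows, with the forwarding pointer guarding against copying a shared object twice. Tospace is modeled as a list, and all names (`Obj`, `copy`, `forward`) are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a copying collector: live objects are copied from fromspace
// to tospace, and a forwarding pointer ensures an object reachable along
// multiple paths is transported only once.
public class CopyingGc {
    static class Obj {
        Obj forward;                    // forwarding pointer, set once copied
        List<Obj> refs = new ArrayList<>();
    }

    // Copy an object into tospace, following the forwarding pointer if it
    // was already transported in this cycle.
    static Obj copy(Obj o, List<Obj> tospace) {
        if (o.forward != null) return o.forward;  // already copied: reuse
        Obj fresh = new Obj();
        o.forward = fresh;                        // install forwarding pointer
        tospace.add(fresh);
        for (Obj child : o.refs) fresh.refs.add(copy(child, tospace));
        return fresh;
    }

    public static void main(String[] args) {
        Obj root = new Obj(), a = new Obj(), shared = new Obj();
        Obj garbage = new Obj();           // unreachable: never copied
        root.refs.add(a); root.refs.add(shared);
        a.refs.add(shared);                // shared is reachable twice
        List<Obj> tospace = new ArrayList<>();
        copy(root, tospace);
        System.out.println(tospace.size()); // 3: root, a, shared (once each)
    }
}
```

Garbage is reclaimed implicitly: anything not copied simply stays behind in fromspace, which is discarded wholesale at the flip.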






Figure 1. Copying from Fromspace to Tospace





Generational Garbage Collection

According to research on the characteristics of memory objects, most objects live a very short time, while a small percentage of them live much longer [WIL94]. This implies that, in a copying collector, the current collection cycle has a high probability of copying objects that were already copied from "fromspace" to "tospace" in a previous cycle; that is, the collector spends time copying the same objects multiple times.

Exploiting this observation, generational garbage collection divides the heap into two or more areas (generations) based on objects' ages. Objects that survive a collection are promoted into an older generation area. Garbage collection operates more frequently on the young generation areas than on the old ones. Because each collection covers only part of the heap, a garbage collection pause is shorter than with a plain copying algorithm, except in the worst case, when the old generation areas are full.

One consideration for this algorithm is that it must keep information about reference pointers from old-generation objects to young-generation objects; this information is used to traverse objects in the young generation just as the root set is. The algorithm is well suited to interactive applications, whose garbage collection pauses should not be long enough for users to notice.
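The promotion mechanism above can be sketched minimally in Java. The survival test here is a simple flag standing in for reachability, and all names are illustrative; a real collector would also consult the old-to-young reference information discussed above.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of generational promotion: objects surviving a young-generation
// (minor) collection move to the old generation, so the frequent minor
// collections never re-copy them.
public class Generational {
    static class Obj {
        boolean reachable;              // stand-in for a real liveness test
        Obj(boolean r) { reachable = r; }
    }

    static List<Obj> young = new ArrayList<>();
    static List<Obj> old = new ArrayList<>();

    // A minor collection scans only the young generation.
    static void minorCollect() {
        List<Obj> survivors = new ArrayList<>();
        for (Obj o : young)
            if (o.reachable) survivors.add(o);  // survivors are promoted
        old.addAll(survivors);
        young.clear();                          // the rest is reclaimed
    }

    public static void main(String[] args) {
        young.add(new Obj(true));    // long-lived object
        young.add(new Obj(false));   // short-lived garbage
        young.add(new Obj(false));
        minorCollect();
        System.out.println(old.size() + " " + young.size()); // "1 0"
    }
}
```

Because minor collections touch only the young list, their cost tracks the (small) number of young survivors rather than the whole heap.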


Incremental Garbage collection

Reducing the length of garbage collection pauses is an issue for interactive and real-time applications. Incremental GC algorithms distribute their execution time throughout the program's execution instead of performing an entire collection at once; that is, incremental garbage collection performs collection work incrementally, interleaved with the application's execution. A typical example is to collect a certain amount of garbage whenever a memory allocation request arrives from the application.

There are two concerns for an incremental garbage collector. First, a proper amount of memory should be reserved when garbage collection is triggered, so that allocation requests arriving during a collection cycle can be satisfied and application failures due to memory starvation avoided. Second, the interleaved execution of the applications (mutators) and a garbage collector sharing the same memory may cause a memory consistency problem: the reachability status of memory objects as known to the collector can be changed by the mutators while they execute. As shown in Figure 2, suppose a garbage collector marks objects reachable from the root set by tracing memory. The collector marks object A as live at step 1. At step 2, the mutator updates the pointer reaching object C so that it points from A rather than from B. The collector then traces and marks object B. Even though object C is reachable, the collector never gets a chance to visit it again, so it treats object C as garbage in the current collection cycle because C is unmarked. The "tri-color algorithm" and "read/write barriers" show how to solve this consistency problem caused by the interleaved execution of mutators and collector.


























Figure 2. Black-White Pointer



Dijkstra's tri-color marking paints each object one of the following three colors. Under this scheme, when the pointer to C is moved from B to A (creating a black-white pointer), the object A is repainted gray at Step 2, which forces the garbage collector to trace gray objects such as A again.

Black: An object that has been visited, including all of its descendants, is painted black.

Gray: An object that has been visited by the collector but whose descendants have not yet been visited is painted gray, indicating that the object must be visited again.

White: At the beginning of a garbage collection cycle, all objects start as white. At the final stage of the cycle, if there are no gray objects, the white objects are considered garbage.

Two approaches exist to avoid black-white pointers: the read barrier and the write barrier. A read barrier ensures that a mutator never sees a white object: whenever the mutator tries to access a white object, the collector visits that object immediately. A write barrier records the situation in which a mutator writes a pointer to a white object into a black object, so that the collector can visit the black object or the white object again and account for the change. Generally a read barrier is more expensive than a write barrier because pointer updates are less frequent than pointer reads. Algorithm 2 presents Steele's write barrier applied to the example of Figure 2: the object A is repainted gray and pushed onto a stack for the garbage collector to visit later. The locking in this code is necessary to synchronize accesses by multiple mutators, because Steele's algorithm targets incremental garbage collection on multiprocessor systems. Note, however, that not all incremental garbage collection algorithms support concurrency, in which the garbage collector runs concurrently with the mutators.




Algorithm 2. Steele's Write-Barrier
Update(A, C) =
    lock gcstate
    *A = C
    if phase == marking_phase
        if marked(A) and unmarked(C)
            markbit(A) = unmarked
            gcpush(A)
    unlock gcstate
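A Steele-style write barrier can be sketched in Java as follows. The class layout, field names, and the single pointer slot are illustrative assumptions for the sketch, not from Steele's paper: when a marked (black) object A is updated to point at an unmarked (white) object C during the marking phase, A is reverted to gray and pushed for re-scanning, so C cannot stay hidden from the collector.

```java
import java.util.ArrayDeque;

class WriteBarrier {
    static class Obj {
        volatile boolean marked;   // true = black/gray, false = white
        Obj field;                 // the single pointer slot updated below
    }

    enum Phase { IDLE, MARKING }

    final ArrayDeque<Obj> grayStack = new ArrayDeque<>();
    volatile Phase phase = Phase.IDLE;
    final Object gcState = new Object();   // lock shared with the collector

    // update(A, C): store C into A.field under the barrier
    void update(Obj a, Obj c) {
        synchronized (gcState) {   // serialize with the collector and other mutators
            a.field = c;
            if (phase == Phase.MARKING && a.marked && c != null && !c.marked) {
                a.marked = false;  // repaint A gray: it must be rescanned
                grayStack.push(a);
            }
        }
    }
}
```

Outside the marking phase, or when the stored reference is already marked, the barrier reduces to the plain store plus one cheap test, which is why write barriers are generally cheaper than read barriers.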


Reducing the garbage collection pause by distributing it throughout the entire mutator execution is suitable for interactive applications, whose users should not notice response pauses due to garbage collection. However, from the viewpoint of real-time systems, incremental garbage collection alone cannot guarantee meeting the deadlines of hard real-time systems, because the accumulated incremental garbage collection pauses do not bound the worst-case execution time.















CHAPTER 3
REAL-TIME SYSTEMS

This chapter gives an overview of the concepts of real-time systems, scheduling algorithms for periodic and aperiodic jobs, and the schedulability test for real-time processes.


What Is A Real-Time System?

Unlike general-purpose computers, which require correct output and high throughput, real-time systems have one additional requirement: timely service. That is, real-time systems need to finish their work on a timely basis. Application development maps the timing constraints into the deadlines of jobs.1 Thus, guaranteeing that jobs sharing resources in a real-time system meet their deadlines is a crucial issue in process scheduling. Examples of real-time applications include air traffic control, robot control, network packet switching, voice recognition, real-time databases, and so on.

Henceforth, we will call a process that is responsible for activities with real-time constraints a real-time task. Real-time tasks are categorized into four classes in terms of their arrival pattern and their deadline. A task's deadline indicates the instant of time by which its execution must be completed. If meeting a deadline is so critical that missing it causes fatal faults, the deadline is called hard. If a late result is acceptable to some extent, even though meeting the deadline is desirable, the deadline is considered soft. If tasks' arrival intervals are regular, those tasks are periodic tasks; aperiodic tasks are tasks with irregular arrival times. Generally, aperiodic tasks have soft deadlines and require a fast average response time. Sporadic tasks, the aperiodic tasks that have hard real-time deadlines, are restricted to a minimum inter-arrival time to guarantee that their deadlines can be met [SPR89].

1 Job, process, and task are used interchangeably to indicate the same unit of work.

Systems in which several tasks share a resource such as CPU time must schedule those tasks carefully in order to meet the tasks' deadlines and increase resource utilization. The criteria for real-time systems are schedulability, the ability to meet the deadlines of all tasks; ensured worst-case latency, a bound on the worst-case system response time to events; and stability, the requirement that all hard real-time tasks are served even if not all deadlines can be met.

The execution of jobs can be interleaved. A scheduler may suspend the execution of less urgent jobs, execute more urgent jobs, and resume the suspended jobs when the more urgent jobs are completed. A job is preemptable if its execution can be suspended at any time to allow the execution of other jobs. A job is nonpreemptable if it must be executed from start to completion without interruption. For preemptable jobs, designers should consider the cost of preemption [LIU97].


Scheduling Algorithms for Periodic Tasks

A scheduling algorithm is a set of rules that determine the task to be executed at a particular time. Two common approaches are clock-driven scheduling and priority-driven scheduling.









In clock-driven scheduling, the schedule of all jobs is computed off-line and kept in a table for use at run time. This method saves scheduling overhead at run time. However, it is inflexible when tasks are added or deleted, and it can be used only in deterministic systems whose tasks' release times and resource demands are known in advance and do not vary.

In priority-driven scheduling, the order of task execution is determined by the priorities assigned to tasks. This approach intends never to leave any resource idle as long as there are ready tasks. According to the rules for assigning priorities, there are the rate-monotonic algorithm, the deadline-monotonic algorithm, the earliest-deadline-first algorithm, and others. As another criterion, a scheduling algorithm is called fixed (static) if a priority, once assigned to a task, is used until the task finishes, and dynamic if a priority is assigned individually to each job of a task. Priority-driven scheduling has many advantages over clock-driven scheduling: it is easy to implement and requires no a priori knowledge of release times and execution times. However, when job parameters vary, the timing behavior of priority-driven systems is non-deterministic; that is, it is difficult to validate whether all tasks meet their deadlines.

The rate-monotonic algorithm (RM in short) [LIU73] assigns priorities to tasks according to their request rates: the shorter the period, the higher the priority. The RM algorithm requires tasks to be periodic and independent, with deadlines equal to their periods.

The deadline-monotonic algorithm (DM in short) assigns priorities to tasks according to their relative deadlines: the shorter the relative deadline, the higher the priority. When relative deadlines are arbitrary, the DM algorithm is better than the RM algorithm in that it can produce a feasible schedule when the RM algorithm fails.

The earliest-deadline-first algorithm (EDF in short) assigns priorities to the individual jobs of the tasks according to their absolute deadlines; it is a dynamic algorithm. A system of tasks with relative deadlines equal to their respective periods can be feasibly scheduled on one processor if and only if the total utilization is equal to or less than one.

Task sets are said to be schedulable if the total utilization of the tasks is less than or equal to the schedulable utilization of a scheduling algorithm. Schedulable utilization is used as a criterion to measure the performance of scheduling algorithms: clearly, the higher the schedulable utilization, the better the algorithm. In this respect, EDF, with a schedulable utilization of 1, is better than the RM and DM algorithms. However, in overloaded systems, the behavior of this dynamic algorithm makes it impossible to predict which tasks will miss their deadlines. In contrast, with a fixed-priority algorithm it is possible to predict which tasks will miss their deadlines, because overruns of jobs can never affect higher-priority tasks in the overloaded case.


Scheduling Aperiodic and Sporadic Jobs in Priority-Driven Systems

Two goals are presented for scheduling aperiodic and sporadic tasks together with periodic tasks. First, a sporadic task that passes an acceptance test should be scheduled so that it meets its deadline without causing periodic tasks or previously accepted sporadic tasks to miss theirs; the scheduler conducts the acceptance test based on the sporadic task's execution time and deadline. Second, aperiodic tasks should be completed as soon as possible. The four algorithms that schedule aperiodic or sporadic tasks are described below: Background, Polling Server, Deferrable Server, and Sporadic Server.









The background approach schedules aperiodic tasks only when there are no periodic or sporadic tasks ready. This algorithm always produces a correct schedule and is simple. However, the response time of aperiodic tasks can be unnecessarily long when the periodic tasks' load is high.

The polling server approach creates a periodic task, the polling server, with a fixed period and an execution time (execution budget) to serve aperiodic requests. If there are no aperiodic requests at the moment the polling server is invoked, the budget of the polling server is exhausted immediately because the budget is not preserved; other periodic tasks are then scheduled if any exist. This algorithm's drawback is likewise the long response time of aperiodic tasks.

The deferrable server approach creates a periodic task to serve aperiodic requests, like a polling server. Unlike the polling server, it can preserve its budget when there are no aperiodic requests at the server's invocation. The budget may not, however, be carried over from one period to the next. The budget preserved during the server's period can be used to serve aperiodic requests that arrive in the same period, as long as the server's budget is not exhausted; any remaining budget is discarded at the end of the period, and a new budget is replenished at the beginning of the next period. This algorithm improves the response time of aperiodic tasks because it can provide immediate service for aperiodic requests as long as budget remains. It imposes no scheduling impact on higher-priority periodic tasks, but it does on lower-priority tasks. Assume that an aperiodic request arrives at time ps − es, where ps is the deferrable server's period and es is its execution budget. The deferrable server can use its budget during the first period ps and can then continue its execution for another es when the budget is replenished at the next ps. Thus, the worst-case response time of tasks with lower priority than the deferrable server may suffer an extra delay caused by the server.
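The deferrable server's budget rule can be modeled with a small sketch. The class and method names are ours, and the model ignores priorities and actual CPU scheduling; it only tracks how much budget is available to aperiodic requests at a given time.

```java
// Simplified deferrable-server budget tracker: the budget is preserved
// within a period, discarded at the period's end, and fully replenished
// at the start of each new period.
class DeferrableServer {
    final double period, capacity;
    double budget;
    double lastReplenish = 0;

    DeferrableServer(double period, double capacity) {
        this.period = period;
        this.capacity = capacity;
        this.budget = capacity;
    }

    // advance to time t, replenishing the budget at each period boundary
    void advance(double t) {
        if (t >= lastReplenish + period) {
            lastReplenish = Math.floor(t / period) * period;
            budget = capacity;   // leftover budget was discarded at period end
        }
    }

    // serve an aperiodic demand arriving at time t; returns the time served
    double serve(double t, double demand) {
        advance(t);
        double served = Math.min(demand, budget);
        budget -= served;
        return served;
    }
}
```

For example, with period 100 and budget 20, a demand of 30 at time 0 receives only 20 units, a demand at time 50 receives nothing (the budget is spent and not replenished mid-period), and a demand at time 100 can again be served from a fresh budget.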

The sporadic server improves on the deferrable server. Under its different replenishment rules, the moment of budget replenishment is based on when the budget is consumed: replenishment occurs one period after the budget begins to be consumed. Figure 3 shows the budget replenishment rule of a sporadic server.




Figure 3. Sporadic Server's Budget Replenishment (period = 100, execution budget = 20; the budget consumed serving an aperiodic task is replenished one period after its consumption begins)



Unlike a deferrable server, which may delay lower-priority tasks longer than a periodic task with the same period and execution time would, the consumption and replenishment rules of the sporadic server algorithm ensure that a sporadic server with period ps and execution budget es never demands more processor time than a periodic task with the same ps and es at any instant. Therefore, the sporadic server behaves exactly like a periodic task, and we can check the schedulability of the periodic tasks accordingly. Some systems containing a sporadic server may be schedulable when the same system with a deferrable server of the same parameters is not.









Schedulability Test

How can we know whether a given task set is schedulable? Unlike other applications, whose errors are tracked down by trial during debugging, deploying real-time applications in real environments for debugging purposes can have very dangerous results if the hard real-time systems are safety-critical. For predictability, that is, to predict whether a system can guarantee the deadlines of all hard real-time tasks, analysis techniques and methods to determine the schedulability of a scheduling algorithm are required.

As a schedulability test for the RM scheduling algorithm, the Utilization Bound (UB in short) test was introduced, based on the notion of a critical instant: a critical instant for any task occurs whenever the task is requested simultaneously with requests for all higher-priority tasks [LIU73]. That is, the critical instant is the instant at which a task has the largest response time.

Suppose that τ1, τ2, ..., τn are n periodic tasks, with periods T1, T2, ..., Tn and execution times C1, C2, ..., Cn, respectively. Each task's CPU utilization is computed as Ui = Ci/Ti. The CPU utilization for the set of tasks is

U = U1 + U2 + ... + Un

The Utilization Bound test states that a set of n independent periodic tasks scheduled by the RM algorithm will always meet their deadlines, for all task phasings, if

U = Σ(i=1..n) Ci/Ti ≤ U(n) = n(2^(1/n) − 1)

U(1) = 1.0, U(2) = 0.828, U(3) = 0.779, ..., U(∞) = ln 2 ≈ 0.693

Table 1 shows a sample UB test with three tasks. The task set's total utilization, 0.753, is less than U(3) = 0.779, so the task set is schedulable according to the UB test.









Table 1. Sample Task Set for UB Test
Task    C     T     U
T1      20    100   0.200
T2      40    150   0.267
T3      100   350   0.286
Total               0.753

By the UB test, a task set whose utilization is less than the given UB value is schedulable, and a task set whose utilization exceeds 1 is not schedulable because it is overloaded. However, for a task set whose utilization is greater than the utilization bound but less than or equal to 1, no conclusion can be drawn, because the UB test is only a sufficient condition. The UB test is therefore conservative, though simple.
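The UB test is easy to mechanize. The following sketch (the class and method names are illustrative) computes U(n) and applies the sufficient condition; note that a false result means only "inconclusive", not "unschedulable".

```java
// Rate-monotonic Utilization Bound (UB) test for a set of periodic tasks.
class UbTest {
    // U(n) = n(2^(1/n) - 1)
    static double bound(int n) {
        return n * (Math.pow(2.0, 1.0 / n) - 1.0);
    }

    // c[i] = execution time, t[i] = period of task i;
    // returns true only when the test proves schedulability
    static boolean passes(double[] c, double[] t) {
        double u = 0.0;
        for (int i = 0; i < c.length; i++) u += c[i] / t[i];
        return u <= bound(c.length);
    }
}
```

Applied to the task set of Table 1 (C = {20, 40, 100}, T = {100, 150, 350}), the total utilization 0.753 is below U(3) ≈ 0.779 and the test passes.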

As a more precise test than the UB test, the Response Time (RT in short) test [JOS86] states that for a set of independent periodic tasks, if each task meets its first deadline under the worst-case task phasing (the critical instant), then its deadline will always be met. Let a(n) be the nth estimate of the response time of task i. It may be computed by the following iterative formula:

a(n+1) = Ci + Σ(j=1..i−1) ⌈a(n)/Tj⌉ Cj,   where a(0) = Σ(j=1..i) Cj

The test terminates when a(n+1) equals a(n). Task i is schedulable if its response time is no later than its deadline: a(n) ≤ Ti.




Table 2. Sample Task Set for RT Test
Task    C     T     U
T1      40    100   0.400
T2      40    150   0.267
T3      100   350   0.286
Total               0.953











Table 2 shows a task set that cannot be proven schedulable by the UB test, because its utilization, 0.953, is greater than U(3) = 0.779. The RT test must therefore be applied to T3 to decide whether, at a critical instant, its first instance meets its deadline. As computed below, a3 is less than the deadline of 350, so task T3 is schedulable by the RT test.

a(0) = 40 + 40 + 100 = 180

a(1) = 100 + ⌈180/100⌉ × 40 + ⌈180/150⌉ × 40 = 260

a(2) = 100 + ⌈260/100⌉ × 40 + ⌈260/150⌉ × 40 = 300

a(3) = 100 + ⌈300/100⌉ × 40 + ⌈300/150⌉ × 40 = 300
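The iteration can be coded directly. The following sketch uses our own naming, assumes deadlines equal to periods, and assumes tasks are indexed in priority order (index 0 = highest); it reproduces the computation for T3 above.

```java
// Iterative response-time (RT) test for task i under rate-monotonic priorities.
class RtTest {
    // c[j], t[j]: execution time and period of task j; tasks j < i have
    // higher priority than task i. Returns the converged response time,
    // or -1 if the estimate grows past the deadline (taken to be t[i]).
    static double responseTime(double[] c, double[] t, int i) {
        double a = 0;
        for (int j = 0; j <= i; j++) a += c[j];   // a(0) = C1 + ... + Ci
        while (true) {
            double next = c[i];
            for (int j = 0; j < i; j++)
                next += Math.ceil(a / t[j]) * c[j];
            if (next == a) return a;              // fixed point reached
            if (next > t[i]) return -1;           // deadline missed
            a = next;
        }
    }
}
```

For Table 2 (C = {40, 40, 100}, T = {100, 150, 350}), the iteration converges to 300 for T3, which is below its deadline of 350.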

Both the UB and RT tests have limitations: all tasks must be independent, task deadlines must lie at the ends of their periods, tasks must be scheduled by the rate-monotonic algorithm, and neither test accounts for interrupt handling or blocking factors. A more detailed schedulability analysis must take these considerations into account [OBE94].














CHAPTER 4
JAVA AND REAL-TIME SYSTEMS

Java, popularized by Internet applications, has come to be applied to an enormous variety of areas because of its advantages. Embedded systems are one of the frontier areas trying to adopt Java. However, some characteristics of Java inhibit its use in embedded systems that have real-time constraints. As a practical step toward adopting Java in real-time systems, a collaborative group comprising experts from various industries and academic areas is working to standardize a Java extension for real-time systems. In addition, many Java VM products targeting embedded systems have been introduced in the market. This chapter surveys the combination of embedded systems and Java, the proposed real-time extension for Java, and some Java VM products that target embedded systems and consider real-time requirements.


Trends In Embedded Systems

Typically, embedded devices have dedicated functionality for a specific set of tasks and are characterized by long life and high reliability. Recently, the market for embedded devices has extended to a variety of consumer and business products, including mobile phones, pagers, PDAs, set-top boxes, process controllers, office printers, and network devices. In general, embedded systems have real-time constraints.

Early embedded development environments used assembly language; some companies then shifted to the high-level languages C or C++. The running environments of embedded systems include a large number of target operating systems and processors. The market's demands for new functionality on embedded devices change rapidly, so development costs are rising, and porting new applications across various environments makes short time-to-market difficult. To decrease these costs, manufacturers are turning to a more open, standards-based development environment: Java [EMBOO].

The distinct advantages of Java stated below support shorter development schedules and reduced costs for products:

1. Simplicity and productivity. The Java programming language is easy to learn and use, and its extensibility and reusability as an object-oriented language enhance productivity.

2. Portability. Java's "write once, run everywhere" property lets developers move applications to various target systems with minimal effort.

3. Security. Java's security model, originally designed for distributed/networked applications, ensures that code from a given source can access resources only when permission for those resources has been granted.


Shortcomings of Java for Real-Time Embedded Systems

Real-time application development requires developers to determine the memory and CPU time requirements of each real-time task and to analyze the total collective worst-case workload of all tasks. Beyond these analyses, additional functionality is required for scheduling tasks to meet their deadlines. However, the current Java implementation is not appropriate for the development of real-time software. The following list describes Java's specific shortcomings [NIL98] as a real-time programming language.

1. Garbage collection. In real-time applications, the response time of memory allocation requests must be bounded, a sufficient amount of memory must be assured for allocation requests, and the preemption latency of the garbage collector task must be bounded when high-priority tasks arrive. To analyze the memory requirements of real-time applications, the Java run-time environment must allow applications to determine how much memory is currently available and how much total memory exists in the execution environment. Current Java does not provide these features.

2. Task scheduling. In the current Java execution environment, there is no way to determine how many other tasks are running, what priorities they hold, or what fraction of CPU time they consume. Therefore, there is no way to assure that a particular task can finish its job within its real-time constraints.

3. Task synchronization. Java uses monitors to protect a critical section from simultaneous access by multiple tasks. To predict the execution time of a task that enters a critical section, one must know information about the competing tasks, the complexity of the monitor code, and the precise blocking behavior of a protocol that prevents priority inversion among real-time tasks sharing the critical section. The current Java specification does not provide these features.









Real-Time Extension for the Java Platform

The requirements group for a Real-Time Extension for the Java platform, a collaboration of many experts from industry and academia sponsored by NIST, specified in 1999 the functional requirements expected to be needed by real-time applications written in the Java programming language [CAR99].

The Real-Time for Java Expert Group (RTJEG), convened under Sun Microsystems' Java Community Process, released a preliminary version of The Real-Time Specification for Java (RTSJ in short) in 2000, accepting the requirements of the NIST group.

The following is a brief overview of the seven areas enhanced in the RTSJ [BOLOO].

1. Thread scheduling and dispatching. The RTSJ does not specify a particular scheduling mechanism. Implementations of thread scheduling will allow the programmatic assignment of parameters appropriate to the underlying scheduling mechanism, as well as provide any necessary methods for the creation, management, admittance, and termination of real-time Java threads.

2. Memory management. The memory allocation and reclamation specifications are defined to be independent of any specific algorithm, to allow a program to precisely characterize an implemented GC algorithm's effects on the execution time, preemption, and dispatch of real-time Java threads, and to allow the allocation and reclamation of objects free from interference by any GC algorithm.

3. Synchronization and resource sharing. Real-time threads must account for blocking caused by priority inversion. The implementation of the Java keyword synchronized must include one or more algorithms that prevent priority inversion among real-time Java threads that share any serialized resource.

4. Asynchronous event handling. To accommodate the asynchronous events that commonly happen in the real world, the RTSJ generalizes the Java language's mechanism for asynchronous event handling.

5. Asynchronous transfer of control. Sometimes a drastic change in the real world requires that the locus of execution move immediately to another location. The RTSJ includes a mechanism that extends Java's exception handling to allow applications to programmatically change the locus of control of a Java thread.

6. Asynchronous thread termination. Application logic may need to arrange for a real-time Java thread to expeditiously and safely transfer control to its outermost scope and thus end in a normal manner. Unlike the traditional, unsafe, and deprecated Java mechanism for stopping threads, the RTSJ's mechanisms for asynchronous event handling and transfer of control are safe ways to reflect drastic and asynchronous changes in the real world.

7. Physical memory access. Byte-level access to physical memory is defined, as well as classes that allow the construction of objects in physical memory.









Introduction of Java VM for Embedded Systems

Recently, many Java VM products targeting the embedded market have been introduced in an effort to apply the Java language to embedded systems. KVM from Sun Microsystems, J9 from IBM, Tao VM from Tao, the open-source Kaffe, ChaiVM from Hewlett-Packard, Jbed from Esmertec Inc., and PERC VM from NewMonics Inc., among others, are JavaVM products competing in this market. PDAs that run a JavaVM on top of PalmOS have already been released.

The most important common focus of these JavaVM products is minimizing the size of the VM so that it can run in limited-resource environments. Such a small JavaVM needs only a few tens or hundreds of kilobytes of memory to run, while nevertheless supporting the complete bytecode set, dynamic class loading, garbage collection, multithreading, and the other essential features of the Java virtual machine. Other central goals are portability and ease of understanding [TAI99].

This section introduces a few Java VMs, among the many products, that are directly related or indirectly referenced in our research. Rather than presenting a whole picture of the selected Java VMs, it briefly describes their features, especially the aspects of their garbage collection algorithms.

KVM

KVM is a compact, portable Java virtual machine suitable for 16/32-bit RISC/CISC microprocessors with a total memory budget of no more than a few hundred kilobytes [KVMOO].









KVM's modular structure enables its essential functionality to be customized to devices' demands; that is, KVM is built so that features not needed for a particular target implementation can easily be removed. To enhance portability, KVM's multithreading capabilities are implemented entirely in software, without using any multitasking capabilities of the underlying operating system. From the OS's view, KVM has one physical thread of control inside the virtual machine. Thus, when a native function such as I/O is called from KVM, all threads in KVM are blocked by default.

To reduce VM startup time, the JavaCompact tool allows Java classes to be pre-linked into the JavaVM (ROMizing).

In general, direct object pointers are encapsulated in handles, which are presented as references to the programmer. Because handles introduce memory and execution overhead, KVM's objects do not use handles.

KVM uses a non-moving, non-incremental, mark-and-sweep garbage collection algorithm, so the execution of all threads stops whenever a garbage collection takes place. This part is reviewed in more detail in Chapter 6.

ChaiVM

ChaiVM [HPCOO], a JavaVM developed at Hewlett-Packard, is portable, supports soft real-time capability, and has a reduced memory footprint suitable for embedded devices. The memory footprint of ChaiVM and its base libraries is claimed to be 550KB; when compiled with the complete packages, the libraries require 1.3MB of ROM and 256KB of RAM on StrongARM processors.









ChaiVM utilizes the platform's native thread environment to support the creation, deletion, synchronization, and scheduling of threads, layering Java semantics on top of the platform's facilities.

For memory management, ChaiVM uses an incremental, tri-color, mark-and-sweep, conservative pointer-finding garbage collection algorithm. The execution of the garbage collection thread (the marker thread) is intertwined with the execution of all other threads. Part of the GC work is performed by the mutators themselves, which mark their own stacks. The configurable GC parameters include the priority of the GC thread and the number of Java objects and the memory size that trigger GC. ChaiVM's concurrent garbage collection also runs as a separate background thread while user threads continue to execute.


ChaiVM provides ROMizing and an ahead-of-time compiler for performance gains.

JBED

Jbed [JBE99] is a real-time OS integrated with a Java VM and written in Java. This combination of OS and Java virtual machine may help avoid the overhead of overlapping functionality that arises when the two are implemented separately.

To overcome the slow speed of bytecode interpretation, Jbed compiles bytecode into machine code in two ways. First, the bytecode can be linked into an image that can be downloaded to permanent memory (ROMizer). Second, a target bytecode compiler on the embedded system compiles bytecodes at link and load time, before they are called.

As its scheduling algorithm, Jbed uses earliest-deadline-first scheduling while supporting priority inheritance for hard real-time tasks. Additionally, it includes an admission test tool to calculate the target's load and to analyze the schedulability of a set of real-time tasks.

A garbage collector implementing a mark-sweep GC algorithm performs garbage collection during the system's idle time, without interfering with the processing of real-time tasks.














CHAPTER 5
RELATED WORK FOR REAL-TIME GARBAGE COLLECTION

As described in Chapter 2, neither incremental garbage collection nor generational garbage collection is sufficient for real-time systems, because of the unpredictability of the worst-case execution time and the memory utilization. This chapter surveys several approaches proposed as real-time garbage collection. Most algorithms focus on bounding the execution time of garbage collection for predictability, as well as minimizing the garbage collection pause. Some adopt real-time scheduling for a dedicated garbage collection task. All of these collectors are based on incremental garbage collection in order to bound the garbage collection pause time.


Baker's Incremental Copying Collection

Baker's algorithm [BAK78] is one of the most famous incremental copying algorithms. The garbage collector of Baker's algorithm does not copy all live objects from fromspace to tospace immediately at flip time. To minimize the latency of a flip operation, only the objects directly referenced by the root set are moved to tospace at flip time; for the objects that will be moved later from fromspace, space is reserved in tospace. The remaining objects in fromspace are gradually moved to tospace whenever a new allocation is initiated. To do so, the tospace area shown in Figure 4 is arranged into two sub-areas: one grows upward as the garbage collector compacts live data from the bottom end at B, and the other grows downward as new allocations are made from the top end at T.
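The two-ended tospace layout can be illustrated with a toy word-addressed allocator; the class and method names are ours, and real collectors track scan pointers and trigger collection based on occupancy.

```java
// Sketch of Baker's tospace layout: evacuated survivors are compacted
// upward from the bottom pointer B while new allocations grow downward
// from the top pointer T; the region is exhausted when B meets T.
class Tospace {
    final int size;
    int b = 0;   // next free word for evacuated (compacted) objects
    int t;       // boundary for new allocations, moving downward

    Tospace(int size) { this.size = size; this.t = size; }

    // copy an n-word survivor out of fromspace; returns its new address
    int evacuate(int n) {
        if (b + n > t) throw new OutOfMemoryError("tospace exhausted");
        int addr = b;
        b += n;
        return addr;
    }

    // satisfy an n-word allocation request from the mutator
    int allocate(int n) {
        if (t - n < b) throw new OutOfMemoryError("tospace exhausted");
        t -= n;
        return t;
    }
}
```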













Figure 4. Baker's Incremental Copying Algorithm's Tospace Layout (live data compacted upward from the Bottom end at B; new allocations made downward from the Top end at T)



A read barrier is used to trap mutator accesses, preserving memory consistency against mutator changes during a garbage collection cycle. If a trapped object is in fromspace, it is copied to tospace and the address of the new copy is returned to the mutator. In this way, the mutator sees only tospace objects. Since the mutator never sees a white object, it can never install a reference to a white object into a black object (a black-white pointer) and hence can never disrupt the collector's traversal.

This algorithm fails to provide a bounded garbage collection time in the following respects. First, the time taken to evacuate the root set atomically at flip time depends on the size of the root set. Second, because of the overhead of the read barrier, the time taken to read objects depends on whether those objects have already been moved.


H/W Supported Real-Time Garbage Collection

Nilsen's approach [NIL95] is initially based on Baker's algorithm. He considers the read barrier of Baker's algorithm the major overhead associated with real-time garbage collection. To reduce this overhead, his hardware-assisted garbage collection system uses a garbage-collecting memory module (GCMM), a memory module equipped with special arbiter circuitry. While garbage collection is in progress, the arbiter is responsible for redirecting the mutator's memory store and fetch operations that refer to memory objects waiting to be copied out of fromspace. However, Henriksson [HEN98] pointed out that a memory module specific to garbage collection cannot keep up with rapidly developing hardware technology.


Real-Time Non-Copying Garbage Collection

Wilson [WIL93] indicates that the read barrier used in Baker's algorithm is too expensive and too unpredictable to meet applications' real-time deadlines. For example, when a mutator traverses a list that has not yet been reached and moved to tospace, every object of the list must be copied to tospace; that is, the CPU time for this operation is not bounded.

They propose three techniques to make non-copying collection predictable: adopting a write barrier, which is more cost-effective than a read barrier; using implicit non-copying reclamation; and avoiding fragmentation. Implicit non-copying reclamation avoids a sweep phase by borrowing an advantage of copying collection: the sets that correspond to the semi-spaces of copying collection are organized as an old set and a new set built from doubly-linked lists. Each object has a header field recording which set it belongs to. As the collector traverses reachable objects, they are unlinked from the old set and linked into the new set. When a garbage collection cycle finishes, the objects remaining in the old set are garbage and are reclaimed. For de-fragmentation, memory is partitioned into multiple sets of different-sized chunks, each set containing cells of a fixed size; a requested memory object is assigned to a cell in the proper set according to its size.
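The old-set/new-set mechanism can be sketched with doubly-linked lists as follows. This is a sketch under assumed names: `promote`, `Set`, and the dummy-head layout are illustrative, not Wilson's code.

```c
#include <stddef.h>

typedef struct Obj {
    struct Obj *prev, *next;
    int in_new_set;            /* header field recording set membership */
} Obj;

typedef struct { Obj head; } Set;   /* circular list with a dummy head node */

static void set_init(Set *s) { s->head.prev = s->head.next = &s->head; }

static void unlink_obj(Obj *o) {
    o->prev->next = o->next;
    o->next->prev = o->prev;
}

static void link_obj(Set *s, Obj *o) {
    o->next = s->head.next;
    o->prev = &s->head;
    s->head.next->prev = o;
    s->head.next = o;
}

/* Collector visits a reachable object: unlink it from the old set
 * and link it into the new set -- a constant-time "copy". */
void promote(Set *new_set, Obj *o) {
    unlink_obj(o);
    link_obj(new_set, o);
    o->in_new_set = 1;
}
```

After tracing, everything still linked on the old set is garbage, so reclamation is just a walk of the old list; no sweep over the whole heap is needed.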









Hard Real-Time Garbage Collection in Jamaica VM

Jamaica Virtual Machine (JamaicaVM in short) [SIE99] is designed as a Java virtual machine for real-time and embedded systems. Starting from a simple incremental mark-sweep garbage collection algorithm, it adopts several techniques to provide predictable execution time for garbage collection operations.

To make root scanning stop thread execution for only a bounded amount of time, there is a single root pointer in each stack. All other references that would otherwise be present in the stack are copied to the heap at synchronization points. Thus, a thread's execution is suspended only while the references in its stack are copied to the heap.

To avoid the overhead caused by handles, a new object layout is used. The heap is partitioned into blocks of fixed size to prevent the memory fragmentation problem of the mark-sweep algorithm. Working on single fixed-size blocks, regardless of the Java objects or their structure, helps implement small units of garbage collection work that can be done in incremental steps.

The garbage collector has to be able to distinguish reference values from non-references stored on the heap. To do this, a bit array large enough to hold one bit for every word on the heap is used, and all words that contain references have their corresponding bit set.
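Such a reference bitmap can be sketched as follows (one bit per heap word; the names are illustrative, not JamaicaVM's actual interface):

```c
#include <stddef.h>
#include <stdint.h>

#define HEAP_WORDS 1024

static uint32_t ref_bits[HEAP_WORDS / 32];  /* one bit per heap word */

/* Mark heap word `w` as holding a reference. */
void set_ref(size_t w)   { ref_bits[w / 32] |=  (uint32_t)1 << (w % 32); }

/* Mark heap word `w` as holding a non-reference (numeric) value. */
void clear_ref(size_t w) { ref_bits[w / 32] &= ~((uint32_t)1 << (w % 32)); }

/* Ask whether heap word `w` holds a reference. */
int is_ref(size_t w)     { return (ref_bits[w / 32] >> (w % 32)) & 1u; }
```

While scanning a block, the collector consults `is_ref()` for each word and follows exactly the words that are references; this is what makes the collection non-conservative.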

The gray objects that are changed through a write barrier are stored in a linked list so that a garbage collection cycle can be finished in constant time.


Scheduling a Garbage Collector without Interrupting Hard Real-Time Tasks

Semi-concurrent scheduling [HEN97, HEN98] integrates real-time scheduling with garbage collection to guarantee the time constraints of real-time applications. The algorithm runs an incremental copying garbage collector as a separate process in systems based on a preemptive process scheduler.

The approach assumes that a real-time system consists of a few hard real-time processes and a set of soft real-time processes. The basic idea of scheduling real-time garbage collection is to keep hard real-time processes from being interrupted by the garbage collection process. To do so, high priority is assigned to the hard real-time tasks. Garbage collection work is suspended during the execution of high-priority processes and resumed during their idle time. The remaining time is divided between the execution of low-priority processes with soft real-time demands and the garbage collection work motivated by those low-priority processes.

Heap memory for high-priority processes (MHP) must be reserved because memory may otherwise run out for newly allocated objects before a semi-space flip is due. The programmer must estimate how large MHP needs to be so that the garbage collector never interrupts high-priority processes. This analysis must be based on knowledge about each high-priority process, such as its period, worst-case execution time, and worst-case allocation demand. The amount of memory that must be reserved to satisfy mutators' memory requests during a garbage collection cycle is determined by the garbage collection's response time.

If most processes are high-priority processes, the garbage collection response time is long. The drawback of this algorithm is therefore that running the garbage collector only during the idle time of high-priority processes may require a large amount of memory to be reserved for them.









Scheduling a Garbage Collector Using a Sporadic Server

Kim et al. [KIM99, KIM00] integrate real-time scheduling algorithms with garbage collection work in order to guarantee time constraints for real-time tasks. As in the semi-concurrent scheduling of [HEN98], a garbage collector using copying collection is scheduled concurrently with multiple mutators (application tasks) running on an embedded real-time system.

For embedded real-time systems with limited memory, the memory constraint is as important a factor as guaranteeing the schedulability of real-time tasks. The garbage collector is executed as a sporadic task because garbage collection requests arrive irregularly yet have hard deadlines, to keep mutators from suffering memory starvation. A certain amount of memory must be reserved to serve memory allocation requests while garbage collection work is performed incrementally; the size of the memory reserved for this purpose depends on the worst-case response time of the garbage collector. The background approach, which serves such aperiodic requests at the lowest priority, is inefficient in terms of memory requirements because of the long response time of background execution.

Decreasing the worst-case response time of the garbage collector reduces the size of the reserved memory. For this purpose, a sporadic server with the highest priority serves garbage collection operations. With the sporadic server's budget carefully selected so that mutators do not miss their deadlines, the worst-case response time of a garbage collection cycle is computed and the memory requirement is decided. The proposed algorithm reduces the memory requirement by up to 44% compared to the slack-stealing scheduling approach.









Summary of Real-Time Garbage Collection

Ive [IVE00] points out that real-time garbage collection should achieve both predictable GC execution time and predictable free memory. To meet these needs, real-time garbage collection must be incremental, non-fragmenting, and non-conservative. Incremental garbage collection distributes the time spent on collection across the interleaved mutators' execution to obtain bounded pause times. Non-fragmented memory is obtained by compacting or copying memory into contiguous space. Maintaining reference-location information separately, so that reference values can be distinguished from numeric values, makes GC non-conservative [PRI00]. The predictability covers the amount of free memory available for allocation as well as the worst-case execution time of GC and a bounded time for memory allocation.

To summarize the efforts toward predictable GC execution time and predictable free memory: Wilson [WIL93] proposes maintaining two memory sets to make the sweep phase unnecessary in a non-copying algorithm, and using sets of different-sized chunks of memory for de-fragmentation. Siebert [SIE99] proposes moving the data of the root set into the heap to reduce the pause time due to root-set scanning, maintaining an extra data structure to distinguish reference values from numeric values, and using a fixed block size, regardless of Java object structure, for de-fragmentation and incremental GC.

Based on this predictability, Henriksson [HEN98] and Kim et al. [KIM99] schedule garbage collection together with real-time tasks. They derive the worst-case response time of garbage collection under their scheduling algorithms and the amount of memory that must be reserved during a garbage collection cycle.














CHAPTER 6
BEHAVIOR OF INCREMENTAL GC ON EMBEDDED SYSTEMS

This chapter describes the implementation of incremental garbage collection for KVM and the analysis of its execution behavior.

Even though an incremental GC is implemented in KVM, memory fragmentation and inexact reference identification keep this garbage collector from fully satisfying the requirements of a real-time garbage collector. Nonetheless, the experimental results on the execution time and overhead of garbage collection illustrate the important factors that affect the scheduling of a garbage collector with other tasks.


Implementation of Incremental GC for JavaVM


We chose Sun Microsystems' KVM for embedded systems as the JavaVM in our experiments. KVM was ported to the VxWorks OS, and its stop-the-world mark-sweep garbage collection (non-incremental GC in short) was modified to support incremental garbage collection.

Review of KVM

KVM is designed for small, resource-constrained devices with the aims of portability, simplicity, and clarity. To make KVM more suitable for small devices, a mark-sweep garbage collection algorithm that is non-moving, non-compacting, and handle-free is adopted. Since free memory is maintained as a free list, memory allocation takes an arbitrarily long time to search the list for a free memory block of suitable size. This becomes worse because of the memory fragmentation caused by a garbage collection that does not support compaction. Another problem is the conservative pointer identification used to check whether a value in an object is a pointer to another memory object or just a constant value; this conservative scanning causes an arbitrary amount of memory to be retained. Additionally, stop-the-world garbage collection makes all threads stop their execution while garbage collection operations are performed. If the real-time requirements of applications need not be considered, a stop-the-world mark-sweep garbage collection algorithm is quite proper for a small-memory system because it avoids the complexity and overhead of incremental garbage collection.

To enhance portability, KVM's thread system is implemented in software without using any thread facilities of the underlying platform. Threads are therefore scheduled by the JavaVM rather than by the underlying OS, and in this sense KVM's threads are not preemptive.

Given the KVM features introduced above, merely implementing incremental garbage collection in KVM is not enough to satisfy the garbage collection requirements of real-time applications. However, the data gathered from experiments with incremental garbage collection give insight into how to estimate a garbage collector's execution time and the overhead it imposes on all mutators.

Implementation of Incremental GC on KVM

As described in Chapter 2, incremental garbage collection is performed by interleaving its work with the mutators' execution. Since the interleaved execution of mutators, which share the same heap with the garbage collector, may make changes that affect an ongoing garbage collection cycle, a synchronization mechanism is








needed. Figure 5 shows the difference between the execution of non-incremental GC and

the incremental GC.


[Figure: top, non-incremental execution of Mutator1, Mutator2, Mutator3; bottom, mutator segments (M1, M2, M3) interleaved with GC increments (GC1, GC2).]


Figure 5. Operation of Non-Incremental GC and Incremental GC




Mark-sweep collection is composed of three phases: root-set scanning, marking, and sweeping. We implemented incremental garbage collection by dividing the marking and sweeping phases into several units that are performed sequentially. Root-set scanning, which traces the stacks of all threads, is done atomically.

For synchronization between the mutators and the garbage collector, Steele's write barrier is used to trap the Java instructions (bytecodes) that write pointers into other objects and to detect the 'black-white pointer' condition. Table 3 lists the Java bytecodes that may need a write barrier because of their pointer-update operations [VEN94].

While an incremental garbage collection cycle is carried out, mutators can issue memory allocation requests. To avoid memory starvation of the mutators during a garbage collection cycle, a proper amount of memory should be reserved.









Table 3. List of Java Bytecodes that Need a Write-Barrier

Bytecodes Description
aastore Store reference into array

putfield Set field in Object

putstatic Set static field in class

putfield_quick quick version of putfield

putstatic_quick quick version of putstatic


A memory allocation request triggers garbage collection when the size of available memory falls below a specified fraction (MEMORY_THRESHOLD) of the total heap size. The entire garbage collection cycle (GC_CYCLE) is composed of multiple small garbage collections (GC_INVOKE). The time taken by one GC_INVOKE is determined by the number of memory objects (MARK_COUNT) that are traced during the marking phase to check whether they are alive or garbage, and by the number of memory objects (SWEEP_COUNT) that are collected during the sweeping phase. Figure 6 shows the layout of the free memory list during garbage collection.

As described in Chapter 2, the reachability of objects decides whether they are alive. In Java, each thread has its own stack, but all threads share the memory heap. The local variables in each thread's stack and the global variables of system classes form the root set. Memory objects reached from the root set, directly or indirectly, are considered alive and are marked in the marking phase. The sweeping phase collects the garbage objects not marked in the marking phase and links them to the free memory list. A certain number of mutator instructions (INST_COUNT) are interleaved with each GC_INVOKE.









The algorithm is described by Algorithm 3 below. Root-set scanning is performed only once, at the beginning of a garbage collection cycle.


Algorithm 3. Mark-Sweep Incremental Garbage Collection

New() = {
    if (gGCStatus == DOING_GC && InstructionCount > NextGcTriggerInstCnt) {
        Gc();
        NextGcTriggerInstCnt += INC_GC_INST_CNT;
    } else if (free_size < HEAP_THRESHOLD) {
        Gc();
        NextGcTriggerInstCnt = InstructionCount + INC_GC_INST_CNT;
    }

    obj = AllocateFromFreelinks();
    if (obj == null) {
        gnCompleteGC = TRUE;
        Gc();                           // do GC until the GC cycle finishes
        gnCompleteGC = FALSE;
        obj = AllocateFromFreelinks();
        if (obj == null)
            return FALSE;
    }
    if (phase == DOING_GC)
        mark(obj);
    return obj;
}

Gc() = {
    switch (phase) {
    case MARK_READY_PHASE:
        markRootsObjs();
    case MARK_NON_ROOT_PHASE:
        markNonRootsObjs();
        if (marking is complete up to Heap_Space_Top)
            phase = MARK_SWEEP_PHASE;
    case MARK_SWEEP_PHASE:
        if (gnIncQueueCnt > 0)
            MarkIncQueueObjects();      // objects that must be traced again due to write-barrier
        sweepTheHeap();
        if (sweeping is complete up to Heap_Space_Top)
            phase = MARK_READY_PHASE;
    }
}

MarkIncQueueObjects() = {
    while (objects remain in GcIncQueue)
        markChildren(object);
}

Update(A, B) = {
    *A = B;
    if (ISMARKED(A) && ISNOTMARKED(B)) {   // write-barrier
        pushGcIncQueue(A);
        gnIncQueueCnt++;
    }
}

Limitations of Incremental GC's Implementation

We intended to implement incremental garbage collection with minimum modification of KVM's original memory management system. As a result, even though garbage collection in KVM is conducted incrementally, the inherent problems of KVM's memory management system remain: unbounded root-set scanning, conservative pointer identification, memory fragmentation, and arbitrary allocation time due to searching for a free memory block of suitable size.

Implementing incremental GC makes two of these problems worse. First, there is another form of conservatism, concerning when garbage objects are detected. To avoid multiple root-set scans during a garbage collection cycle, memory objects allocated during a cycle are considered alive; no new object can be reclaimed until the cycle following its death. Hence, an arbitrary number of memory objects are retained unnecessarily.









Second, the frequent GC_INVOKE invocations of incremental GC cause more memory fragmentation than non-incremental garbage collection. Additional fragmentation occurs when the memory reclaimed by GC is added to the free memory list.

Both issues result in longer search times for free memory of suitable size during the mutators' memory allocation.


Measurement of GC Behavior

The behavior of KVM's incremental GC is measured on an 80MHz PowerPC603e running VxWorks [VXW95]. As test applications, we use GCBenchmark, written by Hans Boehm of Hewlett-Packard; GCTest, by Pat Tullman of the University of Utah; and Benchmark, by William G. Griswold et al. of UCSD. All of these are benchmarks designed for garbage collection rather than real-world applications. Trivial changes, such as to the size of memory allocated and the APIs used, were made.

Specification of Three Test Applications

Table 4 describes the garbage collection behavior of each application when the heap is large enough to accommodate all memory allocation requests. 'Average Scan Length' in the table indicates the length of the search for a free memory block of suitable size.

With enough heap memory, there is no memory fragmentation caused by garbage collection; memory allocation always finds an object of the required size on the first try. These applications use dynamic memory allocation even when garbage collection does not need to be invoked, but when the free-memory scan length is 1, the allocation overhead is trivial. For this reason, we take the total execution time of an application shown in this table as its execution time under a static memory management system. This value is used to compute the overhead in the dynamic memory management system.




Table 4. Specification of Three Test Applications for GC Experiment
                                   GCBenchmark    GCTest   Benchmark
Heap Size (bytes)                       750000    850000      950000
Total Execution Time of App. (ms)         1111      2131        2299
Number of Classes Loaded                    64        67          77
Allocated Memory (bytes)                749964    823488      933488
Objects Allocated                        25137     19804       25345
Average Scan Length                          1         1           1
Bytecodes Executed                      385550   1054316      770811

Behavior of Non-Incremental GC and Incremental GC

The characteristics of GC measured under different heap configurations are shown in Table 5 for non-incremental GC and in Table 6 for incremental GC. They are measured with GCBenchmark; the other applications, Benchmark and GCTest, show similar results for the measured criteria.

The incremental GC measurements in Table 6 are conducted with MEMORY_THRESHOLD at 20%, MARK_COUNT at 2000, SWEEP_COUNT at 1500, and INST_COUNT at 1000. That is, garbage collection is triggered when the free memory falls below 20% of the total heap size. One garbage collection step (GC_INVOKE) continues until 2000 objects have been marked in the marking phase and 1500 objects have been swept in the sweeping phase. After a GC_INVOKE, the mutators run until the first memory allocation request that occurs after they have executed 1000 instructions. If memory runs out before 1000 instructions have finished, GC_CYCLE completes without interleaving the mutators.









For the non-incremental algorithm, 'Pause Time per Cycle' in Table 5 indicates the pause during which execution is stopped for garbage collection operations. It is also the time taken to complete a garbage collection cycle, because non-incremental garbage collection keeps running until it finishes its whole cycle. That the time of a garbage collection cycle increases proportionally with the heap size indicates that heap size is one of the factors determining the elapsed time of a garbage collection cycle in a mark-sweep algorithm. However, the total elapsed time for garbage collection decreases as heap size increases, because the frequency of garbage collection decreases with more memory. Figure 6 and Figure 7 show the elapsed time of a garbage collection cycle and the total elapsed time for garbage collection for varying heap sizes.




Table 5. Characteristics of Non-Incremental Mark-Sweep Garbage Collection with Varying Heap Size
Heap size (x 10000 bytes)          20    25    30    35    40    45
Pause Time     AVG (ms)            31    55    48    56    65    73
per Cycle      MAX (ms)            32    60    49    57    66    73
               MIN (ms)            30    49    48    56    64    73
# of GC Cycles                     10     6     3     2     2     1
Total Time Elapsed for GC (ms)    313   278   146   113   130    73
Overhead due to GC (ms)           797   961   792   769   752   573
App. Total Execution Time (ms)   2221  2350  2049  1993  1993  1757
Avg. Alive Objects               3817  4476  3920  3912  3923  3914
Avg. Garbage Objects             2217  4086  5546  7252  9023 10579
Avg. Scan Length                   84    96    79    77    64    59














Figure 6. Pause Time of a GC Cycle with Varying Heap Size


Figure 7. Total Time Elapsed for GC with Varying Heap Size










Table 6. Characteristics of Incremental Mark-Sweep Garbage Collection with Varying Heap Size
Heap size (x 10000 bytes)           20    25    30    35    40    45
Pause Time     AVG (ms)              3     3     3     3     4     4
per GC Invoke  MAX (ms)             10    10    11     9     9     8
               MIN (ms)              1     1     1     1     1     1
Avg. Pause Time per GC Cycle (ms)   28    37    33    49    61    68
# of GC Invokes                    216    89    59    51    45    33
Avg. # of GC Invokes per GC Cycle    8    10    12    13    15    17
# of GC Cycles                      27     9     5     4     3     2
# of Write-Barrier Checks         9843  4343  2958  2172  2095  1692
Total Time Elapsed for GC (ms)     760   337   221   199   185   137
Overhead due to GC/WB (ms)         754   988   866   863   910   708
App. Total Execution Time (ms)    2625  2436  2198  2173  2206  1956
Avg. Alive Objects                3784  3804  3601  3878  3916  3917
Avg. Garbage Objects               771  2168  3747  4883  6218  7575
Avg. Scan Length                    87    97    90    80    69    66


Similarly, for incremental GC, 'Pause Time per GC Invoke' in Table 6 is the pause time of each garbage collection step (GC_INVOKE) while the mutators' execution is interleaved. A GC cycle (GC_CYCLE) is composed of multiple small GC operations (GC_INVOKE). The total number of GC_INVOKEs in incremental GC can become considerably high. Since this increases the preemption cost, it is an extra cost that should be considered when employing incremental GC. We do not incorporate this cost in our measurements because it depends on the JavaVM architecture and the underlying platform.

According to Figure 6, the time elapsed for a garbage collection cycle in incremental GC is slightly less than that of non-incremental GC. Non-incremental GC is triggered only when free memory is completely exhausted or no suitable memory block remains in the free memory list, whereas incremental GC is triggered while available memory is still below a threshold. Incremental GC is therefore triggered with fewer memory objects allocated, and its marking and sweeping cover a smaller portion of the heap; it performs relatively less work per cycle than non-incremental GC. However, in incremental GC, garbage collection is triggered more frequently because of the smaller amount of available memory, the memory fragmentation problem, and the uselessly retained objects. This makes the total elapsed time for GC longer in incremental GC than in non-incremental GC (refer to Figure 7).

Unlike the average scan length in the experiment without garbage collection (Table 4), both non-incremental GC and incremental GC show significantly higher average scan lengths when searching for memory objects of suitable size for allocation requests. This shows that garbage collection without a compaction operation causes memory fragmentation. Incremental GC makes fragmentation worse because it triggers garbage collection more frequently and allocates memory while a garbage collection cycle is in progress. Longer scans for free memory blocks increase the mutators' overhead. Figure 8 and Figure 9 show that the total overhead and the scan length are tightly correlated.































Figure 8. Average Scanning Count for Free Object with Varying Heap Size



Figure 9. Overhead of Mutators due to GC













One significant cost of employing incremental GC that we should consider is barrier processing, which is used to solve the memory inconsistency that may arise between the execution of the garbage collector and the mutators. We adopt a write barrier for this purpose. The number of write-barrier checks in Table 6 indicates how many times the mutators executed, during a garbage collection cycle, one of the bytecode instructions listed in Table 3. It does not indicate the number of memory objects trapped as 'black-white pointers'; in fact, only part of the write-barrier checks satisfy the 'black-white pointer' condition. However, taking a broad view, we count the overhead of barrier processing over this whole range, since write-barrier checks also burden the performance of the mutators. The number of write-barrier checks increases with the number of interleaved mutator executions during garbage collection cycles, and the number of interleaved executions is proportional to the GC_INVOKE count. Figure 10 shows the relation between the number of write-barrier checks and the number of GC_INVOKEs. In Figure 9, the overhead of the mutators due to incremental GC should reflect the cost of write-barrier checks as well as the overhead of scanning for free memory blocks.

Comparing the average numbers of alive objects in Table 5 and Table 6 shows that this number is lower for incremental GC than for non-incremental GC. This reflects the conservative decision involved in determining alive objects.






















Figure 10. Number of WB Check according to GC Invoke Count




Garbage Collection Execution Time

As a first-order approximation, garbage collection time is tightly related to memory size in the mark-sweep algorithm. We can also hypothesize that the collection time depends on the numbers of alive and garbage objects. The garbage collection time per cycle is modeled by the following equation, where C is the garbage collection time per cycle, H is the heap size, L is the number of alive objects, and G denotes the number of garbage objects:

    C = B0 + B1*H + B2*L + B3*G        (A)

A regression analysis of the benchmark results gives the estimated parameters in Table 7.

















Table 7. Regressed Parameters for Garbage Collection Time per Cycle
       non-incremental GC      incremental GC
B0     -28.34                  -2.16
B1      37.12 (per MB)          54.21 (per MB)
B2       0.012                   0.005
B3       0.004                   0.004
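Equation (A) with the parameters of Table 7 can be evaluated directly; the sketch below simply plugs in values (heap size in MB, result in ms), with `gc_cycle_time` a name of our own.

```c
/* Equation (A): C = B0 + B1*H + B2*L + B3*G,
 * with H the heap size (MB), L the alive-object count, G the garbage-object count. */
double gc_cycle_time(double B0, double B1, double B2, double B3,
                     double H, double L, double G) {
    return B0 + B1 * H + B2 * L + B3 * G;
}
```

For example, with the non-incremental parameters and the 300,000-byte heap of Table 5 (H = 0.3, L = 3920, G = 5546), the model predicts roughly 52 ms against the measured 48 ms per cycle.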


Besides the heap size, the collection cost is dominated more by the marking phase than by the sweeping phase: linearly scanning the heap is generally less expensive than tracing data structures [JON94]. The number of alive objects, L, and the number of garbage objects, G, in equation (A) correspond to the elapsed time of the marking phase and of the sweeping phase, respectively. The difference between parameters B2 and B3 for non-incremental GC in Table 7 reflects this fact.

The heap-size parameter of incremental GC, B1, is noticeably larger than that of non-incremental GC in Table 7. To capture the extra cost of the incremental work in incremental GC, we would have to measure the relevant factors of that work and reflect them in the garbage collection execution time. Because of the difficulty of such measurements, these factors were not included; as a result, B1 appears inflated to absorb the extra cost of the incremental work. The incremental work may include the preemption cost brought on by the frequent invocation of garbage collection; the kinds of incremental work depend on the design of the garbage collector.









Mutator's Overhead due to GC

Mutators suffer from arbitrarily long searches due to the memory fragmentation caused by GC. Hence, for non-incremental GC, the overhead can be calculated from the scan length of the search for a free memory block, using the following equation. The parameter values obtained from this experiment are 5.93 for B0 and 4.0E-4 for B1.

    Overhead = B0 + B1 x (Average Scan Length) x (Number of Allocated Objects)        (B)

For incremental GC, we should additionally consider the overhead of the write barrier. The rate of occurrence of write barriers varies with the characteristics of the application. In our experiment, the overhead caused by the write barrier is not measured accurately; the parameter values obtained without considering the write barrier are 56.8 for B0 and 3.79E-4 for B1.
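Equation (B) can likewise be evaluated directly (a sketch; `mutator_overhead` is our name):

```c
/* Equation (B): Overhead = B0 + B1 * (average scan length) * (number of allocated objects). */
double mutator_overhead(double B0, double B1, double avg_scan_len, double n_alloc) {
    return B0 + B1 * avg_scan_len * n_alloc;
}
```

With the non-incremental parameters (B0 = 5.93, B1 = 4.0E-4), the scan length 79 of Table 5 and the 25,137 allocations of Table 4 give an overhead of about 800 ms, close to the measured 792 ms.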

As the most general solution to memory fragmentation, fixed-size memory blocks or sets of different-sized chunks of memory have been suggested [SIE99, WIL93]. However, fixed-size blocks may waste memory when an object smaller than the block size is allocated, so it is doubtful whether fixed-size blocks are a good solution for embedded systems with memory constraints.














CHAPTER 7
SCHEDULABILITY TEST OF REAL-TIME TASK SET USING GC

On the basis of the predictability of GC execution time and of free memory obtained by real-time garbage collection, as described in Chapter 5, GC can be scheduled through worst-case execution time analysis in real-time systems. Besides the common goal of real-time systems, guaranteeing the deadlines of all hard real-time tasks, integrating garbage collection with real-time scheduling adds one more constraint: no memory starvation during the garbage collection cycle. These two constraints require an analysis of the amount of reserved memory as well as of the schedulability of the real-time system.

One more important concern in scheduling GC is to minimize the amount of memory used, especially in embedded systems with resource constraints. Reducing the response time of GC is one solution, because the amount of reserved memory is proportional to the response time of GC. Ultimately, scheduling GC in embedded real-time systems has three goals: guaranteeing task deadlines, avoiding memory starvation, and minimizing memory size.

This chapter discusses scheduling algorithms appropriate for real-time applications using GC in embedded systems, along with the results of schedulability tests and the required memory sizes for three test cases.









Scheduling Background For GC

As described in Chapter 5, there are two approaches to scheduling a garbage collector as a real-time task. Henriksson [HEN98] proposes running the garbage collector during the idle time of hard real-time tasks. His scheduling algorithm divides real-time tasks into three priority groups: high priority for hard real-time tasks, middle priority for the garbage collection motivated by hard real-time tasks, and low priority for soft real-time tasks and the garbage collection motivated by them. Garbage collection motivated by hard real-time tasks operates during the idle time of the high-priority tasks; that is, garbage collection must not interrupt high-priority tasks. Similarly, Jbed [JBE99], a JavaVM for embedded devices, runs its garbage collector as a background task.

Kim et al. [KIM99] propose a sporadic server that is in charge of performing garbage collection. They treat garbage collection requests as aperiodic requests with hard deadlines, since the requests not only arrive unpredictably but must also complete on time to keep the mutators from suffering memory starvation.

The amount of memory needed for the mutators' interleaved allocation requests is proportional to the response time of a garbage collection cycle: the shorter the GC response time, the less memory must be reserved. Accordingly, the sporadic server is given the highest priority, reducing the GC response time as long as it does not cause hard real-time tasks to miss their deadlines. The worst-case response time of garbage collection is given by equation (A) below, where RGC, CGC, SSsize, and Tss are the worst-case response time of a garbage collection cycle, the execution time of a garbage collection cycle, the sporadic server's budget, and the sporadic server's period, respectively. CGC is the elapsed time to finish a garbage collection cycle; the response time accounts for the execution of the interleaved mutators until the cycle finishes.


RGC = ⌈CGC / SSsize⌉ × (Tss − SSsize) + CGC    -- (A)
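Equation (A) says the GC needs ⌈CGC/SSsize⌉ server replenishment periods, and in each one the mutators may run for (Tss − SSsize) before the budget is available again. A minimal numeric sketch (the helper name is ours, not from the thesis):

```python
import math

def gc_response_time(c_gc, ss_size, t_ss):
    """Worst-case response time of a GC cycle served by a sporadic
    server, per equation (A): ceil(C_GC / SS_size) replenishment
    periods of mutator interference plus the GC work itself."""
    return math.ceil(c_gc / ss_size) * (t_ss - ss_size) + c_gc

# Example 1's parameters: C_GC = 30 ms, budget 1 ms, period 10 ms
print(gc_response_time(30, 1, 10))   # 300 ms
```

The same function reproduces the 120 ms and 310 ms response times quoted later for Examples 2 and 3.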


Our Approaches to GC Scheduling

Garbage collection requests can be treated as periodic requests, given how the collector is triggered. KVM's incremental garbage collection is triggered by a memory allocation request once the amount of free memory falls below a specified threshold, which means the exact trigger time is hard to predict. If we set aside this threshold-based trigger design, garbage collection can be modeled as a periodic request stream. However, the synchronization overhead of the occasional unnecessary garbage collection this causes may not be negligible.
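The threshold-based trigger described above can be sketched as a simple predicate (a hypothetical helper; the 20% threshold and heap size are the values assumed in this chapter's examples):

```python
def should_trigger_gc(free_bytes, heap_bytes, threshold=0.20):
    """KVM-style trigger: start an incremental GC cycle when free
    memory falls below a fixed fraction of the heap (20% here)."""
    return free_bytes < threshold * heap_bytes

# With the 200,000-byte heap of the examples, GC triggers once
# free memory drops below 40,000 bytes.
```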

One of the significant advantages of garbage collection is efficient memory usage, which matters in memory-constrained embedded systems. This calls for a short GC response time, to reduce the memory that must be reserved for allocation during a garbage collection cycle. For this purpose it is appropriate to give the tasks serving garbage collection requests high priorities, as long as their execution does not cause real-time tasks to miss their deadlines.

Under the sporadic server algorithm, the server consumes its budget exactly like a periodic task, so its schedulability can be checked as if it were one.

For these reasons, Kim et al.'s approach [KIM99] meets the scheduling demands of KVM's garbage collection. To schedule real-time applications using GC, we adopt their idea that a sporadic server of the highest priority serves garbage collection requests arriving periodically. Using the worst-case GC execution time analysis based on Chapter 6's experiments, we show the schedulability test results for real-time task sets using GC.


Schedulability Test

For real-time applications to be allowed to use automatic memory management, both schedulability and the memory requirement must be checked, taking into account the execution time of the sporadic server for garbage collection and the overhead that incremental GC places on the applications.

Consider a task set for a schedulability test. Based on the analysis of GC execution behavior in Chapter 6, we derive the garbage collection execution time and the task set's overhead from each task's parameters: its period, execution time, amount of memory allocated, survival rate of memory objects, and the average scan length searched for a suitably sized free memory block.

The tasks are scheduled with fixed-priority, rate monotonic scheduling, which assigns priorities according to the rate of task instances. A sporadic server serves the garbage collection requests, which arrive periodically. The period (replenishment time) of the sporadic server is chosen so that it is the highest-priority task. Its deadline equals its period, because we assume enough memory exists to last until the next garbage collection request. The mutator overhead caused by incremental GC is treated as a blocking time and added to each task's execution time. All tasks except the sporadic server will henceforth be called mutator tasks. After this overhead is added to the mutator tasks' execution times, the execution time (budget) of the sporadic server is set as large as possible without violating the schedulability of the task set.

To know how much memory must be reserved during a garbage collection cycle, the worst-case response time of the garbage collector must be computed.

Another consideration in the schedulability test is the blocking time that may be caused by priority inversion. The mutator tasks and the sporadic server share the same memory area and some data structures, e.g., the stack holding gray-colored memory objects trapped by the write barrier for later tracing. While the sporadic server waits for a lower-priority task to finish processing the stack, middle-priority tasks that preempt that task may block the server. Protocols that prevent higher-priority tasks from being blocked by lower-priority ones, such as basic priority inheritance or the priority ceiling protocol, should be adopted when scheduling GC. We do not treat this case here because of time limitations.

With the scheduling described above, the three examples below show how garbage collection affects the schedulability of real-time task sets.

Schedulability Test Examples

To simplify the scheduling of tasks under automatic memory management, we make several assumptions: the memory threshold that triggers garbage collection is 20%; each task performs its memory allocation at the very beginning of each of its periods; JAVA objects are counted in fixed 32-byte units (this does not mean every memory object is 32 bytes); the survival rate of memory objects is 20%; no memory objects are retained conservatively; the heap size is 200,000 bytes; and the average scan length searched for a free memory object is 20.









Table 8. Symbols
Symbol   Description
τi       Periodic task
Ci, Ti   Execution time and period of τi
Ai       Amount of memory allocated during Ti
Oi       Number of objects allocated during Ti (object size is 32 bytes, so Oi = Ai / 32)
H        Heap size
HTH      Heap size at which the GC threshold is reached
Ni       Number of instances that have occurred before the GC trigger
NOi      Number of objects allocated during the Ni instances (NOi = Ni × Oi)
LOi      Number of live objects during the Ni instances (LOi = 0.2 × NOi)
GOi      Number of garbage objects during the Ni instances (GOi = NOi − LOi)
ROi      Ratio of a task's objects to the total number of objects (ROi = NOi / ΣNOj)
OVTi     Overhead accumulated on a task (OVTi = Overhead_GC × ROi)
OVIi     Overhead of each instance of a task (OVIi = OVTi / Ni)
RGC      Response time of a garbage collection cycle
ni       Number of active instances during RGC

Example 1: Schedulable but memory starvation

Table 9 shows a task set composed of four tasks. The deadline of each task equals its period, and the tasks are scheduled by rate monotonic scheduling. When the task set does not use automatic memory management, it is schedulable according to the Utilization Bound test (UB test for short).


Table 9. Example Task Set 1
     Ci   Ti    Ai (Bytes)   Oi
τ1   2    10    1350         43
τ2   4    30    2700         85
τ3   10   60    6750         211
τ4   15   200   10125        317









U = 0.575 < UB(4) = 0.757
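The UB test itself is easy to script. A small sketch (helper names are ours) using the Liu-Layland rate-monotonic bound n(2^(1/n) − 1):

```python
def utilization_bound(n):
    """Liu-Layland rate-monotonic utilization bound for n tasks."""
    return n * (2 ** (1 / n) - 1)

def ub_test(tasks):
    """tasks: list of (C_i, T_i) pairs. The set passes the UB test
    if its total utilization does not exceed the bound."""
    u = sum(c / t for c, t in tasks)
    return u, u <= utilization_bound(len(tasks))

# Example Task Set 1 without automatic memory management
u, ok = ub_test([(2, 10), (4, 30), (10, 60), (15, 200)])
print(round(u, 3), ok)   # 0.575 True
```

The UB test is sufficient but not necessary: a set that fails it may still be schedulable, which is why the examples fall back on the Response-Time test.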




Table 10. Example Task Set 1's Status after Triggering a GC Cycle
     Ni   NOi    LOi    GOi    ROi    OVTi (ms)   OVIi (ms)   Rounded-off OVIi (ms)
τ1   41   1763   352    1411   0.35   33.60       0.82        1
τ2   14   1190   238    952    0.23   22.08       1.58        2
τ3   7    1477   295    1182   0.29   27.84       3.98        4
τ4   2    634    128    506    0.13   12.48       6.24        7
Total     5064   1013   4051          96.00


To compute the execution time of a garbage collection cycle and the mutators' overhead, we need to know how many memory objects exist and how many of them are live. Table 10 describes the tasks' status, such as the number of instances (Ni) and the allocated memory objects, when the garbage collector triggers at the free memory threshold. At that moment the available free memory is 39,350 bytes (H − HTH = 200,000 − 160,650). Using Chapter 6's equation for the garbage collection execution time (CGC) and its equation for the mutators' overhead (Overhead_GC), we obtain the values below.



CGC = −2.16 + 5.17E-5 × 200,000 + 0.005 × 1013 + 0.004 × 4051 = 29.449 ≈ 30

Overhead_GC = 56.8 + 3.79E-4 × 20 × 5064 = 95.19 ≈ 96
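Plugging Table 10's totals into the fitted linear models quoted above can be sketched as follows (function names are ours; the coefficients are the ones from Chapter 6 used in this chapter):

```python
def gc_cycle_time(heap, live_objs, garbage_objs):
    """Fitted model for the execution time (ms) of one GC cycle:
    a function of heap size, live objects, and garbage objects."""
    return -2.16 + 5.17e-5 * heap + 0.005 * live_objs + 0.004 * garbage_objs

def mutator_overhead(avg_scan_len, total_objs):
    """Fitted model for the total mutator overhead (ms) caused by
    one incremental GC cycle."""
    return 56.8 + 3.79e-4 * avg_scan_len * total_objs

print(round(gc_cycle_time(200000, 1013, 4051), 3))   # 29.449 -> ~30 ms
print(round(mutator_overhead(20, 5064), 2))          # 95.19 -> ~96 ms
```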

OVTi is the overhead accumulated over all of a task's instances that have occurred by this moment. We obtain it by distributing Overhead_GC to each task in proportion to its number of allocated memory objects. OVIi is the per-instance overhead a task incurs from the incremental GC. As an example, OVTi and OVIi of τ1 are given below; τ1 ends up with a 1 ms overhead due to the incremental garbage collection.

τ1's OVTi = Overhead_GC × ROi = 96 × 0.35 = 33.6

τ1's OVIi = OVTi / Ni = 33.6 / 41 = 0.82 ≈ 1
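The per-task distribution can be sketched in one helper (a hypothetical name; the ratio is computed exactly here rather than rounded to two digits as in Table 10, which yields the same rounded-off values):

```python
import math

def per_instance_overhead(overhead_gc, n_objs, total_objs, n_instances):
    """Distribute the GC-cycle overhead to a task in proportion to
    the objects it allocated (RO_i), spread it over the task's
    instances (OVI_i), and round up to whole milliseconds."""
    ovt = overhead_gc * (n_objs / total_objs)   # OVT_i
    ovi = ovt / n_instances                     # OVI_i
    return math.ceil(ovi)

# tau_1 in Example 1: 1763 of 5064 objects over 41 instances
print(per_instance_overhead(96, 1763, 5064, 41))   # 1 ms
```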



In Table 11, "New Ci" is each task's execution time increased by its per-instance overhead (OVIi). τ0 is the sporadic server, whose execution time C0 is determined by considering the task set's schedulability.




Table 11. Example Task Set 1 Reflecting the Mutator Overhead and GC
     New Ci (ms)   Ti
τ0   1             10
τ1   3             10
τ2   6             30
τ3   14            60
τ4   22            200

U = 0.94 > UB(5) = 0.743

As shown in Table 11, the task set is re-formed with the sporadic server τ0 and the increased execution times (New Ci) of the mutator tasks. Although U now exceeds the utilization bound, the Response-Time test (RT test for short) indicates that the task set is still schedulable. As an example, consider the response time of τ4: the response time of its first instance is 168 ms, so it meets its 200 ms deadline. The other tasks meet their deadlines in the same way.


R0 = Σj Cj = 1 + 3 + 6 + 14 + 22 = 46










R1 = C4 + Σj ⌈R0 / Tj⌉ × Cj = 22 + ⌈46/10⌉ × 1 + ⌈46/10⌉ × 3 + ⌈46/30⌉ × 6 + ⌈46/60⌉ × 14 = 68

...

R6 = ... = 168
R7 = ... = 168 (converged)
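This fixed-point iteration can be automated; the sketch below (helper name is ours) implements the Response-Time test of Joseph and Pandya [JOS86] as used here, with each deadline taken to equal the period:

```python
import math

def response_time(tasks, i):
    """Response-Time test: iterate
    R = C_i + sum_j ceil(R / T_j) * C_j over the higher-priority
    tasks j until R converges or exceeds the deadline (= period).
    tasks: (C, T) pairs, highest priority first."""
    c_i, t_i = tasks[i]
    r = sum(c for c, _ in tasks[: i + 1])   # R0: one instance of each task
    while True:
        r_next = c_i + sum(math.ceil(r / t) * c for c, t in tasks[:i])
        if r_next == r:
            return r          # converged: worst-case response time
        if r_next > t_i:
            return None       # deadline missed -> not schedulable
        r = r_next

# Example Task Set 1 plus the sporadic server (Table 11): (New Ci, Ti)
tasks = [(1, 10), (3, 10), (6, 30), (14, 60), (22, 200)]
print(response_time(tasks, 4))   # 168 -> meets the 200 ms deadline
```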
The response time of a garbage collection cycle is computed according to [KIM99] by equation (C) below; with CGC = 30 ms, SSsize = 1 ms, and Tss = 10 ms, RGC = ⌈30/1⌉ × (10 − 1) + 30 = 300 ms.

RGC = ⌈CGC / SSsize⌉ × (Tss − SSsize) + CGC    -- (C)

The number of each task's active instances during the response time of a garbage collection cycle is ni = ⌈RGC / Ti⌉. Table 12 shows the number of active instances and the size of memory required during RGC.




Table 12. Size of Reserved Memory Required in Example Task Set 1
     ni   Mi (Bytes)
τ1   30   40500
τ2   10   27000
τ3   5    33750
τ4   2    20250
Total     121500

The total memory ΣMi that will be requested during RGC is 121,500 bytes. This is larger than the currently reserved memory of 39,350 bytes (H − HTH). Hence, even though this task set is schedulable, it will suffer memory starvation during a garbage collection cycle.
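The reserved-memory check can be sketched as follows (a hypothetical helper; the (Ti, Ai) pairs come from Table 9 and RGC from equation (C)):

```python
import math

def reserved_memory_needed(tasks_alloc, r_gc):
    """Sum, over all mutator tasks, the memory their active
    instances may allocate during one GC response time:
    n_i = ceil(R_GC / T_i), M_i = n_i * A_i."""
    return sum(math.ceil(r_gc / t) * a for t, a in tasks_alloc)

# Example 1: (T_i, A_i) pairs with R_GC = 300 ms
need = reserved_memory_needed(
    [(10, 1350), (30, 2700), (60, 6750), (200, 10125)], 300)
print(need, need > 39350)   # 121500 True -> memory starvation
```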









Example 2: Schedulable and enough memory

The example task set 2 in Table 13 is scheduled in the same way as example task set 1 and is schedulable according to the UB test when static memory management is used.


Table 13. Example Task Set 2
     Ci   Ti    Ai     Oi
τ1   2    20    1350   43
τ2   4    60    2700   85
τ3   10   100   6750   211
τ4   15   200   10125  317


U = 0.48 < UB(4) = 0.757

When the garbage collector is triggered at the free memory threshold, the available free memory is 39,350 bytes (H − HTH = 200,000 − 160,650). As with example task set 1, Table 14 shows the tasks' status at the time of the GC trigger. CGC and Overhead_GC are computed as follows.

CGC = −2.16 + 5.17E-5 × 200,000 + 0.005 × 1010 + 0.004 × 4046 = 29.414 ≈ 30

Overhead_GC = 56.8 + 3.79E-4 × 20 × 5056 = 95.12 ≈ 95


Table 14. Example Task Set 2's Status after Triggering a GC Cycle
     Ni   NOi    LOi    GOi    ROi    OVTi (ms)   OVIi (ms)   Rounded-off OVIi (ms)
τ1   32   1376   275    1101   0.27   26          0.81        1
τ2   11   935    187    748    0.19   18          1.64        2
τ3   7    1477   295    1182   0.29   28          4.00        4
τ4   4    1268   253    1015   0.25   24          6.00        6
Total     5056   1010   4046          95









Table 15 shows the task set reflecting the sporadic server and each mutator task's execution time increased by its overhead.




Table 15. Example Task Set 2 Reflecting the Mutator Overhead and GC
     New Ci   Ti
τ0   5        20
τ1   3        20
τ2   6        60
τ3   14       100
τ4   21       200

U = 0.745 > UB(5) = 0.743

With τ0 and the mutators' increased Ci, the set exceeds the utilization bound but is schedulable by the RT test. The following shows the RT test for τ4: the response time of its first instance is 79 ms, so it meets its 200 ms deadline. The other tasks meet their deadlines in the same way. For the response time of τ4,


R0 = Σj Cj = 5 + 3 + 6 + 14 + 21 = 49

R1 = C4 + Σj ⌈R0 / Tj⌉ × Cj = 21 + ⌈49/20⌉ × 5 + ⌈49/20⌉ × 3 + ⌈49/60⌉ × 6 + ⌈49/100⌉ × 14 = 65

R2 = ... = 79
R3 = ... = 79 (converged)








The garbage collection cycle's response time, computed by equation (C), is 120 ms. Table 16 shows the number of active instances and the size of memory required during a garbage collection cycle.




Table 16. Size of Reserved Memory Required in Example Task Set 2
     ni   Mi (Bytes)
τ1   6    8100
τ2   2    5400
τ3   2    13500
τ4   1    10125
Total     37125

The total, ΣMi = 37,125 bytes, is less than the currently reserved memory of 39,350 bytes (H − HTH). Thus the task set is schedulable and will not suffer memory starvation during a garbage collection cycle.

Example 3: Not Schedulable

Example task set2 in Table 16 is also schedulable according to UB test when

static memory system systems are used.




Table 17. Example Task Set 3
     Ci   Ti    Ai     Oi
τ1   2    10    1350   43
τ2   4    30    2700   85
τ3   10   60    6750   211
τ4   15   120   10125  317

U = 0.625 < UB(4) = 0.757

When free memory reaches the threshold, the available free memory is 30,575 bytes (H − HTH = 200,000 − 169,425). As with example task set 1, Table 18 shows the tasks' status at the moment of the GC trigger. CGC and Overhead_GC are computed as follows.

CGC = −2.16 + 5.17E-5 × 200,000 + 0.005 × 1068 + 0.004 × 4270 = 30.6 ≈ 31

Overhead_GC = 56.8 + 3.79E-4 × 20 × 5338 = 97.26 ≈ 98

Table 19 shows the task set reflecting the sporadic server and each mutator task's execution time increased by its overhead.


Table 18. Example Task Set 3's Status after Triggering a GC Cycle
     Ni   NOi    LOi    GOi    ROi    OVTi (ms)   OVIi (ms)   Rounded-off OVIi (ms)
τ1   40   1720   344    1376   0.32   31.36       0.784       1
τ2   14   1190   238    952    0.22   21.56       1.54        2
τ3   7    1477   296    1181   0.28   27.44       3.92        4
τ4   3    951    190    761    0.18   17.64       5.88        6
Total     5338   1068   4270          98


Table 19. Example Task Set 3 Reflecting the Mutator Overhead and GC
     New Ci   Ti
τ0   1        10
τ1   3        10
τ2   6        30
τ3   14       60
τ4   21       120

U = 1.008 > 1. Since the total utilization exceeds 1, the task set is not schedulable.


Summary of Schedulability Test

Recall the test environment, in which the memory threshold for GC is assumed to be 20% of the given 200,000-byte heap. Example 2 shows that the remaining free memory, slightly under 40,000 bytes, is enough to satisfy the allocation requests made during a garbage collection cycle. In Example 1, however, the 20% threshold is not enough. The difference in RGC between Examples 1 and 2 determines whether memory starvation occurs, since a shorter garbage collection response time requires less reserved memory.




Table 20. Summary of the Schedulability Tests of the Three Examples
Task Set    Old U   RGC (ms)   CGC (ms)   Overhead_GC (ms)   New U   Schedulability
Example 1   0.575   300        30         96                 0.94    Schedulable / memory starvation
Example 2   0.48    120        30         95                 0.745   Schedulable / no memory starvation
Example 3   0.625   310        31         98                 1.008   Not schedulable

These examples show how GC can be scheduled in real-time systems, and how GC scheduled as a sporadic server influences the schedulability and the memory requirements of real-time task sets.

To make GC schedulable at all, we made several assumptions before the tests. The first concerns the predictability of execution time: memory allocation time and GC execution time must be bounded so that the worst-case execution times of all tasks, including the GC task, can be analyzed. The second concerns the predictability of free memory: this includes predicting both the amount of memory reclaimed by GC and the amount of free memory available for allocation. The non-fragmentation and exactness required of real-time garbage collection algorithms underlie both issues. This indicates that GC cannot be scheduled without using real-time garbage collection algorithms.














CHAPTER 8
CONCLUSION

The portability and productivity of Java are attracting a new market: embedded devices. Java's automatic memory management is both an advantage, enabling efficient memory usage, and a disadvantage, threatening timely service in embedded real-time systems. This motivates scheduling the garbage collector together with the real-time tasks once real-time garbage collection is supported.

Our experiments on the execution behavior of GC in embedded systems show the impact of stop-the-world GC and incremental GC as follows. The execution time of GC is determined by the heap size, the number of live objects, and the number of garbage objects. Memory fragmentation causes long allocation times if the GC algorithm does not support compaction, which is an overhead imposed on the mutators. Incremental GC additionally imposes barrier-processing overhead on the mutators. The conservatism involved in deciding whether a value is a pointer and in choosing the moment of garbage reclamation causes garbage objects to be retained unnecessarily. This analysis leads to the requirements for real-time garbage collection.

To integrate garbage collection with real-time tasks, a sporadic server was chosen as the scheduling mechanism for the garbage collector. The schedulability tests show how GC can be scheduled and how it influences real-time systems.















REFERENCES


[BAK78] Baker, H.G.Jr., List Processing in Real Time on a Serial Computer,
Communications of the ACM, 21, 4, 280-293, Apr 1978.

[BOL00] Bollella, Greg, Brosgol, Ben, Furr, Steve, Hardin, David, Dibble, Peter, Gosling, James, and Turnbull, Mark, The Real-Time Specification for Java, The Real-Time for Java Experts Group, Addison-Wesley, Boston, 1-4, 2000.

[CAR99] Carnahan, Lisa, and Ruark, Marcus, Requirement for Real-Time Java
Extensions, Report from the requirement group for Real-Time
requirements for Java Platform, NIST, USA, 1999.

[EMB00] Technical Overview of EmbeddedJava Technology, Sun Microsystems, http://java.sun.com/products/embeddedjava/overview.html, 2000.

[HEN97] Henriksson, Roger, Scheduling Real-Time Garbage Collection,
OOPSLA'97 GC & MM Workshop, Atlanta, Georgia, USA, 1997.

[HEN98] Henriksson, Roger, Scheduling Garbage Collection in Embedded Systems,
Dept. of Computer Science, Lund University, 1998.

[HPC00] HP ChaiVM Internals, Hewlett-Packard Inc., http://www.chai.hp.com/, 2000.

[IVE00] Ive, Anders, Implications of Real-Time Garbage Collection in Java, Lund University, 2000.

[JBE99] Jbed VM, Esmertec Inc., http://www.esmertec.com/, 1999.

[JON94] Jones, Richard, and Lins, Rafael, Garbage Collection Algorithms for
Automatic Dynamic Memory Management, Wiley, New York, 513-646,
1996.

[JOS86] Joseph, M., and Pandya, P., Finding Response Times in a Real-Time
System, BCS Computer Journal, 29(5), 390-395, 1986.

[KIM99] Kim, T., Chang, N., Kim, N., and Shin, H., Scheduling Garbage Collector for Embedded Real-Time Systems, Proceedings of the ACM SIGPLAN 1999 Workshop on Languages, Compilers and Tools for Embedded Systems, Atlanta, Georgia, USA, 55-64, May 1999.

[KIM00] Kim, T., Chang, N., Kim, N., and Shin, H., Bounding Worst-Case Garbage Collection Time for Embedded Real-Time Systems, Proceedings of the 6th IEEE Real-Time Technology and Applications Symposium (RTAS'2000), Washington, DC, 46-55, May 2000.

[KVM00] CLDC and the K Virtual Machine (KVM), Sun Microsystems, Inc., http://java.sun.com/products/cldc/, 2000.

[LIU73] Liu, C. L., and Layland, J., Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment, Journal of the ACM, 20(1), 46-61, 1973.

[LIU97] Liu, Jane, Real-Time Systems, manuscript, University of Illinois at Urbana-Champaign, 1997.

[NIL95] Nilsen, Kelvin, High-Level Dynamic Memory Management for Object
Oriented Real-Time Systems, Workshop on Object-Oriented Real-Time
Systems, San Antonio, TX., Oct 1995.

[NIL98] Nilsen, Kelvin, Adding Real-Time Capabilities to the Java Programming
Language, Communications of the ACM, Volume 41, Jun 1998.

[OBE94] Obenza, Ray, Guaranteeing Real-Time Performance Using Rate
Monotonic Analysis, Embedded System Conference, 1994.

[PRIOO] Printezis, Tony, and Detlefs, David, A Generational Mostly-Concurrent
Garbage Collector, ISMM2000, Minneapolis, Minnesota, 2000.

[SIE99] Siebert, Fridtjof, Hard Real-Time Garbage Collection in the Jamaica
Virtual Machine, The Sixth International Conference on Real-Time
Computing Systems and Applications (RTCSA'99), Hong Kong, 1999.

[SPR89] Sprunt, B., Sha, L., and Lehoczky, J., Aperiodic Task Scheduling for
Hard-Real-Time Systems, The journal of Real-Time Systems, vol. 1, pp.
27-60, 1989.

[TAI99] Taivalsaari, Antero, Bush, Bill, and Simon, Doug, The Spotless System:
Implementing a Java System for the Palm Connected Organizer, Sun
Microsystems, Palo Alto, California, 2000.

[VEN99] Venners, Bill, Inside the Java 2 Virtual Machine, McGraw-Hill, New York, 1999.








[VXW95] VxWorks Programmer's Guide: Version 5.3, Wind River Systems, Alameda, California, 1995.

[WIL94] Wilson, P.R., Uniprocessor Garbage Collection Techniques, Technical
Report, University of Texas Austin, Jan 1994.

[WIL93] Wilson, P.R., and Johnstone, Mark S., Real-Time Non-Copying Garbage
Collection., Position paper for the 1993 ACM OOPSLA Workshop on
Memory Management and Garbage Collection, Washington DC, Sep
1993.















BIOGRAPHICAL SKETCH

Okehee Goh was born on December 2, 1967, in Pusan, Korea. She received her

Bachelor of Science degree in computer engineering from Pusan National University,

Pusan, Korea, in February 1992. She worked at the Computing Center of Seoul National

University from April 1992 to October 1994. She then worked as a senior software

engineer at the R&D group of Nowcom Corporation, Seoul, Korea, from November 1994

to March 1999. She joined the Department of Computer and Information Science and

Engineering at the University of Florida in May 1999 to pursue a Master of Science

degree, and since then has worked as a research assistant in the Real-Time System Lab of

the department. Her research interests include real-time scheduling, garbage collection in the Java virtual machine, and Internet services.



