Title: Reconfigurable computing : from satellites to supercomputers
Permanent Link: http://ufdc.ufl.edu/UF00094694/00001
 Material Information
Title: Reconfigurable computing : from satellites to supercomputers
Physical Description: Book
Language: English
Creator: George, Alan D.
Publisher: Alan D. George
Place of Publication: Gainesville, Fla.
Publication Date: July 18, 2007
Copyright Date: 2007
 Notes
General Note: Presented at Reconfigurable Systems Summer Institute, 2007; keynote address
 Record Information
Bibliographic ID: UF00094694
Volume ID: VID00001
Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.


Full Text













Reconfigurable Computing: From Satellites to Supercomputers

Alan D. George, Ph.D.
Director, NSF Center for High-Performance Reconfigurable Computing (CHREC)
Professor of ECE, University of Florida


RSSI, July 18, 2007
RECONFIGURABLE SYSTEMS SUMMER INSTITUTE





Outline


* Motivations, challenges, vision


* A new national research center


* Selected case studies


* Conclusions










Motivations, Challenges, Vision







Opportunities for HPRC?







What is a Reconfigurable Computer?

* System capable of changing hardware structure to address application demands
  - Static or dynamic reconfiguration
  - Reconfigurable computing, configurable computing, custom computing, adaptive computing, etc.
  - Often a mix of conventional & reconfigurable processing technologies (control-flow, data-flow)
* Enabling technology?
  - Field-programmable hardware (FPLDs)
* Applications?
  - Broad range: satellites to supercomputers!
  - Faster, smaller, cheaper, less power & heat, more versatile


[Figure: flexibility vs. performance spectrum spanning general-purpose processors, special-purpose processors (e.g. DSPs, NPs), reconfigurable computers (FPGAs), and ASICs]







When and where do we need RC?

* When do we need RC?
  - When performance & versatility are critical
    - Hardware gates targeted to application-specific requirements
    - System mission or applications change over time
  - When the environment is restrictive
    - Limited power, weight, area, volume, etc.
    - Limited communications bandwidth for work offload
  - When autonomy and adaptivity are paramount
* Where do we need RC?
  - In conventional HPC systems & clusters where apps are amenable
    - Field-programmable hardware fits many demands (but certainly not all)
    - High DOP, finer grain, direct dataflow mapping, bit manipulation, selectable precision, direct control over H/W (e.g. perf. vs. power)
  - In space, air, sea, undersea, and ground systems (HPEC)
    - Embedded & deployable systems can reap many advantages w/ RC






Vision for HPRC
HPC = High-Performance Computing
HPEC = High-Performance Embedded Computing
* Next frontier for high-speed computing
  - Based on new & emerging technologies in field-programmable hardware
  - Versatility of the CPU, horsepower of the ASIC, adaptive tradeoffs
  - Dual-paradigm computing: conventional and RC processing in tandem
  - Powerful approach for new performance levels in HPC
  - Versatile approach for high-speed embedded computing
* Major research & technology challenges in realizing full potential
  - Vertical gap between users and systems (semantics, productivity)
  - Horizontal gap between conventional and RC processing (architecture)
  - Infrastructure for HPC and HPEC environments (libraries & services)
  - Methods, standards, & tools for application/core portability (reuse)
  - Insight to influence next-generation FPLDs & systems (better targets)
* Many challenges best addressed via industry/university collaboration
  - Industry, government, & academe partners; linkage to standards groups






Bridging the Gaps

* Vertical Gap
  - Semantic gap between design levels
    - Application design by scientists & programmers
    - Hardware design by electrical & computer engineers
  - We must bridge this gap to achieve success
    - Better languages and environments to express parallelism of multiple types and at multiple levels
    - Better translators, libraries, run-time systems, target devices
    - Both evolutionary and revolutionary steps
  - Finding best balance of performance, productivity, portability
* Horizontal Gap
  - Architectures crossing the processing paradigms
    - Cohesive, optimal collage of CPUs, FPGAs, interconnects, memory hierarchies, communications, storage, et al.
    - Simple retrofit to conventional architecture? Future integration?






Traditional Computing Lessons?

* Good News
  - User programming model moved from ML (SDL?) to HLL
    - Productivity (abstraction), portability (device-independent)
  - CPUs redesigned as better targets; ISA convergence
    - Performance (ILP arch tailored for compilers), portability (x86)
  - Body of experience incorporated into optimizing compilers
    - Performance (transparent to user; productivity & portability)
* Bad News
  - Much easier for sequential programming than parallel
    - ILP heavily/transparently mined by device (pipelining, superscalar)
    - Witness major concerns re: multicore/multithreaded apps
  - Mythical parallelizing compilers
    - Complexities of parallel apps & archs beyond modern compilers
    - HPC languages aid design but fail in automating/parallelizing
    - Situation for HPRC is potentially more difficult to automate






A Research Challenge Stack

* Performance prediction
  - When and where to exploit RC?
* Performance analysis
  - How to optimize complex systems and apps?
* Numerical analysis
  - Must we throw DP floats at every problem?
* Programming languages & compilers
  - How to productively express & achieve parallelism?
* System services
  - How to support variety of run-time needs?
* Portable core libraries
  - Where cometh building blocks?
* System architectures
  - How to scalably feed hungry FPGAs?
* Device architectures
  - How will/must FPLD roadmaps track for HPC or HPEC?







Logistical Challenges

* Fragmented & proprietary set of vendor products
  - Natural for any emerging technology
  - Disconcerting for all but early adopters, risk takers
* C4 needed for ultimate success
  - Commitment, cooperation, collaboration, convergence
* Consortia and other partnerships are vital
  - Research consortia: academia + industry + government
    - e.g. NSF Center for High-Performance Reconfigurable Computing (CHREC)
  - Consortia for standards, practices, adoption
    - e.g. OpenFPGA
  - Catalytic initiatives, focused R&D teams
    - e.g. proposed new DARPA program on FPGA tools









A New National Research Center








What is CHREC?

* NSF Center for High-Performance Reconfigurable Computing
  - Pronounced "shreck"
  - Under development since Q4 of 2004 (LOI to NSF)
    - Lead institution grant by NSF to Florida awarded on 09/05/06
    - Partner institution grant by NSF to GWU awarded on 12/04/06
    - BYU and VT hopeful of partner institution grants in Q4 of 2007
  - Kickoff workshop held in Dec'06; CHREC operations began in Jan'07
* Under auspices of I/UCRC Program at NSF
  - Industry/University Cooperative Research Center
  - CHREC is supported by CISE & Engineering Directorates @ NSF
* CHREC is both a Center and a Research Consortium
  - University groups form the research base (faculty, students)
  - Industry & government organizations are research partners, sponsors, collaborators, and technology-transfer recipients






NSF's Model for I/UCRC Centers

[Figure: spectrum of research interaction, spanning basic research through applied research and development]







Objectives for CHREC

* Serve as first national research center in reconfigurable high-performance computing
  - Basis for long-term partnership and collaboration amongst industry, academe, and government; a research consortium
  - RC: from supercomputers to high-speed embedded systems
* Directly support research needs of our Center members
  - Highly cost-effective manner with pooled, leveraged resources and maximized synergy
* Enhance educational experience for a large set of high-quality graduate and undergraduate students
  - Ideal recruits after graduation for Center members
* Advance knowledge and technologies in this field
  - Commercial relevance ensured with rapid technology transfer






CHREC Faculty

* University of Florida
  - Dr. Alan D. George, Professor of ECE (UF Site Director)
  - Dr. Herman Lam, Associate Professor of ECE
  - Dr. K. Clint Slatton, Assistant Professor of ECE and CCE
  - Dr. Greg Stitt, Assistant Professor of ECE
  - Dr. Ann Gordon-Ross, Assistant Professor of ECE
  - Dr. Saumil Merchant, Research Scientist in ECE
* George Washington University
  - Dr. Tarek El-Ghazawi, Professor of ECE (GWU Site Director)
  - Dr. Ivan Gonzalez, Research Scientist in ECE
  - Dr. Mohamed Taher, Research Scientist in ECE
* Brigham Young University (pending approval by NSF)
  - Dr. Brent E. Nelson, Professor of ECE (BYU Site Director)
  - Dr. Michael J. Wirthlin, Associate Professor of ECE
  - Dr. Brad L. Hutchings, Professor of ECE
* Virginia Tech (pending approval by NSF)
  - Dr. Shawn A. Bohner, Associate Professor of CS (VT Site Director)
  - Dr. Peter Athanas, Professor of ECE
  - Dr. Wu-Chun Feng, Associate Professor of CS and ECE
  - Dr. Francis K.H. Quek, Professor of CS








21 Founding Members in CHREC


* Air Force Research Laboratory
* Altera
* Arctic Region Supercomputing Center
* Cadence
* Hewlett-Packard
* Honeywell
* IBM Research
* Intel
* NASA Goddard Space Flight Center
* NASA Langley Research Center
* NASA Marshall Space Flight Center
* National Cancer Institute & SAIC
* National Reconnaissance Office
* National Security Agency
* Oak Ridge National Laboratory
* Office of Naval Research
* Raytheon
* Rockwell Collins
* Sandia National Laboratories
* Silicon Graphics Inc.
* Smiths Aerospace (now GE Aviation)








Benefits of Center Membership

* Research and collaboration
  - Selection of project topics that membership resources support
  - Direct influence over cutting-edge research of prime interest
  - Review of results on semiannual formal basis & continual informal basis
  - Rapid transfer of results and IP from projects @ ALL sites of CHREC
* Leveraging and synergy
  - Highly leveraged and synergistic pool of funding resources
  - Cost-effective R&D in today's budget-tight environment
* Multi-member collaboration
  - Many benefits between members
  - e.g. new industrial partnerships & teaming opportunities
* Personnel
  - Access to strong cadre of faculty, students, post-docs
* Recruitment
  - Strong pool of students with experience on industry & govt. R&D issues
* Facilities
  - Access to university research labs with world-class facilities





CHREC & OpenFPGA

[Diagram: relationship between CHREC and the OpenFPGA community, linking research context and support, technology innovations, and production/utilization; diagram c/o Dr. Eric Stahlberg]





Education & Outreach

* CHREC is enabling advancements at all its sites
  - New & updated courses
  - Degree curricula enhancements
  - Student internship connections
  - Visiting scholars
* Example: new RC courses at Florida site
  - New undergraduate (EEL4930) & graduate (EEL5934) courses in RC starting Aug'07
  - Lectures, lab experiments, research projects
    - Fundamental topics
    - Special topics from research in CHREC
  - Supported by new RC teaching cluster
    - Sponsored by educational grants from Rockwell Collins & Altera
    - 12 workstations, each housing a PCIe card with Stratix-II FPGA









Selected Case Studies


1) Simulative Performance Prediction
2) Performance Analysis
3) Applications Studies
4) Device Architectures & Tradeoffs
5) Advanced Space Computing
6) DARPA Study on FPGA Tools






1) Simulative Performance Prediction

* Goals
  - Develop framework for simulative performance prediction of complex RC systems and apps
  - Facilitate fast system design tradeoffs
  - Explore design tradeoffs of complex, multi-paradigm systems & applications via modeling and simulation
* Challenges
  - Design a framework to accurately model a wide range of current and future RC systems and applications
  - Balance simulation speed and fidelity
* Simulation Framework
  - Framework divided into two domains: application domain and simulation domain
  - Framework allows arbitrary applications to be simulated on any arbitrary system
  - Model components & application scripts can be reused after initial development for rapid simulative analyses

[Figure: RC Simulation Framework, spanning application characterization and script generation on the application side, and model development, calibration, and output analysis on the simulation side]








Results Highlights

* Performance prediction from RC system models driven by RC application scripts
  - Scripts characterize high-level behavior of application through defining key events
  - Simulation speed balanced by abstracting away fine computation details
* Results from case study with Hyperspectral Imaging (HSI) illustrate framework capabilities
  - Analyze performance while varying numerous independent variables


#sample script
RC_INITFABRIC 1 55296 100.0E6
RC_CORECONFIG 1 Classify 3.84E6 200 1041 50000 8192 128 0 5000
#Initialization
MPI_Init
COMP 288.4118587 685880122 10850480 3147074
MPI_Send 0 1 MPI_INT 33554432 1 42
MPI_Recv 1 0 MPI_INT 1048576 1 43
#Loop of processing iterations on FPGA
RC_STARTLOOP 32768
RC_COREREQUEST 1 Classify 8192 0
COMP 15.4
RC_STOPLOOP
COMP 10300.0143
MPI_Recv 1 0 MPI_INT 2097152 1 91
COMP 9801.73
#Wrap up
MPI_Barrier 0 1 MPI_CHAR 64 1 11
COMP 5.922
MPI_Finalize

Sample RC Application Script
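
For readers unfamiliar with trace-driven prediction, the following is a minimal sketch of how an event script in roughly this format could be replayed to accumulate a coarse time estimate. It is an illustration only, not the CHREC simulation framework: the file name app.script is hypothetical, and treating COMP's first operand as a block of computation time and RC_STARTLOOP/RC_STOPLOOP as a repeated region are assumptions.

/* Minimal sketch of replaying an RC application script of the kind shown
 * above; an illustration only, not CHREC's simulation framework.
 * Assumption: the first numeric operand of a COMP event is a block of
 * computation time; MPI_* and RC_CORE* events would be dispatched to
 * communication and FPGA-core models in a real simulator. */
#include <stdio.h>
#include <string.h>

int main(void) {
    FILE *f = fopen("app.script", "r");      /* hypothetical script file name */
    if (!f) { perror("app.script"); return 1; }

    char line[256];
    double total = 0.0, loop_body = 0.0;
    long loop_count = 0;
    int in_loop = 0;

    while (fgets(line, sizeof line, f)) {
        char op[64]; double arg = 0.0;
        if (line[0] == '#' || sscanf(line, "%63s %lf", op, &arg) < 1)
            continue;                                  /* comment or blank line */
        if (strcmp(op, "RC_STARTLOOP") == 0) {         /* begin repeated region */
            in_loop = 1; loop_count = (long)arg; loop_body = 0.0;
        } else if (strcmp(op, "RC_STOPLOOP") == 0) {   /* charge region N times */
            total += loop_body * loop_count; in_loop = 0;
        } else if (strcmp(op, "COMP") == 0) {          /* pure computation      */
            if (in_loop) loop_body += arg; else total += arg;
        } else {
            /* MPI_* and RC_CORE* events: defer to communication and
             * FPGA-core models (omitted in this sketch). */
        }
    }
    fclose(f);
    printf("predicted computation time: %g\n", total);
    return 0;
}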



[Charts: speedup of HSI vs. number of nodes for 128x128 and 256x256 images. Projected speedup (vs. 3 GHz Xeon) on a cluster of XD1000 servers with EP2S180 FPGA via HyperTransport (left) and a cluster of Xeon servers with V4LX100 FPGA via PCI-X, from Nallatech (right).]






2) Performance Analysis


* Goals
  - Productively identify & remedy performance bottlenecks in RC applications (CPUs & FPGAs)
* Motivations
  - Complex systems difficult to analyze by hand
    - Manual instrumentation is unwieldy
    - Large volume of raw data is overwhelming
  - Tools to quickly locate performance problems
    - Collect & view performance data with little effort
    - Analyze performance data, identify bottlenecks
    - Critical for complex apps & systems in HPRC
* Challenges
  - How do we expand notion of software performance analysis into software-hardware realm of RC?
  - What are common bottlenecks for dual-paradigm applications?
  - What techniques are necessary to detect performance bottlenecks?
  - How do we analyze and present these bottlenecks to a user?

[Figure: measurement flow: instrument the original application, execute it in the execution environment, measure to produce a data file, analyze automatically and manually, present visualizations, identify potential bottlenecks, then optimize and iterate on the modified application]
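
On the CPU side of a dual-paradigm application, one standard low-effort way to collect such data is interposition through the MPI profiling interface; the generic sketch below (not the tool developed in this project) times every MPI_Barrier call and reports per-rank totals at MPI_Finalize, the kind of raw measurement such a tool would then analyze automatically.

/* Generic software-side instrumentation via the MPI profiling interface
 * (PMPI); shown as an illustration, not the CHREC analysis tool itself.
 * Linked ahead of the MPI library, it times every MPI_Barrier call without
 * modifying the application source. */
#include <mpi.h>
#include <stdio.h>

static double barrier_time  = 0.0;    /* total time blocked in MPI_Barrier */
static long   barrier_calls = 0;

int MPI_Barrier(MPI_Comm comm)
{
    double t0 = MPI_Wtime();
    int rc = PMPI_Barrier(comm);      /* forward to the real implementation */
    barrier_time += MPI_Wtime() - t0;
    barrier_calls++;
    return rc;
}

int MPI_Finalize(void)
{
    int rank;
    PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("rank %d: %ld barriers, %.6f s blocked\n",
           rank, barrier_calls, barrier_time);
    return PMPI_Finalize();
}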








What to Instrument in Hardware?


* Control
  - Watch state machines, pipelines, etc.
* Replicated cores
  - Under... parall... (remainder obscured by callout in original slide)
* Communication
  - On-chip
  - On-board
  - Off-board
  - System

More on this research will be presented at RSSI'07 on Friday by Seth Koehler, our doctoral student.

[Figure: instrumentation points across the system hierarchy: application cores and embedded CPUs within each FPGA, FPGAs and memory on each board, CPUs and FPGAs joined by a primary interconnect within each node, and nodes joined by a network; legend distinguishes traditional processors, FPGAs, and communication]








3) Applications Studies
[Cartoon caption: "Where's the beef?"]

* Goals
  - Develop understanding from case-study experience of decomposition & mapping strategies w/ complex apps
    - Scenario applications defined jointly with CHREC members
    - Hardware/software partitioning, co-design, optimization
  - Concomitantly explore complementary issues (HLL vs. HDL, design portability, numerical precision, etc.)
* Motivations
  - HPRC still in its infancy; need more lessons learned & insight w/ real apps
* Research Challenges
  - Multilevel algorithm partitioning, analysis, & optimization
  - Balancing performance with portability, precision, productivity
* Current Activities
  - Application design and evaluation
    - PDF estimation, LIDAR processing, multiscale data fusion, molecular dynamics
  - Development of RC Amenability Test (RAT), a simple speedup predictor (a sketch of the idea follows this list)
  - Design comparisons (HDL vs. HLL for same app)
    - e.g. LIDAR processing via AccelDSP vs. VHDL, molecular dynamics in Impulse C
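
The RAT formulation itself is not reproduced in these slides; the sketch below is a minimal speedup estimator in the same spirit, assuming a simple model where each iteration on the RC platform costs communication time (bytes moved at an effective interconnect throughput) plus computation time (operations at an effective processing rate), compared against a software baseline. All parameter values are illustrative placeholders, not RAT's published inputs.

/* Minimal RAT-style speedup estimate (illustrative model and numbers only;
 * not the published RAT formulation). */
#include <stdio.h>

int main(void) {
    /* Assumed per-iteration characteristics of a candidate design */
    double bytes_in    = 8.0e6;    /* data sent to the FPGA per iteration      */
    double bytes_out   = 1.0e6;    /* results returned per iteration           */
    double throughput  = 500.0e6;  /* effective interconnect bytes/s (assumed) */
    double ops         = 2.0e9;    /* operations per iteration                 */
    double ops_per_sec = 20.0e9;   /* effective rate of the FPGA design        */
    double iterations  = 100.0;
    double t_soft      = 40.0;     /* measured/estimated software time (s)     */

    double t_comm = (bytes_in + bytes_out) / throughput;   /* per iteration    */
    double t_comp = ops / ops_per_sec;                      /* per iteration    */
    double t_rc   = iterations * (t_comm + t_comp);         /* single-buffered  */

    printf("t_RC = %.2f s, predicted speedup = %.1f\n", t_rc, t_soft / t_rc);
    return 0;
}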







Ex: Probability Density Function (PDF) Estimation

* Background ("Necessity is the mother of invention.")
  - Compute-intensive problem with wide range of apps (e.g. image proc., machine learning)
  - Case study for RAT (RC Amenability Test): our methodology for quickly & efficiently estimating speedup of a specific top-level app design on a specific FPGA platform
  - Target platform: Xeon server hosting Nallatech H101-PCIXM card with V4LX100 FPGA and PCI-X interconnect
* 2-D numerical precision estimate
  [Figure: 2-D PDF estimate computed with 32-bit fixed point on the FPGA (left) vs. 64-bit floating point on the GPP (right)]
* Results
  - 2-D PDF resource utilization (kernels/core = 8; BRAM = 512 words):
      DSP48s: 16 of 96 (16%)
      BRAM:   36 of 240 (15%)
      Slices: 7272 of 49152 (14%)
  - RAT prediction: predicted speedup = 6.8
  - Error analysis (32-bit fixed vs. 64-bit float): max. % error = 0.12%
  - 1st board implementation (single core): t_soft = 158.75 s, t_RC = 34.57 s, speedup = 4.6
    - t_soft computed in C on a 3.2 GHz Intel Xeon processor using single-precision floating point
    - t_RC observed from first board implementation (90 MHz)
  - Multi-core (dual-core and beyond) designs are underway
  - Designed a scalable architecture for higher-dimensional PDF estimation & identified key design parameters
  - Investigating portability issues & formulating a design pattern as reference solution for future problems
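
As a generic illustration of the precision analysis above (not the project's actual numerics), the sketch below evaluates a toy Gaussian-kernel sum in double precision and in an assumed Q8.24 32-bit fixed-point format, then reports the maximum percent error between the two; the data set, bandwidth, and fixed-point format are all illustrative assumptions.

/* Generic fixed-point vs. double-precision error check for a Gaussian-kernel
 * sum; the Q8.24 format and toy data are illustrative, not the actual design. */
#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define FRAC_BITS 24                        /* assumed Q8.24 fixed-point format */
static int32_t to_fix(double x)  { return (int32_t)lround(x * (1 << FRAC_BITS)); }
static double  to_dbl(int32_t x) { return (double)x / (1 << FRAC_BITS); }

int main(void) {
    const double samples[] = { -1.2, -0.3, 0.1, 0.8, 1.5 };    /* toy data set */
    const int n = sizeof samples / sizeof samples[0];
    const double h = 0.5;                                       /* bandwidth    */
    double max_pct_err = 0.0;

    for (double x = -2.0; x <= 2.0; x += 0.1) {            /* evaluation grid   */
        double ref = 0.0; int64_t fix = 0;
        for (int i = 0; i < n; i++) {
            double u = (x - samples[i]) / h;
            double k = exp(-0.5 * u * u);       /* unnormalized Gaussian kernel */
            ref += k;
            fix += to_fix(k);                   /* accumulate quantized kernel  */
        }
        double err = fabs(to_dbl((int32_t)fix) - ref) / ref * 100.0;
        if (err > max_pct_err) max_pct_err = err;
    }
    printf("max %% error (Q8.24 vs. double): %.4f%%\n", max_pct_err);
    return 0;
}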








4) Device Architectures & Tradeoffs

* Goals: develop fundamental research foundation for comparative analysis and insight on RC & competing processing technologies
  - Study FPLD processing technologies (FPGA, FPOA, et al.), compare vs. alternatives
  - Develop models to quantitatively compare (speed, power)
  - Set stage to later explore new FPLD architectures to serve needs of key apps
* Motivations: comprehensive tradeoff analysis to determine a notional future roadmap for FPLDs to target needs of RC for HPEC and/or HPC
* Challenges
  - Application & kernel benchmarking on disparate suite of devices
    - Broad and complex range of design tools, architecture skills, etc.
  - Analytical modeling of resource, performance, & power characteristics; testbed experimentation to calibrate models
* Approach
  - Evaluate various RC & competing processing technologies
    - Altera Stratix-II/III FPGAs, Xilinx Virtex-4/5 FPGAs, MathStar FPOA, Monarch PCA, Cell Broadband Engine, AltiVec vector accelerator, PowerPC baseline (perhaps GPU in future)
  - Analyze benchmark results, formulate characterization methods, construct device characterization matrix & models => insight on key app/device mappings & tradeoffs







Preliminary Results


* Characterization Studies
  - Example: Computational Density
      γ = (ALU bit operations per cycle x frequency) / die area
  - Altera Stratix-II EP2S180:
      Die area: 40 mm x 40 mm
      Process technology: 90 nm
      Operations: 2.2 million/cycle
      Frequency: 450 MHz
      γ = 1,180
  - Broader suite of studies (e.g. Device Memory Bandwidth, Computational Intensity, etc.) is underway (a worked sketch of the density formula follows)
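
The sketch below simply plugs placeholder numbers into the density formula above to show its shape; the device parameters are not the measured EP2S180 values, and the reported γ = 1,180 presumably reflects the project's own conventions for counting ALU bit operations and normalizing die area, which may differ from the plain units used here.

/* Worked illustration of the computational-density formula
 * gamma = (ALU bit operations/cycle x frequency) / die area.
 * The device parameters below are placeholders, not measured values. */
#include <stdio.h>

int main(void) {
    double bit_ops_per_cycle = 1.0e6;     /* hypothetical peak ALU bit ops/cycle */
    double frequency_hz      = 400.0e6;   /* hypothetical achievable clock rate  */
    double die_area_mm2      = 400.0;     /* hypothetical die area in mm^2       */

    /* giga bit-operations per second per mm^2 of silicon */
    double gamma = bit_ops_per_cycle * frequency_hz / die_area_mm2 / 1e9;
    printf("computational density: %.0f Gops/s/mm^2\n", gamma);
    return 0;
}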


[Chart: theoretical computational density of Altera Stratix-II, Xilinx Virtex-4 SX55, Xilinx Virtex-4 LX100, Cell, and FPOA]


* Kernel Benchmarking
  - Example: 2D convolution

      Device    Speedup
      PPC       1
      Cell      3.7
      AltiVec   4.4
      FPGA      80
      FPOA      168

  - Using HPEC Challenge benchmarks et al. and retargeting them for devices under study
  - Note on Cell: multithreaded x6, not vectorized, on SPEs; best case projected @ 3.7 x 4.4 = ~16x speedup
  - 2D convolution specs: 8-bit signed integer numerics, 8-bit pixels, 3x3 mask size, 32K x 1K (32 MB) image size, sharpening filter
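
For reference, a scalar C baseline matching those specs might look like the sketch below; the particular sharpening mask and the output clamping are assumptions, and this is only the functional reference that the PPC, Cell, AltiVec, FPGA, and FPOA implementations would accelerate.

/* Scalar reference for a 3x3 sharpening convolution on 8-bit pixels; the mask
 * and edge/clamp handling are assumptions, shown only as a functional baseline. */
#include <stdint.h>

void convolve3x3(const uint8_t *in, uint8_t *out, int width, int height) {
    /* A common sharpening kernel (assumed, not specified in the slides). */
    static const int8_t mask[3][3] = { {  0, -1,  0 },
                                       { -1,  5, -1 },
                                       {  0, -1,  0 } };
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            int acc = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    acc += mask[dy + 1][dx + 1] * in[(y + dy) * width + (x + dx)];
            if (acc < 0)   acc = 0;            /* clamp to 8-bit output range */
            if (acc > 255) acc = 255;
            out[y * width + x] = (uint8_t)acc;
        }
    }
}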








5) Advanced Space Computing

* What is advanced space computing?
  - New concepts, methods, and technologies to enable and deploy high-performance computing in space for an increasing variety of missions and applications
* Why is advanced space computing vital?
  - On-board data processing
    - Downlink bandwidth to Earth is extremely limited
    - Sensor data rates, resolutions, and modes are dramatically increasing
    - Remote data processing from Earth is no longer viable
    - Must process sensor data where it is captured, then downlink results
  - On-board autonomous processing & control
    - Remote control from Earth is often not viable
    - Propagation delays and bandwidth limits are insurmountable
    - Space vehicles and space-delivered vehicles require autonomy
    - Autonomy requires high-speed computing for decision-making
* Why is it difficult to achieve?
  - Cannot simply strap a rocket to a Cray
  - Hazardous radiation environment in space
  - Platforms with limited power, weight, size, cooling, etc.
  - Traditional space processing technologies (RadHard) are severely limited
  - Potential for long mission times with diverse set of needs
    - Need powerful yet adaptive technologies






Example: NASA/Honeywell/UF Project
Dependable Multiprocessor (DM): 1st Space Supercomputer

* 1st space supercomputer
  - In-situ sensor processing
  - Autonomous control
  - Speedups of 100x to 1000x
  - First fault-tolerant, parallel, reconfigurable computer for space (NMP ST-8, orbit in 2009)
* Infrastructure for fault-tolerant, high-speed computing in space
  - Robust system services
  - Fault-tolerant MPI services
  - FPGA services
  - Application services
* Standard design framework
  - Providing transparent API to various resources for earth & space scientists

[Figure: DM system concept: instruments and a spacecraft interface feed a system controller and a reconfigurable cluster computer of data processors #1..#N (PPC, FPGA) on a high-speed network, plus mission-specific spacecraft interface devices]







Dependable Multiprocessor



* DM System Architecture
  - System controllers/managers
    - Redundant RadHard PPC boards
  - Data processing engines
    - COTS boards (PPC, FPGA, AltiVec)
  - Fault-tolerant (FT) infrastructure
    - Versatile dynamic mix: SIFT, NMR, ABFT, hybrid
* DM Middleware (DMM)
  - FT embedded MPI (FEMPI); a generic fault-handling sketch follows the legend below
  - FT system services
  - HA middleware
  - Apps & FPGA services

[Figure: DM software stack: on the hardened system controller, a Job Manager (JM) and Fault Tolerance Manager (FTM) run over reliable messaging middleware and a COTS OS and drivers, configured by mission-specific parameters; on each COTS data processor, the MPI application process with JMA, ASL, FCL, and FEMPI runs over reliable messaging middleware and a COTS OS and drivers; a COTS packet-switched network connects them]

Legend: JM = Job Manager; JMA = Job Manager Agent; FTM = Fault Tolerance Manager; FEMPI = Fault-Tolerant Embedded MPI; ASL = Application Services Library; FCL = FPGA Coprocessor Library
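
To give a flavor of what application-visible fault tolerance means for an MPI code, the generic sketch below uses only standard MPI error handlers (not the FEMPI API) so that a failed collective returns an error code the application, or a middleware layer such as DMM, can react to rather than aborting the job.

/* Generic illustration of application-visible MPI fault handling; this uses
 * only standard MPI calls and is not the FEMPI API itself. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* Default behavior is MPI_ERRORS_ARE_FATAL; switch to MPI_ERRORS_RETURN so
     * a communication failure surfaces as a return code the app can handle. */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    int rank, token = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int rc = MPI_Bcast(&token, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (rc != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING]; int len;
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "rank %d: broadcast failed (%s); invoking recovery\n",
                rank, msg);
        /* A fault-tolerant middleware layer would re-spawn or re-route work
         * here; this sketch simply reports and continues. */
    }

    MPI_Finalize();
    return 0;
}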








Dependable Multiprocessor


* Space Missions for DM
  - First is NMP ST-8 mission in 2009 for NASA/JPL
    - 6-month orbit, minimal configuration, technology proof of concept
    - HPRC system, but stripped (PPC clocks slowed, FPGAs removed, data network downgraded, etc.) to save cost, weight, power for test mission
  - Many potential opportunities for DM deployment & HPRC in space
    - Upcoming NASA missions and apps in space, such as:
      - Hubble Space Telescope Rescue
        - Autonomous rendezvous & capture of tumbling target (chaotic, uncooperative), as characterized by the hypothesized saving of HST nearing its end of life
        - NASA synthetic neural system code (c/o Dr. M. Rilee @ GSFC) for autonomous recovery is being ported & parallelized at UF for HPRC operation on DM system
      - Autonomous Disturbance Detection & Monitoring System (ADDMoS)
      - In-situ sensor processing for James Webb Space Telescope (JWST)
    - Upcoming DoD apps in space, such as:
      - High-Performance Space Surveillance
      - Operationally Responsive Space (ORS)







Artist's Depiction of ST-8 Spacecraft

[Image: artist's depiction of the ST-8 spacecraft, with the Dependable Multiprocessor (DM) experiment indicated]








6) DARPA Study on FPGA Tools


* CHREC invited to lead new study for DARPA (Sept-June)
  - Focus on R&D challenges for application development & execution on FPGA-based systems
  - Several activities
    - Identify taxonomy of tools & DOD use cases (HPC, HPEC, other)
    - Characterize limitations of existing tools, analyze technical challenges, & identify potential solutions
    - Explore & devise roadmap for future solutions & projected impact
    - Host workshop in 2008 to foster broader research discussion
  - Soliciting broad input

Creating a Research Agenda for FPGA Tools (CRAFT)
  I. Formulation
     (a) Algorithm design exploration
     (b) Architecture design exploration
     (c) Performance prediction (speed, area, etc.)
  II. Design
     (a) Linguistic design semantics and syntax
     (b) Graphical design semantics and syntax
     (c) Hardware/software codesign
  III. Translation
     (a) Compilation
     (b) Libraries and linkage
     (c) Technology mapping (synthesis, place & route)
  IV. Execution
     (a) Test, debug, and verification
     (b) Performance analysis and optimization
     (c) Run-time services

















Conclusions







Conclusions

* HPRC making inroads in ever-broadening areas
  - HPC and HPEC; from satellites to supercomputers!
* Currently, adopters are the brave at heart
  - Face weaknesses of design methods, tools, systems, devices, etc.
  - Fragmented technologies with gaps and proprietary limitations
* Research & technology challenges abound
  - Many R&D challenges lie ahead to realize full potential
  - Balancing the four Ps: performance, productivity, portability, precision
* Industry/university collaboration is critical to meet challenges
  - Incremental, evolutionary advances will not lead to ultimate success
  - Researchers must take more risks, explore & solve tough problems
  - Industry & government as partners, catalysts, tech-transfer recipients







Thanks for Listening!


* For more info:
  - www.chrec.org
  - george@chrec.org


[Acknowledgment text illegible in source scan]


* Questions?


[Banner: Founding Members of CHREC, with member logos; accompanying fine print illegible in source scan]





