Dept. of CISE, University of Florida, Gainesville, FL, 32611-6120, Technical Report, May 2005
Process Integration for the Building Construction Domain:
Experiences and Opportunities for Future Research
Jungmin Shin¹, Joachim Hammer¹, and William O'Brien²
¹Dept. of CISE, University of Florida
Gainesville, FL 32611-6120, USA
²Dept. of Civil, Architectural, and Environmental Engineering,
University of Texas at Austin,
Austin, TX 78712-0273, USA
wjob@mail.utexas.edu
There is a need for new information technology solutions that can help automate process
coordination and integration tasks among enterprises. Despite ongoing efforts by the Web
services, workflow, and data management communities to enable transactions across
loosely coupled, distributed systems, it is difficult and expensive to construct specific
implementations for all but simple or generic services. In this paper we describe the
development of a simple process connector to link two scheduling applications. Our goal is
to evaluate the strengths and weaknesses of existing technologies to support project
coordination in the building construction domain. We describe specific challenges that we
faced when building our prototype connector including the modeling of complex and
semantically rich processes and their interactions, especially in light of different levels of
detail and constraints that must be propagated across the processes. We report on some of
the lessons learned from implementing our prototype and identify challenges and
opportunities for future research.
1. Introduction
Efficient sharing of data and control among heterogeneous business processes in different enterprises remains a challenging and labor-intensive endeavor. As in the case of
integrating legacy data sources, it is difficult to match, translate, and merge heterogeneous process
specifications across enterprise boundaries. Hence there is a need for new information technology
solutions that can help automate process coordination and integration tasks. This need has been
documented in various industries such as manufacturing (e.g., "synchronized" supply chains )
and building construction (e.g., the Partnership for Advanced Technology in Housing
(www.pathnet.org)), as well as in various domains requiring coordination of tasks among disparate
agencies such as disaster management or homeland security.
In response, the computer industry is developing the Web services infrastructure (e.g., WSDL,
SOAP, .NET) to enable transactions across loosely coupled, distributed systems. In addition to
these frameworks, there are specifications for the exchange of process information, notably PSL
 and XPDL . These rich specifications are complex and the resulting implementations
cannot be easily configured (or reconfigured). Other relevant efforts to support process
collaboration from related communities include development of workflow management systems
(e.g., ) as well as data sharing tools (e.g., mediators  and legacy source wrappers ).
However, while these efforts provide the necessary foundations to realize (Web-enabled)
coordination among distributed sources, it is difficult and expensive to construct specific
implementations for all but simple coordination tasks.
In this paper, we describe the development of a simple process connector to link two
scheduling applications in the building construction domain. By process connector we refer to
a software module that enables the sharing of data and control flow information, i.e., invocation and
termination requests, among collaborating yet disparate processes. In our prototype, the process
connector links the schedule of a project manager, containing entries for all of the tasks that make
up the project, with that of a subcontractor performing electrical work. Since both schedules depict
related information at different levels of detail, using different representations, constraints, and
terminology, the process connector must mediate between these heterogeneities. In a sense, the
process connector becomes the executor of a newly defined "collaboration process". Through this
process, the process connector can communicate with other local processes and access certain data
elements. Generating process connectors efficiently and with minimal human involvement is a
principal aim of our project.
Besides validating the concept of a process connector and demonstrating how it works, an
additional goal of developing the prototype is to evaluate the strengths and weaknesses of existing
technologies to support project coordination. We describe specific challenges that we faced when
building our prototype connector including the modeling of complex and semantically rich
processes and their interactions, especially in light of different levels of detail and constraints that
must be propagated across the processes. We report on the lessons learned from implementing our
prototype and identify challenges and opportunities for future research. Finally, we provide our
experiences including recommendations as to which technologies are needed and how we plan to
contribute to the research including our plans to generate process connectors automatically.
2. Sample Scenario
In order to better illustrate the challenges faced when developing our process connector, we start by
introducing a simple collaboration scenario. In this scenario, two participants, a construction
manager (CM) and a subcontractor (SubA), collaborate on a building construction project. The CM
manages the overall project, and SubA is one of the subcontractors, installing the electrical systems.
Both CM and SubA use Microsoft Project™ for managing their schedules, two very small
samples of which are shown in Fig. 2-1 and Fig. 2-2. The schedules contain Task Name,
Duration, Start and Finish date of tasks, as well as Predecessor. Each schedule may
also contain other information such as cost, resource, and links between tasks. The CM has a
master schedule for the entire project and some of its tasks may refer to tasks in the schedules of its
subcontractors.
ID | Task Name           | Duration | Start       | Finish
1  | Electrical Rough In | 6 days   | Tue 7/20/04 | Tue 7/27/04
2  | Fixtures            | 5 days   | Wed 7/28/04 | Mon 8/2/04
Fig. 2-1. CM's Schedule
As we can see in Fig. 2-1, CM's schedule is composed of two tasks, "Electrical Rough In" and
"Fixtures". The "Electrical Rough In" task is scheduled to start on Tuesday, July 20th, and finish on
Tuesday, July 27th. This task is followed by the "Fixtures" task, which is scheduled to start on
Wednesday, July 28th and finish on Monday, August 2nd.
ID | Task Name           | Duration | Start       | Finish
1  | 0200 Fixtures       | 3 days   | Wed 7/28/04 | Fri 7/30/04
2  | 0300 Wiring Devices | 2 days   | Mon 7/26/04 | Tue 7/27/04
3  | 0500 Cable Tray     | 2 days   | Thu 7/22/04 | Fri 7/23/04
4  | 1300 Equipment      | 2 days   | Tue 7/20/04 | Wed 7/21/04
Fig. 2-2. SubA's Schedule
SubA's schedule in Fig. 2-2 is composed of four tasks, "0200-Fixtures", "0300-Wiring
Devices", "0500-Cable Tray", and "1300-Equipment". The "1300-Equipment" task is scheduled to
start on Tuesday, July 20th, and finish on Wednesday, July 21st. It is succeeded by "0500-Cable
Tray", which is scheduled to start on Thursday, July 22nd and finish on Friday, July 23rd, followed by
"0300-Wiring Devices", which is scheduled to start on Monday, July 26th and finish on Tuesday, July 27th,
and so on.
As we can see in the figures above, a task in CM's schedule can be related to one or more tasks
in SubA's schedule providing additional details. Relationships between tasks in different schedules
can be of cardinalities m:n, 1:n, or 1:1. For example, the "Electrical Rough In" task in CM's
schedule is related to three tasks in SubA's schedule namely "1300-Equipment", "0500-Cable
Tray", and "0300-Wiring Devices" (1:n type relationship). Once the three sub-level tasks in SubA's
schedule are finished, the corresponding master task "Electrical Rough In" in CM's schedule is
finished as well. From the simple schedules above, we can see that, for a given project, the
subcontractor SubA has a greater level of detail in its schedule than the project manager, CM.
Returning to our initial goal of supporting collaboration, the relationship among tasks and the
resulting differences in levels of detail in the two schedules cause coordination challenges in case
changes in one schedule must be propagated to the other. For example, if the "0300-Wiring
Devices" task in SubA's schedule is delayed by one day, the finish date in SubA's schedule would
change from Tuesday, July 27th to Wednesday, July 28th. This will impact the finish date of the
"Electrical Rough In" task in CM's schedule. However, before the changes can take effect in both
schedules (assuming SubA's own constraints can be met), existing constraints in CM's schedule
must also be checked. For example, in our scenario, CM's schedule shows a subsequent task
"Fixtures" which would have to be delayed accordingly. Among other things we need to check
whether "Fixtures" has any matching tasks in other schedules, which is the case in this example. In
the scenario depicted in Figs. 2-1 and 2-2, the "Fixtures" task in CM's schedule is related to the
"0200-Fixtures" task in SubA's schedule (1:1 type relationship). In this simple scenario, the one-
day delay of "0300-Wiring Devices" would have minimal impact on CM's schedule and may be
acceptable. However, in more realistic examples with many subcontractors (who in turn may
employ their own subcontractors) and many related tasks, such delays may cause a large ripple
effect through the project.
Given the hierarchical relationship that exists among the tasks in a project, operations for
managing schedule data like inserting, updating and deleting should be propagated to the affected
participants. Other challenges include the transformation of data and metadata from one
representation to another (e.g., 12 hour vs. 24 hour clock). Current practice relies on human input
to communicate the necessary changes and to solve the resulting constraint propagation problems.
The focus of this research project is to investigate efficient mechanisms to identify and link relating
tasks and propagate changes to the correct schedules with only minimal human involvement.
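The ripple effect described above can be illustrated with a small date calculation. The following is our own illustrative sketch (not part of the prototype), using the sample dates from Figs. 2-1 and 2-2: delaying the finish of "0300-Wiring Devices" by one day pushes the finish of "Electrical Rough In", which in turn pushes the earliest start of "Fixtures".

```java
import java.util.Calendar;
import java.util.GregorianCalendar;

// Illustrative sketch of the ripple effect: delaying a subtask pushes the
// parent task's finish date, which in turn pushes the successor task.
public class RippleSketch {
    // Return a copy of the given date shifted by the given number of days.
    static Calendar plusDays(Calendar c, int days) {
        Calendar r = (Calendar) c.clone();
        r.add(Calendar.DAY_OF_MONTH, days);
        return r;
    }

    public static void main(String[] args) {
        // "0300-Wiring Devices" originally finishes Tue, July 27, 2004.
        Calendar wiringFinish = new GregorianCalendar(2004, Calendar.JULY, 27);
        // A one-day delay moves it to Wed, July 28.
        Calendar delayed = plusDays(wiringFinish, 1);
        // "Electrical Rough In" cannot finish before its latest subtask,
        // and "Fixtures" starts the day after "Electrical Rough In" ends.
        Calendar roughInFinish = delayed;
        Calendar fixturesStart = plusDays(roughInFinish, 1);
        System.out.println("Fixtures can start no earlier than "
                + fixturesStart.getTime());
    }
}
```

In a real schedule the propagation must also respect SubA's own constraints, which is exactly what makes automated coordination hard.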
3. The Process Connector Prototype
In this section, we describe our process connector prototype linking the sample schedules of CM
and SubA as shown in Figs. 2-1 and 2-2 above. Its overall architecture is shown in Fig. 3-1. Our
process connector links the infrastructures of those entities participating in the collaboration and
includes its own collaboration processes. As described in the introduction, we use the term process
connector to refer to the software that enables process integration. In our initial prototype, the
process connector identifies matching tasks and checks date inconsistencies as a result of updates
to either schedule. We briefly describe the conceptual architecture in Sec. 3.1 and explain some of
the implementation details in Sec. 3.2. In Sec. 3.3, we explain how to use our prototype.
3.1 Description of Components
As we can see in Fig. 3-1, CM and SubA use Microsoft Project™ for managing their
schedules, which are stored in a local Schedule Information Repository. We
implemented a Web service in each individual infrastructure. This Web service provides an
interface for accessing the Microsoft Project™ schedule information of CM and SubA shown in
Fig. 2-1 and Fig. 2-2.
[Figure: conceptual architecture. CM and SubA each run MS Project with a Web service in front of their local schedule information; the Process Connector in the middle contains the Match Candidate Selector, which generates the matching information as XML, and the Date Value Checker.]
Fig. 3-1. The conceptual architecture of our sample prototype. Shaded components represent modules that
have been implemented as part of this work.
In order to identify matching relationships among tasks in the two schedules, we implemented
the Match Candidate Selector (MCS) module. MCS extracts schedule information from
the underlying two schedules through the corresponding Web services. It provides a GUI that lets a
construction expert resolve matching relationships and generate the corresponding XML.
Automatic matching of tasks is very challenging and related to the well-known
schema matching problem (see, e.g., ), which has attracted a lot of attention from the database community.
The relationship information is stored in the Matching Information Repository
(MIR). Currently, MIR is an XML file. We plan to use an XML DBMS in case the data volume
and query requirements warrant it. Please note that we will have more to say about the
representation of the relationship information in Sec. 3.2.
Our process connector currently consists of a single collaboration process, referred to as Date
Value Checker (DVC) in Fig. 3-1. DVC utilizes matching information, which was generated
by the MCS, and detects inconsistencies among date values of the matching tasks from different
participants. DVC propagates inconsistencies to the participants through a Web service of each
participant. Currently, our process connector produces alerts about inconsistencies rather than
making any changes to the underlying schedules.
3.2 Implementation Details
We have implemented a Web service in each participant using Visual Basic for Applications
(VBA)  as well as Web Service Description Language (WSDL) technologies . The Web
service enables Internet access and allows communication between a process connector and
participants. In addition, the Web service provides access to the schedule information, e.g., task
name, start date of a task, of CM and SubA. We chose VBA since it simplifies manipulating of
the data in a MS ProjectTM file, and WSDL because of its widespread use as a Web service
I C:\proconnector\CM.mpp O j I \proconnectoA\SuA.mpp OruiieJ
D 0200 Fixtures
0 0300 -Wiring Devices
B 0500 Cable Tray
Mapping I Seletion Reset Done
Fig. 3-2. The GUI of Match Candidate Selector
MCS extracts task information from the schedules of CM and SubA using the Web services
and generates an XML file, which stores matching relationships among interrelated tasks.
Identifying relationships among tasks is currently the responsibility of a domain expert, who can
use the GUI of MCS shown in Fig. 3-2. The matching information is stored in an XML file, a
sample of which is shown in Fig. 3-3. The GUI is implemented using Visual Basic for easy
connection with our Web service, and using Java to simplify the creation and parsing of XML.
Fig. 3-3 shows a portion of an XML document representing the fact that the "Electrical Rough
In" activity in CM's schedule is matched to three subactivities, "0300-Wiring Devices", "0500-
Cable Tray", and "1300-Equipment", in SubA's schedule. The schedule that has a greater level of
detail is termed a subflow, the other schedule a flow. We currently show 1:1 and 1:n matches
between the two schedules using a tree-structured representation for the subflows. In the future, we
can extend this representation to a graph to handle n:m relationships, for example, by introducing a
tag for grouping several tasks.
[Figure: XML fragment (markup lost in extraction) matching the flow "Electrical Rough In" to subflows including "0300 Wiring Devices" and "0500 Cable Tray".]
Fig. 3-3. Matching Relationship in XML
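Since the XML markup in Fig. 3-3 was lost in extraction, the following is a minimal sketch of how such a matching document could look. The element and attribute names (Matching, Match, Flow, Subflow) are our assumptions for illustration, not the prototype's exact schema:

```xml
<!-- Hypothetical sketch of matching.xml; element names are assumed -->
<Matching>
  <Match type="1:n">
    <Flow schedule="CM">Electrical Rough In</Flow>
    <Subflow schedule="SubA">0300 Wiring Devices</Subflow>
    <Subflow schedule="SubA">0500 Cable Tray</Subflow>
    <Subflow schedule="SubA">1300 Equipment</Subflow>
  </Match>
  <Match type="1:1">
    <Flow schedule="CM">Fixtures</Flow>
    <Subflow schedule="SubA">0200 Fixtures</Subflow>
  </Match>
</Matching>
```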
The DVC collaboration process, shown in Fig. 3-1, reads the matching relationship information from
an XML file and checks matching tasks for inconsistencies in their start and end dates. For example,
as described in Sec. 2, the subtasks "1300-Equipment", "0500-Cable Tray", and "0300-Wiring
Devices" in SubA's schedule should be executed between the start and finish dates of "Electrical
Rough In" task in CM's schedule.
Fig. 3-4. Process Diagram for Date Value Checker
DVC is composed of two routing activities and one regular activity, as can be seen in Fig. 3-4. The
"Check Date Info" activity is implemented as a Java application; the routing activities are
dummy activities that merely route control to the actual activity. A routing activity has neither a performer
nor an application, and its execution has no effect on workflow-relevant data or application data. In
our sample scenario, the "Check Date Info" activity parses the matching relationship information
stored in XML, checks the start and finish dates of matching tasks, and sends the resulting
inconsistencies to CM, the manager of the whole construction project, through a Web service.
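The date check performed by DVC can be sketched as follows. This is a simplified, stand-alone illustration of the rule (a subtask must lie within its parent task's date window), not the prototype's actual PropagateChanges code; the sample values are taken from the schedules in Sec. 2.

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

// Simplified sketch of the Date Value Checker logic (not the actual
// PropagateChanges.class): each subtask of a matched parent task must
// start no earlier and finish no later than the parent task.
public class DateValueCheckerSketch {
    static final SimpleDateFormat FMT = new SimpleDateFormat("MM/dd/yyyy");

    // Returns a human-readable inconsistency report, or null if consistent.
    static String check(String parent, String pStart, String pFinish,
                        String sub, String sStart, String sFinish) {
        try {
            Date ps = FMT.parse(pStart), pf = FMT.parse(pFinish);
            Date ss = FMT.parse(sStart), sf = FMT.parse(sFinish);
            if (ss.before(ps))
                return "Parent activity " + parent + "'s start date is later than subactivity "
                        + sub + "'s start date.";
            if (sf.after(pf))
                return "Subactivity " + sub + "'s finish date is later than parent activity "
                        + parent + "'s finish date.";
            return null;
        } catch (ParseException e) {
            throw new IllegalArgumentException("bad date format", e);
        }
    }

    public static void main(String[] args) {
        // "0300-Wiring Devices" finishes after "Electrical Rough In" does
        System.out.println(check("Electrical Rough In", "07/22/2004", "07/26/2004",
                                 "0300 Wiring Devices", "07/26/2004", "07/27/2004"));
        // "1300-Equipment" starts before "Electrical Rough In" does
        System.out.println(check("Electrical Rough In", "07/22/2004", "07/26/2004",
                                 "1300 Equipment", "07/20/2004", "07/21/2004"));
    }
}
```

The two reported inconsistencies correspond to the first two lines of result.txt in Fig. 3-10.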
The DVC process is described using the XML Process Definition Language (XPDL) .
We chose XPDL since it is a very flexible and semantically rich process description language and
also serves as the process interchange format in the workflow area . An activity in XPDL can
be implemented as a Tool, whose implementation is supported by one or more applications .
Any workflow engine that adheres to the specifications recommended by the Workflow
Management Coalition (WfMC) can execute process instances that are described in XPDL,
including our DVC process.
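To give a flavor of the process description, the following is a hypothetical, abbreviated outline of what the DVC package might look like in XPDL 1.0 style. The identifiers and structure are illustrative, not the actual file generated by JaWE:

```xml
<?xml version="1.0"?>
<!-- Hypothetical, abbreviated outline of DateValueChecker.xpdl -->
<Package Id="DateValueChecker" xmlns="http://www.wfmc.org/2002/XPDL1.0">
  <Applications>
    <!-- Later mapped, via the Shark admin GUI, to PropagateChanges.class -->
    <Application Id="CheckConstraintsApp"/>
  </Applications>
  <WorkflowProcesses>
    <WorkflowProcess Id="DateValueChecker_Worl">
      <Activities>
        <Activity Id="CheckDateInfo" Name="Check Date Info">
          <Implementation>
            <Tool Id="CheckConstraintsApp"/>
          </Implementation>
        </Activity>
        <!-- plus the two routing activities and their transitions -->
      </Activities>
    </WorkflowProcess>
  </WorkflowProcesses>
</Package>
```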
In summary, the process connector prototype that is based on the architecture shown in Fig. 3-1
requires the following tools and systems.
Shark Workflow Engine . This is one of the workflow engines that follow the
specification of the WfMC. This engine can execute the DVC collaboration process written
in XPDL. To execute the collaboration processes of our process connector, we set the
application name in the XPDL to the actual application program, which we implemented in a
specified directory.
Apache Axis Java Toolkit. Apache Axis is an Open Source SOAP server and client. SOAP
is a mechanism for inter-application communication between systems written in
arbitrary languages, across the Internet. Using the WSDL2Java command, our Web service
description is converted into Java classes, which can be used in Java client programs. Our Web
service is deployed using an XML deployment descriptor, which describes what the service is,
what methods it exports, and other aspects of the SOAP endpoint. In the end, a Java client,
PropagateChanges.class, can use our Web service, which is deployed in the
Apache Tomcat Web Server. Apache Axis assumes that there is a web server up and
running on localhost at port 8080; we use Tomcat for this purpose.
Visual Studio .NET Server. The components of each Web service were integrated using
Visual Studio .NET.
Apache Xerces2 Java Parser . We use this library to generate, parse, and traverse the XML
document matching.xml. The Apache Axis Java Toolkit also uses this parser.
JaWE Java Workflow Editor . This is an editor for generating a process description in
XPDL. It provides a GUI for drawing process diagrams, which are converted into XPDL.
We designed the DVC process using JaWE 1.4, producing the process diagram and
automatically generating the corresponding XPDL file.
3.3 How to Use the Prototype
In this section, we explain how to use our current prototype. We use two schedules in Fig. 3-5 as
our input schedules. The matching relationships are represented by red lines.
ID | Task Name           | Duration | Start       | Finish
1  | Electrical Rough In | 3 days   | Thu 7/22/04 | Mon 7/26/04
2  | Fixtures            | 2 days   | Wed 7/28/04 | Thu 7/29/04

ID | Task Name           | Duration | Start       | Finish
1  | 0200 Fixtures       | 3 days   | Wed 7/28/04 | Fri 7/30/04
2  | 0300 Wiring Devices |          |             |
3  | 0500 Cable Tray     |          |             |
4  | 1300 Equipment      |          |             |
Fig. 3-5. CM and SubA schedules
The following information items are required to run the prototype:
Two schedules written in Microsoft Project™ including Task Name,
Duration, Start, and Finish columns. The format of date information is
MM/DD/YYYY. (e.g., the schedules for CM and SubA shown in Figs. 2-1, 2-2, and
again in Fig. 3-5)
The corresponding DateValueChecker.xpdl file, shown in Appendix A,
generated by JaWE Java Workflow Editor
Application programs PropagateChanges.class, Traverse.class, and
DateInfo.class that we implemented for the "Check Date Info" activity in Fig. 3-4.
The PropagateChanges.class traverses an XML file that holds the matching
relationship information by using Traverse.class, checks the date information of tasks
by using DateInfo.class, and sends the result to the participants through a Web
service.
WSDL stub classes generated from a WSDL file (e.g., CMInfo.wsdl) for letting the
Java client application PropagateChanges.class use the methods of the Web service
of each participant.
The Java class DocBuilder.class for generating a matching relationship file shown
in Fig. 3-3.
A Visual Basic application, MCS, that provides a GUI for matching related
tasks in the participants' schedules.
Now, we explain the steps for executing our prototype. First, we execute MCS to match the
tasks in the two schedules shown in Fig. 3-5 using the GUI shown in Fig. 3-2. As shown in Fig. 3-2, after
opening the two schedules, CM.mpp and SubA.mpp, the tasks for each schedule will be displayed
in the corresponding listboxes. To identify matches, the user must check the corresponding tasks
from each listbox. For example, in Fig. 3-2 we see that the "Electrical Rough In" task on the left
side corresponds to the three tasks "0300-Wiring Devices", "0500-Cable Tray", and "1300-
Equipment" on the right. Save the mapping by pressing the Mapping button in the bottom left-hand
corner of the GUI. Once all tasks are matched, press the Done button. A terminal window will appear
for executing a Java application, DocBuilder.class, in order to construct the match
information document, matching.xml, shown in Fig. 3-3.
Fig. 3-6. Shark Workflow Engine login window
Next, we execute the process connector for checking the date consistency of matching tasks.
This requires the user to log in to the Shark Workflow Engine System using the connection
client shown in Fig. 3-6. When the GUI for the Workflow Engine administration appears (Fig. 3-7),
go to the Package management tab and press Load. In a new pop-up Load Package
into engine window, select DateValueChecker.xpdl and press Load again. Wait while
Shark Workflow Engine loads the DateValueChecker package into memory, and press Close.
Now the XPDL package is available and the user can instantiate processes from process definitions
contained in this XPDL.
[Figure: the admin GUI provides tabs for User management, Application mapping, Cache management, Worklist management, Repository management, Package management, Process instantiation management, and Process monitor; the package list shows Id DateValueChecker, Version 1, Name DateValueChecker with one process definition, and Load, Unload all versions, Unload, Update, and Sync package cache buttons.]
Fig. 3-7. Shark Workflow Engine admin interface after DateValueChecker package is loaded.
Go to the Application mapping tab in Fig. 3-7, and press Add. A new window shown in
Fig. 3-8 will be displayed. Select CheckConstraintsApp in the left panel, and select
org.enhydra.shark.toolagent.JavaClassToolAgent in the right panel. Populate
the Application name field in the right pane with V2\PropagateChanges, which is the
Java application we implemented for checking date inconsistencies.
[Figure: application mapping dialog. The left side shows Package Id: DateValueChecker, Process Definition Id: DateValueChecker_Worl, Application Id: DateValueChecker_Worl_App3, and Application name: CheckConstraintsApp. The right side lists the available tool agents (BshToolAgent, MailToolAgent, SOAPToolAgent, RuntimeApplicationToolAgent, and JavaClassToolAgent, which executes Java classes by calling their static methods), the Application name field set to V2\PropagateChanges, and Apply and Close buttons.]
Fig. 3-8. Application mapping window of Shark Workflow Engine
Finally, press the Apply button and then the Close button to close the mapping dialog. The XPDL
application definitions are now mapped to a real application, PropagateChanges.class. Now
that everything is prepared for the process connector execution, you can click the Process
instantiation management tab in Fig. 3-7 and press the Instantiate button after
selecting Process definition - CheckingProcess under Opened packages and
Package - DateValueChecker. This is shown in Fig. 3-9. As the process executes, the
result of DateValueChecker.xpdl is passed to CM's infrastructure and saved in the result.txt file
through a Web service method, passResult().
Fig. 3-9. Process instantiation management interface of Shark Workflow Engine.
The final outcome of the process connector execution is twofold:
1. The file matching.xml that stores the matching relationship among the tasks in the
two schedules (e.g., CM and SubA); and
2. The file result.txt (shown in Fig. 3-10) containing the result of date inconsistency
checking of matching tasks.
Subactivity 0300 Wiring Devices's finish date is later than parent activity Electrical Rough In's finish date.
Parent activity Electrical Rough In's start date is later than subactivity 1300 Equipment's start date.
Subactivity 0200 Fixtures's finish date is later than parent activity Fixtures's finish date.
Fig. 3-10. The result of our process connector execution stored in CM's infrastructure
4. Evaluation and Lessons Learned
We now evaluate our prototype based on the goals we mentioned in Sec. 1. Recall that our stated
goals are to develop a process connector prototype in order to validate the usefulness of the concept
and to better understand the underlying challenges. In addition, we were interested in learning
about the strengths and weaknesses of existing technologies to support project coordination.
We start by briefly evaluating the experiences we gained from developing the connector. Even
though our process connector works as expected, we spent a considerable amount of time
implementing a piece of software that has limited functionality. This does not include the time
spent for analyzing and choosing from among the myriad process integration technologies.
Web services are a platform-independent solution for providing application-to-application
communication over the Internet . This is very important given the heterogeneous infrastructures
of the participants. For example, once we implement an object with functions for accessing
schedule information, WSDL can be generated automatically from that object using existing tools.
By using WSDL for accessing schedule information, different applications, e.g., the application in
CM's infrastructure and the MCS in a process connector can communicate with each other
regardless of the platforms on which they run. However, for dynamic discovery of functions, we
need a way to interpret the WSDL. Recently, combining ontologies with WSDL has become a viable
option for automatic and dynamic discovery of function names.
In our prototype, we currently depend on a manual matching among the related tasks in
different schedules that have different levels of detail. The WS-Resource framework is a set of
Web service specifications and conventions designed to standardize the representation of, and provide
access to, stateful resources in a distributed environment . It is well suited for describing resources
as Web services. However, it says nothing about identifying relationships among different
resources. The workflow reference model considers hierarchical organizational relationships, but only
for the participants and not for the data . Thus, to our knowledge, none of the standards is helpful
in identifying or managing relationships that involve different levels of detail of process
information (or relationships among the data) in disparate entities.
In our prototype, we designed our own XML document for expressing the different levels of detail
among the data. If we want to propagate updates, e.g., changes to date values, we can navigate the
matching relationships using the traversal functions of the XML parser. XML namespaces
enable applications to share semantic elements in a document. However, as the level of nesting of
subcontractors, and hence of relationships among tasks, increases, managing the different levels of
detail gets very complex. As with other proposed Web service standards, we need a conceptual model or
language for describing the different levels of detail.
In our process connector, processes are described in XPDL. This XPDL file can be executed in
existing Workflow engines. In order to minimize our dependence on Workflow engines, we do not
use any other function except for the ability to execute XPDL. For example, sending and receiving
of data is performed by our Web service. For communication between the participants and the process
connector, we will need to add a control information communication gateway in the future. XPDL
abstracts from the concrete implementations or environment; thus these aspects are not of interest
at process definition time . Internally, the activities described in XPDL can be implemented in
the form of a Java application or a Web service (e.g., the DVC ). However, implementing
application programs or Web services takes time and requires expertise.
Finally, we found that the subflow concept of XPDL is helpful for designing our process
connector. Our process connector is an executable program consisting of one or more collaboration
processes. Hence a process connector can have an executing flow, which can be dynamic depending
on the requirements. We design the flow of a process connector with each collaboration process as an
activity. Then, each activity is implemented as a subflow with many other activities, which are
implemented in application programs. Using the subflow concept, we can design our process
connector in modular (hierarchical) fashion. In the end only the collaboration processes are shown
to the user of the process connector.
We conclude that the current prototype is not scalable across many collaborators with many
collaborative processes. As a result of our development efforts, we see the opportunity for future
research in at least three areas, which we briefly outline below.
We have implemented a Web service for obtaining schedule information from participants.
In our simple scenario, this was relatively straightforward since both participants used the
same application, Microsoft Project™, which provides a well-defined API. However, as
new participants using different applications may decide to join the project, we need to
develop a more scalable approach to generate an access module (adapter) for a participant.
Specifically, we need a tool for discovering the capabilities (e.g., API of the schedule
application in our scenario) provided by the participating infrastructure. This is related to
developing wrappers in data integration scenarios (see, for example, ). Also, the
desired tool needs to be able to translate data between the schemas of participants (e.g.,
units of time from hours to days, or from 12 hour to 24 hour clock).
Since there are different levels of detail in the different schedules, the MCS provides a GUI
with which a construction expert matches relationships across them. This task requires not only
domain expertise but also time and effort, and it must be repeated whenever, for example, a new
participant joins. Having a Web service does not make integration across multiple candidate
matches easy, since none of the Web service standards can express the relationships among the
data. We need a way to discover process definitions and capabilities, and to perform the mapping
tasks across participants, with less involvement from construction experts; that is, we need to find
matching relationships among the tasks in different processes by analyzing the participants'
process information.
In our prototype, we used a Workflow editor as our high-level process design tool and
implemented the processes enabling collaboration in Java. As the number of collaborative
processes increases, so does the amount of Java code that must be implemented.
Implementing Java applications or Web services takes time and is not trivial for non-
programmers. In addition to using XPDL as an overall process description language, we
need a specification language and corresponding ontology for describing the program logic
of an activity in an XPDL process description. We also need a suitable compiler to
translate the specifications into Java programs so that users do not need to write as much
low-level code as we did in our prototype.
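The following sketch shows the kind of low-level plumbing that such a compiler would have to generate instead of our hand-written code: it extracts activity names from an XPDL process description using the standard Java DOM API. The class and method names are illustrative, not part of our prototype:

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

// Illustrative sketch: list the Name attributes of the Activity elements
// in an XPDL 1.0 process description.
public final class XpdlActivityLister {

    public static List<String> activityNames(String xpdl) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xpdl)));
            NodeList nodes = doc.getElementsByTagName("Activity");
            List<String> names = new ArrayList<>();
            for (int i = 0; i < nodes.getLength(); i++) {
                names.add(((Element) nodes.item(i)).getAttribute("Name"));
            }
            return names;
        } catch (Exception e) {
            throw new RuntimeException("failed to parse XPDL", e);
        }
    }
}
```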
5. Related Work
Process integration is the science of managing the movement of data and the invocation of
processes in the correct, proper order to support the collaborative efforts of partner organizations
. Our process connector supports process integration by creating collaboration processes, which
can effect changes across the participating infrastructures.
The PSL is an emerging standard process specification language for exchanging process
information among manufacturing software applications. With the help of the formal semantic
definitions (the ontology), it serves as a translator among different applications that have different
semantics for the terms and concepts representing the information they exchange . PSL
extensions, one of the components of PSL, make it possible to define resources for expressing
information involving concepts that are not part of the PSL core. In , PSL is evaluated for its
applicability in the construction industry by adding extensions to the PSL ontology, such as an
organization module, a construction activity module, and a project module. After mapping
concepts between PSL and the construction applications, wrappers are used to retrieve
information from the applications and convert it into the PSL format. The wrappers are also used
to parse information from PSL files and transfer the regenerated information to other applications.
Similarly, in , information exchange among different ontology standards, i.e., PSL, ifcXML, and
aecXML, is implemented through an ontology mapping step using parsers and translators.
These research efforts are followed by work on integrating distributed software applications
as Web services , which employs a communication server and communication agents. The
communication server listens for requests from clients, including various applications and client
devices, and broadcasts received requests to the different communication agents. The
communication agents, which are connected to individual applications,
pick up the request and process it. In addition, an active mediator is built to act as an information
broker between the client devices and the information sources . The user can send a request
from a Web browser; updated results are returned to the mediator, which transforms them into a
form suitable for display on the client device. Any schedule updates made in a client device's
Web browser are thus passed to the applications through the communication server and the
communication agents, and the new schedule, reflecting those changes, is transferred back to the
client Web browsers. The update or rescheduling is simple, since subsequent tasks are merely
delayed in turn.
The FICAS metamodel  is chosen as a framework for composing information services into
megaservices. The key characteristic of the FICAS
metamodel is the explicit separation of control-flows from data-flows. FICAS consists of buildtime
and runtime components. The buildtime components are responsible for composing megaservices
and compiling megaservice specifications into control sequences that serve as inputs to the runtime
environments. The runtime components are responsible for the executions of the control sequences.
The CLAS language is defined for specifying megaservices, which are composed from
autonomous services. The megaservice controller in runtime environment is responsible for
carrying out the execution of a megaservice. The controller serves as a centralized coordinator for
all the control messages incurred by the megaservice.
PSL can potentially support various reasoning mechanisms beyond data exchange, such as
checking project information from different sources for inconsistencies . Project information
from different applications should agree; PSL can be used to check for consistency and to resolve
some of the conflicts. The theorem prover Otter (Organized Techniques for Theorem-proving and
Effective Research) is employed as the logic reasoning tool. In the example scenario, two groups
have schedule information in different formats; the two schedules should be identical, but they
are not. Using Otter, cycles in the dependency relationships caused by different orderings of
activities, as well as differing start dates and durations for the same activities, can be detected.
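For illustration, the class of inconsistency detected here, conflicting activity orderings, can also be exposed by a simple graph check once the precedence relations of both groups are merged. The following Java sketch is ours and does not reproduce the Otter-based approach:

```java
import java.util.*;

// Illustrative sketch: merge precedence relations (edges "from -> to") from
// two schedules and detect a cycle with depth-first search. A cycle means
// the two schedules order the same activities inconsistently.
public final class OrderingConsistency {

    /** True if the directed graph given as (from, to) edge pairs contains a cycle. */
    public static boolean hasCycle(List<String[]> edges) {
        Map<String, List<String>> adj = new HashMap<>();
        for (String[] e : edges)
            adj.computeIfAbsent(e[0], k -> new ArrayList<>()).add(e[1]);
        Set<String> done = new HashSet<>(), onPath = new HashSet<>();
        for (String v : adj.keySet())
            if (dfs(v, adj, done, onPath)) return true;
        return false;
    }

    private static boolean dfs(String v, Map<String, List<String>> adj,
                               Set<String> done, Set<String> onPath) {
        if (onPath.contains(v)) return true;   // back edge: cycle found
        if (done.contains(v)) return false;    // already fully explored
        onPath.add(v);
        for (String w : adj.getOrDefault(v, List.of()))
            if (dfs(w, adj, done, onPath)) return true;
        onPath.remove(v);
        done.add(v);
        return false;
    }
}
```

A full consistency check would also compare start dates and durations of matched activities; the cycle test covers only the ordering conflicts.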
Our research differs from these efforts in the following aspects. (1) In the work above, all
applications share the same project schedules, so there is no need, as in our case, to discover
hidden relationships among different project schedules; the focus is on using PSL as a neutral
standard for exchanging information among different applications. In our case, a construction
manager and subcontractors use different applications and have different schedules, which
express different perceptions, i.e., different levels of detail, of the same work, based on their
involvement and responsibilities. We need to find the hidden relationships among the tasks in the
different schedules. These hidden relationships must be formalized and stored so that a change in
one schedule can be propagated to all related schedules: a schedule change at one subcontractor
can affect the construction manager and the other subcontractors. We also need a specification
language that can describe the different levels of detail. (2) In the FICAS model, the CLAS
specification language is used not for specifying new
collaboration processes that will be executed over different schedules, but for composing
megaservices with existing autonomous information services. Megaprogramming is a technology
for programming by composing with large modules called megamodules rather than programming
from scratch . Megamodules are internally homogeneous, independently maintained software
systems managed by a community with its own terminology, goals, knowledge, and programming
traditions. Each megamodule describes its externally accessible data structures and operations and
has an internally consistent behavior. Megamodules are linked together according to composition
specifications to form megaservices. The role of CLAS is similar to other service composition
languages. It is assumed that individual autonomous information services exist already. Our
specification language, in contrast, enables computer-executable modeling of specific instances of interacting
processes over different applications and different schedules. Interacting processes or collaboration
processes, which are part of process connectors, are implemented in our specification language. (3)
In our case, even though each schedule can be translated into PSL, the translated PSL files would
need to be merged into a single PSL schedule containing explicit relationship information among
the tasks. Merging and reconfiguring PSL files in this way is not easy, and without merging them
we cannot detect inconsistencies with the Otter reasoning tool as in the research above, since the
individual PSL files contain no relationship information. Instead, we prefer to discover the hidden
relationships among the schedules, capture that new knowledge in our formal representation, and
use the information whenever it is needed.
Considerable research is being conducted in the Web services area. A service composition,
which is a similar concept to process integration, combines services following a certain
composition patterns to achieve a business goal, solve a scientific problem, or provide new service
functions in general. This provides a mechanism for application integration that seamlessly
supports cross-enterprise (business-to-business) and intra-business application integration .
However, process integration is a broader concept than service composition, since service
composition is based on service-oriented computing, which assumes that services are naturally
advertised for composition. Business Process Execution Language for Web Services (BPEL4WS)  is a
standard for Web service composition. WSDL descriptions of the related services are used by
the BPEL4WS process definition, which provides a mechanism for defining service compositions
in the form of choreographies of Web services: choreography consists of the aggregation of
services according to certain business rules. Web Service Choreography Interface (WSCI)  is
an XML-based interface description language that describes the flow of messages exchanged by a
Web Service participating in choreographed interactions with other services. The Business Process
Modeling Language (BPML)  is a meta-language for the modeling of business processes. BPML
provides an abstracted execution model for collaborative and transactional business processes
based on the concept of a transactional finite-state machine. BPML considers e-Business processes
to consist of a common public interface and as many private implementations as there are process participants.
BPML processes can be described in a specific business process modeling language layered on top
of the extensible BPML XML Schema. BPML represents business processes as the interleaving of
control flow, data flow, and event flow, while adding orthogonal design capabilities for business
rules, security roles, and transaction contexts.
In , the succession of technologies from the Internet and agents to XML, SOAP, UDDI, and
WSDL is examined, and the work that remains to be done is outlined. While EDI and similar systems
required expertise as well as much longer construction times, SOAP and WSDL are simple open
standards with plenty of tools. However, Web service related technologies are still inadequate for
dynamic discovery and integration of services. UDDI is a service directory, but it is structured to
provide only meta-metadata about services: UDDI merely holds a unique identifier for each
WSDL description. The WSDL description, which resides at the service provider's site, contains
metadata about the service, such as a description of what it does and how to use it. However,
automatically determining which services do what is not easy with current WSDL descriptions;
the users of these services must already know what each service does. Method names follow
C-style programming conventions, and discovering dynamically what services do will require
additional technologies.
The goal of ProcessLink  is to provide a technical infrastructure and methodology that
would allow globally distributed engineers, designers and their heterogeneous tools to work
together in structured coordination. The basic idea is to integrate people and software, perform
change propagation, and notify exactly those people that are affected by the change, and tell them
the effect of the change.
The distributed coordination framework in  for project schedule changes is based on an
agent-based negotiation approach wherein software agents on behalf of the human subcontractors
evaluate the impact of any changes in collaboration. Workflow is the automation of a business
process, in whole or part, during which documents, information or tasks are passed from one
participant to another for action, according to a set of procedural rules . However, such
workflow technology, though useful for document processing, is not useful for novel design among
distributed participants . Workflow is concerned more with defining a process than with
integrating processes across different infrastructures. In particular, workflow tools do not support
change propagation.
6. Conclusions
We have developed a prototype process connector for linking scheduling applications in the
construction domain. Specifically, we described our conceptual architecture and how we have
implemented a simple prototype using Web services and Workflow technologies. The goal was to
better understand the challenges that must be met when attempting to integrate collaborating
processes including matching of tasks, propagating changes across different levels of detail, and
overcoming heterogeneities in the representation of the underlying processes and their data.
Despite the myriad technologies, standards and tools, our experience has shown that there are at
least three opportunities for research and for developing methodologies that reduce human
involvement, for example in the form of hand-written low-level code.
Acknowledgments
This material is based upon work supported by the National Science Foundation under Grant
number CMS-0075407. Any opinions, findings and conclusions or recommendations expressed in
this material are those of the author(s) and do not necessarily reflect the views of the National
Science Foundation (NSF).
References
1. Tony Andrews, et al. Business Process Execution Language for Web Services (BPEL4WS) 1.1,
OASIS, May 2003. ftp://www6.software.ibm.com/software/developer/library/ws-bpel.pdf
2. BPMI. BPML: Business Process Modeling Language 1.0, Business Process Management
Initiative, March 2001. http://www.bpmi.org
3. Robert J. Brunner, et al. Java Web Services Unleashed. SAMS, 2002.
4. Rodrigo Castro-Raventós. Comparative Case Studies of Subcontractor Information Control
Systems. MS thesis, Building Construction Dept., University of Florida, August 2002.
5. Jinxing Cheng, Michael Grüninger, Ram D. Sriram, and Kincho H. Law. Process Specification
Language for Project Scheduling Information Exchange. International Journal of IT in
Architecture, Engineering and Construction, 1(4), pages 307-328, Dec. 2003.
6. Jinxing Cheng and Kincho H. Law. Using Process Specification Language for Project
Scheduling Information Exchange. Proceedings of the 3rd International Conference on
Concurrent Engineering in Construction, pages 63-74, Berkeley, CA, 2002.
7. J. Cheng, Kincho H. Law, and Bimal Kumar. Integrating project management applications as
Web services. Proceedings of the 2nd International Conference on Innovation in Architecture,
Engineering and Construction, Loughborough University, UK, June 25-27, 2003.
8. J. Cheng, Pooja Trivedi, and Kincho H. Law. Ontology mapping between PSL and XML-based
standards for project scheduling. 3rd International Conference on Concurrent Engineering in
Construction, Berkeley, CA, pages 143-156, 2002.
9. Erik Christensen, Francisco Curbera, Greg Meredith, and Sanjiva Weerawarana. Web Service
Description Language (WSDL) 1.1, World Wide Web Consortium (W3C), March 2001.
10. Karl Czajkowski, et al. The WS-Resource Framework 1.0, March 2004. http://www-
11. Enhydra.org. Java Workflow Editor (JaWE) 1.4, http://jawe.objectweb.org/
12. Enhydra.org. Enhydra Shark Workflow Engine. http://shark.objectweb.org/
13. Alon Y. Halevy, Zachary G. Ives, Dan Suciu, and Igor Tatarinov. Schema Mediation in Peer
Data Management Systems. Proceedings of the 19th International Conference on Data
Engineering, pages 505-518, Bangalore, India, 2003.
14. J. Hammer and W. O'Brien. Enabling Supply-Chain Coordination: Leveraging Legacy Sources
for Rich Decision Support. Applications of Supply Chain Management Research, E. Akcaly, et
al., editors, Kluwer Science Series in Applied Optimization, pages 1-47, 2004.
15. Joachim Hammer, William O'Brien, and Mark S. Schmalz. Scalable Knowledge Extraction
from Legacy Sources with SEEK. First NSF/NIJ Symposium on Intelligence and Security
Informatics (ISI 2003), pages 346-349, Tucson, AZ, 2003.
16. Keesoo Kim, Boyd C. Paulson Jr., Raymond E. Levitt, Martin A. Fischer, and Charles J. Petrie.
Distributed Coordination of Project Schedule Changes using Agent-Based Compensatory
Negotiation Methodology. AI EDAM, 17(2), pages 115-131, 2003.
17. Hau L. Lee and Seungjin Whang. E-Fulfillment: Winning the last mile of E-Commerce. MIT
Sloan Management Review, 42(4), pages 54-62, 2001.
18. David S. Linthicum. B2B Process Integration. eAI Journal, pages 50-56, October 2000.
19. David Liu, Jinxing Cheng, Kincho H. Law, and Gio Wiederhold. An Engineering information
service infrastructure for ubiquitous computing. ASCE Journal of Computing in Civil
Engineering, 17(4): 219-229, 2003.
20. D. Liu, K. H. Law, and G. Wiederhold. Data-flow Distribution in FICAS Service Composition
Infrastructure. Proceedings of 15th International Conference on Parallel and Distributed
Computing Systems, Louisville, KY, 2002.
21. Microsoft. Visual Basic for Applications (VBA).
22. M.P. Papazoglou and D. Georgakopoulos. Service-Oriented Computing. Communications of
the ACM, 46(10), pages 24-28, October 2003.
23. Charles Petrie. ProcessLink Coordination of Distributed Engineering. Center for Design
Research, Stanford University, August 1997.
24. Charles Petrie and Christoph Bussler. Service Agents and Virtual Enterprises: A Survey. IEEE
Internet Computing, 7(4), pages 68-78, 2003.
25. Craig Schlenoff, et al. The Process Specification Language (PSL) Overview and Version 1.0
Specification. NISTIR 6459, National Institute of Standards and Technology (NIST),
Gaithersburg, MD, 2000. http://www.mel.nist.gov/msidlibrary/doc/nistir6459.pdf
26. Gio Wiederhold. Value-added Mediation in Large-Scale Information Systems. Proceedings of
the 6th IFIP TC-2 Working Conference on Data Semantics: Database Applications Semantics,
pages 34-56, Atlanta, GA, 1995.
27. Gio Wiederhold, Peter Wegner, and Stefano Ceri. Toward Megaprogramming.
Communications of the ACM, 35(11), pages 89-99, November 1992.
28. Workflow Management Coalition (WfMC). Workflow Management Coalition Terminology &
Glossary 3.0, Feb. 1999. http://www.wfmc.org/standards/docs/TC-1011_term_glossary_v3.pdf
29. Workflow Management Coalition (WfMC). Workflow Process Definition Interface - XML
Process Definition Language (XPDL) 1.0, October 2002.
http://www.wfmc.org/standards/docs/TC-1025_10_xpdl_102502.pdf
30. Workflow Management Coalition (WfMC). Workflow Management System.
31. World Wide Web Consortium (W3C). Web Service Choreography Interface (WSCI) 1.0,
August 2002. http://www.w3.org/TR/2002/NOTE-wsci-20020808
32. Xml.Apache.org. Apache Xerces2 Java Parser. http://xml.apache.org/xerces2-j/
Appendix A. Sample DateValueChecker.xpdl file.
xmlns:xpdl="http://www.wfmc.org/2002/XPDL1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-
instance" xsi:schemaLocation="http://www.wfmc.org/2002/XPDL1.0 http://wfmc.org/standards/docs/TC-
Checking the date constraints of the input mapping information