======== Apologies for cross-posting =======

Tutorial to be held in connection with TPDL 2013, 22 September 2013,
Valletta, Malta http://www.tpdl2013.info/

From Preserving Data to Preserving Research: Curation of Process and
Context

 *ABSTRACT*

In the domain of eScience, investigations are increasingly collaborative.
Most scientific and engineering domains benefit from building on the
outputs of other research: by sharing information to reason over and data
to incorporate in the modeling task at hand. This raises the need for
preserving and sharing entire eScience workflows and processes for later
reuse. We need to define which information is to be collected, create means
to preserve it, and develop approaches to enable and validate the re-execution
of a preserved process. This includes, but goes beyond, preserving the data
used in the experiments, as the processes underlying its creation and use are
equally essential.

The TIMBUS and Wf4Ever projects team up for this half-day tutorial to
provide an introduction to the problem domain and to discuss solutions for
the curation of eScience processes.
1.     TUTORIAL LEVEL: Introductory level

2.     DURATION: Half-day

3.     OUTLINE OF THE CONTENT

The tutorial will cover the following topics:

*Introduction to Process and Context Preservation*: The introduction will
motivate the need for process and context preservation, illustrate how this
task is difficult in an evolving domain, and introduce a use case for the
rest of the tutorial to illustrate approaches and tools.

*Data Citation*: Data forms the basis of the results of many research
publications, and thus needs to be referenced with the same accuracy as
bibliographic data. Only if data can be identified with high precision can
it be reused, validated, verified and reproduced. Citing a specific data
set is, however, not trivial: data sets exist in a vast plurality of
specifications and instances, can be huge in size, and their location may
change. We will provide an overview of existing approaches to overcoming
these challenges. Further, we will discuss the issue of citing data held in
databases, especially dynamic data sets to which data is added or updated on
a regular basis.
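To make the challenge of citing dynamic data concrete, here is a minimal
illustrative sketch (an assumption added for this announcement, not material
from the tutorial itself): a citation record for a dynamic data set could pin
down the exact subset used by recording the selection query, the execution
time, and a hash of the result, so the same subset can later be re-identified
and verified. All names and the record layout are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def cite_query_result(query: str, rows: list) -> dict:
    """Build an illustrative citation record for a subset of a dynamic data set.

    Instead of copying the data, the record pins the selection query, the
    time of execution, and a hash of the returned rows, so the exact subset
    can later be re-identified and its integrity verified.
    """
    canonical = json.dumps(rows, sort_keys=True).encode("utf-8")
    return {
        "query": query,
        "executed_at": datetime.now(timezone.utc).isoformat(),
        "result_sha256": hashlib.sha256(canonical).hexdigest(),
        "row_count": len(rows),
    }

# Hypothetical example: cite the exact measurements used in an experiment.
rows = [{"station": "MLT-01", "temp_c": 21.4}, {"station": "MLT-02", "temp_c": 22.1}]
citation = cite_query_result("SELECT * FROM measurements WHERE day = '2013-09-22'", rows)
print(json.dumps(citation, indent=2))
```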

*Re-usability and traceability of workflows and processes*: The processes
creating and interpreting data are complex objects. Curating and preserving
them requires special effort, as they are dynamic and highly dependent on
software, configuration, hardware, and other aspects. We will discuss these
issues in detail, and provide an introduction to two complementary
approaches.

The first approach is based on the concept of Research Objects, which adopts
a workflow-centric view and thereby aims at facilitating reuse and
reproducibility. It allows packaging data and methods as one Research Object
that can be shared and cited, thus enabling publishers to grant access to the
actual data and methods that contribute to the findings reported in scholarly
articles.
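By way of illustration only, a Research-Object-style package can be thought
of as a manifest that aggregates the workflow, its data, and annotations
under a single citable identifier. The structure and field names below are
simplified assumptions made for this sketch, not the Wf4Ever specification,
which the tutorial presents in detail.

```python
import json

def build_research_object(ro_id: str, workflow: str, data: list, annotations: list) -> dict:
    """Assemble a simplified Research-Object-style manifest that aggregates a
    workflow, its data, and annotations under a single citable identifier."""
    return {
        "id": ro_id,
        "aggregates": [workflow] + data,
        "annotations": annotations,
    }

# Hypothetical example: package a workflow together with its inputs and outputs.
ro = build_research_object(
    ro_id="urn:example:ro:alignment-study-2013",
    workflow="workflows/alignment.t2flow",
    data=["data/reads.fastq", "results/alignment.bam"],
    annotations=[{"about": "workflows/alignment.t2flow",
                  "content": "Workflow used for the alignment step."}],
)
print(json.dumps(ro, indent=2))
```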

A second approach focuses on describing and preserving a process and the
context it is embedded in. The artifacts that may need to be captured range
from data, software and accompanying documentation, to legal and human
resource aspects. Some of this information can be automatically extracted
from an existing process, and tools for this will be presented. Ways to
archive the process and to perform preservation actions on the process
environment, such as recreating a controlled execution environment or
migrating software components, will also be presented. Finally, the challenge
of evaluating the re-execution of a preserved process will be discussed,
addressing means of establishing its authenticity.
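As a rough illustration of one narrow aspect of such an evaluation (a sketch
under assumed file layouts, not the evaluation framework presented in the
tutorial), a simple check is to compare the significant outputs of the
original run with those of the re-executed run.

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def outputs_match(original_dir: Path, reexecuted_dir: Path, outputs: list) -> bool:
    """Compare the significant output files of an original and a re-executed run.

    Identical digests indicate that, for these files at least, the preserved
    process still produces the same results; a full authenticity assessment
    involves far more than this.
    """
    return all(
        file_digest(original_dir / name) == file_digest(reexecuted_dir / name)
        for name in outputs
    )

# Hypothetical example: verify that a redeployed process reproduces key result files.
ok = outputs_match(Path("runs/original"), Path("runs/redeployed"),
                   ["results/summary.csv", "results/figure1.png"])
print("re-execution verified" if ok else "outputs differ - investigate")
```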
4.     INTENDED AUDIENCE

The tutorial is targeted at researchers, publishers and curators in
eScience disciplines who want to learn about methods of ensuring the
long-term availability of experiments forming the basis of scientific
research.
5.     EXPECTED LEARNING OUTCOMES

Tutorial participants will come to understand:

·       Motivations and challenges of process preservation

·       Motivations, stakeholders and challenges of making data citable

·       How Data is Cited Today: OECD [1] report on data citability, Google
search of data sets, requirements, guidelines, metadata, locators and
identifiers, approaches to naming schemes and properties.

·       Available technologies for identifiers: Archival Resource Key (ARK),
Digital Object Identifiers (DOI), Extensible Resource Identifier (XRI),
HANDLE, Life Science ID (LSID), Object Identifiers (OID), Persistent Uniform
Resource Locators (PURL), URI/URN/URL, Universally Unique Identifier (UUID)

·       Approaches and initiatives for citing data: CODATA, DataCite,
OpenAIRE; challenges and opportunities: granularity, scalability,
complexity and evolving data sets; current research questions

·       Ontologies needed to capture research objects: Core Ontology of the
RO family of vocabularies, workflow-centric ROs, provenance traces, life
cycle of research objects.

·       Wf4Ever Toolkit / technological infrastructure for the preservation
and efficient retrieval and reuse of scientific workflows: software
architecture, functionalities, software interfaces to functionalities,
reference implementation as services and clients:

~    Collect, manage and preserve aggregations of scientific workflows and
related objects and annotations

~    Workflow sharing through a social website

~    Execution of workflows

~    Testing completeness, execution, repeatability and other desired
quality features

~    Testing the ability of a Research Object to achieve its original
purpose after changes to its resources.

~    Recommendations of relevant users, Research Objects and their
aggregated resources

~    Converting workflows into Research Objects

~    Search for workflows by input parameters or frequency of use

~    Collaborative environment

~    Access and use of research objects and aggregated resources.

~    Synchronization with remote repositories

~    Visualization of correlation between similar objects

·       TIMBUS context model and tools to semi-automatically capture the
relevant context of a business process for preservation

~    The scope of context regarding business process preservation -
technology, application and business context, aligned with enterprise
architecture

~    The context meta-model, with domain independent and domain specific
aspects

~    Demonstration of a context model instance of example processes (in the
eScience domain)

~    Tools to automatically capture some parts of the context (software
dependencies, data formats, licenses, ...); a minimal illustrative sketch
follows this list

~    Outlook on reasoning and preservation planning, based on the context
model
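
To give a flavour of what such semi-automatic context capture might look like
(a minimal sketch assuming a Python-based process environment, not the TIMBUS
tooling itself), the snippet below collects platform details and the installed
packages with their versions and declared licenses, i.e. one small slice of
the technology context mentioned above.

```python
import json
import platform
from importlib import metadata

def capture_software_context() -> dict:
    """Collect a small slice of the technology context of a Python-based
    process: platform details plus installed packages with their versions
    and declared licenses."""
    packages = []
    for dist in metadata.distributions():
        info = dist.metadata
        packages.append({
            "name": info.get("Name"),
            "version": dist.version,
            "license": info.get("License", "unknown"),
        })
    return {
        "platform": platform.platform(),
        "python": platform.python_version(),
        "packages": sorted(packages, key=lambda p: (p["name"] or "")),
    }

# Example: serialise the captured context so it can be archived alongside the process.
print(json.dumps(capture_software_context(), indent=2))
```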