[Forwarded upon request. --ELM]


Dear colleagues,

You are invited to participate in the 1st Workshop on Scholarly Document
Processing (SDP 2020) to be held in conjunction with the 2020 Conference on
Empirical Methods in Natural Language Processing (EMNLP 2020) on November
19. The workshop will be held VIRTUALLY with EMNLP 2020.

*Important updates:*


   - The workshop will be held virtually on November 19. Details about mode
   of participation will be released closer to the workshop.
   - The new submission deadline for research papers is August 15, 2020.
   - The new deadline for shared task system runs is August 15, 2020.
   - All three shared tasks are still open for participant registration:
   https://ornlcda.github.io/SDProc/sharedtasks.html#register
   - We are delighted to announce that the workshop will feature two
   keynote speakers:
      - Kuansan Wang, Managing Director, MSR Outreach Academic Services
      - Steinn Sigurðsson, Scientific Director of arXiv, Professor in the
      Department of Astronomy & Astrophysics at The Pennsylvania State
University

*About the workshop:*

The SDP 2020 workshop will consist of a research track and three shared
tasks.

The shared tasks include the 6th edition of the CL-SciSumm shared task (
https://github.com/WING-NUS/scisumm-corpus/) and two new summarization
tasks -- CL-LaySumm and LongSumm -- geared towards easier access to
scientific methods and results.

SDP is led by the organizers of the BIRNDL (https://philippmayr.github.io/BIRNDL-WS/)
and WOSP (https://wosp.core.ac.uk/) workshop series.

Details about mode of participation will be announced later on our website
and Twitter.
Website: https://ornlcda.github.io/SDProc/
Twitter: https://twitter.com/sdproc


*Detailed call for papers:*

*** Introduction ***


Scholars have long faced the challenge of keeping up with the growing
literature in their own and related fields; they must now also compete
with malign pseudo-science and disinformation in informing public policy
and behavior. This has stimulated workshops and research focused on
enhancing search, retrieval, summarization, and analysis of scholarly
documents. However, the general research community on scholarly document
processing remains fragmented, and efforts towards natural language
understanding of scholarly text, which is central to vastly improving all
of these downstream applications, are not widespread.

To address these gaps, we propose the first Workshop on Scholarly Document
Processing. We seek to reach out to the broader NLP and AI/ML community to
pool distributed efforts to improve scholarly document understanding and
enable intelligent access to published research. The goal of SDP is
two-fold: to increase collaboration between communities interested in
leveraging knowledge stored in scholarly literature and data, and to
establish SDP as the primary dedicated venue for the field.


We encourage the mainstream NLP and ML community working on SDP tasks,
which are NLP tasks at their core, to publish at SDP as we work to
establish it as the premier integrated venue. We have established a
steering committee (https://ornlcda.github.io/SDProc/steeringcommittee.html)
to help us turn SDP into a conference in the coming years.


*** Topics of Interest ***

We invite submissions from all communities interested in natural language
processing, information retrieval, and data mining problems in scholarly
documents; and in processing scholarly documents for easier access to
various audiences. The topics of interest include, but are not limited to:

   - Information extraction, text mining and parsing scholarly literature
   - Reproducibility and peer review
   - Lay summarization (i.e., summaries created for non-experts) of
   individual and collections of scholarly documents
   - Discourse modeling and argument mining
   - Summarization and question-answering for scholarly documents
   - Semantic and network-based indexing, search and navigation in
   structured text
   - Graph analysis/mining including citation and co-authorship networks
   - Analysing and mining of citation contexts for document understanding
   and retrieval
   - New scholarly language resources and evaluation
   - Connecting and interlinking publications, data, tweets, blogs or their
   parts
   - Disambiguation, metadata extraction, enrichment, and data quality
   assurance for scholarly documents
   - Bibliometrics, scientometrics, and altmetrics approaches and
   applications
   - Other aspects of scholarly workflows including open access/science,
   and research assessment
   - Infrastructures for accessing scholarly publications and/or research
   data

*** The 6th Computational Linguistics Scientific Document Summarization
Shared Task (CL-SciSumm 2020) ***
(Organiser: Muthu Kumar Chandrasekaran)

CL-SciSumm is the first medium-scale shared task on scientific document
summarization, with over 500 annotated documents. Last year's CL-SciSumm
shared task introduced large-scale training datasets, both annotated (from
ScisummNet) and auto-annotated. For the task, systems were provided with a
Reference Paper (RP) and 10 or more Citing Papers (CPs) that all contain
citations to the RP, which they used to summarise the RP. Summaries were
evaluated against the abstract and human-written summaries using ROUGE.


The task is defined as follows:

*Given*: A topic consisting of a Reference Paper (RP) and Citing Papers
(CPs) that all contain citations to the RP. In each CP, the text spans
(i.e., citances) that pertain to a particular citation to the RP have been
identified.

*Task 1A*: For each citance, identify the spans of text (cited text spans)
in the RP that most accurately reflect the citance. These are of the
granularity of a sentence fragment, a full sentence, or several consecutive
sentences (no more than 5).

*Task 1B*: For each cited text span, identify what facet of the paper it
belongs to, from a predefined set of facets.

*Task 2 (optional bonus task)*: Finally, generate a structured summary of
the RP from the cited text spans of the RP. The length of the summary
should not exceed 250 words.
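
For readers new to the task, here is a minimal sketch of a naive baseline
for Task 1A above: it ranks Reference Paper sentences by TF-IDF cosine
similarity to a citance and returns the best match as the cited text span.
This is not an official baseline or evaluation script; the function name,
toy data, and use of scikit-learn are assumptions made only for
illustration.

# Naive Task 1A baseline sketch (illustrative only, not an official baseline).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def best_cited_span(rp_sentences, citance):
    # Fit TF-IDF on the RP sentences plus the citance so they share one vocabulary.
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(rp_sentences + [citance])
    rp_vectors, citance_vector = matrix[:-1], matrix[-1]
    # Cosine similarity between the citance and every RP sentence.
    scores = cosine_similarity(citance_vector, rp_vectors)[0]
    # Return the single most similar RP sentence as the predicted cited text span.
    return rp_sentences[scores.argmax()]

# Toy example (invented data):
rp_sentences = [
    "We propose a neural model for citation-based summarization.",
    "Our experiments use the ACL Anthology corpus.",
]
citance = "Smith et al. introduce a neural approach to summarizing papers."
print(best_cited_span(rp_sentences, citance))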

This year, CL-SciSumm '20 will have two new tracks: *LaySumm* and *LongSumm*.

*** CL-LaySumm 2020: The 1st Computational Linguistics Lay Summary
Challenge Shared Task ***
(Organisers: Anita De Waard, Ed Hovy)

To ensure and increase the relevance of science for all of society, and
not just for a small group of niche practitioners, researchers are
increasingly asked by funders and publishers to outline the scope of their
research for the general public by writing a summary for a lay audience,
or lay summary. The LaySumm task explores automating this responsibility
by enabling systems to generate lay summaries. A lay summary explains,
succinctly and without technical jargon, the overall scope, goal and
potential impact of a scholarly paper.

The corpus for this task will comprise full-text papers with lay summaries,
in a variety of domains, and from a number of journals. Elsevier will make
available a collection of lay summaries from a multidisciplinary collection
of journals, as well as the abstracts and full text of these journals.

The task is defined as follows:

*Given*: A full-text paper, its abstract, and its lay summary

*Task*: For each paper, generate a lay summary of the specified length


*Evaluation*

The Lay Summary Task will be scored using several ROUGE metrics to compare
the system output with the gold-standard lay summary. As a follow-up to
this intrinsic evaluation, we will present a number of automatically
generated lay summaries to a panel of judges and a lay audience for
assessment. Details of the crowdsourced evaluation will be announced with
the release of the final test corpus on July 1st.
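
As an illustration of the kind of intrinsic scoring described above, the
following minimal sketch compares a system output with a gold lay summary
using ROUGE-1, ROUGE-2 and ROUGE-L. It assumes the third-party rouge-score
Python package and is not the official evaluation script; the example
texts are invented for illustration.

# Illustrative ROUGE comparison (not the official scoring script).
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

# Invented example summaries.
gold_summary = ("This study shows that a new battery design can store more "
                "energy and charge faster than existing designs.")
system_summary = ("Researchers present a battery design that stores more "
                  "energy and charges faster than current batteries.")

# score(target, prediction) returns precision/recall/F1 per ROUGE variant.
scores = scorer.score(gold_summary, system_summary)
for metric, result in scores.items():
    print(f"{metric}: precision={result.precision:.3f}, "
          f"recall={result.recall:.3f}, f1={result.fmeasure:.3f}")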

All nominated entries will be invited to publish a paper in Open Access
(Author-Payment Charges will be waived) in a selected Elsevier publication.
Authors will be asked to provide an automatically generated lay summary of
their paper, together with their contribution.


*** LongSumm 2020: Shared Task on Generating Long Summaries for Scientific
Documents ***
(Organisers: Michal Shmueli-Scheuer, Guy Feigenblat)

Most work on scientific document summarization focuses on generating
relatively short summaries (250 words or less). While such a length
constraint can be sufficient for summarizing news articles, it is far from
sufficient for summarizing scientific work. In fact, such a short summary
resembles an abstract more than a summary that aims to cover all the
salient information conveyed in a given text. Writing such summaries
requires expertise and a deep understanding of a scientific domain, as can
be found in some researchers' blogs.

The LongSumm task leverages blog posts created by researchers in the NLP
and machine learning communities, using them as reference summaries
against which submissions are compared.

The corpus for this task includes a training set of 1,705 extractive
summaries and around 700 abstractive summaries of NLP and machine learning
papers. These are drawn from papers based on video talks from associated
conferences (TalkSumm; Lev et al., 2019) and from blogs created by NLP and
ML researchers. In addition, we create a test set of abstractive
summaries. Each submission is judged against one reference (gold) summary
using ROUGE and should not exceed 600 words.


*** Submission Information ***

Authors are invited to submit full and short papers with unpublished,
original work. Submissions will be subject to a double-blind peer review
process. Accepted papers will be presented by the authors at the workshop
either as a talk or a poster. All accepted papers will be published in the
workshop proceedings.

*Submission Website*: Submission is electronic, using the Softconf START
conference management system: https://www.softconf.com/emnlp2020/sdp2020/

The submissions should be in PDF format and anonymized for review. All
submissions must be written in English and follow the EMNLP 2020 formatting
requirements: https://2020.emnlp.org/call-for-papers.
*Long paper submissions*: up to 8 pages of content, plus unlimited
references.
*Short paper submissions*: up to 4 pages of content, plus unlimited
references.
Final versions of accepted papers will be allowed 1 additional page of
content so that reviewer comments can be taken into account.

Shared Task registration: Participants of all shared tasks need to register
here:
https://docs.google.com/forms/d/e/1FAIpQLScfHzByrog-k299qBuCp3SbPWcb905_kmOWMvHpDH57VLpVrg/viewform.



*** Important Dates ***

*Research track*:
Submission deadline – August 15, 2020

Notification of Acceptance – September 29, 2020
Camera-ready submission due – October 10, 2020
Workshop – November 19, 2020

*Shared task track*:
Training set release – Feb 15, 2020
Deadline for registration – April 30, 2020 (remains open until the
evaluation window starts)
Test set release (Blind) – July 1, 2020
System runs due – August 1, 2020
Preliminary system reports due – August 15, 2020
Camera-ready submission due – September 29, 2020
Workshop – November 19, 2020

*** SDP 2020 Keynote Speakers ***
SDP keynote speakers are invited by the organizing committee and will
present in the research track of the workshop.

Kuansan Wang, Managing Director, Microsoft Research Outreach Academic
Services
Steinn Sigurdsson, Scientific Director of arXiv and Professor at the
Pennsylvania State University

*** SDP 2020 Journal Extension ***
In the past, accepted authors were invited to submit an extended
version of their work to a special issue of a selected journal. The
organizers are currently in the process of identifying appropriate journals
to host a similar special issue this year. Relevant updates including
topics and requirements for this special issue will be shared on the
workshop website in due time.


*** Organizing Committee ***
Muthu Kumar Chandrasekaran, Amazon, Seattle, USA
Anita de Waard, Elsevier, USA
Guy Feigenblat, IBM Research AI, Haifa Research Lab, Israel
Dayne Freitag, SRI International, San Diego, USA
Tirthankar Ghosal, Indian Institute of Technology Patna, India
Drahomira Herrmannova, Oak Ridge National Laboratory, USA
Eduard Hovy, Research Professor, LTI, Carnegie Mellon University, USA
Petr Knoth, Open University, UK
David Konopnicki, IBM Research AI, Haifa Research Lab, Israel
Philipp Mayr, GESIS – Leibniz Institute for the Social Sciences, Germany
Robert M. Patton, Oak Ridge National Laboratory, USA
Michal Shmueli-Scheuer, IBM Research AI, Haifa Research Lab, Israel
Dominika Tkaczyk, Crossref, UK


*** Steering Committee ***
C. Lee Giles, David Reese Professor, College of Information Sciences and
Technology, Pennsylvania State University
Min-Yen Kan, Associate Professor, School of Computing, National University
of Singapore
Dragomir Radev, A. Bartlett Giamatti Professor of Computer Science, Yale
University
Jie Tang, Professor and Associate Chair of the Department of Computer
Science and Technology, Tsinghua University
Alex Wade, Group Technical Program Manager, Chan Zuckerberg Initiative
Kuansan Wang, Managing Director, Microsoft Research Outreach Academic
Services
Bonnie Webber, Professor, School of Informatics, University of Edinburgh

*** Programme Committee ***
Please visit our website for the complete list of PCs:
https://ornlcda.github.io/SDProc/programcommittee.html
More details available on the workshop website:
https://ornlcda.github.io/SDProc/


With kind regards,
SDP 2020 organizing committee


--
Min-Yen KAN (Dr) :: Associate Professor :: National University of Singapore :: NUS School of Computing, AS6 05-12, 13 Computing Drive
Singapore 117417 :: +65 6516 1885(DID) :: +65 6779 4580 (Fax) :: [log in to unmask] (E) :: www.comp.nus.edu.sg/~kanmy (W)