Dear colleagues,

Sharing the announcement of another opportunity for digital humanities
training this summer. Scholarships are available for continuing
professionals and students based on need.

---

Humanities Intensive Learning and Teaching Institute
HILT 2017

We are delighted to announce that HILT 2017 registration is now open!
Register NOW: http://www.dhtraining.org/hilt2017

HILT will be held June 5–8, 2017, with special events on June 9, on the
campus of the University of Texas at Austin.

Courses for 2017 include:

GETTING STARTED WITH DATA, TOOLS, AND PLATFORMS
Brandon Locke, Director, Lab for the Education and Advancement in Digital
Research (LEADR), Michigan State University
Thomas Padilla, Humanities Data Curator, University of California, Santa
Barbara

Starting a digital humanities research project can be quite intimidating.
This course is designed to make that process less so by exploring tools and
platforms that support digital humanities research, analysis, and
publication. We will begin by reframing sources as data that enable digital
research. We will work throughout the week on approaches to (1) finding,
evaluating, and acquiring, (2) cleaning and preparing, (3) exploring, (4)
analyzing, and (5) communicating and sharing data. Emphasis will be placed
at every stage on managing a beginner digital research project in a way
that helps ensure your project remains accessible, the process is well
documented, and the data are reusable.
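
For readers wondering what the "cleaning and preparing" and "exploring"
stages can look like in practice, here is a minimal sketch using pandas
(one common library, not one prescribed by the course); the file name and
column names are invented for illustration:

    # Cleaning, exploring, and sharing a hypothetical CSV of digitized
    # records with pandas. File and column names are illustrative only.
    import pandas as pd

    # Acquire: load a spreadsheet of sources into a DataFrame.
    records = pd.read_csv("records.csv")

    # Clean and prepare: normalize column names, drop rows without a
    # title, and coerce the year column to numbers.
    records.columns = [c.strip().lower() for c in records.columns]
    records = records.dropna(subset=["title"])
    records["year"] = pd.to_numeric(records["year"], errors="coerce")

    # Explore: a quick summary of what the data contain.
    print(records["year"].describe())

    # Share: write the cleaned version back out for documented reuse.
    records.to_csv("records_clean.csv", index=False)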


WORKING WITH SCALAR
Curtis Fletcher, Associate Director of the Polymathic Labs, University of
Southern California Libraries, and Co-Principal Investigator, Scalar Project

This 4-day workshop is for scholars and students who wish to work on a
Scalar project or publication and seek comprehensive training in the
platform and in-depth support with editorial, technical and design
decisions. The workshop will include basic, intermediate and advanced
training sessions in Scalar, discussions of readings on multimodal
scholarship, and both collaborative whiteboarding sessions and one-on-one
design meetings devoted to each project. The aim of the workshop is to help
participants think through the conceptual, structural and technical aspects
of their projects as well as the project’s relation to the emergent field
of digital media and scholarship overall.

Scalar is a free, open source authoring and publishing platform designed
for scholars writing media-rich, born-digital scholarship. Developed by The
Alliance for Networking Visual Culture, Scalar allows scholars to assemble
media from multiple sources and juxtapose that media with their own writing
in a variety of ways and to structure essay- and book-length works in ways
that take advantage of the unique capabilities of digital writing,
including nested, recursive, and non-linear formats.


HELP! I’M A HUMANIST! — PROGRAMMING FOR HUMANISTS WITH PYTHON
Brandon Walsh, Assistant Professor and Mellon Digital Humanities Fellow,
Washington and Lee University Libraries

This course introduces participants to humanities programming through the
use of Python for data acquisition, cleaning, and analysis. The course
assumes no prior technical knowledge and will focus on accomplishing basic
research tasks. Students should walk away feeling equipped to tackle a
variety of typical problems that arise for digital humanists.

We will discuss programming and debugging concepts through the design,
implementation, and presentation of small text analysis projects. Primary
technologies and topics covered in this course will include the command
line, Git, GitHub, and Python; working with data sources such as APIs, CSV
files, and data scraped from the web; and basic text analysis. Over the
course of the week, we will work with data from DPLA and Project Gutenberg.
If the words above mean nothing to you, don’t panic—this course is for you.
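
To give a sense of the kind of task the course has in mind, here is a
minimal sketch (not course material) that downloads a plain-text book from
Project Gutenberg and counts its most frequent words; the specific URL is
just an illustrative example:

    # Fetch a public-domain text from Project Gutenberg and count its
    # most frequent words. The URL is an illustrative example; any
    # plain-text Gutenberg file would work the same way.
    import re
    import urllib.request
    from collections import Counter

    URL = "https://www.gutenberg.org/files/1342/1342-0.txt"

    with urllib.request.urlopen(URL) as response:
        text = response.read().decode("utf-8", errors="ignore")

    # Tokenize very roughly: lowercase runs of three or more letters.
    words = re.findall(r"[a-z]{3,}", text.lower())

    # Report the twenty most common words as a first, naive look.
    for word, count in Counter(words).most_common(20):
        print(f"{word:>12}  {count}")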


BLACK PUBLICS IN THE HUMANITIES: CRITICAL AND COLLABORATIVE DH PROJECTS
Sarah Patterson, Graduate Student Co-Founder and Coordinator, Colored
Conventions Project, PhD Candidate, University of Delaware
Jim Casey, Graduate Student Co-Founder and Coordinator, Colored Conventions
Project, PhD Candidate, University of Delaware

Forming reciprocal partnerships between academia and publics realizes a
primary goal of calls for social justice in Digital Humanities practices
and projects. In this discussion-centric course, we will explore the
possibilities for developing collaborative and public-facing digital
projects invested in social justice. As a path to cultural criticism, we
ask: how might we adapt digital practices in the humanities to bring
students and public communities into our scholarship on Black American
experiences and other underrepresented identities and texts in DH? What are
some of the challenges of working through the politics of marginalization
and with scattered archives, and how might we design multi-faceted projects
that engage those topics in meaningful ways?

This course will cover the intersections of project management, digital
pedagogy and data visualization. We will hone strategies for weaving
together inclusive community partnerships with undergraduate research
through crowdsourcing, exhibits, and digital collections. Taking a hands-on
approach, we will become acquainted with the processes of working with
humanities data. How do
datasets make arguments? How can we collaborate with librarians and
information professionals to unpack the resonances of power, authority, and
violence in humanities data?

Using the Colored Conventions Project and other small- to medium-sized DH
projects as examples, students will have the opportunity to create and
workshop blueprints for their own projects. By the end of the week,
participants will have a working understanding of an array of approaches to
project design and implementation, including data visualization, metadata,
curriculum, and more.


TEXT ANALYSIS
Katie Rawson, Humanities Librarian, Emory University

Can topic modeling help me answer my question? How do I extract the people
and places from the texts I study? What is principal component analysis?
How do I build a corpus I can mine using text analysis tools? How can I
study shifts in discourse over time?

This class will examine methods and practices for text analysis. Freely
available tools and excellent tutorials have made it easier to apply
computational text analysis techniques; however, researchers may still find
themselves struggling with how to build their corpus, decide upon a method,
and interpret results. We will survey the how and why of a variety of
commonly used methods (e.g., word distribution, topic modeling, natural
language processing) as well as how to develop and manage a collection of
texts.
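
As one concrete illustration of a method surveyed in the class, the sketch
below fits a tiny topic model with scikit-learn (one of several libraries
that could be used; the four toy documents are invented for the example):

    # A toy topic-modeling example with scikit-learn. The course does not
    # prescribe a particular toolkit; the documents below are invented.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [
        "the ship sailed across the stormy sea toward the harbor",
        "the captain and the sailors feared the sea and the storm",
        "the parliament debated the new tax law for many hours",
        "voters questioned the law and the tax during the election",
    ]

    # Build a document-term matrix of word counts, dropping stopwords.
    vectorizer = CountVectorizer(stop_words="english")
    dtm = vectorizer.fit_transform(docs)

    # Fit a two-topic LDA model to the tiny corpus.
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(dtm)

    # Print the highest-weighted words for each topic.
    vocab = vectorizer.get_feature_names_out()
    for i, weights in enumerate(lda.components_):
        top = [vocab[j] for j in weights.argsort()[::-1][:5]]
        print(f"topic {i}: {', '.join(top)}")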

NEW APPROACHES TO LITERARY ARCHIVES
Porter Olsen, PhD Candidate, University of Maryland

The past decade has seen the rise of hybrid and born-digital literary
collections as prominent authors of the late 20th century have (either in
person or through their estates) donated their papers to libraries and
other collecting institutions. Over that period, the archival community
has worked to develop the preservation methods and access systems needed
to ensure the long-term survival of these born-digital materials while
also making them available to researchers. Like the archivists tasked with
processing these born-digital materials, the scholar of late 20th- and
early 21st-century literature must also develop new skills and expertise.
In this course participants will develop those skills and digital fluencies
necessary to take full advantage of existing and future hybrid literary
collections. Participants will learn fundamentals of digital objects
including how data is stored on a variety of legacy and contemporary media,
how to access file-level metadata such as file creation and modification
times, and how to work with a variety of file systems. We will also
carefully explore examples of born-digital and hybrid literary collections
such as the Salman Rushdie collection at Emory University, the John Updike
collection at Harvard University, and the Gabriel Garcia Marquez collection
at the Harry Ransom Center. Instruction will be a mixture of lecture,
discussion, and hands-on practical activities.
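
For a concrete taste of the file-level metadata mentioned above, the
sketch below uses only the Python standard library to report sizes and
timestamps for files in a directory; the directory name is a placeholder,
and true creation times are platform-dependent:

    # Read file-level metadata (size, modification and access times) with
    # the Python standard library. "collection" is a placeholder path;
    # true creation time is platform-dependent (e.g. st_birthtime on macOS).
    import datetime
    from pathlib import Path

    collection = Path("collection")

    for path in sorted(collection.rglob("*")):
        if path.is_file():
            info = path.stat()
            modified = datetime.datetime.fromtimestamp(info.st_mtime)
            accessed = datetime.datetime.fromtimestamp(info.st_atime)
            print(f"{path}  {info.st_size} bytes  "
                  f"modified {modified:%Y-%m-%d}  accessed {accessed:%Y-%m-%d}")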


HUMANITIES RESEARCH WITH SOUND: INTRODUCTION TO AUDIO MACHINE LEARNING
Stephen McLaughlin, PhD Candidate, University of Texas at Austin

Libraries and archives have digitized thousands of hours of historical
audio in recent years, including literary performances, radio programs, and
oral histories. In the rush to preserve these recordings before their
physical media decay, detailed metadata has often been an afterthought.
Whereas digitized text is readily searchable in most cases, describing the
contents of audio recordings typically means listening in real time. Using
a range of tools, the High-Performance Sound
Technologies for Access and Scholarship (HiPSTAS) project at the University
of Texas at Austin has worked to shine a light on these large collections
and encourage their use in research.

Participants will gain skills for putting sound collections to work on a
range of humanities research questions. By learning the basics of
discovering and identifying patterns and of searching and sifting
collections of sounds, humanists can unlock valuable new collections of
primary source material. This workshop will begin with an overview of
machine learning techniques for expediting audio annotation, including
event detection classifiers, speaker diarization, and speech-to-text
processing. We will
then use the GUI-based tool Sonic Visualiser to tag audio events and use
those data to search for additional instances in a wider corpus. Experience
recording or editing digital audio will be helpful but is not strictly
necessary. No prior experience with Python or machine learning is required.
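
As a rough illustration of the feature extraction that sits underneath
such classifiers, the sketch below uses the librosa library (one common
choice, not necessarily the workshop's toolkit) to turn a recording into a
compact feature vector a classifier could be trained on; the file name is
a placeholder:

    # Extract MFCC features, a common input to audio event classifiers.
    # librosa is one widely used library; "interview.wav" is a placeholder.
    import librosa
    import numpy as np

    # Load the recording at its native sampling rate.
    samples, sample_rate = librosa.load("interview.wav", sr=None)

    # MFCCs give a compact spectral summary, one vector per audio frame.
    mfccs = librosa.feature.mfcc(y=samples, sr=sample_rate, n_mfcc=13)

    # Summarize the recording as per-coefficient means and standard
    # deviations: a simple fixed-length vector suitable for a classifier.
    features = np.concatenate([mfccs.mean(axis=1), mfccs.std(axis=1)])
    print(features.shape)  # (26,)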


INTRODUCTION TO THE TEXT ENCODING INITIATIVE (TEI) FOR HISTORICAL DOCUMENTS
Caitlin Pollock, Digital Humanities Librarian, Indiana University-Purdue
University Indianapolis

The Text Encoding Initiative (TEI) Guidelines are a standard defining an
XML vocabulary for representing textual materials in digital form. This
course will focus on encoding historical primary sources, both to provide
context and to support analysis and visualization of features of text
relevant to humanities scholars. In this introductory course,
participants will focus on documenting provenance of historical materials,
recording bibliographic metadata, and developing encoding workflows that
identify features of interest. Participants will also become familiar with
the TEI guidelines and will discuss how to manage text encoding projects in
ways that support uniform data creation and best practices for integrating
TEI with other metadata standards.

Participants will review examples of TEI usage in other digital humanities
projects and then devote time to encoding TEI documents relevant to their
research interests. For those with no previous experience, readings about
XML and the TEI will be provided prior to class.
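
For readers who have never seen TEI, the sketch below pairs a minimal,
invented TEI fragment with a few lines of Python (standard library only)
that pull out the title and the personal names tagged in the text; it is
an illustration, not a template from the course:

    # Parse a minimal, invented TEI fragment and extract encoded features.
    import xml.etree.ElementTree as ET

    TEI_NS = "{http://www.tei-c.org/ns/1.0}"

    sample = """<TEI xmlns="http://www.tei-c.org/ns/1.0">
      <teiHeader>
        <fileDesc>
          <titleStmt><title>A Letter, 12 March 1854</title></titleStmt>
          <sourceDesc><p>Transcribed from the manuscript.</p></sourceDesc>
        </fileDesc>
      </teiHeader>
      <text><body>
        <p><persName>Jane Smith</persName> wrote from
           <placeName>Boston</placeName> about the lecture.</p>
      </body></text>
    </TEI>"""

    root = ET.fromstring(sample)

    # Bibliographic metadata from the header, tagged names from the body.
    title = root.find(f".//{TEI_NS}title").text
    people = [el.text for el in root.iter(f"{TEI_NS}persName")]
    print(title)   # A Letter, 12 March 1854
    print(people)  # ['Jane Smith']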


More information about all the courses can be found at:
http://www.dhtraining.org/hilt2017/courses/


Sponsored scholarships are available for undergraduate and graduate
students as well as continuing professionals.
http://www.dhtraining.org/hilt2017/important-dates-costs/#scholarships


REGISTRATION
Regular: $975
Early Career Scholars and Cultural Heritage Professionals: $775
Student: $550

The registration fee includes admittance to one course, the HILT Ignite
and Social, and a HILT swag bag, as well as breakfast and lunch in our
campus dining hall.

http://www.dhtraining.org/hilt2017/important-dates-costs/
http://www.dhtraining.org/hilt2017

We hope to see you in Austin this summer!

---

Trevor Muñoz
Assistant Dean for Digital Humanities Research, University Libraries
Associate Director, Maryland Institute for Technology in the Humanities
(MITH)
University of Maryland
301.405.8927 | @trevormunoz | http://trevormunoz.com
