|Getting to Grips with Semantic and Geo-annotation using Recogito 2||Monday (morning)|
|This workshop introduces Recogito 2, a tool developed by Pelagios Commons that enables annotation of geographic place references in text, images and data through a user-friendly online platform. Perhaps the most notable feature of Recogito 2 is the ability to produce semantic data without working directly with formal languages, while still allowing the user to export the resulting annotations in valid RDF, XML and GeoJSON formats.|
The workshop walks participants through all stages of using Recogito 2 to annotate different types of source documents: from uploading a file to the online platform, through annotation, to the download of the annotations in the available data formats. Through practical examples it will show how to:
* Annotate text sources
* Annotate images and tables
* Export data in a variety of useful formats
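As an illustration of what a downloaded place-annotation export can look like, the sketch below parses a minimal GeoJSON FeatureCollection with Python's standard library. The property names here are invented for illustration and do not reproduce Recogito's exact export schema.

```python
import json

# A minimal GeoJSON FeatureCollection, similar in shape to a
# place-annotation export (property names are illustrative only).
export = """
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "geometry": {"type": "Point", "coordinates": [-99.13, 19.43]},
      "properties": {"title": "Ciudad de Mexico"}
    }
  ]
}
"""

data = json.loads(export)
for feature in data["features"]:
    # GeoJSON stores coordinates as [longitude, latitude]
    lon, lat = feature["geometry"]["coordinates"]
    print(feature["properties"]["title"], lat, lon)
```

Because GeoJSON is plain JSON, exports of this kind can be loaded directly into mapping libraries or GIS tools without further conversion.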
|Interactions: Platforms for Working with Linked Data||Monday (afternoon)|
|Following on from a successful LOD workshop in Montreal that brought together more than 30 people to discuss the potential of linked data in the humanities, this workshop will focus more specifically on interacting with Linked Data. There are many different platforms for working with linked data — for visualizing, creating, reconciling, cleaning, and analyzing it. Some of these tools have been developed from within the Digital Humanities community, and others have been developed beyond it but adapted to our purposes. We hope to create the opportunity for fruitful exchange by providing time for hands-on demonstration and discussion.|
|An introduction to encoding and processing text with TEI||Monday (morning)|
|This workshop offers an exploration of scholarly text encoding, aimed at an audience of humanities students and scholars and new adopters of digital humanities. The Text Encoding Initiative (TEI) provides a widely used standard for the representation of texts in digital form and has become a fundamental technology for digital scholarship. In this workshop, we will devote special attention to the contexts and varieties of TEI encoding in digital projects, and we will explore how to write as well as read the TEI. A short, complementary introduction to XPath navigation is designed as a gateway to learning how to process and analyse TEI data.|
By the end of the day, our participants will have learned how to launch their own TEI projects. They will have gained skills in writing code and navigating it, in reading and comprehending TEI, and will be introduced to resources which will enable them to continue learning by themselves afterwards.
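To give a flavour of the XPath-style navigation the workshop introduces as a gateway to processing TEI, here is a minimal sketch using Python's standard library on an invented TEI fragment; real TEI documents are far richer.

```python
import xml.etree.ElementTree as ET

# A toy TEI fragment (invented for illustration, not from a real edition).
tei = """
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text><body>
    <p>A letter from <persName>Sor Juana</persName> to
       <persName>the Viceroy</persName>.</p>
  </body></text>
</TEI>
"""

# TEI elements live in the TEI namespace, so queries must declare it.
ns = {"tei": "http://www.tei-c.org/ns/1.0"}
root = ET.fromstring(tei)

# An XPath-style query: find every persName anywhere in the document.
names = [p.text for p in root.findall(".//tei:persName", ns)]
print(names)  # ['Sor Juana', 'the Viceroy']
```

The same `.//tei:persName` pattern scales from a toy fragment to a full digital edition, which is what makes XPath such a useful entry point for processing TEI data.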
|Building International Bridges Through Digital Scholarship: The Trans-Atlantic Platform Digging Into Data Challenge Experience||Monday|
|This workshop will focus on how international partnership can benefit large-scale research projects in digital scholarship. During the workshop, participants will learn about the Digging into Data Challenge 4, an initiative of the Trans-Atlantic Platform (T-AP) for Social Sciences and Humanities, a network of public funders representing countries in Europe, North America, and South America. The Digging into Data Challenge invited international teams to undertake multidisciplinary projects that use techniques of large-scale data analysis and demonstrate how these can lead to new insights. The Digging into Data Challenge has had four rounds of funding, and offers a valuable opportunity to (1) see how the international dimension benefits scholarship; (2) understand the challenges of working internationally on big data projects addressing questions in the humanities and social sciences; and (3) understand how international funding initiatives might enable research in ways that domestic funding cannot.|
This workshop is targeted at (1) individuals who are interested in “scaling up” their research efforts to include an international dimension and (2) funders who are interested in launching or joining international funding opportunities. The workshop will touch on various themes that impact digital researchers and international collaboration, including:
• legal considerations,
• the intellectual challenges for large scale research,
• big data skills,
• funding policies and processes, and
• the challenges to international research collaboration for researchers from both small and large countries.
The workshop is scheduled as a full-day event so as to allow ample time for conversation and networking.
In order to better incorporate the interests of workshop participants and to foster dialogue and discussion, participants may provide a brief one-page synopsis outlining their interest in international collaboration and what they hope to gain from the workshop. The synopsis should be sent to: email@example.com
|Indexing Multilingual Content with the Oral History Metadata Synchronizer (OHMS)||Tuesday|
|Are you in need of a way to provide access to oral histories not recorded in English? Do you dream of creating multilingual metadata for interviews recorded in one language but made accessible in another? In 2016, the University of Kentucky's Nunn Center updated the Oral History Metadata Synchronizer (OHMS) application with multilingual functionality, creating the capability to synchronize both a transcript and a translation, as well as to create a bilingual index, making all of these searchable and synchronized to the corresponding moment in the audio or video. In this half-day workshop, OHMS founder and creator Doug Boyd will demonstrate the multilingual functionality of OHMS. Through a demonstration of a bilingual use case, power users Teague Schneiter and Brendan Coates will walk attendees through each step of the indexing process to prepare a sample Spanish-English index. Instructors will also guide attendees in developing workflows to support multilingual indexing.|
|Machine Reading: Advanced Topics in Word Vectors||Tuesday|
|This half-day workshop is an introduction to word vectors and text vectorization more broadly. We will focus on building an intuition for how word vectors work, incorporating visualization methods, using pre-trained vectors, and exploring applications of word embeddings. We will teach both the high-level concepts and the practical usage of these widely used analytical tools for text analysis in digital humanities (DH). This is a hands-on workshop with practical activities for participants, starting with a review of word vectors by way of visualization, an overview of downloadable word vectors, and an examination of the potential pitfalls of using word vectors in humanistic analysis, along with methods for mitigating these issues. Given how widely machine learning models are applied in real life, addressing biased models, datasets, and algorithms is of vital importance for the correct interpretation of their results.|
We will provide a Python Jupyter Notebook and an accompanying text corpus (English and Spanish) that we will work through as a group. By the end of the workshop, the participants will have working knowledge of how and where to download or train word embeddings and the caveats of using them.
The workshop materials and hands-on help will be offered in both English and Spanish.
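As a taste of the arithmetic behind word embeddings, the sketch below computes cosine similarity, the standard closeness measure for vectors, over toy three-dimensional "word vectors". The values are invented for illustration; real pre-trained embeddings have hundreds of dimensions.

```python
import math

# Toy 3-dimensional stand-ins for word embeddings (values invented).
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.8, 0.9, 0.1],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity: dot product of u and v over the product of their norms."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(cosine(vectors["king"], vectors["queen"]))  # close to 1.0: similar words
print(cosine(vectors["king"], vectors["apple"]))  # much lower: unrelated words
```

The same comparison, scaled up to a full downloadable embedding model, is what drives nearest-neighbour queries such as "which words are most similar to *king*?"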
|Where is the Open in DH?||Tuesday|
|When it comes to promoting the importance of open scholarship, Latin America and the Caribbean stand out in the sense that the concept of "openness" is generally accepted across the region. Open access is established as the most widespread communication model in the academic community, giving visibility and value to scientific production at a regional and global level. Nevertheless, the question remains: to what extent has this wide acceptance of openness influenced the work of the humanistas digitales? Much of the best-known DH work in the world tends to focus myopically on projects coming out of North America and Western Europe. So, what would it take to bring DH into a more global openness, not only in terms of access but also in terms of methods, best practices and opportunities for collaboration? And what could this openness look like set against the backdrop of the long-standing and highly developed open access movement of Latin America and the Caribbean? The workshop will analyse these challenges, highlight initiatives, and explore options for advancing the open in DH. It will place output modes, from collaborative web projects to traditional publications to research data, in the context of the larger open access movement, which is profoundly changing the face of academic research and society.|
|Tools for Users: Digital Collections and Annotations||Tuesday|
|The research groups LEETHI, ILSA, NUPILL, LOEP and Support Factory propose an intensive workshop in Spanish and Portuguese presenting tools they have created and tested in recent years for free and semantic annotation of texts (@note and DLNotes), for the study of versification (AOIDOS), and for building digital collections for research and educational purposes (CLAVY and CONTENT AWAY). These are tools created from the specific needs of experts in the study of Spanish and Portuguese language and literature and in higher education. Workshop attendees will be able to assess how these tools might be adapted to their own contexts. Our goal is to demonstrate that technology can be created in Spanish and Portuguese that is within everyone's reach and suited to each user's needs.|
|Jumpstarting Digital Humanities Projects||Monday (afternoon)|
|“Jumpstarting Digital Humanities Projects” is a half-day pre-conference workshop on various aspects of beginning a digital humanities project: scoping and planning a sizable project; determining when to use institutional infrastructure and when to go beyond the institution; winning cooperation from institutional authorities and collaborators; collecting and digitizing materials; and designing for iterative development and efficient feedback loops. Our sessions will focus on the common type of digital humanities project that consists of assembling a database of source material and generating interactive interpretations, such as maps and visualizations, from that database. Five scholars from different disciplines and institutions, each a participant in the Mellon-funded Resilient Networks for Inclusive Digital Humanities initiative, will give short tutorials, and workshop attendees will spend an hour on exercises in which they can begin planning a digital humanities project with help from the instructors.|
|Semi-automated Alignment of Text Versions with iteal||Tuesday|
|Our half-day tutorial concerns the semi-automated alignment of different versions of texts in complex traditions. We will introduce iteal (Interactive Text Edition Alignment Tool) in the context of debates about multi-text problems, pre-modern spelling variance, and distant forms of reading. The tutorial will include a demonstration of specific use cases, a discussion of the relevance of the implemented system to textual problems of interest to the participants, and a hands-on exploration of the system. While methods for hand-aligning and visualizing texts exist in TEI, we focus on computational alignment for the purpose of exploratory visualization at multiple scales. Our visual analytics environment iteal is not English-specific. Participants interested in trying their own data in any language with iteal are encouraged to contact the organizers as soon as possible. A teaser outlining a brief workflow with iteal can be found at https://vimeo.com/230829975. A more detailed description of the workshop can be found at http://djwrisley.com/itealcdmx|
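This is not iteal itself, but the kind of token-level alignment it automates can be sketched with Python's standard difflib on two invented spelling variants of the same line:

```python
import difflib

# Two versions of the same verse line with spelling variance
# (an invented example of the multi-text problem).
a = "whan that aprill with his shoures soote".split()
b = "when that april with his showers sweet".split()

# SequenceMatcher aligns the two token sequences and reports,
# for each region, whether the versions agree or diverge.
matcher = difflib.SequenceMatcher(a=a, b=b)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    print(tag, a[i1:i2], b[j1:j2])
```

Each `equal` span marks shared text and each `replace` span marks a variant reading; a tool like iteal adds interactive visualization and scale on top of this basic idea.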
|New Scholars Seminar||Monday|
|The re-creation of Harry Potter: Tracing style and content across novels, movie scripts and fanfiction||Monday|
|This tutorial brings together two popular and complementary text analysis tasks in DH: stylometry and text reuse detection. While stylometry typically focuses on how texts are written, text reuse studies are geared towards what texts are written about. Both methodologies tie into the theoretical notion of intertextuality, albeit in complementary ways. Creativity and individuality are important phenomena at stake in both fields: are writers at liberty to escape their own ‘stylome’ – or unique stylistic fingerprint – and to what extent can they free themselves from the many predecessors to which they are intertextually indebted? In this workshop we offer a hands-on introduction to these topics using the case study of Rowling’s Harry Potter novels, their well-known movie adaptations and the numerous works of fanfiction they have inspired. The tutorial focuses on the use of a number of practical tools (Python, Tracer, Stylo) to tackle this fascinating case study.|
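As a minimal sketch of the stylometric side of the tutorial (not the Stylo package itself), the example below compares invented snippets by their relative frequencies of a few function words, the features on which classic authorship analysis relies:

```python
from collections import Counter

# A handful of function words: the content-independent features
# classic stylometry uses as a stylistic fingerprint.
function_words = ["the", "a", "of", "and", "to"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    return [counts[w] / len(words) for w in function_words]

def manhattan(p, q):
    """A simple distance between two frequency profiles."""
    return sum(abs(x - y) for x, y in zip(p, q))

# Invented snippets: the first two share a "style", the third differs.
text1 = "the boy and the owl flew to the castle of stone"
text2 = "the girl and her cat ran to the old tower of glass"
text3 = "magic is a craft and a burden"

print(manhattan(profile(text1), profile(text2)))  # small: similar profiles
print(manhattan(profile(text1), profile(text3)))  # larger: different profiles
```

Real stylometry uses hundreds of features and more robust distance measures, but the principle, comparing texts by how they are written rather than what they say, is the same.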
|Bridging Justice-Based Practices for Archives + Critical DH||Monday|
|As scholars and practitioners in digital humanities, we create, analyze, trouble, and reference “the archive,” though we often signal vastly different (mis)understandings of archives, archivists, and archival practices. While both archivists and digital humanists engage critical questions around shared areas of practice (e.g. access, labor, privacy), these conversations often occur in parallel spheres with little recognition of the intellectual contributions of the distinct yet intersecting fields of archives and DH. This workshop aims to bridge the discourse occurring in critical archival studies and critical digital humanities by engaging participants in collective knowledge-building exercises to articulate justice-based practices related to appraisal, access, description, pedagogy, privacy, provenance, and system design, and then to contribute these suggested practices to expand existing resources on critical archives and DH.(1)|
Please note, this workshop will be held at the community archive La Casa de El Hijo del Ahuizote.
(1) Michelle Caswell, Ricardo Punzalan, and T-Kay Sangwand, eds., “Critical Archival Studies,” special issue, Journal of Critical Library and Information Studies 1, no. 2 (2017).
|As the digital humanities take firm root in the humanities curriculum, institutions around the world are committing significant resources toward developing DH and integrating it into standalone courses, graduate degrees, and undergraduate majors and minors within and across departments. With this commitment comes the realization that formally implementing DH and its siblings (e.g. digital social sciences, digital media, etc.) at a degree-granting level requires the articulation of core requirements and competencies, the identification and hiring of faculty capable of teaching DH in a variety of learning environments (coding, systems, application of methods), the evaluation of a broad spectrum of student work, and more. It also changes the foundations of the work of those in our network, as training increasingly involves learning how to teach competencies even as we ourselves develop and maintain them in light of fast-paced advances. At the 2017 mini-conference, attendees reached consensus on forming an ADHO Special Interest Group (SIG) dedicated to DH Pedagogy in all its forms. In support of this, for our 2018 mini-conference and meeting we again invite proposals for lightning talks on all topics relating to digital pedagogy and training, especially, this year, those that will lead to substantial discussion about how a SIG could support instructors, students, practitioners, and administrators. Mini-conference talks will take place in the morning, and the afternoon member meeting will be dedicated to work on a collaborative draft of the SIG proposal.|
|Archiving Small Twitter Datasets for Text Analysis: A Workshop for Beginners||Tuesday|
|Twitter data can be very valuable for researchers of perhaps all disciplines, not just DH. Given the difficulty of properly collecting and analysing Twitter data from most Twitter web and mobile clients, and the very short lifespan of search results, there is a danger of losing huge amounts of valuable historical material. In this workshop for non-coders, participants will be guided through two tasks: the first will guide participants in creating an application that taps into Twitter’s API to retrieve Twitter data. The second will guide participants in using a Google spreadsheet to capture streaming (live) data from Twitter in order to archive it, download it, and perform text analysis, data visualisation and other studies.|
Workshop materials and hands-on help will be available in both English and Spanish.
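Once tweets have been archived (for instance as a CSV exported from the Google spreadsheet), a first text analysis needs only a few lines of Python. The column names below are illustrative, not the workshop's exact template.

```python
import csv
import io
from collections import Counter

# A tiny stand-in for an archived tweet CSV (column names illustrative).
archive = io.StringIO(
    "user,text\n"
    "ana,Enjoying #DH2018 in Mexico City\n"
    "ben,Great workshop today #DH2018 #twitter\n"
)

# Count hashtags across the archive, case-insensitively.
hashtags = Counter()
for row in csv.DictReader(archive):
    for token in row["text"].split():
        if token.startswith("#"):
            hashtags[token.lower()] += 1

print(hashtags.most_common())  # [('#dh2018', 2), ('#twitter', 1)]
```

The same pattern extends naturally to counting users, extracting URLs, or feeding the text column into a visualisation tool.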
|Distant Viewing with Deep Learning: An Introduction to Analyzing Large Corpora of Images||Tuesday|
|The tutorial provides a hands-on introduction to the use of deep learning techniques in the study of large image corpora. Image analysis tasks covered in the tutorial include object detection, facial recognition, image similarity, and image clustering. We will make three open-access image corpora (historic photographs, still frames from moving images, and scanned works of art) available for testing these methods. Alternatively, participants may bring and use an image dataset of their own. This tutorial is aimed at scholars working with visual materials who want to integrate DH methods into their analysis of image corpora. No prior programming experience is required.|
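As a rough sketch of how image similarity works in this setting, the example below runs a nearest-neighbour query over invented feature vectors standing in for the output of a pretrained network; it illustrates the idea, not the tutorial's actual pipeline.

```python
import math

# In practice a pretrained network maps each image to a feature vector,
# and similar images land close together. These vectors are invented
# stand-ins for such features; the filenames are hypothetical.
features = {
    "photo_a.jpg":  [0.9, 0.1, 0.0],
    "photo_b.jpg":  [0.8, 0.2, 0.1],
    "painting.jpg": [0.1, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v)))

def most_similar(name):
    """Nearest neighbour by cosine similarity: a basic image-similarity query."""
    others = [(cosine(features[name], v), k) for k, v in features.items() if k != name]
    return max(others)[1]

print(most_similar("photo_a.jpg"))  # 'photo_b.jpg': the other photograph
```

Image clustering works on the same representation: instead of a single nearest-neighbour query, the feature vectors are grouped so that photographs, film stills, and artworks fall into separate clusters.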