ICTeSSH Program
- Day 1 29 June 2020
- Day 2 30 June 2020
- Day 3 01 July 2020
- Impact Hub Amsterdam
Registration of participants
The training will be organized by CESSDA.
The operation of sustainable services provided by Social Sciences and Humanities Research Infrastructures (such as CESSDA ERIC) is only possible if the software components used to deliver them are production ready. In recognition of this requirement, CESSDA has long been working on defining its own internal procedures to ensure appropriate quality levels are set and met. Where software components fail to meet these standards, Research Infrastructures typically experience high installation, configuration and annual maintenance costs, lack of support, and difficulties with modifying or upgrading the software. In short, a general lack of “fitness for purpose”.
In the past few years, CESSDA ERIC has turned its attention to software maturity as a means of improving the quality of the software components that make up its tools and services: it provides architectural and usability guidelines, specifies quality standards, offers a common technical infrastructure, and runs targeted training sessions for its software developers.
Based on our recent experience of providing internal training on developing and deploying ‘fit for purpose’ software components, we are able to offer a session for the SSH community on software testing strategies and tools.
Trainees will be introduced to the mechanics of the build/test/deploy cycle for Docker containers via an example application.
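As a rough illustration of the kind of artefact involved (the training's actual example application is not specified here), a containerised service is typically described by a Dockerfile; the following hypothetical one packages a small Python application:

```dockerfile
# Hypothetical Dockerfile for a small Python service (illustration only;
# the example application used in the training may differ).
FROM python:3.8-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and define how the container starts.
COPY . .
CMD ["python", "app.py"]
```

In a typical cycle the image is then built with `docker build -t example-app .`, the test suite is run inside a throwaway container (e.g. `docker run --rm example-app python -m pytest`, assuming pytest is listed in requirements.txt), and the tested image is pushed to a registry for deployment.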
More details available at https://ictessh.uns.ac.rs/cessda
The workshop will be organized by Qeios.
What if researchers could use an Open Science platform enabling them to seamlessly write with colleagues and instantly publish both their Articles and Definitions without leaving it? And what if the wider community of peers could then give the most transparent and diverse feedback by openly reviewing both Articles and Definitions? In our hands-on session, researchers can try a new way of integrating scholarly definitions as the building blocks of their new piece of research, and of gaining the approval of the wider community of peers.
Attendees are expected to bring a laptop.
More details available at https://ictessh.uns.ac.rs/qeios
Lunch is included in the conference fee: a vegetarian lunch with seasonal products, including organic juices.
The conference opening ceremony
Against the monist programs and philosophies nowadays prevalent, I argue in favor of a dualism between information and meaning. The dynamics of (Shannon-type) information processing and meaning processing are different. In the social sciences, one studies the reflexive processing of meaning. Meaning is provided from the perspective of hindsight (against the arrow of time) and may generate redundancy: options which have not yet been realized. A calculus of redundancy can be envisaged.
Background study:
Leydesdorff, L., Johnson, M. W., & Ivanova, I. (2018). Toward a calculus of redundancy: Signification, codification, and anticipation in cultural evolution. Journal of the Association for Information Science and Technology, 69(10), 1181-1192. doi: 10.1002/asi.24052
The standard contributor model in science is vertically integrated. Resources are centralized to an individual or small team that conducts the entire research process: idea, design, collection, analysis, and report. This approach makes it easy to assign credit, but it is inefficient in capitalizing on specialized expertise, it produces a lot of small science, and it is exclusive. A complementary model is horizontally distributed. Crowdsourcing modularizes and distributes the research process across many contributors. This approach leverages expertise, enables big science, and is more inclusive. I will illustrate the value of crowdsourcing in the context of a metascience effort investigating the reproducibility of psychological research.
This session includes the presentation of four papers:
| 04:00 | The Intertwining of Reputation and Sharing |
| 04:15 | The Uptake of Open Science: Mapping the Results of a Systematic Literature Review |
| 04:30 | Why is getting credit for your research data so hard? |
| 04:45 | What flowers can bloom in a green open access landscape? Imagining a future with BitViews |
Here will be a short abstract
Here will be a short abstract
Here will be a short abstract
Here will be a short abstract
A drinks reception will be organized in the Plaza café in the Impact Hub Amsterdam building.
All attendees are welcome to join us on a walking tour.
- Impact Hub Amsterdam
Organisers:
The workshop is organised by the following SSHOC project partners:
• European Research Infrastructure for Language Resources and Technology (CLARIN) – Daan Broeder
• Digital Research Infrastructure for the Arts and Humanities (DARIAH) – Laure Barbot
• Austrian Centre for Digital Humanities and Cultural Heritage (OEAW-ACDH) – Matej Durco
• Association of European Research Libraries (LIBER) – Vasso Kalaitzi
• Trust-IT – Marieke Willems
Short description and objectives of the workshop
SSHOC will realise the Social Sciences and Humanities part of the European Open Science Cloud. One of the SSHOC project’s core objectives is to foster the transition from the current Social Sciences and Humanities landscape into a cloud-based infrastructure that will operate according to the FAIR principles, offering access to research data and related services adapted to the needs of the SSH community. Furthermore, the tools, services, repositories and other resources brought in by project partners or generated during the course of the project will be featured in the SSH Open Marketplace.
The project partners are developing tools and services for SSH researchers, data experts and research librarians, who are among the targeted end-users of the SSH Open Marketplace content. SSHOC aims to align the SSH Open Marketplace and its content with current research data practices.
The SSH Open Marketplace has been under development within the SSHOC project for over a year. The development approach follows agile and UX best practices, involving targeted end-users as much as possible: workshops, interviews, brainstorming sessions and prioritisation meetings have all fed into the development work. As a result, the project will release an alpha version of the SSH Open Marketplace in June 2020. To continue this development approach and improve the SSH Open Marketplace offering, the project needs ongoing user feedback and engagement. The proposed workshop therefore has the following main objectives:
- Raise awareness of the SSH Open Marketplace and the services and tools incorporated for research communities. Provide clear information on how the SSH Open Marketplace can help researchers in their daily activities and how it supplements the existing services offered by the EOSC (e.g. the EOSC Marketplace).
- Engage the SSH research community present at ICTeSSH to collect their input and feedback on the functionality and content of the SSH Open Marketplace, including reflections on the maintenance routines planned to be implemented in the system.
- Share experiences from using agile and UX best practices in the development of the SSH Open Marketplace.
Agenda and speakers:
- Introduction, what are EOSC and SSHOC and what’s in it for you? Connecting to end-user communities @ICTeSSH – Marieke Willems (Trust-IT) & Vasso Kalaitzi (LIBER) – (10 min)
- What is the SSH Open Marketplace and what is it not? – Laure Barbot (DARIAH) – (10 min)
- Opening up the domain of SSH services and tools; how can we connect technologies and researchers – Daan Broeder (CLARIN) – (10 min)
- End-users' view on the SSH Open Marketplace content – Speakers are selected applicants from the call for participation of researchers from different SSH subdomains, as indicated in the engagement mechanism below – Moderator: Matej Durco (OEAW-ACDH) – (75 min)
  a. An end-user journey on connecting to the SSH Open Marketplace
  b. Content alignment with tools & services already in use and needed in daily life?
  c. Contextualisation: optimising user experience & needs for training
  d. Identification of services & tools not yet referenced in the SSH Open Marketplace
- Integrating end-user feedback & closing – Matej Durco (OEAW-ACDH) – (15 min)
The Dimensions API has been built to allow institutions and researchers to define their own analysis criteria and to enhance their current tools. Most importantly, it is easy to use for beginner data scientists, researchers, librarians and research managers who do not have team members with technical skills.
The training will focus on two key areas: disambiguating researcher names from outside the organization in CRIS systems, and automating reports that focus on the wider impact of research.
The goal of this training is to encourage participants to think of new ways to understand the reach and impact of their research, as well as to:
- Learn the basics of DSL (Dimensions Search Language)
- Use the Dimensions/GRID integration
- Run Python scripts (either one of the templates we will provide or their own)
Tools used: Google Colab – a free environment from Google, although participants can use an environment of their choice.
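As a taste of what such a script can look like, here is a minimal, hypothetical sketch assuming the open-source dimcli Python client and a placeholder API key; the templates provided during the training may differ:

```python
# Hypothetical sketch using the dimcli client for the Dimensions Analytics API.
# Install first (e.g. in Google Colab): pip install dimcli
import dimcli

# Log in with the API key handed out to trainees (placeholder below).
dimcli.login(key="YOUR_API_KEY", endpoint="https://app.dimensions.ai")
dsl = dimcli.Dsl()

# A basic DSL query: publications mentioning "open science",
# returning only a few fields.
result = dsl.query(
    'search publications for "open science" '
    'return publications[title+doi+year] limit 10'
)

# Print the year and title of each returned publication.
for pub in result.publications:
    print(pub.get("year"), pub.get("title"))
```

From here, dimcli also offers helpers for converting results into pandas DataFrames, which is the kind of building block the automated-reporting part of the training builds on.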
Profile (necessary skills and knowledge) of trainees
No technical skills or prior knowledge of Dimensions or any of the additional tools/languages are necessary. Participants should bring their own laptops.
More details about the training can be found at https://ictessh.uns.ac.rs/digital-science
Unlimited Fair Chain coffee and tea package including sweet sustainable treats and organic fruits
Six papers will be presented in this session:
| 11:30 | Integration of national publication databases – towards a high-quality and comprehensive information base on scholarly publications in Europe |
| 11:45 | Instant insight: do data visualisation dashboards aid literature searching? |
| 12:00 | Challenges and opportunities of comprehensive bibliographic databases for studies of the social sciences and humanities |
| 12:15 | Supporting research strategy related to the UN’s Sustainable Development Goals with bibliometric data |
| 12:30 | Arts and Humanities and the others: Why can’t we measure arts and humanities |
| 12:45 | Sustainable Vocabulary Development for Data Archives: A Model of Cooperative Metadata Management |
Here will be a short abstract
A short abstract will be here
A short abstract will be here
A short abstract will be here
A short abstract will be here
A short abstract will be here
Lunch is included in the conference fee: a warm vegetarian lunch with seasonal products.
Fully exploiting the opportunities of open research data (ORD) requires researchers to openly share their data and to use the research data that others have openly shared. Despite existing policies that oblige data sharing (e.g. those of scientific funding agencies, the European Commission and universities), researchers are often reluctant to share and use open research data. Previous research shows that researchers may have very good reasons for not sharing research data openly and for not using open research data, for example the fear of not receiving credit for openly sharing research data, a lack of skills in open data use, or technical issues. The majority of obstacles to ORD sharing and use cannot be mitigated completely. Nevertheless, the negative impact of many challenges can be reduced with the right infrastructural and institutional arrangements, as suggested by previous research. This raises the question of which infrastructural and institutional arrangements may work in which context, since research disciplines all have their own specific characteristics. In my talk I will discuss various examples of infrastructural and institutional arrangements, derived from my extensive research in open data and open science, and I will explain how they affect research data sharing and use. I will discuss both arrangements that have already been applied in various research disciplines and novel, promising and questionable arrangements for the disciplines of the social sciences and humanities. I will highlight questions that still remain to be solved.
Despite improved digital access to scientific publications in recent decades, the fundamental principles of scholarly communication remain unchanged and continue to be largely document-based. The document-oriented workflows in science have reached the limits of adequacy, as highlighted by recent discussions on the increasing proliferation of scientific literature, the deficiencies of peer review and the reproducibility crisis. We need to represent, analyse, augment and exploit scholarly communication in a knowledge-based way by expressing and linking scientific contributions and related artefacts through semantically rich, interlinked knowledge graphs. This should be based on a deep semantic representation of scientific contributions, their manual, crowd-sourced and automatic augmentation, and finally intuitive exploration and interaction employing question answering on the resulting scientific knowledge base. We need to synergistically combine automated extraction and augmentation techniques with large-scale collaboration. As a result, knowledge-based information flows can facilitate completely new ways of search and exploration. In this talk we will outline first steps in this direction and present some use cases in the context of our Open Research Knowledge Graph initiative and the ERC ScienceGRAPH project.
Unlimited Fair Chain coffee and tea package including sweet sustainable treats and organic fruits
Each sponsor will have a time slot for presentation of its products and platforms.
| 03:45 | Elsevier |
| 04:05 | Odissei data |
The propositions for the quiz will be published here before the conference. The winner's prize will be worth 500 euros.
The exact location of the dinner will be announced before the conference.
- Impact Hub Amsterdam
This session includes presentations of eight papers.
| 09:00 | Exploring doctoral scholars performing research with tools and their digital literacy skills in Africa: case study of CODESRIA doctoral Mentees |
| 09:15 | TRIPLE project: building a discovery platform to enhance collaboration |
| 09:30 | Creating a learner corpus infrastructure: Experiences from making language learner data available |
| 09:45 | Governing games: Adaptive game selection in Math Garden |
| 10:00 | Social Media Mapping as Digital Infrastructure for Disaster Prevention and Reduction |
| 10:15 | Digitising tabular data at scale – an evaluation of current technologies using Australian census records |
| 10:30 | Media Analytics: A Resource to Research Language Usage by Major News Outlets |
| 10:45 | System Approach for Digital History |
A short abstract will be here
A short abstract will be here
A short abstract will be here
A short abstract will be here
A short abstract will be here
A short abstract will be here
A short abstract will be here
A short abstract will be here
Unlimited Fair Chain coffee and tea package including sweet sustainable treats and organic fruits
It is essential for researchers to have an up-to-date understanding of the literature in their research field. However, keeping up with all relevant literature is highly time consuming. Bibliometric visualizations can support this task. These visualizations provide intuitive overviews of the literature in a research field, enabling researchers to obtain a better understanding of the structure and development of a field and to get an impression of the most significant contributions made in the field.
In this talk, I will give an introduction to two software tools for bibliometric visualization: VOSviewer (www.vosviewer.com) and CitNetExplorer (www.citnetexplorer.nl). VOSviewer is a popular tool for visualizing bibliometric networks of publications, authors, journals, and keywords. CitNetExplorer is a tool for the visualization and analysis of citation networks of scientific publications. I will pay special attention to applications of VOSviewer and CitNetExplorer in the social sciences and humanities, focusing in particular on the use of advanced text mining, network analysis, and visualization techniques for analyzing large amounts of textual data.
A panel discussion about the usage of ICT tools in SSH. The list of panelists:
| Daniela Duca, SAGE Ocean | Daniela Duca is a product manager for SAGE Ocean, collaborating with startups to help them bring their tools to market. Before joining SAGE, she worked with student and researcher-led teams that developed new software tools and services, providing business planning and market development guidance and support. She designed and ran a 2-year program offering innovation grants for researchers working with publishers on new software services to support the management of research data. |
Lunch is included in the conference fee. It will be served as a lunch box on the last conference day, after the closing ceremony.
The exact start location will be announced at the conference.



