- Day 1 29 June 2020
- Day 2 30 June 2020
- Day 3 01 July 2020
- The conference online tool
The Dimensions API has been built to allow institutions and researchers to define their own analysis criteria and to enhance their current tools. Most importantly, it is easy to use for beginner data scientists, researchers, librarians and research managers who don’t have team members with technical abilities.
The training will focus on two key areas: disambiguating researcher names from outside the organization in CRIS systems, and automating reports that focus on the wider impact of research.
The goal of this training is to encourage participants to think of new ways they can understand the reach and impact of their research as well as:
- Learn the basics of DSL (Dimensions Search Language)
- Use the Dimensions/GRID integration
- Run python scripts (either one of the templates we will be providing or coming up with their own)
Tools used: Google Colab, a free environment from Google, although participants may use any environment they prefer.
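For participants curious what a DSL query looks like, here is a minimal Python sketch of the kind of template the workshop may provide. The query string follows the general shape of the Dimensions Search Language; the endpoint URL, returned fields, and token handling shown in the comments are illustrative assumptions, not official documentation.

```python
def build_dsl_query(term, limit=10):
    """Build a DSL full-text search over publications (illustrative shape)."""
    return (
        f'search publications for "{term}" '
        f'return publications[title + year + doi] limit {limit}'
    )

query = build_dsl_query("open science")
print(query)

# Sending the query would look roughly like the following (hypothetical
# endpoint and auth header; a valid API token would be required):
# import requests
# resp = requests.post("https://app.dimensions.ai/api/dsl.json",
#                      data=query, headers={"Authorization": f"JWT {token}"})
```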
Profile (necessary skills and knowledge) of trainees
No technical skills or prior knowledge of Dimensions or any of the additional tools/languages is necessary. Participants need to bring their own laptops/desktops.
More details about the training can be found at https://ictessh.uns.ac.rs/digital-science
The workshop will be organized by Qeios.
What if researchers could use an Open Science platform enabling them to seamlessly write with colleagues and instantly publish both their Articles and Definitions without leaving it? And what if the wider community of peers could then give the most transparent and diverse feedback by openly reviewing both Articles and Definitions? In our hands-on session, researchers can try a new way of integrating scholarly definitions as the building blocks of a new piece of research, and receive the approval of the wider community of peers.
Attendees are expected to have a laptop or desktop.
More details available at https://ictessh.uns.ac.rs/qeios
The conference opening ceremony
This paper contributes to “Open Science” theory, with a specific focus on Open Science data generated by scholars. To this end, a mixed-method systematic literature review, including science mapping techniques, was conducted. Our preliminary results reveal the potential of Open Science as a domain for interdisciplinary research. A keyword co-occurrence network analysis using the VOSviewer visualisation tool identified five clusters of interrelated sub-concepts within Open Science research. The key distinctive characteristics and the various categories of Open Science data have been identified. Relevant data platforms are provided to exemplify each category of Open Science data. Finally, a distinction between Open Science data and Open Government data was explored and the convergence point between them was presented.
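As a minimal sketch of the co-occurrence counting that underlies such keyword network maps (toy keyword lists below, not the study's actual corpus):

```python
from collections import Counter
from itertools import combinations

# Toy keyword lists, one per paper (illustrative data only).
paper_keywords = [
    ["open science", "open data", "data sharing"],
    ["open science", "open access", "peer review"],
    ["open data", "data sharing", "open government"],
]

# Count how often each pair of keywords appears together in a paper;
# in tools like VOSviewer these counts become the weights of network edges.
cooccurrence = Counter()
for keywords in paper_keywords:
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

for (a, b), weight in cooccurrence.most_common(3):
    print(f"{a} -- {b}: {weight}")
```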
The integration of data sharing into the academic research process is part of a huge scholarly debate. It is associated with many advantages for scientists, publishers, funding agencies and the public. On the other hand, there are still many problems, impediments and presumed disadvantages. To some extent, the reason for this restraint is the collision with the principle of priority in discovery, which is used for the acknowledgement of scientific reputation.
This study’s aim is to investigate scientists’ data-sharing practices. The theoretical approach is based on the Sociology of Science of Pierre Bourdieu (1975, 1991, 2004) and is extended with Panofsky’s concept of ‘scientific capital exchange’ and the concept of results of performance yielded during research by Barlösius et al. (2018). To date, twenty-one qualitative interviews with scientists from three disciplines (biology, neuroscience, computer science), drawn from projects all over Germany, have been conducted for the empirical analysis. The analysis offers insight into scientists’ sharing practices, the ways in which these are influenced by external effects, and the expected returns of sharing. The presentation will focus on external effects, the organization of research, and scientists’ definitions and descriptions of research data.
Institutions, funding bodies, and national research organizations are pushing for more data sharing and FAIR data. In many places, this has led to extra bureaucracy without clear benefits for the researcher, nor for the system of research. Is research really getting better if we share our data?
The answer is a resounding ‘yes’, but then we had better make sure that we can track where the data ends up, and that the additional burden on researchers and institutions is well thought through.
So why is getting credit for your data so much harder?
- Research data policies and plans are not enough to make data sharing go well. Standalone, they will only add administrative burden. So while easy to implement, they are not necessarily the right place to start this journey.
- When researchers share data, it typically ends up in one of the thousands of domain/subject repositories. Unfortunately, these do not yet automatically ‘count’ such deposits and rely on researcher self-reporting.
- Researchers are actually already sharing their data on a daily basis, using the tools they always use.
Unfortunately, these tools are still disconnected from the data management layer. To implement good data management practices, there are systems and tools that support the full lifecycle of research. These allow institutions to follow their research data and manage it across projects, without adding administrative overhead.
BitViews is a blockchain application that collects, validates, and aggregates worldwide online usage data of author-accepted manuscripts (AAMs) deposited in Open Access institutional repositories. It creates a free public ledger of usage events that allows anyone to see which research outputs have been accessed, where, and when, thus providing the raw material to construct discipline- and region-specific non-citation-based measures of research impact.
BitViews’ short-term implications include:
- The re-alignment of journal impact measures (from citations to usage);
- Changed patterns in the production of research articles (towards high-usage topics);
- Creation of new networks of research collaboration;
- Enhanced opportunity for open data sharing.
BitViews’ long-term effects are transformative. Because BitViews promotes the “unbundling” of AAMs from published articles, it endows AAMs with independent value. Two disruptive consequences follow: the very concept of APCs is undermined and the conditions are created for the academy to regain ownership of peer review. By relegating commercial publishers to the role of providers of post-AAM services, huge resources will be released. As soon as AAMs are de‐coupled from articles, the same process and infrastructure can be applied to research monographs, thereby completing the cycle of Open Access to the whole production of knowledge.
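To illustrate the core idea of a public, tamper-evident usage ledger, here is a minimal hash-chain sketch in Python. The event fields and hashing scheme are illustrative assumptions, not BitViews' actual protocol.

```python
import hashlib
import json

def add_event(ledger, event):
    """Append a usage event, chaining it to the previous entry's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    ledger.append({"event": event, "prev": prev_hash,
                   "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return ledger

def verify(ledger):
    """Recompute every hash; any tampering breaks the chain."""
    prev = "0" * 64
    for entry in ledger:
        payload = json.dumps({"event": entry["event"], "prev": prev},
                             sort_keys=True)
        if entry["prev"] != prev or \
                hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

ledger = []
add_event(ledger, {"aam": "10.5555/example", "where": "NL", "when": "2020-06-29"})
add_event(ledger, {"aam": "10.5555/example", "where": "RS", "when": "2020-06-30"})
print(verify(ledger))  # True
```

Because each entry's hash covers the previous entry's hash, rewriting any past event invalidates every later entry, which is what makes the ledger publicly auditable.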
The standard contributor model in science is vertically integrated. Resources are centralized to an individual or small team that conducts the entire research process: idea, design, collection, analysis, and report. This approach makes it easy to assign credit, but it is inefficient in capitalizing on specialized expertise, it produces a lot of small science, and it is exclusive. A complementary model is horizontally distributed. Crowdsourcing modularizes and distributes the research process across many contributors. This approach leverages expertise, enables big science, and is more inclusive. I will illustrate the value of crowdsourcing in the context of a metascience effort investigating the reproducibility of psychological research.
Against the monist programs and philosophies nowadays prevalent, I argue in favor of a dualism between information and meaning. The dynamics of (Shannon-type) information processing and meaning processing are different. In the social sciences, one studies the reflexive processing of meaning. Meaning is provided from the perspective of hindsight (against the arrow of time) and may generate redundancy: options which have not yet been realized. A calculus of redundancy can be envisaged.
Leydesdorff, L., Johnson, M. W., & Ivanova, I. (2018). Toward a calculus of redundancy: Signification, codification, and anticipation in cultural evolution. Journal of the Association for Information Science and Technology, 69(10), 1181-1192. doi: 10.1002/asi.24052
- The conference online tool
The workshop is organised by the following SSHOC project partners:
• European Research Infrastructure for Language Resources and Technology (CLARIN) – Daan Broeder
• Digital Research Infrastructure for the Arts and Humanities (DARIAH) – Laure Barbot
• Austrian Centre for Digital Humanities and Cultural Heritage (OEAW-ACDH) – Matej Durco
• Association of European Research Libraries (LIBER) – Vasso Kalaitzi
• Trust-IT – Marieke Willems
Short Description and objectives of the workshop
SSHOC will realise the Social Sciences and Humanities part of the European Open Science Cloud. One of the SSHOC project’s core objectives is to foster the transition from the current Social Sciences and Humanities landscape into a cloud-based infrastructure that will operate according to the FAIR principles, offering access to research data and related services adapted to the needs of the SSH community. Furthermore, the tools, services, repositories and other resources brought in by project partners or generated during the course of the project will be featured in the SSH Open Marketplace.
The project partners are developing tools and services for SSH researchers, data experts and research librarians who are part of the targeted end-users of the content of the SSH Open Marketplace. SSHOC aims to align the SSH Open Marketplace and its content with current research data practices.
The SSH Open Marketplace has been under development in the SSHOC project for over a year now. The development activities follow agile and UX best practices, involving targeted end-users as much as possible: workshops, interviews, brainstorming sessions and prioritisation meetings have all been used heavily. As a result, the project will release the alpha version of the SSH Open Marketplace in June 2020. To follow our development approach and improve the SSH Open Marketplace offerings, the project needs ongoing user feedback and engagement. The proposed workshop will therefore have the following main objectives:
- Raise awareness of the SSH Open Marketplace and the services and tools incorporated for research communities. Provide clear information on how the SSH Open Marketplace can help researchers in their daily activities and how it supplements the existing services offered by EOSC (e.g. the EOSC Marketplace).
- Engage SSH research community present at ICTeSSH to collect their input and feedback on the functionality and content of the SSH Open Marketplace, including reflections on the maintenance routines planned to be implemented in the system.
- Share experiences from using agile and UX best practices in development activities of the SSH Open Marketplace.
Agenda and speakers:
- Introduction, what are EOSC and SSHOC and what’s in it for you? Connecting to end-user communities @ICTeSSH – Marieke Willems (Trust-IT) & Vasso Kalaitzi (LIBER) – (10 min)
- What is the SSH Open Marketplace and what is it not? – Laure Barbot (DARIAH) – (10 min)
- Opening up the domain of SSH services and tools; how can we connect technologies and researchers – Daan Broeder (CLARIN) – (10 min)
- End-users’ view on the SSH Open Marketplace content – Speakers are selected applicants from a call for participation of researchers from different SSH subdomains, as indicated in the engagement mechanism below – Moderator: Matej Durco (OEAW-ACDH) – (75 min)
a. An end-user journey on connecting to the SSH Open Marketplace
b. Content alignment with tools & services already in use and needed in daily life?
c. Contextualisation: optimising user experience & needs for training
d. Identification of services & tools not yet referenced in the SSH Open Marketplace
- Integrating end-user feedback & closing – Matej Durco (OEAW-ACDH) (15 min)
The need for a comprehensive infrastructure for scholarly publication information has been on the EU’s agenda for a long time. The European Commission’s open science agenda also highlights the necessity of a good information base to follow up open access publishing across Europe. However, an all-inclusive information infrastructure on research publications is still missing. The most widely used commercial databases, Web of Science and Scopus, lack coverage especially in SSH fields. Meanwhile, aggregating harvesters such as Google Scholar and OpenAIRE are highly inclusive, but their coverage is ‘accidental’ rather than systematic.
During the past 10 years, European countries have invested significantly in national research information infrastructures. Now, at least 20 European countries have a national database for research publication metadata. The strength of these databases lies in their comprehensiveness and quality assurance since they often have a mandatory nature. They are, however, neither yet integrated nor widely used for cross-country comparisons. To this end, a proof of concept of a European publication infrastructure was carried out in the framework of ENRESSH (www.enressh.eu). The ENRESSH-VIRTA-PoC integrated publication data from four countries, the concept being built on the strengths of the Finnish national VIRTA system. This presentation highlights the results from the PoC and outlines future steps towards the integration of national publication databases in Europe.
The Consortium of European Social Science Data Archives (CESSDA) is composed of 18 countries. Its mission is to “provide a full scale sustainable research infrastructure enabling the research community to conduct high-quality research in the social sciences..”. Critical to this infrastructure are vocabularies which not only support consistent and interoperable metadata but also enable classification, indexing and discovery of research data. This presentation outlines how the UK Data Service and CESSDA collaborated on a standards-based implementation of a new vocabulary management and publishing platform, based on linked data. In particular, we will present a detailed overview of the technical challenges around versioning models and delivering change logs, how to make appropriate choices between SKOS, SKOS-XL and XKOS for multilingual use cases, and how the platform takes account of the needs of different communities, whether expert ontologists, translators or casual thesaurus browsers. We also demonstrate the different machine-readable endpoints and user interfaces that are available to those audiences.
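One of the technical challenges named above is versioning and change logs. As a toy sketch of the kind of diff a vocabulary platform must publish between versions (concept IDs and labels invented for illustration):

```python
# Two versions of a tiny vocabulary: concept ID -> preferred label.
v1 = {"c1": "Employment", "c2": "Labour"}
v2 = {"c1": "Employment", "c2": "Labor", "c3": "Gig economy"}

# A change log distinguishes concepts that were added or removed from
# concepts that merely changed their label between versions.
added = sorted(set(v2) - set(v1))
removed = sorted(set(v1) - set(v2))
relabelled = sorted(c for c in set(v1) & set(v2) if v1[c] != v2[c])

print("added:", added)
print("removed:", removed)
print("relabelled:", relabelled)
```

Real platforms track far more (multilingual labels, SKOS relations, deprecation), but the added/removed/changed distinction is the backbone of any versioned change log.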
National bibliographic databases offer great opportunities for bibliometric research, and connecting multiple national databases provides even more useful instruments. Although good progress has been made in developing common identifiers (DOI, ORCID, GRID, …), the interoperability between different national databases still faces challenges on several levels.
We illustrate these difficulties by addressing the different questions that appeared during the development of the Academic Book Publishers Register (ABP), which aims to integrate academic publisher lists from national bibliographic databases, in order to have a comprehensive international list of publishers respecting the highest academic standards.
In 2015 the United Nations General Assembly agreed on the 2030 Sustainability Agenda, including 17 Sustainable Development Goals [1]. The targets and indicators associated with these goals address the most pressing environmental, social and economic challenges humanity faces. The global science community has a crucial role in achieving these goals, by informing policy and driving positive change through research, education and social engagement.
Given the important role research institutions play in the 2030 Agenda, there is a clear need to map their contribution to the SDGs, to inform strategy and develop collaborations. Elsevier’s data science teams have built a set of search queries based on Scopus [2], one of the largest abstract and citation databases of peer-reviewed literature, with keywords reflecting the specific targets and indicators of the various SDGs. The queries were reviewed by subject experts and were introduced as pre-defined research areas in Elsevier’s web-based analytics tool, SciVal [3], providing consistency across SDG-related bibliometric analyses. The queries, and the related documentation on how they were built, are free to download from Mendeley [4], and to further improve the queries, a crowdsourcing tool was launched on the RELX SDG Resource Center [5], where experts can validate the relevance of scientific literature to the various SDGs.
In this talk, the speaker will present how these queries were used to support Aalborg University’s SDG-related strategy, work that also led to the university’s recent success in the THE Impact Rankings [6]. We will go through the important indicators of Aalborg’s SDG-related scholarly contribution and how the relevant research landscape was mapped in SDG 4 Quality Education, while acknowledging the challenges of bibliometrics in SSH.
1. United Nations. United Nations General Assembly. https://www.un.org/ga/search/view_doc.asp?symbol=A/RES/70/1&Lang=E (2015) doi:10.1163/157180910X12665776638740.
2. Baas, J., Schotten, M., Plume, A., Côté, G. & Karimi, R. Scopus as a curated, high-quality bibliometric data source for academic research in quantitative science studies. Quantitative Science Studies 1, 377–386 (2020).
4. Jayabalasingham, Bamini; Boverhof, Roy; Agnew, Kevin; Klein, Lisette (2019), “Identifying research supporting the United Nations Sustainable Development Goals”, Mendeley Data, v1. http://dx.doi.org/10.17632/87txkw7khs.1
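The keyword-query approach described in the abstract can be caricatured in a few lines of Python; the keyword sets below are toy examples, far simpler than the expert-reviewed Scopus queries the talk describes:

```python
# Illustrative keyword sets per goal (invented for this sketch).
SDG_KEYWORDS = {
    "SDG 4 Quality Education": {"education", "learning", "literacy"},
    "SDG 13 Climate Action": {"climate", "emissions", "warming"},
}

def tag_sdgs(abstract):
    """Return the goals whose keywords overlap the abstract's words."""
    words = set(abstract.lower().split())
    return sorted(goal for goal, kws in SDG_KEYWORDS.items() if words & kws)

print(tag_sdgs("Digital literacy and adult education in rural areas"))
# ['SDG 4 Quality Education']
```

The real queries add phrase matching, field restrictions and expert validation, but the mapping from keyword hits to SDG labels is the same basic mechanism.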
The use of numbers (publications and citations) to evaluate the performance of research and researchers is very common because of its ease of use. However, disciplinary differences must be considered to evaluate research and researchers accurately, without misjudgments in tenure and incentive decisions. The discipline that differs most from the others in terms of publication and citation patterns is arts and humanities. The main aim of this study is to reveal the main differences between arts & humanities and the other fields by considering publication frequency, citations, internationalization and interdisciplinarity. To this end, the main statistics for 59,728,700 papers published between 1980 and 2018 (e.g. number of citations, % of documents cited, documents in JIF journals, impact relative to world, industry and international collaborations, and number of open-access documents) were gathered from InCites across the 255 Web of Science subject categories. In terms of the number of publications, the papers published in the fields of Health & Life Sciences and Pure Sciences & Engineering number more than four times those in the Social Sciences and almost eight times those in Arts & Humanities. Similarly, the percentage of cited publications and the number of citations per publication in Arts & Humanities are considerably lower than in the other disciplines. These differences underline the need to evaluate Arts and Humanities separately from the others.
Fully exploiting the opportunities of open research data requires researchers to openly share their data and to use the research data that others have openly shared. Despite existing policies that oblige data sharing (e.g. of scientific funding agencies, the European Commission and universities) researchers are often reluctant to share and use open research data. Previous research already shows that researchers may have very good reasons for not sharing research data openly and for not using open research data, for example because of the fear of not receiving credit for openly sharing research data, because of a lack of skills in open data use or because of technical issues. The majority of obstacles for ORD sharing and use cannot be mitigated completely. Nevertheless, the negative impact of many challenges can be reduced with the right infrastructural and institutional arrangements, as suggested by previous research. This raises the question which infrastructural and institutional arrangements may work in which context, since research disciplines all have their own specific characteristics. In my talk I will discuss various examples of infrastructural and institutional arrangements, derived from my extensive research in open data and open science, and I will explain how they affect research data sharing and use. I will discuss both arrangements that have already been applied in various research disciplines, as well as novel, promising and questionable arrangements for the disciplines of social sciences and humanities. I will highlight questions that still remain to be solved.
Despite an improved digital access to scientific publications in the last decades, the fundamental principles of scholarly communication remain unchanged and continue to be largely document-based. The document-oriented workflows in science have reached the limits of adequacy as highlighted by recent discussions on the increasing proliferation of scientific literature, the deficiency of peer-review and the reproducibility crisis. We need to represent, analyse, augment and exploit scholarly communication in a knowledge-based way by expressing and linking scientific contributions and related artefacts through semantically rich, interlinked knowledge graphs. This should be based on deep semantic representation of scientific contributions, their manual, crowd-sourced and automatic augmentation and finally the intuitive exploration and interaction employing question answering on the resulting scientific knowledge base. We need to synergistically combine automated extraction and augmentation techniques, with large-scale collaboration. As a result, knowledge-based information flows can facilitate completely new ways of search and exploration. In this talk we will present first steps in this direction and present some use cases in the context of our Open Research Knowledge Graph initiative and the ERC ScienceGRAPH project.
Each sponsor will have a time slot for presentation of its products and platforms.
The quiz questions will be published here before the conference. The winner’s prize will be worth 500 euros.
- The conference online tool
The Media Analytics website allows the exploration of temporal dynamics in word frequency usage and latent associations between word sets in a corpus of popular news media articles from the Anglosphere, such as the New York Times, the Wall Street Journal, the Washington Post and the Guardian. The use of word frequencies and latent associations to study cultural phenomena allows the tracking of historical events as well as shifting societal trends, as reflected by how words are used in the corpus. The online tool is made available for interested researchers to run their own analyses of chronological word frequency usage and associations in popular news media outlets.
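The basic quantity behind such temporal word-usage plots is the relative frequency of a word per year, sketched here on toy data (invented headlines, not the site's corpus):

```python
from collections import defaultdict

# Toy corpus of (year, headline) pairs (illustrative data only).
articles = [
    (2019, "markets rally as trade talks resume"),
    (2020, "pandemic disrupts markets and trade"),
    (2020, "pandemic response dominates the news"),
]

def yearly_frequency(articles, word):
    """Relative frequency of `word` among all tokens, per year."""
    counts, totals = defaultdict(int), defaultdict(int)
    for year, text in articles:
        tokens = text.lower().split()
        totals[year] += len(tokens)
        counts[year] += tokens.count(word)
    return {year: counts[year] / totals[year] for year in totals}

print(yearly_frequency(articles, "pandemic"))
```

Plotting these per-year frequencies over decades is what reveals the historical events and societal trends the abstract mentions; latent associations additionally compare how often word sets co-occur.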
As digital infrastructures are strengthened as part of effective measures for disaster prevention and reduction, the importance of ICT and the internet environment is widely recognized, especially in recent years in Japan. At the time of the heavy rain disaster in western Japan in July 2018, it was possible to gather and accumulate various kinds of disaster information using the social media mapping function included in our spatiotemporal information system. Taking up this social media mapping, the present study describes issues related to the development and utilization of digital infrastructures as one of the measures for disaster prevention and reduction.
Because of the close relationship between real and virtual space, it is possible both to rescue and support victims and to cause information overload and confusion. It is therefore essential to effectively utilize the information contained in virtual space at the time of a disaster. Specifically, making use of the information in social media for rescue in real space is an important issue. Furthermore, it is essential to take measures for people vulnerable to disaster, who need disaster information most. For this, it is necessary to prepare a variety of ICT in addition to oral communication.
Present-day advanced capabilities for information storage and clear presentation open up the possibility of accumulating large volumes of historical data in digital form. For this purpose, it is necessary to establish the theoretical foundations and principles for the formalization and presentation of historical knowledge. This paper presents a description, and the experience of practical implementation, of the methods developed for the formalization and analysis of historical data sources.
Utilizing online digital educational content has become the norm when teaching young students. A variety of adaptive educational practice systems are readily available, allowing students to practice various domains at a preferred difficulty and pace. However, due to the intensification of the teaching profession and the possibility of practicing from home, students may be left unsupervised and, as a result, fail to practice the domains that matter most. This study proposes a solution to govern these students, i.e., computerized, data-driven supervision that guides students towards practicing the most important domains without the intervention of a teacher.
Through an experiment involving 13,578 participants, a new governing method was tested and found to have positive effects on both engagement and ability, with almost no changes to the visual interface needed. Governing seems a promising technique in general, and it was effectively tested and introduced in Math Garden.
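Adaptive practice systems of this kind are reported to track learner ability with Elo-style ratings; here is a hedged sketch of such an update rule, where the K factor and logistic scaling are illustrative choices rather than Math Garden's actual parameters.

```python
def elo_update(ability, difficulty, correct, k=0.4):
    """Nudge the ability estimate toward the observed outcome."""
    expected = 1 / (1 + 10 ** (difficulty - ability))  # P(correct answer)
    outcome = 1.0 if correct else 0.0
    return ability + k * (outcome - expected)

# A learner answering two items correctly and then one incorrectly:
ability = 0.0
for correct in [True, True, False]:
    ability = elo_update(ability, difficulty=0.0, correct=correct)
print(round(ability, 3))
```

Because items carry difficulty ratings on the same scale, a system can select items whose expected success probability matches the preferred difficulty, which is how practice stays adaptive per student.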
SSH research is divided across a wide array of disciplines, sub-disciplines, and languages. While this specialisation makes it possible to investigate the extensive variety of SSH topics, it also leads to a fragmentation that prevents SSH research from reaching its full potential. The use and reuse of SSH research is suboptimal, and interdisciplinary collaboration possibilities are often missed, partly because of missing standards and referential keys between disciplines. Moreover, the reuse of data may paradoxically complicate relevant sorting and trust relationships. As a result, societal, economic and academic impacts are limited. Conceptually, there is a wealth of transdisciplinary collaborations, but in practice there is a need to help SSH researchers and research institutions to connect and support them, to prepare research data for these overarching approaches, and to make the data findable and usable. The TRIPLE (Targeting Researchers through Innovative Practices and Linked Exploration) project is a practical answer to the above issues, as it aims at designing and developing the European discovery platform dedicated to SSH resources. Funded under the European Commission programme INFRAEOSC-02-2019 “Prototyping new innovative services” and carried out by a consortium of 18 partners, TRIPLE will develop a fully multilingual and multicultural solution for the discovery and reuse of SSH resources. The project started in October 2019 for a duration of 42 months, with European funding of 5.6 million €.
We present a learner corpus infrastructure project that aims at increasing the value of existing research data by setting up a fruitful environment for learner corpus research beyond the scope of individual projects, allowing for the continued exploitation, maintenance, dissemination and preservation of previously collected corpora of spoken or written language produced by language learners or by native speakers subject to language assessment. The aspects discussed concern especially the long-term preservation and publication of the corpus data in a research data repository, making it available to the greater academic public while following the FAIR Guidelines for Data Stewardship (Wilkinson 2016). While the Findability principle is mostly covered by our decision to make the data available through a research data repository integrated into the CLARIN infrastructure, in our presentation we will also discuss issues regarding Accessibility, Interoperability and Reusability that arise from the nature of this particular data type. In particular, we will address aspects such as choosing and setting up a research data repository, choosing and providing data formats, versioning, data provenance and licensing.
This paper presents how African doctoral scholars in the Social Sciences and Humanities perform research with digital tools, as well as their digital literacy skills. The study was conducted among 90 of CODESRIA’s mentees to explore the use of digital research tools in carrying out their research engagements in the ICT-driven era. The approach was pragmatic and the design focused on a critical study that describes, explores and analyses the subject of performing research with ICT tools and digital literacy skills. Questions were asked, through a checklist, on the level of use, the perceived effectiveness of the technologies and the perceived constraints, to gauge the digital literacy skills of participants. The use of digital research tools with a variety of information sources and resources in studies and research is not new to scholars. Scholars possess the ability to manipulate and use technologies with confidence, as they believe these are critical to their academic success. However, the adoption and use of ICTs among doctoral scholars for collaboration and research in Africa has been embraced only to an extent. The scholars’ knowledge and skills to search for and ethically use information effectively are average.
A short abstract will be here
It is essential for researchers to have an up-to-date understanding of the literature in their research field. However, keeping up with all relevant literature is highly time consuming. Bibliometric visualizations can support this task. These visualizations provide intuitive overviews of the literature in a research field, enabling researchers to obtain a better understanding of the structure and development of a field and to get an impression of the most significant contributions made in the field.
In this talk, I will give an introduction to two software tools for bibliometric visualization: VOSviewer (www.vosviewer.com) and CitNetExplorer (www.citnetexplorer.nl). VOSviewer is a popular tool for visualizing bibliometric networks of publications, authors, journals, and keywords. CitNetExplorer is a tool for the visualization and analysis of citation networks of scientific publications. I will pay special attention to applications of VOSviewer and CitNetExplorer in the social sciences and humanities, focusing in particular on the use of advanced text mining, network analysis, and visualization techniques for analyzing large amounts of textual data.
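As a minimal sketch of the citation networks such tools operate on (publications and citation links invented for illustration), counting how often each publication is cited within the network:

```python
from collections import Counter

# paper -> list of papers it cites (toy network).
citations = {
    "C": ["A", "B"],
    "B": ["A"],
    "D": ["B", "C"],
}

# Times cited (in-degree) per publication within this network; tools
# like CitNetExplorer build their timeline layouts on this structure.
times_cited = Counter(ref for refs in citations.values() for ref in refs)
print(times_cited.most_common())
```

Real analyses add publication years, clustering and drill-down, but the underlying data model is exactly this directed graph of who cites whom.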
A discussion about the usage of ICT tools in SSH. The list of panelists:
- Daniela Duca, SAGE Ocean: Daniela Duca works in the innovation team at SAGE Publishing. She explores how new technologies are changing the way social scientists are doing research, while incubating or finding and promoting software tools in this space. In her latest whitepaper, she reviewed more than 400 software tools, packages and apps used by social scientists, who develops and funds them, what makes them successful, and what the future of technologies for social science looks like. She has experience in program and product management, financial technology, and research data. She holds degrees in biochemistry, economics, and development studies, as well as a PhD in innovation management.
- Miloš Jovanović, Fraunhofer INT: Dr. Miloš Jovanović is currently head of the “Tools and Methods” unit at the Fraunhofer Institute for Technological Trend Analysis INT in Euskirchen, Germany. His group works on developing and scanning for new IT tools and methods that can be employed for the scientific work at their institute. His research focuses on bibliometrics, patentometrics, and recently altmetrics and the visualization of data. He has also worked in FP7 and H2020 projects for the EU Commission as project coordinator and work package leader. He studied modern history, politics, media science and information science at the Heinrich Heine University in Düsseldorf and finished his PhD working on a scientometric method to classify technologies into basic or applied science.