This is the ICTeSSH 2020 conference program. All times are in Central European Summer Time (CEST = UTC + 2)
Download program as a PDF
- Day 1 29 June 2020
- Day 2 30 June 2020
- Day 3 01 July 2020
- The conference online tool
The Dimensions API has been built to allow institutions and researchers to define their own analysis criteria and to enhance their current tools. Most importantly, it is easy to use for beginner data scientists, researchers, librarians and research managers who don’t have team members with technical abilities.
The training will focus on two key areas: disambiguating researcher names from outside the organization in CRIS systems, and automating reports that focus on the wider impact of research.
The goal of this training is to encourage participants to think of new ways they can understand the reach and impact of their research as well as:
- Learn the basics of DSL (Dimensions Search Language)
- Use the Dimensions/GRID integration
- Run python scripts (either one of the templates we will be providing or coming up with their own)
Tools used: Google Colab, a free environment from Google, although participants can choose the environment of their choice.
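To give a flavour of the kind of Python script the training covers, here is a minimal sketch of building a DSL (Dimensions Search Language) query. The endpoint and the need for an API token are documented by Dimensions; the query phrase and the returned fields here are illustrative assumptions only:

```python
# Minimal sketch of composing a DSL query string. The example query and
# field list are invented for illustration; consult the Dimensions DSL
# documentation for the full syntax.

def build_dsl_query(term: str, limit: int = 10) -> str:
    """Build a simple DSL query searching publications for a phrase."""
    return (f'search publications for "{term}" '
            f'return publications[title + year + doi] limit {limit}')

# Sending the query would look roughly like this (it requires an API
# token, so it is not executed here):
#   import requests
#   headers = {"Authorization": "JWT <your-token>"}
#   resp = requests.post("https://app.dimensions.ai/api/dsl",
#                        data=build_dsl_query("open science"),
#                        headers=headers)

print(build_dsl_query("open science"))
```

In the training, queries like this are run from a Colab notebook against the live API and the JSON results are turned into reports.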
Profile (necessary skills and knowledge) of trainees
No technical skills or prior knowledge of Dimensions or any of the additional tools/languages are necessary. Participants need their own laptops/desktops.
More details about the training can be found at https://ictessh.uns.ac.rs/digital-science
The workshop will be organized by Qeios.
What if researchers could use an Open Science platform enabling them to seamlessly write with colleagues and instantly publish both their Articles and Definitions without leaving it? And what if the wider community of peers could then give the most transparent and diverse feedback by openly reviewing both Articles and Definitions? In our hands-on session, researchers can try a new way of integrating scholarly definitions as the building blocks of their new piece of research, and have the approval of the wider community of peers.
Attendees are expected to have a laptop or desktop.
More details available at https://ictessh.uns.ac.rs/qeios
The conference opening ceremony
This paper contributes to “Open Science” theory, with a specific focus on Open Science data generated by scholars. To this end, a mixed-method systematic literature review, including science mapping techniques, was conducted. Our preliminary results reveal the potential of Open Science as a domain for interdisciplinary research. A keyword co-occurrence network analysis using the VOSviewer visualisation tool identified five clusters of interrelated sub-concepts within Open Science research. The key distinctive characteristics and the various categories of Open Science data have been identified. Relevant data platforms are provided to exemplify each category of Open Science data. Finally, the distinction between Open Science data and Open Government data was explored and the convergence point between them was presented.
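As a rough illustration of the keyword co-occurrence analysis described above (VOSviewer performs this at scale over full bibliographic datasets), here is a toy sketch in Python; the keyword lists are invented:

```python
from collections import Counter
from itertools import combinations

# Toy keyword lists, one per paper (invented for illustration).
papers = [
    ["open science", "open data", "fair"],
    ["open data", "open government", "transparency"],
    ["open science", "open data", "open government"],
]

# Count how often each pair of keywords appears in the same paper;
# these pair counts are the edge weights of a co-occurrence network.
cooccurrence = Counter()
for kws in papers:
    for a, b in combinations(sorted(set(kws)), 2):
        cooccurrence[(a, b)] += 1

# The heaviest edges form the core of the network that a tool like
# VOSviewer would then cluster and lay out.
for pair, weight in cooccurrence.most_common(3):
    print(pair, weight)
```

Clustering and layout of the resulting weighted network are what distinguish a visualisation tool from this raw counting step.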
The integration of data sharing into the academic research process is part of a huge scholarly debate. It is associated with many advantages for scientists, publishers, funding agencies and the public. On the other hand, there still are many problems, impediments and presumed disadvantages. To some extent, the reason for this restraint is the collision with the principle of priority in discovery, which is used for the acknowledgement of scientific reputation.
This study’s aim is to investigate scientists’ data-sharing practices. The theoretical approach is based on the Sociology of Science of Pierre Bourdieu (1975, 1991, 2004) and is extended using the concept of ‘scientific capital exchange’ by Panofsky and the concept of results of performance yielded during research by Barlösius et al. (2018). So far, twenty-one qualitative interviews with scientists from three different disciplines (i.e., biology, neurosciences, computer sciences), drawn from projects all over Germany, have been conducted for the empirical analysis. The analysis offers an insight into scientists’ sharing practices, the ways in which these are influenced by external effects, and the expected returns of sharing. The presentation will focus on external effects, the organization of research, as well as on scientists’ definitions and descriptions of research data.
Institutions, funding bodies, and national research organizations are pushing for more data sharing and FAIR data. In many places, this has led to extra bureaucracy without clear benefits for the researcher or for the system of research. Is research really getting better if we share our data?
The answer is a resounding ‘yes’, but then we had better make sure that we can track where the data ends up, and that the additional burden on researchers, as well as on institutions, is well thought through.
So why is getting credit for your data so much harder?
- Research data policies and plans are not enough to make data sharing go well. Standalone, they will only add administrative burden. So while easy to implement, they are not necessarily the right place to start this journey.
- When researchers share data, the data typically ends up in one of the thousands of domain/subject repositories. Unfortunately, these repositories do not yet automatically ‘count’ this usage and rely on researcher self-reporting.
- Researchers are actually already sharing their data on a daily basis, using the tools they always use.
Unfortunately, these tools are still disconnected from the data management layer. To implement good data management practices, there are systems and tools that support the full lifecycle of research. This allows institutions to follow their research data and manage it across projects, without adding administrative overhead.
BitViews is a blockchain application that collects, validates, and aggregates worldwide online usage data of authors’ accepted manuscripts (AAMs) deposited in Open Access institutional repositories. It creates a free public ledger of usage events that allows anyone to see which research outputs have been accessed, where, and when, thus providing the raw material to construct discipline- and region-specific non-citation-based measures of research impact.
BitViews’ short-term implications include:
- The re-alignment of journal impact measures (from citations to usage);
- Changed patterns in the production of research articles (towards high-usage topics);
- Creation of new networks of research collaboration;
- Enhanced opportunity for open data sharing.
BitViews’ long-term effects are transformative. Because BitViews promotes the “unbundling” of AAMs from published articles, it endows AAMs with independent value. Two disruptive consequences follow: the very concept of APCs is undermined, and the conditions are created for the academy to regain ownership of peer review. By relegating commercial publishers to the role of providers of post-AAM services, huge resources will be released. As soon as AAMs are decoupled from articles, the same process and infrastructure can be applied to research monographs, thereby completing the cycle of Open Access to the whole production of knowledge.
The standard contributor model in science is vertically integrated. Resources are centralized to an individual or small team that conducts the entire research process: idea, design, collection, analysis, and report. This approach makes it easy to assign credit, but it is inefficient in capitalizing on specialized expertise, it produces a lot of small science, and it is exclusive. A complementary model is horizontally distributed. Crowdsourcing modularizes and distributes the research process across many contributors. This approach leverages expertise, enables big science, and is more inclusive. I will illustrate the value of crowdsourcing in the context of a metascience effort investigating the reproducibility of psychological research.
Against the monist programs and philosophies nowadays prevalent, I argue in favor of a dualism between information and meaning. The dynamics of (Shannon-type) information processing and meaning processing are different. In the social sciences, one studies the reflexive processing of meaning. Meaning is provided from the perspective of hindsight (against the arrow of time) and may generate redundancy: options which have not yet been realized. A calculus of redundancy can be envisaged.
Leydesdorff, L., Johnson, M. W., & Ivanova, I. (2018). Toward a calculus of redundancy: Signification, codification, and anticipation in cultural evolution. Journal of the Association for Information Science and Technology, 69(10), 1181-1192. doi: 10.1002/asi.24052
- The conference online tool
|9.00-9.10||What are EOSC and SSHOC and what’s in it for you?||Ron Dekker (CESSDA Director, SSHOC coordinator, EOSC EB member)|
|9.10-9.20||Opening up the domain of SSH services and tools; how can we connect technologies and researchers||Daan Broeder (CLARIN)|
|9.20-9.30||What is the SSH Open Marketplace and what is it not?||Laure Barbot (DARIAH)|
|9.30-10.45||End-users view on the SSH Open Marketplace content – An end-user journey on connecting to the SSH Open Marketplace – Content alignment with tools & services already in use and needed in daily life – Contextualisation: optimising user experience & needs for training – Identification of services & tools not yet referenced in the SSH Open Marketplace||Moderator: Matej Durco (OEAW-ACDH) 4 SSH Open Marketplace Testers|
The need for a comprehensive infrastructure for scholarly publication information has been on the EU’s agenda for a long time. The European Commission’s open science agenda also highlights the necessity of a good information base to follow up open access publishing across Europe. However, an all-inclusive information infrastructure for research publications is still missing. The most widely used commercial databases, Web of Science and Scopus, lack coverage, especially in SSH fields. Meanwhile, aggregating harvesters such as Google Scholar and OpenAIRE are highly inclusive, but their coverage is ‘accidental’ rather than systematic.
During the past 10 years, European countries have invested significantly in national research information infrastructures. Now, at least 20 European countries have a national database for research publication metadata. The strength of these databases lies in their comprehensiveness and quality assurance since they often have a mandatory nature. They are, however, neither yet integrated nor widely used for cross-country comparisons. To this end, a proof of concept of a European publication infrastructure was carried out in the framework of ENRESSH (www.enressh.eu). The ENRESSH-VIRTA-PoC integrated publication data from four countries, the concept being built on the strengths of the Finnish national VIRTA system. This presentation highlights the results from the PoC and outlines future steps towards the integration of national publication databases in Europe.
In recent years, the humanities have witnessed a growth of available digital data related to archival materials, ancient books, museum objects, etc. In this context, scholars and cultural heritage professionals need to be able to correlate different data sources in order to better investigate the articulation of historical phenomena and of the transformation processes that have affected human history and culture. Moreover, in the analysis of digital data it is essential that they are not considered in isolation but in conjunction with all the contextual information needed to answer the research questions.
These aims can now be achieved through the use of DSpace-GLAM, an extension of DSpace specifically structured for cultural heritage management and adopted so far by some important European cultural institutions. The presentation will illustrate how, by means of the DSpace-GLAM flexible data model and its IIIF Image Viewer, it is possible not only to navigate through the pages of the various documents, but also to study the historical and geographical context of the digital objects, exploring the people, events and places related to them and creating historical social networks that can be visualized within the application, thereby moving the application from a Digital Library to a Digital Humanities Platform.
National bibliographic databases offer great opportunities for bibliometric research, and connecting multiple national databases provides even more useful instruments. Although good progress has been made in developing common identifiers (DOI, ORCID, GRID, …), the interoperability between different national databases still faces challenges on several levels.
We illustrate these difficulties by addressing the different questions that appeared during the development of the Academic Book Publishers Register (ABP), which aims to integrate academic publisher lists from national bibliographic databases, in order to have a comprehensive international list of publishers respecting the highest academic standards.
In 2015 the United Nations General Assembly agreed on the 2030 Sustainability Agenda, including 17 Sustainable Development Goals [1]. The targets and indicators associated with these goals address the most pressing environmental, social and economic challenges humanity faces. The global science community has a crucial role in achieving these goals, by informing policy, and driving positive change through research, education and social engagement.
Given the importance research institutions play in the 2030 Agenda, there is a clear need to map their contribution to the SDGs, to inform strategy and develop collaborations. Elsevier’s data science teams have built a set of search queries based on Scopus [2], one of the largest abstract and citation databases of peer-reviewed literature, with keywords reflecting the specific targets and indicators of the various SDGs. The queries were reviewed by subject experts and were introduced as pre-defined research areas in Elsevier’s web-based analytics tool, SciVal [3], providing consistency across SDG-related bibliometric analyses. The queries, and the related documentation on how they were built, are free to download from Mendeley [4], and to further improve the queries, a crowdsourcing tool was launched on the RELX SDG Resource Center [5], where experts can validate the relevance of scientific literature to the various SDGs.
In this talk, the speaker will present how these queries were used to support Aalborg University’s SDG-related strategy, and the work that also led to their recent success in the THE Impact Rankings [6]. We will go through the important indicators of Aalborg’s SDG-related scholarly contribution, and how the relevant research landscape was mapped in SDG 4 Quality Education, while acknowledging the challenges of bibliometrics in SSH.
1. United Nations. United Nations General Assembly. https://www.un.org/ga/search/view_doc.asp?symbol=A/RES/70/1&Lang=E (2015) doi:10.1163/157180910X12665776638740.
2. Baas, J., Schotten, M., Plume, A., Côté, G. & Karimi, R. Scopus as a curated, high-quality bibliometric data source for academic research in quantitative science studies. Quantitative Science Studies 1, 377–386 (2020).
4. Jayabalasingham, Bamini; Boverhof, Roy; Agnew, Kevin; Klein, Lisette (2019), “Identifying research supporting the United Nations Sustainable Development Goals”, Mendeley Data, v1. http://dx.doi.org/10.17632/87txkw7khs.1
The use of numbers (publications and citations) to evaluate the performance of research and researchers is very common because of its ease of use. However, disciplinary differences must be considered to evaluate research and researchers accurately, without misjudgments in tenure and incentive decisions. The discipline that differs most from the others in terms of publication and citation patterns is arts and humanities. The main aim of this study is to reveal the main differences between arts & humanities and the other fields by considering publication frequency, citations, internationalization and interdisciplinarity. To this end, the main statistics for 59,728,700 papers published between 1980 and 2018 (e.g. number of citations, % of documents cited, documents in JIF journals, impact relative to world, industry and international collaborations, and number of open-access documents) were gathered from InCites across the 255 Web of Science subject categories. In terms of the number of publications, the fields of Health & Life Sciences and Pure Sciences & Engineering publish more than four times as many papers as the Social Sciences and almost eight times as many as Arts & Humanities. Similarly, the percentage of cited publications and the number of citations per publication in the Arts and Humanities are considerably lower than in the other disciplines. These differences underline the need to evaluate Arts and Humanities separately from the others.
Fully exploiting the opportunities of open research data requires researchers to openly share their data and to use the research data that others have openly shared. Despite existing policies that oblige data sharing (e.g. of scientific funding agencies, the European Commission and universities) researchers are often reluctant to share and use open research data. Previous research already shows that researchers may have very good reasons for not sharing research data openly and for not using open research data, for example because of the fear of not receiving credit for openly sharing research data, because of a lack of skills in open data use or because of technical issues. The majority of obstacles for ORD sharing and use cannot be mitigated completely. Nevertheless, the negative impact of many challenges can be reduced with the right infrastructural and institutional arrangements, as suggested by previous research. This raises the question which infrastructural and institutional arrangements may work in which context, since research disciplines all have their own specific characteristics. In my talk I will discuss various examples of infrastructural and institutional arrangements, derived from my extensive research in open data and open science, and I will explain how they affect research data sharing and use. I will discuss both arrangements that have already been applied in various research disciplines, as well as novel, promising and questionable arrangements for the disciplines of social sciences and humanities. I will highlight questions that still remain to be solved.
Despite an improved digital access to scientific publications in the last decades, the fundamental principles of scholarly communication remain unchanged and continue to be largely document-based. The document-oriented workflows in science have reached the limits of adequacy as highlighted by recent discussions on the increasing proliferation of scientific literature, the deficiency of peer-review and the reproducibility crisis. We need to represent, analyse, augment and exploit scholarly communication in a knowledge-based way by expressing and linking scientific contributions and related artefacts through semantically rich, interlinked knowledge graphs. This should be based on deep semantic representation of scientific contributions, their manual, crowd-sourced and automatic augmentation and finally the intuitive exploration and interaction employing question answering on the resulting scientific knowledge base. We need to synergistically combine automated extraction and augmentation techniques, with large-scale collaboration. As a result, knowledge-based information flows can facilitate completely new ways of search and exploration. In this talk we will present first steps in this direction and present some use cases in the context of our Open Research Knowledge Graph initiative and the ERC ScienceGRAPH project.
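The core idea of expressing scientific contributions as linked statements can be sketched, in a deliberately simplified form unrelated to the actual ORKG data model, as subject-predicate-object triples; all identifiers below are invented:

```python
# A set of scientific contributions expressed as subject-predicate-object
# triples, the basic unit of a knowledge graph (all identifiers invented).
triples = {
    ("Paper:123", "addresses", "Problem:reproducibility"),
    ("Paper:123", "uses_method", "Method:crowdsourced-replication"),
    ("Paper:456", "addresses", "Problem:reproducibility"),
}

# A simple structured query over the graph: which papers address a
# given problem? This kind of query is what a document-based literature
# search cannot answer directly.
def papers_addressing(problem):
    return sorted(s for s, p, o in triples
                  if p == "addresses" and o == problem)

print(papers_addressing("Problem:reproducibility"))
```

Real scholarly knowledge graphs add typed resources, provenance, and vocabularies on top of this basic pattern, which is what enables the question answering and exploration the talk describes.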
Each sponsor will have a time slot for presentation of its products and platforms.
|04:30pm||Digital Science / Dimensions|
|04:40pm||SAGE Publishing / SAGE Ocean|
- Quiz with 500 euros prize for a winner
- Everyone can participate, and there can be only one winner
- Questions are related to the sponsors’ session presentations
- Your email will be used as identifier
- There will be 10 multiple-choice questions
- Be quick and accurate
- Don’t forget to press the submit button at the end
- Link to the quiz will be posted in the chat box of the Zoom webinar
- GOOD LUCK!!!
- The conference online tool
This presentation will go over the history, philosophy and main features of Chataigne (http://benjamin.kuperberg.fr/chataigne). You will get an idea of how Chataigne can help you test your ideas, synchronize your software, and create interactions and animations in a fast and efficient way.
As digital infrastructures are strengthened as part of effective measures for disaster prevention and reduction, the importance of ICT and the internet environment has become widely recognized in Japan, especially in recent years. At the time of the Heavy Rain Disaster in western Japan in July 2018, it was possible to gather and accumulate various kinds of disaster information using the social media mapping function included in our spatiotemporal information system. Taking up this social media mapping, the present study describes the issues related to the development and utilization of digital infrastructures as one of the measures for disaster prevention and reduction.
Due to the close relationship between real and virtual space, it is possible both to rescue and support victims and to generate information overload and confusion. It is therefore essential to make effective use of the information contained in the virtual space at the time of a disaster. Specifically, an important issue is how to use the information in social media for rescue in the real space. Furthermore, it is essential to take measures for people vulnerable to disasters, who need disaster information the most. For this, it is necessary to prepare a variety of ICT in addition to oral communication.
Present-day advanced capabilities for information storage and clear presentation open up the possibility of accumulating large volumes of historical data in digital form. For this purpose, it is necessary to establish the theoretical foundations and principles for the formalization and presentation of historical knowledge. This paper presents a description and the practical implementation experience of the methods developed for the formalization and analysis of historical data sources.
Utilizing online digital educational content has become the norm when teaching young students. A variety of adaptive educational practice systems is readily available and allows students to practice various domains at a preferred difficulty and pace. However, due to the intensification of the teaching profession and the possibility of practicing from home, students might be left unsupervised and, as a result, may not practice the domains that are most important. This study proposes a solution to govern these students, i.e., to provide computerized, data-driven supervision that guides students toward practicing the most important domains without the intervention of a teacher.
Through an experiment involving 13 578 participants, a new governing method was tested and found to have positive effects on both engagement and ability, with almost no changes to the visual interface needed. Governing seems a promising technique in general, and was effectively tested and introduced in Math Garden.
Most citizens in modern liberal democracies regularly consume news media content to inform themselves about current affairs. Thus, content analysis of news and opinion articles from popular media outlets can provide rich insight about the cultural milieu where such textual artifacts originated. Combining computational tools for content analysis with human led finesse can help an analyst exploit the capabilities of scalable computational methods while also leveraging human skills and expertise to guide the analysis. This work introduces an online tool, the http://media-analytics.org website, that empowers researchers by providing modern analytics tools to study language usage in textual content from news and opinion articles of major media outlets. Due to the diachronic nature of news articles, the Media-Analytics.org website allows the exploration of temporal dynamics in word frequency usage and strength of association between word pairs. It is the hope of the author that these tools can help other researchers gain insights about the temporal flux in language usage by major news media organizations.
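The temporal word-frequency exploration that such a site offers can be sketched as follows; the corpus, counts, and function name are invented for illustration:

```python
from collections import Counter

# Toy corpus: (year, article text) pairs, invented for illustration.
articles = [
    (2019, "economy trade economy"),
    (2019, "election trade"),
    (2020, "pandemic economy pandemic pandemic"),
]

# Relative frequency of a word per year:
# occurrences of the word / total tokens published that year.
def yearly_frequency(word):
    counts, totals = Counter(), Counter()
    for year, text in articles:
        tokens = text.split()
        totals[year] += len(tokens)
        counts[year] += tokens.count(word)
    return {y: counts[y] / totals[y] for y in totals}

print(yearly_frequency("economy"))  # frequency per year
```

Plotting these per-year frequencies over decades of articles is what reveals the temporal flux in language usage the abstract describes; word-pair association adds a co-occurrence measure on top of the same token counts.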
SSH research is divided across a wide array of disciplines, sub-disciplines, and languages. While this specialisation makes it possible to investigate the extensive variety of SSH topics, it also leads to a fragmentation that prevents SSH research from reaching its full potential. Use and reuse of SSH research is suboptimal, and interdisciplinary collaboration possibilities are often missed, partly because of missing standards and referential keys between disciplines. Moreover, the reuse of data may paradoxically complicate relevant sorting and trust relationships. As a result, societal, economic and academic impacts are limited. Conceptually, there is a wealth of transdisciplinary collaborations, but in practice there is a need to connect and support SSH researchers and research institutions, to prepare research data for these overarching approaches, and to make them findable and usable. The TRIPLE (Targeting Researchers through Innovative Practices and Linked Exploration) project is a practical answer to the above issues, as it aims at designing and developing a European discovery platform dedicated to SSH resources. Funded under the European Commission programme INFRAEOSC-02-2019 “Prototyping new innovative services” and carried by a consortium of 18 partners, TRIPLE will develop a fully multilingual and multicultural solution for the discovery and reuse of SSH resources. The project started in October 2019 for a duration of 42 months, with European funding of 5.6 million €.
We present a learner corpus infrastructure project that aims at increasing the value of existing research data by setting up a fruitful environment for learner corpus research beyond the scope of individual projects, allowing for the continued exploitation, maintenance, dissemination and preservation of previously collected corpora of spoken or written language produced by language learners or by native speakers who are subjects of language assessment. The aspects discussed concern especially the long-term preservation and publication of the corpus data in a research data repository, making it available to the wider academic public while trying to follow the FAIR Guidelines for Data Stewardship (Wilkinson 2016). While the Findability principle was mostly covered by our decision to make the data available through a research data repository integrated into the CLARIN infrastructure, in our presentation we will also discuss issues regarding Accessibility, Interoperability and Reusability that arise from the nature of this particular data type. In particular, we will address aspects such as choosing and setting up a research data repository, choosing and providing data formats, versioning, data provenance and licensing.
Knowsi is a tool for researchers and participants to manage their consent relationship. Built initially to support the consent and GDPR needs of the user research community, Knowsi is supported by the Sage Ocean Concept Grant to bring a better consent experience to researchers and research participants in the social sciences.
This paper discusses the advantages and disadvantages of ICTs in the teaching and learning of foreign languages, in particular the use of WhatsApp in the Certificate in Portuguese programme offered by the Institute of Distance Education of the University of Eswatini. Theoretical aspects will be combined with practical examples of what has been happening on the WhatsApp platform since 2013 with learners of Portuguese. The practical part comes from field research undertaken by the tutor as a direct observer of activities and feedback between lecturers, tutors and students. The impact of COVID-19 on the use of WhatsApp will also be looked at briefly. In conclusion, it will be noted that WhatsApp is a valuable environment to ensure that teaching and learning continue beyond the classroom, and that it can be an important motivator for lifelong learning.
It is essential for researchers to have an up-to-date understanding of the literature in their research field. However, keeping up with all relevant literature is highly time consuming. Bibliometric visualizations can support this task. These visualizations provide intuitive overviews of the literature in a research field, enabling researchers to obtain a better understanding of the structure and development of a field and to get an impression of the most significant contributions made in the field.
In this talk, I will give an introduction to two software tools for bibliometric visualization: VOSviewer (www.vosviewer.com) and CitNetExplorer (www.citnetexplorer.nl). VOSviewer is a popular tool for visualizing bibliometric networks of publications, authors, journals, and keywords. CitNetExplorer is a tool for the visualization and analysis of citation networks of scientific publications. I will pay special attention to applications of VOSviewer and CitNetExplorer in the social sciences and humanities, focusing in particular on the use of advanced text mining, network analysis, and visualization techniques for analyzing large amounts of textual data.
A discussion about the use of ICT tools in SSH. The panelists:
|Daniela Duca||Daniela Duca works in the innovation team at SAGE Publishing. She explores how new technologies are changing the way social scientists are doing research, while incubating or finding and promoting software tools in this space. In her latest whitepaper, she reviewed more than 400 software tools, packages and apps used by social scientists, who develops and funds them, what makes them successful, and what is the future of technologies for social science. She has experience in program and product management, financial technology, and research data. She holds degrees in biochemistry, economics, development studies, as well as a PhD in innovation management.|
|Miloš Jovanović||Dr. Miloš Jovanović is currently the head of unit of the group “Tools and Methods” at the Fraunhofer Institute for Technological Trend Analysis INT in Euskirchen, Germany. His group works on developing and scanning for new IT-tools and methods that can be employed for the scientific work at their institute. His research focuses on bibliometrics, patentometrics, and recently altmetrics and the visualization of data. He also worked in FP7 and H2020 projects for the EU-Commission as project coordinator and work package leader. He studied modern history, politics, media science and information science at the Heinrich-Heine-University in Düsseldorf and finished his PhD working on a scientometric method to classify technologies into basic or applied science.|
|Suzanne Dumouchel||Suzanne Dumouchel, PhD in French literature, is a research engineer at the CNRS. She works in the Huma-Num unit, an infrastructure for digital humanities. She leads the European project TRIPLE which aims to develop a platform for data discovery, research projects and researchers in SSH with which various innovative services are associated. She is co-coordinator of the European infrastructure OPERAS, in charge of strategic partnerships, dedicated to open access scholarly communication in the field of SSH. In addition, she is involved in the setting up of the EOSC by participating in various working groups and by developing services to integrate the EOSC catalogue. Strongly committed to the Open Science movement and to the promotion of research in SHS, she is particularly active in the field of research infrastructures. She is interested, among others, in innovative technologies such as the blockchain, in questions and issues related to citizen science, or in the epistemology of digital humanities.|
|Ulf-Dietrich Reips||Ulf-Dietrich Reips is a full professor in the Faculty of Sciences at the University of Konstanz, where he holds the Chair for Psychological Methods, Assessment, and iScience (https://iscience.uni-konstanz.de/de). For more than two decades he has been working on Internet-based research methodologies (or Internet science), the psychology of the Internet, measurement, privacy, social media, and big data. In 1994, he founded the Web Experimental Psychology Lab, the first laboratory for conducting real experiments on the World Wide Web. His 2002 article “Standards for Internet-based experimenting” in the journal Experimental Psychology defined the field. Reips was elected the first non-North American president of the Society for Computers in Psychology (SCiP) and he is the founding editor of the International Journal of Internet Science. Many of his publications are among the most highly cited in their journals, see http://www.uni-konstanz.de/iscience/reips/pubs/publications.html. Ulf has worked, lived, and studied in California, Colorado, Israel, Germany, Spain, Switzerland, and the UK. |
Ulf and his team develop and provide free Web tools for researchers, teachers, students, and the public. They received numerous awards for their Web applications (available from the iScience Server at http://iscience.eu/) and methodological work serving the research community.
In 1996, Ulf won the first Internet Literature competition in Germany, co-organized by the German weekly Die Zeit and IBM, with his digital poem “Das Websonett”.
In his spare time, Ulf enjoys family life with his wife, daughter, and their two cats in Switzerland, as well as swimming, catamaran sailing, soccer, and playing the French game of boules.