EGI Call for Use Cases

Apply and get access to services, tools, training & support sponsored by various national funding agencies

With this call for use cases, EGI offers its services and related user support and training to individual researchers, national and international research projects, research communities, research infrastructures and commercial research entities. These services are sponsored by various national funding agencies and are free to access for the use cases selected through the call.

The offering

A collaborative path to the digital era

EGI offers compute and storage resources, compute platform services, data management services and related user support and training through this call. National funding agencies sponsor the majority of the services.

Selection and results

Starting from July 2023, EGI will continue to support the integration of new scientific use case applications in the EOSC Compute Platform under the following new conditions:

  1. EGI offers individual researchers, national and international research projects, and communities access to infrastructure and platform services, dedicated user support and training. The services, support and training are sponsored by various national funding agencies and are free to access for the use cases selected through the call.
  2. Proposals for scientific use cases will be evaluated by independent experts from the EGI Federation, who assess the objective of the use case and analyse its maturity and feasibility. Applicants will be notified of the outcome of the evaluation within three weeks of the submission date.
  3. If the outcome is positive, EGI will be in touch to identify the resource and service providers to be involved in the integration plan. These can be national or international, depending on the coverage of the request and its expected impact.
  4. Technical experts from the EGI Federation will monitor the integration of the use case in the EOSC Compute Platform on a regular basis, and once the use case has reached the production phase, a case study based on the experience will be produced.

Timeline

Submissions can be made at any time. Cut-off dates occur every three months, each followed by the evaluation of the applications submitted up to that date.

Next cut-off date: 15 May 2024.

Evaluation and notification take place within one month of the cut-off date.

Check the results of the EGI-ACE open call

The EGI-ACE call for use cases officially concluded in June 2023.
After 30 months of the project, a total of 42 scientific use case applications received technical support, consultancy and resources from the EC-funded project to integrate with the EOSC Compute Platform.

For some of these scientific applications, a success story demonstrating the results and their impact on the research field is also available on the EGI website.

EGI-ACE open call results

The objective of this use case is to perform extensive testing using a high-resolution distributed hydrological model to identify areas needing improvement in process description and datasets. The focus will be on locations with long observational records and various model setups for uncertainty analysis.

The use case implements an innovative computational procedure to indirectly describe the effect of climate change on biodiversity using environmental DNA (eDNA) information (data and metadata), taking advantage of a peculiar kind of archived eDNA collector specimen: honey samples produced over the last decade.

The objective of the use case is to create an AI-driven mobile app that combines car-sharing and carpooling models, allowing cost sharing and safe travel through direct, trustworthy transactions among passengers.

The use case aims to extend the computational capabilities of the OBM service network and to improve the integration of its services.

The scientific objective is to study whether the developed extension to the OGC SensorThings API for Citizen Science is fit for purpose for large datasets coming from different domains, such as environmental monitoring and biodiversity.
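
For context, the SensorThings API exposes entities such as Datastreams and Observations over plain HTTP with OData-style query options, so "fit for purpose" here largely means how well paged queries hold up at volume. Below is a minimal sketch of paging through observations; the endpoint URL and Datastream id are hypothetical, not taken from the use case.

```python
import requests

# Hypothetical SensorThings endpoint; deployments expose a /v1.1 root.
BASE = "https://sensors.example.org/v1.1"

def fetch_observations(datastream_id, page_size=1000):
    """Page through the Observations of one Datastream via @iot.nextLink."""
    url = f"{BASE}/Datastreams({datastream_id})/Observations"
    params = {"$top": page_size, "$orderby": "phenomenonTime asc"}
    while url:
        resp = requests.get(url, params=params, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        yield from body.get("value", [])
        url = body.get("@iot.nextLink")  # absolute URL with paging baked in
        params = None                    # only needed on the first request

for obs in fetch_observations(42):
    print(obs["phenomenonTime"], obs["result"])
```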

The scientific objective concerns the characterization of halide perovskite materials, interfaces and defects using molecular dynamics (MD), leading to a clearer understanding of the stability and degradation mechanisms in these materials.

The information collected in this study will be used for the repositioning of known drugs in order to identify putative antiviral drugs, and to design anionic polymers, both targeted at the spike glycan holes.

The main scientific goal is the development of combined density functional theory (DFT) and machine learning techniques to predict electronic properties of hybrid perovskite materials and interfaces, which are currently used in perovskite solar cells.

The use case aims at establishing an easy-to-use cloud service that allows for fast pKa and isoelectric point calculations using user-provided protein structures or those obtained from the Protein Data Bank.
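
To illustrate the underlying computation (not the structure-based method of the service itself): the net charge of a protein at a given pH follows from per-group Henderson-Hasselbalch terms, and the isoelectric point is the pH where that charge crosses zero, which bisection finds quickly. The sketch below works from a bare sequence with textbook pKa values; both are simplifying assumptions.

```python
# Illustrative sequence-based pI estimate; pKa sets vary between sources.
PKA_POS = {"K": 10.5, "R": 12.5, "H": 6.0, "Nterm": 9.0}            # +1 when protonated
PKA_NEG = {"D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1, "Cterm": 2.0}   # -1 when deprotonated

def net_charge(seq, ph):
    counts = {aa: seq.count(aa) for aa in set(seq)}
    counts["Nterm"] = counts["Cterm"] = 1
    pos = sum(n / (1 + 10 ** (ph - PKA_POS[aa]))
              for aa, n in counts.items() if aa in PKA_POS)
    neg = sum(n / (1 + 10 ** (PKA_NEG[aa] - ph))
              for aa, n in counts.items() if aa in PKA_NEG)
    return pos - neg

def isoelectric_point(seq):
    lo, hi = 0.0, 14.0
    while hi - lo > 1e-4:        # net charge decreases monotonically with pH
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if net_charge(seq, mid) > 0 else (lo, mid)
    return round((lo + hi) / 2, 2)

print(isoelectric_point("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))
```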

Running advanced scientific software requires expert knowledge not only on the phenomena to be modelled, but also on how to access and operate compute resources, develop, compile, and install the relevant software, and how to optimally use high-performance computing resources. The AiiDAlab web platform allows any interested researcher to work with advanced simulation tools. It provides an infrastructure for computational scientists to develop, execute, and share computational workflows as intuitive web services. The AiiDAlab applications can be used by theoretical as well as experimental scientists to easily prepare and run simulations through a web browser.

Starting from scientific data already held in the nodes of the CS3MESH4EOSC consortium, we want to ensure that users can seamlessly select the correct VREs and scientific workflows to operate on this data.

This use case will develop a novel multi-lingual search engine enabling efficient analysis of large text corpora; assemble and expand a comprehensive dataset of questions/queries relevant to literary research, enabling swift and accurate text retrieval; and provide a crowdsourcing platform for gathering essential data to assess and refine language models used in text analysis.

The use case will compute the most complete public-domain score concordance dataset, improving the analysis quality based on current state-of-the-art optical music recognition (OMR) algorithms, and provide infrastructure for dynamically expanding the concordance graph for newly digitized scores.

The use case will focus on intraoperative tumor delineation after optically modulated imaging and on evaluating the structural integrity of cardiovascular tissue. It will also tackle rare-disease pharmacological evolution guided by the monitoring of optical tissue properties, and personalized pharmacology development.

The use case proposes an online service allowing users to carry out simulation and optimization of scientific and engineering problems using open-source applications built on SPHinXsys, a multi-physics library. With such a computing service, users are expected to construct simulations and optimizations online (within a standard browser) as a WebAssembly application.

JONAS is an INTERREG Atlantic Area-funded research project to address the issue of underwater noise. The JONAS VRE will deliver formatted underwater noise data products to sustained EU ocean observing initiatives, based on data and data products developed by the project’s Thematic activities. A selection of results from each thematic activity will be ported to the VRE and made available to EU observing initiatives and the community.

MATRYCS aims to develop and deploy an open, interoperable, scalable, data-driven framework through an open cloud service provider, to manage the implementation of policy objectives in a fully scalable and interoperable way, and hence to generate win-win situations that may enable the adoption of novel business models by existing building-related stakeholders and/or open up new opportunities for BVC stakeholders.

The analysis of scientific data taken with a generic gamma-ray telescope is dominated by low statistics: each photon must be treated as a single particle with a definite energy and incoming direction. This makes gamma-ray data analysis very time-consuming, especially when the integrated observation spans years, and hence billions of photons, as in the case of the Fermi Large Area Telescope onboard the Fermi Gamma-ray Space Telescope (Fermi-LAT). Resources devoted to optimizing the analysis time are fundamental for Fermi-LAT science. This use case will allow easy integration of years of data and interface directly with the official Italian mirror archive of Fermi-LAT data hosted at the Space Science Data Center (SSDC) in Rome.

MINKE proposes a new vision in the design of marine monitoring networks, considering two dimensions of data quality, accuracy and completeness, as the driving components of quality in data acquisition. Some of the new digital services developed in Cos4Cloud address common challenges with MINKE, including, for example, automatic identification/validation of observations using advanced Artificial Intelligence methodologies that may improve both the accuracy and the completeness of validated data. Cos4Cloud services will be offered through the EOSC hub to both traditional and citizen scientists, and could be incorporated in MINKE RIs to provide improved Virtual Access in cloud-based systems.

The use case aims at establishing a proof of concept of the impact of a novel, thermodynamics-based Large Eddy Simulation (LES) model for large-scale turbulent fluid flows. It also aims to evaluate the turbulence quantities in terms of length scales, time scales, Leonard stresses and entropy generation history, applied to geometries suitable for the proposed computational strategies. A comparative analysis of the obtained results will be performed by validating the new model against reference Direct Numerical Simulation (DNS) and some commercial LES models. As a result, the use case will define possible new approaches for the rapid prototyping of realistic, large-scale turbulent fluid flows.

SAPS (Surface Energy Balance Automated Processing Service) is a service to estimate evapotranspiration (ET) and other environmental data that can be applied, for example, to water management and the analysis of the evolution of forest masses and crops. SAPS allows the integration of energy balance algorithms to compute the estimations that are of special interest for researchers in agricultural engineering and the environment. These algorithms can be used to increase knowledge of the impact of human and environmental actions on vegetation, leading to better forest management and analysis of risks.

In recent years, technological progress has been made in plant phenomics. High-throughput plant phenotyping platforms now produce massive datasets involving millions of plant images concerning hundreds of different genotypes at different phenological stages in both field and controlled environments. Various initiatives have helped to structure the European phenotyping landscape (EMPHASIS, EPPN) and enable researchers to use facilities, resources and services for plant phenotyping across Europe. However, among these services, the data services need to be improved. There is a need to build a federated and interoperable e-infrastructure allowing researchers to share and analyze phenotyping data.

Many scientific domains have been transformed into data-driven disciplines. Research projects increasingly and crucially depend on the exchange and processing of data, and have developed platforms tailored to their needs. Pangeo, a worldwide community-driven platform initially developed for geoscience, has huge potential to become a common gateway able to leverage a wide variety of infrastructures and data providers for various science fields. The public Pangeo deployments that provide fast access to large amounts of data and compute resources are all USA-based, and members of the Pangeo community in Europe do not have a shared platform where scientists and technologists can exchange know-how. The main objective is to demonstrate how to deploy and use Pangeo on EOSC and to underline the benefits for the European community.
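
For readers unfamiliar with the stack, a sketch of the core Pangeo pattern: xarray opens an analysis-ready Zarr store lazily with dask-backed arrays, so only the reduced result is ever computed and transferred. The store path and variable name below are placeholders, not an actual EOSC bucket, and reading such a URL would additionally require the matching fsspec backend (e.g. gcsfs).

```python
import xarray as xr

# Placeholder Zarr store; a real deployment would point at object storage
# reachable from the compute site. The "sst" variable name is assumed.
ds = xr.open_zarr("gs://some-bucket/sea-surface-temperature.zarr")

# Lazy reduction: a global-mean time series. No data is read until compute().
sst_mean = ds["sst"].mean(dim=["lat", "lon"])
print(sst_mean.compute())
```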

CAROUSEL+ aims to deliver immersive experiences through visceral and embodied group social interaction with real people and physically plausible multi-modal AI characters that can react and interact with humans and each other in a meaningful manner through dance in XR. To achieve and maintain the illusion of embodiment, participants need to feel that the effects of their actions, movements and voice are all immediate. The virtual environment and the participating avatars need to be as reactive as possible, with minimal time spent between actions and their effects, which is significantly affected by the overall latency.

Grape cultivation throughout the world is affected by various diseases. These diseases cause serious constraints in getting desired yields and good quality products. Study of disease epidemiology plays an important role in working out strategies for effective management of these diseases and in reducing the number of unwanted fungicide applications. In this context, the objective of the project GRAPEVINE is to adapt a set of models (phenological, disease and meteorological models) in the field of viticulture. The results of these models will train a predictive model based on Machine Learning (ML) techniques to improve the prevention and control of grape diseases in the wine cultivation sector (the most important diseases are Downy mildew, Powdery mildew, Black rot, and Botrytis).
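
The ML step described above can be pictured as an ordinary supervised-learning problem: the outputs of the phenological, disease and meteorological models become features, and recorded outbreaks become labels. The sketch below uses invented stand-in data and feature semantics; it is not the GRAPEVINE pipeline itself.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in data: rows = vineyard-days, columns = model outputs such as
# temperature, leaf wetness, phenological stage; labels = outbreak yes/no.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("ROC AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```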

NEANIAS comprehensively addresses the ‘Prototyping New Innovative Services’ challenge set out in the ‘Roadmap for EOSC’. NEANIAS drives the co-design, delivery, and integration into EOSC of innovative thematic services, derived from state-of-the-art research assets and practices in three major sectors: underwater research, atmospheric research and space research. With this application, NEANIAS broadens the integration in EOSC, specifically with the EOSC Compute Platform.

The main goal of this application is to further support the IoT-SESOD dataset computing and data generation experiment started under EUHubs4Data. The recent critical Log4j (CVE-2021-44228) vulnerability demonstrated the importance of having an SBoM (Software Bill of Materials) to be able to immediately assess the impact, or non-impact, of a particular vulnerability on a system or device. With IoT devices, things are much more challenging, as firmware and devices are essentially "black boxes" compared to traditional computers. Thanks to the IoT-SESOD experiment, Binare Oy was able to identify vulnerable TP-Link SDN devices and notify the vendor in a timely manner.
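
To make the SBoM argument concrete: with a CycloneDX-style JSON SBoM in hand, assessing exposure to a vulnerability like Log4Shell reduces to scanning the components list for the affected package and versions. The sketch below is illustrative only; the file name is hypothetical and the version check is deliberately crude (the real affected range is log4j-core 2.0-beta9 through 2.14.1).

```python
import json

def vulnerable_components(sbom_path):
    """Scan a CycloneDX JSON SBoM for log4j-core versions hit by CVE-2021-44228."""
    with open(sbom_path) as fh:
        sbom = json.load(fh)
    for comp in sbom.get("components", []):
        parts = comp.get("version", "").split(".")
        # Crude check: log4j-core 2.x with minor version below 15.
        if (comp.get("name") == "log4j-core" and len(parts) >= 2
                and parts[0] == "2" and parts[1].isdigit() and int(parts[1]) < 15):
            yield comp.get("purl", comp["name"] + "@" + comp["version"])

for hit in vulnerable_components("device-firmware.cdx.json"):
    print("affected:", hit)
```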

I-NERGY will demonstrate innovative AI-as-a-Service (AIaaS) energy analytics applications and digital twin services, validated across 9 pilots during the project, which span the full energy value chain and deliver other energy and non-energy services to realise synergies among energy commodities, with non-energy sectors, and with non-technical or low-technical domain end users.

Genopole, a biocluster with a prosperous ecosystem of students, researchers and SMEs, together with the University of Evry Paris-Saclay, Telecom SudParis and the Informatics School for Industry (INSEE), has organised a hackathon dedicated to digital genomics. The major challenge is bringing together interdisciplinary sets of domain experts and specialised computer science students with various degrees of experience and skills to "hack" solutions related to scientific topics of mutual interest. For three days, participants will come together to discuss progress on each of the topics, digital genomics best practices, coding styles, etc. At the end of the hackathon, every team should provide a Jupyter Notebook with data, tools, models and so on.

The use of in-situ photography is fast becoming an indispensable technology that helps document a wide variety of agricultural and environmental practices. This in-situ data stream can be collected from professional surveys (e.g. the LUCAS survey), from novel street-view-like collections, as well as from contributions by farmers, operators and citizens active in the rural environment. For more complex agri-environmental practices, robust in-situ information is still very scarce, especially for the quantitative evaluation of landscape features. The use case was seeking to scale up this work by deploying larger sets on multiple-GPU instances, to be able to cross-validate among data sources and integrate the higher complexity of CAP-specific evidence. This should enable the researchers to choose from a wider set of adequate but more resource-demanding DL models, perform faster hyper-parameter tuning, and test large banks of models against extensive test sets.

The use case calculates 49 standard climate indices on several very large datasets, some with very high spatial resolution: CMIP6 and ERA5, extended to CMIP5 and CORDEX if time permits. The resulting datasets can then be used by end users to assess climate change impacts for several geographical regions at different spatial and temporal scales. They can also be used by a large user base to explore the impact of climate change on compound extreme events (frequency, intensity, coverage), using traditional or novel techniques such as data mining.
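
As a feel for what one such index involves: the "summer days" index (SU, the annual count of days with daily maximum temperature above 25 °C) is a one-line reduction over a daily tasmax field in xarray; dedicated packages such as icclim or xclim implement full index catalogues. The file path below is a placeholder; the variable naming follows CMIP conventions.

```python
import xarray as xr

# Placeholder path; any CF-compliant daily maximum near-surface air
# temperature file will do. CMIP stores tasmax in kelvin.
ds = xr.open_dataset("tasmax_day_model_historical.nc")

# SU ("summer days"): annual count of days with tasmax > 25 degC (298.15 K).
su = (ds["tasmax"] > 298.15).resample(time="YS").sum(dim="time")
su.name = "summer_days"
print(su)
```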

Nano-sized magnetic clusters have great potential for technological and industrial applications, especially in emerging areas. Magnetic clusters (MC) show fascinating magnetic phenomena; however, considerable effort must still be devoted to developing and understanding the complicated magnetic properties of nano-cluster magnets. The use case works on the exploratory synthesis and characterization of new magnetic molecular materials by interlinking smaller homo- and heterometallic clusters of 3d and 3d-4f ions, or any appropriate combinations thereof, into nano-sized aggregates using adequate linkers such as carboxylates, O,N-based ligands, Schiff-base derivatives, etc., in order to probe and model the physical (electronic and magnetic) properties of discrete coordination nano-clusters, looking for unusual magnetic behavior.

The use case addresses the reactivity of hydrated CO2 with unsaturated organic compounds in the gas phase, and the molecular aggregation of CO2 via multiple updates of unsaturated organic compounds and its implication in atmospheric nucleation. The research output would provide valuable insights into the fate of hydrated CO2, which is abundant in the lower atmosphere worldwide due to global warming.

Data acquisition and generation in biodiversity research and biodiversity informatics are increasing rapidly. These data sources vary from real-time sensor and monitoring data, to molecular sequencing, to large-scale digitised images from natural science collections. The diverse nature of the data sources holds great potential for multidisciplinary research projects addressing many contemporary challenges, including climate change modelling, biodiversity conservation, and food security measures, to name a few. This proposal argues that to take advantage of a wide variety of data and cutting-edge machine learning techniques, we need sustainable support for feature engineering and storage.

The Latin American Giant Observatory (LAGO) is a large astroparticle observatory based on the deployment of a detection network at a continental scale in Latin America. LAGO’s main objectives involve measuring transient and secular variations of the astroparticle background radiation from ground level, as a tool to study high energy astrophysical and space weather phenomena. The use case will characterise the expected response of these sites and their signal-to-background acceptance capability.

The use case supports research into the passage of high-energy charged particle beams through straight and bent crystals. Because the large number of particles do not interact with one another, the task of finding their trajectories is easily parallelized: each computational core finds the trajectories of a certain group of particles. The computing resources offered by EGI-ACE will support studying the possibilities of extracting beams of negatively charged particles (such as electrons and antiprotons) from cyclic accelerators.
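
Schematically, this "embarrassingly parallel" pattern needs nothing more than splitting the particle ensemble across worker processes, as in the sketch below. The integrator and force field are toy stand-ins, not the actual crystal-channelling tracker.

```python
import numpy as np
from multiprocessing import Pool

def trajectory(state):
    """Toy single-particle tracker: the linear restoring force is an
    invented placeholder for the real crystal potential."""
    pos, vel = state[:2].copy(), state[2:].copy()
    dt = 1e-3
    for _ in range(1000):
        vel += -0.1 * pos * dt   # placeholder force
        pos += vel * dt
    return pos

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    particles = rng.normal(size=(10_000, 4))   # x, y, vx, vy per particle
    with Pool() as pool:                       # one group of particles per core
        finals = pool.map(trajectory, particles, chunksize=500)
    print(len(finals), "trajectories computed")
```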

The DataCloud use case will set up cloud and edge resources that the innovative DataCloud platform can use to execute data processing pipelines for at least one of its five use cases.

The aim of this project is the corroboration of nuclear magnetic resonance (NMR) data with quantum chemistry simulations in order to better describe the structure and dynamics of biomolecules such as metabolites and proteins. The objectives of this proposal are: the implementation of a workflow coupling NMR experiments with quantum chemistry simulations, using open-source software compiled on a cluster; performing quantum chemistry simulations with ab initio methods to aid the NMR assignment process; and performing molecular dynamics simulations on proteins in order to restrict the number of possible models that fit the NMR experimental data and to test in silico intrinsic motions and chemical reactions that will later be tested experimentally.

The scientific objectives of the use case include research into processes beyond the leading order in perturbative quantum chromodynamics; the calculation of integral cross sections, differential distributions, and correlation observables in the production of heavy quarks in association with vector bosons and jets under the conditions of the experiments at the Large Hadron Collider at CERN; and investigations of the dependence of these observables on the gluon, charm, and beauty quark distributions in the proton.