
The ENVRI-Hub Next project brings together 11 of Europe's environmental research infrastructures, creating a complex ecosystem of software services. With so many teams working independently, divergent development practices can slow progress and introduce risk. The solution? A single, automated pipeline that takes code from developers' commits all the way into production, ensuring quality, security, and speed along the way.
Context and Challenges
In ENVRI-Hub Next, each group enjoys a healthy level of autonomy within the project. However, this can fragment quality assurance, and modern research infrastructures live and die by the quality of their software. Different teams may follow different build processes and quality assessments; some ship their applications as Docker images, while others don't. Unit test coverage varies from group to group, and vulnerability scanning may range from "when we remember" to "never". The resulting patchwork slows delivery and puts sensitive datasets at risk. To accelerate science without sacrificing rigour, we have designed and rolled out a single end-to-end pipeline based on continuous integration/continuous delivery (CI/CD) and a GitOps approach (Figure 1), with the current setup running on CNCA infrastructure and developed in collaboration with the LIP team. Our goal is simple: every ENVRI-Hub Next component should follow the same automated path from commit to production, with security, FAIR compliance, and reproducibility embedded from the first line of code.
From Commit to Production: Principles and Architecture
The proposed approach emphasises "quality from the first commit": every code change undergoes immediate scanning. This is a shift-left strategy, moving testing and quality checks to the earliest possible point in development. As soon as a developer pushes new code, a continuous integration workflow kicks in and runs an automated quality gate. The objective is simple: issues caught earlier are easier and cheaper to fix. To accomplish this, we enforce the use of several tools integrated into the pipeline:
SonarQube: a platform that automatically analyses source code for bugs, security vulnerabilities, and "code smells" (maintainability issues), giving developers instant feedback on the code's health. If a commit attempts to introduce a bug or a recognised bad practice, SonarQube blocks that code from moving forward. It not only spots issues but also suggests potential fixes and explanations, with references to the associated risks. SonarQube does not do everything: practices such as peer code review and dedicated time to reduce existing technical debt remain necessary. Its significant advantage is that it prevents easily spotted issues from entering the source code, allowing them to be fixed long before the production stages.
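As a sketch of how such a quality gate could be wired into a pipeline, the following hypothetical GitLab CI job runs the SonarQube scanner and fails if the project's quality gate fails (the project key is a placeholder; `SONAR_HOST_URL` and `SONAR_TOKEN` are assumed to be configured as CI/CD variables):

```yaml
# Hypothetical GitLab CI job acting as a SonarQube quality gate.
sonarqube-check:
  stage: quality
  image: sonarsource/sonar-scanner-cli:latest
  script:
    - sonar-scanner
        -Dsonar.projectKey=envri-hub-component   # placeholder project key
        -Dsonar.qualitygate.wait=true            # fail the job if the gate fails
  allow_failure: false   # a failed gate blocks the rest of the pipeline
```

With `sonar.qualitygate.wait=true`, the scanner waits for the server's verdict, so a red quality gate stops the commit from progressing any further.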
SQAaaS (Software Quality Assurance as a Service): developed by CSIC, LIP and UPV, it brings modern DevOps practices to research software regardless of a team's prior expertise. It provides a ready-to-use CI/CD pipeline that helps researchers and developers ensure their software meets Open Science principles. The SQAaaS workflow runs licence checks, unit-test coverage, documentation linters, vulnerability scans and FAIR-compliance checks. At the end of the assessment, a verifiable digital badge (bronze, silver or gold) is minted, turning quality assurance work into a visible credential that funders, researchers, journals and infrastructure managers can inspect. Despite narrower language coverage for static and security analysis, SQAaaS complements SonarQube with policy-level checks and public recognition of software quality.
EVERSE (European Virtual Institute for Research Software Excellence): a Horizon Europe collaboration that complements the pipeline by supplying a "knowledge layer" for software quality. EVERSE's collective knowledge, captured in the Research Software Quality toolkit (RSQkit), provides the community with best practices consolidated into checklists, templates and policy guidance. In practical terms, ENVRI-Hub Next teams can decide what they want to test (e.g. citation metadata, reproducible environments, governance files). EVERSE also maintains a Network of Research Software Quality, offering software quality seminars, webinars and other events, ensuring that when the automated tools surface a problem, developers have a clear, peer-reviewed path to remediation. Together with SQAaaS, it pairs guidance with early-warning systems to ensure that ENVRI-Hub Next software is secure, FAIR and reproducible.
Shipping To Production
For enhanced robustness, security, and efficiency of the ENVRI-Hub Next application ecosystem, all applications are designed to be cloud-native, following the blueprint for modern applications defined in the Twelve-Factor App methodology. They are containerised and benefit from enhanced scalability, improved security through isolation, and greater portability across different environments, among other advantages of this approach. Once the source code has successfully passed the quality gates, the pipeline proceeds to containerise the application. For this step, we leverage rootless containerisation tools such as Kaniko and Buildah, which build container images without requiring elevated privileges on the build host, minimising the potential attack surface and improving overall system security.
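A minimal sketch of a rootless build step, assuming GitLab CI with Kaniko (the stage name is illustrative; `CI_REGISTRY_IMAGE`, `CI_PROJECT_DIR` and `CI_COMMIT_SHORT_SHA` are GitLab's predefined variables):

```yaml
# Hypothetical GitLab CI job building an image with Kaniko:
# no Docker daemon and no root privileges on the build host.
build-image:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - /kaniko/executor
        --context "${CI_PROJECT_DIR}"
        --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
        --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
```

Tagging the image with the commit SHA keeps every build traceable back to the exact source revision that produced it.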
After the containerisation step, it is essential to run container security vulnerability scanning against the produced image. Tools such as Trivy, a comprehensive and easy-to-use open-source scanner, inspect the resulting Docker image for known vulnerabilities (CVEs) in operating system packages and application dependencies. Trivy also identifies bad practices within the image, such as secrets stored in the image's file system, a missing non-root user, or other misconfigurations. If Trivy finds critical vulnerabilities in either the base image or the produced image, it halts the pipeline, preventing the insecure image from ever being published or deployed.
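The "halt the pipeline" behaviour can be sketched as a CI job in which Trivy's non-zero exit code fails the job whenever critical findings exist (stage and tag names are illustrative placeholders):

```yaml
# Hypothetical GitLab CI job scanning the freshly built image with Trivy.
# --exit-code 1 makes the job fail when CRITICAL vulnerabilities are found,
# halting the pipeline before the image can be deployed.
container-scan:
  stage: scan
  image:
    name: aquasec/trivy:latest
    entrypoint: [""]
  script:
    - trivy image
        --severity CRITICAL
        --exit-code 1
        "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
```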
Container vulnerability scanning is a crucial security measure for the entire cluster. Since all applications share the same Kubernetes cluster (with multi-tenancy in place), scanning minimises the risk that a vulnerable container compromises an individual application or the broader cluster environment.
The Deployment Pipeline: A GitOps Approach
Validated and secured images are versioned and published to GitLab's integrated Container Registry. From there, deployment is managed entirely through a GitOps workflow, using a combination of Helm and ArgoCD. Helm packages all the necessary Kubernetes manifests into a version-controlled chart, allowing us to template configurations for different environments. ArgoCD then acts as the GitOps agent, continuously monitoring the Git repository containing the Helm charts. When a change is detected, ArgoCD automatically synchronises it to the target Kubernetes cluster, preventing configuration drift and ensuring that the live state always matches the desired state defined in Git. An added benefit of ArgoCD is that it empowers development teams: they can manage their services through the ArgoCD web interface without needing direct cluster credentials.
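The pattern above can be illustrated with a hypothetical ArgoCD `Application` manifest (repository URL, chart path and namespaces are placeholders, not the project's actual values):

```yaml
# Hypothetical ArgoCD Application pointing at a Helm chart kept in Git.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: envri-hub-component        # placeholder application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.example.org/envri/deployments.git  # placeholder
    targetRevision: main
    path: charts/envri-hub-component
    helm:
      valueFiles:
        - values-production.yaml   # per-environment Helm values
  destination:
    server: https://kubernetes.default.svc
    namespace: envri-hub           # placeholder target namespace
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual changes, preventing configuration drift
```

The `selfHeal` and `prune` options are what enforce the GitOps guarantee that the live state always converges back to what is declared in Git.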

Security Always As A Priority
Security is not a single step but a continuous concern throughout the entire life cycle. HashiCorp Vault provides a centralised and secure way to manage all secrets, from database passwords to API keys, ensuring they are never hard-coded in Git repositories. And security doesn't stop at deployment: we use Falco for real-time threat detection based on runtime behaviour, alerting on suspicious activity such as unexpected shell access or file system modifications. Complementing this, Trivy appears again to continuously scan running workloads, detecting vulnerabilities that emerge after deployment as new CVEs are disclosed.
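As an illustration of the runtime side, a Falco rule for the "unexpected shell access" case mentioned above might look like the following sketch (a simplified variant of the shell-in-container rules shipped with Falco, not this project's actual ruleset):

```yaml
# Hypothetical Falco rule: alert when an interactive shell is spawned
# inside a running container.
- rule: Shell Spawned in Container
  desc: Detect an interactive shell started inside a container
  condition: >
    spawned_process and container
    and proc.name in (bash, sh, zsh)
    and proc.tty != 0
  output: >
    Shell spawned in container
    (user=%user.name container=%container.name command=%proc.cmdline)
  priority: WARNING
```

When the condition matches, Falco emits the formatted alert to its configured outputs, giving operators a real-time signal rather than a post-incident log entry.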
Our Motto
In essence, software quality and security are fundamental for modern research infrastructures like those collaborating in the ENVRI-Hub Next project. Managing a distributed ecosystem with multiple institutions and repositories clearly highlights the need for a unified approach to ensure consistency and reliability across all components.
Our solution, built on top of a CI/CD pipeline and following a GitOps approach (Figure 1), directly addresses this challenge, ensuring that every ENVRI-Hub Next component follows an automated and rigorous path from the very first line of code to production, with a strategy that catches issues as early as possible in the source code.
Ultimately, we intend not only to accelerate the delivery of high-quality software but also to ensure that best practices are followed at every stage of the development life cycle. We aim to give developers the peace of mind to deploy with the same confidence on a Friday afternoon as on a Monday morning.