Belle II is an international collaboration formed to record and analyse the experimental data produced at the SuperKEKB accelerator in Japan. The experiment involves around 700 scientists from 23 countries and regions, and builds on the success of Belle, a predecessor experiment that ran between 1999 and 2010.
Belle II investigates the imbalance between matter and antimatter in our Universe, and relies on EGI High-Throughput Compute and storage services to analyse and share its data.
At the beginning of time, right after the Big Bang, the amounts of matter and antimatter in the Universe were balanced. But today everything we observe on Earth and in space is made of matter only. “If the behaviour of matter and antimatter were the same, the present imbalance would not have happened,” explains Takanori Hara, the Computing Coordinator of the Belle II collaboration. “So the imbalance implies something tilted this balance.”
The original Belle experiment was designed to study the difference in the behaviour of matter and antimatter, first postulated in 1973 by Japanese physicists Makoto Kobayashi and Toshihide Maskawa in the form of so-called CP violation. Their theory predicted that the difference between matter and antimatter would emerge in the “B” meson pair system. The work of the Belle team confirmed the prediction with experimental results, and Kobayashi and Maskawa won the Nobel Prize in Physics in 2008.
“However, the Kobayashi-Maskawa theory is not enough to explain the present matter-dominated universe,” says Hara. “To realize the current universe, we need a new mechanism beyond the Kobayashi-Maskawa theory and, although we have many theories now, we do not know which one is right.”
The Belle II experiment, running at the upgraded SuperKEKB accelerator, will search for new mechanisms of CP violation beyond the Kobayashi-Maskawa theory.
The SuperKEKB accelerator is expected to generate an amount of data similar to that of the ATLAS detector at CERN. However, the number of institutes taking part in the Belle II experiment is much smaller, and “because of this, we are always facing the problem of resources,” explains Hara.
Since 2009, Belle II has consumed over 1.6 billion CPU hours (HEPSPEC, elapsed time) and submitted more than 16 million compute jobs. The team manages this workload with DIRAC, a system originally developed for the LHCb experiment to guarantee interoperability between heterogeneous computing systems. Belle II also uses GGUS as its issue tracker and GOCDB as its downtime information system.
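To give a flavour of how such workloads are expressed, a DIRAC job can be described with a short JDL-style job description. The sketch below is purely illustrative: the job name, script and file names are hypothetical, not taken from Belle II production, and only standard DIRAC JDL parameters are used.

```
JobName       = "belle2-analysis-example";
Executable    = "run_analysis.sh";
InputSandbox  = {"run_analysis.sh"};
OutputSandbox = {"std.out", "std.err"};
CPUTime       = 86400;
```

A description like this is submitted to the DIRAC workload management system, which matches the job to an available site anywhere in the federated infrastructure and returns the sandbox files when it completes.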
In addition to EGI Federation resources, the Belle II experiment also benefits from cloud resources, HPC systems and local clusters.
The Belle II team needs to run a distributed computing infrastructure with limited manpower. “The EGI infrastructure has already proven its stable operation and scalability,” says Hara. “It is an established technology, which is a very important feature for us.”
An illustration of the Belle II detector.
The data centres providing the most computing and storage resources to Belle II are: