Updated 30/08/2022

A successful collaboration with the AI-SPRINT project

AI-SPRINT is an EU-funded project which defines a novel framework for the design and operation of AI applications in computing continua.
The project goes beyond supporting AI application development by enabling the seamless design and partitioning of AI applications across the plethora of cloud-based solutions and AI-enabled sensor devices, while providing security and privacy guarantees.
POPNAS (Pareto-Optimal Progressive Neural Architecture Search) is one of the design-time tools of the AI-SPRINT project. It is based on Neural Architecture Search (NAS), an AutoML technique capable of finding optimal neural network architectures for a given task and dataset. The algorithm can consider and optimize multiple objectives, making it easier to deploy the final architectures under potential system constraints. Furthermore, the final architectures are built by stacking multiple modular units, which makes partitioning them between the edge and the cloud simple and efficient. This approach makes it possible to generate state-of-the-art neural network models in a single end-to-end process, with minimal AI expertise required, enabling wider adoption of deep learning techniques in industry.
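
As a rough illustration of this cell-based design, the sketch below builds a small network by stacking identical modular cells, so that any cell boundary can act as an edge/cloud partition point. It is only a sketch assuming a Keras-style workflow: the layer choices, sizes and the marked split point are illustrative assumptions, not the actual POPNAS implementation.

```python
# Minimal sketch (not POPNAS code): a network built by stacking identical
# modular cells, so it can be split between edge and cloud at any cell boundary.
import tensorflow as tf

def cell(x, filters):
    """One modular unit: a small convolutional block repeated throughout the network."""
    x = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.BatchNormalization()(x)
    return x

def build_stacked_model(input_shape=(32, 32, 3), num_cells=6, filters=32, num_classes=10):
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for i in range(num_cells):
        x = cell(x, filters)
        if i == num_cells // 2:  # a possible (hypothetical) edge/cloud partition point
            x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_stacked_model()
model.summary()
```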
AI-SPRINT focuses its efforts on applying Artificial Intelligence and Edge Computing in three thematic use cases:

  1. Personalised Healthcare: developing an automated system for personalised stroke risk assessment and prevention.
  2. Maintenance & Inspection: creating an infrastructure that reduces downtime and revenue losses caused by degenerative asset performance.
  3. Farming 4.0: delivering edge and intelligent sensors to optimise phytosanitary treatments.
AI-SPRINT used the following EGI Services to fulfil the project objectives:

POPNAS expands PNAS, an established method in the NAS literature, with an additional surrogate ML model used to estimate the training time required by each candidate architecture. Predicting both the accuracy and the training time of the candidate architectures allows POPNAS to address NAS as a multi-objective optimization problem, solved through Pareto optimality. This optimization technique selects for training only the neural architectures estimated to reach the best trade-off between the considered metrics.
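
The sketch below illustrates the Pareto-optimality step described above: given surrogate predictions of accuracy (to maximise) and training time (to minimise), only the non-dominated candidates are kept for actual training. The candidate data, numbers and function name are hypothetical assumptions, not POPNAS internals.

```python
# Illustrative Pareto-front selection over hypothetical surrogate predictions.
from typing import List, Tuple

def pareto_front(candidates: List[Tuple[str, float, float]]) -> List[Tuple[str, float, float]]:
    """Keep only candidates not dominated by another one (dominated = some other
    candidate has accuracy >= and time <=, with at least one strict inequality)."""
    front = []
    for name, acc, time in candidates:
        dominated = any(
            (a >= acc and t <= time) and (a > acc or t < time)
            for _, a, t in candidates
        )
        if not dominated:
            front.append((name, acc, time))
    return front

# Hypothetical predictions: (architecture id, predicted accuracy, predicted training time in seconds)
predictions = [
    ("arch-A", 0.91, 1200.0),
    ("arch-B", 0.93, 3400.0),
    ("arch-C", 0.89, 2500.0),  # dominated by arch-A: lower accuracy and higher time
    ("arch-D", 0.95, 6000.0),
]

for name, acc, time in pareto_front(predictions):
    print(f"train {name}: predicted accuracy={acc:.2f}, predicted time={time:.0f}s")
```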
Experiments have been performed on four different image classification datasets (CIFAR-10, CIFAR-100, Fashion-MNIST and EuroSAT), executing POPNAS and PNAS with the same configuration parameters to allow a fair comparison. The POPNAS algorithm finds architectures whose accuracy is competitive with PNAS, while drastically reducing the search time, with an average 4x speed-up. Pareto optimization is the key to finding simpler architectures with similar accuracy: by pruning suboptimal, time-consuming architectures from the training selection, it drastically improves resource usage and reduces energy requirements.

The work has been published at WCCI 2022 and the open-source code is available on Zenodo.