HyVar proposes a development framework for the continuous and individualized evolution of distributed software applications running on remote devices in heterogeneous environments. The framework combines variability modeling from software product lines with formal methods and software upgrades, and will be integrated into existing software development processes.

HyVar's objectives are: (O1) to develop a Domain Specific Variability Language (DSVL) and tool chain to support software variability for such applications; (O2) to develop a cloud infrastructure that exploits the software variability described in the DSVL to track the software configurations deployed on remote devices and to enable (i) the collection of data from the devices to monitor their behavior and (ii) secure and efficient customized updates; (O3) to develop a technology for over-the-air updates of distributed applications that enables continuous software evolution after deployment on complex remote devices incorporating a system of systems; and (O4) to test HyVar's approach, as described in the above objectives, in an industry-led demonstrator to assess its benefits in quantifiable ways.

HyVar goes beyond the state of the art by proposing hybrid variability: the automatic generation and deployment of software updates combines the variability model describing possible software configurations with sensor data collected from the device. HyVar's scalable cloud infrastructure will elastically support monitoring and customization for numerous application instances. Software analysis will exploit the structure of the variability models. Upgrades will be seamless and sufficiently nonintrusive to enhance the user's quality of experience, without compromising the robustness, reliability, and resilience of the distributed application instances. To maximize impact and innovation, the consortium balances carefully selected academic and industrial partners, ensuring both technology pull and push.
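The hybrid-variability idea above can be sketched in a few lines: a configuration for a device is derived by combining a variability model (which features exist and under which conditions they apply) with sensor data reported by that device. The following is a minimal, hypothetical illustration only; the feature names, sensor fields, and thresholds are invented for the example and do not reflect HyVar's actual DSVL or cloud API.

```python
# Hypothetical sketch of "hybrid variability": derive a device-specific
# configuration from a variability model plus sensor data. All feature
# names, sensor fields, and thresholds are illustrative assumptions.

def resolve_configuration(variability_model, sensor_data):
    """Return the set of features active for a device's current sensor state."""
    config = set(variability_model["mandatory"])
    for feature, condition in variability_model["optional"].items():
        if condition(sensor_data):  # activation condition over sensor data
            config.add(feature)
    return config

model = {
    "mandatory": {"core"},
    "optional": {
        "low_power_mode": lambda s: s["battery"] < 0.2,
        "hd_maps":        lambda s: s["bandwidth"] > 5.0,
    },
}

readings = {"battery": 0.15, "bandwidth": 2.0}
print(sorted(resolve_configuration(model, readings)))  # ['core', 'low_power_mode']
```

In a HyVar-style setting, a cloud service tracking deployed configurations would compare this derived configuration against the one currently installed on the device and push only the delta as a customized over-the-air update.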
Deep Learning (DL) algorithms are an extremely promising instrument in artificial intelligence, achieving very high performance in numerous recognition, identification, and classification tasks. To foster their pervasive adoption in a vast scope of new applications and markets, a step forward is needed towards implementing the online classification task (called inference) on low-power embedded systems, enabling a shift to the edge computing paradigm. Nevertheless, when DL is moved to the edge, severe performance requirements must coexist with tight constraints on power and energy consumption, creating the need for parallel and energy-efficient heterogeneous computing platforms. Unfortunately, programming for this kind of architecture requires advanced skills and significant effort, especially since DL algorithms are typically designed to improve precision without considering the limitations of the device that will execute the inference. Thus, deploying DL algorithms on heterogeneous architectures is often unaffordable for SMEs and midcaps without adequate support from software development tools.

The main goal of ALOHA is to facilitate the implementation of DL on heterogeneous low-energy computing platforms. To this aim, the project will develop a software development tool flow automating:

• algorithm design and analysis;
• porting of the inference tasks to heterogeneous embedded architectures, with optimized mapping and scheduling;
• implementation of middleware and primitives controlling the target platform, to optimize power and energy savings.

During the development of the ALOHA tool flow, several main features will be addressed, such as architecture awareness (the features of the embedded architecture will be considered starting from the algorithm design), adaptivity, security, productivity, and extensibility. ALOHA will be assessed over three use cases in the surveillance, smart industry automation, and medical application domains.
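To make the "optimized mapping and scheduling" step concrete, here is a deliberately tiny, hypothetical sketch of architecture-aware mapping: each inference layer is assigned to a processing element so that total energy is minimized while a latency budget is respected. The layer names, device names, cost numbers, and the brute-force search are all invented for illustration and are not ALOHA's actual tool flow, which targets real heterogeneous platforms with far richer cost models.

```python
from itertools import product

# Illustrative (not ALOHA's) architecture-aware mapping: per-layer costs on
# each processing element, expressed as (latency_ms, energy_mj). All numbers
# and names are made up for the example.
COSTS = {
    "conv1": {"cpu": (8.0, 4.0),  "npu": (2.0, 1.0)},
    "conv2": {"cpu": (12.0, 6.0), "npu": (3.0, 1.5)},
    "fc":    {"cpu": (1.0, 0.5),  "npu": (1.5, 0.4)},
}

def best_mapping(costs, latency_budget):
    """Brute-force the minimum-energy layer-to-device assignment within budget."""
    layers = list(costs)
    best = None
    for assignment in product(*(costs[layer] for layer in layers)):
        latency = sum(costs[l][d][0] for l, d in zip(layers, assignment))
        energy = sum(costs[l][d][1] for l, d in zip(layers, assignment))
        if latency <= latency_budget and (best is None or energy < best[0]):
            best = (energy, dict(zip(layers, assignment)))
    return best

energy, mapping = best_mapping(COSTS, latency_budget=10.0)
print(energy, mapping)  # 2.9 {'conv1': 'npu', 'conv2': 'npu', 'fc': 'npu'}
```

A real tool flow would replace the exhaustive search with scalable optimization and feed such platform constraints back into algorithm design (the "architecture awareness" mentioned above), but the trade-off being automated is the same.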