WHAT WE DO
CONVOLVE reinforces the EU’s position in the design and development of smart edge processors, so that the EU can become a dominant player in the global edge-processing market.
With the rise of deep learning (DL), Artificial Intelligence (AI) is moving into every edge device, creating an urgent need for edge-AI processing hardware.
Unlike existing solutions, this hardware needs to support high-throughput, reliable, and secure AI processing at ultra-low power (ULP), with a very short time to market.
With its strong legacy in edge solutions and open processing platforms, the EU is ideally positioned to lead this edge-AI market. To get there, however, four demands must be met: edge processors need to become 100x more energy efficient; their complexity demands automated design with a 10x reduction in design time; they must be secure and reliable to gain acceptance; and they must be flexible and powerful enough to support the DL domain. CONVOLVE addresses these demands and thereby enables EU leadership in Edge-AI.
Objective 1: Improve energy efficiency by 100x
Achieve a 100x improvement in energy efficiency compared to state-of-the-art COTS solutions by developing near-threshold, self-healing, dynamically reconfigurable accelerators. This involves the development of an Ultra-Low-Power (ULP) library of novel architectural and micro-architectural accelerator building blocks (ULP blocks for short) with common, standard interfaces, optimized at the micro-architecture, circuit, and device levels. Different architectural paradigms will be evaluated, such as Compute-in-Memory (CIM), Compute-Near-Memory (CNM), and Coarse-Grained Reconfigurable Arrays (CGRA), all of which keep processing very close to memory to reduce energy consumption. The accelerator blocks are optimized to efficiently execute the computation patterns of both Artificial Neural Networks (ANNs) and Spiking Neural Networks (SNNs). To further reduce energy consumption, the ULP blocks will support application dynamism, dynamically adapting computational precision and data-path width, applying early termination, skipping layers/neurons, etc. Leakage will be reduced by advanced power management and by using non-volatile, ReRAM-based crossbar units. Novel self-healing mechanisms will be introduced to deal with hardware variability in the near-threshold region.
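To make the Compute-in-Memory idea concrete, the sketch below simulates the core operation of a ReRAM crossbar: weights are stored as cell conductances, input voltages are applied on the rows, and each column current is a weight-input dot product, so the multiply-accumulate happens inside the memory array instead of shuttling data to a separate compute unit. All values and units are illustrative, not taken from the CONVOLVE ULP library.

```python
# Minimal sketch of a matrix-vector multiply on a ReRAM crossbar,
# the core operation of Compute-in-Memory (CIM). Conductance and
# voltage values below are hypothetical toy numbers.

def crossbar_mvm(conductances, voltages):
    """conductances: rows x cols weight matrix (cell conductances);
    voltages: per-row input vector.
    Returns per-column output currents (Kirchhoff's current law):
    each column sums conductance * voltage over all rows."""
    rows, cols = len(conductances), len(conductances[0])
    assert len(voltages) == rows, "one input voltage per crossbar row"
    return [sum(conductances[r][c] * voltages[r] for r in range(rows))
            for c in range(cols)]

# Toy 3x2 crossbar: two output columns reading three inputs at once.
G = [[5, 1],
     [2, 4],
     [3, 3]]
V = [1.0, 0.0, 1.0]
print(crossbar_mvm(G, V))  # -> [8.0, 4.0]: both dot products in one step
```

The point of the layout is that all columns are evaluated in a single read step, which is what lets CIM avoid the memory-to-ALU data movement that dominates energy in conventional accelerators.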
Objective 2: Reduce design time by 10x
Reduce design time by 10x, so that a ULP edge-AI processor combining innovations from the different levels of the stack can be implemented quickly for a given application, using a compositional ULP design approach. CONVOLVE researches efficient design-space exploration (DSE) techniques that combine the different levels of the hierarchy compositionally, i.e., hardware and software components can be seamlessly glued together while guaranteeing overall behaviour and reliability; this handles SoC heterogeneity and supports efficient mapping of applications onto hardware architectures. Designing ULP accelerator blocks with common interfaces allows them to be plugged into a modular architecture template. We will then generate an SoC architecture from these modular architecture templates after performing automated DSE, which allows the full range of architecture candidates to be evaluated.
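A minimal sketch of what such automated DSE over a modular template looks like, assuming pluggable blocks with per-block cost estimates: enumerate the block combinations, score each design point with a toy compositional cost model, and keep only the Pareto-optimal energy/latency trade-offs. The block names and cost numbers are invented for illustration, not CONVOLVE's actual ULP library.

```python
# Hypothetical DSE sketch: exhaustively compose block options from a
# modular template and filter for Pareto-optimal (energy, latency) points.
from itertools import product

# Pluggable block options as (name, energy_cost, latency_cost) - toy values.
COMPUTE = [("CIM", 1.0, 4.0), ("CNM", 1.5, 3.0), ("CGRA", 2.5, 1.5)]
WIDTH   = [("4-bit", 1.0, 2.0), ("8-bit", 1.8, 1.0)]

def evaluate(compute, width):
    """Toy compositional cost model: energies add, latencies multiply."""
    return (compute[1] + width[1], compute[2] * width[2])

def pareto(points):
    """Keep design points not dominated in both energy and latency."""
    return [p for p in points
            if not any(q[1] <= p[1] and q[2] <= p[2] and q != p
                       for q in points)]

designs = [(f"{c[0]}+{w[0]}",) + evaluate(c, w)
           for c, w in product(COMPUTE, WIDTH)]
for name, energy, latency in sorted(pareto(designs)):
    print(f"{name}: energy={energy}, latency={latency}")
```

With common block interfaces, only the option lists grow when new ULP blocks are added; the exploration and Pareto filtering stay unchanged, which is what makes the approach compositional.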
Objective 3: More secure, for longer
Provide hardware security against known attacks, together with real-time guarantees, through compositional Post-Quantum Cryptography (PQC) and a real-time Trusted Execution Environment (TEE). We will design PQC accelerator blocks with standard interfaces that can be plugged into a modular architecture template, keeping the hardware secure even in the long term (over a decade). Furthermore, CONVOLVE develops design-for-security schemes and ensures that all security features can be added in a compositional manner while preserving real-time guarantees. We will also explore design for robustness, to deal with in-field failures and non-ideal real-world environments.
Objective 4: Smarter AI models + ULP accelerators
Smart edge applications: CONVOLVE will develop smarter AI models to be combined with ULP accelerators. The project will explore AI models that dynamically adapt to the input data, so that the ‘common input case’ can be executed much more efficiently, dramatically reducing energy consumption. Furthermore, inspired by the redundancy and self-healing properties of biological brains, we enhance reliability through online (re-)learning, adapting parameters and weights on the fly; this requires new and cheap learning algorithms. Finally, CONVOLVE investigates whether spiking neural networks (SNNs) could have an edge over ANNs in certain application domains, especially for streaming input and always-on attention blocks.
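The ‘common input case’ idea above can be sketched as a simple early-exit cascade: a cheap classifier handles confident, easy inputs, and only low-confidence inputs fall through to the full network. The models, confidence threshold, and compute costs below are stand-ins for illustration, not CONVOLVE components.

```python
# Hedged sketch of input-adaptive ("dynamic") inference: exit early on
# the common case, pay for the full model only when needed.

def dynamic_infer(x, cheap_model, full_model, threshold=0.9):
    """Return (prediction, compute_cost). Each model maps an input to
    (label, confidence); costs 1 and 10 are hypothetical units."""
    label, confidence = cheap_model(x)
    if confidence >= threshold:
        return label, 1          # early exit: cheap model suffices
    label, _ = full_model(x)
    return label, 10             # rare hard input: run the full network

# Toy stand-in models: the cheap one is only confident for small inputs.
cheap = lambda x: ("small", 0.95) if x < 5 else ("big", 0.6)
full  = lambda x: ("big", 0.99)

results = [dynamic_infer(x, cheap, full) for x in [1, 3, 7]]
print(results, "total cost:", sum(cost for _, cost in results))
# Two early exits and one fallback: far cheaper than always running full.
```

When most inputs really are the common case, average compute (and hence energy) drops toward the cost of the cheap path, which is exactly the effect the objective targets.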