
(2026–2028) PIONEER — Neuromorphic Active Vision for Embodied Object Perception

GACR Standard Projects / CTU

Overview: PIONEER seeks to decode biological active sensing principles, leveraging low-power and low-latency neuromorphic sensing and computing to develop an intelligent system that optimizes visual information processing. The project develops a fully neuromorphic end-to-end active vision architecture, assesses biological realism in neuromorphic vision, and integrates active embodied object perception in humanoid robotics using the iCub platform.

Work Packages:

  • WP1 – Neuromorphic Track: Develop an integrated, fully neuromorphic, end-to-end active vision architecture with a log-polar retina, V1 edge extraction, and OMS cells for motion detection, running on Speck neuromorphic hardware.
  • WP2 – Biologically Plausible Track: Explore the benefits of greater biological realism by replacing the cortical stages with biologically detailed recurrent SNN models of primate V1 and V2, determining the optimal level of abstraction for robotics.
  • WP3 – Active Vision for Object Perception: Integrate Sensorimotor Contingency Theory (SMCT) rules for embodied object perception through saccadic and wrist movements, enabling object discrimination and motion prediction on the iCub robot.
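The WP1 pipeline begins with a log-polar retina that resamples the event stream foveally: fine resolution near the centre of gaze, coarse resolution in the periphery. As a minimal sketch of that front-end stage (not the project's implementation; the bin counts and sensor geometry are assumed for illustration), incoming event coordinates can be binned into log-polar cells like this:

```python
import numpy as np

def logpolar_map(x, y, cx, cy, n_rings=32, n_wedges=64, r_max=173.0):
    """Map pixel event coordinates (x, y) to log-polar (ring, wedge) bins,
    mimicking the foveated sampling of a log-polar retina."""
    dx, dy = x - cx, y - cy
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx)                       # angle in [-pi, pi]
    # Logarithmic radial bins: fine near the fovea, coarse in the periphery
    ring = np.clip((np.log1p(r) / np.log1p(r_max) * n_rings).astype(int),
                   0, n_rings - 1)
    wedge = ((theta + np.pi) / (2 * np.pi) * n_wedges).astype(int) % n_wedges
    return ring, wedge

# Example: events from a 346x260 DAVIS-like sensor, fovea at the image centre
xs = np.array([173, 300, 10])
ys = np.array([130, 200, 5])
rings, wedges = logpolar_map(xs, ys, cx=173.0, cy=130.0)
```

An event at the fovea lands in ring 0; events far in the periphery are clipped to the outermost ring, so distant motion is summarised coarsely while foveated structure keeps full detail.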

Technologies: Speck neuromorphic platform with DYNAP-CNN chip, DAVIS346 COLOR event-driven cameras, iCub humanoid robot with 3D-printed camera mounts, pan-and-tilt unit (PTU), spiking neural networks, YCB object dataset.

Impact: PIONEER advances low-latency, energy-efficient processing by implementing a fully active neuromorphic system for object perception. The project bridges neuroscience, robotics, and AI, supporting EU Sustainable Development Goals by developing low-power intelligent systems. It establishes new benchmarks for neuromorphic robotic vision and offers alternatives to conventional feedforward computer vision approaches that rely on large datasets.


(2024–2026) ENDEAVOR — Event-Driven Active Vision for Object Perception

Marie Skłodowska-Curie Action / CTU

Overview: ENDEAVOR is a cutting-edge project that enhances robotic vision using event-driven cameras, spiking neural networks, and infant-inspired active exploration mechanisms. Unlike traditional vision systems that rely on static frames, ENDEAVOR enables robots like iCub to move, observe, and interact with objects in a bioinspired and energy-efficient manner.

Objectives:

  • O1 – EDAV Model: Adapt a frame-based vision model to event-driven input using iCub's head and wrist movements.
  • O2 – SNN EDAV: Implement a spiking neural network version using neuromorphic hardware for real-time, low-power operation.
  • O3 – Benchmarking: Compare models based on accuracy, latency, and energy use on the YCB object dataset.
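The spiking network in O2 builds on leaky integrate-and-fire (LIF) dynamics, the standard neuron model supported by neuromorphic chips. As a hedged, minimal sketch of such a neuron (the parameters are illustrative, not those deployed on Speck):

```python
import numpy as np

def lif_step(v, i_in, v_thresh=1.0, v_reset=0.0, tau=20.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron population.
    Returns updated membrane potentials and a boolean spike vector."""
    v = v + (dt / tau) * (-v + i_in)     # leaky integration toward the input
    spikes = v >= v_thresh               # threshold crossing
    v = np.where(spikes, v_reset, v)     # reset the neurons that fired
    return v, spikes

# Drive three neurons with constant currents; stronger input -> higher rate
v = np.zeros(3)
counts = np.zeros(3, dtype=int)
for _ in range(100):
    v, spikes = lif_step(v, np.array([0.5, 1.2, 2.0]))
    counts += spikes
```

Because the membrane leaks toward its input, a neuron driven below threshold (0.5 here) never fires, while stronger inputs fire at progressively higher rates; this rate coding is what the chip computes with events rather than floating-point activations.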

Technologies: DAVIS346 event-driven cameras, Speck neuromorphic chip with sCNN support, iCub humanoid robot with 3D-printed mounts, Sensorimotor Contingency Theory (SMCT) for perception-action coupling.
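The SMCT coupling can be illustrated with a toy example: under SMCT, an object is characterised not by a static appearance but by how its sensory signal changes under a given action. In this hedged sketch, the objects, actions, and signature vectors are all invented for illustration and are not the project's representations:

```python
import numpy as np

# Toy SMCT-style discrimination: each object is described by the sensory
# change it produces under each action (all values invented for this sketch).
signatures = {
    "mug":  {"wrist_rotate": np.array([0.9, 0.1]),
             "saccade_left": np.array([0.2, 0.8])},
    "ball": {"wrist_rotate": np.array([0.5, 0.5]),
             "saccade_left": np.array([0.2, 0.8])},
}

def discriminate(observed, action):
    """Pick the object whose action-conditioned signature best matches
    the observed sensory change for that action."""
    return min(signatures,
               key=lambda o: np.linalg.norm(signatures[o][action] - observed))

# A wrist rotation disambiguates the two objects; a left saccade does not,
# because both objects share the same saccade signature.
print(discriminate(np.array([0.85, 0.15]), "wrist_rotate"))  # prints "mug"
```

The point of the sketch is the action-selection logic: the robot should prefer actions (here, wrist rotation over saccade) whose predicted sensory consequences differ most across candidate objects.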

Impact: ENDEAVOR bridges neuroscience, robotics, and AI to push forward energy-efficient perception in robotics. The project supports the EU's Sustainable Development Goals by advancing low-power intelligent systems, and aims to establish benchmarks for neuromorphic robotic vision.


Giulia D'Angelo

[email protected]

  • LinkedIn
  • GitHub
  • YouTube
  • Google Scholar