Research Areas

Event-based Vision

Our lab specializes in developing cutting-edge algorithms for event-based cameras: sensors that asynchronously report per-pixel brightness changes instead of full frames, offering microsecond latency and high dynamic range. We focus on continuous video reconstruction, motion understanding, and high-speed vision applications using event cameras.

  • Continuous color video reconstruction
  • Motion field estimation
  • Low-latency visual processing
  • Neural event processing
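
As a rough illustration of how an event stream relates to conventional images, the sketch below integrates event polarities into a log-intensity map. It assumes events given as (x, y, t, p) tuples and a fixed, hypothetical contrast threshold; it is a minimal example of direct event integration, not our reconstruction pipeline.

  # Minimal sketch: recovering a brightness image by direct event integration.
  # Events are (x, y, t, polarity) tuples; the contrast threshold is an assumed
  # constant. Illustrative only, not the lab's reconstruction method.
  import numpy as np

  def integrate_events(events, height, width, contrast=0.2):
      """Accumulate signed events into a log-intensity image."""
      log_intensity = np.zeros((height, width), dtype=np.float32)
      for x, y, t, p in events:
          # Each event signals a log-brightness change of +/- contrast at pixel (x, y).
          log_intensity[int(y), int(x)] += p * contrast
      # Back to linear intensity, up to an unknown global offset.
      return np.exp(log_intensity)

  # Usage: image = integrate_events(event_list, height=480, width=640)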

3D Computer Vision

We develop novel algorithms for 3D reconstruction using event cameras. Our research focuses on converting event-based apparent contours into accurate 3D models, enabling real-time reconstruction of dynamic scenes.

  • Event-based 3D reconstruction
  • Continuous visual hull computation
  • Dynamic scene understanding
  • Real-time 3D modeling
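
The classical visual hull conveys the intuition behind contour-based reconstruction: a voxel is kept only if it projects inside the object silhouette in every view. The sketch below carves a voxel grid from binary silhouettes and calibrated 3x4 projection matrices. It is a frame-based illustration of the principle under assumed inputs, not our continuous, event-based formulation.

  # Minimal sketch: visual hull by voxel carving from binary silhouettes.
  # Assumes calibrated 3x4 projection matrices and one silhouette mask per view;
  # all names and parameters are illustrative.
  import numpy as np

  def carve_visual_hull(silhouettes, projections, grid_min, grid_max, resolution=64):
      """Return a boolean voxel grid of points lying inside every silhouette."""
      axes = [np.linspace(grid_min[i], grid_max[i], resolution) for i in range(3)]
      xs, ys, zs = np.meshgrid(*axes, indexing="ij")
      points = np.stack([xs, ys, zs, np.ones_like(xs)], axis=-1).reshape(-1, 4)

      inside = np.ones(points.shape[0], dtype=bool)
      for mask, P in zip(silhouettes, projections):
          proj = points @ P.T                      # homogeneous image coordinates
          depth = proj[:, 2]
          in_front = depth > 1e-9
          u = np.zeros(points.shape[0], dtype=int)
          v = np.zeros(points.shape[0], dtype=int)
          u[in_front] = np.round(proj[in_front, 0] / depth[in_front]).astype(int)
          v[in_front] = np.round(proj[in_front, 1] / depth[in_front]).astype(int)
          h, w = mask.shape
          visible = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
          hit = np.zeros_like(inside)
          hit[visible] = mask[v[visible], u[visible]]
          inside &= hit                            # carve away voxels outside this view's silhouette
      return inside.reshape(resolution, resolution, resolution)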

Motion Understanding

Our research focuses on understanding complex motion patterns in dynamic scenes using event cameras. We develop unsupervised learning approaches for motion segmentation and scene understanding, which are particularly useful in robotics applications.

  • Independent motion segmentation
  • Unsupervised learning
  • Dynamic scene analysis
  • Event-based tracking
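
A common building block in event-based motion analysis is contrast maximization: events are warped according to a candidate motion, and the sharpness of the resulting event image measures how well that motion explains them; assigning events to their best-scoring motion model is one route to segmentation. The sketch below scores candidate image-plane velocities this way. The names, units, and grid search are assumptions for illustration, not a description of our method.

  # Minimal sketch: scoring a candidate image-plane velocity by contrast maximization.
  # Events are (x, y, t, p) tuples; velocities (vx, vy) are in pixels per second.
  # Illustrative only.
  import numpy as np

  def contrast_of_warped_events(events, vx, vy, height, width, t_ref=0.0):
      """Warp events to t_ref along (vx, vy), accumulate them, and return the image variance."""
      image = np.zeros((height, width), dtype=np.float32)
      for x, y, t, p in events:
          # Undo the candidate motion so events from the same edge align at t_ref.
          wx = int(round(x - vx * (t - t_ref)))
          wy = int(round(y - vy * (t - t_ref)))
          if 0 <= wx < width and 0 <= wy < height:
              image[wy, wx] += 1.0
      return image.var()  # sharper (better aligned) event images have higher variance

  def best_velocity(events, height, width, candidates):
      """Grid-search the candidate velocities and return the best-scoring one."""
      return max(candidates,
                 key=lambda v: contrast_of_warped_events(events, v[0], v[1], height, width))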

Current Projects

Neural Event Processing

Developing neural network architectures specifically designed for processing event-based data streams.
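
A typical first step in neural event processing is converting the asynchronous stream into a dense tensor, for example a voxel grid that bins event polarity over time. The sketch below shows one standard variant with linear temporal weighting; it illustrates the input representation only, not the architectures developed in this project.

  # Minimal sketch: converting an event stream into a (num_bins, H, W) voxel grid.
  # Bin count and the linear weighting in time are illustrative choices.
  import numpy as np

  def events_to_voxel_grid(events, num_bins, height, width):
      """Accumulate event polarities into a voxel grid with linear weighting in time."""
      grid = np.zeros((num_bins, height, width), dtype=np.float32)
      events = np.asarray(events, dtype=np.float32)      # columns: x, y, t, p
      t = events[:, 2]
      # Normalize timestamps to [0, num_bins - 1].
      t_norm = (t - t.min()) / max(float(t.max() - t.min()), 1e-9) * (num_bins - 1)
      x = events[:, 0].astype(int)
      y = events[:, 1].astype(int)
      p = events[:, 3]
      left = np.floor(t_norm).astype(int)
      right = np.clip(left + 1, 0, num_bins - 1)
      w_right = t_norm - left
      # Split each event's polarity between its two neighboring temporal bins.
      np.add.at(grid, (left, y, x), p * (1.0 - w_right))
      np.add.at(grid, (right, y, x), p * w_right)
      return grid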

Dynamic Scene Understanding

Creating algorithms for understanding complex, dynamic scenes using multi-modal sensor fusion.

Embodied Navigation

Investigating active perception strategies for robot navigation in unknown environments.