
Please forgive the naivety of this question; it's just due to lack of experience.

It goes without saying that self-driving cars carry eight or more cameras that perform various vision-related tasks:

  • object detection and localization,
  • 2D to 3D depth perception,
  • semantic segmentation,
  • and probably more that I have yet to learn about.

My question is on the synchronization of these cameras:

I am assuming there's a hardware sync to ensure all inputs are captured at the "exact same" time. Is that true?

Is there a way, in the model or in the capture protocol, to account for the fact that the inputs might have some delay between them?
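To make the second question concrete: one common software fallback when no hardware trigger is available is to timestamp every frame and then pair each frame from one camera with the nearest-in-time frame from another, discarding pairs whose skew is too large. This is only an illustrative sketch of that idea (the function names and the 10 ms tolerance are my own assumptions, not from any particular system):

```python
from bisect import bisect_left

def nearest_frame(timestamps, t):
    """Index of the timestamp in a sorted list closest to t."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # Pick whichever neighbor is closer to t.
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

def pair_frames(cam_a, cam_b, max_skew=0.010):
    """Pair each frame of cam_a with cam_b's nearest frame in time.

    cam_a, cam_b: sorted capture timestamps in seconds.
    Pairs whose skew exceeds max_skew (here 10 ms) are dropped.
    """
    pairs = []
    for t in cam_a:
        j = nearest_frame(cam_b, t)
        if abs(cam_b[j] - t) <= max_skew:
            pairs.append((t, cam_b[j]))
    return pairs

# Two 30 fps streams with a slight offset; the last frame drifts too far.
print(pair_frames([0.000, 0.033, 0.066], [0.002, 0.034, 0.090]))
```

The residual skew of each accepted pair can then be fed downstream, e.g. to motion-compensate one view before stereo matching.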

Even though this question makes me sound like a novice, feel free to discuss advanced topics. I have just started reading a paper on depth perception using a single camera and am able to understand it.

  • I found this just now https://developer.ridgerun.com/wiki/index.php?title=Synchronizing_Multiple_Cameras – Sam Hammamy Oct 14 '19 at 13:38
  • I'll leave the question open as it may help someone. If I come up with an actual answer, I'll post it below – Sam Hammamy Oct 14 '19 at 13:39
  • This detailed answer gives some more hints https://raspberrypi.stackexchange.com/questions/28113/raspberry-pi-camera-when-is-it-ready-for-next-frame/28223#28223 – Sam Hammamy Oct 14 '19 at 14:09

1 Answer


This paper from Texas Instruments pretty much outlines the entire problem and the current state of the art.

Sam Hammamy