Stereo video stream from on-board cameras

After completing our BeagleBoard based car, we decided to build a quad-copter that can also be controlled over the Internet. In addition, we set ourselves the goal of implementing adaptive live stereo video streaming from two on-board cameras, building on our previous work presented at the Gstreamer conference 2010.

The PandaBoard folks from TI offered the opportunity to win a PandaBoard at the Gstreamer conference. To participate, it was necessary to submit a description of the intended project. We did participate and were very glad to be notified that we would receive a PandaBoard for our stereo video streaming project (the board has already arrived). However, hardware-accelerated video encoders for it have not yet been released by TI. That is why we decided to make an iteration using the BeagleBoard xM to check some new software ideas and to build a more integrated (more compact, lighter) hardware design in preparation for the quad-copter development.

For these purposes we developed a small (approx. 200×140 mm) vehicle based on DFRobot's RP5 tracked platform. Two standard hobby motor controllers are used to drive the motors; the control signals are the standard PWM signals used in RC modeling. On top of the platform we mounted a plastic box hiding the BeagleBoard xM and the level shifters. Finally, on top of the vehicle we mounted two Logitech 9000Pro USB cameras. The following pictures illustrate the whole mechanical design.
Complete vehicle: RP5 tracked platform, box with the BeagleBoard xM and soldered level shifters, TP-Link WLAN N stick and two Logitech 9000Pro USB cameras
There is plenty of space remaining inside the box. We will need this space later when we move to the PandaBoard.
More detailed view of the BeagleBoard xM inside

In the "basement": NiMh battery, two standard PWM motor controllers and power regulator module.
We are quite happy with the overall design. However, it cannot be considered final yet. There are two problems:

  1. Very unstable camera mounts. In fact, we have not yet decided which cameras we will use in the future. The current 9000Pro has great optics and is UVC compliant, so it works out of the box on the BeagleBoard with Angstrom. However, it is rather large and very inconvenient to mount. That is why we are also considering the option of using much smaller analog cameras with a corresponding USB grabber.
  2. Only two motors can currently be controlled directly from the BeagleBoard, because only two hardware PWM generators are available on the expansion header. Since our final goal is to build a quad-copter, this is not an issue: there we will use I2C instead of PWM to communicate with the motor controllers. However, more sensors will be necessary and not all of them are I2C devices (for example, some gyros and accelerometers). So most probably we will use an Arduino Nano micro-controller board to interface with the sensors. As a side effect, it would then be very easy to add more PWM channels on the Arduino to control additional motors and servos on this particular vehicle (see the sketch after this list).
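
To sketch the Arduino idea: the board joins the I2C bus as a slave and translates command bytes into the standard 1-2 ms RC pulses with the Servo library. Note that the address, the output pins and the one-byte-per-channel protocol below are made up for this example; this is not our actual design:

    // Sketch: I2C slave that turns command bytes into RC-style PWM pulses.
    // The 0x42 address, output pins and protocol are assumptions for this
    // example, not our actual design.
    #include <Wire.h>
    #include <Servo.h>

    const int NUM_CHANNELS = 2;
    const int OUTPUT_PINS[NUM_CHANNELS] = {9, 10};
    Servo channels[NUM_CHANNELS];

    // Called whenever the BeagleBoard sends a block of command bytes.
    void receiveCommand(int count) {
      for (int i = 0; i < NUM_CHANNELS && Wire.available(); ++i) {
        // Map a 0..255 command byte to the usual 1000..2000 us servo pulse.
        int us = map(Wire.read(), 0, 255, 1000, 2000);
        channels[i].writeMicroseconds(us);
      }
    }

    void setup() {
      for (int i = 0; i < NUM_CHANNELS; ++i)
        channels[i].attach(OUTPUT_PINS[i]);
      Wire.begin(0x42);               // join the I2C bus as slave 0x42
      Wire.onReceive(receiveCommand);
    }

    void loop() {}  // all work happens in the I2C receive callback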
We are currently working on these two issues and will post an update soon.

Now to the software part of the project. The vehicle is currently running Angstrom with a modified and extended version of our control applications. As on our previous vehicle, we are using the Angstrom Linux distribution, Gstreamer to work with video data, ZeroC Ice for communication and OpenGL for the user interface. As a result, we can control the vehicle over the Internet, with a solution for the firewall/NAT traversal problem and with video adaptation for changing network conditions.

As mentioned at the very beginning of this post, our goal was to provide stereo vision for the vehicle's driver. We mix the left and right frames from the two cameras into one large frame and compress it using Gstreamer with the corresponding DSP-accelerated H.264 encoder. The video is then transmitted with the Ice middleware to the client computer, where we use the OpenCV computer vision library to generate an anaglyph stereo image. The result of this process is illustrated in the following picture.
Driver cockpit with anaglyph stereo video generated from the two on-board cameras. Corresponding glasses (red/cyan) are required to see the stereo effect.
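
To give an idea of how the mixing step can look, here is a minimal sketch in Gstreamer 0.10 style that overlays two cameras side by side into one 640×240 frame. The device paths, the resolution and the software x264enc element (standing in for the DSP-accelerated encoder used on the board) are assumptions for this example, not our actual pipeline:

    // Minimal sketch: two v4l2 cameras mixed side by side, then encoded.
    // /dev/video0, /dev/video1, the 320x240 size and x264enc are
    // placeholders for this example.
    #include <gst/gst.h>

    int main(int argc, char* argv[]) {
      gst_init(&argc, &argv);

      // The right camera is shifted 320 px to the right via videobox
      // (a negative "left" adds a transparent border), so videomixer
      // produces one 640x240 frame containing both views.
      const char* description =
          "videomixer name=mix ! ffmpegcolorspace ! x264enc ! fakesink "
          "v4l2src device=/dev/video0 ! video/x-raw-yuv,width=320,height=240 "
          "! videobox border-alpha=0 left=0 ! mix. "
          "v4l2src device=/dev/video1 ! video/x-raw-yuv,width=320,height=240 "
          "! videobox border-alpha=0 left=-320 ! mix.";

      GError* error = NULL;
      GstElement* pipeline = gst_parse_launch(description, &error);
      if (pipeline == NULL) {
        g_printerr("Failed to build pipeline: %s\n", error->message);
        return 1;
      }

      gst_element_set_state(pipeline, GST_STATE_PLAYING);
      g_main_loop_run(g_main_loop_new(NULL, FALSE));  // run until interrupted
      return 0;
    }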

It might be possible to generate the anaglyph image directly on the BeagleBoard, but we decided against it and instead transmit the whole left and right frames to the client. There are several reasons for this. First, using anaglyphs to achieve the stereo effect leads to a partial loss of color information and, as a result, rather poor image quality. More sophisticated approaches to displaying stereo images, such as polarized glasses, can display both images without losses; to use such an approach, the complete left and right frames have to be transmitted to the visualization application (and this is what we are doing). Second, the computation power available on the on-board computer is limited, so it is not feasible to run computation-intensive algorithms (such as real-time depth map calculation) there. Instead, it might be beneficial to run such algorithms on the more powerful driver's computer or even on a high-performance cluster (with GPUs, SPEs, FPGAs or whatever might be most appropriate). For this purpose it is again necessary to transmit the complete left and right frames to the driver's computer. Moreover, some widely used computer vision libraries such as OpenCV are highly optimized for the x86 platform and pretty slow on the BeagleBoard. That is why we decided against doing image processing on the on-board computer and instead perform such calculations on the driver's computer.
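
As an illustration of the client-side step, building a red/cyan anaglyph from the decoded left and right frames is essentially one channel-mixing operation in OpenCV. A minimal sketch follows; the makeAnaglyph helper is hypothetical, and the frames are assumed to be rectified, of equal size and in OpenCV's usual BGR layout:

    // Sketch: red/cyan anaglyph from rectified left and right BGR frames.
    // makeAnaglyph is a hypothetical helper, not our actual code.
    #include <opencv2/opencv.hpp>
    #include <vector>

    cv::Mat makeAnaglyph(const cv::Mat& left, const cv::Mat& right) {
      CV_Assert(left.size() == right.size() && left.type() == CV_8UC3);

      std::vector<cv::Mat> l, r;
      cv::split(left, l);   // OpenCV stores channels as B, G, R
      cv::split(right, r);

      // Red channel from the left eye; green and blue (cyan) from the right.
      std::vector<cv::Mat> mixed(3);
      mixed[0] = r[0];  // blue  <- right
      mixed[1] = r[1];  // green <- right
      mixed[2] = l[2];  // red   <- left

      cv::Mat anaglyph;
      cv::merge(mixed, anaglyph);
      return anaglyph;
    }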

These are our very first experiments with stereo vision, and the code base is not as clean and stable as we want it to be. There are also some issues with camera calibration and positioning, which is why the stereo appearance of the image above is not as good as it could be. We are working very actively on this and will push the update to our git repository very soon.

Comments

  1. Hi,

    Was looking for a DSP-based video encoder and came across your posts. I'm working on a project and I also need low latency for the video link.

    Can you tell me what sort of end-to-end latency was introduced using TI's h.264 encoder?

    Thanks

    1. Hi,

      Since you are asking about end-to-end latency, not only the encoder but also the video transmission is involved. In our project, for small video sizes (320x240 to 640x480), the latency was under one second.

      For more details, you might also want to take a look at the following presentation we gave: http://gstconf.ubicast.tv/videos/adaptive-video-streaming-with-ice-and-gstreamer .

      Regards,
      Andrey.

