09 Mar 2014

The Good, the Bad and the Ugly: Video streaming in embedded systems

First of all, this post is based entirely on personal experience and is written solely from the point of view of video streaming applications for embedded systems. It is intended for those who are looking for embedded boards for video streaming in particular. Also, I am not an electrical engineer but mainly a software engineer and researcher, so please excuse any faults in this article.


Recently, I have been working on a stereo camera video streaming application to be used in mobile robots, so I was faced with several constraints regarding the choice of tools and the implementation approach. The target application required a real-time, HD-resolution video stream running at 25 FPS, and the maximum allowed delay shouldn't exceed 300 ms, since the application is mainly focused on maintaining a good user experience for remote video conferencing and telecommunication.
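To put that 300 ms budget in perspective, a quick back-of-the-envelope calculation (plain shell arithmetic, nothing board-specific):

```shell
# At 25 FPS each frame lasts 40 ms, so a 300 ms end-to-end cap
# leaves only ~7 frames of slack to cover capture, encode,
# network transit, decode and display.
FPS=25
BUDGET_MS=300
FRAME_MS=$((1000 / FPS))
echo "frame interval: ${FRAME_MS} ms"
echo "frames of slack: $((BUDGET_MS / FRAME_MS))"
```

Any buffering element along the pipeline eats directly into those few frames, which is why low-latency settings matter everywhere.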

The main hardware constraints I faced were size, weight and power consumption, since the robot is totally isolated from any wired connection and only wireless (WiFi) connectivity was permitted. Also, as I mentioned before, the application requires stereo cameras, and each camera should output at least 1280x720 at 25 FPS for my target purpose.
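As a rough sanity check on why encoding is unavoidable over WiFi, the raw bandwidth of a single uncompressed camera can be estimated with plain shell arithmetic (YUY2 at 2 bytes per pixel is an assumption about the camera's output format):

```shell
# One uncompressed 1280x720 camera at 25 FPS in YUY2 format,
# expressed in Mbit/s (integer-truncated).
WIDTH=1280; HEIGHT=720; FPS=25
BYTES_PER_PIXEL=2   # YUY2 packs 2 bytes per pixel
RAW_MBPS=$((WIDTH * HEIGHT * BYTES_PER_PIXEL * 8 * FPS / 1000000))
echo "one raw camera: ${RAW_MBPS} Mbit/s"
```

That is well over 300 Mbit/s per camera, and there are two of them, so hardware or software H.264 compression is mandatory before anything touches the wireless link.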

Thus I considered three options to work with, which can be divided (as you may guess) into: the Good, the Bad and the Ugly. But before I start listing the boards, I would like to mention the video streaming method that I am using on all three boards.

Video Streaming: GStreamer

GStreamer is a cross-platform, open-source multimedia streaming library, well known to people working in the multimedia area. The library provides almost all common encoders/decoders, plus a huge variety of plugins and elements for processing the media before streaming it. Keep in mind, though, that this library doesn't provide image processing capabilities the way OpenCV does!

One nice thing about this library is that it can run with full capability from the command line, with no need to dig into code for those who are unfamiliar with programming. But depending on the application, you may need your own processing steps (as I do), in which case you will get your hands dirty with some coding.

In this application, I am using the H.264 encoder and sending an RTP stream over UDP to the viewer PC.
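Before touching any board, you can try this kind of pipeline on a single desktop machine with a synthetic source. The following pair of commands is a minimal loopback sketch, assuming a GStreamer 1.x install with the x264 and libav plugin sets (element availability may differ on your distribution):

```shell
# Sender: synthetic 720p25 test pattern, software H.264 encode,
# RTP payloading, UDP to localhost.
gst-launch-1.0 videotestsrc ! video/x-raw,width=1280,height=720,framerate=25/1 \
  ! x264enc tune=zerolatency bitrate=2000 \
  ! rtph264pay config-interval=1 pt=96 \
  ! udpsink host=127.0.0.1 port=5000

# Receiver (in a second terminal): depayload, decode, display.
gst-launch-1.0 udpsrc port=5000 \
  caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" \
  ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false
```

Once the loopback works, you only swap the source and the encoder for whatever the target board provides.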


And now, it is time to list our three boards:

1) The Good: Intel NUC



I have worked with this board for quite a while, and I can say it is the king of lightweight, general-use, fully capable PCs! With its Intel i5 processor and customizable memory that can be upgraded up to 16GB, it is capable of heavy processing tasks. It also supports both USB 2.0 and USB 3.0, which is great for connecting high-speed peripherals and cameras.


In one project, I had the same requirement of streaming stereo cameras in real time over a WiFi connection, but with fewer restrictions on weight, size and power consumption than this time. The board proved to be a great choice for video streaming applications (and I suppose it can fit very well for almost any other application!). Another advantage of the x86 architecture is the ability to develop with your most comfortable tools, without being restricted to a specific architecture or operating system.


However, as every good story has a sad turn, this board didn't work out for the intended application due to its weight, which, including the WiFi module and memory, was about 300 grams. The heaviest part was the cooling fan, a big, heavy piece compared to the rest of the board. It also consumes extra power and requires a 12V ~2A input, which was difficult to provide.
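For reference, that power requirement translates into battery terms roughly as follows (the battery pack figures here are illustrative assumptions, not something from my setup):

```shell
# The NUC's 12 V ~2 A input is ~24 W continuous draw.
VOLTS=12; AMPS=2
WATTS=$((VOLTS * AMPS))
# Illustrative pack: a 4S 5000 mAh LiPo, 14.8 V * 5 Ah ≈ 74 Wh.
PACK_WH=74
echo "draw: ${WATTS} W, runtime: ~$((PACK_WH * 60 / WATTS)) min"
```

Roughly three hours from a fairly large pack, before even powering the robot's motors, which is exactly the kind of budget that rules this board out on a small mobile platform.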

Also, please keep in mind that this solution is quite expensive, at about $280!

2) The Bad: BeagleBoard

When I first heard about it, I was quite excited to know that it supports USB web cameras!

The BeagleBoard is a very lightweight board running the Ångström distribution, a Linux-based operating system. It has proved to be a great assistant in many robotics applications and projects. With an up-to-1GHz processor and 512MB of memory, it can drive connected peripherals very efficiently.


However, when it came to encoding and streaming the video, it performed very poorly due to the low performance of the board's DSP (Digital Signal Processor). It couldn't handle more than 320x240, and even then the encoding performance was quite bad.

To test it yourself, you can use the following command (please make sure that GStreamer is installed):

gst-launch v4l2src always-copy=FALSE ! 'video/x-raw-yuv,format=(fourcc)YUY2,width=320,height=240,framerate=25/1' ! videorate ! ffmpegcolorspace ! TIVidenc1 codecName=h264enc engineName=codecServer frameRate=25 resolution=320x240 genTimeStamps=false encodingPreset=3 ! fakesink

Replace "fakesink" with your desired streaming sink, such as "udpsink".

Since this board proved incapable of achieving my minimum requirement of 1280x720 frame resolution, I ranked it as "Bad" and thus totally unusable under any circumstances for my desired application.

3) The Ugly: Raspberry Pi

Last but not least, the Raspberry Pi. At the beginning of my research, I couldn't consider the Raspberry Pi as an option for streaming stereo cameras, mainly because it doesn't support USB cameras, which makes it impossible to attach two cameras to the same board. Instead, the Raspberry Pi supports a specially made camera board (which can be purchased from Adafruit for about $30).


When I got to work with this camera, I found it quite nice: it can output Full HD video at 30 FPS. The amazing thing about it is its H.264 hardware encoding, which saves the Raspberry Pi's CPU the huge effort of the heavy encoding task. To test the camera, you can simply run the following command in the shell:

raspivid -t 999999 -h 720 -w 1280 -fps 25 -hf -b 20000000

For the meaning of the numbers and arguments, you can refer to the documentation file.

After the unsuccessful tests of the previous two boards, I decided to switch to a risky solution: using two separate Raspberry Pis to stream each eye separately. On each board, I run the following command in the terminal to stream to my machine:

raspivid -t 999999 -h 720 -w 1280 -fps 25 -hf -b 20000000 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=<ViewerIP> port=<eye port>

<ViewerIP> : the IP address of your viewing PC, the same for both boards

<eye port> : the target receiving port for that eye, different on each board

Note that I am only parsing the H.264 stream and encapsulating it in RTP format before sending it over UDP; the encoding itself is already done in hardware on the Pi.
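One thing worth keeping in mind with this two-board setup is the combined network load; a quick calculation from the -b value used above:

```shell
# Each raspivid stream is capped at -b 20000000 (20 Mbit/s),
# and the two eyes stream independently over the same WiFi link.
BITRATE=20000000
EYES=2
echo "peak total: $((BITRATE * EYES / 1000000)) Mbit/s"
```

40 Mbit/s of sustained UDP traffic is within reach of a good WiFi link but worth checking against your actual measured throughput, since UDP packets that exceed it are simply dropped.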



On the receiving PC, you can view both eyes running side by side using the following command:

gst-launch-0.10 -v videomixer name=mix \
    sink_0::xpos=0 sink_0::ypos=0 sink_0::alpha=1 sink_0::zorder=0 \
    sink_1::xpos=0 sink_1::ypos=0 sink_1::zorder=1 \
    sink_2::xpos=640 sink_2::ypos=0 sink_2::zorder=2 ! ffmpegcolorspace ! autovideosink name=sink sync=false \
  videotestsrc pattern="black" ! video/x-raw-rgb,width=1280,height=800 ! ffmpegcolorspace ! mix.sink_0 \
  udpsrc name=src1 port=5001 ! application/x-rtp,payload=96 ! queue ! rtph264depay ! ffdec_h264 ! ffmpegcolorspace ! videoscale ! video/x-raw-rgb,width=640,height=800 ! videorate ! ffmpegcolorspace ! mix.sink_1 \
  udpsrc name=src2 port=5000 ! application/x-rtp,payload=96 ! queue ! rtph264depay ! ffdec_h264 ! ffmpegcolorspace ! videoscale ! video/x-raw-rgb,width=640,height=800 ! videorate ! videoflip method=5 ! videoflip method=4 ! ffmpegcolorspace ! mix.sink_2

It is a long command, but what it basically does is create a side-by-side video mixer, receive each eye's RTP video stream over UDP on ports 5000 and 5001, then decode the streams before combining and displaying them.


As a result, this solution was the best of the options in this situation, but it was a kind of "ugly" one, since the two eyes may lose synchronization with each other when network latency occurs.

Finally, I would like to repeat that this is entirely my own experience with these boards. I tried to be fair to all of them, and to show the good, bad and ugly faces of each.

Comments


  1. Hi,

    This was indeed a good post. I am trying to build something similar, i.e. an FPV system for drones. I am planning to use the i.MX6 (Freescale) to build an embedded video server to be mounted on the drone. I would really appreciate it if you could provide some insights on this. Do you suggest any other platforms for this use case? I have already evaluated the TI DM388 and the support is really bad, so I had to give up on that platform.

    1. Hey VK,
      Sorry my reply took so long, I just noticed it!
      This project was quite some time ago (4 years), so many things have changed since then.
      I worked with embedded systems, but their DSPs were really slow at encoding/decoding for dual camera systems. In my opinion, so far the best low-profile hardware that can do real-time streaming is the Raspberry Pi. The Pi Camera provides hardware H.264 encoding of the video frames, so you save tremendous effort on the encoding part.
