Everyone else who wants to jump on board the live video train now faces a somewhat daunting road ahead. Getting the technology off the ground and fully integrated into an existing setup can be a challenge, even for the experts.
To help all of those on the road to integrating live video, Martin Storsjö, CTO of Bambuser, has kindly offered his words of wisdom on the trials and tribulations of live video from a tech perspective.
Q) What are the technical pain points that developers are faced with when they decide to integrate live video into their platform?
Martin: “If a developer wants to integrate live video from scratch, there are a few major issues that they should be ready to face:
- Adapting to network conditions — both at the broadcasting end and at the viewers’ end.
- Protocol support (in general, there is no single playback protocol that is supported by both mobile platforms and modern desktop browsers).
Alternatively, if developers are using some sort of existing service or SDK, they can forget about most of these issues and just focus on delivering added value to their own service or app.
Some SDKs and platforms simply provide you with lower level building blocks, while others abstract away all the gory details and provide a turnkey solution.”
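As a concrete illustration of the first pain point, adapting to network conditions at the broadcasting end often boils down to picking an encoder bitrate that fits the measured uplink. A minimal sketch in Python; the bitrate ladder and safety margin are purely illustrative assumptions, not taken from any particular SDK:

```python
# Sketch: naive adaptive-bitrate selection for a broadcaster.
# Ladder values and the 0.8 headroom margin are illustrative assumptions.

BITRATE_LADDER_KBPS = [4500, 2500, 1200, 600, 300]  # highest first

def pick_bitrate(measured_uplink_kbps: float, margin: float = 0.8) -> int:
    """Pick the highest ladder rung that fits within a fraction of the
    measured uplink throughput, leaving headroom for throughput spikes."""
    budget = measured_uplink_kbps * margin
    for rate in BITRATE_LADDER_KBPS:
        if rate <= budget:
            return rate
    return BITRATE_LADDER_KBPS[-1]  # network too slow: fall back to lowest rung

print(pick_bitrate(3000))  # 3000 * 0.8 = 2400 kbps budget -> 1200 kbps rung
print(pick_bitrate(200))   # below the lowest rung -> 300 kbps
```

In a real broadcaster this decision would be re-evaluated continuously as the measured throughput changes, which is exactly the "adapting to network conditions" work an SDK can take off your hands.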
Q) What are the key components for maintaining low latency live streaming?
Martin: “The key to maintaining low latency in a live stream is to make sure that every individual step throughout the pipeline is designed with the low-latency target in mind.
Depending on what level of latency is acceptable (i.e. if aiming for real-time use as in video conferencing, or if aiming for low latency broadcast with a latency of a second or two), the protocols and implementations used in each link in the chain need to be tuned for the desired mode of operation.
For the low latency broadcast case, some amount of buffering is allowed in each step (to allow for some tolerance to minor hiccups), but the buffer size in each step needs to be limited to a certain maximum since all the individual buffer sizes add up.”
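The point about individual buffer sizes adding up can be made concrete with some back-of-the-envelope arithmetic. The stage names and millisecond figures below are illustrative assumptions, not measurements from any real pipeline:

```python
# Sketch: per-stage buffer budget for a roughly 2-second broadcast chain.
# Stage names and values are illustrative assumptions only.

STAGE_BUFFERS_MS = {
    "capture/encoder": 100,
    "uplink send buffer": 400,
    "server ingest": 200,
    "server egress": 300,
    "player jitter buffer": 800,
}

def total_latency_ms(stages: dict) -> int:
    # End-to-end latency is at least the sum of every buffer in the chain,
    # which is why each individual buffer needs a hard upper bound.
    return sum(stages.values())

print(total_latency_ms(STAGE_BUFFERS_MS))  # 1800
```

Letting any single stage buffer "generously" (say, a 5-second player buffer) would blow the whole budget on its own, no matter how tight the other stages are.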
Q) What are the different types of streaming outputs/protocols that developers can choose from when integrating live streaming? And why should a developer choose one protocol over another?
Martin: “There are mainly two different families of protocols in use, for two distinct use cases.
If aiming for a very low latency, near real-time, e.g. as in video conferencing, most implementations use UDP-based protocols like RTP. In these setups, the video stream is usually flowing directly from one peer to another. (Higher-level protocols such as RTSP and WebRTC are all based on RTP, as are many direct video calling systems like FaceTime.)
If aiming for a slightly higher latency, on the order of a second or two, it is much more common to use a TCP-based protocol, with RTMP being the most common one today. There are others as well, but they essentially all work the same way.
With TCP-based protocols, rate control/congestion control, which is essential for adapting the video quality to the available network speed, is already built into the operating system. With UDP, on the other hand, all such algorithms need to be reinvented at the application level.
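One way to see that built-in rate control from the application's side: with a non-blocking stream socket, the kernel refusing to accept more data is itself the congestion signal, which a broadcaster can translate into dropping frames or stepping the encoder bitrate down. A small Python sketch, using a Unix socketpair as a stand-in for a real TCP/RTMP connection:

```python
import socket

def try_send(sock: socket.socket, data: bytes) -> int:
    """Queue data on a non-blocking socket; 0 means 'congested, back off'."""
    try:
        return sock.send(data)  # number of bytes the kernel buffer accepted
    except BlockingIOError:
        return 0  # send buffer full: the transport's rate control pushing back

# a/b emulate the two ends of a connection; nothing reads from b,
# so a's kernel send buffer eventually fills up.
a, b = socket.socketpair()
a.setblocking(False)

frame = b"\x00" * 65536  # stand-in for one encoded video frame
queued_bytes = 0
while True:
    n = try_send(a, frame)
    if n == 0:
        break  # here a broadcaster would drop frames or lower the bitrate
    queued_bytes += n

print(f"kernel accepted {queued_bytes} bytes before signalling congestion")
```

With UDP there is no such backpressure: every datagram is accepted and simply lost if the network cannot carry it, so the application has to detect and react to congestion entirely on its own.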
RTMP was originally a proprietary protocol for communication between Adobe’s Flash Player and their own Flash Media Server.
For live streaming setups, it could be used both for sending the stream from the person broadcasting to the server and from the server to each individual viewer using a Flash Player. With Flash being phased out, RTMP is today seldom used for delivery from the server to viewers; that role is now filled by a variety of different protocols (WebRTC/RTP, segmented formats like HLS and DASH, custom protocols over web sockets, plain progressive streams over HTTP). But RTMP remains a sort of lingua franca, a cross-vendor protocol, for sending video from the broadcaster to a server.”
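For context on the segmented formats mentioned above: an HLS live stream is driven by a media playlist that the player keeps re-fetching as new segments appear. A minimal, hand-written example (segment names, durations, and sequence numbers are made up for illustration):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:4
#EXT-X-MEDIA-SEQUENCE:120
#EXTINF:4.000,
segment120.ts
#EXTINF:4.000,
segment121.ts
#EXTINF:4.000,
segment122.ts
```

With 4-second segments and players typically buffering a few segments before starting playback, latency easily reaches tens of seconds, which is why segmented delivery suits large audiences better than it suits the second-or-two latency target that RTMP ingest aims for.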
Q) Despite having quality technology at the backend, what are the external factors that can jeopardize the quality of the stream for the end users?
Martin: “The main risk when it comes to mobile live broadcasting is usually the unreliability, or let’s say variability, of the mobile networks.
It is not easy to know beforehand what upload speeds you will get once you are broadcasting at, say, a major event. Even if you have done tests at the right location beforehand (before the whole audience shows up), a huge crowd can easily bring down the speed of the cellular network, which you will only notice once the event is underway.
Having access to a private Wi-Fi connection on-site can usually help in such conditions, but that normally requires cooperation from the event host.
The same general issue can be observed on a larger scale as well: cellular network speeds have actually dropped in certain cities as more users have started using them.
In addition to network issues between the individual user and their ISP, the routing between operators can also turn out to be a problem. Even though network connectivity in general seems to be fine, traffic can sometimes end up being routed via a long detour for various reasons (often only intermittently). In these cases, you only notice the issue once you start using more bandwidth.
Aside from network speeds in general, server-side scalability can also be an issue: the service provider needs to make sure that enough servers are available so that the quality of one individual stream doesn’t suffer from all the other concurrent streams.”
Plug-and-play live streaming integration
Apart from creating your own live video streaming architecture, Bambuser offers an end-to-end live streaming platform with SDKs for live video broadcasting and playback.
Try out Bambuser for free to see how it fits into your app’s workflow.