News from Industry

ML in WebRTC: The noise suppression gold rush

bloggeek - Mon, 08/31/2020 - 12:30

Communication vendors are waking up to the need to invest in ML/AI in media processing. The challenge will be to get ML in WebRTC.

Two years ago, I published along with Chad Hart a report called AI in RTC. In it, we reviewed the various areas where machine learning is relevant when it comes to real time communications. We interviewed vendors to understand what they were doing and looked at the available research.

We mapped 4 areas:

  1. Speech Analytics
  2. Voicebots
  3. Computer Vision
  4. RTC Quality and Cost Optimization

That last area was tricky. Almost everyone was using rule engines and heuristics at the time for all of their media processing algorithms and only a few made attempts to use machine learning.

My argument was this:

At some point, applying more heuristics to media processing algorithms loses its appeal

There’s only so much we can do with rule engines and heuristics. Over time, machine learning catches up and does better. We are now at that inflection point. Partially because of the technology advances, but a lot because of the pandemic.

ML in media processing is challenging

When looking at machine learning in media processing, there’s one word that comes to mind: challenging

Machine learning is challenging.

Media processing is challenging.

Together?

These are two separate disciplines, far apart from one another, that you need to handle at the same time.

The data you look at is analog in its nature, and there’s often little to no labeled data sets to work with.

A few of the things you need to figure out here?

  • How do you find machine learning engineers, or whatever they are called in their titles this day of the week?
  • Do these engineers know anything about media processing? How do you get them up to speed with this technology? Or is it the other way around – getting media engineers trained in machine learning?
  • Can you generate or get access to a suitable data set to use? Do you even have access to enough data?
  • Where do you focus your efforts? Audio or video? Maybe just network? Should you go for server side implementation or client side one? What about model optimizations?
  • When do you deem your efforts fruitful? Ready for production?

This isn’t just another checkmark to place in your roadmap’s feature list. There’s a lot of planning, management effort and research that needs to go into it. A lot more than in most other features you’ve got lined up.

The noise suppression gold rush

If I had to pick the areas where machine learning is finding a home in communications, it would be two main areas:

  1. Video background processing (more on that at some future point in time)
  2. Noise suppression

Both topics were always there, but took centerstage during the pandemic. People started working from weird places (like home with kids) and you now can’t blame them. One of the best games I play in workshops now? Checking who’s got the most interesting room behind him…

Video background is about stopping me from playing. Noise suppression is about you not hearing the lawn mower buzzing 16 floors below me, or the all-too-active neighbor above me who likes to home renovate whenever I am on a call – with a power drill.

How I think my neighbor looks whenever I am on a conference call

This need has led to a few quick wins all around. The three most interesting ones taking place in the domain of WebRTC (or near enough) are probably the stories of Google Meet, Discord/Krisp and Cisco/BabbleLabs.

Google Meet

Google Meet built its own noise suppression technology

In June, Serge Lachapelle, G Suite Director of Product Management was “called to the flag” and was asked to do a quick interview for The Verge on Google Meet’s noise suppression. Serge was once the product manager for WebRTC at Google and moved on to Google Meet a few years back.

You can watch the short interview here:

The gist of it?

  • Google decided to implement it in the cloud
  • They use “secure” TPUs for that (Tensor Processing Units, specialized chips in the Google Cloud for machine learning workloads)
  • The feature is optional. It can be enabled or disabled by the user
  • Noises it cancels are almost arbitrary. It is something that is really hard to define as initial requirements. It is also something that will be fine tuned and tweaked over time by Google

As I stated earlier, Google isn’t taking any prisoners here, nor is it contributing this back to the community freely as part of WebRTC. They are making sure to differentiate by keeping their machine learning chops outside of the open source WebRTC library. This is exactly what I’d do in their place.

Discord

Discord “bought” its way to noise suppression by partnering with Krisp

Krisp is one of the few vendors tackling machine learning in media processing and doing that as a product/service and not a feature. They’ve been at it for a couple of years now, and things seem to be going in their favor this year.

Krisp managed to do a few things:

  1. Focus on noise suppression. They started “all over the place” with voice related media algorithms, and seem to be finding product-market-fit in noise suppression
  2. Won a deal/partnership with Discord
  3. Got their technology to work inside the browser (see here)
Execution

The Discord story was first published in April on Discord’s blog. Noise suppression was added in beta to the Discord desktop app. Whether that was done using the browser technology running in Discord’s Electron app or the native implementation that Krisp has is an open question, but not the most relevant one.

Three months later, in July, Discord got noise suppression into iOS and Android. This was also done using Krisp and with a spanking short video explainer:

Ongoing success

Here are my thoughts here:

  • Adding this to mobile means they got positive feedback on the desktop integration
  • Especially considering how they phrased it:

As we continue to improve voice chat, Krisp is an integral part of making Discord your place to talk. No matter how stressful the world around us may be, Krisp is here to help every one of our 100 million monthly active users feel more connected to our far-away friends.

  • 100 million MAU is what Discord now has, and this is a vote of confidence in Krisp and in ML-based noise suppression technology
  • Discord shouts to the world that they are using Krisp. Something not many companies do for their suppliers
    • This may mean that they got this on the cheap (or free)
    • Or it means that they are cozying up to Krisp

My read of it? Krisp might be acquired and gobbled up by Discord to make sure this technology stays out of the hands of others – if that hasn’t happened already – just look at this page – https://krisp.ai/discord/ (and then compare it to their homepage).

Cisco

Cisco gobbled up BabbleLabs to own noise suppression technology

In the case of Cisco, the traditional approach of reducing risk by acquiring the technology was selected – acquihiring.

Last week, Cisco issued a press release of their intent to acquire BabbleLabs.

BabbleLabs was in the same space as Krisp: a company offering machine learning-based algorithms to process voice, the main one today, as we’ve seen, being noise suppression. This is what Cisco was looking for, and now they will have it in-house and directly integrated into WebEx.

Cisco decided not to self-develop. They also decided to own the technology. The reasons?

  • Google owns it
  • Zoom has its own implementation
  • That left… WebEx

Will BabbleLabs stay open? No.

In his recent post about the acquisition, Chris Rowen, CEO of BabbleLabs, explains what led to the acquisition and paints a colorful future. The only thing missing in that post is what happens to existing customers. The answer is going to be a simple one: they will be supported until the next renewal date, when they will simply be let go.

A win for Krisp. If it isn’t in the process of being acquired itself already.

Who’s next?

This definitely isn’t the end of it. We will see more vendors taking notice of this and adding noise suppression. This will happen either through self-development or through the licensing of third party solutions such as Krisp.

The challenge with these third party solutions is that they feel more like a feature than a product or a full fledged service. On one hand, everyone needs them now. On the other hand, they need to be embedded deep in the technology stack of the vendors using them. The end result is relatively small companies with a low ceiling to their potential growth (=not billion dollar companies). This puts a strain on such companies, especially if they are VC backed.

On the other hand, everyone needs noise suppression now. Where do they go to buy it? How do they build it?

Noise suppression is just the beginning

Noise suppression is just the beginning here. In the workshop I did last month on WebRTC innovation and differentiation, I took the time to focus on this: how machine learning is now finding a place in bringing differentiation to the actual communication. Noise suppression was one of the topics discussed, among many others.

There were 3 main areas that we will see growing investment in:

  1. Voice treatment – noise suppression, packet loss concealment, voice separation, etc
  2. Video treatment – video compression, super resolution, etc
  3. Background blur/replacement – I am placing it on its own, as it seems to be the next big thing

Each of these domains has its own set of headaches and nuances.

Server, native or browser? Should you employ ML in WebRTC in the cloud or on the edge?

This is a big question.

If you look at the examples I’ve given for noise suppression:

  • Google Meet chose cloud
  • Discord is native and browser
  • WebEx is native as far as I can tell

Going for native or browser means you’re closer to the edge and the user. You can do things faster, more efficiently and with a lower cost to you (you’re practically employing the user’s device to bear the brunt of running the machine learning inference algorithm). That also means you have fewer resources for other things like the actual video, and you’re limited in the size of the model you can use for your algorithm.

Cloud means a central place where you can do training, inference, A/B testing, etc. It is probably easier to maintain and operate in the longer run, but it will add some delay to the media and will definitely cost you to run at scale.

Each company will choose differently here, and you may even see a company running one algorithm in the cloud and another on the edge.
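To make the edge option a bit more concrete, here is a minimal sketch of how browser-side noise suppression is typically wired in, assuming a hypothetical AudioWorklet module (“denoiser-processor”) that wraps the actual model, usually compiled to WebAssembly. The module name and file are illustrative, not a real library:

```typescript
// A minimal sketch, assuming a hypothetical AudioWorklet module
// ("denoiser-processor.js") that wraps a noise suppression model.
async function getDenoisedMicTrack(): Promise<MediaStreamTrack> {
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });

  const ctx = new AudioContext();
  await ctx.audioWorklet.addModule('denoiser-processor.js'); // hypothetical module
  const source = ctx.createMediaStreamSource(mic);
  const denoiser = new AudioWorkletNode(ctx, 'denoiser-processor'); // hypothetical name
  const sink = ctx.createMediaStreamDestination();

  // raw mic -> denoiser (runs the model per audio block) -> new MediaStream
  source.connect(denoiser).connect(sink);

  // This cleaned-up track is what gets added to the RTCPeerConnection,
  // so the noisy original never leaves the user's device.
  return sink.stream.getAudioTracks()[0];
}
```

The cloud approach flips this around: the original track is sent as-is, and the media server runs the model before forwarding the audio to the other participants.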

Are you planning for this ML/AI future?

Machine learning and artificial intelligence are in our future, both in the communication space and elsewhere. They are finally coming directly to media processing as well. In a few years, this will be a common requirement from services.

Are you planning for that in any way?

Do you know how you’re going to get there?

Will you be relying on third parties or on your own inhouse technology for it?

There are no open source solutions at the moment for any of it. At least not in a way that can be productized in a short timeframe.

If you need assistance with answering these questions, then check my workshop. It is recorded and available online and it is more relevant than ever.

WORKSHOP: WebRTC Innovation and Differentiation in a Post Pandemic World

The post ML in WebRTC: The noise suppression gold rush appeared first on BlogGeek.me.

RED: Improving Audio Quality with Redundancy

webrtchacks - Thu, 08/20/2020 - 11:47

Back in April 2020, Citizen Lab reported on Zoom’s rather weak encryption and stated that Zoom uses the SILK codec for audio. Sadly, the article did not contain the raw data to validate that and let me look at it further. Thankfully Natalie Silvanovich from Google’s Project Zero helped me out using the Frida tracing […]

The post RED: Improving Audio Quality with Redundancy appeared first on webrtcHacks.

WebRTC unbundling: the beginning of the end for WebRTC?

bloggeek - Mon, 08/10/2020 - 12:30

2020 marks the point of WebRTC unbundling. It seems like the new initiatives are the beginning of the end of WebRTC as we know it as we enter the era of differentiation.

Life is interesting with WebRTC. One moment, it is the only way to get real time media towards a web browser. And the next, there are other alternatives. Though no one is quite announcing them the way they should.

We’re at the cusp of getting WebRTC 1.0 officially released. Seriously this time. For real. I think. Well… maybe.

Towards differentiation

If I were to chart our path through this crazy world of WebRTC, it would look something like this:

2020 marks the beginning of the differentiation stage for WebRTC

Towards the end of 2019, and at greater force during the pandemic, we’ve seen what the future of WebRTC looks like. It is all about differentiation.

Up until now, all vendors had access to the same WebRTC stack, as it is implemented by Google (and the other browser vendors), with the exact same capabilities in the browser.

No more.

I’ve alluded to it in my article about Google’s private WebRTC roadmap. Since then, many additional signals came from Google marking this as the way forward.

Today, there are 2 separate WebRTC stacks – the one available to all, and the one used internally by Google in native applications. While this is something everyone can do, Google is now leveraging this option to its fullest.

The interesting thing that is happening is taking place somewhat elsewhere though. WebRTC is now being unbundled so that Google (and others) don’t need to maintain two separate versions, but rather can have their own “differentiation” implemented on top of “WebRTC”.

Unbundling WebRTC

At this point, you’re probably asking yourselves what does that mean exactly. Before we continue, I suggest you watch the last 15 minutes from web.dev LIVE Day Two:

That’s where Google is showing off the progress made in Chrome and what the future holds.

The whole framing of this session feels “off”. Google here is contemplating how they can bring a solution that fits Zoom, when 99% of all vendors have already figured out how to be in the browser – by using WebRTC.

The solution here is to unbundle WebRTC into 3 separate components:

The components set to unbundle WebRTC
  1. WebTransport – enables sending bidirectional low latency UDP-like traffic between a client and a “web server”, which in our context is a media server
  2. WebCodecs – gives the browser the ability to encode and decode audio and video independently of WebRTC
  3. WebAssembly – a browser accelerator for running code and an enabler for machine learning

While these can all be used for new and exciting use cases (think Google Stadia, with a simpler implementation), they can also be used to implement something akin to what WebRTC does (without the peer-to-peer capability).

WebTransport replaces SRTP. WebCodecs does the encoding/decoding. WebAssembly does all the differentiation and some of the heavy lifting left (things like bandwidth estimation). Echo cancellation and other audio algorithms… not sure where they end up – maybe inside WebCodecs or implemented using WebAssembly.
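To illustrate what such an unbundled pipeline might look like on the receiving side, here is a hedged sketch combining WebTransport and WebCodecs. Both APIs were still experimental or behind flags when this was written, the server endpoint is hypothetical, and a real implementation would need proper framing, key/delta frame signaling and jitter handling:

```typescript
// A rough sketch only: assumes a hypothetical server that pushes one encoded
// VP8 frame per datagram, which real code would not do (datagrams are small,
// unordered and lossy). The point is the shape of the pipeline, not the details.
async function receiveUnbundled(canvas: HTMLCanvasElement) {
  const transport = new WebTransport('https://media.example.com:4433/session'); // hypothetical endpoint
  await transport.ready;

  const ctx = canvas.getContext('2d')!;
  const decoder = new VideoDecoder({
    output: (frame) => { ctx.drawImage(frame, 0, 0); frame.close(); },
    error: (e) => console.error('decode error', e),
  });
  decoder.configure({ codec: 'vp8' });

  const reader = transport.datagrams.readable.getReader();
  let timestamp = 0;
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    decoder.decode(new EncodedVideoChunk({
      type: 'key',                      // a real server would signal key vs. delta frames
      timestamp: (timestamp += 33333),  // microseconds, assuming ~30fps
      data: value,
    }));
  }
}
```

Notice what is missing: ICE, DTLS, SDP, RTCP feedback, bandwidth estimation. All of that either disappears (this is client-server, so there is no NAT traversal dance) or moves into the application’s own WebAssembly code – which is exactly where the differentiation goes.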

What comes after the unbundling of WebRTC?

This isn’t just a thought. It is an active effort taking place at Google and in standardization bodies. The discussion today is about enabling new use cases, but the more important discussion is about what that means to the future of WebRTC.

As we unbundle WebRTC, do we need it anymore?

With Google, as they have switched gears towards differentiation already, it is not that hard to see how they shift away from WebRTC in their own applications:

Google Stadia

Does Stadia have a reason to use WebRTC?

Google Stadia is all about cloud gaming. WebRTC is currently used there because it was the closest and only solution Google had for low latency live streaming towards a web browser.

What does Google Stadia need from WebRTC?

  1. The ability to decode video in real time in the browser
  2. Send back user actions from the remote control towards the cloud at low latency

That’s a small portion of what WebRTC can do, and using it as the monolith that it is is probably hurting Google’s ability to optimize the performance further.

Sending back user actions was already implemented in Stadia on top of QUIC and not SCTP. That’s because Google has greater control over QUIC’s implementation than it does over SCTP. They are probably already using an early implementation of WebTransport, which is built on top of QUIC, in Stadia.

The decoding part? Easier to just do over WebTransport as well and be done with it instead of messing around with the intricacies of setting up WebRTC peer connections and maintaining them.

For Stadia, once WebRTC gets unbundled, moving away from WebRTC to a WebTransport+WebCodecs combo is the natural choice.

Google Duo & Google Meet Meet & Duo. Will moving away from WebRTC improve their competitiveness?

For Duo and Meet things are a bit less apparent.

They are built on top of WebRTC and use it to its fullest. Both have been optimized during this pandemic to squeeze every ounce of potential out of what WebRTC can do.

But is it enough?

Differentiation in WebRTC

Google has been adding layers of differentiation and features on top and inside of WebRTC recently to fit their requirements as the pandemic hit. Suddenly, video became important enough and Zoom’s IPO and its huge rise in popularity made sure that management attention inside Google shifted towards these two products.

This caused an acceleration of the roadmap and the introduction of new features – most of them to catch up and close the gap with Zoom’s capabilities.

These features ranged from simple performance optimizations, through beefing up security (Google Duo doing E2EE now), towards machine learning stuff:

  • Proprietary packet loss concealment algorithm in native Duo app
  • Cloud based noise suppression for Meet
  • Upcoming background replacement for Meet
Advantages of unbundling WebRTC for Duo/Meet

Can Google innovate and move faster if they used the unbundled variant? Instead of using WebRTC, just make use of WebTransport+WebCodecs+WebAssembly?

What advantages would they derive out of such a move?

  1. Faster time to market on some features, as there’s no need to haggle with standardization organizations on how to introduce them (E2EE requires the introduction of Insertable Streams to WebRTC – see the sketch after this list)
  2. Google Meet is predominantly server based, so the P2P capability of WebRTC isn’t really necessary. Removing it would reduce the complexity of the implementation
  3. More places to add machine learning in a differentiated way, instead of offering it to everyone. Just like the new WaveNetEQ packet loss concealment, which was added outside of WebRTC and only in native apps, such capabilities could theoretically be implemented without the need to maintain two separate implementations
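For reference, this is roughly how the Insertable Streams API looked in Chrome when this was written – the surface has changed names more than once since, so treat it as a sketch of the idea rather than a stable API. The “encryption” here is a toy XOR, purely to show where an E2EE transform would plug in (the receiving side would apply the inverse transform):

```typescript
// A sketch of Chrome's Insertable Streams (encoded transform) idea circa 2020.
// The constructor flag and createEncodedStreams() were Chrome-specific and not
// part of the standard typings, hence the casts. Toy XOR "encryption" only.
async function sendWithToyE2EE() {
  const pc = new RTCPeerConnection({
    encodedInsertableStreams: true, // Chrome-only flag at the time
  } as RTCConfiguration);

  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const sender = pc.addTrack(stream.getVideoTracks()[0], stream);

  const { readable, writable } = (sender as any).createEncodedStreams();

  readable
    .pipeThrough(new TransformStream({
      transform(chunk: any, controller: TransformStreamDefaultController) {
        const data = new Uint8Array(chunk.data);
        for (let i = 0; i < data.length; i++) data[i] ^= 0x55; // placeholder for real crypto
        chunk.data = data.buffer;
        controller.enqueue(chunk);
      },
    }))
    .pipeTo(writable);
}
```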

If I were Google, I’d be planning ahead to migrate away from WebRTC to this newer alternative in the next 3-5 years. It won’t happen in a day, but it certainly makes sense to do.

Can/should Google maintain two versions of WebRTC?

Today, for all intents and purposes, Google maintains two separate versions of WebRTC.

The first is the one we all know and love. It is the version found in webrtc.org and the one that is compiled into Chrome.

The other one is the one Google uses and promotes, where it invests in differentiation through the use of machine learning. This is where their WaveNetEQ can be found.

Do you think Google will be putting engineers to improve the packet loss concealment algorithm in the WebRTC code in Chrome or would it put these engineers to improve its WaveNetEQ packet loss concealment algorithm? Which one would further its goal, assuming they don’t have the manpower to do both? (and they don’t)

I can easily see a mid-term future where Google invests a lot less in WebRTC inside Chrome and shifts focus towards WebTransport+WebCodecs with their own proprietary media engine implementation on top of it powered by WebAssembly.

Will that happen tomorrow? No.

Should you be concerned and even prepare for such an outcome? That depends, but for many the answer should be Yes.

The end of a level playing field and back to survival of the fittest

WebRTC brought us to an interesting world. It leveled the playing field for anyone to adopt and use real-time voice and video communication technologies with a relatively small investment. It got us as far as where we are today, but it might not take us any further.

Recent changes mark the shift from a level playing field in WebRTC towards survival of the fittest

For this to be sustainable, browser vendors need to further invest in the quality of their WebRTC implementations and make that investment open for general use. Here’s the problem:

Apple (Safari)

Doesn’t really invest in anything of consequence in WebRTC.

  • They seem to care more about having an HEVC implementation than about getting their audio to work properly in mobile Safari with WebRTC
  • To date, they have taken the libwebrtc implementation from Google and ported it to work inside Safari, making token adjustments to their own media pipelines
  • I am not aware of any specific improvements Apple made to quality in Safari’s WebRTC implementation via the media algorithms used by libwebrtc itself

Apple cares more about FaceTime than all of that WebRTC nonsense anyways…

Mozilla (Firefox)

Actually have a decent implementation.

  • While Firefox uses libwebrtc as the baseline, they replaced components of it with their own
  • This includes media capturer and renderer for audio and video
  • They have invested a lot in improving the audio pipeline in Firefox, which affects quality in WebRTC
Microsoft (Edge)

Their latest Edge release is Chromium based.

  • They aren’t doing much at the moment in the WebRTC part of it as far as I am aware
  • They could improve the media pipeline implementation of Chromium (and by extension Edge) for Windows 10
  • But…
    • Do they have an incentive?
    • Would they contribute such a thing back to Google or keep it in their Edge implementation?
    • Would Google take it if Microsoft gave it to them?

And then there’s Microsoft Teams, which offers a subpar experience in the browser compared to the native application. All of the investment in Teams is going towards improving quality and user experience in the app. The web is just an afterthought at the moment.

Google (Chrome)

Believe WebRTC is good enough.

  • There are some optimizations and improvements that are now finding their way into WebRTC in the browser
  • But a lot of what is done now is kept out of the web and the open source community. WaveNetEQ is but an example of things to come
  • It is their right to do that, but does this further the goal of WebRTC as a whole and the community around it?

Now that we’re heading towards differentiation, the larger vendors are going to invest in gutting WebRTC and improving it while keeping that effort to themselves.

No more level playing field.

Prepare for the future of WebRTC

What I’ve outlined above is a potential future of WebRTC. One that is becoming more and more possible in my mind.

There’s a lot you can do today to take WebRTC and optimize around it. Making your application more scalable. Offering better media quality as well as user experience. Growing call sizes to hundreds of participants or more.

Investing in these areas is going to be important moving forward.

I’ve recently created a workshop covering the present and future of WebRTC, along with techniques and best practices employed by vendors in this space. If you want to learn more, you may want to take that workshop.

WebRTC Innovation and Differentiation in a Post Pandemic World

The post WebRTC unbundling: the beginning of the end for WebRTC? appeared first on BlogGeek.me.

WebRTC ports: Understanding IP addresses and port ranges in WebRTC

bloggeek - Mon, 08/03/2020 - 12:30

WebRTC IP addresses and port ranges can be a bit tricky for those unfamiliar enough with VoIP. I’d like to shed some light about this topic.

A recent back and forth discussion that I had with one of the people taking my online WebRTC course made it clear to me that there are still things I take for granted because I come with a VoIP heritage to what it is I am doing today. Hence this article.

Connecting a WebRTC session takes multiple network connections and messages taking place over different types of transport protocols. There are two reasons why that decision was made for WebRTC:

  1. There was a desire to have it run peer to peer, directly exchanging real time media between two browsers. This requires a different look at how to handle network entities such as NATs and firewalls
  2. Real time media is different from other data sent over the internet in browsers. The transport and signaling protocols already available were just not good enough to preserve high quality and low latency

Let’s see how connections get made over the internet and how WebRTC makes use of that.

A quick explainer to internet connections

We will start by looking at the building blocks of digital communications – TCP and UDP.

The table below summarizes a bit the differences between the two:

TCP and UDP are two extremes of how transport protocols can be expressed

TCP connections

TCP is a reliable transport protocol. As such, it has built-in retransmission mechanisms that are meant to make sure whatever is sent is received on the other end, and in the same order it was sent.

To do that properly, a TCP connection needs to be created. A TCP connection is a set of 4 values:

Source IP:Source port + Destination IP:Destination Port

How does one establish a TCP connection?

On your local machine you “bind” one of the local IP addresses of the machine to a local port number. That IP and port needs to be available and not taken for something else already. Then you need to try and connect to the destination IP:port.

Let’s say I want to connect to google.com.

For me, google.com resolves to the IP address 172.217.23.110. Assuming I want to connect to port 80 (a “randomly” picked port), I’d do the following: Bind a local IP:port (arbitrary local port), and connect it to 172.217.23.110:80.

Knowing the IP:port on source and destination of the connection means knowing the connection – there cannot be two such connections. Once you bind a local port to connect it to a remote address over TCP, that port cannot be reused until the connection is closed and done with.

If I want to open another TCP connection from my machine to the same address, I will need to bind yet another port on my local address and connect it to the destination IP:port.

Obviously, there are some caveats and edge cases I am ignoring here, but for our needs, the above is enough of an explanation.
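A minimal Node.js sketch of that bind-then-connect flow (the destination IP matches the example above; pinning the local port is unusual and only done here to make the 4 values of the connection visible):

```typescript
import * as net from 'net';

// Bind a local port and connect it to 172.217.23.110:80, as in the example above.
// Normally you would omit localPort and let the OS pick an ephemeral one.
const socket = net.createConnection({
  host: '172.217.23.110',
  port: 80,
  localPort: 50000, // illustrative; usually left to the OS
});

socket.on('connect', () => {
  // The 4 values that uniquely identify this TCP connection
  console.log(`${socket.localAddress}:${socket.localPort} -> ${socket.remoteAddress}:${socket.remotePort}`);
  socket.end();
});
```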

UDP “connections”

Since UDP is connectionless, there’s no real connection with UDP. No context whatsoever.

To send a message over UDP, I again need the quad of values:

Source IP:Source port + Destination IP:Destination Port

But this time, there’s no real connection. What happens here is that I open a local IP:port, and whenever I want to send out a message, I just tell it the destination IP:port and be done with it.
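The same thing in a Node.js sketch – bind once, then name the destination on every send (the host name and ports below are placeholders):

```typescript
import * as dgram from 'dgram';

const socket = dgram.createSocket('udp4');

// Bind a local port once...
socket.bind(50001, () => {
  // ...then specify the destination IP:port on every message sent.
  socket.send(Buffer.from('hello'), 3478, 'stun.example.com', (err) => {
    if (err) console.error(err);
    socket.close();
  });
});
```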

WebRTC signaling connections and addresses

WebRTC signaling is just like any other web application connection.

In order to send and receive the SDP blobs to make the connection, I need to be able to communicate between the browsers and that is done using traditional networking means available in the browser: either HTTP or WebSocket. Both (ignoring HTTP/3) are implemented on top of TCP.

What does that mean?

  • When my browser connects to the signaling server, it connects to an HTTPS or a Secure Websocket address (because… security)
  • The destination address will be whatever the DNS will resolve for the name of the server I connect to; and for the most part, this connection will be done towards port 443
  • The local address will be whatever local address I have on my machine
  • The local port will be an arbitrary local port that the operating system will allocate

The end result?

The signaling server has a static IP and port, while the client is “dynamic” in nature

Local ports are arbitrary (and ephemeral). Destination port is 443 (or whatever advertised by the server).
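In code, the signaling leg is nothing WebRTC-specific – just a secure WebSocket (or HTTPS requests) carrying the SDP blobs. The signaling server URL below is a placeholder:

```typescript
// A minimal sketch: the local port of this WebSocket is an ephemeral one picked
// by the OS; only the server's address and port (443 for wss) are fixed.
const ws = new WebSocket('wss://signaling.example.com'); // placeholder server

ws.onopen = async () => {
  const pc = new RTCPeerConnection();
  pc.addTransceiver('audio'); // so the offer actually describes some media
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  ws.send(JSON.stringify({ type: 'offer', sdp: offer.sdp })); // SDP blob over TCP/TLS
};
```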

WebRTC media connections and addresses

Media in WebRTC gets connected via SRTP. Most of the time, that would happen over UDP, which is what we will focus on in this section.

In naive SRTP implementations from before the WebRTC era, each video call usually used 4 separate connections:

  1. RTP for sending voice data
  2. RTCP for sending the control of the voice data
  3. RTP for sending video data
  4. RTCP for sending the control of the video data

While WebRTC can support this kind of craziness, it also uses rtcp-mux and BUNDLE. These two effectively bring us down to a single connection for the audio, the video and their control messages.

What happens though is this –

  • You create a peer connection
  • Then you add media tracks to it, effectively instructing it on what it is about to send or receive (or at least what you want it to send or receive)
  • WebRTC will then allocate and bind local IP addresses and ports to handle that traffic. As with outgoing TCP connections, the local ports are going to be arbitrary and ephemeral
  • The allocated IP:port addresses are now going to be used in the SDP being negotiated. These will be used as the local candidates

Since these addresses and ports are local, there’s high probability that they will be blocked by firewalls for incoming traffic.
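Here is a minimal sketch of that flow – create the peer connection, add tracks, set a local description, and watch the host candidates (with their arbitrary ports) show up:

```typescript
// A minimal sketch of the allocation flow described above.
async function showLocalCandidates() {
  const pc = new RTCPeerConnection();

  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  pc.onicecandidate = (event) => {
    if (event.candidate) {
      // Something like: "candidate:... udp ... 192.168.1.100 57086 typ host ..."
      console.log(event.candidate.candidate);
    }
  };

  // Candidate gathering only starts once a local description is set
  await pc.setLocalDescription(await pc.createOffer());
}
```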

Media servers work in the exact same way. In most cases, the addresses that they will use will be public IP addresses, but the ports will be arbitrary. That’s because media servers usually prefer handling each incoming device separately, by receiving its traffic on a dedicated socket connected to a specific port.

STUN “connections” and addresses

Since we’re all behind NATs with our private IP addresses, we need to know our public IP address so we can connect to others directly (peer-to-peer).

To do that, STUN is used. WebRTC will take the media local IP:port it created (in that section above), and use it to “connect” over UDP to a STUN server.

This is in concept somewhat similar to how our signaling works – the local IP address has an arbitrary port, while the remote IP:port is known – and configured in advance in our peer connection iceServers. My advice? Have that port be 443.

Why do we do all that with STUN? So that we create a pinhole through the NAT which will allocate for us a public IP address (and port). The STUN server will respond back with the IP address and port it saw, and we will publish that so that the other side will attempt reaching out to us on that public IP:port pair. If the NAT allows such binding, then we will have our session established.

The STUN server has a static IP and port, while the client (and NAT) operate with “dynamic” IP addresses and ports

The above shows how Google’s STUN server works from my machine in AppRTC:

  • My private address was 192.168.1.100:57086
  • The STUN address in the iceServers was 108.177.15.127:19302 (the 19302 is the static port Google decided to use – go figure)
  • My public IP address as was allocated by the NAT was 176.231.64.35:57086 (it managed to maintain and mirror my internal arbitrary port, which might not always be the case). This is the address that will get shared with the other participant of the session
TURN connections, addresses and port ranges in WebRTC

With TURN, the server is relaying our media towards the other user. For that to happen, my browser needed to:

  1. Connect to a TURN server
  2. Ask the TURN server to allocate an address for the relay (and let me know what that address is)
  3. Use that address for incoming and outgoing traffic for the remote participant of the session
TURN servers have a static IP and port, but they allocate an address with a dynamic port for each client they serve

The above shows how Google’s TURN server works from my machine in AppRTC:

  • My private IP for this session was 192.168.1.100:57086 (it was a different port, but I was too lazy to look it up, so bear with me)
  • The TURN address in the iceServers was 108.177.15.127:19305 (as with STUN, the 19305 is the static port Google decided to use – still not sure why)
  • The TURN server replied back with an allocation address of 108.177.15.86:28798. This was then placed as my ICE candidate. If the remote participant were to send media towards that address, the TURN server would forward that data to me

UDP, TCP and TLS work similarly in TURN when it comes to address and port allocation. What is important to notice here is how the TURN server opens up and allocates ports on its public IP address whenever someone tries to connect through it.
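Putting the STUN and TURN pieces together, a typical client-side configuration looks something like the sketch below. The host names and credentials are placeholders; using port 443 follows the advice above:

```typescript
// A sketch only – URLs and credentials are placeholders for your own servers.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.example.com:443' },
    {
      urls: [
        'turn:turn.example.com:443?transport=udp',
        'turn:turn.example.com:443?transport=tcp',
        'turns:turn.example.com:443?transport=tcp',
      ],
      username: 'user',
      credential: 'secret',
    },
  ],
});

pc.onicecandidate = (e) => {
  // 'host', 'srflx' (via STUN) and 'relay' (via TURN) candidates will show up here
  if (e.candidate) console.log(e.candidate.type, e.candidate.candidate);
};
```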

Understanding port ranges in WebRTC configurations

WebRTC makes use of a range of addresses, ports and transport protocols. Far more than anything else that we run in our browsers. As such, it can be quite complex to grasp. There is order and logic in this chaos – this isn’t something inflicted on you because someone wanted to be mean.

In WebRTC the addresses and ports that get allocated by the end devices (=browsers), media servers and TURN servers are dynamic. This means that in many cases we have to deal with port ranges.

Go to any voice or video conferencing service running over the Internet. Search for their address and port configuration. They all have that information in their knowledgebase. A list of addresses and ports you need to open in your firewalls, written nicely on a page so that the IT guy will be able to copy it to his firewall rules.

Should these ranges be large? As in 49,152 to 65,535? Should this range be squeezed down maybe?

I’ve seen vendors creating a port range of 10 or 100 ports. That’s usually too little to run at scale when the time comes. I’d go with a range of 10,000 ports or more. I’d probably also try first to estimate the capacity of the machine in question and figure out if more ports might be needed to maintain the sessions per second I am planning on supporting (allocated TCP ports take some time to clear up).

Is this “wholesale” port range a real security threat or just an imaginary one? How do you go about explaining the need to customers who like their networks all clamped down and closed?

If you are looking to learn more about WebRTC, check out my WebRTC training courses. In the near future, I will start working on a new course about TURN installation and configuration – if you are interested in early access – do let me know.

The post WebRTC ports: Understanding IP addresses and port ranges in WebRTC appeared first on BlogGeek.me.

WebRTC TURN: Why you NEED it and when you DON’T need it

bloggeek - Mon, 07/20/2020 - 12:30

WebRTC TURN servers are an essential piece of almost any WebRTC deployment. If you aren’t using them, then make sure you have a VERY good reason.

Connecting a WebRTC session is an orchestrated effort done with the assistance of multiple WebRTC servers. The NAT traversal servers in WebRTC are in charge of making sure the media gets properly connected. These servers are STUN and TURN.

3 ways to connect WebRTC sessions

When connecting a session between two browsers (peer-to-peer) in WebRTC, there are 3 different alternatives that might happen.

Connect directly, across the local network

Connecting WebRTC over a local network

If both devices are on the local network, then there’s no special effort needed to get them connected to each other. If one device has the local IP address of the other device, then they can communicate with each other directly.

Most of the time and for most use cases, this is NOT going to be the case.

Connect directly, over the internet, with public IP addresses

Connecting WebRTC directly using public IP address obtained via STUN

When the devices aren’t inside the same local network, then the way to reach each other can only be done through public IP addresses. Since our devices don’t know their public IP addresses, they need to ask for it first.

This is where STUN comes in. It enables the devices to ask a STUN server “what is my public IP address?”

Assuming all is well, and there are no other blocking factors, then the public IP address is enough to get the devices to connect to each other. Common lore indicates that around 80% of all connections can be resolved by either using the local IP address or by use of STUN and public IP addresses.

Route the media through a WebRTC TURN server

Connecting WebRTC by using TURN to relay the media

Knowing the public IP address is great, but it might not be enough.

There are multiple reasons for this, one of them being that the NAT and firewall devices in use are not allowing such direct traffic to take place. In such cases, we route the data through an intermediary public server called TURN.

Since we are routing the data, it is an expensive endeavor compared to the other approaches – it has bandwidth costs associated with it, and it is why Google won’t ever offer a free TURN server.

Transport protocols and WebRTC TURN servers

TURN comes in 3 different flavors in WebRTC (6 if you want to be more accurate).

How testRTC checks and explains connectivity alternatives of TURN servers in qualityRTC

You can relay your WebRTC data over TURN by going either over IPv4 or IPv6, where IPv4 is the more popular choice.

Then there’s the choice of connecting over UDP, TCP or TLS.

UDP would work best here because WebRTC knows best when and how to manage network congestion and whether to use retransmissions. Since UDP doesn’t always get through, you might need to fall back to TCP or even TLS.

Which type of a connection would you end up with? You won’t really know until the connection gets established, so you’ll need to have all your options open.

When is a TURN server needed in WebRTC?

That’s easy. Whenever there can’t be a direct connection between the two devices.

For peer to peer, you will need to install and run a TURN server.

Try direct, then TURN/UDP, then TURN/TCP and finally TURN/TLS

The illustration above shows our “priorities” in how we’d like a session to connect in a peer to peer scenario.

If you are connecting your devices to a media server (be it an SFU for group calling or any other type of a server), you’ll still need a TURN server.

Why? Because some firewalls block certain types of traffic. Many just block UDP. Some may even block TCP.

With a typical WebRTC media server, my suggestion is to configure TURN/TCP and TURN/TLS transports and remove the TURN/UDP option – since you have direct access to the public IP address of the media server, there’s no point in using TURN/UDP.
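In configuration terms, that suggestion boils down to something like this sketch (host name and credentials are placeholders):

```typescript
// TURN/TCP and TURN/TLS only – no TURN/UDP – for the media server scenario above.
const pc = new RTCPeerConnection({
  iceServers: [
    {
      urls: [
        'turn:turn.example.com:443?transport=tcp',
        'turns:turn.example.com:443?transport=tcp',
      ],
      username: 'user',      // placeholder credentials
      credential: 'secret',
    },
  ],
});
```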

Try direct to server, then TURN/TCP and finally TURN/TLS

The illustration above shows our “priorities” in how we’d like a session to connect with a media server.

What about ICE-TCP?

There’s a mechanism called ICE-TCP that can be used in WebRTC. In essence, it enables a media server to provide in the SDP an ICE candidate using a TCP transport. This means the media server will actively wait on a TCP port for an incoming connection from the device.

It used to be a Chrome feature, but now it is available in all web browsers that support WebRTC.

This makes the use of TURN/TCP unnecessary, but will still leave us with the need of TURN/TLS.

Try direct UDP to server, then direct ICE-TCP to server and finally TURN/TLS

The illustration above shows our “priorities” in how we’d like a session to connect with ICE-TCP turned on.

The elusive (mis)configuration of TURN servers in WebRTC

Configuring TURN servers in WebRTC isn’t an easy task. The reason isn’t that this is rocket science. It is more due to the fact that checking a configuration to ensure it works properly isn’t that simple.

We are used to testing things locally. Right?

Here’s the challenge – in WebRTC, trying it on your machine, or with your machine and the one next to it – will ALWAYS WORK. Why? Because they connect directly, across the local network. This means TURN isn’t even necessary or used in such a case. So you never test that path in your code/configuration.

What can you do about it?

  1. Be aware of this
  2. Use the sample provided by Google for Trickle ICE testing. It won’t check everything, but it will validate that you’ve at least installed and configured the TURN server semi-properly
  3. Block UDP on the machine in your local network and then try to connect a session to another machine on your local network. Make sure it went over TURN/TCP relay (check webrtc-internals dump for that)

The above things can be done locally and repeatedly, so start there. Once you get this to work, move towards the internet to check it there.
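One more local trick worth adding to the list above (it isn’t mentioned in it): force the browser to use only relay candidates, so the call simply fails if your TURN server isn’t doing its job. Server URL and credentials are placeholders:

```typescript
// Relay-only peer connection: host and srflx candidates are ignored entirely,
// so media can only flow if the TURN server below actually works.
const pc = new RTCPeerConnection({
  iceServers: [{
    urls: 'turns:turn.example.com:443?transport=tcp',
    username: 'user',
    credential: 'secret',
  }],
  iceTransportPolicy: 'relay',
});
```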

Quick facts

Do you need a TURN server if you connect your sessions to a WebRTC media server?

Yes. WebRTC media servers don’t support TLS type of transport. Sometimes they do support TCP via ICE-TCP. For the cases where calls can’t connect in other ways, you will need to use TURN/TCP or TURN/TLS.

Do media servers need to have WebRTC TURN server configuration?

Usually not. In most cases you will be installing media servers with direct internet access on a public IP address. This means that having TURN configured only on the WebRTC client side is enough.

How do you test a TURN server configuration for your application?

An easy way is to block UDP traffic and see if your WebRTC client can still connect. Another one is to use Google’s Trickle ICE sample.

The post WebRTC TURN: Why you NEED it and when you DON’T need it appeared first on BlogGeek.me.

Announcing: WebRTC fiddle of the month

bloggeek - Mon, 07/13/2020 - 12:30

Once a month, I will be publishing along with Philipp Hancke a WebRTC fiddle of the month as a free lesson in my WebRTC Codelab. This continues an old tradition of Mozilla’s Jan-Ivar, the fiddle of the week.

Time for a new experiment. If all goes well, we will be making it a monthly thing.

Somehow, a week or two ago, we came to the conclusion that it would be nice to do a short video explainer of something that people are trying to figure out with WebRTC.

To make this happen, we decide together what to do the short lesson about, then Philipp Hancke writes a jsFiddle piece of code to implement it. And then we sit and record the explanation of it, creating a new free lesson in our joint WebRTC Codelab course.

What does each WebRTC fiddle of the month include?

Each WebRTC fiddle of the month has these 3 resources:

  1. A short video explaining the problem and how we solved it
  2. The link to the jsFiddle code
  3. Transcription of the video
Why are we doing it?

Creating the WebRTC Codelab was fun. We’re thinking of recording a new course at some point, but until we wrap our heads around that one, we’ve decided to continue recording some more lessons.

Seems like we work well together, so finding yet another excuse to do something made enough sense.

Oh, and if you happen to decide that you need to learn more about WebRTC, and end up enrolling to our course, then that’s a definite win

Two requests I have for you #1 – Check our first “fiddle of the month”

Our first explainer fiddle is about creating a peer connection that includes screen sharing and an audio microphone stream wrapped nicely together. You might get zoom-fatigue (or the WebRTC equivalent of that term), but you almost always want to talk when trying to screen share and collaborate.

Go watch our fiddle – Sharing screen + microphone together
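The gist of it, as a minimal sketch (the actual fiddle may differ in the details):

```typescript
// Grab the screen and the microphone separately, then bundle both tracks into
// a single MediaStream and add them to the peer connection.
async function shareScreenWithMic(pc: RTCPeerConnection) {
  const screen = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });

  const combined = new MediaStream([
    ...screen.getVideoTracks(),
    ...mic.getAudioTracks(),
  ]);

  combined.getTracks().forEach((track) => pc.addTrack(track, combined));
}
```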

Future WebRTC fiddles won’t be announced here, but only on social media and in the WebRTC Weekly (so subscribe to it). These fiddles will all be available in the WebRTC fiddle of the month section of the WebRTC Codelab course.

#2 – Suggest some more ideas for such fiddles

Got ideas or requests for fiddles?

I can’t promise we will record your request, but I can promise you we will seriously think about it once we sit down to decide what to record.

Just contact me and tell me what you think.

The post Announcing: WebRTC fiddle of the month appeared first on BlogGeek.me.

Using getDisplayMedia for local recording with audio on Jitsi

webrtchacks - Tue, 06/30/2020 - 15:22

I wanted to add local recording to my own Jitsi Meet instance. The feature wasn’t built in the way I wanted, so I set out on a hack to build something simple. That led me down the road to discovering that: getDisplayMedia for screen capture has many quirks, mediaRecorder for media recording has some of its […]

The post Using getDisplayMedia for local recording with audio on Jitsi appeared first on webrtcHacks.

WebRTC browser support on desktop and mobile

bloggeek - Mon, 06/29/2020 - 12:30

2020 offers an interesting viewpoint to WebRTC browser support. Where exactly is it available in desktop and mobile, and what can you do about it as a developer?

This is almost a yearly article that I now write, each time with a slightly different focus to it. We’re now halfway into 2020, and things are changing fast.

Here’s a quote that I am seeing a lot this year:

Sometimes this quote is quite literally true. pic.twitter.com/SkVooF9Fez

— The Long Now Foundation (@longnow) January 1, 2020

It rings true for the last few weeks when it comes to WebRTC, but somehow, in the domain of WebRTC browser support, we’re still standing in place.

My most up to date slide on WebRTC browser support?

We will get back to it in detail a bit later.

For now I’d like to look at the “Can I use” website, filtered for WebRTC. It gives a good starting point (although somewhat misleading). I will use that as the basis of looking at WebRTC on desktop and mobile.

WebRTC support on desktop

On the desktop today, all modern web browsers support WebRTC.

This has been the case for quite some time now. Towards the end of 2018, I announced that this means WebRTC is ready.

Why?

Because the consumption model in the desktop today is done through web applications, while on mobile, it is predominantly based on native applications. So the moment all desktop browsers are nicely represented and supported, things look bright.

This isn’t to say that there aren’t challenges with WebRTC browser support – obviously there are.

I can list a few of them here off the top of my head:

  • No support in Internet Explorer (and there won’t be any for this dying browser)
  • Edge still has its old version and the new Chromium based one. They behave differently, with the new Chromium one acting just like Chrome for anything WebRTC related
  • Safari still suffers from its need to differentiate by not doing what other browsers are doing. It has no VP9 support, and it is still somewhat buggier than the rest of the crowd
  • Things like hardware acceleration support and proper rendering, antivirus CPU issues and CPU consumption in general still plague the main implementation (=Chrome) today
WebRTC support on mobile

When it comes to mobile, support for WebRTC is a bit more complicated.

WebRTC iOS Safari support

Is WebRTC really available on iOS Safari?

iOS Safari has been supporting WebRTC since Safari 11.

We’re now in Safari 13.5 and things are still rather grim when it comes to true support of WebRTC.

iOS Safari WebRTC is such a broken mess that my going suggestion to clients unfortunately is to not support it and redirect users to a native app installation. I had to manually go through all open WebRTC bugs in webkit to figure out how to explain this to my clients and help them in reaching that conclusion and even conveying that to their customers.

There are nasty bugs in iOS Safari that have been opened since 2019 or earlier relating to media handling of WebRTC. These aren’t just edge cases, but rather things you’ll have users bump into in regular use. Some of them have finally been fixed in the latest 13.5.5 beta earlier this month.

Oh – and if you plan on using any OTHER browser on iOS then WebRTC won’t be supported there. Why? Because Apple hasn’t made WebRTC available in its Webkit Webview on iOS and they aren’t allowing anyone to build a mobile iOS browser that doesn’t use Webkit as its rendering engine. So much for freedom and choice.

Up until now, there was no serious way to run a WebRTC web application in iOS Safari in production at scale. Hopefully, this is now mostly solved…

Android browsers support for WebRTC

How well is WebRTC supported in the fragmented Android world?

Android has its own set of headaches when it comes to WebRTC. That’s because there’s no single Android out there, but rather a slew of them.

Here’s what we can glean from a close look at that “can I use” list above.

  • Android Browser, Chrome for Android and Firefox for Android are the gold standard. If these are what your browsers “bump” into, then you’re in good shape
  • Opera… it depends. Opera Mobile is just fine. Opera Mini doesn’t support WebRTC
  • UC Browser for Android has no WebRTC support
  • Samsung Internet, quite the popular option you may run into, has WebRTC, but requires a webkit prefix. What does this mean? That it probably runs an old version of the WebRTC implementation, never a good thing with WebRTC
  • QQ Browser and Baidu Browser, both the China alternatives to Chrome also have WebRTC support but also with a webkit prefix. Support there would also be tricky for web applications without some serious regression testing

While WebRTC is nicely supported in Android, it is going to be hard sometimes to decide what that support exactly means. Knowing that requires understanding both the device and the browser the web application is being executed on top of.

Where do we go from here?

If you read everything until here, then understand this: WebRTC is a work in progress.

It is the best (and only) alternative you have for real time communications that works in the browser without any installation. It works well enough for large companies to release applications (web and native) that attract massive user bases.

As with many other technologies, starting to use it is simple. Getting it to a professional level requires a lot more investment and commitment.

Next week I’ll be starting off my “future of WebRTC” workshop. This workshop is going to cover many aspects of the changing landscape of WebRTC. I’ll be touching issues of infrastructure, optimization and differentiation. All with a view of the current best practices as well as the latest trends.

There are 2-3 more seats available in the workshop. If you are interested in joining, check here.

The post WebRTC browser support on desktop and mobile appeared first on BlogGeek.me.

VP9 Codec: Is it time to adopt it in your WebRTC application?

bloggeek - Mon, 06/08/2020 - 00:30

VP9 is the best unused codec today that can improve video quality and media experience in your WebRTC application. Let’s see who this codec is good for.

Last year there were 3 video codecs available in browsers for WebRTC: VP8, H.264 and VP9. Now there seem to be 5, with the addition of HEVC and AV1. Let’s put some sense into what is really going on and where each of these fits, focusing on VP9.

WebRTC video codec support by browser

All modern web browsers today support WebRTC

In the good old days, WebRTC video codec support was “simple”. The industry was bickering and fighting between VP8 or H.264 until resolving the matter by mandating both VP8 and H.264 codec support in web browsers.

Then Google went ahead, adding VP9 into the mix. Mozilla went along with it and added it to Firefox.

After that, we’ve got to the point where the Alliance of Open Media was created with AV1 as its video codec, prepping us nicely into the next codec war we’re going to face – the upcoming AV1 codec versus HEVC – both of which are now available (or imminently available, or soon to be available) in web browsers with WebRTC.

I’ve created this simple table for you to understand in which web browser which video codec is available for WebRTC:

To make things simple, if you need to launch something in 2020, then the video codecs available to you are:

  1. H.264 and VP8 on ALL browsers
  2. VP9 on all browsers other than Safari

Which leads us to the next question…

HEVC & AV1. Should you join the experiment? Be sure you want to be part of WebRTC’s future video codec(s) experiment(s)

Let’s check the new video codecs that are sprouting in WebRTC to understand where we stand with them.

HEVC

It seems that Apple is adding HEVC to Safari. It is available to some extent in the Safari Technology Preview, so developers can tinker with it without knowing when it will be publicly available in Safari. And there is no indication or an inkling of an indication that other browser vendors are going to join – Google won’t. Mozilla definitely won’t. Microsoft just might, but that would mean forking away a bit from Chromium which is now the engine inside their Edge browser – not the right focus in my mind for Microsoft here.

HEVC is like VP9 in the same way that H.264 is like VP8:

  • You need to deal with patent royalties when using HEVC (which are a litigation mess in the making)
  • VP9 has less hardware acceleration available than HEVC

The only difference is that HEVC doesn’t exist in any browser yet and will only be available on Safari while H.264 is available in all browsers.

To understand where we’re at with H.264 vs VP8 we only need to read the stats shared by Google in their recent semi-celebration for 10 years of WebM and WebRTC:

“These technologies have succeeded together, as today over 90% of encoded WebRTC video in Chrome uses VP8 or VP9.”

The bolded marking is my own doing – and just so we’re clear:

  • This is Chrome only
  • It shows H.264 has less than 10% “market share” in Chrome WebRTC
  • It also insinuates that VP9 doesn’t have a large “market share”, otherwise, a ballpark figure would have been provided. More on that later
  • This is why I think HEVC doesn’t really stand a chance against VP9 in the context of WebRTC

Now with AV1 coming up and the huge backing behind it, the HEVC track is all but dead. At least for the majority of WebRTC developers.

If you want to use HEVC in WebRTC, then you limit yourself to future Safari releases and native applications (where you modify the WebRTC codebase to add HEVC). Don’t expect it to work in any other web browser

AV1

AV1 is the best next thing in video coding. The best invention since sliced bread. The best unlikely cooperation amongst industry co-opetitors moving away from royalty bearing video codecs towards an open video codec.

It is supposed to be better than both VP9 and HEVC from a compression standpoint.

And it is supposed to be a kumbaya experience where everyone is supporting it. The members list of the Alliance of Open Media foundation behind it is impressive. It includes all browser vendors and many chipset vendors. How can you go wrong here?

The only problem for me is adoption time. It takes a long time to get a video codec to market.

Codec   Year started   Age
H.264   2003           17
VP8     2008           12
HEVC    2013           7
VP9     2013           7
AV1     2019           1

Video codec maturity

To get a video codec to market properly time is needed. From specification, to implementation, to modifying the implementation to work for real time communications, to optimizing the implementation to work reasonably on available CPUs.

In Chrome it doesn’t officially exist. It is there behind a flag, making sure users can’t really enjoy it and web developers don’t have meaningful enough access to it.

Getting a codec that came out of the oven a year or two ago to production is risky business.

If you need this article to learn about codecs, then AV1 should NOT be in your roadmap in 2020. You better wait this one out a little bit

Who is using VP9 codec today?

Google.

Not Google it. Google.

They use VP9 in Google Meet. That’s a large traffic source using VP9, but it says a lot about the adoption of VP9 so far.

There are also a few instances where VP9 is used in streaming or live streaming use cases. Nothing major though.

Adoption challenges of using VP9 codec

Photo by Mathias Jensen on Unsplash

Why so low an adoption after being out on the market for 7 years?

I can only guess…

  1. It takes time. 7 years just isn’t enough when we’re still all figuring out WebRTC
  2. VP8 and H.264 is good enough for almost everyone
  3. VP9 requires more CPU than VP8 or H.264, and there are complaints about CPU use in WebRTC with these codecs already, so VP9 won’t help alleviate that problem – only worsen it
  4. Not enough hardware acceleration for VP9. There are more hardware decoders for VP9 today than there used to be, but not much in the way of encoders. It does exist on Intel however, which is great
  5. Not enough knowledge and understanding on how to utilize VP9, so no one’s really trying it properly
  6. The popular open source media servers are all configured for VP8 or H.264 by default, even if they do support VP9. And no one changes default settings. I would also assume and expect these open source platforms not to optimize for VP9 anyway – not enough uptake yet
  7. The Alliance for Open Media and its success in putting together such a strong cadre of supporters. Because of that, many large vendors in the industry are making the decision to “wait it out” with VP9 and skip this video codec generation directly to AV1
The benefits of VP9 codec

I’ve written about the role of VP9 in WebRTC before.

The premise of VP9 is improving encoding compression over VP8.

Compression rate

That comes at a cost of expending more CPU, giving us the option to balance between using network and CPU resources.

VP9 gives you either less bitrate for the same quality or more quality for the same bitrate than VP8

When looking at the higher end of the bitrate equation, one may prefer using VP8 or H.264 – we have enough network resources, so we don't really care on that front, while saving on CPU might be beneficial

On the lower end of the bitrate equation, we’d want to squeeze every bit we have running on the network on higher quality. And then using VP9 might make more sense: since the bitrate is limited, we can spare more CPU on that and use VP9.

The sad thing is that we can't really know this in advance, at least not always.

Scalability

Implementing a workable large scale video group call with WebRTC isn’t trivial. There are a lot of aspects to deal with both from a network perspective as well as from a CPU perspective.

The name of the game in this case is optimization, and this comes by having more flexibility in the tools you can use for optimizing the hell out of your video experience.

The flexibility here comes from VP9 SVC implementation in WebRTC:

  • With H.264 and VP8 you can use simulcast – sending multiple video streams in multiple bitrates for the same content (see the sketch after this list)
  • VP8 also supports Temporal Scalability – sending multiple frame rates in a single video stream
  • With VP9 you can use SVC (Scalable Video Coding) – sending a single video stream with multiple layers for different resolutions, frame rates and quality levels
  • To be clear, getting SVC to work in WebRTC isn't easy – it requires “reverse engineering” some of the workings of Google Meet and its proprietary SDP munging, but it might actually be worth the effort
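
To make the simulcast bullet above concrete, here is a minimal sketch (browser-side JavaScript) of how an application typically asks for simulcast today, using addTransceiver() with sendEncodings. The rid names and bitrate caps are made up for illustration. VP9 SVC, as noted, has no equivalent standard knob yet and still relies on SDP munging:

```javascript
// A minimal sketch: request simulcast for the outgoing video track.
// The rid names and bitrate numbers are illustrative, not a recommendation.
async function publishWithSimulcast(pc) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });

  pc.addTransceiver(stream.getVideoTracks()[0], {
    direction: 'sendonly',
    sendEncodings: [
      { rid: 'low',  scaleResolutionDownBy: 4, maxBitrate: 150000 },
      { rid: 'mid',  scaleResolutionDownBy: 2, maxBitrate: 500000 },
      { rid: 'high', maxBitrate: 1500000 },
    ],
  });

  // VP9 SVC has no standard equivalent here today - enabling it means munging
  // the SDP the way Google Meet does, which is out of scope for this sketch.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  return offer; // hand this to the SFU over your signaling channel
}
```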

If I had to chart the flexibility of a WebRTC video codec for large group sessions based on the tools it gives developers, this is what I'd get:

We expend more CPU on VP9 but we win in network performance and scalability of a video group call by doing that.

Plotting a route towards AV1

AV1 is the future.

Should you skip VP9 and just head to AV1 once it is ready? I don’t know.

The thing is, we had 2020. A pandemic that got us all cooped up at home doing video calls like crazy. The world has changed and with it priorities in our industry.

We’ve fast forwarded roadmaps by 5 years, so the future is already here. Can you wait a year or two more before you introduce a better video codec? If yes, then go straight to AV1. If you can’t, then you should seriously consider starting off with VP9 adoption.

A quick recap

Who is using VP9 codec in WebRTC applications?

Google Meet makes use of VP9 codec. Sadly, there is no other popular, large scale WebRTC application that makes use of it.

Does VP9 codec support SVC in WebRTC?

Yes. In fact, VP9 is the only codec today that supports SVC (Scalable Video Coding) in WebRTC. This gives developers more flexibility in large group video calls and even live broadcasts than other video codecs.

The challenge is that VP9 SVC support in WebRTC isn’t official or well documented.

What benefits does VP9 codec bring to WebRTC?

Better compression rate compared to VP8 and H.264 which are mandatory to implement in WebRTC.

Better scalability. It has flexible tools that assist in scaling video group calls.

Should I use VP9, HEVC or AV1 Video codec in my WebRTC application?

HEVC and AV1 don’t yet exist in WebRTC browser implementations. At least not in a way you can utilize in a production service.

VP9 is available and usable in Chrome, Firefox and Edge.

If you are looking to improve video quality or reduce bitrates in your WebRTC application, then you should seriously look at VP9.
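
If you do decide to try it, here is a minimal sketch of preferring VP9 where the browser allows it. setCodecPreferences() is implemented in Chromium-based browsers and needs to be called before creating the offer:

```javascript
// A minimal sketch: reorder codec preferences so VP9 is negotiated first,
// falling back gracefully where setCodecPreferences() isn't available.
function preferVP9(transceiver) {
  if (!('setCodecPreferences' in transceiver)) {
    return; // let the default SDP negotiation decide
  }
  const { codecs } = RTCRtpSender.getCapabilities('video');
  const vp9 = codecs.filter(c => c.mimeType.toLowerCase() === 'video/vp9');
  const rest = codecs.filter(c => c.mimeType.toLowerCase() !== 'video/vp9');
  transceiver.setCodecPreferences([...vp9, ...rest]); // VP9 first, the rest as fallback
}
```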

The post VP9 Codec: Is it time to adopt it in your WebRTC application? appeared first on BlogGeek.me.

Microsoft Teams, Direct Routing and SBCs (pt.1)

webrtc.is - Tue, 05/26/2020 - 00:08

My journey creating a scalable SBC as a Service for Microsoft Teams Direct Routing is over at Snapsonic.com.

Surviving WebRTC CPU requirements in large group calls

bloggeek - Mon, 05/25/2020 - 12:30

Enabling large group video calls in WebRTC is possible, but requires effort. WebRTC CPU consumption requires optimizations and that means making use of a lot of different techniques.

Cramming more users in a single WebRTC call is something I’ve been addressing here for quite some time.

The pandemic around us gave rise to the use and adoption of video conferencing everywhere. Even if this does slow down eventually, we’ve fast forwarded a few years at the very least in how people are going to use this technology.

It all started with a Gallery View

What’s interesting to see is how requirements and feature sets have changed throughout the years when it comes to video conferencing. When I first joined RADVISION, the leading screen layout of a video conference was something like this:

It had multiple names at the time, though today we refer to it mainly as gallery view (because, well… that’s how Zoom calls it).

Somehow, everyone was razor focused on this. Cramming as many people as possible into a single screen. Some of it is because we didn’t know better as an industry at the time. The rest is because video conferencing was a thing done between meeting rooms with large displays.

It also fit rather well with the centralized nature of the MCU, which ruled video conferencing.

Enter Speaker View

At some point in time, we all shifted towards the speaker view (name again, courtesy of Zoom):

There were a few vendors who implemented this, but I think Google Hangouts made it popular. It was the only layout they had available (up until last month), and it was well suited for the SFU technology they used. It also made a lot of sense since Hangouts took place in laptops and desktops and not inside meeting rooms.

With an SFU, we reduce the CPU load of the server by offloading that work to the user devices. At the same time, we increase our demand from the devices.

Hello 2020

Then 2020 happened.

With it came social distancing, quarantines and boredom. For companies this meant that there was a need for townhall meetings for an office, done on a regular interval, just to keep employees engaged. Meetings that used to take place between meeting rooms became even larger as everyone started joining them from home.

The context of many calls shifted from trying to get things done to staying connected and maintaining the shared goals and values that are easier to achieve within the office space. This, in turn, got us back to the gallery view.

Oh… and it also made 20+ user meetings a lot more common.

3 reasons why WebRTC is a CPU hog

The starting point is challenging with WebRTC CPU use when it comes to video calling. WebRTC has 3 things going against it at the get go already:

#1 – Video takes up a lot of pixels

1080p@30fps is challenging. And 720p isn’t a walk in the park either.

The amount of pixels to process to encode 1080p?

62 million pixels every second… I can’t count that fast
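
The back-of-the-envelope math behind that number:

```javascript
// 1080p at 30 frames per second, before any encoding tricks:
const pixelsPerSecond = 1920 * 1080 * 30; // 62,208,000 pixels every second
```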

You need to encode and decode all that, and if you have multiple users in the same call, the number of pixels is going to grow – at least if you’re naive in your solution’s implementation.

What does this all boil down to? WebRTC CPU use will go through the roof, especially as more users are added into that group video call of yours.

#2 – Hardware acceleration isn’t always available

Without hardware acceleration, WebRTC CPU use will be high. Hardware acceleration will alleviate the pain somewhat.

Deciding to use H.264?

  • Certain optimizations won’t be available for you in group calling scenarios
  • And on devices without direct access to H.264 (some Android devices), you’ll need to use software implementations. It also means dealing with royalty payments on that software implementation

Going with VP8?

  • Hardware acceleration for it isn’t available everywhere
  • Or more accurately, there’s almost no hardware acceleration available for it

What about VP9?

  • Takes up more CPU than H.264 or VP8
  • You won’t find it in Safari
  • Not many are using it, which is a challenge as well
  • Hardware acceleration also a challenge

So all these pixels? Software needs to handle them in many (or all) cases.

#3 – It is general purpose

WebRTC is general purpose. It is a set of APIs in HTML that browsers implement.

These browsers have no clue about your use case, so they are not optimizing for it. They optimize for the greater good of humanity (and for Google’s own use cases when it comes to Chrome).

Implementing a large scale video conference scenario can be done in a lot of different ways. The architecture you pick will greatly affect quality, but also the approach you'll need to take towards optimization. This selection isn't something that a browser is aware of, and neither is the infrastructure you decide to use if you go with a CPaaS vendor.

And it all boils down to this simple graph:

Complexity vs group size in WebRTC conference calls

The bigger the meeting size, the more WebRTC CPU becomes a challenge and the harder you need to work to optimize your implementation for it.

Why now?

I’ve been helping out a client last month. He said something interesting –

“We probably had this issue and users complained. Now we have 100x the users, so we hear their complaints a lot more”

We are using video more than ever. It isn’t a “nice to have” kind of a thing – it is the main dish. And as such, we are finding out that WebRTC CPU (which people always complained about) is becoming a real issue. Especially in larger meetings.

Even Google are investing more effort in it than they used to:

@googlechrome 83 is now in beta with interesting changes to the video compositor. It should free up some CPU cycles when using @webrtc apps such as @whereby @confrere_video and #GoogleMeet

— Serge Lachapelle (@slac) April 17, 2020

3 areas to focus on to improve performance in group video conferences

Here are 3 areas you should invest time in to reduce the CPU use of your group video application:

#1 – Layout vs simulcast

Simulcast is great – if used correctly.

It allows the SFU to send different levels of quality to different users in a conference.

How is that decision made?

  • Based on available bandwidth in the downlink towards each participant
  • The performance of the device (no need to shove too much down a device's throat and choke it with more data than it can decode)
  • The actual frame resolution of where that video should appear
  • How much is that video important for the overall session

Think about it. And see where you can shave off on the bitrates. The lower the total bitrate a device needs to deal with when it has to encode or decode – the lower the CPU use will be.
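
On the sending side, the same idea can be expressed with setParameters(). A minimal sketch, assuming your own video is currently rendered as a small thumbnail in the layout (the numbers are illustrative):

```javascript
// A minimal sketch: cap what we send when our video only occupies a thumbnail.
async function capForThumbnail(videoSender) {
  const params = videoSender.getParameters();
  if (!params.encodings || params.encodings.length === 0) {
    return; // nothing negotiated yet
  }
  params.encodings[0].maxBitrate = 200000;        // ~200 kbps is plenty for a thumbnail
  params.encodings[0].scaleResolutionDownBy = 4;  // send a quarter of the resolution
  await videoSender.setParameters(params);
}
```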

#2 – Not everyone’s talking

Large conferences have certain dynamics. Not everyone is going to speak his mind. A few will be dominant, some will voice an opinion here and there and the rest will listen in.

Can you mute the participants not speaking? Is there an elegant way for you to do it in your application without sacrificing the user experience?

There are different ways to handle this. Anything from dominant speaker detection, through the use of DTX, to automatic muting and unmuting of certain users.
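
A minimal sketch of the simplest variant: stop encoding video for participants who aren't active speakers and bring it back when they are. replaceTrack(null) stops the encoder entirely, whereas track.enabled = false keeps encoding (black) frames:

```javascript
// A minimal sketch: toggle video encoding based on whether this participant
// is currently an active speaker.
async function setVideoActive(videoSender, localVideoTrack, active) {
  await videoSender.replaceTrack(active ? localVideoTrack : null);
}
```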

#3 – UI implementation

How you implemented your UI will affect performance.

The way you use CSS, HTML and your JS code will eat up the CPU without even dealing with audio or video processing.

Look at things like the events you process. Try not to run too much logic that ends up changing UI elements every 100 milliseconds or less on each media track – you're going to have a lot of these taking place.
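
A minimal sketch of that idea: poll getStats() once a second for the whole connection and touch the DOM in a single pass, instead of reacting to every per-track event:

```javascript
// A minimal sketch: batch audio-level UI updates to once per second.
// Note: audioLevel on inbound-rtp stats may be undefined in some browsers.
function startStatsLoop(pc, onLevels) {
  return setInterval(async () => {
    const report = await pc.getStats();
    const levels = [];
    report.forEach(stat => {
      if (stat.type === 'inbound-rtp' && stat.kind === 'audio') {
        levels.push({ ssrc: stat.ssrc, audioLevel: stat.audioLevel });
      }
    });
    onLevels(levels); // one DOM update per tick, not one per track
  }, 1000);
}
```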

My eBook on the topic is now available

If you are interested in how to further optimize for video conference sizes, then I just published an eBook about it called Optimizing Group Calling in WebRTC. It includes a lot more details and suggestions on the above 3 areas of focus as well as a bunch of other optimization techniques that I am sure you’ll find very useful.

The post Surviving WebRTC CPU requirements in large group calls appeared first on BlogGeek.me.

How to know if an open source WebRTC media server is kept up to date?

bloggeek - Mon, 05/18/2020 - 12:00

If you are going to start a WebRTC project that requires a media server, you'd better be sure you know how frequently the code gets updated, as well as when it was last updated.

A WebRTC media server is a type of server that is required to build applications that offer group calling capabilities among other things. There are other types of WebRTC servers that are needed, but this is not the place or time to discuss them.

Rising interest in WebRTC media servers

Here’s a discussion I had multiple times in the last month: People asking about this or that WebRTC media server or project, wanting to know if they should adopt it for their own application.

This rise stems from the increased interest in video conferencing due to the pandemic. Video has shown its usefulness in the biggest possible way. We’re all stuck at home, and the only way to communicate is by “calling”. Video adds context and meaning to voice only calling so it is becoming widespread.

In some cases, as in India, the government decided to put out funding for a video conferencing challenge, where vendors are invited to build applications for the local market. In others, remote-something is becoming a thing where the existing generic solutions don’t cut it (there are many such verticals).

As it so happens, a lot of teams are now trying to figure out which open source WebRTC media server they should pick and use.

There’s an article I wrote almost 3 years ago on 10 Tips for Choosing the Right WebRTC Open Source Media Server Framework. I took the time today to update it as well as the selection worksheet in it.

One thing that developers seem to miss is how easy it is to understand the freshness of the code – how up to date a WebRTC media server code really is.

So here we go.

How do the most popular WebRTC media servers compare to each other?

What I like doing is using the insights feature in github. It gives a nice initial perspective of a project besides the popularity metrics of watches, stars and forks (they are nice, but just to get me interested – not to form an opinion).

For that purpose, I like looking at the Pulse, Contributors and Code frequency metrics provided by github.
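
If you prefer pulling those numbers programmatically instead of clicking through the GitHub UI, here is a minimal sketch against the GitHub REST API (Node 18+ or a browser console). The Janus repo is used purely as an example; substitute whichever media server you are evaluating:

```javascript
// A minimal sketch: sum the weekly commit counts GitHub reports for a repo.
async function recentCommitActivity(ownerRepo) {
  const res = await fetch(
    `https://api.github.com/repos/${ownerRepo}/stats/commit_activity`,
    { headers: { accept: 'application/vnd.github+json' } }
  );
  if (res.status === 202) {
    throw new Error('GitHub is still computing the stats - retry shortly');
  }
  const weeks = await res.json(); // 52 entries of { week, total, days }
  const lastQuarter = weeks.slice(-13).reduce((sum, w) => sum + w.total, 0);
  console.log(`${ownerRepo}: ${lastQuarter} commits in the last 13 weeks`);
}

// recentCommitActivity('meetecho/janus-gateway');
```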

Doing such a check is useless without context. And context is built by looking at alternatives. In this case,  I decided to look at some of the most popular WebRTC media server alternatives out there: Janus, Jitsi, Kurento and mediasoup

Why these 4? Because they are mentioned in almost every conversation I have about open source WebRTC media servers.

Some of these projects are built out of multiple github repositories. For the purpose of this comparison, I tried looking at the main repo holding the media server itself. Here’s the ones I’ve used for each:

Pulse

The github pulse lists recent activity of the project. I’ve looked at a period of 1 month here on all 4 projects to get the following picture:

                 Janus    Jitsi    Kurento   mediasoup
Authors          14       10       1         4
Total commits    76       35       8         42
Files            65       569      5         14
Additions        3,152    2,091    292       2,670
Deletions        363      1,862    214       1,993

A few thoughts:

  • Jitsi and mediasoup seem to be doing a lot of code optimizations based on the ratio between additions and deletions
  • Janus had a lot of additions versus deletions, so a lot of new code this past month?
  • All projects are skewed towards a single developer at their core. Jitsi skewed towards two main developers
  • Kurento is way behind the others
  • mediasoup seems to have a smaller community of active contributors around it (it is the youngest of these projects)
Contributors

The github contributors view gives us a nice time perspective of these projects, with a focus on who are the main contributors over time.

A few thoughts:

  • Activity in Janus has been stable over time, but leans heavily on a single developer, Lorenzo Miniero
  • For Jitsi, activity is leaning on 3 developers, 2 of whom have been with the project for a very long time
  • Kurento’s activity fell since the acquisition by Twilio and never really recuperated. Most activity since then can be attributed to Juan Navarro
  • mediasoup’s activity has a kind of a cyclic nature to it that is hard to explain from outside. Most of the work there can be attributed to Iñaki Baz Castillo
Code Frequency

This chart on github shows the additions and deletions made to a project throughout its lifetime.

For the image below I tried aligning the projects as well as I could on the 10k  range, which might have distorted them a bit, but should place them nicely in the context of each other. Notice that I couldn’t do that for mediasoup – see my thoughts below. Also note that the X axis of the timescale in each is different, but that wasn’t interesting for me for this comparison.

A few thoughts:

  • Janus and Jitsi show healthy code contributions throughout their existence. I attribute the spikes outside the chart area I've selected to one-time efforts or just mistakes. Over time, they show stability
  • Kurento’s activity has seen better days
  • mediasoup is interesting. It shows bouts of activity, mainly due to heavy use of branching across its version releases
  • These projects are rather small in nature. If you search github for popular projects outside the WebRTC space, you’ll see a lot higher levels of activity
Why do we want an up to date WebRTC media server?

This looks like an obvious question, but it really isn’t.

To check this out, let's look at another popular project that I suggest people use when they need a SIP over WebSocket implementation in JavaScript: JsSIP

Does this make JsSIP a dead project? Or is it just that there’s nothing much to add besides code fixes here and there?

(Interestingly, SIP.js shows totally different behavior)

When it comes to WebRTC, the same cannot be said. WebRTC is “work in progress” at its core. Browsers deprecate and introduce new features with each release, new codecs are introduced and the slew of use cases using WebRTC is still growing strong. This means that to keep pace with these changes, WebRTC media servers need to be updated as well. Otherwise, they wither and die, with time being unable to offer the level of quality and connectivity developers and users expect.

What other criteria should you be looking for?

Freshness of code is only one criterion, but there are many more. The first one should probably be: does this media server fit my requirements? Not all media servers are built equal or for the same purpose, and deciding which one is most suitable is important as well.

Other criteria include usage, maintainability, support, documentation, etc.

You can find the full list in my article about it – 10 Tips for Choosing the Right WebRTC Open Source Media Server Framework

And if this is of real interest to you, then you should look at your selection process itself. For that, I can suggest my free media server selection KPI sheet.

Oh, and once you’re there, think about scaling as well. I have an existing eBook about best practices in scaling WebRTC applications, and an upcoming eBook on Optimizing Group Video Calling in WebRTC (available for pre-purchase).

The post How to know if an open source WebRTC media server is kept up to date? appeared first on BlogGeek.me.

AV1 vs HEVC: Are the WebRTC codec wars back?

bloggeek - Mon, 04/27/2020 - 12:30

AV1 is coming to WebRTC sooner rather than later. Apparently so is HEVC. It is an AV1 vs HEVC game now, but sadly, these codecs are unavailable to the “rest of us”.

WebRTC codec wars were something we've seen in the past. During the early days of WebRTC there were ongoing discussions about whether the mandatory video codec in WebRTC should be VP8 or H.264. The outcome was to have both of them mandatory to implement in browsers.

Fast forward to today, and life is simple. We have ubiquity and support across all browsers that have WebRTC in them, which is great.

We are now gearing up for the next fight. This one isn’t going to be between VP9 and HEVC, but rather between AV1 and HEVC.

Why now?

COVID-19 is causing all communication vendors to fast forward and accelerate their roadmaps by 6-18 months. Those that don’t are going to be left behind on the other side of this pandemic.

COVID-19 is fast forwarding all roadmaps and plans related to WebRTC, including codec improvements

This isn’t an attempt to scare anyone or to FUD people into doing things. It is just the way things are.

If you want to see how serious things are, just check what’s going on around you:

  • Zoom is rolling out new features on almost a daily basis. They are plugging in their security gaps faster than most vendors can plan their roadmap, let alone develop anything
  • Google is releasing features on Duo and Meet to drastically improve them. A lot of it hinges on machine learning but also on the latest coding technologies (more on that later, when we get to AV1 again)
  • Most UCaaS vendors have launched their own video meeting service in the past 6 months. A lot of them in the last month. Many now offering it for free
  • Many vendors in the video space from all verticals are seeing a 10x or more increase in use
  • There’s a race towards filling different gaps when comparing these meeting services versus Zoom

The AV1 vs HEVC angles here are VERY interesting.

HEVC requires royalties and is a licensing mess.

AV1 is so new it hasn’t even had an opportunity to cool down a bit after being taken out of the oven. Frankly? It is still half baked and requires a bit more cookin’ – and yet… it is now being rolled out in Google Duo.

The thing is, that 6 months back, video was nice to have. A feature that needs to be ticked in a long requirements list.

Today? Video first. All the rest comes later.

Zoom’s stock price and market cap is the best indicator of that change.

A brief history of WebRTC video codecs

In less than 10 years, we’ve witnessed 3 codec generations in WebRTC:

  1. VP8 / H.264
  2. VP9 / HEVC
  3. AV1
Each video codec generation improves over the last one, addressing different market requirements

With each generation of codec introduced, CPU and memory requirements grow along with the complexity of the codec and the resulting quality for a given bitrate increases.

VP8/H.264

I’ve been working with H.264 since 200x. Probably somewhere in 2005. It was brand new at the time and was about to replace H.263 and all of its extensions.

Fast forward to around 2010, when it started being deployed in almost all video conferencing room systems.

VP8 came to our lives along with WebRTC, in around 2012. It is comparable to H.264.

There are reasons to pick H.264 over VP8. And while hardware acceleration is more readily available in H.264 than VP8, it does pose challenges.

Both are probably at their peak right now when it comes to video calling:

  • They are ubiquitous
  • Readily available
  • Understood and known, with vibrant ecosystems
  • They can run on most CPUs

This is the tipping point, where a new video codec is being sought after.

If you are using it today, you should be just fine. If you seriously want to be at the forefront of technology, right on the bleeding edge (and you will bleed – time, money and blood), then read on to your next alternatives.

And if you need to decide between VP8 and H.264, check out this free video course: H.264 or VP8?

VP9

It should have been a VP9 vs HEVC thing and not an AV1 vs HEVC thing.

The next best thing in video codecs was supposed to be VP9. VP9 is the counterpart of HEVC: HEVC is what comes next after H.264, and the intent was always for VP9 to be the alternative to HEVC.

VP9 gives you either less bitrate for the same quality or more quality for the same bitrate than VP8

As things go, VP9's characteristics are just what you'd expect from a new codec generation:

  • Compression efficiency
  • Higher complexity
  • Scarcity of hardware acceleration (an issue still)

What VP9 was supposed to bring to the world is SVC – scalability. With VP9 SVC we were supposed to improve resiliency of video as well as the ability to scale large group video calls better than ever before.

Need a boost and have a very good grasp of who is in a call before everyone joins? VP9 might be a good alternative for you.

AV1
  • AV1 is the new kid on the block. An impossible dream coming true: vendors working together in a new Alliance for Open Media, working on a royalty free video codec. Something that was never heard of a few years ago and now feels like the new norm
  • Starting with 7 founding members, this changed the dynamics of the WebRTC codec wars. Instead of having Google with VP9 on one side of the ring and the rest of the world on the other side with HEVC, it brought a team of large players to the royalty free side, standing behind the AV1 video codec
  • Today, the alliance includes 48 members, including all browser vendors and most chipset vendors
The Alliance for Open Media is the who's who of the video industry
  • The focus in terms of comparisons is now AV1 vs HEVC

I’ve written at length about AV1 when the specification got released. You can learn about AV1 there.

There are those who believe AV1 is ready and has been ready for quite some time. Reality says otherwise. It isn't for the faint of heart at this point. More on that – below.

Adventurous? Go AV1!

Where in the world is WebRTC VP9 video call?

VP9 shipped in Chrome 48 for WebRTC. That was January 2016. 4 years later and it is safe to say that not many are using VP9 in WebRTC.

Adoption of VP9 is slow

The two main places where VP9 is making sense?

  1. Google. Google Meet for example has been using VP9 for quite some time in its own calls
  2. Peer-to-peer calls. Just because it is easy to achieve

Once AV1 was announced, the debate began if one should even try and adopt VP9 or wait for AV1 instead. The majority are waiting for AV1. Laziness at its best (and what I would have selected as well if you’re wondering).

The other reason for delaying and skipping a generation is investment in VP9. Since everyone’s looking at AV1, VP9 is left with less eyeballs and developers improving it. Add to that the slow release of SVC support to it in Chrome and the fact that Safari still doesn’t support VP9 and you can understand the reluctance of going this route.

Apple’s appetite for HEVC in WebRTC

The big Apple is insatiable. Apple has been banking on HEVC for many years now, and where HEVC & WebRTC fits in Apple has been a topic here in the past as well.

Apple is banking on both royalty bearing (HEVC) and royalty free (AV1) video codecs

On Apple’s release notes for Safari Technology Preview 104 there’s a bullet point that shows where things are headed:

Added initial support for WebRTC HEVC

I wonder whatever for?

  • Apple is a founding member of the Alliance for Open Media, so it is banking on AV1 as the future video codec
  • In iOS 11 (2017), Apple introduced HEVC to its devices. That was done with the addition of hardware acceleration
  • Android devices usually don’t have HEVC hardware support, and licensing being as tough and expensive as it is, this is a continued differentiator for Apple
  • Google will be reluctant to add HEVC to Chrome. So would Mozilla. Not sure what Microsoft’s stance would be on this one
  • Apple isn’t playing an AV1 vs HEVC game, but rather an AV1 and HEVC game, and they are alone in that at the moment
  • Apple isn’t especially strong or dominant in the WebRTC space. Safari is the worst browser these days in terms of WebRTC support, with users already used to switching to Chrome on Mac. What would adding HEVC to WebRTC Safari add? Especially when there are so many other, more basic things to fix and improve in Safari WebRTC support…

To me, this is the biggest conundrum at the moment. A piece of this puzzle is missing. What would make developers use HEVC if it is only available in Safari and nowhere else? This isn’t the app store. It is the web.

Time will tell.

WebRTC AV1 support in Google Duo

I said it before and I’ll iterate it again. AV1 is too new. Too early to be adopted in WebRTC or real time communications. And yet… Google just announced supporting AV1 in Google Duo:

[…] in the coming week, we’re rolling out a new video codec technology to improve video call quality and reliability, even on very low bandwidth connections.

They made sure to add a nice moving GIF so you can see the difference between “a video codec” and AV1 in the same bitrate.

Is that other codec VP8? VP9? H.264? HEVC? Maybe H.261…

Are they using it for all Duo calls? In all devices? In all network conditions?

The only thing I could find is that this rolls out to Android with iOS 2 weeks behind in the roll out. There are more things left unsaid.

Some thoughts here
  • AV1 doesn’t have hardware acceleration on smartphones. Maybe on 1 or 2 very new ones (I doubt it), and even then, the hardware would still be buggy as hell – especially for real time video, which is different than just camera recording or playing YouTube videos
  • This means that going to HD resolutions with AV1 on smartphones is going to be brutal on CPU, battery life and device temperature. This isn't where AV1 support in Duo is going
  • This leaves us with the low bitrate scenario – probably anything from VGA or lower. Maybe even a quarter of that (QVGA)
  • It is where AV1 is going to shine in 2020 and into 2021
Why AV1?

We’re all stuck at home burning the networks. The large streaming vendors are lowering resolutions (and bitrates) for their default players in certain countries. This reduces the CPU load, making room for improving quality on lower bitrates. And that leads to the ability (and need) of better video codecs.

Why not VP9?

Google Duo most probably already makes use of VP9. Maybe even HEVC on iOS devices due to hardware acceleration benefits. When it comes to 1:1 sessions, there’s no real reason to stick to a single video codec for all sessions.

With Apple working publicly now on HEVC in WebRTC, it put pressure on Google, and getting AV1 into Duo in order to bolster their side in the AV1 vs HEVC debate became a pressing matter. Google Duo’s 1:1 call scenarios were the most suitable candidate for Google to make that stand.

Enter AV1

When a new video codec generation was introduced, the thinking was simple: “we are expecting it to support a higher resolution, at a higher bitrate, with a higher CPU consumption”

  • Higher resolution, because let’s face it – QVGA sucked in 1995 and we were still using it in 2000 in video conferencing. So each generation had to get 4 times the pixels the previous one was capable of dealing with
  • Higher bitrate, because at 4 times the pixels we couldn’t really get 25% the size, so there was an expectation of needing more bandwidth for the content we wanted to use
  • Higher CPU consumption, because we were adding more work to the encoder and decoder

In 2020, things are changing.

Sometimes, all you need is a better fit into smaller spaces (like low bitrate)

Bigger is no longer better with video codecs

I have 4K resolution on my desktop and laptop. 1080p on my phone and TV. I am happy with 720p content most of the time. I hate fonts on a 4K screen that aren’t enlarged (the damn characters are just too small to read).

What is the value of higher resolution? HDR content? 8K? 360? VR? If all I need is just plain video, no higher resolution is required. We’re all content most of the time with 720p resolutions for business meetings anyway.

Resolution requirements for most content types and use cases are not going to get higher any time soon.

We are probably at peak resolution already.

So we are free to think of next gen video codecs as ones that help consume lower bitrates.

There’s a distinction here. While any new video codec generation consumes lower bitrates for the same resolution/quality, the main purpose of these new video codecs was almost always in increasing the resolution as well.

AV1 on mobile makes perfect sense here. Especially for low resolutions – since we can have some CPU to spare for that scenario.

A quick FAQ on the latest WebRTC video codecs

Is HEVC (H.265) supported in WebRTC?

No. Not officially.
Apple is adding support for it in Safari, but no other browser has added support for it or indicated plans to add support for it

Can I use HEVC in WebRTC?

Yes, but not in browsers.
Apple will introduce HEVC in Safari, but no other vendor will. If you build your own native application for either PC or mobile you can add HEVC as another supported codec and use it in your application.

Should I start investing in adding AV1 to my WebRTC application?

That depends. If you want to add AV1, you need to make sure your use case fits well, as well as the devices you expect your users to have.
You will also need to put a considerable investment of time and money to make it happen.
My suggestion for most vendors would be to wait with AV1 support.

Why isn’t VP9 used in WebRTC much?

That is a good question with no good answer.
I believe it is a matter of timing. When the time came to adopt VP9, AV1 was already announced and on its way, so vendors preferred to wait and jump directly to AV1 instead of going for VP9.
VP9 doesn’t enjoy much hardware acceleration, which also makes it CPU intensive, requiring companies to tweak, fine tune and optimize their systems to use it. That kind of work is something many prefer not to do.

WebRTC and the future of video codecs

We’re at war again. The video codec war of WebRTC. And this time, each vendor needs to pick a strategy to play.

Is there a single video codec today that will answer all of your WebRTC needs?

We’ve got multiple codecs in our warchest: VP8, H.264, VP9, AV1 and sometimes even HEVC now.

Which one will we be using?

Which ones will we be using?

Here, scenarios matter. Different scenarios will call for totally different video codec selection to optimize for quality, CPU use, performance, bitrate, cost, etc.

In 1:1 sessions, you may want to keep your options open – use the best one dynamically just by making a decision as the session is set up.

For group calls, will you be using a single, static video codec? Or allow for multiple ones? Will you have multiple codecs in a single group session? Are you going to have an SFU tweaked and tuned for that? Will you pick the best video codec for a session and then dynamically switch over as the nature of the session changes (=someone joins and leaves who has certain limitations)?

What about consumers? What kind of video codec selection strategies are going to be prevalent there? How are they going to be different than the ones we see in enterprise solutions? What will be the difference for mobile first or application based versus web based solutions?

WebRTC differentiation: the next battlefield lines are being drawn

WebRTC differentiation is back in focus

We live in interesting times.

Codec selection has never been more interesting or important.

While WebRTC mandates 2 codecs (H.264 & VP8), most browsers support VP9, and now we're seeing browser vendors either adding HEVC or using AV1 in their own apps.

If media quality is at the core of your service (think carefully about your answer to this question), then rethinking your video codec selection strategy might be in order.

It is going to require research and investment. But this is where the future lies for video codecs in WebRTC.

The post AV1 vs HEVC: Are the WebRTC codec wars back? appeared first on BlogGeek.me.

Open Source Cloud Gaming with WebRTC

webrtchacks - Thu, 04/16/2020 - 05:13

Software as a Service, Infrastructure as a Service, Platform as a Service, Communications Platform as a Service, Video Conferencing as a Service, but what about Gaming as a Service? There have been a few attempts at Cloud Gaming, most notably Google’s recently launched Stadia. Stadia is no stranger to WebRTC, but can others leverage WebRTC […]

The post Open Source Cloud Gaming with WebRTC appeared first on webrtcHacks.

WebRTC Server: What is it exactly?

bloggeek - Mon, 04/13/2020 - 12:30

When someone says WebRTC Server – what does he really mean? There are 4 different WebRTC servers that you need to know about: application, signaling, NAT traversal and media.

WebRTC is a communications standard that enables us to build a variety of applications. The most common ones will be voice or video calling services (1:1 or group calls). You can use it for broadcasts, live streaming, private/secure messaging, etc.

To get it working requires using a multitude of “WebRTC servers” – machines that reside in the cloud (or at least remotely enough and reachable) and provide functionality that is necessary to get WebRTC sessions connected properly.

What I’d like to do here is explain what types of WebRTC servers exist, what they are used for and when will you be needing them. There are 4 types of servers detailed in this article:

There are 4 types of WebRTC servers you need to know about
  1. WebRTC application servers – essentially the website hosting the service
  2. WebRTC signaling servers – how clients find each other and connect to each other
  3. NAT traversal servers for WebRTC – servers used to assist in connecting through NATs and firewalls
  4. WebRTC media servers – media processing servers for group calling, recording, broadcasting and other more complex features

More of the audio-visual type? I’ve recorded a quick free 3-part video course on WebRTC servers.

Enroll to free WebRTC servers course

WebRTC application servers

WebRTC application servers are like any other web application servers out there

Not exactly a WebRTC server, but you can’t really have a service without it

Think of it as the server that serves you the web page when you open the application’s website itself. It hosts the HTML, CSS and JS files. A few (or many) images. Some of it might not even be served directly from the application server but rather from a CDN for the static files.

What’s so interesting about WebRTC application servers? Nothing at all. They are just there and are needed, just like in any other web application out there.

WebRTC signaling servers

WebRTC signaling servers are in charge of connecting users to one another

Signaling servers for WebRTC are sometimes embedded or collocated/co-hosted with the application servers, but more often than not they are built and managed separately from the application itself.

While WebRTC handles the media, it leaves the signaling to “someone else” to take care of. WebRTC will generate SDP – these are fragments of messages that the application needs to pass between the users. Passing these messages is the main concern of a signaling server.

A WebRTC signaling server passes signaling messages between the users to establish a session

There are 4 main signaling protocols that are used today with WebRTC, each lending itself to different signaling servers that will be used in the application:

  1. SIP – The dominant telecom VoIP protocol out there. When used with WebRTC, it is done as SIP over WebSocket. CPaaS and telecom vendors end up using it with WebRTC, mostly because they already had it in use in their infrastructure
  2. XMPP – A presence and messaging protocol. Some of the CPaaS vendors picked this one for their signaling protocol
  3. MQTT – Messaging protocol used mainly for IOT (Internet of Things). First time I’ve seen it used with WebRTC was Facebook Messenger, which makes it a very popular/common/widespread signaling server for WebRTC
  4. Proprietary – the most common approach of all, where people just implement or pick an alternative that just works for them

SIP, XMPP and MQTT all have existing servers that can be deployed with WebRTC. 

The proprietary option takes many shapes and sizes. Node.js is quite a common server alternative used for WebRTC signaling (just make sure not to pick an outdated alternative – that’s quite a common mistake in WebRTC).
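
To make the proprietary route tangible, here is a minimal sketch of such a signaling relay in Node.js using the ws package. It blindly forwards offers, answers and ICE candidates between the peers of a room; a real signaling server also needs authentication, reconnection and error handling:

```javascript
// A minimal sketch of a WebRTC signaling relay. Not production ready.
const { WebSocketServer, WebSocket } = require('ws');

const wss = new WebSocketServer({ port: 8443 });
const rooms = new Map(); // roomId -> Set of connected sockets

wss.on('connection', (socket, request) => {
  const roomId = new URL(request.url, 'http://localhost').searchParams.get('room');
  const peers = rooms.get(roomId) || rooms.set(roomId, new Set()).get(roomId);
  peers.add(socket);

  socket.on('message', (data) => {
    // Relay the SDP / ICE message to everyone else in the same room
    for (const peer of peers) {
      if (peer !== socket && peer.readyState === WebSocket.OPEN) {
        peer.send(data.toString());
      }
    }
  });

  socket.on('close', () => peers.delete(socket));
});
```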

If you are going towards the proprietary route:

NAT traversal servers for WebRTC

NAT traversal is important to get more sessions connected properly in WebRTC

To work well, WebRTC requires NAT traversal servers. These WebRTC servers are in charge of making sure you can send media from one browser to another.

There are two types of NAT servers needed: STUN and TURN. TURN servers always implement STUN as well, so in all likelihood you’re looking at a single server here.

STUN is used to answer the question “what is my public IP address?” and then share the answer with the other user in the session, so he can try and use that address to send media directly.

TURN is used to relay the media through it (so it adds bandwidth costs), and is used when you can't really reach the other user directly.
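
In application code, STUN and TURN simply show up as the iceServers configuration of the peer connection. A minimal sketch follows; the hostnames and credentials are placeholders, and in production you would generate short-lived TURN credentials on your server:

```javascript
// A minimal sketch: configuring STUN and TURN for a peer connection.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.example.org:3478' },
    {
      urls: [
        'turn:turn.example.org:3478?transport=udp',
        'turns:turn.example.org:443?transport=tcp',
      ],
      username: 'short-lived-user',       // placeholder
      credential: 'short-lived-password', // placeholder
    },
  ],
});
```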

A few quick thoughts here:

  • You need both STUN and TURN to make WebRTC work. You can skip STUN if the other end is a media server. You will need TURN even if your other end of the session is a media server on a public IP address
  • Don’t use free STUN servers in your production environment. And don’t never ever use “free” TURN servers
  • If you deploy your own servers, you will need to place the TURN servers as close as possible to your users, which means handling TURN geolocation
  • TURN servers don’t have access to the media. Ever. They don’t pose a privacy issue if they are configured properly, and they can’t be used by you or anyone else to record the conversations
  • Prefer using paid managed TURN servers instead of hosting your own if you can
  • Make sure you configure NAT traversal sensibly. Here’s a free 3-part video course on effectively connecting WebRTC sessions
WebRTC media servers

WebRTC media servers make it possible to support more complex scenarios

WebRTC media servers are servers that act as WebRTC clients but run on the server side. They are termination points for the media where we’d like to take action. Popular tasks done on WebRTC media servers include:

  • Group calling
  • Recording
  • Broadcast and live streaming
  • Gateway to other networks/protocols
  • Server-side machine learning
  • Cloud rendering (gaming or 3D)

The adventurous and strong hearted will go and develop their own WebRTC media server. Most would pick a commercial service or an open source one. For the latter, check out these tips for choosing WebRTC open source media server framework.

In many cases, the thing developers are looking for is support for group calling, something that almost always requires a media server. In that case, you need to decide if you'd go with the classic (and now somewhat old) MCU mixing model or with the more accepted and modern SFU routing model. You will also need to think a lot about the sizing of your WebRTC media server.

For recording WebRTC sessions, you can either do that on the client side or the server side. In both cases you’ll be needing a server, but what that server is and how it works will be very different in each case.

If it is broadcasting you’re after, then you need to think about the broadcast size of your WebRTC session.

A quick FAQ on WebRTC servers

Can I run WebRTC without any server?

Not really. You will need to somehow know who to communicate with, and in many cases you will need to negotiate IP addresses and even route data through a server to connect your session properly.

Will WebRTC servers spy on me and my data?

That depends on the service you are using, as different implementations will put their focus on different features.

In general, signaling and NAT traversal servers in WebRTC don’t have access to the actual data. Media servers often have (and need) access to the actual data.

Can I host WebRTC servers on AWS?

Yes. You can host your WebRTC servers on AWS. Many popular WebRTC services are hosted today on AWS, Google Cloud, Microsoft Azure and Digital Ocean servers. I am sure other hosting providers and data center vendors work as well.

Can I run WebRTC on my PHP WordPress site?

WebRTC can be added to any WordPress, PHP or other website. In such a case, the PHP WordPress server will serve as the application server and you will need to add into the mix the other WebRTC servers: signaling server, NAT traversal server and sometimes media servers.

Know your WebRTC servers

No matter how or what it is you are developing with WebRTC, you should know what WebRTC servers are and what they are used for.

If you want to expand your knowledge and understanding of WebRTC, check out my WebRTC training courses.

The post WebRTC Server: What is it exactly? appeared first on BlogGeek.me.

True End-to-End Encryption with WebRTC Insertable Streams

webrtchacks - Mon, 04/13/2020 - 04:36

A couple of weeks ago, the Chrome team announced an interesting Intent to Experiment on the blink-dev list about an API to do some custom processing on top of WebRTC. The intent comes with an explainer document written by Harald Alvestrand which shows the basic API usage. As I mentioned in my last post, this is the […]

The post True End-to-End Encryption with WebRTC Insertable Streams appeared first on webrtcHacks.

Accelerated Computer Vision inside a WebRTC Media Server with Intel OWT

webrtchacks - Fri, 04/03/2020 - 14:18

WebRTC has made getting and sending real time video streams (mostly) easy. The next step is doing something with them, and machine learning lets us have some fun with those streams. Last month I showed how to run Computer Vision (CV) locally in the browser. As I mentioned there, local is nice, but sometimes more performance […]

The post Accelerated Computer Vision inside a WebRTC Media Server with Intel OWT appeared first on webrtcHacks.

Does your video call have End-to-End Encryption? Probably not..

webrtchacks - Thu, 04/02/2020 - 02:05

Time for another opinionated post. This time on… end-to-end encryption (e2ee). Zoom apparently claims it supports e2ee while it can not satisfy that promise. Is WebRTC any better? Zoom does not have End to End Encryption Let’s get to the bottom of things fast: Boo Zoom! I reviewed how Zoom’s implements their web client last […]

The post Does your video call have End-to-End Encryption? Probably not.. appeared first on webrtcHacks.

A new WebRTC codelab training

bloggeek - Mon, 03/30/2020 - 12:00

I am launching a new WebRTC codelab course, created together with Philipp Hancke. This goes into the intricacies of WebRTC signaling and best practices.

The State of online WebRTC resources

Where should you start with WebRTC? There’s not enough information about it and at the same time too much information about it. Most of it is old and outdated. Needle in a haystack.

Up to date WebRTC code is hard to come by

Sift through discuss-webrtc, stackoverflow and the W3C WebRTC mailing list? All great. But there’s no explanation besides pieces of code. It lacks context.

Read the spec and work your way from there? If you don’t fall asleep, you might just find that browsers aren’t exactly spec compliant yet anyways.

Books? None from 2020. None from 2019. Less than 10 in total. A handful.

How about online courses? There are a few on udemy and pluralsight, but they are also old and broken by now.

My own Advanced WebRTC Architecture course? While great, I purposefully haven’t gone through the APIs of WebRTC, understanding they change so frequently.

A codelab then? Sure. Find one that works and explains signaling properly.

The way I see it, there are 3 types of WebRTC codelabs today:

  1. The outdated ones
  2. The broken ones
  3. The ones that sell you a managed streaming service
Enter Philipp Hancke

I’ve been working with Philipp here and there in the last couple of years. He is fun to work with and knows everything there is to know about WebRTC and the intricacies of its APIs along with the status of browsers.

Here’s a bit about him:

I don’t quite recall how, but we got to the point a couple of months back that it would make sense to create a solid WebRTC codelab that covers the signaling aspects of WebRTC. Simply because there is nothing out there that does the trick properly.

WebRTC: The missing codelab

Fast forward to today, we have a codelab course for you:

The codelab has 4 parts to it:

  1. Github repo and course introduction module, both publicly available
  2. Codelab walkthrough. Over 2 hours of recorded lessons covering the source code and explaining all you need to know to grok it
  3. Exercises. We’ve started creating useful exercises on the codelab. These include the challenge, the solution and the explanation. Along with the additional code for the solution and best practices
  4. Resources library. If you are new to WebRTC, this will come handy in closing some gaps for you. They are lessons from my other courses that fit nicely here

We’ve made sure the lessons aren’t boring by making them interactive. I’ll be there with Philipp, taking the part of the student asking questions. The intent is to try and get into his head, understand his thought processes. What do you gain out of it? You will understand why things are implemented that way and not only how.

Interested?

Head to the WebRTC developer courses – there’s a 20% discount until the end of April.

If you are uncertain, then you are invited to join this week’s WebRTC Live webinar where I’ll be talking with Arin Sime about the codelab. Alternatively, just go through the introduction module and make a decision.

If you know that this is for you, then there’s a 50% discount if you enroll in March (that’s in the next 2 days) by using FASTMOVER as your coupon code on checkout (for any of the WebRTC course bundles).

If you’ve enrolled in my All Included course in 2020, then you are already automatically enrolled to the codelab. If you enrolled throughout 2019, then you are eligible for a 50% discount during April.

The post A new WebRTC codelab training appeared first on BlogGeek.me.

Stop touching your face using a browser and TensorFlow.js

webrtchacks - Thu, 03/19/2020 - 12:55

Don’t touch your face! To prevent the spread of disease, health bodies recommend not touching your face with unwashed hands. This is easier said than done if you are sitting in front of a computer for hours.  I wondered, is this a problem that can be solved with a browser? We have a number of […]

The post Stop touching your face using a browser and TensorFlow.js appeared first on webrtcHacks.
