News from Industry

CPO at Spearline and what it means to BlogGeek.me

bloggeek - Tue, 09/13/2022 - 12:30

I am now CPO (Chief Product Officer) at Spearline. This means that there are going to be some changes here at BlogGeek.me. Here’s what you can expect.

Me, somewhere in Ireland, 3 weeks ago

Almost a year ago, testRTC, the company I co-founded, was acquired by Spearline. Since then, I have gotten to know the great team there and the huge opportunity that Spearline has.

Since the above feels corny and a cliché to me as I write it, I’ll stop here.

To make a long story short:

  • Spearline acquired testRTC (Spearline has its HQ in Ireland)
  • It then had 2 separate product lines: Voice Assure and testRTC
  • As time went by, it became apparent that 2 was just the beginning
  • And also that someone needed to manage product management as a whole
  • Which is where I came in – they asked, and I said yes
  • So now I am CPO at Spearline
What does this mean?

First off, I am excited. Very.

It has been some time since I had a team to work with as their direct manager. It will also be the first time I get to manage product managers.

It also means that I am going to be investing a lot more of my time and attention in Spearline. Which is great, as I already love interacting with the people there (I wouldn’t have accepted the role otherwise).

For my consulting business, it means that I will be shrinking it down considerably. I won’t be doing much consulting moving forward. It is somewhat sad, as I really loved helping people and hearing their stories and challenges. Hopefully, I will still get to do it in other ways.

What is going to stay are all the initiatives that have taken place around BlogGeek.me over the years:

  • My writing here on this blog will continue, though probably at a lower frequency
  • The courses and reports will continue to be supported and updated. Philipp Hancke and I are working to complete the new Low-level Protocols course, and we have plans for a few other courses after this one
  • By the same token, WebRTC Insights is going to continue as a service
  • And so will WebRTC Weekly and the Kranky Geek events
  • From time to time, I’ll probably run an initiative or two here. Because I just can’t stop myself

All in all, it is time to continue and grow, in a direction I never expected to find myself in again.

The post CPO at Spearline and what it means to BlogGeek.me appeared first on BlogGeek.me.

The WebRTC Developer Tools Landscape 2022 (+report)

bloggeek - Thu, 09/08/2022 - 12:30

An updated infographic of the WebRTC Developer Tools Landscape for 2022, along with my Choosing a WebRTC API Platform report.

This week I took the time to update my WebRTC Developer Tools Landscape. I do this every time I update my report, just to make sure it is all aligned and… up to date.

A few quick thoughts I had while doing this:

  • Vendors come and go
    • We see this all the time
    • At the time of writing, I am aware of 2-3 additional changes that couldn’t fit into this update simply because of timing
  • Testing & Monitoring is becoming more important
    • There are more vendors there than there used to be
    • With my testRTC hat on, I can say this is a good thing
    • Especially since we’re the best game in town
  • CPaaS is crowded
    • And becoming more so
    • Is there room for everyone there?
    • What will this market look like moving forward?
    • Who should you be selecting for your next project?
    • All of these questions are covered in the WebRTC API report
Why is your company not there?

The WebRTC Developer Tools Landscape will never be complete. People always get pissed off at me when I publish it, not understanding why their company isn’t there. My answer to this is a simple one – because I don’t know what it is that you are doing.

They then get even angrier. What they should do at that point is ask themselves why I don’t know them enough. I have lived and breathed WebRTC since it was first announced. So if I don’t know their company and product, how do they expect others to learn about them?

I don’t think I am unique or special. It’s just that if you want to be in a landscape infographic that covers WebRTC, you should make sure that the people who deal with WebRTC and help others figure out what tools to use know what it is that you’re doing.

What about that report?

The report has been going strong for some 8 years now, with an update taking place every 8-12 months. It has been 12 months, so it definitely needed an update.

Two vendors were removed from the report and three new vendors were added.

I’ve also decided to “upgrade” the term Embed/Embeddable/Embedded to Prebuilt. The reason behind it is the progress and popularity of these types of solutions in the video API space. Most CPaaS vendors today that offer a video API also offer some form of higher-level abstraction in the form of a ready-made application – be it a full reference app, a UIKit, or a Prebuilt component.

The report will be published on 22 September. If you want to purchase it, there’s a 20% discount available at the moment – from now and until its publication.

Check out more about my Choosing a WebRTC API Platform report.


Media compression is all about purposefully losing what people won’t be missing

bloggeek - Mon, 09/05/2022 - 12:30

With WebRTC, we focus on lossy media compression codecs. These don’t preserve all the data they compress – by design, they discard what we won’t notice anyway.

[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]

The purpose of codecs – voice and video – is to compress and decompress the media that needs to be sent over the network. This was true before WebRTC and will stay true after WebRTC.

Generally speaking, there are two types of compression:

The two types of codecs
  1. Lossless compression – codecs where whatever goes into the encoder comes out, bit for bit, at the other end of the decoder. Nothing gets lost along the way. Think of it as a .zip file – it stores files and requires a perfect match on both ends of the compression
  2. Lossy compression – codecs that don’t maintain an exact match between what goes into the encoder and what comes out of the decoder. These types of codecs are quite common in audio and video processing

Audio and video tend to hold a lot of data. And since we want to send it over the network, we’d rather not waste network resources. So what do these codecs do? They try to remove anything and everything they can that our eyes and ears won’t notice much.

On a conceptual level, lossy compression has this virtual dial. You move the dial to decide how much you are willing to lose out of the data. The encoder will do its best to lose things you wouldn’t notice, but at some point, you’ll notice.

This flexibility in setting the compression level is also used to manage the bitrate. By estimating the available bandwidth, the application can instruct the encoder to turn the dial up or down, generating higher or lower compression to meet the estimated available bandwidth.
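As a hypothetical sketch of that dial, here is how an application might map a bandwidth estimate to an encoder target bitrate (the function name and the headroom/clamp numbers are made up for illustration; in a browser, the resulting number would typically be applied via RTCRtpSender.setParameters()):

```javascript
// Sketch: turn a bandwidth estimate into an encoder target bitrate.
function pickTargetBitrate(estimatedBps, minBps = 100_000, maxBps = 2_500_000) {
  // Leave ~10% headroom so media doesn't saturate the estimated link.
  const target = Math.floor(estimatedBps * 0.9);
  // Clamp: below minBps quality is unusable; above maxBps gains are marginal.
  return Math.min(maxBps, Math.max(minBps, target));
}

console.log(pickTargetBitrate(1_000_000));  // 900000 – 10% headroom
console.log(pickTargetBitrate(50_000));     // 100000 – clamped to the floor
console.log(pickTargetBitrate(10_000_000)); // 2500000 – clamped to the ceiling
```

The encoder then does its best to hit that number, losing more (or less) detail as the target moves.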

Looking to learn more about video codecs? Go ahead and read my WebRTC video basics article


The state of WebRTC open source projects

bloggeek - Mon, 08/29/2022 - 12:30

WebRTC open source is a mess. It needs to grow out of its youth and become serious business – or gain serious backing.

This article has been written along with Philipp Hancke. We cooperate on many things – WebRTC courses (new one coming up soon) and WebRTC Insights to name a few.

WebRTC is free. Every modern browser incorporates WebRTC today. And the base code that runs in these browsers is open source, available under a permissive BSD license. In some ways, free and open source got mixed into a slightly toxic combination – one in which developers assume that everything WebRTC should be free.

The end result? The sorry state in which we find ourselves today, 11 years after the announcement of WebRTC. What we’re going to do in this article, is detail the state of the WebRTC open source ecosystem, and why we feel a change is necessary to ensure the healthy growth of WebRTC for years to come.

Your open source Cliffs Notes

We’ll start with the most important thing you need to know:

Open Source != Free

Let’s take a quick step back before we dive into it though.

What’s open source exactly?

An open source project is a piece of source code that is publicly available for anyone under one of the many open source licenses out there. Someone, or a group of people from the same company or from disparate places, have “banded together” and created a piece of software that does something. They put the code of that software out in the open and slap a license on top of it. That ends up being an open source project.

Open source isn’t free. There’s a legal binding associated with using open source, but that isn’t what we’re interested in here. The point is that using open source doesn’t mean you pay nothing to anyone. It just means that you get *something* at no upfront charge.

Why would anyone end up doing this for free? Well… that brings us to business models.

Open source business models

There are different types of open source licenses. Each with its own set of rules, and some more permissive than others, making them business-friendly. Sometimes the license type itself is used as a business model, simply by offering a dual license mode where a non-permissive open source license is available freely and a commercial one is available in parallel.

In other cases, the business model of the open source project revolves around offering support, maintenance and customization of that project. You get the code for free, but if you want help with it – you can pay!

Sometimes, the business model is built around additional components (this is where you will see things like community edition and enterprise edition popping up as options on the project’s website). Things such as scripts for scaling the system, monitoring modules or other pieces of operational and functional components are kept as commercial products. The open source part brings companies in and raises the popularity and awareness of the project, while the commercial one is the reason for doing it all – how the developers behind the project bring food to the table (and maybe become rich).

In recent years, you see business models revolving around managed services. The database is open source and free, but if you let us host it for you and pay for it, we’ll take care of all your maintenance and scaling headaches.

And some believe it is really and truly free. Troy Hunt wrote about it recently (it is a really good post – go read it):

“… there is a suggestion that those of us who create software and services must somehow be in it for the money”

To that I say – yes!

At the end of the day, delving into open source is all about the money.

Why?

  • If you do this to create a popular project, then your aim is almost always to figure out how to monetize it. Directly (see above examples) or indirectly, by increasing your chances of getting hired for higher paying jobs or into more interesting projects
  • Sometimes, you do this because you care deeply about a topic. But the end result is similar. You either have the time to deal with it because you make money elsewhere and this is a hobby – or because the company hiring you is HAPPY that you are doing it (which means you are doing it to some extent for the intrinsic value it gives you at that company)
  • You might be doing it to hone your skills. But then again, the reason for all this is to become a better programmer and… get hired

The moment the open source project you are developing is meaningful to two or more people, or even a single company, there are monetary benefits to be gleaned. We’d venture that if you aren’t making anything from these benefits (even minor ones), then the open source project has no real future. It gets to a point where it either grows up or withers and dies.

A few more words about open source projects

Just a few things before we start our journey to the WebRTC open source realm:

  • Most open source projects are just an API abstracting out a certain activity or capability that you need for your own application development. In the case of WebRTC, we will be focusing on such abstractions that implement specific network entities – more on that later
  • When using open source, you usually have a bit more control over your application. That’s because you can modify the source code of the open source components you use, as opposed to asking a vendor to do that when you use a precompiled library
  • Many open source projects will have poor documentation. That is doubly true when they lack a solid business model – hobbyist developers are more into writing code than explaining how to use that code
  • Documentation is an important aspect of the commercial use of open source projects. So are a clear API facade and code samples that make it easy to get started
The WebRTC open source landscape

A common mistake “noobs” make is thinking that WebRTC is a solution requiring no coding – since browsers already implement it, there’s nothing left to do. This couldn’t be further from the truth.

WebRTC as a protocol requires a set of moving parts – clients and servers – that together enable the rich set of communication solutions we’re seeing out there.

The diagram above, taken from the Advanced WebRTC Architecture course, shows the various components necessary in a typical WebRTC application:

  • Clients, web-based or otherwise
    • The web browser ones are the ones you get for “free” as part of the browser
    • Anything else you need to figure out on your own
  • Application server, which we’re not going to touch in this article. The reason being that this is a generic component needed in any type of application and isn’t specific to WebRTC
  • Signaling server, taking care of setting up and negotiating the WebRTC sessions themselves
  • STUN/TURN server, which deals with NAT traversal. Needed in almost every deployment
  • Media server, for media processing heavy lifting. Be it group calling, recording, video rendering, etc – a media server is more than likely to make that happen

For each and every component here, you can find one or more open source projects that you can use to implement it. Some are better than others. Many are long forgotten and decaying. A few are pure gold.

Let’s dive into each of these components to see what’s available and in what state we find the open source community around them.

WebRTC open source client libraries

First and foremost, we have the WebRTC open source client libraries. These are implementations of the WebRTC protocol from a user/device/client perspective. Consider these your low level API for WebRTC.

There used to be only a single one – libwebrtc – but with time, more were introduced and took their place in the ecosystem. Which is why we will start with libwebrtc:

libwebrtc

THE main open source project of WebRTC is libwebrtc.

Why?

  1. It is the first one to be introduced
  2. Chrome uses it for its WebRTC implementation
  3. The same goes for Safari, Edge and Firefox – each with a varying degree of integration and use
  4. Many of the native mobile apps use libwebrtc internally

Practically speaking – libwebrtc is everywhere WebRTC is.

Here are a few things you need to know about this library:

  • libwebrtc is maintained and controlled solely by Google. Every change needs to be signed off by a Googler.
  • It gets integrated into Chromium and Chrome, which means it reaches billions of devices
  • That means that Google is quite protective about it. Getting a contribution into libwebrtc is no easy feat
  • While there are others who contribute, external contributions to libwebrtc are few and far between
  • Remember also that the team at Google doing this isn’t philanthropic. It does that for Google’s own needs, which mostly means Google Meet these days. This means that use cases, scenarios, APIs and code flows that are used by Google Meet are likely to be more secure, stable and far more optimized than anything else in libwebrtc’s codebase
  • Did we mention the whole build system of libwebrtc is geared towards compiling it into Chromium as opposed to other projects (like the one you’re building)? See Philipp’s Fosdem talk from 2021.
  • Or that some of its interfaces (like device acquisition) are less tested simply because Chrome overrides them, so Google’s focus is on the Chrome interfaces and not the ones implemented in libwebrtc?

Looking at the contributions over time Google is doing more than 90% of the work:

The amount of changes has been decreasing year-over-year after peaking in early 2016. During the pandemic we even reached a low point with less than 200 commits per month on average. Even with these reduced numbers libwebrtc is the largest and most frequently updated project in the open source WebRTC ecosystem.

The number of external contributions is fairly low, below 10%. This doesn’t bode well for the future of libwebrtc as the industry’s standard library of WebRTC. It would be better if Google opened up a bit more for contributions that improve WebRTC or those that make it easier to use by others.

This leads us to the business model aspect of libwebrtc

Money time

What if one decides to use libwebrtc and integrate it directly into their own application?

  • There’s no option for paid support
  • No real alternative to pay for custom development
  • Maintaining your own fork and keeping it in sync with the upstream one is a lot of effort

That said, for the most part, and in most situations, libwebrtc is the best alternative – that’s because it follows the exact implementations you will be bumping into in web browsers. It will always be the most up to date one available.

A side note – libwebrtc is implemented in C++. Why is this relevant? Pion

Pion

Pion is a Go implementation of the WebRTC APIs. Sean DuBois is the heart and soul behind the Pion project, and his enthusiasm about it is infectious.

Putting on Tsahi’s cynic hat, Pion’s success can be attributed in large part to it being written in Go. And that’s simply because many developers would rather use Go (modern, new, hip) than touch C++.

Whatever the reason, Pion has grown quite nicely since its inception and is now quite a popular WebRTC open source project. It is used in embedded devices, cloud-based video rendering, and recently even SFU and other media server implementations.

Money time

What if one decides to use Pion and integrate it directly into their own application?

  • There’s no option for paid support
  • No official alternative to pay for custom development
  • There are a handful of contributors to Pion who are doing contracting work
Python, Rust, et al

There are other implementations of WebRTC in other languages.

The most notable ones:

  • aiortc – a Python implementation of WebRTC
  • WebRTC.rs – a Rust implementation of WebRTC, created as a rewrite of Pion

There are probably others, less known.

We won’t be doing any Money time section here. These projects are still too small. We haven’t seen too many services using them in production and at scale.

GStreamer

GStreamer is an open source media framework that is older than WebRTC. It is used in many applications and services that use WebRTC, even without using its WebRTC capabilities (mainly because these were only added to GStreamer later).

We see GStreamer used by vendors when they need to transform video content in real-time. Things like:

  • Taking machine-rendered content (3D, screen casting or other) and passing it to a browser via WebRTC
  • Mixing inputs, combining them into a single recording or a single livestream
  • Collecting media input on embedded platforms and preparing it for a WebRTC session

Since WebRTC was added as another output type in GStreamer, developers can use it directly as a broadcasting entity – one that doesn’t consume data but rather generates it.

GStreamer is a community effort and written in C. While it is used in many applications (commercial and otherwise), it lacks a robust commercial model. What does that mean?

Money time

What if one decides to use GStreamer and integrate it directly into their own application?

  • There’s no official option for paid support
  • No official alternative to pay for custom development
  • The ecosystem is large enough to allow finding people with GStreamer knowledge
Open source TURN server(s)

Connecting WebRTC by using TURN to relay the media

Next we have open source TURN servers. And here, life is “simple”. We’re mostly talking about coturn. There are a few other alternatives, but coturn is by far the most popular TURN server today (open source or otherwise).

In many ways, we don’t need more than that, because TURN is simple and a commodity when it comes to the code implementation itself (up to a point, as Cloudflare is or was trying to change that with their managed service).

But, and there’s always a but in these things, coturn needs to get updated and improved as well. Here’s a recent discussion posted as an issue on coturn’s github repo:

Is the project dead?

Read the whole thread there. It is interesting.

The maintainers of coturn are burned out, or just don’t have time for it (=they have a day job). For such a popular project, the end result is a volunteer or two from the industry picking up the torch, in parallel to their own day jobs.

Which leads us to:

Money time

What if one decides to use coturn and integrate it directly into their own application?

  • There’s no official option for paid support
  • No official alternative to pay for custom development
  • The ecosystem is large enough to allow finding people with coturn knowledge
Open source signaling servers for WebRTC

Signaling servers are a different beast. WebRTC doesn’t define them exactly, but they are needed to pass the SDP messages and other signals between participants. There are several alternatives here when it comes to open source signaling solutions for WebRTC.

It should be noted that many of the signaling server alternatives in WebRTC offer pure peer communication capabilities, without the ability to interact with media servers. Some signaling servers will also process audio and video streams. How much they focus on the media side versus the signaling side determines whether we treat them here as signaling servers or media servers – it all boils down to their own focus and the functions they end up offering.

Signaling requires two components – a signaling server and a client side library (usually lightweight, but not always).
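To make the role of the server concrete, here is a deliberately minimal sketch: a signaling server never parses SDP, it just relays opaque envelopes between peers in a room (the field names below are illustrative, not any standard):

```javascript
// Minimal signaling sketch: wrap a message, then decide who receives it.
function makeEnvelope(room, from, type, payload) {
  return JSON.stringify({ room, from, type, payload });
}

// Server side: forward to everyone in the room except the sender.
// The payload (offer/answer SDP, ICE candidates) is never inspected.
function routeMessage(rawMsg, peersInRoom) {
  const msg = JSON.parse(rawMsg);
  return peersInRoom.filter(peerId => peerId !== msg.from);
}

const offer = makeEnvelope('room-1', 'alice', 'offer', { sdp: '…' });
console.log(routeMessage(offer, ['alice', 'bob'])); // [ 'bob' ]
```

A real deployment wraps this in a WebSocket server, adds authentication, presence and reconnection handling – which is exactly where the open source alternatives below differ.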

We will start with the standardized ones – SIP & XMPP.

SIP and XMPP

SIP and XMPP preceded WebRTC by a decade or so. They have their own ecosystem of open source projects, vendors and developers. They act as mature and scalable signaling servers, sometimes with extensions to support WebRTC-specific use-cases like creating authentication tokens for TURN servers.

We will not spend time explaining the alternatives here because of this.

Here, it is worthwhile mentioning MQTT as well. Facebook is known to use it (at least in the past – not sure about today) in Facebook Messenger for signaling.

PeerJS

PeerJS has been around for almost as long as WebRTC itself. For an extended period of that time, the codebase was not maintained or updated to match what browsers supported. Today, it seems to be maintained again.

The project seems to focus on a monolithic single server deployment, without any thought about horizontal scaling. For most, this should be enough.

Throughout the years, PeerJS has changed hands and maintainers, including earlier this year.

Without much ado, let’s move on to the beef of it:

Money time

What if one decides to use PeerJS and integrate it directly into their own application?

  • There’s no official option for paid support
  • No official alternative to pay for custom development
  • The codebase is small, so if you know WebRTC, these challenges shouldn’t pose any real issue
simple-peer

simple-peer was driven by Feross and his name recognition in the early days. It is another one of those “pure WebRTC” libraries that focus solely on peer-to-peer. If that fits your use case, great – it is mature and “done”. Most of the time, though, your use case will evolve over time.

It has received only a few maintenance commits in 2022 and not many more in 2021. The same considerations as for PeerJS apply to simple-peer. If you need to pick between the two… go for simple-peer – the code is a bit more idiomatic JavaScript.

Money time

Just go read the PeerJS section – the same rules apply here as well.

Matrix

Matrix is “an open network for secure, decentralized communication”. There’s also an open standard to it as well as a commercial vendor behind it (Element).

Matrix is trying to fix SIP and XMPP by being newer and more modern. But the main benefit of Matrix is that it comes as client and server along with implementations that are close to what Slack does – network and UI included. It is also built with scale in mind, with a decentralized architecture and implementation.

Here we’re a bit unaligned… Tsahi thinks Matrix is a good alternative and choice while Philipp is… less thrilled. Their WebRTC story is a bit convoluted for some, meandering from full mesh to Jitsi to a “native SFU” only recently.

So… Matrix has a company behind it. But they have their own focus (messaging service competing with Slack with privacy in mind).

Money time

What if one decides to use Matrix and integrate it directly into their own application?

  • There’s no official option for paid support
  • No official alternative to pay for custom development
  • That said, Matrix does have a jobs room on Matrix where you can search for paid help
Everything else in the github jungle

At the time of writing, there are 26,121 repositories on github mentioning WebRTC. By the time you read this, that number will have grown some.

Not many stick out, and in that jumble it is hard to figure out which projects are right for you. Especially if what you need needs to last. And doubly so if you’re looking for something with decent support and a thriving community around it.

Open source SFUs and media servers in WebRTC

Another set of important open source WebRTC components are media servers and SFUs.

While signaling servers deal with the peer communication of setting up the actual sessions, media servers are focused on the channels – the actual data we want to send, the audio and video streams – offering realtime media streaming and processing. Whenever you need group sessions, broadcasts or recordings (and you will, assuming you’d like video calls or video conferences incorporated in your application), you will end up with media servers.

Here’s where we are market-wise:

Janus, Jitsi, mediasoup & Pion

I’ve written about these projects at length in my 2022 WebRTC trends article. Here’s a visual refresher of the relevant part of it:

Janus, Jitsi, mediasoup and Pion are all useful and popular in commercial solutions. Let’s try to analyze them through the same prism we used for the other WebRTC open source projects here.

Janus
  • There’s official paid support available from meetecho
  • You can pay meetecho for consulting and paid development. From experience, they are mostly busy, which means they are picky about who they end up working with
  • The Janus ecosystem is large enough and there are others who offer development services for it as well
Jitsi

Jitsi can be considered a platform of its own:

  • At the heart of Jitsi is the Jitsi Videobridge, with additional components around it, composing together the Jitsi Meet video chat app
  • There’s also a managed CPaaS service offering as part of it – 8×8 JaaS

Money time

  • Jitsi was acquired a few years ago by 8×8. Which means that there’s no official option for paid support
  • Similarly, custom development isn’t available
  • The Jitsi ecosystem is large enough and there are others who offer development services for it as well
  • Oh, and like Matrix (where Element offers paid hosting), 8×8 JaaS offers paid hosting for Jitsi (=CPaaS). There’s also Jitsi Meet which is essentially a free managed service built on top of Jitsi itself
Mediasoup
  • mediasoup is maintained by 2 developers who have a day job at Around. Which means that there’s no official option for paid support
  • Similarly, custom development isn’t available
  • The ecosystem around mediasoup means you can get developers for it as well
Pion
  • We’ve already discussed Pion when we looked at WebRTC clients
  • Assume the same is true for media servers
  • Only you have the headache of choosing which media server written on top of Pion to use

To be clear – in all cases above, getting vendors to help you out who aren’t maintaining the specific media server codebase means results are going to be variable when it comes to the quality of the implementation. In other words, it is hard to figure out who to work with.

The demise of Kurento

The Kurento Media Server is dead. So much so that even the guys behind it went on to build OpenVidu (below) and then made OpenVidu work on top of mediasoup.

Don’t touch it with a long stick.

It has been dead for years and from time to time people still try using it. Go figure.

Higher layers of abstraction

A higher-layer abstraction open source project strives to become a platform of sorts. Their main focus in the WebRTC ecosystem is to offer a layer of tooling on top of open source media servers. The two most notable ones are probably OpenVidu and LiveKit.

OpenVidu

OpenVidu is a kind of abstraction layer for implementing a room service, UI included.

It originates from the team left behind after the Kurento acquisition. With time, they even adopted mediasoup as the media server they use, putting Kurento aside for the most part.

Money time

Unlike many of the open source solutions we’ve seen so far, OpenVidu actually seems to have a business model:

  • There’s official commercial support available
  • There are hosted commercial plans available as well as consulting and development work
LiveKit

LiveKit offers an “open source WebRTC infrastructure” – the management layer above Pion SFU.

For the life of me, though, I don’t understand what the business model is for LiveKit. They are a company – not just an open source project – and as such, they need revenue to survive.

Most probably they get some support and development money from enterprises adopting LiveKit, but that isn’t easily apparent from their website.

Other, less popular open source alternatives for WebRTC

There are other companies who offer commercial solutions that are proprietary in nature. Some do it as on-premises alternatives, where they provide the software and the support, while you need to deploy and maintain.

These can either be suitable solutions or disasters waiting to happen. Especially when such a vendor decides to pivot or leave the market.

Tread carefully here.

Is it time for WebRTC open source to grow up?

This has been a long overview, but I think we can all agree:

The current state of WebRTC open source is abysmal:

  • We are more than 10 years in
  • There are thriving open source projects for WebRTC out there
  • These projects are used by many – hobbyists and professionals alike
  • They are found inside commercial applications serving millions of users
  • But they offer little in the way of support or paid help
  • Somehow, the market hasn’t grown commercially

If it were up to us, and it isn’t, we’d like to see a more sophisticated market out there. One that gives more and better commercial solutions for enterprises and entrepreneurs alike. 

The post The state of WebRTC open source projects appeared first on BlogGeek.me.

Be very clear to yourself why you manage your own TURN servers

bloggeek - Mon, 08/22/2022 - 12:30

Running your own TURN servers for your WebRTC application is not necessarily the best decision. Make sure you know why you’re doing it.

[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]

Are you running your own TURN server? Great!

Now, are you crystal clear and honest with yourself about why you’re doing that exactly?

WebRTC has lots of moving parts you need to take care of. Lots of WebRTC servers: application servers. Signaling servers. Media servers. And yes – TURN servers.

I already covered a few aspects of TURN in this WebRTC quote – We TURNed to see a STUNning view of the ICE. It is now time to review the build vs buy decision around TURN.

You see, NAT traversal in WebRTC is done by using two different servers: STUN and TURN. STUN is practically free and it can also be wrapped right into the TURN server.

TURN servers are easy to interface with, but not as easy to install, configure and maintain properly. Which is why my suggestion more often than not is to use a third party managed TURN service instead of putting up your own. Economies of scale along with focus and core competencies come to mind here with this decision.

Why buy your WebRTC TURN servers?

Buying a TURN server should be your default decision. It is simple. It isn’t too expensive (for the most part) and it will reduce a lot of your headaches.

Most of the companies that approach me with connectivity issues of their WebRTC application end up in that state simply because they decided to figure out NAT traversal in WebRTC on their own.

Here are a few really good reasons why you should buy your TURN service:

  • The best practices of TURN (and STUN) configuration aren’t the defaults of open source TURN servers or of the standard specification itself. So if you don’t have someone inhouse who has done it at scale in the past already, then don’t start now
  • Using a third party managed TURN server is simple. Onboarding and integration should be a breeze (a few hours at most)
  • There’s no real vendor lock-in. Switching to your own TURN servers will cost you the same as it would to start with your own TURN servers, so you can delay that decision for later. And switching to another managed TURN server is just as simple as it is to start using one for the first time
  • Testing for edge cases and figuring out issues with WebRTC connectivity is hard. It takes a lot of time, requires patience, understanding and visibility into sessions when they fail. None of this is something you’ll have in the first months of running your own service
  • It is cheap. Twilio has it at $0.4/gigabyte of data. And not all of your traffic will go through TURN anyways. When you start paying more than is to your taste, you will be able to put up your own infrastructure. But why invest in that effort before it is time to do so?
  • Someone else will take care of scaling. TURN needs to be as close as possible to the end users. Installing a single server won’t be enough. Installing a single region won’t be enough. Why deal with that headache?
  • Firewall friendliness. Using your own servers means opening them up in firewall configurations of your customers. There’s a small likelihood that these firewalls are already configured to support the managed TURN service you are using for other tools
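To make the “it is cheap” argument concrete, here’s a back-of-the-envelope cost sketch. The traffic volume and the share of sessions that actually relay through TURN are assumptions for illustration – only the $0.4/GB rate comes from the text above:

```javascript
// Rough monthly cost of a managed TURN service (illustrative numbers).
const monthlyTrafficGB = 5000; // assumed total media traffic per month
const turnShare = 0.2;         // assumed ~20% of traffic actually relays via TURN
const pricePerGB = 0.4;        // Twilio's published rate, per the text

const monthlyCost = monthlyTrafficGB * turnShare * pricePerGB;
console.log(monthlyCost); // 400 – i.e. $400/month, likely cheaper than an ops hire
```

Plug in your own numbers; the point is that the crossover where self-hosting pays off sits far above most early-stage deployments.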
Why build your WebRTC TURN servers?

We are all builders. And we love building. So adding TURN to our belt of things we built makes sense. It also plays into our appreciation of vertical integration – something Apple has been so successful with in its services.

But frankly, it is mostly about control. The ability to control your own destiny without relying on others.

I still think you should buy your TURN servers from a reputable managed service provider. That said, here are some good reasons why to build and deploy your own:

  • Data sovereignty and other regulatory reasons. In some industries, for some customers, the fact that you host and run your own servers is critical. In such a case, using a managed third party TURN service is simply impossible. In the same domain, privacy and data processing requirements may make using a third party harder than setting up your own
  • You already have large traffic and a wide footprint. With economies of scale this starts becoming interesting and important. If you have the sheer size that makes it worthwhile running your own then do it. I wouldn’t start below $10,000 or even $50,000 in monthly expenses for your managed TURN service, which is a lot of traffic. Why? Because you’ll need a full time ops person on the job for at least half a year if not longer. And you’ll need to deploy servers in many regions from the get go, so better start when you’re big enough
  • Firewall configurations can be a mess. Sometimes, your customers may want to validate the IP addresses they configure are yours, or want to limit the IP address ranges they configure, or limit the services they expose themselves to. In such cases, they might not look at it nicely when you use a third party
  • Existing customer installations might already be configured to your IP address ranges, and just placing your TURN servers within those ranges will be easier than asking them to change firewall configurations to incorporate a third party vendor
  • Traffic control is another reason. Using your own SDN network configuration or packet acceleration may benefit from having your own TURN servers in-house, alongside the rest of your infrastructure, as opposed to being hosted elsewhere where connectivity to your backend servers might be questionable

Build? Buy? Which one is the path you’ll be taking?

Trying to get more of your calls connected in WebRTC? Check out this free video mini course on effectively connecting WebRTC sessions

The post Be very clear to yourself why you manage your own TURN servers appeared first on BlogGeek.me.

We TURNed to see a STUNning view of the ICE

bloggeek - Mon, 08/08/2022 - 11:30

Every time you look at NAT Traversal in WebRTC, you end up learning something new about STUN, TURN and/or ICE.

[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]

STUN, TURN and ICE. The most misunderstood aspects of WebRTC, and the most important ones to get more calls connected. It is no wonder that the most viewed and starred lesson in my WebRTC training courses is the one about NAT traversal.

Let’s take this opportunity to go over a few aspects of NAT traversal in WebRTC:

  • STUN is great (and mostly free). It doesn’t route media, it just punches holes in firewalls and NATs
  • TURN means relaying your media. It isn’t used for all sessions, but when it is used, it is a life saver for that session. You can keep TURN servers configured on all connections, since they will be used only when needed
  • While STUN and TURN are servers, ICE isn’t. ICE is a protocol. It is how WebRTC decides if it is going to use TURN or not in a session
  • No matter how you connect your session, it may happen on either UDP or TCP. UDP will be a better alternative (and WebRTC will prioritize it and try to connect it “first”)
  • TURN servers are expensive. Don’t use free TURN servers – they aren’t worth the money you aren’t paying for them. Use your own or go for a paid, managed TURN service
  • Put TURN servers as close as possible to your users. They’ll thank you for that
  • In the peer connection’s iceServers configuration – don’t put more than 3-4 servers (that means 1 STUN, 1 TURN/UDP, 1 TURN/TCP, 1 TURN/TLS). More servers means more connectivity checks and more time until you get things connected – it doesn’t mean better connectivity
  • Geolocation with TURN should be done either before you place your TURN servers in the configuration or via the DNS requests for the TURN servers themselves
  • You don’t always need TURN servers. Read more about when you need and don’t need TURN
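The iceServers advice above can be sketched as a configuration object: one STUN entry plus one TURN entry reachable over UDP, TCP and TLS. The hostnames and credentials here are placeholders, not real servers:

```javascript
// 4 URLs in total – 1 STUN, 1 TURN/UDP, 1 TURN/TCP, 1 TURN/TLS – matching
// the "don't put more than 3-4 servers" rule of thumb.
// stun.example.com / turn.example.com and the credentials are placeholders.
const iceServers = [
  { urls: "stun:stun.example.com:3478" },
  {
    urls: [
      "turn:turn.example.com:3478?transport=udp",
      "turn:turn.example.com:3478?transport=tcp",
      "turns:turn.example.com:443?transport=tcp",
    ],
    username: "myUser",
    credential: "mySecret",
  },
];

// In the browser, this is passed straight to the peer connection:
// const pc = new RTCPeerConnection({ iceServers });
```

Note that the TLS variant typically listens on port 443, which also helps with restrictive firewalls.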

This covers the basics. There’s a ton more to learn and understand about NAT traversal in WebRTC. I’d also suggest not installing and deploying your own TURN servers but rather use a third party paid managed service. The worst that can happen is that you’ll install and run your own later on – there’s almost no vendor lock-in for such a service anyway.

Trying to get more of your calls connected in WebRTC? Check out this free video mini course on effectively connecting WebRTC sessions

The post We TURNed to see a STUNning view of the ICE appeared first on BlogGeek.me.

With media delivery, you can optimize for quality or latency. Not both

bloggeek - Mon, 07/25/2022 - 11:30

You will need to decide what is more important for you – quality or latency. Trying to optimize for both is bound to fail miserably.

[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]

First thing I ask people who want to use WebRTC for a live streaming service is:

What do you mean by live?

This is a fundamental question and a critical one.

If you search Google, you will see vendors stating that good latency for live streaming is below 15 seconds. That might be good enough on paper, but it is quite crappy if you are watching a live soccer game and your neighbors, who saw the goal 15 seconds before you did, are shouting.

I like using the diagram above to show the differences in latencies by different protocols.

WebRTC leaves all other standards based protocols in the dust. It is the only true sub-second latency streaming protocol. It doesn’t mean that it is superior – just that it has been optimized for latency. And in order to do that, it sacrifices quality.

How?

By not retransmitting or buffering.

With all other protocols, you are mostly going to run over HTTPS or TCP. And all other protocols heavily rely on retransmissions in order to get the complete media stream. Here’s why:

  • Networks are finicky, and in most cases, that means you will be dealing with packet losses
  • You stream a media file over the internet, and on the receiving end, parts of that file will be missing – lost in transmission
  • So you manage it by retransmission mechanisms. Easiest way to do that is by relying on HTTPS – the main transport protocol used by browsers anyways
  • And HTTPS leans on TCP to offer reliability of data transmission, which in turn is done by retransmitting lost packets
  • Retransmissions require time, which means adding a buffering mechanism to make room for it to work and provide a smooth viewing experience. That time is the latency we see ranging from 2 seconds up to 30 seconds or more
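The buffering in the last step above follows directly from round trips: a lost packet can only be recovered after at least one round trip to the server. A minimal sketch, assuming a 100ms RTT and up to 3 recovery attempts (both numbers are assumptions):

```javascript
// The player must buffer long enough to cover repeated recovery attempts
// for a single lost packet.
const rttMs = 100;          // assumed round-trip time to the media server
const recoveryAttempts = 3; // assumed retransmission attempts before giving up

const minBufferMs = rttMs * recoveryAttempts;
console.log(minBufferMs); // 300 – and that's just for loss recovery
```

Real players buffer whole seconds on top of this to smooth out bandwidth fluctuations, which is where the 2-30 second latencies come from.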

WebRTC comes from the real time, interactive, conversational domain. There, even a second of delay is too long to wait – it breaks the experience of a conversation. So in WebRTC, the leading approach to dealing with packet losses isn’t retransmission, but rather concealment. WebRTC tries to conceal packet losses and also to make sure there are as few of them as possible by providing a finely tuned bandwidth estimation mechanism.

Looking at WebRTC itself, it includes a jitter buffer implementation. The jitter buffer is in charge of delaying playout of incoming media. This is done to assist with network jitter, offering smoother playback. And it is also used to implement lip synchronization between incoming audio and video streams. You can to some extent control it by instructing it not to delay playout. This will again hurt the quality and improve latency.
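A sketch of that control in code. `jitterBufferTarget` is the standardized hint on RTCRtpReceiver (in milliseconds); older Chrome versions exposed the non-standard `playoutDelayHint` (in seconds) instead – check current browser support before relying on either:

```javascript
// Ask the jitter buffer to keep playout delay minimal on all incoming tracks.
// This trades smoothness (quality) for latency, as discussed above.
function minimizePlayoutDelay(peerConnection) {
  for (const receiver of peerConnection.getReceivers()) {
    if ("jitterBufferTarget" in receiver) {
      receiver.jitterBufferTarget = 0; // milliseconds – smallest allowed buffer
    }
  }
}
```

The browser treats this as a hint, not a guarantee – it will still delay playout as much as it deems necessary for decodable media.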

You see, the lower the latency you want, the bigger the technical headaches you will need to deal with in order to maintain high quality. Which in turn means that whenever you want to reduce latency, you are going to pay in complexity and also in the quality you will be delivering. One way or another, there’s a choice being made here.

Looking to learn more on how to use WebRTC technology to build your solution? We’ve got WebRTC training courses just for that!

The post With media delivery, you can optimize for quality or latency. Not both appeared first on BlogGeek.me.

Nocode/Lowcode in CPaaS

bloggeek - Mon, 07/18/2022 - 12:30

Lowcode and nocode are old/new concepts that are now finding their way to Communication APIs. Here are the latest developments.

Lowcode and nocode have fascinated me. Around 15 years ago (or more), I was tasked with bringing the video calling software SDKs we developed at RADVISION to the cloud.

At the time, the solutions we had were geared towards developers and were essentially SDKs that were used as the video communication engines of applications our customers developed. Migrating to the cloud when all you are doing is the SDKs is a challenge. How do you offer your developer customers the means to control the edge devices via the cloud, while still allowing their applications to control the look and feel and embed the solution wherever they want?

The cloud we developed used Python (Node.js wasn’t popular yet), and we dabbled and experimented with Awesomium – a web browser framework for applications – the predecessor of today’s more popular Electron. We built REST APIs to control the calling logic and handle the client apps remotely via the cloud.

I spent much of my time trying to come to grips with how exactly you would fit remote controlling an app to the fact that you don’t really own or… control. A conundrum.

Fast forward to today, where cloud and WebRTC are everywhere, and you ask yourself – how do you remote control communications – and how do you build such interactions with ease.

The answer to that is usually by way of nocode and lowcode. Mechanisms that reduce the amount of code developers need to write to use certain technologies – in our case Communication APIs (CPaaS).

I had a bit of spare time recently, so I decided to spend it on capturing today’s nocode & lowcode status and progress within the CPaaS domain.

This has been especially important if you consider the recent announcements in the market – including the one coming from Zoom about their Jumpstart program:

“With Jumpstart, you can quickly create easy-to-integrate and easy-to-customize Zoom video solutions into your apps at lower costs.”

So without much ado, if this space interests you, you should check out my new free eBook: Lowcode & Nocode in Communication APIs

This eBook details and explains the various approaches in which lowcode and nocode manifest themselves in the Communication APIs domain. It looks into the advantages and challenges of developers who adopt such techniques within their applications.

I’d like to thank Daily for sponsoring this ebook and helping me make it happen. If you don’t know them by now then you should. Daily offers WebRTC video and audio for every developer – they are a CPaaS vendor with a great lowcode/nocode solution called Daily Prebuilt

If you are in the process of developing applications that use 3rd party Communication APIs, you will find the insights in this eBook important to follow.

GET MY FREE LOWCODE/NOCODE CPAAS EBOOK

The post Nocode/Lowcode in CPaaS appeared first on BlogGeek.me.

In group video calls, effectively managing bandwidth is 90% of the battle

bloggeek - Mon, 07/11/2022 - 11:30

The biggest challenge you will have when implementing WebRTC group calling is estimating and optimizing bandwidth use.

[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]

Video is a resource hog. Some say that WebRTC is a great solution for 1:1 calls, but is lacking when it comes to group calling. To them I’d say that WebRTC is a technology and not a solution. In this case, it simply means that you need to invest some effort in getting group video calling to work well.

What does that mean exactly? That you need to think about bandwidth management first and foremost.

Why?

Let’s assume a 25-participant video call. And we’re modest – we just want each participant to encode video at 500kbps – reasonable if we plan on having everyone at a mere VGA resolution (640×480 pixels).

Want to do the math together?

We end up with 12.5Mbps. That’s only for the video, without the overhead of headers or audio. Since we only need to receive media from 24 participants, we can “round” this down to 12Mbps.
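The arithmetic above, in code form:

```javascript
// 25 participants, each encoding at 500kbps; each participant receives the
// streams of the other 24.
const participants = 25;
const bitrateKbps = 500;

const downlinkMbps = ((participants - 1) * bitrateKbps) / 1000;
console.log(downlinkMbps); // 12 – Mbps of incoming video alone, per participant
```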

I am sure you have a downlink higher than 12Mbps, but let me tell you a few things you might not be aware of:

  • A downlink of 100Mbps doesn’t mean you can really get sustainable 12Mbps for a long period of time
  • It also doesn’t mean you can get 12Mbps of incoming UDP traffic (and you prefer UDP since it is better for sending real-time media)
  • Most likely, your device won’t be able to decode 12Mbps of video content at reasonable CPU use
  • And if you have hardware acceleration for video decoding, it usually is limited to 3 or 4 media streams, so handling 24 such streams means software decoding – again running against the CPU processing limit
  • The larger the group the more diverse the devices and network connections. So you’ll be having people joining on old devices and smartphones, or with poor network connections. For them, 12Mbps will be science fiction at best
  • As a rule of thumb, I’d look at any service that uses over 3-4Mbps of downlink video traffic for video group calls as something that wasn’t properly optimized

You can get better at it by figuring out lower bitrates, limiting how much you send and receive, and doing so individually per participant in the video group meeting. You can take into consideration the display layout, the dominant speaker and contributing participants, etc.

That’s exactly what 90% of your battle here is going to be – effectively managing bandwidth.

Going for a group video calling route? Be sure to save considerable time and resources for optimization work on bandwidth estimation and management. Oh – and you are going to need to do that continuously. Because WebRTC is a marathon not a sprint

Scaling WebRTC is no simple task. There are a lot of best practices, tips and tricks that you should be aware of. My WebRTC Scaling eBooks Bundle can assist you in figuring out what more you can do to improve the quality and stability of your group video calling service.

The post In group video calls, effectively managing bandwidth is 90% of the battle appeared first on BlogGeek.me.

Calculating True End-to-End RTT (Balázs Kreith)

webrtchacks - Mon, 07/11/2022 - 05:14

Balázs Kreith of the open-source WebRTC monitoring project, ObserveRTC shows how to calculate WebRTC latency - aka Round Trip Time (RTT) - in p2p scenarios and end-to-end across one or more SFUs. WebRTC's getStats provides relatively easy access to RTT values, but using those values in a real-world environment for accurate results is more difficult. He provides a step-by-step guide using some simple Docker examples that compute end-to-end RTT with a single SFU and in cascaded SFU environments.

The post Calculating True End-to-End RTT (Balázs Kreith) appeared first on webrtcHacks.

WebRTC is a technology not a solution

bloggeek - Mon, 06/27/2022 - 12:30

WebRTC is a building block to be used when developing solutions. Comparing it to solutions is the wrong approach.

[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]

How does WebRTC compare to Zoom?

What about Skype? Or FaceTime?

I’d say these are apples-to-oranges questions – you’re not comparing things that are comparable.

WebRTC is a piece of technology. A set of building blocks that you can use, like lego bricks.

In essence, you can view WebRTC in two ways:

  1. A standard specification – what goes on the network. In this mindset, the actual infrastructure pieces are yours to build (=the application/solution), and WebRTC just specifies what goes “on the wire”
  2. Open source implementation of the specification – this one is the libwebrtc library maintained by Google and embedded in Chrome. And then it is again just a piece that gets embedded inside different components, usually client-side only. And again, the solution is up to you to build with additional infrastructure pieces

Got an application you’re developing? Need communications sprinkled into it? Some voice. Maybe video. All in real time. And with browser components maybe. If that is the case, then WebRTC is the technology you’re likely to be using for it. But piecing all of that together into your application? That’s up to you. And that’s your solution.

We can then compare the solution you built to some other solution out there.

Next time people tell you “WebRTC isn’t good because it can’t do group calls” – just laugh in their faces. Because as a technology WebRTC can certainly handle group calls and large broadcasts – you’ll need to bring media servers to do that, and sweat to build your solution. The pieces of your puzzle there will include WebRTC as a technology.

Remember:

WebRTC is a technology not a solution. What you end up doing with it is what matters

Looking to learn more on how to use WebRTC technology to build your solution? We’ve got WebRTC training courses just for that!

The post WebRTC is a technology not a solution appeared first on BlogGeek.me.

The Ultimate Guide to Jitsi Meet and JaaS

webrtchacks - Tue, 06/21/2022 - 14:10

A full review and guide to all of the Jitsi Meet-related projects, services, and development options including self-install, using meet.jit.si, 8x8.vc, Jitsi as a Service (JaaS), the External iFrame API, lib-jitsi-meet, and the Jitsi React libraries among others.

The post The Ultimate Guide to Jitsi Meet and JaaS appeared first on webrtcHacks.

Meet vs. Duo – 2 faces of Google’s WebRTC

webrtchacks - Wed, 06/15/2022 - 07:19

A very detailed look at the WebRTC implementations of Google Meet and Google Duo and how they compare using webrtc-internals and some reverse engineering.

The post Meet vs. Duo – 2 faces of Google’s WebRTC appeared first on webrtcHacks.

WebRTC is a marathon not a sprint

bloggeek - Tue, 06/14/2022 - 12:30

WebRTC requires an ongoing investment that doesn’t lend itself to a one-off outsourced project. You need to plan for it and work with it long term.

[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]

WebRTC simplified development and reduced the barrier of entry to many in the market. This brought with it the ability to quickly build, showcase and experiment with demos, proof of concepts and even MVPs. Getting that far is now much easier thanks to WebRTC, but not planning ahead will ruin you.

There are a few reasons why you can’t treat WebRTC as merely a sprint:

  1. WebRTC as a technology is changing
    • The standard and what browsers implement aren’t aligned just yet. There are discrepancies, and while they are getting resolved, this takes time, meaning we’re in a long transition period
    • Browsers are investing in WebRTC (or at least the Chrome team is), so browser behaviors wrt WebRTC change from one Chrome release to another
  2. Communications vendors have woken up
    • Since the pandemic, communication vendors are investing heavily in innovation
    • This leads to an arms race in feature sets and capabilities. Things you’ll need to keep up with as well
  3. WebRTC is a resource hog
    • It uses microphones and cameras, it eats up CPU and memory
    • New devices (and old devices seen for the first time) may well cause hiccups in your application’s behavior. You’ll be fine tuning, tweaking and troubleshooting your WebRTC code for years to come – assuming your service becomes popular
  4. Networks are flaky
    • WebRTC needs to work on unmanaged networks at all times
    • Often enough, users will fail to connect. Or have quality issues. You’ll need to help them out. A lot more than with “simple” web sites

I like using this slide in my courses and presentations:

These are the actors in a WebRTC application. While the application is within your control and ownership – everything else isn’t…

  • Users are finicky and they use their own weird devices to connect. They also come with different levels of technical understanding and savviness
  • Networks are unmanaged, and you can never know in advance where the user is, whether their network is good or bad, and what kind of firewalls and other nasty devices along the route are going to hinder communications
  • Browsers don’t adhere to your development schedule. They have their own pace – a breakneck pace of around 4 weeks between one release and the next

Planning on using WebRTC? Great!

Now prepare for it as you would for a long marathon – it isn’t going to be a sprint.

Things to do in your preparation for the WebRTC marathon include:

  • Getting skilled teams; most likely growing them inhouse and training them with WebRTC
  • Tool up. Take care of long term needs of testing and monitoring (you definitely should check testRTC)
  • Use a third party CPaaS to own most of the WebRTC infrastructure headaches if you don’t have the skillset to do it (and yes, I have a report for that)

The post WebRTC is a marathon not a sprint appeared first on BlogGeek.me.

What is the WebRTC leak test and should you be worried about it?

bloggeek - Mon, 06/06/2022 - 12:00

Hearing FUD around WebRTC IP leaks and testing them? The stories behind them are true, but only partially.

WebRTC IP leak tests were popular at some point, and somehow they still are today. Some of it is related to pure FUD while another part of it is important to consider and review. In this article, I’ll try to cover this as much as I can. Without leaking my own private IP address (192.168.123.191 at the moment if you must know) or my public IP address (80.246.138.141, while tethered to my phone at the coffee shop), let’s dig into this topic together

A primer to IP addresses

IP addresses are what got you here to read this article in the first place. They are used by machines to reach out to each other and communicate. There are different types of IP addresses, and one such grouping is the distinction between private and public addresses.

Private and public IP addresses

Once upon a time, the internet was built on top of IPv4 (and it still mostly is). IPv4 meant that each device had an IP address constructed out of 4 octets – a total of around 4 billion potential addresses. Fewer than the number of people on earth today, and certainly fewer than the number of devices that now exist and connect to the internet.
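The 4 billion figure is simply the size of a 32-bit address space:

```javascript
// IPv4 addresses are 4 octets of 8 bits each = 32 bits:
const totalIPv4 = 2 ** 32;
console.log(totalIPv4); // 4294967296 – about 4.3 billion addresses
```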

This got solved by splitting the address ranges to private and public ones. A private IP address range is a range that can be reused by different organizations. For example, that private IP address I shared above? 192.168.123.191? It might also be the private IP address you are using as well.

A private IP address is used to communicate between devices that are hosted inside the same local network (LAN). When a device is on a different network, then the local device reaches out to it via the remote device’s public IP address. Where did that public IP address come from?

The public IP address is what a NAT device associates with the private IP address. This is a “box” sitting on the edge of the local network, connecting it to the public internet. It essentially acts as the translator of public IP addresses to private ones.

IP addresses and privacy

So we have IP addresses, which are like… home addresses. They indicate how a device can be reached. If I know your IP address then I know something about you:

  • Private IP address is a small window towards that local network. Enough such addresses and someone can get a good understanding of the architecture of that network (or so I am being told)
  • Public IP addresses can tell you where that user is. To some extent:

A quick look at that public IP address of mine from above, gives you the following information on WhatIsMyIpAddress.com:

So…

  • My ISP is HOT Mobile
  • But… this is a cellular hotspot tethered from my smartphone
  • And I am definitely not located in Ashdod, although I did grow up there as a kid
  • Suffice to say, this isn’t a Static IP address either
  • A thing to consider here – a seemingly innocent website knows my public IP address. There’s no real “privacy” in public IP addresses

It is somewhat accurate, but in this specific case, not much. In other cases it can be pretty damn accurate. Which means it is quite private to me.

One thing these nasty IP addresses can be used for? Fingerprinting. This is a process of understanding who I am based on the makeup and behavior of my machine and me. An IP address is one of many characteristics that can be used for fingerprinting.

If you’re not certain whether IP addresses are a privacy concern or not, then there’s the notion that most probably IP addresses are considered personally identifiable information – PII (based on rulings of US courts as far as I can glean). This means that an IP address can be used to identify you as a person. How does that affect us? I’d say it depends on the use case and the mode of communications – but what do I know? I am not a lawyer.

Who knows your IP address(es)?

IP addresses are important for communications. They contain some private information in them due to their nature. Who knows my IP addresses anyway?

The obvious answer is your ISP – the vendor providing you access to the internet. It allocated the public IP address you are using, and it knows which private IP address you are coming from (in many cases, it even assigned that to you through the ADSL or other access device it installed in your home).

Unless you’re trying to hide, all websites you access know your public IP address. When you connected to my blog to read this article, in order to send this piece of content back to you, my server needed to know where to reply to, which means it has your public IP address. Am I storing it and using it elsewhere? Not that I am directly aware of, but my marketing services such as Google Analytics might and probably does make use of your public IP address.

That private IP address of yours though, most websites and cloud services aren’t directly aware of it and usually don’t need it either.

WebRTC and IP addresses

WebRTC does two things differently than most other browser based protocols out there:

  1. It enables peer-to-peer communications, directly between two devices. This diverges from the classic client-server approach where a server mediates each and every message between clients
  2. WebRTC uses dynamic ports generated per session when needed. This again is something you won’t see elsewhere in web browsers where ports 80 and 443 are so common

Because WebRTC diverges from the client-server approach AND uses dynamic ephemeral ports, there’s a need for NAT traversal mechanisms to be able to… well… pass through these NATs and firewalls. And while at it, try not to waste too much network resources. This is why a normal peer connection in WebRTC will have 4+ types of “local” addresses as its candidates for such communications:

  1. The host address (usually, the private IP address of the device)
  2. The server reflexive address (the public IP address discovered via a STUN request)
  3. The relay address (a public IP address allocated via a TURN request). This one comes in 3 different “flavors”: UDP, TCP and TLS

Lots and lots of addresses that need to be communicated from one peer to another. And then negotiated and checked for connectivity using ICE.
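Those candidate types show up in the SDP as `typ host`, `typ srflx` and `typ relay`. A small sketch that pulls the type out of a candidate line (the sample line below is made up, following the standard `a=candidate` format):

```javascript
// Extract the ICE candidate type from an SDP candidate line.
// "host" = local/private IP, "srflx" = STUN-derived public IP,
// "relay" = TURN-allocated address ("prflx" appears during connectivity checks).
function candidateType(candidateLine) {
  const match = candidateLine.match(/ typ (host|srflx|prflx|relay)/);
  return match ? match[1] : null;
}

// A made-up host candidate, carrying a private IP address:
candidateType("candidate:1 1 udp 2122260223 192.168.123.191 56143 typ host");
// → "host"
```

This is exactly what the various “WebRTC leak test” pages do: gather candidates and read the addresses out of them.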

Then there’s this minor extra “inconvenience” that all these IP addresses are conveyed in SDP which is given to the application on top of WebRTC for it to send over the network. This is akin to me sending a letter, letting the post office read it just before it closes the envelope.

IP addresses are necessary for WebRTC (and VoIP) to be able to negotiate and communicate properly.

This one is important, so I’ll write it again: IP addresses are necessary for WebRTC (and VoIP) to be able to negotiate and communicate properly.

It means that this isn’t a bug or a security breach on behalf of WebRTC, but rather its normal behavior which lets you communicate in the first place. No IP addresses? No communications.

One last thing: You can hide a user’s local IP address and even public IP address. Doing that though means the communication goes through an intermediary TURN server.

Past WebRTC “exploits” of IP addresses

WebRTC is a great avenue for hackers:

  1. It is a new piece of technology, so the understanding of it is limited
  2. WebRTC is complex, with a lot of different network protocols and attack surfaces via its extensive APIs
  3. IP addresses are needed to be exchanged, giving access to… well… IP addresses

The main exploits around IP addresses in browsers affecting the user’s privacy were conducted so far for fingerprinting.

Fingerprinting is the act of figuring out who a user is based on the digital fingerprint they leave on the web. You can glean quite a lot about who a user is based on the behavior of their web browser. Fingerprinting makes users identifiable and trackable when they browse the web, which is quite useful for advertisers.

The leading story here? NY Times used WebRTC for fingerprinting

There’s a flip side to it – WebRTC is/was a useful way of knowing if someone is a real person or a bot running on browser automation as indicated in the comments. A lot of the high scale browser automations simply couldn’t quite cope with WebRTC APIs in the browser, so it made sense to use it as part of the techniques to ferret out real traffic from bots.

Since then, WebRTC made some changes to the exposure of IP addresses:

  • It doesn’t expose local IP addresses to the application if the user hasn’t allowed access to the camera or microphone
  • If it still needs local addresses, it uses mDNS instead
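With mDNS, the host candidate no longer carries the raw private IP but an autogenerated hostname ending in `.local` (the UUID below is a made-up example). Telling the two apart is trivial:

```javascript
// mDNS obfuscation replaces the private IP in host candidates with a random
// UUID hostname ending in ".local", resolvable only on the local network.
function isMdnsCandidate(address) {
  return address.endsWith('.local');
}

console.log(isMdnsCandidate('5f3d4ab1-77b8-4a3c-9f2e-0c1d2e3f4a5b.local')); // true
console.log(isMdnsCandidate('192.168.1.17')); // false
```

The other peer can still resolve the `.local` name if it sits on the same network – so local connectivity keeps working without exposing the private IP to the application.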
Who knows your local IP address in WebRTC?

There are several entities that need to have your local IP address in a WebRTC session:

  1. Your browser. Its “innards” that run the WebRTC stack need to know your local IP address. And they do
  2. The other peer. This can be another web browser or a media server. They need that IP address to reach back to you if they’re on the same network as you are. And they can only know that if they try reaching out to you
  3. The web application. Since WebRTC has no signaling, the application is the one that sends the local IP address to the other peer
  4. Browser extensions. These may have access to this information simply because they have JavaScript coding access to the web page conducting the communications. Since the web application has a clear view of the IP addresses in the SDP messages, so do any browser extensions that have access to that web page and web application
  5. TURN servers. Not yours, but your peer’s TURN server. Since that TURN server may act as a mediator for the traffic. It needs your local IP address to try (and mostly fail) to connect to it

The other peer, the web application and the TURN server don’t really need that access if you don’t care about the local network connectivity use case. If connecting a WebRTC session on the local network (inside a company office, home, etc) isn’t what you’re focused on, then you should be fine with not sharing the local IP address.

Also, if you are concerned about your privacy to the point of not wanting people to know your local IP address – or public IP address – then you wouldn’t want these IP addresses exposed either.

But how can the browser or the application know about that?

VPNs stopping WebRTC IP leaks

When using a VPN, what you are practically doing is making sure all traffic gets funneled through the VPN. There are many reasons for using a VPN and they all revolve around privacy and security – either of the user or of the corporation whose VPN is being used.

The VPN client intercepts all outgoing traffic from a device and routes it through the VPN server. VPNs also configure proxy servers for that purpose so that web traffic in general would go through that proxy and not directly to the destination – all that in order to hide the user itself or to monitor the user’s browsing history (do you see how all these technologies can be used either for anonymity or for the exact opposite of it?).

WebRTC poses a challenge for VPNs as well:

  • It uses multiple addresses and ports. Dynamically. So it is a bit harder to track and reroute
  • IP addresses are found inside the body of HTTP and WebSocket messages themselves and not only in the protocol headers. They can be quite hard to find in order to delete/replace
  • WebRTC uses UDP, which typically doesn’t get special treatment by web proxies (which tend to focus on HTTP and WebSocket traffic)
  • Did I mention it is rather new? And VPN vendors know little about it

To make all this go away, browsers have privacy policies built into them. And VPNs can modify these policies to accommodate for their needs – things like not allowing non-proxied UDP traffic to occur.
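In Chrome, these policies are exposed to extensions through `chrome.privacy.network.webRTCIPHandlingPolicy`. Here’s a hedged sketch of how a VPN extension might pick one of Chrome’s documented policy values – the `pickPolicy()` helper is my own illustration, not part of any API:

```javascript
// Chrome's documented webRTCIPHandlingPolicy values, most permissive to strictest.
const POLICIES = [
  'default',
  'default_public_and_private_interfaces',
  'default_public_interface_only',
  'disable_non_proxied_udp',
];

// Hypothetical helper: choose a policy based on what the VPN wants to enforce.
function pickPolicy({ hideLocalIp, forceProxiedUdp }) {
  if (forceProxiedUdp) return 'disable_non_proxied_udp'; // UDP only via a proxy
  if (hideLocalIp) return 'default_public_interface_only'; // no private addresses
  return 'default';
}

// In an actual extension (requires the "privacy" permission, browser-only):
// chrome.privacy.network.webRTCIPHandlingPolicy.set({
//   value: pickPolicy({ hideLocalIp: true, forceProxiedUdp: true }),
// });

console.log(pickPolicy({ hideLocalIp: true, forceProxiedUdp: false }));
// default_public_interface_only
```

Note that `disable_non_proxied_udp` is the one that keeps UDP flowing (via a proxy) instead of killing WebRTC media quality outright.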

How much should you care about WebRTC IP leaks?

That’s for you to decide.

As a user, I don’t care much about who knows my IP address. But I am not an example – I am also using Chrome and Google services. Along with a subscription to Office 365 and a Facebook account. Most of my life has already been given away to corporate America.

Here are a few rules of thumb I’d use if I were to decide if I care:

  • If you’re blocking JavaScript in your browser then you probably have nothing to worry about – WebRTC won’t work without it anyway
  • Assuming you’re not using Skype, Facebook Messenger, Whatsapp and others because you don’t want them to know or track you, then you should think twice about using WebRTC as well. And if you use it, make sure to “plug” that “IP leak” in WebRTC
  • Using a VPN? Then it means you don’t want your IP addresses publicly known. Make sure your VPN handles WebRTC properly as well
  • You replaced Google Search with DuckDuckGo in your browser? Or other search engines because they were said to be more privacy conscious? Then you might want to consider the WebRTC angle of it as well

In all other cases, just do nothing and feel free to continue using WebRTC “as is”. The majority of web users are doing just that as well.

Do you want privacy or privacy?

This one is tricky

You want to communicate with someone online. Without them knowing your private or public IP address directly. Because… well… dating. And anonymity. And harassment. And whatever.

To that end, you want the communication to be masked by a server. All of the traffic – signaling and media – gets routed through the intermediary server/service. So that you are masked from the other peer. But guess what – that means your private and public IP addresses are going to be known to the intermediary server/service.

You want to communicate with someone online. Without people, companies or governments eavesdropping on the conversation.

To that end, you want the communication to be peer-to-peer. No TURN servers or media servers as intermediaries. Which is great, but guess what – that means your private and public IP addresses are going to be known to the peer you are communicating with.

At some point, someone needs to know your IP addresses if you want and need to communicate. Which is exactly where we started from.

Oh, and complicated schemes à la Tor networking are nice, but don’t work that well with real time communications, where latency and bitrates are critical for media quality.

The developer’s angle of WebRTC IP leaks

We’ve seen the issue, the reasons for it and we’ve discussed the user’s angle here. But what about developers? What should they do about this?

WebRTC application developers

If you are a WebRTC application developer, then you should take into account that some of your users will be privacy conscious. That may include the way they think about their IP addresses.

Here are a few things for you to think about here:

  • Does your service offer P2P communications? (you probably need local IP addresses for that in the messages)
  • If your traffic flows solely via media servers, consider removing host candidates from the device side. They will be mostly useless anyway
  • You’re probably passing the IP addresses in SDP messages in your network. Are you storing them or logging them anywhere? For how long? In what format?
  • Test your service in various privacy-challenging environments:
    • Web proxies
    • Strict firewalls
    • VPNs of various types
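The suggestion above about removing host candidates can be done by munging the SDP before it leaves your signaling code. A minimal sketch, with made-up SDP lines for illustration:

```javascript
// Drop host candidates (the ones carrying local/private addresses) from an SDP
// before sending it over the signaling channel. Useful when all media flows
// through media servers anyway and local connectivity isn't needed.
function stripHostCandidates(sdp) {
  return sdp
    .split('\r\n')
    .filter((line) => !/^a=candidate:.* typ host(\s|$)/.test(line))
    .join('\r\n');
}

const offerSdp = [
  'v=0',
  'a=candidate:1 1 udp 2122260223 192.168.1.17 54321 typ host',
  'a=candidate:3 1 udp 41885439 198.51.100.4 3478 typ relay raddr 203.0.113.9 rport 54321',
].join('\r\n');

console.log(stripHostCandidates(offerSdp));
// Only the relay candidate (and the non-candidate lines) remain.
```

In a real application you’d apply this in your signaling layer, right after `createOffer`/`createAnswer` and before the SDP hits the network.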
VPN developers

If you are a VPN developer, you should know more about WebRTC, and put some effort into handling it.

Blocking WebRTC altogether won’t solve the problem – it will just aggravate users who need access to WebRTC-based applications (=almost all meeting apps).

Instead, you should make sure that part of your VPN client application takes care of the browser configurations to place them in a policy that fits your rules:

  • Make sure you route WebRTC traffic via the VPN. That includes both signaling (easy) and media (harder). I’d also check the data channel routing while at it if I were you
  • Handle UDP routing. Don’t just go for the simple TCP/TLS-only approach, as this will ruin the quality of experience for your users
  • Test against multiple different types of WebRTC applications out there. Don’t only look at Google Meet – there are plenty of others people are using
WebRTC leak test: The FAQ version

What is a WebRTC leak test?

A WebRTC leak test is a simple web application that tries to find your local IP address. This is used to check and prove that an innocent-looking web application with no special permissions from a user can gain access to such data.
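The core of such a leak test is collecting ICE candidates and checking whether any advertised address sits in a private (RFC 1918) range. The browser wiring – an RTCPeerConnection with an onicecandidate callback – only runs in a browser, but the check itself can be sketched as:

```javascript
// Check whether an address exposed in an ICE candidate is a private
// (RFC 1918) one - the kind a "WebRTC leak test" page is hunting for.
function isPrivateIp(address) {
  return /^10\./.test(address) ||
         /^192\.168\./.test(address) ||
         /^172\.(1[6-9]|2\d|3[01])\./.test(address);
}

// Browser-only wiring (sketch): feed candidate addresses into the check.
// pc.onicecandidate = (e) => {
//   if (e.candidate && isPrivateIp(e.candidate.address)) reportLeak(e.candidate.address);
// };

console.log(isPrivateIp('192.168.1.17')); // true - a "leak"
console.log(isPrivateIp('203.0.113.9'));  // false - public address
```

With mDNS obfuscation in place, modern browsers hand such a page a `.local` hostname instead, and this check finds nothing.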

Does WebRTC still leak IP?

Yes and no.
It really depends on where you look at this issue.
WebRTC needs IP addresses to communicate properly, so there’s no real leak. Applications written poorly may leak such IP addresses unintentionally. And a VPN application may be implemented poorly, failing to plug this “leak” for the privacy-conscious users who rely on it.

Can I block WebRTC leaks in Chrome?

Yes. By changing the privacy policy in Chrome. This is something that VPNs can do as well (and should do).

How severe is the WebRTC leak?

The WebRTC leak of IP addresses gives web applications the ability to know your private IP address. This has been a privacy issue in the past. Today, to gain access to that information, web applications must first ask the user for consent to access their microphone or camera, so this is less of an issue.

What is a good VPN to plug the WebRTC leak?

I can’t really recommend a good VPN to plug WebRTC leaks. This isn’t what I do, and frankly, I don’t believe in such tools plugging these leaks.
One rule of thumb I can give here: don’t go for a free VPN. If it is free, then you are the product, which means they sell your data – the exact privacy you are trying to protect.

The post What is the WebRTC leak test and should you be worried about it? appeared first on BlogGeek.me.

Fix Bad Lighting with JavaScript Webcam Exposure Controls (Sebastian Schmid)

webrtchacks - Wed, 06/01/2022 - 04:59

Step-by-step guide on how to fix bad webcam lighting in your WebRTC app with standard JavaScript APIs for camera exposure or natively with UVC drivers.

The post Fix Bad Lighting with JavaScript Webcam Exposure Controls (Sebastian Schmid) appeared first on webrtcHacks.

WebRTC reduced barriers and increased innovation in communications

bloggeek - Mon, 05/23/2022 - 13:00

What WebRTC did to VoIP was reduce the barrier of entry to new vendors and increased the level and domains of innovation.

[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]

WebRTC was an aha moment in the history of communications.

It did two simple things that were never before possible for “us” VoIP developers:

  1. Offered a built-in implementation in the browser (you mostly no longer needed to implement the low level media processing aspect of the client device)
  2. Provided a single, standardized API layer (up until then the standardized layer was the network protocol itself)

This in turn, brought with it the two aspects of WebRTC illustrated above:

  1. Reduced barrier of entry
    • You no longer needed to know in detail how the network protocols worked in order to develop something – there’s a standardized API that you can use that takes care of handling all that networking “stuff” somewhere (or at least needed to know a lot less to get started and to launch something)
    • The client side was mostly solved on the low level. You could focus on building your application and user experience a lot earlier in the game
  2. Increased innovation
    • Now that you’re not expected to focus so much on the low level, you can work more on the user experience, which means more time to innovate
    • And since you don’t need to know all of that networking stuff so intimately, you no longer need to be “indoctrinated” as a VoIP developer. Which means developers came from all software domains, with their own ideas on how communications should work, forcing greater innovation than ever before

For many years I’ve been using this slide to explain why WebRTC is so vastly different than what came before it:

  • It is free since the code is open source and the implementation is already embedded in all modern browsers. This means everyone can make use of it → reduced barrier of entry
  • The focus of it is web developers and not VoIP developers. There are more web developers than VoIP ones, and they come with different worldviews → increased innovation

That said, truly innovating, productizing and scaling WebRTC applications require a bit more of an investment and a lot more in understanding and truly grokking WebRTC. Especially since WebRTC is… well… it is web and VoIP while at the same time it isn’t exactly web and it isn’t exactly VoIP:

This means that you need to understand and be proficient with both VoIP development (to some extent) and with web development (to some extent).

Looking to learn WebRTC? Here are some guidelines of how to get started with learning WebRTC.

The post WebRTC reduced barriers and increased innovation in communications appeared first on BlogGeek.me.

FIDO Alliance and the end of 2FA revenue to CPaaS vendors

bloggeek - Mon, 05/16/2022 - 13:00

With FIDO coming to replace passwords in applications, CPaaS vendors are likely to decline in 2FA revenues.

2FA revenue has always lived on the premise that passwords are broken. I’ve written about this back in 2017:

Companies are using SMS for three types of services these days:

1. Security — either through two-factor authentication (2FA), for signing in to services; or one-time password (OTP), which replaces the need to remember a password for various apps

2. Notifications for services — these would be notifications that you care about or that offer you information, like that request for feedback or maybe that birthday coupon

3. Pure spam — businesses just send you their unsolicited crap trying to get you to sign up for their services

Spam is spam. Notifications are moving towards conversations on social networks. And the security SMS messages are going to be replaced by FIDO. Here’s where we’re headed.

Let’s take this step by step.

Passwords and the FIDO Alliance

Passwords are the bane of our modern existence. A necessary evil.

To do anything meaningful online (besides reading this superb article), you need to login or identify yourself against the service. Usually, this is done by a username (email or an identity number most likely) and a password. That password part is a challenge:

  • It needs to be something you remember (=know)
  • But you can’t use it on more than one site. If you do, and that site is hacked, then your data on other sites is going to be exposed
  • And that password needs to be non-simple. So it can’t be easily guessed
  • So 8 characters or more. Upper and lower case. A digit or two or three please. Maybe a special character to boot
  • Oh – and please change it every 3 or 6 months because… security

I use a password manager to handle my online life. My wife uses the “forgot my password” link all the time to get the same results.

It seems that whatever was tried in the passwords industry has failed in one way or another. Getting people house trained on good password practices is just too damn hard and bound to failure (just like trying to explain to people not to throw facial tissue down the toilet).

Experts have long been pushing for a security model that authenticates a user with multiple “things”:

  1. Something you know (=password)
  2. Something you own (=smartphone or security key)
  3. Something you are (=biometrics)

Smartphones today are something you own, and they offer something you are by having fingerprint ID and face ID solutions baked into them. The last remaining piece is the password – the something you know.

Enter FIDO.

FIDO stands for Fast IDentity Online.

Here’s the main marketing spiel of the FIDO Alliance:

The FIDO Alliance seems to have more members than it has views on that YouTube video (seriously).

By their own words:

The FIDO Alliance is working to change the nature of authentication with open standards that are more secure than passwords and SMS OTPs, simpler for consumers to use, and easier for service providers to deploy and manage.

So:

  • Open standards
  • More secure than passwords and SMS OTPs
  • Simpler for consumers to use
  • Easier to deploy and manage

What more can you ask for?

Well… for this standard to succeed.

And here is what brought me to write this article. The recent announcement from earlier this month – Apple, Google and Microsoft all committing to the FIDO standard. They are already part of FIDO, but now it is about offering easier mechanisms to remove the need for a password altogether.

If you are reading this, then you are doing that in front of an Apple device (iPhone, iPad or macOS), a Google one (Android or Chrome OS) or a Microsoft one (Windows). There are stragglers using Linux or others, but these are tech-savvy enough to use passwords anyway.

These devices are more and more active as both something you own and something you are. My two recent laptops offer fingerprint biometric identification and most (all?) smartphones today offer the same or better approaches as well.

I have long waited for Google and Apple to open up their authentication mechanisms in Android and iOS, letting developers use them the same way end users use them to access Google and Apple services – when I log in to any Google-connected site anywhere, my smartphone asks me if that was me.

And now it seems to be here. From the press release itself:

Today’s announcement extends these platform implementations to give users two new capabilities for more seamless and secure passwordless sign-ins: 

1. Allow users to automatically access their FIDO sign-in credentials (referred to by some as a “passkey”) on many of their devices, even new ones, without having to re-enroll every account. 

2. Enable users to use FIDO authentication on their mobile device to sign in to an app or website on a nearby device, regardless of the OS platform or browser they are running.

So… no need for passwords. And no need for 2FA. Or OTP.

FIDO is going to end the farce of using 2FA and OTP technologies.
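For developers, this passwordless flow surfaces in browsers as the WebAuthn API, invoked via `navigator.credentials.create()`. A minimal sketch of the registration options – the relying party, user details and zeroed challenge below are placeholder values, not something you’d ship:

```javascript
// WebAuthn registration options (PublicKeyCredentialCreationOptions).
// In a real flow the challenge is random bytes generated by your server.
const publicKeyOptions = {
  challenge: new Uint8Array(32),            // placeholder; server-generated in practice
  rp: { name: 'Example Service' },          // the relying party (your site) - made up
  user: {
    id: new Uint8Array(16),                 // opaque server-side user handle - made up
    name: 'user@example.com',
    displayName: 'Example User',
  },
  pubKeyCredParams: [{ type: 'public-key', alg: -7 }], // -7 = ES256
  authenticatorSelection: { userVerification: 'required' }, // biometrics / PIN
};

// Browser-only call; the authenticator (phone, security key) does the rest:
// const credential = await navigator.credentials.create({ publicKey: publicKeyOptions });

console.log(publicKeyOptions.pubKeyCredParams[0].alg); // -7
```

No password field anywhere in that flow – the something you own and something you are replace the something you know.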

2FA: a CPaaS milking cow

2FA stands for Two Factor Authentication while OTP stands for One Time Password.

With 2FA, you enter your credentials and then receive an SMS or email (or more recently Whatsapp message) with a number. You have to paste that number on the web page or app to login. This adds the something you own part to the security mechanism.

OTP is used to remove the password altogether. Tell us your email and we will send you a one time password over SMS (or email), usually a few digits, and you use that to login for just this once.

2FA, OTP… the ugly truth is that they are nagging as hell for everyone. Not only users but also application developers. The devil is always in the details with these things:

  • How do you send an SMS message?
  • What happens if the SMS or email isn’t received? Is there a retry mechanism?
  • Can the user complain if it doesn’t work to get things resolved?
  • Who takes care of internationalization of these messages?

The list goes on. So CPaaS vendors have gone ahead and incorporated 2FA-specific solutions into their bag of services. Twilio even acquired Authy, a customer of theirs, back in 2015 just to have that in its offerings at the time.

The great thing about 2FA (for CPaaS vendors), is that the more people engage with the digital world, the more they will end up with a 2FA or OTP SMS message. And each such message is a minor goldmine: A single SMS on Twilio in the US costs $0.0075 to send. A 2FA transaction will cost an additional $0.09 on top of it.
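Using the per-message numbers above, the unit economics are easy to sketch (prices as quoted at the time of writing; Twilio’s actual pricing changes over time):

```javascript
// Rough cost of a single 2FA verification on Twilio (US), per the numbers above:
// $0.0075 for the SMS itself plus $0.09 for the verification service on top.
const SMS_COST = 0.0075;
const VERIFY_FEE = 0.09;

function monthly2faCost(verificationsPerMonth) {
  return verificationsPerMonth * (SMS_COST + VERIFY_FEE);
}

console.log(monthly2faCost(1));         // ≈ $0.0975 per verification
console.log(monthly2faCost(1_000_000)); // ≈ $97,500/month - the "minor goldmine"
```

Which is exactly why FIDO making this verification unnecessary hits CPaaS revenue where it hurts.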

Yes. 2FA services bring great value. And they are tricky to implement and maintain properly at scale. So the price can be explained. But… what if we didn’t really need 2FA at all?

The death of 2FA

Putting one and one together:

Apple, Google and Microsoft committing to FIDO and banishing passwords by making their devices take care of something you know, something you own AND something you are means that users will not need to identify themselves in front of services using passwords AND they won’t be needing OTP or 2FA either.

The solution ends up being simpler for the user AND simpler for the service provider.

Win Win.

Unless you are a CPaaS vendor who makes revenue from 2FA. Then it is pure loss.

What alternatives can CPaaS vendors offer?

As a first step, the “migration” from “legacy” 2FA and OTP towards Apple/Google’s new and upcoming FIDO solution. Maybe a unified API on top of Apple and Google, but that’s a stretch. I can’t see such APIs costing $0.09 per authentication. Especially if Apple and Google do a good job at the developer tooling level for this.

* I removed Microsoft closer to the end here because they are less important for this to succeed. They are significant if this does succeed in making it even simpler on laptops, so one won’t have to reach for their phone to login when on a laptop.

The future of CPaaS

5 years ago, back in that 2017 article, I ended it with these words:

Goodbye SMS, It’s Time for Us to Move On

Don’t be fooled by the growth of 2FA and application-to-person (A2P) type messages over SMS. This will have a short lifespan of a few years. But five to 10 years from now? It will just be a service sitting next to my imaginary fax machine.

We’re 5 years in and the replacements of SMS are here already.

  • Social truly is starting to replace SMS notifications with long lived conversations, augmented with the surge of chatbots everywhere
  • 2FA and OTP are now threatened by FIDO to be replaced simply by the fact that you own a smartphone

All that revenue coming to CPaaS from SMS is going to go elsewhere. Social omnichannel introduced by CPaaS vendors will replace that first chunk of revenue, but what will replace the 2FA and OTP? Can CPaaS vendors rely on FIDO and build their own business logic on top and around it for their customers?

It seems to me revenue will need to be found elsewhere.

Interested in learning more about the future of CPaaS? Check out my ebook on the topic (relevant today as it was at the time of writing it).

Download my CPaaS in 2020 ebook

The post FIDO Alliance and the end of 2FA revenue to CPaaS vendors appeared first on BlogGeek.me.

The WebRTC Bitcode Soap Opera (Saúl Ibarra Corretgé)

webrtchacks - Tue, 04/12/2022 - 14:12

Saúl Ibarra Corretgé of Jitsi walks through his epic struggle getting Apple iOS bitcode building with WebRTC for his Apple Watch app.

The post The WebRTC Bitcode Soap Opera (Saúl Ibarra Corretgé) appeared first on webrtcHacks.

WebRTC video calling table stakes

bloggeek - Mon, 04/04/2022 - 12:30

What was nice to have is now becoming mandatory in WebRTC video calling applications. This includes background blurring, but also a lot of other features as well.

Do you remember that time not long ago when 16 participants on a call was the highest number product managers asked for? Well… we’re not there anymore. In many cases, the number has grown. First to 49. Then to a lot more, with nuances on what exactly it means to have larger calls. We now see anywhere between 100 and 10,000 participants considered a “meeting”.

I’ve been talking and mentioning table stakes for quite some time – during my workshops, in my messages on LinkedIn, in WebRTC Insights. It was time I sat down to write it on my blog.

WebRTC table stakes

This isn’t really about WebRTC, but rather what users now expect from WebRTC applications. These expectations are in many cases table stakes – features that are almost mandatory in order to be even considered as a relevant vendor in the selection process.

What you’ll see here is almost the new shopping list. Since users are different, markets are different, scenarios are different and requirements vary – you may not need all of them in your application. That said, I suggest you take a good look at them, decide which ones you need tomorrow, which you don’t need and which you have to get done yesterday.

Background blurring/replacement

Obvious. I have a background replacement. I never use it in my own calls. Because… well… I like my background. Or more accurately – I like showing my environment to people. It gives context and I think makes me more human.

This isn’t to say people shouldn’t use background replacement or that I’ll hate them for doing that – just that for me, and my background – I like keeping the original.

Others, though, want to replace their background. Sometimes because they don’t have a proper place where the background isn’t cluttered or “noisy”. Or because they just want to have fun with it.

Whatever the reason is, background blurring and replacement are now table stakes – if your app doesn’t have it, then the app that does in your market will be more interesting and relevant to the buyers.

Here’s how I see the development of the requirements here:

  • Figure out where a user is. Here, you can even implement an auto zoom capability (many skip this, though this can be quite useful as well)
  • Then focus on background blurring. It is the most tolerant of the alternatives
  • Move on to background replacement. Replace the background with a static image
  • Go for video backgrounds, where the user can replace the background with something moving
  • Think of “teleporting” the user after you’ve cut him away from his background to place him directly on a slide deck or in a virtual environment
Video lighting

If I recall correctly, Google Meet started with this feature, and since then it has been cropping up in other meeting solutions. We all use webcams, but none of us has good lighting. It might be a window behind (or in my case to the side), the weather out the window, the hour of the day, or just poor lighting in the room.

While this can be fixed, it isn’t. Much like the cluttered room, the understanding is that humans are lazy or just not up to the task of understanding what to do to improve video lighting on their own. And just like background removal, we can employ machine learning to improve lighting on a video stream.

Noise suppression/cancellation

I started using this stock image when I started doing virtual workshops. It is how I like to think of my nice neighbor (truth be told – he really is nice). It just seems that every time I sit down for an important meeting, he’d be on one of his renovation sprees.

The environment in which we’re conducting our calls is “polluted” with sounds. My mornings are full of lawn mower noises from the park below my apartment building. The rest of my day is filled with noise from the other family members in my apartment and from my friendly neighbor. For others, it is the classic dog barking and traffic noises.

Same as with video, since we’re now doing these sessions from everywhere at any time, it is becoming more important than ever to have this capability built into the service used.

Some services today offer the ability to suppress and cancel different types of noises. You don’t have the control over what to suppress, but rather get an on/off switch.

Four important things here:

  1. What noises are suppressed isn’t obvious. Each vendor picks and chooses what seems fit to his use case
  2. This can be implemented either on the sender side or on the receiver side or both
  3. It can be implemented on the device or in the cloud. Google Meet for example does that in the cloud while many others do it on the device
  4. Unlike the video features we’ve seen before, here as the sender, you can’t really hear what’s being suppressed on your end of the call…

And last but not least, this is a kind of a feature that can also be implemented directly by the microphone, CPU or operating system. Apple tried that recently in iOS and then reverted back.
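Browsers also expose a basic on/off switch of their own through getUserMedia() audio constraints. These toggle the browser’s built-in audio processing – separate from the ML-based suppression services discussed above:

```javascript
// Standard MediaTrackConstraints for the browser's built-in audio processing.
// These control the browser's own algorithms; vendor ML-based denoisers
// (device-side or cloud, as discussed above) are layered on separately.
const audioConstraints = {
  audio: {
    noiseSuppression: true,
    echoCancellation: true,
    autoGainControl: true,
  },
};

// Browser-only call:
// const stream = await navigator.mediaDevices.getUserMedia(audioConstraints);

console.log(Object.keys(audioConstraints.audio).join(', '));
```

These defaults are a good baseline; the table-stakes features in this section go well beyond what they can do.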

Speech to text

Up until now, we’ve discussed capabilities that necessitated media processing and machine learning. Speech to text is different.

For several years now we’ve been hammered with speech to text and text to speech. The discussion was usually around the accuracy of the algorithms for speech to text and the speed at which they did their work.

It now seems that many services are starting to offer speech to text and its derivatives baked directly into their user experience. There are several benefits of investing in this direction:

  • Switching to text enables us to process the meeting for its meaning. Usually in the form of extracting meeting minutes and action items
  • Speech to text means we can get a transcript of a meeting, making it searchable
  • Accessibility – doing so in real time means we can transcribe the meeting for the participants, helping them deal with the noisy environments of other participants or simply with understanding accents – my company, testRTC, was acquired by Spearline, an Irish vendor, and I am just getting used to understanding their accent
  • This is a step necessary for translation

The challenges with speech to text are, first, how to pass the media stream to the speech to text algorithm – not a trivial task in many cases – and then picking a service that will yield the desired results.

WebRTC meeting size

It used to be 9 tiles. Then when the pandemic hit, everyone scrambled to do 49 gallery view. I think that requirement has become less of an issue, while at the same time we see a push towards a greater number of participants in sessions.

How does that work exactly?

  • The assumption that everyone is seen, needs to be seen or wants to be seen is not realistic in many scenarios
  • Meetings are mostly asymmetric in nature. Not everyone has the same level of participation, and oftentimes, we aren’t aware of this in advance
  • Quarantines and later remote work got us to the point where a lot more media streams join a meeting:

If in the past we had a few meeting rooms joining in to a meeting, with a few people seated in each room, now most of the time, we will have these people join in remotely from different locations. The number of people stayed the same, yet the number of media streams grew.

We’re also looking to get into more complex scenarios, such as large scale virtual events and webinars. And we want to make these more interactive. This pushes the boundary of a meeting size from hundreds of participants to thousands of participants.

This requirement means we need to put more effort into implementing optimizations in our WebRTC architecture and to employ capabilities that offer greater flexibility from our media servers and client code.

Getting there requires WebAssembly and constant optimization

These new requirements and capabilities are becoming table stakes. Implementing them has its own set of nuances, and each of these features also eats into our CPU and memory budget.

It used to be that we focused on the new shiny toys – adding cool new features and making them available on the latest and greatest devices. Now we need to push these capabilities to ever lower-performing devices:

  • Older PCs and laptops, to deal with the majority of the population and not only early adopters and tech savvy users
  • A plethora of peripherals – headsets, mics, speakers and webcams – each with its own quirks and proprietary features (echo-canceling, latency-inducing Bluetooth headsets, anyone?)

So we now have less capable devices that need more features to work well, requiring us to reduce our CPU requirements to serve them. And did I mention that most of these new table stakes need machine learning?
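One pragmatic way to serve these devices is to gate the expensive ML-backed features on rough capability signals such as `navigator.hardwareConcurrency` and `navigator.deviceMemory`. A sketch of such a policy; the thresholds and feature names here are assumptions and would need tuning against real measurements.

```javascript
// Decide which processing features to enable on a given device.
// device: { cores, deviceMemoryGb } – e.g. taken from
// navigator.hardwareConcurrency and navigator.deviceMemory in the browser.
function pickFeatures(device) {
  const features = ["echoCancellation"]; // always on: cheapest, most important
  if (device.cores >= 4) features.push("noiseSuppression");
  if (device.cores >= 8 && device.deviceMemoryGb >= 8) {
    features.push("backgroundBlur"); // heaviest: reserve for strong machines
  }
  return features;
}
```

In a real product this decision would also feed off runtime measurements (frame drops, CPU pressure) rather than static device hints alone.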

The main tool available to us for all this is WebAssembly on the browser side. It enables us to run code faster in the browser and implement algorithms that would be impossible to achieve in JavaScript.
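Before wiring a WebAssembly module into the media pipeline, it is worth probing that the runtime actually accepts it, and keeping a plain JavaScript fallback for runtimes that don't. A minimal sketch: the 8 bytes below are the wasm preamble (magic number plus version), which forms a valid empty module, standing in here for a real audio-processing module.

```javascript
// The wasm preamble: a valid, empty module used as a placeholder.
const EMPTY_WASM = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);

// Check that the runtime supports WebAssembly and accepts these bytes.
function canUseWasm(moduleBytes) {
  return (
    typeof WebAssembly === "object" &&
    typeof WebAssembly.validate === "function" &&
    WebAssembly.validate(moduleBytes)
  );
}

// Prefer the wasm implementation, fall back to plain JavaScript.
function pickAudioPipeline(moduleBytes) {
  return canUseWasm(moduleBytes) ? "wasm" : "js";
}
```

A real pipeline would then `WebAssembly.instantiate` the module, often inside an AudioWorklet, so the processing runs off the main thread.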

It also means we need to constantly optimize the implementation, improving performance to make room for more of these algorithms to run.

10 years into WebRTC and 2 years into the pandemic, we’re only just scratching the surface of what is needed. How are you planning to deal with these new table stakes?

The post WebRTC video calling table stakes appeared first on BlogGeek.me.
