I am now CPO (Chief Product Officer) at Spearline. This means that there are going to be some changes here at BlogGeek.me. Here’s what you can expect.
Me, somewhere in Ireland, 3 weeks ago

Almost a year ago, testRTC, the company I co-founded, got acquired by Spearline. During that time, I got to know the great team there and the huge opportunity that Spearline has.
Since the above feels corny and a cliché to me as I write it, I’ll stop here.
To make a long story short:
First off, I am excited. Very.
It has been some time since I had a team to work with as their direct manager. It will also be the first time I get to manage product managers.
It also means that I am going to be investing a lot more of my time and attention at Spearline. Which is great, as I really love interacting with the people there already (I wouldn’t have accepted the role otherwise).
For my consulting business, it means that I will be shrinking it down considerably. I won’t be doing much consulting moving forward. It is somewhat sad, as I really loved helping people and hearing their stories and challenges. Hopefully, I will still get to do it in other ways.
What is going to stay are all the initiatives that have taken place around BlogGeek.me over the years:
All in all, it is time to continue and grow, and in a direction I never expected to find myself in again.
The post CPO at Spearline and what it means to BlogGeek.me appeared first on BlogGeek.me.
An updated infographic of the WebRTC Developer Tools Landscape for 2022, along with my Choosing a WebRTC API Platform report.
This week I took the time to update my WebRTC Developer Tools Landscape. I do this every time I update my report, just to make sure it is all aligned and… up to date.
A few quick thoughts I had while doing this:
The WebRTC Developer Tools Landscape will never be complete. People always get pissed off at me when I publish it, not understanding why their company isn’t there. My answer to this is a simple one – because I don’t know what it is that you are doing.
They then get even angrier. What they should do at that point is ask themselves why I don’t know them enough. I have lived and breathed WebRTC since it was first announced. So if I don’t know their company and product, how do they expect others to learn about them?
I don’t think I am unique or special. It is just that if you want to be in a landscape infographic that covers WebRTC, you should make sure that the people who deal with WebRTC and help others figure out what tools to use actually know what it is that you’re doing.
What about that report?

The report has been going strong for some 8 years now, with an update taking place every 8-12 months. It has been 12 months, so it definitely needed an update.
2 vendors were removed from the report and 3 new vendors were added.
I’ve also decided to “upgrade” the term Embed/Embeddable/Embedded to Prebuilt. The reason behind it is the progress and popularity of these types of solutions in the video API space. Most CPaaS vendors today that offer a video API are also offering some form of higher level abstraction in the form of a ready-made application – be it a full reference app, a UIKit, or a Prebuilt component.
The report will be published on 22 September. If you want to purchase it, there’s a 20% discount available at the moment – from now and until its publication.
Check out more about my Choosing a WebRTC API Platform report.
The post The WebRTC Developer Tools Landscape 2022 (+report) appeared first on BlogGeek.me.
With WebRTC, we focus on lossy media compression codecs. These don’t preserve all the data they compress, simply because we won’t notice what’s missing anyway.
[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]
The purpose of codecs – voice and video – is to compress and decompress the media that needs to be sent over the network. This was true before WebRTC and will stay true after WebRTC.
Generally speaking, there are two types of compression:
The two types of codecs

Audio and video tend to hold a lot of data. And since we want to send it over the network, we’d rather not waste network resources. So what do these codecs do? They try to remove anything and everything that they can which our eyes and ears won’t notice much.
On a conceptual level, lossy compression has this virtual dial. You move the dial to decide how much you are willing to lose out of the data. The encoder will do its best to lose things you wouldn’t notice, but at some point, you’ll notice.
This flexibility in setting the compression level is also used to manage the bitrate. By estimating the available bandwidth, the encoder can be instructed to turn the dial up or down, generating higher or lower compression to meet the estimated available bandwidth.
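As a toy illustration of that dial (this is not how a real codec works, just the underlying idea), consider quantizing samples: a coarser quantization step loses more detail but needs fewer bits to describe.

```javascript
// A toy illustration of the lossy compression "dial": quantization.
// Coarser steps lose more detail but describe the signal with fewer values.
function quantize(samples, step) {
  return samples.map((s) => Math.round(s / step) * step);
}

// Worst-case difference between the original and the "compressed" signal
function maxError(original, compressed) {
  return Math.max(...original.map((s, i) => Math.abs(s - compressed[i])));
}

const samples = [0.12, -0.37, 0.88, 0.05, -0.61];

const fine = quantize(samples, 0.01);   // dial turned down: barely noticeable loss
const coarse = quantize(samples, 0.25); // dial turned up: bigger loss, fewer distinct values

console.log(maxError(samples, fine));   // tiny
console.log(maxError(samples, coarse)); // much larger
```

The encoder’s job is to keep the dial as low as the bandwidth estimate allows, and to lose the parts you’d notice last.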
Looking to learn more about video codecs? Go ahead and read my WebRTC video basics article
The post Media compression is all about purposefully losing what people won’t be missing appeared first on BlogGeek.me.
WebRTC open source is a mess. It needs to grow out of its youth and become serious business – or gain serious backing.
This article has been written along with Philipp Hancke. We cooperate on many things – WebRTC courses (new one coming up soon) and WebRTC Insights to name a few.
—
WebRTC is free. Every modern browser incorporates WebRTC today. And the base code that runs in these browsers is open sourced and under a permissive BSD license. In some ways, free and open source were mixed in a slightly toxic combination. One in which developers assume that everything WebRTC should be free.
The end result? The sorry state in which we find ourselves today, 11 years after the announcement of WebRTC. What we’re going to do in this article, is detail the state of the WebRTC open source ecosystem, and why we feel a change is necessary to ensure the healthy growth of WebRTC for years to come.
Table of contents

We’ll start with the most important thing you need to know:
Open Source != Free
Let’s take a quick step back before we dive into it though.
What’s open source exactly?

An open source project is a piece of source code that is publicly available for anyone under one of the many open source licenses out there. Someone, or a group of people from the same company or from disparate places, have “banded together” and created a piece of software that does something. They put the code of that software out in the open and slap a license on top of it. That ends up being an open source project.
Open source isn’t free. There is a legal binding associated with using open source, but that isn’t what we’re interested in here. What we’re interested in is the fact that using open source doesn’t mean you pay nothing to no one, or that you get *something* with no strings attached.
Why would anyone end up doing this for free? Well… that brings us to business models.
Open source business models

There are different types of open source licenses. Each with its own set of rules, and some more permissive than others, making them business-friendly. Sometimes the license type itself is used as a business model, simply by offering a dual license mode where a non-permissive open source license is available freely and a commercial one is available in parallel.
In other cases, the business model of the open source project revolves around offering support, maintenance and customization of that project. You get the code for free, but if you want help with it – you can pay!
Sometimes, the business model is built around additional components (this is where you will see things like community edition and enterprise edition popping up as options on the project’s website). Things such as scripts for scaling the system, monitoring modules or other pieces of operational and functional components are kept as commercial products. The open source part brings companies in and raises popularity and awareness of the project, while the commercial one is the reason for doing it all – how the developers behind the project bring food to the table and become rich.
In recent years, you see business models revolving around managed services. The database is open source and free, but if you let us host it for you and pay for it, we’ll take care of all your maintenance and scaling headaches.
And some believe it is really and truly free. Troy Hunt wrote about it recently (it is a really good post – go read it):
“… there is a suggestion that those of us who create software and services must somehow be in it for the money”
To that I say – yes!
At the end of the day, delving into open source is all about the money.
Why?
The moment the open source project you are developing is meaningful to two or more people, or even a single company, there are monetary benefits to be gleaned. We’d venture that if you aren’t making anything from these benefits (even minor ones), then the open source project has no real future. It gets to a point where it should either grow up or wither and die.
A few more words about open source projects

Just a few things before we start our journey to the WebRTC open source realm:
A common mistake by “noobs” is to think that WebRTC is a solution that requires no coding. Since browsers already implement it, there’s nothing left to do. This couldn’t be further from the truth.
WebRTC as a protocol requires a set of moving parts – clients and servers – that together enable the rich set of communication solutions we’re seeing out there.
The diagram above, taken from the Advanced WebRTC Architecture course, shows the various components necessary in a typical WebRTC application:
For each and every component here, you can find one or more open source projects that you can use to implement it. Some are better than others. Many are long forgotten and decaying. A few are pure gold.
Let’s dive into each of these components to see what’s available and in what state we find the open source community for them.
WebRTC open source client libraries

First and foremost, we have the WebRTC open source client libraries. These are implementations of the WebRTC protocol from a user/device/client perspective. Consider these your low level API for WebRTC.
There used to be only a single one – libwebrtc – but with time, more were introduced and took their place in the ecosystem. Which is why we will start with libwebrtc:
libwebrtc

THE main open source project of WebRTC is libwebrtc.
Why?
Practically speaking – libwebrtc is everywhere WebRTC is.
Here are a few things you need to know about this library:
Looking at the contributions over time, Google is doing more than 90% of the work:
The amount of changes has been decreasing year-over-year after peaking in early 2016. During the pandemic we even reached a low point with less than 200 commits per month on average. Even with these reduced numbers, libwebrtc is the largest and most frequently updated project in the open source WebRTC ecosystem.
The number of external contributions is fairly low, below 10%. This doesn’t bode well for the future of libwebrtc as the industry’s standard library of WebRTC. It would be better if Google opened up a bit more for contributions that improve WebRTC or those that make it easier to use by others.
This leads us to the business model aspect of libwebrtc
Money time
What if one decides to use libwebrtc and integrate it directly into their own application?
That said, for the most part, and in most situations, libwebrtc is the best alternative – that’s because it follows the exact implementations you will be bumping into in web browsers. It will always be the most up to date one available.
A side note – libwebrtc is implemented in C++. Why is this relevant? Pion
Pion

Pion is a Go implementation of the WebRTC APIs. Sean DuBois is the heart and soul behind the Pion project and his enthusiasm about it is infectious.
Putting on Tsahi’s cynic hat, Pion’s success can be attributed a lot to it being written in Go. And that’s simply because many developers would rather use Go (modern, new, hip) and not touch C++.
Whatever the reason is, Pion has grown quite nicely since its inception and is now quite a popular WebRTC open source project. It is used in embedded devices, cloud based video rendering and recently even SFU and other media server implementations.
Money time
What if one decides to use Pion and integrate it directly into their own application?
There are other implementations of WebRTC in other languages.
The most notable ones:
There are probably others, less known.
We won’t be doing any Money time section here. These projects are still too small. We haven’t seen too many services using them in production and at scale.
GStreamer

GStreamer is an open source media framework that is older than WebRTC. It is used in many applications and services that use WebRTC, even without using its WebRTC capabilities (mainly since these were added later to GStreamer).
We see GStreamer used by vendors when they need to transform video content in real-time. Things like:
Since WebRTC was added as another output type in GStreamer, developers can use it directly as a broadcasting entity – one that doesn’t consume data but rather generates it.
GStreamer is a community effort and written in C. While it is used in many applications (commercial and otherwise), it lacks a robust commercial model. What does that mean?
Money time
What if one decides to use GStreamer and integrate it directly into their own application?
Next we have open source TURN servers. And here, life is “simple”. We’re mostly talking about coturn. There are a few other alternatives, but coturn is by far the most popular TURN server today (open source or otherwise).
In many ways, we don’t need more than that, because TURN is simple and a commodity when it comes to the code implementation itself (up to a point, as Cloudflare is or was trying to change that with their managed service).
But, and there’s always a but in these things, coturn needs to get updated and improved as well. Here’s a recent discussion posted as an issue on coturn’s github repo:
Is the project dead?
Read the whole thread there. It is interesting.
The maintainers of coturn are burned out, or just don’t have time for it (=they have a day job). For such a popular project, the end result was a volunteer or two from the industry picking up the torch and doing this in parallel to their own day job.
Which leads us to:
Money time
What if one decides to use coturn and integrate it directly into their own application?
Signaling servers are a different beast. WebRTC doesn’t define them exactly, but they are needed to pass the SDP messages and other signals between participants. There are several alternatives here when it comes to open source signaling solutions for WebRTC.
It should be noted that many of the signaling server alternatives in WebRTC offer purely peer communication capabilities, without the ability to interact with media servers. Some signaling servers will also process audio and video streams. How much they focus on the media side versus the signaling side will decide if we will be treating them here as signaling servers or media servers – it all boils down to their own focus and to the functions they end up offering.
Signaling requires two components – a signaling server and a client side library (usually lightweight, but not always).
We will start with the standardized ones – SIP & XMPP.
SIP and XMPPSIP and XMPP preceded WebRTC by a decade or so. They have their own ecosystem of open source projects, vendors and developers. They act as mature and scalable signaling servers, sometimes with extensions to support WebRTC-specific use-cases like creating authentication tokens for TURN servers.
We will not spend time explaining the alternatives here because of this.
Here, it is worthwhile mentioning MQTT as well. Facebook is known to be using it (at least in the past – not sure about today) in their Facebook Messenger for signaling.
PeerJS

PeerJS has been around for almost as long as WebRTC itself. For an extended period of that time, the codebase was not maintained or updated to fit what browsers supported. Today, it seems to be maintained again.
The project seems to focus on a monolithic single server deployment, without any thought about horizontal scaling. For most, this should be enough.
Throughout the years, PeerJS has changed hands and maintainers, including earlier this year:
Without much ado, let’s move to the beef of it:
Money time
What if one decides to use PeerJS and integrate it directly into their own application?
simple-peer

Simple-Peer has been driven by Feross and his name recognition in the early days. It is another one of those “pure WebRTC” libraries that focuses solely on peer-to-peer. If that fits your use-case, great – it is mature and “done”. Most of the time, though, your use-case will evolve over time.
It has received only a few maintenance commits in 2022 and not many more in 2021. The same considerations as for PeerJS apply to simple-peer. If you need to pick between the two… go for simple-peer; the code is a bit more idiomatic JavaScript.
Money time
Just go read PeerJS – same rules apply here as well.
Matrix

Matrix is “an open network for secure, decentralized communication”. There’s also an open standard to it as well as a commercial vendor behind it (Element).
Matrix is trying to fix SIP and XMPP by being newer and more modern. But the main benefit of Matrix is that it comes as client and server along with implementations that are close to what Slack does – network and UI included. It is also built with scale in mind, with a decentralized architecture and implementation.
Here we’re a bit unaligned… Tsahi thinks Matrix is a good alternative and choice while Philipp is… less thrilled. Their WebRTC story is a bit convoluted for some, meandering from full mesh to Jitsi to a “native SFU” only recently.
So… Matrix has a company behind it. But they have their own focus (messaging service competing with Slack with privacy in mind).
Money time
What if one decides to use Matrix and integrate it directly into their own application?
At the time of writing, there are 26,121 repositories on github mentioning WebRTC. By the time you read this, that number will have grown some.
Not many are sticking out too much, and in that jumble, it is hard to figure out which projects are right for you. Especially if what you need needs to last. And doubly so if you’re looking for something that has decent enough support and a thriving community around it.
Open source SFUs and media servers in WebRTC

Another set of important open source WebRTC components are media servers and SFUs.
While signaling servers deal with the peer communication needed to set up the actual sessions, media servers are focused on the channels – the actual data that we want to send – audio and video streams, offering realtime video streaming and processing. Whenever you need group sessions, broadcasts or recordings (and you will, assuming you’d like video calls or video conferences incorporated in your application), you will end up with media servers.
Here’s where we are market-wise:
Janus, Jitsi, mediasoup & Pion

I’ve written about these projects at length in my 2022 WebRTC trends article. Here’s a visual refresher of the relevant part of it:
Janus, Jitsi, mediasoup and Pion are all useful and popular in commercial solutions. Let’s try to analyze them with the same prism we did for the other WebRTC open source projects here.
Janus

Jitsi

Jitsi can be considered a platform of its own:
Money time
–
To be clear – in all cases above, getting vendors to help you out who aren’t maintaining the specific media server codebase means results are going to be variable when it comes to the quality of the implementation. In other words, it is hard to figure out who to work with.
The demise of Kurento

The Kurento Media Server is dead. So much so that even the guys behind it went to build OpenVidu (below) and then made OpenVidu work on top of mediasoup.
Don’t touch it with a long stick.
It has been dead for years and from time to time people still try using it. Go figure.
Higher layers of abstraction

A higher layer abstraction open source project strives to become a platform of sorts. Their main focus in the WebRTC ecosystem is to offer a layer of tooling on top of open source media servers. The two most notable ones are probably OpenVidu and LiveKit.
OpenVidu

OpenVidu is a kind of an abstraction layer to implement a room service, UI included.
It originates from the team left behind from the Kurento acquisition. With time, they even adopted mediasoup as the media server they are using, putting Kurento aside for the most part.
Money time
Unlike many of the open source solutions we’ve seen so far, OpenVidu actually seems to have a business model:
LiveKit

LiveKit offers an “open source WebRTC infrastructure” – the management layer above the Pion SFU.
For the life of me though, I don’t understand what the business model is for LiveKit. They are a company – not just an open source project, and as such, they need to have revenue to survive.
Most probably they get some support and development money from enterprises adopting LiveKit, but that isn’t easily apparent from their website.
Other, less popular open source alternatives for WebRTC

There are other companies who offer commercial solutions that are proprietary in nature. Some do it as on-premise alternatives, where they provide the software and the support, while you need to deploy and maintain.
These can either be suitable solutions or disasters waiting to happen. Especially when such a vendor decides to pivot or leave the market.
Tread carefully here.
Is it time for WebRTC open source to grow up?

This has been a long overview, but I think we can all agree.
The current state of WebRTC open source is abysmal:
If it were up to us, and it isn’t, we’d like to see a more sophisticated market out there. One that gives more and better commercial solutions for enterprises and entrepreneurs alike.
The post The state of WebRTC open source projects appeared first on BlogGeek.me.
Running your own TURN servers for your WebRTC application is not necessarily the best decision. Make sure you know why you’re doing it.
[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]
Are you running your own TURN server? Great!
Now, are you crystal clear and honest with yourself about why you’re doing that exactly?
WebRTC has lots of moving parts you need to take care of. Lots of WebRTC servers: The application. Signaling servers. Media servers. And yes – TURN servers.
I already covered a few aspects of TURN in this WebRTC quote – We TURNed to see a STUNning view of the ICE. It is now time to review the build vs buy decision around TURN.
You see, NAT traversal in WebRTC is done by using two different servers: STUN and TURN. STUN is practically free and it can also be wrapped right into the TURN server.
TURN servers are easy to interface with, but not as easy to install, configure and maintain properly. Which is why my suggestion more often than not is to use a third party managed TURN service instead of putting up your own. Economies of scale along with focus and core competencies come to mind here with this decision.
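To give a sense of what that configuration work involves, here is a minimal coturn turnserver.conf sketch – every value below (credentials, realm, certificate paths) is a placeholder you would replace with your own, and a production deployment needs considerably more than this:

```ini
# Minimal turnserver.conf sketch for coturn – all values are placeholders
listening-port=3478
tls-listening-port=5349

# WebRTC uses the long-term credential mechanism
lt-cred-mech
user=webrtcuser:supersecret
realm=turn.example.com

# Add fingerprints to STUN/TURN messages, as WebRTC clients expect
fingerprint

# TLS certificate and key, for turns: connectivity
cert=/etc/ssl/turn_cert.pem
pkey=/etc/ssl/turn_key.pem
```

And that is before you get to scaling, geographic distribution, monitoring and credential rotation – which is where the managed services earn their keep.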
Why buy your WebRTC TURN servers?

Buying a TURN server should be your default decision. It is simple. It isn’t too expensive (for the most part) and it will reduce a lot of your headaches.
Most of the companies that approach me with connectivity issues of their WebRTC application end up in that state simply because they decided to figure out NAT traversal in WebRTC on their own.
Here are a few really good reasons why you should buy your TURN service:
We are all builders. And we love building. So adding TURN to our belt of things we built makes sense. It also plays into the vertical integration mindset we now appreciate, given how successful Apple has been with it in its services.
But frankly, it is mostly about control. The ability to control your own destiny without relying on others.
I still think you should buy your TURN servers from a reputable managed service provider. That said, here are some good reasons why to build and deploy your own:
–
Build? Buy? Which one is the path you’ll be taking?
Trying to get more of your calls connected in WebRTC? Check out this free video mini course on effectively connecting WebRTC sessions
The post Be very clear to yourself why you manage your own TURN servers appeared first on BlogGeek.me.
Every time you look at NAT Traversal in WebRTC, you end up learning something new about STUN, TURN and/or ICE.
[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]
STUN, TURN and ICE. The most misunderstood aspects of WebRTC, and the most important ones to get more calls connected. It is no wonder that the most viewed and starred lesson in my WebRTC training courses is the one about NAT traversal.
Let’s take this opportunity to go over a few aspects of NAT traversal in WebRTC:
This covers the basics. There’s a ton more to learn and understand about NAT traversal in WebRTC. I’d also suggest not installing and deploying your own TURN servers but rather use a third party paid managed service. The worst that can happen is that you’ll install and run your own later on – there’s almost no vendor lock-in for such a service anyway.
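To make the pieces concrete, here is what STUN and TURN look like from the application’s side – a sketch of a typical RTCPeerConnection configuration. The hostnames and credentials are placeholders, not real servers:

```javascript
// A typical RTCConfiguration: STUN to discover your public address,
// TURN as the relay fallback when direct connectivity fails.
// All hostnames and credentials below are placeholders.
const config = {
  iceServers: [
    { urls: "stun:stun.example.com:3478" },
    {
      urls: [
        "turn:turn.example.com:3478?transport=udp",
        "turn:turn.example.com:443?transport=tcp", // TCP on 443 helps with strict firewalls
      ],
      username: "user",
      credential: "secret",
    },
  ],
};

// In the browser, you would hand this to the peer connection:
// const pc = new RTCPeerConnection(config);
console.log(config.iceServers.length); // STUN entry + TURN entry
```

With a managed TURN service, the only thing that changes in your code is this configuration object – which is why there’s almost no vendor lock-in.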
Trying to get more of your calls connected in WebRTC? Check out this free video mini course on effectively connecting WebRTC sessions
The post We TURNed to see a STUNning view of the ICE appeared first on BlogGeek.me.
You will need to decide what is more important for you – quality or latency. Trying to optimize for both is bound to fail miserably.
[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]
First thing I ask people who want to use WebRTC for a live streaming service is:
What do you mean by live?
This is a fundamental question and a critical one.
If you search Google, you will see vendors stating that good latency for live streaming is below 15 seconds. That might be good enough, but it is quite crappy if you are watching a live soccer game and your neighbors, who saw the goal 15 seconds before you did, are already shouting.
I like using the diagram above to show the differences in latencies by different protocols.
WebRTC leaves all other standards based protocols in the dust. It is the only true sub-second latency streaming protocol. It doesn’t mean that it is superior – just that it has been optimized for latency. And in order to do that, it sacrifices quality.
How?
By not retransmitting or buffering.
With all other protocols, you are mostly going to run over HTTPS or TCP. And all other protocols heavily rely on retransmissions in order to get the complete media stream. Here’s why:
WebRTC comes from the real time, interactive, conversational domain. There, even a second of delay is too long to wait – it breaks the experience of a conversation. So in WebRTC, the leading approach to dealing with packet losses isn’t retransmission, but rather concealment. WebRTC tries to conceal packet losses and also makes sure there are as few of them as possible by providing a finely tuned bandwidth estimation mechanism.
Looking at WebRTC itself, it includes a jitter buffer implementation. The jitter buffer is in charge of delaying the playout of incoming media. This is done to deal with network jitter, offering smoother playback. It is also used to implement lip synchronization between incoming audio and video streams. You can control it to some extent by instructing it not to delay playout. This will again hurt quality and improve latency.
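As a toy illustration of that tradeoff (this is not libwebrtc’s actual algorithm), a jitter buffer can size its playout delay from the inter-arrival jitter it observes – the jumpier the network, the more it must hold back:

```javascript
// A toy jitter buffer target-delay estimator. It smooths the deviation of
// packet inter-arrival times from the nominal frame interval (similar in
// spirit to the RFC 3550 jitter estimate) and holds back a multiple of it.
function targetPlayoutDelay(arrivalTimesMs, frameIntervalMs) {
  let jitter = 0;
  for (let i = 1; i < arrivalTimesMs.length; i++) {
    const interArrival = arrivalTimesMs[i] - arrivalTimesMs[i - 1];
    const deviation = Math.abs(interArrival - frameIntervalMs);
    jitter += (deviation - jitter) / 16; // exponential smoothing
  }
  return Math.ceil(jitter * 3); // hold back a few "jitters" worth of variation
}

// Steady network: frames arrive every 20ms like clockwork – no delay needed
const steady = [0, 20, 40, 60, 80, 100];
// Bursty network: frames bunch up and spread out – playout must be delayed
const bursty = [0, 5, 55, 60, 110, 115];

console.log(targetPlayoutDelay(steady, 20));
console.log(targetPlayoutDelay(bursty, 20));
```

Lower the held-back delay and you get lower latency; raise it and you get smoother playback. Same dial, different position.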
You see, the lower the latency you want, the bigger the technical headaches you will need to deal with in order to maintain high quality. Which in turn means that whenever you want to reduce latency, you are going to pay in complexity and also in the quality you will be delivering. One way or another, there’s a choice being made here.
Looking to learn more on how to use WebRTC technology to build your solution? We’ve got WebRTC training courses just for that!
The post With media delivery, you can optimize for quality or latency. Not both appeared first on BlogGeek.me.
Lowcode and nocode are old/new concepts that are now finding their way to Communication APIs. Here are the latest developments.
Lowcode and nocode have fascinated me. Around 15 years ago (or more), I was tasked with bringing the video calling software SDKs we developed at RADVISION to the cloud.
At the time, the solutions we had were geared towards developers and were essentially SDKs that were used as the video communication engines of applications our customers developed. Migrating to the cloud when all you are doing is SDKs is a challenge. How do you offer your developer customers the means to control the edge devices via the cloud, while still allowing the application to control the look and feel and embed the solution wherever they want?
The cloud we’ve developed used Python (Node.js wasn’t popular yet), and we dabbled and experimented with Awesomium – a web browser framework for applications – the predecessor of today’s more popular Electron. We built REST APIs to control the calling logic and handle the client apps remotely via the cloud.
I spent much of my time trying to come to grips with how exactly you would fit remote controlling an app with the fact that you don’t really own or… control it. A conundrum.
Fast forward to today, where cloud and WebRTC are everywhere, and you ask yourself – how do you remote control communications – and how do you build such interactions with ease.
The answer to that is usually by way of nocode and lowcode. Mechanisms that reduce the amount of code developers need to write to use certain technologies – in our case Communication APIs (CPaaS).
I had a bit of spare time recently, so I decided to spend it on capturing today’s nocode & lowcode status and progress within the CPaaS domain.
This has been especially important if you consider the recent announcements in the market – including the one coming from Zoom about their Jumpstart program:
“With Jumpstart, you can quickly create easy-to-integrate and easy-to-customize Zoom video solutions into your apps at lower costs.”
So without much ado, if this space interests you, you should check out my new free eBook: Lowcode & Nocode in Communication APIs
This eBook details and explains the various approaches in which lowcode and nocode manifest themselves in the Communication APIs domain. It looks into the advantages and challenges of developers who adopt such techniques within their applications.
I’d like to thank Daily for sponsoring this ebook and helping me make it happen. If you don’t know them by now then you should. Daily offers WebRTC video and audio for every developer – they are a CPaaS vendor with a great lowcode/nocode solution called Daily Prebuilt.
If you are in the process of developing applications that use 3rd party Communication APIs, you will find the insights in this eBook important to follow.
GET MY FREE LOWCODE/NOCODE CPAAS EBOOK

The post Nocode/Lowcode in CPaaS appeared first on BlogGeek.me.
The biggest challenge you will have when implementing WebRTC group calling is estimating and optimizing bandwidth use.
[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]
Video is a resource hog. Some say that WebRTC is a great solution for 1:1 calls, but is lacking when it comes to group calling. To them I’d say that WebRTC is a technology and not a solution. In this case, it simply means that you need to invest some effort in getting group video calling to work well.
What does that mean exactly? That you need to think about bandwidth management first and foremost.
Why?
Let’s assume a 25-participant video call. And we’re modest – we just want each participant to encode their video at 500kbps – reasonable if we plan on having everyone at a mere VGA resolution (640×480 pixels).
Want to do the math together?
We end up with 12.5Mbps. That’s only for the video, without the overhead of headers or audio. Since each participant only needs to receive media from the other 24 participants, we can “round” this down to 12Mbps.
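The arithmetic goes like this (a back-of-the-envelope sketch; real calls add audio, packet headers and simulcast into the mix):

```javascript
// Rough downlink estimate for a group video call where every
// participant receives video from every other participant.
function requiredDownlinkKbps(participants, bitratePerStreamKbps) {
  // You receive (participants - 1) streams; you don't receive your own.
  return (participants - 1) * bitratePerStreamKbps;
}

const kbps = requiredDownlinkKbps(25, 500);
console.log(`${kbps / 1000} Mbps`); // 24 incoming streams at 500kbps -> 12 Mbps
```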
I am sure you have a downlink higher than 12Mbps, but let me tell you a few things you might not be aware of:
You can get better at it, trying to figure out lower bitrates, limit how much you send and receive and do so individually per participant in the video group meeting. You can take into consideration the display layout, the dominant speaker and contributing participants, etc.
That’s exactly what 90% of your battle here is going to be – effectively managing bandwidth.
Going for a group video calling route? Be sure to save considerable time and resources for optimization work on bandwidth estimation and management. Oh – and you are going to need to do that continuously. Because WebRTC is a marathon, not a sprint.
Scaling WebRTC is no simple task. There are a lot of best practices, tips and tricks that you should be aware of. My WebRTC Scaling eBooks Bundle can assist you in figuring out what more you can do to improve the quality and stability of your group video calling service.
The post In group video calls, effectively managing bandwidth is 90% of the battle appeared first on BlogGeek.me.
Balázs Kreith of the open-source WebRTC monitoring project, ObserveRTC, shows how to calculate WebRTC latency - aka Round Trip Time (RTT) - in p2p scenarios and end-to-end across one or more SFUs. WebRTC's getStats provides relatively easy access to RTT values, but using those values in a real-world environment for accurate results is more difficult. He provides a step-by-step guide using some simple Docker examples that compute end-to-end RTT with a single SFU and in cascaded SFU environments.
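The core idea is simple even if the real-world details aren't: with an RTT measured per leg (for example from the `roundTripTime` field of `remote-inbound-rtp` stats in getStats), the end-to-end RTT across one or more SFUs is roughly the sum of the legs. A naive sketch (not ObserveRTC's actual code; the numbers are made up for illustration):

```javascript
// Approximate the end-to-end round trip time between two peers
// connected through one or more SFUs by summing the RTT measured
// on each hop (peer<->SFU and SFU<->SFU legs).
function endToEndRtt(legRttsSeconds) {
  return legRttsSeconds.reduce((sum, rtt) => sum + rtt, 0);
}

// Peer A <-> SFU1 <-> SFU2 <-> Peer B (cascaded SFUs):
console.log(endToEndRtt([0.020, 0.015, 0.030])); // ~0.065 seconds
```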
The post Calculating True End-to-End RTT (Balázs Kreith) appeared first on webrtcHacks.
WebRTC is a building block to be used when developing solutions. Comparing it to solutions is the wrong approach.
[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]
How does WebRTC compare to Zoom?
What about Skype? Or FaceTime?
I’d say these questions can’t really be answered – you’re not comparing things that are comparable.
WebRTC is a piece of technology. A set of building blocks that you can use, like lego bricks.
In essence, you can view WebRTC in two ways:
Got an application you’re developing? Need communications sprinkled into it? Some voice. Maybe video. All in real time. And with browser components maybe. If that is the case, then WebRTC is the technology you’re likely to be using for it. But piecing all of that together into your application? That’s up to you. And that’s your solution.
We can then compare the solution you built to some other solution out there.
Next time people tell you “WebRTC isn’t good because it can’t do group calls” – just laugh at their faces. Because as a technology WebRTC can certainly handle group calls and large broadcasts – you’ll need to bring media servers to do that, and sweat to build your solution. The pieces of your puzzle there will include WebRTC as a technology.
Remember:
WebRTC is a technology not a solution. What you end up doing with it is what matters
Looking to learn more on how to use WebRTC technology to build your solution? We’ve got WebRTC training courses just for that!
The post WebRTC is a technology not a solution appeared first on BlogGeek.me.
A full review and guide to all of the Jitsi Meet-related projects, services, and development options including self-install, using meet.jit.si, 8x8.vc, Jitsi as a Service (JaaS), the External iFrame API, lib-jitsi-meet, and the Jitsi React libraries among others.
The post The Ultimate Guide to Jitsi Meet and JaaS appeared first on webrtcHacks.
A very detailed look at the WebRTC implementations of Google Meet and Google Duo and how they compare using webrtc-internals and some reverse engineering.
The post Meet vs. Duo – 2 faces of Google’s WebRTC appeared first on webrtcHacks.
WebRTC requires an ongoing investment that doesn’t lend itself to a one-off outsourced project. You need to plan for it and work with it over the long term.
[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]
WebRTC simplified development and reduced the barrier of entry to many in the market. This brought with it the ability to quickly build, showcase and experiment with demos, proof of concepts and even MVPs. Getting that far is now much easier thanks to WebRTC, but not planning ahead will ruin you.
There are a few reasons why you can’t treat WebRTC as merely a sprint:
I like using this slide in my courses and presentations:
These are the actors in a WebRTC application. While the application is within your control and ownership – everything else isn’t…
Planning on using WebRTC? Great!
Now prepare for it as you would for a long marathon – it isn’t going to be a sprint.
Things to include in your preparation for the WebRTC marathon:
The post WebRTC is a marathon not a sprint appeared first on BlogGeek.me.
Hearing FUD around WebRTC IP leaks and testing them? The stories behind them are true, but only partially.
WebRTC IP leak tests were popular at some point, and somehow they still are today. Some of it is related to pure FUD while another part of it is important to consider and review. In this article, I’ll try to cover this as much as I can. Without leaking my own private IP address (192.168.123.191 at the moment if you must know) or my public IP address (80.246.138.141, while tethered to my phone at the coffee shop), let’s dig into this topic together.
IP addresses are what got you here to read this article in the first place. They are used by machines to reach out to each other and communicate. There are different types of IP addresses, and one such grouping is done between private and public addresses.
Private and public IP addresses
Once upon a time, the internet was built on top of IPv4 (and it still mostly is). IPv4 meant that each device had an IP address constructed out of 4 octets – a total of around 4 billion potential addresses. Fewer than the people on earth today and certainly fewer than the number of devices that now exist and connect to the internet.
This got solved by splitting the address ranges to private and public ones. A private IP address range is a range that can be reused by different organizations. For example, that private IP address I shared above? 192.168.123.191? It might also be the private IP address you are using as well.
A private IP address is used to communicate between devices that are hosted inside the same local network (LAN). When a device is on a different network, then the local device reaches out to it via the remote device’s public IP address. Where did that public IP address come from?
The public IP address is what a NAT device associates with the private IP address. This is a “box” sitting on the edge of the local network, connecting it to the public internet. It essentially acts as the translator of public IP addresses to private ones.
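To make the distinction concrete, here's a small sketch (illustrative only) that classifies an IPv4 address against the RFC 1918 private ranges – the two addresses from earlier in this article behave as expected:

```javascript
// Is an IPv4 address inside one of the RFC 1918 private ranges
// (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16)?
function isPrivateIPv4(address) {
  const [a, b] = address.split('.').map(Number);
  return a === 10 ||
         (a === 172 && b >= 16 && b <= 31) ||
         (a === 192 && b === 168);
}

console.log(isPrivateIPv4('192.168.123.191')); // true  (private, reusable per-LAN)
console.log(isPrivateIPv4('80.246.138.141'));  // false (public, globally routable)
```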
IP addresses and privacy
So we have IP addresses, which are like… home addresses. They indicate how a device can be reached. If I know your IP address then I know something about you:
A quick look at that public IP address of mine from above, gives you the following information on WhatIsMyIpAddress.com:
So…
It is somewhat accurate, but in this specific case, not much. In other cases it can be pretty damn accurate. Which means it is quite private to me.
One thing these nasty IP addresses can be used for? Fingerprinting. This is a process of understanding who I am based on the makeup and behavior of my machine and me. An IP address is one of many characteristics that can be used for fingerprinting.
If you’re not certain whether IP addresses are a privacy concern or not, then there’s the notion that most probably IP addresses are considered personally identifiable information – PII (based on rulings of US courts as far as I can glean). This means that an IP address can be used to identify you as a person. How does that affect us? I’d say it depends on the use case and the mode of communications – but what do I know? I am not a lawyer.
Who knows your IP address(es)?
IP addresses are important for communications. They contain some private information in them due to their nature. Who knows my IP addresses anyway?
The obvious answer is your ISP – the vendor providing you access to the internet. It allocated the public IP address you are using to you and it knows which private IP address you are coming from (in many cases, it even assigned that to you through the ADSL or other access device it installed in your home).
Unless you’re trying to hide, all websites you access know your public IP address. When you connected to my blog to read this article, in order to send this piece of content back to you, my server needed to know where to reply to, which means it has your public IP address. Am I storing it and using it elsewhere? Not that I am directly aware of, but my marketing services such as Google Analytics might and probably does make use of your public IP address.
That private IP address of yours though, most websites and cloud services aren’t directly aware of it and usually don’t need it either.
WebRTC and IP addresses
WebRTC does two things differently than most other browser based protocols out there:
Because WebRTC diverges from the client-server approach AND uses dynamic ephemeral ports, there’s a need for NAT traversal mechanisms to be able to… well… pass through these NATs and firewalls. And while at it, try not to waste too much network resources. This is why a normal peer connection in WebRTC will have 4+ types of “local” addresses as its candidates for such communications:
Lots and lots of addresses that need to be communicated from one peer to another. And then negotiated and checked for connectivity using ICE.
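These candidates eventually show up as ICE candidate lines in the SDP or via onicecandidate. As a toy illustration (not how ICE negotiation works internally; the candidate string below is fabricated for the example), you can pull the candidate type out of such a line with a trivial parser:

```javascript
// Extract the ICE candidate type from a candidate line:
// host (local IP), srflx (public IP via STUN), prflx (peer reflexive),
// relay (TURN server address).
function candidateType(candidateLine) {
  const match = candidateLine.match(/ typ (host|srflx|prflx|relay)/);
  return match ? match[1] : null;
}

const c = 'candidate:1 1 udp 1677729535 203.0.113.7 33705 typ srflx raddr 0.0.0.0 rport 0';
console.log(candidateType(c)); // srflx -> a public address discovered via STUN
```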
Then there’s this minor extra “inconvenience” that all these IP addresses are conveyed in SDP which is given to the application on top of WebRTC for it to send over the network. This is akin to me sending a letter, letting the post office read it just before it closes the envelope.
IP addresses are necessary for WebRTC (and VoIP) to be able to negotiate and communicate properly.
This one is important, so I’ll write it again: IP addresses are necessary for WebRTC (and VoIP) to be able to negotiate and communicate properly.
It means that this isn’t a bug or a security breach on behalf of WebRTC, but rather its normal behavior which lets you communicate in the first place. No IP addresses? No communications.
One last thing: You can hide a user’s local IP address and even public IP address. Doing that though means the communication goes through an intermediary TURN server.
Past WebRTC “exploits” of IP addresses
WebRTC is a great avenue for hackers:
The main exploits around IP addresses in browsers affecting the user’s privacy were conducted so far for fingerprinting.
Fingerprinting is the act of figuring out who a user is based on the digital fingerprint he leaves on the web. You can glean quite a lot about who a user is based on the behavior of their web browser. Fingerprinting makes users identifiable and trackable when they browse the web, which is quite useful for advertisers.
The leading story here? NY Times used WebRTC for fingerprinting
There’s a flip side to it – WebRTC is/was a useful way of knowing if someone is a real person or a bot running on browser automation as indicated in the comments. A lot of the high scale browser automations simply couldn’t quite cope with WebRTC APIs in the browser, so it made sense to use it as part of the techniques to ferret out real traffic from bots.
Since then, WebRTC made some changes to the exposure of IP addresses:
There are different entities in a WebRTC session that need to have your local IP address in a WebRTC session:
The other peer, the web application and the TURN server don’t really need that access if you don’t care about the local network connectivity use case. If connecting a WebRTC session on the local network (inside a company office, home, etc) isn’t what you’re focused on, then you should be fine with not sharing the local IP address.
Also, if you are concerned about your privacy to the point of not wanting people to know your local IP address – or public IP address – then you wouldn’t want these IP addresses exposed either.
But how can the browser or the application know about that?
VPNs stopping WebRTC IP leaks
When using a VPN, what you are practically doing is making sure all traffic gets funneled through the VPN. There are many reasons for using a VPN and they all revolve around privacy and security – either of the user or of the corporation whose VPN is being used.
The VPN client intercepts all outgoing traffic from a device and routes it through the VPN server. VPNs also configure proxy servers for that purpose so that web traffic in general would go through that proxy and not directly to the destination – all that in order to hide the user itself or to monitor the user’s browsing history (do you see how all these technologies can be used either for anonymity or for the exact opposite of it?).
WebRTC poses a challenge for VPNs as well:
To make all this go away, browsers have privacy policies built into them. And VPNs can modify these policies to accommodate for their needs – things like not allowing non-proxied UDP traffic to occur.
How much should you care about WebRTC IP leaks?
That’s for you to decide.
As a user, I don’t care much about who knows my IP address. But I am not an example – I am also using Chrome and Google services. Along with a subscription to Office 365 and a Facebook account. Most of my life has already been given away to corporate America.
Here are a few rules of thumb I’d use if I were to decide if I care:
In all other cases, just do nothing and feel free to continue using WebRTC “as is”. The majority of web users are doing just that as well.
Do you want privacy or privacy?
This one is tricky.
You want to communicate with someone online. Without them knowing your private or public IP address directly. Because… well… dating. And anonymity. And harassment. And whatever.
To that end, you want the communication to be masked by a server. All of the traffic – signaling and media – gets routed through the intermediary server/service. So that you are masked from the other peer. But guess what – that means your private and public IP addresses are going to be known to the intermediary server/service.
You want to communicate with someone online. Without people, companies or governments eavesdropping on the conversation.
To that end, you want the communication to be peer-to-peer. No TURN servers or media servers as intermediaries. Which is great, but guess what – that means your private and public IP addresses are going to be known to the peer you are communicating with.
At some point, someone needs to know your IP addresses if you want and need to communicate. Which is exactly where we started from.
Oh, and complicated schemes a-la TOR networking is nice, but doesn’t work that well with real time communications where latency and bitrates are critical for media quality.
The developer’s angle of WebRTC IP leaks
We’ve seen the issue, the reasons for it and we’ve discussed the user’s angle here. But what about developers? What should they do about this?
WebRTC application developers
If you are a WebRTC application developer, then you should take into account that some of your users will be privacy conscious. That may include the way they think about their IP addresses.
Here are a few things for you to think about here:
If you are a VPN developer, you should know more about WebRTC, and put some effort into handling it.
Blocking WebRTC altogether won’t solve the problem – it will just aggravate users who need access to WebRTC-based applications (=almost all meeting apps).
Instead, you should make sure that part of your VPN client application takes care of the browser configurations to place them in a policy that fits your rules:
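As one concrete, illustrative example: on a managed machine, Chrome reads enterprise policies from JSON files, and the WebRtcIPHandlingPolicy policy restricts which addresses WebRTC exposes – the value below limits WebRTC to proxied UDP traffic. Check the current Chrome policy documentation before relying on exact names and values:

```json
{
  "WebRtcIPHandlingPolicy": "disable_non_proxied_udp"
}
```

Other values of this policy allow intermediate stances, such as exposing only the default public interface, without disabling WebRTC outright.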
What is a WebRTC leak test?
A WebRTC leak test is a simple web application that tries to find your local IP address. This is used to check and prove that an innocent-looking web application with no special permissions from a user can gain access to such data.
Does WebRTC still leak IP?
Yes and no.
It really depends where you’re looking at this issue.
WebRTC needs IP addresses to communicate properly. So there’s no real leak. Applications written poorly may leak such IP addresses unintentionally. A VPN application may be implemented poorly so as to not plug this “leak” for the privacy conscious users who use them.
Can you prevent WebRTC IP leaks?
Yes. By changing the privacy policy in Chrome. This is something that VPNs can do as well (and should do).
How severe is the WebRTC leak?
The WebRTC leak of IP addresses gives web applications the ability to know your private IP address. This has been a privacy issue in the past. Today, to gain access to that information, web applications must first ask the user for consent to access their microphone or camera, so this is less of an issue.
What is a good VPN to plug the WebRTC leak?
I can’t really recommend a good VPN to plug WebRTC leaks. This isn’t what I do, and frankly, I don’t believe in such tools plugging these leaks.
One rule of thumb I can give here: don’t go for a free VPN. If it is free, then you are the product, which means they sell your data – the exact privacy you are trying to protect.
The post What is the WebRTC leak test and should you be worried about it? appeared first on BlogGeek.me.
Step-by-step guide on how to fix bad webcam lighting in your WebRTC app with standard JavaScript APIs for camera exposure or natively with UVC drivers.
The post Fix Bad Lighting with JavaScript Webcam Exposure Controls (Sebastian Schmid) appeared first on webrtcHacks.
What WebRTC did to VoIP was reduce the barrier of entry for new vendors and increase the level and domains of innovation.
[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]
WebRTC was an aha moment in the history of communications.
It did two simple things that were never before possible for “us” VoIP developers:
This in turn, brought with it the two aspects of WebRTC illustrated above:
For many years I’ve been using this slide to explain why WebRTC is so vastly different than what came before it:
That said, truly innovating, productizing and scaling WebRTC applications require a bit more of an investment and a lot more in understanding and truly grokking WebRTC. Especially since WebRTC is… well… it is web and VoIP while at the same time it isn’t exactly web and it isn’t exactly VoIP:
This means that you need to understand and be proficient with both VoIP development (to some extent) and with web development (to some extent).
Looking to learn WebRTC? Here are some guidelines of how to get started with learning WebRTC.
The post WebRTC reduced barriers and increased innovation in communications appeared first on BlogGeek.me.
With FIDO coming to replace passwords in applications, CPaaS vendors are likely to decline in 2FA revenues.
2FA revenue has always lived on the premise that passwords are broken. I’ve written about this back in 2017:
Companies are using SMS for three types of services these days:
1. Security — either through two-factor authentication (2FA), for signing in to services; or one-time password (OTP), which replaces the need to remember a password for various apps
2. Notifications for services — these would be notifications that you care about or that offer you information, like that request for feedback or maybe that birthday coupon
3. Pure spam — businesses just send you their unsolicited crap trying to get you to sign up for their services
Spam is spam. Notifications are moving towards conversations on social networks. And the security SMS messages are going to be replaced by FIDO. Here’s where we’re headed.
Let’s take this step by step.
Passwords and the FIDO Alliance
Passwords are the bane of our modern existence. A necessary evil.
To do anything meaningful online (besides reading this superb article), you need to login or identify yourself against the service. Usually, this is done by a username (email or an identity number most likely) and a password. That password part is a challenge:
I use a password manager to handle my online life. My wife uses the “forgot my password” link all the time to get the same results.
It seems that whatever was tried in the passwords industry has failed in one way or another. Getting people house trained on good password practices is just too damn hard and bound to failure (just like trying to explain to people not to throw facial tissue down the toilet).
Experts have since been pushing for a security model that authenticates a user with multiple “things”:
Smartphones today are something you own, and they offer something you are by having fingerprint ID and face ID solutions baked into them. The last remaining piece – the something you know – is the password.
Enter FIDO.
FIDO stands for Fast IDentity Online.
Here’s the main marketing spiel of the FIDO Alliance:
The FIDO Alliance seems to have more members than it has views on that YouTube video (seriously).
By their own words:
The FIDO Alliance is working to change the nature of authentication with open standards that are more secure than passwords and SMS OTPs, simpler for consumers to use, and easier for service providers to deploy and manage.
So:
What more can you ask for?
Well… for this standard to succeed.
And here is what brought me to write this article. The recent announcement from earlier this month – Apple, Google and Microsoft all committing to the FIDO standard. They are already part of FIDO, but now it is about offering easier mechanisms to remove the need for a password altogether.
If you are reading this, then you are doing that in front of an Apple device (iPhone, iPad or MacOS), a Google one (Android or Chrome OS) or a Microsoft one (Windows). There are stragglers using Linux or others, but these are tech-savvy enough to use passwords anyway.
These devices are more and more active as both something you own and something you are. My two recent laptops offer fingerprint biometric identification and most (all?) smartphones today offer the same or better approaches as well.
I have long waited for Google and Apple to open up their authentication mechanisms in Android and iOS, letting developers use them the same way end users do to access Google and Apple services – when I log in to any Google-connected site anywhere, my smartphone asks me if that was me.
And now it seems to be here. From the press release itself:
Today’s announcement extends these platform implementations to give users two new capabilities for more seamless and secure passwordless sign-ins:
1. Allow users to automatically access their FIDO sign-in credentials (referred to by some as a “passkey”) on many of their devices, even new ones, without having to re-enroll every account.
2. Enable users to use FIDO authentication on their mobile device to sign in to an app or website on a nearby device, regardless of the OS platform or browser they are running.
So… no need for passwords. And no need for 2FA. Or OTP.
FIDO is going to end the farce of using 2FA and OTP technologies.
2FA: a CPaaS milking cow
2FA stands for Two Factor Authentication while OTP stands for One Time Password.
With 2FA, you enter your credentials and then receive an SMS or email (or more recently Whatsapp message) with a number. You have to paste that number on the web page or app to login. This adds the something you own part to the security mechanism.
OTP is used to remove the password altogether. Tell us your email and we will send you a one time password over SMS (or email), usually a few digits, and you use that to login for just this once.
2FA, OTP… the ugly truth is that it is nagging as hell to everyone. Not only users but also application developers. The devil is always in the details with these things:
The list goes on. So CPaaS vendors have gone ahead and incorporated 2FA-specific solutions into their bag of services. Twilio even acquired Authy, a customer of theirs, in 2015, just to have that in their offerings at the time.
The great thing about 2FA (for CPaaS vendors), is that the more people engage with the digital world, the more they will end up with a 2FA or OTP SMS message. And each such message is a minor goldmine: A single SMS on Twilio in the US costs $0.0075 to send. A 2FA transaction will cost an additional $0.09 on top of it.
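Plugging in those published US numbers, the per-login economics are easy to model (prices are the ones quoted above and will vary by vendor and country):

```javascript
// Back-of-the-envelope cost of SMS-based 2FA:
// $0.0075 per SMS plus $0.09 per 2FA transaction on top of it.
function twoFaCostUsd(transactions, smsPrice = 0.0075, txPrice = 0.09) {
  return transactions * (smsPrice + txPrice);
}

console.log(twoFaCostUsd(100000)); // ~9750 USD for 100k logins
```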
Yes. 2FA services bring great value. And they are tricky to implement and maintain properly at scale. So the price can be explained. But… what if we didn’t really need 2FA at all?
The death of 2FA
Putting two and two together:
Apple, Google and Microsoft committing to FIDO and banishing passwords by making their devices take care of something you know, something you own AND something you are means that users will not need to identify themselves in front of services using passwords AND they won’t be needing OTP or 2FA either.
The solution ends up being simpler for the user AND simpler for the service provider.
Win Win.
Unless you are a CPaaS vendor who makes revenue from 2FA. Then it is pure loss.
What alternatives can CPaaS vendors offer?
As a first step, the “migration” from “legacy” 2FA and OTP towards Apple/Google’s new and upcoming FIDO solution. Maybe a unified API on top of Apple and Google, but that’s a stretch. I can’t see such APIs costing $0.09 per authentication. Especially if Apple and Google do a good job at the developer tooling level for this.
* I dropped Microsoft towards the end here because they are less important for this to succeed. They become significant if this does succeed, by making it even simpler on laptops so one won’t have to reach for their phone to log in when on a laptop.
The future of CPaaS
5 years ago, back in that 2017 article, I ended it with these words:
Goodbye SMS, It’s Time for Us to Move On
Don’t be fooled by the growth of 2FA and application-to-person (A2P) type messages over SMS. This will have a short lifespan of a few years. But five to 10 years from now? It will just be a service sitting next to my imaginary fax machine.
We’re 5 years in and the replacements of SMS are here already.
All that revenue coming to CPaaS from SMS is going to go elsewhere. Social omnichannel introduced by CPaaS vendors will replace that first chunk of revenue, but what will replace the 2FA and OTP? Can CPaaS vendors rely on FIDO and build their own business logic on top and around it for their customers?
It seems to me revenue will need to be found elsewhere.
Interested in learning more about the future of CPaaS? Check out my ebook on the topic (relevant today as it was at the time of writing it).
Download my CPaaS in 2020 ebook
The post FIDO Alliance and the end of 2FA revenue to CPaaS vendors appeared first on BlogGeek.me.
Saúl Ibarra Corretgé of Jitsi walks through his epic struggle getting Apple iOS bitcode building with WebRTC for his Apple Watch app.
The post The WebRTC Bitcode Soap Opera (Saúl Ibarra Corretgé) appeared first on webrtcHacks.
What was nice to have is now becoming mandatory in WebRTC video calling applications. This includes background blurring, but also a lot of other features as well.
Do you remember that time, not long ago, when 16 participants on a call was the highest number that product managers asked for? Well… we’re not there anymore. In many cases, the number has grown. First to 49. Then to a lot more, with nuances on what exactly it means to have larger calls. We now see anywhere between 100 and 10,000 participants considered a “meeting”.
I’ve been talking and mentioning table stakes for quite some time – during my workshops, in my messages on LinkedIn, in WebRTC Insights. It was time I sat down to write about it on my blog.
This isn’t really about WebRTC, but rather about what users now expect from WebRTC applications. These expectations are in many cases table stakes – features that are almost mandatory in order to even be considered a relevant vendor in the selection process.
What you’ll see here is almost the new shopping list. Since users are different, markets are different, scenarios are different and requirements vary – you may not need all of them in your application. That said, I suggest you take a good look at them, decide which ones you need tomorrow, which you don’t need and which you have to get done yesterday.
Background blurring/replacement
Obvious. I have a background replacement. I never use it in my own calls. Because… well… I like my background. Or more accurately – I like showing my environment to people. It gives context and I think makes me more human.
This isn’t to say people shouldn’t use background replacement or that I’ll hate them for doing that – just that for me, and my background – I like keeping the original.
Others, though, want to replace their background. Sometimes because they don’t have a proper place where the background isn’t cluttered or “noisy”. Or because they just want to have fun with it.
Whatever the reason is, background blurring and replacement are now table stakes – if your app doesn’t have it, then the app that does in your market will be more interesting and relevant to the buyers.
Here’s how I see the development of the requirements here:
Lighting adjustment
If I recall correctly, Google Meet started with this feature, and since then it started cropping up in other meeting solutions. We all use webcams, but none of us has good lighting. It might be a window behind you (or in my case to the side), the weather out the window, the hour of the day, or just poor lighting in the room.
While this can be fixed, it isn’t. Much like the cluttered room, the understanding is that humans are lazy or just not up to the task of understanding what to do to improve video lighting on their own. And just like background removal, we can employ machine learning to improve lighting on a video stream.
Noise suppression/cancellation
I started using this stock image when I started doing virtual workshops. It is how I like to think of my nice neighbor (truth be told – he really is nice). It just seems that every time I sit down for an important meeting, he’d be on one of his renovation sprees.
The environment in which we’re conducting our calls is “polluted” with sounds. My mornings are full of lawn mower noises from the park below my apartment building. The rest of my days, from the other family members in my apartment and from my friendly neighbor. For others, it is the classic dog barking and traffic noises.
Same as with video, since we’re now doing these sessions from everywhere at any time, it is becoming more important than ever to have this capability built into the service used.
Some services today offer the ability to suppress and cancel different types of noises. You don’t have the control over what to suppress, but rather get an on/off switch.
Four important things here:
And last but not least, this is the kind of feature that can also be implemented directly by the microphone, CPU or operating system. Apple tried that recently in iOS and then reverted.
Speech to text
Up until now, we’ve discussed capabilities that necessitated media processing and machine learning. Speech to text is different.
For several years now we’ve been hammered around speech to text and text to speech. The discussion was usually around the accuracy of the algorithms for speech to text and the speed at which they did their work.
It now seems that many services are starting to offer speech to text and its derivatives baked directly into their user experience. There are several benefits of investing in this direction:
The challenges with speech to text are, first, how to pass the media stream to the speech to text algorithm – not a trivial task in many cases; and second, picking a service that would yield the desired results.
WebRTC meeting size

It used to be 9 tiles. Then, when the pandemic hit, everyone scrambled to offer a 49-tile gallery view. I think that requirement has become less of an issue, while at the same time we see a push towards a greater number of participants in sessions.
How does that work exactly?
If in the past we had a few meeting rooms joining a meeting, with a few people seated in each room, now most of the time these people will join remotely from different locations. The number of people has stayed the same, yet the number of media streams has grown.
We’re also looking to get into more complex scenarios, such as large scale virtual events and webinars. And we want to make these more interactive. This pushes the boundary of a meeting size from hundreds of participants to thousands of participants.
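A quick back-of-the-envelope calculation shows why stream counts dominate this problem. In a full mesh every participant sends directly to every other participant, while with an SFU each participant uploads once and the server fans the stream out. This sketch ignores simulcast, audio-only participants and other real-world refinements:

```javascript
// Back-of-the-envelope stream counts for a call with n participants.
// Mesh: every ordered pair of participants has a direct stream.
// SFU: n uplinks to the server, plus each participant receiving
// the n-1 other streams as downlinks.
function streamCounts(n) {
  return {
    mesh: n * (n - 1),
    sfu: n + n * (n - 1),
  };
}
```

With 10 participants that is already 90 mesh streams versus 10 uplinks and 90 server-managed downlinks; at hundreds or thousands of participants, only architectures that aggressively limit what each client sends and receives remain viable.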
This requirement means we need to put more effort into implementing optimizations in our WebRTC architecture and to employ capabilities that offer greater flexibility from our media servers and client code.
Getting there requires WebAssembly and constant optimization

These new requirements and capabilities are becoming table stakes. Implementing them has its set of nuances, and each of these features also eats into our CPU and memory budget.
It used to be that we had to focus on the new shiny toys – adding cool new features and making them available on the latest and greatest devices. Now it seems we need to push these capabilities onto ever lower-performing devices:
So we now have less capable devices that need more features to work well, requiring us to reduce our CPU requirements to serve them. And did I mention most of these new table stakes need machine learning?
The tool available to us for all of this is WebAssembly on the browser side. It enables us to run code faster in the browser and implement algorithms that would be impossible to achieve in JavaScript.
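The loading mechanism itself is small. Here is a minimal illustration of calling WebAssembly from JavaScript: a hand-assembled module exporting a single i32 add function. Real media pipelines compile C/C++/Rust DSP code into much larger modules, but they are instantiated the same way:

```javascript
// A tiny hand-assembled WebAssembly module exporting add(a, b) -> a + b.
// Byte layout: header, type section, function section, export section,
// code section (local.get 0, local.get 1, i32.add, end).
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm", version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // body: i32.add
]);
const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));
// exports.add is now an ordinary callable JavaScript function
```

In production you would fetch a compiled `.wasm` file and use `WebAssembly.instantiateStreaming()` instead of inlining bytes, but the boundary between JavaScript and the compiled code looks exactly like this.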
It also means we need to constantly optimize the implementation, improving performance to make room for more of these algorithms to run.
10 years into WebRTC and 2 years into the pandemic, we’re only just scratching the surface of what is needed. How are you planning to deal with these new table stakes?
The post WebRTC video calling table stakes appeared first on BlogGeek.me.