A full review and guide to all of the Jitsi Meet-related projects, services, and development options including self-install, using meet.jit.si, 8x8.vc, Jitsi as a Service (JaaS), the External iFrame API, lib-jitsi-meet, and the Jitsi React libraries among others.
The post The Ultimate Guide to Jitsi Meet and JaaS appeared first on webrtcHacks.
A very detailed look at the WebRTC implementations of Google Meet and Google Duo and how they compare using webrtc-internals and some reverse engineering.
The post Meet vs. Duo – 2 faces of Google’s WebRTC appeared first on webrtcHacks.
WebRTC requires an ongoing investment that doesn’t lend itself to a one-off outsourced project. You need to plan for it and work with it long term.
[In this list of short articles, I’ll be going over some WebRTC related quotes and trying to explain them]
WebRTC simplified development and reduced the barrier of entry to many in the market. This brought with it the ability to quickly build, showcase and experiment with demos, proof of concepts and even MVPs. Getting that far is now much easier thanks to WebRTC, but not planning ahead will ruin you.
There are a few reasons why you can’t treat WebRTC as merely a sprint:
I like using this slide in my courses and presentations:
These are the actors in a WebRTC application. While the application is within your control and ownership – everything else isn’t…
Planning on using WebRTC? Great!
Now prepare for it as you would for a long marathon – it isn’t going to be a sprint.
Things to do in your preparation for the WebRTC marathon include:
The post WebRTC is a marathon not a sprint appeared first on BlogGeek.me.
Hearing FUD around WebRTC IP leaks and testing them? The stories behind them are true, but only partially.
WebRTC IP leak tests were popular at some point, and somehow they still are today. Some of it is related to pure FUD while another part of it is important to consider and review. In this article, I’ll try to cover this as much as I can. Without leaking my own private IP address (192.168.123.191 at the moment if you must know) or my public IP address (80.246.138.141, while tethered to my phone at the coffee shop), let’s dig into this topic together.
IP addresses are what got you here to read this article in the first place. They are used by machines to reach out to each other and communicate. There are different types of IP addresses, and one such grouping is the split between private and public addresses.
Private and public IP addresses
Once upon a time, the internet was built on top of IPv4 (and it still mostly is). IPv4 meant that each device had an IP address constructed out of 4 octets – a total of around 4 billion potential addresses. Fewer than the people on earth today and certainly fewer than the number of devices that now exist and connect to the internet.
This got solved by splitting the address ranges to private and public ones. A private IP address range is a range that can be reused by different organizations. For example, that private IP address I shared above? 192.168.123.191? It might also be the private IP address you are using as well.
A private IP address is used to communicate between devices that are hosted inside the same local network (LAN). When a device is on a different network, then the local device reaches out to it via the remote device’s public IP address. Where did that public IP address come from?
The public IP address is what a NAT device associates with the private IP address. This is a “box” sitting on the edge of the local network, connecting it to the public internet. It essentially acts as the translator between private and public IP addresses.
IP addresses and privacy
So we have IP addresses, which are like… home addresses. They indicate how a device can be reached. If I know your IP address then I know something about you:
A quick look at that public IP address of mine from above, gives you the following information on WhatIsMyIpAddress.com:
So…
It is somewhat accurate, but in this specific case, not much. In other cases it can be pretty damn accurate. Which means it is quite private to me.
One thing these nasty IP addresses can be used for? Fingerprinting. This is a process of understanding who I am based on the makeup and behavior of my machine and me. An IP address is one of many characteristics that can be used for fingerprinting.
If you’re not certain if IP addresses are a privacy concern or not, then there’s the notion that IP addresses are most probably considered personally identifiable information – PII (based on rulings of US courts, as far as I can glean). This means that an IP address can be used to identify you as a person. How does that affect us? I’d say it depends on the use case and the mode of communications – but what do I know? I am not a lawyer.
Who knows your IP address(es)?
IP addresses are important for communications. They contain some private information in them due to their nature. Who knows my IP addresses anyway?
The obvious answer is your ISP – the vendor providing you access to the internet. It allocated the public IP address you are using to you and it knows which private IP address you are coming from (in many cases, it even assigned that to you through the ADSL or other access device it installed in your home).
Unless you’re trying to hide, all websites you access know your public IP address. When you connected to my blog to read this article, in order to send this piece of content back to you, my server needed to know where to reply to, which means it has your public IP address. Am I storing it and using it elsewhere? Not that I am directly aware of, but my marketing services such as Google Analytics might and probably does make use of your public IP address.
That private IP address of yours though, most websites and cloud services aren’t directly aware of it and usually don’t need it either.
WebRTC and IP addresses
WebRTC does two things differently than most other browser based protocols out there:
Because WebRTC diverges from the client-server approach AND uses dynamic ephemeral ports, there’s a need for NAT traversal mechanisms to be able to… well… pass through these NATs and firewalls. And while at it, try not to waste too much network resources. This is why a normal peer connection in WebRTC will have 4+ types of “local” addresses as its candidates for such communications:
Lots and lots of addresses that need to be communicated from one peer to another. And then negotiated and checked for connectivity using ICE.
Then there’s this minor extra “inconvenience” that all these IP addresses are conveyed in SDP which is given to the application on top of WebRTC for it to send over the network. This is akin to me sending a letter, letting the post office read it just before it closes the envelope.
IP addresses are necessary for WebRTC (and VoIP) to be able to negotiate and communicate properly.
This one is important, so I’ll write it again: IP addresses are necessary for WebRTC (and VoIP) to be able to negotiate and communicate properly.
It means that this isn’t a bug or a security breach on behalf of WebRTC, but rather its normal behavior which lets you communicate in the first place. No IP addresses? No communications.
One last thing: You can hide a user’s local IP address and even public IP address. Doing that though means the communication goes through an intermediary TURN server.
Past WebRTC “exploits” of IP addresses
WebRTC is a great avenue for hackers:
The main exploits around IP addresses in browsers affecting the user’s privacy were conducted so far for fingerprinting.
Fingerprinting is the act of figuring out who a user is based on the digital fingerprint he leaves on the web. You can glean quite a lot about who a user is based on the behavior of their web browser. Fingerprinting makes users identifiable and trackable when they browse the web, which is quite useful for advertisers.
The leading story here? NY Times used WebRTC for fingerprinting
There’s a flip side to it – WebRTC is/was a useful way of knowing if someone is a real person or a bot running on browser automation as indicated in the comments. A lot of the high scale browser automations simply couldn’t quite cope with WebRTC APIs in the browser, so it made sense to use it as part of the techniques to ferret out real traffic from bots.
Since then, WebRTC made some changes to the exposure of IP addresses:
There are different entities in a WebRTC session that need to have your local IP address in a WebRTC session:
The other peer, the web application and the TURN server don’t really need that access if you don’t care about the local network connectivity use case. If connecting a WebRTC session on the local network (inside a company office, home, etc) isn’t what you’re focused on, then you should be fine with not sharing the local IP address.
Also, if you are concerned about your privacy to the point of not wanting people to know your local IP address – or public IP address – then you wouldn’t want these IP addresses exposed either.
But how can the browser or the application know about that?
VPNs stopping WebRTC IP leaks
When using a VPN, what you are practically doing is making sure all traffic gets funneled through the VPN. There are many reasons for using a VPN and they all revolve around privacy and security – either of the user or of the corporation whose VPN is being used.
The VPN client intercepts all outgoing traffic from a device and routes it through the VPN server. VPNs also configure proxy servers for that purpose so that web traffic in general would go through that proxy and not directly to the destination – all that in order to hide the user itself or to monitor the user’s browsing history (do you see how all these technologies can be used either for anonymity or for the exact opposite of it?).
WebRTC poses a challenge for VPNs as well:
To make all this go away, browsers have privacy policies built into them. And VPNs can modify these policies to accommodate for their needs – things like not allowing non-proxied UDP traffic to occur.
How much should you care about WebRTC IP leaks?
That’s for you to decide.
As a user, I don’t care much about who knows my IP address. But I am not an example – I am also using Chrome and Google services. Along with a subscription to Office 365 and a Facebook account. Most of my life has already been given away to corporate America.
Here are a few rules of thumb I’d use if I were to decide if I care:
In all other cases, just do nothing and feel free to continue using WebRTC “as is”. The majority of web users are doing just that as well.
Do you want privacy or privacy?
This one is tricky.
You want to communicate with someone online. Without them knowing your private or public IP address directly. Because… well… dating. And anonymity. And harassment. And whatever.
To that end, you want the communication to be masked by a server. All of the traffic – signaling and media – gets routed through the intermediary server/service. So that you are masked from the other peer. But guess what – that means your private and public IP addresses are going to be known to the intermediary server/service.
You want to communicate with someone online. Without people, companies or governments eavesdropping on the conversation.
To that end, you want the communication to be peer-to-peer. No TURN servers or media servers as intermediaries. Which is great, but guess what – that means your private and public IP addresses are going to be known to the peer you are communicating with.
At some point, someone needs to know your IP addresses if you want and need to communicate. Which is exactly where we started from.
Oh, and complicated schemes a-la TOR networking are nice, but don’t work that well with real time communications, where latency and bitrates are critical for media quality.
The developer’s angle of WebRTC IP leaks
We’ve seen the issue, the reasons for it and we’ve discussed the user’s angle here. But what about developers? What should they do about this?
WebRTC application developers
If you are a WebRTC application developer, then you should take into account that some of your users will be privacy conscious. That may include the way they think about their IP addresses.
Here are a few things for you to think about here:
If you are a VPN developer, you should know more about WebRTC, and put some effort into handling it.
Blocking WebRTC altogether won’t solve the problem – it will just aggravate users who need access to WebRTC-based applications (=almost all meeting apps).
Instead, you should make sure that part of your VPN client application takes care of the browser configurations to place them in a policy that fits your rules:
What is the WebRTC leak test?
A WebRTC leak test is a simple web application that tries to find your local IP address. This is used to check and prove that an innocent-looking web application with no special permissions from a user can gain access to such data.
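To make this concrete, here is a minimal sketch of such a leak test, using nothing but the standard RTCPeerConnection API (the STUN server URL is just an example – any reachable STUN server works):

```javascript
// Minimal WebRTC "leak test": no camera/mic permission is requested,
// yet ICE gathering still reveals candidate addresses.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
});
pc.createDataChannel('probe'); // any channel – we just need ICE to start gathering
pc.onicecandidate = (event) => {
  if (event.candidate) {
    // Each candidate line carries an address: host (local), srflx (public via
    // STUN), etc. Modern browsers mask host addresses as mDNS .local names
    // unless the user has granted camera/microphone access.
    console.log(event.candidate.candidate);
  }
};
pc.createOffer().then((offer) => pc.setLocalDescription(offer));
```

Run in a browser console, this prints the candidates a web page can collect – which is exactly the information the leak-test sites display.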
Does WebRTC still leak IP?
Yes and no.
It really depends on how you look at this issue.
WebRTC needs IP addresses to communicate properly. So there’s no real leak. Applications written poorly may leak such IP addresses unintentionally. A VPN application may be implemented poorly so as to not plug this “leak” for the privacy conscious users who use them.
Can you plug the WebRTC leak?
Yes. By changing the privacy policy in Chrome. This is something that VPNs can do as well (and should do).
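For illustration, here is roughly what that looks like from a VPN’s browser extension – a sketch assuming the extension’s manifest declares the “privacy” permission:

```javascript
// Restrict WebRTC candidate gathering via Chrome's privacy API.
// 'disable_non_proxied_udp' forces traffic through the configured proxy;
// 'default_public_interface_only' hides local addresses but keeps direct UDP.
chrome.privacy.network.webRTCIPHandlingPolicy.set(
  { value: 'disable_non_proxied_udp' },
  () => console.log('WebRTC IP handling policy applied'),
);
```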
How severe is the WebRTC leak?
The WebRTC leak of IP addresses gives web applications the ability to know your private IP address. This has been a privacy issue in the past. Today, to gain access to that information, web applications must first ask the user for consent to access his microphone or camera, so this is less of an issue.
What is a good VPN to plug the WebRTC leak?
I can’t really recommend a good VPN to plug WebRTC leaks. This isn’t what I do, and frankly, I don’t believe in such tools plugging these leaks.
One rule of thumb I can give here: don’t go for a free VPN. If it is free, then you are the product, which means they sell your data – the exact privacy you are trying to protect.
The post What is the WebRTC leak test and should you be worried about it? appeared first on BlogGeek.me.
Step-by-step guide on how to fix bad webcam lighting in your WebRTC app with standard JavaScript APIs for camera exposure or natively with UVC drivers.
The post Fix Bad Lighting with JavaScript Webcam Exposure Controls (Sebastian Schmid) appeared first on webrtcHacks.
What WebRTC did to VoIP was reduce the barrier of entry for new vendors and increase the level and domains of innovation.
[In this list of short articles, I’ll be going over some WebRTC related quotes and trying to explain them]
WebRTC was an aha moment in the history of communications.
It did two simple things that were never before possible for “us” VoIP developers:
This in turn, brought with it the two aspects of WebRTC illustrated above:
For many years I’ve been using this slide to explain why WebRTC is so vastly different than what came before it:
That said, truly innovating, productizing and scaling WebRTC applications require a bit more of an investment and a lot more in understanding and truly grokking WebRTC. Especially since WebRTC is… well… it is web and VoIP while at the same time it isn’t exactly web and it isn’t exactly VoIP:
This means that you need to understand and be proficient with both VoIP development (to some extent) and with web development (to some extent).
Looking to learn WebRTC? Here are some guidelines on how to get started with learning WebRTC.
The post WebRTC reduced barriers and increased innovation in communications appeared first on BlogGeek.me.
With FIDO coming to replace passwords in applications, CPaaS vendors are likely to see their 2FA revenues decline.
2FA revenue has always lived on the premise that passwords are broken. I’ve written about this back in 2017:
Companies are using SMS for three types of services these days:
1. Security — either through two-factor authentication (2FA), for signing in to services; or one-time password (OTP), which replaces the need to remember a password for various apps
2. Notifications for services — these would be notifications that you care about or that offer you information, like that request for feedback or maybe that birthday coupon
3. Pure spam — businesses just send you their unsolicited crap trying to get you to sign up for their services
Spam is spam. Notifications are moving towards conversations on social networks. And the security SMS messages are going to be replaced by FIDO. Here’s where we’re headed.
Let’s take this step by step.
Passwords and the FIDO Alliance
Passwords are the bane of our modern existence. A necessary evil.
To do anything meaningful online (besides reading this superb article), you need to login or identify yourself against the service. Usually, this is done by a username (email or an identity number most likely) and a password. That password part is a challenge:
I use a password manager to handle my online life. My wife uses the “forgot my password” link all the time to get the same results.
It seems that whatever was tried in the passwords industry has failed in one way or another. Getting people house trained on good password practices is just too damn hard and bound to fail (just like trying to explain to people not to throw facial tissue down the toilet).
Experts have since been pushing for a security model that authenticates a user with multiple “things”:
Smartphones today are something you own, and they offer something you are by having fingerprint ID and face ID solutions baked into them. That last piece – something you know – is the password.
Enter FIDO.
FIDO stands for Fast IDentity Online.
Here’s the main marketing spiel of the FIDO Alliance:
The FIDO Alliance seems to have more members than it has views on that YouTube video (seriously).
By their own words:
The FIDO Alliance is working to change the nature of authentication with open standards that are more secure than passwords and SMS OTPs, simpler for consumers to use, and easier for service providers to deploy and manage.
So:
What more can you ask for?
Well… for this standard to succeed.
And here is what brought me to write this article. The recent announcement from earlier this month – Apple, Google and Microsoft all committing to the FIDO standard. They are already part of FIDO, but now it is about offering easier mechanisms to remove the need for a password altogether.
If you are reading this, then you are doing that in front of an Apple device (iPhone, iPad or MacOS), a Google one (Android or Chrome OS) or a Microsoft one (Windows). There are stragglers using Linux or others, but these are tech-savvy enough to use passwords anyways.
These devices are more and more active as both something you own and something you are. My two recent laptops offer fingerprint biometric identification and most (all?) smartphones today offer the same or better approaches as well.
I have long waited for Google and Apple to open up their authentication mechanisms in Android and iOS to let developers use them the same way end users do to access Google and Apple services – when I login to any Google connected site anywhere, my smartphone asks me if that was me.
And now it seems to be here. From the press release itself:
Today’s announcement extends these platform implementations to give users two new capabilities for more seamless and secure passwordless sign-ins:
1. Allow users to automatically access their FIDO sign-in credentials (referred to by some as a “passkey”) on many of their devices, even new ones, without having to re-enroll every account.
2. Enable users to use FIDO authentication on their mobile device to sign in to an app or website on a nearby device, regardless of the OS platform or browser they are running.
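On the developer side, these capabilities surface through the WebAuthn API that FIDO2 builds on. A minimal registration sketch (inside an async function; the challenge and user details would come from your server, and all values here are illustrative):

```javascript
// Register a FIDO credential ("passkey") – the browser/OS takes over,
// prompting for the platform authenticator (fingerprint, face, PIN).
const credential = await navigator.credentials.create({
  publicKey: {
    challenge: Uint8Array.from('random-server-challenge', (c) => c.charCodeAt(0)),
    rp: { name: 'Example Service' },
    user: {
      id: Uint8Array.from('user-1234', (c) => c.charCodeAt(0)),
      name: 'alice@example.com',
      displayName: 'Alice',
    },
    pubKeyCredParams: [{ type: 'public-key', alg: -7 }], // ES256
    authenticatorSelection: { userVerification: 'required' },
  },
});
// The attestation inside `credential` is sent to the server for verification;
// subsequent logins use navigator.credentials.get() with a fresh challenge.
```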
So… no need for passwords. And no need for 2FA. Or OTP.
FIDO is going to end the farce of using 2FA and OTP technologies.
2FA: a CPaaS milking cow
2FA stands for Two Factor Authentication while OTP stands for One Time Password.
With 2FA, you enter your credentials and then receive an SMS or email (or more recently a WhatsApp message) with a number. You have to paste that number into the web page or app to login. This adds the something you own part to the security mechanism.
OTP is used to remove the password altogether. Tell us your email and we will send you a one time password over SMS (or email), usually a few digits, and you use that to login for just this once.
2FA, OTP… the ugly truth is that they are a nagging pain for everyone. Not only users but also application developers. The devil is always in the details with these things:
The list goes on. So CPaaS vendors have gone ahead and incorporated 2FA-specific solutions into their bag of services. Twilio even acquired Authy, a customer of theirs, in 2015, just to have that in their offerings at the time.
The great thing about 2FA (for CPaaS vendors), is that the more people engage with the digital world, the more they will end up with a 2FA or OTP SMS message. And each such message is a minor goldmine: A single SMS on Twilio in the US costs $0.0075 to send. A 2FA transaction will cost an additional $0.09 on top of it.
Yes. 2FA services bring great value. And they are tricky to implement and maintain properly at scale. So the price can be explained. But… what if we didn’t really need 2FA at all?
The death of 2FAPutting one and one together:
Apple, Google and Microsoft committing to FIDO and banishing passwords by making their devices take care of something you know, something you own AND something you are means that users will not need to identify themselves in front of services using passwords AND they won’t be needing OTP or 2FA either.
The solution ends up being simpler for the user AND simpler for the service provider.
Win Win.
Unless you are a CPaaS vendor who makes revenue from 2FA. Then it is pure loss.
What alternatives can CPaaS vendors offer?
As a first step, the “migration” from “legacy” 2FA and OTP towards Apple/Google’s new and upcoming FIDO solution. Maybe a unified API on top of Apple and Google, but that’s a stretch. I can’t see such APIs costing $0.09 per authentication. Especially if Apple and Google do a good job at the developer tooling level for this.
* I removed Microsoft closer to the end here because they are less important for this to succeed. They are significant if this does succeed in making it even simpler on laptops so one won’t have to reach for his phone to login when on a laptop.
The future of CPaaS
5 years ago, back in that 2017 article, I ended it with these words:
Goodbye SMS, It’s Time for Us to Move On
Don’t be fooled by the growth of 2FA and application-to-person (A2P) type messages over SMS. This will have a short lifespan of a few years. But five to 10 years from now? It will just be a service sitting next to my imaginary fax machine.
We’re 5 years in and the replacements of SMS are here already.
All that revenue coming to CPaaS from SMS is going to go elsewhere. Social omnichannel introduced by CPaaS vendors will replace that first chunk of revenue, but what will replace the 2FA and OTP? Can CPaaS vendors rely on FIDO and build their own business logic on top and around it for their customers?
It seems to me revenue will need to be found elsewhere.
Interested in learning more about the future of CPaaS? Check out my ebook on the topic (as relevant today as it was at the time of writing).
Download my CPaaS in 2020 ebook
The post FIDO Alliance and the end of 2FA revenue to CPaaS vendors appeared first on BlogGeek.me.
Saúl Ibarra Corretgé of Jitsi walks through his epic struggle getting Apple iOS bitcode building with WebRTC for his Apple Watch app.
The post The WebRTC Bitcode Soap Opera (Saúl Ibarra Corretgé) appeared first on webrtcHacks.
What was nice to have is now becoming mandatory in WebRTC video calling applications. This includes background blurring, but also a lot of other features as well.
Do you remember that time, not long ago, when 16 participants on a call was the highest number that product managers asked for? Well… we’re not there anymore. In many cases, the number has grown. First to 49. Then to a lot more, with nuances on what exactly it means to have larger calls. We now see anywhere from 100 to 10,000 participants being considered a “meeting”.
I’ve been talking about and mentioning table stakes for quite some time – during my workshops, in my messages on LinkedIn, in WebRTC Insights. It was time I sat down to write about it on my blog.
This isn’t really about WebRTC, but rather what users now expect from WebRTC applications. These expectations are in many cases table stakes – features that are almost mandatory in order to be even considered as a relevant vendor in the selection process.
What you’ll see here is almost the new shopping list. Since users are different, markets are different, scenarios are different and requirements vary – you may not need all of them in your application. That said, I suggest you take a good look at them, decide which ones you need tomorrow, which you don’t need and which you have to get done yesterday.
Background blurring/replacement
Obvious. I have a background replacement. I never use it in my own calls. Because… well… I like my background. Or more accurately – I like showing my environment to people. It gives context and I think makes me more human.
This isn’t to say people shouldn’t use background replacement or that I’ll hate them for doing that – just that for me, and my background – I like keeping the original.
Others, though, want to replace their background. Sometimes because they don’t have a proper place where the background isn’t cluttered or “noisy”. Or because they just want to have fun with it.
Whatever the reason is, background blurring and replacement are now table stakes – if your app doesn’t have it, then the app that does in your market will be more interesting and relevant to the buyers.
Here’s how I see the development of the requirements here:
Video lighting
If I recall correctly, Google Meet started with this feature, and since then it has been cropping up in other meeting solutions. We all use webcams, but none of us has good lighting. It might be a window behind (or in my case to the side), the weather out the window, the hour in the day, or just poor lighting in the room.
While this can be fixed, it isn’t. Much like the cluttered room, the understanding is that humans are lazy or just not up to the task of understanding what to do to improve video lighting on their own. And just like background removal, we can employ machine learning to improve lighting on a video stream.
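Under the hood, lighting correction and background replacement in the browser typically ride on the same pipeline: pull raw frames off the camera track, run each through an ML model, and emit the processed frames as a new track. A minimal sketch using Chrome’s insertable streams, inside an async function, where enhanceFrame() is a hypothetical stand-in for the actual segmentation or relighting model:

```javascript
// Frame-by-frame video processing pipeline (Chrome-specific APIs).
const stream = await navigator.mediaDevices.getUserMedia({ video: true });
const [track] = stream.getVideoTracks();
const processor = new MediaStreamTrackProcessor({ track });
const generator = new MediaStreamTrackGenerator({ kind: 'video' });

const transformer = new TransformStream({
  async transform(frame, controller) {
    const processed = await enhanceFrame(frame); // hypothetical ML step
    frame.close(); // release the original frame's memory promptly
    controller.enqueue(processed);
  },
});

processor.readable.pipeThrough(transformer).pipeTo(generator.writable);
// `generator` is itself a video track – attach it to a <video> element
// or add it to an RTCPeerConnection like any other track.
const processedStream = new MediaStream([generator]);
```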
Noise suppression/cancellation
I started using this stock image when I started doing virtual workshops. It is how I like to think of my nice neighbor (truth be told – he really is nice). It just seems that every time I sit down for an important meeting, he’d be on one of his renovation sprees.
The environment in which we’re conducting our calls is “polluted” with sounds. My mornings are full of lawn mower noises from the park below my apartment building. The rest of my day is filled with noise from the other family members in my apartment and from my friendly neighbor. For others, it is the classic dog barking and traffic noises.
Same as with video, since we’re now doing these sessions from everywhere at any time, it is becoming more important than ever to have this capability built into the service used.
Some services today offer the ability to suppress and cancel different types of noises. You don’t have control over what to suppress, but rather get an on/off switch.
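That baseline on/off switch is even exposed in the browser itself, as a getUserMedia constraint – the ML-based suppression commercial apps ship goes well beyond it (sketch inside an async function):

```javascript
// Ask the browser for its built-in noise suppression (and echo cancellation).
const stream = await navigator.mediaDevices.getUserMedia({
  audio: { noiseSuppression: true, echoCancellation: true },
});
```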
Four important things here:
And last but not least, this is the kind of feature that can also be implemented directly by the microphone, CPU or operating system. Apple tried that recently in iOS and then reverted.
Speech to text
Up until now, we’ve discussed capabilities that necessitated media processing and machine learning. Speech to text is different.
For several years now we’ve been bombarded with speech to text and text to speech. The discussion was usually around the accuracy of the algorithms for speech to text and the speed at which they did their work.
It now seems that many services are starting to offer speech to text and its derivatives baked directly into their user experience. There are several benefits of investing in this direction:
The challenges with speech to text are, first, how to pass the media stream to the speech to text algorithm – not a trivial task in many cases – and later, picking a service that yields the desired results.
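For a feel of the simplest starting point, here is a sketch using Chrome’s Web Speech API – fine for prototyping live captions, though production services usually route the audio to a server-side engine instead:

```javascript
// Live captions in the browser (Chrome exposes this as webkitSpeechRecognition).
const recognition = new webkitSpeechRecognition();
recognition.continuous = true;     // keep listening across utterances
recognition.interimResults = true; // emit partial results while the user speaks
recognition.onresult = (event) => {
  const latest = event.results[event.results.length - 1];
  console.log('caption:', latest[0].transcript);
};
recognition.start();
```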
WebRTC meeting size
It used to be 9 tiles. Then when the pandemic hit, everyone scrambled to do 49-tile gallery view. I think that requirement has become less of an issue, while at the same time we see a push towards a greater number of participants in sessions.
How does that work exactly?
If in the past we had a few meeting rooms joining in to a meeting, with a few people seated in each room, now most of the time, we will have these people join in remotely from different locations. The number of people stayed the same, yet the number of media streams grew.
We’re also looking to get into more complex scenarios, such as large scale virtual events and webinars. And we want to make these more interactive. This pushes the boundary of a meeting size from hundreds of participants to thousands of participants.
This requirement means we need to put more effort into implementing optimizations in our WebRTC architecture and to employ capabilities that offer greater flexibility from our media servers and client code.
Getting there requires WebAssembly and constant optimization
These new requirements and capabilities are becoming table stakes. Implementing them has its set of nuances, and each of these features is also eating into our CPU and memory budget.
It used to be that we had to focus on the new shiny toys. Adding new cool features and making them available on the latest and greatest devices. Now it seems that we’re in need of pushing these capabilities into ever lower performing devices:
So we now have less capable devices that need more features to work well, requiring us to reduce our CPU requirements to serve them. And did I mention most of these new table stakes need machine learning?
The tool available to us for all this is WebAssembly on the browser side. It enables us to run code faster in the browser and implement algorithms that would be impossible to achieve using JavaScript.
It also means we need to constantly optimize the implementation, improving performance to make room for more of these algorithms to run.
10 years into WebRTC and 2 years into the pandemic, we’re only just scratching the surface of what is needed. How are you planning to deal with these new table stakes?
The post WebRTC video calling table stakes appeared first on BlogGeek.me.
Anycast enables WebRTC services to better manage and optimize global deployments at scale.
In 2021 we started seeing a new technology finding its way more and more into WebRTC applications: Anycast. Unlike other shiny new toys, Anycast isn’t shiny and it isn’t new. In fact, it was defined in the previous millennium, before the era of the smartphone.
I’ve been “doing” VoIP for over 20 years now, but wasn’t really aware of Anycast. I dug around a bit, and ended up sitting with William King, CTO & Co-founder of Subspace, to learn more about Anycast and its use with WebRTC.
Here’s what I learned about how WebRTC developers can and are using Anycast – and how it can assist them in their own deployments.
For someone sitting in the clouds today, the lowest level of networking you can think of is the IP level (I am told there are lower levels, but for me IP is low enough).
At that level, if one machine wants to reach another, it needs to use its IP address as the destination. In most cases, and at least in 99% of all of the things I’ve implemented myself as a developer, you do this using what is known as Unicast:
With Unicast, each device on the network has its own unique IP address that I can use to reach it directly (and yes, I am ignoring here the distinction between local networks and public networks and how they handle it). The key thing here is that an IP address is associated with one device only, so as the illustration above shows, when the red device wants to send a message to the green device, it can send it via Unicast simply by stating the green device’s IP address as the destination.
Anycast is different. With Anycast, multiple devices on the network can have the same IP address associated with them. The end result is more akin to this:
In the illustration above we have 3 different green devices with the same IP address. When the red device wants to send a message to their IP address, it doesn’t really know which one will be receiving the message – just that it is somehow going to be routed to one of them. Which one? The “closest” one usually, whatever that means.
What does that mean exactly?
Here’s how Wikipedia explains it (the illustrations above are rough sketches I did based on the ones I found on their page explaining Anycast):
Anycast is a network addressing and routing methodology in which a single destination IP address is shared by devices (generally servers) in multiple locations. Routers direct packets addressed to this destination to the location nearest the sender, using their normal decision-making algorithms, typically the lowest number of BGP network hops. Anycast routing is widely used by content delivery networks such as web and DNS hosts, to bring their content closer to end users.
Let’s emphasize this with colors, so we focus on the important bits –
Anycast is something that is being widely used today, just not in VoIP or WebRTC.
The main purpose of Anycast at the end of the day is to provide high availability for stateless services.
The best thing you can do with Anycast is to deal with single request-response pairs – stateless.
Why? You send out your request (for example to translate a DNS name to an IP address; or for that next chunk of a Netflix episode you’re watching), and the server (device) you reach on the network sends you that response.
Looking for the next chunk in the Netflix episode or need another DNS name translation? Easy – send another request, and the same or another server with the same Anycast IP address will respond.
Enter WebRTC.
A world where everything and anything is stateful.
There’s signaling. With its connection state machine, ICE negotiation state machine (see? State Machine hints of this not being stateless) and application logic on top.
Then there are TURN servers and media servers. All of them need to understand the state and manage incoming media flow that is both stateful and real time.
This makes utilizing Anycast in WebRTC quite a challenge.
While we’d like to enjoy Anycast’s obvious advantage of high availability (and a few other advantages it gives), in order to do so, we need to overcome the statefulness challenge first.
The simplest link in WebRTC is the TURN server. While stateful, its job is rather simple – routing data between peers without much thought. This makes TURN servers the best candidate for infrastructure optimizations using Anycast.
Let’s see what advantages an Anycast TURN infrastructure can give WebRTC applications.
3 advantages of Anycast for WebRTC
Once you get down to it, deploying TURN servers and maybe even media servers using Anycast can give some interesting benefits to your infrastructure.
Here are the main advantages – ones that are going to define how WebRTC infrastructure will be designed and deployed in the coming years.
#1 – Better geolocation
When a user connects to your WebRTC application, your best bet is to make sure the user is as close to your infrastructure as possible. The faster you get them onto a TURN or a media server, the better media quality you can expect.
Why? Simple. Because from the server the user connected to onwards, you control and own the media flow. And if you control and own it, you can make it better. But that part of the journey the media makes from the user to your first server? That’s something you don’t control and own, so your ability to improve quality there is lower.
This is why whenever a user joins, you are likely to start doing some geolocation, trying to figure out where the user is coming from in order to allocate your “closest” TURN or media server for them.
That process is usually done by looking at the origin IP address and then using a third party service to indicate the location of that IP address – or by DNS geolocation, letting a DNS server do that for us somehow. When we leave it to the DNS, we are at the mercy of the DNS service hosting. It works, but not always. And it is also somewhat slow to update.
Remember that time you changed the DNS configuration of your WordPress server? Were you told it can take a few hours to “propagate”? Well… that’s exactly the problem you might be facing in getting routes updated when using DNS geolocation.
With Anycast, geolocation takes place at the BGP level. Don’t ask me what that is exactly, but it means two things for us:
That second point is a big difference. DNS servers have a different “job to be done” than WebRTC Anycast services. The latter focuses on real time delivery, and on better and more optimized geolocation as an extension of it. So you can expect better results overall, especially on a global scale.
#2 – Higher resiliency (and security)
Operating an Anycast service requires solving the statefulness challenge when it comes to WebRTC. Once that is solved, we gain the benefit of having our data routed through the closest server over the IP layer.
If the physical server we’re working in front of goes down, then Anycast will reroute future traffic through other servers with the same IP address. And that gives us a natural resiliency.
Furthermore, assume I am an “adversary” that wants to take down your service or disrupt it.
I can check the IP addresses you are using and map your servers. I can then commence with a DDoS attack to flood one or more of your servers via these IP addresses.
If that IP address belongs to a specific server, it will require a relatively small amount of traffic to bring that server down to its knees. But if that IP address belongs to multiple servers via Anycast, then flooding that IP address means trying to flood the whole network and not a specific server – a much harder task to achieve.
Resiliency comes built-in with Anycast.
#3 – Ease of configuration
The ease of configuration is something you get from the first two advantages.
Once we’re using Anycast, then there are a few things that make our lives easier:
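One concrete example of that ease: the client-side ICE configuration can stay a single static entry, because the same anycast address fronts every region. A sketch with placeholder hostname and credentials:

```javascript
// One anycast TURN address serves all regions – BGP routes each client
// to the nearest physical server behind it, with no geolocation logic here.
const pc = new RTCPeerConnection({
  iceServers: [{
    urls: ['turn:turn.example.com:3478?transport=udp'],
    username: 'user',      // placeholder – typically short-lived credentials
    credential: 'secret',
  }],
});
```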
Anycast is where much of the future of WebRTC services lies.
We are shifting our focus to how to optimize and maintain WebRTC infrastructure at scale. Last year it was all about getting to that 49-grid gallery view. This year it is a lot more nuanced. It is mostly about scale, performance and global reach as far as I can tell.
Anycast can play a vital role in that area and in how services can improve their performance and perceived quality for their users.
The post 3 advantages of Anycast in WebRTC you didn’t know about appeared first on BlogGeek.me.
RTC@Scale was Facebook’s virtual WebRTC event, covering current and future topics. Here’s the summary so you can pick and choose the relevant ones for you.
WebRTC Insights is a subscription service I have been running with Philipp Hancke for over a year now. The purpose of it is to make it easier for developers to get a grip of WebRTC and all of the changes happening in the code and browsers – to keep you up to date so you can focus on what you need to do best – build awesome applications.
We got into a kind of a flow:
It is fun to do and the feedback we’re getting is positive.
That said, being us, means that we can’t really sit still… or in this case – Philipp…
We published this on Monday the week after the event took place to our WebRTC Insights clients, and now, we’re opening it up for everyone as well.
Philipp decided it would make sense to summarize the recent RTC@Scale “recruiting event” that Facebook did – the RSVP was explicitly asking for consent to be contacted. The technical depth of the talks was amazing, so we’ve added an “out of order” issue for you, just for this.
The intent is for you to *not* spend 5 hours but rather to focus on the select sessions that are relevant for you.
The event setup was simple:
Real-time Communication for Today and Future Experiences / Maher Saba @ Meta
Panel: RTC in the Metaverse / Sriram Srinivasan, Mike Arcuri, Paul Boustead, and Cullen Jennings
These sessions focus on roadmap and far future views. We’d rather have a bit more on the here and now and the immediate future requirements than what would happen in 3, 5 or 10 years time, but hey – they are recruiting.
Holographic Video Calling / Nitin Garg @ Meta
Spatial Communications at Scale in Virtual Environments / Paul Boustead @ Dolby
RTC3 / Justin Uberti @ Clubhouse
Live QA
Audio ML is quite interesting. Large vendors are at it, and when (if?) the results will trickle into vanilla WebRTC is yet to be seen. Key takeaway: ML-based noise suppression is more important than echo cancellation these days.
Developing Machine Learning Based Speech Enhancement Models for Teams and Skype / Ross Cutler @ Microsoft
Can AI Disrupt Speech Compression? / Jan Skoglund @ Google
Live QA
AV1 is coming. It will take time to be here. To get a grip over it and see what companies are doing, we got Google and Visionular.
Google is what goes inside WebRTC. Visionular is what you can buy commercially on the market for server or proprietary implementations.
Your focus should probably be on low bitrates and slide sharing scenarios.
AV1 Encoder for RTC / Marco Paniconi @ Google
AV1 for RTC: Current and Future / Zoe Liu @ Visionular
Live QA
We found this part to be most applicable to current problems. This is where you should be spending your time and focus right now.
Making Meta RTC Audio More Resilient / Andy Yang @ Meta
Private Calling at WhatsApp / Xi Deng @ Meta
Group Call End-to-End Encryption and the Challenges of Encrypting Large Calls / Abo-Talib Mahfoodh @ Meta
Live QA
What you are seeing here isn’t the run-of-the-mill issue of a WebRTC Insights newsletter. It wasn’t even intended. But it does show the effort and focus we put on everything WebRTC for our clients. Watching a five hour event twice and producing actionable notes is not an easy task. It changed our weekend plans, but we ended up being very satisfied with the results, if only for our own notes.
If your company is relying heavily on WebRTC, then you should at the very least try this out. Reach out to me via the form at the end of the WebRTC Insights landing page and I’ll send you a sample issue.
The post RTC@Scale summary and insights appeared first on BlogGeek.me.
The performance of WebRTC in Chrome as well as other RTC applications needed to be improved a lot during the pandemic when more people with a more diverse set of machines and network connections started to rely on video conferencing. Markus Handell is a team lead at Google who cares a lot about performance of […]
The post Optimizing WebRTC Power Consumption (Markus Handell) appeared first on webrtcHacks.
How time flies when you’re having fun… For me the definition of fun was starting BlogGeek.me, deciding to write about WebRTC for the first time and having 10 years fly by.
I had a few updates to write with no specific theme to them. Mostly about things just completed and a few upcoming projects and events. Then it dawned on me that I’ve been at it for a bit over 10 years now (!)
On January 5, 2012 I published the first post on this blog. I had just left RADVISION for Amdocs, and wanted to have a place of my own out there that wouldn’t be controlled by any vendor. So I started BlogGeek.me. I didn’t know what I was going to write about, but I did know it would include some 3-4 posts about WebRTC before moving on to other technical issues.
That first WebRTC post? Got published on March 8, 2012. It was about what WebRTC is. Fast forward 10 years, and more people today know BlogGeek.me than know me as Tsahi. And in many ways, BlogGeek.me is synonymous with WebRTC articles. Not what I had in mind when I started, but I am definitely happy with where it led me.
Anyways, here are a few updates on my ongoing projects, as well as where to find me.
Free eBook: WebRTC for Business People
Earlier this month, I updated my WebRTC for Business People ebook.
Its last update took place in 2019, before the pandemic, so it really needed to get up to speed with where we are now. I worked on this update in the last couple of months, updating much of the content and replacing many of the showcased vendors.
I’d like to thank Daily for picking up the sponsorship for this work. They’re one of the fascinating CPaaS vendors out there innovating in the domain of UX/UI.
Download the WebRTC for Business People ebook for free
WebRTC Trends for 2022
I just finished my WebRTC Trends for 2022 workshop. I did it twice in parallel to accommodate different time zones and had a good-sized audience joining live for the 6 hours in total.
During the workshop we went through many topics. I tried covering everything I think is relevant for 2022 when it comes to WebRTC, so that you can prepare properly.
The Advanced WebRTC Architecture course is due for another update.
The above image indicates the numbers for the course at the moment.
Around 15-20 lessons are going to be updated and recorded again – to make sure content is relevant and fresh.
One of the lessons will be dropped with 2-3 new lessons being added.
Until I finish all that work, I am announcing a 10% discount on all courses, ebooks and workshops on my webrtccourse.com website. Just use the coupon code 10YEARS.
If you enroll in the courses now, you’ll have a 1-year access to them which will include all of the upcoming updates.
WebRTC Insights
Philipp Hancke is running the WebRTC Insights with me. This is fun to do, especially with a good friend and partner. We’ve grown the offering in the last few months, adding video release notes interpretation for WebRTC, color coding for issues, etc.
This weekend we worked on getting our subscribers a detailed summary of Facebook’s RTC@Scale event – so they can focus on what they find relevant in the 5-hour event.
We’ve celebrated a year of WebRTC Insights recently – if you’d like to join our service for the coming year and be updated on everything technical (and non-technical) about WebRTC just let us know.
Enterprise Connect 2022: Here I come!
After two years at home, it is time to pack a bag for the first time and see a plane from the inside.
I will be at Enterprise Connect 2022, taking place in March in Orlando. This will also be my first opportunity to see in real life (!) the people from Spearline who acquired my company, testRTC. I’ll be going there to represent Spearline and showcase testRTC to whoever wants to listen.
If you are there – let me know – I’ll be happy to meet you as well.
Kranky Geek Virtual 2022 Spring
We’re going to have another Kranky Geek event. We plan to have it in April 2022.
At the moment, we’re working on the sponsors and speakers list. If you’re one of those – let me know (we keep a tight ship, so I can’t promise anything).
Here’s for the next 10 years
The last 10 years have been fun. I am actively thinking of what will happen with WebRTC and communications in the coming years. There are some trends that are just around the corner while others are more long term in their nature (web3 anyone?).
Here’s to seeing you virtually and in person during 2022 and beyond.
The post WebRTC, BlogGeek.me, 2022 & 10 years of blogging appeared first on BlogGeek.me.
Exploration and examples of the 5 different ways you can save an image from your webcam in JavaScript in 2022. Includes canvas.toBlob, OffscreenCanvas, createImageBitmap, ImageCapture, and ReadableStreams / MediaStreamTrackProcessor
The post Ways to save an image from your webcam in 2022 appeared first on webrtcHacks.
I have been looking at these Chrome usage statistics available on chromestatus.com for a while together with Tsahi Levent-Levi for WebRTC Insights but they are too fascinating to keep them behind our paywall. Let’s do some coffee ground reading on the usage of a number of important APIs and what it tells us about what […]
The post How is WebRTC doing and who is driving usage? (Hint: Google Meet) appeared first on webrtcHacks.
A look at WebRTC trends and what is in store in 2022, especially now, as the market is heating up and differentiation and proprietary technology are in again.
We started this year with my WebRTC trends for 2021, so it is time to conclude the year (stating that I was generally spot on), and look at what 2022 is bringing us. In many ways, 2022 is a continuation of what we had in 2021 with some interesting nuances.
My main worry is that a war is brewing. On one hand, Google is leading WebRTC, but probably not seeing enough value out of it as a big corporation. On the other hand, much of the rest of the industry is frustrated at what is taking place with the main WebRTC library – libwebrtc – that is maintained, controlled and owned by Google. This is leading to many different forks along with discussions and attempts to find a better structural solution to this big initiative called WebRTC.
A lot of this trickled out throughout the year as part of the WebRTC Insights service that I am running along with Philipp Hancke.
I can ramble on in this overview, but it is best to just… start running with it.
Two years ago we shifted gears, moving from the Growth era in WebRTC to the WebRTC Differentiation era. I discussed that at length earlier this year, when I explained how WebRTC differentiation manifests itself.
It started with Google splitting up their WebRTC development efforts, making decisions on what to place in libwebrtc, their open source implementation of WebRTC, and what to implement outside of it. The verdict came in a way that any machine learning algorithm that can be kept outside of WebRTC – will be.
Other large vendors understandably followed suit.
Peak WebRTC
Have we reached peak WebRTC?
Philipp made me aware of the Chrome Platform Status website and the many statistics you can find there. It makes it possible to track how many page loads include certain API calls, with many of these relating to WebRTC. The one I selected for the diagram above is that of GetUserMediaPromise, showing how often web pages loaded in Chrome ask permission to access a camera or a microphone – leading more often than not to a WebRTC session.
We’ve seen a huge increase in the use of WebRTC throughout the pandemic, and for the last half year things seem to have settled at ~4x what they were prior to the pandemic. Whether this will last is a good question. Clubhouse seems to have plateaued since its strong debut, for example.
No one really knows what the next 12 months are going to look like, and if Omicron or yet another strain of the virus will push us back to the safety of our homes and quarantine – or what things will look like when we find ourselves on the other end of this pandemic.
Google and libwebrtc
Google has a stranglehold on WebRTC – for better and for worse.
ALL web browsers today that support WebRTC do so via libwebrtc, which is Google’s implementation of WebRTC:
Google seems to have shifted to a kind of a maintenance mode with WebRTC. They have also changed their mindset and are focusing with libwebrtc on what’s good for Google. It all makes sense. For them…
After 10+ years of holding up the mantle for the whole industry, it is becoming tiresome, especially when there’s not enough to show for it internally. The shift was inevitable.
Google is doing what is good for Google with WebRTC
That means that if your use case falls within the realm of what Google does and needs, then you’re in good shape and good luck. And if you aren’t… well…
In the meantime, the industry around WebRTC has well-meaning people. Those who want to see WebRTC grow, flourish and thrive. They are trying to help, but helping is HARD:
A deadlock.
Breakout open source WebRTC technologies
There has been a lot of open source built around WebRTC, and in the past two years that has accelerated as well – the pandemic and all.
What we’ve seen in these 10 years are a few distinct open source projects that have broken out from the pack, making themselves more popular than others. I know the list here is lacking and others are used as well – but assume that these are the ones I see the most in the market when it comes to open source (I am intentionally ignoring the VoIP/SIP open source projects such as FreeSwitch and Asterisk here).
The illustration above shows my current thinking about the trends surrounding these top open source WebRTC technologies:
Then there’s Electron. A PC application framework built on top of the Chromium browser engine – Electron is popular with WebRTC apps as well.
Electron is a great starting point: you write your web app. Wrap it with Electron. And you’re done.
But in many ways, that’s just the beginning of your journey. Arnaud Budkiewicz of RingCentral spoke at the recent Kranky Geek about their journey:
Using Electron means surrendering to the Chromium+libwebrtc release cadence that Electron has opted for – or digging deeper and owning that technology stack as well.
WebRTC in CPaaS is… complicated
Using CPaaS WebRTC solutions was never easy, and in 2022 it is going to be even more complicated. Why? Because the landscape is unclear.
Twilio
Twilio is chasing CEP butterflies. I am all for it – though sadly it has nothing to do with WebRTC.
They have been slow to respond to the market changes when it comes to WebRTC, and it still feels like WebRTC is an afterthought to them.
Agora
Agora’s stock has been acting out after their successful IPO.
While their performance and traffic are going strong, there are market uncertainties there – peak WebRTC is one, and the huge spike that was Clubhouse (which uses Agora). Chinese government regulation is another. I am singling out Agora here because they are the only CPaaS vendor focused on RTC that is a public company.
Daily
On the positive side, we’ve seen the investment in Daily – $40M in series B.
The company is growing, focused on their WebRTC implementation for developers.
Vonage
Vonage just got acquired by Ericsson. That leads us to this acquisitions chain when it comes to their WebRTC CPaaS capabilities:
TokBox → Telefonica → Vonage → Nexmo → Ericsson
We will see where this takes the Vonage API platform.
New players
We still have newcomers to this market. Big and small. We’ve seen Microsoft and Amazon jump into CPaaS – and especially to where WebRTC is being used in CPaaS. Zoom is dabbling with an API for CPaaS lately as well.
But also newer players such as 100ms, with an interesting concept for their APIs: enabling developers to offer hints about their use case, and doing more in the background for developers than the “classic” vendor solutions.
Widgets, Embeddables, Prebuilt
The market is also growing and maturing in CPaaS. We’re starting to see higher level abstractions, offering the UI/UX along with the APIs themselves. These come in different shapes, sizes and names, but they are all geared towards making the lives of developers easier.
Which one should you be using?
Will the one you choose be there next year?
Are they going to shift focus and bail on you?
Are the APIs and capabilities they offer actually going to work?
Lots of questions. No easy answers.
WebRTC Trends in 2022 – more of the same?
After this long preamble, it is time to talk about the WebRTC trends in 2022.
The 5 biggest trends for WebRTC in 2022 are taking slightly different routes than we’ve seen before. Some focus on scale while others on new requirements and others still on new markets.
#1 – Scale & performanceThere’s a saying/quote in Hebrew – “you start as fast as you can, and then you continue to accelerate slowly”. This is where we’re at with WebRTC.
This is obvious, and a continuation to 2021. Scale still matters. A lot. This is going to stay strong as an initiative well into 2022.
In our Kranky Geek event of November 2021, Google shared the work they’ve done in the past year. Below is the slide presented around performance optimizations. As you can see, this is an ongoing effort with multiple tasks. A lot of this has been achieved, but more is being done.
These improvements are aiming towards better scalability of a single session for multiple participants. The many bugs we have tracked in the recent couple of months around hardware encoding and decoding as part of WebRTC Insights show that this will continue well into 2022.
At the same time, we are seeing investments being made by many on the infrastructure level to scale their services.
What was the case in 2021 will be in 2022 as well.
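To make the scale discussion a bit more concrete, here is a minimal sketch of simulcast – one of the standard levers applications pull to scale a single session to many participants. The encoding parameters below are illustrative assumptions, not anyone's production values:

```typescript
// Simulcast sketch: publish three quality layers of one camera track,
// letting an SFU forward whichever layer fits each viewer's bandwidth.
async function publishWithSimulcast(): Promise<RTCPeerConnection> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const pc = new RTCPeerConnection();
  pc.addTransceiver(stream.getVideoTracks()[0], {
    direction: 'sendonly',
    sendEncodings: [
      { rid: 'q', scaleResolutionDownBy: 4, maxBitrate: 150_000 }, // quarter res
      { rid: 'h', scaleResolutionDownBy: 2, maxBitrate: 500_000 }, // half res
      { rid: 'f', maxBitrate: 1_500_000 },                         // full res
    ],
  });
  // Offer/answer signaling with your SFU would follow here.
  return pc;
}
```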
#2 – #newtech
There's a swath of new technologies that are just now starting to mature. They enable vendors to do more with WebRTC. At Kranky Geek, for example, we spent considerable time on these technologies, seeing how various vendors are making initial use of them.
WebAssembly
Probably the crown jewel of enablers in the web today.
WebAssembly speeds up the performance of web code AND enables cross-language compilation. For WebRTC, the main benefit here is the use of WebAssembly for machine learning tasks used in media manipulation. From noise suppression, through background replacement and funny hats, to video lighting. All of these are enabled by WebAssembly today.
Expect more vendors to use this and expect more features to be enabled by this.
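To illustrate the pattern, here is a minimal sketch of routing a camera track through a WebAssembly-backed frame processor using Chrome's insertable streams APIs. `processFrame` is a hypothetical export of your WASM module (say, background segmentation), and these experimental APIs may need ambient type declarations in TypeScript:

```typescript
// Hypothetical WASM entry point - not a real library function.
declare function processFrame(frame: VideoFrame): Promise<VideoFrame>;

async function pipeThroughWasm(): Promise<MediaStreamTrack> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const [track] = stream.getVideoTracks();

  // Chrome-only: turn a track into a stream of VideoFrames and back.
  const processor = new MediaStreamTrackProcessor({ track });
  const generator = new MediaStreamTrackGenerator({ kind: 'video' });

  const transform = new TransformStream<VideoFrame, VideoFrame>({
    async transform(frame, controller) {
      const processed = await processFrame(frame); // the WASM call
      frame.close(); // release the original frame's memory promptly
      controller.enqueue(processed);
    },
  });

  processor.readable.pipeThrough(transform).pipeTo(generator.writable);
  return generator; // attach to a <video> element or an RTCPeerConnection
}
```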
WebTransport & WebCodecs
Not happy with WebRTC? There's WebTransport & WebCodecs.
Together, they theoretically enable you to encode and decode media and send or receive it from a server.
The devil here is in the details, and while they aren't yet a favorable replacement for WebRTC, they do look promising. We've had Dolby and Intel share some of their insights on these at Kranky Geek.
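As a rough illustration of what "encode and send to a server" looks like with these APIs, here's a minimal sketch, assuming a WebTransport-speaking media server at a placeholder URL. Real code would add packetization, keyframe requests and error recovery; type declarations for these APIs may also be needed:

```typescript
async function pushCameraOverWebTransport(): Promise<void> {
  const transport = new WebTransport('https://media.example.com:4433/ingest');
  await transport.ready;
  const writer = transport.datagrams.writable.getWriter();

  const encoder = new VideoEncoder({
    output: (chunk) => {
      const buf = new Uint8Array(chunk.byteLength);
      chunk.copyTo(buf);
      writer.write(buf); // real code must fragment chunks to the datagram MTU
    },
    error: (e) => console.error('encoder error', e),
  });
  encoder.configure({ codec: 'vp8', width: 640, height: 480, bitrate: 500_000 });

  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const frames = new MediaStreamTrackProcessor({
    track: stream.getVideoTracks()[0],
  }).readable.getReader();

  for (;;) {
    const { value: frame, done } = await frames.read();
    if (done) break;
    encoder.encode(frame);
    frame.close(); // the encoder keeps its own reference
  }
}
```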
What we are going to see is more vendors experimenting with these technologies, as well as using them alongside and with WebRTC where it makes sense. I pointed to this approach over a year ago, as part of the WebRTC unbundling process taking place.
With Google’s own enthusiasm about these, one wonders if they will lose interest in WebRTC a few years down the road.
AV1
Then there are new codecs.
AV1 has been around since 2018, and some people have been pushing it as a solution for WebRTC ever since. The truth is that at the end of 2021, AV1 is yet to be seen anywhere significant when it comes to WebRTC. Not because it isn't good, but because it takes time to bring a new codec to market – especially a video one.
Well, the wait is somewhat over. AV1 is coming to WebRTC and we will see use of it in 2022. It will still be limited, but it will finally be interesting and relevant.
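If you want to experiment once your browser ships it, here's a minimal sketch of preferring AV1 on a connection while keeping fallbacks for peers that don't support it:

```typescript
// Prefer AV1 when available; other codecs remain as negotiation fallbacks.
const pc = new RTCPeerConnection();
const transceiver = pc.addTransceiver('video');

const codecs = RTCRtpSender.getCapabilities('video')?.codecs ?? [];
const av1 = codecs.filter((c) => c.mimeType === 'video/AV1');
if (av1.length > 0) {
  transceiver.setCodecPreferences([
    ...av1,
    ...codecs.filter((c) => c.mimeType !== 'video/AV1'),
  ]);
}
// createOffer() will now list AV1 first in the SDP m-line.
```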
A new ML-based voice codec (think Lyra) will take a wee bit longer. There’s no consensus yet as to which voice codec it should be. AV1 didn’t have that problem – we already knew AV1 would be next in line.
#3 – WebRTC infrastructure, hyperscaling and SD-WAN
How you design and deploy WebRTC is changing. The usual mesh/mix/route alternatives are still there. Many go for hybrid approaches. Lately, focus and discussion have shifted to the hardware itself, where it is located, and exactly how packets are routed.
Agora were probably the first to do this openly and at scale, marketing it as a better approach. In 2021 we’ve seen the likes of Subspace and Cloudflare announce managed TURN services with regional distributions of 100 or more data centers.
I’ve marked infrastructure as one of the challenges in my workshop in 2021. In 2022 this is going to become an even more interesting topic. Anycast is going to join the frey as a technology used by vendors.
What we still won’t have as a definitive answer in 2022 is which one is preferable? Is there a real value differentiator and observable improvement in quality when using more than 10 regions globally. Would it be worth the effort, especially with the large cloud vendors popping out new data centers every month or so?
#4 – Live Streaming
Moving away from features and technologies to use cases.
Live streaming is here and WebRTC is how you do it.
There are other technologies, but none that is as fast as WebRTC while also working in browsers.
People are getting more and more comfortable with video. Due to the pandemic, a lot of new ways of communicating at scale are here, done remotely. And people want to interact. Live. And in real time.
2 seconds latency might be nice, but sub-second is nicer.
What we will be seeing is more vendors turning towards WebRTC for that sub-second experience. There's room for higher latencies – for many use cases. But when it comes to instantaneous, expect to see a lot more WebRTC. At least until WebTransport & WebCodecs mature enough.
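For a sense of how lightweight the playback side can be, here's a minimal sketch of a view-only WebRTC player. The WHEP-style HTTP signaling endpoint is an assumption for illustration, not a specific vendor's API:

```typescript
async function playLiveStream(video: HTMLVideoElement): Promise<void> {
  const pc = new RTCPeerConnection();
  pc.addTransceiver('video', { direction: 'recvonly' });
  pc.addTransceiver('audio', { direction: 'recvonly' });
  pc.ontrack = ({ streams }) => {
    video.srcObject = streams[0]; // sub-second glass-to-glass playback
  };

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // Placeholder endpoint: POST the SDP offer, get the SDP answer back.
  const res = await fetch('https://live.example.com/whep', {
    method: 'POST',
    headers: { 'Content-Type': 'application/sdp' },
    body: offer.sdp,
  });
  await pc.setRemoteDescription({ type: 'answer', sdp: await res.text() });
}
```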
#5 – 2D to Metaverse
Zoom fatigue? Boring gallery view and tiles?
Everyone is trying to rethink the communications of the future, and they don’t look like the talking heads we’ve grown up on in the last 20+ years.
The two extremes I am seeing?
We will see more of this in 2022. At the moment, there are so many different experiences being published that the most interesting thing to see will be which ones will stick and which will fade away.
WebRTC market forces
As we head into 2022, it is also important to understand who the main players and the main market forces are. These are going to shape WebRTC moving forward.
Big Tech: FAAMG and WebRTC
The biggest tech vendors are the ones setting the pace and calling the shots with WebRTC. Each with its own angle to it.
You can add to this list Intel, who are now pushing the envelope on hardware encoding for WebRTC, something that was usually ignored by hardware vendors.
In 2022, these will be the shapers of WebRTC as we know it. They will decide if they listen to external feedback and pour it into their own product roadmaps or not – and that will end up affecting us all in the WebRTC ecosystem.
Twilio’s disinterest in WebRTCAs I stated earlier, Twilio doesn’t really care about WebRTC. Not much anyway. WebRTC isn’t big money for Twilio, so they are focusing elsewhere. We do make use of Twilio’s video-js repo as a good source of bug reports (Twilio and Vonage are still ahead of most everyone else in that).
As the dominant CPaaS vendor that is a proxy for other vendors:
This isn’t the best of environments for those who want to use CPaaS, and to some extent, this isn’t productive for those who want CPaaS either.
It also dilutes the power that CPaaS vendors have (or want to have?) over the direction WebRTC is headed. It would have been great to have these vendors’ voices heard more, as they aggregate behind them thousands of companies, use cases and requirements. Part of it is why I think UCaaS is outpacing CPaaS in innovation.
The Zoom elephant
Is Zoom the exception that proves the rule?
Zoom doesn't really use WebRTC, but it does affect everything there is around WebRTC:
Without being a part of the WebRTC ecosystem, Zoom is a big shaper of the WebRTC market.
Coopetition in WebRTC
Coopetition exists everywhere. The notion of competitors cooperating is something we see a lot, especially in standardization organizations, where vendors hash things out, trying to get to an agreeable, better place for everyone (=lowest common denominator). We've seen it with the decision on mandatory to implement video codecs in WebRTC, for example.
What we’re now seeing more is collaboration between companies directly – ones that compete in some ways and cooperate in others.
Microsoft improving screen sharing in Google’s libwebrtc (after deciding to adopt Chromium for Edge), Intel helping with hardware encoding of AV1, RingCentral and 8×8 pushing to get RED for Opus into libwebrtc, …, the list goes on.
We’ve come to a point where it is acknowledged that we can’t just sit and wait for things to “happen” on their own with WebRTC on the implementation side and there needs to be more proactivity and cooperation. Vendors need to start investing more and publicly in the baseline open source implementation and not only in their proprietary code.
This is wishful thinking most of the time, but I think we’re at an inflection point where this will need to happen more for the WebRTC community and ecosystem to take the next step in its evolution.
Upcoming WebRTC Trends 2022 workshop
In January I'll be conducting a workshop that covers these topics. The trends and what to do with them. It will offer actionable advice on what you should do in 2022 and it will be interactive in nature.
My WebRTC trends in 2021 workshop was well attended. Here is what Stefan Karapetkov of Twilio had to say about it:
I was looking for an update on the WebRTC market and technology trends, and the workshop provided exactly that.
The information was specific, very well organized, and delivered in an engaging and entertaining way.
The workshop was split into three sessions and gave me enough time to think about the material, do additional research, and prepare questions for the next session.
I left the workshop with a solid understanding of the WebRTC technology, even more importantly, of the many technology tradeoffs that the WebRTC community made along the way.
I use this knowledge in my everyday interactions with colleagues and customers, and think that the workshop would be beneficial for anyone in a Video Product Management or Architecture role, even for Solution Engineers who specialize in Video.
The 2022 workshop is going to be just as structured and useful, with ample interactivity that will give you the opportunity to interrupt and ask questions relevant to you and your business.
This new workshop, WebRTC trends for 2022, will take place during January-February, in 3 consecutive sessions of 2 hours each.
Space is limited, so if you are interested, register sooner rather than later.
See you at the workshop.
Register to the WebRTC trends for 2022 workshop
The post WebRTC Trends for 2022: Proprietary & differentiation are back appeared first on BlogGeek.me.
Spearline acquired testRTC and now supports WebRTC testing and monitoring. This will change what I do, but in good ways.
This week the announcement became public. The company I co-founded with a few friends, testRTC, got acquired by Spearline. It is the end of a chapter and an opening of the next one.
For starters – I am still going to do what I did so far – have fun and help companies with their WebRTC and CPaaS challenges.
I tried to keep testRTC at an arm’s length from BlogGeek.me and what I do here just because… well… not sure why. Probably to stay as impartial as I can with the things that I do. That said, it is probably a good time to explain where we are with testRTC and our support for WebRTC applications.
Where are we with testRTC?
We started testRTC with the intent of providing a self-service, cloud-hosted testing solution for those developing with WebRTC. Along the way, we've expanded our product lines to include 3 separate domains with 5 different products:
Simply put, we are the only vendor today offering support for the full lifecycle of your WebRTC application – from development to deployment and long term maintenance of the service. We do that at scale, in the cloud, with a big smile.
And then we met Spearline, and found a common ground.
Who and what is Spearline?
Spearline offers testing and monitoring for your telephony services.
They have a large global deployment with real phone numbers across 70+ countries and carriers worldwide – landline and mobile. If you need your phone numbers tested and validated for quality and performance (and you do), then you go to Spearline. Why? Because without actually testing a number, your only indication that a number isn't working (say, your sales line) is a customer complaining about it – which is way too late.
This all made perfect sense for us at testRTC. When we were approached, it was easy to figure out that this falls into this category:
SYNERGY
We’re completing Spearline in a few ways (WebRTC being an important part of it), and Spearline completing testRTC in other ways (telephony, scale and enterprise sales to give a few of the things we were after).
Which leads me to rocket surgery.
Rocket surgeryI had a technical call the other day. Related to BlogGeek.me. Someone at the call said “rocket surgery” at some point. It took me a few seconds to deconstruct that and understand it – he probably meant to say rocket science or brain surgery – just to indicate that they’re doing things that are hard, but not that hard (he said “this isn’t rocket surgery”).
Then it dawned on me. Rocket surgery is the best term I have for what we’re currently doing.
We’re marrying the best of both worlds here at testRTC & Spearline, so we can now offer our customers rocket surgery solutions. Things that no other vendor out there can do for you.
And that excites me – the things we can achieve and the plans we’re making for the future as part of this acquisition.
What changes for BlogGeek.me?
Nothing and everything.
(can you spot the 10 differences between the images above?)
I am continuing my work at testRTC as before. Not as CEO (never liked that role), but as head of products for testRTC (which is kinda like a small CEO). testRTC is my baby. I want to see it grow and flourish.
But then again, I like the diversity and the thrill and fun of doing everything. And Spearline were kind enough to allow me to continue with my extracurricular activities. These include the courses, the weekly, insights, consulting and Kranky Geek.
I’ve been thinking a lot lately about my future. And what else I want to do. I don’t have the answers to it yet. For the foreseeable future though, this is going to be helping you with your WebRTC and CPaaS needs.
Onward and upward
2021 has been a rollercoaster. I enjoyed the ride.
Here’s for a 2022 that is thrilling, exhilarating and fun.
The post Spearline acquiring testRTC – this is rocket surgery appeared first on BlogGeek.me.
There is a cool new feature everyone has been trying to implement – background transparency. Virtual backgrounds have been around for a while. Rather than inserting a new background behind user(s), transparency removes the background altogether, allowing the app to place users over a screen share or together in a shared environment. There doesn’t seem […]
The post How to add virtual background transparency in WebRTC appeared first on webrtcHacks.
Twilio Signal 2021 defines Twilio as “API”, “programmable”, “platform” and “customer engagement”. Here’s how it intends to compete in its many markets.
Twilio Signal 2021 is when Twilio officially pivoted from CPaaS to a Customer Engagement Platform. This is the reason Twilio acquired Segment last year, and the explanation of how it intends to leverage that acquisition.
Every year, I put time aside for Twilio Signal. Either in person or remote, going through the sessions and paying extra attention during the keynote. This has developed into a comprehensive view and research resources about Twilio that I’ve put up. It is time now to review what we had at Twilio Signal 2021.
Twilio didn't put the keynote for Signal 2021 on YouTube (yet), but they did have it as part of their all-day Signal TV session. The video below will get you the keynote, which was around 90 minutes long:
As events go, Twilio Signal 2021 was quite a good experience for a virtual event. It was a bit hybrid, but most of the focus and action took place on the virtual side of it (or at least felt that way for me as a virtual audience).
Defining Twilio in 2021
Twilio never liked or used the term CPaaS. I am not really sure why.
The Twilio pivot
There were 4 words that came up time and time again during the keynote, and I think they are the center of what Twilio gravitates around today: "API", "programmable", "platform" and "customer engagement".
Everything Twilio does can be found around these words, and I believe also every type of adjacent business they will try to go after will have two or more of these words in them in one way or another.
Twilio tried to show this shift and to move away a bit from APIs. It will take more than a single Signal event to do that.
Jeff Lawson, Co-founder and CEO of Twilio, started by presenting the idea of Customer Engagement and ended the keynote with the Customer Engagement Platform, taking us full circle around it.
Why did Twilio pivot now?
Twilio is the leader in CPaaS. It has been so for many years now, defining and redefining what CPaaS is. Twilio is also ahead of all of its competitors. Way ahead. It acts as a best-of-suite provider, covering most if not all of what CPaaS is, with depth of functionality in many of its offerings.
As such, it sees and knows the market. It also knows the market’s limits. Which means it understands its estimated growth. It had to pivot and start eating up more adjacencies to continue growing at an accelerated rate. But there probably aren’t enough adjacencies it can go after that can be defined as CPaaS or as communication APIs. So they went up the food chain, marketing customer engagement as their target.
How Twilio’s breakout acquisitions into email and customer data enabled the pivot to Customer EngagementTwilio’s reasoning for doing it now?
To be frank, the architectural shift as well as the move from reactive to proactive have been industry themes for over 10 years. The pandemic simply accelerated these changes, and probably accelerated Twilio's own pivot. It is also a new language that Twilio is now speaking, so we hear it from them as well.
Twilio by the numbers
Each time, Jeff starts his keynote with numbers, showing off Twilio's size. It is interesting each time to see which numbers he shares and highlights at the beginning of the keynote. This year?
Twilio Signal 2021 numbers versus 2019 & 2020
What numbers did Twilio share in the beginning of its keynote this year versus previous years?
| Metric | 2019 | 2020 | 2021 |
| --- | --- | --- | --- |
| Customers | 160,000 | 200,000+ | 240,000+ in 180+ countries |
| Text messages | – | – | 128B (100% growth) |
| Emails | – | – | 1T (5.8B single day peak) |
| Calls | – | – | 25B |
| Flex interactions | – | – | 0.5B |
| Segment data events | – | – | 10T |
| Interactions | 750B | 1T | – |
| Unique phone numbers | 2.8B | 3B | – |
| Calls/minute | 32,500 | – | – |
| Peak SMS/second | 13,000 | – | – |
| Email addresses | 3B/quarter | 50% | – |
| Video minutes | – | 3B | – |
| Developers | 6M | – | – |

This is in line with its pivot, as many of the original numbers aren't even mentioned.
So… Twilio is now even bigger, and it is pivoting.
I haven’t added the social good related numbers that Twilio shared not because they aren’t important, but because they require a separate mention.
Twilio made the decision years ago to be a company that does good in the world. It also decided to put its money where its mouth is, through its twilio.org operation and its shift to become a diversified company.
Time is spent each year at Signal during the keynotes as well as in specific sessions for social good, and this year was no different.
Twilio and partnerships
Jeff mentioned the strategic partners of Twilio at the beginning as well. These are getting more important to Twilio as it grows and shifts towards customer engagement.
Twilio dogfooding
Twilio is dogfooding its own products. For Twilio Signal 2020 and 2021 it has been hard at work building its own hybrid events platform. Still at its early stages but quite commendable.
Each year, additional pieces of the Twilio building blocks are being used to create these events. It will be interesting to see if in 2022 they will continue with this trend or go to a live-only event. Another question is if and when they will productize this as a programmable events platform.
The Pivot: Twilio Customer Engagement Platform
After the numbers it was time for the pivot. This is where Twilio moved away a bit from its roots in communications towards customer engagement. The way this is explained: Twilio now isn't only about communications but about all experiences with customers. Customers "drove" Twilio there, which led to the creation of Twilio's Customer Engagement Platform.
Setting the stage
Two things here:
If you look at the communications market diagram above which I like using, then Twilio encompasses two of the three domains. The difference now is that it is vying towards the CRM part with its new story of a customer engagement platform.
The pillars of Twilio’s Customer Engagement Platform?
From here on, the keynote was focused on showcasing everything revolving around customer experience with trust, scale, reliability and compliance as the main themes.
FUDing the enterprise
To hammer the message through, Twilio decided to harness the "digital giants". In its mind, these are Amazon, Google, Netflix and Facebook. An odd choice, as Apple and Microsoft would be "gianter" than Netflix…
The reason behind this is that these companies make the best use of customer data to improve their engagement with their customers, providing a singular, cohesive view of them.
Logic states that these digital giants have grown with the pandemic because they understand their customers better, and other vendors need to follow suit or be gobbled up by these digital giants.
Now that we want to be like them, we need the technology to do that. Amazon didn't buy its CRM from anyone – it built it. It fed it with the data needed. And so should you, dear vendor – you can't rely on an existing CRM – you will need to build it. And quite accidentally, Twilio Flex is what you need to build it (wink wink).
Oh, but it isn’t Twilio Flex. It is actually Twilio Flex + Segment + machine learning.
To hammer that in, Jeff made sure you know that you don't want the digital giants as your partners when it comes to your customers: Amazon taking a cut of each purchase, the Apple tax, Facebook and Google auctioning user attention via ads. You, dear vendor, need and want to own your customer relationship – directly:
Now that we’re all warmed up, it was time to share and explain what Twilio Customer Engagement Platform really is.
The Twilio Customer Engagement Platform Twilio’s new Marketecture: Twilio Customer Engagement PlatformTwilio’s new Marketecture: Twilio Customer Engagement Platform
Jeff went through the platform's components, which sit well with Twilio's current set of product offerings and acquisitions.
1. Channels
Channels are the basic Twilio building blocks. That's roughly the CPaaS part of Twilio:
The purpose is to be where the customer is.
Messaging and Voice are what Twilio is focused on. Ads were not mentioned anywhere else. Email is the SendGrid acquisition. And Video… well… that's almost the only place it appeared during the keynote (more on video later).
2. Engagement Apps
These are the higher level programmable applications that Twilio is offering:
Segment…
This is why Twilio acquired Segment a year ago, and this is where it is taking Segment next.
The reason behind acquiring Segment was to pivot towards customer engagement and provide a larger offering to larger enterprises.
As Jeff said it, this is about engaging customers in real time at scale – that’s the focus of Segment.
From here, the keynote went to specific product announcements.
Twilio Signal 2021 keynote announcements
During the keynote, several official announcements were made. There were others that didn't make it into the keynote itself, which goes to show where the main focus is.
Here are the things announced in the keynote:
Jeff introduced this first and explained that this was their biggest architectural change.
Twilio switched from a single US based data center to enabling the Twilio stack to run from multiple regions. A customer can potentially choose where they want to connect to Twilio and where they want their data to reside.
The main difference is lower latency on API calls if sent to the same region, but mainly the ability to choose where to run and store the data.
The actual deployment of this is going to happen in stages with a growing number of locations as well as products enabled. This will start with two new regions – Australia and Ireland, to cover Europe and Asia by year end for Twilio Voice; while Twilio Segment can store data in Europe.
The main reason for this is the growing need to support regional data storage to meet regulation in different countries and the need to entice larger enterprises to use Twilio.
This was announced before the explanation of the Customer Engagement Platform, but I decided to place it here, as part of the announcements of the keynote.
Twilio MessagingX
The first announcement after introducing Twilio Customer Engagement Platform was Twilio MessagingX – the Channels layer in the new marketecture. This is also where the heart of the Twilio CPaaS solution lies.
It started nice. Soumya Srinagesh, Twilio’s VP Messaging Exchange, shared her big number:
Somehow, it differed from Jeff’s by 28B. I am sure there’s a good explanation, though either way, 100B is a large enough number.
SMS centered, but evolving
For Twilio, messages are still SMS. It wasn't said out loud, but it was hinted strongly enough throughout the session, based on the announcement and the analyst briefing for Twilio MessagingX:
During the analyst briefings of Twilio Signal 2021 the above slide was shared. I like it because it says a lot about how Twilio sees things in the messaging space. I also like it because of the way things are arranged.
Here are my immediate insights from it:
So what exactly is Twilio MessagingX?
It looks at messaging not from the API building block level, but rather from 3 different perspectives, each with its own set of focus and investments: Trust, Quality and Choice.
To be clear, all CPaaS vendors strive to do that. Twilio is one of the few that are big enough with economies of scale to really deliver it, and do so with programmability in mind in all of the possible layers.
Trust
To handle trust, mainly deliverability and compliance, Twilio announced TrustHub.
TrustHub is all about compliant phone numbers (did we say SMS?).
It isn’t as if other CPaaS vendors don’t offer compliant phone numbers. TrustHub does that by enabling access to it via APIs as well, making it… programmable? More flexible?
The intent at the end of the day here is to have messages pass unfiltered and not get them to be blocked by carriers. Especially now, when our phone’s spam folders for SMS and voice are full of such numbers and messages.
This initiative is starting with the US market and will expand elsewhere.
Quality
This is about deliverability: selecting which carriers to use to route messages, and figuring out bad connections. Twilio does that proactively (other CPaaS vendors do, or say they do, as well).
Not much else was said about it during the keynote, but this is where many of its acquisitions and investments in communication providers, such as Syniverse earlier this year, come into play.
This is a topic for a separate future analysis though.
Choice
Choice is omni-channel. The ability to send messages to users on the channels they prefer.
Two announcements were made around choice:
1. Google Business Messages
Twilio already had SMS, Facebook Messenger and WhatsApp. Now they added support for Google Business Messages – the ability of customers to start a conversation with a business directly from a Google search result or a map listing.
Interestingly, Twilio still has no Apple Business Chat support. Probably because Apple doesn’t want to deal with generic CPaaS vendors just yet.
2. Content API
Each messaging channel has slightly different rules you need to deal with. The new Twilio Content API lets you write a message once and deliver it on whatever channel, with Twilio taking on the headache of matching the message you want to send to how each channel likes that message.
As messages become more complex, requiring the user to take actions for example, such an API becomes a nice add-on.
For the most part, it feels like a utility that reduces a lot of the headache of a developer.
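For context on what "write once, deliver anywhere" improves on, today's per-channel handling with Twilio's Node helper library looks roughly like this – the same message sent once per channel, with placeholder numbers (the Content API itself isn't shown, as it was brand new at the time):

```typescript
import twilio from 'twilio';

const client = twilio(process.env.TWILIO_ACCOUNT_SID!, process.env.TWILIO_AUTH_TOKEN!);

async function notifyCustomer(text: string): Promise<void> {
  // SMS
  await client.messages.create({
    body: text,
    from: '+15005550006', // placeholder sender number
    to: '+15005550001',
  });

  // Same message over WhatsApp - the channel is just an address prefix
  await client.messages.create({
    body: text,
    from: 'whatsapp:+14155238886', // placeholder WhatsApp sender
    to: 'whatsapp:+15005550001',
  });
}
```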
Twilio Voice and IVR Now
This was the first time voice was discussed. It was preceded by this nice number:
We had 25B calls, now with 36B voice minutes. If both relate to voice, then that's an average of 1:26 minutes per call. Transactional is the main focus of Twilio.
Not much more has been said or announced about Twilio Voice directly. The only thing was IVR Now, with about a minute spent on explaining it:
IVR Now seems to be a program that is designed to assist enterprises to migrate their VoiceXML from on premise IVRs to Twilio’s IVR. If I had to guess, this is about offering professional services either by Twilio directly or via partners.
The reason for sharing this during the keynote was to get enterprises listening in to talk to Twilio about it – there still isn’t anything on Twilio’s website about this program…
Other than that, it felt out of touch with the rest of the keynote.
Twilio Intelligence
Al Cook, VP & GM, Artificial Intelligence, was the one introducing Twilio Intelligence. Al was the one leading and announcing Twilio Flex a few years ago, and this, in a way, is an extension of it.
The premise of Twilio Intelligence is the need to get from voice to data to meaning.
Twilio Autopilot was released to beta in 2018 and GA’d during Twilio Signal 2019. Interestingly, this is a platform and not a product (which means it probably is still Twilio Autopilot).
What is included?
A view of the language operators of Twilio Intelligence as implemented as part of Twilio Flex
Here’s what it means that Twilio Intelligence is a platform:
The demo was quite interesting, so I decided to share the direct pointer to it in the keynote here, as that’s easier than explaining it:
What I think:
It is hard work, and it will be interesting to see if Twilio nailed it this time around and what the next iteration of this will look like.
Where and when?
Now in limited private beta. A broader private beta in early 2022.
English only for now. Voice based for now.
Twilio Flex
Twilio Flex launched 3 years ago. At the time, it was questioned if this would be successful or not. To some extent, it still is. The interesting thing is that the same was said about Amazon Connect, which took about 3 years to mature enough to show its size in the market.
Sateja Parulekar, Head of Contact Center Solutions at Twilio made it a point to explain that:
There were new announcements around Flex, mainly Flex ONE and Flextensions.
Flex ONE
Flex ONE is about adding new channels to the Flex contact center with a single API. Today that includes voice, messaging (including WhatsApp), chat and email.
The end result is one page holding all conversations across all channels with the customer.
Flextensions
Flextensions are pre-built extensions to Twilio Flex. To me it sounded much like Zoom Apps or the application directories of other enterprise tools.
This builds on top of the partnerships that Twilio has been working hard on, explained in last year's Signal 2020 when they discussed the Twilio Flex ecosystem. It is the right move for the Flex platform.
From a product perspective, the future of Flex lies in its integration with Segment. This is where Twilio Intelligence is most focused, as we've seen in its introduction and demo.
Segment
Peter Reinhardt, GM of Twilio Segment, came to explain two things:
Segment is about collecting customer data from multiple sources and making it available as the single source of truth to wherever the business needs that data – all in real time.
Businesses store data about customers in many different places. With the migration towards cloud and SaaS, the number of these places is growing fast. I know… my own small business running this website and my courses has its own share of SaaS vendors, all cobbled together with half-made integrations and knit together with that masking tape called Zapier. It works. For my single person small business. Somewhat (I have tons of things I'd love to have better integrated, but don't have the time or inclination to do – not enough ROI in it).
For real businesses, not like mine, the problem is a lot bigger and a lot more important to solve. Especially if… you want to be like the digital giants Jeff talked about at the beginning of the keynote and Peter made sure you remembered.
But back to the why:
And we’ve already seen glimpses of it with Twilio Intelligence earlier on.
I think Segment was the most interesting acquisition of Twilio so far. It isn’t only closing a gap on something they don’t have or need. It isn’t even going after a close adjacency. It is about being able to double down on customer engagement… and building a platform for it.
Which is exactly where Jeff started and where the keynote ends.
Twilio Engage
Twilio Engage was the last announcement. This is the new engagement app that Twilio decided to launch. Flex is for support, Frontline is for sales and Engage is for marketers. This is the marketing cloud offering of Twilio, built on top of Segment.
It is available as a pilot now, with GA in Q1 next year.
Not much else was explained or shared about this, and the demo was mostly a concept of what can be done with it. Next year's Signal event will probably show the flashy UI Peter said was less important than the data.
Announcements that didn’t make it into the keynoteVideo. IoT. Frontline. Sendgrid.
Probably a few others that I missed.
I’d like to discuss 2 of these announcements here in brief.
Twilio Video Insights
Video isn't (and never was) top of mind for Twilio. They have it supported, but somehow it feels like a second class citizen most of the time: Twilio WebRTC Go was announced at Signal 2020 to give a semblance of progress with video. It is a free peer-to-peer video service from Twilio that is limited in scale. It got some increased capacity this year, especially for Signal 2021. Nothing to write home about (I already discussed these free WebRTC video APIs at length recently).
What was announced was Twilio Video Insights and Twilio Live, both very different from each other.
Twilio Video Insights collects WebRTC and other statistics off of your calls done over Twilio Programmable Video, to create a dashboard view of media quality.
This is similar to what we do at testRTC with our watchRTC product.
A demo was shown in one of the sessions of Twilio Signal.
For me this validates our own watchRTC product, as Twilio saw the need to offer that out of the box as part of its service. That said, if you need something like this (for Twilio, another CPaaS vendor or your own infrastructure), then come check for yourself which tool is most suitable for your needs.
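As a rough idea of the kind of client-side collection products like these build on (this is not Twilio's SDK, just the standard WebRTC API with a placeholder backend endpoint):

```typescript
// Periodically snapshot RTP stats and ship them to your own backend.
function reportStats(pc: RTCPeerConnection, intervalMs = 5000): number {
  return window.setInterval(async () => {
    const report = await pc.getStats();
    const samples: Record<string, unknown>[] = [];
    report.forEach((stat) => {
      // inbound-rtp / outbound-rtp carry packet loss, jitter, bitrate, etc.
      if (stat.type === 'inbound-rtp' || stat.type === 'outbound-rtp') {
        samples.push({ ...stat });
      }
    });
    navigator.sendBeacon('/webrtc-stats', JSON.stringify(samples));
  }, intervalMs);
}
```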
Twilio Live
Twilio Live was announced a bit prior to Signal 2021. Probably in order to give center stage to Twilio Customer Engagement Platform, where Live (or video for that matter) plays a marginal role if any.
Here’s what I learned about Twilio Live during Signal 2021:
It is an interesting route that Twilio took for its broadcasting service. I am not sure how well it can compete with other CPaaS vendors who are clocking 100s of users or more per single WebRTC session. And it is hard to see this as an alternative for those using CDN streaming services already.
What will be interesting to see is how vendors accept this product and its position in the market – will this be good enough or even perfect for certain customers that can’t find the right solution for their broadcasting needs elsewhere.
What Twilio isn’tAfter writing down this longform article and analysis of Twilio Signal 2021, I think the most important part is what wasn’t said. And that’s what Twilio isn’t.
I long suggested and thought that CPaaS, CCaaS and UCaaS are going to merge as the lines between them are blurring. Vendors in each of these segments are vying towards the others through new product announcements and acquisitions.
Twilio went after CCaaS with Flex. It only made sense it would move into UCaaS at some point, being a comfortable adjacency in communications.
But it didn’t.
It went after customer engagement. It acquired Segment and doubled down on this route – making a splashy announcement of it at this Signal event and keynote.
Twilio is all about businesses communicating with customers.
Twilio is a lot less about people collaborating with each other in a business. Why? Because that’s where the focus of UCaaS is, and a lot of that focus relies on a slightly different set of requirements and roadmap.
This is also why video is getting less attention from Twilio, for example.
What’s next for Twilio?I don’t really know.
This can be seen as a pivot, but also as the next step in Twilio’s evolution.
Twilio is surprising with the way it handles itself in the market, at least for me.
If I had to bet, I’d say that the next 2-3 years are going to be more of the same. Twilio will work on its current set of engagement applications, pouring data from the Segment CDP into it, and fitting its solutions for sales, support and marketing. Obviously, developers are still an important part of all of this.
I wouldn’t expect Twilio to go into additional adjacencies in the API domain or to go after unified communication related use cases either. At least not now. They have their hands full going up market and out of their comfort zone of pure communications.
The post Twilio Signal 2021: A Pivot from CPaaS to Customer Engagement Platform appeared first on BlogGeek.me.
WebRTC insights is turning out to be fun to create and super useful to our clients, looking to navigate the world of WebRTC.
Philipp Hancke and I started this new thing called WebRTC Insights a year ago. We work well together, so we simply looked for what else we could do besides the WebRTC codelab, which was and still is a fun project.
WebRTC Insights is meant to help vendors sift through the technical (and non-technical) information that is out there and ever changing around WebRTC. Anything from bugs found and important changes in the WebRTC implementation to security issues raised, and many other topics.
The idea? If you are a developer who uses WebRTC on a daily basis and relies on it, we can reduce the time you spend on finding what can bite you in the back when you weren’t looking. And we can definitely reduce the risk of that happening.
A year has gone by. The service evolved through this time, as we added more insights into it. Time to look at what we've done.
WebRTC Insights by the numbers
We started small. The first WebRTC Insights issue looked at 6 issues, 7 PSAs and 2 market insights. 4 pages in total. Now we're at 15-20 issues on average (twice as many when a Safari release happens) and 10 pages (or more).
In numbers, over the year this turned out to be:
26 Insights issues, 331 issues & bugs, 120 PSAs, 17 security vulnerabilities, 74 market insights and 185 pages. Phew…
Bugs
In the past decade we have had more than 13,000 issues filed against libwebrtc, Google's implementation of WebRTC that we all use in Chrome (and in all other browsers in one way or another), with close to 5,000 of them external bug reports. In addition to that, there are close to 2,000 external Chromium bugs related to WebRTC.
WebRTC is a complex piece of software and staying on top of it requires quite some effort. While the development activity on WebRTC is much lower these days (at a third of the peak change rate back in 2017) there is still a surprising amount of issues we have to look at.
WebRTC Insights started from conversations between us about WebRTC issues and the challenges they bring. We have long looked at and discussed bugs, but this happened over chat and we never wrote it up. Nowadays we write up a summary, our thoughts and the potential impact of each bug. Quite often we learn something from it.
In the process we actually created an annotated list of issues that we can then refer to when we encounter new issues. So when Tsahi complained about an increase in video jitter statistics recently, Philipp just pointed him to the issue where we discussed this topic (you see, Tsahi’s memory isn’t what it used to be).
Mailing lists and PSAs
"Public Service Announcements" or PSAs are a way for the WebRTC team (and Philipp) to communicate breaking changes in WebRTC. They range from changes to the C++ APIs to the plan-b deprecation and typically require action from developers using WebRTC in their applications.
We also list WebRTC-related Intent-to-ship from the Chromium process. This is a mandatory step in the process to launch WebRTC features that require Javascript API changes. In the last year we have mostly seen changes related to screen sharing which then turned into features of Google Meet – yet were available to other users of the platform as well.
Last but not least we do monitor the W3C working group and what happens there as it has a long term impact on where WebRTC is going.
The crazy profession syndrome: WebRTC trials in Chrome
WebRTC uses field trials in Chrome to roll out changes that carry some technical risk. We identify them, which gives us insights into possible root causes for issues that are hard to reproduce locally. The best recent example was this report by Facebook, where an experimental change to reduce noise during Opus DTX caused a large A/V desync issue. We had been tracking the experiment for a couple of weeks at that point.
Security patches in WebRTC
We keep track of WebRTC related CVEs in Chrome (17 in the last twelve months) and determine whether they only affect Chromium or whether they affect native WebRTC and need to be cherry-picked into forks of the native library.
Where is the market headed?
This part is the bird's eye view that we offer. The rest of the insights are the low level details developers need. Here, we look at the bigger picture of what WebRTC is and the market forces around it.
We bump into tweets, posts, LinkedIn messages and other articles out there – and when we feel they are relevant and important to your work, we mention them. And explain where we see this trend headed and what you should be aware of.
The market insights are designed and handpicked for the clients we serve in WebRTC Insights.
We’re evolvingOver time, we’ve evolved the service.
Security and Chrome trials were added later on. We are now experimenting ourselves with short video explainers of each libwebrtc release (=once a month) and its implications to developers. We got some great feedback on it, so we’re likely to keep it as part of our format.
There are now also 3 different plans for WebRTC Insights:
Want to join us for the ride this coming year?
To learn more, check us out at WebRTC Insights
You can leave us a message there to get a sample copy of one of our latest Insights issues.
The post A year of WebRTC Insights appeared first on BlogGeek.me.