Running your own TURN servers for your WebRTC application is not necessarily the best decision. Make sure you know why you’re doing it.
[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]
Are you running your own TURN server? Great!
Now, are you crystal clear and honest with yourself about why you’re doing that exactly?
WebRTC has lots of moving parts you need to take care of. Lots of WebRTC servers: The application. Signaling servers. Media servers. And yes – TURN servers.
I already covered a few aspects of TURN in this WebRTC quote – We TURNed to see a STUNning view of the ICE. It is now time to review the build vs buy decision around TURN.
You see, NAT traversal in WebRTC is done by using two different servers: STUN and TURN. STUN is practically free and it can also be wrapped right into the TURN server.
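To make this concrete, here is a minimal sketch of pointing a WebRTC peer connection at STUN and TURN. The server URLs and credentials are placeholders – substitute your own or your provider’s:

```typescript
// A minimal sketch: configuring STUN and TURN for a peer connection.
// The URLs and credentials below are placeholders, not real servers.
const pc = new RTCPeerConnection({
  iceServers: [
    // STUN: used to discover your public IP address and port
    { urls: 'stun:stun.example.com:3478' },
    // TURN: relays media when a direct path cannot be established
    {
      urls: 'turn:turn.example.com:3478',
      username: 'placeholder-user',
      credential: 'placeholder-password',
    },
  ],
});
```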
TURN servers are easy to interface with, but not as easy to install, configure and maintain properly. Which is why my suggestion more often than not is to use a third party managed TURN service instead of putting up your own. Economies of scale along with focus and core competencies come to mind here with this decision.
Why buy your WebRTC TURN servers?
Buying a TURN server should be your default decision. It is simple. It isn’t too expensive (for the most part) and it will reduce a lot of your headaches.
Most of the companies that approach me with connectivity issues of their WebRTC application end up in that state simply because they decided to figure out NAT traversal in WebRTC on their own.
Here are a few really good reasons why you should buy your TURN service:
We are all builders. And we love building. So adding TURN into our belt of things we built makes sense. It also plays well into the vertical integration we have come to appreciate, given how successful Apple has been with it in its services.
But frankly, it is mostly about control. The ability to control your own destiny without relying on others.
I still think you should buy your TURN servers from a reputable managed service provider. That said, here are some good reasons why to build and deploy your own:
Build? Buy? Which one is the path you’ll be taking?
Trying to get more of your calls connected in WebRTC? Check out this free video mini course on effectively connecting WebRTC sessions
The post Be very clear to yourself why you manage your own TURN servers appeared first on BlogGeek.me.
Every time you look at NAT Traversal in WebRTC, you end up learning something new about STUN, TURN and/or ICE.
[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]
STUN, TURN and ICE. The most misunderstood aspects of WebRTC, and the most important ones to get more calls connected. It is no wonder that the most viewed and starred lesson in my WebRTC training courses is the one about NAT traversal.
Let’s take this opportunity to go over a few aspects of NAT traversal in WebRTC:
This covers the basics. There’s a ton more to learn and understand about NAT traversal in WebRTC. I’d also suggest not installing and deploying your own TURN servers but rather use a third party paid managed service. The worst that can happen is that you’ll install and run your own later on – there’s almost no vendor lock-in for such a service anyway.
Trying to get more of your calls connected in WebRTC? Check out this free video mini course on effectively connecting WebRTC sessions
The post We TURNed to see a STUNning view of the ICE appeared first on BlogGeek.me.
You will need to decide what is more important for you – quality or latency. Trying to optimize for both is bound to fail miserably.
[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]
First thing I ask people who want to use WebRTC for a live streaming service is:
What do you mean by live?
This is a fundamental question and a critical one.
If you search Google, you will see vendors stating that good latency for live streaming is below 15 seconds. That might be good on paper, but it is quite crappy if you are watching a live soccer game and your neighbors, who saw the goal 15 seconds before you did, are already shouting.
I like using the diagram above to show the differences in latencies by different protocols.
WebRTC leaves all other standards based protocols in the dust. It is the only true sub-second latency streaming protocol. It doesn’t mean that it is superior – just that it has been optimized for latency. And in order to do that, it sacrifices quality.
How?
By not retransmitting or buffering.
With all other protocols, you are mostly going to run over HTTPS or TCP. And all other protocols heavily rely on retransmissions in order to get the complete media stream. Here’s why:
WebRTC comes from the real time, interactive, conversational domain. There, even a second of delay is too long to wait – it breaks the experience of a conversation. So in WebRTC, the leading approach to dealing with packet losses isn’t retransmission, but rather concealment. What WebRTC does is try to conceal packet losses and also make sure there are as few of them as possible by providing a finely tuned bandwidth estimation mechanism.
Looking at WebRTC itself, it includes a jitter buffer implementation. The jitter buffer is in charge of delaying playout of incoming media. This is done to assist with network jitter, offering smoother playback. And it is also used to implement lip synchronization between incoming audio and video streams. You can to some extent control it by instructing it not to delay playout. This will again hurt the quality and improve latency.
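The knobs browsers expose here are limited, but they do exist. A rough sketch of hinting the jitter buffer towards lower latency – assuming a browser that supports `jitterBufferTarget` (with the older, Chrome-specific `playoutDelayHint` as a fallback), so treat it as best-effort:

```typescript
declare const pc: RTCPeerConnection; // assume an existing peer connection

// A sketch of asking receivers to keep playout delay as low as possible.
// Support varies across browsers; the hints are advisory, not guarantees.
pc.getReceivers().forEach((receiver) => {
  const r = receiver as RTCRtpReceiver & {
    jitterBufferTarget?: number; // newer hint, in milliseconds
    playoutDelayHint?: number;   // older Chrome-specific hint, in seconds
  };
  if ('jitterBufferTarget' in r) {
    r.jitterBufferTarget = 0;    // trade smoothness for latency
  } else if ('playoutDelayHint' in r) {
    r.playoutDelayHint = 0;
  }
});
```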
You see, the lower the latency you want, the bigger the technical headaches you will need to deal with in order to maintain high quality. Which in turn means that whenever you want to reduce latency, you are going to pay in complexity and also in the quality you will be delivering. One way or another, there’s a choice being made here.
Looking to learn more on how to use WebRTC technology to build your solution? We’ve got WebRTC training courses just for that!
The post With media delivery, you can optimize for quality or latency. Not both appeared first on BlogGeek.me.
Lowcode and nocode are old/new concepts that are now finding their way to Communication APIs. Here are the latest developments.
Lowcode and nocode have fascinated me. Around 15 years ago (or more), I was tasked with bringing the video calling software SDKs we’ve developed at RADVISION to the cloud.
At the time, the solutions we had were geared towards developers and were essentially SDKs that were used as the video communication engines of applications our customers developed. Migrating to the cloud when all you are doing is SDKs is a challenge. How do you offer your developer customers the means to control edge devices via the cloud, while still letting their application control the look and feel and embed the solution wherever they want?
The cloud we’ve developed used Python (Node.js wasn’t popular yet), and we dabbled and experimented with Awesomium – a web browser framework for applications – the predecessor of today’s more popular Electron. We built REST APIs to control the calling logic and handle the client apps remotely via the cloud.
I spent much of my time trying to come to grips with how exactly you would fit remote controlling an app to the fact that you don’t really own or… control. A conundrum.
Fast forward to today, where cloud and WebRTC are everywhere, and you ask yourself – how do you remote control communications – and how do you build such interactions with ease.
The answer to that is usually by way of nocode and lowcode. Mechanisms that reduce the amount of code developers need to write to use certain technologies – in our case Communication APIs (CPaaS).
I had a bit of spare time recently, so I decided to spend it on capturing today’s nocode & lowcode status and progress within the CPaaS domain.
This has been especially important if you consider the recent announcements in the market – including the one coming from Zoom about their Jumpstart program:
“With Jumpstart, you can quickly create easy-to-integrate and easy-to-customize Zoom video solutions into your apps at lower costs.”
So without further ado, if this space interests you, you should check out my new free eBook: Lowcode & Nocode in Communication APIs
This eBook details and explains the various approaches in which lowcode and nocode manifest themselves in the Communication APIs domain. It looks into the advantages and challenges of developers who adopt such techniques within their applications.
I’d like to thank Daily for sponsoring this ebook and helping me make it happen. If you don’t know them by now then you should. Daily offers WebRTC video and audio for every developer – they are a CPaaS vendor with a great lowcode/nocode solution called Daily Prebuilt.
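To give a feel for what “lowcode” means in practice, here’s roughly what embedding Daily Prebuilt looks like with the daily-js package – a sketch, with the room URL being a placeholder you’d replace with your own:

```typescript
import DailyIframe from '@daily-co/daily-js';

// A sketch of the lowcode approach: embed a complete, prebuilt call UI
// instead of wiring up WebRTC, layouts and devices yourself.
async function embedPrebuiltCall() {
  const callFrame = DailyIframe.createFrame({ showLeaveButton: true });
  await callFrame.join({ url: 'https://your-domain.daily.co/your-room' }); // placeholder room
}
```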
If you are in the process of developing applications that use 3rd party Communication APIs, you will find the insights in this eBook important to follow.
GET MY FREE LOWCODE/NOCODE CPAAS EBOOK
The post Nocode/Lowcode in CPaaS appeared first on BlogGeek.me.
The biggest challenge you will have when implementing WebRTC group calling is estimating and optimizing bandwidth use.
[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]
Video is a resource hog. Some say that WebRTC is a great solution for 1:1 calls, but is lacking when it comes to group calling. To them I’d say that WebRTC is a technology and not a solution. In this case, it simply means that you need to invest some effort in getting group video calling to work well.
What does that mean exactly? That you need to think about bandwidth management first and foremost.
Why?
Let’s assume a 25-participant video call. And we’re modest – we just want each participant to encode their video at 500kbps – reasonable if we plan on having everyone at a mere VGA resolution (640×480 pixels).
Want to do the math together?
We end up with 12.5Mbps of video across the session, without the overhead of headers or audio. Since each participant only needs to receive media from the other 24 participants, we can “round” their downlink down to 12Mbps.
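If you want to play with the numbers yourself, the back-of-the-envelope math fits in a few lines – tweak the assumptions to match your own scenario:

```typescript
// Back-of-the-envelope math for a group call where everyone sends one stream.
const participants = 25;
const bitratePerStreamKbps = 500; // roughly VGA quality

// Each participant receives video from everyone except themselves.
const downlinkKbps = (participants - 1) * bitratePerStreamKbps;
const uplinkKbps = bitratePerStreamKbps;

console.log(`downlink: ${downlinkKbps / 1000} Mbps, uplink: ${uplinkKbps / 1000} Mbps`);
// => downlink: 12 Mbps, uplink: 0.5 Mbps (video only, no audio or header overhead)
```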
I am sure you have a downlink higher than 12Mbps, but let me tell you a few things you might not be aware of:
You can get better at it: figure out lower bitrates, limit how much you send and receive, and do so individually per participant in the group video meeting. You can take into consideration the display layout, the dominant speaker, the contributing participants, etc.
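One of the simpler tools for this is capping what each sender transmits. A sketch of doing that per video sender – the 150kbps figure is an arbitrary example; the real value should come from your layout and speaker logic:

```typescript
declare const pc: RTCPeerConnection; // assume an existing peer connection

// A sketch of capping the video bitrate a participant sends, for example
// when their tile is rendered small in the current layout.
async function capOutgoingVideo(maxKbps: number): Promise<void> {
  const sender = pc.getSenders().find((s) => s.track?.kind === 'video');
  if (!sender) return;

  const params = sender.getParameters();
  if (!params.encodings || params.encodings.length === 0) {
    params.encodings = [{}];
  }
  params.encodings[0].maxBitrate = maxKbps * 1000; // bits per second
  await sender.setParameters(params);
}

void capOutgoingVideo(150); // arbitrary example value
```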
That’s exactly what 90% of your battle here is going to be – effectively managing bandwidth.
Going for a group video calling route? Be sure to save considerable time and resources for optimization work on bandwidth estimation and management. Oh – and you are going to need to do that continuously. Because WebRTC is a marathon, not a sprint.
Scaling WebRTC is no simple task. There are a lot of best practices, tips and tricks that you should be aware of. My WebRTC Scaling eBooks Bundle can assist you in figuring out what more you can do to improve the quality and stability of your group video calling service.
The post In group video calls, effectively managing bandwidth is 90% of the battle appeared first on BlogGeek.me.
Balázs Kreith of the open-source WebRTC monitoring project ObserveRTC shows how to calculate WebRTC latency - aka Round Trip Time (RTT) - in p2p scenarios and end-to-end across one or more SFUs. WebRTC's getStats provides relatively easy access to RTT values, but using those values in a real-world environment for accurate results is more difficult. He provides a step-by-step guide using some simple Docker examples that compute end-to-end RTT with a single SFU and in cascaded SFU environments.
The post Calculating True End-to-End RTT (Balázs Kreith) appeared first on webrtcHacks.
WebRTC is a building block to be used when developing solutions. Comparing it to solutions is the wrong approach.
[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]
How does WebRTC compare to Zoom?
What about Skype? Or FaceTime?
I’d say these are the wrong questions to ask – you’re not comparing things that are comparable.
WebRTC is a piece of technology. A set of building blocks that you can use, like lego bricks.
In essence, you can view WebRTC in two ways:
Got an application you’re developing? Need communications sprinkled into it? Some voice. Maybe video. All in real time. And with browser components maybe. If that is the case, then WebRTC is the technology you’re likely to be using for it. But piecing all of that together into your application? That’s up to you. And that’s your solution.
We can then compare the solution you built to some other solution out there.
Next time people tell you “WebRTC isn’t good because it can’t do group calls” – just laugh at their faces. Because as a technology WebRTC can certainly handle group calls and large broadcasts – you’ll need to bring media servers to do that, and sweat to build your solution. The pieces of your puzzle there will include WebRTC as a technology.
Remember:
WebRTC is a technology not a solution. What you end up doing with it is what matters
Looking to learn more on how to use WebRTC technology to build your solution? We’ve got WebRTC training courses just for that!
The post WebRTC is a technology not a solution appeared first on BlogGeek.me.
A full review and guide to all of the Jitsi Meet-related projects, services, and development options including self-install, using meet.jit.si, 8x8.vc, Jitsi as a Service (JaaS), the External iFrame API, lib-jitsi-meet, and the Jitsi React libraries among others.
The post The Ultimate Guide to Jitsi Meet and JaaS appeared first on webrtcHacks.
A very detailed look at the WebRTC implementations of Google Meet and Google Duo and how they compare using webrtc-internals and some reverse engineering.
The post Meet vs. Duo – 2 faces of Google’s WebRTC appeared first on webrtcHacks.
WebRTC requires an ongoing investment that doesn’t lend itself to a one-off outsourced project. You need to plan and work with it for the long term.
[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]
WebRTC simplified development and reduced the barrier of entry to many in the market. This brought with it the ability to quickly build, showcase and experiment with demos, proof of concepts and even MVPs. Getting that far is now much easier thanks to WebRTC, but not planning ahead will ruin you.
There are a few reasons why you can’t treat WebRTC as merely a sprint:
I like using this slide in my courses and presentations:
These are the actors in a WebRTC application. While the application is within your control and ownership – everything else isn’t…
Planning on using WebRTC? Great!
Now prepare for it as you would for a long marathon – it isn’t going to be a sprint.
Things to do in your preparation for the WebRTC marathon include:
The post WebRTC is a marathon not a sprint appeared first on BlogGeek.me.
Hearing FUD around WebRTC IP leaks and testing them? The stories behind them are true, but only partially.
WebRTC IP leak tests were popular at some point, and somehow they still are today. Some of it is related to pure FUD while another part of it is important to consider and review. In this article, I’ll try to cover this as much as I can. Without leaking my own private IP address (192.168.123.191 at the moment if you must know) or my public IP address (80.246.138.141, while tethered to my phone at the coffee shop), let’s dig into this topic together.
IP addresses are what got you here to read this article in the first place. They are used by machines to reach out to each other and communicate. There are different types of IP addresses, and one such grouping is the split between private and public addresses.
Private and public IP addresses
Once upon a time, the internet was built on top of IPv4 (and it still mostly is). IPv4 meant that each device had an IP address constructed out of 4 octets – a total of around 4 billion potential addresses. Less than the people on earth today and certainly less than the number of devices that now exist and connect to the internet.
This got solved by splitting the address ranges to private and public ones. A private IP address range is a range that can be reused by different organizations. For example, that private IP address I shared above? 192.168.123.191? It might also be the private IP address you are using as well.
A private IP address is used to communicate between devices that are hosted inside the same local network (LAN). When a device is on a different network, then the local device reaches out to it via the remote device’s public IP address. Where did that public IP address come from?
The public IP address is what a NAT device associates with the private IP address. This is a “box” sitting on the edge of the local network, connecting it to the public internet. It essentially acts as the translator between private IP addresses and public ones.
IP addresses and privacy
So we have IP addresses, which are like… home addresses. They indicate how a device can be reached. If I know your IP address then I know something about you:
A quick look at that public IP address of mine from above, gives you the following information on WhatIsMyIpAddress.com:
So…
It is somewhat accurate, but in this specific case, not much. In other cases it can be pretty damn accurate. Which means it is quite private to me.
One thing these nasty IP addresses can be used for? Fingerprinting. This is a process of understanding who I am based on the makeup and behavior of my machine and me. An IP address is one of many characteristics that can be used for fingerprinting.
If you’re not certain if IP addresses are a privacy concern or not, then there’s the notion that most probably IP addresses are considered personally identifiable information – PII (based on rulings of US courts, as far as I can glean). This means that an IP address can be used to identify you as a person. How does that affect us? I’d say it depends on the use case and the mode of communications – but what do I know? I am not a lawyer.
Who knows your IP address(es)?
IP addresses are important for communications. They contain some private information in them due to their nature. Who knows my IP addresses anyway?
The obvious answer is your ISP – the vendor providing you access to the internet. It allocated the public IP address you are using to you and it knows which private IP address you are coming from (in many cases, it even assigned that to you through the ADSL or other access device it installed in your home).
Unless you’re trying to hide, all websites you access know your public IP address. When you connected to my blog to read this article, in order to send this piece of content back to you, my server needed to know where to reply to, which means it has your public IP address. Am I storing it and using it elsewhere? Not that I am directly aware of, but my marketing services such as Google Analytics might – and probably do – make use of your public IP address.
That private IP address of yours though, most websites and cloud services aren’t directly aware of it and usually don’t need it either.
WebRTC and IP addresses
WebRTC does two things differently than most other browser-based protocols out there:
Because WebRTC diverges from the client-server approach AND uses dynamic ephemeral ports, there’s a need for NAT traversal mechanisms to be able to… well… pass through these NATs and firewalls. And while at it, try not to waste too many network resources. This is why a normal peer connection in WebRTC will have 4+ types of “local” addresses as its candidates for such communications:
Lots and lots of addresses that need to be communicated from one peer to another. And then negotiated and checked for connectivity using ICE.
Then there’s this minor extra “inconvenience” that all these IP addresses are conveyed in SDP which is given to the application on top of WebRTC for it to send over the network. This is akin to me sending a letter, letting the post office read it just before it closes the envelope.
IP addresses are necessary for WebRTC (and VoIP) to be able to negotiate and communicate properly.
This one is important, so I’ll write it again: IP addresses are necessary for WebRTC (and VoIP) to be able to negotiate and communicate properly.
It means that this isn’t a bug or a security breach on behalf of WebRTC, but rather its normal behavior which lets you communicate in the first place. No IP addresses? No communications.
One last thing: You can hide a user’s local IP address and even public IP address. Doing that though means the communication goes through an intermediary TURN server.
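In API terms, that trade-off looks something like the sketch below – forcing relay-only candidates hides the user’s addresses from the remote peer, at the cost of routing (and paying for) everything through your TURN servers. The server details are placeholders:

```typescript
// A sketch: force all traffic through TURN so neither the local nor the
// public IP address is exposed directly to the other peer.
const pc = new RTCPeerConnection({
  iceServers: [
    {
      urls: 'turn:turn.example.com:3478', // placeholder TURN server
      username: 'placeholder-user',
      credential: 'placeholder-password',
    },
  ],
  iceTransportPolicy: 'relay', // gather relay candidates only
});
```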
Past WebRTC “exploits” of IP addresses
WebRTC is a great avenue for hackers:
The main exploits around IP addresses in browsers affecting the user’s privacy were conducted so far for fingerprinting.
Fingerprinting is the act of figuring out who a user is based on the digital fingerprint he leaves on the web. You can glean quite a lot about who a user is based on the behavior of their web browser. Fingerprinting makes users identifiable and trackable when they browse the web, which is quite useful for advertisers.
The leading story here? NY Times used WebRTC for fingerprinting
There’s a flip side to it – WebRTC is/was a useful way of knowing if someone is a real person or a bot running on browser automation as indicated in the comments. A lot of the high scale browser automations simply couldn’t quite cope with WebRTC APIs in the browser, so it made sense to use it as part of the techniques to ferret out real traffic from bots.
Since then, WebRTC made some changes to the exposure of IP addresses:
There are different entities that need to have your local IP address in a WebRTC session:
The other peer, the web application and the TURN server don’t really need that access if you don’t care about the local network connectivity use case. If connecting a WebRTC session on the local network (inside a company office, home, etc) isn’t what you’re focused on, then you should be fine with not sharing the local IP address.
Also, if you are concerned about your privacy to the point of not wanting people to know your local IP address – or public IP address – then you wouldn’t want these IP addresses exposed either.
But how can the browser or the application know about that?
VPNs stopping WebRTC IP leaks
When using a VPN, what you are practically doing is making sure all traffic gets funneled through the VPN. There are many reasons for using a VPN and they all revolve around privacy and security – either of the user or the corporation whose VPN is being used.
The VPN client intercepts all outgoing traffic from a device and routes it through the VPN server. VPNs also configure proxy servers for that purpose so that web traffic in general would go through that proxy and not directly to the destination – all that in order to hide the user itself or to monitor the user’s browsing history (do you see how all these technologies can be used either for anonymity or for the exact opposite of it?).
WebRTC poses a challenge for VPNs as well:
To make all this go away, browsers have privacy policies built into them. And VPNs can modify these policies to accommodate for their needs – things like not allowing non-proxied UDP traffic to occur.
How much should you care about WebRTC IP leaks?
That’s for you to decide.
As a user, I don’t care much about who knows my IP address. But I am not an example – I am also using Chrome and Google services. Along with a subscription to Office 365 and a Facebook account. Most of my life has already been given away to corporate America.
Here are a few rules of thumb I’d use if I were to decide if I care:
In all other cases, just do nothing and feel free to continue using WebRTC “as is”. The majority of web users are doing just that as well.
Do you want privacy or privacy?
This one is tricky.
You want to communicate with someone online. Without them knowing your private or public IP address directly. Because… well… dating. And anonymity. And harassment. And whatever.
To that end, you want the communication to be masked by a server. All of the traffic – signaling and media – gets routed through the intermediary server/service. So that you are masked from the other peer. But guess what – that means your private and public IP addresses are going to be known to the intermediary server/service.
You want to communicate with someone online. Without people, companies or governments eavesdropping on the conversation.
To that end, you want the communication to be peer-to-peer. No TURN servers or media servers as intermediaries. Which is great, but guess what – that means your private and public IP addresses are going to be known to the peer you are communicating with.
At some point, someone needs to know your IP addresses if you want and need to communicate. Which is exactly where we started from.
Oh, and complicated schemes a-la TOR networking is nice, but doesn’t work that well with real time communications where latency and bitrates are critical for media quality.
The developer’s angle of WebRTC IP leaks
We’ve seen the issue, the reasons for it and we’ve discussed the user’s angle here. But what about developers? What should they do about this?
WebRTC application developers
If you are a WebRTC application developer, then you should take into account that some of your users will be privacy conscious. That may include the way they think about their IP addresses.
Here are a few things for you to think about here:
VPN application developers
If you are a VPN developer, you should know more about WebRTC, and put some effort into handling it.
Blocking WebRTC altogether won’t solve the problem – it will just aggravate users who need access to WebRTC-based applications (=almost all meeting apps).
Instead, you should make sure that part of your VPN client application takes care of the browser configurations to place them in a policy that fits your rules:
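For Chromium-based browsers, for example, a VPN vendor’s extension can do this through the browser’s privacy API. A sketch, assuming the extension declares the "privacy" permission and uses Chrome’s webRTCIPHandlingPolicy values:

```typescript
// A sketch of a VPN browser extension tightening WebRTC's IP handling:
// stick to the default public interface and don't expose local addresses.
chrome.privacy.network.webRTCIPHandlingPolicy.set(
  { value: 'default_public_interface_only' },
  () => console.log('WebRTC IP handling policy applied'),
);
```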
What is a WebRTC leak test?
A WebRTC leak test is a simple web application that tries to find your local IP address. This is used to check and prove that an innocent-looking web application with no special permissions from a user can gain access to such data.
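Under the hood, such a test is just a handful of WebRTC API calls – roughly the sketch below. What actually shows up depends on the browser’s privacy policy, whether the user granted device permissions, and mDNS obfuscation of local addresses:

```typescript
// A sketch of how a "leak test" page gathers candidate addresses:
// open a peer connection, trigger ICE gathering, read the candidates.
const probe = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.example.com:3478' }], // placeholder STUN server
});
probe.createDataChannel('probe'); // gives ICE something to gather candidates for

probe.onicecandidate = (event) => {
  if (event.candidate) {
    // In modern browsers, local addresses usually appear as obfuscated
    // .local mDNS names here rather than real private IP addresses.
    console.log(event.candidate.type, event.candidate.address);
  }
};

probe.createOffer().then((offer) => probe.setLocalDescription(offer));
```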
Does WebRTC still leak IP?
Yes and no.
It really depends where you’re looking at this issue.
WebRTC needs IP addresses to communicate properly. So there’s no real leak. Applications written poorly may leak such IP addresses unintentionally. A VPN application may be implemented poorly so as to not plug this “leak” for the privacy conscious users who use them.
Can the WebRTC IP leak be prevented?
Yes. By changing the privacy policy in Chrome. This is something that VPNs can do as well (and should do).
How severe is the WebRTC leak?
The WebRTC leak of IP addresses gives web applications the ability to know your private IP address. This has been a privacy issue in the past. Today, to gain access to that information, web applications must first ask the user for consent to access his microphone or camera, so this is less of an issue.
What is a good VPN to plug the WebRTC leak?
I can’t really recommend a good VPN to plug WebRTC leaks. This isn’t what I do, and frankly, I don’t believe in such tools plugging these leaks.
One rule of thumb I can give here: don’t go for a free VPN. If it is free, then you are the product, which means they sell your data – the exact privacy you are trying to protect.
The post What is the WebRTC leak test and should you be worried about it? appeared first on BlogGeek.me.
Step-by-step guide on how to fix bad webcam lighting in your WebRTC app with standard JavaScript APIs for camera exposure or natively with UVC drivers.
The post Fix Bad Lighting with JavaScript Webcam Exposure Controls (Sebastian Schmid) appeared first on webrtcHacks.
What WebRTC did to VoIP was reduce the barrier of entry for new vendors and increase the level and domains of innovation.
[In this list of short articles, I’ll be going over some WebRTC related quotes and try to explain them]
WebRTC was an aha moment in the history of communications.
It did two simple things that were never before possible for “us” VoIP developers:
This in turn, brought with it the two aspects of WebRTC illustrated above:
For many years I’ve been using this slide to explain why WebRTC is so vastly different than what came before it:
That said, truly innovating, productizing and scaling WebRTC applications require a bit more of an investment and a lot more in understanding and truly grokking WebRTC. Especially since WebRTC is… well… it is web and VoIP while at the same time it isn’t exactly web and it isn’t exactly VoIP:
This means that you need to understand and be proficient with both VoIP development (to some extent) and with web development (to some extent).
Looking to learn WebRTC? Here are some guidelines of how to get started with learning WebRTC.
The post WebRTC reduced barriers and increased innovation in communications appeared first on BlogGeek.me.
With FIDO coming to replace passwords in applications, CPaaS vendors are likely to see their 2FA revenues decline.
2FA revenue has always lived on the premise that passwords are broken. I’ve written about this back in 2017:
Companies are using SMS for three types of services these days:
1. Security — either through two-factor authentication (2FA), for signing in to services; or one-time password (OTP), which replaces the need to remember a password for various apps
2. Notifications for services — these would be notifications that you care about or that offer you information, like that request for feedback or maybe that birthday coupon
3. Pure spam — businesses just send you their unsolicited crap trying to get you to sign up for their services
Spam is spam. Notifications are moving towards conversations on social networks. And the security SMS messages are going to be replaced by FIDO. Here’s where we’re headed.
Let’s take this step by step.
Passwords and the FIDO Alliance
Passwords are the bane of our modern existence. A necessary evil.
To do anything meaningful online (besides reading this superb article), you need to login or identify yourself against the service. Usually, this is done by a username (email or an identity number most likely) and a password. That password part is a challenge:
I use a password manager to handle my online life. My wife uses the “forgot my password” link all the time to get the same results.
It seems that whatever was tried in the passwords industry has failed in one way or another. Getting people house trained on good password practices is just too damn hard and bound to failure (just like trying to explain to people not to throw facial tissue down the toilet).
Experts have since been pushing for a security model that authenticates a user with multiple “things”:
Smartphones today are something you own and they offer something you are by having fingerprint ID and face ID solutions baked into them. That last piece – something you know – is the password.
Enter FIDO.
FIDO stands for Fast IDentity Online.
Here’s the main marketing spiel of the FIDO Alliance:
The FIDO Alliance seems to have more members than it has views on that YouTube video (seriously).
By their own words:
The FIDO Alliance is working to change the nature of authentication with open standards that are more secure than passwords and SMS OTPs, simpler for consumers to use, and easier for service providers to deploy and manage.
So:
What more can you ask for?
Well… for this standard to succeed.
And here is what brought me to write this article. The recent announcement from earlier this month – Apple, Google and Microsoft all committing to the FIDO standard. They are already part of FIDO, but now it is about offering easier mechanisms to remove the need for a password altogether.
If you are reading this, then you are doing that in front of an Apple device (iPhone, iPad or MacOS), a Google one (Android or Chrome OS) or a Microsoft one (Windows). There are stragglers using Linux or others, but these are tech-savvy enough to use passwords anyways.
These devices are more and more active as both something you own and something you are. My two recent laptops offer fingerprint biometric identification and most (all?) smartphones today offer the same or better approaches as well.
I long waited for Google and Apple to open up their authentication mechanisms in Android and iOS to let developers use it the same way end users use it to access Google and Apple services – when I login to any Google connected site anywhere, my smartphone asks me if that was me.
And now it seems to be here. From the press release itself:
Today’s announcement extends these platform implementations to give users two new capabilities for more seamless and secure passwordless sign-ins:
1. Allow users to automatically access their FIDO sign-in credentials (referred to by some as a “passkey”) on many of their devices, even new ones, without having to re-enroll every account.
2. Enable users to use FIDO authentication on their mobile device to sign in to an app or website on a nearby device, regardless of the OS platform or browser they are running.
So… no need for passwords. And no need for 2FA. Or OTP.
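In browser terms, this passwordless flow surfaces through the WebAuthn API that FIDO2 builds on. A minimal sketch of the sign-in side, assuming your server issues the challenge and verifies the returned assertion:

```typescript
// A sketch of a passkey sign-in: the platform authenticator (fingerprint,
// face recognition or device PIN) replaces both the password and the SMS code.
async function signInWithPasskey(challengeFromServer: ArrayBuffer) {
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge: challengeFromServer, // issued and later verified by your server
      userVerification: 'required',   // require biometrics / device PIN
      timeout: 60_000,
    },
  });
  return assertion; // send back to the server for verification
}
```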
FIDO is going to end the farce of using 2FA and OTP technologies.
2FA: a CPaaS milking cow
2FA stands for Two Factor Authentication while OTP stands for One Time Password.
With 2FA, you enter your credentials and then receive an SMS or email (or more recently Whatsapp message) with a number. You have to paste that number on the web page or app to login. This adds the something you own part to the security mechanism.
OTP is used to remove the password altogether. Tell us your email and we will send you a one time password over SMS (or email), usually a few digits, and you use that to login for just this once.
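Mechanically, the OTP dance is trivial – the pain is in everything around it. A bare-bones server-side sketch, deliberately leaving out expiry, rate limiting, retries and delivery fallbacks (exactly the parts that make it hard at scale):

```typescript
import { randomInt } from 'crypto';

// A bare-bones OTP flow: generate, store, verify.
const pendingCodes = new Map<string, string>(); // phone number -> code

function sendOtp(phoneNumber: string, sendSms: (to: string, body: string) => void): void {
  const code = String(randomInt(100000, 1000000)); // 6 digits
  pendingCodes.set(phoneNumber, code);
  sendSms(phoneNumber, `Your verification code is ${code}`); // via your SMS/CPaaS provider
}

function verifyOtp(phoneNumber: string, submitted: string): boolean {
  const ok = pendingCodes.get(phoneNumber) === submitted;
  pendingCodes.delete(phoneNumber); // one-time use
  return ok;
}
```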
2FA, OTP… the ugly truth is that it is nagging as hell to everyone. Not only users but also application developers. The devil is always in the details with these things:
The list goes on. So CPaaS vendors have gone ahead and incorporated 2FA specific solutions into their bag of services. Twilio even acquired Authy in 2015, a customer, just to have that in their offerings at the time.
The great thing about 2FA (for CPaaS vendors) is that the more people engage with the digital world, the more they will end up with a 2FA or OTP SMS message. And each such message is a minor goldmine: A single SMS on Twilio in the US costs $0.0075 to send. A 2FA transaction will cost an additional $0.09 on top of it.
Yes. 2FA services bring great value. And they are tricky to implement and maintain properly at scale. So the price can be explained. But… what if we didn’t really need 2FA at all?
The death of 2FA
Putting one and one together:
Apple, Google and Microsoft committing to FIDO and banishing passwords by making their devices take care of something you know, something you own AND something you are means that users will not need to identify themselves in front of services using passwords AND they won’t be needing OTP or 2FA either.
The solution ends up being simpler for the user AND simpler for the service provider.
Win Win.
Unless you are a CPaaS vendor who makes revenue from 2FA. Then it is pure loss.
What alternatives can CPaaS vendors offer?
As a first step, the “migration” from “legacy” 2FA and OTP towards Apple/Google’s new and upcoming FIDO solution. Maybe a unified API on top of Apple and Google, but that’s a stretch. I can’t see such APIs costing $0.09 per authentication. Especially if Apple and Google do a good job at the developer tooling level for this.
* I left Microsoft out towards the end here because they are less important for this to succeed. They do matter once it succeeds – in making things even simpler on laptops, so one won’t have to reach for his phone to login when on a laptop.
The future of CPaaS
5 years ago, back in that 2017 article, I ended it with these words:
Goodbye SMS, It’s Time for Us to Move On
Don’t be fooled by the growth of 2FA and application-to-person (A2P) type messages over SMS. This will have a short lifespan of a few years. But five to 10 years from now? It will just be a service sitting next to my imaginary fax machine.
We’re 5 years in and the replacements of SMS are here already.
All that revenue coming to CPaaS from SMS is going to go elsewhere. Social omnichannel introduced by CPaaS vendors will replace that first chunk of revenue, but what will replace the 2FA and OTP? Can CPaaS vendors rely on FIDO and build their own business logic on top and around it for their customers?
It seems to me revenue will need to be found elsewhere.
Interested in learning more about the future of CPaaS? Check out my ebook on the topic (relevant today as it was at the time of writing it).
Download my CPaaS in 2020 ebook
The post FIDO Alliance and the end of 2FA revenue to CPaaS vendors appeared first on BlogGeek.me.
Saúl Ibarra Corretgé of Jitsi walks through his epic struggle getting Apple iOS bitcode building with WebRTC for his Apple Watch app.
The post The WebRTC Bitcode Soap Opera (Saúl Ibarra Corretgé) appeared first on webrtcHacks.
What was nice to have is now becoming mandatory in WebRTC video calling applications. This includes background blurring, but also a lot of other features as well.
Do you remember that time not long ago when 16 participants on a call was the highest number that product managers asked for? Well… we’re not there anymore. In many cases, the number has grown. First to 49. Then to a lot more, with nuances on what exactly it means to have larger calls. We now see anywhere between 100 and 10,000 participants being considered a “meeting”.
I’ve been talking and mentioning table stakes for quite some time – during my workshops, on my messages on LinkedIn, in WebRTC Insights. It was time I sat down to write it on my blog
This isn’t really about WebRTC, but rather what users now expect from WebRTC applications. These expectations are in many cases table stakes – features that are almost mandatory in order to be even considered as a relevant vendor in the selection process.
What you’ll see here is almost the new shopping list. Since users are different, markets are different, scenarios are different and requirements vary – you may not need all of them in your application. That said, I suggest you take a good look at them, decide which ones you need tomorrow, which you don’t need and which you have to get done yesterday.
Background blurring/replacement
Obvious. I have a background replacement. I never use it in my own calls. Because… well… I like my background. Or more accurately – I like showing my environment to people. It gives context and I think makes me more human.
This isn’t to say people shouldn’t use background replacement or that I’ll hate them for doing that – just that for me, and my background – I like keeping the original.
Others, though, want to replace their background. Sometimes because they don’t have a proper place where the background isn’t cluttered or “noisy”. Or because they just want to have fun with it.
Whatever the reason is, background blurring and replacement are now table stakes – if your app doesn’t have it, then the app that does in your market will be more interesting and relevant to the buyers.
Here’s how I see the development of the requirements here:
Lighting adjustments
If I recall correctly, Google Meet started with this feature, and since then it has been cropping up in other meeting solutions. We all use webcams, but none of us has good lighting. It might be a window behind (or in my case to the side), the weather out the window, the hour in the day, or just poor lighting in the room.
While this can be fixed, it isn’t. Much like the cluttered room, the understanding is that humans are lazy or just not up to the task of understanding what to do to improve video lighting on their own. And just like background removal, we can employ machine learning to improve lighting on a video stream.
Noise suppression/cancellation
I started using this stock image when I started doing virtual workshops. It is how I like to think of my nice neighbor (truth be told – he really is nice). It just seems that every time I sit down for an important meeting, he’d be on one of his renovation sprees.
The environment in which we’re conducting our calls is “polluted” with sounds. My mornings are full of lawn mower noises from the park below my apartment building. The rest of my days are filled by the other family members in my apartment and by my friendly neighbor. For others, it is the classic dog barking and traffic noises.
Same as with video, since we’re now doing these sessions from everywhere at any time, it is becoming more important than ever to have this capability built into the service used.
Some services today offer the ability to suppress and cancel different types of noises. You don’t have the control over what to suppress, but rather get an on/off switch.
Four important things here:
And last but not least, this is the kind of feature that can also be implemented directly by the microphone, CPU or operating system. Apple tried that recently in iOS and then reverted.
Speech to text
Up until now, we’ve discussed capabilities that necessitated media processing and machine learning. Speech to text is different.
For several years now we’ve been hammered around speech to text and text to speech. The discussion was usually around the accuracy of the algorithms for speech to text and the speed at which they did their work.
It now seems that many services are starting to offer speech to text and its derivatives baked directly into their user experience. There are several benefits of investing in this direction:
The challenges with speech to text are, first, how to pass the media stream to the speech to text algorithm – not a trivial task in many cases – and later, picking a service that would yield the desired results.
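One common client-side workaround is to chunk the audio with MediaRecorder and ship it to whatever speech to text service you picked. A sketch – the upload endpoint is a placeholder for your own backend:

```typescript
// A sketch of getting call audio to a speech-to-text backend from the browser:
// record the audio track in chunks and upload each chunk as it becomes available.
function streamAudioForTranscription(track: MediaStreamTrack): void {
  const recorder = new MediaRecorder(new MediaStream([track]));

  recorder.ondataavailable = (event) => {
    if (event.data.size > 0) {
      // Placeholder endpoint - your own proxy in front of the STT service.
      void fetch('https://example.com/transcribe', { method: 'POST', body: event.data });
    }
  };

  recorder.start(3000); // emit a chunk every 3 seconds
}
```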
WebRTC meeting size
It used to be 9 tiles. Then when the pandemic hit, everyone scrambled to do 49 gallery view. I think that requirement has become less of an issue, while at the same time we see a push towards a greater number of participants in sessions.
How does that work exactly?
If in the past we had a few meeting rooms joining in to a meeting, with a few people seated in each room, now most of the time, we will have these people join in remotely from different locations. The number of people stayed the same, yet the number of media streams grew.
We’re also looking to get into more complex scenarios, such as large scale virtual events and webinars. And we want to make these more interactive. This pushes the boundary of a meeting size from hundreds of participants to thousands of participants.
This requirement means we need to put more effort into implementing optimizations in our WebRTC architecture and to employ capabilities that offer greater flexibility from our media servers and client code.
Getting there requires WebAssembly and constant optimization
These new requirements and capabilities are becoming table stakes. Implementing them has its set of nuances, and each of these features is also eating into our CPU and memory budget.
It used to be that we had to focus on the new shiny toys. Adding new cool features and making them available on the latest and greatest devices. Now it seems that we’re in need of pushing these capabilities into ever lower performing devices:
So we now have less capable devices that need more features to work well, requiring us to reduce our CPU requirements to serve them. And did I mention most of these new table stakes need machine learning?
The tool available to us for all this is WebAssembly on the browser side. This enables us to run code faster in the browser and implement algorithms that would be impossible to achieve using JavaScript.
It also means we need to constantly optimize the implementation, improving performance to make room for more of these algorithms to run.
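On the browser side, the typical pattern is to splice a transform into the media pipeline and hand each frame to a WebAssembly module. A sketch using Chrome’s insertable streams APIs (MediaStreamTrackProcessor and MediaStreamTrackGenerator – not available in every browser), with the wasm-backed helper being an assumed placeholder:

```typescript
// A sketch of a per-frame processing pipeline (e.g. background blur) where
// the heavy lifting would be done by a WebAssembly module.
declare function processWithWasm(frame: VideoFrame): VideoFrame; // assumed wasm-backed helper

function addVideoEffect(track: MediaStreamTrack): MediaStreamTrack {
  const processor = new MediaStreamTrackProcessor({ track });
  const generator = new MediaStreamTrackGenerator({ kind: 'video' });

  const transform = new TransformStream<VideoFrame, VideoFrame>({
    transform(frame, controller) {
      const processed = processWithWasm(frame); // the expensive part
      frame.close();
      controller.enqueue(processed);
    },
  });

  processor.readable.pipeThrough(transform).pipeTo(generator.writable);
  return generator; // a video track you can attach to the peer connection
}
```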
10 years into WebRTC and 2 years into the pandemic, we’re only just scratching the surface of what is needed. How are you planning to deal with these new table stakes?
The post WebRTC video calling table stakes appeared first on BlogGeek.me.
Anycast enables WebRTC services to better manage and optimize global deployments at scale.
In 2021 we started seeing a new technology finding its way more and more into WebRTC applications: Anycast. Unlike other shiny new toys, Anycast isn’t shiny and it isn’t new. In fact, it was defined in the previous millennium, before the era of the smartphone.
I’ve been “doing” VoIP for over 20 years now, but wasn’t really aware of Anycast. I dug around a bit, and ended up sitting with William King, CTO & Co-founder of Subspace, to learn more about Anycast and its use with WebRTC.
Here’s what I learned about how WebRTC developers can and are using Anycast – and how it can assist them in their own deployments.
For someone sitting in the clouds today, the lowest level of networking you can think of is the IP level (I am told there are lower levels, but for me IP is low enough).
At that level, if one machine wants to reach another, it needs to use its IP address as the destination. In most cases, and at least in 99% of all of the things I’ve implemented myself as a developer, you do this using what is known as Unicast:
With Unicast, each device on the network has its own unique IP address that I can use to reach it directly (and yes, I am ignoring here the distinction between local networks and public networks and how they handle it). The key thing here is that an IP address is associated with one device only, so as the illustration above shows, when the red device wants to send a message to the green device, it can send it to him via Unicast simply by stating the green device’s IP address as the destination.
Anycast is different. With Anycast, multiple devices on the network can have the same IP address associated with them. The end result is more akin to this:
In the illustration above we have 3 different green devices with the same IP address. When the red device wants to send a message to their IP address, it doesn’t really know which one will be receiving his message – just that it is somehow going to be routed to one of them. Which one? The “closest” one usually, whatever that means.
What does that mean exactly?
Here’s how Wikipedia explains it (the illustrations above are rough sketches I did based on the ones I found on their page explaining Anycast):
Anycast is a network addressing and routing methodology in which a single destination IP address is shared by devices (generally servers) in multiple locations. Routers direct packets addressed to this destination to the location nearest the sender, using their normal decision-making algorithms, typically the lowest number of BGP network hops. Anycast routing is widely used by content delivery networks such as web and DNS hosts, to bring their content closer to end users.
Let’s emphasize this with colors, so we focus on the important bits –
Anycast is something that is being widely used today, just not in VoIP or WebRTC.
The main purpose of Anycast at the end of the day is to provide high availability for stateless services.
The best thing you can do with Anycast is to deal with single request-response pairs – stateless.
Why? You send out your request (for example to translate a DNS name to an IP address; or for that next chunk of a Netflix episode you’re watching), and the server (device) you reach on the network sends you that response.
Looking for the next chunk in the Netflix episode or need another DNS name translation? Easy – send another request, and the same or another server with the same Anycast IP address will respond.
Enter WebRTC.
A world where everything and anything is stateful.
There’s signaling. With its connection state machine, ICE negotiation state machine (see? State Machine hints at this not being stateless) and application logic on top.
Then there are TURN servers and media servers. All of them need to understand the state and manage incoming media flow that is both stateful and real time.
This makes utilizing Anycast in WebRTC quite a challenge.
While we’d like to enjoy Anycast’s obvious advantage of high availability (and a few other advantages it gives), in order to do so, we need to overcome the statefulness challenge first.
The simplest link in WebRTC is the TURN server. While stateful, its job is rather simple – routing data between peers without much thought. This makes TURN servers the best candidate for infrastructure optimizations using Anycast.
Let’s see what advantages an Anycast TURN infrastructure can give WebRTC applications.
3 advantages of Anycast for WebRTC
Once you get down to it, deploying TURN servers and maybe even media servers using Anycast can give some interesting benefits to your infrastructure.
Here are the main advantages – ones that are going to define how WebRTC infrastructure will be designed and deployed in the coming years.
#1 – Better geolocation
When a user connects to your WebRTC application, your best bet is to make sure the user is as close to your infrastructure as possible. The faster you get him onto a TURN or a media server, the better media quality you can expect.
Why? Simple. Because from the server the user connected to onwards, you control and own the media flow. And if you control and own it, you can make it better. But the part of the journey the media makes from the user to your first server? That’s something you don’t control or own, so your ability to improve quality there is lower.
This is why whenever a user joins, you are likely to start doing some geolocation, trying to figure out where the user is coming from in order to allocate for him your “closest” TURN or media server.
That process is usually done by looking at the origin IP address and then using a third party service to indicate the location of that IP address – or by DNS geolocation, letting a DNS server do that for us somehow. When we leave it to the DNS, we are at the mercy of the DNS hosting service. It works, but not always. And it is also somewhat slow to update.
Remember that time you changed the DNS configuration of your WordPress server? Were you told it can take a few hours to “propagate”? Well… that’s exactly the problem you might be facing in getting routes updated when using DNS geolocation.
With Anycast, geolocation takes place at the BGP level. Don’t ask me what that is exactly, but it means two things for us:
That second point is a big difference. DNS servers have a different “job to be done” than WebRTC Anycast services. The latter focus on real time delivery and on better and more optimized geolocation as an extension of it. So you can expect better results overall, especially on a global scale.
#2 – Higher resiliency (and security)
Operating an Anycast service requires solving the statefulness challenge when it comes to WebRTC. Once that is solved, we gain the benefit of having our data routed through the closest server over the IP layer.
If the physical server we’re working in front of goes down, then Anycast will reroute future traffic through other servers with the same IP address. And that gives us a natural resiliency.
Furthermore, assume I am an “adversary” that wants to take down your service or disrupt it.
I can check the IP addresses you are using and map your servers. I can then commence with a DDoS attack to flood one or more of your servers via these IP addresses.
If that IP address belongs to a specific server, it will require a relatively small amount of traffic to bring that server down to its knees. But if that IP address belongs to multiple servers via Anycast, then flooding that IP address means trying to flood the whole network and not a specific server – a much harder task to achieve.
Resiliency comes built-in with Anycast.
#3 – Ease of configuration
The ease of configuration is something you get from the first two advantages.
Once we’re using Anycast, then there are a few things that make our lives easier:
Anycast is where much of the future of WebRTC services lies.
We are shifting our focus on how to optimize and maintain WebRTC infrastructure at scale. Last year it was all about getting to that 49-grid gallery view. This year it is a lot more nuanced. It is mostly about scale, performance and global reach as far as I can tell.
Anycast can play a vital role in that area and in how services can improve their performance and perceived quality for their users.
The post 3 advantages of Anycast in WebRTC you didn’t know about appeared first on BlogGeek.me.
RTC@Scale was Facebook’s virtual WebRTC event, covering current and future topics. Here’s the summary so you can pick and choose the relevant ones for you.
WebRTC Insights is a subscription service I have been running with Philipp Hancke for over a year now. The purpose of it is to make it easier for developers to get a grip of WebRTC and all of the changes happening in the code and browsers – to keep you up to date so you can focus on what you need to do best – build awesome applications.
We got into a kind of a flow:
It is fun to do and the feedback we’re getting is positive.
That said, being us, means that we can’t really sit still… or in this case – Philipp…
We published this on Monday the week after the event took place to our WebRTC Insights clients, and now, we’re opening it up for everyone as well.
Philipp decided it would make sense to summarize the recent RTC@Scale “recruiting event” that Facebook did – the RSVP was explicitly asking for consent to be contacted. The technical depth of the talks was amazing, so we’ve added an “out of order” issue for you, just for this.
The intent is for you to *not* spend 5 hours but rather to focus on the select sessions that are relevant for you.
The event setup was simple:
Real-time Communication for Today and Future Experiences / Maher Saba @ Meta
Panel: RTC in the Metaverse / Sriram Srinivasan, Mike Arcuri, Paul Boustead, and Cullen Jennings
These sessions focus on roadmap and far future views. We’d rather have a bit more on the here and now and the immediate future requirements than what would happen in 3, 5 or 10 years time, but hey – they are recruiting
Holographic Video Calling / Nitin Garg @ Meta
Spatial Communications at Scale in Virtual Environments / Paul Boustead @ Dolby
RTC3 / Justin Uberti @ Clubhouse
Live QA
Audio ML is quite interesting. Large vendors are at it, and when (if?) the results will trickle into vanilla WebRTC is yet to be seen. Key takeaway: ML-based noise suppression is more important than echo cancellation these days.
Developing Machine Learning Based Speech Enhancement Models for Teams and Skype / Ross Cutler @ Microsoft
Can AI Disrupt Speech Compression? / Jan Skoglund @ Google
Live QA
AV1 is coming. It will take time to be here. To get a grip over it and see what companies are doing, we got Google and Visionular.
Google is what goes inside WebRTC. Visionular is what you can buy commercially on the market for server or proprietary implementations.
Your focus should probably be in low bitrates and slide sharing scenarios.
AV1 Encoder for RTC / Marco Paniconi @ Google
AV1 for RTC: Current and Future / Zoe Liu @ Visionular
Live QA
We found this part to be most applicable to current problems. This is where you should be spending your time and focus right now
Making Meta RTC Audio More Resilient / Andy Yang @ Meta
Private Calling at WhatsApp / Xi Deng @ Meta
Group Call End-to-End Encryption and the Challenges of Encrypting Large Calls / Abo-Talib Mahfoodh @ Meta
Live QA
What you are seeing here isn’t the run of the mill issue of a WebRTC insights newsletter. It wasn’t even intended. But it does show the effort and focus we put on everything WebRTC for our clients. Watching a five hour event twice and producing actionable notes is not an easy task. It changed our weekend plans but we ended up being very satisfied with the results if only for our own notes.
If your company is relying heavily on WebRTC, then you should at the very least try this out. Reach out to me via the form at the end of the WebRTC Insights landing page and I’ll send you a sample issue.
The post RTC@Scale summary and insights appeared first on BlogGeek.me.
The performance of WebRTC in Chrome as well as other RTC applications needed to be improved a lot during the pandemic when more people with a more diverse set of machines and network connections started to rely on video conferencing. Markus Handell is a team lead at Google who cares a lot about performance of […]
The post Optimizing WebRTC Power Consumption (Markus Handell) appeared first on webrtcHacks.
How time flies when you’re having fun… For me the definition of fun was starting BlogGeek.me, deciding to write about WebRTC for the first time and having 10 years fly by.
I had a few updates to write with no specific theme to them. Mostly about things just completed and a few upcoming projects and events. Then it dawned on me that I’ve been at it for a bit over 10 years now (!)
On January 5, 2012 I published the first post on this blog. I just left RADVISION for Amdocs, and wanted to have a place of my own out there that won’t be controlled by any vendor. So I started BlogGeek.me. I didn’t know what I was going to write about, but I did know it will include some 3-4 posts about WebRTC before I move on to other technical issues.
That first WebRTC post? Got published on March 8, 2012. It was about what’s WebRTC. Fast forward 10 years later, and more people today know BlogGeek.me than know me as Tsahi. And in many ways, BlogGeek.me is synonymous with WebRTC articles. Not what I had in mind when I started, but I am definitely happy with where it led me.
Anyways, here are a few updates on my ongoing projects, as well as where to find me.
Free eBook: WebRTC for Business People
Earlier this month, I updated my WebRTC for Business People ebook.
Its last update took place in 2019, before the pandemic, so it really needed to get up to speed with where we are now. I worked on this update in the last couple of months, updating much of the content and replacing many of the showcased vendors.
I’d like to thank Daily for picking up the sponsorship for this work. They’re one of the fascinating CPaaS vendors out there innovating in the domain of UX/UI.
Download the WebRTC for Business People ebook for free
WebRTC Trends for 2022
I just finished my WebRTC Trends for 2022 workshop. Did it twice in parallel to accommodate different time zones and had a good-sized audience joining live for the 6 hours in total.
During the workshop we went through many topics. I tried covering everything I think is relevant for 2022 when it comes to WebRTC, so that you can prepare properly.
The Advanced WebRTC Architecture course is due for another update.
The above image indicates the numbers for the course at the moment.
Around 15-20 lessons are going to be updated and recorded again – to make sure content is relevant and fresh.
One of the lessons will be dropped with 2-3 new lessons being added.
Until I finish all that work, I am announcing a 10% discount on all courses, ebooks and workshops on my webrtccourse.com website. Just use the coupon code 10YEARS.
If you enroll in the courses now, you’ll have a 1-year access to them which will include all of the upcoming updates.
WebRTC Insights
Philipp Hancke is running the WebRTC Insights with me. This is fun to do, especially with a good friend and partner. We’ve grown the offering in the last few months, adding video release notes interpretation for WebRTC, color coding for issues, etc.
This weekend we worked on getting our subscribers a detailed summary of Facebook’s RTC@Scale event – so they can focus on what they find relevant in the 5-hour event.
We’ve celebrated a year of WebRTC Insights recently – if you’d like to join our service for the coming year and be updated on everything technical (and non-technical) about WebRTC just let us know.
Enterprise Connect 2022: Here I come!
After two years at home, it is time to pack a bag for the first time and see a plane from the inside.
I will be at Enterprise Connect 2022, taking place in March in Orlando. This will also be my first opportunity to see in real life (!) the people from Spearline who acquired my company, testRTC. I’ll be going there to represent Spearline and showcase testRTC to whoever wants to listen.
If you are there – let me know – I’ll be happy to meet you as well.
Kranky Geek Virtual 2022 Spring
We’re going to have another Kranky Geek event. We plan to have it in April 2022.
At the moment, we’re working on the sponsors and speakers list. If you’re one of those – let me know (we keep a tight ship, so I can’t promise anything).
Here’s for the next 10 years
The last 10 years have been fun. I am actively thinking of what will happen with WebRTC and communications in the coming years. There are some trends that are just around the corner while others are more long term in their nature (web3 anyone?).
Here’s to seeing you in virtual and in person during 2022 and beyond
The post WebRTC, BlogGeek.me, 2022 & 10 years of blogging appeared first on BlogGeek.me.