This is going to be awkward. For me? WebRTC is an open source media engine with a publicly known JavaScript API that got implemented in browsers.
I’ve written a “what is WebRTC” article more than once. The most notable ones?
This time, I wanted to check what Google thinks of WebRTC, so I started asking it:
Before we continue down this rabbit hole, make sure to register and join me in two weeks for a webinar covering Mesh, MCU and SFU topologies and what each one is good for in your WebRTC application.
Let’s go over these alternatives one by one, trying to understand what people are looking for in their WebRTC.
WebRTC is disabled

Somehow, this got the highest ranking. VPN vendors are doing their best with FUD and SEO here, trying to get people to disable WebRTC in their browsers.
Reminds me of the good old days when people disabled JavaScript in their browsers.
WebRTC does give access to the camera, microphone, screen and local IP address of a user. Most of it under the user’s own volition. You can use browser extensions to prevent local IP address “leaks”, while in Safari, exposing local IP addresses requires user authorization of some sort as well.
Not sure how this got first place in “WebRTC is”.
WebRTC is free

Yes it is. Mostly. Somewhat. If you understand what “free” is.
You can go to webrtc.org and download it for free. You can even use it and modify it.
But then again, hosting a service isn’t free. Someone needs to pay for the network and electricity. Someone needs to do the coding.
This brings a rather interesting mindset that I see in entrepreneurs and developers – they feel that using a third party framework or even a managed service should be free – or a lot cheaper than it is. So they go about developing it on their own, spending time and money on development (and often a lot more than it would have cost to just pick a managed service instead).
That concept of free in WebRTC? It is mostly about removing barriers of entry for vendors. It isn’t about free video calling.
WebRTC is_component_build

Beats me how this got so high as a suggestion by Google.
The build system in WebRTC is often challenging. That’s because Google maintains the main WebRTC open source project with the main purpose of being embedded in Chrome. Due to this, it is just part of the Chrome build process and scripts, and not a standalone product or library.
This part is probably the most painful in WebRTC for developers who need to modify or adapt it for native applications.
Still not sure why it ranks so high.
WebRTC is dead

It isn’t. Can’t even call it a grown-up or a teenager.
Moving on.
WebRTC is ready

Yep. It is.
WebRTC is ready. Developers will still bitch and whine that it isn’t complete and that it changes all the time, breaking things – but at the end of the day, if you’re doing something with communications these days, WebRTC should be the first thing to look at before searching elsewhere.
WebRTC is udp

It is also TCP. With a dash of SCTP. With talk of making it QUIC. Go figure.
UDP is what WebRTC uses to send its media. It works well because TCP has this nasty habit of retransmitting things to make sure they get received. This retransmission thing doesn’t work well where what you’re sending is time sensitive (like media of an interactive conversation).
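To make the retransmission point concrete, here’s a toy back-of-the-envelope model. The RTT and jitter buffer numbers below are my own illustrative assumptions, not anything taken from WebRTC itself:

```javascript
// Toy model: when does a lost packet become usable again under
// TCP-style retransmission, and does that fit an interactive
// playout (jitter) buffer?

function tcpDeliveryDelayMs(rttMs, retransmits) {
  // TCP holds back all later data until the lost segment is
  // retransmitted and received (head-of-line blocking):
  // each retry costs at least one extra round trip.
  return rttMs * (1 + retransmits);
}

function fitsJitterBuffer(delayMs, jitterBufferMs) {
  // Interactive audio/video typically buffers only a few tens of ms;
  // anything arriving later than that is useless for playback.
  return delayMs <= jitterBufferMs;
}

const rtt = 80;          // assumed 80ms round trip
const jitterBuffer = 60; // assumed 60ms playout buffer

console.log(fitsJitterBuffer(tcpDeliveryDelayMs(rtt, 1), jitterBuffer)); // false - retransmitted data arrives too late
console.log(fitsJitterBuffer(0, jitterBuffer));                          // true - UDP drops the packet and the next one plays on time
```

The takeaway: a single retransmission already costs a full extra RTT, which blows past the playout deadline, so dropping the packet (UDP) beats waiting for it (TCP) for interactive media.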
Not sure why this one is in the top 10 either.
WebRTC is_clang

Like is_component_build, is_clang is also a build/compiler related setting – in this case, deciding which C/C++ compiler to use with WebRTC.
And again, I am clueless as to how and why this is such a popular Google search for WebRTC is.
WebRTC is not defined

This is golden.
The search itself is most probably related to compilation and runtime errors developers hit with WebRTC, where they post the error messages on Stack Overflow, discuss-webrtc and other online forums – asking for help from fellow developers.
Yet…
WebRTC isn’t defined. Yet.
People have been promising me WebRTC 1.0 since 2015. Maybe even a year or two earlier. We are now in 2019, talking about things like WebAssembly in WebRTC, but we still don’t have WebRTC 1.0. We’re getting there, but it is still a draft. Will WebRTC 1.0 standardization be completed in 2019? Maybe. So WebRTC is not defined. But it is ready. Go figure.
WebRTC is p2p

WebRTC is peer to peer.
You can send media directly from one browser to another (if network conditions allow). But you need to handle signaling in front of web servers, which is kinda centralized. And sometimes, sending media peer to peer won’t work, and media has to be relayed. And other times, you’ll want to send media towards a media server.
You can read more about it here – Get Over it: WebRTC isn’t Peer-to-Peer
WebRTC is supported

Something that is going to change meaning in 2019.
People used to ask “which browsers support WebRTC?” or “is WebRTC supported on X” where X is Internet Explorer, Edge or Safari.
Nowadays, we’re over that bit of a challenge, with the last gaps closing as well.
The shift of this one is going to be towards traditional voice and video services that are adding WebRTC support for guest access or for those who don’t want to install any apps.
In the last year or so, I’ve had to install far fewer applications for the meetings I have with companies. It isn’t because we all use Google Meet – it is because almost all of the services (Zoom is the exception here) give WebRTC guest access. WebEx, GoToMeeting, Amazon Chime – all offer WebRTC support. So I can easily handle these calls without installing anything. And yes – WebRTC is supported.
What’s your “WebRTC is” search term?

I found this list of Google search suggestions for “WebRTC is” quite interesting. Not exactly what I expected starting out.
For me, WebRTC is progress. It is the next step we’re taking in figuring out communications, and in that, it fills the role of one of the most basic building blocks we now have and use.
What about you? WebRTC is …
Looking to learn more about what WebRTC is? How about understanding about mesh, mixing and routing architecture? You should join me for this free webinar:
Register to Mesh, MCU or SFU webinar
The post Asking Google: WebRTC is … appeared first on BlogGeek.me.
AppRTC isn’t your friend when it comes to developing a commercial WebRTC application.
I already wrote about the fact that there’s no free TURN server from Google. It seems that I failed to mention the fact that you shouldn’t use Google’s “free” STUN server in production either. Which leads us to this great question on github on AppRTC:
apprtc websocket server down?
The interesting part about this one is that no one from Google commented on it at any point in time.
You see, AppRTC wasn’t meant as a full fledged application, and to some extent, not even as a reference application for other developers. It is mostly meant to be a hello world type of an example.
With a glaring lack of good, simple, popular open source signaling frameworks for WebRTC, developers sometimes use AppRTC for that purpose.
Signaling is important, and so is media. If you want to learn more about mesh, mixing and routing architecture, you should join me for this free webinar:
Register to Mesh, MCU or SFU webinar
While I use AppRTC for baselining, I don’t think it is a good starting place for actual development of a real service.
Here are 4 reasons why:
#1 – AppRTC doesn’t get much love and attention

Look at the GitHub insights for AppRTC:
See the number of additions and deletions taking place in 2018?
Latest commit? March 2018.
One could argue that this is because the “Hello World” example for WebRTC is already quite polished and working well, so there’s no need to change anything. Or that WebRTC is now stable enough.
#2 – This is just a “Hello World”

Here’s an example of a Hello World JS function:

```javascript
function hello(name) {
  console.log("Hello " + name);
}

hello('node.js');
```

This isn’t a starting point I’d use for writing an application.
The AppRTC application is admittedly larger. Here’s the lines of code count for its github project at the time of writing (not that I’d expect much change to it in 2019):
The problem is in what AppRTC doesn’t include, which many developers want/try to add:
AppRTC uses a python based signaling server, which is great. The actual signaling protocol selected and used isn’t really documented anywhere, so you’ll need to dive into the code to figure it out if you’ll want to add or modify anything. And you will, simply because a lot of functionality you might want is missing.
The thing is, if you plan on scaling your service up to a large number of users, you’ll need this to work across machines – and that’s not easy, or at least not trivial.
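To illustrate why this matters, here’s a minimal sketch of what room-based signaling relay logic boils down to (the class and message shapes are mine for illustration, not AppRTC’s actual protocol). Note that the room state lives in one process’s memory – which is exactly what breaks once you need to spread signaling across machines:

```javascript
// Minimal room-based signaling relay. A real server would sit behind
// WebSockets; here the transport is just a callback per client.
class SignalingRooms {
  constructor() {
    this.rooms = new Map(); // roomId -> Map(clientId -> send callback)
  }

  join(roomId, clientId, send) {
    if (!this.rooms.has(roomId)) this.rooms.set(roomId, new Map());
    this.rooms.get(roomId).set(clientId, send);
  }

  // Relay an SDP offer/answer or ICE candidate to everyone else in the room.
  relay(roomId, fromId, message) {
    const peers = this.rooms.get(roomId);
    if (!peers) return 0;
    let delivered = 0;
    for (const [id, send] of peers) {
      if (id !== fromId) {
        send({ from: fromId, ...message });
        delivered++;
      }
    }
    return delivered;
  }
}

const rooms = new SignalingRooms();
const bobInbox = [];
rooms.join('room-42', 'alice', () => {});
rooms.join('room-42', 'bob', msg => bobInbox.push(msg));

// Alice's offer reaches everyone else in the room (just Bob here)
rooms.relay('room-42', 'alice', { type: 'offer', sdp: '...' });
console.log(bobInbox[0].from); // 'alice'
```

Once two such servers run side by side, Alice and Bob can land on different instances with different `rooms` maps – and that is where shared state, pub/sub layers and all the non-trivial scaling work begin.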
At Kranky Geek 2016, Google explained what they did to scale and improve signaling for their own production services. Check out what that means:
Not everyone needs to do things at scale, but many do. Starting from AppRTC places you in the wrong spot for growth.
And when it comes to edge cases, it doesn’t cover them all – if ICE negotiation fails, you won’t know about it in the UI; you’ll just get an ICE failure message in the console log. That’s the example I bumped into when using testRTC with it and closing all ports but 443.
#4 – Don’t iframe or URL to it

Running a service and just need basic meeting capabilities?
Don’t place AppRTC in an iframe of your app or have a URL to it open in another window.
You don’t get an SLA from Google when using AppRTC, and they won’t treat it like a critical service when it fails to run. Throughout the years there have been times when AppRTC was down for one reason or another.
Upwork, for example, used to use a third party free/sample/demo service similar to AppRTC or Jitsi Meet. If you had to schedule a meeting with people you worked with on Upwork, you clicked a button, and it created a kind of ad-hoc, random URL for that meeting and opened it in a new browser tab. They were smart enough to replace it with their own branded meetings feature later down the road.
That service that Upwork used? No longer exists. Want to get a signed guarantee from Google that AppRTC will stay up and running and work the same way it does today 2 years from now?
If you plan on running a serious business, host your own communications infrastructure or pay for it.
Do you have any other alternative?

Not really. Not an immediate one at least.
People are still falling into the trap of using peerjs (see here why NOT to use peer.js).
We used to have EasyRTC and SimpleWebRTC in the past. EasyRTC still gets some love and attention, so you can try it out. SimpleWebRTC is now deprecated – &yet have decided to offer it “as a service” instead.
There are many other github projects offering webrtc signaling. Most of them seem to be projects people built for themselves but never really matured to a robust framework that others have adopted.
I started suggesting Matrix, but many don’t really manage to get WebRTC working well with it.
Then there’s the cloud based services – PubNub, Pusher, Scaledrone, Ably and even Google’s Firebase. These give you a robust transport that you can pour your signaling protocol into.
Or a commercial software you can install anywhere such as Frozen Mountain’s WebSync.
In many cases, this will be a to-each-his-own situation, where you’ll just need to develop it yourself, or start somewhere and make it your own quite fast.
Signaling is important, and so is media. If you want to learn more about mesh, mixing and routing architecture, you should join me for this free webinar:
Register to Mesh, MCU or SFU webinar
The post What is a WebRTC Signaling Server and Why You Should NOT Use AppRTC? appeared first on BlogGeek.me.
WebAssembly in WebRTC will enable vendors to create differentiation in their products, probably favoring the more established, larger players.
At Kranky Geek two months ago, Google gave a presentation covering the overhaul of audio in Chrome as well as where WebRTC is headed next. That “what’s next” part was presented by Justin Uberti, creator and lead engineer for Google Duo and WebRTC.
The main theme Uberti used was the role of WebAssembly, and how deeper customizations of WebRTC are currently being thought of/planned for the next version of WebRTC (also known as WebRTC NV).
Before we dive into this and where my own opinions lie, let’s take a look at what WebAssembly is and what makes it important.
Looking to learn more about WebRTC? Start from understanding the server side aspects of it using my free mini video course.
Enroll to the free course
What is WebAssembly?

Here’s what webassembly.org has to say about WebAssembly:
WebAssembly (abbreviated Wasm) is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable target for compilation of high-level languages like C/C++/Rust, enabling deployment on the web for client and server applications.
To me, WebAssembly is a JVM for your browser. Just as Java is a language that gets compiled into binary code that then gets interpreted and executed on a virtual machine, WebAssembly, or Wasm, allows developers to take hard core languages (which means virtually any language) and “compile” them to a binary representation that a Wasm virtual machine can execute efficiently. And this Wasm virtual machine just happens to be available in all web browsers.
WebAssembly allows vendors to do some really cool things – things that just weren’t possible to do with JavaScript. JavaScript is kinda slow compared to using C/C++ and a lot of hard core stuff that’s already written in C/C++ can now be ported/migrated/compiled using WebAssembly and used inside a browser.
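For a hands-on taste (my own minimal example, unrelated to WebRTC itself), here is a complete, hand-assembled Wasm module of about 40 bytes, instantiated straight from JavaScript. Real projects compile C/C++/Rust with a toolchain such as Emscripten rather than writing bytes by hand:

```javascript
// A complete WebAssembly module exporting one function, add(a, b).
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,       // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                     // function section: one func of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,       // export section: export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                               // code section header
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b                          // body: local.get 0, local.get 1, i32.add, end
]);

const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);
console.log(instance.exports.add(2, 3)); // 5
```

The same `WebAssembly.Module`/`WebAssembly.Instance` machinery is what loads the multi-megabyte modules that things like noise suppression or custom codecs compile down to.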
Here are a few interesting examples:
While the ink hasn’t dried yet on WebRTC 1.0 (I haven’t seen a press release announcing its final publication), discussions are taking place around what comes next. This is being captured in a W3C document called WebRTC Next Version Use Cases – WebRTC NV in short.
The current list of use cases includes:
While some of these requirements will end up being added as APIs and capabilities to WebRTC, a lot of them will end up enabling someone to control and interfere with how WebRTC works and behaves, which is where WebAssembly will find (and is already finding) a home in WebRTC.
Google’s example use case for WebAssembly in WebRTC

At the recent Kranky Geek event, Google shared with the audience their recent work in the audio pipeline for WebRTC in Chrome and the work ahead around WebRTC NV.
For Google, WebRTC NV means these areas:
The Low Level APIs part is about the places where WebAssembly can be used.
You should see the whole session, but here it is from where Justin Uberti starts talking about WebRTC NV – and mainly about WebAssembly in WebRTC:
WebAssembly is a really powerful tool. To give a taste of it with WebRTC, Justin Uberti turned to the domain of noise separation – distinguishing between speech and noise. He took RNNoise, a noise suppression algorithm based on machine learning, ported it to WebAssembly, and built a small online demo around it. The idea is that in a multiparty conference, the system won’t switch to the camera of a person unless that person is really speaking – ignoring all other interfering noises (key strokes, a falling pen, eating, moving furniture, etc.).
Interestingly enough, the webpage hosting this demo is internal to Google and has a URL called hangouts_echo_detector/hackathon_2018/doritos – more on that later.
To explain the intent, Justin Uberti showed this slide:
As he said, the “stuff in green” (that’s Session Management, Media Processing, Codecs and Packetizer/FEC/RTX) can now be handled by the application instead of by WebRTC’s PeerConnection and enable higher differentiation and innovation.
I am not sure if this should make us happier or more worried.
In favor of differentiation and innovation through WebAssembly in WebRTC

Savvy developers will LOVE WebAssembly in WebRTC. It allows them to:
In 2018, I’ve seen a lot of companies using customized WebRTC implementations to solve problems that are very close to what WebRTC does, but with a difference. These mainly revolved around streaming and internet of things type of use cases, where people aren’t communicating with each other in the classic sense. If they had low level API access, they could use WebAssembly and run these same use cases in the browser instead of having to port, compile and run their own standalone applications.
This theoretically allows Zoom to use WebRTC and, by using WebAssembly, get it to play nice with its current Zoom infrastructure without the need to modify it. The result would be a better user experience than the current Zoom implementation in the browser.
Enabling WebAssembly in WebRTC can increase the speed of innovation and spread it across a larger talent pool and vendors pool.
In favor of a level playing field for WebRTC

The best part about WebRTC? Practically any developer can get a sample application up and running in no time compared to the alternatives. It reduced the barrier of entry for companies who wanted to use real time communications, democratizing the technology and making it accessible to all.
Since I am on a roll here – WebRTC did one more thing. It leveled the playing field for the players in this space.
Enabling something like WebAssembly in WebRTC goes in the exact opposite direction. It favors the bigger players who can invest in media optimizations. It enables them to place patents on media processing and use them not only to differentiate but to create a legal moat around their applications and services.
The simplest example of this can be seen in how Google itself decided to share the concept, by taking RNNoise and porting it to WebAssembly. The demo itself isn’t publicly available. It was shown at Kranky Geek, but that’s about it. Was it because it isn’t ready? Because Google prefers keeping such innovations to itself (which it is certainly allowed to do)? I don’t know.
There’s a dark side to enabling WebAssembly in WebRTC – and we will most definitely be seeing it soon enough.
Where do we go from here?

WebRTC is maturing, and with it, the way vendors are trying to adopt it and use it.
Enabling WebAssembly in WebRTC is going to take it to the next level, allowing developers more control of media processing. This is going to be great for those looking to differentiate and innovate or those that want to take WebRTC towards new markets and new use cases, where the current implementation isn’t suitable.
It is also going to require developers to have a better understanding of WebRTC if they want to unlock such capabilities.
Looking to learn more about WebRTC? Start from understanding the server side aspects of it using my free mini video course.
Enroll to the free course
The post What’s the Role of WebAssembly in WebRTC? appeared first on BlogGeek.me.
Small, Medium, Big or Extra Large? How do you like your WebRTC SFU media server?
I just checked AWS. If I had to build the most bad-ass, biggest, meanest, scale-iest server for WebRTC – one that can handle gazillions of sessions – I’d go for this one:
A machine to drool over… Should buy such a toy to write my articles on.
Or should I go for the biggest machine out there?
I did a round-up of some of the people who develop these SFUs. And guess what? None of them is ordering the XL machine.
They go for a Medium or Medium Well. Or should I say Medium Large?
Media servers, Signaling, NAT traversal – do you know what it takes to install and manage your own WebRTC infrastructure? Check out this free video course on the untold story of the WebRTC servers backend.
Start your free course
Anyways – here are a few things to think about when picking a machine for your SFU:
Going BIG on your SFU

As big as they come – that’s how big you wanna take them.
We called it scale up in the past: taking the same monolith application and putting it on a bigger machine to get more juice out of it.
It’s not all bad, and there are good reasons to go that route with a media server:
Managing fewer machines

If one big machine does the work of 10 smaller machines, then all in all, you’ll need 1/10 the number of machines to handle the same workload.
In many ways, scaling is non-linear. To get to linear scaling, you’ll need to put in a lot of effort. Different bits and pieces of your architecture will start breaking once you scale too much. In this sense, having fewer machines to manage means fewer scaling headaches as well.
Having bigger rooms

Group calling is what we’re after with media servers. Not always, but mostly.
Getting 4 people in a room is easy. 20? Harder. 500? Doable.
The bigger the rooms, the more you’ll need to start addressing it with your architecture and scale out strategies.
If you take smaller machines, say ones that can handle up to 100 concurrent users, then getting any group meeting to 100 participants or more is going to be quite a headache – especially if the alternative is just to use a bigger machine spec.
The bigger the rooms you want, the bigger the machines you’ll aim for (up to a point – if you want to cater for 100+ users in a room, I’d aim for other scaling metrics and factors than just enlarging the machines).
Less fragmentation

Similar to how you fit chunks of memory allocations into physical memory, fitting group sessions into media servers, and maybe even cascading them across machines, will end up creating fragmentation headaches for you.
Let’s say some of your meetings are really large and most are pretty smallish. But you don’t really know in advance which is which. What would be the best approach for fitting new rooms into existing media servers? This isn’t a simple question to answer, and it gets harder the smaller the machines are.
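As a sketch of the problem, here’s a naive first-fit placement strategy (the capacities and room sizes are made-up numbers for illustration). Notice how a big room strands leftover capacity that only small rooms can use:

```javascript
// First-fit: place a new room on the first media server with enough
// spare capacity, or spin up a fresh server if none fits.
function placeRoom(servers, roomSize) {
  for (const s of servers) {
    if (s.capacity - s.used >= roomSize) {
      s.used += roomSize;
      return s;
    }
  }
  const fresh = { capacity: 100, used: roomSize }; // assumed 100-participant servers
  servers.push(fresh);
  return fresh;
}

const servers = [];
placeRoom(servers, 80); // big room fills most of server #1
placeRoom(servers, 30); // doesn't fit the 20 slots left, so server #2 spins up
placeRoom(servers, 15); // fits back into server #1's leftover
console.log(servers.map(s => s.used)); // [ 95, 30 ]
```

With smaller servers (say, capacity 20), the 30-participant room would never fit on a single machine at all, which is exactly where cascading and harder placement decisions come in.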
Simpler architecture (=no cascading)

If you are setting up the media server for a specific need, say catering for the needs of a hospital, then the size is known in advance – there’s a given number of hospital beds and they aren’t going to expand exponentially overnight. The size of the workforce (doctors and nurses) is also known. And these numbers aren’t too big. In such a case, aiming for a large machine, with an additional one acting as an active/passive server for high availability, will be rather easy.
Aiming for smaller machines might get you faster to the need to scale out in your architecture. And scaling out has its own headaches and management costs.
Simpler

Bigger machines are going to be simpler in many ways.
Going small on your SFU

This is something I hadn’t thought about as an alternative – at least not until a few years ago, when I was helping a client pick a media server for his cloud based service. One of the parameters that interested him was how small was considered too small by each media server vendor – trying to understand the overhead of a single media server process/machine/application.
I asked, and got good answers. I since decided to always look at this angle as well with the projects I handle. Here’s where smaller is better for WebRTC media servers:
Easier to upgrade

I dealt with upgrading WebRTC media servers in the past.
There are two things you need to remember and understand:
The most common approach to upgrades these days is to drain media servers – when wanting to upgrade, block new sessions from going into some of the media servers, and once the sessions they are already handling are closed, kill and upgrade that media server. If it takes too long – just kill the sessions.
Smaller machines make it easier to drain them as they hold less sessions in them to begin with.
Having more machines also means you can mark more of them for draining in parallel without breaking the bank.
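The draining flow above can be sketched as follows (names and structure are illustrative, not any specific media server’s API):

```javascript
// A draining media server: rejects new sessions, finishes existing
// ones, and reports when it is safe to kill and upgrade.
class MediaServer {
  constructor(id) {
    this.id = id;
    this.draining = false;
    this.sessions = new Set();
  }

  accept(sessionId) {
    if (this.draining) return false; // draining servers take no new sessions
    this.sessions.add(sessionId);
    return true;
  }

  startDrain() { this.draining = true; }

  endSession(sessionId) { this.sessions.delete(sessionId); }

  // Safe to kill and upgrade only once every existing session has ended.
  readyForUpgrade() { return this.draining && this.sessions.size === 0; }
}

const server = new MediaServer('sfu-1');
server.accept('call-1');
server.startDrain();
console.log(server.accept('call-2'));  // false - no new sessions while draining
console.log(server.readyForUpgrade()); // false - call-1 still in progress
server.endSession('call-1');
console.log(server.readyForUpgrade()); // true - kill and upgrade away
```

The smaller the server, the fewer sessions sit in that `sessions` set, and the sooner `readyForUpgrade()` flips to true.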
Blast radius of crashes

This is what started me on this article to begin with.
I took the time to watch Werner Vogels’ keynote from AWS re:Invent, which took place in November 2018. In it, he explains what got AWS on the route to building their own databases instead of using Oracle, and why the cloud has different requirements and characteristics.
Here’s what Werner Vogels said:
With blast radius we mean that if a failure happens, and remember: everything fails all the time. Whether this is hardware or networking or transformers or your code. Things fail. And what you want to achieve is that you minimize the impact of such a failure on your customers.
Basically, if something fails, the minimum set of customers should be affected, if that’s the case.
Everything fails all the time.
And we do want to minimize who’s affected by such failures.
The more media servers we have (because they are smaller), the fewer customers will be affected if one of these servers fails. Why? Because our blast radius will be smaller.
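The arithmetic is simple enough to sketch: with sessions spread evenly, one crashed server takes down roughly 1/N of everything in flight (the session counts below are made-up):

```javascript
// Back-of-the-envelope blast radius: sessions hit by a single
// server failure, assuming an even spread across servers.
function blastRadius(totalSessions, serverCount) {
  return totalSessions / serverCount;
}

console.log(blastRadius(10000, 2));  // 5000 - two huge servers: half your users gone
console.log(blastRadius(10000, 50)); // 200  - fifty small servers: a far smaller dent
```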
CPU utilization

Here’s something about most modern media servers you might not have known – they don’t eat up CPU. Well… they do, but less than they used to a decade ago.
In the past, media servers were focused on mixing media – the industry rallied around the MCU concept. This means that all video and audio content had to be decoded and re-encoded at least once. These days, it is a lot more common for vendors to use a routing model for media – in the form of SFUs. With it, media gets routed around but never decoded or encoded.
Media servers, Signaling, NAT traversal – do you know what it takes to install and manage your own WebRTC infrastructure? Check out this free video course on the untold story of the WebRTC servers backend.
Start your free course
In an SFU, network I/O and even memory gets far more utilized than the CPU itself. When vendors go for bigger machines, they end up using less of the CPU of the machines, which translates into wasted resources (and you are paying for that waste).
At times, cloud vendors throttle network traffic, putting a limit at the number of packets you can send or receive from your cloud servers, which again ends up as putting a limit to how much you can push through your servers. Again, causing you to go for bigger machines but finding it hard to get them fully utilized.
Smaller machines translate into better CPU utilization for your SFU in most cases.
Number of Cores/CPUs and Your SFU’s Architecture

Big or small, there’s another thing you’ll need to give some thought to – and that’s the architecture of the media server itself.
Media servers contain two main components (at least for an SFU):
Sometimes, they are coupled together, other times, they are split between threads or even processes.
In general, there are 3 types of architectures that SFUs take:
Me? I like the third alternative for large scale deployments. Especially when each process there is also running a single thread (I don’t really like multithreaded architectures and prefer shying away from them if possible).
That said, that third option isn’t always the solution I suggest to clients. It all depends on the use case and requirements.
In any case, you do need to give some thought to this as well when you pick a machine size – in almost all cases, you’ll be using a multi-core, multi-threaded machine anyway, so better make the most of it.
How Do You Like Your SFU?

Back to you.
Media servers, Signaling, NAT traversal – do you know what it takes to install and manage your own WebRTC infrastructure? Check out this free video course on the untold story of the WebRTC servers backend.
Start your free course
The post What’s the Best Size for a WebRTC SFU Media Server? appeared first on BlogGeek.me.
The new look is here – and it is less… green.
I’m splitting this one into two main parts – the redesign and what’s going to happen in 2019.
BlogGeek.me – Redesigned

When I started this blog, what I didn’t want was yet another blue website. Somehow, it didn’t seem right to me. I ended up with a green one. So much so, that it stuck to almost everything else that I did online. As a kid, I really liked light blue – I don’t think green was anywhere in my sights.
Earlier this year, I wanted to refresh the look and the “brand” that is BlogGeek.me a bit. Luckily, the original designer just moved back from being a designer in an IoT startup to being a freelancer again, so I asked her for a new look. Which she happily and lovingly provided.
A few months later, with a lot of deliberation, hard work and updating ALL posts and pages (I had a lot of crap lying around due to custom shortcodes and plugins that accumulated in 6 years), I decided to take the plunge and update the main site with the new design.
What are the main differences?

There’s a lot… but here’s what you should know:
Oh – and the pictures of me featuring on the website? They’re also new. Took them earlier in 2018.
Things are still broken

Not everything is working flawlessly. And there’s a reason for that. I knew that if I didn’t just ship the thing, it would never come to be. So I decided to release it “as is” at this point. I wanted to have a fresh start in 2019 with my website.
Here are some things I know are broken:
Other than that, some pages are still ugly, and in other cases, there might be some dead or broken links.
If you find anything – just email me about it – I must have missed some of the ailments throughout this transition so I really appreciate your help here.
What to expect from BlogGeek.me in 2019?

Honestly, I don’t really know. At least not exactly.
Each year I start off with a plan, in which certain initiatives take place throughout the year. Some of them come to fruition while others – don’t.
Here’s what I decided for 2019:
Webinars

Last year was a rather slow year for webinars. Both on BlogGeek.me and on testRTC (where I am a co-founder and CEO).
This is going to change.
In 2019, I want, at least theoretically, to do a webinar a month for each. A lineup of topics has been created and is maintained (I’ll need more topics, but I have a good starting point).
For BlogGeek.me, webinars would be around topics that make sense for me at a given month. First one will be around Mesh/MCU/SFU – one of those topics that I can endlessly babble about.
testRTC webinars are going to focus on things that you can do with testRTC. Instead of trying to aim for generic WebRTC industry/testing/marketing/promoting/whatever non-focus, we’re going to double down on best practices, hacks and interesting things we’re bumping into with our customers at testRTC.
testRTC

Speaking of testRTC – we’ve had a good year in 2018, growing our list of customers and getting into new areas. We’ve rewritten a big portion of our backend and will continue the rewrite in 2019 to close our technical debt.
Expect some new features and a new product or two from testRTC to be announced during 2019.
Articles on BlogGeek.me

I am going to keep writing on BlogGeek.me this year, as well as in other places when time permits.
For now, I plan to stick with one article per week, something that was hard to maintain this year and I assume will be harder in 2019.
WebRTC Training

My online WebRTC course has over 250 registered students. I want to scale it up even further.
This year, I’ll be giving the course additional focus, making sure it stays the best alternative out there for those who wish to learn WebRTC.
In February, there will be a few announcements about the course.
Reports update

The reports will get a refresh in 2019.
The WebRTC for Business People is up for a 2019 edition (later this month). I’d like to thank Frozen Mountain for sponsoring this initiative and making this edition free for everyone.
I might do an update to Choosing a WebRTC API Platform report. There are enough changes in the industry taking place that merit such an update. If you are a CPaaS vendor, who is now offering WebRTC support of some kind and you’re not featured in this report already – contact me.
The recent AI in RTC report I’ve written with Chad Hart doesn’t need an update. Yet.
Kranky Geek

Unlike previous years, Kranky Geek already has a date for 2019: November 15, San Francisco, Google office – same place as always.
If you’d like to talk about sponsorships, speaking opportunities and such – we’re happy to start this earlier than usual.
In any case, mark your calendar.
Other projects and initiatives
As in previous years, more projects will crop up during the year. There are a few I am contemplating already, but I’m not sure yet if I’ll be doing them.
If there’s a project you’d like to do together – just tell me.
2019
Have a great new year!
The post A new design and what to expect in 2019 from BlogGeek.me? appeared first on BlogGeek.me.
There’s a lot of fuzzing around lately about WebRTC. Which is really about SRTP. Which is really important. But also really misplaced.
Before I Begin
This all started when Google Project Zero, a team tasked with actively searching for zero day bugs (nasty crashes and similar bugs that might be exploited by hackers), set their sights on video conferencing and WebRTC. The end result of it all is a GitHub repository with tools to test RTP streams (and some filed bugs).
A few things to put the house in order:
Now that we’ve cleared the air – let’s check what’s all that fuzz. Shall we?
What Fuzzing means
Wikipedia has this to say about Fuzzing:
Fuzzing or fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to a computer program. The program is then monitored for exceptions such as crashes, failing built-in code assertions, or potential memory leaks.
For me, fuzz testing is about the generation of malformed inputs in ways that the developers haven’t anticipated or tested for. This will result in undefined behavior, which is largely a nicer way of saying a bug. In some cases, the bug will be an innocent one. In other cases, it can be nasty:
The type of bugs that can be found is endless, which makes for really good FUD (fear, uncertainty, doubt) and lore.
A good malformed input can theoretically be used to grant you administrative access to a machine or to allow you to read memory where you shouldn’t have access to.
A simple explanation can be this: assume your software expects a user’s email to be 40 characters long. Shorter than that is obviously fine, but what will happen if you use an email that is longer than 40 characters? Somewhere along the line, there should be a piece of code that checks the length and states that it is too long. And if there isn’t… well… we’ve reached the realm of undefined behavior and potential security bugs.
The same can happen in network protocols, where whatever you send “on the wire” has a structure of sorts. Machines need that structure to be able to parse the data and act upon it. So if you change the data so it is close to the expected structure, but off just a bit – you might get to that realm of undefined as well.
Fuzzing is trying to get to that place – adding randomness in just the correct places to get to undefined software behavior.
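To make this concrete, here is a minimal sketch of the idea in Python. The toy email parser and its 40-character limit are made up for illustration (they echo the example above); a real fuzzer like AFL or libFuzzer is far smarter about choosing mutations, but the core loop looks like this:

```python
import random

def mutate(data, flips=3, seed=None):
    """Return a copy of `data` with a few random bit flips - the core of a naive fuzzer."""
    rng = random.Random(seed)
    buf = bytearray(data)
    for _ in range(flips):
        i = rng.randrange(len(buf))
        buf[i] ^= 1 << rng.randrange(8)  # flip one random bit
    return bytes(buf)

def parse_email(raw):
    """A toy parser with the 40-character assumption described above."""
    text = raw.decode("ascii")  # may raise on mutated bytes
    if len(text) > 40:
        raise ValueError("email too long")
    if "@" not in text:
        raise ValueError("not an email")
    return text

# Fuzz loop: feed mutated variants of a valid input and record what breaks.
valid = b"someone@example.com"
handled = 0
for trial in range(1000):
    sample = mutate(valid, seed=trial)
    try:
        parse_email(sample)
    except (ValueError, UnicodeDecodeError):
        handled += 1  # expected, handled failure - not a bug
    except Exception as exc:
        print("undefined behavior found for:", sample, exc)
```

Any exception that isn’t one the parser deliberately raises is exactly that “realm of undefined” – in C or C++ the equivalent would be a crash or memory corruption rather than a polite exception.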
Let me tell you a bedtime story
My fuzzy life started in Finland, though I’ve never been there (yet).
At the University of Oulu, one day, something new called the “PROTOS Test Suite” was created. At the time, I was the project manager leading the development and maintenance of RADVISION’s H.323 protocol stack. We licensed it to many vendors around the globe, all using our source code to build VoIP products.
The PROTOS Test-Suite was all about security testing. The intent behind it was to find bugs that cause crashes and other ailments to those using H.323. And they chose the best possible entry point. Here’s how they phrased it:
The purpose of this test-suite is to evaluate implementation level security and robustness of H.225.0 implementations. H.225.0 is a protocol responsible for signalling and setting up H.323 calls. […]
The scope of the test-suite was narrowed to H.225.0 version 4 Setup-PDU. Rationale behind this selection was:
I marked in bold the important parts. Specifically, the guys at Oulu decided to go after the “pick up line” of H.323 and try to come up with nasty Setup messages that will confuse H.323 devices.
And confuse they did. PROTOS has 4497 Setup messages. On my first run with it, probably 50% of them caused our beloved H.323 stack to crash. I spent a week building software to automate running it and fixing all the nastiness out of it. I admired the work they did and the work they made me do.
PROTOS practically analyzed how things go on the wire, and devised a set of messages that were bound to trip up the bad programming practices we all err on as humans. This isn’t exactly fuzzing in an automated fashion, but it is the “manual” equivalent of it.
This got its own CERT vulnerability note and we had a great time working with our customers on updating our stack and getting these security fixes to work.
I believe some of our customers actually upgraded and updated their systems due to this. I am sure many didn’t. I am also assuming many of our customers’ customers didn’t upgrade their own deployed equipment. And the world continued on. Happily enough.
All this took place in 2004. Before WebRTC. Before the cloud. Before mobile. With practically the same RTP/RTCP protocol and the same techniques and mechanisms in VoIP that we use today in WebRTC.
Why didn’t people look at RTP vulnerabilities at that time? We’ll get to that.
Google’s Project Zero and video conferencing
This year, Google Project Zero decided to look at video conferencing. The “way in” was through WebRTC. Natalie Silvanovich was tasked with this and she wrote a series of 5 posts about it. The first one was about her selection and adventures with WebRTC itself. In it, she writes:
I started by looking at WebRTC signalling, because it is an attack surface that does not require any user interaction. […] WebRTC uses SDP for signalling.
I reviewed the WebRTC SDP parser code, but did not find any bugs. I also compiled it so it would accept an SDP file on the commandline and fuzzed it, but I did not find any bugs through fuzzing either. […]
I then decided to look at how RTP is processed in WebRTC. While RTP is not an interaction-less attack surface because the user usually has to answer the call before RTP traffic is processed, picking up a call is a reasonable action to expect a user to take. […]
Setting up end-to-end fuzzing was fairly time intensive […]
A few things that come to mind here:
Time intensive is important, as this raises the bar to those wishing to exploit such a weakness.
The fact that RTP isn’t the first attack surface and isn’t the first layer of interaction makes it somewhat less obvious on how to exploit it (besides instigating DDoS attacks on devices and servers).
Coupling these two – the complexity and the non-obviousness of an exploit is what kept people from putting the effort into it up until today.
The Fuzzy feelings of our WebRTC industry
Ben Hawkes, the Project Zero team lead, tweeted about each post; his tweets garnered three-digit likes and retweets, tapering off in the last 2 posts (I attribute that to fatigue with the subject):
Project Zero blog: "Adventures in Video Conferencing Part 1: The Wild World of WebRTC" by @natashenka – https://t.co/pdtZLDDP9M
— Ben Hawkes (@benhawkes) December 4, 2018
That kind of sharing is an average day for most posts published by that team. A few immediately took the cue and started fuzzing on their own. A notable example is Philipp Hancke who aimed at the Janus media server and fuzzed REMB RTCP messages.
His attack was quite successful due to several reasons:
Probably not.
And let’s face it – in the list of tests that you want to do but don’t do today, fuzzing fits nicely near that end of the things you just never find the time and priority to handle.
The good thing? For most of us, fuzzing is something that “others” should be doing.
If you are using a CPaaS vendor, it is their task to protect their signaling and media servers against such attacks.
If you run on top of the browser… well… those who maintain the WebRTC code for the browser need to do it (and it is Google for the most part at the moment).
You should think about fuzzing in your own application logic and the things that are under your control, but the WebRTC pieces? Going down the rabbit hole of fuzzing RTP and RTCP packets? Not for you.
Your role here is to ask the vendors you work with if they have taken steps in the area of security testing and what exactly they have done there. Fuzzing needs to be one of them things.
Who should care about fuzzing?
There’s a shortlist of people who need to deal with fuzzing.
Fuzzing isn’t the first thing that comes to mind when you set off to build your business.
We are at a point where we are dealing with and addressing fuzzing, and the RTP layer is what people seem to be focusing on (at least a bit). We’ve come a long way since we started with WebRTC, and it is a good sign.
To Fuzz or not to Fuzz? Where should you spend your energies with WebRTC? If you need help with that, just contact me.
The post All the Truth About the Latest (non)Hype of Fuzzy Testing WebRTC Applications appeared first on BlogGeek.me.
Fuzzing is a Quality Assurance and security testing technique that provides unexpected, often random data to a program input to try to break it. Natalie Silvanovich from Google’s Project Zero team has had quite some fun fuzzing various different RTP implementations recently.
She found vulnerabilities in:
In a nutshell, she found a bunch of vulnerabilities just by throwing unexpected input at parsers.
Continue reading Lets get better at fuzzing in 2019 – here’s how at webrtcHacks.
Chrome=The web. Is that a good thing or a bad thing?
I’ve always said that Chrome is almost the only browser we need. Microsoft Edge was always an easy target to mock. And it now seems that Microsoft has thrown in the towel on Edge and its technology stack as a differentiating factor, and has decided to *gasp* use Chromium as the engine powering whatever comes next.
A long explanation from Microsoft on the move was published on github (more on GitHub later).
What are Browsers made of?
I’ll start with a quick explanation of how I see a browser’s architecture. It is going to be rather simplistic and probably somewhat far from the truth, but it will be good enough for us for now.
A browser is built out of two main pieces: the renderer and the runtime engine.
The Renderer deals with displaying HTML pages with their CSS styling. Today, it probably also deals with CSS animation. It is what takes your webpage and renders it into something that can be displayed on the screen.
The Runtime Engine is all about executing JavaScript code inside the browser. It is what makes modern browsers interactive. It is usually called a JavaScript Engine, but it already runs WebAssembly as well, hence my preference for referring to it as the Runtime Engine.
On top of these two pieces sits the browser engine itself, which is later wrapped by the browser.
Who Uses What?
That illustration of the browser makeup above? It shows in gray the components that Google uses in Chrome. Each browser vendor picks and chooses its own components.
In the past, we effectively had 3 browser engines: “Firefox”, “Internet Explorer” and “WebKit”
WebKit was used by both Safari and Chrome. That was until 2013, when Google decided to part ways and create Blink – it started by deleting everything it didn’t use out of WebKit and continued from there. In a way, it is a fork of WebKit, to the point that code integrated into WebKit oftentimes comes directly by porting it en masse from Blink/Chromium (this is how WebRTC is implemented in Safari/WebKit today).
Up until a year ago, we had 4 roughly independent browser engines for the major 4 browsers:
Internet Explorer is all but dead.
Edge never got useful market share and is now moving to embrace Chromium.
Apple’s Safari… I am not sure how much Apple cares about Safari, and besides, WebKit gets its fair share of code from Google’s Blink project. On top of it all, it runs only on Apple devices, limiting its popularity and use.
In a way, we’re down to two main browser stacks: Google’s and Mozilla’s
Mozilla wrote about the end of the line for EdgeHTML and they are spot on:
If one product like Chromium has enough market share, then it becomes easier for web developers and businesses to decide not to worry if their services and sites work with anything other than Chromium. That’s what happened when Microsoft had a monopoly on browsers in the early 2000s before Firefox was released. And it could happen again.
I’ve tried Firefox and Edge a year or two ago. They worked well enough. But somehow they weren’t Chrome (possibly because I am a heavy user of Google services), so it just made no sense to stick with any of them when Chrome feels too much like “home”.
Does the current state of affairs lift Chromium to the status of Linux? More on that a bit further down this article.
Chrome’s Dominance
I’ve taken a snapshot of StatCounter’s desktop browsers market share:
If you are more interested in the numbers than that boring visual line, then here you go:
Chrome with over 72%; IE and Safari at 5%; Edge at 4%.
Firefox has a single digit 9%.
Funnily enough, all non-Chrome browsers are trending downwards. Even Safari which should enjoy growth due to an increase of Mac machines out there (for some unknown reason they are popular with developers these days – go figure).
Even if you ignore the desktop and check mobile only (see here), Chrome gets some 53% versus Safari’s 22%.
Investing in browser development isn’t a simple task. There are several vectors that need to be pursued at all times:
It would be safe to say that Chrome enjoys hundreds of Google employees writing code that goes directly into the Chrome browser.
Where will Microsoft take Edge?
Microsoft under the lead of CEO Satya Nadella has shifted towards the cloud and is doubling down on the enterprise. To a big extent, its Xbox business is an anomaly in the Microsoft of 2018.
Where once Microsoft was all about Windows and the Office suite, it has shifted towards Office 365 (subscription versus licensing business model for Office) and its Azure cloud. Windows is still there, but its importance and market dominance are a far cry from where it was a decade ago. Microsoft knows that and is making the necessary changes – not to win back the operating system market, but rather to grow its businesses on other core competencies and assets.
Microsoft Edge was an attempt to shed Internet Explorer. Give its browser a complete rewrite and bring something users would enjoy using. That hasn’t turned out well. After all the investment in Edge, it had a small market share to show for it, with many of the users switching to Windows 10 opting to switch to Chrome instead of Edge.
This user behavior is surprising to say the least. With a default browser that is good enough (Edge), why would they make the conscious decision of browsing to chrome.com to download and install a different browser that does what Edge does?
Microsoft tried and failed to change this user behavior, which led it to the conclusion that Edge, or at least the innards of Edge are a waste of resources.
Why does opting for Chromium as a browser engine make sense for Microsoft?
As Microsoft is shifting to the cloud, and Edge focusing on web standards, the end result was that anything and everything that Microsoft invested in for its web based services (Office 365 for example) has to work first and foremost on Chrome – that’s where users are anyway.
Google is using Chrome to drive proprietary initiatives to optimize its services for users and push them as standards later (think SPDY turning into HTTP/2, QUIC, or its latest Project Stream). It can do it due to its market dominance in browsers and the huge amount of web assets it operates. Microsoft never had that with Edge, so any proprietary initiative on Microsoft’s part in web technologies was bound to fail.
Microsoft derived no value from maintaining its own browser technology stack, and investing hundreds of developers in it was an expensive and useless endeavor.
So it went with Chromium.
Chromium brings one more benefit – theoretically, Microsoft can now push its browser to non-Windows 10 devices. Mac and Linux included. And since Microsoft is interested more in Office and Azure than it is in Windows, having an optimized “window” towards Office and Azure in the form of a Chromium-based Microsoft browser that works everywhere made sense.
This also shows where Microsoft does want to focus its browser efforts – on the user interface and experience, as well as on delivering Microsoft services to customers.
Microsoft cannot forgo having its own browser and just pre-install Chrome or even Firefox on its Windows operating system. That would mean ceding too much control to others. It has to have its own browser.
Windows Chromiumized
Remember that browser architecture I shared in the beginning? It is changing in one critical way. Google decided to create an “operating system” and call it Chrome OS, which ends up being based to some extent on the browser itself:
We spend more time in front of web applications that reside in the browser (or in Electron apps) and less inside native apps. This means that in many ways, the browser is the operating system.
Google derives all of its value from the internet, with the browser being the window there.
Microsoft is heading in the same direction, and where it matters for it with its operating system, it finds itself now competing against Chrome OS and Chromebooks, making it a huge threat to Microsoft and Office.
And obviously, there’s a “lite” version of Windows in the works, at least by the reports on Petri. Is this related to Edge using Chromium in some way? Would Windows Lite be web focused in the same way that Chrome OS is?
Who Controls Chromium? And is it the new Linux?
Back to Chromium, and the reasons that the Microsoft news is making ripples in the web around openness and positive fragmentation.
Browsers are becoming operating systems in many ways. Can we correlate between Linux and its ecosystem to Chromium and its growing ecosystem?
Linux and Ownership
I’d say that these are two distinctly different cases. If anything, Chromium’s status should worry many out there. It is less about monocultures, openness and other high words and more about control and competitive advantage.
On opensource.com, Greg Kroah-Hartman wrote a piece two years ago titled 9 lessons from 25 years of Linux kernel development. Here’s lesson 6:
6. Corporate participation in the process is crucial, but no single company dominates kernel development.
Some 5,062 individual developers representing nearly 500 corporations have contributed to the Linux kernel since the 3.18 release in December of 2014. The majority of developers are paid for their work—and the changes they make serve the companies they work for. But, although any company can improve the kernel for its specific needs, no company can drive development in directions that hurt the others or restrict what the kernel can do.
This is important.
Who really controls Linux? Who owns it? Who decides what comes next? The fact that there are no clear answers to these questions is what makes Linux so powerful and so useful to the industry as a whole.
Chromium and Google
Does the same apply to Chromium?
Chromium is a Google owned project. Hosted on a Google domain. Managed using Google tooling. Maintained by Google. This includes all the main browser pieces that are created, controlled and owned by Google to a large extent: the V8 JavaScript Engine, Blink web renderer and Chromium itself.
When someone wants to contribute to Chromium, they need to go through a rigorous process. One that takes place at Google’s leisure and based on its priorities. This is understandable. Chromium is what Chrome is made up of, and Chrome gets released to a billion users every 6-8 weeks. Breakage there ends with backlash. Security holes there mean vulnerability at a large scale.
While these aspects of stability and security are there with Linux as well, when it comes to Chromium, Google is the one that is setting the priorities.
It doesn’t end with priorities. It goes to the types of web experiments and proprietary features that end up in Chrome. Since Google controls and owns the Chromium stack… it can do as it pleases.
Will Google cede control of Chromium just because?
No.
It might benefit the open-whatever if it did, but it would also slow down innovation and won’t further Google’s own cause.
Microsoft and Chromium
Microsoft is painting this in colors of open source and collaboration with the industry.
It isn’t.
This is about Microsoft going with Chromium because Edge took a few bad turns in its strategy from the get go:
Going with Chromium means two things to Microsoft:
The only challenge here is that it comes to Chromium as just another vendor. Not a partner or an owner.
A Single WebRTC Stack
At the recent Kranky Geek event, Microsoft discussed its WebRTC on UWP project. Part of it was about merging the changes it made to the WebRTC code from webrtc.org (=the code that goes into Chrome). Here’s how James Cadd framed it in his session:
… after 4 years of maintaining a fork on github, we’ve been discussing with Google the possibility of submitting this back to the webrtc.org repo and we’re working on that now. The caveat is that there’s no guarantee that we’ll get 100% of the way there. We’re mostly using the public submission process, so we’re going through reviews just like everyone does, but that’s our goal.
The UWP specific changes are going to live in sdk-contrib-windows so we will have our own little area to contribute this back. Microsoft has committer rights there, so we’ll be able to keep everything moving there. […]
So just wanted to say thank you to Google for that opportunity. We’re looking forward to the collaboration.
A master and a slave? A landlord and a tenant? A patron and a client? Two partners? I am not sure what the exact relation here is, but it should be similar to what Microsoft has probably struck with Google across the board for all Chromium related technologies that are dear to Microsoft in one way or another.
Is a single stack good or bad?
If we look at it from a browser level perspective, we aren’t in a different position in the technology diversity than 8 years ago:
And here’s where we are today:
The main difference is market share – Chrome is eating up the internet with Blink and Chromium. Factor in Node.js, which uses the V8 JavaScript engine, and you get the same tech running servers as well.
WebRTC specifically, though? It now runs on webrtc.org code only. All browser vendors pick bits and pieces from it for their own implementations, and while there are differences between browsers, there aren’t many.
As I said before in many of my articles here – most developers today can simply develop their code for Chrome and be done with it; adding support for more browsers only if they really really really need to.
Browsers are one piece of getting WebRTC to run. Check out what else you’ll need in this free video series unraveling the server side story of WebRTC:
Register to the video series
Could Microsoft Buy Their way into Browser Market Share?
Not really. If they could have, they would have done so instead of going with Chromium.
Let’s start from why such a move would be appealing.
GitHub
The recent acquisition of GitHub by Microsoft can be taken as a case in point. Especially considering the varied reactions it brought across the board.
Six months after that announcement, the sky hasn’t fallen. Open source hasn’t been threatened or gobbled up by Microsoft. And Microsoft is even using GitHub for its own projects, and to announce its own initiatives – Edge using Chromium, for example.
Time will tell, but my gut tells me that Microsoft’s acquisition of GitHub is as meaningful as Facebook’s acquisition of Whatsapp and Instagram. These made little sense at the time from a valuation standpoint, but no one is doubting these acquisitions today.
With GitHub, Microsoft is buying its way into open source. Not only as lip service, but also in understanding how open source works. By owning a large portion of the open source interactions, and being able to analyze them closely, Microsoft can tell where developers are headed and what they are after. Microsoft was always successful due to the developers using their platform (top notch tools for developers – always). GitHub allows them to continue with that in an open source world.
Then why not the browser market?
There were two assets that could be acquired here – Mozilla and Electron.
Electron
Electron is already developed and maintained by GitHub directly. Microsoft owns it already.
What advantages does Microsoft derive from Electron? None, assuming you remember that Electron runs on top of Chromium.
From a strategic standpoint, there’s no value in Electron for Microsoft. At the end of the day, Electron is a window to Chromium and to web applications.
Microsoft is using it for its own cross platform applications – Skype on Linux has been known to use Electron for several years now.
Owning Electron through GitHub doesn’t help Microsoft in its browser market share.
Mozilla
Mozilla would have been an interesting acquisition.
Similarly to GitHub, it would be acquiring the obvious open source vendor. The challenge here is twofold:
Furthermore, acquiring Firefox as a window to Microsoft’s services and assets in the cloud is exactly one of them things that Mozilla is fighting Google against. It would be counterproductive to go there.
—
Microsoft has no one to buy in order to improve its position and market share in browsers.
It could only continue to fight it out with Edge or partner. And it decided to partner with the goliath in the room (an elephant wouldn’t be visible enough).
Will Chrome Reign Supreme?
Yes.
Anyone thinks otherwise?
The post Is Chrome on its Way to be ONLY Browser out there? (Microsoft throwing the towel on Edge) appeared first on BlogGeek.me.
What Does Machine Learning Have to do with MOS Scores?
Human subjectivity in MOS calculations doesn’t hold water when it comes to heterogeneous environments. That’s where machine learning comes to play.
MOS score. That Mean Opinion Score. You get a voice call. You want to know its quality. So you use MOS. It gives you a number between 1 to 5. 1 being bad. 5 being great. If you get 3 or above – be happy and move on they say. If you get 4.something – you’re a god. If you don’t agree with my classification of the numbers then read on – there’s probably a good reason why we don’t agree.
Anyways, if you go down the rabbit hole of how MOS gets calculated, you’ll find out that there isn’t a single way of doing that. You can go now and define your own MOS scoring algorithm if you want, based on tests you’ll conduct. From that same Wikipedia link about MOS:
“a MOS value should only be reported if the context in which the values have been collected in is known and reported as well”
Phrased differently – MOS is highly subjective and you can’t really compare MOS scores produced by one device to MOS scores produced by another device.
This is why I really truly hate delving into these globally-accepted-but-somewhat-useless quality metrics (and why we ended up with a slightly different scoring system in testRTC for our monitoring and testing services).
What Goes into MOS Scoring Calculations?
Easy. Everything.
Or at least everything you have access to:
Here are a few examples:
Physical desk phone
A physical IP phone has access to EVERYTHING. All the software and all the hardware.
It even knows how the headset works and what quality it offers.
Theoretically then, it can provide an accurate MOS that factors in everything there is.
Android native app
Android apps have access to all the software. Almost. Mostly.
The low-level device drivers are known, as is the hardware the app is running on. The only problem is the number of potential devices. A few years back, these types of visualizations of Android fragmentation were in fashion:
This one’s from OpenSignal. Different devices have different locations for their mics and speakers. They use different device drivers. Have different “flavors” of the Android OS. They act differently and offer slightly different voice quality as well.
What does measuring what an objective person thinks about the quality of a played audio stream mean in such a case? Do we need to test this per device?
Media server that routes voice around
Then we have the media server. It sends and receives voice. It might not even decode the audio (it could, and sometimes it does).
How does it measure MOS? What would it decide is good audio versus bad audio? It has access to all packets… so it can still be rather accurate. Maybe.
WebRTC inside a browser
And we have WebRTC. Can’t write an article without mentioning WebRTC.
Here though, it is quite the challenge.
How would a browser measure the MOS of its audio? It can probably do as good a job as an Android device. But for some reason, MOS scoring isn’t part of the WebRTC bundle. At least not today.
So how would a JavaScript web application calculate the MOS of the incoming audio? By using getStats? That has access to an abstraction on top of the RTCP sender and receiver reports. It correlates to these to some extent. But that’s about all it has at its disposal for such calculations, which doesn’t amount to much.
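To give a feel for the “network stats in, score out” style of calculation such tools end up doing, here is a rough sketch in Python, loosely inspired by the simplified ITU-T G.107 E-model. The inputs resemble what you could derive from RTCP reports or getStats; the impairment constants are illustrative placeholders, not a spec-accurate implementation:

```python
def estimate_mos(rtt_ms, jitter_ms, packet_loss_pct):
    """Rough MOS estimate from network metrics (simplified E-model style).

    The constants below are illustrative - real implementations
    calibrate them per codec, jitter buffer and deployment.
    """
    # Effective one-way latency: half the RTT plus a jitter-buffer allowance.
    latency = rtt_ms / 2 + 2 * jitter_ms + 10.0

    # Start from the maximum R-factor for narrowband voice.
    r = 93.2

    # Delay impairment: mild below ~160 ms one-way, much steeper above.
    if latency < 160:
        r -= latency / 40
    else:
        r -= (latency - 120) / 10

    # Loss impairment: each percent of packet loss costs 2.5 R points here.
    r -= packet_loss_pct * 2.5
    r = max(0.0, min(100.0, r))

    # Standard R-to-MOS mapping used by the E-model.
    mos = 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)
    return round(max(1.0, min(4.5, mos)), 2)
```

Note what this sketch cannot see: the microphone, the codec artifacts, the room noise – everything that isn’t on the wire. That blind spot is exactly the subjectivity problem this article is about.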
Back to MOS calculations
But what does MOS really calculate?
The quality of the voice I hear in a session?
Maybe the quality of voice the network is capable of supporting?
Or is it the quality of the software stack I use?
What about the issue with voice quality when the person I am speaking with is just standing in a crowded room? Would that affect MOS? Does the actual original content need to be factored into MOS scores to begin with?
I’ll leave these questions open, but say that in my opinion, whatever quality measurement you look at should offer insight into the things that are in your power to change – at least as a developer or product owner. Otherwise, what can you do with that information?
What Affects Audio Quality in Communications?
Everything.
I am sure I missed a bullet or two. Feel free to add them in the comments.
The thing is, there’s a lot of things that end up affecting audio quality when you make the decision of sending it through a network.
Is Machine Learning Killing MOS Scoring or Saving It?
So what did we have so far?
A scoring system – MOS, which is subjective and inaccurate. It is also widely used and accepted as THE quality measure of voice calls. Most of the time, it looks at network traffic to decide on the quality level.
At Kranky Geek 2018, one of the interesting sessions for me was the one given by Curtis Peterson of RingCentral:
He discussed that problem of having different MOS scores for the SAME call in each device the call passes through in the network. The solution was to use machine learning to normalize MOS scoring across the network.
This got me thinking further.
Let’s say one of these devices provides machine learning based noise suppression. It is SO good, that it is even employed on the incoming stream, as opposed to placing it traditionally on the outgoing stream. This means that after passing through the network, and getting scored for MOS by some entity along the way, the device magically “improves” the audio simply by reducing the noise.
Does that help or hurt MOS scoring? Or at least the ability to provide something that can be easily normalized or referenced.
Machine Learning and Media Optimization
At Kranky Geek, we’ve had multiple vendors touching the domain of media optimization. This year, their focus was mainly on video – both Agora.io and Houseparty gave eye opening presentations on using machine learning to improve the quality of a received video stream. Each took a different approach to tackling the problem.
While researching for the AI in RTC report, we’ve seen other types of optimizations being employed. The idea is always to “silently” improve the quality of the call, offering a better experience to the users.
In the next couple of years, we will see this area growing fast, with proprietary algorithms and techniques based on machine learning being added to the arms race of the various communication vendors.
Interested in more of these sessions around real time communications and how companies solve problems with it today?
Subscribe to our YouTube channel
The post What Does Machine Learning Have to do with MOS Scores? appeared first on BlogGeek.me.