bloggeek

The leading authority on WebRTC

Surviving WebRTC CPU requirements in large group calls

Mon, 05/25/2020 - 12:30

Enabling large group video calls in WebRTC is possible, but requires effort. WebRTC CPU consumption requires optimizations and that means making use of a lot of different techniques.

Cramming more users in a single WebRTC call is something I’ve been addressing here for quite some time.

The pandemic around us gave rise to the use and adoption of video conferencing everywhere. Even if this does slow down eventually, we’ve fast forwarded a few years at the very least in how people are going to use this technology.

It all started with a Gallery View

What’s interesting to see is how requirements and feature sets have changed throughout the years when it comes to video conferencing. When I first joined RADVISION, the leading screen layout of a video conference was something like this:

It had multiple names at the time, though today we refer to it mainly as gallery view (because, well… that’s how Zoom calls it).

Somehow, everyone was laser focused on this. Cramming as many people as possible into a single screen. Some of it is because we didn’t know better as an industry at the time. The rest is because video conferencing was a thing done between meeting rooms with large displays.

It also fit rather well with the centralized nature of the MCU, which ruled video conferencing.

Enter Speaker View

At some point in time, we all shifted towards the speaker view (name again, courtesy of Zoom):

There were a few vendors who implemented this, but I think Google Hangouts made it popular. It was the only layout they had available (up until last month), and it was well suited for the SFU technology they used. It also made a lot of sense since Hangouts took place in laptops and desktops and not inside meeting rooms.

With an SFU, we reduce the CPU load of the server by offloading that work to the user devices. At the same time, we increase our demand from the devices.

Hello 2020

Then 2020 happened.

With it came social distancing, quarantines and boredom. For companies this meant a need for townhall meetings for the office, held at regular intervals, just to keep employees engaged. Meetings that used to take place between meeting rooms became even larger still as everyone started joining them from home.

The context of many calls went from trying to get things done to getting connected and providing the shared goals and values that are easier to achieve within an office space. This, in turn, got us back to the gallery view.

Oh… and it also made 20+ user meetings a lot more common.

3 reasons why WebRTC is a CPU hog

The starting point for WebRTC CPU use in video calling is challenging. WebRTC has 3 things going against it from the get-go:

#1 – Video takes up a lot of pixels

1080p@30fps is challenging. And 720p isn’t a walk in the park either.

The amount of pixels to process to encode 1080p?

62 million pixels every second… I can’t count that fast

You need to encode and decode all that, and if you have multiple users in the same call, the number of pixels is going to grow – at least if your solution’s implementation is naive.
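To put numbers on it, here’s a back-of-the-envelope sketch (the resolutions and participant count are illustrative – the point is how quickly raw pixel throughput stacks up):

```typescript
// Rough pixel throughput math for video calls (illustrative numbers).
const pixelsPerSecond = 1920 * 1080 * 30; // 1080p@30fps = 62,208,000 - the "62 million" above

// A naive gallery view decoding 9 remote 720p streams:
const decodedPixelsPerSecond = 1280 * 720 * 30 * 9; // ~249 million pixels every second

console.log({ pixelsPerSecond, decodedPixelsPerSecond });
```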

What does this all boil down to? WebRTC CPU use will go through the roof, especially as more users are added into that group video call of yours.

#2 – Hardware acceleration isn’t always available

Without hardware acceleration, WebRTC CPU use will be high. Hardware acceleration will alleviate the pain somewhat.

Deciding to use H.264?

  • Certain optimizations won’t be available for you in group calling scenarios
  • And on devices without direct access to H.264 (some Android devices), you’ll need to use software implementations. It also means dealing with royalty payments on that software implementation

Going with VP8?

  • Hardware acceleration for it isn’t available everywhere
  • Or more accurately, there’s almost no hardware acceleration available for it

What about VP9?

  • Takes up more CPU than H.264 or VP8
  • You won’t find it in Safari
  • Not many are using it, which is a challenge as well
  • Hardware acceleration is also a challenge

So all these pixels? Software needs to handle them in many (or all) cases.

#3 – It is general purpose

WebRTC is general purpose. It is a set of JavaScript APIs that browsers implement.

These browsers have no clue about your use case, so they are not optimizing for it. They optimize for the greater good of humanity (and for Google’s own use cases when it comes to Chrome).

Implementing a large scale video conference scenario can be done in a lot of different ways. The architecture you pick will greatly affect quality, but also the approach you’ll need to take towards optimization. A browser isn’t aware of this selection, and neither is the infrastructure you decide to use if you go with a CPaaS vendor.

And it all boils down to this simple graph:

Complexity vs group size in WebRTC conference calls

The bigger the meeting size, the more WebRTC CPU becomes a challenge and the harder you need to work to optimize your implementation for it.

Why now?

I was helping out a client last month. He said something interesting –

“We probably had this issue and users complained. Now we have 100x the users, so we hear their complaints a lot more”

We are using video more than ever. It isn’t a “nice to have” kind of a thing – it is the main dish. And as such, we are finding out that WebRTC CPU (which people always complained about) is becoming a real issue. Especially in larger meetings.

Even Google is investing more effort in it than it used to:

@googlechrome 83 is now in beta with interesting changes to the video compositor. It should free up some CPU cycles when using @webrtc apps such as @whereby @confrere_video and #GoogleMeet

— Serge Lachapelle (@slac) April 17, 2020

3 areas to focus on to improve performance in group video conferences

Here are 3 areas you should invest time in to reduce the CPU use of your group video application:

#1 – Layout vs simulcast

Simulcast is great – if used correctly.

It allows the SFU to send different levels of quality to different users in a conference.

How is that decision made?

  • Based on available bandwidth in the downlink towards each participant
  • The performance of the device (no need to shove too much down a device’s throat and choke it with more data than it can decode)
  • The actual frame resolution of where that video should appear
  • How much is that video important for the overall session

Think about it. And see where you can shave off on the bitrates. The lower the total bitrate a device needs to deal with when it has to encode or decode – the lower the CPU use will be.
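To make this concrete, here is a minimal sketch of publishing three simulcast layers from the browser with the standard addTransceiver API. The rid names, scaling factors and bitrate caps are arbitrary example values – the limits your SFU expects will differ:

```typescript
// Publish one video track as three simulcast layers (example values).
async function publishSimulcast(pc: RTCPeerConnection) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  pc.addTransceiver(stream.getVideoTracks()[0], {
    direction: 'sendonly',
    sendEncodings: [
      { rid: 'low', scaleResolutionDownBy: 4, maxBitrate: 150_000 },
      { rid: 'mid', scaleResolutionDownBy: 2, maxBitrate: 500_000 },
      { rid: 'high', maxBitrate: 1_500_000 },
    ],
  });
  // The SFU can then forward the layer that fits each receiver's bandwidth,
  // device performance and rendered resolution.
}
```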

#2 – Not everyone’s talking

Large conferences have certain dynamics. Not everyone is going to speak their mind. A few will be dominant, some will voice an opinion here and there and the rest will listen in.

Can you mute the participants not speaking? Is there an elegant way for you to do it in your application without sacrificing the user experience?

There are different ways to handle this. Anything from dominant speaker detection, through the use of DTX, to automatic muting and unmuting of certain users.
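The client-side building blocks for this are small. A minimal sketch, assuming your application logic (dominant speaker detection, moderation, etc.) decides who should be live:

```typescript
// Mute/unmute local media without renegotiating the session.
function setAudioActive(sender: RTCRtpSender, active: boolean) {
  if (sender.track) sender.track.enabled = active; // a disabled audio track sends silence
}

async function setVideoActive(sender: RTCRtpSender, active: boolean) {
  const params = sender.getParameters();
  params.encodings.forEach((encoding) => (encoding.active = active));
  await sender.setParameters(params); // inactive encodings stop the encoder entirely
}
```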

#3 – UI implementation

How you implement your UI will affect performance.

The way you use CSS, HTML and JS can eat up the CPU before you even deal with audio or video processing.

Look at things like the events you process. Try not to run logic that changes UI elements every 100 milliseconds or less for each media track – you’re going to have a lot of these tracks, and the updates add up.
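A common pattern is batching UI updates behind a slow polling loop instead of reacting to every media event. A sketch, assuming audio level meters driven by getStats (where exactly audioLevel shows up still varies between browser versions):

```typescript
// Update all audio level meters in one batched pass per second.
function startLevelMeters(
  pc: RTCPeerConnection,
  render: (levels: Map<string, number>) => void
) {
  return setInterval(async () => {
    const levels = new Map<string, number>();
    const stats = await pc.getStats();
    stats.forEach((report) => {
      if (report.type === 'inbound-rtp' && report.kind === 'audio') {
        levels.set(report.id, report.audioLevel ?? 0);
      }
    });
    render(levels); // one DOM update for all participants, not one per track event
  }, 1000);
}
```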

My eBook on the topic is now available

If you are interested in how to further optimize for video conference sizes, then I just published an eBook about it called Optimizing Group Calling in WebRTC. It includes a lot more details and suggestions on the above 3 areas of focus as well as a bunch of other optimization techniques that I am sure you’ll find very useful.


How to know if an open source WebRTC media server is kept up to date?

Mon, 05/18/2020 - 12:00

If you are going to start a WebRTC project that requires a media server, you had better know how frequently the code gets updated, as well as when it was last updated.

A WebRTC media server is a type of server that is required to build applications that offer group calling capabilities among other things. There are other types of WebRTC servers that are needed, but this is not the place or time to discuss them.

Rising interest in WebRTC media servers

Here’s a discussion I had multiple times in the last month: People asking about this or that WebRTC media server or project, wanting to know if they should adopt it for their own application.

This rise stems from the increased interest in video conferencing due to the pandemic. Video has shown its usefulness in the biggest possible way. We’re all stuck at home, and the only way to communicate is by “calling”. Video adds context and meaning to voice only calling so it is becoming widespread.

In some cases, as in India, the government decided to put out funding for a video conferencing challenge, where vendors are invited to build applications for the local market. In others, remote-something is becoming a thing where the existing generic solutions don’t cut it (there are many such verticals).

As it so happens, a lot of teams are now trying to figure out which open source WebRTC media server they should pick and use.

There’s an article I wrote almost 3 years ago on 10 Tips for Choosing the Right WebRTC Open Source Media Server Framework. I took the time today to update it as well as the selection worksheet in it.

One thing that developers seem to miss is how easy it is to understand the freshness of the code – how up to date a WebRTC media server code really is.

So here we go.

How do the most popular WebRTC media servers compare to each other?

What I like doing is using the insights feature in github. It gives a nice initial perspective of a project besides the popularity metrics of watches, stars and forks (they are nice, but just to get me interested – not for forming an opinion).

For that purpose, I like looking at the Pulse, Contributors and Code frequency metrics provided by github.
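If you prefer pulling the same signal programmatically, the public GitHub REST API exposes per-repository commit activity. A small sketch (the endpoint is real; the repo name is just an example):

```typescript
// Fetch a repo's weekly commit counts for the past year from the GitHub API.
async function weeklyCommits(repo: string): Promise<number[]> {
  const res = await fetch(`https://api.github.com/repos/${repo}/stats/commit_activity`);
  // Note: GitHub may answer 202 while it computes the stats - retry after a moment.
  if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);
  const weeks: { total: number }[] = await res.json();
  return weeks.map((week) => week.total); // 52 entries, oldest week first
}

weeklyCommits('meetecho/janus-gateway').then(console.log);
```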

Doing such a check is useless without context. And context is built by looking at alternatives. In this case, I decided to look at some of the most popular WebRTC media server alternatives out there: Janus, Jitsi, Kurento and mediasoup.

Why these 4? Because they are mentioned in almost every conversation I have about open source WebRTC media servers.

Some of these projects are built out of multiple github repositories. For the purpose of this comparison, I tried looking at the main repo holding the media server itself. Here are the ones I’ve used for each:

Pulse

The github pulse lists recent activity of the project. I’ve looked at a period of 1 month here on all 4 projects to get the following picture:

               Janus   Jitsi   Kurento  mediasoup
Authors        14      10      1        4
Total commits  76      35      8        42
Files          65      569     5        14
Additions      3,152   2,091   292      2,670
Deletions      363     1,862   214      1,993

A few thoughts:

  • Jitsi and mediasoup seem to be doing a lot of code optimizations based on the ratio between additions and deletions
  • Janus had a lot of additions versus deletions, so a lot of new code this past month?
  • All projects are skewed towards a single developer at their core. Jitsi is skewed towards two main developers
  • Kurento is way behind the others
  • mediasoup seems to have a smaller community of active contributors around it (it is the youngest of these projects)

Contributors

The github contributors view gives us a nice time perspective of these projects, with a focus on who are the main contributors over time.

A few thoughts:

  • Activity in Janus has been stable over time, but leans heavily on a single developer, Lorenzo Miniero
  • For Jitsi, activity is leaning on 3 developers, 2 of whom have been with the project for a very long time
  • Kurento’s activity fell since the acquisition by Twilio and never really recuperated. Most activity since then can be attributed to Juan Navarro
  • mediasoup’s activity has a kind of a cyclic nature to it that is hard to explain from outside. Most of the work there can be attributed to Iñaki Baz Castillo

Code Frequency

This chart on github shows the additions and deletions made to a project throughout its lifetime.

For the image below I tried aligning the projects as well as I could on the 10k range, which might have distorted them a bit, but should place them nicely in the context of each other. Notice that I couldn’t do that for mediasoup – see my thoughts below. Also note that the X axis of the timescale in each is different, but that wasn’t interesting for me for this comparison.

A few thoughts:

  • Janus and Jitsi show healthy code contributions throughout their existence. I attribute the spikes outside the chart area I’ve selected to one-time efforts or just mistakes. Over time, they show stability
  • Kurento’s activity has seen better days
  • mediasoup is interesting. It shows bouts of activity, mainly due to heavy use of branching across its version releases
  • These projects are rather small in nature. If you search github for popular projects outside the WebRTC space, you’ll see a lot higher levels of activity

Why do we want an up to date WebRTC media server?

This looks like an obvious question, but it really isn’t.

To check this out, let’s look at another popular project that I suggest people use when they need a SIP over WebSocket implementation in JavaScript: JsSIP

Does this make JsSIP a dead project? Or is it just that there’s nothing much to add besides code fixes here and there?

(Interestingly, SIP.js shows totally different behavior)

When it comes to WebRTC, the same cannot be said. WebRTC is “work in progress” at its core. Browsers deprecate and introduce new features with each release, new codecs are introduced, and the slew of use cases using WebRTC is still growing strong. This means that to keep pace with these changes, WebRTC media servers need to be updated as well. Otherwise they wither and die, over time becoming unable to offer the level of quality and connectivity developers and users expect.

What other criteria should you be looking for?

Freshness of code is only one criterion, but there are many more. The first one should probably be: does this media server fit my requirements? Not all media servers are built equal or for the same purpose, and deciding which one is most suitable is important as well.

Other criteria include usage, maintainability, support, documentation, etc.

You can find the full list in my article about it – 10 Tips for Choosing the Right WebRTC Open Source Media Server Framework

And if this is of real interest to you, then you should look at your selection process itself. For that, I can suggest my free media server selection KPI sheet.

Oh, and once you’re there, think about scaling as well. I have an existing eBook about best practices in scaling WebRTC applications, and an upcoming eBook on Optimizing Group Video Calling in WebRTC (available for pre-purchase).


AV1 vs HEVC: Are the WebRTC codec wars back?

Mon, 04/27/2020 - 12:30

AV1 is coming to WebRTC sooner rather than later. Apparently so is HEVC. It is an AV1 vs HEVC game now, but sadly, these codecs are unavailable to the “rest of us”.

WebRTC codec wars were something we’ve seen in the past. During the early days of WebRTC there were ongoing discussions on whether the mandatory video codec in WebRTC should be VP8 or H.264. The outcome was to make both of them mandatory to implement in browsers.

Fast forward to today, and life is simple. We have ubiquity and support across all browsers that have WebRTC in them, which is great.

We are now gearing up for the next fight. This one isn’t going to be between VP9 and HEVC, but rather between AV1 and HEVC.

Why now?

COVID-19 is causing all communication vendors to fast forward and accelerate their roadmaps by 6-18 months. Those that don’t are going to be left behind on the other side of this pandemic.

COVID-19 is fast forwarding all roadmaps and plans related to WebRTC, including codec improvements

This isn’t an attempt to scare anyone or to FUD people into doing things. It is just the way things are.

If you want to see how serious things are, just check what’s going on around you:

  • Zoom is rolling out new features on almost a daily basis. They are plugging their security gaps faster than most vendors can plan their roadmap, let alone develop anything
  • Google is releasing features on Duo and Meet to drastically improve them. A lot of it hinges on machine learning but also on the latest coding technologies (more on that later, when we get to AV1 again)
  • Most UCaaS vendors have launched their own video meeting service in the past 6 months. A lot of them in the last month. Many now offering it for free
  • Many vendors in the video space from all verticals are seeing a 10x or more increase in use
  • There’s a race towards filling different gaps when comparing these meeting services versus Zoom

The AV1 vs HEVC angles here are VERY interesting.

HEVC requires royalties and is a licensing mess.

AV1 is so new it hasn’t even had an opportunity to cool down a bit after being taken out of the oven. Frankly? It is still half baked and requires a bit more cookin’ – and yet… it is now being rolled out in Google Duo.

The thing is, 6 months back, video was nice to have. A feature that needed to be ticked in a long requirements list.

Today? Video first. All the rest comes later.

Zoom’s stock price and market cap is the best indicator of that change.

A brief history of WebRTC video codecs

In less than 10 years, we’ve witnessed 3 codec generations in WebRTC:

  1. VP8 / H.264
  2. VP9 / HEVC
  3. AV1

Each video codec generation improves over the last one, addressing different market requirements

With each generation of codec introduced, CPU and memory requirements grow along with the complexity of the codec, and the resulting quality for a given bitrate increases.

VP8/H.264

I’ve been working with H.264 since 200x. Probably somewhere in 2005. It was brand new at the time and was about to replace H.263 and all of its extensions.

Fast forward to around 2010, when it started being deployed in almost all video conferencing room systems.

VP8 came to our lives along with WebRTC, in around 2012. It is comparable to H.264.

There are reasons to pick H.264 over VP8. And while hardware acceleration is more readily available in H.264 than VP8, it does pose challenges.

Both are probably at their peak right now when it comes to video calling:

  • They are ubiquitous
  • Readily available
  • Understood and known, with vibrant ecosystems
  • They can run on most CPUs

This is the tipping point, where a new video codec is being sought after.

If you are using it today, you should be just fine. If you seriously want to be at the forefront of technology, right on the bleeding edge (and you will bleed – time, money and blood), then read on to your next alternatives.

And if you need to decide between VP8 and H.264, check out this free video course: H.264 or VP8?

VP9

It should have been a VP9 vs HEVC thing and not an AV1 vs HEVC thing.

The next big thing in video codecs was supposed to be VP9. HEVC is what comes next after H.264, and the intent was always for VP9 to be the royalty free alternative to HEVC.

VP9 gives you either less bitrate for the same quality or more quality for the same bitrate than VP8

As things go, VP9’s characteristics are just what you’d expect from a new codec generation:

  • Better compression efficiency
  • Higher complexity
  • Scarcity of hardware acceleration (an issue still)

What VP9 was supposed to bring to the world is SVC – scalability. With VP9 SVC we were supposed to improve resiliency of video as well as the ability to scale large group video calls better than ever before.

Need a boost and have a very good grasp of who is in a call before everyone joins? VP9 might be a good alternative for you.
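For completeness, here is what requesting VP9 SVC looks like with the emerging WebRTC-SVC extension. This API was still experimental at the time of writing and browser support varies, so treat it as a sketch; 'L3T3' (3 spatial by 3 temporal layers) is just one example mode:

```typescript
// Request an SVC encoding (experimental WebRTC-SVC API; support varies).
async function publishVp9Svc(pc: RTCPeerConnection) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  pc.addTransceiver(stream.getVideoTracks()[0], {
    direction: 'sendonly',
    // scalabilityMode isn't in all TypeScript DOM typings yet, hence the cast
    sendEncodings: [{ scalabilityMode: 'L3T3' }] as any,
  });
}
```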

AV1

AV1 is the new kid on the block. An impossible dream coming true: vendors working together in a new Alliance of Open Media, working on a royalty free video codec. Something that was never heard of a few years ago and now feels like the new norm.

Starting with 7 founding members, this changed the dynamics of the WebRTC codec wars. Instead of having Google with VP9 on one side of the ring and the rest of the world on the other side with HEVC, it brought a team of large players to the royalty free side, standing behind the AV1 video codec.

Today, the alliance includes 48 members, including all browser vendors and most chipset vendors.

The Alliance of Open Media is the who’s who of the video industry

The focus in terms of comparisons is now AV1 vs HEVC.

I’ve written at length about AV1 when the specification got released. You can learn about AV1 there.

There are those who believe AV1 is ready and has been ready for quite some time. Reality says otherwise. It isn’t for the faint of heart at this point. More on that – below.

Adventurous? Go AV1!

Where in the world is WebRTC VP9 video call?

VP9 shipped in Chrome 48 for WebRTC. That was January 2016. 4 years later and it is safe to say that not many are using VP9 in WebRTC.

Adoption of VP9 is slow

The two main places where VP9 is making sense?

  1. Google. Google Meet for example has been using VP9 for quite some time in its own calls
  2. Peer-to-peer calls. Just because it is easy to achieve

Once AV1 was announced, the debate began on whether one should even try to adopt VP9 or wait for AV1 instead. The majority are waiting for AV1. Laziness at its best (and what I would have selected as well, if you’re wondering).

The other reason for delaying and skipping a generation is investment in VP9. Since everyone’s looking at AV1, VP9 is left with fewer eyeballs and developers improving it. Add to that the slow release of SVC support for it in Chrome and the fact that Safari still doesn’t support VP9, and you can understand the reluctance of going this route.

Apple’s appetite for HEVC in WebRTC

The big Apple is insatiable. Apple has been banking on HEVC for many years now, and where HEVC and WebRTC fit in at Apple has been a topic here in the past as well.

Apple is banking on both royalty bearing (HEVC) and royalty free (AV1) video codecs

On Apple’s release notes for Safari Technology Preview 104 there’s a bullet point that shows where things are headed:

Added initial support for WebRTC HEVC

I wonder whatever for?

  • Apple is a founding member of the Alliance of Open Media, so it is banking on AV1 as the future video codec
  • In iOS 11 (2017), Apple introduced HEVC to its devices. That was done with the addition of hardware acceleration
  • Android devices usually don’t have HEVC hardware support, and licensing being as tough and expensive as it is, this is a continued differentiator for Apple
  • Google will be reluctant to add HEVC to Chrome. So would Mozilla. Not sure what Microsoft’s stance would be on this one
  • Apple isn’t playing an AV1 vs HEVC game, but rather an AV1 and HEVC game, and they are alone in that at the moment
  • Apple isn’t especially strong or dominant in the WebRTC space. Safari is the worst browser these days in terms of WebRTC support, with users already used to switching to Chrome on Mac. What would adding HEVC to WebRTC Safari add? Especially when there are so many other, more basic things to fix and improve in Safari WebRTC support…

To me, this is the biggest conundrum at the moment. A piece of this puzzle is missing. What would make developers use HEVC if it is only available in Safari and nowhere else? This isn’t the app store. It is the web.

Time will tell.

WebRTC AV1 support in Google Duo

I said it before and I’ll iterate it again. AV1 is too new. Too early to be adopted in WebRTC or real time communications. And yet… Google just announced supporting AV1 in Google Duo:

[…] in the coming week, we’re rolling out a new video codec technology to improve video call quality and reliability, even on very low bandwidth connections.

They made sure to add a nice moving GIF so you can see the difference between “a video codec” and AV1 in the same bitrate.

Is that other codec VP8? VP9? H.264? HEVC? Maybe H.261…

Are they using it for all Duo calls? In all devices? In all network conditions?

The only thing I could find is that this rolls out to Android with iOS 2 weeks behind in the roll out. There are more things left unsaid.

Some thoughts here
  • AV1 doesn’t have hardware acceleration on smartphones. Maybe on 1 or 2 very new ones (I doubt it), and even then, the hardware would still be buggy as hell – especially for real time video, which is different than just camera recording or playing YouTube videos
  • This means that going to HD resolutions with AV1 on smartphones is going to be brutal to CPU, battery life and device temperature. This isn’t where AV1 support in Duo is going
  • This leaves us with the low bitrate scenario – probably anything from VGA or lower. Maybe even a quarter of that (QVGA)
  • It is where AV1 is going to shine in 2020 and into 2021

Why AV1?

We’re all stuck at home burning the networks. The large streaming vendors are lowering resolutions (and bitrates) for their default players in certain countries. This reduces the CPU load, making room for improving quality on lower bitrates. And that leads to the ability (and need) of better video codecs.

Why not VP9?

Google Duo most probably already makes use of VP9. Maybe even HEVC on iOS devices due to hardware acceleration benefits. When it comes to 1:1 sessions, there’s no real reason to stick to a single video codec for all sessions.

With Apple working publicly now on HEVC in WebRTC, it put pressure on Google, and getting AV1 into Duo in order to bolster their side in the AV1 vs HEVC debate became a pressing matter. Google Duo’s 1:1 call scenarios were the most suitable candidate for Google to make that stand.

Enter AV1

When a new video codec generation was introduced, the thinking was simple: “we are expecting it to support a higher resolution, at a higher bitrate, with a higher CPU consumption”

  • Higher resolution, because let’s face it – QVGA sucked in 1995 and we were still using it in 2000 in video conferencing. So each generation had to get 4 times the pixels the previous one was capable of dealing with
  • Higher bitrate, because at 4 times the pixels we couldn’t really get 25% of the size, so there was an expectation of needing more bandwidth for the content we wanted to use
  • Higher CPU consumption, because we were adding more work to the encoder and decoder

In 2020, things are changing.

Sometimes, all you need is a better fit into smaller spaces (like low bitrate)

Bigger is no longer better with video codecs

I have 4K resolution on my desktop and laptop. 1080p on my phone and TV. I am happy with 720p content most of the time. I hate fonts on a 4K screen that aren’t enlarged (the damn characters are just too small to read).

What is the value of higher resolution? HDR content? 8K? 360? VR? If all I need is just plain video, no higher resolution is required. We’re all content most of the time with 720p resolutions for business meetings anyway.

Resolution requirements for most content types and use cases are not going to get higher any time soon.

We are probably at peak resolution already.

So we are free to think of next gen video codecs as ones that help consume lower bitrates.

There’s a distinction here. While any new video codec generation consumes lower bitrates for the same resolution/quality, the main purpose of these new video codecs was almost always to increase the resolution as well.

AV1 on mobile makes perfect sense here. Especially for low resolutions – since we can have some CPU to spare for that scenario.

A quick FAQ on the latest WebRTC video codecs

Is HEVC (H.265) supported in WebRTC?

No. Not officially.
Apple is adding support for it in Safari, but no other browser has added support for it or indicated plans to add support for it

Can I use HEVC in WebRTC?

Yes, but not in browsers.
Apple will introduce HEVC in Safari, but no other vendor will. If you build your own native application for either PC or mobile you can add HEVC as another supported codec and use it in your application.

Should I start investing in adding AV1 to my WebRTC application?

That depends. If you want to add AV1, you need to make sure your use case fits well, as well as the devices you expect your users to have.
You will also need to put a considerable investment of time and money to make it happen.
My suggestion for most vendors would be to wait with AV1 support.

Why isn’t VP9 used in WebRTC much?

That is a good question with no good answer.
I believe it is a matter of timing. When the time came to adopt VP9, AV1 was already announced and on its way, so vendors preferred to wait and jump directly to AV1 instead of going for VP9.
VP9 doesn’t enjoy much hardware acceleration, which also makes it CPU intensive, requiring companies to tweak, fine tune and optimize their systems to use it. That kind of work is something many prefer not to do.

WebRTC and the future of video codecs

We’re at war again. The video codec war of WebRTC. And this time, each vendor needs to pick a strategy to play.

Is there a single video codec today that will answer all of your WebRTC needs?

We’ve got multiple codecs in our war chest: VP8, H.264, VP9, AV1 and sometimes even HEVC now.

Which one will we be using?

Which ones will we be using?

Here, scenarios matter. Different scenarios will call for totally different video codec selection to optimize for quality, CPU use, performance, bitrate, cost, etc.

In 1:1 sessions, you may want to keep your options open – use the best one dynamically just by making a decision as the session is set up.
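In the browser, one way to keep those options open is reordering codec preferences per session before creating the offer, using the standard setCodecPreferences API (availability varies across browsers; preferring VP9 here is just an example):

```typescript
// Prefer a specific video codec for this session without excluding the rest.
function preferCodec(transceiver: RTCRtpTransceiver, mimeType: string) {
  const codecs = RTCRtpSender.getCapabilities('video')?.codecs ?? [];
  transceiver.setCodecPreferences([
    ...codecs.filter((codec) => codec.mimeType === mimeType),
    ...codecs.filter((codec) => codec.mimeType !== mimeType), // keep fallbacks negotiable
  ]);
}

// e.g. preferCodec(pc.getTransceivers()[0], 'video/VP9');
```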

For group calls, will you be using a single, static video codec? Or allow for multiple ones? Will you have multiple codecs in a single group session? Are you going to have an SFU tweaked and tuned for that? Will you pick the best video codec for a session and then dynamically switch over as the nature of the session changes (=someone joins and leaves who has certain limitations)?

What about consumers? What kind of video codec selection strategies are going to be prevalent there? How are they going to be different than the ones we see in enterprise solutions? What will be the difference for mobile first or application based versus web based solutions?

WebRTC differentiation: the next battlefield lines are being drawn

WebRTC differentiation is back in focus

We live in interesting times.

Codec selection has never been more interesting or important.

While WebRTC mandates 2 codecs (H.264 & VP8), most browsers also support VP9, and now we’re seeing browser vendors either adding HEVC or using AV1 in their own apps.

If media quality is at the core of your service (think carefully about your answer to this question), then rethinking your video codec selection strategy might be in order.

It is going to require research and investment. But this is where the future lies for video codecs in WebRTC.


WebRTC Server: What is it exactly?

Mon, 04/13/2020 - 12:30

When someone says WebRTC Server – what do they really mean? There are 4 different WebRTC servers that you need to know about: application, signaling, NAT traversal and media.

WebRTC is a communications standard that enables us to build a variety of applications. The most common ones will be voice or video calling services (1:1 or group calls). You can use it for broadcasts, live streaming, private/secure messaging, etc.

To get it working requires using a multitude of “WebRTC servers” – machines that reside in the cloud (or at least remotely enough and reachable) and provide functionality that is necessary to get WebRTC sessions connected properly.

What I’d like to do here is explain what types of WebRTC servers exist, what they are used for and when will you be needing them. There are 4 types of servers detailed in this article:

There are 4 types of WebRTC servers you need to know about
  1. WebRTC application servers – essentially the website hosting the service
  2. WebRTC signaling servers – how clients find each other and connect to each other
  3. NAT traversal servers for WebRTC – servers used to assist in connecting through NATs and firewalls
  4. WebRTC media servers – media processing servers for group calling, recording, broadcasting and other more complex features

More of the audio-visual type? I’ve recorded a quick free 3-part video course on WebRTC servers.

Enroll to free WebRTC servers course

WebRTC application servers

WebRTC application servers are like any other web application servers out there

Not exactly a WebRTC server, but you can’t really have a service without it

Think of it as the server that serves you the web page when you open the application’s website itself. It hosts the HTML, CSS and JS files. A few (or many) images. Some of it might not even be served directly from the application server but rather from a CDN for the static files.

What’s so interesting about WebRTC application servers? Nothing at all. They are just there and are needed, just like in any other web application out there.

WebRTC signaling servers

WebRTC signaling servers are in charge of connecting users to one another

Signaling servers for WebRTC are sometimes embedded or collocated/co-hosted with the application servers, but more often than not they are built and managed separately from the application itself.

While WebRTC handles the media, it leaves the signaling to “someone else” to take care of. WebRTC will generate SDP – these are fragments of messages that the application needs to pass between the users. Passing these messages is the main concern of a signaling server.

A WebRTC signaling server passes signaling messages between the users to establish a session

There are 4 main signaling protocols that are used today with WebRTC, each lending itself to different signaling servers that will be used in the application:

  1. SIP – The dominant telecom VoIP protocol out there. When used with WebRTC, it is done as SIP over WebSocket. CPaaS and telecom vendors end up using it with WebRTC, mostly because they already had it in use in their infrastructure
  2. XMPP – A presence and messaging protocol. Some of the CPaaS vendors picked this one for their signaling protocol
  3. MQTT – Messaging protocol used mainly for IOT (Internet of Things). First time I’ve seen it used with WebRTC was Facebook Messenger, which makes it a very popular/common/widespread signaling server for WebRTC
  4. Proprietary – the most common approach of all, where people just implement or pick an alternative that just works for them

SIP, XMPP and MQTT all have existing servers that can be deployed with WebRTC. 

The proprietary option takes many shapes and sizes. Node.js is quite a common server alternative used for WebRTC signaling (just make sure not to pick an outdated alternative – that’s quite a common mistake in WebRTC).
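To make the proprietary option concrete, here is a minimal sketch of a signaling relay, assuming Node.js with the ws package (the join/relay message format is made up for the example – all the server does is shuttle SDP and ICE messages between the peers in a room):

```typescript
import { WebSocketServer, WebSocket } from 'ws';

const rooms = new Map<string, Set<WebSocket>>();
const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  let room: Set<WebSocket> | undefined;
  socket.on('message', (data) => {
    const msg = JSON.parse(data.toString());
    if (msg.type === 'join') {
      room = rooms.get(msg.room) ?? new Set<WebSocket>();
      rooms.set(msg.room, room.add(socket));
    } else if (room) {
      // Relay offers, answers and ICE candidates untouched to the other peers
      for (const peer of room) {
        if (peer !== socket) peer.send(JSON.stringify(msg));
      }
    }
  });
  socket.on('close', () => room?.delete(socket));
});
```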

If you are going towards the proprietary route:

NAT traversal servers for WebRTC

NAT traversal is important to get more sessions connected properly in WebRTC

To work well, WebRTC requires NAT traversal servers. These WebRTC servers are in charge of making sure you can send media from one browser to another.

There are two types of NAT servers needed: STUN and TURN. TURN servers always implement STUN as well, so in all likelihood you’re looking at a single server here.

STUN is used to answer the question “what is my public IP address?” and then share the answer with the other user in the session, so they can try and use that address to send media directly.

TURN is used to relay the media (which costs more in bandwidth), and is used when you can’t reach the other user directly.

A few quick thoughts here:

  • You need both STUN and TURN to make WebRTC work. You can skip STUN if the other end is a media server. You will need TURN even if your other end of the session is a media server on a public IP address
  • Don’t use free STUN servers in your production environment. And never, ever use “free” TURN servers
  • If you deploy your own servers, you will need to place the TURN servers as close as possible to your users, which means handling TURN geolocation
  • TURN servers don’t have access to the media. Ever. They don’t pose a privacy issue if they are configured properly, and they can’t be used by you or anyone else to record the conversations
  • Prefer using paid managed TURN servers instead of hosting your own if you can
  • Make sure you configure NAT traversal sensibly (see the configuration sketch below this list). Here’s a free 3-part video course on effectively connecting WebRTC sessions
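Wiring these into a peer connection is just configuration. A sketch with placeholder URLs and credentials (use your own deployment or your managed provider’s details):

```typescript
// Configure STUN/TURN for a peer connection (placeholder servers and credentials).
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.example.com:3478' },
    {
      urls: [
        'turn:turn.example.com:3478',               // UDP/TCP relay
        'turn:turn.example.com:443?transport=tcp',  // helps behind strict firewalls
      ],
      username: 'user',
      credential: 'secret',
    },
  ],
});
```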
WebRTC media servers

WebRTC media servers make it possible to support more complex scenarios

WebRTC media servers are servers that act as WebRTC clients but run on the server side. They are termination points for the media where we’d like to take action. Popular tasks done on WebRTC media servers include:

  • Group calling
  • Recording
  • Broadcast and live streaming
  • Gateway to other networks/protocols
  • Server-side machine learning
  • Cloud rendering (gaming or 3D)

The adventurous and strong hearted will go and develop their own WebRTC media server. Most would pick a commercial service or an open source one. For the latter, check out these tips for choosing WebRTC open source media server framework.

In many cases, the thing developers are looking for is support for group calling, something that almost always requires a media server. In that case, you need to decide if you’d go with the classic (and now somewhat old) MCU mixing model or with the more accepted and modern SFU routing model. You will also need to think a lot about the sizing of your WebRTC media server.

For recording WebRTC sessions, you can either do that on the client side or the server side. In both cases you’ll be needing a server, but what that server is and how it works will be very different in each case.

If it is broadcasting you’re after, then you need to think about the broadcast size of your WebRTC session.

A quick FAQ on WebRTC servers

Can I run WebRTC without any server?

Not really. You will somehow need to know who to communicate with, and in many cases you will need to negotiate IP addresses and even route data through a server to connect your session properly.

Will WebRTC servers spy on me and my data?

That depends on the service you are using, as different implementations will put their focus on different features.

In general, signaling and NAT traversal servers in WebRTC don’t have access to the actual data. Media servers often have (and need) access to the actual data.

Can I host WebRTC servers on AWS?

Yes. You can host your WebRTC servers on AWS. Many popular WebRTC services are hosted today on AWS, Google Cloud, Microsoft Azure and Digital Ocean servers. I am sure other hosting providers and data center vendors work as well.

Can I run WebRTC on my PHP WordPress site?

WebRTC can be added to any WordPress, PHP or other website. In such a case, the PHP WordPress server will serve as the application server and you will need to add into the mix the other WebRTC servers: signaling server, NAT traversal server and sometimes media servers.

Know your WebRTC servers

No matter how or what it is you are developing with WebRTC, you should know what WebRTC servers are and what they are used for.

If you want to expand your knowledge and understanding of WebRTC, check out my WebRTC training courses.


A new WebRTC codelab training

Mon, 03/30/2020 - 12:00

I am launching a new WebRTC codelab course, created together with Philipp Hancke. This goes into the intricacies of WebRTC signaling and best practices.

The State of online WebRTC resources

Where should you start with WebRTC? There’s not enough information about it and at the same time too much information about it. Most of it is old and outdated. Needle in a haystack.

Up to date WebRTC code is hard to come by

Sift through discuss-webrtc, stackoverflow and the W3C WebRTC mailing list? All great. But there’s no explanation besides pieces of code. It lacks context.

Read the spec and work your way from there? If you don’t fall asleep, you might just find that browsers aren’t exactly spec compliant yet anyways.

Books? None from 2020. None from 2019. Less than 10 in total. A handful.

How about online courses? There are a few on udemy and pluralsight, but they are also old and broken by now.

My own Advanced WebRTC Architecture course? While great, it purposefully doesn’t go through the APIs of WebRTC, understanding that they change so frequently.

A codelab then? Sure. Find one that works and explains signaling properly.

The way I see it, there are 3 types of WebRTC codelabs today:

  1. The outdated ones
  2. The broken ones
  3. The ones that sell you a managed streaming service

Enter Philipp Hancke

I’ve been working with Philipp here and there in the last couple of years. He is fun to work with and knows everything there is to know about WebRTC and the intricacies of its APIs along with the status of browsers.

Here’s a bit about him:

I don’t quite recall how, but a couple of months back we got to the conclusion that it would make sense to create a solid WebRTC codelab that covers the signaling aspects of WebRTC. Simply because there is nothing out there that does the trick properly.

WebRTC: The missing codelab

Fast forward to today, we have a codelab course for you:

The codelab has 4 parts to it:

  1. Github repo and course introduction module, both publicly available
  2. Codelab walkthrough. Over 2 hours of recorded lessons covering the source code and explaining all you need to know to grok it
  3. Exercises. We’ve started creating useful exercises on the codelab. These include the challenge, the solution and the explanation. Along with the additional code for the solution and best practices
  4. Resources library. If you are new to WebRTC, this will come in handy in closing some gaps for you. They are lessons from my other courses that fit nicely here

We’ve made sure the lessons aren’t boring by making them interactive. I’ll be there with Philipp, taking the part of the student asking questions. The intent is to try and get into his head, understand his thought processes. What do you gain out of it? You will understand why things are implemented that way and not only how.

Interested?

Head to the WebRTC developer courses – there’s a 20% discount until the end of April.

If you are uncertain, then you are invited to join this week’s WebRTC Live webinar where I’ll be talking with Arin Sime about the codelab. Alternatively, just go through the introduction module and make a decision.

If you know that this is for you, then there’s a 50% discount if you enroll in March (that’s in the next 2 days) by using FASTMOVER as your coupon code on checkout (for any of the WebRTC course bundles).

If you’ve enrolled in my All Included course in 2020, then you are already automatically enrolled to the codelab. If you enrolled throughout 2019, then you are eligible for a 50% discount during April.


Who are the WebRTC Market Global Key Players?

Mon, 03/02/2020 - 12:00

The WebRTC market global key players may be different than you think. They include the browser vendors, a few CPaaS vendors, dominant “creators” of WebRTC sessions and… open source projects.

The title for this post came from one of the many lead generating headlines I see for reports that mention companies that no longer exist in our market. It is sad in a way: the same market report is released every year, with the main difference from the previous one being X+1, where X denotes the year mentioned in the report.

(if you are about to purchase a market report on WebRTC – make sure the companies mentioned there actually do something meaningful in WebRTC – check against the companies listed here – or just ask me)

My intent here is to actually ask the question – Who are the WebRTC Market Global Key Players? – and then also answer it.

I’d like to segment the key players in WebRTC into 4 main groups:

  1. Browser vendors
  2. CPaaS vendors
  3. Customer facing services
  4. Open source projects

1. Browser vendors

There are exactly 4 browser vendors that are interesting. The rest? Less so.

I will list them here in the order of their importance to WebRTC.

Google Chrome

This one is obvious. Google is the main driving force behind WebRTC. They aren’t alone in being there, but they are the dominant browser player in market share AND they host the most popular implementation of WebRTC (that would be libwebrtc).

In many ways, Google decides where WebRTC is headed. It does that from 3 different angles:

1. libwebrtc and Chromium

Maintaining the most popular WebRTC implementation (did I say that already?).

So much so that all browser vendors use this implementation in one way or another (either directly or indirectly by copying the pieces of it that they want/need).

I’ve added Chromium here as well. Chromium is the open source base of the Chrome browser, and it is used in MANY important projects:

  • Microsoft now operates Edge (its new browser) on top of Chromium
  • Electron, which is quite popular in packaging web apps as installable apps is built on top of Chromium
  • Many of the “other” browsers out there are implemented on top of Chromium

2. Chrome

THE market leader in browsers.

Taken from StatCounter (and yes, it also includes mobile)

Need I say more on how that influences WebRTC adoption and implementation?

3. Google Apps

We used to have only Hangouts using WebRTC at Google.

Now we’ve got a lot more.

The shortlist I am aware of include:

  • Google Hangouts / Google Meet
  • Google Duo
  • Google Stadia
  • Chrome Remote Desktop
  • YouTube Live

For these, Google has their own agenda with their own WebRTC roadmap. This means that the requirements that fall into any of these services end up either directly as part of the open source WebRTC implementation distributed by Google – or Chrome alone.

Apple Safari

Apple has been quiet about WebRTC. That’s true about Apple and a lot of other technologies as well.

That said, Safari has supported WebRTC on Mac and iOS for a couple of years now (there are those who missed that fact).

While Apple has no other direct involvement around WebRTC, all developers and entrepreneurs are affected by their decisions about WebRTC. A lot more than in any other case.

A case in point is WebRTC iPhone support:

  • You can build your own native app with WebRTC on an iPhone. This was always true
  • You can run a web app using WebRTC on an iPhone inside Safari
  • You can’t run a web app using WebRTC on an iPhone in any other iOS browsers – just because WebKit on iOS still doesn’t support WebRTC properly

Microsoft Edge

While Edge hasn’t enjoyed much growth or market share, this may soon change.

Microsoft decided a bit over a year ago to stop investing its resources in building its own browser engine, and instead took Chromium “as is”, building their Edge browser on top of it.

Somehow, I am assuming (hopefully) that this change also means that more Microsoft engineers are involved in the inner workings of Chromium itself, optimizing it to run on Windows and elsewhere for their own scenarios.

Apple is more important than Microsoft when it comes to WebRTC simply due to the current state of market share of their browsers.

Mozilla Firefox

Firefox is the 4th important browser and decision maker in the table of WebRTC.

It isn’t as important as the rest since its only “contribution” to the game is Firefox itself whereas the other browser vendors here come with operating systems and applications of their own as well.

Nevertheless, it is way more important than all other browsers not mentioned here combined.

2. CPaaS vendors

CPaaS vendors offer the tools for others to embed WebRTC communications

CPaaS vendors enable others to build their applications without delving into the communication technology stack too much. In many cases, that includes support for WebRTC as well.

To me, they are key players within this industry, and I want to mention those that are the most important when it comes to WebRTC adoption.

Twilio

Twilio is the leader in CPaaS. It is also one of the dominant players when it comes to WebRTC in CPaaS.

When it comes to voice calls via WebRTC done through CPaaS, Twilio is probably the largest player. Twilio has a lot of visibility into issues and requirements related to voice use cases, especially ones related to call centers.

Twilio is also growing in their video use of WebRTC.

Vonage

Vonage is the owner of TokBox, now part of its API platform.

As one of the leading video API platforms using WebRTC, TokBox is important. As with Twilio, this stems from their visibility to issues and requirements, but in this case, related to video use cases.

Others?

There are other interesting CPaaS vendors in the WebRTC space, but none of them are dominant enough. 

The ones worth mentioning in this context?

3. Customer facing services

Customer facing services are the end products. What users interact with when it comes to WebRTC. This title won’t get a vendor to be a global key player unless there’s a real reason…

Facebook

Facebook is huge. Doesn’t matter if you look at Messenger, WhatsApp or Instagram.

Messenger uses WebRTC.

Instagram uses WebRTC for its live chat.

WhatsApp doesn’t use WebRTC directly, but there’s an ongoing effort to consolidate the infrastructure of all these messaging platforms at Facebook. Will we be seeing 2 billion WhatsApp users able to conduct voice and video calls by way of WebRTC in 2020?

Then there’s the Portal device.

Anyways, the sheer size of Facebook, along with their work with WebRTC places them as a market leader in WebRTC use. And due to their size, they are also largely alone here.

No one else

There are vendors that contribute large WebRTC traffic. But other than Facebook, I am not sure who else to include here.

Maybe Discord.

Probably Amazon. Due to multiple products and that minor thing called AWS.

I decided not to put any of the enterprise vendors. I think they should matter more, but got a feeling that they don’t at the moment.

4. Open source projects

Open source projects are at the heart of the WebRTC ecosystem

These are sometimes neglected when discussing market leaders, which is rather sad. A lot of the development and WebRTC traffic out there ends up going through some of these open source projects, which is why I’ve added them here.

It is apparent that some projects should have a seat at the table. When Kurento got acquired by Twilio, there was a year when many of the discussions I had were about finding a suitable replacement for them.

If WebRTC open source projects fail to make progress, upgrade or just die, they affect those using them. The popular open source projects matter. A lot. They are at the heart of the WebRTC ecosystem.

Janus

I’ve decided to put Janus first because:

  1. Wherever it is used, there is use of WebRTC
  2. It is very versatile and flexible. I see it in a lot of different use cases
  3. It is quite popular

Jitsi

Jitsi is now owned by 8×8 after switching hands.

It is still run as an independent open source project and is widely deployed.

FreeSWITCH

FreeSWITCH comes from the telephony world.

WebRTC isn’t used in all FreeSWITCH deployments out there, but when companies need to connect telephony to WebRTC, they often go with FreeSWITCH for that.

Since FreeSWITCH is so common in so many of these instances, they are important for the WebRTC ecosystem.

Kurento (to some extent)

To some extent, Kurento still matters.

Kurento is a shadow of its former glory prior to the acquisition. It isn’t part of Twilio – Twilio acqui-hired the developers, but not the name. Kurento lives on, but different.

It is being developed alongside OpenVidu, but the progress made to both seems somewhat slower than that of the other open source projects mentioned here.

I am not sure they are a key player anymore.

Others?

There are other important projects that can/should make the title of key players in WebRTC. The 3 that immediately come to mind are mediasoup, PION and Asterisk.

Why haven’t I added them? Because their popularity is lower than the others.

For mediasoup and PION it is just that they are newer. They are growing, so I believe they will become key players if they continue on their current adoption trajectory.

For Asterisk, it is because they are used in a similar fashion to FreeSWITCH. I just don’t see them as much in the conversations around WebRTC that I have.

What does it mean to be a “WebRTC Market Global Key Player”?

This is where I started the article, and I think it bears thinking about.

When one coins a company as a “WebRTC Market Global Key Player” what does that mean exactly?

For me that means that they have the ability and potential to affect what happens with WebRTC moving forward.

While the standardization work is done in the W3C, a lot of the work happens elsewhere. In what Google places into Chrome as experiments and later as features. In what Apple decides to implement or not implement in Safari. In what CPaaS vendors deliver to their many customers and the feedback they get. In the companies that build large scale products and in the open source projects that make up a large portion of these products.

There are steps one can take to become a more dominant player. To be able to join the conversation and affect where WebRTC is headed. While I’ve conversed with many who want to become dominant players, only a few have the courage and the willingness to invest the time and resources needed.

Want to know who the global key players are? Don’t read it in a copy+paste research paper that is poorly updated…


How can your WebRTC application keep pace with browser releases?

Mon, 02/17/2020 - 13:00

If you are developing with WebRTC, then you need to pay special attention to browser releases, as these can break your app. Here’s how I’d go about dealing with this problem.

Twice in the past week I’ve been asked about backward compatibility with WebRTC. It is a loaded topic – one that lends itself to this kind of a metaphor about developing with WebRTC:

When you’re developing with WebRTC (and I daresay when you’re a developer in the Google WebRTC team), it feels like replacing a wheel while driving the car on a highway.

Browsers release cycle

Browser release cycles are… short. And complicated.

  • Chrome and Firefox get updated roughly every 6 weeks or so
  • Safari twice a year at the moment
  • And Edge just got on Chromium, and we need time to see what release cadence Microsoft will select for it – the speed of Google Chrome, faster, or slower?

And that’s the simple part of the whole story – it comprises only the right column of this diagram from 2017:

Hand drawn guide about different versions of Browsers.

Because no one told me Canary is not about an actual bird. pic.twitter.com/amerBhf8tp

— Mariko Kosaka (@kosamari) January 10, 2017

Browsers auto-update. They do that at fast release cycles that are shorter than 2 months between large releases (unless they are Safari), and they often ship and push security or stability releases in-between these main releases when needed.

Browser update cadence will either stay the same in 2020 or become even shorter.

WebRTC’s pace of change

When I think about the experience with WebRTC in the past few years, it boils down to something like this:

For end users it is a real joy. Most don’t even know they are using WebRTC, but they just do.

Developers, on the other hand, are on a rollercoaster ride that they’d rather not be on. Constant changes are making the experience challenging.

To be fair – this is a lot better than the previous alternatives we had

There are 3 different ways in which WebRTC is constantly changing:

  1. WebRTC browser implementation vs WebRTC specification. We are dealing with a protocol specification that is still not complete. And browsers are making changes to close the gap between what they’ve got implemented to what the specification says. These changes are sometimes not backward compatible
  2. Introduction of new capabilities. mDNS is a good recent example. Deprecation of DTLS 1.0 is another. Then there’s the playout delay addition for Google Stadia (and others). Somehow, there’s always something that just must be added to improve security or connectivity – and that might break interoperability
  3. Optimizations. Constant changes in the implementation to improve performance. Here you can place rewriting the echo canceller, revamping the whole threading model for audio and switching from receiver-side bandwidth estimation to a sender-side one

These aren’t just introductions of new features and capabilities. They almost always include changes in the behavior of WebRTC itself.

Don’t expect the pace of change of WebRTC to slow in 2020.

WebRTC server-side challenges

If your app runs in the cloud and in front of browsers, then your life is relatively simple. Use tools such as adapter.js, apply some good sense in testing against the beta and even dev channels of the various browsers, and you will be fine for the most part.
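
To make this concrete, here is a minimal sketch of that browser-side hygiene, assuming the webrtc-adapter package from npm:

    // a minimal sketch: let adapter.js shim browser differences, and log
    // which browser/version the app actually landed on
    import adapter from 'webrtc-adapter';

    console.log(`running on ${adapter.browserDetails.browser} ${adapter.browserDetails.version}`);

    async function openCamera(): Promise<MediaStream> {
      // with the shim in place, the promise-based API behaves the same everywhere
      return navigator.mediaDevices.getUserMedia({ audio: true, video: true });
    }

Knowing exactly which browser and version a complaining user is on is also the first thing you’ll want in your logs when one of those releases breaks something.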

Things get complicated once you start using media servers.

Most media servers today are open source. The teams maintaining them are rather small and they have a lot on their plate. The commercial ones don’t fare much better here either.

Now imagine. The Google WebRTC team cranks out features, bug fixes and optimizations. Their main focus is their own needs, along with what goes in the spec and interoperability with other browsers. They wouldn’t be able to slow down or explain everything that goes on to everyone out there even if they wanted to.

Take this small example – DTLS 1.0 deprecation:

  • DTLS is used by WebRTC to negotiate the shared secret of the SRTP media channel
  • DTLS 1.0 is considered insecure
  • DTLS 1.2 was already implemented as the default mechanism in WebRTC, but the Chrome implementation of WebRTC allowed a downgrade to DTLS 1.0 during the negotiation of a session
  • In February 2019, Google announced it will remove DTLS 1.0 support in Chrome M74
  • In April 2019, another announcement was issued. This time about the deprecation taking place in Chrome 81. Why? “based on the feedback it is clear that some in the community need more time to update their systems, and so we have reverted this change from Chrome 74”
  • Fast forward to February 2020, and now it seems that Chrome M80 will show a deprecation notice and removal will take place in Chrome M82

Why all these changes? Vendors not fixing their media servers. For a span of a full year.
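
Not sure where your own service stands? The negotiated DTLS details are visible from JavaScript. A minimal sketch, assuming pc is an already-connected RTCPeerConnection:

    // probe the transport stats for the negotiated DTLS version and cipher
    async function logDtlsInfo(pc: RTCPeerConnection): Promise<void> {
      const stats = await pc.getStats();
      stats.forEach((report: any) => {
        if (report.type === 'transport') {
          // tlsVersion and dtlsCipher are reported by Chrome; at the time of
          // writing, DTLS 1.2 shows up as the hex string 'FEFD'
          console.log('dtlsState:', report.dtlsState,
                      'tlsVersion:', report.tlsVersion,
                      'cipher:', report.dtlsCipher);
        }
      });
    }

Run that against your media server today and you’ll know whether the M82 removal is about to bite you.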

Here’s what will happen when Chrome M82 rolls out: some services will break.

Google is in the right here.

Server vendors need to keep up with the pace.

And this is an “easy” one. That gets announced and noticed.

Other changes like bandwidth estimation algorithms, different support mechanisms for simulcast, and many others need to be taken care of. These are important for the media quality of the sessions.

WebRTC media servers have their hands full in keeping up with the pace. Pick one that is lively and well maintained

Beyond pace of change, you will need to deal with scaling. If that’s what you’re after, then my ebook on Best Practices in Scaling WebRTC Applications is the thing you need.

Purchase WebRTC Scaling Best Practices

Mobile applications and IoT devices

Up next – applications.

With browsers, we’re at the mercy of browser vendors, but we’re also “saved” by their effort and work. This causes us to sweat when it comes to developing media servers that work well with browsers. But what about mobile applications?

Since they are acting just like WebRTC clients in browsers would, we need to update them to keep them functioning and working in front of the browsers. Why? Because some of the changes browser vendors introduce are breaking changes while others are about important optimizations.

If you are using Google’s libwebrtc then check out my best practices of using it. You’ll find I suggest upgrading multiple times a year but not at Google’s pace. The reasoning behind this approach is to balance your sanity versus how far away you are from the latest release. A kind of risk management effort.

On mobile, a WebRTC application must be updated a couple of times a year just to keep working in front of web browsers

On-premise deployments and WebRTC

On-premise brings its own challenges, especially today. It used to be that on-premise was easy and cloud was challenging, but the wheel has turned.

WebRTC is just another headache here.

If you run an on-premise operation that relies on web browsers for access, then you’re in for a treat with WebRTC. You’ll need to be able to frequently update your software. A lot more frequently than the “never” alternative that is so common in this space.

With on-premise you’ll need to rethink your strategy for updates and upgrades. Automate it somehow. Have it done without “human intervention”. Not only because of WebRTC mind you – it will be more about security patches. But WebRTC requires it as well.

With on-premise, WebRTC will force you to adopt cloud-development paradigms

Figure out (plan and execute) your own pace with WebRTC

How are you going to keep the pace of change of browsers and WebRTC?

This is something you need to ask yourself and answer.

A few suggestions if I may:

  • Have mechanisms for automatic updates of your clients and servers
  • Put versioning information and decisions into your client applications (to know when to force an update and when to just suggest one – see the sketch after this list)
  • Subscribe and read the messages and PSAs on discuss-webrtc
  • Have your developers work also on beta and dev channels of the browsers they use
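
On the versioning point, the client-side logic doesn’t need to be fancy. A hypothetical sketch – the /version-policy endpoint and the version format here are assumptions, not a standard:

    // a hypothetical version gate: the backend publishes the minimum and
    // recommended client versions; the client decides what to do about it
    interface VersionPolicy {
      minimum: string;     // below this: force an update
      recommended: string; // below this: suggest an update
    }

    function compareVersions(a: string, b: string): number {
      const pa = a.split('.').map(Number);
      const pb = b.split('.').map(Number);
      for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
        const diff = (pa[i] ?? 0) - (pb[i] ?? 0);
        if (diff !== 0) return diff;
      }
      return 0;
    }

    async function checkForUpdate(current: string): Promise<'force' | 'suggest' | 'ok'> {
      const policy: VersionPolicy = await (await fetch('/version-policy')).json();
      if (compareVersions(current, policy.minimum) < 0) return 'force';
      if (compareVersions(current, policy.recommended) < 0) return 'suggest';
      return 'ok';
    }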

Obviously, there are more things you can and should do. I am here to help with it – just contact me.

The post How can your WebRTC application keep pace with browser releases? appeared first on BlogGeek.me.

6 questions to ask about your WebRTC training 🙋

Mon, 02/10/2020 - 13:00

Finding a good WebRTC course is tricky. Finding a training program that teaches you more than the basics about WebRTC isn’t simple. Here are a few questions to guide you in finding that course you want.

First off – I am biased. I have created a WebRTC training and have been running it successfully for a couple of years now, teaching IT workers about WebRTC. I’ll try to be as objective as possible in this article. The main thing I ask of you? Do your own research, and feel free to use my questions below as a guide in your quest for the best WebRTC training course.

Without much ado, here are the 6 questions you need to ask yourself about the WebRTC training you are planning to enroll in:

1. What was the last date the WebRTC course was updated?

This is probably the most important question to ask.

WebRTC is a moving target. Ever changing.

There are 3 separate axes that need to be tackled when learning WebRTC:

  1. Standard
  2. Browsers
  3. Ecosystem

The standard is still changing. WebRTC 1.0 will hopefully be completed this year. The changes are minor, but they still occur. And once they are over, we will start talking about WebRTC NV – the Next Version of WebRTC. Which will inject new learnings around WebRTC.

Browsers are changing. Especially Chrome. But not only. They have their own implementations of WebRTC, slightly different than the standard. And they are crawling ever so slowly towards being spec-compliant. On top of that, they have their own features, nuances and experiments going on; of things that might or might not end up as part of WebRTC.

The Ecosystem around WebRTC is what you should really be interested in. Not many developers use WebRTC directly. Most use third party open source or commercial frameworks, so they see less of the WebRTC API surface itself. Selecting which framework to use, and understanding how it is going to affect your architecture and future growth, is the hard part.

All this boils down to this:

If the WebRTC training you are going to enroll in is more than 6-12 months old, it isn’t going to help you that much.

2. Does it cover more than the WebRTC API surface?

WebRTC is multidisciplinary. It spans across different concepts, and is a lot more than just the APIs the browser publishes.

How is the course you’re planning to take tackling that?

While many of the WebRTC courses focus on the API surface, they fail to understand the reality of WebRTC: Most WebRTC developers don’t interact directly with WebRTC APIs, but rather use third parties – either in the form of open source or commercial frameworks for signaling and media servers; or in the form of full managed services (think TokBox or Twilio). In such cases, it is critical for the students to understand and grok WebRTC from a perspective of the whole architecture and less so in what each and every API in WebRTC does (something that may change from one Chrome release to another).

Things you’ll need covered in order to write a decent application that is production ready:

  • WebRTC APIs
  • NAT traversal (STUN, TURN, ICE)
  • Signaling and transport
  • Codecs – both voice and video, and not only spelling them out
  • Media processing – things like echo cancellation, noise suppression, packet loss, simulcast, etc.
  • Media architectures – mesh, routing and mixing

Then there’s the part of how you boil it down to an actual solution. What components to use and why.

WebRTC has a set of building blocks, but you need to know which ones to use to fit the specific model you want to operate.

An interesting tidbit to check – does the training include aspects of group sessions or broadcasting? These require a look beyond the basics of WebRTC API calls.

Make sure the WebRTC course you take isn’t too focused on the APIs and isn’t too focused on the standard specification.

3. Is the instructor who created that WebRTC training available for questions?

Assume that WebRTC is going to be challenging to grok.

And with an online course you are mostly on your own. Unless there’s a bigger framework at play.

Here are a few things that can help you out:

  • Someone to ask. Does the course offer someone you can ask questions?
  • Do you ask them over an online form? An esoteric email address?
  • Is there a chat widget you can use to reach out to the instructor whenever you need?
  • What about office hours? Can you join live to a session and ask questions in person, and with your own voice?
  • Does the course offer a place for students to share their experience? Like a forum. Or a place where updates about WebRTC are published? (and we know things get updated pretty regularly there)

And one last thing – do you even know who the instructor is?

An important part in learning WebRTC is the ability to ask questions interactively. Make sure that is part of the training you enroll in.

4. How long is the course?

An hour? Two hours? Four hours?

More doesn’t always mean better, but with WebRTC here’s the thing – there’s quite a lot of ground to cover. And there are three ways to do that:

  1. Run fast, skimming over the material; the student will fill in the rest by searching the internet later
  2. Focus on the basics, leave a lot of the meaty, important parts out of the course; the student can figure it out on his own
  3. Put the time into it, making sure to cover as much as possible in the course itself

That third option means that a WebRTC course, at least a decent one, should take more than a full day of training – well above 10 hours of information.

If you want to really learn WebRTC, make sure the course you take has enough hours in it to give you the knowledge you need.

5. What are students saying about the course?

Do people like the course? Do they feel it got them what they needed?

Look at the testimonials of the WebRTC courses: you will immediately notice the frustration of students with the freshness of the courses – most of them are 3-5 years old. This makes them useless. Interestingly, students are less worried about the price (these are cheap courses) – they are a lot more worried about the time they wasted.

Check what companies are sending their employees to take that course. Are they just sampling it out, or sending multiple employees? What do these employees have to say about the course after taking it?

You will be able to find many answers to the other questions here just by reading the reviews of students.

If you are going to invest your time on an online WebRTC training, make sure to read testimonials and reviews about that training.

6. Is the course suitable for your purpose?

Just need to understand in broad strokes what WebRTC is and what it does? Are you after a deep understanding of WebRTC and how to develop or test it properly? What about offering support or ops for a WebRTC application?

Each of these has a different set of needs. Each needs a view of WebRTC from a different angle.

Which angle do you need and how well does it align with the angle of that course you are looking at?

Make sure the WebRTC course is aligned (as much as possible) with the type of work you’re expected to do.

Looking for a WebRTC course? Ask yourself: What should a good online WebRTC training include?

A good WebRTC training should include information about WebRTC APIs, STUN/TURN servers, media servers (SFU, MCU), signaling servers and the state of the ecosystem and browser support.
A course focusing only on the WebRTC API or showing how a specific simple “hello world” application works won’t suffice.

How do I know if a WebRTC course is good?

Ask yourself the following questions about the course to understand if it is for you:
* What was the last date the WebRTC course was updated?
* Does it cover more than the WebRTC API surface?
* Is the instructor who created that WebRTC training available for questions?
* How long is the course?
* What are students saying about the course?
* Is the course suitable for your purpose?

Are there different WebRTC courses?

Yes. Some courses are targeted more towards developers while others focus on ops and support.
If you are looking for a WebRTC course, be sure to check that the course is aligned with your job description.

Pick the right WebRTC training for you

There are several WebRTC training courses out there. Be sure to sift through them and find the one that is most suitable for you.

Interested? Check out my own WebRTC courses:

  • WebRTC basics – a free beginners course for those needing a bird’s eye view of WebRTC
  • Advanced WebRTC – for those who deal with development – engineers, testers, architects and product managers
  • Supporting WebRTC – a course focused on those in support positions


The post 6 questions to ask about your WebRTC training 🙋 appeared first on BlogGeek.me.

How to pick the right WebRTC mobile SDK build for your application

Mon, 01/27/2020 - 13:00

Most developers should just use libwebrtc that Google supplies for their WebRTC mobile SDK. Which exact release to pick and at what pace to update is a more nuanced decision one needs to make.

* I’ll be using SDK and library as well as mobile WebRTC SDK and mobile WebRTC library interchangeably in this article, so bear with me

In the release notes of WebRTC M80 (=the changes made to WebRTC in the upcoming Chrome 80), Google added an interesting deprecation announcement:

Deprecating binary mobile libraries

The webrtc.org open source repository contains platform implementations for Windows, Mac, iOS and Android. These are primarily utilized for automated testing. Browsers and other applications that embed WebRTC often have developed their own highly optimized platform code with custom capture/render components matching the applications architecture.

We have decided to discontinue the distribution of precompiled libraries for Android and iOS. The script for creating the AAR library can be found here, the build script for iOS is located here.

Let’s try to decipher this deprecation announcement, and then see what developers should be doing (and are doing already).

Official WebRTC precompiled libraries for Android and iOS

To understand this announcement, we first need to understand what these WebRTC precompiled mobile libraries are exactly.

From the start, it was possible to use WebRTC on mobile. Google introduced WebRTC in Android Chrome in July 2013, less than a year after Chrome 23 was released on desktop with WebRTC support. From that moment on, the codebase of libwebrtc (Google’s implementation of WebRTC) has included support for mobile.

Up until 2016, Google didn’t offer any compiled binaries. Developers had to figure out the build process and handle it on their own. Several github repositories held compiled WebRTC binaries for mobile and were somewhat popular.

In November 2016, Google introduced the official WebRTC precompiled libraries for Android and iOS, which they have maintained up until today.

Most of the vendors out there who are building applications or even SDKs (think CPaaS vendors such as Twilio or Nexmo) make use of libwebrtc as the basis of the VoIP stack implementation they run for their own clients. This was true BEFORE Google announced official WebRTC precompiled mobile SDKs and it will continue to be the case even now, after Google discontinues the distribution of these mobile SDKs.

How did we get here?

Discontinuing the distribution of the WebRTC mobile libraries

First off, it is important to state and understand: Google uses the same WebRTC codebase that goes into Chrome also in the Google Meet and Google Duo mobile applications running on Android and iOS.

There is no plan or incentive for Google to stop maintaining the libwebrtc codebase for mobile operating systems.

That being said, Google just stopped distribution of its WebRTC mobile libraries.

Why?

Because for all intents and purposes they were useless.

All vendors I know who run their products in production for mobile either use a third party SDK (open source or commercial) or have their own custom build of libwebrtc.

This is the case partially because the precompiled binaries from Google are somewhat useless. Here’s the official CocoaPod for Google’s WebRTC project:

The version mentioned here is 1.1.29400. What exactly does this relate to?

  • The WebRTC implementation just got an internal milestone at Google for supporting 1.0 of the specification (at around the same time of the last release of this CocoaPod)
  • WebRTC releases are versioned based on the Chrome release they belong to, and we’re now at 79, readying ourselves for 80
  • Nowhere on this page or elsewhere is there an indication of when these binaries were created or from which branch of the code. There seems to be no easy way (or no way at all) to align them with the browser releases of the same codebase
  • There is no explanation or release notes for any of these libraries. How do you know what was fixed, modified, deprecated or added?

This made the binaries useless without giving them any real chance in life, which led to their discontinuation.

The Google WebRTC team had two alternatives here:

  1. Fix the broken part of these releases, mainly by synchronizing them with real releases of WebRTC and maintaining clear release notes for them
  2. Discontinue this effort, as it causes more headaches than it is worth in its current state

They chose discontinuation. Probably because of what I’ll be sharing with you next.

What WebRTC mobile SDK should you use now?

This is the real question. It is the one developers had to deal with before, during and now after the age of Google’s official precompiled mobile libraries for WebRTC.

There are two routes to take here for any developer who needs a WebRTC SDK (I am ignoring those using higher level abstractions such as SDKs provided by CPaaS vendors):

  1. Use Google’s libwebrtc project, compile and maintain it on your own
  2. Go with another third party library

Between these two alternatives, the majority of the developers are choosing option (1). Why? Because let’s face it – no other library today offers the same feature richness, quality and interoperability with what runs in the browser that everyone uses.

There are a multitude of alternatives to Google’s libwebrtc, but they are all lacking in at least one way (probably more):

  • Commercial and cost $$$ to use
  • Don’t implement any codecs. You are expected to “bring your own”
  • Lack proper support for effective bandwidth estimation
  • Don’t offer acoustic echo cancellation
  • Don’t implement peripherals support for media acquisition and/or playback (microphone, speakers, camera and display)
  • Lack interoperability with Chrome’s WebRTC – all the time, including support for the latest features being added to it

I am sure I’ve left a few more gaps in that list.

Ask yourself why Edge is now based on Chromium and using Google’s WebRTC almost verbatim, or why Apple is relying on Google’s libwebrtc in a lot of its own implementation of WebRTC in Safari.

That said, there are very good reasons for using libraries other than Google’s libwebrtc:

  • Not needing a lot of what libwebrtc offers (if you need just the data channel for example)
  • Requiring specialized features, such as playback from file or other sources into a WebRTC session
  • Needing to run on “exotic” devices or operating systems (i.e. not classic iOS or Android mobile devices)

For the majority of the developers out there, libwebrtc is the right SDK to use on mobile.

Best practices in using Google’s libwebrtc mobile SDK

If you are going to use libwebrtc, what is it that you should be doing then?

Here are the best practices I’ve seen of companies using libwebrtc mobile SDK in production:

  1. Have your own codebase for libwebrtc that you compile and integrate into your application
  2. Don’t automatically upgrade to the latest libwebrtc release when that gets pushed out to a Chrome release. Doing that means releasing your application every 6-8 weeks, which is a brutal release cycle for most vendors
  3. Plan and aim for 2-4 upgrades of your libwebrtc SDK in your mobile application each year. Any less and you’re in danger of breaking interoperability with Chrome or at the very least missing out on optimizations, improvements and new features
  4. Think of libwebrtc as a starting point. You will have your own minor fixes and optimizations to it. Make sure they are well documented so that a future upgrade of the library doesn’t become too complex and risky a task
  5. Revisit these fixes and optimizations you are making once a year. Some of them might not be needed any longer, and carrying them further might take too much effort or hurt performance
  6. Try to contribute fixes you’ve made back to libwebrtc. This will be a long and frustrating process, but I suggest going through with it
  7. Roll out slowly. Have it tested internally, then with a small % of your audience and then with everyone (see the sketch after this list)
  8. Make sure you can rollback…
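
On the slow rollout point (7), the gating logic itself can be trivial. A hypothetical sketch – hash each user into a stable bucket and compare against the rollout percentage:

    // a hypothetical staged-rollout gate for a new libwebrtc build
    declare const currentUserId: string; // whatever stable user id your app already has

    function inRollout(userId: string, percent: number): boolean {
      let hash = 0;
      for (const ch of userId) {
        hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // cheap deterministic hash
      }
      return hash % 100 < percent; // stable 0-99 bucket per user
    }

    // internal users first, then e.g. 1%, 10%, 50%, 100%
    const useNewBuild = inRollout(currentUserId, 10);

The deterministic part matters – you want a given user to stay on the same build across sessions, not flap between the two.
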
FAQ about WebRTC mobile SDK library

Which library SDK should I use for WebRTC on iOS and Android?

Use Google’s libwebrtc implementation. This is by far the most comprehensive and popular library for client-side WebRTC implementations. Other alternatives exist, but you need to understand what you’re signing up for when you opt to use them.

What version of Google’s WebRTC should I use for my mobile application?

The best practice here is to pick something that is new but not too new. Pick one of the latest releases that is considered to be stable. Don’t upgrade immediately to the latest release as that is time consuming. Make it a point to upgrade your libwebrtc 2-4 times a year.

Are there client-side WebRTC libraries other than the one Google publishes?

Yes there are. PION and GStreamer come to mind in the open source scene. I’d seriously consider the reasons for not using Google’s libwebrtc in favor of anything else though, mainly due to its feature richness and immediate interoperability with Chrome and all other browsers.

Reduce your risks with WebRTC

Looking to lower your risks and shorten your time to market with that WebRTC project you’re working on?

I can help you with this; when it comes to WebRTC and communication technologies, I help my clients get the answers they need and make sure their project doesn’t get delayed.

Contact me if you are interested.

The post How to pick the right WebRTC mobile SDK build for your application appeared first on BlogGeek.me.

Supporting WebRTC: Two webinars coming your way (with Talkdesk & Poly)

Mon, 01/13/2020 - 13:00

Register for the two free webinars I am hosting this month in areas around supporting WebRTC, with Talkdesk and Poly.

I am shifting gears this year. Looking back at last year, what I’ve noticed is that there’s been a shift in what clients are asking of me. Many of them are more interested in issues that are support related rather than architecture or development. While a lot of the work I do revolves around assisting with defining architectures and dealing with roadmaps of products, there’s been an ongoing increase in the questions related to supporting WebRTC.

This led to a few changes in the things that I have on offer:

  1. At testRTC, where I am a co-founder and apparently also the CEO, we’ve launched a new product for network testing. This is focused on helping people who support clients with network related issues around WebRTC applications. We’re now working on another product specific to this domain
  2. At BlogGeek.me, I am now offering a new course called Supporting WebRTC. With the 40 people who were there in the prelaunch and the feedback I’ve been getting, it seems this topic is really relevant to many

Somehow, I found myself scheduling two separate free webinars for this month with partners that are around WebRTC support.

Talkdesk and how to support WebRTC-based call centers

At testRTC, we’ve created a product in 2019 to help support teams analyze network issues for their users. Our first client for this product was Talkdesk, who were kind enough to share their experience with us in a nice testimonial.

On Tuesday next week, João Gaspar from Talkdesk will join me in a webinar titled How to analyze WebRTC network issues in minutes and not hours (or days). In this webinar, I’ll explain a bit about the challenges WebRTC poses when it comes to connectivity from a support perspective, and João will share with us what Talkdesk are doing today to assist their users.

I’ve learned a lot from working with João and his team last year, and I am sure this will be interesting to you as well.

How to analyze WebRTC network issues in minutes and not hours (or days)

Tuesday, January 21, 2020

14:00-14:45 EST; 11:00-11:45 PST

Register here

Poly and picking the right headset to improve WebRTC session quality

In the last year I’ve had a lot of conversations with support engineers. The people who end up needing to troubleshoot, figure out and explain issues to their users. Many of these issues end up being related to network connectivity. This made me create the new Supporting WebRTC course (now open for all to enroll). One thing I wanted to add there but had no clue about is headsets.

Headsets are this thing that I have at home and use for most of my conference calls. But I never really gave them a second thought. The last pair I purchased at the local computer equipment store, not even making an informed decision about what I needed.

That led me to reach out to Poly, to get a briefing about headsets and how they affect quality in WebRTC, which led me to understand that this boring topic known as headsets is quite fascinating. Obviously, I used what I learned in that briefing to create that lesson I needed in my course.

The great thing though, is that Richard Kenny from Poly (who briefed me) was kind enough to agree to join a webinar about this topic.

Picking the best headset for your next WebRTC session

Tuesday, January 28, 2020

14:00-14:45 EST; 11:00-11:45 PST

Register here

How are you handling your support efforts with WebRTC?

The people who usually follow me here are developers or product managers. Seldom are they support-oriented. I know that based on the comments and conversations I have on and off this website.

My suggestion to you is to go check what your support team is challenged with. What is keeping them up at night. What is it they need assistance with. What knowledge are they missing.

And once you do, see if these webinars might be useful to them, and share them if so. Let’s make 2020 the year we start solving more of the connectivity issues for our customers.

The post Supporting WebRTC: Two webinars coming your way (with Talkdesk & Poly) appeared first on BlogGeek.me.

Google’s WebRTC goals – a problem of expectations

Thu, 01/09/2020 - 13:00

WebRTC isn’t like Node.js or TensorFlow. Its purpose isn’t adoption in general, but rather adoption in browsers. If you believe otherwise, then there’s a problem of expectations you need to deal with.

As we are starting 2020, with what is hopefully going to be an official spec for WebRTC 1.0, it is time for a bit of reflection. I started this off when writing about Google’s WebRTC roadmap and I’d like to continue it here about WebRTC goals and expectations.

When I explain what WebRTC is, I start off with the fact that it is two things at the same time:

  1. A standard specification
  2. An open source project

The open source project angle is interesting.

Is WebRTC an open source project?

The main codebase we have for WebRTC today is the one maintained by Google at webrtc.org. There are other open source projects that implement the spec, but none to this level of completeness and quality.

Judging by the ecosystem and use of WebRTC, one may think that this is just another popular open source project, like Node.js or TensorFlow.

It isn’t.

If I had to depict Node.js, it would be something like this:

TensorFlow?

How would I draw a diagram of WebRTC? Probably something like this:

From an administrative point of view, WebRTC is part of Blink, Chromium’s rendering engine. Blink is part of Chromium, the open source part of Chrome. And Chromium is what Chrome uses as its browser engine.

WebRTC isn’t exactly an independent project, sitting on its own, living the life.

Need an example why? WebRTC’s version releases follow the version releases of Chrome in terms of numbering and release dates. But mobile doesn’t follow the exact same set of rules. Olivier wrote it quite eloquently just recently:

“For web developers, release notes are very good and detailed. But for IOS and Android developers… I expect the same level of information.”

There’s an expectation problem here…

WebRTC isn’t like other open source projects that stand on their own, independent from what is around them. WebRTC is a component inside Chrome. A single module.

The WebRTC team at Google are assisting developers using the codebase elsewhere. It took a few years, but we now have build scripts that can build WebRTC separately and independently from Chromium. We have official pre-compiled mobile libraries for WebRTC from Google, albeit not a 1:1 match to the official WebRTC/Chromium releases.

At the end of the day, the WebRTC team at Google are probably being measured internally at Google by how they contribute to Chrome, Google’s WebRTC-based services AND to the web as a whole. Less so by the ecosystem around their codebase. If and how WebRTC gets adopted and used in mobile first applications or inside devices and sensors is harder to count and measure – and probably interests Google management somewhat less.

Who contributes to WebRTC?

I took the liberty of checking the commit history of the WebRTC git project over the years, creating the graph below:

There were various different emails associated with the committers, but they fell into these broad categories:

  1. People with a webrtc.org email address. These are Google employees working directly in the WebRTC project (at least I don’t know of a non-Googler with a webrtc.org email address)
  2. People with a google.com email address
  3. Commits done with a “chromium-webrtc-autoroll@” or similar “email” address in them. I’ve categorized these as bots
  4. All the others

It is safe to say that the majority of committers throughout the years are Googlers, and that the ones who aren’t Googlers aren’t contributing all that much.

Is that because Google is protective about the codebase, as it goes right into Chrome which serves over a billion users? Or is it because people just don’t want to commit? Maybe the ecosystem around WebRTC is too small to support more contributors? Might there be other reasons?

One wonders how such a popular project has so few external contributors while so many developers enjoy using it.

Is webrtc.org Google’s RTC or ours?

A few years back, Google introduced a new programming language – Go (or Golang). It is getting quite a following (and its own WebRTC implementation, though unrelated to this article).

In May 2019, quite a stir was raised due to a post published by Chris Siebenmann titled Go is Google’s language, not ours. Interestingly enough, if you replace the word “Go” with “WebRTC” in this article – it rings true in many ways.

Golang has over 2,000 lines in its CONTRIBUTORS file versus WebRTC’s 100+ AUTHORS. While Golang identifies individual contributors, WebRTC uses wildcard “corporate” contributions (I wouldn’t count too many contributors in these corporates though). WebRTC is smaller, and I dare say more centralized.

The simple answer to those who complain is going to be the same – “this is an open source project, feel free to fork it”.

For WebRTC, I’d add to this that what goes into the API layer is what the W3C and IETF decide. So Google isn’t in direct control over the future of WebRTC – just of its main implementation, which needs to adhere to the specification.

Then there are the Node.js community forks that took place over the years (the latest one from 2017). These disputes, technical and political, always seem to get resolved and merged back into the main project. In hindsight, these just seem like attempts to influence the direction of the project.

Can this be done for WebRTC?

It already occurred with the introduction (and slow death) of ORTC. ORTC (Object-RTC) started and was actively pushed by Microsoft, ending with most of what they wanted to do wrapped up into WebRTC (and probably causing a lot of the delays we’ve had with reaching WebRTC 1.0).

What does that mean to you?

Should you complain about Google? Maybe, but it won’t help

For Google, it makes sense to push WebRTC into Chrome as that is its main objective. Google is improving in tooling and capabilities of using WebRTC outside of Chrome, but this objective will always be second to prioritization of Chrome’s needs and Google’s services.

As an open source project, you are free to use or not use it. You’re not paying for it, so what would you be complaining about?

Google has invested and is still investing heavily in WebRTC. It is their prerogative to do so, especially as they are the only ones doing it today.

You should make an educated decision, weighing your requirements, risks and challenges, when developing a service that makes use of WebRTC.

The post Google’s WebRTC goals – a problem of expectations appeared first on BlogGeek.me.

Google’s private WebRTC roadmap for 2020 = AI

Mon, 01/06/2020 - 13:00

Google’s plans for WebRTC have either changed or finally got revealed. Where? In its internal WebRTC roadmap.

WebRTC is many things.

On one hand, it is a standard specification at the W3C (and is reaching 1.0 milestone).

On the other hand, it is an open source project. While there are a few such projects today, the most important one is Google’s webrtc.org. This is the code that gets into Chrome itself and the one being adopted by many (simply because it is already highly optimized for the main scenarios. And… it is free).

Google made it super simple for companies to adopt its WebRTC implementation – it uses a BSD open source license, making it quite permissive.

In the last 8 years, we’ve been treated like royalty, having access to a world-class media engine implementation for free.

The WebRTC roadmap we’ve seen so far from Google had 3 types of features in it:

  1. Making sure the implementation fits the spec
  2. Improve the architecture to perform better
  3. Add features specific to Google’s needs in other projects (not necessarily abiding to the spec)

At all times, these were available to everyone.

Google’s intent in open sourcing WebRTC

When WebRTC was first introduced it was about who has the balls to take something that up until that point was considered a core competency and make it freely available. This was a piece of technology that video conferencing companies protected fiercely, battling about through their sales and marketing pitches, each claiming to have superior media quality. At the time, media quality wasn’t in the “good enough” position that it is today:

Google made the calculated risk at the time:

  • Media quality was improving. So were available bandwidth and compute. It made sense that it would get to a point of “good enough” within a few years’ time
  • A migration to the cloud for video conferencing wasn’t on most companies’ agendas yet, but as cloud migration started picking up everywhere, it made sense for it to occur here as well. These cloud migrations took place hand in hand with the use of browsers
  • Dominance in browsers and lack of a real operating system footprint meant needing to have a media engine as part of the browser
  • Google had no leading service in video conferencing. Google Hangouts was available, but wasn’t any real competition to the leading platforms at the time, so they didn’t have much to lose by the decision

Other vendors just went along for the ride, making minor contributions here and there. Today, the leading (and only) media engine out there for WebRTC is still the Google one. At least in any meaningful way. So much so that Google’s “competitors” are using Google’s WebRTC stack directly in their products.

Where has this led Google?

WebRTC is a huge success. All modern browsers now support it. They interoperate (to a good extent). Today, in every industry and market where live or real time media is needed, WebRTC is playing an important role.

But what about Google and WebRTC? What success did Google extract from WebRTC?

Not a lot. Or at least not enough.

Google uses WebRTC in the following services it offers:

  • Hangouts / Google Meet
  • Duo
  • Stadia
  • Chrome Remote Desktop
  • YouTube Live

Let’s see how well Google fared in each.

Hangouts / Google Meet

I use these two services almost on a daily basis. My calendar meetings default to them simply because they are so easy to schedule with Google Calendar. They offer what I need without any of the complexity.

But.

When you read or hear discussions about the video conferencing market, the vendors mentioned are usually Zoom and Cisco. Maybe Microsoft Teams or Skype for Business. Also Bluejeans and Pexip. A few others. Google isn’t one of the top vendors that come to mind here. Even though their service is rather good.

Did I mention that almost all their competitors are using WebRTC as well?

Duo

Duo. Google’s answer to Apple’s FaceTime.

It is a standalone video calling app available on Android and iOS. It isn’t installed by default on most smartphones and users need to actively find it, install it and make a decision on using it. Not an easy feat.

Why hasn’t Google nailed and bolted it smack into Android? Probably due to carriers and not wanting to hurt their feelings (and Google’s relationship with them). Otherwise, it makes no sense for Google to try and compete with the likes of FaceTime with one hand tied behind their backs.

Anyways… Duo is quite popular. Even on iPhone. It is ranked #7 in the social apps in the Apple App Store. This is higher than Houseparty (positioned somewhere at #17-20), which is rather interesting considering the high engagement Houseparty sees for its users.

Google doesn’t share any stats on usage of Duo. The only thing we know is downloads and the number of people who ranked it – two stat points that are useless for social networks. This is quite telling about the real usage numbers – not publishing them means they aren’t on par with the competition.

Curious myself, I’ve put out a quick poll on Twitter:

This is most definitely NOT the way to know or understand usage, but it is interesting.

My audience is probably tech savvy. Those answering the poll are highly likely to know about WebRTC. And still. We have over 50% who never tried it and 13% who use it. I’d consider 13% quite a lot and surprising. But it isn’t scratching the surface of where it should be given that Google owns and controls Android.

Stadia

Google Stadia is something totally different. It is cloud gaming. The game is being processed and rendered in “the cloud” and gets streamed in real time to your device using WebRTC. Google even made modifications to its WebRTC implementation to make it a better fit for gaming.

The concept is great. The technology is solid. The experience is said to be good (if you’re close enough to the data center and have a good network connection).

From the media, it seems like there are hurdles and challenges to the Stadia launch – articles like the one titled “Stadia’s biggest problem? Google” or the one titled “Google Duo is the best video calling service you’re not using” are rather common. Especially when put in comparison to the Apple Arcade launch.

Looking at Google Play store numbers for the Stadia app, things look rather disappointing: below 1M installs so far:

I have this feeling Google expected more.

Cloud gaming is still new and nascent. It will take time to happen and mature.

Take an adjacent industry: Netflix introduced streaming in 2007. It took them 3-4 years for the stock to take notice and the service to mature enough to make a dent in the industry. Whereas today, every other production studio is launching their own streaming service.

Will Google have the patience with Stadia to get there or will it end up shutting it down like many other “experiments” it has been running throughout the years? The thought itself is making it hard for Google to entice game developers to jump on its platform.

Chrome Remote Desktop

Google apparently has a remote desktop service. It makes use of WebRTC’s screen sharing capability and is called Chrome Remote Desktop.
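
The browser side of screen sharing is pleasantly small these days. A minimal sketch:

    // capture the screen and attach it to a peer connection
    async function shareScreen(pc: RTCPeerConnection): Promise<void> {
      const screen = await navigator.mediaDevices.getDisplayMedia({ video: true });
      screen.getTracks().forEach((track) => pc.addTrack(track, screen));
    }

The hard parts of a remote desktop service – input injection, session control – live elsewhere; WebRTC provides the media pipe.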

While I haven’t used it myself, this does seem to have quite a following. 10M+ installs on Android, and the Chrome extension shows ~4.8M users.

There is no apparent business model as the service is offered freely, and while the market has similar paid services, it doesn’t seem to be big enough to attract a company like Google. This isn’t interesting enough to justify an investment in WebRTC itself by Google.

YouTube Live

YouTube has the ability to host live events. And it does that with the help of WebRTC.

That said, its use of WebRTC isn’t an impressive one – it is just a window into the service if you want to broadcast from your browser. It isn’t used for live streaming to the users themselves. There’s more on the technical side of it on webrtcHacks, where they analyze what goes on the wire with YouTube Live.

Here’s the thing – just like Chrome Remote Desktop, this is Google exploiting a technology that is there. It isn’t about leading the industry or the market with it. And as with Chrome Remote Desktop, it isn’t of enough value to make it worth their while to invest in making WebRTC itself better.

WebRTC is now part of HTML5 and part of what browsers need to do, so Google needs to invest in having it in Chrome. How much to invest is the real challenge.

To WebRTC or not to WebRTC?

Meet, Duo and Stadia seem to be the leading factors in whatever Google is doing in WebRTC, other than dealing with complaints and feedback from the community.

Google Meet

Google Meet is using VP9. It is one of the only group calling services running in production at scale that have made that shift.

By harnessing WebRTC and owning its roadmap, Google is able to experiment and build their service faster than others can on WebRTC.

Two interesting examples we’ve had in the past year –

1. At Kranky Geek 2018, Google showed an experiment of using WebAssembly with WebRTC to improve video switching in a conference by distinguishing noise and speech:

Did it find its way into Google Meet? Maybe.

Then there’s the new captioning feature in Google Meet, which Gustavo nicely explains. It uses the data channel in WebRTC to send back the results. If anything in WebRTC needed to change to make this work better, Google could do that, as it owns the WebRTC roadmap.
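
For illustration only (this is not Google’s actual implementation), shipping caption text over a data channel looks roughly like this – senderPc and receiverPc are assumed to be connected RTCPeerConnection objects, and renderCaption is hypothetical UI code:

    // sender side: a dedicated data channel for caption results
    const captions = senderPc.createDataChannel('captions', { ordered: true });
    captions.onopen = () =>
      captions.send(JSON.stringify({ text: 'hello everyone', ts: Date.now() }));

    // receiver side, on the other peer (signaling omitted)
    receiverPc.ondatachannel = (event) => {
      event.channel.onmessage = (msg) =>
        renderCaption(JSON.parse(msg.data).text);
    };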

Google Meet, being predominantly a browser based experience, will need to rely on changes made directly into WebRTC or things that can be bolted on top using WebAssembly.

Google Duo

Google Duo is a mobile first service. It has browser support via Duo for Web, but for the most part, it is meant to be used on your smartphone.

Last month, Google announced some new features in Pixel phones, but also 3 machine learning based improvements for Duo:

Auto-framing:

“Auto-framing keeps your face centered during your Duo video calls, even as you move around, thanks to Pixel 4’s wide-angle lens. And if another person joins you in the shot, the camera automatically adjusts to keep both of you in the frame.”

We’ve seen Facebook do that in Portal and a few video conferencing vendors adding that to their room systems.

Packet loss concealment:

“When a bad connection leads to spotty audio, a machine learning model on your Pixel 4 predicts the likely next sound and helps you to keep the conversation going with minimum disruptions.”

Packet loss concealment using machine learning is something not many are doing (or publishing that they are doing).

Background blur:

“you can now apply a portrait filter as well. You’ll look sharper against the gentle blur of your background, while the busy office or messy bedroom behind you goes out of focus.”

Another nice feature, which is available in other services such as Zoom.

From the looks of it, auto-framing and background blur rely on hardware based capabilities of the Pixel devices. Packet loss concealment… a lot less so.

Could we see machine learning based packet loss concealment find its way into the WebRTC codebase? (where it makes the most sense to add it instead of as an external piece of software). Not soon…

Google Stadia

For Google Stadia, Google went with QUIC instead of SCTP for the controls. It decided to make use of WebRTC for live streaming itself.

But it wasn’t enough. It needed the low latency of WebRTC to be even lower. So it added a Chrome experiment to enable them to reduce the playout delay in WebRTC. A few of my clients have already adopted it and are happy with the results for their own use case.
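
The knob itself surfaced as a non-standard field on the receiver objects. A sketch of how those clients flip it, assuming pc is an RTCPeerConnection on a Chrome version where the experiment is available:

    // playoutDelayHint is Chrome-only and not part of the WebRTC spec
    pc.getReceivers().forEach((receiver) => {
      (receiver as any).playoutDelayHint = 0; // seconds; 0 = buffer as little as possible
    });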

Google also tweaked and improved the VP9 decoder to make it work with 4K 60fps streams.

In the case of Stadia, the changes need to be made inside the WebRTC codebase to apply well for its service anywhere.

What is changing with Google’s strategy about WebRTC in 2020?

WebRTC 1.0 is “out”. Almost.

The latest CR (Candidate Recommendation) is dated December 13. Hopefully the last one before we go to the next step. It is interesting to look at the original charter of WebRTC:

It took somewhat longer to get here than originally expected, but we’re almost there.

Google held its internal milestone of WebRTC 1.0 code complete two months back.

What now?

Besides housekeeping, bug fixes, and talking about WebRTC NV (the next version), I think a lot will change internally at Google around how they can make more of their investment in WebRTC and stay or become more competitive in the market. This being an open source project means that some features will need to be kept out of the open source codebase. Like the new packet loss concealment mechanism in Google Duo.

How is that achievable?

The leading factor is going to be adding more flexibility and control to developers over what WebRTC is and how it operates. Ideally by using WebAssembly and in the future by using WebTransport and WebCodecs, two new initiatives that will unbundle a lot of what WebRTC is.

This gives the ability to take improvements out of the baseline implementation and introduce them as proprietary features.

The demarcation line between what will go into the WebRTC codebase by Google and what will be kept out of it is going to be the use of machine learning and artificial intelligence. Whenever a feature makes use of trained machine learning models, Google will most probably try to keep that implementation out of WebRTC. Why? Because it has the greatest value and the highest investment today.

Should this worry you?

Maybe, but it is to be expected.

Google has invested heavily in WebRTC. Without this investment nothing that we see and use today in WebRTC and take for granted would have been possible.

It is even surprising that it lasted this long…

WebRTC closes the basic gaps and requirements of media engines. It is good enough. If you want to improve upon it, differentiate or be at the cutting edge of the WebRTC technology, you will need to invest in it yourself as well. Relying only on Google isn’t an option. And probably never really was.

Here’s to an interesting and eventful 2020 with WebRTC!

The post Google’s private WebRTC roadmap for 2020 = AI appeared first on BlogGeek.me.

WebRTC conference calls. What could possibly go wrong?

Tue, 12/17/2019 - 13:00

Conference calls were always complex. WebRTC might have made joining them simpler, but it does come with its own set of headaches.

I’ve been in the industry for the last 20 years or so (a dinosaur by now). I had my share of conference calls that I joined or scheduled. As humans, we tend to remember the bad things that happened. The outliers. There are many of those with conferencing.

When I saw this Dilbert strip the other day, it resonated well with the “Supporting WebRTC” course I’ve been working on these past few months:

One of the things I am dabbling with now in the course are media quality issues. This was spot on. So of course I had to share it on Twitter, which immediately got a colleague to remind me of this great Avengers mock video conference:

The funny thing is that this still occurs today, even if people will let you believe networks are better and these problems no longer exist. They do. Unless you are Zoom – Zoom always works. At least until it doesn’t…

What can possibly go wrong?

This one was just published today, so couldn’t resist…

A modern WebRTC service today will have a few potential failure points:

  1. The cloud vendor’s infrastructure
  2. Your own infrastructure
  3. The user’s network
  4. The user’s browser
  5. The user’s device

Let’s try to break these down a bit

1. The cloud vendor’s infrastructure

Here’s a secret. AWS breaks from time to time. So do Azure, Google, Digital Ocean and practically everyone else.

Some of these failures are large and public ones. A lot more are smaller and silent ones that aren’t even reported in the main status pages of these cloud vendors. We see that in testRTC – as I am writing these words, we are struggling with a network or resource issue with one of the cloud vendors that we are using, which affects one of our services (thankfully, we’re still running for most of our customers).

Your service might be unreachable or experiencing bad media quality because of the cloud vendor you are using. Fortunately, in most cases, these are issues that don’t last long. Unfortunately, these issues are out of your control.

2. Your own infrastructure

This one is obvious but sometimes neglected. What you run in your backend and how the client devices are configured to use it has a profound effect on the quality of experience for your users.

I’ve seen anything from poor ICE server configuration, through bad scaling decisions, to machines that just need a reboot.

WebRTC has a lot of moving parts. You need to give them proper care and attention.
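
To pick the first item on that list – the ICE servers configuration is one small object, and still remarkably easy to get wrong. A minimal sketch, with placeholder URLs and credentials:

    // misconfigure this and calls fail only for users behind restrictive
    // NATs and firewalls – the nastiest kind of bug to reproduce
    const pc = new RTCPeerConnection({
      iceServers: [
        { urls: 'stun:stun.example.com:3478' },
        {
          // TURN over TCP and TLS on port 443 for locked-down networks
          urls: ['turn:turn.example.com:443?transport=tcp', 'turns:turn.example.com:443'],
          username: 'user',
          credential: 'secret',
        },
      ],
    });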

3. The user’s network

Now we head towards the things that you have no control over… and primarily that is the user’s network.

You. don’t. have. control. over. what. network. your. customer. uses.

He might be over a poor 3G connection (yes, we still have those). Or just be too far from the closest WiFi hotspot he is connected through. Or any other set of stupid issues.

In enterprises, problems can easily include restrictive firewall configurations or use of an HTTP proxy or a VPN.

Then there’s the congestion on the user’s network based on what OTHER people are doing on it.

Here, what you’ll need to do is be able to understand the issue and explain it to the user, to help him squeeze more out of the network he is using.

4. The user’s browser

Here’s another challenging one.

The first one is a bit obvious – modern browsers automatically upgrade. This means you will end up with a new browser running your app one day without Apple, Google, Microsoft or Mozilla calling you to ask if you agree to that. And yes – these upgrades may well change behavior for customers and affect media quality.

Then there’s the opposite one – in enterprise environments, IT administrators sometimes lock browser versions and don’t let them upgrade automatically.

The biggest challenge we’re now facing though is Google experiments, like the one conducted with mDNS in WebRTC. Google conducts experiments in Chrome on live users sporadically. You have no control over these and no indication of where and how they are conducted. The whole purpose of this is to surface issues. Problem is, you won’t know if it breaks things for you until someone complains (or unless you monitor your deployment closely).

5. The user’s device

The device the user uses affects quality. Obviously.

Tried recently to use an iPhone 4 with a WebRTC service?

The CPU, memory, software and other processes your user has on the device will affect quality. Add to that the fact that certain devices and peripherals behave differently and have their own known (or unknown) issues with WebRTC, and you get another minor headache to deal with.

The things we can control in our WebRTC conference calls

Here’s where we started – a modern WebRTC service today will have a few potential failure points:

  1. The cloud vendor’s infrastructure
  2. Your own infrastructure
  3. The user’s network
  4. The user’s browser
  5. The user’s device

In WebRTC calls, you can control your own infrastructure. And you can build it to work around many of the cloud vendor’s infrastructure issues.

You can try to add logic that deals with the user’s device.

You can probably deal with many of the user’s browser issues by more testing and running their unstable and developer preview releases.

The things we can’t control in our WebRTC conference calls

The main thing you can’t control is the user’s network.

What you can do here is to provide better support, assisting your users in finding out the issues that plague their network and suggesting what they can do about it.

Two things you will need to get that done: tooling and knowledge

The tooling side I’ll probably touch on in a future article. The knowledge part is something I have a solution for.

How can you better serve your customer?

In the last few months I’ve been working on the creation of a new “Supporting WebRTC” course. This course is geared towards support people who get complaints from users about their service and they need to understand how to help them out.

The course started through conversations with support teams in widely known providers of WebRTC services, which turned into a suggested agenda that later turned into a real course.

There are already close to 6 hours of content split into 33 lessons, with more to be added in the next month or so.

I’ve decided to open up registration to the course to everyone and not limit it to the limited pre-launch users I’ve shared it with. I feel it is the right time and that the content there is rock solid.

If you want to improve your knowledge or your support team’s knowledge of WebRTC, with a focus on getting them to make your users happy and keep using your service, then check out my course.

Register to the Supporting WebRTC course

The post WebRTC conference calls. What could possibly go wrong? appeared first on BlogGeek.me.

The software inside video conferencing hardware is… WebRTC

Mon, 12/02/2019 - 13:00

WebRTC isn’t only about guest access or even interoperability. It is about the whole infrastructure and service.

My article last month about guest access, the use of WebRTC for it AND how it is now used for “interoperability” between Microsoft and Cisco had its fair share of feedback and comments, both on the article and off of it in private conversations. I think there is another trend that needs to be explained, which in a way is a lot more important. This one is about video conferencing hardware being dominated by HTTP and WebRTC. This, in turn, is affecting how modern video infrastructure is also shifting towards WebRTC.

Where video conferencing hardware meets WebRTC

Check out this recent session from Kranky Geek last month. Here, Nissar Mahamood from Lifesize explains how WebRTC got integrated into their latest meeting room systems (=hardware), getting it to 4K resolutions.

It is a good session for anyone who is looking at embedded platforms and systems, or who needs to customize WebRTC for their own needs and use it outside of a web browser.

There are two things in this video that surprised me, for two very different reasons:

  1. Using GStreamer as the basis of the media engine
  2. Selecting Node.js as the client application environment
Using GStreamer as the basis of the media engine

I’ve started seeing more and more developers using GStreamer as part of the technology stack they use with WebRTC. On Linux, your best bet for processing media using open source is either ffmpeg or GStreamer. Due to the real time nature of WebRTC, GStreamer is often the more sought-after approach. In the past year or so, it also gained WebRTC transport support, making it an even more viable option.

In many cases, the use of GStreamer is for connecting non-WebRTC content to WebRTC or getting content from WebRTC to restream it elsewhere. Lifesize has done something slightly different with it:

As the illustration above from their Kranky Geek session shows, Lifesize replaced the media engine (the voice and video engines) of WebRTC with their own, built on top of GStreamer. They don’t use the WebRTC parts of GStreamer, but rather its “original” media processing parts, replacing what’s in WebRTC with their own.

It is surprising, as many would use WebRTC specifically for its media engine implementation and throw away its other components. Why did Lifesize take that route? Probably because their existing systems already used a GStreamer that is heavily customized, or at the very least fine-tuned, for their needs. It made more sense to keep that investment than to try and reintroduce it into something like WebRTC.

This approach, of taking the WebRTC source code and modifying it to fit a need isn’t an easy route, but it is one that many are taking. More on that later.

Selecting Node.js as the client application environment

We’ve been so focused on development with WebRTC on browsers and mobile, that embedded non-mobile platforms are usually neglected. These have their own set of frameworks when it comes to WebRTC.

The one selected by Lifesize was Node.js:

They created a Node.js wrapper that interfaces directly with the WebRTC native C++ “API” with an effort to create the same JS API they get in the browser for WebRTC.

Why? Their meeting room systems now use HTML for their visual rendering, with the application logic driven by JavaScript.

Why JavaScript?

Because of Atwood’s Law

any application that can be written in JavaScript, will eventually be written in JavaScript

Lifesize simply made their application one that can be written in JavaScript.

This is doubly true when you factor in the need to support web browsers where you have WebRTC with a JS API on top anyways.
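Lifesize’s wrapper is their own, but you can get a feel for the approach with the community node-webrtc project (the wrtc package on npm), which exposes the browser’s RTCPeerConnection API in Node.js. A minimal sketch:

```javascript
// npm install wrtc
// The community node-webrtc package exposes the browser's WebRTC
// JS API on top of the native C++ implementation.
const { RTCPeerConnection } = require('wrtc');

async function makeOffer() {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
  });
  pc.createDataChannel('control'); // ensures the offer has content
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // Ship pc.localDescription to the remote side over your signaling
  // channel, exactly as you would in a browser.
  console.log(pc.localDescription.sdp);
}

makeOffer();
```

The point isn’t this specific package – it is that the same application code can run against a browser and against a device.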

The hidden assertion of WebRTC cloud infrastructure

What I like about the slide above is the cloud with the wording “Lifesize Cloud Service” in it. The fact that Lifesize is connecting to it via Node.js speaks volumes about where we are and where we’re headed versus where we’re coming from.

A few years ago, this cloud service would have been based on H.323 or SIP signaling.

H.323 is now a deadend (something that is hard for me to say or think – I’ve been “doing” H.323 for the better part of my 13 years at RADVISION). SIP is used everywhere, but somehow I don’t see a bright future for it outside of PSTN connectivity (aka SIP Trunking).

Lifesize may or may not be using SIP here (SIP over WebSocket in this case) – due to the nature of their service. What I like about this is how there is a transition from WebRTC at the edge of the network towards WebRTC as the network itself. Let me try and explain –

Video conferencing vendors started off looking at WebRTC as a way to get into browsers. Or as a piece of open source code to gut and reuse elsewhere. If one wanted to connect a room system or a software client to a guest (or a user) connecting via WebRTC on a web browser, this would be the approach taken:

(I made up that term transcoding gateway just for this article)

You would interconnect them via a gateway or a media server. Signaling would be translated from one end to the other. Media would be transcoded as well. This, of course, is a waste of processing and bandwidth. It is expensive and wasteful. It doesn’t scale.

With the growing popularity of WebRTC and the increasing use and demand for browser connectivity to video conferences, there was/is no other way than to rethink the infrastructure to make it fit for purpose – have it understand and work with WebRTC not only at the edge.

That’s when vendors start trying to fit WebRTC paradigms into their infrastructure:

(guess what? Translating gateway? Also made up just for this article)

Things they do at this stage?

There are a lot of other minor nuances that need to be added and implemented at this stage. While some of these changes are nagging and painful, others are important. Adding SRTP simply means adding encryption and security – something that is downright mandatory in this day and age.

The illustration also shows where we focused on making the changes in this round – on the devices themselves. We’ve “upgraded” our legacy phone into a smartphone. In reality, the intent here is to make the devices we have in the network WebRTC-aware so they require a lot less translation in the gateway component.

Once a vendor is here, they still have that nagging box in the middle that doesn’t allow direct communication between the browser and the rest of their infrastructure. It is still a pain that needs to be maintained and dealt with. This becomes the last thing to throw out the window.

At this last stage, vendors go “all in” with WebRTC, modifying their equipment and infrastructure to simply communicate with WebRTC directly.

This migration takes place because of three main reasons:

  1. The need to support web browsers with WebRTC
  2. The cost of interworking across WebRTC and whatever the rest of the vendor’s infrastructure supports
  3. The popularity of WebRTC and its vibrant ecosystem marks it as the leading technology moving forward

That third reason is why, once a vendor decides to upgrade and modernize its infrastructure, there is a switch towards adopting WebRTC wholeheartedly.

This isn’t just Lifesize

Microsoft took the plunge when adding Skype for Web and went all in with Microsoft Teams.

With their hardware devices for Teams, they simply support web technologies in the device, including WebRTC, which means a theoretical ability to support any WebRTC infrastructure deployed out there – not only Teams.

We see the same with Cisco recently.

BlueJeans and Highfive both live and breathe web technologies.

Forgot to mention you? Put a comment below…

There were other good Kranky Geek sessions around this topic this year and last year. Here are a few of them:

  • Discord (2018) – how they use WebRTC and the changes made to it to fit their needs. In their case, that was very large audio conferences
  • Microsoft (2018) – an overview of WebRTC for UWP and Hololens
  • PION (2019) – an alternative WebRTC stack written in Go, and using WASM while at it
  • RingCentral (2019) – server apps with Node.js+WebRTC
  • Phantom Auto (2019) – replacing the video encoder with an external hardware one running on an NVIDIA GPU
The winning video conferencing hardware software stack

Here’s what seems to be the winning software stack that gets shoved under the hood of video conferencing hardware these days. It comes in two shapes and sizes:

Linux
  • Linux as the underlying operating system
  • HTML/JS as the visualization layer (Node.js, Chromium or WebKit as its baseline)
  • WebRTC embedded in there as part of the HTML implementation

This gives a vendor a hardware platform where web development is enabled.

Android
  • Android as the underlying operating system
  • Android app used to implement the device UI and application logic
  • WebRTC embedded natively as part of the app

This diverges a bit from the web development approach (while still allowing for it). That said, it opens up room for third party applications to be developed and delivered alongside the main interface.

Linux or Android, which one will it be? Depends on what your requirements are.

A word about Zoom in this context

Why isn’t Zoom using WebRTC properly?

I don’t know. But I can make an educated guess.

It all relates to my previous analysis of Zoom and WebRTC.

Zoom was stuck with the guest access paradigm; taking the first step was too expensive for them for some reason. Placing an interworking element to connect their infrastructure to web-enabled Zoom clients didn’t scale well with pure WebRTC. It required video transcoding and probably a few more hurdles.

At their size, with their business model and with the amount of guest access use they see with the Zoom client on PCs, it just didn’t scale economically. So they took the WASM route that they are following today.

It got them on browsers – with limited quality, but workable. It also got them an understanding of WASM and of video processing in WASM that not many companies have today.

And it put them on an intersection in how they operate in the future.

Would they:

  1. Switch towards WebRTC, as most of their competitors have; or
  2. Continue with WASM, waiting for WebCodecs and WebTransport to progress to a point where they are clearly defined and available in web browsers

If I were the CTO of Zoom, I am not sure which of these routes I’d pick at this point in time. Not an easy decision to make, with a lot to gain and lose in each approach.

Need help figuring this out?

This whole domain is challenging. Getting WebRTC to work on devices, around devices, in new or existing infrastructure. Deciding how to define and build a hardware solution.

Contact me if you need help figuring this out.

The post The software inside video conferencing hardware is… WebRTC appeared first on BlogGeek.me.

Kranky Geek SF 2019 – post event summary

Mon, 11/25/2019 - 13:00

Our best Kranky Geek event ever. Or is it just that I have a short memory?

Earlier this month marked the highlight of the year for me. It happens every year now, since 2015: the Kranky Geek event takes place in San Francisco. The event started by mistake and has become an immensely taxing and enjoyable undertaking for me.

WebRTC is a niche of an industry that is here to change the world and challenge how we communicate online with each other in real time. Kranky Geek became a place where our WebRTC niche meets, mingles and discusses many aspects of what it is that we’re doing. A lot of it is technology – the learnings people had and the scars they have to show for it. Some of it is more future looking, where new requirements are shared and semi-pitches are made. It is also a place where we get to talk and interact with the people behind the browser implementations.

I decided to share this slide about how niche WebRTC is:

This shows Stack Overflow Trends for WebRTC, VoIP and SIP – the percentage of questions each month that carry these technologies as tags. WebRTC is higher than either SIP or VoIP by a factor of 3, which is nice. But overall, we’re still talking about 0.05% of the questions, which isn’t much. WebRTC is a niche, but an important one (at least to me).

What is Kranky Geek all about

Kranky Geek is about the current state and the immediate future of the WebRTC ecosystem. It is first and foremost an event for developers.

Here’s what I understood at a client meeting earlier that same week. After the meeting, the client came over and told me how he uses the videos from past Kranky Geek events. Whenever there is a technical detail or a topic he knows is covered by one of our past sessions, he just goes and searches the videos to find the 2-3 minutes he needs.

It got me thinking. It is quite similar to how I use it. I end up referring people to a specific Kranky Geek video at least once a month if not more.

In the end, we are into learning and expanding the knowledge available out there about WebRTC.

The obligatory thanks

The Kranky Geek event isn’t funded by the audience’s tickets. These are practically free. We have a low registration fee that acts as a kind of seriousness fee, which makes it easier to estimate the actual attendance rates we will see. That fee ends up being donated to good causes. In the case of Kranky Geek, we’ve been giving that money to GDI.

The event is only possible due to its sponsors.

There are a few people and companies that I need to thank for the Kranky Geek 2019 event.

First, to my partners in crime – Chris and Chad. Our different opinions and dispositions make a good mix for running Kranky Geek.

To Google and the Chrome WebRTC team at Google.

Google have been there with us from the beginning. They assist us tremendously with the logistics, their attendance and their sessions throughout the years.

To our sponsors of the event:

Their contribution is an important part of us being able to do this every year. I am also very happy that without exception, they treat their speaking slot and our rigorous process and dry runs seriously.

We had a new type of sponsor this year: vendors who wanted to be part of the event but didn’t speak (they came after we already had a full agenda).

Voximplant is a CPaaS vendor with WebRTC technology – one you should follow closely if you aren’t already.

Jamm just came out of stealth, and wanted to do that as part of our event.

What you can find in this year’s Kranky Geek sessions

We started off planning the event with a lot of AI in mind. This is what we had last year, and the trend obviously continues this year as well. It will probably still be a trend 5 years from now.

When we actually looked at our agenda, we found a nice mix of WebRTC topics, covering things from WebRTC specifications and best practices, through customizing and modifying WebRTC in production to new use cases and AI.

It is good we did a dry run with all of our speakers, since I didn’t really have the time and attention to listen to them during the event itself. I learned a lot of new things about WebRTC from the dry runs we had, and I am sure you will find some very interesting and useful sessions here as well.

All of the videos are already available on YouTube and I encourage you to both subscribe and watch our 2019 playlist:

See you next year?

Maybe.

We never really know if there will be a next event. That is partly because we’re not professional event organizers. We do it because we enjoy it. We also rely on others to make it happen.

If you are interested in a Kranky Geek 2020, then do one of the following things (or all of them):

  1. Subscribe to this blog – I’ll be sending out an email at some point with the event’s date if we decide to do another one
  2. Subscribe to WebRTC Weekly – that’s who we email to about Kranky Geek events officially
  3. Contact me directly – especially if you’re interested in speaking or sponsoring a future event

The post Kranky Geek SF 2019 – post event summary appeared first on BlogGeek.me.

Video meetings guest access: the new frontier of interoperability

Tue, 11/12/2019 - 13:00

There are different ways to deal with interoperability. With WebRTC, the one selected is relying on the browser and offering guest access. Interestingly, while the industry is headed in that direction, the elephants are also headed… elsewhere.

When I first started with this blog, over 7 years ago, I wasn’t really sure where I was headed with it. What I did know, is that I have to write something about WebRTC to get it off my chest. WebRTC was the reason I stopped working at RADVISION and moved on. You see, as a CTO of my business unit I was told there’s no budget to invest in researching what we can do with WebRTC. Somehow, the future wasn’t important enough, which got me to understand there’s no future for a CTO there either.

I ended up deciding to write three posts – what is WebRTC, why signaling is irrelevant, and what a future meeting room would look like.

That third article? Here it is, from March 2012: The Post-WebRTC Video Conferencing Room System

We’re still slowly crawling towards that goal.

A short history lesson: the early days

For many years video meetings were an in-company luxury. A dubious luxury at that.

Most video conferencing systems were based on a signaling protocol called H.323 and were *supposed* to be “interoperable”. This didn’t work that well, and in the end, companies tended to purchase all of their hardware from a single vendor. Multi-vendor was possible, but always at a loss of features or capabilities – either because these were proprietary to begin with or because interoperability is such an elusive target.

What was a person to do when they needed to communicate with someone *outside* the company? Dial their phone number. If video was what was needed, then the IT department had to be involved – on both sides. Fooling around with dialing plans, checking that the video conferencing devices interoperate, and then hand-holding the users throughout the session. This happened not only in regular companies but also when the companies in question were video conferencing vendors themselves.

Most systems at this point were hardware based. You had to purchase “meeting rooms” and install them.

The system was totally broken.

Rise of the federation

At some point a new concept started cropping up. If I recall correctly, Microsoft came up with it, in their Microsoft Lync service. The idea was to create federations.

Microsoft Lync was a semi-standards based service. It was SIP based in nature, but different – connecting to it was harder than connecting to other SIP devices and services as a lot of the spec was proprietary. Being Microsoft, they had a largish software-based market share, but one that was left unconnected.

Each company installed, operated and managed its own Microsoft Lync service. You couldn’t just reach out to another user on another installation directly. What you could do is involve the IT people (on both ends – yes), and get them to configure both installations to be aware of one another. This was referred to as a federation.

Think about it.

Thousands and thousands of installations. Each an island of its own. Each time you wanted to reach out to someone from a new island, you had to ask permission and get it setup – to federate with that other island of install base.

And guess what? This never really worked either. Not in real life. And not even for the video conferencing vendors themselves.

The friction was just too high to make this useful for the workforce.

Introducing the software client

Until a couple of years ago, video conferencing was a thing for hardware devices.

20 years ago? These devices were mostly built around DSPs and weird embedded operating systems.

15-20 years ago? The vendors learned about Linux and were comfortable enough to use it (!) for an embedded application such as video conferencing. The main concern was usually the real time nature necessary in encoding and decoding video.

About 15 years ago, the notion of being able to use a software client on a Windows operating system to join a video conference (not conduct a meeting – just join one) started to crop up.

The idea was this:

  • If your company had a video conferencing installation, it could theoretically get a PC to connect to the video conferencing system at lower media quality and still be able to communicate properly
  • Towards that goal, one could install a video conferencing software client and use it

This brought with it the headaches of having to deal with unmanaged networks – having employees (mainly managers) connect from their home, coffee shops or the occasional crappy hotel network.

This new capability started changing the business model around video conferencing. How do you license the software in a world where what was sold was hardware through channels and VARs?

What it also did was change behavior patterns. People now didn’t go to meeting rooms to join a call – they joined from wherever they wanted. Once the video client was installed in their PC they were relatively free.

It had another use case to it: technically, you could get someone to connect as a guest to a meeting. All they needed to do was install the specific software client of the specific video conferencing vendor from the specific landing page of the specific enterprise that purchased the video conferencing system – and connect.

If you conducted a meeting with a company who had an installation of a specific vendor, then meeting with another company using the same video vendor usually meant you didn’t have to install the client again – unless it needed an upgrade of sorts.

Since these were early days, there were many installation issues with these clients. When it worked it was great, but when it didn’t…

Enter the cloud

At around the same point in time, cloud services started taking potshots at the video conferencing industry. They didn’t call it video conferencing but rather web conferencing. Why? Because the center of the service wasn’t an on premise hardware video system installation, but rather a software based cloud service.

It wasn’t as performant and the quality was lower, but it was easier to use. Sadly, video conferencing companies didn’t see it as an existential threat.

Anyway, these services assumed that all users download and install a software client to connect to these web conferences.

Since this was their bread and butter, the idea of having guests connect became more prevalent and acceptable.

At any given point in time, I had on my laptop at least 3 such software clients. Services like WebEx, GoToMeeting and AT&T Connect.

Two challenges these services faced:

  1. The concept was around “a company licenses a communication tool to use internally but allows guests to participate”. This still wasn’t about the guests
  2. That software install was still friction, and something no one really wanted to experience (besides maybe the vendor behind that software client)
Zoom

Out of these two challenges, Zoom came and solved the first one. For the most part, the first experience of a user with Zoom is by being invited to a Zoom meeting. By someone. Not necessarily an employee in a company who licensed Zoom – just by someone.

The change in business model, as well as the focus on the first time experience (making it simple), got Zoom to where it is today.

The problem that remained, though, is the software installation piece. That’s friction, and the browser-based solution Zoom offers is still subpar compared to what can be done in a browser.

The WebRTC guest access

In the past 5 years, what we’ve seen is that every video conferencing vendor except for Zoom has made the move towards WebRTC.

Vendors still offer software clients for ongoing use of their service and for providing an improved experience, but all of them offer WebRTC access as well.

Need to have someone join a session? Create a calendar invite and get a meeting link. That link will allow you to either install a software client or just use the browser with WebRTC.

This has become the norm to the point that in many cases, I get invited to meetings just by receiving a URL on one messaging service or another.

Just in the last year we’ve seen UCaaS vendors joining this game by offering their own video conferencing services, usually called Meetings:

  • 8×8
  • Vonage

The race towards having video bolted on top of voice meetings and web conferences now relies on WebRTC support and guest access as key features.

The nice thing about this? There’s no need to interoperate, federate or connect the islands of services. Need someone to join a meeting? Just send them a link. They won’t need to install anything, just click and be connected. Magic.

Today – almost all services offer simple to use guest access via the browser using WebRTC.

Room systems “interoperability” in 2020

This all leads to this interesting announcement by Microsoft and Cisco. In two carefully crafted posts/announcements, the two companies appear to be collaborating more than ever. The plan?

Offer direct guest access for a room system of one vendor to meetings of the other vendor.

What does that mean? If you are invited to a Microsoft Teams session as a guest, you should be able to join it from a Cisco WebEx Room device. And vice versa.

There is no federation here – just pure use of an existing room system to join “any” meeting.

From Cisco’s announcement:

Cisco and Microsoft are working together on a new approach that enables a direct guest join capability from one another’s video conferencing device to their respective meeting service web app (WebRTC based).

From Microsoft’s announcement:

Cisco and Microsoft are working together on a new approach that enables meeting room devices to connect to meeting services from other vendors via embedded web technologies. Microsoft and Cisco will be enabling a direct guest join capability from their respective video conferencing device to the web app for the video meeting service.

A few interesting initial thoughts:

  • This is achieved not by interoperability of signaling protocols (my SIP can talk to your SIP, how about we federate?) but rather by making use of “web technologies” in Microsoft’s announcement, or simply a “WebRTC based” web app in Cisco’s
  • It will not work on older meeting rooms and legacy devices. This isn’t about interoperability. It is about the web
  • Both Microsoft and Cisco rely here on WebRTC and web technologies. Their new devices use WebRTC internally and browser tech. They can now expose it by connecting using guest URLs
  • Both companies are taking the first initial step of supporting a single type of guest access of a single other vendor. This is a controlled environment for them, and probably just a first step towards opening up the devices to “any” WebRTC based guest access

It is about time we got there.

The post Video meetings guest access: the new frontier of interoperability appeared first on BlogGeek.me.

Common WebRTC mistakes and how to avoid them [Slidedeck]

Mon, 10/28/2019 - 12:00

We are now almost 8 years into WebRTC, and it seems like the same mistakes developers made 8 years ago are still being made today. Here are some common WebRTC mistakes that I see on a daily basis.

Last week, I took a quick business trip to Beijing for Agora.io’s RTC Expo event. I was invited by Agora.io to present there about a WebRTC topic, and I decided on “Common WebRTC mistakes and how to avoid them”. Why? Because it fits nicely with the fact that I’ve been promoting my WebRTC course recently, but also because it is an issue that crops up on a weekly basis.

RTC Expo is an interesting event. To begin with, it is a local event in China. It runs in three separate tracks and it was well attended – the rooms were usually filled to the brim during sessions. The number of foreigners could be counted on the fingers of a single hand. Agora.io offered live translation there, automated using Google Translate. During every session, the spoken words were transcribed and then translated to either Chinese or English, showing both languages to the side of the big screen. The results were mixed, and at times funny. It allowed understanding the gist of what was said but required some grasp of the language spoken by the presenter.

For my own presentation, I decided to go with a simple structure:

  1. Give a short explanation of WebRTC
  2. Continue with a shopping list of common WebRTC mistakes

This structure gave me the ability to fit the content to the length of the session quite nicely, while driving home the three main concerns:

  • Developers are clueless about STUN and TURN configuration and meaning
  • Picking a signaling project in github is a tricky/risky endeavor
  • Lack of knowledge brings with it mistakes. Better learn WebRTC

There are a lot more mistakes, but these definitely make it to the top of the list.

If you are interested in learning more, then here is the deck I used:

Common WebRTC mistakes and how to avoid them (RTC Expo 2019) from Tsahi Levent-levi

When the video of the session is published, I will add it here as well. And if you are interested in solving such issues and reducing the risks of your WebRTC project, then I can always suggest my WebRTC courses.

The post Common WebRTC mistakes and how to avoid them [Slidedeck] appeared first on BlogGeek.me.

Are you supporting WebRTC or developing with WebRTC?

Mon, 10/21/2019 - 12:00

I am in the process of launching a WebRTC support course, alongside my WebRTC training for developers. This is in part taking place because of the work we’ve been doing at testRTC lately.

Supporting a technology is different than developing it. This is something I learned only recently. It is something I should have known some 20 years ago already. You learn something new every day.

I was always on the software development track. Be it as a developer, project lead, product manager or CTO. It was all about defining, designing, implementing and maintaining communication software. On good days, I interacted with product managers and developers. On bad days, I had to deal with support people (not because they are bad people, but because it meant we had product issues and bugs to deal with). On really bad days, I had to talk to a client who was on an escalation path.

A lot of that work with clients and support teams is frustrating as hell for developers. Oftentimes, there are two disconnected conversations going on, where both sides try to talk to each other but somehow there’s a mismatch in the languages.

This was never a fun experience for me.

Learning the trade of technical support

Earlier this year, at testRTC, where I am a co-founder and the CEO, we’ve partnered with Talkdesk, developing a new product to suit their needs. For the first time, my customers weren’t other developers, devops or entrepreneurs but rather support teams. What we essentially built was a network testing tool for WebRTC, which enabled Talkdesk’s support team to more easily collect and analyze network statistics from their clients. The end result for Talkdesk? This greatly reduced their turnaround time on incidents. This product is now being trialed by a few other customers, which is great.

I learned a lot from this experience – working with support teams, understanding their challenges and getting feedback from them on our initial alpha release and from there to the product launch itself.

At roughly the same timeframe, I found myself consulting more to support teams through BlogGeek.me, which was a different experience. The main bulk of my consulting revolves either around architecture and troubleshooting development issues in communication technologies, or around roadmapping and strategizing communication products. The people you deal with are different in each case, and trying to assist support people – instead of making them go away, as I did as a developer in my distant past – is an interesting experience (something I should have experienced years back, when I was still young and beautiful).

Where is all that leading to?

New upcoming Supporting WebRTC course

My next pet project at BlogGeek.me is a new course. This one geared towards support people.

It isn’t a subset of the developers WebRTC courses that are already available, but rather a brand new course, created and recorded from scratch.

Why?

Because support teams need something different.

They don’t really need to know the internals of SRTP, or a detailed explanation of the patent situation of video codecs, or a lot of other technicalities. What they need is a basic understanding of WebRTC and then a lot of information around how things fail (as opposed to how they work).

If you want a peek at the agenda for this course, it is available here.

I am in the process of creating the materials for the course and will switch gears towards recording and putting this live in two or three weeks.

There are 3 options here:

  1. If you want to put your weight on it, affecting what content is on the course, and learning while I record these lessons, then you can join the pre-launch now at half the launch price. I am expecting your feedback in such a case, and will be giving priority to creating lessons you want. All you need to do is contact me
  2. You can wait until this is all available, probably end of December or so. And then enroll in the course and take it in a linear fashion
  3. You can skip and move on
My WebRTC courses for developers

Today, I have 3 WebRTC courses for developers:

  1. WebRTC basics – a free course open to anyone
  2. Advanced WebRTC – the main course, which already saw 500+ students enrolled to it
  3. WebRTC Tooling – a growing set of snippets and interviews to be used when needed for reference

If you want to learn more about them, you can check the course syllabus (PDF).

Are you an employee and not a decision maker?

I think this doesn’t happen enough:

The part not happening enough is employees asking to take classes. Asking to get trained in technologies they need to get their job done. Why do I think that? Because I used to be like that as a developer myself. I was passive, waiting for things to happen to me, rarely going and asking for the tools to assist me in my work.

More often than not, I see managers interested in enrolling their employees in my courses. From time to time, there will be a developer who thinks this is important enough to go and ask for permission to take the course – or even more – to suggest the company send the whole team to enroll.

Think you need this course but don’t think management will approve? Try asking them. You might be surprised by the reply you get.

The post Are you supporting WebRTC or developing with WebRTC? appeared first on BlogGeek.me.

Data APIs: How to make the most of ‘public’ realtime data sources

Mon, 10/14/2019 - 18:00

I find myself looking at streaming platforms somewhat more lately. A topic that crops up from time to time is access to “open data”. Many write about the merits of open data but a lot less is written about the challenges related to making such data accessible and available.

I’ve asked Tom Camp, technical author and developer at Ably Realtime, a data stream network and realtime API management platform, to give a few pointers around the challenges in accessing open data streams.

Why realtime open data is useful

A well-known example illustrating the benefits of realtime open data is Transport for London and the ‘Citymapper effect’. Deloitte estimates that the 13,000 developers who started using this data created 600+ apps (including Citymapper), contributing £130m to the city’s economy within just a few years of the scheme’s launch. So it’s surprising large-scale examples like this are so rare (if you know of any similar success stories/ good sources of realtime data please comment at the end of this article). The EU’s data commission has also noted a distinct lack of publicly available, value-generating data sources (think traffic data, weather information, realtime financial updates) due to the costs involved of realtime distribution. In the UK, the Office of National Statistics (the ONS) has noted a widespread lack of data sources in realtime. Headlines aside, ask most developers and you’ll get the same answer. 

By allowing developers to publish and consume realtime open data feeds on Ably’s API Streamer (a realtime API Management Platform), Ably’s Open Data Streaming Program aims to make public realtime data easier to work with. Work setting this in motion has involved identifying the most useful, publicly-available realtime data, converting it to a single realtime feed, and inputting it to the Ably Hub, which then re-distributes it to users (for free) in whichever realtime protocol and data structure they need. The process brought us into contact with hundreds of ‘open’ realtime data sets, and we soon became veterans in identifying and solving common problems developers experience when trying to consume realtime data feeds. Recurring obstacles range from a lack of ‘real’ realtime information, to a lack of protocol support, to heterogeneous data structures.

Below we isolate three key potential problems to bear in mind when accessing ‘realtime’ data sources, and share what we learnt about how to overcome them. 

1. Polling takes up time and resources

Despite the fact many online experiences (B2C, C2C and B2B) now take place in realtime, we still see a lack of push-based realtime APIs. Developers have to poll for data if they want updates in near realtime. The internet’s infrastructure is built on REST-APIs, which fall short in terms of providing event-driven online experiences. 

Let’s take transport systems as an example. Although transport systems are subject to change at any minute, even here we notice a lack of realtime APIs that would be better suited to reflect this. When we looked into this we found just 2/10 cities provided actual realtime APIs. As it happens, these were the two cities with some of the best journey-planning and transport sharing apps. 

How do realtime APIs help? Consider an application which is meant to keep end-users updated with train arrival times, subject to change (as the city dwellers amongst us know), at any moment. Using pull-based protocols, those wanting to receive the information will need to poll the provider’s endpoint every few seconds for current information, with obvious impacts on server load as well as usability.

Leave it too long and you risk missing information on a train arriving at a different platform, having the end user miss the train.

Make it too short, and you’re using a lot of bandwidth making requests for unchanged information, with each message also having a fairly large overhead.

What can we do about it? We can recommend data be provided using push-based systems, to lighten the engineering load both for producers, who only need to provide the initial connection point, and for subscribers, who no longer need to worry about intermittently polling the provider’s endpoint. The result is instantaneous updates and far lower bandwidth costs. 

Unlike pull systems, push bandwidth costs remain sustainable even when thousands of developers start using the data. For developers wishing to add realtime to their apps, look out for push-based APIs, such as WebSockets and MQTT, that allow for persistent, bidirectional connections. And while we persuade data producers of the benefits of providing these, we can – up to an extent – stick with long-polling, BUT optimize how we long-poll for maximal efficiency.
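To make the difference concrete, here is a hedged sketch of the consumer side of both models; the endpoint URLs are made up for illustration:

```javascript
const render = (data) => console.log(data); // stand-in for your UI update

// Pull model: poll a REST endpoint every few seconds (hypothetical URL).
// Most of these round trips fetch data that hasn't changed.
setInterval(async () => {
  const res = await fetch('https://api.example.com/trains/arrivals');
  render(await res.json());
}, 5000);

// Push model: subscribe once over a WebSocket (hypothetical URL) and
// let the producer send an update only when something actually changes.
const ws = new WebSocket('wss://api.example.com/trains/stream');
ws.onmessage = (event) => render(JSON.parse(event.data));
```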

2. Data structures are fragmented 

Developers looking for realtime updates have to spend a lot of time familiarizing themselves with each provider’s chosen protocol, be that HTTP or something like STOMP, working out its implementation, and how to convert this data into a unified format suited to a particular app or service. More widely though, and again using transport as an example, there is also a fundamental lack of standardization in the way transport providers structure their data. Some companies provide extended information – carriage formation, up-to-the-minute ETAs, and seat availability – while others scrape by with the bare minimum of time and transport mode ID. A lack of standards across sectors means developers wanting to expand the reach of their app (i.e. all developers) eventually come up against a host of additional problems to solve. With each new data structure, developers need to work out which data corresponds to what and how to correlate similar data, in addition to allowing for varying degrees of accuracy.

A good illustration of this lack of cohesion is the variety of options for what has caused a disruption. GTFS Realtime includes twelve possible reasons for delays. NationalRail on Darwin, however, has a whopping 496 options (I kid you not). If open data is to have a meaningful impact on different sectors, we recommend industry-wide agreements on what data to provide. For developers, in the meantime, it’s a matter of knowing how to sift through the sources.
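In practice this means writing a normalization shim per provider. A sketch of the idea, mapping provider-specific cause codes onto one small internal vocabulary – the GTFS Realtime cause values are real, while the Darwin field name and code ranges here are hypothetical placeholders:

```javascript
// A small internal vocabulary for disruption causes.
const Cause = { WEATHER: 'WEATHER', TECHNICAL: 'TECHNICAL', OTHER: 'OTHER' };

// GTFS Realtime defines roughly a dozen cause values; feeds like
// Darwin, with hundreds of reason codes, need their own mapping.
const gtfsCauseMap = {
  WEATHER: Cause.WEATHER,
  TECHNICAL_PROBLEM: Cause.TECHNICAL,
};

function normalizeDisruption(source, raw) {
  if (source === 'gtfs-rt') {
    return gtfsCauseMap[raw.cause] || Cause.OTHER;
  }
  if (source === 'darwin') {
    // `reasonCode` and the ranges below are hypothetical placeholders.
    return raw.reasonCode < 100 ? Cause.TECHNICAL : Cause.OTHER;
  }
  return Cause.OTHER;
}
```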

3. Some data sets are more open than others 

Most pull-based systems I’ve encountered don’t seem designed to handle large numbers of requests, which inherently reduces the value of the data as it becomes less accessible. Many transport data providers impose heavy rate limits and restrictions on data usage. For example, UK train operator NetworkRail has a limit of 500 people using their queues at any one time. TFL’s RESTful API is limited to 500 requests a minute. I think public data providers need to impose generous limits. For developers, so as not to get caught out when your app scales, it’s a wise precaution to assume you will need higher loads than you anticipate. Here and elsewhere, before you dive into building an app, it’s best to read the small print around your chosen data source, gauging how it fits in both with other data sources and with your use case.
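One defensive pattern worth baking in from day one is backing off when a provider starts rate limiting you. A minimal sketch, assuming a hypothetical endpoint that signals rate limits with HTTP 429:

```javascript
// Poll a rate-limited endpoint, backing off exponentially on HTTP 429.
async function pollWithBackoff(url, onUpdate, baseDelayMs = 5000) {
  let delay = baseDelayMs;
  for (;;) {
    const res = await fetch(url);
    if (res.status === 429) {
      // Rate limited: honor Retry-After when present, otherwise
      // double the delay, capped at one minute.
      const retryAfter = Number(res.headers.get('retry-after'));
      delay = retryAfter > 0 ? retryAfter * 1000 : Math.min(delay * 2, 60000);
    } else {
      onUpdate(await res.json());
      delay = baseDelayMs; // healthy response: reset the delay
    }
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
}

// The endpoint URL is a placeholder.
pollWithBackoff('https://api.example.com/trains/arrivals', console.log);
```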

Ably is a global cloud network for streaming data and managing the full lifecycle of realtime APIs. Read more about concepts, design patterns and protocols underpinning realtime engineering on the Ably Engineering blog.

Finally, if you know of realtime data feeds that would benefit from being on the Ably Hub, get in touch – tom@ably.io 

The post Data APIs: How to make the most of ‘public’ realtime data sources appeared first on BlogGeek.me.

Future of CPaaS; a look ahead

Mon, 10/07/2019 - 12:30

Looking at the future of CPaaS, the lines are blurring in the cloud communication API future. And this isn’t only about UCaaS and CCaaS.

I’ve been asked recently by multiple clients to analyze for them the future of specific technologies they are developing. The process was very interesting and provided a lot of insights – some of them things that weren’t obvious to me to begin with.

It got me thinking. What if I do the same for CPaaS? Looking at what the future of cloud communication APIs looks like, what vendors are after, what they pitch and brief analysts about, and what their customers are looking for.

I decided to do exactly that, ending up writing this article and creating a new comparison sheet and eBook (this eBook/sheet combo can be found in my WebRTC Course paid-for ebooks section).

When looking at what the future holds in the CPaaS domain, there are many aspects to review. If this topic interests you, then you should probably also read these other 4 articles I’ve written previously:

  1. 7 CPaaS Trends to Follow in 2018 – the perspective I had a year ago. Mostly still true, but I think we’re accelerating the pace of change and evolving this a lot further
  2. What Comes Next in Communications? – a look at how CPaaS, UCaaS and CCaaS vendors are looking at the market and at the blurring lines between them
  3. CPaaS differentiation in 2019 – because it shows how different vendors try to operate differently and rise above the noise
  4. Twilio Signal 2019 and the future of the programmable enterprise – a summary of the recent Twilio Signal event. Important simply because Twilio is the market leader and innovator in this domain

Now that we’re on “the same page”, here’s where I see things heading for communication APIs.

Want to figure out exactly what each vendor is doing in each of these future trajectories? You can purchase my CPaaS Vendors Comparison.

Learn more

nocode

There’s this new trend of making software development all-encompassing. It boils down to a single non-word used for it known as #nocode

Here’s some of the things people like saying about this trend:

As creating things on the internet becomes more accessible, more people will become makers. It’s no longer limited to the <1% of engineers that can code resulting in an explosion of ideas from all kinds of people. #NoCode

— Shaheer Ahmed ✪ (@Boringcuriosity) September 13, 2019

The best code you could write is #nocode at all

— Denis Anisimov (@dbanisimov) September 14, 2019

Interestingly, the place where you see people talk the most about #nocode is in the third party API space. Now that we’ve made integrating with third parties simpler via APIs, it is time to make it even more so by requiring less development skills to do so.

This has been a long time coming to the communication API space as well.

We’ve had visual IVRs for quite some time, and we’ve seen in the past 2-3 years many of the CPaaS vendors adding visual drag and drop tools. Twilio calls their tool Twilio Studio, while the rest of the industry settled on the name Flow.

Who is doing it today with CPaaS?

Others, like Nexmo, opted to release a Node-RED package, giving developers more flexibility than the integration points a Flow tool has to offer.

What I fail to understand is why so little activity is taking place around the serverless trend. It is as if CPaaS vendors knowingly decide NOT to offer serverless and instead jump directly to the visual drag & drop flow tool.

Look at the diagram above. It shows why I believe it is a mistake to skip the serverless opportunity. We’ve started with APIs, to simplify the task of inhouse development, going towards cloud so we don’t need to install complex systems. We’ve seen a shift towards serverless (think AWS Lambda), where developers can focus on their use case and not think too much about the whole non-functional infrastructure stuff. Then came the visual drag and drop tools, which made life even simpler, as for many scenarios, there is no more need to code anything – just express your intents by connecting dots to boxes.

Developers end up using ALL of the tools given to them. They will use a visual drag & drop tool to speed up development when the flow is easier to express in that tool. They’ll write code when necessary. And they will use serverless functions to reduce the effort of scaling and maintenance if that is needed. So why not give them all of these tools?

CPaaS vendors are doing APIs and moving towards visual tools. The serverless part is an internal implementation detail which most don’t expose to their customers. Why? I am not sure.
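To illustrate what exposing serverless could look like, here is a hedged sketch of an AWS Lambda-style function answering an inbound SMS webhook. The TwiML-like reply format is for illustration only – each CPaaS vendor defines its own webhook contract:

```javascript
// An AWS Lambda-style handler for an inbound-message webhook.
// The XML reply below is TwiML-like for illustration; check your
// CPaaS vendor's documentation for the actual request/response format.
exports.handler = async (event) => {
  // Inbound message webhooks are typically form-encoded.
  const params = new URLSearchParams(event.body || '');
  const from = params.get('From');

  return {
    statusCode: 200,
    headers: { 'Content-Type': 'text/xml' },
    body: `<Response><Message>Thanks ${from}, we got your message.</Message></Response>`,
  };
};
```

The attraction is obvious: the developer writes the handler and the vendor owns scaling, availability and maintenance.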

What should you expect in the coming years?

Visual Flow tools will become an integral part of any CPaaS offering, with more widget types being added into these tools – supporting new features, adding new channels or integrating with external third parties.

Omnichannel

Omnichannel is the biggest thing in CPaaS at the moment.

There are two reasons for this:

  1. SMS is crap. And it is getting worse
  2. SMS (and voice) is being commoditized. Omnichannel means less churn for CPaaS vendors

Why is SMS crap? Because in the last week or so I’ve received so much SMS spam related to the election here in Israel that it made the channel useless. I am sure I am not the only one, and that this isn’t only in Israel.

SMS is being marketed to marketers as the channel that gets the highest attention rate from the spammed audience. What it gets is the highest deliverability – maybe. Definitely not the highest attention. This makes SMS great for transactional messages but I am not sure how good it is for sales or marketing promotions if done in the current stupid carpet-bombing tactics.

How does omnichannel change that? It doesn’t. But the social networks that act as channels treat their users better than carriers, which means they are guarding the entry to their garden from sales people and marketers, trying to bake the rules of permission marketing into the engagement. This is done by things like manually approving message templates, not letting businesses send unsolicited messages, forcing identity on the sender, allowing users to mark crap they receive as spam, etc.

It does one more thing – it brings the game into a new field which is murkier than SMS today. There are many channels already, with a promise of more channels to come in the future. Will you develop it on your own or rely on a third party CPaaS vendor for that? Most will choose the CPaaS vendor approach.

Timing is also good. Social networks are opening up their APIs, letting CPaaS vendors (and other vendors) access to their users, in an effort to enhance their usefulness to their users and to have more monetization options on their platform. They are doing that while trying really hard not to piss off their users, so spam levels are low and will be kept that way for years to come.

Omnichannel is the leading force of future CPaaS growth. This is where most invest their focus on, and where there’s an easy path for migrating SMS revenue/engagement from.

Email

Email was always shunned from. Akin to fax. A relic of a bad past.

But it isn’t.

Most of my business revolves around the ability to reach people via email. And it mostly works for me (don’t like my content? unsubscribe).

It isn’t a replacement for SMS messages. Not really. But it has many uses of its own. Especially if you factor omnichannel. Businesses need to communicate with their customers and prospects, and doing that only over SMS or WhatsApp is a limited worldview. There’s email as well.

Some CPaaS platforms already had email integrations and capabilities to some extent. Twilio has taken it to a whole new level with the acquisition of SendGrid. Did Twilio decide on this acquisition to increase their bottom line and appeal to Wall Street? Were they after an operation with lower costs attached to it, to increase their revenue per share? Was it a genuine strategic move towards email?

Doesn’t matter anymore. Email is part of the game of CPaaS. I don’t think many agree with me on that. The reason it is becoming part of CPaaS is that we need to look at communications holistically. As we head towards the enterprise with CPaaS, email is yet another channel of interaction – same as SMS, WhatsApp and others. Being better at email means answering more of the needs of enterprise communications, which means appealing more in a vendor selection process.

Email will take a bigger and more important position in CPaaS. The more omnichannel becomes the norm, the more customers will ask about Email support and capabilities.

Streaming media to third parties

We call it AI – Artificial Intelligences. If we’re not overly hyped, then ML – Machine Learning. And if we’re true to ourselves, then most of it is probably statistics, sometimes sprinkled with a bit of machine learning.

CPaaS is too generic and broad to be able to cover all possible algorithms and models. What do you want to do with that recorded voice call? Transcribe it? Translate to another language? Maybe do some emotion analysis? Find intents? Summarize? Look for action items?

Too many alternatives, with too much data to train from to get a good enough model. And then each scenario needs its own data to train for and get a specialized model to use.

The end result?

CPaaS vendors offer a few out-of-the-box integrations with popular features and frameworks. The usual suspects are speech-to-text and text-to-speech, or simply connectivity to AWS or Google machine learning APIs in the speech analytics domain.

Another approach which is gaining a lot of traction is to be able to stream the media itself to any third party – be it an on premise/proprietary machine learning model or a cloud based machine learning API. Usually over a WebSocket, but sometimes on top of other transport mechanisms.

The name of the game here? Simplicity and real time.

Enabling easy access to the media streams is key. The easier it is to access the media streams and integrate them with third parties that do machine learning, the more attractive the CPaaS vendor will be moving forward.
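On the client side, the simplest form of this is chunking the microphone with MediaRecorder and pushing the chunks over a WebSocket to whatever analysis backend you run. A minimal sketch – the endpoint URL is made up:

```javascript
// Stream microphone audio chunks over a WebSocket for server-side analysis.
// 'wss://ml.example.com/audio' is a hypothetical analysis endpoint.
async function streamMicToBackend() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ws = new WebSocket('wss://ml.example.com/audio');

  ws.onopen = () => {
    const recorder = new MediaRecorder(stream, { mimeType: 'audio/webm' });
    // Emit a chunk every 250ms and forward it as-is to the backend.
    recorder.ondataavailable = (e) => {
      if (e.data.size > 0 && ws.readyState === WebSocket.OPEN) {
        ws.send(e.data);
      }
    };
    recorder.start(250);
  };
}

streamMicToBackend();
```

The server-side equivalents offered by CPaaS vendors follow the same shape: a stream of media chunks over a WebSocket, consumed by whatever model you point them at.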

Chatbots and voicebots

The digital transformation of enterprises is a transition that is taking now over a decade and will continue for many years to come. Part of that transition is figuring out how businesses communicate with users. Part of that communication needs to be relegated to bots.

Why?

  1. Because as a business we want greater scale. The more we can automate, the more we can accomplish at a lower cost, with less friction and fewer mistakes
  2. Because users seem to prefer self service in many cases. “Empowering” users to do more by having a lot of their interactions taken care of with bots help that
  3. Interaction interfaces are moving from button clicking towards voice interactions. And text is the main form of communications on social networks (I am ignoring emojis and gifs here)

I’ve written about this trend and its reasoning when reviewing the two recent acquisitions of Cisco and Vonage in this space.

There are startups focusing solely on the bots industry, which is great. But in many ways, this is part of what a CPaaS vendor can offer – enablement of communications at scale.

Some CPaaS vendors today integrate directly or indirectly with bot frameworks such as Dialogflow or have built their own bot infrastructure. Moving forward, expect to see this more.
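For a taste of what such an integration looks like in code, here is a minimal sketch using Google’s Dialogflow Node.js client to detect the intent behind a single text message; the project and session IDs are placeholders:

```javascript
// npm install @google-cloud/dialogflow
const dialogflow = require('@google-cloud/dialogflow');

async function detectIntent(projectId, sessionId, text) {
  const client = new dialogflow.SessionsClient();
  const session = client.projectAgentSessionPath(projectId, sessionId);

  const [response] = await client.detectIntent({
    session,
    queryInput: { text: { text, languageCode: 'en-US' } },
  });

  // Hand the matched intent and fulfillment text back to the channel.
  const result = response.queryResult;
  return { intent: result.intent.displayName, reply: result.fulfillmentText };
}

// 'my-project' and 'session-123' are placeholder IDs.
detectIntent('my-project', 'session-123', 'What are your opening hours?')
  .then(console.log);
```

A CPaaS vendor that wires this up for you – message in, intent out, reply delivered on the right channel – removes most of the glue code above.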

Enabling easy creation and configuration of chatbots and voicebots will be an important CPaaS feature. The better tooling a CPaaS vendor has in this space, the easier it will be for them to retain enterprise customers looking to better communicate with their users.

UCaaS and CPaaS

Acronyms might be confusing in this section and the next, so follow closely (or skip altogether).

UCaaS vendors are looking at CPaaS as a potential growth opportunity.

Vonage has seen that first with the acquisition of Nexmo.

Since then we’ve had Cisco acquire Tropo (and botch that one), RingCentral introducing developer APIs and 8×8 acquiring Wavecell.

There are definite synergies at the infrastructure level of UCaaS and CPaaS, though it is a bit less obvious what synergies there are on the frontend/application/business side. They do exist, but just a bit harder to see.

UCaaS vendors are adding APIs and points of integrations to their service because it makes sense. Everyone’s doin’ it in one way or another. It isn’t CPaaS but in some minor cases it can replace the need for using CPaaS.

What you don’t see, is CPaaS vendors heading towards UCaaS. Yet.

And you don’t see any successful independent UCaaS vendor using a 3rd party CPaaS vendor to operate all of its communication infrastructure. Yet.

For UCaaS, CPaaS is a growth potential. For CPaaS, UCaaS is just another use case. The lines are blurring between these two domains but not enough to matter.

CCaaS and CPaaS

Cloud contact centers take the exact opposite route to UCaaS.

Many of the cloud based contact centers are using CPaaS and not their own infrastructure.

Twilio decided to build a contact center solution – Twilio Flex. In a way, it competes with some of its own customers. As successful companies grow large, they go toward adjacencies, and for Twilio, the contact center is such an adjacency.

Will Twilio succeed with Flex? Too early to know.

Will more CPaaS vendors introduce contact center solutions? Probably not, but they are being bunched up and consolidated into larger entities – just see what Vonage and 8×8 have been doing with their acquisitions.

Twilio Flex is a singular occurrence. The norm would be other larger communication players who have CCaaS, acquiring smaller CPaaS players. The end result? A blurring of the lines between the various communication vendors.

For Twilio, Flex might be just the beginning. If this bet succeeds, Twilio will find the appetite to look at other adjacent enterprise applications it could build or acquire and make its own.

M2M / IOT

This. isn’t. part. of. CPaaS.

Or is it?

I’ll start by splitting this one into two areas:

  1. M2M (cellular stuff)
  2. IOT (messaging between devices)

M2M – Wireless

Twilio has its Programmable Wireless offering, which at its core is a modern M2M solution (for me, M2M and IOT are one and the same).

In this domain, communication is needed between devices. Less human intervention for the most part, so some of the requirements are different.

But this is still communications.

CPaaS will redefine M2M/IOT as one of the use cases it covers. I don’t see a reason why CPaaS vendors wouldn’t take that route in an effort to grow their product line horizontally.

IOT – serverless infrastructure for real-time messaging

I tried to find a name for this subdomain and settled on the one vendors like PubNub, Pusher and Ably have converged on (or something close to it). There’s a set of vendors offering a kind of general purpose managed messaging that developers can use when they build their apps.

These vendors are settling on something like serverless infrastructure for real-time messaging as a name.

Serverless because it sounds modern, advanced and cool (marketing asked for that).

Infrastructure because this is what they have.

Real-time messaging because this is what they do.

How is that related to CPaaS? It isn’t, directly. Because no CPaaS vendor offers a “serverless infrastructure for real-time messaging”.

Here’s a surprising thing.

All of the CPaaS vendors who support WebRTC have a global backend real-time messaging infrastructure already. It is used to drive signaling across the network.

It might be more centralized. It might be slightly slower. It might be simplistic.

But at the end of the day – it is a serverless infrastructure for real-time messaging.

These CPaaS vendors can slap an API on top of that infrastructure and offer it as yet another distinct service. And they will. Either through in-house development or through acquisitions.

Serverless infrastructure for real-time messaging will be wrapped into CPaaS.
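
For those unfamiliar with the category, here is the developer experience these vendors sell. A minimal sketch using Ably’s JavaScript SDK as one example (PubNub and Pusher expose similar publish/subscribe primitives); the API key placeholder, channel name and payload are illustrative:

```typescript
import * as Ably from "ably";

// Connect with an API key; in production you'd use token auth issued by your own server.
const client = new Ably.Realtime({ key: "YOUR_ABLY_API_KEY" }); // placeholder key

// Channels are the pub/sub primitive: any number of clients can attach to one.
const channel = client.channels.get("game:lobby"); // illustrative channel name

// Every subscriber receives messages in real time, fanned out by the vendor's network.
channel.subscribe("player-moved", (msg) => {
  console.log("received:", msg.data);
});

// Publishing requires no server of your own; hence the "serverless" label.
channel.publish("player-moved", { playerId: 42, x: 10, y: 7 });
```

Notice how close this is to what a WebRTC signaling layer already does internally: named channels, fan-out, presence. Productizing it is more about packaging than new engineering.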

Cloud native, no hybrid

There were attempts in the past by CPaaS vendors to offer both cloud and on premise alternatives.

Some are probably doing it still.

The vendors that see more growth though are cloud native and offer no on premise alternative.

Things aren’t going to change here.

The future of CPaaS is cloud. Hybrid is a nice idea, but until cloud vendors themselves offer an easy (and cost effective) path towards that goal, the hybrid model makes less sense – it becomes too expensive to develop and maintain.

Measurements and SLAs

Quality across vendors, carriers, networks, infrastructures, time of day, day of the week or any other parameter you wish to use is variable at best. CPaaS vendors are “supposed” to handle that. They track and optimize media quality and connectivity across their services. They strive to maintain high uptime and reliability. Some even position quality as a reason for opting for their service.

At some point, TokBox and Twilio started offering quality measurement tools. TokBox introduced Inspector, a way for its users to troubleshoot network issues of recent sessions. Twilio launched Voice Insights, offering its users a quality dashboard of the calls conducted through its service.

A similar aspect is the use of SLAs as part of the service – a binding definition of the level of service the customer should expect and what happens when that expectation isn’t met. These apply mostly to the enterprise plans of some of the CPaaS vendors.

Why am I mentioning it here? Because I see it happening. It is what got Talkdesk to pick testRTC as a network testing tool (I am a co-founder at testRTC). It is also an issue that causes a lot of challenges for customers – understanding the quality their own users experience.

Measurements and SLAs will take a bigger role in customers’ buying decisions. As the market evolves and matures, expect to see more of these capabilities crop up in CPaaS offerings. It will happen due to pressure from competitors, but more likely due to pressure from enterprise customers.
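
Under the hood, dashboards like Inspector and Voice Insights aggregate the same statistics any application can sample locally. A minimal sketch of reading packet loss and jitter from a live WebRTC connection in the browser; `pc` is assumed to be an already-established RTCPeerConnection, and logging to the console stands in for whatever reporting backend you'd actually use:

```typescript
declare const pc: RTCPeerConnection; // assumed: an already-established connection

// Sample inbound quality metrics; this is the raw material of quality dashboards.
async function sampleQuality(): Promise<void> {
  const report = await pc.getStats();
  report.forEach((stats) => {
    if (stats.type === "inbound-rtp") {
      const s = stats as RTCInboundRtpStreamStats;
      const received = s.packetsReceived ?? 0;
      const lost = s.packetsLost ?? 0;
      const lossPct = (100 * lost) / Math.max(1, received + lost);
      console.log(`${s.kind}: loss=${lossPct.toFixed(1)}% jitter=${s.jitter ?? 0}s`);
    }
  });
}

// Poll every few seconds for the duration of the call.
setInterval(() => sampleQuality().catch(console.error), 5000);
```

Collecting these numbers is the easy part; the value the vendors add is aggregating them across millions of calls and turning them into something actionable.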

Vying towards the Programmable Enterprise

We’re shifting from on premise to the cloud. From analog to digital. From siloed solutions towards highly integrated ones. This migration changes the requirements of the enterprise and the types of tools it would require.

I think we will end up with the Programmable Enterprise. One where the software used is highly integratable. Many of the early trends we now see in CPaaS will trickle down and find their way across all enterprise software.

Want to figure out exactly what each vendor is doing in each of these future trajectories? You can purchase my CPaaS Vendors Comparison.

