bloggeek

The leading authority on WebRTC

Twilio Signal 2021: A Pivot from CPaaS to Customer Engagement Platform


Twilio Signal 2021 defines Twilio as “API”, “programmable”, “platform” and “customer engagement”. Here’s how it intends to compete in its many markets.

Twilio Signal 2021 is when Twilio officially pivoted from CPaaS to a Customer Engagement Platform. This is the reason Twilio acquired Segment last year, and the explanation of how it intends to leverage that acquisition.

Every year, I put time aside for Twilio Signal. Either in person or remote, going through the sessions and paying extra attention during the keynote. Over time, this has developed into a comprehensive view of Twilio and a set of research resources about it that I’ve put up. It is now time to review what we got at Twilio Signal 2021.

Twilio Signal Keynote 2021

Twilio didn’t put the keynote for Signal 2021 on YouTube (yet), but they did have it as part of their all-day Signal TV session. The video below will get you the keynote, which was around 90 minutes long:

As events go, Twilio Signal 2021 was quite a good experience for a virtual event. It was a bit hybrid, but most of the focus and action took place on the virtual side of it (or at least felt that way for me as a virtual audience).

Defining Twilio in 2021

Twilio never liked or used the term CPaaS. I am not really sure why.

The Twilio pivot

There were 4 words that came up time and time again during the keynote, and I think they are the center of what Twilio gravitates around today: “API”, “programmable”, “platform” and “customer engagement”.

Everything Twilio does can be found around these words, and I believe also every type of adjacent business they will try to go after will have two or more of these words in them in one way or another.

Twilio tried to show this shift and to move away a bit from APIs. It will take more than a single Signal event to do that.

Jeff Lawson, Co-founder and CEO of Twilio, started by presenting the idea of Customer Engagement and ended the keynote with the Customer Engagement Platform taking us in a complete circle around it.

Why did Twilio pivot now?

Twilio is the leader in CPaaS. It has been so for many years now, defining and redefining what CPaaS is. Twilio is also ahead of all of its competitors. Way ahead. It acts as a best of suite provider, which covers most if not all of what CPaaS is, with depth of functionality in many of its offerings.

As such, it sees and knows the market. It also knows the market’s limits. Which means it understands its estimated growth. It had to pivot and start eating up more adjacencies to continue growing at an accelerated rate. But there probably aren’t enough adjacencies it can go after that can be defined as CPaaS or as communication APIs. So they went up the food chain, marketing customer engagement as their target.

How Twilio’s breakout acquisitions into email and customer data enabled the pivot to Customer Engagement

Twilio’s reasoning for doing it now?

  1. Size of the market. The communication market has been said to be $1T. Twilio believes it is much bigger, due to the slower shift of communications towards the cloud and the fact that communication is now used in new ways that were not accounted for in the original market sizing made by analysts
  2. Architectural shift. The shift to the cloud. This one is driven by customers who need to do more, faster and more flexibly. Legacy vendors can’t do it, while Twilio as a cloud native vendor can offer such capabilities
  3. A focus on “proactive”. Most use cases in business communication so far have been reactive in nature. Now they are a lot more proactive. That shift requires new capabilities, ones that require access to more data and being smart about it

To be frank, the architectural shift as well as the move from reactive to proactive have been industry themes for over 10 years. The pandemic simply accelerated these changes, and probably accelerated Twilio’s own pivot. It is also a new language that Twilio is now speaking, so we hear it from them as well.

Twilio by the numbers

Each time, Jeff starts his keynote with numbers, showing off Twilio’s size. It is interesting each time to see which numbers he shares and highlights at the beginning of the keynote. This year?

Twilio Signal 2021 numbers versus 2019 & 2020

What numbers did Twilio share in the beginning of its keynote this year versus previous years?

|                      | 2019       | 2020     | 2021                       |
|----------------------|------------|----------|----------------------------|
| Customers            | 160,000    | 200,000+ | 240,000+ in 180+ countries |
| Text messages        | –          | –        | 128B (100% growth)         |
| Emails               | –          | –        | 1T (5.8B single day peak)  |
| Calls                | –          | –        | 25B                        |
| Flex interactions    | –          | –        | 0.5B                       |
| Segment data events  | –          | –        | 10T                        |
| Interactions         | 750B       | 1T       | –                          |
| Unique phone numbers | 2.8B       | 3B       | –                          |
| Calls/minute         | 32,500     | –        | –                          |
| Peak SMS/second      | 13,000     | –        | –                          |
| Email addresses      | 3B/quarter | 50%      | –                          |
| Video minutes        | –          | 3B       | –                          |
| Developers           | 6M         | –        | –                          |

This is in line with its pivot, as many of the original numbers aren’t even mentioned.

So… Twilio is now even bigger, and it is pivoting.

  • Customers came first. Not as a number, but as logos, showing how strong and diverse Twilio’s customers are
  • It was important for Jeff to share that these customers include startups, enterprises and ISVs – Twilio isn’t catering only to startups
  • I think it was the first time Twilio shared the countries of origin for its customers. 180 of them. With anywhere between 195 and 249 countries in the world (depending on who is counting), that’s quite impressive. The reason to share this number? To signal that Twilio isn’t only big, it is big everywhere (i.e., outside the US)
  • Text is still the most important thing for Twilio. Not as SMS, but as “text” – omnichannel. We will see later that this still means SMS
  • For calls, Twilio shared the number of calls and not peak, with 25B as that number
  • Flex interactions. For the life of me, I still can’t understand what interactions are, and probably no one does. Twilio simply wanted to say “Flex is real and it is big” – to remove the doubt about the business success of Flex in the contact center space
  • Segment data events are… as bad as Flex interactions as numbers go – I don’t understand what that means. But saying 10T is always good, cementing Twilio’s “dominance” in the CDP (Customer Data Platform) space that Segment belongs to

Twilio and social good

I haven’t added the social good related numbers that Twilio shared not because they aren’t important, but because they require a separate mention.

Twilio made the decision years ago to be a company that does good in the world. It also decided to put its money where its mouth is, through its twilio.org operation and its shift to become a diversified company.

Time is spent each year at Signal during the keynotes as well as in specific sessions for social good, and this year was no different.

Twilio and partnerships

Jeff mentioned the strategic partners of Twilio at the beginning as well. These are getting more important to Twilio as it grows and shifts towards customer engagement.

Twilio dogfooding

Twilio is dogfooding its own products. For Twilio Signal 2020 and 2021 it has been hard at work building its own hybrid events platform. Still at its early stages but quite commendable.

Each year, additional pieces of the Twilio building blocks are being used to create these events. It will be interesting to see if in 2022 they will continue with this trend or go to a live-only event. Another question is if and when they will productize this as a programmable events platform.

The Pivot: Twilio Customer Engagement Platform

After the numbers it was time for the pivot. This is where Twilio moved away a bit from its roots in communications towards customer engagement. The explanation: Twilio now isn’t only about communications but about all experiences with customers. Customers “drove” Twilio there, which led to the creation of Twilio’s Customer Engagement Platform.

Setting the stage

Two things here:

  1. Twilio isn’t only about CPaaS anymore
  2. Twilio focuses on communications of business with customers. They aren’t after the UCaaS market in any way

Twilio ignores UC and pivots to customer engagement

If you look at the communications market diagram above which I like using, then Twilio encompasses two of the three domains. The difference now is that it is vying towards the CRM part with its new story of a customer engagement platform.

The pillars of Twilio’s Customer Engagement Platform?

From here on, the keynote was focused on showcasing everything revolving around customer experience with trust, scale, reliability and compliance as the main themes.

FUDing the enterprise

To hammer the message through, Twilio decided to harness the “digital giants”. In its mind, these are Amazon, Google, Netflix and Facebook. An odd choice, as Apple and Microsoft would be “gianter” than Netflix…

The reason behind this is that these companies make the best use of customer data to improve their engagement with their customers, providing a singular, cohesive view of them.

Logic states that these digital giants have grown with the pandemic because they understand their customers better, and other vendors need to follow suit or be gobbled up by these digital giants.

Now that we want to be like them, we need the technology to do that. Amazon didn’t buy its CRM from anyone, it built it. It fed it with the data needed. And so should you, dear vendor – you can’t rely on an existing CRM – you will need to build it. And just by chance, Twilio Flex is what you need to build it (wink wink).

Oh, but it isn’t Twilio Flex. It is actually Twilio Flex + Segment + machine learning.

To hammer that in, Jeff made sure you know that you don’t want the digital giants as your partners when it comes to your customers: Amazon taking a cut of each purchase, the Apple tax, Facebook and Google auctioning user attention via ads. You, dear vendor, need and want to own your customer relationship – directly:

Now that we’re all warmed up, it was time to share and explain what Twilio Customer Engagement Platform really is.

The Twilio Customer Engagement Platform

Twilio’s new Marketecture: Twilio Customer Engagement Platform

Jeff went through the platform’s components, which sit well with Twilio’s current set of product offerings and acquisitions.

1. Channels

Channels are the basic Twilio building blocks. That’s roughly the CPaaS part of Twilio:

The purpose is to be where the customer is.

Messaging and Voice are what Twilio is focused on. Ads were not mentioned anywhere else. Email is the SendGrid acquisition. And Video… well… that’s almost the only place it appeared during the keynote (more on video later).

2. Engagement Apps

These are the higher level programmable applications that Twilio is offering:

  • Twilio Flex for support (announced 3 years ago at Twilio Signal)
  • Twilio Frontline for sales (announced a year ago at Twilio Signal, no new announcement around it in the keynote)
  • Twilio Engage for marketers (announced later in the keynote)
  • Custom apps are the ones you build yourself on top of Twilio’s CPaaS offering (their Channels)

3. Personalization

Segment…

This is why Twilio acquired Segment a year ago, and this is where it is taking Segment next.

The reason behind acquiring Segment was to pivot towards customer engagement and provide a larger offering to larger enterprises.

As Jeff said it, this is about engaging customers in real time at scale – that’s the focus of Segment.

From here, the keynote went to specific product announcements.

Twilio Signal 2021 keynote announcements

During the keynote, several official announcements were made. There were others that didn’t make it into the keynote itself, which goes to show where the main focus is.

Here are the things announced in the keynote:

  1. Regional Twilio – running the Twilio stack and connecting to it over different geographical regions
  2. Twilio MessagingX – a rebranding of its SMS and omni-channel offering
    1. TrustHub – managing compliant phone numbers
    2. Google Business Messaging – support for Google Business Messaging
    3. Content API – new API for managing messages across channels
  3. Twilio IVR Now – helping contact centers migrate from on prem IVRs to the cloud
  4. Twilio Intelligence – a new business process automation platform for the contact center
  5. Twilio Flex
    1. Twilio Flex ONE – single API for multiple channels in Flex
    2. Twilio Flextensions – marketplace for partner extensions and implementations for Flex
  6. Segment
    1. Twilio Engage – marketing cloud engagement app for marketers

Regional Twilio

Jeff introduced this first and explained that this was their biggest architectural change.

Twilio switched from a single US based data center to enabling running the Twilio stack from multiple regions. A customer can potentially choose where he wants to connect to Twilio and where he wants his data to reside.

The main difference is lower latency on API calls if sent to the same region, but mainly the ability to choose where to run and store the data.
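For a sense of what this looks like from code, here is a minimal sketch assuming the twilio-node helper’s region and edge options; the specific region (‘au1’) and edge (‘sydney’) values, the credentials, the phone numbers and the TwiML URL are illustrative placeholders, not a statement of which products are enabled where:

```typescript
import twilio from 'twilio';

// Minimal sketch, assuming the twilio-node helper's region/edge options.
// Region ('au1') and edge ('sydney') values are illustrative only.
const client = twilio('ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX', 'your_auth_token', {
  region: 'au1',  // where the API traffic is processed and data resides
  edge: 'sydney', // the network edge the request enters Twilio through
});

async function placeCall(): Promise<void> {
  // Phone numbers and TwiML URL below are placeholders.
  const call = await client.calls.create({
    to: '+61400000000',
    from: '+61400000001',
    url: 'https://example.com/twiml',
  });
  console.log('Call created in-region:', call.sid);
}
```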

The actual deployment of this is going to happen in stages, with a growing number of locations as well as products enabled. This will start with two new regions – Australia and Ireland – covering Asia and Europe by year end for Twilio Voice, while Twilio Segment can store its data in Europe.

The main reason for this is the growing need to support regional data storage to meet regulation in different countries and the need to entice larger enterprises to use Twilio.

This was announced before the explanation of the Customer Engagement Platform, but I decided to place it here, as part of the announcements of the keynote.

Twilio MessagingX

The first announcement after introducing Twilio Customer Engagement Platform was Twilio MessagingX – the Channels layer in the new marketecture. This is also where the heart of the Twilio CPaaS solution lies.

It started nice. Soumya Srinagesh, Twilio’s VP Messaging Exchange, shared her big number:

Somehow, it differed from Jeff’s by 28B. I am sure there’s a good explanation, though either way, 100B is a large enough number.

SMS centered, but evolving

For Twilio, messages are still SMS. It wasn’t said out loud, but it was hinted strongly enough throughout the session, based on the announcements and the analyst briefings for Twilio MessagingX:

During the analyst briefings of Twilio Signal 2021 the above slide was shared. I like it because it says a lot about how Twilio sees things in the messaging space. I also like it because of the way things are arranged.

Here are my immediate insights from it:

  • SMS is the biggest channel by far. Everything else is just noise
  • Whatsapp comes second, and then Facebook Messenger
  • RCS is puny (it is still dead before arrival)
  • All of the above is true because Twilio deals with business to consumer communications
    • Until now it was mostly business to consumer
    • Whereas the future is in conversations where consumers initiate more of it, where social networks and Apple/Google are more important
  • It also doesn’t take into account communications that aren’t business to consumer – business to business and plain person to person – which may happen on other channels

What is Twilio MessagingX?

So what exactly is Twilio MessagingX?

It looks at messaging not from the API building block level, but rather from 3 different perspectives, each with its own set of focus and investments: Trust, Quality and Choice.

To be clear, all CPaaS vendors strive to do that. Twilio is one of the few that are big enough with economies of scale to really deliver it, and do so with programmability in mind in all of the possible layers.

Trust

To handle trust, mainly deliverability and compliance, Twilio announced TrustHub.

TrustHub is all about compliant phone numbers (did we say SMS?)

It isn’t as if other CPaaS vendors don’t offer compliant phone numbers. TrustHub does that by enabling access to it via APIs as well, making it… programmable? More flexible?

The intent at the end of the day is to have messages pass unfiltered and not get blocked by carriers. Especially now, when our phones’ spam folders for SMS and voice are full of such numbers and messages.

This initiative is starting with the US market and will expand elsewhere.

Quality

This is about deliverability by selecting which carriers to use to route messages, and figuring out bad connections. Twilio does that proactively (other CPaaS vendors do or say they do as well).

Not much else was said about it during the keynote, but this is where many of its acquisitions and investments in communication providers, such as Syniverse earlier this year, come into play.

This is a topic for a separate future analysis though.

Choice

Choice is omni-channel. The ability to send messages to users on the channels they prefer.

There were two announcements around choice that were made:

1. Google Business Messages

Twilio already had SMS, Facebook Messenger and Whatsapp. Now they added support for Google Business Messages – the ability of customers to start a conversation with a business directly from a Google search result or a map listing.

Interestingly, Twilio still has no Apple Business Chat support. Probably because Apple doesn’t want to deal with generic CPaaS vendors just yet.

2. Content API

Each messaging channel has slightly different rules you need to deal with. The new Twilio Content API lets you write a message once and deliver it on whatever channel, with Twilio taking on the headache of matching the message you want to send to the way each channel likes it.

As messages become more complex, requiring the user to take actions for example, such an API becomes a nice add-on.

For the most part, it feels like a utility that reduces a lot of the headache of a developer.

Twilio Voice and IVR Now

This was the first time voice was discussed. It was preceded by this nice number:

We had 25B calls earlier, and now 36B voice minutes. If both relate to voice, then that’s roughly 1:26 minutes per call on average. Transactional calls are the main focus of Twilio.

Not much more has been said or announced about Twilio Voice directly. The only thing was IVR Now, with about a minute spent on explaining it:

IVR Now seems to be a program designed to assist enterprises in migrating their VoiceXML from on-premises IVRs to Twilio’s IVR. If I had to guess, this is about offering professional services either by Twilio directly or via partners.

The reason for sharing this during the keynote was to get enterprises listening in to talk to Twilio about it – there still isn’t anything on Twilio’s website about this program…

Other than that, it felt out of touch with the rest of the keynote.

Twilio Intelligence

Al Cook, VP & GM of Artificial Intelligence, was the one introducing Twilio Intelligence. Al was the one leading and announcing Twilio Flex a few years ago, and this in a way is an extension of it.

The premise of Twilio Intelligence is the need to get from voice to data to meaning.

Twilio Autopilot was released to beta in 2018 and GA’d during Twilio Signal 2019. Interestingly, Twilio Intelligence is a platform and not a product (which means it probably still relies on Twilio Autopilot underneath).

What is included?

  • Driven by conversations
  • Twilio’s own speech transcription engine and language understanding capability
  • The transcription engine itself was built by Twilio, not using third parties
    • This reduces the price points for Twilio and increases their ability to deliver a specialized solution
    • The data used to train the engine was labeled according to the types of data and calls that Twilio sees with its customers
    • This leads to accuracy higher than 90% (based on Al’s explanation)
  • The Twilio transcription engine is included in the Intelligence platform but can also be used as a standalone API
  • Accents were mentioned but not languages, so this is probably English only at this point in time
  • The intelligence part comes with language operators which can be trained by the vendors themselves

A view of the language operators of Twilio Intelligence as implemented as part of Twilio Flex

Here’s what it means that Twilio Intelligence is a platform:

  1. This isn’t a specific product, but a mix of multiple Twilio products and capabilities
  2. Twilio voice recordings will now offer transcriptions, most probably with diarization based on the channels in the call
  3. Segment stores the data
  4. Twilio Studio is used to manage and automate decision trees based on the language operators
  5. Twilio Autopilot or something newer/different is used to sift through that data to get to the understanding part of it
  6. Twilio Flex glues all of the above together with the application-level implementation of it all

The demo was quite interesting, so I decided to share the direct pointer to it in the keynote here, as that’s easier than explaining it:

What I think:

  • This is the holy grail of call centers
  • Being able to understand conversations at scale
  • Automating proactive actions
  • Doing things intelligently

It is hard work, and it will be interesting to see if Twilio nailed it this time around and what the next iteration of this will look like.

Where and when?

Now in limited private beta. A broader private beta in early 2022.

English only for now. Voice based for now.

Twilio Flex

Twilio Flex launched 3 years ago. At the time, it was questioned if this would be successful or not. To some extent, it still is. The interesting thing is that the same was said about Amazon Connect, which took about 3 years to mature enough to show its size in the market.

Sateja Parulekar, Head of Contact Center Solutions at Twilio made it a point to explain that:

  1. Large contact centers are already using Flex
  2. Flex is the fastest growing product at Twilio (though no specific numbers around size were given, besides the 0.5B interactions at the beginning)

There were new announcements around Flex, mainly Flex ONE and Flextensions.

Flex ONE

Flex ONE is about adding new channels to the Flex contact center with a single API. Today that includes voice, messaging (including Whatsapp), chat and email.

The end result is one page holding all conversations across all channels with the customer.

Flextensions

Flextensions are pre-built extensions to Twilio Flex. To me it sounded much like Zoom Apps or application directories of other enterprise tools.

This builds on top of the partnerships that Twilio has been working hard on, which were explained at last year’s Signal 2020 when they discussed the Twilio Flex ecosystem. It is the right move for the Flex platform.

From a product perspective, the future of Flex lies in its integration with Segment. This is what Twilio Intelligence is most focused on, as we’ve seen in its introduction and demo.

Segment

Peter Reinhardt, GM of Twilio Segment came to explain two things:

  1. What is Segment and why Twilio acquired it
  2. Announce Twilio Engage

What is Segment and why Twilio acquired it

Segment is about collecting customer data from multiple sources and making it available as the single source of truth to wherever the business needs that data – all in real time.

Businesses store data about customers in many different places. With the migration towards cloud and SaaS, the number of these places is growing fast. I know… my own small business running this website and my courses has its own share of SaaS vendors that I am using, all cobbled together with half-made integrations and knit together with this masking tape called Zapier. It works. For my single person small business. Somewhat (I have tons of things I’d love to have better integrated, but don’t have the time or inclination to do – not enough ROI in it).

For real businesses, not like mine, the problem is a lot bigger and a lot more important to solve. Especially if… you want to be like the digital giants Jeff talked about at the beginning of the keynote and Peter made sure you remembered.

But back to the why:

  • Businesses need a glue for their customer data. And Segment is a nice glue. A super glue
  • Twilio does communications APIs. And is going after businesses, especially where businesses need to communicate with customers
  • So the data used to decide if and how to communicate resides in Segment, or gets pushed to Segment from Twilio
  • A win win if you could integrate these two together

And we’ve already seen glimpses of it with Twilio Intelligence earlier on.

I think Segment was the most interesting acquisition of Twilio so far. It isn’t only about closing a gap on something they don’t have and need. It isn’t even going after a close adjacency. It is about being able to double down on customer engagement… and building a platform for it.

Which is exactly where Jeff started and where the keynote ends.

Twilio Engage

Twilio Engage was the last announcement. This is the new engagement app that Twilio decided to launch. Flex is for support, Frontline is for sales and Engage is for marketers. This is the marketing cloud offering of Twilio, built on top of Segment.

It is available in pilot now and as GA in Q1 next year.

Not much else was explained or shared about this, and the demo was mostly a concept of what can be done with it. Next year’s Signal event will probably show the flashy UI that Peter said was less important than the data.

Announcements that didn’t make it into the keynote

Video. IoT. Frontline. SendGrid.

Probably a few others that I missed.

I’d like to discuss 2 of these announcements here in brief.

Twilio Video Insights

Video isn’t (and was never) top of mind for Twilio. They support it, but somehow it feels like a second class citizen most of the time: Twilio WebRTC Go was announced at Signal 2020 to give a semblance of progress with video. It is a free peer-to-peer video service from Twilio that is limited in scale. It got some increased capacity this year, especially for Signal 2021. Nothing to write home about (I already discussed these free WebRTC video APIs at length recently).

What was announced was Twilio Video Insights and Twilio Live, both very different from each other.

Twilio Video Insights collects WebRTC and other statistics off of your calls done over Twilio Programmable Video, to create a dashboard view of media quality.

This is similar to what we do at testRTC with our watchRTC product.

A demo was shown in one of the sessions of Twilio Signal.

For me this validates our own watchRTC product, as Twilio saw the need to offer that out of the box as part of its service. That said, if you need something like this (for Twilio, another CPaaS vendor or your own infrastructure), then come check for yourself which tool is most suitable for your needs.
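Under the hood, dashboards like these are built on top of the standard WebRTC getStats() API. Here is a minimal sketch of the kind of periodic sampling involved; it uses generic browser APIs only (not Twilio’s or testRTC’s SDKs), and the collector endpoint is a hypothetical placeholder:

```typescript
// Minimal sketch: periodically sample inbound video stats from a peer connection
// and ship them to a hypothetical collector. Standard browser APIs only.
declare const pc: RTCPeerConnection; // an existing connection in your app

async function sampleCallQuality(): Promise<void> {
  const report = await pc.getStats();
  report.forEach((stat) => {
    if (stat.type === 'inbound-rtp' && (stat as any).kind === 'video') {
      const { packetsLost, jitter, framesPerSecond } = stat as any;
      // A real product aggregates these samples into per-call dashboards.
      void fetch('https://example.com/webrtc-metrics', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ packetsLost, jitter, framesPerSecond, at: Date.now() }),
      });
    }
  });
}

// Sample every 5 seconds for the lifetime of the call.
setInterval(() => void sampleCallQuality(), 5_000);
```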

Twilio Live

Twilio Live was announced a bit prior to Signal 2021. Probably in order to give center stage to the Twilio Customer Engagement Platform, where Live (or video for that matter) plays a marginal role, if any.

Here’s what I learned about Twilio Live during Signal 2021:

  • Twilio Live offers “interactive” audio and video
    • “Interactive” because there’s about 2 seconds of latency end-to-end
    • It isn’t WebRTC on the viewer’s end, which can probably be blamed for those 2 seconds of latency
    • The problem with this is that today’s CDN streaming solutions can go down to 5-10 seconds, and with further optimizations of their existing technology stacks down to 2 (using LLHLS for example)
    • The competition from WebRTC streaming vendors supports subsecond latencies, usually around the 500 milliseconds mark
    • CDNs are probably cheaper. WebRTC streaming vendors will probably be on par with Twilio’s pricing
    • Main reason for selecting Twilio here is if you’re using the Twilio stack elsewhere as well, but it might not be enough if what you are looking for is real interactivity
    • Yes, 2 seconds delay is great for most use cases, but not for all of them
  • It reaches millions of users on a single stream
    • I’d estimate that Twilio Live runs like a traditional CDN streaming service
    • It sends data over TCP (using HTTPS or a secure WebSocket), so there’s no packet loss, and buffering is added to deal with potential retransmissions
    • It probably also does ABR (adaptive bitrate), to deal with different bandwidth availability of different users
  • Twilio Programmable Video Group Room is used as the source of the content
    • Which means the broadcasters are using WebRTC
    • Since a single outgoing stream is sent towards Twilio Live, this gets mixed and “recorded” and then sent to the audience. All this is probably done by a headless chromium instance in the cloud somewhere
    • The fact that the content is mixed means that all viewers can only see the exact same layout. Less flexible, especially for the interactive type of use cases with several broadcasters

It is an interesting route that Twilio took for its broadcasting service. I am not sure how well it can compete with other CPaaS vendors who are clocking 100s of users or more per single WebRTC session. And it is hard to see this as an alternative for those using CDN streaming services already.

What will be interesting to see is how vendors accept this product and its position in the market – will this be good enough or even perfect for certain customers that can’t find the right solution for their broadcasting needs elsewhere?

What Twilio isn’t

After writing down this longform article and analysis of Twilio Signal 2021, I think the most important part is what wasn’t said. And that’s what Twilio isn’t.

I have long suggested and thought that CPaaS, CCaaS and UCaaS are going to merge as the lines between them are blurring. Vendors in each of these segments are vying towards the others through new product announcements and acquisitions.

Twilio went after CCaaS with Flex. It only made sense it would move into UCaaS at some point, being a comfortable adjacency in communications.

But it didn’t.

It went after customer engagement. Acquired Segment and doubled down on this route – making a splashy announcement of it at this Signal event and keynote.

Twilio is all about businesses communicating with customers.

Twilio is a lot less about people collaborating with each other in a business. Why? Because that’s where the focus of UCaaS is, and a lot of that focus relies on a slightly different set of requirements and roadmap.

This is also why video is getting less attention by Twilio for example.

What’s next for Twilio?

I don’t really know.

This can be seen as a pivot, but also as the next step in Twilio’s evolution.

Twilio is surprising with the way it handles itself in the market, at least for me.

If I had to bet, I’d say that the next 2-3 years are going to be more of the same. Twilio will work on its current set of engagement applications, pouring data from the Segment CDP into it, and fitting its solutions for sales, support and marketing. Obviously, developers are still an important part of all of this.

I wouldn’t expect Twilio to go into additional adjacencies in the API domain or to go after unified communication related use cases either. At least not now. They have their hands full going up market and out of their comfort zone of pure communications.

The post Twilio Signal 2021: A Pivot from CPaaS to Customer Engagement Platform appeared first on BlogGeek.me.

A year of WebRTC Insights

Mon, 10/25/2021 - 12:30

WebRTC Insights is turning out to be fun to create and super useful to our clients looking to navigate the world of WebRTC.

Philipp Hancke and I started this new thing called WebRTC Insights a year ago. We work well together, so we simply looked for what we could do beyond the WebRTC codelab, which was and still is a fun project.

WebRTC Insights is meant to help vendors sift through the technical (and non-technical) information that is out there and ever changing around WebRTC. Anything from bugs found and important changes in the WebRTC implementation to security issues raised, and many other topics.

The idea? If you are a developer who uses WebRTC on a daily basis and relies on it, we can reduce the time you spend on finding what can bite you in the back when you weren’t looking. And we can definitely reduce the risk of that happening.

A year has gone by. The service evolved through this time, as we added more insights into it. Time to look at what we’ve done.

WebRTC Insights by the numbers

We started small. The first WebRTC Insights issue looked at 6 issues, 7 PSAs and 2 market insights. 4 pages in total. Now we’re at 15-20 issues on average (twice as many when a Safari release happens) and 10 pages (or more).

In numbers, over the year this turned out to be:

26 Insights issues, 331 issues & bugs, 120 PSAs, 17 security vulnerabilities, 74 market insights and 185 pages. Phew…

Bugs

In the past decade we have had more than 13,000 issues filed against libwebrtc, Google’s implementation of WebRTC that we all use in Chrome (and all other browsers in one way or another), with close to 5,000 of them external bug reports. In addition to that, there are close to 2,000 external Chromium bugs related to WebRTC.

WebRTC is a complex piece of software and staying on top of it requires quite some effort. While the development activity on WebRTC is much lower these days (at a third of the peak change rate back in 2017) there is still a surprising amount of issues we have to look at.

WebRTC Insights started from conversations between us about WebRTC issues and the challenges they bring. We have long looked at and discussed bugs, but this happened over chat and we never wrote it up. Nowadays we write up a summary, our thoughts and the potential impact each bug has. Quite often we learn something from it.

In the process we actually created an annotated list of issues that we can then refer to when we encounter new issues. So when Tsahi complained about an increase in video jitter statistics recently, Philipp just pointed him to the issue where we discussed this topic (you see, Tsahi’s memory isn’t what it used to be).

Mailing lists and PSAs

“Public Service Announcements” or PSAs are a way for the WebRTC team (and Philipp) to communicate breaking changes in WebRTC. They range from changes to the C++ APIs to the plan-b deprecation and typically require action from developers using WebRTC in their applications.

We also list WebRTC-related Intent-to-ship from the Chromium process. This is a mandatory step in the process to launch WebRTC features that require Javascript API changes. In the last year we have mostly seen changes related to screen sharing which then turned into features of Google Meet – yet were available to other users of the platform as well.

Last but not least we do monitor the W3C working group and what happens there as it has a long term impact on where WebRTC is going.

The crazy profession syndrome: WebRTC trials in Chrome

WebRTC uses field trials in Chrome to roll out changes that have some technical risk. We identify them, which gives us insight into what might be a possible root cause for issues that are hard to reproduce locally. The best recent example was this report by Facebook, where an experimental change to reduce the noise during Opus DTX caused a large A/V desync issue. We had been tracking the experiment for a couple of weeks at that point.

Security patches in WebRTC

We keep track of WebRTC related CVEs in Chrome (17 in the last twelve months) and determine whether they only affect Chromium or whether they affect native WebRTC and need to be cherry-picked into forks of the native library.

Where is the market headed?

This part is the bird’s eye view that we offer. The rest of the insights are the low level details developers need. Here, we look at the bigger picture of what WebRTC is and the market forces around it.

We bump into tweets, posts, LinkedIn messages and other articles out there – and when we feel they are relevant and important to your work, we mention them. And explain where we see this trend headed and what you should be aware of.

The market insights are designed and handpicked for the clients we serve in WebRTC Insights.

We’re evolving

Over time, we’ve evolved the service.

Security and Chrome trials were added later on. We are now experimenting ourselves with short video explainers of each libwebrtc release (=once a month) and its implications to developers. We got some great feedback on it, so we’re likely to keep it as part of our format.

There are now also 3 different plans for WebRTC Insights:

  • Light – the biweekly insights email
  • Premium – Light + monthly brainstorming session
  • Exclusive – Premium + unlimited access to courses

Want to join us for the ride this coming year?

To learn more, check us out at WebRTC Insights.

You can leave us a message there to get a sample copy of one of our latest Insights issues.

The post A year of WebRTC Insights appeared first on BlogGeek.me.

Managed WebRTC TURN: The need for speed

Mon, 10/18/2021 - 12:30

What the announcements of Subspace and Cloudflare on their Managed WebRTC TURN services mean for the industry.

In the past couple of months we’ve seen two new entrants to the managed WebRTC TURN business. After stagnation for many years, this small market niche is becoming interesting. REALLY interesting.

TURN and the WebRTC developer ecosystem

TURN servers are used in WebRTC in order to get your sessions connected if there’s no direct route available. I am not going to go into the technical part of it, but I’d say that without TURN servers, not all of your WebRTC sessions will get connected. You don’t need it for all sessions, but for some, you won’t be able to work without it. They are an essential component that has its own category in my WebRTC Developer Tools Landscape.

At the end of the day, TURN servers act as intermediaries by relaying the media between two points.

Roughly speaking, you have 3 alternatives in how you can get these set up:

  1. Self host. You can install and host your own TURN servers and manage them on your own. In most cases, this will be by using the open source coturn server
  2. Managed. You can use a third party that runs its own TURN servers, giving you access to their servers, paying for the service. Don’t search for free TURN servers – if they exist, then they aren’t worth the money you aren’t paying for them
  3. Everything and the kitchen sink. You could just go with a WebRTC CPaaS vendor. These will give you everything you need, including TURN servers and service. An all in one deal

In this article, I will be ignoring the “everything and the kitchen sink” approach. Not because it is bad, but because if you’re just interested in a managed WebRTC TURN, then you probably want to control a bit more of your own destiny (more on that later).

Challenges of using open source coturn in production

Let’s start with the self hosting approach. The leading choice today is to take coturn, a popular open source TURN server, and deploy it on your own. There are one or two other alternatives, but this is by far the most common one.

The challenge though stems from the fact that for TURN the majority of the issues aren’t around integration or development but rather in configuration and maintenance. As such, it falls into the laps of ops, but requires knowledge and understanding of WebRTC.

The main culprit? The fact that you don’t need TURN for each and every session – and that there are 3 different TURN transport protocols, offering a progressive fallback mechanism.

What does that mean?

You install and configure your TURN server. But how do you test that all went well? Just conducting a WebRTC session will not tell you that. If the session succeeded, is it because it didn’t need TURN or because it used your TURN server properly? And if it did use it properly, was that on all 3 different transport protocols?
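One common way to make this testable is to force relay-only ICE, so a session can only succeed if the TURN allocation works. Here is a minimal sketch; the server URLs and credentials are placeholders for your own deployment, and to verify each of the 3 transports separately you would run it with one URL at a time:

```typescript
// Minimal sketch: verify a TURN deployment by allowing only relay candidates.
// URLs and credentials are placeholders for your own servers.
const pc = new RTCPeerConnection({
  iceServers: [{
    urls: [
      'turn:turn.example.com:3478?transport=udp',
      'turn:turn.example.com:3478?transport=tcp',
      'turns:turn.example.com:443?transport=tcp',
    ],
    username: 'user',
    credential: 'secret',
  }],
  iceTransportPolicy: 'relay', // host/srflx candidates are suppressed, so TURN must work
});

pc.onicecandidate = (event) => {
  if (event.candidate?.type === 'relay') {
    console.log('Got a relay candidate via', event.candidate.address);
  }
};

// Creating a data channel and an offer is enough to kick off ICE gathering.
pc.createDataChannel('turn-test');
pc.createOffer().then((offer) => pc.setLocalDescription(offer));
```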

Configuring TURN is a headache:

  • Testing TURN configuration isn’t straightforward
  • Scaling TURN horizontally may seem simple, but it has its own set of challenges
  • Geolocating TURN servers properly is tough and tricky when you’re small
  • Securing your TURN servers from abuse isn’t hard, but another necessary task. So is monitoring it
  • And then there’s the hacking angle to it, as Slack found out in 2018

Managed WebRTC TURN – the early days

In the early days of WebRTC, developers had two main alternatives:

  1. DIY – building everything on their own, including the installation and configuration of their TURN servers
  2. CPaaS – “outsourcing” all of the WebRTC infrastructure components including their TURN servers to a third party vendor who specializes in it

You either knew what you were doing or didn’t want to know what you were doing.

The initial indication for managed WebRTC TURN service came from two vendors. It started with Xirsys and continued with Twilio.

Xirsys

Xirsys was the first vendor to offer a managed WebRTC TURN service commercially. It was limited to a data center or two when they started, but grew over time.

Today, the Xirsys Cloud service spans 7 regional data centers.

Twilio

Twilio is the most widely known CPaaS vendor out there. It is playing the best of suite game, with its large and growing portfolio of services. One of these products is their Twilio Global Network Traversal Service, a half-hidden product that enables you to leverage their TURN servers for your application without using their other CPaaS and WebRTC products.

At the time of writing, Twilio runs its media over 9 different regions, all on AWS.

Why use a managed WebRTC TURN service?

I guess it is a matter of experience and expertise. Do you really want to deal with questions such as how do you decide which TURN server to connect a user to? How to deal with WebRTC TURN geolocation?

A managed WebRTC TURN service eventually targets the exact pain points and challenges that setting up your own TURN servers pose:

  • Someone else takes care of properly configuring the TURN servers (assuming they know what they are doing)
  • They take care of scaling this for you, so you don’t need to deal with increases in traffic, at least not on the TURN servers
  • You get someone else to decide on geolocation (and do it better than you can for the most part)
  • Inherently, managed WebRTC TURN services secure their service from abuse, so that’s also a given – oh – and they’ll provide you with a nice usage dashboard as well

The best thing about managed WebRTC TURN services?

There’s no vendor lock-in.

Switching from one managed WebRTC TURN service to another or to your own self installed servers is a breeze – just change the iceServers configuration on your peer connections in WebRTC and you’re done. Theoretically, that’s a single line of code change.
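As a minimal sketch of that single point of change (the URLs and credentials below are placeholders, not any specific provider’s):

```typescript
// Minimal sketch: the TURN provider lives in one configuration object.
// Swapping providers, or moving to your own coturn, means editing only this.
const iceServers: RTCIceServer[] = [
  { urls: 'stun:stun.example.com:3478' },
  {
    urls: 'turn:turn.provider-a.example.com:443?transport=tcp',
    username: 'user',
    credential: 'secret',
  },
];

const pc = new RTCPeerConnection({ iceServers });
```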

It is also why I suggest anyone who is building their own WebRTC application to start by using a managed WebRTC TURN service – they can always switch to their own, and the cost of switching next year will be the same as just building it today. And as the lazy person that I am, I will always postpone to tomorrow something that I don’t have to do today.

Managed WebRTC TURN – the post-pandemic version

Then came the pandemic, with its lockdowns, quarantine and the rise in use of WebRTC and any other remote communications technology.

The market stayed roughly the same for managed WebRTC TURN servers, or at least it did until 2021. What happened is that we now have 2 more vendors in this domain: Subspace and Cloudflare. And they are different: they are bigger in the physical footprint they have and they make use of Anycast – an IP addressing and routing scheme used to connect a large set of globally spread servers via a single IP address. This type of a solution also makes things a lot simpler to whitelist when needed.

Subspace GlobalTURN

Subspace offers better connectivity than the open internet. They do that by optimizing the routes your packets go through. What you do is send your packets through their network, which will then figure out the best route.

In 2021, they decided to expand what they are doing to WebRTC as well, offering their GlobalTURN service. With around 100 cities and an Anycast addressing scheme, they offer a global footprint.

For Subspace, this isn’t the first VoIP related product they offer, but it is the first WebRTC related one. Would they move towards hosting media servers as well? I think it is an unlikely path for them.

Cloudflare WebRTC Components

Cloudflare announced their own deployment of a managed WebRTC TURN service called WebRTC Components. Besides it being a TURN service, there’s not much to go by yet.

What we do know is that it relies on Cloudflare’s anycast network spanning 250+ cities.

For Cloudflare, this is the first WebRTC related offering, which was announced alongside a slew of other capabilities, targeted at cloud vendors (their R2 storage which directly competes with AWS S3 for example). There’s a good overview of the disruption path Cloudflare is taking. The WebRTC addition to it is an interesting choice.

Interestingly, I debated the potential of using Cloudflare’s Workers as a TURN service enabler when it was announced. Seems like they decided to build it on their own.

Which managed WebRTC TURN service to use?

That is the question you should be asking yourself.

It isn’t about whether you should use a managed WebRTC TURN service or deploy your own – it should be which managed WebRTC TURN service to select. Why? Because this is super simple to adopt and replace with zero vendor lock-in.

Pricing is important, but also global footprint, latency and quality. Then there are things like actually doing its job – the percentage of successful connections you get with it.

It will be interesting to see if and how Xirsys and Twilio address the threat from the newcomers to this market niche. For Xirsys this should be more worrying than it is for Twilio, as that’s one of their core products, whereas for Twilio it is a small part of what they offer to their customers.

Who would have thought that in 2021 we would see competition and innovation coming to the managed WebRTC TURN space?

The post Managed WebRTC TURN: The need for speed appeared first on BlogGeek.me.

Free WebRTC Video API in CPaaS. Is it worth it?

Mon, 09/13/2021 - 12:30

Are free minutes and accounts in WebRTC video API worth the trouble? I think not. Don’t choose your CPaaS vendor based on their “free” tier.

I am finalizing the 10th edition of my Choosing a WebRTC API report these days. In the past year I’ve heard questions from a few vendors and developers about the free tiers in this space. So as part of this edition, I took the time to sit down and analyze the price plans of the various vendors in the market and create another article for the report (one that is available through the membership site for those who purchase the report).

In this article, I want to shine a light on one aspect of price plans in WebRTC APIs which is the free tier.

Let’s dive into things, shall we?

Free tier is optional

14 out of 24 vendors I looked at practice per minute pricing. Sometimes, they have multiple price strategies, but per minute pricing is the most common – especially on the bigger more widely known vendors.

Out of the 14 vendors, 5 offer free tiers in one way or another. And 2 offer credits – Amazon Chime SDK and Microsoft Azure Communication Services – these two offer IaaS cloud credits to startups as a general practice and their CPaaS/WebRTC offering wraps into these as well (I’ve written about the cloud giants’ effect on the CPaaS market last year).

Not all WebRTC API vendors offer a free tier

Free tiers seem to be almost “random” in who offers them and who doesn’t

Free depends on the plan

Some vendors have free plans that depend on different things.

For Twilio, for example, free minutes come only with their Twilio Video WebRTC Go service, which… amounts to ~$10/month, and offers a limited peer-to-peer experience.

With some vendors, the free plan is actually a limited free evaluation of 1-4 months.

That said, the most popular alternative seems to be free minutes on a paid plan. You give your credit card, and will only be charged if you pass a certain number of minutes in a given month. More on that – in the next section.

Free monthly minutes depend on the plan/feature set you choose/use

It might also be dependent on what you pay (did we say free plan?)

10,000 free WebRTC minutes

Most vendors that give free minutes are giving 10,000 free minutes per month.

Some give less. A few give more. The highest is 30,000 minutes per month.

If your service offers group calls of 10 participants for 30 minutes each time on average, then a single group call will take 300 minutes. That means ~33 such calls a month are free. Or a bit over a call a day.
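Here is the same back-of-the-envelope math as a tiny sketch, using the example numbers above and assuming per-participant minute billing:

```typescript
// The article's example numbers, assuming per-participant minute billing.
const freeMinutesPerMonth = 10_000;
const participantsPerCall = 10;
const callDurationMinutes = 30;

const minutesPerGroupCall = participantsPerCall * callDurationMinutes; // 300
const freeGroupCallsPerMonth = Math.floor(freeMinutesPerMonth / minutesPerGroupCall);

console.log(freeGroupCallsPerMonth); // 33 – roughly one group call a day
```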

This isn’t much. Not even for a small vendor just starting out. To be clear – this isn’t to say that 10,000 free minutes isn’t nice. Just that it won’t get you far.

The number of free minutes offered may seem a lot, but calculated for a use case they aren’t that many

Many small vendors see upwards of a million video minutes a month, so this amounts to 1% or less of their total monthly minutes. Negligible in the long run

WebRTC video free tier? Money Time

Minutes are nice, but how about money? How much money do you actually save with these free minutes?

I did the math. The numbers range between $30-$90 per month. Less than $1,000 per year.

If you are building a business and making your long term plans on the CPaaS vendor to use based on a potential discount of $1,000 a year then you’re doing it wrong.

Why aren’t CPaaS vendors offering higher free plans? Because they have costs they need to cover. Assuming their cost is around 10% of that price point, 1,000 “free” accounts will cost them up to $100,000 a year to maintain. And that doesn’t include the support costs, which are higher.

CPaaS vendors would like to have startups sample and use their service, but they also need to operate as a business and make money. Giving more minutes than they do today probably isn’t going to bring in more paying customers – it will just bring in more free riders that will also leech on their soul and support resources.

Free WebRTC video CPaaS plans are worth less than $100/month

When making your decision on choosing a vendor, ignore that plan in your own business plan

As a CPaaS vendor, decide if you want such a free tier and what type of customers it is going to attract

How do you choose a WebRTC CPaaS vendor?

The answer to this question is definitely NOT through their free tiers or minutes…

To some extent, the decision is made these days via pricing. It is why in this round of my report I’ve included a special article dedicated to pricing of WebRTC calls in CPaaS services. This includes the leading metrics these platforms use for their price plans as well as price ranges for each vendor. For this analysis, I’ve also added Zoom Video SDK as another reference point for pricing.

The report itself introduces a new CPaaS vendor and removes another vendor. It also sports a new feature set structure, one that is geared towards the changes in requirements made due to the pandemic.

This report is used today by:

  • CPaaS vendors themselves, who wish to understand their competitive landscape
  • Enterprises and startups who need to pick and choose a CPaaS vendor to work with
  • Companies who wish to start a CPaaS business or compete through an adjacency type solution
  • Investment firms looking to understand the market and… make an investment decision

This month, until the report gets officially published, there’s a $500 discount. You can use coupon code API2021LAUNCH when you purchase the report.

Learn more about my report

The post Free WebRTC Video API in CPaaS. Is it worth it? appeared first on BlogGeek.me.

How to hire WebRTC developers for your job

Mon, 08/23/2021 - 11:59

Hiring WebRTC developers? Here are some things you need to know and consider, since finding WebRTC experts for a job is challenging.

You’re growing. Obviously. And you have this huge, important, strategic, one of a kind, critical project. And it requires WebRTC. Only thing missing is developers. Or should I say skilled WebRTC developers.

How do you go about finding, hiring and retaining WebRTC developers?

I wrote a short post on LinkedIn the other day about this:

Typical conversation on #WebRTC recruitment

You: “Do you know any developer who can help us with WebRTC?”

Me: “No. Those I know either have a day job they love or are freelancers not looking for work (and almost always fully booked)”

You: “If you learn of a developer available let me know“

Me [Thinking ]: “Join the club at the end of that waiting list…”

Finding developers that know WebRTC is really hard. Seriously.

There’s a lot more demand than supply in this one, and the market is tiny compared to other technologies you need to deal with.

If you’re looking for WebRTC developers you can either:

poach someone from another vendor who does WebRTC. Tricky and expensive

find someone with the inclination and train him on WebRTC

If you’re on that second track of training, I can help you.

This brought with it a request to write this in longform so Philipp Hancke will have a place to refer recruiters to…

yes. Tsahi, please write a blog post so I can have a canned response for recruiters

— Philipp Hancke (@HCornflower) August 5, 2021

Philipp – this one’s for you


Oh – and if you are interested in history, this isn’t a new topic here. I wrote about finding WebRTC developers years ago…

WebRTC developers: A supply problem

The chart above shows a crude comparison between WebRTC usage and LinkedIn profiles. While the pandemic has shown a huge increase in WebRTC usage (=demand) the change in LinkedIn profiles has been relatively moderate (=supply).

Here are the two separate charts showing each data point independently:

LinkedIn profiles showing “WebRTC” in them grew steadily from ~17,000 to 25,000 profiles (47% growth in total), whereas WebRTC usage (calculated as calls to GetUserMedia in page loads) grew from 0.05 to 0.22 (340% growth in total), peaking at almost 0.6 with the pandemic (that’s 1,100% growth).

We’ve got a supply problem with WebRTC. There’s a shortage of developers, architects, product managers, testers and support who are savvy enough with WebRTC. They are all hard to come by, and it is harder still to know what they really know about WebRTC – installing your own Jitsi server and playing with it is different than running it at scale or developing your own SFU media server from scratch.

With this in mind, you can safely assume that one of the most popular topics raised when people talk to me about WebRTC is hiring WebRTC developers – or more accurately, if I can recommend anyone specific.

The challenging skillsets of WebRTC

Why is it that it is hard to find WebRTC developers?

I think it starts from the diagram below:

WebRTC is multidisciplinary by its nature. It is located right between web and VoIP technologies:

This means a developer who needs to handle WebRTC needs to have a good grasp of more than a single field of software development. And this isn’t easy to come by.

There’s one more reason though, and that’s the fact that WebRTC means different things to different people, and isn’t really focused on a single set of skills. Look at the short set of questions I asked years ago about how much WebRTC developers are worth. The answers are mostly around “it depends”, where it depends on what tasks or job description that developer is filling up.

Here are the main areas today that you may need to find different profiles of WebRTC developers:

  • Frontend
  • Backend
  • Mobile
  • Telephony

In each domain, the skillset is slightly different and you will be hard pressed to find a superhero developer that meets all your requirements in all areas.

Hiring WebRTC talent

WebRTC hiring is challenging. If you are looking for talented engineers who know a thing or two about WebRTC, then you are in for a world of pain. Finding them isn’t easy and hiring them is even harder.

Here are the different techniques I’ve seen vendors take when trying to find and hire WebRTC engineers.

WebRTC head-hunting and poaching

You can go head hunting for WebRTC talent. Bear in mind 3 things though:

  1. There aren’t a lot of WebRTC developers out there
  2. Most of them are in cushy jobs not looking to change places
  3. Many of them don’t even go on the open market when they need to look for their next gig. They go through “friends and family”, and since the market has so much pent up demand, this is usually where they will land

There are two approaches here. Let’s call them bottom up and top down.

Bottom up – you find the individual developers that fit the profile you are looking for, and then you reach out to them to see if they are bored enough to consider moving elsewhere

Top down – target a vendor in this space who you think has peaked, or someone who got acquired, or just someone you think is a bit vulnerable and attractive as an employer, and then figure out which developers there are worth approaching to poach

Neither approach is easy. They are time consuming, frustrating and long.

Job boards and job listings

You could use traditional job boards and job listing sites, place the job opening on your website, etc. What you’ll most probably get are generalists with little domain knowledge and expertise in WebRTC. This means most applicants won’t have the WebRTC experience you seek.

The only other option here is to do an ad placement on WebRTC Weekly and/or webrtcHacks – many of the sponsors there use it for job listings, and you can try as well. The main advantage here is that the readership is quite relevant – developers working with WebRTC.

* Note that I operate WebRTC Weekly and am affiliated with webrtcHacks

Hire from an adjacency

This is something I suggest to many of my clients. Hire from an adjacency:

  • Video streaming industry
  • VoIP or traditional video conferencing
  • Telephony
  • Software networking

My favorite is probably finding companies that vanished, for example Polycom Israel. They had a large engineering team in Israel experienced in video conferencing. You can try to find developers who worked there 5-10 years ago and… moved on – often to other domains. And try to get them back. They won’t be experts in WebRTC, but they’ll know a lot about how to handle real time video. And that’s better than nothing.

The same is applicable elsewhere in the world and in other adjacencies.

When hiring from an adjacency though, you will need to be certain the candidate in question isn’t “in love” with how things are done today and has the willingness and openness to learn and grow. WebRTC brings with it new paradigms and challenges, and developers who have partial experience and knowledge from an adjacency need to be open to learning new concepts.

Nurture and grow in-house WebRTC expertise

When all else fails, you’ll need to grow someone in-house or train a new hire that is clueless about WebRTC to become that expert. Not an easy task, but certainly achievable.

WebRTC requires a certain inclination. There’s a need to wrap your head around asynchronous events and programming (lots of await and callbacks). There’s a need to understand codecs and lossy compression mechanisms (at least at the conceptual level). There’s perpetual optimization and fine tuning work that goes with it. Not everyone likes to work in such environments (I thrive in them).

Once you find that person, you will need to train him. Something that again can happen in one of 3 ways:

  1. Throw him into the water. He probably knows how to Google and find his way on the Internet. He will either sink or swim. I believe this involves too much time, risk and wasted effort
  2. Have someone train him. If you have WebRTC developers already, then adding a new one and training him can be done in-house. But that will take time from your developers in creation of materials, training and frustration – they might not even be good at training while being great developers
  3. Put him on a WebRTC training course. There are a few of these out there, so might as well have him enroll in one (or a few of them). I know for a fact that there is a good WebRTC training for developers out there – probably because I author and maintain it…
More than just WebRTC developers

I have only discussed developers so far, but the product life-cycle of WebRTC products involves more than just the engineers who need to understand WebRTC. There are a few more roles to think about:

  • System Architects – they need to understand how different design decisions affect the end results, where the limits are, what architecture alternatives they have, etc.
  • Product Managers – need to speak the language. They especially should be aware of what is or isn’t feasible with WebRTC, and they need to understand the time and cost implications of the decisions they make
  • Testers – if you’re going to test something that makes use of WebRTC, you better know what WebRTC is and what it is capable of…
  • Support and Sales – people are going to ask technical questions. Be it because they got into a pickle and can’t connect or have bad quality. Or because they are buying and want to understand what’s in there

All of these roles need a solid understanding of WebRTC if it is part of the things you are offering in your company.

Can I help?

Yap.

There are several things that I actively do here:

  • Online training courses for developers (and other roles)
  • Assistance in writing job listings
  • Publish your job listings on WebRTC Weekly and/or webrtcHacks
  • Screen candidates based on CVs
  • Conduct technical job interviews with your potential candidates
  • Offer coaching to the WebRTC experts you’re grooming

If you’re interested in learning more, feel free to contact me.

Oh – and don’t ask me if I know someone suitable. You’re likely not the first to ask me that this week.

The post How to hire WebRTC developers for your job appeared first on BlogGeek.me.

Tweaking WebRTC video quality: unpacking bitrate, resolution and frame rates

Mon, 08/02/2021 - 12:30

WebRTC video quality requires some tweaking to get done properly. Let’s see what levers we have available to us in the form of bitrate, resolution and frame rate.

Real time video is tough. WebRTC might make things a bit easier, but there are things you still need to take care of. Especially if what you’re aiming for is to squeeze every possible ounce of WebRTC video quality for your application to improve the user’s experience.

This time, I want to cover what levers we have at our disposal that affect video quality – and how to use them properly.

Table of contents

What affects video quality in WebRTC?

Video plays a big role in communication these days. A video call/session/meeting is going to heavily rely on the video quality. Obviously…

But what is it then that affects the video quality? Let’s try to group them into 3 main buckets: out of our control, service related and device related. This will enable us to focus on what we can control and where we should put our effort.

Out of our control

From my workshop on WebRTC innovation and differentiation

There are things that are out of our control. We have the ability to affect them, but only a bit and only up to a point. To look at the extreme: if the user is sitting in Antarctica, inside an elevator, in the basement level somewhere, with no Internet connection and no cellular reception – in all likelihood, even if he complains that calls aren’t getting connected, there’s nothing anyone will be able to do about it besides suggesting he move closer to the Wifi access point.

The main two things we can’t really control? Bandwidth and the transport protocol that will be used.

We can’t control the user’s device and its capabilities either, but most of the time, people tend to understand this.

Bandwidth

Bandwidth is how much data we can send or receive over the network. The higher this value is, the better.

The thing is, we have little to no control over it:

  • The user might be far from his access point
  • He may have poor reception
  • Or a faulty cable
  • There might be others using the same access point and flooding it with their own data
  • Someone could have configured the firewall to throttle traffic

None of this is in our control.

And while we can do minor things to improve this, such as positioning our servers as close as possible to the users, there’s not much else.

Our role with bandwidth is to estimate it as accurately as possible. WebRTC has mechanisms for bandwidth estimation. Why is this important? If we know how much bandwidth is available to us, we can try to make better use of it –

Over-estimating bandwidth means we might end up sending more than the network can handle, which in turn is going to cause congestion (=bad)

Under-estimating bandwidth means we will be sending out less data than we could have, which will end up reducing the media quality we could have provided to the users (=bad)
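
If you want to see what the bandwidth estimator currently believes, you can peek at getStats(). Here’s a minimal sketch – the field names follow the WebRTC stats spec, but exact browser support varies, so treat this as an illustration rather than production code:

```typescript
// Peek at the sender-side bandwidth estimate WebRTC reports on the active connection.
// Assumes `pc` is an already connected RTCPeerConnection.
async function logBandwidthEstimate(pc: RTCPeerConnection): Promise<void> {
  const stats = await pc.getStats();
  stats.forEach((report) => {
    // The estimate is reported on the nominated candidate-pair, in bits per second
    if (report.type === "candidate-pair" && report.nominated && report.availableOutgoingBitrate) {
      console.log("estimated uplink:", report.availableOutgoingBitrate, "bps");
    }
  });
}
```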

Transport protocol

I’ve already voiced my opinion about using TCP for WebRTC media and why this isn’t a good idea.

The thing is, you don’t really control what gets selected. For the most part, this is what the distribution of your sessions is going to look like:

From my Advanced WebRTC Architecture Course
  • Most calls probably won’t need any TURN relay
  • Most calls that need TURN relay, will do so over UDP
  • The rest will likely do it over TCP
  • And there’ll be those sessions that must have TLS

Why is that? Just because networks are configured differently. And you have no control over it.

You can and should make sure the chart looks somewhat like this one. 90% of the sessions done over TURN/TCP should definitely raise a few red flags for you.

But once you reach a distribution similar to the above, or once you know how to explain what you’re seeing when it comes to the distribution of sessions, then there’s not much else for you to optimize.
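
To know what your own distribution looks like, you’ll need to collect it. Here’s a rough sketch of classifying a connected session by its selected candidate pair – assuming you then ship the result to whatever analytics you use:

```typescript
// Classify how a connected session is actually transported:
// direct over UDP/TCP, or relayed via TURN over UDP, TCP or TLS.
async function classifyTransport(pc: RTCPeerConnection): Promise<string> {
  const stats = await pc.getStats();
  const localCandidates = new Map<string, any>();
  let selectedPair: any;
  stats.forEach((report) => {
    if (report.type === "local-candidate") localCandidates.set(report.id, report);
    if (report.type === "candidate-pair" && report.nominated) selectedPair = report;
  });
  if (!selectedPair) return "unknown";
  const local = localCandidates.get(selectedPair.localCandidateId);
  if (!local) return "unknown";
  if (local.candidateType !== "relay") return `direct-${local.protocol}`;
  return `turn-${local.relayProtocol}`; // "udp", "tcp" or "tls"
}
```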

Service related

Service related factors are things that are within our control and are usually handled in our infrastructure. This is where differentiation based on how we decided to architect and deploy our backend comes into play.

Bitrate

While bandwidth isn’t something we can control, bitrate is. Where bandwidth is the upper limit of what the network can send or receive, bitrate is what we actually send and receive over the network.

We can’t send more than what the bandwidth allows, and we might not always want to send the maximum bitrate that we can either.

Our role here is to pick the bitrate that is most suitable for our needs. What does that mean to me?

  1. Estimate the bandwidth available as accurately as possible
  2. This estimate is the maximum bitrate we can use
  3. Make use of as much of that bitrate as possible, as long as that gives us a quality advantage

It is important to understand that increasing bitrate doesn’t always increase quality. It can cause detrimental decreases in quality as well.

Here are a few examples:

  • If the camera source we have is of VGA resolution (640×480), then there’s no need to send 2mbps over the network. 800kbps would suffice – more than that and we probably won’t see any difference in quality anyways
  • The network might be able to carry 10mbps in the downlink, but receiving 10mbps in aggregate of incoming video data from 5 participants (2mbps each) will likely tax our CPU to the point of rendering it useless. In turn, this will actually cause frame drops and poor media quality
  • Sending full HD video (1920×1080) and displaying it in a small frame on the screen because the content being shared in parallel is more important is wasteful. We are eating up precious network resources, decoder CPU and scaling down the image

There are a lot of other such cases as well.

So what do we do? I know, I am repeating myself, but this is critical –

  1. Estimate bandwidth available
  2. Decide our target bitrate to be lower or equal to the estimate
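
Once you’ve decided on that target, you can enforce it on the sender itself. A small sketch using RTCRtpSender.setParameters() – assuming a single encoding (simulcast needs this per layer), and with `videoSender` being the RTCRtpSender for your camera track:

```typescript
// Cap the outgoing video at a target bitrate (in bits per second).
async function capVideoBitrate(sender: RTCRtpSender, maxBitrate: number): Promise<void> {
  const params = sender.getParameters();
  if (!params.encodings || params.encodings.length === 0) return; // nothing negotiated yet
  params.encodings[0].maxBitrate = maxBitrate;
  await sender.setParameters(params);
}

// e.g. we estimated ~800kbps of headroom for a VGA camera source:
// await capVideoBitrate(videoSender, 800_000);
```
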
Codecs

Codecs affect media quality.

For voice, G.711 is bad, Opus is great. Lyra and Satin look promising as future alternatives/evolution.

With video, this is a lot more nuanced. You have a selection of VP8, VP9, H.264, HEVC and AV1.

Here are a few things to consider when selecting a video codec for your WebRTC application:

  • VP8 and H.264 both work well and are widely known and used
  • VP9 and HEVC give better quality than VP8 and H.264 on the same bitrate. All other things considered equal, and they never are
  • AV1 gives better performance than all the other video codecs. But it is new and not widely supported or understood
  • H.264 has more hardware acceleration available to it, but VP8 has temporal scalability which is useful
  • Hardware acceleration is somewhat overrated at times. It might even cause headaches (with bugs on specific processors), but it is worth aiming for if there’s a real need
  • For group sessions you’d want to use simulcast or SVC. These aren’t available with H.264 and probably not with HEVC either
  • HEVC will leave you in an Apple only world
  • VP9 isn’t widely used and the implementation of SVC that it has is still rather proprietary, so you’ll have some reverse engineering to do here
  • AV1 is new as hell. And it eats lots of CPU. It has its place, but then again, this is going to be an adventure (at least in the coming year or two)

Choosing a video codec for your service isn’t a simple task. If you don’t know what you’re doing, just stick with VP8 or H.264. Experimenting with codecs is a great time waster unless you know your way with them.
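
If you do end up preferring a specific codec, browsers let you reorder what gets negotiated via setCodecPreferences(). A sketch that nudges things towards VP8 – illustrative only, a real implementation should also check what the remote side supports:

```typescript
// Put VP8 at the front of the codec list for this transceiver,
// keeping everything else (rtx, red, ulpfec, other codecs) after it.
function preferVP8(transceiver: RTCRtpTransceiver): void {
  const capabilities = RTCRtpSender.getCapabilities("video");
  if (!capabilities) return;
  const vp8 = capabilities.codecs.filter((c) => c.mimeType === "video/VP8");
  const rest = capabilities.codecs.filter((c) => c.mimeType !== "video/VP8");
  transceiver.setCodecPreferences([...vp8, ...rest]);
}
```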

Latency

How you design your WebRTC infrastructure will affect the latency

While we don’t control where users are – we definitely control where our servers are located. Which means that we can place the servers closer to the users, which in turn can reduce the latency (among other things).

Here are some things to consider here:

  • TURN servers should be placed as close as possible to users
  • In large group calls, we must have media servers
    • If we use a single server per meeting, then all users must connect directly to it
    • But if we distribute the media servers used for a single meeting, then we can connect users to media servers closer to where they are
  • The faster we get the user’s data off the public network, the more control we have over the routing of the packets between our own servers
    • The “shorter” the route from the user to our server is, the better the quality will be
    • Shorter might not be a geographic distance
    • We factor in bandwidth, packet loss, jitter and latency as the metrics we measure to decide on “shortest”

Measure the latency of your sessions (through RTT). Try to reduce it for your users as much as possible. And assume this is an ongoing, never-ending process.
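
Pulling the RTT out of getStats() is simple enough – a sketch, assuming the value is read off the nominated candidate-pair (the spec reports it in seconds):

```typescript
// Read the current round trip time of the active connection, in milliseconds.
async function currentRttMs(pc: RTCPeerConnection): Promise<number | undefined> {
  const stats = await pc.getStats();
  let rtt: number | undefined;
  stats.forEach((report) => {
    if (report.type === "candidate-pair" && report.nominated &&
        typeof report.currentRoundTripTime === "number") {
      rtt = report.currentRoundTripTime * 1000;
    }
  });
  return rtt;
}
```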

Here’s a session from Kranky Geek discussing latencies and media servers:

Looking at scale and servers

There’s a lot to be said about the infrastructure side in WebRTC. I tried to place these insights in an ebook that is relevant today more than ever – Best practices in scaling WebRTC deployments

Device related

You don’t get to choose the device your users are going to use to join their meetings. But you do control how your application is going to behave on these devices.

There are several things to keep in mind here that are going to improve the media quality for your users if done right on their device.

Available CPU

This should be your top priority: understanding how much CPU is being used on the user’s device and deciding when you’ve gone too far.

What happens when the device is “out of CPU”?

  • The CPU will heat up. The fan will start to work busily and noisily on a PC. A mobile device would heat up. It will also start to have shorter battery life while at it. Interestingly, this is your smallest of worries here
  • WebRTC won’t be able to encode or decode media frames, so it will start to skip them
  • On the encoder side, this will mean a lower frame rate. Regrettable, but ok
  • The decoder is where things will start to get messy:
    • The decoder will drop frames and not try to decode them
    • Since video frames are dependent on one another, this will mean the decoder won’t be able to continue to do what it does
    • It will need a new I-frame and will ask for it
    • That will lead to video freezes, rendering video useless

So what did we have here?

  • You end up with poor video quality and video freezes
  • The network gets more congested due to frequent requests for I-frames
  • Your device heats up and battery life suffers

Your role here is to monitor and make sure CPU use isn’t too high, and if it is, reduce it. Your best tool for reducing CPU use is reducing the bitrates you’re sending and/or receiving.

Sadly, monitoring the CPU directly is impossible in the browser itself, so you’ll need to find other means of figuring out the state of the CPU.
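
What you do get, at least in Chromium-based browsers, is a hint from WebRTC itself about why it is limiting your outgoing video. A sketch using the qualityLimitationReason stat:

```typescript
// Check why the outgoing video is being limited right now: "none", "cpu",
// "bandwidth" or "other". Seeing "cpu" repeatedly means you should send less.
async function videoLimitationReason(pc: RTCPeerConnection): Promise<string> {
  const stats = await pc.getStats();
  let reason = "none";
  stats.forEach((report) => {
    if (report.type === "outbound-rtp" && report.kind === "video" &&
        report.qualityLimitationReason) {
      reason = report.qualityLimitationReason;
    }
  });
  return reason;
}
```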

Content type

With video, content and placement matter.

Let’s say you have 1,000kbps of “budget” to spend. That’s because the bandwidth estimator gives you that amount and you know/assume the CPU of both the sender and receiver(s) can handle that bitrate.

How do you spend that budget?

  • You need to figure out the resolution you want to send. The higher the resolution the “better” the image will look
  • How about increasing frame rate? Higher frame rate will give you smoother motion
  • Or maybe just invest more bits on whatever it is you’re sending

WebRTC makes its own decisions. These are based on the bitrate available. It will automatically decide to increase or reduce resolution and frame rate to accommodate for what it feels is the best quality. You can even pass hints on your content type – do you value motion over sharpness or vice versa.
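
Passing that hint is a one-liner on the track – a sketch (contentHint is a browser hint, not a guarantee, and not every browser honors it):

```typescript
// Tell the encoder what matters for this video track:
// "motion" favors frame rate (talking heads), "detail" favors resolution (slides).
function hintContent(track: MediaStreamTrack, hint: "motion" | "detail"): void {
  track.contentHint = hint;
}
```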

There are things that WebRTC doesn’t know on its own though:

  • It knows what resolution you captured your content with (so it won’t try to send it at a higher resolution than that)
  • But it has no clue what the viewers’ screen or window resolution is
  • So it might send more than is needed, causing CPU and network losses on both ends of the session
  • It isn’t aware if the content sent is important or less important, which can affect the decisions of how much to invest in bitrate to begin with
  • Oh – and it makes its decisions on the device. If you have a media server that processes media, then all that goodness needs to happen in your media server and its own logic

It is going to be your job to figure out these things and place/remove certain restrictions on what you want from your video.
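
One concrete lever for the small-tile case: scale down what you send instead of letting the receiver throw pixels away. A sketch using scaleResolutionDownBy – again assuming a single encoding and no simulcast:

```typescript
// Send at a reduced resolution when the remote side only renders a small tile.
async function scaleDownOutgoingVideo(sender: RTCRtpSender, factor: number): Promise<void> {
  const params = sender.getParameters();
  if (!params.encodings || params.encodings.length === 0) return;
  params.encodings[0].scaleResolutionDownBy = factor; // 2.0 = half the width and height
  await sender.setParameters(params);
}
```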

Optimizing large group calls

The bigger the meeting the more challenging and optimized your code will need to be in order to support it. WebRTC gives you a lot of powerful tools to scale a meeting, but it leaves a lot to you to figure out. This ebook will reveal these tools to you and enable you to increase your meeting sizes – Optimizing Group Video Calling in WebRTC

The 3-legged stool of WebRTC video quality

Video quality in WebRTC is like a 3-legged stool. With all things considered equal, you can tweak the bitrate, frame rate and resolution. At least that’s what you have at your disposal dynamically in real-time when you are in the middle of a session and need to make a decision.

Bitrate can be seen as the most important leg of the stool (more on that below).

The other two, frame rate and resolution are quite dependent on one another. A change in one will immediately force a change in the other if we wish to keep the image quality. Increasing or decreasing the bitrate can cause a change in both frame rate and resolution.

Follow the bitrate

I see a lot of developers start by tweaking frame rates or resolutions. While this is admirable and even reasonable at times, it is the wrong starting point.

What you should be doing is to follow the bitrate in WebRTC. Start by figuring out and truly understanding how much bitrate you have in your budget. Then decide how to allocate that bitrate based on your constraints:

  • Don’t expect full HD quality for example if what you have is a budget of 300kbps in your bitrate – it isn’t doable
  • If you have 800kbps you’ll need to decide where to invest them – in resolution or in frame rate

Always start with bitrate.

Then figure out the constraints you have on resolution and frame rate based on CPU, devices, screen resolution, content type, … and in general on the context of your session.

 The rest (resolution and frame rate) should follow.

And in most cases, it will be preferable to “hint” WebRTC on the type of content you have and let WebRTC figure out what it should be doing. It is rather good at that, otherwise, what would be the point of using it in the first place?

Making a choice between resolution and frame rate

Once we have the bitrate nailed down – should you go for a higher resolution or a higher frame rate?

Here are a few guidelines for you to use:

  • If your content is a slide deck or similar static content, you should aim for higher resolution at lower frame rate. If possible, go for VBR instead of the default CBR in WebRTC
  • Assuming you’re in the talking-heads domain, a higher frame rate is the better selection. 30fps is what we’re aiming for, but if the bitrate is low, you will need to lower that as well. It is quite common to see services running at 15fps and still happy with the results
  • Sharing generic video content from YouTube or similar? Assume frame rate is more important than resolution
  • Showing 9 or more participants on the screen? Feel free to lower the frame rate to 15fps (or less). Also make sure you’re not receiving video at resolutions that are higher than what you’re displaying
  • Interested in the sharpness of what is being shared? Aim for resolution and sacrifice on frame rate
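
Here is what acting on these guidelines can look like in code – a rough sketch that trades frame rate for sharpness (or the other way around) on the capture side. Actual constraint support varies per browser and device, so treat it as illustrative:

```typescript
// Slide deck / static screen share: keep it sharp, drop the frame rate.
async function tuneForStaticContent(track: MediaStreamTrack): Promise<void> {
  track.contentHint = "detail";
  await track.applyConstraints({ frameRate: 5 });
}

// Talking heads on a tight bitrate budget: keep motion smooth, lower the resolution.
async function tuneForTalkingHeads(track: MediaStreamTrack): Promise<void> {
  track.contentHint = "motion";
  await track.applyConstraints({ frameRate: 15, height: 360 });
}
```
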
Time to learn WebRTC

I’ve had my fair share of discussions lately with vendors who were working with WebRTC but didn’t have enough of an understanding of WebRTC. Often the results aren’t satisfactory, falling short with what is considered good media quality these days. All because of wrong assumptions or bad optimizations that backfired.

If you are planning to use WebRTC or even using WebRTC, then you should get to know it better. Understand how it works and make sure you’re using it properly. You can achieve that by enrolling in my WebRTC training courses for developers.

Learn more about my WebRTC training

The post Tweaking WebRTC video quality: unpacking bitrate, resolution and frame rates appeared first on BlogGeek.me.

Why you should prefer UDP over TCP for your WebRTC sessions

Tue, 07/06/2021 - 12:30

When using WebRTC you should always strive to send media over UDP instead of TCP – at least if you care about media quality.

Every once in a while I bump into a person (or a company) that for some unknown reason made a decision to use TCP for its WebRTC sessions. By that I mean prioritizing TURN/TCP or ICE-TCP connections over everything else – many times even barring or ignoring the existence of UDP. The ensuing conversation is usually long and arduous – and not always productive I am afraid.

So I decided to write this article, to explain why for the most part, WebRTC over UDP is far superior to WebRTC over TCP.

Table of contents

UDP and TCP

Since the dawn of time (well, of the internet), we’ve had UDP and TCP as the underlying transport protocols that carry data across the network. While there are other transports, these are by far the most common ones.

And they are different from one another in every way.

UDP is the minimal must that a transport protocol can offer (you can get lower than that, but what would be the point?).

With UDP you get the ability to send data packets from one point to another over the network. There are no guarantees whatsoever:

  • Your data packets might get “lost” along the way
  • They might get reordered
  • Or duplicated

No guarantees. Did I mention that part?

With TCP you get the ability to send a stream of data from one point to another over a “connection”. And it comes with everything:

  • Guaranteed delivery of the data
  • The data is received in the exact order that it is sent
  • No duplication or other such crap

That guaranteed delivery requires the concept of retransmissions – what gets lost along the way needs to be retransmitted. More on that fact later on.

We end up with two extremes of the same continuum. But we need to choose one or the other.

TCP rules the web

Reading this page? You’re doin’ that over HTTPS.

HTTPS runs over a TLS connection (I know, there’s HTTP/3 but bear with me here).

And TLS is just TCP with security.

And if you are using a WebSocket instead, then that’s also TCP (or TLS if it is a secure WebSocket).

No escaping that fact, at least not until HTTP/3 becomes commonplace (which is slightly different than running on top of TCP, but that’s for another article).

Up until WebRTC came to our lives, everything you did inside a web browser was based on TCP in one way or another.

UDP rules VoIP

VoIP or Voice over IP or Video over IP or Real Time Communications (RTC) or… well… WebRTC – that takes place over UDP.

Why? Because this whole thing around guaranteed delivery isn’t good for the health of something that needs to be real time.

Let’s assume a latency of 50 milliseconds in each direction over the network, which is rather good. This translates to a round trip time of 100 milliseconds.

If a packet is lost, then it will take at least 100 milliseconds until the one who sent that packet knows about it – the receiver needs to notice the loss and its complaint needs to travel back. Usually, it will take a bit more than 100 milliseconds.

For VoIP, we are looking to lower the latency. Otherwise, the call will sound unnatural – people will overtalk each other (happens from time to time in long distance calls for example). Which means we can’t really wait for these retransmissions to take place.

Which is why VoIP in general, and WebRTC in particular, chose to use UDP to send its media streams. The concept here is that waiting for retransmissions would add delay for the whole duration of the session, reducing the experience altogether, while dealing with lost packets by trying to conceal them causes only minor issues for the most part.

With WebRTC, you want and PREFER to use UDP for media traffic over TCP or TLS.

WebRTC ICE: Preferences and best effort

We don’t always get what we want. Which is why sometimes our sessions won’t open with WebRTC over UDP. Not because we don’t want them to. But because they can’t. Something is blocking that alternative from us.

That something is called a firewall. One with nasty rules that… well… don’t allow UDP traffic. The reasons for that are varied:

  • The smart IT person 30 years ago decided that UDP is bad and not used over the internet, so better to just block it
  • Another IT person didn’t like people at work bittorrenting the latest shows on the corporate network, so he blocked UDP traffic of the encrypted kind (which is essentially what WebRTC media traffic looks like)

This means that you’ll be needing TCP or TLS to be able to connect your users on that WebRTC session.

But – and that’s a big BUT. You don’t always want to use TCP or TLS. Just when it is necessary. Which brings us to ICE.

ICE is a procedure that enables WebRTC to negotiate the best way to connect a session by conducting connectivity checks.

In broad strokes, we will be using this type of logic (or strive to do so):

The diagram above shows the type of preferences we’d have while negotiating a session with ICE.

  • We’d love to use direct UDP
  • If impossible then relay via a TURN/UDP server would be just fine
  • Then direct TCP connection would be nice
  • Otherwise relay via a TURN/TCP or a TURN/TLS server

UDP comes first.
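
In practice that means configuring all the transports on your TURN servers and letting ICE pick, rather than forcing anything. A sketch – the URLs and credentials are placeholders, obviously:

```typescript
// Offer TURN over UDP, TCP and TLS and let ICE negotiate the best working pair.
// Don't set iceTransportPolicy to "relay" unless you have a very good reason to.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: "stun:stun.example.org:3478" },
    {
      urls: [
        "turn:turn.example.org:3478?transport=udp",
        "turn:turn.example.org:3478?transport=tcp",
        "turns:turn.example.org:443?transport=tcp",
      ],
      username: "user",
      credential: "secret",
    },
  ],
});
```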

When is TCP (or TLS) good for WebRTC media?

The one and only reason to use TCP or TLS as your transport for WebRTC media is because UDP isn’t available.

There. Is. No. Other. Reason. Whatsoever.

And yes. It deserved a whole section of its own here so you don’t miss it.

TCP for me is a last resort for WebRTC. When all else fails.

When will TCP break as a media transport for WebRTC?

The moment you have packet loss on the network, TCP will break. By breaking I don’t mean the connection will be lost, but the media quality you’ll experience will degrade a lot further than it would with UDP.

Packet loss due to congestion is going to be the worst. Why? Because it occurs when a switch or router along the route of your data gets clogged and starts throwing away packets it needs to handle.

Here are all the things that will go wrong at such a point:

  • TCP will retransmit packets – since they weren’t acknowledged and are deemed lost
  • Retransmitting them will take time. Time we don’t have
    • For a video stream, in all likelihood, the packet loss will be translated to a request for a new I-frame
    • So the sender will then generate a new I-frame, which is bigger than other frames
    • This in turn will cause more data to be sent over the network
    • And since the network is already congested… this will just worsen the situation
  • Here’s the “nice” thing though. TCP is retransmitting the lost data
    • So we now have data we don’t need being sent through the network
    • Which is congested already. Causing more congestion. For things we’re not going to use anyways
    • It is actually hurting our ability to send out that I-frame the receiver is trying to ask for
  • We’re also running on top of TCP, so there’s no easy way for us to know that things are being lost and retransmitted since TCP is hiding all that important data
    • So the moment we know about packet loss in WebRTC is way too late
    • No ability to use logic like inter-packet delay (that’s smart-talk for figuring out potential near-future congestion, which also feels like smart-talk)
    • And no way to employ algorithms to correct congestion and packet loss scenarios quickly enough

Bottom line – TCP makes packet loss issues a lot worse than they need to be, with a lot less leeway on how to solve them than we have running on top of UDP.

The assumptions TCP makes over the data being sent are all wrong for real time communications requirements that we have in protocols like WebRTC

Time to learn WebRTC

I’ve had my fair share of discussions lately with vendors who were working with WebRTC but didn’t have enough of an understanding of WebRTC. Often that ends up badly – with solutions that don’t work at all or seem to work until they hit the realities of real networks, real users and real devices.

I just completed a massive update to my Advanced WebRTC Architecture training course for developers. In this round, I also introduced a new lesson about bandwidth estimation in WebRTC.

Next week, we will start another round of office hours as part of the course, letting those taking this WebRTC training ask questions openly as well as join live lessons on top of all the recorded and written materials found in the course.

If you are planning to use WebRTC or even using WebRTC, there isn’t going to be any better timing to join than this week.

Learn more about my WebRTC training

The post Why you should prefer UDP over TCP for your WebRTC sessions appeared first on BlogGeek.me.

Why CPaaS is losing the innovation lead to UCaaS

Mon, 06/21/2021 - 12:30

It seems like CPaaS vendors have grown complacent compared to the rapid innovation coming from UCaaS vendors. This makes no sense.

CPaaS has been leading the innovation when it comes to how developers build communication products. This has been the case ever since CPaaS was coined. But now, the trend is changing. This is doubly true for WebRTC and video communication services. UCaaS vendors have taken the lead in innovation and setting the pace of the market, leaving CPaaS vendors behind.

Can this trend be reversed? Is this a bad omen for CPaaS vendors competing in video use cases?

Table of contents

Predicting future communication trends

I used to work at RADVISION. The company specialized in video conferencing equipment but was split into two business units. The one I was a part of licensed VoIP software stacks to developers. You could say that what we did predates CPaaS. We didn’t have the cloud or server APIs but we sure did have SDKs.

In each and every townhall the company had, the CEO used to mention that our business unit was a precursor of the industry. Whatever requirements we’ve seen, whatever trend we experienced in sales (increase or decrease) was just an indicator of what is to come in the market in 3 years or so. The reasoning was simple – we licensed to developers, which then built their products and put them to market. Development cycles being as they were, 3 years was a good estimate.

Fast forward to today, and you have CPaaS vendors (the technology licensors of communication development tools) and the rest of the industry. And the large part of the rest of the industry is UCaaS.

The thing is, UCaaS vendors are no longer waiting for CPaaS vendors to innovate – they are just doing it on their own.

The promise of CPaaS

Communication Platform as a Service. What is it for anyways?

The whole purpose of CPaaS is to reduce the time to market for developers. Make it easier to get things done with communications by developing all the nasty little details for you.

Call it low code. Call it SDK or API or whatever.

I did an interview with Jeff Lawson, CEO of Twilio years ago. There Jeff explains the essence of Twilio – why he started the company. And the reason is to solve the communication problem for companies so they can focus on building great customer experiences.

Remember this one. We will be back to this interview a wee bit later.

Pandemic requirement shifts

Then the pandemic hit. And with it, a change in what communication requirements looked like around the world for all use cases.

4 distinct changes took place:

#1 – meetings became larger

We had large meetings before. The difference was that we connected rooms with groups of people in each room. Now? Everyone’s joining from his own place.

A meeting with 20 people in 3 rooms became a meeting with 20 people from 20 rooms. We will be back in the office, but the requirement for bigger meetings, with more people joining remotely will still be there with us.

Look at the start of this session from last year’s Kranky Geek virtual event.

Here Li-Tal Mashiach, Senior Engineering Manager at Facebook in the Messenger team explains what they’ve seen as changes in the usage of video calls in Messenger. Look at around the 2:40 mark in that video.

#2 – more meetings for longer periods of time

This one is obvious. Or is it?

Almost all vendors have seen a significant growth in both the number of video sessions conducted on their platforms as well as the length of these sessions.

Scale had to be dealt with across these two axes.

You need to make sure you can carry conversations that now take hours on end instead of minutes:

  • My daughter had 4+ hour long sessions with her friends during lockdowns going well into the night. They talked, ate, cooked and did whatever the hell teenage girls do together – just remotely
  • My son is still video calling with his cousin while playing Fortnite on his Xbox. And that usually lasts… well… until we stop them forcefully

In both cases, much of the interaction is just ambient video. They do things together or apart and just have these social interactions take place because they can’t meet. Funny enough, my son and his cousin aren’t stopping it now even though everything is open – that’s because meeting physically requires a 20 minute car ride…

How does that change the focus? How do you maintain servers, upgrade and update them when sessions can take hours on end on a machine? Does it mean the media servers also need to be more stable in how they operate?

And what about the number of sessions? Is it that easy to scale to 10x or more of your current traffic? This isn’t a simple question to contend with. Google shared their own challenges with scaling Meet, which makes for a fascinating read. I had my own share of vendors to help with best practices in scaling their WebRTC infrastructure during the last 15 months as well.

#3 – more networks

Back to that Kranky Geek video by Facebook. They saw an increase in desktop access. More than they had expected, being mobile first.

I’d argue that we’ve all seen more variety in devices and networks. My apartment went from 1 video calling user to 4 video calling users in a matter of a day. A billion people or more who never went on a video call have done so and will continue to do so at least some of the time.

What devices do these billion people have? What does their home network look like?

If you look at the technology adoption curve, these aren’t the innovators or early adopters. They aren’t even the early majority. They include both the late majority and the laggards.

This means we’re facing a lot more variance in devices and networks, and the need to deal with lower end capabilities and resources. And we need to handle these large group sessions taking place across a much wider variety of devices.

#4 – more places

The best part of video calling during the lockdowns and up until today is taking a peek at other people’s home offices. You get to see a piece of who they really are outside “work”.

These places are almost always less than ideal.

  • Dogs and cats being part of the background
  • Kids. Lots of kids. Popping into the screen. Making noise
  • People walking in the back doing laundry, cooking, running after kids. The works
  • Construction noises from outside
  • Poor lighting conditions

Everything you can think of that affects the audio and video quality due to external sources will be there. And you can’t always ask the user to go purchase a better camera, change where he is sitting or replace his device.

It becomes a technical problem to solve many of these issues, especially when the service offers ad-hoc connectivity for its users.

CPaaS during the pandemic

CPaaS was supposed to help vendors build their products – to look at future needs and cater for them. And for the most part, CPaaS vendors do. But somehow during this pandemic, it seems that many of them have failed to do so.

I’ll look at Twilio here – and not because they are the only vendor with these issues – but because they are the biggest CPaaS vendor and the precursor of the industry.

Last year after Twilio’s Signal 2020 event I wrote that I expected more of them:

For me this says that Twilio hasn’t invested in video as much in the last year or two. If they had, they would have announced something more thrilling and interesting. Maybe larger meetings, above 50 participants? Broadcasting capabilities? Noise suppression? Something…

Since I wrote that, 8 months have passed. Meeting sizes for Twilio Programmable Video are still limited to 50 participants. There are no broadcasting capabilities. No noise suppression. No background blurring. Nothing.

I can’t even recall any real additional feature that Twilio introduced for Twilio Programmable Video since that Signal event. Maybe updates and improvements to their React reference app, but nothing more.

Most other CPaaS vendors showed a similar inclination, introducing few new features throughout the pandemic. It seems like the trend now for video APIs is to focus on embedded iframes for faster development. These were discussed and experimented with years ago, and now seem to be finding new traction and interest.

It takes more time to develop features in CPaaS than it does on other platforms. The reason for that is the CPaaS vendors need to do 2 things others don’t have to deal with:

  1. Make the feature generic, solving a problem for more than a single use case or customer
  2. Document the feature properly, so that developers will be able to figure out how to use it

But let’s face it. These new requirements have been around for 15 months now…

There are obviously a few caveats here:

  • I am griping here about video
  • CPaaS has grown during the pandemic, so this hasn’t hurt them. Yet
  • Video is usually a small percentage of traffic and income for a CPaaS vendor

UCaaS during the pandemic

UCaaS shows a stark contrast to how CPaaS responded.

Many of the leading vendors have added background blurring and replacement, noise suppression and other features and capabilities. They have done so at breakneck speed and they seem to be spewing out new features every week or so.

This isn’t limited to a single vendor. Off the top of my head: Zoom, Microsoft Teams, WebEx, Google Meet and RingCentral all introduced these features in the past year. And all of them seem to be investing further into these areas while pushing forward other initiatives they have, each with its own focus.

Remember Jeff’s interview? I asked him if he believed UC vendors should develop their services on top of CPaaS. This is what he answered:

Yeah. I believe that companies whose primary business is communications can and definitely should and would get competitive advantage by using a platform like Twilio to build upon. The reason why is this. It used to be when those UC companies started, their core competency was making the phone ring. Then they’d add some software functionality on top of it, sure, but the vast majority of what they worried about was how do I make the phone ring? The problem is Twilio has democratized that ability.

[…]

The existing UCaaS vendors, they would be wise to build on top of the same platform that any developer in the world can come and start to compete with them on. If they don’t, those independent software developers, they can actually start and build companies that are really compelling competitors, because they don’t have to focus on the low level bits. They’re focused on the things customers really care about, which is features, functionality, and the user experience that matters.

While mostly true, this doesn’t hold water these days for video communications. Relying on CPaaS vendors means you need to figure out the feature set that is necessary to be a compelling competitor yourself – larger groups, background replacement, noise suppression, …

CPaaS vendors need to get their act together in the video domain, or start losing customers who will just go build this on their own. Especially when we see Zoom coming up with their Video SDK and becoming a direct competitor to CPaaS vendors.

UCaaS vendors are having their own headaches in the market due to the dramatic changes that Microsoft and Google are bringing into this domain. I’ll leave that for a future article.

Pandemic valuations

The pandemic also changed the dynamics in communication vendor valuations, shifting the focus to slightly different domains.

Hopin and Clubhouse are two that I already touched on in my previous article about the new era in WebRTC.

Agora (video CPaaS vendor) had a hugely successful IPO, followed by another spike due to the popularity of Clubhouse (who is using them). They are now back to roughly their initial IPO price point.

Twilio (CPaaS) saw its valuation increase throughout the pandemic. My guess is that this is mostly due to the increased use of voice and SMS. Less so video, where they invest a lot less.

Zoom. Need I say more?

The differentiation dilemma & Build vs Buy

How does one differentiate then?

  • CPaaS vendors haven’t done enough during the pandemic to enable differentiation for the video use cases
  • The same CPaaS vendors also haven’t differentiated enough from one another – at least not on the surface level

Which leaves you with two options:

  • Build the missing features on top of your CPaaS vendor (if possible)
  • Build your infrastructure in-house

I am seeing the following trends in CPaaS adoption and use. They used to be related to pricing, but now they are becoming more and more related to feature sets and differentiation needs:

Most enterprises stick with the use of CPaaS vendors. They rely on them for their communication needs. They will switch from a CPaaS vendor to another CPaaS vendor if they can get better pricing or if their current vendor is lacking features (or provides poor support).

Technology vendors and startups will pick either CPaaS vendors as their starting point or prefer going it alone from the get go. Those that become hugely successful will end up actively working on replacing the CPaaS vendor with their own infrastructure. They will see that as an imperative a lot more than their enterprise brethren.

Unified communication vendors will continue as they are. Assuming that communication infrastructure is core to their business and will work towards maintaining their own knowledge and experience in the area – doubly so after the pandemic.

Wake up and smell the coffee

CPaaS vendors should wake up and smell the coffee.

The world has changed. Drastically.

There’s no going back to the old ways – even without quarantines.

I believe that there’s a competitive advantage waiting here. CPaaS vendors have been shying away from these requirements. The first ones to come out with actual solutions and feature capabilities that ease development for their customers will win due to this differentiation.

The reason this hasn’t happened so far is that traditionally, such things weren’t catered for directly by CPaaS vendors – it is out of their comfort zone. This leads to an opportunity that is up for the taking.

On a similar note, after successfully running the Future of Communications workshop with Dean Bubley, we decided that it is both information packed and fun to do. If you are interested in a private session for your company – let us know.

The post Why CPaaS is losing the innovation lead to UCaaS appeared first on BlogGeek.me.

WebRTC: The end of an era (and the dawn of a new one)

Mon, 05/31/2021 - 12:30

After 10 years, we are at the dawn of a new era for WebRTC. This one is going to focus on differentiation and will bring with it new dominant players into the field.

There’s a change in the air. I think it started towards the end of 2019, but now it is quite obvious to see. WebRTC is changing – not the specification but rather who is using it and how it is used.

Table of contents

A look at the history of WebRTC

There’s a slide I showed last week in the workshop on the future of video and real-time communications. It resonated with me given the latest news of Justin Uberti leaving Google. So much so, that I decided to record it separately and share it here as well:

We’ve moved from exploration to growth and now into differentiation when it comes to WebRTC.

What got us there exactly?

WebRTC 1.0

We’ve got that WebRTC 1.0 milestone behind us now.

I haven’t written any special article about WebRTC 1.0, because the main question you need to ask yourself is what changed?

And the real answer is nothing.

The work towards WebRTC 1.0 was important and this is an important milestone. But browser vendors already implemented WebRTC. And vendors already used WebRTC in browsers and native applications as if this was a done deal already.

If you were using WebRTC before, then nothing has changed for you since the announcement of WebRTC 1.0.

And if you haven’t used WebRTC yet, then why start now? What was holding you back so long? The fact that you weren’t sure if it is here to stay???

Having WebRTC 1.0 out is an important milestone. More a symbol and a signpost than anything else.

The pandemic

The pandemic had a positive effect on WebRTC adoption

The pandemic got us all quarantined and changed everything.

There’s no new normal to talk about yet, but if you believe things are “going back to normal” then you’re wrong.

To put it simply:

  • More businesses than ever before are just “fine” with employees working from home (or remotely)
    • Some businesses are fine doing that ALL the time, permanently
    • Other businesses are just as happy if you come to the office only some of the days of the week
  • More employees want to work remotely
    • Some employees simply don’t want to go back to the office. They’re just fine working from home. They will actively seek such jobs
    • Other employees are fine working a few days in the office and the rest remotely, so they don’t have to commute as much while still socializing face to face with their peers
  • ALL businesses want to sell more. They couldn’t care less if their clients are buying stuff locally or remotely
  • More customers are now fine with purchasing remotely or even being served remotely

These changes are bringing with them a lot of new demand, new use cases and new requirements.

What we focused on with WebRTC up until 2020 was suitable for the “old” pre-pandemic world. What we need to focus on now is on the “new” post-pandemic world, one which has slightly different requirements.

Zoom

Is Zoom the exception that proves the rule?

Even before the pandemic, Zoom’s IPO was phenomenal.

After the pandemic, Zoom has become a household name.

Pick any communication service you wish from any vendor in the globe. Randomly pick 100 people from the world’s population. How many of them will know that vendor or service, and how many of them will know Zoom?

Zoom doesn’t really use WebRTC, so why should you?

This is an important question. The appropriate answer is probably one of context. Your context is different from Zoom’s.

  • Building proprietary real-time video is a hard problem, so why not use WebRTC instead of solving it yourself?
  • Nobody knows you yet, so whatever Zoom has going for it might not necessarily fit your situation
  • Zoom is probably installed on most machines today. Your app isn’t. How will you entice potential users to download and install your app?

And yet the WebRTC industry, its stack, the browsers and vendors are consistently being compared to Zoom.

Your ability to compete with Zoom on quality and connectivity is greatly dependent on Google, and what they decide to do with WebRTC.

You are not in full control over your destiny.

WebRTC musical chairs

There have been a few changes recently in the people working on and dealing with WebRTC directly. I want to discuss 3 specific cases that I think mark the end of an era.

Dr Alex Gouaillard, CoSMo and Millicast

Dr Alex Gouaillard passed away in April 2021.

Alex has been a known figure in the WebRTC community. His voice on subjects, his passion and his work has made its mark on our industry. He will be sorely missed.

In recent years, Alex focused heavily in the area of live streaming, trying to solve the challenge of broadcasting a WebRTC stream to many participants. He has been a vocal proponent of the use of AV1.

It will be interesting to see who will pick up the mantle here and fill the void in explaining and promoting these use cases now.

Nils Ohlmeier, Mozilla (now 8×8)

Nils Ohlmeier has been “the guy” from Mozilla who represented WebRTC in Firefox.

He shared the work Mozilla is doing in Firefox for WebRTC in last year’s virtual Kranky Geek event as part of the browsers panel we did:

Nils switched employers this month, starting to work at 8×8 in the role of Principal Engineer. He will be contributing to the Jitsi codebase and its growth. While Jitsi has a large and vibrant ecosystem, is it anywhere near the size and complexity Mozilla had to deal with?

Who is going to take this role at Mozilla?

Is Firefox interesting as a browser for WebRTC developers and users anymore?

Was it time to move on now that the biggest challenges of WebRTC for browser vendors are “behind” us?

To me, these questions more than anything else mark the change in times.

Justin Uberti, Google Stadia (now Clubhouse)

Justin Uberti was there from the start when it came to WebRTC.

He is considered by many the lead engineer behind the Google Chrome team of WebRTC, and he was part of the original duo (not only the app) – Serge Lachapelle & Justin Uberti.

Justin moved on from the WebRTC team to Google Stadia at the end of 2019. He worked on Stadia related features before that as well.

This month, he decided to move on, leaving Google altogether, pursuing new activities. Justin is staying in the WebRTC industry, as his new role is Head of Streaming Technology at Clubhouse.

Here’s what Justin had to say at Kranky Geek 2018 during Google’s WebRTC update session:

It is truer today than it was in 2018…

Definitely the end of an era.

WebRTC “winners” of 2021

In 2017 I wrote about 10 Massive Applications Using WebRTC.

That was 3.5 years ago and before anyone thought about quarantines or Zoom.

Fast forward to today, and that list is going to look different.

Two vendors I want to highlight here are Hopin and Clubhouse. They are different from the other vendors we’ve seen in the past who are making use of WebRTC.

Hopin

Hopin is a virtual events platform founded in June 2019, a bit less than two years ago. They couldn’t ask for better timing (maybe start 6 months earlier?).

Within that timespan, Hopin managed to raise a whopping $571.4M in total and made 4 acquisitions (including StreamYard, Streamable and Jamm).

There have been many virtual events platforms since the pandemic started, but Hopin seems to be the biggest and most widely known one. They have shown that they aren’t shy of acquiring the technologies they need in order to get their feature set where they want it to be.

In 2019, who would have thought a virtual events vendor would be worth $5.65B in valuation by 2021?

Hopin has a nice warchest that they can use to grow their business, attract top notch developers and acquire or acquihire their way to success.

Clubhouse

Another interesting vendor is definitely Clubhouse.

Everyone wants to be Clubhouse these days, but there’s still only a single Clubhouse out there.

Clubhouse started life with the pandemic, in March 2020. After only 14 months it has a valuation of $4B and has been funded well over $100M ($110M by series B in January this year, and another undisclosed series C in April). That’s quite a feat for a voice only, iOS only (until recently) service.

It has a warchest to rival that of Hopin and the same kind of hype behind it to allow it to do practically anything it wanted.

Clubhouse still lacks a real business case, but it doesn’t seem to be stopping it.

Clubhouse is known to be using Agora as their CPaaS vendor, but that may soon change. They hired Justin Uberti from Google, and the only reason for that to me seems to be the desire to own and control their infrastructure.

Google

Google is still the big winner of WebRTC.

If you look at what features are added to WebRTC, then the answer to that is whatever Google needs for its own uses.

These uses now include Google Meet, Google Stadia and Google Assistant.

If your use case has the same requirements in general then you’re in good shape. If you are going “off the reservation”, then prepare for a life of misery whenever something you need is missing from Google’s own set of requirements.

WebRTC is open source up to a point. Not because the code isn’t open and available to all, but because the main implementation is owned and controlled by Google and the main browser you’ll need to work with is Chrome.

Welcome to the new WebRTC

From now on, WebRTC is going to be different.

Talking heads are still an important part of it, but the focus is shifting from a “video chat” or “video conferencing” service into a communication service that is unique. What that is exactly is hard to say, but suffice to say that WebRTC is there in the background.

And fading to the background is exactly what we wanted from WebRTC – the technology is only great once we start forgetting it is there.

The post WebRTC: The end of an era (and the dawn of a new one) appeared first on BlogGeek.me.

Interoperability and standardization in a world dominated by WebRTC & Zoom

Mon, 04/26/2021 - 12:30

Rethink the way we look at interoperability and standardization in communications, now that we live in a WebRTC & Zoom world.

We live in a different world. This video popped up in my Facebook as something I shared a year ago:

It is in Hebrew, but there are enough words in English there to make it quite apparent. This comedian is trying to explain to his mother over the phone how to use Zoom.

Today? Everyone knows how to use Zoom. Or WebRTC.

The transformation of communication technology

Rewind 20+ years ago

Rewinding back to when communications was a service

I started my professional adult life in a video conferencing company. There, I lived and breathed interoperability and standards for the better part of 13 years. I have a few contributions that got approved at the ITU and 3GPP. I’ve been to interoperability events and even hosted two of those in Israel.

At the time, the mindset was a telephony one:

  • There is a single way of doing things. We call that the standard specification
  • Vendors can implement different components of that specification
  • A buyer can purchase any component from any vendor and miraculously, his new purchase would “speak” to all other components on the network
  • Why? Because they are all interoperable. Per the standard

There were two main reasons why we wanted such a world to live in:

  1. Lower the barrier of entry. For a single company to develop it all requires huge budgets. Smaller vendors couldn’t take that risk and we wanted the innovation
  2. Monopoly power and vendor lock in. We wanted to give clients choice in what they purchased and have the ability for them to mix and match and not be locked in to a specific vendor for life
The rise of the smartphone (and the cloud)

Then Apple came with the iPhone and changed. Everything.

From embedded platforms, the smartphones became open programming platforms (open even within the closed gardens of their app stores).

Today, many of the embedded devices include an Android operating system, making it ever easier to develop software for them.

This brought with it a new kind of openness:

  • You didn’t have to purchase a device that adhered to a standard specification
  • The alternative was to download or install the necessary communication software instead

This didn’t mean standards were unimportant. It meant that interoperability became less interesting. Vendors could now bake their own proprietary additions on top of the standards that give extra features without the need to think too much about interoperability with other vendors – that’s because your client now brings his own device (BYOD) and you supply the software application to connect to the infrastructure.

Oh, and by the way – that infrastructure? It is now built in the cloud. And the cloud enables rapid development and hyper growth. Which again means that caring about interoperability becomes less of an issue between the client device and the cloud infrastructure – vendors are more interested in interoperability between their infrastructure components or with external service providers – via gateways.

This turned communications from a service into just another application on our phones.

A new way to look at communication standards: WebRTC
WebRTC brought communications to the browser, making it into a feature

WebRTC came to our world about 10 years ago and changed the paradigm again.

Where the smartphone and the cloud reduced our dependency and need for interoperability, WebRTC reduced our dependency and need for standardization.

We still need standardization – after all, WebRTC is a standard.

But the standardization we care about is mostly the browser implementations versus the specification (and interoperability between browsers). Other than that? We couldn’t care less.

The client side is no longer even a software application. It is a bunch of JavaScript lines of code that get executed inside a web browser that supports WebRTC. We can still do applications, and we do, but the concept and the intent is the same – standardization across vendor’s components and devices is now overrated and mostly unnecessary.

If we need standardization and interoperability, we let gateways do it. As we did in the era of the smartphone.

WebRTC also made communications more accessible. Web developers could now use it, and you could easily embed and stitch it right into your application as a seamless part of your business process flow.

This turned communications from a service or an application into a feature in another service or application.
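As a rough illustration of how few lines that “feature” can take today, here is a minimal sketch; the element id and function name are made up for the example, and signaling is left out entirely:

```typescript
// A minimal sketch: adding a camera preview as a "feature" of an existing page.
// Assumes the page has a <video id="preview"> element (a hypothetical id).
async function startPreview(): Promise<MediaStream> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  const video = document.getElementById("preview") as HTMLVideoElement;
  video.srcObject = stream;   // the camera renders right inside the page
  await video.play();
  return stream;              // later handed to an RTCPeerConnection for the actual call
}
```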

Zoom and the pandemic
The pandemic made video communication commonplace, enabling Zoom to turn it into a platform

Then the pandemic came and made the world a lot smaller. It made sure we all know how to use video communications.

Zoom became a household name across the globe and turned into a noun.

Zoom is proprietary. It doesn’t even use WebRTC.

No standards. Which led to a lot of security missteps.

But it worked. And now Zoom has a Client SDK, a Video SDK and Zoom Apps. With the intent of making their infrastructure and technology integratable with anything and everything.

This is an attempt to turn communications from a service or an application or a feature into… a platform.

Workshop: The Future of Video & Realtime Communications

A few weeks ago, I had a conversation with Dean Bubley. We wanted to do something together, and decided to create a joint workshop.

The question of the role of standardization and interoperability is one of those we are going to tackle in the upcoming workshop.

If you are interested in joining the workshop, register below. There’s an early bird discount that is available only until the end of this month.

REGISTER TO THE WORKSHOP

The post Interoperability and standardization in a world dominated by WebRTC & Zoom appeared first on BlogGeek.me.

Lyra, Satin and the future of voice codecs in WebRTC

Mon, 04/19/2021 - 13:00

There are new audio codecs in town: Google Lyra and Microsoft Satin. Both banking on AI-based voice coding, and both will be fighting for inclusion in WebRTC.

Right on the heels of the changes we see in video codecs in WebRTC, with AV1 coming onto the stage and HEVC making an entrance on Apple devices, we now have a similar (?) story with voice codecs. Microsoft announced its AI-powered voice codec Satin in February. A week later, Google reciprocated in kind, announcing Lyra, its low bitrate speech compression codec.

Why now? What are the similarities and differences between these codecs? Where are they headed? And what does that mean to WebRTC and to you?

Audio codecs in WebRTC

It makes sense to start this by explaining a bit about audio codecs in WebRTC.

WebRTC has mandatory-to-implement codecs. For audio/voice, these codecs are G.711 and Opus.

For all intents and purposes, G.711 is there as a legacy codec, to deal with narrowband audio. The result is low quality, unresilient audio. Using G.711 is mostly reserved for connecting to telephony networks, and even there, I wouldn’t recommend it as a solution.

Opus is the main voice codec in WebRTC. It offers a highly flexible solution capable of handling anything from narrowband to fullband stereo, and at low bitrates. You can read more in this article I wrote years ago: The Rise of Opus to HD Voice Domination.

Opus is almost 10 years old. It was created by meshing two separate codecs: SILK (for speech) and CELT (for music). The pandemic of 2020 and the increased reliance on virtual meetings have started to show its age and its limitations. Opus is a great codec, but these days, we can probably do better.
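Before we get to the newcomers, you can check which audio codecs your own browser exposes today. A small sketch using the standard capabilities API; the exact list varies per browser and version:

```typescript
// Sketch: list the audio codecs this browser can send (Opus, PCMU/PCMA for G.711, etc.).
const caps = RTCRtpSender.getCapabilities("audio");
for (const codec of caps?.codecs ?? []) {
  console.log(codec.mimeType, codec.clockRate, codec.sdpFmtpLine ?? "");
}
```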

How Opus works, from my Advanced WebRTC Architecture Course
The two extremes in audio codecs: Low bitrate vs lossless

So what are the missing pieces? The things that Opus can’t get done on its own? There are two such areas that are actively being explored, and they are two extremes: highest possible audio quality and lowest possible bitrate.

Highest possible quality: Lossless audio coding

One extreme (unrelated to Lyra and Satin), is the strive for the highest possible audio quality. Getting there requires the use of lossless audio coding.

For all intents and purposes, what we do in VoIP today, and by extension in WebRTC, is lossy coding. This means that we compress the audio and video in ways that don’t allow us to reconstruct the original accurately, but instead get us “close enough”. It does that by trying to get rid only of information that we humans can’t discern – things the human eye and ear would miss anyway.

As a crude example, I never did hear the difference between vinyl, cassettes and CDs – at least not enough for it to matter for me. On the other hand, I had a friend who complained that CDs don’t have the audio quality of vinyl records.

The most known lossless audio codec is FLAC. It has nothing to do with WebRTC. Yet.

Lowest possible bitrate: AI based compression

At the other end of this spectrum lies the lowest possible bitrate we can comfortably reach.

It turns out that Opus is good, but not great.

At a time where bandwidths are increasing, why do we even discuss getting voice codecs into lower and lower bitrates? What would be the incentive?

These questions are doubly important considering the fact that we’re heading towards a remote video filled world. And we know that video takes up considerably more bitrate than voice, so why care so much about voice bitrates?

One reason is simply the fact that we’re now communicating remotely a lot more. We do that more, and there are also a lot more people communicating online. From everywhere. This means that not all of them are going to be on great networks at all times, and even when they are, others are going to strain these networks with their own traffic. Google calls this “the next billion” – the next billion people joining the internet, which means people with less means and by extension less bandwidth.

The other reason is the fact that we’re growing bigger. More sessions. Bigger sessions. Widely spread. If we can even reduce a fraction of the bitrate, that would reduce the strain on our networks, servers and costs of running services.

I am also guessing that the big video meeting vendors got to learn a few interesting things during the pandemic. One of them is that voice is the most important part of a video call. If you don’t deliver your voice properly, video won’t matter. And for that, you need to make it leaner and meaner than it is today.

How do you make voice compression for an audio codec better?

AI and audio codec generations

I’ll be using machine learning (ML) and artificial intelligence (AI) interchangeably here. These terms have been butchered by marketers so much, that they are now indistinguishable anyway.

“Better”, in the case of audio codecs, means a new generation of codecs. In a way, a migration from the old way of doing things (rule engines and heuristics) to our brave new world of machine learning and artificial intelligence.

Machine learning is where the future lies when it comes to most of our algorithms. Especially with the ones that make extensive use today of either rule engines or heuristics – both of which are found in abundance in real time media processing pipelines (=WebRTC). We started seeing this trend seeping into real time communications and WebRTC somewhere in 2018. After the initial hype, we found out the many challenges of adding machine learning. In 2020, it seemed like the path became somewhat clearer: noise suppression and background replacement solutions assisted with AI. For the rest? We understood collectively that we should first squeeze the lemon of optimization before resorting to AI.

It is now time to look at AI in media compression as well. We’ve seen this take place already in baby steps. At Kranky Geek 2019, Shawn Zhong of Agora, explained how AI can be used to improve encoding efficiency:

A year later, NVIDIA introduced Maxine, a platform capable of using AI to “reconstruct” a person. Effectively creating a kind of a compression algorithm.

Research around AI compression is flourishing. There is already an AI specific standards organization called MPAI (Moving Picture, Audio and Data Coding by Artificial Intelligence) – still small, but this may change in the future. And then there’s Mozilla’s Common Voice, an open source, high quality, labeled multi-language dataset for training language related models.

It makes sense then, that audio would be a prime target for AI based compression as well. Here, Microsoft took the first public shot, and Google immediately followed suit.

The Opus spec

To understand where Microsoft Satin and Google Lyra are headed, let’s first review how Opus works:

  • Opus has a range of 6-510 kbps of compression
  • Realistically, for WebRTC, it would be 6-40kbps, and in most cases ~26-30kbps
  • It runs the gamut of narrow band up to full-band stereo
  • Latency of 26.5ms, making it quite powerful for real time since it adds very little inherent delay of its own
  • As mentioned above, for speech, Opus uses a modified SILK implementation. For music it uses CELT, another audio codec. It can use them simultaneously as needed. And interestingly enough, it has a small machine learning model that decides what to use by classifying the audio input as either speech or music
Opus was designed to enjoy the best of all worlds when it came to quality vs compression rates
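Some of these Opus knobs (target bitrate, stereo, in-band FEC) are exposed through the SDP fmtp line rather than through a JavaScript API, so applications that want to cap Opus or force mono typically munge the SDP before applying it. A hedged sketch follows; the regex assumes the Opus fmtp line carries “useinbandfec=1”, which is what Chrome puts there by default, and SDP munging in general is brittle and unofficial:

```typescript
// Sketch: cap Opus at ~24 kbps and force mono by editing the Opus fmtp line
// before applying the local description. Shown only to illustrate where these knobs live.
function capOpus(sdp: string): string {
  return sdp.replace(
    /(a=fmtp:\d+ .*useinbandfec=1)/,
    "$1;maxaveragebitrate=24000;stereo=0"
  );
}

async function offerWithCappedOpus(pc: RTCPeerConnection): Promise<void> {
  const offer = await pc.createOffer();
  await pc.setLocalDescription({ type: "offer", sdp: capOpus(offer.sdp ?? "") });
}
```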

Now let’s look at what we know so far about the two new audio codecs.

Microsoft Satin

Microsoft Satin is being positioned as an AI-powered audio codec to replace Silk.

Silk is used by Skype and was adopted as the basis for Opus as well. Here’s what Satin can do based on Microsoft’s announcement:

  • Super wideband speech starting at a bitrate of 6 kbps
  • Full-band stereo music starting at a bitrate of 17 kbps
  • Progressively higher quality at higher bitrates
  • Provide great audio quality even under high packet loss
  • Better redundancy algorithms to provide better protection under burst loss
Source: Microsoft; Satin positioned as a Silk/Opus replacement

Satin wasn’t presented as a work in progress, but rather as a battle tested codec – Microsoft stated it is already being used by Microsoft Teams and Skype in 2-way calls. Obviously, with plans to extend it to group calls.

Satin is a brand new codec that is being designed to replace Opus altogether.

Google Lyra

Google’s announcement of Lyra came a week after Microsoft’s. In a way, it seemed a bit rushed.

Why rushed? Because of how the announcement is written. It reads similar enough to the Microsoft one but lacks the “currently deployed” paragraph. Instead it has a “currently rolling out” paragraph.

What is Lyra about? Based on Google’s announcement:

  • Very low-bitrate speech codec
  • Processing latency of 90ms (on the slow end of the spectrum of real-time voice codecs)
  • Designed to operate at 3kbps
  • Currently optimized for the 64-bit ARM Android platform
Source: Google; Lyra’s focus is on gaining high MOS scores at ridiculously low bitrates

Lyra is intended for SPEECH and not for AUDIO. It isn’t a replacement of Opus in any way.

Interestingly, Google believes that coupled with AV1, it can offer a decent video conferencing experience at dial-in modem bitrates of 56kbps.

Lyra is being rolled out to Google Duo for very low bandwidth connection scenarios. But that’s about it for the time being.

More recently, Lyra has been open sourced by Google. The reasons for this are varied, especially considering that many of the recent advancements of Google in AI around real time communications weren’t open sourced at all:

  • Lyra came after Satin. They will both be fighting it out on being included in WebRTC
  • It is superior to Satin (probably) at the very low bitrate of 3kbps, especially considering Satin was designed for 6kbps and above. But it is no match to Satin in higher bitrates, where Satin most probably beats Opus
  • Google decided long ago that codecs should be free and open source (see the WebM project). As such, Lyra needs to play by these rules as well. Google might not see a competitive advantage here and would rather have this available across the board

Another thing you can achieve with Lyra is better redundancy for improved resiliency. With its very low bitrate, it is less of a constraint to add redundancy on top of it. You can check out this article on webrtcHacks by Philipp about audio redundancy encoding.
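In browsers that already expose “audio/red” in their codec capabilities (the mechanism behind that webrtcHacks experiment), you can ask for redundant audio simply by reordering codec preferences. A sketch, assuming the audio transceiver already exists and renegotiation happens afterwards:

```typescript
// Sketch: prefer redundant audio (audio/red) over plain Opus, where the browser offers it.
function preferAudioRed(transceiver: RTCRtpTransceiver): void {
  const codecs = RTCRtpReceiver.getCapabilities("audio")?.codecs ?? [];
  const red = codecs.filter(c => c.mimeType.toLowerCase() === "audio/red");
  if (red.length === 0) return;                        // no RED here, keep the defaults
  const rest = codecs.filter(c => c.mimeType.toLowerCase() !== "audio/red");
  transceiver.setCodecPreferences([...red, ...rest]);  // first codec wins in the offer
}
```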

A multi-codec audio future for WebRTC?

At the moment, both Lyra and Satin are nice bedtime stories. You can use them only inside the proprietary implementations of Google and Microsoft. And even then, in most cases you wouldn’t even know that to be the case.

Why was it important then to announce these efforts?

My hunch is that it has to do with standardization and WebRTC.

WebRTC needs some love and attention now on the audio front. For video, we’re going to have AV1, but what do we do about voice?

There are currently two alternatives out there that will make their move soon enough:

  • Satin. As a 3rd optional codec in WebRTC. Microsoft will need Google’s approval/help to push this one forward by making it a part of Chrome, otherwise, it won’t gain enough popularity and will be kept proprietary and niche
  • Lyra. This codec makes no sense to me as a standalone codec. Adding it to WebRTC “as is” will be quite challenging for developers to make use of. There are a few routes that can be taken here:
    1. Have it handle wideband and full-band better and in a way competitive to Opus and call it a day. That means competing head-to-head with Satin
    2. Shove it into Opus. Call it Opus 2.0. Opus already contains SILK and CELT. Why not LYRA as well? Let Opus decide which one to use when and be done with it. Since Lyra is already open sourced, that can be a natural next step…
    3. Get Google and Microsoft in the same room and see how to put Lyra inside Satin, find a 3rd name for it so no one gets pissed off that the other won the marketing game, and add that new codec into WebRTC

It is too early to say how this will play out. My bet is on more optional audio codecs finding their way into WebRTC – not the boring old ones, but rather the hip new ones. This will make audio codec selection for developers building services a wee bit harder, which isn’t a good thing in the long run. I’d rather see this pushed into Opus – or added as a single codec replacement to Opus. Something that would be easy to pick instead of Opus.

FAQ on Satin and Lyra
✅ Is Google Lyra equivalent to Microsoft Satin?

No.
While both of these audio codecs operate at low bitrates and are powered by AI, they are very different. Lyra is focused on narrowband only while Satin is about operating in super wideband.

✅ Can Microsoft Satin replace Opus?

Technically – yes.
Microsoft is already using Satin instead of Opus in Microsoft Teams and Skype for 1:1 calls. It was designed with that goal in mind.

✅ Can Google Lyra replace Opus?

No.
Lyra was designed to work at low bitrates where Opus doesn’t do a good job today. When there’s enough bitrate, Opus offers better audio quality than Lyra.

✅ Is Lyra or Satin available as audio codecs in WebRTC?

No.
There are no public plans to add either of these codecs to the WebRTC specification or to browser implementations.

The post Lyra, Satin and the future of voice codecs in WebRTC appeared first on BlogGeek.me.

🎲 Which video codec to use in your WebRTC application? 🎲

Mon, 03/08/2021 - 09:00

Picking the right video codec for a WebRTC application is tricky. Should you use VP8? H.264? VP9? Go with AV1? What about HEVC?

WebRTC video codecs – a quick reminder

WebRTC was once easy. You had VP8, Opus and G.711. G.711 is struck through because I don’t want you to use it. There’s really no reason to. Later on, H.264 was added as a mandatory to implement video codec. And all was well in the world of WebRTC.

Google then decided to introduce VP9 in Chrome. As an optional codec. Mozilla added VP9 to Firefox as well. Microsoft? They got it for “free” when they switched Edge to Chromium. And Apple… well… Apple. VP9 should be in their Technology Preview for Safari, but mainly because of Google Stadia which uses VP9 – surprising as this may sound.

Oh, and Apple decided to add HEVC as an optional codec of their own to WebRTC – just for good measure. And to confuse us all even further.

Then there’s AV1. The next gen bestest video codec. For the time being. At least once it gets added to Chrome (in version 90 that is). And used by developers.

Video codec support across WebRTC browsers

The diagram below is taken from my recent workshop on trends in WebRTC for 2021. It shows the current state of video codec support in web browsers.

To sum things up:

  • VP8 and H.264 are ubiquitous across browsers, and yes, there are some issues with both of them
  • VP9 isn’t adopted as much after years of being available to developers, and is coming to Safari “soon”
  • HEVC is Apple
  • AV1 is new
Video codec performance in WebRTC

Last week, I sat down with Philipp Hancke for our WebRTC Fiddle of the Month. In this month’s fiddle, Philipp suggested we look at video codec performance, so he wrote a… fiddle.

You can watch the whole fiddle here: measuring video codecs performance

The results were quite interesting and sometimes surprising. What’s nice here is that you don’t need to take our word for it – you can take the code and use it yourself. Also make sure to use it in the scenario you have and not the simple one we’ve shared, as your mileage may vary.
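If you want to reproduce that kind of measurement in your own scenario, the numbers come out of getStats(). A rough sketch below; the field names follow the stats spec, and their availability differs per browser and version:

```typescript
// Sketch: estimate the average encode time per frame for the outgoing video.
async function sampleEncodeCost(pc: RTCPeerConnection): Promise<void> {
  const report = await pc.getStats();
  report.forEach(stat => {
    if (stat.type === "outbound-rtp" && stat.kind === "video" && stat.framesEncoded) {
      const ms = ((stat.totalEncodeTime ?? 0) / stat.framesEncoded) * 1000;
      console.log(`~${ms.toFixed(2)} ms of encode time per frame`);
    }
  });
}
```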

VP8 or H.264 for your WebRTC application?

Today? You’re probably using VP8 or H.264 – or should use VP8 or H.264.

Is there any real difference between the two? No. Not really. They produce similar video quality for a given bitrate.

That said, there are some nuances between them:

  • Google doesn’t really use H.264 in WebRTC. So VP8 is the more maintained video codec out of the two. For example, H.264 didn’t support simulcast in Chrome for many years (it does now)
  • VP8 has virtually no hardware acceleration, so it will eat up more CPU in some cases
  • H.264 has hardware acceleration. On Apple devices. Sometimes on PCs. Sometimes on Android. Sometimes though, you won’t have a H.264 implementation in WebRTC, because the hardware isn’t accessible and the software implementation isn’t there (royalties and stuff)
  • Temporal scalability is only available in VP8. Not in H.264

Our own quick tests suggest that the H.264 decoder is better than the VP8 one – with or without hardware acceleration on H.264. Definitely something to think about.

Which one should you use? Throw a dice… 🎲 or two 🎲🎲
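If you’d rather not leave it to the dice (or to the browser’s default ordering), you can pin the choice explicitly. A sketch that puts one codec first on a video transceiver, falling back silently on older browsers:

```typescript
// Sketch: put one codec first in the offer (e.g. "video/VP8" or "video/H264"),
// so it gets picked when the other side supports it too.
function preferVideoCodec(transceiver: RTCRtpTransceiver, mimeType = "video/VP8"): void {
  if (!("setCodecPreferences" in transceiver)) return;  // older browsers: keep defaults
  const codecs = RTCRtpReceiver.getCapabilities("video")?.codecs ?? [];
  const preferred = codecs.filter(c => c.mimeType === mimeType);
  const others = codecs.filter(c => c.mimeType !== mimeType);
  if (preferred.length > 0) transceiver.setCodecPreferences([...preferred, ...others]);
}
```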

VP8/H.264 or VP9 in WebRTC?

Here’s a real question. Should you go for VP9? Last year I suggested it might be time to use VP9. Little has changed – no real adoption that I can see to it.

Except from Google, no one uses it.

In our tests, it was close to VP8 in its CPU use. That was quite surprising. It is probably why Google is using it in Google Meet.

The best thing about VP9? It also supports SVC (in an undocumented munging kind of a way).

The challenge? Apple. Doesn’t really have it yet. Should be getting there. Question is when.

When to use HEVC in WebRTC?

This one is simple enough to answer – never.

That said, if you have calls that take place only between Apple devices, then HEVC might be a good option.

Is the time right for AV1?

No. Maybe. Yes.

From our own testing, AV1 is considerably worse than all other codecs when it comes to performance. It takes twice or more of the CPU it takes to encode and decode any of the other video codecs we tried.

AV1 should offer better quality than the other codecs, so you may actually want to pay that extra CPU. As far as I can say, there are a few reasons for using AV1 today:

  1. To handle specific scenarios such as very low bitrate, where CPU isn’t the bottleneck but bandwidth is
  2. When you are decoding only and the encoder is in the cloud – a place where you control the hardware. You’ll pay for it in compute costs though
  3. It is rumored to be good at decoding thumbnails
Welcome to a multi codec WebRTC world

WebRTC started without many options. VP8 and H.264. That’s about it. Now? We’ve got 4-5 video codecs to choose from.

Most of us end up using VP8 just because. Some pick H.264, mainly because of performance considerations. The rest are mostly talked about but almost never used.

The newer video codecs are really promising – VP9, AV1 and even HEVC have real potential in a WebRTC application. The challenge though is that they come with some big drawbacks – mainly CPU use and availability across browsers.

To use them, a new approach is needed. One where more than a single video codec is used by an application, at times within the exact same session.

Here are a few suggestions for you to explore:

  • Support higher complexity codecs on 1:1 calls only, dynamically switching to other video codecs if and when a call grows beyond two participants
  • Dynamically switch to a higher complexity codec on low bitrates
  • Enable decoding in as many codecs as possible in parallel on a device, and then decide what the encoder should send based on its CPU capabilities
  • Using multiple video codecs in simulcast – for example using AV1 at a very low bitrate and next to it VP8 or VP9 at higher bitrates. Simulcast doesn’t support this (yet), but you could open two separate peer connections with different codecs and bitrates to achieve a similar outcome (see the sketch below)
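A sketch of that last suggestion, under clearly stated assumptions: signaling for both connections exists elsewhere, and the codec names and bitrate values are illustrative only:

```typescript
// Sketch: two peer connections sending the same camera track,
// one pinned to AV1 at a very low bitrate, the other to VP8 at a higher bitrate.
async function addDualCodecSenders(track: MediaStreamTrack,
                                   pcLow: RTCPeerConnection,
                                   pcHigh: RTCPeerConnection): Promise<void> {
  const codecs = RTCRtpReceiver.getCapabilities("video")?.codecs ?? [];

  const sendWith = async (pc: RTCPeerConnection, mimeType: string, maxBitrate: number) => {
    const transceiver = pc.addTransceiver(track, { direction: "sendonly" });
    transceiver.setCodecPreferences([
      ...codecs.filter(c => c.mimeType === mimeType),   // preferred codec first
      ...codecs.filter(c => c.mimeType !== mimeType),
    ]);
    const params = transceiver.sender.getParameters();
    if (params.encodings.length > 0) {
      params.encodings[0].maxBitrate = maxBitrate;
      await transceiver.sender.setParameters(params);
    }
  };

  await sendWith(pcLow, "video/AV1", 100_000);    // AV1 leg capped around 100 kbps
  await sendWith(pcHigh, "video/VP8", 1_500_000); // VP8 leg up to ~1.5 Mbps
}
```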

Is it worth it? Maybe. You tell me if enhancing video quality in your application is important. Venturing into the multi video codec realm in WebRTC is about the 80% effort that yields the last 20% improvements. Go there once you’ve finished pursuing all other simpler optimizations.

WebRTC trends in 2021

Last month I hosted a workshop about WebRTC trends in 2021.

I covered optimizations of a single video call, noise suppression, background blurring, E2EE and video coding aspects. The challenge of which video codec to choose was there as well.

The sessions have been recorded and are now available as an online course on my website. If you are interested, you can register for it.

The post 🎲 Which video codec to use in your WebRTC application? 🎲 appeared first on BlogGeek.me.

WebRTC Trends for 2021 (and beyond)

Mon, 01/11/2021 - 12:30

2021 is set to be the year of technical debt and quality optimizations. Check out these WebRTC trends to keep up to speed with communication technologies.

Last year was a very interesting and weird year. The vibe of 2020 was dictated by the pandemic and the quarantines around the globe. For those in the communication space, this meant a huge acceleration in demand, scale and the scope of work you had in front of you.

WebRTC and expectations

When I started last year, I talked about the expectations of WebRTC. I tried explaining the concept that WebRTC, more than anything else, is driven by Google and controlled by Google. It was a kind of a follow up to my article on the artificial intelligence roadmap of Google for its “WebRTC Pro” implementation.

Since then, Google introduced noise suppression, background blur and other AI trinkets in Google Meet. All AI features. All were delivered outside of WebRTC but tightly coupled with the WebRTC implementation in Chrome.

What changed since then is the focus. It is great talking about bots and drones. AR, MR and XR. 360 videos, 4K and 8K resolutions. But it gets us nowhere.

We came back to the basics and the basics have changed along with the pandemic.

As developers, we need to follow the trends. Be where our users need us and fulfill their requirements. This is also true of WebRTC, and since it is owned by Google, we know where it is (roughly) headed.

Google and WebRTC in 2021

While Google uses WebRTC in multiple services, there are only 2 that matter for WebRTC trends in 2021: Google Meet and Stadia.

Google Meet

In the latest Gartner magic quadrant for meeting solutions (September 2020), here’s who you find:

Google doesn’t make it into a leader’s position in meeting solutions

The leaders? Zoom, Cisco and Microsoft. Google is far behind.

2020 being the year of video meetings, and with Google investing in WebRTC and Meet, this has to hurt.

Google invested heavily in 2020 in and around WebRTC.

You could place their investments in two main areas:

  1. Optimizing the code – finally someone took the time to optimize the code and make it more performant and stable on multiple platforms and devices. This is an ongoing work that can still be seen today with each and every release. Google is starting to look at real time video processing as a profession and not a hobby
  2. Beefing up the feature set – to match what competitors are offering. This trickles back into WebRTC’s capabilities

That trickle-back is important. The 3 leaders in meetings?

  • Zoom makes no use of WebRTC, which means it isn’t “limited” by WebRTC’s limitations (or advantages)
  • Microsoft Teams offers a subpar experience on browsers. Just try to connect to a video call from Chrome and not the Teams app – you’d be surprised how poor and backward the service feels
  • Cisco is improving with WebEx on the desktop. But a lot of the focus and features introduced are outside of the scope of WebRTC. Like the roll out of AV1 support in WebEx
Stadia

Stadia is Google’s cloud gaming platform.

It is still early days for both Stadia and cloud gaming, but a few interesting things have happened in this industry:

  • The pandemic got more people to play games. Especially kids. My son plays it now in-between his virtual lessons as well as during the rest of the day. With shelter at home and distancing, this becomes a way to stay connected with friends
  • The Cyberpunk 2077 video game should have been the incentive to join the platform. Gaming consoles like the PlayStation 4 and Xbox One couldn’t handle the game’s high end requirements. Using Stadia or another cloud gaming platform was a reasonable solution. Until bugs were reported in the game itself, causing it to tank globally. Not sure if and how that affects Stadia
  • Epic Games battling it out with Apple on its App Store tax rules, with the only potential solution for gaming aggregators being a browser based approach instead of an installable mobile app
  • Stadia, being cloud and browser based “enjoys” this

For now, Google seems committed to Stadia. Both Chrome and, recently, Safari added support for VP9 profile 2. This means a higher color depth than what is common for video conferencing, which is better suited for high end gamers.

Just like Meet, whatever Stadia will need from WebRTC will find its way into WebRTC.

WebRTC Trends in 2021

The trends affecting WebRTC in 2021 are based on two main aspects then:

  1. What Google needs for Google Meet and Stadia
  2. What many developers are trying to develop with WebRTC

What comes from developers these days is the expansion of remote-everything. There are many domains that aren’t getting heard enough, simply because they are new to the scene. What I think is most interesting is that the mainstream video communications space is still the one setting the agenda for WebRTC.

The 4 biggest trends for WebRTC in 2021 are driven by video communications. Here they are:

Trend #1 – Bigger WebRTC meeting sizes

Our first trend of 2021 for WebRTC? Meeting sizes. Something we’ve started focusing on only last year.

We used to want higher resolutions. At any given point in time, there was a company pushing the envelope in the resolution for video conferencing. Since we got to HD, that trend stopped. Vendors still tried marketing and selling 4K as a value proposition for video conferencing, but this hasn’t stuck. The high end of the market vanished, leaving us with a new number to focus on. The number of people in a “gallery view”.

With Zoom doing 49, this seems to have become the magic number everyone is aiming towards.

WebRTC was great for smaller meeting sizes, but going beyond 16 video streams in a single session was always challenging. I like using this slide to explain it:

The bigger the meeting size in WebRTC, the higher the complexity of the solution

The growing complexity comes with the need to employ ever greater techniques and tricks for optimization. Scaling from 2 users to 10 requires a different approach than scaling towards 50 or 100 users. Aiming for 1,000 users in a meeting needs a slightly different architecture. Going for 20,000 or more necessitates again other tools.

There are now two distinct areas that require large scale WebRTC meeting sizes:

“Traditional” meetings – we had large meetings of 20 or more people, but the people simply convened in 3-4 meeting rooms and connected these meeting rooms. Now each person is a device in the meeting.

Large conferences – we are now trying to copy the real world activity of industry conferences along with entertainment activities (comedians, talk shows, magicians, sporting events, …) and turn them into virtual events. Large online conferences.

These two are different in nature and in the techniques and technical solutions for them.

Google is focused on the “traditional” meetings with their work on Google Meet, which means the optimizations done inside WebRTC’s code as well as enabled on top of it are built to fit this class of problems. The large conferences have a bigger challenge to deal with and less “direct” support from Google and WebRTC.

Trend #2 – De-noising: Background replacement and noise suppression in WebRTC

The second WebRTC trend for 2021 is a bit more surprising. I don’t think we would have cared about it much without the pandemic.

Need better media quality? Buy a better camera.

That’s what I did at the beginning of the quarantine. I had to quadruple the number of machines at home with quality peripherals. Instead of only me being in meetings, we’re now 4 people in meetings, each needing their own environment. That was obvious to me. Still challenging to do, but obvious. We’re also lucky that our apartment can accommodate the four of us, with a place for each to handle their needs without too much noise seeping out to the others.

Homes with more people? Smaller apartments? How would they handle it?

When we were all in offices things were simpler. The office space was designed (or then redesigned) to meet the needs of video calling. An IT person took care of the space. Someone purchased and installed equipment that fits the needs.

As we all entered a pandemic with quarantines, all that careful planning and preparation was thrown out the window. People had to use whatever they had and make do with it. And what did we find out? That there’s background noise and user privacy to deal with.

That child from 2017 who barged into his father’s interview and was live on TV? That’s all of us now. It has become an accepted norm. People working from home. They have a personal life with family and kids, and kids are part of the scenery.

Same for the laundry or other artifacts that now reside behind a person speaking in a video call. How do you make all that go away? How do you reduce the noise of the neighbors running on top of your head while you write these words on a keyboard (literally)?

A rather old/new requirement is to be able to get rid of all of that. Background blurring and replacement. Noise suppression and noise cancellation. All things that were nice to have are becoming common requirements in meeting solutions.

They aren’t part of what comes with WebRTC, but somehow, you need to make them happen with WebRTC.
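To show where such processing plugs in, here is a crude sketch: a full-frame blur routed through a canvas, not real person segmentation (which needs an ML model, typically running in WebAssembly, on top of this exact plumbing):

```typescript
// Sketch: blur the whole outgoing frame by routing the camera through a canvas.
function blurredTrack(cameraTrack: MediaStreamTrack): MediaStreamTrack {
  const video = document.createElement("video");
  video.srcObject = new MediaStream([cameraTrack]);
  void video.play();

  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d")!;

  const draw = () => {
    canvas.width = video.videoWidth || 640;
    canvas.height = video.videoHeight || 480;
    ctx.filter = "blur(8px)";                       // naive: blurs the speaker too
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    requestAnimationFrame(draw);
  };
  draw();

  // Hand this track to RTCRtpSender.replaceTrack() instead of the raw camera track.
  return canvas.captureStream(30).getVideoTracks()[0];
}
```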

Trend #3 – A focus on WebRTC user privacy

Zoom and security issues anyone?

I am not here to gloat. Zoom did a bad job at security and privacy before 2020. It did a great job of fixing these issues in record time during 2020.

The issues around Zoom were both about security and privacy. Privacy of the users from other users and hackers, but also from Zoom itself.

This focus on user privacy found its way to WebRTC as well, and for the same reason. Zoom is now what every communication company measures itself against, for better or worse.

There are many things to deal with when it comes to WebRTC security and the latest advancement there is E2EE enablement in media servers. The ability to offer end-to-end encryption in a group video call. It is now possible due to the introduction of Insertable Streams to WebRTC.

How is that used? What would it require of you to implement? How would that affect other requirements and features in your service? We are going to find that out during 2021 as more vendors will roll out E2EE solutions with WebRTC.
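A skeleton of what the sending side looks like with Insertable Streams (Chrome’s encodedInsertableStreams flavor). The XOR below is only a placeholder; a real E2EE design uses proper keyed encryption such as AES-GCM and a key exchange the media server never sees:

```typescript
// Sketch: run every encoded frame through a transform before it leaves the browser.
// The peer connection must be created with { encodedInsertableStreams: true } (Chrome).
function attachSenderTransform(sender: RTCRtpSender, key: number): void {
  // createEncodedStreams() is Chrome-specific and not yet in the standard TS typings.
  const { readable, writable } = (sender as any).createEncodedStreams();
  const transform = new TransformStream({
    transform(frame: any, controller) {
      const data = new Uint8Array(frame.data);
      for (let i = 0; i < data.length; i++) data[i] ^= key;  // placeholder "encryption"
      frame.data = data.buffer;
      controller.enqueue(frame);
    },
  });
  readable.pipeThrough(transform).pipeTo(writable);
}
```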

Trend #4 – WebRTC Investments in VP9 and AV1

Video codec technologies come in stages. The industry at large has started adopting HEVC, with Apple leading the charge. VP9 has been slow to catch up. And we’re already in the next round of codecs, with AV1 heralded as the next big thing and something called VVC breathing down its neck.

WebRTC has been predominantly a VP8 phenomenon, with a trickle of H.264. Here’s my estimate of video codec use in WebRTC:

Hint: look at area differences and not height in this graph

What is happening now is companies who are looking at VP9 and AV1 trying to make use of them for different use cases and scenarios.

Cisco just announced using AV1 for screen sharing in WebEx, in its native PC application, where that is made possible.

We will see more of that in 2021. Companies experimenting, using and launching products that use more VP9 and even AV1.

An increase in use cases and markets

WebRTC is breaking out to additional markets. Large events, live streaming and even cloud video editing.

All these necessitate new features and capabilities to be added to WebRTC itself.

Now that WebRTC 1.0 is finally being finalized there is going to be a growing focus by the W3C on what comes next. If you have requirements that require a change in WebRTC, it might make sense for you to join the W3C and make your voice heard in affecting where WebRTC is headed next. Ping me if you’d like to discuss this.

Upcoming WebRTC Trends workshop

Next month I’ll be conducting a workshop that covers these topics. The trends and what to do with them. It will offer actionable advice on what you should do in 2021 and it will be interactive in nature.

My last workshop about differentiation in WebRTC was well attended. Here is what Andrey Abramov of Doxy.me had to say about it:

Thank you very much for the 3 weeks workshop on which you dove us into the WebRTC. It was really interesting and useful. I have learned a lot and look like now I have a better vision of what to do to improve UX of our calls on Doxy.me. Thanks for the records as well! I will be reviewing them from time to time to recall.

It was great! Thank you!

This new workshop, WebRTC trends for 2021, will take place during February, in 3 consecutive sessions of 2 hours each.

Space is limited, so if you are interested, register sooner rather than later.

See you at the workshop.

Register to WebRTC trends for 2021 workshop

The post WebRTC Trends for 2021 (and beyond) appeared first on BlogGeek.me.

A blueprint to improving WebRTC media quality using AI

Mon, 11/23/2020 - 00:30

Before jumping on the ML/AI bandwagon of WebRTC media quality, make sure you’ve exhausted all of your other optimization alternatives.

TL;DR – make sure you optimize for media quality without AI before jumping to using AI…

In 2018 and 2019 at Kranky Geek we started looking at machine learning. We handpicked speakers and sessions dealing with these topics. We did so for both voice and video technologies. The intent and idea behind this was to fit the times. Everyone’s been doing AI, so why not us, in the context and domain of WebRTC and communication technologies?

It made perfect sense.

Then came 2020 and… changed everything. No one was really interested in AI or how to improve quality of experience with it. It was now used mainly for bots with the purpose of handling large loads of calls (call deflection and agent assist type technologies).

At times, it seemed like we were all back to basics. We now had to start scratching our heads and see what can be done to improve quality.

Time for some quick wins

At Google and elsewhere, I am sure that a manager somewhere higher up came, saw the work being done, and received an explanation of how research into this machine learning stuff was progressing and showing promise, but in many ways required, well, more research before it could be seen as anything close to production ready.

And as managers do in these situations, they smack the table and say something like “I want quick wins”. So the developers went back to the basics. Trying to figure out what quick wins they could find to squeeze a bit more quality out of that thing they had called WebRTC.

Quite surprisingly – it worked!

There seems to be ample room for optimizations. If you ask me? Someone forgot to try and squeeze this lemon properly.

There’s more room for optimizations of WebRTC before we resort to machine learning
Google’s optimizations of WebRTC’s code

It started somewhere with the pandemic.

One of the first indications was this tweet by Serge Lachapelle (former product manager for WebRTC at Google and leading Google Meet at the time of tweeting).

@googlechrome 83 is now in beta with interesting changes to the video compositor. It should free up some CPU cycles when using @webrtc apps such as @whereby @confrere_video and #GoogleMeet

— Serge Lachapelle (@slac) April 17, 2020

Apparently, the video compositor wasn’t making the most out of the hardware it was using…

Since then we’ve seen some additional optimizations, though most of them taking place in the application level on top of the WebRTC implementation itself.

At Kranky Geek, Google discussed at length the optimization work it is working on. Mostly, making sure that video processing doesn’t take up too much CPU.

Too many media format conversions in the WebRTC media pipeline

Apparently, Chrome is doing way too many video format conversions between getting the frames from the camera until it encodes and sends it out. Each conversion eats up CPU and I/O, generally killing the whole internal bus of the machine. Oh – and it means memory copies. Lots and lots of memory copies.

Video processing 101: zero copy is what you’re striving for.

We’re 10 years into WebRTC and the leading team behind WebRTC is just now starting to look at zero copying.

There are other areas and aspects where optimizations are taking place. Once the Kranky Geek videos are ready and published, I’ll add the relevant one here.

Still got optimization juice in this lemon. Expect better performing WebRTC in the coming Chrome releases.

Rushing towards 49-gallery view and 50+ group sizes

As the pandemic hit, Zoom grew. The media was filled with their gallery view.

Zoom’s 49-gallery view. The holy grail of video group calls?

One use case that didn’t exist before the pandemic is large video calls. Up until today, we used to take these video meetings in the office inside meeting rooms. Cramming a few people in each room in a remote office and doing a call with 2-4 such rooms. Maybe someone joined from home or a hotel. You could see meetings with 10 participants. Sometimes. But the need just wasn’t really there.

The pandemic hit. People are now at home. And communicate with video remotely. A meeting of 4 became a meeting of 20 just because the participants are now sitting at home.

Even worse, schools are now remote. Each class has 20-40 students in it. And the teacher wants to see them all.

This made Zoom’s gallery view so popular (even if a tad useless if you ask me). It also made the number 49 magical. The holy grail of what is needed from a video conferencing service in a pandemic. Doesn’t matter if everyone is muting their video.

49.

Microsoft and Google announced plans for supporting it. Then they started running towards that value, each raising the number of tiles in its gallery, reaching 49 recently.

Facebook grew from a meeting of 8 to meetings of 50.

Meetings are larger and longer now.

And again, we found the ways to make it happen with WebRTC.

Best practices on group video scaling being rewritten

There are a lot of mechanisms in WebRTC that enable an application to squeeze the lemon and gain back CPU cycles as it tries to optimize for larger group calls.

But we never did have a place where all these are found and explained. A body of knowledge and understanding of how to make it happen.

The larger the conference call size in WebRTC, the more complex the solution is going to be to implement it

I’ve been in such conversations multiple times with multiple clients and developers. I’ve hosted a workshop on the topic and wrote an ebook on optimizing group video calls.

In my recent/upcoming update to the Advanced WebRTC Architecture course there’s a lesson dedicated to this specific topic. It isn’t as if the information isn’t there in the course – it is spread all over the course. But now there’s a lesson on this alone. Because it became interesting only in 2020.

We have traded the focus on what is important to us in video communications. A video conference’s scale trumps quality at the moment. While I do understand we all want both all the time, there is still a tradeoff between these two qualities of a system.

The role of machine learning and AI in communications

Where does one fit machine learning and AI in this brave new world of large video conference calls?

Machine learning requires memory and CPU. Things we don’t have to spare at the moment in these large group calls. So we can’t just slap machine learning inference algorithms on the edge inside the web browser easily.

Edge inference in web browsers using WebAssembly is also brand new. So there’s no guide book to work with.

We won’t be using it to improve video quality or audio quality in the edge – we can’t really. Not enough CPU to spare.

There’s no real place for it on the server side either – that one requires decoding and encoding which are going to be CPU intensive and increase the costs of delivering the service. Pexip is doing that for auto zoom, but that’s because they are built as an MCU. Google decided to do this for noise suppression.

There’s packet loss concealment using machine learning now. And you can do super resolution for video to get better video quality. But in the end, all these are going to make a difference once CPUs have their own dedicated, standardized AI accelerators, like the new Apple M1 chip in the brand new Intel-less MacBooks. We just don’t have cycles to spare.

Which is why media quality has gone back to its roots. Here’s something I have in that workshop of mine:

First take care of your infrastructure as much as you can to improve media quality in WebRTC

Machine learning should be added once we’re done squeezing that lemon for more performance and quality.

Google is now doing its part of optimizing the WebRTC codebase itself. It is your role to do it in your own infrastructure and application. Once done, the time will come to introduce some machine learning chops into it.

Until then? We need machine learning for two main tasks, and we see it already:

  1. Background blur and background replacements. We’re all humans but somehow we don’t want our kids to be in the way of our conversations
  2. Noise suppression. As we’re stuck at home, we can’t really control that crying kid of ours on the other side of the room
Where to start with AI in communications?

Does that mean you don’t need to invest in machine learning?

Hell no. You definitely MUST invest in machine learning.

Not for what you’ll be doing in 2021, but for what you’ll be launching in your product in early 2022. Which brings me to the heart of it all.

Machine learning is new and challenging. We’re still writing the playbook of what it means to use it for real time communications, inside a browser, using technologies such as WebAssembly.

You’ll need to decide which use cases to invest in, and what value you are going to derive from it. And you’ll need to plan for the long game here and be patient until you get results.

There’s a need to let the teams driving machine learning do the research and experimentation needed. But at the same time, they need guidance on where to look and what to experiment with.

The post A blueprint to improving WebRTC media quality using AI appeared first on BlogGeek.me.

WebRTC Growth – is it a back-to-school pandemic phenomena?

Tue, 11/10/2020 - 12:30

WebRTC growth during 2020 came in waves, just like the pandemic and its quarantines. Here’s how it looks and where we are all headed.

Let’s look at some interesting performance indicators of WebRTC use and adoption.

2020 is the year of video communications.

2020 is also the year of WebRTC.

Unified Communications & WebRTC

In the introductory slides of my WebRTC workshop 4 months ago, I had that as a very strong theme:

The slide above illustrates what the statistics at the time were for the big meetings vendors.

Since then, the numbers have grown. Microsoft Teams, for example, reached 115M DAU. That’s Daily Active Users.

While not all of the growth is in video calls, these services have a video focus to them.

Out of these 4 vendors:

  • Zoom doesn’t make use of WebRTC, and likes it that way
  • Google Meet is “all in” with WebRTC
  • Microsoft Teams has WebRTC support, though with pretty limited capabilities
  • Cisco WebEx supports WebRTC rather nicely

Guest access growth for Microsoft Teams and Cisco WebEx can be attributed to some extent to WebRTC. With Google Meet, it is all WebRTC related.

Gartner’s Magic Quadrant for Meeting Solutions (& WebRTC)

Gartner has its nice magic quadrant diagrams. Here’s the one just published for meeting solutions:

Which of the vendors in this magic quadrant diagram use WebRTC? I’ve marked the vendors in red for you:

The ones not marked might have WebRTC – I am just not aware of it. The ones marked have WebRTC support in production in their products. How central it is to their product is a different question though.

The thing here is that no matter what magic quadrant from Gartner you’ll be looking at for whatever market category that involves communications, WebRTC will be used as the underlying technology by many of the vendors.

Contemplating if WebRTC is the technology to use? Look at the reds above.

A surge in use of WebRTC

I decided to leave the best for last.

Chrome collects and shares statistics of JS API calls in the browser and their “popularity”.

Let’s look at how getUserMedia() usage looks:

Source: here

Interestingly, we see an adoption curve where each round of quarantine raises the use of WebRTC to a higher level.

From a steady, boring 0.05% of use pre-pandemic, the new normal is settling well above 0.2% of the page loads.

How can we explain the rise from July to October? Is this a sustained growth happening as the pandemic found its second wave in different countries and social distancing gradually came back in force throughout the globe? Is it due to the fact that schools started opening around the world in August and September, many of them strictly remotely? Is it due to more services being introduced online that offer WebRTC based communications in them?

addTransceiver, addTrack and addStream show similar trends for the most part.

If you ask WebRTC, we’ve reached the peak of the second wave of the pandemic.

Where do we go from here?

Two alternatives:

  1. A third pandemic wave. Will that raise usage even further?
  2. Vaccine. Even a promise of one sent collaboration stocks down

On a more serious note though, the huge surge in WebRTC traffic brought with it new use cases and a lot of learnings regarding scaling and operationalizing WebRTC.

In our Kranky Geek event next week, we will be discussing these topics a lot. Make sure you register to join us!

The post WebRTC Growth – is it a back-to-school pandemic phenomena? appeared first on BlogGeek.me.

What is WebRTC P2P mesh and why it can’t scale?

Mon, 11/02/2020 - 12:30

If you are planning to use WebRTC P2P mesh to power your service, don’t expect it to scale to large sessions. Here’s why.

Every once in a while someone comes in with the idea to broadcast or conduct a large scale video session with WebRTC without the use of media servers. Just using pure WebRTC P2P mesh technology.

While interesting as a research topic for university, I don’t think that taking that route to production is a viable approach. Yet.

What is WebRTC P2P mesh?

If you are focusing on data only WebRTC mesh, then skip to the last section of this article.

When talking about P2P or mesh in WebRTC, the focus is almost always on media transport. The signaling still flows through servers (single or distributed). For a simple 1:1 voice or video call, WebRTC P2P is an obvious choice.

From a WebRTC client perspective, a 1:1 session is similar if it is done using P2P mesh or using a media server

The diagram below shows that from the perspective of the WebRTC client, there is no difference between going through a media server or going P2P – in both cases, it sends out a single media channel and receives a single media channel. In both cases, we’d expect the bitrates to be similar as well.

Making this into a group call in P2P translates into a mesh network, where every WebRTC client has a peer connection opened to all other clients directly.
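In code terms, a mesh simply means one RTCPeerConnection per remote participant, each encoding the local tracks on its own. A sketch; signaling, ICE exchange, the renderRemote() UI hook and the STUN server URL are all placeholders assumed to exist elsewhere:

```typescript
// Sketch: in a mesh, every remote participant gets its own RTCPeerConnection,
// and the same local tracks are added (and encoded) once per peer.
declare function renderRemote(peerId: string, stream: MediaStream): void; // hypothetical UI hook

const peers = new Map<string, RTCPeerConnection>();

function addMeshPeer(peerId: string, localStream: MediaStream): RTCPeerConnection {
  const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.example.org" }] }); // placeholder
  for (const track of localStream.getTracks()) {
    pc.addTrack(track, localStream);          // uplink (and encoder) cost grows per peer
  }
  pc.ontrack = ev => renderRemote(peerId, ev.streams[0]);
  peers.set(peerId, pc);
  return pc;                                  // offer/answer handled by your signaling
}
```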

WebRTC mesh architecture. Or is it mess architecture?
Why use WebRTC P2P mesh?

There are two main alluring reasons for vendors to want to use WebRTC P2P mesh as an architectural solution:

  1. It is cheaper to operate. Since there are no media servers, the media flows directly between the users. With WebRTC, oftentimes, the biggest cost is bandwidth. By routing as little media as possible through servers (TURN relay will still be needed some of the time), the cost of running the service drops drastically
  2. It is more private. Yap. As the service provider you don’t have any access to the media, since it doesn’t flow through your servers, so you can market your service as one that offers a higher degree of privacy for the end users
Why not use WebRTC P2P mesh?

If WebRTC P2P mesh is so great, with cheaper operating costs and better privacy, then why not use it?

Because it brings with it a lot of challenges and headaches when it comes to bandwidth and CPU requirements. So much so that it fails miserably in many cases.

It is also important to note here that in ALL cases of 3 users or more in a call, alternative solutions that rely on media servers give better performance and user experience. Always – at least as long as the media servers infrastructure is properly deployed and configured.

Bandwidth challenges in WebRTC P2P mesh

Assume we want pristine quality. Single speaker, 10 listeners.

The above layout illustrates what most users of this conference would like to see and experience. The speaker may alternate during the meeting, switching the person being displayed in the bigger frame.

As we’re all watching this on large screens (you do have a 28” 4K display – right?), we’d rather receive this at HD resolution and not QVGA. For that, we’d want at least 1.5Mbps of the speaker to be received by everyone.

Strain on the uplink

In a mesh topology, the speaker needs to send the media to all the participants. Here’s what that means exactly:

In WebRTC mesh, we put a bigger strain on the uplink

1.5Mbps times 10 equals 15Mbps on the uplink. Not something that most people have. Not something that I think my strained FTTH network will be able to give me whenever I need it. Especially not during the pandemic.

In an office setting, where people need to use the network in parallel, giving every user in a remote meeting 15Mbps uplink won’t be possible.

On top of that, we’ve got 10 separate peer connections to 10 different locations. WebRTC has its own internal bandwidth estimation algorithm that Google implemented in libwebrtc, which is great. But how well does it handle so many peer connections on the client’s side? Has anyone at Google ever tried to target or even optimize for this scenario? Remember – none of Google’s own services run in a mesh topology. Winning this one is going to be an uphill battle.

Bandwidth estimation on the downlink

Let’s look at the viewers/subscribers/participants/users or whatever else you want to call them.

If we pick a gallery view layout, then we are going to receive 10 incoming video streams. Reduce that to 9 for layout simplicity and we get this illustration:

There are 9 other users out there who generate video streams and send them our way. These 9 streams are competing for our downlink network resources and for our machine’s attention and CPU.

Each of them is independent of the others and has little knowledge about the others.

How can the viewer understand his downlink network conditions properly? Let alone try to instruct these senders on how and what to send. A media server has the same set of problems to deal with, but it does that with two main advantages:

  1. It controls all the videos that are sent to the viewer, and it can act uniformly as opposed to multiple browsers competing against each other (you can try to sync them, though good luck with that)
  2. You can put all incoming streams in a single peer connection from the server, which is what Google Meet does (and probably what Google is focused on optimizing for in their WebRTC implementation)
CPU challenges in P2P mesh

Then there’s the CPU to deal with in WebRTC P2P mesh.

Each video stream from our speaker to the viewers has its own dedicated video encoder. With our 10 viewers, that means 10 video encoders.
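
To illustrate where those encoders come from, here is a minimal, hypothetical sketch of a mesh sender: one RTCPeerConnection per remote participant, each getting its own copy of the tracks and therefore spinning up its own encoders. Signaling is assumed and omitted:

```typescript
// Hypothetical mesh sender: one peer connection (and one set of encoders) per viewer.
async function startMeshSend(remotePeerIds: string[]): Promise<RTCPeerConnection[]> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

  return Promise.all(
    remotePeerIds.map(async (peerId) => {
      const pc = new RTCPeerConnection(); // STUN/TURN configuration omitted
      // Adding the same track to each connection means a separate encoder per connection.
      stream.getTracks().forEach((track) => pc.addTrack(track, stream));

      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);
      // sendOfferTo(peerId, offer); -- your own signaling channel, not shown here
      return pc;
    })
  );
}
```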

A few minor insights here if I may:

  • If you aim for H.264 hardware encoding, then bear in mind that many laptops allow up to 3-4 encoded streams in parallel. All the rest will be black screens with the current WebRTC implementation
  • Video coding is a CPU (and memory) hog. Encoding is a lot worse than decoding when it comes to CPU resources. Having 10 decoders is hard enough. 10 encoders is brutal
  • 10 or more participants in a video call is hard to manage with an SFU without adding optimizations to alleviate the pains of clients and not burn their CPU. And that’s when each user has a single encoder (or simulcast) to deal with
  • Your Apple MacBook Pro 2019 with 16 cores isn’t the typical device your users will have. If that’s what you’re testing your WebRTC mesh group video calling on then you’re doin’ it wrong
  • I am sure you thought that using VP9 (or AV1 or HEVC, which aren’t really available in WebRTC at the moment) would save you bandwidth and improve quality. But it eats even more CPU than VP8 or H.264, so it isn’t feasible at all

So, going for a group video call?

Want to use WebRTC P2P mesh?

You’re stuck at 300kbps or less for your outgoing video even if your network has great uplink. Because your device’s CPU is going to burn cycles on encoding multiple times.

Which also means that people aren’t going to like hearing their laptop’s fans or touch their heating smartphone (and depleting battery) on that call.
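
If you do go down the mesh path anyway, you will probably want to cap the video bitrate on each connection explicitly rather than letting ten bandwidth estimators fight it out. A minimal sketch using the standard RTCRtpSender.setParameters() API (the 300kbps figure is just the number from above):

```typescript
// Cap the video bitrate on every sender of a peer connection.
async function capVideoBitrate(pc: RTCPeerConnection, maxBitrateBps: number): Promise<void> {
  for (const sender of pc.getSenders()) {
    if (sender.track?.kind !== "video") continue;
    const params = sender.getParameters();
    // encodings is populated once the sender has been negotiated
    params.encodings.forEach((encoding) => (encoding.maxBitrate = maxBitrateBps));
    await sender.setParameters(params);
  }
}

// e.g. keep each of the 10 mesh connections at roughly 300kbps of video:
// meshConnections.forEach((pc) => capVideoBitrate(pc, 300_000));
```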

Can we do better?

Probably. A single encoder would make the CPU problem a wee bit smaller. It will bring with it headaches of matching the bitrate to all viewers (each has his own network and device limitations).

Using simulcast in some manner here may help, but that’s not how it is intended to be used or how it has been implemented either.

So this approach requires someone to make the modifications to the WebRTC codebase. And for Google to adopt them. Did I already say Google has no incentive to invest in this?

Alternatives to WebRTC P2P mesh

You can get a group video call to work in WebRTC P2P mesh architecture. It will mean very low bitrate and reduced video quality. But it will work. At least to some extent.

There are other models which perform better, but require media servers.

WebRTC offers media server alternatives to mesh in the form of SFU and MCU

Using an MCU model, you mix all the video and audio streams in the MCU, making sure each participant receives and sends only a single stream towards the MCU.

With the SFU model, you route media around between participants while trying to balance their limitations with the media inputs the SFU receives.

You can learn more about these in my WebRTC multiparty architectures article.

A word about WebRTC data channel mesh

I haven’t really touched WebRTC mesh architectures for data channels.

All the reasons and challenges detailed above don’t apply there directly. The CPU and bandwidth challenges relied on the need to encode, send, receive and decode live video. In most cases, this isn’t what we’re dealing with when trying to build mesh data channel networks. There, the main concern/challenge is going to be the proper creation and connection of the peer connections in WebRTC.
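
For reference, a data-channel-only link in such a mesh is fairly lightweight. A minimal sketch (signaling and ICE server configuration are assumptions left out here):

```typescript
// One data-channel-only link in a mesh; repeat per remote peer.
function createDataLink(onMessage: (data: string) => void): RTCPeerConnection {
  const pc = new RTCPeerConnection(); // no media tracks, so no encoders involved
  const channel = pc.createDataChannel("mesh");
  channel.onopen = () => channel.send("hello");
  channel.onmessage = (event) => onMessage(event.data);
  // Offer/answer and ICE candidate exchange over your own signaling goes here.
  return pc;
}
```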

If what you are doing isn’t a group video call (or live video broadcast from a browser to others) then a WebRTC P2P mesh architecture might work for you. If it will or won’t is something to analyze case by case.

The post What is WebRTC P2P mesh and why it can’t scale? appeared first on BlogGeek.me.

CPaaS in 2020 and my WebRTC API report

Mon, 10/26/2020 - 00:30

In the last 2 months I’ve dived into the world of CPaaS again, updating my WebRTC API focused report. Oh, and there’s a new free ebook.

There have been many changes since my last update, so this one was long overdue.

API platforms changed hands due to mergers and acquisitions. New vendors joined the market. Others left, or simply pivoted away from APIs.

And then we had AWS and Azure entering the CPaaS market.

What I did in these last two months was interview and review all the vendors in my report again, to see what has changed and update that part of the report. I learned a lot from the process.

As with every time where I shift focus to a certain market, I took the time to process my own thoughts by writing them down here in a series of articles.

Here are two things I wanted to share with you, as well as announce my next upcoming projects.

Choosing a WebRTC API report – 2020 version

I finished and published the WebRTC API report last week. The result:

  • 254 pages
  • 24 vendors

Agora decided to sponsor this report (thanks a bunch!). They are one of the interesting vendors in this space, offering an IP video/voice focused platform with their own data centers spread across the globe and a lot of research done in machine learning to improve media processing.

If you are looking to learn more, then you can:

  1. Read the WebRTC API report overview
  2. Get the 4-pager of Agora from the report (each vendor covered in this report has a 4-pager)
  3. Purchase the report online
CPaaS in 2020 – a free ebook

The previous 3 articles in my site here were all focused on CPaaS, looking at different angles on how CPaaS is changing.

The first one dealt with the future of CPaaS, especially considering the pandemic and how it affects everything and everyone.

In the second article, I looked at AWS Chime SDK and Azure Communication Services, trying to understand what their entry into CPaaS is going to change in the market.

For the third and last article, the focus went to Twilio Signal 2020. Considering how they redefined the market in the last 4 years in each such event, this event was a bit of a downer. It did bring with it many insights.

If you’re more into printing and reading, or sharing with others, then I packaged all of these 3 articles into one ebook, making it easier to consume.

I called the ebook CPaaS in 2020 – a market in transition. Because this is what it is…

Download my CPaaS in 2020 ebook

Advanced WebRTC Architecture Course – update & office hours

With my WebRTC API report now updated and finally launched, I can go back to focusing on other projects I am running.

My WebRTC Courses have been around for over 4 years now. I’ve been updating them regularly, and I am doing it again for my main signature course – the Advanced WebRTC Architecture training.

Updates

There are going to be 2 new lessons and around 10 lessons that are already being updated and recorded all over again.

The purpose is still to make this the best alternative out there to learning WebRTC.

Office hours

Alongside the updates, I will be starting another round of office hours for the course. These will start in December.

The office hours are where students can come and learn specific WebRTC topics with me, online and in person, as well as ask questions about anything related to WebRTC – and their own projects.

If you were thinking of learning WebRTC, then the best timing for it would be to enroll now and join the office hours. These are complementary to the course and open for anyone with a valid course subscription.

WebRTC Insights – a new service

Following and catching up with everything in WebRTC is time consuming. It is also tedious. And you need to know where to look and what each bit of information means to you.

To make this a wee bit easier, I’ve decided, with the help of Philipp Hancke, to start a new service together – WebRTC Insights.

In this service, you receive an email every two weeks. This email includes all the important changes to WebRTC:

  • Bug tracking of browser related WebRTC issues we feel are important
  • Select libwebrtc code commits that we found interesting
  • discuss-webrtc forum messages
  • Critical PSA announcements from browser vendors
  • W3C/IETF mailing list items
  • Market news related to WebRTC
  • Things we hear from other vendors that we can share

This gives you actionable insights for your own planning and reduces the risks in your development. Both Philipp and I have been doing this for a while, but doing it together brings it to a new level.

If you want to learn more and subscribe to this service, then check the new WebRTC Insights page.

The post CPaaS in 2020 and my WebRTC API report appeared first on BlogGeek.me.

Twilio Signal 2020. I expected more from the leading CPaaS vendor

Mon, 10/05/2020 - 12:30

Twilio Signal 2020 occurred virtually this year. The number of new announcements or market changing ones was low compared to previous years. I expected more from Twilio as the leading CPaaS vendor.


Twilio Signal is Twilio’s yearly event where its major announcements are made. It is also a gathering place where customers, partners and even Twilio CPaaS competitors come to meet. This year, as all other events, Signal was virtual. Twilio built its own hosting platform and event experience and did a good job at that.

Twilio Signal – past events

I’ve watched the keynote twice, and several of the other sessions, including all major announcement sessions. I came out of this feeling a wee bit disappointed. There was nothing really interesting or groundbreaking this year. Especially not if you compare it to some of the previous years:

In 2020, we’ve seen Twilio Microvisor (the Electric Imp acquisition), Frontline, Video Go, Event Streams and Verify Push.

Twilio By the Numbers

The main keynote by Jeff Lawson, Twilio CEO, had 3 components to it, with 3 main messages:

  1. Twilio is big
  2. Social good
  3. New product announcements

I’ll focus on the big and new parts here.

Twilio is now 12 years old and it has accomplished a lot. Jeff threw the “Twilio is big” numbers too fast for my taste, not even letting some of the big numbers register in our minds properly.

Here are the numbers. I tried aligning them with last year’s numbers from Twilio 2019:

                        2019          2020
Interactions            750B          1T
Unique phone numbers    2.8B          3B
Calls/minute            32,500        –
Peak SMS/second         13,000        –
Email addresses         3B/quarter    50%
Video minutes           –             3B
Customers               160,000       200,000+
Developers              6M            –

What the numbers mean
  • I still don’t understand what “interactions” means, but the number is growing ridiculously fast, so it must be a good thing (I’d love to know how it is calculated)
  • Voice and SMS are out (no calls/minute or SMS/second numbers this year)
  • Unique phone numbers indicate reachability, and 3 billion is a nice number, showing decent growth from last year
  • Email moved from a number to a percentage, making it even less accurate or interesting. How would one know what an email address represents? There are so many of them that are spammy or just aliases for other addresses.
  • For the first time video is important to Twilio. 3 billion is a large number, but not overly so (more about this later)
  • The number of customers has grown significantly
  • The developers number was useless to begin with and is finally not shared at all
The “new normal”

Jeff alluded to the new normal, forced on us due to the pandemic. In many ways, this has been the main theme of Signal and the sessions.

My gripe with the “new normal” moniker for our situation is that there isn’t anything normal about it, and it isn’t really here to stay.

Yes. We are seeing an accelerated move towards digital transformation and the cloud, but some of this shift, and especially the high usage in some sectors (such as education), isn’t here to stay post-pandemic.

For me, there’s no “new normal”. Just a transition to one, which will take time. How the future is going to look is hard to say from our current position.

Which leads me to the interview Jeff did with John Donahoe, Nike CEO.

Nike and digital transformation

Jeff picked John Donahoe as the first person to interview during the keynote. It is an interesting choice.

I found it a tad ironic to get an explanation about social good and how Nike in all its years promoted social causes. It got me thinking about the Nike sweatshops. Other than this little history reframing that was done, the interview was quite good.

Two sentences that John said really resonated with me:

“Every business in the world is embracing digital transformation. We all have no choice”

The shift towards making businesses more digital has been inevitable.

Just think of all the on premise contact centers and what they now have to do when all of their agents are working from home. Or how all brick and mortar stores need a digital footprint to be able to even stay in business and sell throughout the quarantines.

“There is no finish line”

I should start using it myself.

There are a lot of discussions around build vs buy that I participate in, especially when it comes to the decision to build a WebRTC infrastructure versus buying an existing one via CPaaS vendors. In many cases, the argument and focus is on the initial development effort and a lot less on maintenance. The thing about maintenance is that it is almost as hard as the initial development, especially because there is no finish line – the product team will always ask for more features and capabilities which will drive more investment.

Twilio Microvisor

The first announcement made during the keynote was about a new product – Twilio Microvisor.

The Twilio Microvisor is an extension of the Twilio Super SIM and its Internet of Things initiative, which many don’t even associate with CPaaS (I’ve been ignoring it as well).

The world of IoT and M2M is a challenging one. It includes different networks and carriers, differences in geographies and regulation, different hardware devices and chipsets.

Earlier in the year, Twilio acquired Electric Imp. This acquisition is now the Twilio Microvisor.

Up until now, the only real touching point that Twilio had with the physical world was their Super SIM. With Microvisor (and Electric Imp) that changes, and Twilio is mucking around with microcontrollers, firmware and hardware.

In the special announcements session, Evan Cummack, GM of IoT at Twilio, explained that there was a gap in the market – as a developer you either had to begin from scratch or use ready-made solutions:

The gap between IoT alternatives for developers: DIY or bespoke solutions

He ignored a few of the competitors to the Twilio offering, but these are less flexible and open anyway.

What Twilio is doing with Microvisor is taking care of a few important aspects of IoT development:

Twilio Microvisor takes care of the heavy lifting of security for developers:
  • Secure Boot
  • Secure FOTA (Firmware Over The Air)
  • Secure Debug
  • Secure Communications

The secure part here is key, as it is the one thing we struggle with greatly in IoT these days. This solution will remove a lot of the headaches of IoT development and get more products released.

It is also where Twilio is competing not with other CPaaS vendors but rather with the cloud vendors, who also started offering IoT tooling in recent years.

Twilio Video WebRTC Go

Coming from the Video and WebRTC space, this is where I am most frustrated.

The need and growth of video

With the pandemic going on, Twilio had to do something about video, an area where little investment on their part has taken place. Until 2020, this has been understandable. Growth came from elsewhere and it didn’t seem like video is that important.

All this has changed. Zoom exploded, Agora.io had a great IPO, and Twilio itself saw a 500% increase in the daily usage of its video service.

Twilio reiterating the need and uses of video communication

The one to talk about Twilio Programmable Video was Michelle Grover, Chief Information Officer. Her part of the keynote revolved around the market need. The main market verticals here were retail and health.

It was more a reminder that Twilio is doing video than anything else.

The new WebRTC announcement

The new announcement? Twilio Video WebRTC Go

What is Twilio Video WebRTC Go?

  • A free, hosted WebRTC service
  • Peer-to-peer, 1:1 sessions only
  • Limited to 25 GB/month of TURN for media relay

For context, pricing of 25 GB/month on Twilio’s TURN servers in the US is $10/month.

If you developed your own signaling and your own application, relying on Twilio’s TURN servers, then switching to Twilio Video WebRTC Go will save you $10.

But what you really get here is Twilio Video P2P that costs $0.0015/minute. In this configuration, you get the full infrastructure and support of Twilio’s signaling, logging and SDKs practically for free if your service is smaller than 25 GB/month of TURN media relay. How many video sessions can this accommodate? That’s something you’ll need to calculate.
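
To give a feel for that calculation, here is a rough sketch. Every number in it is an assumption you would replace with your own (the bitrate, and the share of calls that actually end up relayed through TURN):

```typescript
// Rough estimate of how far 25 GB of TURN relay goes for 1:1 video calls.
// All inputs are assumptions -- plug in your own bitrate and relay ratio.
const turnBudgetGB = 25;
const relayedBitrateMbps = 2;   // combined send+receive media for a relayed participant
const relayedCallShare = 0.15;  // share of calls that actually need TURN (often 10-20%)

const mbPerRelayedMinute = (relayedBitrateMbps * 60) / 8;            // Mbps -> MB per minute
const relayedMinutes = (turnBudgetGB * 1000) / mbPerRelayedMinute;   // ~1,667 minutes
const totalCallMinutes = relayedMinutes / relayedCallShare;          // ~11,000 minutes

console.log({
  relayedMinutes: Math.round(relayedMinutes),
  totalCallMinutes: Math.round(totalCallMinutes),
});
```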

For Twilio this is a win, as it gets more companies to adopt its Programmable Video at a very low cost to Twilio (remember – video isn’t a serious money maker for Twilio yet, so helping these smaller users grow their business and then have them start paying is just fine). With all the video API services out there, a free offering from a large vendor is a first. While limited, it is probably useful for many companies starting their way with 1:1 video calling.

On open source and Twilio

The fact that Twilio is calling their reference apps “Open Source Video Collaboration Apps” is a bit silly. These are references/samples running on top of the Twilio Programmable Video API and are not meant, designed or easily usable on top of any other vendor or on top of any other infrastructure.

Calling a piece of code, no matter how big, open source, while forcing its user to consume other paid services in order to use it is not exactly open source.

This isn’t to say that this open source reference app isn’t useful. It surely is most useful. It gives developers a better starting point for their application, and Twilio has taken the time at Signal to offer a session titled “Accelerating Development of Collaboration Apps with Twilio Video” dedicated exactly to this.

It is a trend I see of CPaaS vendors going towards higher level abstractions. Twilio is doing that with nocode (=Twilio Studio), programmable enterprise (=Twilio Flex), reference apps for video (this one) and now with Frontline (later in this article).

Nothing new under the sun here

For me this says that Twilio hasn’t invested in video as much in the last year or two. If they had, they would have announced something more thrilling and interesting. Maybe larger meetings, above 50 participants? Broadcasting capabilities? Noise suppression? Something…

Twilio Flex ecosystem

The keynote and the session had a lot of Twilio Flex content in them. This is less about developers and more about contact centers.

A show of force for Twilio Flex, but sharing customer logos

In this event, Tony Lama, Vice President of Contact Center Sales at Twilio, briefly mentioned that many features were added to Flex, but didn’t really delve into them. The focus was on the fact that Flex has customers and now has a thriving ecosystem of partners as well.

Lots of new features, none interesting enough for the keynote

The main target this year was the on-premise contact centers – this is where Twilio is setting its sights: the transformation these contact centers are going through as they head to the cloud (forced to do so sooner rather than later by the pandemic).

This is why Twilio decided to focus on the ecosystem, making it into a big announcement:

This targets exactly the on premise contact centers, where large deployments with many agents and a lot of custom integration code and features were added over the years. An ecosystem around Flex gives Twilio the reach it needs.

It is also why Twilio introduced its latest Flex partner – Deloitte Digital – who offer system integration in this target market.

Twilio Flex and its current set of announcements is less about CPaaS and developers and more about contact center as a service (CCaaS).

Twilio Frontline

In that vein, the announcement of Twilio Frontline was made.

Interestingly, this was introduced by Simon Khalaf, SVP and GM, Messaging at Twilio.

Twilio Frontline is a new complete, closed, mobile application and service which enables employees in a company to directly communicate with customers through messaging channels.

The main benefits touted about Frontline? SSO (Single Sign-on) and CRM integration

  • Both of these features aren’t building blocks or APIs at Twilio, which raises the question of why not
  • There’s nothing about programmability, APIs or building blocks here. This isn’t something by developers for developers

This is far removed from the developer roots and target audience of Twilio, so it will be interesting to see how this plays out and redefines Twilio itself. My guess is that Frontline started as a skunk works project during the pandemic, one that turned into a new product that is now looking for a home at Twilio and within its bigger storyline.

I wonder though, was this built on top of Twilio Conversations, which was introduced at Signal 2019, or is it something implemented on top of Twilio Flex?

If this was implemented on top of Twilio Flex (which I believe it was), then why is the SVP and GM of Messaging at Twilio the one introducing it? And why wasn’t it designed, developed and even introduced as a programmable solution? Part of Flex. Maybe even an “open source application” on top of Flex.

Frontline is an interesting product. But what does it have to do with Twilio?

Other announcements

There was little in the keynote of Twilio about APIs and CPaaS and more about the higher level abstractions and complete applications (Flex and Frontline). This shows a maturity level at Twilio, where most of the CPaaS domains are already well covered by their APIs.

Two additional announcements of new features/products were made, though not in the keynote itself.

Twilio Event Streams

That trillion human interactions? These are probably just events in the Twilio system:

This is the slide shared in the session discussing the new feature/product of Twilio Event Streams. It isn’t a trillion but it is close enough.

What Twilio did was consolidate all of its events into a single hook, calling it Event Streams, offering a single integration point for collection of events. The first sink selected for these events is Amazon Kinesis, with more to probably be added later, based on customer demand.

Moving towards consolidated data management shows maturity and an increase in the customers that are using multiple Twilio products.

Twilio Verify Push

Another new product/feature is Twilio Verify Push. This enables a mobile application to be used as a trusted device/app to validate login on another device (as well as on the device itself). The end result is reduction in the SMS volume.

While nice, I am waiting here for Google and Apple to close this gap and offer their own verification mechanisms to all instead of having application developers rely on third party services.

As for Twilio, this makes for a sensible and useful addition to their Twilio Verify service.

Machine Learning was missing

What was missing at Twilio Signal 2020 is AI and machine learning.

No really interesting improvements shared about Twilio Autopilot. No cool introduction of noise suppression or other media processing machine learning capability. Nothing.

There were a few mentions of how customers use Autopilot to create bots in order to deflect calls and handle the volume (nice stories, though we’ve already heard that this would be the main use case for Autopilot).

The only “real” thing around AI? At the end of the keynote, Jeff Lawson had his short “live” coding session.

Jeff, coding “live”. Still magical

This time, he went for using OpenAI’s GPT-3, a pre-trained natural language processing engine. He made it understand TwiML constructs (the XML format used by Twilio) so that users can write a sentence describing what they want, and the service generates the TwiML for them. A nice toy to play with. I wonder what people will do with it from here, as it opens up a lot of questions, thoughts and ideas.

Machine learning is one of the main pillars I see in post-pandemic CPaaS offerings. Twilio has the skill set in-house to pull this off, but they need to focus there more than they do today. They should probably also partner or acquire in this space to keep pace with where the industry is headed.

The coming CPaaS fight is in the enterprise

The enterprise story of Twilio came at the beginning of the keynote. Jeff wanted to make sure everyone knew and understood that Twilio is ready for the enterprise and being used by the enterprise. The careful selection of guests throughout the keynote showed that as well – they were all established enterprises. No cool startup this time. No crazy garage developers. Just formidable businesses that existed for years.

Twilio is ready for the enterprise, with all the relevant certificates and procedures

I decided to leave this to the end since this is where Twilio is being challenged.

The challenge comes in the form of Amazon and Microsoft going after CPaaS. Both of these vendors have:

  • A bigger, wider breadth of products and services targeted at developers
  • Attractive programs for startups, giving them free “cash” on their platforms
  • Better access and relationships with enterprises
  • Global coverage and partner programs that are richer in depth, breadth and reach

Amazon will probably introduce machine learning capabilities such as noise suppression as part of its CPaaS offering soon. They have it available in Amazon Chime, so placing it in the Chime SDK is the next logical step.

Microsoft runs their CPaaS on the same infrastructure that Teams runs on. Twilio touts 3B video minutes a year, while Microsoft Teams has up to 5B meeting minutes a day. At that rate, Teams accumulates well over a trillion minutes a year – a considerably larger number than 3B video minutes.

Both Amazon and Microsoft still have a way to go in stabilizing their APIs and attracting developers and attention. They might not be as interested in this CPaaS business as Twilio is, so they will probably never reach the same level of maturity, breadth of features and flexibility as Twilio. But they will surely win market share. Market share that could have easily been Twilio’s.

What is also very interesting to note is that while Amazon and Microsoft made a point of not mentioning WebRTC in the front of their CPaaS platforms (both of which are video first and use WebRTC), Twilio decided to bring WebRTC to the front with their new offering of Twilio Video WebRTC Go. I wonder which works better for enterprise sales.

Anyway, with 75% of contact centers still on premise, the enterprise market as a whole is still only starting its path towards digital transformation and with the new phrase I just adopted of “there is no finish line”, there is definitely room for growth for Twilio and its many competitors.

Interesting times ahead of our industry.

The post Twilio Signal 2020. I expected more from the leading CPaaS vendor appeared first on BlogGeek.me.

Cloud giants joining the WebRTC API game. How is that changing the CPaaS landscape?

Tue, 09/29/2020 - 12:25

Amazon Chime SDK and Azure Communication Services mark the entrance of the cloud giants to the CPaaS space, and they are doing it from a WebRTC API angle.

Ever since Twilio became popular, a question was raised over and over again:

When will one of the large IaaS players (Amazon, Microsoft or Google) acquire them or start competing with them directly?

There was no good answer. At least not until 2020, where 3 things happened:

  1. The pandemic hit us and we had to stay at home and shelter, or whatever
  2. Video exploded
  3. Amazon Web Services and Microsoft Azure both launched their CPaaS offering

This. Changes. Everything.

(it doesn’t. It changes only some things, but bear with me)

I already discussed how the pandemic changes priorities for CPaaS vendors. This new development is going to make things more of a mess.

Why now?

Amazon Chime SDK was already announced and launched close to the end of 2019. They already have customers and success stories under their belt. Why am I just now getting to look at how IaaS vendors are changing the market?

Probably a bit because I am doing the update to my WebRTC API platforms report this month. But also because of Microsoft’s announcement of their Azure Communication Services.

Amazon Chime SDK

Amazon started its work on video communications with the introduction of Chime a few years back. Chime is an enterprise communication service (in the UCaaS space), akin to Zoom, Google Meet and Microsoft Teams. It enables companies to communicate internally and externally via video and voice, with a better set of collaboration tools than just phone calls.

For some time now, Amazon Chime has also been offered as a whitelabel solution that vendors could “make their own” and integrate with their service. But it doesn’t allow for much flexibility in terms of workflow, business logic and user authentication. This has led Amazon to introduce the Amazon Chime SDK.

The Chime SDK is one rung lower in the stack. It gives developers access to the logical building blocks of communications, offering a pure communication API that can be used to connect to any other service. A direct competitor to the other CPaaS vendors offering video capabilities.

What Chime SDK did to really disrupt the market was lower the price point per minute. It comes at a rate of $0.0017 per user per minute. Twilio answered with its own price drop in September 2020:

A 60% reduction in Twilio Programmable Video price points

The new rates are still above the Amazon Chime SDK price points, but they are 40% of their previous levels.

It should be noted that peer-to-peer calling available in Twilio Programmable Video is at $0.0015, lower than the Amazon price, but of a slightly different service and feature set.
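
For a quick sense of what these per-minute differences mean, here is a trivial sketch using only the two rates quoted above; the monthly usage figure is an arbitrary assumption:

```typescript
// Monthly cost at a given per-participant-minute rate.
// Rates are the ones quoted in the text; the usage number is an arbitrary assumption.
const monthlyParticipantMinutes = 500_000;

const ratesPerMinute: Record<string, number> = {
  "Amazon Chime SDK": 0.0017,
  "Twilio Video P2P": 0.0015,
};

for (const [vendor, rate] of Object.entries(ratesPerMinute)) {
  console.log(`${vendor}: $${(monthlyParticipantMinutes * rate).toFixed(2)} / month`);
}
// Amazon Chime SDK: $850.00 / month
// Twilio Video P2P: $750.00 / month
```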

What is Amazon “selling” here? The AWS story. From the main Chime SDK page:

AWS Lambda is already there. Connectivity to other AWS services is also part of the bigger spiel.

Azure Communication Services (AKA ACS)

Microsoft just announced Azure Communication Services in a public preview. This is a full CPaaS offering that includes Video, Chat, SMS and Telephony calling. The interesting tidbits alluded to in the announcement:

  • Azure enabled, with all the knobs and pieces to connect it to other Azure services; along with the security and compliance of the Azure cloud
  • Connectivity with Microsoft Teams, which isn’t available yet in the public preview

Watch that video above. There’s a visual explanation of remote visual assistance. I’d never think of explaining embedded or programmable video communications this way – because I have been in this industry for so long. What Microsoft is doing here is educating the market in the most basic way possible. Something we were missing in our market without even knowing it. This type of approach can work well in the enterprise space, which hasn’t adopted such services in droves just yet.

What makes this so interesting is this:

  1. Microsoft is the only CPaaS vendor who has a huge UCaaS offering. Huge as in up to 5B (or more) meeting minutes a day. Starting off with the same underlying scalable infrastructure means resilience, reliability and scale
  2. This is part of Azure and not tied to Teams. Like the AWS Chime SDK offering, the tie in with machine learning in their compute cloud brings value to developers using Azure already
  3. Microsoft has Office as another huge asset. If they can make the connection to it here, this is another great differentiator

On pricing, Microsoft was a bit more traditional and less bold than Amazon, sticking to the $0.004/minute price point the market seems to have adopted.

The new model for Video CPaaS?

Even before Amazon and Microsoft joined this space, there were two objectives you could see in the mid-term and long-term roadmap for video CPaaS vendors:

  1. Add support for machine learning
  2. Introduce higher level of abstraction

These map where the new video CPaaS is headed, and the fact that Amazon and Microsoft both come with this “built-in” will accelerate things further.

Machine Learning

Everyone’s doing machine learning these days, and it is part of the future of communications and WebRTC.

Amazon Chime SDK will be offering its noise suppression capabilities, connecting to Kinesis and enabling access to all of Amazon’s other machine learning services.

Microsoft in their launch already mentioned Azure Cognitive Services as something that plays/will play nice with ACS.

Other CPaaS vendors are figuring out their way in this space as well, but part of their offering is usually how to gain access to the media for… sending it to the cloud for machine learning analysis. That cloud is going to be AWS and Azure more often than not. Being in that cloud to begin with is going to be an advantage for these cloud vendors and their CPaaS offerings.

Also remember that cloud vendors live and breathe machine learning already. CPaaS vendors? Less so.

Higher abstractions

Everyone in this space is talking about simplicity now.

How can I get developers to do their work in hours versus days. Days versus weeks. Weeks versus… no… weeks is too long already.

While this is unrealistic for a full-fledged, polished service, it is something that works well towards an MVP or a first stab at a ready product.

Some do this by offering open source or reference applications on top of their CPaaS APIs. Others by offering this as a set of ready-made and highly configurable widgets.

It doesn’t seem like anyone has cracked the code of what is needed here, but the growing focus shows there’s something missing. Especially if we want developers to need to know less about WebRTC and media routing and more about their application logic.

I think that Amazon and Microsoft joining this market will speed up the efforts in this domain, as companies search for differentiation and quick onboarding.

Why telephony is dying and communication is growing

Both Amazon and Microsoft are leading here with video, adding chat and telephony later. Later can be immediately after the initial launch, but it is still later.

In the past it made sense to do the opposite. Lead with PSTN and SMS as money makers, and add WebRTC voice and video, waiting for them to grow in adoption.

Taking the opposite approach shows where the future of consumption is.

Winners

Who are the winners when CPaaS is done by the cloud vendors?

Users

If cloud vendors are joining this game, it means there’s enough $$$ in this market to make it interesting, which means more users are consuming such services.

The market education that these cloud vendors are capable of doing and their reach is higher than the other CPaaS vendors, excluding maybe Twilio. This will end up with more enterprises and businesses offering such services and end users using them.

Tier 1 cloud vendors

Amazon and Microsoft. Their timing couldn’t have been better.

If I didn’t know that Bill Gates caused the pandemic so he can chip us all when his vaccine comes to market, making all the birds fall from the sky due to 5G, I might have ended up saying that Jeff Bezos is to blame because he wanted the Chime SDK to grow in market share.

In all seriousness though, this gets both Amazon and Microsoft in front of the developers that use them for additional types of services that these developers are going to consume.

Smaller cloud vendors

Digital Ocean and Oracle.

Why are they winners? I am not sure how Twilio can continue running Programmable Video on top of AWS and compete with AWS Chime SDK on price and geographic spread.

Same for the other CPaaS vendors who might be using AWS or Azure. They will be thinking hard if they want to keep their media stacks on these platforms or move them elsewhere. They can move them to Google Cloud, but Google just might introduce the same capabilities and become a competitor. Next in line will be Digital Ocean and Oracle, both cloud vendors that are carrying real time media traffic already. If I were a sales person there, I’d pick up the phone today and call the CPaaS vendors one after the other…

Developers

A definite win. More choice. In clouds they already use. With a price war coming up.

What’s there to lose?

Losers

Who are the losers when CPaaS is done by the cloud vendors?

CPaaS vendors

They now have more competition. And not from smaller startups, but rather from the leading cloud vendors.

Cloud vendors already cater to developers, and a larger audience of developers.

Things are going to get interesting for these vendors, as they need to rethink differentiation, their own infrastructure and their pricing.

Twilio

Twilio is the leading CPaaS vendor today.

They are using AWS. Everywhere.

This is definitely hurting them and will hurt them more moving forward.

Out of all the threats to Twilio, having cloud vendors competing head to head with them was the biggest one, and it is now happening.

It made sense for someone like Amazon to acquire them and use them as the communication stack for AWS. Now it won’t happen.

Maybe Google will acquire them, though this seems far fetched to me.

Google

3 leading cloud vendors.

  • Amazon
    • Now has AWS Chime SDK
    • Lots of adjacent services for developers
  • Microsoft
    • Now with Azure Communication Services
    • Lots of adjacent services for developers
    • Owner of Microsoft Teams, used as the underlying technology and media stack, with the ability to connect ACS to Teams if and when needed
    • Got Office 365 as another huge asset
  • Google
    • Nothing in communication APIs
    • Owner of Google Meet and Google Duo
    • Leveraging RCS with carriers and in Android
    • Has G Suite and Android as huge assets
    • Has Chrome and Chrome books as assets
    • Did I say no communication APIs?

Google is left behind in its communication APIs for developers, which is sad, considering they are the main driving force behind WebRTC.

I wonder if and when Google will close this gap.

Developers

This will definitely rattle the existing vendors. Some of them might not make it through. So choice will again get a wee bit limited as this plays out.

While cloud vendors are great, their support isn’t the best. They tend to offer support to smaller developers and companies through third parties rather than directly, so there’s going to be less of that available. And that in a domain that is still very complex in nature.

Developers both win and lose from this development.

Updating my WebRTC API report

There’s a lot of change in the CPaaS domain. I mostly look at these vendors from a WebRTC prism, but not only.

This past month I’ve been working on updating my Choosing a WebRTC API platform report. I had a lot of briefings with the various vendors, researched their websites, added vendors, removed vendors. Grueling work.

The updated report will be published during October. It will include ~25 vendors, and touch everything from build vs buy, selection KPIs, vendor listing and pricing.

If you are looking to understand this domain better or need to select one vendor over another for an important project, then this report is for you. From today and until the report gets published, there’s a wee bit over 25% discount using coupon code API2020LAUNCH. Purchasing the report now will give you access to the current report as well as the fresh update once it is available.

The post Cloud giants joining the WebRTC API game. How is that changing the CPaaS landscape? appeared first on BlogGeek.me.

What should CPaaS providers do today to prepare for the “post pandemic”?

Mon, 09/14/2020 - 12:30

The pandemic is changing everything. CPaaS providers need to change their priorities and focus as well.

It is around this time of the year that I start thinking about where the CPaaS market is headed.

It is worth mentioning last year’s articles on the future of CPaaS (written pre-pandemic) and on how CPaaS vendors differentiate (also pre-pandemic, and so very much “last year”).

The pandemic is an epochal event. It caught the CPaaS industry somewhat ready, with gaps found in their video offerings. Behind the pandemic, a few other market changes are taking shape, affecting how CPaaS providers need to plan ahead.

I’d like to look at a few of these trends and outline what I see as the basis of CPaaS competition for the future.

CPaaS features map

CPaaS marketecture and features map

The diagram above shows the CPaaS features map. It is a kind of a marketecture diagram of the various bits and pieces that make up CPaaS.

I’ve layered it from Infrastructure, through Communications Building Blocks and Higher Abstraction to the Simplified Runtime domain. While not all CPaaS vendors will fill all building blocks in this map, they all see it in front of them one way or another.

Here are a few things to note:

  • I’ve decided not to place Email or IoT in here though I could without much effort
  • The importance of each block will be different for different customers and will change over time. The pandemic certainly changed priorities shifting them towards Video for example
  • I am using the term Studio, though Flow is the one that is used by most of Twilio’s competitors
  • ML stands for Machine Learning and it has its place throughout the CPaaS product stack. More on that later

If I had to map priorities for 2021, I’d probably create this heatmap:

CPaaS areas of investment in 2020-2021

The pandemic and CPaaS vendors

In many ways, the pandemic is accelerating the need for CPaaS providers. The world switched en masse from one of physical interactions to a virtual one. This, in turn, exposed a few aspects in the CPaaS market.

Digital transformation fast forward

The image above circulated on Twitter some time in March-April this year. It is spot on.

Digital transformation is here and it is here to stay. It came about a few years faster than expected and to get by, companies are relying more on communications and a lot of it comes today from vendors who use CPaaS or by developing the solutions needed on top of CPaaS platforms.

The thing is, in many cases, the increase is also catching businesses off guard, with call centers and support teams being overwhelmed with incidents. And that at a point in time where everyone is forced to work from home – including the call center agents.

This, in turn, increases the requirements around technologies that assist in automating processes and communication channels. Call deflection and agent assist solutions are taking center stage. This changes a bit how CPaaS vendors need to treat communication APIs, and especially what these APIs need to enable.

Are we looking now for more or less Uber-like solutions of matching a customer to a service provider? Or are we more about getting hold of the interaction’s content in real time and injecting insights into it, with or without a human agent?

I don’t have the answers, but I have a feeling that they are different than they were 9 months ago.

CPaaS vendors totally missed video

Video growth was unexpected, catching most CPaaS vendors unprepared

Yap. We had CPaaS vendors doing video. A few of them. And they were just fine. Up until the point where video became important for everyone and totally new use cases started popping up in our market on almost a daily basis.

Zoom doesn’t mean a magnifying glass anymore. Nor is it talking about getting a closer look.

During the pandemic?

  • Daily officially launched. And raised money
  • Dolby.io launched
  • Agora raised some $350M in their IPO

All of the above? Focus on video communications. None of them have any telephony roots or strong telephony capabilities. No phone numbers or SMS capabilities to speak of.

AWS decided it would be nice to join the fray, so they launched their own Chime SDK. With price points that challenge the existing players.

Twilio decided this month to lower their video price points. Cutting them down by some 60%.

8×8’s Jitsi is coming up with its own managed video API service, pricing it around MAU as opposed to the more common per minute pricing.

There’s a minor price war coming up around video APIs. It will be interesting to see how this plays out.

Lack of WFH tooling in CPaaS

WFH = Work From Home

Working from home isn’t just working from a different location

Welp… we’ve built all these nice communication services, but we’ve designed them mostly to work for the office.

On premise call centers moved to the cloud by adopting CPaaS, which is great, but the workforce itself still came to the office. All calls and communications took place from a controlled and managed environment.

The pandemic has forced call centers of tens of thousands of agents to stop coming to the office while continuing to work. From home. How do call center managers know anything about the environment of the home employee? How can they make sense of the quality of experience their agents and customers are getting?

From the interest we see at testRTC in our qualityRTC service, there’s a real gap there.

Call this self promotion, but it is one of many areas where CPaaS vendors need to improve in order to offer a suitable WFH solution. Giving APIs is nice. Giving backend network insights and quality related dashboards is nice. Giving pre-call test capabilities is nice. But I am not sure it is enough anymore.

Other aspects of WFH that aren’t catered for by CPaaS vendors? The need for noise suppression and background blurring/removal – to fit into the current work environments of call center agents and other workers.

The pandemic will pass, but digital transformation won’t

Are we really in a new normal?

It was supposed to be a quick 2 months thing. Maybe 6. A year tops.

Then came Google and Facebook (not governments, because they can’t seem to be so realistic and pessimistic with their citizens), and simply let anyone work from home at least until July 2021. At least.

Fujitsu? Decided to cut office space by 50% in 3 years as the new normal.

LivePerson, an Israeli company with 1,300 employees, decided to give up its offices altogether and go 100% WFH. This saves money, and apparently most employees prefer it, while management doesn’t see enough of a degradation in production output.

This obviously isn’t the case everywhere. In a recent interview with The Wall Street Journal, Reed Hastings, CEO of Netflix, had this to say about remote work:

“I don’t see any positives. Not being able to get together in person, particularly internationally, is a pure negative. I’ve been super impressed at people’s sacrifices.”

To some degree, he is correct. It greatly depends on the type of industry and company.

Dean Bubley says it best about business events:

In-person business events will rise again, although I’m less certain about office work.

[…]

The #NewNormal will not be 100% remote. Once a vaccine is available, I hope that it isn’t even 50% #WFH.

My wife is a Pilates and Salsa dance teacher. She needs to work remotely now from time to time, with Zoom and recorded lessons. Her students? They’re fine with it, but whenever they can come over or do a face-to-face-in-the-flesh lesson – they’d take the opportunity.

This means that whatever it is CPaaS vendors are seeing as requirements may well stay and stick with them for the long run. What we have now isn’t a new normal, but there’s no going back to the old normal either.

3 pillars of CPaaS competition and differentiation in 2021

When I had to decide what are the main areas of investment for CPaaS when it comes to differentiation and competition towards 2021, I came to these 3 domains: machine learning, video and diagnostics.

There are two reasons why I chose these domains:

  1. Renewed focus on IP based communications. WebRTC and VoIP are becoming paramount to the growth and future of CPaaS. SMS and phone numbers are great money makers, but they’re not the future. The pandemic threw us a few years into the future, accelerating this trend
  2. Competing with in-house development. Phone numbers are complicated. Not because they are technically complex, but because they require haggling and contracting with multiple carriers around the globe, which gives an immediate advantage to CPaaS providers. With WebRTC that doesn’t exist anymore, and in-house becomes a bigger competitor to CPaaS providers. The domains below will increase the gap between build and buy for potential clients and also increase the perceived value of a solution
#1 – Machine Learning in media quality

Noise suppression. Background replacement. Super resolution. Bandwidth estimation. Packet loss concealment. …

All these are algorithms in the media processing domain affecting the user experience in communications. Like everything else they are now shifting towards using a lot more machine learning than in the past.

The current forerunner in importance and mindshare is noise suppression, with a lot of partnerships and M&A activities around it.

When it comes to machine learning in media quality, what are CPaaS vendors doing today?

Almost nothing at all.

The rest? Not doing much about machine learning, researching or doing bots.

This cannot last.

We’ve already seen how WebRTC is being unbundled for the purpose of differentiation. That differentiation will come in the form of optimizations, mostly done by use of machine learning.

What will vendors do? Especially when we see the leading UCaaS vendors actively investing in machine learning media processing capabilities? This sets the bar to what a communication service needs to look like, and without such capabilities, why should I as a developer use that CPaaS vendor?

#2 – Video, Video, Video

Tony Robbins going virtual. Is this a CPaaS implementation???

Did I already say we’re in the year of the video?

It is.

A billion people have been indoctrinated over a period of one month this year on how to use Zoom (don’t nitpick me on the exact number please). My mother now uses Zoom in her daily life for a variety of activities, including a book reading club she joined.

Many CPaaS vendors had video capabilities, but they usually amounted to 1:1 interactions or small group sizes. There isn’t a day going by where I don’t get a new requirement from someone that CPaaS providers can’t cater for today. Many of these are in the domain of broadcasts and large groups (100 or more participants). Using CPaaS for them today feels like hacking at best. Impossibly challenging at worst.

There are many areas where CPaaS providers are lacking when it comes to video. Here are the few that immediately come to mind:

  • What we are seeing is a rapid growth in the feature set and requirements of video centric use cases. These need to be addressed. As a simple example, how do you do a live session with one presenter streaming to a large audience and the audience in turn sending their own video to the presenter, so that the presenter sees them all at the same time (or can alternate between them)?
  • There’s a blurring of the lines between voice, video, broadcast and streaming. There’s a need to seamlessly switch from one to the other. Broadcast and streaming comes today predominantly from non-CPaaS vendors. There’s a growing pressure for these to be wrapped into CPaaS for interactive use cases
  • Price points of video services need to be adjusted. With the change brought by the AWS Chime SDK, and the pricing model of 8×8 JaaS, there are bound to be changes for other CPaaS vendors. This is imperative, especially when build vs buy decisions rely so heavily on back of the napkin calculations of minutes of use multiplied by a static number
  • Location of data centers, and the latency brought about by it. Most CPaaS vendors have 10 or fewer data centers they operate from. Now that everyone is using video, this just isn’t enough. It might be good enough for voice calls in call centers, but video calls the world over are different – they take place a lot more locally within regions and countries now, so having data centers closer to users is becoming more important than ever

The investment in video communications in all its facets will be important to stay competitive in this space.

#3 – Diagnostics and analytics

It is great that you can communicate, but what happens when things go haywire?

In my recent round of updates I am doing for my Choosing a WebRTC API Platform report, many of the vendors made sure I know they have a dashboard for quality and network monitoring. Different vendors give it different names, but they all understood that unlike telephony, there’s a need for insights here, especially since networks are unmanaged.

It isn’t about me as a client understanding if the CPaaS vendor is doing a good job, but rather about me understanding my users’ networks and experience. Current dashboard solutions will need to evolve further to give the insights their customers are looking for.

Didn’t you miss anything?

In my future of CPaaS article from last year I mentioned a few additional trends. Some of them have been reiterated here, though from a different angle and with a different narrative that fits better with the changing times.

There were three topics that weren’t mentioned here yet, and I want to give them a bit of room and explain where I see them in 2021 with CPaaS.

nocode / low code

Still a thing. Serverless, Flow, Zapier integration, drag and drop tools. All there. All needed.

For the most part, CPaaS vendors seem to be content with the current state of affairs and the current tools they have. Investment in this domain in 2020 didn’t yield anything vastly different, new or interesting.

The domain of nocode is still relevant and interesting. For now, it seems to be mostly limited to the telephony (and voice) aspects of CPaaS.

CCaaS and UCaaS

The lines are blurring elsewhere as well. Areas of IoT (below), messaging and notifications, live streaming – are all suitable adjacencies for expansion of CPaaS vendors.

The largest areas though are CCaaS and UCaaS: contact centers and unified communications

Acronyms will be tricky here. So bear with me.

  • CCaaS and UCaaS are investing heavily in ML. A lot of it now is around #WFH
  • CPaaS is going up the foodchain, mainly after CCaaS. Some do it directly (Twilio Flex), others pivoting sideways to conversations (MessageBird Omnichannel Chat Widget)
  • UCaaS is vying towards CPaaS, introducing their own APIs and even CPaaS offerings

In another world, just next by, other SaaS solutions are blurring their lines. Gist (the chat widget I am using on my WebRTC course site) announced to its customers that it is releasing a full fledged CRM. From conversations to CRM.

CRMs in turn, can use CPaaS vendors directly to build up their own CCaaS offering. With the higher level abstractions geared towards customer engagement, CPaaS vendors now offer a simple route for CRMs in this direction.

This will continue, though I don’t see it as direct competition or real differentiation within the CPaaS domain itself.

IoT

Twilio seems to be the only CPaaS vendor investing in the Internet of Things. It acquired Electric Imp earlier this year. The acquisition wasn’t made with much fanfare, as this isn’t the main focus of Twilio and the current market is interested less in IoT than it is in video calls.

Is IoT part of CPaaS? Time will tell.

I believe that it is, but for now, only Twilio seems to be investing in that domain where none of its other immediate CPaaS competitors have the appetite for it. This will not change in the next couple of years as focus for CPaaS is elsewhere at the moment.

Updating my WebRTC API report

There’s a lot of change in the CPaaS domain. I mostly look at these vendors from a WebRTC prism, but not only.

This past month I’ve been working on updating my Choosing a WebRTC API platform report. I had a lot of briefings with the various vendors, researched their websites, added vendors, removed vendors. Grueling work.

The updated report will be published during October. It will include ~25 vendors, and touch everything from build vs buy, selection KPIs, vendor listing and pricing.

If you are looking to understand this domain better or need to select one vendor over another for an important project, then this report is for you. From today and until the report gets published, there’s a wee bit over 25% discount using coupon code API2020LAUNCH. Purchasing the report now will give you access to the current report as well as the fresh update once it is available.

The post What should CPaaS providers do today to prepare for the “post pandemic”? appeared first on BlogGeek.me.
