News from Industry

Video quality metrics you should track in WebRTC applications

bloggeek - Mon, 07/15/2024 - 12:30

Get your copy of my ebook on the top 7 video quality metrics and KPIs in WebRTC (below).

I’ve been dealing with VoIP ever since I finished my first degree in computer science. That was… a very long time ago.

WebRTC? Been at it since the start. I co-founded testRTC, dealing with testing and monitoring WebRTC applications. Did consulting. Wrote a lot about it.

For the last two years I’ve been meaning to write a short ebook explaining video quality metrics in WebRTC. And I finally did that 😎

The challenges of measuring video quality

Ever since we started testRTC, customers came to us asking for a quality score to fit their video application. But where do you even begin?

  • The quality of a 1:1 call at 1Mbps will be perceived differently on a smartphone than on a PC with a 27” display
  • These same 2 participants collaborating on a document require much less bitrate and resolution
  • Group video calls with 15 people or more require a totally different perspective as to what can be seen as good video quality
  • Cloud gaming with a unidirectional video stream at really low latency has different quality requirements
  • A webinar is different than the scenarios above

Deciding what’s good or bad is a personal decision that needs to be made by each and every company for its applications. Sometimes it even differs per scenario.

Where do we even start then?

Packet loss and latency aren’t enough

If I had to choose two main characteristics of media quality in real time communications, they would be packet loss and latency.

Packet loss tells you how bad the network conditions are (at least most of the time, that’s what it indicates). Your goal would be to reduce packet loss as much as possible (don’t expect to fully eradicate it).

Latency indicates how far the users are from your infrastructure or from each other. Shrinking this improves quality.

But that’s not enough. You need more than these two metrics to get a better picture of your application’s media quality – especially when dealing with video streams.

Know your top 7 video quality metrics in WebRTC

Which is why I invite you to download and review the top 7 video quality metrics in WebRTC – my new ebook which lists the most important KPIs when it comes to understanding video quality in WebRTC. There you will find an explanation of these metrics, along with my suggestions on what to do about them in order to improve your application’s video quality.

And yes – the ebook is free to download and read – once you jot down your name and email, it will be sent to you directly.

The post Video quality metrics you should track in WebRTC applications appeared first on BlogGeek.me.

Fixing packet loss in WebRTC

bloggeek - Mon, 07/01/2024 - 12:30

Discover the hidden dangers of packet loss and its impact on your WebRTC application. Find out how to optimize your network performance and minimize packet loss.

If there’s one thing that can give you better media quality in WebRTC it is going to be the reduction (or elimination?) of packet loss. Nothing else will be as effective as this.

What I want to do here is to explain packet loss, why it is inevitable, and the many ways we have at our disposal to increase the resilience and quality of our media in WebRTC in the face of packet losses.

Why do we have packet loss in WebRTC?

There are many reasons for packet losses to occur on modern networks and with WebRTC. To name a few of these:

  • Wireless and cellular networks may suffer due to the distance between the device and the access point, as well as other obstructions (physical or just aerial interference)
  • Routers and switches can get congested, causing delays as well as dropped packets
  • Ethernet cables can be faulty at times
  • Connections between switches are not always as clean as they could be
  • Media servers not doing their job correctly or just getting overtaxed with traffic
  • Entropy. The more we miniaturize and condense things, the more entropy will kick in (I added this one just to sound smart)
  • Devices might not be faring too well at times either

We think of the internet as a reliable network. You direct a browser to a web page. And magically the page loads. If it doesn’t, then the network or server is down. End of story. That’s because packet losses there are handled by retransmitting what is lost. The cost? You wait a wee bit longer for your page to load.

With WebRTC we are dealing with real time communications. So if something gets lost there is little time to fix that.

👉 Packet losses are a huge headache for WebRTC applications

What to do to overcome packet losses?

Packet loss is an inevitability when it comes to WebRTC and VoIP in general. You can’t really avoid it. The question then becomes what can we do about this?

There are four different approaches here that can be combined for a better user experience:

  1. Have fewer packet losses – if we have fewer of these, then user experience will improve
  2. Conceal packet losses (PLC) – once we have packet losses, we need to try and figure out what to do to conceal that fact from the user
  3. Retransmit lost packets (RTX) – we might want to try and retransmit what was lost, assuming there’s enough time for it
  4. Correct packet losses in advance (FEC) – when we know there’s high probability of packet losses, we might want to send packets more than once or add some error correction mechanism to deal with the potential packet losses

From here on, let’s review each one of these four approaches.

Have less packet losses

This is the most important solution.

Because I don’t want you to miss this, I’ll write this again:

This is the most important solution.

If there is less packet loss, there is going to be less headache to deal with when trying to “fix” this situation. So reducing packet loss should be your primary objective. Since you can’t fully eradicate packet loss, we will still need to use other techniques. But it starts with reducing the amount of packet losses.

Location of infrastructure elements in WebRTC

Where you place your media servers and TURN servers and how you route traffic for your WebRTC service will have a huge impact on packet loss.

Best practice today is having the first server that WebRTC media hits as close to the user as possible. The understanding behind that is that this reduces the number of hops and network infrastructure components that the media packets need to traverse over the open internet. Once on your server, you have a lot more control over how that data gets processed and forwarded between the servers.

Having a single data center in the US cater for all your traffic is great – assuming your users are from that region. Once users start joining from across the pond – say… France. Or India – you will start seeing higher latencies and, with them, higher levels of packet loss.

A few things here:

  • Where you place your servers highly depends on your users and their behavior
  • TURN servers are important to spread globally, but at the end of the day, check how much of your actual traffic gets relayed through TURN servers
  • Media servers are something I’d try to spread globally more, assuming these are needed in all meetings. I’d also focus on cascaded/distributed architectures where users join the closest media server (versus allocating a specific server for all users in the same meeting)

Where to start?

👉 Know the latency (RTT) of your users. Monitor it. Strive towards improving it

👉 Check if there are locations and users that are routed across regions. Beef up your infrastructure in the relevant regions based on this data

👉 Since we want to reduce packet loss, you should also monitor… packet loss
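
To make these pointers concrete, here is a minimal sketch (in TypeScript, assuming a browser environment and an existing RTCPeerConnection) of sampling RTT and packet loss from getStats(). The reportToBackend() function is a made-up placeholder for whatever analytics pipeline you use – it isn’t a WebRTC API.

```typescript
// A minimal sketch: sample round trip time and packet loss from getStats() and
// hand them to your own monitoring pipeline. reportToBackend() is a placeholder
// name - replace it with whatever you use to collect metrics.
async function sampleNetworkHealth(pc: RTCPeerConnection): Promise<void> {
  const report = await pc.getStats();
  let rttSeconds: number | undefined;
  let packetsLost = 0;
  let packetsReceived = 0;

  report.forEach((stats) => {
    // The nominated candidate pair carries the measured round trip time.
    if (stats.type === 'candidate-pair' && stats.nominated) {
      rttSeconds = stats.currentRoundTripTime;
    }
    // Incoming RTP streams expose cumulative loss counters. A real monitoring
    // setup would compute deltas between samples instead of using totals.
    if (stats.type === 'inbound-rtp') {
      packetsLost += stats.packetsLost ?? 0;
      packetsReceived += stats.packetsReceived ?? 0;
    }
  });

  const total = packetsLost + packetsReceived;
  const lossPercent = total > 0 ? (100 * packetsLost) / total : 0;
  reportToBackend({ rttSeconds, lossPercent });
}

// Placeholder for your own metrics pipeline.
function reportToBackend(metrics: { rttSeconds?: number; lossPercent: number }): void {
  console.log('network health', metrics);
}
```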

Better bandwidth estimation

I should have called this better bandwidth management, but for SEO reasons, kept it bandwidth estimation 😉

Here’s the thing:

Sending more than the network can handle, more than the sender can send or more than the receiver can receive leads to packet loss and packet drops.

Fixing that boils down to bandwidth management – you don’t want to send too little since media quality will be lower than what you can achieve. And you don’t want to send too much since… well… packet loss.

Your service needs to be able to estimate bandwidth. That needs to happen on both the uplink and the downlink for each user.

The challenge is that available bandwidth is dynamic in nature. At each point in time, we need to estimate it. If we overshoot – packets are going to be delayed or lost. If we undershoot, we are going to reduce media quality below what we can achieve.

Web browser implementations of WebRTC have their own bandwidth management algorithms and they are rather good. Media servers have different implementations and their quality varies.

For media servers, we also need to remember that we aren’t dealing only with bandwidth estimation but rather with bandwidth management. Once we approximately know the available bandwidth, we need to decide which of the streams to send over the connection and at which bitrates; doing that while seeing the bigger picture of the session (hence bandwidth management and not estimation).
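
One thing an application can do on top of the browser’s estimation is to cap what a sender is allowed to send. Here’s a minimal sketch in TypeScript, assuming you already hold the RTCRtpSender of your video track; the 1.5Mbps ceiling is an arbitrary example, not a recommendation.

```typescript
// A minimal sketch: cap the uplink bitrate of a video sender so we never try
// to push more than a chosen ceiling, no matter what bandwidth estimation
// believes is available. The 1.5Mbps default here is an arbitrary example.
async function capVideoBitrate(sender: RTCRtpSender, maxBitrateBps = 1_500_000): Promise<void> {
  const params = sender.getParameters();

  // Before negotiation completes, getParameters() may return no encodings yet.
  if (!params.encodings || params.encodings.length === 0) {
    return;
  }

  // Apply the ceiling to every encoding (relevant when simulcast is in use).
  for (const encoding of params.encodings) {
    encoding.maxBitrate = maxBitrateBps;
  }

  await sender.setParameters(params);
}
```

In a real deployment, a media server would typically drive such caps via signaling, based on what it knows about the receivers.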

Conceal packet losses (PLC)

Packet loss concealment is what we do after the fact. We lost packets, but we need to play out something for the user. What should we do to conceal the problem of packet loss?

This may seem like the last thing to deal with, but it is the first we need to tackle. There are two reasons why:

  1. No matter what kind of techniques and resiliency mechanisms you use, at the end of the day, some level of packet loss is bound to occur
  2. Other techniques we have are more sophisticated. Usually we will get to implement them later on. We NEED to have a rock solid concealment strategy before adding more techniques

Audio and video are different, which is why from here on, we will distinguish between the two in the techniques we are going to use.

Audio and packet loss concealment

With audio, a loss of an audio packet almost always translates immediately to a loss of one or more audio frames (and we usually have 50 audio frames per second).

“Skipping” them doesn’t work so well, as it leads to robotic audio when there’s packet loss.

Other naive approaches here include things like playing back the last frame received – either as is or with a reduction in its volume.

More sophisticated approaches try to estimate what should have been received by way of machine learning (or what we love to call it these days – generative AI). Google has such a capability in-house (though not inside the open source implementation of WebRTC that they have). If you are interested in learning more about this, you can check out Google’s explanation of WaveNetEQ.

A few things to remember here:

👉 For the most part, this isn’t something in your control, unless you own/compile your WebRTC stack on the device side

👉 Knowing how browsers behave here enables you to be slightly smarter with the other techniques you are going to use (by deciding when to use them and how aggressively)

👉 In your own native application? You can improve on things, but you need to know what you’re doing and you need to have a compelling reason to take this route

Video and packet loss concealment 👉 frame dropping

Video is trickier with packet losses:

  • With video coding, each frame is usually dependent on past frames (to improve upon compression rates)
  • A video frame is almost always composed of multiple packets

One lost packet translates into a lost frame, which can easily cause loss of the whole video sequence:

Packet loss concealment in video means dropping a frame, and oftentimes freezing the video until the next keyframe arrives.

What can the receiver do in case of such a loss? If it believes it won’t recover quickly (which is most commonly the case), it can send out a FIR or PLI message over RTCP to the sender. These messages indicate to the sender that there’s a loss that needs to be addressed, where the usual solution is to reset the encoder and send a new keyframe.

In the past, systems used to try and overcome packet losses by continuing to decode without the missing packets. The end result was smearing artifacts on the video until a new keyframe arrived. Today, best practice is to freeze the video until a keyframe arrives (which is what all browser implementations do).

A few things to remember here:

👉 You have more control here than in audio. That’s because a lost packet means you will receive a FIR or PLI message from the other end. If it’s your media server receiving these messages, you can decide how to respond

👉 Sending a keyframe means investing more bitrate in that frame. If there’s congestion over the network, then this will just add to the burden. Most media servers would avoid sending too many of these in larger group meetings

👉 There are video coding techniques that reduce the dependencies between frames. These include temporal scalability and SVC
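
If your application is the one sending the video, the outbound-rtp stats let you see how often the other side is asking for recovery. A minimal sketch of reading those counters, assuming a browser-side RTCPeerConnection:

```typescript
// A minimal sketch: count the RTCP feedback messages received for the video we
// are sending. A steadily climbing PLI/FIR count usually means lost packets
// keep forcing fresh keyframes, which costs bitrate and hurts quality.
async function recoveryRequestCounts(pc: RTCPeerConnection) {
  const report = await pc.getStats();
  const counts = { pli: 0, fir: 0, nack: 0 };

  report.forEach((stats) => {
    if (stats.type === 'outbound-rtp' && stats.kind === 'video') {
      counts.pli += stats.pliCount ?? 0;
      counts.fir += stats.firCount ?? 0;
      counts.nack += stats.nackCount ?? 0;
    }
  });

  return counts;
}
```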

Retransmitting lost packets (RTX)

If a packet is missing, then the first solution we can go for is to retransmit it.

The receiver knows what packets it is missing. Once the sender knows about the missing packets (via NACK messages), it can resend them as RTX packets.

Retransmission is the most economical solution in terms of network resources. It is the least wasteful solution. It is also the hardest to make use of. That’s because it ends up looking something like this:

In order to retransmit, we need to:

  • Know there are missing packets (by receiving a newer packet)
  • Decide that the older ones won’t be arriving and are lost
  • Let the sender know they are lost
  • Have the sender retransmit them

This takes time. A long time.

The question then becomes: is it going to be too late to retransmit them?
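
One way to get a feel for how much retransmission is actually going on in your sessions is to look at the retransmission counters in the sender’s stats. A minimal sketch, again assuming a browser-side RTCPeerConnection:

```typescript
// A minimal sketch: estimate how much of the outgoing video consists of RTX
// retransmissions, using the cumulative counters on the sender's outbound-rtp
// stats. A value of 0.05 would mean roughly 5% of sent packets were resends.
async function retransmissionRatio(pc: RTCPeerConnection): Promise<number> {
  const report = await pc.getStats();
  let packetsSent = 0;
  let retransmitted = 0;

  report.forEach((stats) => {
    if (stats.type === 'outbound-rtp' && stats.kind === 'video') {
      packetsSent += stats.packetsSent ?? 0;
      retransmitted += stats.retransmittedPacketsSent ?? 0;
    }
  });

  return packetsSent > 0 ? retransmitted / packetsSent : 0;
}
```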

Video and RTX

Video can make real use of retransmissions (and it does in WebRTC).

With video compression, we have a kind of hierarchy of frames. Some frames are more important than others:

  • Keyframes (or I-frames) are the most important. They are “standalone” frames that aren’t reliant on any past frames
  • In SVC and temporal scalability, some frames are a kind of a dead-end, with nothing reliant on them, while other frames do have frames reliant on them

The above illustration, for example, shows how keyframes and temporal scalability build dependency chains. Key denotes the keyframe, while L0 frames have higher importance than L1 frames (L1 frames are dependent on L0 frames and nothing depends on them).

When we have such a dependency tree of frames, we can do some interesting things with resiliency. One of them is deciding if it is worthwhile to ask for a retransmission:

  • If the missing packets are from a keyframe, then asking for a retransmission is useful even if the keyframe itself won’t be displayed due to the time that passed
  • Similarly, we can decide to do this for L0 frames (these being quite important)
  • And we can just skip packets of L1 frames that are lost – we might not have time to playback this frame once the retransmission arrives, and that data will be useless anyway
Audio and RTX

Audio compression doesn’t enjoy the same dependency tree that video compression does. Which is why libwebrtc doesn’t have code to deal with audio RTX.

Would having RTX for audio be useful? It could. Audio packets usually wait for video packets to arrive for lip synchronization purposes. If we can use that wait time to retransmit, then we can improve upon audio quality. Google likely deemed this not important enough.

Correct packet losses in advance (FEC)

We could ask for a retransmission after the fact, but what about making sure there’s no need? This is what FEC (Forward Error Correction) is all about.

Think of it this way – if we had one shot at what we want to send and it was super important – would it make sense to send 100 copies of it, knowing that the chances that one of these copies would reach its destination are high?

FEC is about sending more packets that can be used to reconstruct or replace lost packets.

There are different FEC schemes that can be used, with the main 3 of them being:

  1. Duplication (send the same thing over and over again)
  2. XOR (add packets that XOR the ones we wish to protect)
  3. Reed Solomon (similar to XOR just more complex and more resilient)

WebRTC supports duplication and XOR out of the box.

The biggest hurdle of FEC is its use of bitrate – it is quite network hungry in that regard.

Audio FEC

Audio FEC comes in two different flavors:

  1. In-codec FEC (such as Opus in-band FEC), where the FEC mechanism is part of the codec implementation itself
  2. RTP-based FEC, where the FEC mechanism is part of the RTP protocol

In-band FEC is implemented as part of the Opus codec library. It is ok’ish at best – nothing to write home about.
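
For completeness, in-band FEC is negotiated through the useinbandfec parameter on the Opus fmtp line in the SDP. libwebrtc already offers it by default these days, so treat the following TypeScript sketch as an illustration of where the knob lives (plain SDP munging before the description is applied) rather than something you must do:

```typescript
// A minimal sketch: make sure the Opus fmtp line in an SDP carries
// useinbandfec=1, signaling that the remote encoder may add in-band FEC.
// This is plain string munging of the SDP before it is applied.
function enableOpusInbandFec(sdp: string): string {
  // Find the Opus payload type, e.g. "a=rtpmap:111 opus/48000/2".
  const rtpmap = sdp.match(/a=rtpmap:(\d+) opus\/48000/i);
  if (!rtpmap) {
    return sdp; // no Opus in this SDP, nothing to do
  }

  const payloadType = rtpmap[1];
  const fmtpLine = new RegExp(`a=fmtp:${payloadType} (.*)`, 'i');

  return sdp.replace(fmtpLine, (line, params: string) =>
    params.includes('useinbandfec') ? line : `a=fmtp:${payloadType} ${params};useinbandfec=1`
  );
}
```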

Then there’s RED – Redundancy Encoding – where each audio packet holds more than a single audio frame. And the ones it holds are just slightly older frames, so that if a packet is lost, we get it in another packet.

RED is implemented in libwebrtc. Support is limited to 1 level of redundancy for RED (meaning recovering up to one sequential lost packet). You can use WebRTC’s Insertable Streams mechanism to generate RED packets at higher redundancy or dynamic redundancy in the browser though.

In the above, Philipp Hancke explains RED (along with other resiliency features for audio in WebRTC).
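
If you do want RED, the way to opt in from the application is to reorder the audio codec preferences so that audio/red gets negotiated ahead of plain Opus. A minimal sketch, assuming a browser that lists audio/red in its receiver capabilities (Chromium-based browsers do):

```typescript
// A minimal sketch: reorder the audio codec preferences so audio/red is
// negotiated ahead of plain Opus. Call this on the audio transceiver before
// creating the offer.
function preferAudioRed(transceiver: RTCRtpTransceiver): void {
  const capabilities = RTCRtpReceiver.getCapabilities('audio');
  if (!capabilities) {
    return; // capabilities not exposed by this browser
  }

  const isRed = (c: RTCRtpCodecCapability) => c.mimeType.toLowerCase() === 'audio/red';
  const red = capabilities.codecs.filter(isRed);
  if (red.length === 0) {
    return; // audio/red not supported here, nothing to prefer
  }

  // Codecs listed first are preferred during negotiation.
  const others = capabilities.codecs.filter((c) => !isRed(c));
  transceiver.setCodecPreferences([...red, ...others]);
}
```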

Video FEC

FEC for video is considered wasteful. If we need to increase the bitrate by 20% or more to introduce robustness using FEC, then it comes at the cost of video quality – that same bitrate could have been used to encode the video at a higher quality instead.

For the most part, WebRTC ignores FEC for video, which is a shame. When using temporal scalability or SVC, the same way that we can decide to retransmit only important packets, we can also decide to add FEC protection only to the more important frames.

Wrapping it all up

Dealing with packet loss in WebRTC isn’t a simple task. It gets more complex over time, as more techniques and optimizations are bolted on to the implementation. What I want to do here is to list the various tools at our disposal to deal with packet losses. When and how we decide to use them would determine the resulting robustness and media quality of the implementation.

Here’s a quick table to sum things up a bit:

PLC
  • Focus: What to playback to the user
  • Advantages: None. You must have this logic implemented
  • Challenges: Audio may sound robotic. Video will freeze
  • Audio: Duplicate last frames or reduce volume. Use Gen AI to estimate what was lost
  • Video: Skip video frames. Ask for a fresh keyframe to reset the video stream

RTX
  • Focus: When to ask for missing packets
  • Advantages: Low network footprint
  • Challenges: Increases latency. Might not be usable due to it
  • Audio: Not commonly used for audio in WebRTC
  • Video: Can be optimized to retransmit packets of important frames only

FEC
  • Focus: When to send duplicated packets
  • Advantages: Low latency overhead
  • Challenges: High network footprint. Can be quite wasteful
  • Audio: FlexFEC used by WebRTC. Can use RED if you want to
  • Video: Not commonly used for video in WebRTC

Oh – and make sure you first put an effort to reduce the amount of packet losses before starting to deal with how to overcome packet losses that occur…

Learn more about WebRTC (and everything about it)

Packet loss is one of the topics you need to deal with when writing WebRTC applications. There are many aspects affecting media quality – packet loss is but one of them. This time, we looked into the tools available in WebRTC for dealing with packet losses.

To learn more about media processing and everything else related to WebRTC, check out these services:

And if what you want is to test, monitor, optimize and improve the performance of your WebRTC application, then I’d suggest checking out testRTC.

The post Fixing packet loss in WebRTC appeared first on BlogGeek.me.

WebRTC & HEVC – how can you get these two to work together

bloggeek - Mon, 06/17/2024 - 13:00

Getting HEVC and WebRTC to work together is tricky and time consuming. Let’s see what the advantages are and if this is worth your time or not.

Does HEVC & WebRTC make a perfect match, or a match at all???

WebRTC is open source, open standard, royalty free, …

HEVC is royalty bearing, made by committee, expensive

And yet… we do see areas where WebRTC and HEVC mix rather well. Here’s what I want to cover this time:

WebRTC and royalty free codecs

Digging here in my blog, you can find articles discussing the WebRTC codec wars dating back as early as 2012.

Prior to WebRTC, most useful audio and video codecs were royalty bearing. Companies filed patents related to media compression and then got the techniques covered by their patents integrated into codec standards, usually under the umbrella of a standardization organization.

The logic was simple: companies and research institutes need to make a profit out of their effort, otherwise, there would be no high quality codecs. That was before the internet as we know it…

Once websites such as YouTube appeared, and UGC (User Generated Content) became a thing, this started to shift:

  • Browser vendors grumbled a bit about this, since browsers were given away freely. Why should they pay for licensing codec implementations?
  • Content creators and distributors alike didn’t want to pay either – especially since these were consumers (UGC) and not Hollywood in general

The new business models broke, in one way or another, the notion of royalty bearing codecs. Or at least tried to break it. There were solutions of sorts – smartphones had hardware encoders prepaid for, decoder licenses required no payments, etc.

But that didn’t fit something symmetric like WebRTC.

When WebRTC was introduced, the codec wars began – which codecs should be supported in WebRTC?

The early days leaned towards royalty free codecs – VP8 for video and Opus for voice. At some point, we ended up with H.264 as well…

How H.264 wiggled its way into WebRTC

H.264 is royalty bearing. But it still found its way into WebRTC. That was due to Cisco in large part – they decided to contribute their encoder implementation of H.264 and pay the royalties on it (they likely already paid up to the cap needed anyways). That allowed a weird technical solution to be concocted to make room for H.264 and allow it in WebRTC:

  • WebRTC spec would add H.264 as a mandatory to implement codec for browsers
  • Browsers would use the Cisco OpenH264 implementation for the encoder, but wouldn’t have it as part of their browser binary
  • They would download it from Cisco’s CDN after installing the browser

Why? Because lawyers. Or something.

It worked for browsers. But not on mobile, where the solution was to use the hardware encoder on the device, which doesn’t always exist and doesn’t always work as advertised. And it left a gaping headache for native developers that wanted to use H.264. But who cared? Those who wanted to make a decision for WebRTC and move on – got it.

That made certain that at some point in the future, the H.264 royalty bearing crowd would come back asking for more. They’d be asking for HEVC.

HEVC, patents and big 💰

HEVC is a patents minefield, or at least was – I admit I haven’t been following up on this too closely for a few years now.

Here are two slides I have in my architecture course:

There are a gazillion patents related to HEVC (not that many, but 5 figures). They are owned by a lot of companies and get aggregated by multiple patent pools. Some of them are said to be trickling into VP9 and AV1, though for the time being, most of the market and vendors ignore that.

These patents make including HEVC in applications a pain – you need to figure out where to get the implementation of HEVC and who pays for its patents. With regard to WebRTC:

  • Is this the browser vendors who need to pay?
  • Maybe the chipset vendors?
  • Or device manufacturers?
  • What about the operating system itself?
  • How about the application vendor?

Oh, and there’s no “easy” cap to reach as there is/was with H.264 when it was included in WebRTC and paid for by Cisco.

HEVC is expensive, with a lot of vendors waiting to be paid for their efforts.

HEVC hardware

Software codecs and royalty payments are tricky. Why? Because it opens up the can of worms above, about who is paying. Hardware codecs are different in nature – the one paying for them is either the hardware acceleration vendor or the device manufacturer.

This means that hardware acceleration of codecs has two huge benefits – not only one:

  1. Less CPU use on the device
  2. Someone already paid the royalties of the codec

This is likely why Apple decided to go all in with HEVC from iPhone 8 and on – it gave them an edge that Android phones couldn’t easily match:

  • iPhone is vertically integrated – chipset, device and operating system
  • Android devices have the chipset vendor, the device manufacturer and Google. Who pays the bill on HEVC?

This gap for Android devices was a nice barrier for many years that kept Apple devices ahead. Apple could “easily” pay the HEVC royalties while Android vendors tried to figure out how to get this done.

Today?

We have Intel and Apple hardware supporting HEVC. Other chipset vendors as well. Some Android devices. Not all of them. And many just do decoding but not encoding.

For the most part, the HEVC hardware support on devices is a swiss cheese with more holes than cheese in it. Which is why many focus on HEVC support in Apple devices only today (if at all).

Advantages of HEVC in WebRTC

When it comes to video codecs, there are different generations of codecs. In the context of WebRTC, this is what it looks like:

There are two axes to look at in the illustration above

  1. From left to right, we move from one codec generation to another. Each one has better compression rates but at higher compute requirements
  2. Then there’s bottom to top, moving from royalty bearing to royalty free

If we move from VP8 and H.264 to the next generation of VP9 and HEVC, we’re improving on the media quality for the same bitrate. The challenge though is the complexity and performance associated with it.

To deal with the increased compute, a common solution is to use hardware acceleration. This doesn’t exist that much for VP9 but is more prevalent in HEVC. That’s especially true since ALL Apple devices have HEVC support in them – at least when using WebRTC in Safari.

The other reason for using HEVC is media processing outside of WebRTC. Streaming and broadcasting services have traditionally been using royalty bearing video codecs. They are slowly moving now from H.264 to HEVC. This shift means that a lot of media sources are going to have either H.264 or HEVC available as the video codec – VP8 or VP9 will be a lot less common. This being the case, vendors would rather use HEVC than go for VP9 and deal with transcoding – their other alternative being to stick with H.264.

So, why use HEVC?

  • It is better than VP8 and H.264
  • Hardware acceleration for HEVC exists and is more common than for VP9
  • Things we want to connect to might have HEVC and not VP9
  • Differentiation. Some users, customers, investors or others may assume you’re doing something unique and innovative
Limitations of HEVC in WebRTC

HEVC requires royalty payments in a minefield of organizations and companies.

Apple already committed itself fully to HEVC, but Google and the rest of the WebRTC industry haven’t.

Google will be supporting HEVC in Chrome for WebRTC only as a decoder and only if there’s a hardware accelerator available – no software implementation. Google’s “official” stance on the matter can be found in the Chrome issues tracker.
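
If you want to know what a given browser will actually do, you can ask it. Here’s a minimal TypeScript sketch that checks whether the browser advertises H.265 in its receiver capabilities (Safari typically does on supported hardware; Chrome’s answer depends on the hardware decoder situation described above):

```typescript
// A minimal sketch: check whether this browser advertises the ability to
// receive (decode) H.265/HEVC over WebRTC by inspecting its static receiver
// capabilities. A missing entry means don't offer HEVC to this user.
function canReceiveHevc(): boolean {
  const capabilities = RTCRtpReceiver.getCapabilities('video');
  if (!capabilities) {
    return false;
  }
  return capabilities.codecs.some(
    (codec) => codec.mimeType.toLowerCase() === 'video/h265'
  );
}
```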

So if you are going to support HEVC, this is where you’ll find it:

  • Most Apple devices (see here)
  • Chrome (and maybe Edge?) browsers on devices that have hardware acceleration for HEVC, but only for decoding. But not yet – it is work in progress at the moment
  • Not on Firefox (though Mozilla haven’t gotten yet to adding AV1 to Firefox either)
Waiting for Godot AV1

Then there is AV1. A video codec years in the making. Royalty free. With a new non-profit industry consortium behind it, with all the who’s who:

The specification is ready. The software implementation already exists inside libwebrtc. Hardware acceleration is on its way. And compression results are better than HEVC. What’s not to like here?

This makes the challenge extra hard these days –

Should you invest and adopt HEVC, or start investing and adopting AV1 instead?

  • HEVC has more hardware support today
  • AV1 can run anywhere from a royalties standpoint
  • HEVC isn’t available on many devices and device categories
  • AV1 is too new and can’t seriously deal with high bitrates and video resolutions
  • HEVC won’t be adopted by many devices even in the foreseeable future
  • AV1 is likely to be supported everywhere in the future, but it is almost nowhere in the present

Adopt VP9? Wait for AV1?

Where can you fit HEVC and WebRTC?

Let’s see where there is room today to use HEVC. From here, you can figure out if it is worth the effort for your use case.

The Apple opportunity of WebRTC and HEVC

Why invest now in HEVC? Probably because HEVC is available on Apple devices. Mainly the iPhone. Likely for very specific and narrow use cases.

For a use case that needs to work there, there might be some reasoning behind using HEVC. It would work best there today with the hardware acceleration that Apple pampered us with for HEVC. It will be really hard or even impossible to achieve similar video quality in any other way on an iPhone today.

Doing this brings with it differentiation and uniqueness to your solution.

Deciding if this is worth it is a totally different story.

Intel (and other) HEVC hardware

Intel has worked on adding HEVC hardware acceleration to its chipsets. And while at it, they are pushing towards having HEVC implemented in WebRTC on Chrome itself. The reason behind this is a big unknown, or at least something that isn’t explained that much.

If I had to take a stab at it here, it would be the desire of Intel to work closely with Apple. Not sure why, it isn’t as if Intel chipsets are interesting for Apple anymore – they have been using their own chips for their devices for a few years now.

This might be due to some grandiose strategy, or just because a fiefdom (or a business unit or a team) within Intel needs to find things to do, and HEVC is both interesting and can be said to be important. And it is important, but is it important for WebRTC on Intel chipsets? That’s an open question.

Should you invest in HEVC for WebRTC?

No. Yes. Maybe. It depends.

When I told Philipp Hancke I am going to write about this topic, he said be sure to write that “it is a bit late to invest in HEVC in 2024”.

I think this is more nuanced than that.

It starts with the question of how much energy and resources you have, and whether you can spend them on both HEVC and AV1. If you can’t, then you need to choose only one of them – or none at all.

Investing in HEVC means figuring out how the end result will differentiate your service enough or give it an advantage with certain types of users that would make your service irresistible (or usable).

For the most part, a lot of the WebRTC applications are going to ignore and skip HEVC support. This means there might be an opportunity to shine here by supporting it. Or it might be wasted effort. Depending on how you look at these things.

Learn more about WebRTC (and everything about it)

Which codecs are available, which ones to use, how is that going to affect other parts of your application, how should you architect your solutions, can you keep up with the changes coming to WebRTC?

These and many other questions are being asked on a daily basis around the world by people who deal with WebRTC. I get these questions in many of my own meetings with people.

If you need assistance with answering them, then you may want to check out these services that I offer:

The post WebRTC & HEVC – how can you get these two to work together appeared first on BlogGeek.me.

WebRTC Plumbing with GStreamer

webrtchacks - Tue, 06/11/2024 - 14:30

GStreamer is one of the oldest and most established libraries for handling media. As a core media handling element in Linux and WebKit that was launched near the turn of the century, it is not surprising that many early WebRTC projects use various pieces of it. Today, GStreamer has expanded options for helping developers plumb […]

The post WebRTC Plumbing with GStreamer appeared first on webrtcHacks.

Reasons for WebRTC to discard media packets

bloggeek - Mon, 05/27/2024 - 12:30

From time to time, WebRTC is going to discard media packets. Monitoring such behavior and understanding the reasons is important to optimize media quality.

WebRTC does things in real time. That means that if something takes its sweet time to occur, it will be too late to process it. This boils down to the fact that from time to time, WebRTC will discard media packets, which isn’t a good thing. Why is that going to happen? There are quite a few reasons for it, which is what this article is all about.

A WebRTC Q&A

I just started a new initiative with Philipp Hancke. We’re publishing an answer to a WebRTC related question once a week (give or take), trying to keep it all below the 2 minutes mark.

We are going to cover topics ranging from media processing, through signaling to NAT traversal. Dealing with client side or server side issues. Or anything else that comes to mind.

👉 Want to be the first to know? Subscribe to the YouTube channel

👉 Got a question you need answered? Let us know

Discarded media packets in WebRTC

Media packets and frames can be and are discarded by WebRTC in real life calls. There are even getstats metrics that allow you to track these:

The screenshot above was taken from the RTCInboundRtpStreamStats dictionary of getstats. I marked most of the important metrics we’re interested in for discarding media data.

packetsDiscarded – this field indicates any packets that the jitter buffer decided to discard and ignore because they arrived too early or too late. It relates to audio packets.

framesXXX fields are dealing with video only and look at full frames which can span multiple packets. They get discarded because of a multitude of reasons which we will be dealing with later in this article. For the time being – just know where to find this.
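
Here’s a minimal sketch of pulling these counters out of getStats() so you can graph or alert on them yourself (TypeScript, assuming a browser-side RTCPeerConnection; these are cumulative counters, so a real implementation would track deltas between samples):

```typescript
// A minimal sketch: collect the discard/drop counters from all incoming
// streams so they can be graphed or alerted on.
async function collectDiscardMetrics(pc: RTCPeerConnection) {
  const report = await pc.getStats();
  const metrics: Array<Record<string, number | string>> = [];

  report.forEach((stats) => {
    if (stats.type !== 'inbound-rtp') {
      return;
    }
    if (stats.kind === 'audio') {
      // Audio: packets discarded by the jitter buffer.
      metrics.push({ kind: 'audio', packetsDiscarded: stats.packetsDiscarded ?? 0 });
    } else {
      // Video: track the funnel from received frames to decoded frames.
      metrics.push({
        kind: 'video',
        framesReceived: stats.framesReceived ?? 0,
        framesDecoded: stats.framesDecoded ?? 0,
        framesDropped: stats.framesDropped ?? 0,
      });
    }
  });

  return metrics;
}
```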

The diagram below is a screenshot taken in testRTC of a real session of a client. Here you can see a spike of 200 packetsDiscarded less than a minute into the call. We’ve recently added insights in testRTC that hunt for such cases (as well as for video frame drops), alerting about these scenarios so that the user doesn’t have to drill down and search for them too much – they now appear front and center to the user.

WebRTC = Real-Time. Timing is everything

WebRTC stands for Web Real Time Communication. The Real Time part of it is critical. It means that things need to happen in… real time… and if they don’t, then the opportunity has already passed. This leads to the eventuality that at times, media packets will need to be discarded simply because they aren’t useful anymore – the opportunity to use them has already passed.

For all that logic to happen, WebRTC uses a protocol called RTP. This protocol is in charge of sending and receiving real time media packets over the network. For that to occur, each RTP packet has two critical fields in its header:

The illustration above is taken from our course Low level WebRTC protocols. In it, you can see these two fields:

  1. Sequence number
  2. Timestamp

The sequence number is just a running counter which can easily be used to order the packets on the receiving end based on the value of the counter. This takes care of any reordering, duplication and packet losses that can occur over modern networks.

The timestamp is used to understand when the media packet was originally generated. It is used when we need to playback this packet. Multiple packets can have the same timestamp – for example, when the frame we want to send gets split across packets, something that occurs frequently with video frames.

These two, sequence number and timestamp, are used to deal with the various characteristics of the network. Usually, we deal with the following problems (I am not going to explain them here): jitter, latency, packet loss and reordering.
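
A small detail worth knowing: the sequence number is a 16-bit counter that wraps around, so deciding whether a packet is “newer” needs a modulo-style comparison. The sketch below only illustrates that principle – the real jitter buffer logic is far more involved:

```typescript
// A minimal sketch: RTP sequence numbers are 16-bit values that wrap around
// (65535 is followed by 0), so ordering packets needs a modulo-style
// comparison. This mirrors the classic RFC 3550 style check and is only an
// illustration of the principle - not the actual jitter buffer code.
function isNewerSequenceNumber(seq: number, prevSeq: number): boolean {
  // Treat a difference of less than half the sequence space as "ahead".
  return seq !== prevSeq && ((seq - prevSeq) & 0xffff) < 0x8000;
}

// Examples:
// isNewerSequenceNumber(10, 5)    -> true  (normal ordering)
// isNewerSequenceNumber(3, 65530) -> true  (wrapped around)
// isNewerSequenceNumber(65530, 3) -> false (older packet arriving late)
```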

All of this goodness, and more is handled in WebRTC by what is called a jitter buffer. Here’s a short explainer of how a jitter buffer works:

WebRTC discarding incoming audio packets

The above video is our first WebRTC Q&A video. We started off with this because it popped up in discuss-webrtc. The question has since been deleted for some reason, but it was a good one.

Latency

The main reason for discarded audio packets is receiving them too late.

When audio packets are received by WebRTC, it pushes them into its jitter buffer. There, these packets get sorted in their sending order by looking at the sequence number of these packets. When to play them out is then dependent on the timestamp indicated in the packet.

Assuming we already played a newer packet to the user, we will be discarding packets that have a lower (and older) sequence number since their time has already passed.

Lipsync

Audio and video packets get played out together. This is due to a lip synchronization mechanism that WebRTC has, where it tries to match timestamps of audio and video streams to make sure there’s lip synchronization.

Here, if the video advanced too much, then you may need to drop some audio packets instead of playing them out in sync with the video (simply because you can’t sync the two anymore).

Bugs

Here’s another reason why audio packets might end up being discarded by the receiver – bugs in the sender’s implementation…

When the sender doesn’t use the correct timestamp in the packets, or does other “bad” things with the header fields of the RTP packets, you can get to a point when packets get discarded.

👉 Our focus here was on the timestamp because for some arcane reasons, figuring out the timestamp values and their progression in audio (and video) is never a simple task. Audio and video use different frequency clocks when calculating timestamps, done with values that make little sense to those who aren’t dealing with the innards and logic of audio and video encoders. This may easily lead to miscalculations and bugs in timestamp setting

WebRTC discarding outgoing audio packets

This doesn’t really happen. Or at least WebRTC ignores this option altogether.

How do we know that? Besides looking at the code, we can look at the fields that we have in getstats for this. While we have discarded frames for incoming and outgoing video and discarded incoming audio packets, we don’t have anything of this kind for outgoing audio packets.

These packets are too small and “insignificant” to cause any dropping of them on the sender side. That’s at least the logic…

WebRTC discarding incoming video frames

Before we go into the reasons, let’s understand how video packets are handled in the media processing pipeline of WebRTC. This is partial at best, and specifically focused on what I am trying to convey here:

The above diagram shows the process that video packets go through once they are received, along with the metrics that get updated due to this processing:

  1. It starts with the video packets being Received from the network
  2. They then get Reordered as they get inserted into the jitter buffer. Here, the jitter buffer may discard packets. In the case of video packets though, don’t expect packetsDiscarded to be updated properly
  3. For video, we now construct frames, taking multiple packets and concatenating them into frames in Construct a frame. This also gives us the ability to count the framesReceived metric
  4. Once we have frames, WebRTC will go ahead and Decode them. Here, we end up counting framesDecoded and framesDropped
  5. Now that we have decoded frames, we can Play them back and indicate that in framesRendered

👉 The exact places where these metrics might be updated are a wee bit more nuanced. Consider the above just me flailing my hands in the air as an explanation.

This also hints that with video, there are multiple places where things can get dropped and discarded along the pipeline.

The above is another screenshot from testRTC. This time, indicating framesDropped. You can see how throughout the session, quite a few frames got dropped by WebRTC.

Let’s find the potential reasons for such dropped frames…

Latency, lip sync & bugs

Just like with incoming audio packets, we can get dropped packets and video frames for much the same reasons.

Latency and lip synchronization may cause the jitter buffer to discard video packets.

And bugs on the sender side can easily cause WebRTC to drop incoming packets here as well.

That said, with video, we have to look at a slightly bigger picture – that of a frame instead of that of a singular packet.

Not all packets of a frame are available

Assume you have a packet dropped. And that packet is part of a frame that is sent over a series of 7 packets. We had 1 packet drop that caused a frame drop, which in turn, caused another 6 packets to be useless to us since we can’t really decode them without the missing packet (we can to some extent, but we usually don’t these days).

Dependency on older frames

With video, unless we’re decoding a keyframe, the frame we need to decode requires a previous frame to be decoded. There are dependencies here since for the most part, we only encode and compress the differences across frames and not the full frame (that would be a keyframe).

What happens then if a frame we need for decoding a fresh frame we just received isn’t available? Here, all packets were received for this new frame, but the frame (and all its packets) will still get dropped. This will be reported in framesDropped.

Not enough CPU

We might not have enough CPU available to decode video. Video is CPU intensive, and if WebRTC understands that it won’t have time to decode the frame, it will simply drop it before decoding it.

But, it might also decode the frame, but then due to CPU issues, miss the time for playout, causing framesRendered not to increment.

WebRTC discarding outgoing video frames

With outgoing media, there is a different dictionary we need to look at in getstats – RTCOutboundRtpStreamStats:

Here, the relevant fields are framesSent and framesEncoded. We should strive to have these two equal to each other.

We know that WebRTC decided to discard frames here if framesEncoded is higher than framesSent. If this happens, then it is bad in a few levels:

  • Encoding video is a resource intensive process. If we took the effort to encode a frame and didn’t send it in the end, then we’ve wasted resources. To me this means something is awfully wrong with the implementation and it isn’t well balanced
  • Video frames are usually dependent on one another. Dropping a frame may lead to future frames that the receiver will be unable to decode without the frame that was dropped
  • Such failures are usually due to network or memory problems. These hint towards a deeper problem that is occurring with the device or with the way your application handles the resources available on the device

On the RTCIceCandidatePairStats dictionary, there’s also packetsDiscardedOnSend metric, which hints to when and why would we lose and discard packets and frames on the sender side:

Total number of packets for this candidate pair that have been discarded due to socket errors, i.e. a socket error occurred when handing the packets to the socket. This might happen due to various reasons, including full buffer or no available memory.

If you’re dropping video frames on the sender side (framesSent < framesEncoded), then in all likelihood the network buffer on the device is full, causing a send failure. Here you should check the resources available on the device – especially memory and CPU – or just understand the network traffic you are dealing with.
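
Here’s a minimal sketch of checking for this condition from the stats, assuming a browser-side RTCPeerConnection (both counters are cumulative, so sample them periodically):

```typescript
// A minimal sketch: detect frames dropped on the sending side by comparing
// framesEncoded with framesSent, and check packetsDiscardedOnSend on the
// nominated candidate pair for socket-level drops.
async function checkSenderSideDrops(pc: RTCPeerConnection) {
  const report = await pc.getStats();
  let framesEncoded = 0;
  let framesSent = 0;
  let packetsDiscardedOnSend = 0;

  report.forEach((stats) => {
    if (stats.type === 'outbound-rtp' && stats.kind === 'video') {
      framesEncoded += stats.framesEncoded ?? 0;
      framesSent += stats.framesSent ?? 0;
    }
    if (stats.type === 'candidate-pair' && stats.nominated) {
      packetsDiscardedOnSend += stats.packetsDiscardedOnSend ?? 0;
    }
  });

  return {
    // Anything above zero means we spent CPU encoding frames we never sent.
    framesDroppedBeforeSending: Math.max(0, framesEncoded - framesSent),
    packetsDiscardedOnSend,
  };
}
```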

Maintaining media quality in WebRTC

Media quality in WebRTC is a lot more than just dealing with bitrates or deciding what to do about packet losses. There are many aspects affecting media quality and they all do it dynamically throughout the session and in parallel to each other.

This time, we looked into why WebRTC discards media packets during calls. We’ve seen that there are many reasons for it.

To learn more about media processing and everything else related to WebRTC, check out these services:

The post Reasons for WebRTC to discard media packets appeared first on BlogGeek.me.

WebRTC simulcast – what is it and how is it used

bloggeek - Mon, 05/13/2024 - 12:30

What exactly is simulcast, how is it used in WebRTC and why is it a critical component in any SFU media server.

WebRTC simulcast is one of these things that is commonly used by WebRTC applications that have SFU media servers. If your media server doesn’t use simulcast – make sure to ask why and to understand the answer. And if it does, then you should know what it means exactly. Which is why we’re here now.

In this article, I want to explain what WebRTC simulcast is, when and how it is used AND some new advancements coming to simulcast.

A crash course on video quality and bitrate

Before we begin, we need to understand the concept of bitrate. In a WebRTC video session, the first thing to look at and understand is the bitrate used. Video encoding requires sending a lot of data over the network, and WebRTC tries to match the bitrate it sends to the available bandwidth of the network.

See how I switched between talking about sending data to bitrate to bandwidth? For me, sending data is what we are trying to do. Bitrate is the actual (or target) amount of data we’re aiming for, and bandwidth is what is available for us on the network (assume that bandwidth should always be the same or preferably even higher than the bitrate).

When it comes to audio, we’re mostly working with bitrates that are static and known in advance. They are also low compared to video bitrates, so we just don’t care as much. Which leaves us with video streams.

For video streams:

  • The higher the bitrate, the higher the quality (most of the time)
  • The higher the bitrate, the higher the CPU and memory needed to encode and decode the data

This means that what we want to do is use as little bitrate as possible to get the highest possible quality. We’re trying to reach for the stars first by deciding our desired bitrate, and then we start lowering due to the constraints of the real world. Here are a few reasons for this:

  • Our CPU is over-burdened, so we need to reduce the bitrate we encode or decode
  • The resolution of the video that ends up being displayed is going to be rather small, so there’s no point in investing too much in bitrate. The same logic can be applied to the camera
  • We can’t push through the network the bitrate we want, so we need to reduce it to fit the bandwidth available on the network

👉 If you want to learn more about this topic, then read this article on WebRTC video quality

SFU media servers and group video sessions

For video group sessions in WebRTC, we use SFU media servers. Not always, but most of the time. Why? Because SFUs route media – this ends up costing us less compared to MCUs and in many ways makes things more flexible for us on the viewer’s end.

The challenge though is that SFUs harbor a wee bit more complex logic and smarts than the alternatives and they also delegate a lot of the work to the clients themselves. A good SFU is one that has tight integration and optimization methods with the clients using it. And remember here that the implementation of the browser (Chrome) is optimized for Google Meet’s needs.

Simulcast was “invented” for SFUs. Let’s take a quick example to show what we mean here.

We have 4 people on a call. All connected to an SFU. Each participant is sending his video to the SFU, and the SFU routes that video to the other 3 participants in the call:

If everyone has a decent network, then we’re all happy. But what if D has poor network conditions on his downlink? Here are some assumptions for our scenario:

  • All participants can send 2Mbps of video data towards the SFU
  • A, B and C can receive up to 20Mbps in total on the downlink
  • D can receive only 1Mbps in total on the downlink

If we want everyone to be displayed at the same quality on D’s screen, we need to give each one of them ~330Kbps. That’s instead of 2Mbps. So… do we just reduce the sending bitrate of everyone down to 330Kbps to accommodate for user D? Or do we drop him out of the call altogether?

Notice how we can still send 2Mbps from D to the rest of the participants? That’s just the nature and dynamics of the network and capabilities we have in this example.

Here’s where simulcast comes in…

We’re going to engineer the solution so that each participant is going to create 3 separate bitstreams of their video data: 1150kbps, 600kbps and 250kbps, totalling 2Mbps. The exact numbers are less important than the concept itself, so please go with the flow here.

* Being lazy, I’ve denoted simulcast lines as dotted lines, indicating Simulcast instead of using a better notation like 1150/600/250.

Now that we do that, A, B and C get 1150Kbps video from everyone else and D receives the lower 250Kbps bitstreams (it can’t handle 1150kbps or 600kbps even for only one of the users without dropping one of the other video streams it is receiving altogether). Now each one is getting the most he can handle (or at the very least, closer to that than just lowering everyone down).
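
For reference, this is roughly what asking the browser for those three layers looks like from the application. The rid names and the 250/600/1150kbps targets simply mirror the example above – what actually gets encoded still depends on bandwidth estimation and the device’s CPU:

```typescript
// A minimal sketch: ask the browser to send three simulcast layers for a
// camera track. The rid names and bitrate targets mirror the example above.
function addSimulcastVideo(pc: RTCPeerConnection, cameraTrack: MediaStreamTrack): void {
  pc.addTransceiver(cameraTrack, {
    direction: 'sendonly',
    sendEncodings: [
      // Lowest layer: quarter resolution, targeting ~250kbps.
      { rid: 'low', maxBitrate: 250_000, scaleResolutionDownBy: 4 },
      // Middle layer: half resolution, targeting ~600kbps.
      { rid: 'mid', maxBitrate: 600_000, scaleResolutionDownBy: 2 },
      // Top layer: full resolution, targeting ~1150kbps.
      { rid: 'high', maxBitrate: 1_150_000, scaleResolutionDownBy: 1 },
    ],
  });
}

// Usage (hypothetical): after getUserMedia(), pass in the camera's video track.
// addSimulcastVideo(pc, stream.getVideoTracks()[0]);
```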

Media quality: LCD or BAB

I am going to use names that don’t necessarily exist. I am making them up here to explain the nature of simulcast a bit better.

What we’ve seen in the example above is how we move from LCD (Least Common Denominator) to BAB (Best Available Bandwidth).

We started with a naive implementation where the same video bitrate is being sent to everyone. So if there’s a hiccup somewhere along the session, everyone is going to be affected. When D had network issues, everyone had to lower their bitrate from 2Mbps down to 330Kbps… that’s quite a hit to media quality across the board for them all.

That’s our LCD – we’re going to need to accommodate the bitrate to the lowest common denominator of the available bandwidth we have across our meeting participants. And that sucks. Bigtime…

But then we went for BAB – we’re going to try and work with the best available bandwidth that each user is capable of receiving.

How did we do that? By asking the participants (nicely) to generate more than a single bitstream. Each bitstream has a different bitrate here, which gives the SFU the flexibility it needs to decide which bitrate to send to which user.

We use simulcast (or SVC, though not in this article) because there’s no equality in digital communications. Participants have different devices, they connect with different networks and they even see and focus on different things during the same meeting. Simulcast enables us to give different participants a different view of the meeting with varying degrees of quality based on the capabilities of each participant at any given moment AND based on each participant’s preference/desire.

How much flexibility and how high media quality we can reach is determined by the tools and optimizations we end up employing in our implementation. No two implementations of SFU with simulcast are exactly alike.

Client side = Simulcast; Server side = Adaptive bitrate

Simulcast as a concept and solution is about a client generating multiple streams so that a media server can use whichever of the streams it needs to send to other participants.

Video streaming had a similar(?) solution known as ABR – Adaptive Bitrate.

Here, the client sends a single media stream to the server and the server is the one that generates any number of streams in different bitrates as it sees fit. This makes sense when there are many viewers (thousands or more) and it can be useful to invest in server resources (these cost money to the vendor providing the service) for the given scenario.

Some use ABR as a term to simply say that the bitrate is variable in nature and adapts to the network. I use it to refer to server side adaptation, where there are multiple video streams generated (in advance or in realtime) and the server simply chooses the best to use per viewer.

For large scale live streaming broadcasts, you can start seeing solutions that incorporate ABR as a technology to transcode the stream to broadcast on the server and generate multiple bitrates with it. This can and is done sometimes in parallel to using simulcast from the client as well.

The way for me to compartmentalize and remember this? Simulcast is multiple bitrates generated by the client. ABR is multiple bitrates generated by the server.

👉 You can learn more about ABR vs simulcast or just about simulcast

Advantages and weaknesses of using simulcast in WebRTC

Simulcast is great, but it isn’t a catchall solution.

What simulcast does as a concept is to offload some of the work from the media server. Offloading here means that for the client it comes at an increase in CPU use and outgoing bandwidth required.

WebRTC simulcast advantages

Here are some great things that simulcast brings with it:

  • Reduces the costs of media servers drastically
    • By not needing to decode and encode media streams, media servers need way less CPU power
    • This means that scaling large deployments becomes easier and more feasible for a lot more use cases
  • Different layouts for each participant
    • Since each user ends up receiving multiple video streams (in different bitrates), the application is free to display a different layout for each participant
    • Other media servers that mix media would need to invest even more CPU to support something like “encoder per participant” to achieve this
  • Display participants’ video and other data in the same space
    • Again, since each participant video is separate from the others, it is simpler to place additional visual items in the same area
    • Mixing all videos into a single stream makes this harder and clunkier
WebRTC simulcast weaknesses

It isn’t all good though. There are weaknesses to the use of WebRTC simulcast:

  • Higher bandwidth use on uplink of users
    • Networks are asymmetric in bandwidth sometimes (think ADSL), and uplinks are usually lower in bandwidth than downlinks
    • Simulcast has a higher uplink requirement (1.3125x to be precise) than not using simulcast, which means that there are scenarios when using simulcast can actually lower quality if not done properly
  • Higher CPU use for user devices
    • Clients generate 2-3 media streams in different bitrates with simulcast
    • So they “invest” more in the encoding when it comes to CPU use
  • Higher system complexity
    • To really make use of simulcast in WebRTC, there should be a lot more synchronization between client and server code
    • That means higher complexity of the overall system
  • Dependency on client code
    • With other solutions, especially media mixing ones (see MCU), the clients might not even know they are in a group call
    • But when it comes to simulcast and group calling, clients have a huge role to play in making sure calls are of high quality (due to the complexity mentioned above)
Who decides on bitrates in WebRTC simulcast

There are usually two to three layers/streams when it comes to WebRTC simulcast. Each with a different bitrate, and from there, also with different resolutions, frame rates and quality. I am focusing on bitrate because for me, that’s the leading factor – everything else gets derived from it.

Which bitrates are we going to support and which ones get sent to whom are the most important questions for any SFU implementation that uses simulcast.

WebRTC by itself can’t make such decisions. It has its own default bitrates for simulcast, but that is all they are – defaults. I wouldn’t recommend that developers use these without understanding their implications (they’re likely not useful for the use case you have at hand).

The decision which bitrates to support in simulcast to begin with should take into consideration the possible display layouts of the videos on the viewers’ end. By knowing at what resolutions the videos get displayed we can try to better estimate the desired bitrates to use while using simulcast. Factor into it things like number of videos in the layout (so that you take total bitrates and available bandwidth into consideration), importance of videos on the display (lower priority streams can manage with lower frame rates and resolutions), etc.

Here’s the thing though:

  • The client is the one generating and encoding simulcast media streams. It knows best its own CPU and performance capabilities
  • The SFU media server knows best the estimated bandwidth in front of all viewers. It also knows what media streams and at what bitrates it has at its disposal when the time comes to send media to viewers
  • The viewer is the one that knows best how the video gets laid out on the display, along with its own CPU and performance capabilities
  • Oh, and the viewer may change the layout on the display throughout the call, changing what’s best to send to it

The end result is that the application in charge of it all needs to orchestrate the clients and the media servers in order to optimize the session for higher media quality, taking into consideration all the information. It also means that your application needs to somehow share this out-of-band information with the application session logic so decisions can be made. And this part is proprietary – it isn’t something that we have written as a standard or even a best practice.

Keyframes and switching costs in simulcast

With all this goodness, there’s an Achilles heel. One that stems from the way Google implemented simulcast in Chrome, but also from the realities of such a solution.

Here’s the thing: Whenever a viewer switches from one simulcast layer to another, there’s a change in the video stream that gets decoded. That change can only occur with a fresh keyframe on the layer that is being switched to, so that the video decoder will be able to decode the stream properly.

When there’s a need to generate a keyframe in simulcast, Chrome will automatically generate a keyframe across all simulcast layers. This isn’t a good thing, but it is what it is.

This also means that SFU media servers need to be conscious about this and not have viewers switch between the different layers all the time, limiting switches to the minimum necessary to maintain high video quality.
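Here is a conceptual sketch of how an SFU might guard a layer switch behind a keyframe. The types and class are hypothetical and simplified – they are not taken from any specific media server – but the logic mirrors what most SFUs do.

```typescript
// A conceptual sketch (not any specific SFU's API): switch a viewer to a new
// simulcast layer only once a keyframe arrives on that layer, so the viewer's
// decoder always has a valid reference frame to start from.
type LayerId = "low" | "mid" | "high";

interface RtpPacket {        // hypothetical, simplified packet view
  layer: LayerId;
  isKeyframe: boolean;
}

class ViewerForwarder {
  private currentLayer: LayerId = "low";
  private targetLayer: LayerId = "low";

  // Called by the application / bandwidth estimator, not on every packet.
  requestLayer(layer: LayerId) {
    this.targetLayer = layer;
    // A real SFU would also send a PLI/FIR upstream here to ask the sender
    // for a fresh keyframe on the target layer.
  }

  // Decide per incoming packet whether to forward it to this viewer.
  shouldForward(packet: RtpPacket): boolean {
    if (
      packet.layer === this.targetLayer &&
      this.targetLayer !== this.currentLayer &&
      packet.isKeyframe
    ) {
      this.currentLayer = this.targetLayer; // safe to switch now
    }
    return packet.layer === this.currentLayer;
  }
}
```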

Temporal scalability improves WebRTC simulcast

Using temporal scalability alongside simulcast in WebRTC gives us another level of flexibility.

In temporal scalability, the frames of a video stream are encoded in such a way that their dependency chain enables us to decode some of the frames and not others – something that is usually impossible in video compression. Chrome’s WebRTC implementation supports temporal scalability for VP8 with 2 such “layers”, so if you’re sending 30 frames per second, the SFU media server can decide to send either 30 or 15 FPS to participants (the 15 frames per second is roughly 60% of the bitrate of the 30 frames per second).

Think of it like multiplying your simulcast streams without an additional cost:

And yes, like everything else, this depends on the codec you use, the browser and the fact that some layers might not have enough frames per second to begin with (for example, the lower layer might only produce 10 or 15 frames per second and then temporal scalability might be useless).
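As a rough illustration, an SFU that parses the temporal layer id from the VP8 payload descriptor can filter packets per viewer along these lines. The packet shape here is a simplified assumption, not a real parser.

```typescript
// A conceptual sketch of temporal-layer filtering at the SFU: with VP8 and
// two temporal layers, forwarding only temporal layer 0 yields roughly half
// the frame rate (and ~60% of the bitrate). Real SFUs read the temporal layer
// id (TID) from the VP8 payload descriptor of each RTP packet.
interface ParsedVideoPacket {
  temporalLayerId: number; // 0 = base layer, 1 = enhancement layer
}

function shouldForwardToViewer(
  packet: ParsedVideoPacket,
  maxTemporalLayer: number // 0 => ~15fps, 1 => full 30fps in a 2-layer setup
): boolean {
  return packet.temporalLayerId <= maxTemporalLayer;
}
```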

When using simulcast, the more of these tools you use – and use well – the more you can increase the media quality you offer your users.

Decisions of highest layer bitrate in WebRTC simulcast

Simulcast in WebRTC gives us another level of flexibility. One that Daily explains nicely in the post where they call their solution adaptive bitrate.

Let’s assume we’re going for the classic 3 media streams in our WebRTC simulcast solution:

Remember our example from before? Our smallest bitrate (250kbps) and medium sized bitrate (600kbps) are “static” in nature. The video encoder in our browser is going to generate these at those target bitrates each and every time (assuming the CPU allows it and the bandwidth estimate is higher than the sum of the two).

That highest bitrate there isn’t really static. At least not by default. It will use as much bitrate as it needs, taking into consideration the CPU consumption and bandwidth estimation. Left to its own devices, this highest bitrate layer is going to be greedy in its resource consumption. It can also drop below the medium sized bitrate if there’s not enough CPU or bandwidth available, which defeats the point of it being the highest layer. This all leads us to what we need to do…

Like everything else that WebRTC does in the browser though, it needs to be managed and taken into account by the SFU media server. In this case, deciding what that highest layer bitrate should be at any given point in time.

Here are some questions to ask yourself when making that decision in your SFU:

  • Do you want the highest layer to have a static bitrate? (hint: no)
  • The participants who need to get this user’s video at the highest quality – what’s the highest bitrate / resolution that they can cope with based on their device and network conditions?
    • Do you need to limit the bitrate of this layer to accommodate for more of these participants?
    • Are you willing to move some of these users to the mid bitrate in order to increase the quality for the other participants who have better conditions?
  • Are you recording this stream?
    • If you are, do you need it at the highest possible quality?
    • Does it mean you can “sacrifice” some of your participants’ viewing quality to get a better recording out of this session?
    • Or is the recording fine with lower bitrates or quality?
  • I’ll finish off with a question about all the layers – which ones are actually used?
    • If some of the layers aren’t being sent to any of the users in the meeting, you can decide to suspend them, practically “changing” the simulcast configuration dynamically for that specific participant. It will come at a cost when you need to switch to a layer that isn’t currently being generated
    • And if we decided not to send a specific bitrate, does it mean the other bitrates can change as well to accommodate for the extra headroom we now have of bitrate and CPU available?

These questions don’t have a single simple answer. The answer to these will vary based on the strategy you wish to employ, the use case you have, the video layouts you support, the level of your engineers, the media server you start with, …

At the end of the day, your answers are just a set of heuristics, and being able to compare one to another is going to be a challenging task. Make sure you get this right (or right enough) for your needs.
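One practical tool for acting on these answers from the client side is RTCRtpSender.setParameters(). The sketch below caps the top layer and suspends an unused one; the layer indexes and bitrate values are assumptions for illustration – in a real system they would come from your SFU/application logic.

```typescript
// A minimal sketch of managing simulcast layers at runtime with
// setParameters(): cap the highest (greedy) layer and suspend a layer that
// nobody is currently subscribed to. Indexes and numbers are illustrative.
async function tuneSimulcastLayers(sender: RTCRtpSender) {
  const params = sender.getParameters();

  // Cap the top layer instead of letting it consume whatever the bandwidth
  // estimate allows.
  params.encodings[2].maxBitrate = 1_200_000;

  // Nobody is watching the middle layer right now? Stop encoding it to free
  // up CPU and uplink bandwidth (switching back later costs a keyframe).
  params.encodings[1].active = false;

  await sender.setParameters(params);
}
```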

WebRTC and multi-codec simulcast

This is something that we’re just starting to see now.

Up until recently, as a developer, you chose a codec, used simulcast on it and that was about it. The available alternatives were mostly VP8 and H.264. These days? With the introduction of the AV1 video codec, a new idea started cropping up:

  • AV1 is a better codec when it comes to media quality per bitrate compared to the other codecs available
  • But AV1 also takes up more CPU and there’s almost no hardware acceleration available in the market
  • At very low bitrates, using AV1 is possible, since it won’t take up much CPU for that
  • But using it at higher bitrates isn’t possible in most scenarios

This is where the diagram above comes in. Instead of using the same video codec for every layer in a simulcast session for WebRTC, why not use multiple codecs? Have AV1 used on the lowest bitrate and then another codec, say VP8 or VP9, on the higher bitrates.

This way, the machine’s CPU is capable of encoding the data, and the resulting media quality of the lowest bitrate in there is now higher than it would have been if we used a single codec for simulcast.

At the time of writing, this hasn’t been implemented in a workable fashion just yet.

In a way, this is our future for the coming years, until AV1 becomes popular enough and its use is made possible by commonplace hardware acceleration or better CPUs on devices.

A word about SVC… and where to learn more

There are alternatives to using WebRTC simulcast:

  1. Deciding NOT to use simulcast but still using an SFU, moving towards an LCD (least common denominator) approach to media quality
  2. Not using SFU or media routing, going for mesh or mixing solutions
  3. Replacing simulcast with SVC

SVC stands for Scalable Video Coding. At its heart, it is quite similar to simulcast, just done at the codec level. The video encoder itself generates a bitstream that can be peeled like an onion into multiple bitrates. This makes for a solution that is less wasteful than simulcast in bitrate and CPU. The downside is an increase in complexity and the lack of hardware encoders and decoders that know how to handle SVC.
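For completeness, here is roughly what requesting SVC from the browser looks like today, using the scalabilityMode field from the WebRTC-SVC API. Support depends on the browser and codec (VP9 or AV1 in Chromium-based browsers for spatial layers), and older TypeScript DOM typings may not know this field yet; the mode string and bitrate are illustrative choices.

```typescript
// A sketch of the SVC alternative: a single encoded stream carrying spatial
// and temporal layers ("L3T3") instead of multiple simulcast streams.
function publishWithSvc(pc: RTCPeerConnection, track: MediaStreamTrack) {
  pc.addTransceiver(track, {
    direction: "sendonly",
    sendEncodings: [{ scalabilityMode: "L3T3", maxBitrate: 1_500_000 }],
  });
}
```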

There are video meeting solutions out there that use SVC. They can usually also use WebRTC simulcast – simply because SVC gets added later as an additional tool for further optimization and flexibility.

To learn more about simulcast, SVC and everything related to WebRTC, check out these services:

The post WebRTC simulcast – what is it and how is it used appeared first on BlogGeek.me.

Probing WebRTC Bandwidth Probing – why and how in gcc

webrtchacks - Tue, 05/07/2024 - 14:56

Maximizing stream quality on an imperfect network in real-time is a delicate balancing act. If you send too much information then you will cause congestion and packet loss. If you send too little then your video quality (or audio) will look like garbage. But how much can you send? One of the techniques used to find […]

The post Probing WebRTC Bandwidth Probing – why and how in gcc appeared first on webrtcHacks.

Does WebRTC need a change in governance?

bloggeek - Mon, 04/29/2024 - 12:30

Is it time to change the governance of WebRTC in order to keep it growing and flourishing?

WebRTC started life in 2011 or 2012. Depending on when you start counting.

That’s around 13 years now. Time to put things on the table – we might need a change in governance. A different way of thinking about WebRTC.

The concept of WebRTC unbundling

https://www.linkedin.com/feed/update/urn:li:activity:7178742753281929216/

I published the above on LinkedIn last month.

It was a culmination of thoughts I’ve been having for the past several years.

You can pinpoint the first time I made that distinction in 2020 while coining the term WebRTC unbundling.

The notion was that WebRTC is being broken down into smaller pieces and developers are given more leeway and control over what WebRTC does (=a good thing). The result of all this is the ability to differentiate further, but also that the baseline of what WebRTC is gets farther behind what good media quality means.

There’s the popular open source implementation for WebRTC known as libwebrtc. It is maintained and governed by Google. When Google can enact its strategy by implementing their technologies and IP outside and around libwebrtc instead of inside libwebrtc – why wouldn’t they?

Google runs a business. They have commercial objectives. Differentiating inside libwebrtc, where competitors who use it could pick up the improvements to outdo Google, would be a poor decision to make. Giving competitors using proprietary technology the source code of libwebrtc to copy from and improve upon without contributing back isn’t a smart move either.

This means cutting edge technologies and research is now done mostly outside of libwebrtc (and WebRTC) as much as possible. And the unbundling of WebRTC that started some 4 years ago is now starting to show.

Before we dive into the details

Something I always explain to people new to WebRTC is that WebRTC isn’t a single thing. When someone refers to it, he either thinks of WebRTC as a standard or WebRTC as an open source project:

The above is one of the first slides I’ve ever created about WebRTC.

WebRTC is an open standard. It is being specified by the IETF and W3C. The IETF deals with the network side while the W3C is all about the browser interface (JavaScript APIs).

WebRTC is also viewed as an open source project. That’s actually libwebrtc… the most common and popular implementation of WebRTC which has been created and is maintained by Google.

So remember – when people say WebRTC they can refer to it as either a standard or a package or both at the same time.

What we will do in this article from here on, is jump between these two definitions and see where we are with them today. We will start with the libwebrtc open source library.

The power and importance of libwebrtc

Here’s what I shared in my RTC@Scale 2024 session:

In WebRTC, libwebrtc is the most important library. There are others, but this is by far the most important. Why?

  • It is integrated and used by ALL modern browsers (Chrome, Edge, Firefox and Safari)
  • So when you interact with any browser in your WebRTC application, you end up working against libwebrtc
  • Many mobile applications decided to use libwebrtc natively inside the app. Why? Because it is good enough

The end result is that… well… It is the most important WebRTC library out there.

Before libwebrtc, what we had were lame open source libraries that implemented media engines. All the good options were commercial ones. In fact, libwebrtc (and WebRTC) started with Google acquiring a company called GIPS, who had a great commercial media engine that they licensed to companies. I know because the company I worked at licensed it, and the moment they got acquired, we got a flood of requests and questions about finding an alternative.

What WebRTC did was make media engines a commodity of sorts. A new era where high quality media can be had from open source. This also meant that the commercial media engine market died at the same time.

This new development of pushing innovations and improvements in the media engine pipeline outside of libwebrtc is what is going to take that advantage away from open source and libwebrtc.

More on that, a bit later. But next, why don’t we look at the standardization of WebRTC?

WebRTC standardization efforts

The standardization of WebRTC was split between two different organizations: the W3C and the IETF. They were always semi-aligned.

The IETF was in charge of what goes on in the network. What a WebRTC session looks like on the wire. For WebRTC, it uses stuff that we all considered quite modern in 2012 – light years ago in tech-time. The IETF Working Group working on WebRTC, RTCWEB, concluded its work and closed down.

The W3C was/is in charge of the API layer in the browser. The JavaScript interface, mostly revolving around the RTCPeerConnection. And yes, they are trying to wrap this one up and call it a day.

In many ways, what brought WebRTC to what it is today is the W3C – the part focused on the interface in the browser that developers use. That is because the browser is our window to the internet (and in many ways to the world as well). And this window includes the ability to use WebRTC through the APIs specified by the W3C.

The catch here is that the standardization done by the W3C for WebRTC is carried out almost solely by the browser vendors themselves. There aren’t any (or not enough) web developers sitting at the table. The ones who need and end up using the WebRTC APIs have no real voice in the WebRTC spec itself. The cooks in the kitchen are far removed from the restaurant diners who need to enjoy their dish.

And meanwhile, the cooks have different opinions and directions as well:

  • Chrome protects its interests, focusing mainly on Google Meet’s requirements. This is what drives many of the contributions Google has been making to the W3C on the spec
  • The rest? Mostly trying to block any forward movement so they won’t have to add changes to their own browser implementation. This is especially true for Safari and Firefox

So what do we end up with?

Google, trying to add things it needs to the WebRTC specification to solve their product needs

Other browser vendors, trying to delay Google a bit…

And developers who aren’t part of the game at all and are happy with the leftovers from what Google needs.

Vendors differentiating outside of (lib)WebRTC

The whole WebRTC ecosystem enjoys the work Google does in libWebRTC. Vendors do so in various ways:

  1. Directly by taking libWebRTC codebase, making it their own and compiling it into native applications
  2. Indirectly by having WebRTC run inside web browsers, and figuring out any bugs and issues they bump into
  3. By carving bits and pieces of it to use in their own app (like tearing the echo canceller or other algorithms from libWebRTC and using it elsewhere)

The first alternative is the most interesting one here.

When vendors do that, they usually end up forking the original codebase and modifying bits and pieces of it to fit their own needs. These might be minor bug fixes for edge cases or they may be full blown optimizations (like what Meta has done with their new MLow codec and Beryl echo cancellation algorithm – there were other areas as well. You’ll find them in the RTC@Scale event summary).

Video API vendors are no different. They usually take libWebRTC and compile it as part of their own mobile SDKs. Again, with likely changes in the code. They also get to see and work with a multitude of customers, each with its own unique requirements. In a way, they see a LOT of the market. Having these insights and understanding is great. Passing it to the libWebRTC team can be even better. These Video API vendors could be a great aggregator of customer insights…

Then there’s the fact that not many end up contributing back what they’ve done to libWebRTC. And even that comes with a whole set of reasons why:

  1. Assuming (rightly or wrongly) that these changes made are unique, proprietary, a competitive advantage – you name it
  2. Being afraid of the legal implications of doing so (exposure or whatever)
  3. Too much fuss to do

If you ask me, (1) is just bad manners – you get something for free from another vendor you might even be competing directly with. The least you can do is to share and contribute back, so that you have a level playing field at that low level of the stack.

Looking at (2) means someone needs to sit and talk to the legal team at your company. On one hand, you make use of open source and on the other you’re not giving back anything. I am not even sure if that reduces your exposure in any way. I am not a lawyer, but I do see the problem in this free lunch approach of the industry.

That third one is a big issue. And partly due to the fault of Google. They don’t make it easy enough to contribute back to the codebase. I can easily understand the reasoning – with billions of Chrome installations, having a no-name developer with a weird github alias from *somewhere* in the globe trying to push a piece of arcane/mundane code into libWebRTC that ends up in Chrome is darn dangerous. But the current situation seems almost insufferable.

I just don’t know who’s to blame here – companies who are just too lazy to contribute back and take the hoops required to get there or Google, for adding more blockers and hoops along their way.

Is standardization moving to the next shiny thing(s)?

There are two separate routes in web browsers that are shaping up to displace WebRTC: WebTransport + WebCodecs + WebAssembly, and MoQ (Media over QUIC)

WebTransport + WebCodecs + WebAssembly

This trio is the unbundling of WebRTC. Taking it and breaking it into smaller components that developers can’t implement on their own and must get from the browser – these are WebTransport and WebCodecs. And adding the glue to them so that developers can cobble up the missing pieces however they feel like it – that’s the WebAssembly piece.

Vendors are already using WebAssembly to intervene with the WebRTC media processing pipeline to differentiate and improve on the user experience in various ways (noise suppression and background replacement being the main examples).

The next step is to skip WebRTC altogether:

  • Use WebTransport for sending media over the network
  • WebCodecs are there to encode and decode audio and video efficiently
  • WebAssembly for the rest (packet loss, retransmission logic, echo cancellation, etc)

Don’t believe me? Zoom is doing almost that. They are using the WebRTC data channel as transport, and use WebCodecs and WebAssembly for the rest of it. Switching to WebTransport will likely happen for Zoom once it is ubiquitous across browsers (and offers solid performance compared to the data channel in WebRTC).
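To give a feel for what such a pipeline looks like, here is a rough, error-handling-free sketch: WebCodecs encodes camera frames and WebTransport datagrams carry them, with everything WebRTC normally does for you (packetization, retransmissions, jitter buffers, echo cancellation) left to be built on top – typically in WebAssembly. The server URL is a placeholder, and MediaStreamTrackProcessor is currently Chromium-only, so extra type definitions may be needed.

```typescript
// A rough sketch of the "skip WebRTC" pipeline: WebCodecs + WebTransport.
async function sendCameraOverWebTransport() {
  const wt = new WebTransport("https://media.example.com/session"); // placeholder URL
  await wt.ready;
  const writer = wt.datagrams.writable.getWriter();

  const encoder = new VideoEncoder({
    output: async (chunk) => {
      // Real applications must split chunks into MTU-sized datagrams and add
      // their own sequence numbers / timestamps; this just ships them raw.
      const buf = new Uint8Array(chunk.byteLength);
      chunk.copyTo(buf);
      await writer.write(buf);
    },
    error: (e) => console.error("encoder error", e),
  });
  encoder.configure({ codec: "vp8", width: 640, height: 480, bitrate: 600_000 });

  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const processor = new MediaStreamTrackProcessor({ track: stream.getVideoTracks()[0] });
  const reader = processor.readable.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done || !value) break;
    encoder.encode(value);
    value.close();
  }
}
```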

A new shiny toy for developers? Definitely.

Where will we see it first? In live streaming. I’ve written about it when discussing WHIP and WHEP, calling it the 3 horsemen.

MoQ (Media over QUIC)

The next big thing is likely to be MoQ.

WebTransport makes use of QUIC as its own transport. Around 5 years ago, I thought that QUIC could be a really good solution to replace WebRTC’s transport altogether. And it now has an official name – MoQ.

MoQ is about doing to RTP what WebTransport does to HTTP.

WebTransport takes QUIC and uses it as a modernized transport for web browsers, replacing HTTP and WebSocket.

MoQ takes QUIC and uses it as modernized media streaming for web browsers, replacing HLS and DASH.

There’s an overview for MoQ on the IETF website. Here’s the best part of it, directly from this post:

It includes a single protocol for sending and receiving high-quality media (including audio, video, and timed metadata, such as closed captions and cue points) in a way that provides ultra low latency for the end user.

If that sounds like WebRTC to you, then you’re almost correct. It is why many are going to see it (and use it) as a WebRTC alternative once it gets standardized and implemented by web browsers.

The main differences?

  • The timed metadata piece, which WebRTC sorely missed for many years
  • No P2P capability. Sacrificed for improved NAT traversal (by relying on QUIC and servers)
  • The definition of media relays (servers) along with their operation

While this is targeted at live streaming services, this can easily trickle into video conferencing.

Just like WebRTC was designed and built for video conferencing, but later adopted by live streaming services – the opposite can and is likely to happen: MoQ is being designed and built first and foremost for live streaming and it will be adopted and used by video conferencing services as well.

Would Google be interested in WebRTC enough? Maybe it would venture to use WebTransport + WebCodecs + WebAssembly instead. Or just go for MoQ and consolidate its protocols across services (think YouTube + Google Meet). What would happen to WebRTC if that would take place?

Who contributes to libwebrtc?

Here’s what I showed at RTC@Scale:

Let’s unpack this a bit.

The bars show the number of commits on a yearly basis. We see the numbers dwindling and winding down just as the use of WebRTC skyrockets (the red line) due to the pandemic. 2024 is likely to be even lower in terms of commits.

The greenish colored bars are Google’s contributions to libwebrtc. The blue? All the rest of the industry who make money using WebRTC – not all of them mind you – just those that contribute back (there are many others who never contribute back). In effect, Google has been sponsoring the rest of the industry, which cannot make Google happy.

Why is that?

Why do so few contributions from outside of Google end up in libwebrtc?

I guess there are two reasons here:

  1. Google doesn’t make it easy to contribute. In the end, libwebrtc gets embedded into Chrome which goes to billions of users every month with a new release. Not knowing what got integrated (malware or patent-encumbered code for example) is a real issue. Having insecure or not thoroughly tested code is also unacceptable at this scale
  2. Laziness of those who use libwebrtc but never contribute back
    • In large corporations, the developers need to “fight” with the legal teams to contribute code back (the excuses are usually around liability and protecting IP)
    • Smaller companies can’t be bothered with the friction that Google adds to the process – or just don’t want to spend the needed time
    • Not wanting to make your competitor’s product better by contributing
    • Struggling with the server side parts of WebRTC that in the end are quite tightly coupled with libWebRTC on the client. Google Meet undoubtedly delivers the best experience because the client side is designed for its needs

Many developers the world over enjoy the fruits of libwebrtc, but most aren’t willing to contribute back. This is true for both individual engineers as well as companies. Google even gave up on being frustrated with this and resorts to solving their own issues these days. They probably have a very good understanding of the overall usage in Chrome where Google Meet remains the dominant user.

On the one hand, Google isn’t making this easy. On the other hand, companies are lazy or protective of their own forked libwebrtc code to never contribute it back.

Can we save libwebrtc & WebRTC?

It is time to rethink WebRTC’s future.

For libwebrtc, we might need some other form of governance. Have more of the bigger vendors pitch in with the engineering effort itself. Meta, Microsoft and a few others who rely heavily on libwebrtc need to step up to that responsibility (the W3C Working Group is not where this kind of discussion happens) while Google needs to let go a bit. I have no clue how things are done in the world of Linux and I am sure libwebrtc isn’t big enough or important enough for that. But I do believe that something can be done here. At the end of the day it will require taking some of the maintenance cost off Google.

Just like third party libraries such as libopus and dav1d (an AV1 decoder) are embedded into Chrome as part of libwebrtc, there is no real reason why libwebrtc itself can’t end up being treated the same way.

For WebRTC standardization, it is time to ask – is it finished, or are there more things needed?

Do we want to progress and modernize it further or are we happy with it as is?

Should we “migrate” it towards MoQ or a similar approach?

In the W3C, do we need to get more people involved? The web developers themselves maybe? They need to be listened to and made part of the process.

Will the above happen? Likely not.

The post Does WebRTC need a change in governance? appeared first on BlogGeek.me.

RTC@Scale 2024 – an event summary

bloggeek - Mon, 04/08/2024 - 12:30

RTC@Scale is Facebook’s virtual WebRTC event, covering current and future topics. Here’s the summary for RTC@Scale 2024 so you can pick and choose the relevant ones for you.

WebRTC Insights is a subscription service I have been running with Philipp Hancke for the past three years. The purpose of it is to make it easier for developers to get a grip on WebRTC and all of the changes happening in the code and browsers – to keep you up to date so you can focus on what you do best – build awesome applications.

We got into a kind of a flow:

  • Once every two weeks we finalize and publish a newsletter issue
  • Once a month we record a video summarizing libwebrtc release notes (older ones can be found on this YouTube playlist)

Oh – and we’re covering important events somewhat separately. Last month, a week after Meta’s RTC@Scale event took place, Philipp sat down and wrote a lengthy summary of the key takeaways from all the sessions, which we distributed to our WebRTC Insights subscribers.

As a community service (and a kind of a promotion for WebRTC Insights), we are now opening it up to everyone in this article 😎

Why this issue?

Meta ran their rtc@scale event for the third time. Here’s what we published last year and in 2022. This year was “slightly” different for us:

  1. Philipp was in-between jobs. Today is his first day at Meta and this was the reason he got a notebook
  2. Tsahi was a speaker at rtc@scale

While you can say we’re both biased on this one, we will still be offering an event summary here for you. And we will be doing it as objectively as we can.

Our focus for this summary is what we learned or what it means for folks developing with WebRTC. Once again, the majority of speakers were from Meta. At times they crossed the line from “is this generally useful” into the realm of “Meta specific”, but most of the talks provide value.

Writing up these notes takes a considerable amount of time, but is worth it (we know – we’ve done this before). You can find the list of speakers and topics on the conference website, the playlist of the videos can be found here (there’s also a 6+ hours long session there that includes all the Q&As). You can also just scroll down below for our summary.

Our top picks

Our top picks:

  • “Improving International Calls” since it is quite applicable to WebRTC
  • “Improving Video Quality for RTC” since you can learn quite a bit about AV1
  • “Enhanced RTC Network Resiliency with Long-Term-Reference and Reed Solomon code” since you can learn about FEC for video (LTR is not in libWebRTC currently)
  • “Machine Learning based Bandwidth Estimation and Congestion Control for RTC” since BWE is crucial to quality.

We find these most applicable to how you deal with WebRTC in general, even outside of Meta.

General thoughts (TL;DR)
  • Meta is taking the route of most large vendors who do millions of minutes a day
  • It is gutting out WebRTC in the places that are most meaningful to it, replacing them with their own proprietary technology
    • Experiences in native applications are being prioritized over browser ones, and the browser implementation of WebRTC is kept as a fallback and interoperability mechanism
    • Smaller vendors will not be able to play this game across all fronts and will need to settle for the vanilla quality and experience given by WebRTC
    • Sadly, this may lead to WebRTC’s demise within a few years’ time
  • Meta can take this approach because the majority of their calls take place in mobile native applications, so they are less reliant and dependent on the browser
    • Other large vendors are taking a similar route
    • Even Google did that with Duo and likely is doing similar server-side things with Meet
SESSION 1

Li-Tal Mashiach, Meta / Host Welcome

(4 minutes)

Watch if you: need a second opinion on what sessions to watch

Key insights:

  • The pandemic is over and Meta is still seeing growth. That said, no numbers were shared around usage
Nitin Khandelwal, Meta / Keynote: From Codec to Connection

(13 minutes)

Watch if you: are a product person

Key insights:

  • Great user stories with a very personal motivation
  • Meta is all about “Connection” and “Presence” and RTC is the technical vehicle for creating “Presence when People are apart”
  • Large group calling is first mentioned for collaboration and only then for social interactions but we wonder why “joining ongoing group calls at any time” is being specifically mentioned as a feature
  • Codec avatars and the Metaverse are mentioned here, but aren’t discussed in any of the talks, which would have been nice to have as well
  • Interoperability and standards are called out as an absolute requirement which ties in with the recent WhatsApp announcement
Sriram Srinivasa + Hoang Do, Meta / Revamping Audio Quality for RTC Part 1: Beryl Echo Cancellation

(20 minutes)

Watch if you: are an engineer working on audio and enjoyed last year’s session

Key Insights:

  • Meta implemented a new proprietary AEC called Beryl to replace the one that WebRTC uses by default. This session explains the motivation, technical details and performance results of Beryl
  • The audio pipeline diagram at 1:10 remains great and gives context for this year’s enhancements which are in AEC and a low-bitrate audio codec:
  • At 2:50 we get a good summary of what “AI” can do in this area. Unsurprisingly this depends a lot on how much computational effort can be spent on the device
  • Meta’s Beryl is for more general usage and aims to be a replacement for WebRTC’s AEC3 (on desktop) and AECM (on mobile). At 4:00 we get a proper definition of acoustic echo as a block diagram. Hardware AEC is noted as not effective on a large number of devices and does not support advanced features like stereo/spatial audio anyway
  • At 06:00 the Beryl part gets kicked off with a hat-tip to the WebRTC echo cancellation and at 7:50 another block diagram. One of the key features is that Beryl is one AEC working in two modes, with a “lite” mode for low-powered devices. The increase in quality compared to WebRTC comes at the expense of 7-10% more CPU being used:
  • At 09:00 we get an intro to the different subcomponents of AEC, delay estimation, linear echo cancellation (AEC) and “leftover” echo suppression (AES)
  • At 13:30 come the learnings from implementing the algorithms, a demo at 16:30 and an apples-to-apples comparison with libWebRTC’s AEC (which should be relatively fair since the rest of the pipeline is the same) showing a 30% increase in quality for a number of scenarios
  • This is a nice alternative summary if you still need convincing to watch the video
Jatin Kumar + Bikash Agarwalla, Meta / Revamping Audio Quality for RTC Part 2: MLow Audio Codec

(17 minutes)

Blog post: we hope there will be one!

Watch if you: are an engineer working on audio

Key Insights:

  • Meta implemented a new proprietary audio codec called MLow to improve upon and replace Opus within its applications
  • We start (if you skip the somewhat repeated intro) at 2:30 with the already familiar audio pipeline block diagram and a motivation for a new codec including the competitive landscape. Meta aims to provide good quality even on low-end devices
  • At 4:30 we get a good overview of the requirements. Fast integration by reusing the Opus API is an interesting one. ML/AI would be nice to use but would increase complexity in ways which lead to worse overall quality:
  • At 5:50 we get an overview of how the new codec works at a high level followed by the approach taken to develop the codec at 8:15 which is interesting because you don’t hear about the compromise between “move fast when trying things” and “be extremely performant” very often
  • At 9:30 we get some insight into how the evaluation was done using diverse and representative input and the actual crowdsourced listening tests (which are a lot of effort and are therefore expensive) at 11:30. Tools like VISQOL and POLQA are used for regression testing. 1.5 years of development time sounds quite fast!
  • At 13:00 we got a demo. We wonder which Opus version was used for comparison due to the recent 1.5 improvements there which promise improvements in the same low-bandwidth area
  • MLow can offer comparable quality to 25kbps opus at 18kbps but you might not care if you have more than 16kbps available since both codecs show very similar POLQA scores at that bitrate:
  • At 15:40 we get production results which show improvements (which are not quantified in this talk). Improvements in video quality are a bit surprising, we would not spend more bits on video in low-bitrate scenarios
Yi Zhang + Saish Gersappa, Meta / Improving International Calls

(19 minutes)

Watch if you: are looking for architecture insights also applicable to WebRTC

Key Insights:

  • Meta details how they are moving to a more decentralized architecture globally to make their calling experience more robust
  • 20% of Whatsapp calls are international, half a billion a day and “bad quality” is 20% more likely on those calls due to the more complex technical challenges which are clearly spelled out on the slide at 2:00 with a good explanation of how network issues are visible to the user
  • At 3:10 we get a very good introduction to the basics of how VoIP works. What Whatsapp calls a relay is slightly different from a TURN server since their “relay” is also used for multiparty calls. Being more than a TURN server allows the relay to do a bit more, in particular since it can decrypt and handle RTCP feedback
  • At 4:20 we get a good discussion of what is sometimes known as the “USPS problem” –  it is very rude to make the sender retransmit a packet that *you* lost (from a 2016 Twitter conversation)
    • A packet/NACK cache is an essential component of SFUs; we consider answering the NACK from the cache, rather than forwarding it upstream, to be the norm (see the sketch after this list)
    • In cases of downstream packet loss it reduces the error correction time by half and makes the retransmission more effective
    • Notably this is for audio where Meta is known to leverage libWebRTC audio nack support in Messenger that is not enabled by default there (Google Meet enables it as well)
  • At 5:40 the relay is shown to be “smart” about upstream loss as well since it can detect the loss (i.e. a gap in the RTP sequence number) and proactively send a NACK, saving one RTT. This is followed by a summary on other things the relay can do such as duplicating packets (which is an alternative to RED for audio)
  • At 6:30 we get an idea how these basics apply to international calls which generally have a longer RTT (which makes the NACK handling more important)
  • At 8:00 we get into the new architecture called “cross relay routing” which is essentially a distributed or cascaded SFU (see e.g. the Jitsi approach from 2018 or the Vidyo talk from 2017)
    • This keeps the RTT to the NACK handling low (for downstream packet loss to the level of local calls) which improves quality and also utilizes Meta’s networking backbone which has lower packet loss than the general internet
    • They also have higher bandwidth so one can do more redundancy and duplication
    • At Whatsapp scale this creates the problem of picking the right relays which is done by looking at latencies. This is a tricky problem, it took Jitsi from 2018 until 2022 to get the desired results
  • At 11:00 (or 13:00) this gets expanded to group calls by using an architecture that starts with the centralized relay and extends it to a central router that only forwards the media packets combined with RTCP-terminating edge relays
    • Some decisions like bandwidth estimation are delegated to the local relay while some decisions, in particular related to selective forwarding (e.g. active speaker determination which influences bandwidth allocation, see last year’s talk) are run on the central relay which has a complete view of the call
    • Simulcast and in particular temporal layer dropping is surprising to see only in the central relay, it should be done in the edge relays as well to adapt for short-term bandwidth restrictions
    • Our opinion is that over time, Meta would be moving most of these decisions from the central relay to the local relays, distributing the logic further and closer to the edge
  • At 16:40 we get a glimpse into the results. Unsurprisingly things work better with faster feedback! Putting servers closer to the users is an old wisdom but one of the most effective ways to improve the quality. The lesson of using dedicated networks applies not only to Meta’s backbone but also the one used by the big cloud providers. This quality increase is paid by increased network cost however
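For reference, the packet/NACK cache mentioned above boils down to something like the following conceptual sketch – simplified types, not any particular SFU’s code.

```typescript
// A conceptual NACK cache: the relay keeps the last N packets per sender so
// it can answer a NACK locally instead of forwarding it upstream, roughly
// halving the recovery time for downstream loss.
class PacketCache {
  private cache = new Map<number, Uint8Array>(); // RTP sequence number -> packet
  constructor(private readonly maxEntries = 512) {}

  store(seq: number, packet: Uint8Array) {
    this.cache.set(seq, packet);
    if (this.cache.size > this.maxEntries) {
      // Map iterates in insertion order, so the first key is the oldest entry.
      const oldest = this.cache.keys().next().value as number;
      this.cache.delete(oldest);
    }
  }

  // On NACK: return the cached packet for retransmission, or undefined if we
  // never had it (only then would forwarding the NACK upstream make sense).
  lookup(seq: number): Uint8Array | undefined {
    return this.cache.get(seq);
  }
}
```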
First Q&A with Speakers

https://www.youtube.com/live/dv-iEozS9H4?feature=shared&t=5821 (25 minutes)

Watch if: you found any of the sessions this covers interesting

Key Insights:

  • Quite a few great questions
  • One thing that stood out was the question whether NACK for audio helps vs FEC and the answer is “yes”, because they provide the full quality when the RTT is low. What to use in different situations depends on the conditions. Which is a sentiment that keeps coming up
SESSION 2

Shyam Sadhwani, Meta / Improving Video Quality for RTC

(22 minutes)

Blog post: https://engineering.fb.com/2024/03/20/video-engineering/mobile-rtc-video-av1-hd/

Watch if: you are thinking of adopting AV1 or trying to improve video quality

Key Insights:

  • Meta’s overview of the work and effort put into improving their video quality, and the route they took, especially with AV1 – the tradeoffs made when adopting it
  • “Why is the video quality of RTC not as great as Netflix” is a good question to ask, followed by a history of video encoding since DVDs came out in 1997. The answer is somewhat obvious from the constraints RTC operates under (shown at 2:00) 
  • At 3:20 we start with a histogram of the bandwidth estimation distribution seen by Meta. “Poor calls”, which are below 300kbps (for audio and video, including RTP overhead) have about 200kbps for the video target bitrate. Choosing a more efficient video codec like AV1 is one of the most effective knobs here (and we knew Meta was taking a route after last year’s talk). The bandwidth distribution Meta sees is shown below:
  • While AV1 is largely not there yet in hardware encoders, the slides at 06:00 explain why one actually wants software encoders; they provide better quality at the target bitrates used by RTC which is something we have seen in Chromium’s decision to use software encoding at lower resolutions a while back
  • At 7:00 we get a demo comparison which of course is affected by re-encoding the demo with another codec but the quality improvement of AV1 is noticeable, in particular for the background. AV1 gives 30% lower bitrate compared to H.264, even more for screen sharing due to screen content coding tools
  • Quite notably the 600kbps binary size increase caused by AV1 is a concern. WebRTC in Chrome was somewhat lucky in that regard since Chrome already had to include AV1 support for web video decoding
  • Multiple codecs get negotiated through SDP and then the switch between them happens on the fly (a small codec-preference sketch follows this list). From the blog post that is not happening through the more recent APIs available to web browsers though
  • Originally a video quality score based on encoding bitrate, frame rate and quantization parameter was used (10:30) but the latter is not comparable between AV1 and H.264 so the team came up with a way to generate a peak signal to noise ratio like metric that was used for comparison. This allowed a controlled rollout with measurable improvements
  • High end networks (with an available bitrate above 800kbps) also benefit from AV1 as we can see starting at 12:30. At least on mobile devices 1080p resolution does not provide perceived advantages over 720p
  • “Isn’t it just a config change to raise max bitrate” is an excellent question asked at 13:45 and the answer is obviously “no” as this caused issues ranging from robotic voice to congestion. In particular annoying is constantly switching between high-quality video and low-quality which is perceived negatively (take this into account when switching spatial layers in SFUs). At high bitrates (2.5mbps and up) it makes a lot of sense to do 2-3x audio duplication (or redundancy) since audio quality matters more
  • Mobile applications have the advantage of taking into account the battery level and conditionally enable AV1 which is, for privacy reasons, not available in the browser
  • The talk gets wrapped up with a recap of the benefits of AV1 both in low-end (at 18:00) and high-end (at 19:10)
  • And we even got a blog post!
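As a hedged illustration of the negotiation side only, here is how a web application can prefer AV1 (with the other codecs as fallback) using setCodecPreferences. This just orders the offer; the on-the-fly switching logic described in the talk is Meta’s own and not shown here.

```typescript
// Prefer AV1 in the SDP offer, keeping the remaining codecs as fallbacks.
function preferAv1(transceiver: RTCRtpTransceiver) {
  const codecs = RTCRtpReceiver.getCapabilities("video")?.codecs ?? [];
  const av1First = [
    ...codecs.filter((c) => c.mimeType === "video/AV1"),
    ...codecs.filter((c) => c.mimeType !== "video/AV1"),
  ];
  transceiver.setCodecPreferences(av1First);
}
```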
Thomas Davies, Visionular / AV1 at the coalface: challenges for delivering a next-generation codec for RTC

(19 minutes)

Watch if: you are interested in a deep dive on AV1 and video encoding in general

Key Insights:

  • Visionular on what goes into the implementation of a AV1 video encoder
  • The talk starts off with a very good explanation of the what, why and how of rolling out an additional codec to your system. For WebRTC in the browser you don’t control much beyond the bitrate and resolution but one can still ask many of the questions and use this is a framework:
  • At 4:30 we go into the part that describes encoder performance (where you can really optimize). The big constraint in RTC is that the encoder needs to produce a frame every 33 milliseconds (for 30fps)
  • Knowing the type of the content helps the encoder pick the right encoding tools (which is why we have the contentHint in WebRTC turning on screen content coding with good results)
  • Rate control (10:00) is particularly important for RTC use-cases. Maximum smoothness is an interesting goal to optimize for, in particular since any variance in frame size is going to be magnified by the SFU and will affect its outgoing network traffic
  • Adaptivity (12:50) for AV1 comes in two forms: SVC for layering and changing resolution without a keyframe
  • The “sales pitch” for Visionulars encoder comes quite late at 14:15, is done in less than 90 seconds and is a good pitch, the last part (15:30) is an outlook where RTC video encoding might go in the future
Gang Shen, Intel / Delivering Immersive 360-degree video over 5G networks

(16 minutes)

Watch if: you are working in the 360-degree video domain

Key points:

  • Intel, reviewing the challenges of 360-degree immersive video
  • We’re not quite sure what to do with this one. The use-case of 360 degree video is hugely demanding and solving it means pushing the boundaries in a number of areas
  • Until around 06:00, the discussion revolves around the unsuitability of HTTPS, and only from here, the discussion starts looking at UDP and WebRTC (an obvious choice for viewers of RTC@Scale)
  • Latency being a challenge, Intel went with 5G networks
  • It was hard to understand what Intel wanted to share here exactly
    • What is the problem being solved here?
    • Is 5G relevant and important here, or just the transport used, focusing on the latest and greatest cellular?
    • What challenges 360-video poses that are unique (besides being 8K resolution)?
  • Demo starts at 09:10, results at 11:00, a summary at 12:30 and an outlook at 14:30
  • All in all, this session feels a bit like a missed opportunity
Fengdeng Lyu + Fan Zhou, Meta / Enhanced RTC Network Resiliency with Long-Term-Reference and Reed Solomon code

(19 minutes)

Watch if:

  • you are using H.264 and are interested in features like LTR
  • you are interested in video forward error correction

Key points:

  • Secret sauce is promised!
  • The talk starts by describing the “open source baseline”, RTX, keyframes and XOR-based FEC
    • We would describe keyframes as a last resort that you really want to avoid and add temporal scalability (which allows dropping higher temporal layers) to the list of tools here
    • Using half the overall traffic for FEC sounds like too much, see this KrankyGeek talk which discusses the FEC-vs-target bitrate split
    • In the end this needs to be tuned heavily and we don’t know the details
  • At 4:20 we get a deep-dive on LTR, long-term reference frames, which is a fairly old H.264 feature
    • The encoder and decoder keep those frames around for longer and can then use them as baseline from which a subsequent frame is encoded/decoded instead of a previous frame which was lost (and then no longer needs to be recovered)
    • The implicit assumption here is 1:1, for multiparty LTR can not be used which is mentioned in the Q&A
  • When using LTR (vs NACK and FEC) makes sense is a question that is difficult to answer, we get to know Meta’s answer at 9:50: The largest gains seem to be in bandwidth-limited high-loss networks which makes sense
  • As a “VP8 pipeline” with only very rudimentary H.264 support libWebRTC does not support H.264 LTR out of the box and we will see whether Meta will open source this (and Google merges it)
  • At 10:30 we jump back to forward error correction, talking about the problems of the XOR-based approach and explaining its “only works if at most one packet covered by the recovery packet is lost” property and the protection scheme (see the small sketch after this list)
  • At 13:00 the important property of Reed-Solomon-FEC is explained which is more advanced than the XOR-based approach since the number of packets that can be recovered is proportional to the number of parity packets. This is followed by some practical tips when doing RS-FEC (which you won’t be able to do in the browser which also can not send FlexFEC)
  • At 16:30 there is a recap of the results. As with all other techniques, we are talking about single-digit improvements which is a great win. Meta promises to upstream their FEC to the open source repository which we are looking forward to (some of this already happened here)
  • Surprisingly video FEC has remained relatively obscure in WebRTC, neither Google Meet nor any of the well-known open source SFUs use it.
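Here is a toy illustration of the XOR-FEC property mentioned above – a single parity packet can rebuild one missing packet, but not two. Equal packet sizes are assumed to keep the sketch short.

```typescript
// XOR-based FEC in miniature: the parity packet is the XOR of all protected
// packets, so one missing packet can be reconstructed from the survivors plus
// the parity packet, but two missing packets cannot be separated.
function xorPackets(packets: Uint8Array[]): Uint8Array {
  const parity = new Uint8Array(packets[0].length);
  for (const p of packets) {
    for (let i = 0; i < parity.length; i++) parity[i] ^= p[i];
  }
  return parity;
}

// Recover the single missing packet from the received packets plus the parity.
function recoverSingleLoss(received: Uint8Array[], parity: Uint8Array): Uint8Array {
  return xorPackets([...received, parity]);
}
```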
Second Q&A with Speakers

https://www.youtube.com/watch?v=dv-iEozS9H4&t=13260s (23 minutes)

Watch if: you found any of the sessions this covers interesting

Key Insights:

  • Quite a few great questions, including some from the one and only Justin Uberti who apparently cannot stop keeping an eye on what is going on in RTC
  • A lot of interest in LTR
SESSION 3

Tsahi Levent-Levi, bloggeek.me / The past and future of WebRTC, 2024 edition

(24 minutes)

Watch if: you like to hear Tsahi speaking. He does some juggling too!

Key Insights:

  • Quite often when trying to explain why some things in WebRTC are a bit weird the answer is “for historical reasons”. Tsahi gives his usual overview of the history of WebRTC, dividing it into the early age of exploration, the growth and the differentiation phases and looks at the usage of WebRTC we have seen in and since the pandemic
  • Tsahi is undoubtedly the person who spent the most time with developers using WebRTC and thought a lot about how to explain it. What is interesting is that Tsahi has to explain what Google does while the WebRTC team at Google remains silent
  • Google’s libWebRTC is a cornerstone of the ecosystem and is still tightly integrated into Chromium and its build and release process. Yet despite increased usage we see a slowdown in development when looking at the number of commits, and it is effectively in maintenance mode. And it remains a Google-owned project (notably Meta is not affected by this since they can and have forked libWebRTC and they can release changes without open sourcing them)
  • What we see (at 10:10) currently in libWebRTC and Chromium is Google striving for more differentiation through APIs like Insertable Streams and Breakout Box, without being forced to open source everything and hand it to their competitors for free (e.g. we do not have built-in background blur in Chromium). Philipp isn’t convinced that WebTransport will replace WebRTC altogether. It makes sense for use-cases for which WebRTC was not the right tool though
  • Screen sharing is another topic (at 14:15) where we see a lot of improvements in Chromium and this is driven by the product needs of Google Meet. Some of the advances may only make sense for Google Meet but that is fair since Google is the party who pays the development cost
  • Optimization and housekeeping (at 17:20) are something that is not to be underestimated. Google has paid for the development of libWebRTC for more than a decade which is a huge investment in addition to open sourcing the original intellectual property
  • We heard a lot about AV1 as the most modern video codec and this continues in this talk. Lyra as an alternative audio codec has some competition (such as the new Meta audio codec) and it has not landed natively in the browser. Does Google use it together with WebRTC in native apps? Maybe…it requires effort to find out. As we have seen at KrankyGeek one can use it via WASM and insertable streams
  • The outlook is at 22:30 and raises the question how WebRTC will fare in 2024
Mandeep Deol + Ishan Khot, Meta / RTC observability

(20 minutes)

Watch if: you deploy a WebRTC-based system in production

Key points:

  • WebRTC is great when it works but sometimes it does not and then you need to debug why things do not work the way you expect. And you can not seriously ask your users to send you a chrome://webrtc-internals dump. Hence you need to make your system observable which means getting logs from the clients and servers
  • Two of the points on the slide at 0:40 are applicable to any system you build: you need to ensure user privacy, in particular for IP addresses and you need to strike a balance between reliability and efficiency
  • The “call debugging” section starting at 3:10 makes a good point: your system needs to provide both service-level metrics (such as what percentage of calls fail) as well as the ability to drill down to a particular session and understand the specific behavior (as you might have noticed, this is a topic close to the hearts of Philipp and Tsahi who evolved this project into watchRTC). At 4:15 we see Meta’s tool named “call dive”:
  • From the looks of it, it provides the fairly standard “timeline” view of some statistics (since we are dealing with a mobile application there are battery stats) but note that this is aggregated at the call level with multiple users
  • At 5:40 we get a deep dive into what it took Meta to develop the system. Some of these challenges are specific to their scale but the problem of how to aggregate the logs from the various clients and servers involved is very common
  • At 10:50 we get a deep-dive into the RAlligator system where the big challenge is determining when a multiparty call is done, all logs have arrived and can be processed by the following parts of the pipeline (which is made more difficult by not uploading the logs in real-time to avoid competing with the actual call). Keeping the logs in memory until then at the scale of Meta must be quite challenging
  • The system is designed for debugging, not for customer support where you need to explain to a customer why their call failed and need all logs reliably. Cost-effectiveness is a concern as well, you can’t spend more on the logging than you spend on the actual RTC media
  • At 16:00 we get a nice overview of what might be next. A lot of the things make sense but real time call debugging is just a fancy showcase and not very useful in practice. We would really like to see GenAI summarize webrtc-internals logs for us!
  • What is missing from the talk is how such a system is generating platform statistics which together with A/B experimentation must be the basis for the rollout results we see in many of the other talks
Sean Dubois, Livekit / Open Source from One to at Scale

(21 minutes)

Watch if you: like open source

Key takeaways:

  • This talk is about Sean’s experience working in the open source community, and especially Pion
  • Here, Sean tries to explain the benefits of open sources versus proprietary software, coming at it from the angle of the individual developer and his own experiences
    • When viewing, remember that most of these experiences are with highly popular open source projects
    • Your mileage may vary greatly with other types of open source projects
  • At 05:50 Sean makes a point of why Product Managers aren’t needed (you can talk to the customers directly and they even pay for it)
    • Tsahi as a Product Manager objects 😉
    • Talking to customers directly is needed for developers in products, but guidance and decisions ultimately need to be taken by the right function – even for developer-centric products and services
  • At 07:00 we get into how Amazon maintains their Chromium fork (Silk)
    • They have lots of patches made that they keep internally and are able to stay two weeks behind Chromium. But this feat requires 6 full time employees to achieve. Igalia had a great blog post on “downstreaming Chromium” recently (part two should be more interesting)
    • When using an open source project, careful decisions should be taken about contributing back versus keeping modifications proprietary. Reducing the cost of maintenance is quite an effective argument that Philipp has been using countless times
  • Sean touches the topic of money and open source at around 15:00. We believe this viewpoint is naive, as it doesn’t factor in investors, competition and other market constraints. For example we have seen a lot of WebRTC CPaaS vendors engage in direct peeing contests in response to Twilio shutting down which had a bad effect on what was left of a sense of “community” in WebRTC
  • All in all, quite an interesting session. Juxtapose this with how Meta is making use of open source for its own needs and how much of their effort gets contributed back when it comes to WebRTC for example. Or how Google open-sourced WebRTC and is pretty silent about it these days. Philipp’s approach of working with Google remains quite unique in that area but is not born from enthusiasm for WebRTC – more out of a necessity
Liyan Liu + Santhosh Sunderrajan, Meta / Machine Learning based Bandwidth Estimation and Congestion Control for RTC

(20 minutes)

Blog post: https://engineering.fb.com/2024/03/20/networking-traffic/optimizing-rtc-bandwidth-estimation-machine-learning/

Watch if: you are interested in BWE and machine-learning

Key takeaways:

  • Meta explaining here the work and results they got from employing machine learning to bandwidth estimation
  • That machine learning can help with BWE has been known for some time. Emil Ivov did a great presentation on the topic at KrankyGeek in 2017
  • The talk starts with a recap of what Meta achieved by moving from receive-side bandwidth estimation to send-side BWE (SSBWE) in 2021, and by a lot of tuning of BWE-related parameters in 2022
    • Not all networks are the same, and delivering the best quality requires understanding the type of network you are on
    • This is followed by a high-level overview of the different components in the WebRTC SSBWE implementation. That implementation is quite robust but contains a lot of parameters that work in certain scenarios but can be tuned (which is not possible in the browser). See this block diagram of the components:
  • The “what is the appropriate strategy in this situation” question is one that indeed needs to be answered holistically and is driving resilience mechanisms and encoding
  • Applying ML to network characterization requires describing the network behavior in a way that can be understood by machine learning which is the topic of the part of the talk starting at 4:10. Make sure to talk to your favorite machine learning engineer to understand what is going on! The example that starts at 7:05 gets a bit more understandable and shows what input “features” are used
  • Once random packet loss is detected the question is what to do with that information and we get some answers at 9:05. E.g. one might ignore “random” loss for the purpose of loss-based estimation (which Google’s loss-based BWE does in a more traditional way by using a trendline estimator for the loss)
  • At 9:30 we move from network characterization to network prediction, i.e. predicting how the network is going to react in the next couple of seconds
    • This is taking traditional delay-based BWE which takes an increase in receive-packet delay as input for predicting (and avoiding) congestion
    • The decision matrix shown at 12:00 is essentially a refined version of the GoogCC rate control table
    • As we learn in the Q&A, the ML model for this is around 30kb – about ten seconds of Opus-encoded audio – yet binary size is still a concern
  • At 14:50 we get into the results section, which shows a relatively large gain from the improvements. Yep, getting BWE right is crucial to video quality! We are not surprised that a more complex ML-based approach outperforms simplified hand-tuned models either. WebRTC’s AudioNetworkAdapter framework is an early example of this
  • An interesting point from the outlook that follows is how short the “window” used for the decisions is. 10 seconds is a lot of time in terms of packets but a relatively short window compared to the duration of the usual call
  • As we learn in the Q&A the browser lacks APIs for doing this kind of BWE tuning. Yet the W3C WebRTC Working Group prefers spending time on topics like “should an API used by 1% be available on the main thread”…
Live Q&A with Speakers

https://www.youtube.com/live/dv-iEozS9H4?feature=shared&t=21000 (24 minutes)

Watch if: you found any of the sessions this covers interesting

Key Insights:

  • Quite a few great questions again, including how to simulate loss in a realistic way (where the opus 1.5 approach may help)
  • And we learn how many balls Tsahi can juggle!
Closing remarks

As in previous years, we tried capturing as much as possible, which made this a wee bit long. The purpose though is to make it easier for you to decide in which sessions to focus, and even in which parts of each session. And of course for us so we can look things up and reference it in future blog posts or courses!

The post RTC@Scale 2024 – an event summary appeared first on BlogGeek.me.

End-to-End Encryption in WebRTC… 4 Years Later

webrtchacks - Tue, 03/12/2024 - 21:30

We covered End-to-end encryption (E2EE) before, first back in 2020 when Zoom’s claims to do E2EE were demystified (not just by us; they later got fined $85m for this), followed by the quite exciting beta implementation of E2EE in Jitsi using Chromium’s Insertable Streams API. A bit later we had Matrix explain how their approach […]

The post End-to-End Encryption in WebRTC… 4 Years Later appeared first on webrtcHacks.

WebRTC recording challenges and solutions

bloggeek - Mon, 02/26/2024 - 12:30

Need WebRTC recording in your application? Check out the various requirements and architectural decisions you’ll have to make when implementing it.

A critical part of many WebRTC applications is the ability to record the session. This might be a requirement for an optional feature or it might be the main focus of your application.

Whatever the reasons, WebRTC recording comes in different shapes and sizes, with quite a few alternatives on how to get it done these days.

What I want to do this time is to review a few of the aspects related to WebRTC recording, making sure that when it is your time to implement, you’ll be able to make better choices in your own detailed requirements and design.

Table of contents Record-and-upload or upload-and-record

One of the fundamental things you will need to consider is where do you plan the WebRTC recording to take place – on the device or on the server. You can either record the media on the device and then (optionally?) upload it to a server. Or you can upload the media to a server (live in a WebRTC session) and conduct the recording operation itself on the server.

Recording locally uses the MediaRecorder API, while uploading uses HTTPS or a WebSocket. Recording on the server uses a WebRTC peer connection, and then whatever media server you use to containerize the media itself on the server.
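
To make the record-and-upload flow concrete, here is a minimal browser-side sketch, not a full implementation: it assumes an /upload endpoint on your server that accepts the finished blob, and the mime type and one-second chunk interval are arbitrary choices.

```typescript
// Minimal record-and-upload sketch (browser). Assumes an /upload endpoint exists.
async function recordAndUpload(durationMs: number): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });

  // Browsers differ in the containers/codecs they support - probe before picking one.
  const mimeType = MediaRecorder.isTypeSupported('video/webm;codecs=vp9,opus')
    ? 'video/webm;codecs=vp9,opus'
    : 'video/webm';

  const recorder = new MediaRecorder(stream, { mimeType });
  const chunks: Blob[] = [];
  recorder.ondataavailable = (e) => { if (e.data.size > 0) chunks.push(e.data); };

  const stopped = new Promise<void>((resolve) => (recorder.onstop = () => resolve()));
  recorder.start(1000); // hand us a chunk every second
  setTimeout(() => recorder.stop(), durationMs);
  await stopped;

  // Upload the whole recording once done (placeholder endpoint).
  const blob = new Blob(chunks, { type: mimeType });
  await fetch('/upload', { method: 'POST', body: blob });
}
```

Uploading the chunks as ondataavailable fires (over HTTPS or a WebSocket) gets you closer to uploading while recording, at the cost of reassembling the pieces on the server.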

Here’s how I’d compare these two alternatives to one another:

Record-and-upload:
  • Technology: MediaRecorder API + HTTPS
  • Client-side: Some complexity in implementation, and the fact that browsers differ in the formats they support
  • Server-side: Simple file server
  • Main advantages:
    • No added infrastructure complexity
    • Better quality on poor networks (assuming you have time to wait for the uploaded recording)

Upload-and-record:
  • Technology: WebRTC peer connection
  • Client-side: No changes to client side
  • Server-side: Complexity in recording function
  • Main advantages:
    • Decoupling of recording requirements from client device characteristics and capabilities
    • Full control over composited result

When would I record-and-upload?

I would go for client-side recording using MediaRecorder in the following scenarios:

  1. My sole purpose is to record and I am the only “participant”. Said differently – if I don’t record, there would be no need to send media anywhere
  2. The users are aware of the importance of the recording and are willing to “sacrifice” a bit of their flexibility for higher production quality
  3. The recorded stream is more important to me than whatever live interaction I am having – especially if there’s post production editing needed. This usually means podcasts recording and similar use cases

When would I upload-and-record?

Here’s when I’d use classic WebRTC architectures of upload-and-record:

  1. I lack any control over the user’s devices and behavior
  2. Recording is a small feature in a larger service. Think web meetings where recording is optional at the discretion of the users and used a small percentage of the time
  3. When sessions are long. In general, if the sessions can be longer than an hour, I’d prefer upload-and-record to record-and-upload. No good reason. Just a gut feeling that guides me here

How about both?

There’s also the option of doing both at the same time – record-and-upload in parallel with upload-and-record. Confused?

Here’s where you will see this taking place:

  • An application that focuses on the creation of recorded podcast-like content that gets edited
  • One that is used for interviews where two or more people in different locations have a conversation, so they have to be connected via a media server for the actual conversation to take place
  • Since there’s a media server, you can record in the server using the upload-and-record method
  • Since you’re going to edit it in post production, you may want to have higher quality media source, so you upload-and-record as well
  • You then offer these multiple resulting recordings to your user, to pick and choose what works best for him
Multi stream or single stream recording

If you are recording more than a single media source, let’s say a group of people speaking to each other, then you will have this dilemma to solve:

Will you be using WebRTC recording to get a single mixed stream out of the interaction or multiple streams – one per source or participant?

Assuming you are using an SFU as your media server AND going with the upload-and-record method, then what you have in your hands are separate media streams, one per source. Also, what you need is a kind of MCU if you plan on recording as a single stream…

For each source you could couple their audio and video into a single media file (say .webm or .mp4), but should you instead mix all of the audio and video sources together into a single stream?

Using such a mixer means spending a lot of CPU and other resources for this process. The illustration below (from my Advanced WebRTC Architecture course) shows how that gets done for two users – you can deduce from there for more media sources:

The red blocks are the ones eating up your CPU budget. Decoding, mixing and encoding are expensive operations, especially when an SFU is designed and implemented to avoid exactly such tasks.

Here’s how these two alternatives compare to each other:

Multiple streams:
  • Operation: Save into a media file
  • Resources: Minimal
  • Playback: Customized, or each individual stream separately
  • Main advantages:
    • No data loss from the session
    • Can create multiple playback experiences
    • Easy to diarize transcriptions since nothing is mixed
    • Simple to implement
    • Can mix later on if needed

Mixed stream:
  • Operation: Decode, mix and re-encode
  • Resources: High on CPU and memory
  • Playback: Simple
  • Main advantages:
    • Simple to play back anywhere
    • Requires less storage space

When would I use multi stream recording?

Multi stream can be viewed as a step towards mixed stream recording or as a destination of its own. Here’s when I’d pick it:

  1. When I need to be able to play back more than a single view of the session in different playback sessions
  2. If the percentage of times recorded sessions get played back is low – say 10% or lower. Why waste the added resources? (here I’d treat it as a step towards an optional mixed stream “destination”)
  3. When my customer might want to engage in post production editing. In such a case, giving him more streams with more options would be beneficial

When would I decide on mixed stream recording?

Mixed recording would be my go-to solution almost always. Usually because of these reasons:

  1. In most cases, users don’t want to wait or deal with hassles during the playback part
  2. Even if you choose multi stream for your WebRTC recording, you’ll almost always end up needing to also provide a mixed stream experience
  3. Playing back multi stream content requires writing a dedicated player (haven’t seen a properly functioning one yet)

What about mixed stream client side recording?

One thing that I’ve seen once or twice is an attempt to use the device’s browser to mix the streams for recording purposes. This might be doable, but quality is going to be degraded both for the actual user in the live session and in the recorded session.

I’d refrain from taking this route…

Switching or compositing

If you are aiming for a single stream recording, then the next dilemma you need to solve is the one between switching and compositing. Switching is the poor man’s choice, while compositing offers a richer “experience”.

What do I mean by that?

Audio is easy. You always need to mix the sources together. There isn’t much of a choice here.

For video though, the question is mostly what kind of a vantage point do you want to give that future viewer of yours. Switching means we’re going to show one person at a time – the one shouting the loudest. Compositing means we’re going to mix the video streams into a composite layout that shows some or all of the participants in the session.

Google Meet, for example, uses the switching method in its recordings, with a simple composite layout when screen sharing takes place (showing the presenter and his screen side by side, likely because it wasn’t too hard on the mixing CPU).

In a way, switching enables us to “get around” the complexity of single stream creation from multiple video sources:

Switching:
  • Audio: Mix all audio sources
  • Video: Select a single video at a time, based on active speaker detection
  • Resources: Moderate
  • Main advantages: Cost effective

Compositing:
  • Audio: Mix all audio sources
  • Video: Pick and combine multiple video streams together
  • Resources: High CPU and memory needs
  • Main advantages: More flexible in layouts and in the understanding of participants and what they visually did during the meeting
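
To get a feel for what compositing involves, here is a minimal browser-side sketch that mixes the audio with the Web Audio API and paints two video streams side by side onto a canvas. Remember that in most real deployments this runs on a server, as discussed above; the 1280×360 layout and 30fps capture rate are arbitrary assumptions.

```typescript
// Minimal compositing sketch (browser): mix the audio, paint two videos side by side.
function composite(streamA: MediaStream, streamB: MediaStream): MediaStream {
  // Audio is always mixed - with switching or with compositing.
  const audioCtx = new AudioContext();
  const mixed = audioCtx.createMediaStreamDestination();
  [streamA, streamB].forEach((s) => audioCtx.createMediaStreamSource(s).connect(mixed));

  // Video: draw both sources into a single canvas layout (arbitrary 1280x360 here).
  const canvas = document.createElement('canvas');
  canvas.width = 1280;
  canvas.height = 360;
  const ctx = canvas.getContext('2d')!;
  const videoA = attachVideo(streamA);
  const videoB = attachVideo(streamB);

  function draw(): void {
    ctx.drawImage(videoA, 0, 0, 640, 360);
    ctx.drawImage(videoB, 640, 0, 640, 360);
    requestAnimationFrame(draw);
  }
  draw();

  // The composite stream can now be fed into a MediaRecorder or a peer connection.
  const out = canvas.captureStream(30);
  mixed.stream.getAudioTracks().forEach((t) => out.addTrack(t));
  return out;
}

function attachVideo(stream: MediaStream): HTMLVideoElement {
  const video = document.createElement('video');
  video.srcObject = stream;
  video.muted = true;
  void video.play();
  return video;
}
```

Switching uses the exact same machinery, except that the draw loop paints only the active speaker instead of a layout – which is where the compute savings come from.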

When would I pick switching?

When the focus is the audio and not the video.

Let’s face it – most meetings are boring anyway. We’re more interested in what is being said in them, and even that can be an exaggeration (one of the reasons why AI is used for creation of meeting summaries and action items in some cases).

The crux of the matter here is that implementing switching might take slightly longer than compositing: in order to optimize for machine time in the recording process, we first need to invest more development time. Bear that in mind.

When would compositing be my choice?

The moment the video experience is important. Webinars. Live events. Video podcasts.

Media that you plan or want to apply post production editing to.

Or simply when the implementation is there and easier to get done.

I must say that in many cases I’ve been involved with, switching could have been selected. Compositing was picked just because it was thought of as the better/more complete solution. Which begs the question – how can Google Meet get away with switching in 2024? (the answer is simple – compositing isn’t needed in a lot of use cases).

Rigid layouts or flexible layouts

Assuming you decided on compositing the multiple video streams into a single stream in your WebRTC recording, it is now time to decide on the layout to use.

You can go for a single rigid layout used for all (say tiles or presenter mode). You can go for a few layouts, with the ability to switch from one to the other based on context or some external “intervention”. You can also go for something way more flexible. I guess it all depends on the context of what you’re trying to achieve:

Single:
  • Concept: A single layout to rule them all
  • Main advantages:
    • Simple to implement
    • Once implemented, it is hands-free
  • Main challenges:
    • What if that single layout isn’t enough for your users?

Rigid:
  • Concept: Have 2, 3 or 7 specific layouts to choose from
  • Main advantages:
    • Gives a few choices to your users
    • Knowing the layouts in advance enables code optimizations for them
  • Main challenges:
    • How to choose which layouts to have?
    • When and how to switch between these layouts?

Flexible:
  • Concept: Allow virtually any layout your users may wish to use
  • Main advantages:
    • Users can control everything, so you can offer the best user experience possible
  • Main challenges:
    • How are layouts defined and created?
    • When and how to switch between the layouts?

Here’s a good example of how this is done in StreamYard:

StreamYard gives 8 different predefined layouts that a host can dynamically choose from, along with the ability to edit a layout or add new ones (the buttons at the bottom right corner of the screen).

When to aim for rigid layouts?

Here’s when I’ll go with rigid layouts:

  1. The recording is mostly an after-effect and not the “main course” of the interaction. For the most part, group meetings don’t need flexible layouts (no one cares enough anyway)
  2. My users aren’t creatives in nature, which brings us to the same point. The WebRTC recording itself is needed, but not for its visual aesthetics – mostly for its content
  3. When users won’t have the time or energy to pick and choose on their own

Here, make sure to figure out which layouts are best to use and how to automatically make the decision for the users (it might be that you record whatever layout the host is using, or decide based on the current state of the meeting – with screen sharing, without, number of participants, etc).

When would flexibility be in my menu?

Flexibility will be what I’ll aim for if:

  1. My users care deeply about the end result (assume it has production value, such as uploading it to YouTube)
  2. This is a generic platform (CPaaS), and I am not sure who my users are, so some may likely need the extra flexibility
Transcoding pipeline or browser engine

You decided to go for a composite video stream for your WebRTC recording? Great! Now how do you achieve that exactly?

For the most part, I’ve seen vendors pick up one of two approaches here – either build their own proprietary/custom transcoding pipeline – or use a headless browser as their compositor:

Transcoding pipeline:
  • Underlying technology: Usually ffmpeg or gstreamer
  • Concept: Stitch the pipeline on your own from scratch
  • Resources: High
  • Main advantages:
    • Less moving parts means the solution is more robust
    • Cost effective, scales a bit better

Browser engine:
  • Underlying technology: Chrome (and ffmpeg)
  • Concept: Add a headless browser in the cloud as a user to the meeting and capture the screen of that browser
  • Resources: High, with higher memory requirements (due to Chrome)
  • Main advantages:
    • Easier to implement
    • View can easily include any HTML/CSS element you desire

Here I won’t be giving an opinion about which one to use as I am not sure there’s an easy guideline. To make sure I am not leaving you half satisfied here, I am sharing a session Daily did at Kranky Geek in 2022, talking about their native transcoding pipeline:

Since that’s the alternative they took, look at it critically and try to figure out what their challenges were, so you can create your own comparison table and make a decision on which path to take.

Live or “offline”

Last but not least, decide if the recording process takes place online or post mortem – live or “offline”.

This is relevant when what you are trying to do is to have a composite single media stream out of the session being recorded. With WebRTC recording, you can decide to start off by just saving the media received by your SFU with a bit of metadata around it, and only later handle the actual compositing:

Live:
  • Concept: Handle recording on demand, as it is taking place. Usually adds 0-5 seconds of delay
  • Main advantages:
    • Can be used to stream the media to live platforms (YouTube Live, Twitch, LinkedIn Live, Facebook Live, etc)
    • Better user experience (available faster)

“offline”:
  • Concept: Use job queues to handle the recording process itself, making the recorded media file available for playback minutes or hours after the session ended
  • Main advantages:
    • Better utilization of media processing resources
    • Can be delayed until a request is made to play back a session

When to go live?

The simple answer here is when you need it:

  1. If you plan on streaming the composited media to a live streaming platform
  2. When all (or most) sessions end up being played back

When to use “offline”?

Going “offline” has its set of advantages:

  1. Cost effective – when you’re Uncle Scrooge
    1. Commit to compute resources with your cloud vendor and then queue such jobs to get better machine utilization
    2. You can use spot instances in the cloud to reduce costs (you may need to retry when they get taken away)
  2. If the streams aren’t going to be viewed immediately
  3. Assuming streams are seldom viewed at all, it might be best to composite them only on demand, with the assumption that storage costs less than compute (depends on how long you need to store these media files) – a sketch of such an “offline” job follows this list
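
As an illustration, here is a minimal sketch of what such an “offline” compositing job might look like when a queue worker picks it up, spawning ffmpeg from Node.js. The file names and the hstack/amix filter graph are assumptions that fit two inputs of the same height – a real pipeline will need to handle layouts, resolutions and failures.

```typescript
// Minimal "offline" compositing job sketch: two recorded files in, one composite out.
import { spawn } from 'node:child_process';

function compositeJob(inputA: string, inputB: string, output: string): Promise<void> {
  const args = [
    '-i', inputA,
    '-i', inputB,
    // Side-by-side video, mixed audio (assumes both inputs share the same height).
    '-filter_complex',
    '[0:v][1:v]hstack=inputs=2[v];[0:a][1:a]amix=inputs=2[a]',
    '-map', '[v]',
    '-map', '[a]',
    output,
  ];

  return new Promise((resolve, reject) => {
    const ffmpeg = spawn('ffmpeg', args);
    ffmpeg.on('error', reject);
    ffmpeg.on('close', (code) =>
      code === 0 ? resolve() : reject(new Error(`ffmpeg exited with ${code}`))
    );
  });
}

// A queue worker would pick up a finished session and run something like:
// await compositeJob('alice.webm', 'bob.webm', 'session-123.mp4');
```

Because the worker is decoupled from the live session, it can run on spot instances and simply be retried if it gets interrupted.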

How about both?

Here are some suggestions of combinations of these approaches that might work well:

  • Mix audio immediately, but hold off on video compositing (it might not be needed at all)
  • Use “offline”, but have the option to bump priority and “go live” based on the session characteristics or when users seem to want to play back the file NOW
Plan your WebRTC recording architecture ahead of time

This has been long. Sorry about that.

Designing your WebRTC recording architecture isn’t simple once you dive into the details. Take the time to think of these requirements and understand the implications of the architecture decisions you make.

Oh, and did I mention there’s a set of courses for WebRTC developers available? Just go check them out at https://webrtccourse.com

The post WebRTC recording challenges and solutions appeared first on BlogGeek.me.

All the ways to send a video file over WebRTC

webrtchacks - Tue, 02/20/2024 - 14:54

I am working on a personal Chrome Extension project where I need a way to convert a video file – like your standard mp4 – into a media stream, all within the browser. Adding a file as a src to a Video Element is easy enough. How hard could it be to convert a video […]

The post All the ways to send a video file over WebRTC appeared first on webrtcHacks.

Science fiction books that resonated with me

bloggeek - Mon, 02/12/2024 - 12:30

Some science fiction books I carry in my heart and mind wherever I go for quite a few years now. Consider it a condensed book review.

I am a sucker for science fiction books. About 15 years ago, when I had a blog on RADVISION’s website, I even wrote a post about how writers envisioned video conferencing in science fiction books. Alas, that post has died, along with the RADVISION blogs, years ago.

Last week I sat down in the car with my daughter, ending up talking about books. It dawned on me that there are several that have stuck with me throughout the years and resonated. Books that keep me thinking even today.

This time, I decided to share them here. Unrelated to WebRTC, video, CPaaS or communication technologies. Just something I wanted to share 🤷‍♂️

And yes. All links are affiliated – my Kindle needs a few new good science fiction books 😉

They’re brought here in no specific order (alphabetically…)

Table of contents Blood Music / Greg Bear

Greg Bear has many great books. Blood Music is definitely one of them (I had to decide whether to suggest this one or Darwin’s Radio – ending up with this one).

What I like about this one is how it combines miniaturization with biology. I know nothing about biology and what I do know about technology and miniaturization is by using computers.

This was a compelling read and a really interesting one of what happens at the extreme ends of connecting the dots between these two things.

It also resonated with my own philosophical thoughts about the difference in depiction and scale between the makings of atoms to the whole universe. To understand this specific sentence, reading Blood Music by Greg Bear is likely needed.

Daemon / Daniel Suarez

LLMs, chatbots, AI. This book has it all.

One of my previous managers suggested I read that, and he was spot on. It takes the angle of how the gaming industry and its NPCs (Non Player Characters) can make a difference if they are “let loose” in the world.

It takes the technologies we have today (or rather a few years ago) and tries to prophesize where we will be with them. Definitely a few misses in where we are headed, but a lot to think about.

Especially when the time to decide who works for who – the machine for us or us for the machine.

Go read Daemon by Daniel Suarez

Ender’s Game / Orson Scott Card

This is the second or third science fiction book I read in English and it got me onto the path of reading in English a lot. A roommate at the university gave it to me to read and said “it is about a small kid that saves the world”.

Besides the science fiction part of the book, how it covers bullying and the way to win in wars is interesting. I like how Orson outlines the story.

A few years after reading it, Orson Scott Card came to Israel for an event. I went there with a colleague from work for the book signing event, standing two hours in line for one minute with Orson. He gave me his full attention and was surprised at the book I brought to sign (Enchantment – it isn’t in this list since it is fantasy and not science fiction).

Anyway, Orson Scott Card is always a good read and Ender’s Game is a great starting point.

Expendable / James Alan Gardner

This is one enjoyable read. It took me into this riveting series of books by James Alan Gardner.

To put it short, explorers are expendable. They are dropped into new worlds to explore, and the reason they were selected is because they are deformed in one way or another but smart. So instead of fixing their external deformity (or ugliness), they are used as explorers. Why? Because if they looked good – they wouldn’t be expendable. Their death might matter to someone.

The rest of the series revolves around nanotech and AI. Or magic. Or something in between.

This is a lot less about ruminating about the books afterwards and more about enjoying the read – go read Expendable by James Alan Gardner.

Old Man’s War / John Scalzi

John Scalzi is another master storyteller (at least for me). Old Man’s War marks the beginning of a great series of many books (and not the only ones I love from John Scalzi).

Old Man’s War places humanity in a universe full of alien life – most of it warring in nature (or at least that’s the initial premise of it all). As for the way to build an army, the solution is to take the elderly and have them undergo a physical change, essentially setting them a bit apart from the rest of humanity and turning them into soldiers.

Since Earth is kept a wee bit back in its technology, they’ve seen most of what there is in life already and are old. So getting a younger body is all that is needed to recruit them for the cause.

The older I get (age 40 was especially rough – it is when I started breaking at the seams, or so it seems), the more I think about this series of books – and how I wish (or don’t wish) to be young again.

This series, as well as many of his other books, is a joy to read – Old Man’s War by John Scalzi

Ready player one / Ernest Cline

Skip the movie. Read the book.

This has the word metaverse all over it. If you read Snow Crash by Neal Stephenson then you’ll want to read this one. And if you haven’t then just go read them both 🤷‍♂️

Besides the parts about the metaverse, large corporations and all that stuff we’re here to ponder, what really sets this book apart is the treasure trove that it is for nostalgia. If you are 40 years or older, know what a Commodore 64 is, and played Pac Man on a handheld device before there was such a thing as a PC, then you’ll find your youth inside this book. For me, this was a true joy to read.

Oh, and I just started reading Ready Player Two (noticed that when I went searching for the books I loved for this article).

Go read Ready Player One by Ernest Cline.

The Peace War / Vernor Vinge

If you know Vernor Vinge as a scifi writer then you don’t need me for this one. If you read scifi and haven’t read a Vernor Vinge book then you should. In such a case, The Peace War is a great place to start.

This one is about technology and fighting wars with the resources you have. Where one side rules all, the other goes and miniaturizes stuff.

This, as well as many of his other books just float in my head and come out from time to time (especially books like A Fire Upon The Deep or Rainbows End, both from the point of view of communication technologies and artificial intelligence).

Anyways, just go read The Peace War by Vernor Vinge. Or any other book by Vernor Vinge for that matter…

The Speed of Dark / Elizabeth Moon

This book touched me in many ways. It isn’t exactly science fiction – it is mostly about the effect of improvements in healthcare on the moral decisions we need to take.

In this case, it is about the last autistic people in the world, after autism is all but eradicated, and what it means for an autistic adult to decide to “heal”. Would that be a good thing for him? A bad one? Will he stay the same person?

And all of that written from the point of view of the autistic person.

I truly loved this one and walked around with the baggage it left in me afterwards. Highly recommended – The Speed of Dark / Elizabeth Moon.

Winter World / A.G. Riddle

I read this one last winter… and it got me into the mood of winter and kept me there. All dark and cold. This book (and the series) is so well written. You can just feel the cold and the darkness as you read it.

The story is about our earth, dealing with climate change – one where the sun just gets blotted out of the sky until it is no more visible. At least that’s the first book. It is about choices – technological and human ones. And about our will to survive.

I’ll just leave it at that and say that this winter here is cold as well. And it got me thinking about this book series again.

Go read Winter World by A.G Riddle.

Wool / Hugh Howey

No. I haven’t seen it on Apple TV. I read the book and then all 3 books in this series. And then the rest of the Silo stories available. It is that riveting.

This is less about technology (at least the first book) and more about the human condition and how technology affects it. Like many of the other books in this article that I am recommending, this series is also dystopian in nature. It isn’t that I like my books bleak – it is just that the bleak ones stick with me longer and cause me to think about my day to day a lot more.

Anyways, go read Wool by Hugh Howey.

Your turn

Got any books you think I should be reading? Science fiction and fantasy would be great:

Now I need to get back to Ready Player Two 😉

I’ll be back to the usual communication technology articles next time.

The post Science fiction books that resonated with me appeared first on BlogGeek.me.

An FAQ for WebRTC beginners

bloggeek - Mon, 01/29/2024 - 12:30

Answering some common FAQ questions about WebRTC that seem to be top of mind on Google search.

A few days ago, I searched something on Google, and somehow bumped into a page full of questions Google found relevant or common. These weren’t exactly relevant to my search term (not directly), but they were there. And they were beginner questions about WebRTC.

It dawned on me that I’ve probably mentioned some of these things in passing (or a wee bit more) in the past, but placing them all neatly together in one place made sense. So here we are. And here’s the WebRTC FAQ for beginners.

Table of contents Is WebRTC TCP or UDP?

WebRTC is neither TCP nor UDP. At the same time WebRTC is both TCP and UDP.

Confused?

Let’s put things in order.

With WebRTC there’s signaling and media.

Signaling is considered to be out of scope and left to the application. Most applications will use HTTPS or a secure WebSocket as transport for signaling. HTTPS runs over TCP… sort of… since HTTP/3 can also do UDP. But mostly, you can think of signaling in WebRTC as TCP and the skies won’t fall (what we want for signaling is reliability and message ordering, and TCP based protocols give us that).

Media in WebRTC wants to use UDP. It strives to use UDP as much as possible, but that’s not always available to it, so it then falls back towards using TCP. But you can consider this as a last resort (we don’t want to be in that predicament).
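
If you want to see what a given session actually ended up using, getStats() can tell you. Here is a minimal sketch – the field names follow the WebRTC statistics API, though browser support for individual fields varies:

```typescript
// Check whether the media transport ended up on UDP, TCP or a TURN relay.
async function reportTransport(pc: RTCPeerConnection): Promise<void> {
  const stats = await pc.getStats();
  stats.forEach((report) => {
    // The nominated, succeeded candidate pair is the one actually carrying media.
    if (report.type === 'candidate-pair' && report.nominated && report.state === 'succeeded') {
      const local = stats.get(report.localCandidateId);
      if (local) {
        // protocol: 'udp' | 'tcp'; candidateType: 'host' | 'srflx' | 'prflx' | 'relay'
        console.log(`media over ${local.protocol}, candidate type: ${local.candidateType}`);
      }
    }
  });
}
```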

Read more about WebRTC transport:

Is WebRTC still used?

Yes. You wouldn’t be reading my blog otherwise

It isn’t that there aren’t any challengers. It is that WebRTC is still the most popular and common solution for real time communications in web browsers.

WebTransport + WebCodecs + WebAssembly might someday replace WebRTC. But we’re not there yet.

Read more about WebRTC’s success and future:

Is WebRTC free or paid?

Free. Err. Paid. Free? Paid? Both? None?

Let’s sort things out here.

WebRTC is an open standard with a popular open source implementation maintained by Google and used by all major browser vendors.

Accessing the APIs and using them is free.

But creating most of the meaningful applications is going to require some sort of payment. That can be to a CPaaS vendor to host the WebRTC infrastructure; or to an IaaS vendor (think AWS) to host the servers and pay for the bandwidth used (especially with TURN and media servers).

So yes. WebRTC is free, but expect to pay for it, in particular if you need help. Google will not help you…

Read more about WebRTC’s costs:

What is WebRTC used for?

WebRTC is used for implementing realtime voice and video communications over the internet using web browsers. But it definitely isn’t limited to that.

I’ve seen use cases dealing with recording, live streaming, broadcasting, cloud gaming, remote teleoperation (that’s driving a car… remotely), peer assisted delivery, file transfer, … the list is endless.

Read more about WebRTC use cases:

Is WebRTC a security risk?

WebRTC enables browsers to have (and give) access to your microphone, camera, display and IP address. This is what every voice or video meeting application you install requires in order to work properly as well.

Is that a security risk? That’s up to you to decide as a user.

Giving such power to the browser reduces the friction for users but also for nefarious third parties who want to exploit these capabilities, so some will see this as an increase in security risk.

For developers it simply means that they need to know and understand what they are doing and how they implement their applications with this technology in order to mitigate any potential risk. It is worth noting that WebRTC and web browsers from their side do the most they can to reduce such security risks and even encourage developers to write secure applications.

Read more about WebRTC security:

Does Netflix use WebRTC?

No.

Netflix might be using WebRTC somewhere, but for its main video streaming service Netflix doesn’t use WebRTC.

Why? Because WebRTC is designed and fine tuned for real time communications. As such, it sacrifices quality for improved latency.

Netflix is the exact opposite. It strives to deliver the best quality and is willing to sacrifice a bit of latency while at it – you wouldn’t mind waiting a few seconds for your movie to start in order to have crisp and pristine video. On the other hand, you’d be pissed if your online video conversation had a latency of 5 seconds and felt as if the other person was sitting on the moon.

Read more about WebRTC and latency:

Can WebRTC be hacked?

Yes.

Everything can be hacked.

Browsers are trying to do their best to reduce that risk for WebRTC (and other technologies they implement), but it is an arms race…

Read more about WebRTC security:

Does WebRTC expose your IP?

This is a tricky question. The answer is yes and no.

Let’s start by understanding which IP address…

Your device usually has two IP addresses:

  1. A local IP address, used inside its local network – say the home network
  2. A public IP address, which the NAT assigns to it and is used to communicate with “the world”

Each application on your device, including the browser, has access to the local IP address.

Each web server you connect to on the internet sees your public IP address.

When negotiating a WebRTC session, WebRTC uses a mechanism called ICE which discovers your public IP address and shares your local and public IP address with the peer it connects with.

A few quick clarifications here:

  1. WebRTC will not expose a local IP address without permissions to access a camera or a microphone
  2. Any voice or video communication application ends up exposing the same addresses in a similar fashion
  3. A WebRTC application can decide to use only TURN relay or media servers so as not to expose these IP addresses to other users (see the sketch after this list)
  4. There are browser extensions that can be used to limit the ability to expose local IP addresses
  5. If your VPN leaks your public IP with WebRTC, it is that VPN which is not working
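
To illustrate points 1 and 3 above, here is a minimal sketch. The candidates surfaced in onicecandidate are where addresses show up, and iceTransportPolicy: 'relay' is how an application forces everything through TURN – the TURN URL and credentials below are placeholders, not a working server:

```typescript
// Where IP addresses surface: the ICE candidates gathered by the peer connection.
const pc = new RTCPeerConnection({
  iceServers: [
    // Placeholder TURN server - replace with your own deployment or provider.
    { urls: 'turn:turn.example.com:443', username: 'user', credential: 'secret' },
  ],
  // 'relay' = only TURN candidates, so local/public addresses aren't shared with the peer.
  // The default ('all') also gathers host and server-reflexive candidates.
  iceTransportPolicy: 'relay',
});

pc.onicecandidate = (event) => {
  if (event.candidate) {
    // The candidate string contains the address being offered to the other side.
    console.log('ICE candidate:', event.candidate.candidate);
  }
};
```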

More about WebRTC IP leak:

What is better than WebRTC?

A cheesecake from Philipp Hancke for my 10-year BlogGeek.me birthday

A cheesecake is definitely better than WebRTC. A chocolate cheesecake is doubly so.

In all seriousness though, I have no clue.

It depends. Which is a cop out answer but the only one here.

The question should be more specific. It should include what it is you are trying to build, what is the target audience and what medium do you want to use for it.

For live streaming, WebRTC might not be the best fit. Especially if you can live with a 2-second delay (in that case, LL-HLS and LL-DASH would be better solutions for example).

For video conferencing… well… I’d start by selecting WebRTC by default. And then try to poke holes in my decision and select something else – proprietary – since there is nothing else…

More about WebRTC alternatives:

Is WebRTC better than Websockets?

Apples to oranges.

I’d use both. In the same application. Seriously.

WebSocket for signaling and WebRTC for media.

There are two places where you can think of WebRTC and WebSocket as alternatives:

  1. WebRTC’s data channel, which is bidirectional in nature and peer-to-peer. For the most part, I’ll still use WebSocket – unless I am serious about my low latency requirements or my privacy requirements (see the sketch after this list)
  2. When aiming for live streaming. But then I might just go for WebTransport instead of WebSocket – being forward thinking…
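
For completeness, here is a minimal sketch of the two side by side – a WebSocket for signaling-style messaging, and an unordered, lossy data channel for latency-sensitive data. The URL and the channel options are assumptions, not recommendations:

```typescript
// WebSocket: a server round-trip, reliable and ordered - a natural fit for signaling.
const ws = new WebSocket('wss://example.com/signaling'); // placeholder URL
ws.onopen = () => ws.send(JSON.stringify({ type: 'hello' }));

// Data channel: peer-to-peer once connected; can trade reliability for latency.
const pc = new RTCPeerConnection();
const channel = pc.createDataChannel('game-state', {
  ordered: false,      // don't block newer messages behind retransmissions
  maxRetransmits: 0,   // drop instead of retransmitting - "UDP-like" semantics
});
channel.onopen = () => channel.send('position update');
```

You still need the WebSocket (or something like it) to exchange the offer/answer that gets the data channel connected in the first place – which is part of why “both” is the practical answer.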

Did I already say apples to oranges?

More about transport in WebRTC:

Is Google a WebRTC?

To be frank – Google is Google. Not sure what the question is here

Google and WebRTC have an interesting relationship.

It all started when Google acquired GIPS, a company who licensed media engines. A bit afterward, WebRTC was announced in the standardization organizations and Google made the GIPS media engine into an open source implementation, integrating it into Chrome and placing APIs on top of it – these APIs were the WebRTC API specifications (or close enough at the time).

That was over 10 years ago. Since then, WebRTC has evolved and so has Google’s implementation of it.

Google uses WebRTC internally for Google Meet and for other products and projects it has.

The actual WebRTC project is open source. Maintained by Google. And most of the contributions to it are Google’s.

More about WebRTC & Google:

Does WebRTC need a server?

Yes. WebRTC needs a server. In fact, it needs multiple servers.

For starters, you need to download the application logic from somewhere, and a way to signal who you want to make a conversation with. This is done with a signaling server.

Then, when connecting the WebRTC session, there are times when you won’t have a direct route for the media. In such cases, you are going to need a TURN server. TURN servers also act as STUN servers but STUN servers are not the same as signaling servers.

And, you may want to go fancy – run a group meeting, record stuff. Such capabilities almost always mean you are adding a media server into the mix.

Read more about WebRTC servers:

Does WebRTC require Internet?

Yes.

Everything today requires the Internet. Even you being able to read this FAQ requires the Internet.

WebRTC can run in local networks or private networks without connecting to the public Internet. But it still needs an IP network to work.

Does WebRTC use SSL?

Yes.

Let’s start with definitions first: For me SSL and TLS are one and the same.

HTTPS and WSS (Secure HTTP and Secure WebSocket) both run on top of TLS, so they also count as SSL.

Web browsers practically force application developers to use HTTPS for the pages that host these services, which means all signaling used with WebRTC will be done via HTTPS or WSS.

The media uses SRTP, which is Secure RTP and doesn’t run over TLS (because it doesn’t run over TCP) – though the SRTP keys are negotiated using DTLS, which is essentially TLS adapted to UDP. That said, when sessions need to be relayed via TURN servers, they might end up being relayed over TURN/TLS.

Read more about WebRTC security:

Where’s the answer to my question?

Couldn’t find the answer?

I can invite you to follow and read my blog – it has a lot of resources about WebRTC

My suggestion? Start here What is WebRTC?

If you are looking to skill up with WebRTC, I also have WebRTC courses for you.

The post An FAQ for WebRTC beginners appeared first on BlogGeek.me.

My WebRTC predictions for 2024

bloggeek - Mon, 01/15/2024 - 12:30

Here are the WebRTC trends and predictions you should expect in 2024. They are a continuation of what we’ve seen in 2023 with a few variations.

Time to look at what we’ve accomplished in 2023 and think what’s ahead of us in 2024 when it comes to WebRTC.

When we look ahead, there are several notable things that glare at us immediately:

  1. WebRTC is here to stay. But in some cases and for some use cases, the focus is shifting towards WebTransport+WebCodecs+WebAssembly
  2. The recession is here and it isn’t going anywhere, so a continuation of what we’ve seen a year ago
  3. Generative AI is getting all the love and attention out there. It is also finding its way slowly into WebRTC services

Last year, I became CPO at Spearline. This year, Spearline got acquired by Cyara and I am now Senior Director of Product Management there. I am still delving into WebRTC and CPaaS. Still consulting a bit here and there on these subjects when it makes sense.

If you are interested, you can read my last year’s WebRTC predictions for 2023

Let’s get started here…

Table of contents The video version

This year, I took the liberty of also sharing my predictions in a video form. It holds the essence of my WebRTC predictions for 2024, in a short form.

Read on below to get into the details.

The era of differentiation in WebRTC

We are well into the era of differentiation:

I had this slide done sometime in 2020, modifying it a bit to fit the pandemic.

It is as relevant today as it was last year:

  • We started off with WebRTC in an exploratory fashion, asking ourselves should we even use this technology?
  • Then we saw a growth spurt, where it was obvious WebRTC is here to stay. The question changed to how do we use it
  • That got us right into the age of differentiation, where services from different companies look so alike, using the same WebRTC interface and capabilities, that we now ask ourselves how do we compete

The answer to how we compete varies on a yearly basis. Now, it obviously revolves around generative AI and LLMs. That’s the easy answer. The truth is a lot more complicated and nuanced. It requires understanding where investments are currently made – both at Google and in the ecosystem around WebRTC and its use.

What does WebRTC use look like?

Last year I predicted usage would be 3 times higher than pre-pandemic. That meant expecting usage to drop during 2023 from 4 times to 3 times pre-pandemic levels. The end result? We stayed at around 4 times pre-pandemic usage.

From here, it can only go up, though slowly and linearly but likely after 2024:

  • New use cases are unlikely to cause people to start doing more video calls
  • Growth ahead will come from shifting on premise solutions to cloud ones and at the same time, migrating to WebRTC use
WebRTC, open source and XaaS

I am not going to touch the topic of open source here. I’ve done that in my article two weeks ago writing about the top WebRTC open source media servers on github.

XaaS requires a few words of explanation, and I am likely to cover them in the coming months in further detail in a separate article.

For me, XaaS is IaaS, CPaaS and SaaS. In all cases, it is a matter of looking at them through the prism of WebRTC APIs and CPaaS.

CPaaS

The landscape is changing in the CPaaS domain. A few years back, the leading vendors for WebRTC APIs were Vonage, Twilio and Agora. Probably in this order.

Here’s what I had to say in my last year predictions article:

The perceived leaders in WebRTC CPaaS are still Twilio, Vonage and Agora. I have a feeling that by the end of 2023 this will change.

Little did I know this would be spot on…

Twilio just announced in December that it is exiting the video business altogether. They still have and use WebRTC for their voice capabilities, mainly with a focus on call centers. But other than that? They just became irrelevant to many developers.

Most vendors are now likely to want to compare themselves to Vonage and Amazon Chime SDK. Probably Agora as well.

From a perspective of innovation or specific market niches, other vendors come to mind as solid alternatives here. Companies such as Daily and Dolby for example (there are others – sorry for not mentioning everyone). Or LiveKit with its open source alternative.

Notables?

  • Twilio all but left the market a year ago, shifting focus to voice and text contact centers and CDPs. In December 2023 they announced the sunsetting of the Twilio Programmable Video service
  • Vonage has been working on integrating machine learning pipelines into their SDKs, which is great
  • Dolby doubled down on low latency streaming and high end audio requirements
  • Daily leads in lowcode efforts and has been putting a lot of attention in the past year towards AI and partnerships
  • Agora has just released a signaling SDK and introduced VP9 support

That change at Twilio places more strain on developers who need to choose who to use, with the added new risk of the level of commitment they see in the CPaaS vendor they choose. When someone like Twilio throws you under the bus, what can you expect from other vendors?

SaaS

SaaS vendors are moving towards CPaaS, assuming for some unknown reason that there’s money to be had from developers.

There are a few that are taking this route.

The problem that I see here is the fact that Twilio decided this isn’t interesting enough. While they have the APIs – they don’t invest in them any further. Meaning it isn’t a big enough market for Twilio. In such an atmosphere, how would it be big enough for SaaS vendors, and how will they see the explosion in the use of their infrastructure that they likely haven’t seen in SaaS?

Some of them may yet succeed, but the path here isn’t an obvious or a simple one.

IaaS

Amazon, Microsoft, Google… and… Cloudflare.

  • Amazon has AWS Chime SDK
  • Microsoft has Azure Communication Services
  • Google has… nothing
  • Cloudflare introduced WebRTC services throughout 2023

Let’s see where that takes us

Amazon is investing in Chime SDK. Especially when it comes to audio quality and capabilities. In many ways, Amazon is shifting the attention of developers from CPaaS to their Chime SDK as a solid alternative. This is a trend that should be watched by CPaaS vendors and developers alike.

Microsoft seems content with their current offering of Azure Communication Services. There were no new or interesting announcements around it in 2023, which begs the question – is it important enough for Microsoft and a viable solution for developers?

Google announced APIs for Google Meet. Ones that integrate with it, but not ones that use its infrastructure for me to build my own video experiences. So no luck there for a CPaaS play. Time will tell if this changes. It is unlikely to happen in 2024.

Cloudflare entered the market with much fanfare. I covered them in 2023’s predictions. Since then, there have been no material announcements. Is that good? Bad? I just don’t know.

How did I do with my 2023 WebRTC predictions?

I spent quite a lot of time on my predictions in 2023. Let’s see how well I did.

#1 – libWebRTC (and the future of WebRTC)

I made the prediction that Google’s WebRTC library would focus on house cleaning, optimizing and polishing collaboration. It did all of that this year. We see this on an ongoing basis in our WebRTC Insights service.

What was interesting to note is a slight shift towards requirements coming from outside of Google Meet. There’s work being done to include H.265 support in libWebRTC, wherever H.265 is available in hardware implementation form (i.e. someone is already paying the patent royalty bill).

Is that because Google was benevolent and nice? Is it because they wanted to show they aren’t a monopoly in Chrome? Is it because of some other deal with Intel (the ones pushing H.265 into WebRTC)? Or is it simply because they might end up using it in Google Meet in all-Apple device meetings? Time will tell.

#2 – Machine learning and media processing

I assumed that WebAssembly would continue to be used with WebRTC for media processing in things like background replacement, noise suppression and proprietary codec implementations.

It was.

Some of it was done in WebAssembly at the browser level. A lot of it was relegated to the cloud or kept in native applications. What I found interesting is that some vendors chose to announce and release such solutions across all platforms, rather than starting from native and moving towards the web later.

Most interesting (and obvious) change here? A lot of this use is now being remarketed as generative AI – doesn’t matter if it is generative or not.

#3 – Voice before video (Lyra first, AV1 later)

I thought Lyra (=new voice codec) would find its way to applications faster than AV1 (=new video codec). Or at least new voice codecs…

The results are… inconclusive.

Webex did come out with a new Webex AI audio codec, with little explanation about it.

AV1 is starting to make real noises of almost-maturity, with Apple supporting AV1 hardware acceleration (for decoding only at the moment) and Google fiddling around with AV1 in Google Meet.

We didn’t hear much this year about Google’s Lyra or Microsoft’s Satin codecs. Just the announcement of the new Webex AI codec. So I am not sure if voice happened before video or not.

#4 – Observability

Yes. There is more interest in observability. I know that by looking at our numbers in testRTC. There is no specific market or industry where it happens more. What I can say is that many contact centers are starting to take note. Probably due to their increased reliance on WebRTC and the fact that many contact center agents are working from home now.

#5 – M&As and shutdowns

We had a few interesting shutdowns and M&As. The most notable ones?

A lot of WebRTC engineers found themselves a new home. Either because their startups shut down, their company downsized or they saw no future where they were.

Good talent is there to be had if you look hard enough.

WebRTC predictions for 2024

Enough about 2023. That’s old news. Let’s see what’s going to happen with WebRTC in 2024

#1 – libWebRTC (and the future of WebRTC)

I’ll start with the most important piece of our technology puzzle – libWebRTC, maintained by Google.

This year will be a continuation of last year. Mostly maintenance releases, with a few minor improvements. The places where we will see the most amount of focus by Google in libWebRTC:

  1. Access to media frames, raw and encoded, via Insertable Streams. This will include optimizations and a bit more flexibility. The purpose of it all is to promote and push forward AI capabilities (a minimal sketch of what this looks like today follows this list)
  2. Collaboration. A continuation of last year. Some of it via Insertable Streams. Others through polishing of media control APIs in the browser to enhance the user experience
  3. Accommodating AV1. I believe by the end of 2024, we will finally see Google Meet using AV1 – we’ve just seen a glimpse of that. In some limited scenarios, on select device types. There’s also work being done to allow for VP9 simulcast with hardware acceleration instead of using VP9 SVC
  4. Voice AI. Google will put Lyra or similar into Google Meet itself. Either as a standalone or by somehow plugging it into Opus or similar. Maybe it will do so via Insertable Streams, but I doubt this will be the route they will take here
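
For reference on item 1, this is roughly what encoded-frame access looks like in Chrome today. The createEncodedStreams() call is Chrome’s current (non-standardized) shape of the API, and the transform below is just a pass-through placeholder:

```typescript
// Encoded Insertable Streams sketch (Chrome): tap into encoded frames on a sender.
async function tapEncodedFrames(): Promise<void> {
  // Chrome-specific configuration flag, hence the 'as any'.
  const pc = new RTCPeerConnection({ encodedInsertableStreams: true } as any);
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const sender = pc.addTrack(stream.getVideoTracks()[0], stream);

  // Chrome's current API; the standardized alternative is RTCRtpScriptTransform in a worker.
  const { readable, writable } = (sender as any).createEncodedStreams();

  readable
    .pipeThrough(
      new TransformStream({
        transform(frame: any, controller) {
          // frame.data holds the encoded payload - inspect, encrypt or drop it here.
          controller.enqueue(frame); // pass-through
        },
      })
    )
    .pipeTo(writable);
}
```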

By the end of 2024, we will find ourselves similar to where we are at the beginning of it:

  • Google will be the main and virtually sole contributor to libWebRTC. The total commit numbers have been dwindling and this will continue. Will we see it stabilize in 2024?
  • Here and there, external contributions will happen. Most of them are likely to come from Philipp Hancke. But here as well, we’ve probably seen the peak of individual contributions already…
#2 – Machine learning and media processing

WebAssembly is where we see innovation and differentiation in WebRTC. 2024 will be no different.

It will be incorporated in the “same old places” of media processing.

What we will also see is a lot more machine learning on the server side, and a lot of it will be leaning towards generative AI and LLM technologies. This isn’t really a prediction, but just stating the obvious here. Coming from someone who uses Midjourney for the imagery in many of his recent articles, that shouldn’t come as a surprise to you.

#3 – The year of Lyra and AV1

Time to take a huge risk.

I mentioned this in the libWebRTC prediction, but it deserves a section of its own as well.

Each year I say AV1 is years away. I think it is still going to take time until it becomes commonplace. That said, I believe this year we will see AV1 in one or more commercial WebRTC services, including Google Meet. It will be used judiciously and in very specific use cases and scenarios – call it testing the waters.

On the audio side, we will see an AI audio codec being used in production in web browsers. Likely from Google. I believe Lyra will find its way into Google Meet. How exactly is where I am uncertain.
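
Applications don’t have to wait for Google Meet to experiment with AV1 either – where the browser already ships the codec, a session can nudge negotiation towards it via codec preferences. A minimal sketch, with the caveat that actual use still depends on what the remote side and the hardware can handle:

```typescript
// Ask the browser to prefer AV1 on a video transceiver, if it supports the codec at all.
function preferAV1(pc: RTCPeerConnection): void {
  const transceiver = pc.addTransceiver('video');
  const codecs = RTCRtpReceiver.getCapabilities('video')?.codecs ?? [];
  const av1 = codecs.filter((c) => c.mimeType === 'video/AV1');
  if (av1.length === 0) return; // no AV1 support in this browser - nothing to do

  // Put AV1 first; the rest stay as fallbacks for negotiation with the remote side.
  const rest = codecs.filter((c) => c.mimeType !== 'video/AV1');
  transceiver.setCodecPreferences([...av1, ...rest]);
}
```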

#4 – WebTransport as a real alternative

WebTransport started life somewhere in 2020. We’re now at the beginning of 2024.

It still isn’t available in all browsers – Safari is still missing support for it. It is available elsewhere, but far from being commonly used or in the mainstream’s mindset.

We’ve seen this year a few more experiments and proof of concepts with WebTransport that incorporate low latency media delivery. Mostly in the domain of streaming. There are reasons for that. I’ve written about that when discussing WHIP and WHEP.

Here’s what I think is going to happen: in 2024, we will see the first production ready low latency streaming solution that makes use of WebTransport instead of WebRTC or other technologies. This will be for one-way large scale broadcast use cases, where 1-2 seconds of latency are fine.

There will be those that will use WebTransport for bidirectional media delivery, similar to what Zoom is doing in web browsers, though that will remain the exception to the rule and more of an experiment.
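
For reference, the client side of such an experiment is not a lot of code – a datagram-based WebTransport connection looks roughly like this (the URL is a placeholder, and everything above “send and receive bytes”, such as media framing and retransmissions, is left to the application):

```typescript
// Minimal WebTransport client sketch: unreliable datagrams for low latency delivery.
async function connect(): Promise<void> {
  const transport = new WebTransport('https://media.example.com:4433/stream'); // placeholder
  await transport.ready;

  // Send: datagrams are unordered and may be dropped - like UDP, unlike WebSocket.
  const writer = transport.datagrams.writable.getWriter();
  await writer.write(new TextEncoder().encode('hello'));

  // Receive: read incoming datagrams as they arrive.
  const reader = transport.datagrams.readable.getReader();
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    console.log('received', value.byteLength, 'bytes');
  }
}
```

The framing, congestion control and resilience that WebRTC gives you for free all become your problem here – one of the reasons these remain experiments rather than the mainstream.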

#5 – M&As and shutdowns

This was easy in 2023 and will remain easy in 2024.

The recession is here. It is likely to stay throughout 2024, with no real end in sight. At least not yet.

More vendors relying on WebRTC will shut down. Small startups will run out of steam. Large vendors may decide to exit this market and focus on other avenues where they conduct business.

Shutting down may mean getting acqui-hired, or acquired for peanuts. It might also mean selling chunks of the business to another company.

Vendors who stick to this market are likely to slow down their efforts throughout the year in an attempt to survive and weather this ongoing storm.

2024, here we come

Lots to do in 2024, but with limited resources:

  • Slowdown at the same time we see technology shifts and the need to differentiate
  • Generative AI, and AI in general and trying to figure out where it fits in WebRTC use cases
  • Polishing collaboration and sharing capabilities in WebRTC and getting that implemented in apps
  • Introducing next generation audio and video codecs
  • Researching new transport technologies

All that while trying to satiate users and customers with new features and releases.

The post My WebRTC predictions for 2024 appeared first on BlogGeek.me.

Top WebRTC open source media servers on github for 2024

bloggeek - Mon, 01/01/2024 - 12:30

What are the WebRTC open source media servers in 2024, and which ones are the best, based on github stars.

This one is one of those sensitive articles which many people later complain about. So I’ll start it with a few disclaimers:

  • Different tools are suitable for different use cases. This means that a WebRTC media server here that is low on the popularity list might be the best fit for your requirements
  • It was enjoyable to look it up, so I just had to write this down
  • I love you all – I truly do. Please don’t be mad at me
  • That said, I am expecting a sarcastic enough meme by Iñaki. One that I can proudly add to this article – just below this bullet
Table of contents The WebRTC open source ecosystem

WebRTC is free. At least the part of it being an open standard with a commercial grade open source implementation that is available and embedded across all modern browsers.

This has garnered a nice developer ecosystem around it, part of which is open source in its nature. A simple search for “webrtc” on github returns over 32k results.

There are a lot of different avenues to WebRTC projects on github. The main ones that come to the top of my head include:

  • Media servers
  • Signaling servers and frameworks
  • WebRTC implementations in different languages
  • Samples and experiments
  • Applications written on top of WebRTC

For this specific article, I want to focus on media servers.

My “top 4” WebRTC open source media servers

There are quite a few WebRTC media servers, many of which are open source. That said, most aren't widely known or interesting enough for me to take notice (I usually take notice when someone tells me they are using one for something that goes into commercial use).

Throughout the years, the list of the popular WebRTC media servers hasn’t changed that much. I’ve been using this diagram for two years now, and it probably still holds true:

Due to this, my "top 4" is simply the WebRTC open source media servers above that are still relevant. And to make sure people don't bash me on minor issues, I'll be presenting these in their dictionary order: Janus, Jitsi, mediasoup and Pion.

Using github for our WebRTC popularity contest

How do you even begin deciding which WebRTC open source media server project is the most commonly used out there?

One approach is to count the stars. GitHub stars. Luckily, all the projects I was interested in have github repos. Philipp Hancke directed me to GitHub Star History, which, after a bit of fooling around, got me this nice initial chart:

Based on people who placed a star on these github projects, we can see that mediasoup is chugging along at the back of the pack. It is followed by Janus. Then there's Pion, and Jitsi Meet is ahead of the pack.
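
If you want to pull the raw numbers yourself, below is a minimal sketch using GitHub's public REST API. The repository paths are my assumption of where these projects live, so adjust as needed, and note that unauthenticated requests are rate limited.

// A minimal sketch; the repository paths are assumed, not verified here.
const repos = [
  "meetecho/janus-gateway",
  "jitsi/jitsi-meet",
  "versatica/mediasoup",
  "pion/webrtc",
];

async function printStarCounts(): Promise<void> {
  for (const repo of repos) {
    const res = await fetch(`https://api.github.com/repos/${repo}`);
    if (!res.ok) {
      console.error(`${repo}: request failed (${res.status})`);
      continue;
    }
    const data = (await res.json()) as { stargazers_count: number };
    console.log(`${repo}: ${data.stargazers_count} stars`);
  }
}

printStarCounts();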

Each of these projects started at a different point in time. Pion was last to the party, which means the other projects had a headstart on it. Aligning them all on the point in time they were added to github, produces this chart:

Initial immediate thoughts here?

  • mediasoup is the slowest growing media server
  • Janus is growing at a steady, albeit slow pace
  • Jitsi changed its trajectory during the pandemic and has been growing faster ever since
  • Pion is the fastest growing project here, keeping up with Jitsi's recent pace to stardom

Let’s do a quick deep dive into each one of these.

Janus

Janus is one of the oldest WebRTC media servers. It is written in C, which might be the reason for its limited adoption – most developers these days won’t know how to write a hello world application in C – let alone figure out its memory use concepts (where you have to explicitly free what you allocate).

What Janus has going for it is a company. Meetecho, the maintainer of Janus, offers paid support and development services around Janus. Something other open source WebRTC media servers lack.

The trajectory of Janus is unlikely to change. It is versatile, has a community around it and support services.

Jitsi Meet

Jitsi Meet is likely the oldest of WebRTC media servers. Started by Bluejimp, who were acquired by Atlassian and then 8×8.

While the Jitsi team doesn't offer direct support and development services for Jitsi itself, it does offer JaaS – a managed Jitsi service for developers.

Jitsi is written in Java and has a React UI implementation.

One reason for its meteoric rise is the pandemic. Jitsi is the only open source solution that came fully built and optimized for group calls. From the get go, their mission was to build an open source Google Hangouts (that’s Google Meet today). And they succeeded.

By narrowing their applicability to a specific use case, they opened up their viability as a solution to a larger target audience – way beyond that of developers building applications.

This unfair advantage places them here as a top dog. This doesn’t mean that they are suitable for everyone – quite the opposite. They are suitable for those building Google Meet-like experiences. For things that are beyond this use case, shop around the other media servers first. But for a Google Meet-like service? Start from Jitsi Meet.

Mediasoup

Mediasoup is an open source WebRTC media server for Node.js. It is designed for high performance, with the unique concept of having the application built right inside the same Node.js process.
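
To illustrate that "application lives in the same Node.js process" idea, here is a minimal sketch loosely based on mediasoup's v3-style API. Treat it as an illustration and check the official documentation for the exact signatures:

// A minimal sketch, loosely based on mediasoup's v3-style API.
import * as mediasoup from "mediasoup";

async function startMediaLayer() {
  // A worker is a media-handling subprocess controlled from your Node.js app.
  const worker = await mediasoup.createWorker({ logLevel: "warn" });

  // A router groups the codecs and RTP streams of a single "room".
  const router = await worker.createRouter({
    mediaCodecs: [
      { kind: "audio", mimeType: "audio/opus", clockRate: 48000, channels: 2 },
      { kind: "video", mimeType: "video/VP8", clockRate: 90000 },
    ],
  });

  // From here on, your signaling, transports, producers and consumers all
  // live in the same process, right next to the media routing logic.
  return { worker, router };
}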

The challenge with mediasoup is that it doesn't offer official support and development services. Here, the reason is simple – the main creators and contributors work as developers at Miro today.

This challenge is probably what led to the slow growth of mediasoup in the github popularity contest.

That said, if you go and look at many large scale group calling deployments, they use mediasoup…

Pion

Pion is last to the scene, but fast growing compared to the others. There are 3 reasons why:

  1. Pion is written in Go. For some reason, Go has a fandom of developers who love the language. This makes Pion their Go-to (pun intended) open source project
  2. Pion is general purpose. It is used to build both clients and servers. There are multiple media server implementations written on top of Pion, and in general, the fact that you can build more with it immediately garners more stars for the project
  3. Sean DuBois. The person who started Pion has a huge and infectious personality that helped push Pion forward. Other open source projects have their own unique personas, but whoever had the chance to speak with Sean directly will understand what I am saying here

As Pion's popularity grows, so does the number of commercial services cropping up that use Pion.

The best WebRTC open source media server

None.

All.

It depends.

For managers, my suggestion is almost always to let their developers experiment and pick the open source WebRTC media server that they see fit. There are differences across these alternatives, but at the end of the day, if anyone tries to force a developer to use something they don't think is the right solution – said developer will make sure to explain to the one forcing them why that decision is the wrong one. In other words, you don't want to go against your developers.

For developers, I find myself suggesting different media servers depending on their use case, requirements and even company DNA.

So in short, there's no best WebRTC open source media server. There are several alternatives that are great – you just need to pick the one that is best for you.

The post Top WebRTC open source media servers on github for 2024 appeared first on BlogGeek.me.

The Hidden AV1 Gift in Google Meet

webrtchacks - Tue, 12/19/2023 - 15:06

Earlier last week a friend at Google reached out to me asking Does Meet do anything weird with scalabilityMode? Apparently, I am the go-to when it comes to Google Meet behaving weirdly :). Well, I do have a decade of history observing Meet’s implementation, so this makes some sense! It turned out that this was […]

The post The Hidden AV1 Gift in Google Meet appeared first on webrtcHacks.

Twilio exits video APIs, further focusing on voice, SMS and Segment

bloggeek - Wed, 12/06/2023 - 09:35

Twilio Programmable Video is no more. What should WebRTC Video API vendors and their customers do from here on?

This week, Twilio dropped a bombshell

It decided to shut down its Programmable Video service and do a bit of downsizing and trimming around Segment and Flex.

I didn’t intend to write anything more until 2024, but this necessitated changing my plans.

The image above is an adaptation from a blog post on Twilio’s website from 2021…

Twilio Signal, and why I stopped covering it

Each year, Twilio hosts its Twilio Signal event. I’ve attended a couple of them in person and used to cover them here on a yearly basis.

That stopped with Twilio Signal 2021, which was the last time I covered that event here. The reason for that was the pivot Twilio made from CPaaS to CEP (Customer Engagement Platform).

Ever since, I’ve searched for things to talk about and share about Twilio Signal, but found nothing of real value or interest to my readers.

Remember – I cover WebRTC and CPaaS. CPaaS mainly from the point of view of WebRTC and modern communications and less from the SMS and legacy telephony sides of it.

The shift towards CEP meant a lot less investment and focus by Twilio on exactly these areas – WebRTC and CPaaS that are non-SMS/legacy telephony related.

What did Twilio have to show for its investment in video and WebRTC in 2022 and 2023? Nothing. Crickets. Oh… yes… they did integrate with Krisp for noise cancellation. Presumably only in their Video SDK and not the Voice SDK. So that’s down the drain as well.

The decision might be the right one for Twilio, if you look at where their investments and attention are going:

  • Twilio Flex, for a programmable contact center
  • Segment, as a leading CDP vendor
  • Fusing Segment with programmable communications

Video is likely 1% or less of their revenue. So why bother? Especially when it requires management attention to get it anywhere meaningful with so much else that is bigger and more important to deal with.

CPaaS vendors: Best of breed vs best of suite

I learned about the concepts of best of breed and best of suite when working at Amdocs.

  • A best of breed vendor would specialize vertically, offering its customers a solution that is great in a narrow domain. Think of it as “the leading SMS vendor”. You do SMS and only SMS and you do it really well
  • Best of suite is all about the breadth of your offering. You provide a solution that has a mixture of multiple services and features your customers will need. You might not be doing any of them the best in the market, but if someone needs multiple services and wants a single vendor to work with – you’re the best for them. Think of it as offering SMS, voice, email, video, … – Twilio

Twilio started with SMS and voice. It later decided to expand and become "best of suite" by attaching to it email, video, IOT, social messaging, chat, …

What happened though is that in parallel, it worked hard on being best of breed in voice and SMS. It did that by going upstream and introducing Flex, which reduced the effort of building contact centers on top of Twilio.

And then they pivoted. With the acquisition of Segment and the need to tightly integrate it with their CPaaS and Flex offering. Transitioning from taking care of communications to taking care of understanding the customer.

Today?

There are two types of CPaaS vendors:

  1. The best of suite ones, who offer the breadth of communication services
  2. Or the best of breed ones, who focus on a specific domain. And the domain I care about is WebRTC and video. These usually won’t have legacy telephony. At most, they will enable connecting to legacy telephony of third parties

Interestingly, both are circling like vultures around Twilio to see which customers are going to come out of there looking for alternatives. Some of these CPaaS vultures offer pure WebRTC video solutions. Others offer the whole suite. And there are those who don’t even offer video – but see this as an opportunity to poach customers from Twilio.

The cases of Twilio IOT and Twilio Live

I remember that in one of the first Twilio Signal events, Jeff Lawson stood on stage and proudly announced that they never deprecated an official API. The way this was later handled is by having beta and GA phases for products.

This cannot be said anymore… by the end of 2022, Twilio started sunsetting and shutting down services.

It started with a round of layoffs at Twilio. Jeff Lawson, Twilio’s CEO, wrote a message that got to the Twilio blog as well. Here’s what we shared about it at the time with our WebRTC Insights clients:

  • Twilio laid off 11% of their workforce
  • The decision was to take the internal email and publicly put that on their blog, instead of getting it indirectly on TechCrunch
  • A few interesting things to note in this email:
    • Twilio has 4 focus areas: reliability+trust, profitability of messaging, Segment adoption, Flex customer base
    • 3 main products in focus: messaging, Segment (Customer Data Platform), Flex (Programmable Contact Center)
    • Programmable Video isn’t prioritized at all. Programmable Voice might be said to be buried somewhere in there under Flex
    • Twilio's future success and growth lies in Segment and Flex – not in Communication APIs
  • The charts below show the number of employees and growth rate of Twilio in recent years
  • Why is Twilio doing this? A few options here
    • Growth is slowing, and all the hiring they did is just too much to maintain
    • Management has too many directions it is now looking at, so it was time to shoot down all the smaller initiatives and products since they won’t bring the necessary growth at Twilio’s size
    • Twilio might have used the current market state to clean the stables and remove all the useless fat from the company
    • All of the above, to some extent
  • How will this affect other CPaaS vendors? This is hard to say. Here are a few thoughts
    • If Twilio is in poor shape, then the rest are in a worse one
    • With Twilio management shifting focus elsewhere, the API space, especially voice and video, is left open for other vendors to build some differentiation
    • Time to use FUD in the market against using Twilio for video APIs – Jeff just said it isn't a focus area. Just make sure it doesn't backfire…
    • Maybe CPaaS isn’t as great as it was believed to be as a business…
      • From my past life I know that selling to developers is super hard
      • And the target market for it is rather limited
      • There are better opportunities out there, which is why many CPaaS vendors are following in Twilio’s steps when it comes to Flex
  • Also, if you are looking for developers, it might be worthwhile to try and poach a few of those who still work at Twilio, or more easily those who are looking for a new job

After the reduction in workforce, came the reduction in product offerings. The first two to go through the chopping block were Twilio IOT and Twilio Live.

Twilio Live was announced dead in November 2022. Low traction of the service and little fit with the direction of Twilio meant this had to die. The way this was done? Let customers know. Officially suggest they go use Mux instead. Somehow, the fact that Mux at the time had a service competing directly with Twilio Programmable Video wasn't something that worried Twilio.

Twilio IOT was simply sold off to KORE Wireless in March 2023.

Remember that suggestion we gave about FUD in the market against using Twilio for video APIs? (I marked it in yellow above so you won’t miss it)

The demise of Twilio Programmable Video

Here’s what the Twilio product menu looks like on their homepage:

This is likely going to change soon or by the time this gets published.

  • Customer Data = Segment offering
  • Communications = CPaaS
  • Applications = Enterprise stuff

Each and every piece in the Communications part can be snugly fit into the products on the left and on the right (Customer Data and Applications).

Video is a bit of a stretch. At least if you look closely at traffic sizes and revenue numbers.

The two other oddballs – IOT and video streaming – were thrown out without too many objections and without hurting Twilio's bottom line.

What was left was to get rid of the video piece. It likely took too many resources but made no real dent in Twilio’s numbers.

To be frank – the problems likely started with the acquisition of Kurento. Kurento wasn’t fit for what they had in mind for it, and it was riddled with architectural and technical issues. This wasn’t a good starting point for multiparty calling in Twilio Programmable Video.

If I had to guess, a lot of technical debt went into the product to improve and repurpose the media server pieces of Kurento.

Twilio was slow to innovate on video, leaving room for other vendors – big and small. It missed the lowcode and embeddable experiences that are now common in video APIs. It didn't invest much in AI integrations. It didn't optimize media quality enough to work well for its customers.

And then it left the door open for Amazon with their Chime SDK to threaten them in this domain.

I am guessing growth and revenue from Twilio Programmable Video wasn't in line with expectations (unsurprisingly). The current market climate, the end of the pandemic, the headaches in Segment and Flex. All of it got them to the conclusion that it would be simpler to just sunset Twilio Programmable Video and move on.

A brave decision. Twilio Programmable Video couldn't have been sunset at a worse time (unless you consider a few months prior to the pandemic and the quarantines).

A week before this announcement from Twilio, Amazon announced support for video calling in Amazon Connect.

Amazon is investing in adding video to its contact center solution, and Twilio, who has Twilio Flex competing against Amazon Connect, is sunsetting video support for its video API.

  • What does it mean for video calling support in Twilio Flex?
  • Would Twilio still support or add video calling to Twilio Flex without offering Programmable Video APIs?
  • How should contact center customers view this? If they have video requirements in their roadmap, would they use Amazon Connect or Twilio Flex?
Innovations in Video APIs and WebRTC managed services

Why was Twilio Programmable Video appealing to potential customers? I can think of two main reasons:

  1. Single throat to choke. Sourcing your voice, SMS and video from the same vendor, on a single bill is an advantage
  2. A reputable vendor. It is Twilio. They are big. What can ever go wrong? …

The reasons why not to? Quite a few:

  1. Quality wasn’t on par with what can be achieved elsewhere with CPaaS vendors
  2. No lowcode/embeddable offering for its video API
  3. Support… could be better
  4. No innovation

All that Twilio had for itself is its brand name. And that in a market that was moving on.

Things other vendors have been doing in that period of time?

  • Doubling down on large scale sessions, with 10,000 or more users
  • Live streaming solutions (the one Twilio sunset in 2022 – Twilio Live)
  • Investing in AI integrations and pipelines, both on client side and on server side
  • 3D audio, VP9 video codec support
  • Nocode/lowcode solutions

Twilio wasn’t able to keep up. Or even pick a direction it wanted to invest in.

The rise of the Zoom Video SDK

Twilio issued an email to its customers on December 5, stating the sunset will take a full year. From this email:

[…] we have decided to End of Life (EOL) our Programmable Video product on December 5, 2024, and we are recommending our customers migrate to the Zoom Video SDK for your video needs. 

The official recommendation from Twilio is for their customers to migrate to the Zoom Video SDK.

The announcement can’t be found (yet) on any marketing material from Twilio. It can be found on social media accounts from Zoom.

Why Zoom?

  1. Zoom isn't a competitor of Twilio in anything, and is unlikely to be any time soon
  2. It is a large and respectable vendor with a brand name

They couldn’t suggest vendors that have SMS or voice services.

The rest are mostly smaller vendors – not something Twilio wanted to be identified with is my guess.

There's only one problem with picking the Zoom Video SDK here. Their web experience isn't on par with the rest of the pack. They rely on WebTransport+WebCodecs+WebAssembly, which isn't as stable or performant as just using WebRTC. For native, their SDKs should be fine, but for web browsers, I'd be reluctant to use them yet. Add to that the fact that this is a technology shift, requiring some relearning of terms and a reliance on proprietary technology, and you get increased risk for the vendors making the switch.
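
A simple way to reason about that risk is to check whether a given browser even has the building blocks a Zoom-style web pipeline needs, and fall back to plain WebRTC when it doesn't. A minimal sketch, with the fallback policy itself left to the application:

// A minimal sketch; how you act on the result is up to your application.
function canRunCustomPipeline(): boolean {
  const hasWebTransport = typeof WebTransport !== "undefined";
  const hasWebCodecs =
    typeof VideoEncoder !== "undefined" && typeof VideoDecoder !== "undefined";
  const hasWasm = typeof WebAssembly === "object";
  return hasWebTransport && hasWebCodecs && hasWasm;
}

console.log(canRunCustomPipeline() ? "custom pipeline possible" : "stick to WebRTC");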

I wonder if Twilio and Zoom came to an agreement here (with Zoom maybe even paying for this suggestion to go out) or if Twilio simply decided to offer some kind of a recommendation and be done with it. Philipp’s bet: Eric had dinner with Jeff and paid for it.

Anyhow, customers have a full year to figure out a solution. Or less – depending on how much browsers' WebRTC implementations drift away from the current implementation of Twilio. What doesn't get maintained in WebRTC rots rather quickly.

The future of managed Video APIs (without Twilio)

I am not sure how much Twilio Programmable Video would be missed.

Developers certainly used it. Big and small. Its revenue was probably higher than that of some of the smaller video API vendors out there. These developers will figure out a way to migrate to other vendors. It won't be the first time a CPaaS vendor has exited the video API market (we had AddLive, vLine, ooVoo, SightCall, Respoke, Tropo, Forge, CafeX, Circuit, Bit6 all exit this market in the past).

3-4 years ago, we had 3 top dogs in this market: Vonage, Twilio, Agora

A year ago, I’d say I heard a lot more about Vonage, Amazon Chime SDK and Twilio. Less so Agora

Now, we have Vonage and Amazon Chime SDK

Who will take the 3rd spot among the top 3 runners when it comes to developers' mindshare in this industry?

We have Agora, Daily, Dolby, LiveKit and others who are all vying for that spot. Each has its own angle and differentiation.

Would Vonage keep its spot there?

Will Amazon continue investing in its Chime SDK enough?

I don’t have the answers to these questions, but I do have my own opinions.

Where should Twilio Video customers go from here?

That is the big question.

If you are using Twilio Programmable Video – who should you go to instead?

And if you are on the lookout for a CPaaS vendor now – who should you pick?

My WebRTC Developer Landscape infographic was last updated in 2022, but can still offer some guidance as to the alternatives available. Some of them I’ve listed throughout this article. Others are just as valid.

Here are a few questions you need to answer for yourself:

  • What are your requirements and focus? Different CPaaS vendors offer a different type of a solution, so pick one that offers what it is you’re after
  • Make sure you ask around. Check references. Talk with other developers who use that CPaaS vendor
  • Try them out in a small POC before fully committing yourself
  • Check their commitment and level of investment in what it is you focus on as your requirements and roadmap. Don’t only listen to what they say – also check out what features they introduced to the market in the last 12-24 months. See if they had layoffs in that same period of time as well
  • Don't invest in abstraction layers to be able to replace CPaaS vendors. It may sound like a great initiative and project – just don't do it. Unless you want to use more than a single vendor at a time (unlikely for most of us)
  • While you shouldn't invest in an abstraction layer, you should definitely try to limit calls to the CPaaS vendor's APIs to specific modules in your code. If you can limit it to a single source file or class – even better (see the sketch below)
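
Here is a minimal sketch of that last point. "AcmeVideoClient" and its methods are hypothetical placeholders for whatever vendor SDK you actually use; the point is that this is the only module in your codebase that imports it:

// A minimal sketch; "acme-video-sdk" and its API are hypothetical placeholders.
import { AcmeVideoClient } from "acme-video-sdk"; // the only vendor import in the codebase

// The rest of the application codes against this interface, not the vendor SDK.
export interface VideoRoom {
  join(roomId: string, token: string): Promise<void>;
  leave(): Promise<void>;
  onRemoteTrack(handler: (track: MediaStreamTrack) => void): void;
}

export function createVideoRoom(): VideoRoom {
  const client = new AcmeVideoClient();
  return {
    join: (roomId, token) => client.connect(roomId, { token }),
    leave: () => client.disconnect(),
    onRemoteTrack: (handler) => client.on("track", handler),
  };
}

Swapping vendors then means rewriting this one file (and retesting), not hunting API calls across the whole application.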

The post Twilio exits video APIs, further focusing on voice, SMS and Segment appeared first on BlogGeek.me.

Third time’s a charm: WebRTC Insights, 3 years in

bloggeek - Mon, 11/20/2023 - 12:30

Let’s look at what we’ve achieved with WebRTC Insights in the past three years and where we are headed with it.

Along with Philipp Hancke, I’ve been running multiple projects. WebRTC Insights is one of the main ones.

Three years ago, we decided to start a service – WebRTC Insights – where we send out an email every two weeks about everything and anything that WebRTC developers need to be aware of. This includes bug reports, upcoming features, Chrome experiments, security issues and market trends.

All of this with the intent of empowering you and letting you focus on what is really important – your application. We take care of giving you the information you need quicker and in a form that is already processed.

Three years into this initiative, this is still going strong. We’ve onboarded a new client recently, and this is what he had to share with us on the first week already:

“[The Insights] Newsletter has been great and very helpful. Wish we had subscribed 2 years ago.”

Sean MacIsaac, Founder and EVP, Engineering @ Roam

Why is the WebRTC Insights so useful for our clients?

It boils down to two main things:

  1. Time
  2. Focus

We reduce the time it takes for engineers and product people to figure out issues they face and trends on the market. Instead of them searching the internet to sift through hints or trying to catch threads of information on things they care about, we give it straight to them – usually a few days before their clients (or management) complain about it.

On top of it, we increase their focus on what’s important to them. Going back to past issues to find problems, search issues, look at security problems, know of experiments Google is doing or just be aware of the areas where Google is investing their efforts – all of these become really simple to do.

In the past few weeks we've been getting complaints from clients about audio issues on Mac (usually acoustic echo problems in Chrome). These were already hinted at in one of our previous issues and the full details appeared in the more recent issues. In parallel, we've been able to sniff around for root causes for them almost in real-time – enabling them to zero in on the problem and find a suitable workaround.

If I weren’t so modest, I would say that for those who are serious about WebRTC, we are a force multiplier in their WebRTC expertise.

WebRTC Insights by the numbers

Since this is the third year, you can also check out our past “year in review” posts:

This is what we’ve done in these 3 years:

26 Insights issued this year with 329 issues & bugs, 136 PSAs, 15 security vulnerabilities, 230 market insights all totaling 231 pages. That’s quite a few useful insights to digest and act upon.

We have covered over a thousand issues and written more than 650 pages.

WebRTC is still ever changing – both in the codebase and how it gets used by the market.

Activity on libWebRTC has cooled down yet again in the last year, dropping below 200 commits a month consistently:

This is more visible by looking at the last four years:

On one hand WebRTC is very mature now, on the other hand it seems to us that there is still a lot of work to be done and bugs to be fixed. External contributions were up. What is concerning is that the “big drop” in May happened three months after Google announced a round of layoffs but we have not seen many departures of long-time contributors.

Let’s dive into the categories, along with a few new initiatives we’ve taken this year as part of our WebRTC Insights service.

Bugs

The number of reported external bugs has dropped considerably as did the number of issues tracking new work and initiatives. This correlates with the decreased commit activity.

The areas for bugs also shifted; we have seen a lot more issues related to hardware acceleration (since Google is eying that now to further reduce the CPU usage in Google Meet). Operating systems are starting to become a bigger issue; for example, macOS Sonoma caused quite a few audio issues and enabled overlaid emoji reactions (a bad choice with consequences described here) by default as part of a bigger push to move features like background blur to the OS layer. And of course, every autumn brings a new Safari on iOS release which means a ton of regressions…

A good example of how Philipp himself uses Insights to identify what change caused a regression was the lack of H.264 fallback on Android, which rolled out in Chrome 115 in August. We had been commenting on the original change at the end of May:

That said, we did not think of Android which remains complicated when it comes to H.264 support. Thankfully this rollout was guarded by a feature flag so the regression could be mitigated by the WebRTC team in less than two days.

PSAs & resources worth reading

In addition to the public service announcements done by Googlers (and Philipp) as part of making changes to the C++ API or network behavior, we continue to track Chromium-related "Intents" (which are a useful indicator for what is going to ship) and relevant W3C/IETF discussions in this section. We also moved more in-depth technical comments on relevant blog posts from the "Market" section, which made the overall decline in activity less visible here.

Experiments in WebRTC

Chrome’s field trials for WebRTC are a good indicator of what large changes are rolling out which either carry some risk of subtle breaks or need A/B experimentation. Sometimes, those trials may explain behavior that only reproduces on some machines but not on others. We track the information from the chrome://version page over time which gives us a pretty good picture on what is going on:

We have gotten a bit better and now track rollout percentages. We have not seen regressions from these rollouts in the last year which is good news.

WebRTC security alerts

This year we continued keeping track of WebRTC related CVEs in Chrome (15 new ones in the past year). For each one, we determine whether they only affect Chromium or whether they affect native WebRTC and need to be cherry-picked to your own fork of libwebrtc when you use it that way.

In recent months we’ve seen a trend of looking more closely at the codec implementations to find security threats there. Our expectation is that this will continue in the coming year as well – expect more CVEs around this area.

A personal highlight was Google's Natalie Silvanovich following up on a silly SDP munging thing Philipp did with CVE-2023-4076, which affected WebRTC munging in Chrome (but not native applications):

If only anyone had told us that using SDP in the API, let alone having Javascript manipulate it in the input, is a bad idea…
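
For readers who haven't run into the term: SDP munging means editing the SDP string between createOffer() and setLocalDescription(). A minimal illustration follows; the specific edit (capping video bandwidth with a b=AS line) is just an example, and it assumes a browser-style offer where the c= line directly follows m=video:

// A minimal illustration of SDP munging, which is generally discouraged, as noted above.
async function applyMungedOffer(pc: RTCPeerConnection): Promise<void> {
  const offer = await pc.createOffer();
  // Insert a b=AS:500 line (roughly a 500 kbps cap) after the video section's c= line.
  const munged = offer.sdp!.replace(
    /(m=video[^\r\n]*\r\nc=IN[^\r\n]*\r\n)/,
    "$1b=AS:500\r\n"
  );
  await pc.setLocalDescription({ type: "offer", sdp: munged });
}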

WebRTC market guidance

What are the leaders in video conferencing doing? What is Google doing with Meet, which directly affects WebRTC’s implementation? Are they all headed in the same direction? Do they invest in different technologies and domains?

How about CPaaS vendors? How are they trying to differentiate from each other?

Other vendors who use WebRTC or delve into the communication space – where do they innovate?

Here’s a quick example we’ve noticed when Twilio worked on migrating their media servers to different IP and ports:

This ability to look at best practices of vendors, how they handled such challenges, or introduced new features is an eye opener. These are the things we cover in our market guidance. The intent here is to get you out of the echo chamber that is your own company, and see the bigger world. We do that in small doses, so that it won't defocus you. But we do it so you can take into account these trends and changes that are shaping our industry.

The interesting thing is that as WebRTC goes more and more into a kind of a “maintenance mode” with its browser releases, the variance and interesting newsworthy items we see on the market as a whole is growing. This is likely why our market insights section has seen rapid growth this year.

Insights automation

We’ve grown nicely in our client base, and up until recently, we sent the emails… manually.

It became a time consuming activity to say the least, and one that was also prone to errors. So we finally automated it.

The WebRTC Issight emails are now automated. They include the specific issue along with the latest collection of security issues. It has made life considerably simpler on our end.

Join the WebRTC experts

We are now headed into our fourth year of WebRTC Insights.

Our number of subscribers is growing. If you’ve got to this point, then the only question to ask is why aren’t you already subscribed to the WebRTC Insights if WebRTC interests you so much?

You can read more about the available plans for WebRTC Insights and if you have any questions – just contact Tsahi.

Oh – and you shouldn't take only our word for how great WebRTC Insights is – just see what Google's own Serge Lachapelle has to say about it:

Still not sure? Want to sample an issue? Just reach out to me.

The post Third time’s a charm: WebRTC Insights, 3 years in appeared first on BlogGeek.me.

Qotom Q20321G9 fanless PC

TXLAB - Tue, 11/07/2023 - 00:04

As PCengines announced the end of sales of their famous APU platform, it’s time to look for alternative devices that can be utilized as firewalls or network probes or VPN appliances.

I recently bought a Qotom Q20321G9 mini-PC from AliExpress. The model is similar to their Q20331G9 model described on the Qotom website. The difference is a slower CPU and fewer SFP+ interfaces:

  • Model: Q20321G9 vs. Q20331G9
  • CPU: Intel Atom C3558R vs. Intel Atom C3758R
  • TDP: 17W vs. 26W
  • NICs: 2x SFP+, 2x SFP, 5x 2.5Gbit LAN vs. 4x SFP+, 5x 2.5Gbit LAN

Compared to the APU platform, this Qotom box is huge: 62mm high (versus the APU enclosure's 30mm), 217mm across, and much heavier because of the massive heatsink. But it has much more to offer.

Two M.2 NVME sockets allow a redundant storage setup out of the box. It also supports ECC RAM (although the model I received had a non-ECC DIMM), so it can serve as a reliable hardware platform if you need long-term service. In addition, it has an M.2 socket for an LTE modem, two antenna mounting holes, and a nano-SIM card slot.

A minor downside is that even when idling, with all CPU cores running at 800MHz, the device gets quite warm. The onboard sensors show the CPU core temperatures at around +42C to +44C, and the enclosure is rather hot to the touch.

I also ran a CPU stress test for about half an hour with the enclosure covered by a towel; the CPU temperature exceeded 60C and the device kept functioning well.

A minor inconvenience is that the power button is too easy to press if you're moving things around it while testing. But the button is easy to remove, so that the power switch can be pressed with a pen when needed.

The SFP and SFP+ interfaces were recognized by Debian 12 out of the box.

The device arrived with a preinstalled Windows 10. The BIOS allows redirecting the console to the COM port, which is provided as an RJ-45 socket, with the same pinout as Cisco routers.

The NIC numbering is a bit non-intuitive, and the marking on the enclosure does not help much. Here are the interfaces as they’re seen by Debian, if you look at the device’s interface panel:

Top row: eno1 (SFP+) | eno3 (SFP) | enp7s0 (LAN) | enp6s0 (LAN) | enp8s0 (LAN)
Bottom row: eno2 (SFP+) | eno4 (SFP) | enp5s0 (LAN) | enp4s0 (LAN)

Some diagnostics output below:

root@qotom01:~# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel(R) Corporation
Model name: Intel(R) Atom(TM) CPU C3558R @ 2.40GHz
BIOS Model name: Intel(R) Atom(TM) CPU C3558R @ 2.40GHz CPU @ 2.4GHz
BIOS CPU family: 178
CPU family: 6
Model: 95
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Stepping: 1
CPU(s) scaling MHz: 52%
CPU max MHz: 2400.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand lahf_lm 3dnowprefetch cpuid_fault epb cat_l2 ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust smep erms mpx rdt_a rdseed smap clflushopt intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves dtherm arat pln pts md_clear arch_capabilities
Virtualization features:
Virtualization: VT-x
Caches (sum of all):
L1d: 96 KiB (4 instances)
L1i: 128 KiB (4 instances)
L2: 8 MiB (4 instances)
NUMA:
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerabilities:
Gather data sampling: Not affected
Itlb multihit: Not affected
L1tf: Not affected
Mds: Not affected
Meltdown: Not affected
Mmio stale data: Not affected
Retbleed: Not affected
Spec rstack overflow: Not affected
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Srbds: Not affected
Tsx async abort: Not affected

root@qotom01:~# lsusb
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 003: ID 05e3:0608 Genesys Logic, Inc. Hub
Bus 001 Device 002: ID 046d:c31c Logitech, Inc. Keyboard K120
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

root@qotom01:~# lspci
00:00.0 Host bridge: Intel Corporation Atom Processor C3000 Series System Agent (rev 11)
00:04.0 Host bridge: Intel Corporation Atom Processor C3000 Series Error Registers (rev 11)
00:05.0 Generic system peripheral [0807]: Intel Corporation Atom Processor C3000 Series Root Complex Event Collector (rev 11)
00:06.0 PCI bridge: Intel Corporation Atom Processor C3000 Series Integrated QAT Root Port (rev 11)
00:09.0 PCI bridge: Intel Corporation Atom Processor C3000 Series PCI Express Root Port #0 (rev 11)
00:0a.0 PCI bridge: Intel Corporation Atom Processor C3000 Series PCI Express Root Port #1 (rev 11)
00:0b.0 PCI bridge: Intel Corporation Atom Processor C3000 Series PCI Express Root Port #2 (rev 11)
00:0c.0 PCI bridge: Intel Corporation Atom Processor C3000 Series PCI Express Root Port #3 (rev 11)
00:0e.0 PCI bridge: Intel Corporation Atom Processor C3000 Series PCI Express Root Port #4 (rev 11)
00:0f.0 PCI bridge: Intel Corporation Atom Processor C3000 Series PCI Express Root Port #5 (rev 11)
00:10.0 PCI bridge: Intel Corporation Atom Processor C3000 Series PCI Express Root Port #6 (rev 11)
00:11.0 PCI bridge: Intel Corporation Atom Processor C3000 Series PCI Express Root Port #7 (rev 11)
00:12.0 System peripheral: Intel Corporation Atom Processor C3000 Series SMBus Contoller - Host (rev 11)
00:13.0 SATA controller: Intel Corporation Atom Processor C3000 Series SATA Controller 0 (rev 11)
00:14.0 SATA controller: Intel Corporation Atom Processor C3000 Series SATA Controller 1 (rev 11)
00:15.0 USB controller: Intel Corporation Atom Processor C3000 Series USB 3.0 xHCI Controller (rev 11)
00:16.0 PCI bridge: Intel Corporation Atom Processor C3000 Series Integrated LAN Root Port #0 (rev 11)
00:17.0 PCI bridge: Intel Corporation Atom Processor C3000 Series Integrated LAN Root Port #1 (rev 11)
00:18.0 Communication controller: Intel Corporation Atom Processor C3000 Series ME HECI 1 (rev 11)
00:1a.0 Serial controller: Intel Corporation Atom Processor C3000 Series HSUART Controller (rev 11)
00:1f.0 ISA bridge: Intel Corporation Atom Processor C3000 Series LPC or eSPI (rev 11)
00:1f.2 Memory controller: Intel Corporation Atom Processor C3000 Series Power Management Controller (rev 11)
00:1f.4 SMBus: Intel Corporation Atom Processor C3000 Series SMBus controller (rev 11)
00:1f.5 Serial bus controller: Intel Corporation Atom Processor C3000 Series SPI Controller (rev 11)
01:00.0 Co-processor: Intel Corporation Atom Processor C3000 Series QuickAssist Technology (rev 11)
02:00.0 Non-Volatile memory controller: Phison Electronics Corporation PS5013 E13 NVMe Controller (rev 01)
04:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V (rev 03)
05:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V (rev 03)
06:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V (rev 03)
07:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V (rev 03)
08:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V (rev 03)
09:00.0 PCI bridge: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge (rev 03)
0a:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 30)
0b:00.0 Ethernet controller: Intel Corporation Ethernet Connection X553 10 GbE SFP+ (rev 11)
0b:00.1 Ethernet controller: Intel Corporation Ethernet Connection X553 10 GbE SFP+ (rev 11)
0c:00.0 Ethernet controller: Intel Corporation Ethernet Connection X553 Backplane (rev 11)
0c:00.1 Ethernet controller: Intel Corporation Ethernet Connection X553 Backplane (rev 11)

root@qotom01:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp4s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 20:7c:14:f2:9c:76 brd ff:ff:ff:ff:ff:ff
3: enp5s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 20:7c:14:f2:9c:77 brd ff:ff:ff:ff:ff:ff
4: enp6s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 20:7c:14:f2:9c:78 brd ff:ff:ff:ff:ff:ff
5: enp7s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 20:7c:14:f2:9c:79 brd ff:ff:ff:ff:ff:ff
6: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 20:7c:14:f2:9c:7a brd ff:ff:ff:ff:ff:ff
7: eno1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
    link/ether 20:7c:14:f2:9c:7b brd ff:ff:ff:ff:ff:ff
    altname enp11s0f0
8: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 20:7c:14:f2:9c:7c brd ff:ff:ff:ff:ff:ff
    altname enp11s0f1
9: eno3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 20:7c:14:f2:9c:7d brd ff:ff:ff:ff:ff:ff
    altname enp12s0f0
10: eno4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
    link/ether 20:7c:14:f2:9c:7e brd ff:ff:ff:ff:ff:ff
    altname enp12s0f1
