Unlock the potential of WebRTC stats with getStats to boost your application’s performance and reliability.
WebRTC is great. When it works.
When it doesn’t? A bit less so. Which is why there are tools at your disposal to debug and troubleshoot issues with your WebRTC application – be it connectivity failures, poor quality, bad use of the APIs or just a buggy implementation.
This article, as well as the other articles in this series, was written with the assistance of Philipp Hancke.
Interested in webrtc-internals and getStats? Then this series of articles is just for you:
This time? We’re taking a closer look at what’s inside getStats values – what the metrics that you’ll find there really mean (at least the more important ones)
We’re going to use these two terms interchangeably from now on, so please bear with us.
For me?
If you’ve read the previous article, then you should know by now how to obtain a webrtc-internals dump file and also how to call getStats periodically to get the statistics you need.
So time to understand what’s in there…
Structure of a getStats returned value

There are many metrics that can be used in WebRTC to monitor various aspects of the peer connection. To put some sense and order into the process, the W3C decided to design the getStats() API in a manner that would “flatten” the information out for easy search access, and also include identifiers to be able to think of it all as structured tree data.
Here’s a “short” video explainer for WebRTC getStats() result structure:
https://youtu.be/B1MgeVkRQ-M

A map of stats objects

WebRTC has been broken down in the specification into various objects for the purpose of statistics reporting. These objects are sometimes singletons (such as the “peer-connection”) and sometimes may have multiple instances (think incoming media streams).
To get away from the need of maintaining multiple arrays, a single map of statistics is used, storing a set of RTCStats objects.
Each RTCStats object always has in it an id (object identifier), a timestamp and a type. The rest of the fields (and values) stored in the object depend on the type.
Multiple objects of the same type, such as “inbound-rtp”, will each have a different id.
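In code, every entry shares those three base fields, and the rest varies by type. A quick way to get an overview of a stats result is to group it by type – sketched here with a hand-built Map standing in for a real getStats() result (ids and values are made up):

```javascript
// Hand-built stand-in for a getStats() result: a Maplike of RTCStats objects.
// Every entry carries id, timestamp and type; the remaining fields vary by type.
const stats = new Map([
  ['T01', { id: 'T01', timestamp: 1700000000000, type: 'transport' }],
  ['IT11', { id: 'IT11', timestamp: 1700000000000, type: 'inbound-rtp', kind: 'audio' }],
  ['IT12', { id: 'IT12', timestamp: 1700000000000, type: 'inbound-rtp', kind: 'video' }],
]);

// Group report ids by type - a quick way to see the structure of the result.
const byType = {};
stats.forEach(report => {
  (byType[report.type] = byType[report.type] || []).push(report.id);
});

console.log(byType); // e.g. { transport: ['T01'], 'inbound-rtp': ['IT11', 'IT12'] }
```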
Here’s what it looks like if you inspect the response object in the JS console on Chrome:
Partial getStats

Before we dive into the hierarchy and the metrics, it is important to note what happens with getStats() when you call it with a specific selector. The selector is a specific MediaStreamTrack, so that the results returned are going to be limited to that track only.
getStats()
getStats(selector)
RTCRtpSender.getStats()
RTCRtpReceiver.getStats()

Great – right?
Not really…
Not only is this not going to help you – in many ways, it is a hindrance.
When calling getStats(), with or without a selector, libWebRTC goes about its business collecting the statistics across ALL of the WebRTC objects. It sweats and uses resources to collect everything, and then filter down the results for you. There’s no optimization in the collection process that is taking place here.
Since you’re usually going to need to check statistics across your tracks, calling this separately for each track is wasteful.
Our suggestion? Always call getStats() with no selector at all. Do the filtering yourself if needed.
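If you follow that suggestion, the filtering is a one-liner over the Maplike result. A minimal sketch, with a hand-built Map in place of a real getStats() result (ids and values are made up):

```javascript
// Fetch everything once, then filter locally instead of calling
// getStats(selector) per track. In production: const stats = await pc.getStats();
function reportsOf(stats, type, kind) {
  return [...stats.values()].filter(report =>
    report.type === type && (kind === undefined || report.kind === kind));
}

// Stand-in stats result with made-up ids.
const stats = new Map([
  ['OT01', { id: 'OT01', type: 'outbound-rtp', kind: 'audio' }],
  ['OT02', { id: 'OT02', type: 'outbound-rtp', kind: 'video' }],
  ['IT01', { id: 'IT01', type: 'inbound-rtp', kind: 'video' }],
]);

const outboundVideo = reportsOf(stats, 'outbound-rtp', 'video'); // one report: OT02
```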
Hierarchy of objects

Most objects in getStats (but not all of them) end up connecting in one way or another to the “transport” object.
This “hidden” tree structure can be reconstructed by way of the various id fields found inside WebRTC’s stats objects (from WebRTC stats spec):
Some important notes about this table:
Let’s see what the main stats objects and fields are there.
The specification of these can be found in the W3C spec for WebRTC itself.
A deep dive into getStats values

Time to look at getStats objects and fields and understand what values we may get for certain WebRTC metrics.
Fields and value types

For me, all of these fields are just field:value (or key:value) pairs.
If I had to group the fields to the types of values they store, it would be something like this:
Why did I want to mention all this? When you see a field, be sure to think about its type – it will help you determine how to read it and what you should do with it.
“transport” type

Link to spec (RTCTransportStats)
The “transport” type denotes the DTLS and ICE transport objects used to send and receive media. You can think about it as a single RTP/RTCP “connection”.
Things you’ll find on the “transport” type?
Typically you will have a single transport object per connection (unless you are not using BUNDLE).
“candidate-pair”, “local-candidate” and “remote-candidate” types

These objects deal with ICE negotiation candidates.
During this process, WebRTC collects all local candidates (IP addresses it can use to send and receive media) and the remote candidates (IP addresses the remote peer tells us it can be reached at). WebRTC then conducts ICE connectivity checks by pairing different local candidates with remote candidates.
To that end, getStats stores and returns all the “local-candidate” and “remote-candidate” objects, along with “candidate-pair” objects for the pairs it tried out.
“local-candidate” and “remote-candidate”?
Link to spec (RTCIceCandidateStats)
The ICE candidate statistics object stores static information in general. It doesn’t have anything that changes dynamically, as that happens on the pair. The main fields here relate to the IP, port and protocol (address, port, protocol, candidateType and relayProtocol) used by the candidate.
Our “candidate-pair”?
Link to spec (RTCIceCandidatePairStats)
The candidate pair is the actual connection (or attempted connection). Here things start to become interesting (at last).
On one hand, the pair contains quite a few identifiers, connecting it to the transport object (transportId) and to the local and remote candidates (localCandidateId and remoteCandidateId). The state field indicates whether ICE is still checking this pair, or whether the check failed or succeeded (not too useful).
There are quite a few interesting fields here:
For the most part? This section still deals with connectivity related metrics. A lot less about quality itself.
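Those identifier fields are what let you walk from the transport to the pair ICE actually selected, and from there to the addresses in use. A hedged sketch over a hand-built stats Map (field names follow the spec; ids and values are made up):

```javascript
// Walk transport -> selectedCandidatePairId -> local/remote candidates.
function selectedPairInfo(stats) {
  const transport = [...stats.values()].find(r => r.type === 'transport');
  if (!transport || !transport.selectedCandidatePairId) return null;
  const pair = stats.get(transport.selectedCandidatePairId);
  const local = stats.get(pair.localCandidateId);
  const remote = stats.get(pair.remoteCandidateId);
  return {
    localType: local.candidateType,
    remoteType: remote.candidateType,
    rttMs: pair.currentRoundTripTime * 1000, // the spec reports seconds
  };
}

// Hand-built stats Map with made-up ids and values.
const stats = new Map([
  ['T01', { id: 'T01', type: 'transport', selectedCandidatePairId: 'CP01' }],
  ['CP01', { id: 'CP01', type: 'candidate-pair', localCandidateId: 'LC01',
             remoteCandidateId: 'RC01', currentRoundTripTime: 0.045 }],
  ['LC01', { id: 'LC01', type: 'local-candidate', candidateType: 'srflx' }],
  ['RC01', { id: 'RC01', type: 'remote-candidate', candidateType: 'relay' }],
]);

const info = selectedPairInfo(stats); // { localType: 'srflx', remoteType: 'relay', ... }
```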
RTCRtpStreamStats

We’re getting to fragmented stats structures – think classes and inheritance in object oriented programming languages. The RTCRtpStreamStats is part of all the RTP reports – “outbound-rtp”, “inbound-rtp”, “remote-inbound-rtp” and “remote-outbound-rtp”. What does it hold?
Link to spec (RTCRtpStreamStats)
ssrc is the static field connecting us to the SSRC value of the RTP stream itself. These reports also aggregate data from SSRCs related to this SSRC such as the RTX and FEC SSRCs.
kind just indicates if this is an “audio” or a “video” stream. That’s going to affect other metrics down the line, and is also a way to filter and find what we’re looking for.
Then we’ve got the pointer identifiers transportId and codecId.
Nothing much to write home about here, but important to know and understand nonetheless.
RTCSentRtpStreamStats and RTCReceivedRtpStreamStats

Each “*-rtp” type object also holds in it either an RTCSentRtpStreamStats or an RTCReceivedRtpStreamStats set of fields.
RTCSentRtpStreamStats
Link to spec (RTCSentRtpStreamStats)
The Sent one is rather simple. It holds two accumulators that we’ve seen already: packetsSent and bytesSent.
There are slightly more (and different) fields on the receive side of things:
Link to spec (RTCReceivedRtpStreamStats)
On the receiving end, we’re focused on two accumulators and a variable metric. The accumulators are packetsReceived and packetsLost (rather important ones that also help us in calculating packet loss percentage).
And then there’s the jitter metric, which is the reported jitter of the incoming stream’s packets.
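From those two accumulators, the packet loss percentage is straightforward – the denominator is what should have arrived, i.e. received plus lost. A minimal sketch (numbers are made up):

```javascript
// Packet loss percentage from the two accumulators on a received stream.
// packetsLost counts packets that never arrived, so the denominator is
// everything that *should* have arrived: received + lost.
function lossPercent(report) {
  const expected = report.packetsReceived + report.packetsLost;
  return expected === 0 ? 0 : (100 * report.packetsLost) / expected;
}

const inbound = { type: 'inbound-rtp', packetsReceived: 980, packetsLost: 20 };
const loss = lossPercent(inbound); // 2 (percent)
```

For interval-based monitoring you would apply the same formula to the deltas between two snapshots rather than to the lifetime totals.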
“outbound-rtp” and “remote-inbound-rtp” types

These two types are about outgoing media. “outbound-rtp” is about what we send and “remote-inbound-rtp” is about what our peer reported it received from us.
Each of these holds more than one stats object inside of it. We’ve covered the basics of these objects above. Time to look at what they specifically hold.
Let’s review each one of them separately.
“outbound-rtp”
outbound-rtp reports back to us what our WebRTC implementation is sending on a stream. To begin with, the “outbound-rtp” stats object will be holding RTCRtpStreamStats and RTCSentRtpStreamStats fields.
On top of it, there’s a slew of additional fields that will be there, depending on the type of the stream – audio or video.
Link to spec (RTCOutboundRtpStreamStats)
Our outbound RTP metrics relate to both audio and video, with specific metrics that are relevant only for video.
Both audio and video:
Video only:
Now that we have what we “know” we sent, time to look at “remote-inbound-rtp”
“remote-inbound-rtp”
The remote-inbound-rtp object is all about what the remote side reported back about our sent stream. In essence, this is the RTCP RR (Receiver Report) data – or more accurately – parts of it. Our “remote-inbound-rtp” stats object also holds RTCRtpStreamStats and RTCReceivedRtpStreamStats fields.
Link to spec (RTCRemoteInboundRtpStreamStats)
Time to talk about the “other side”…
“inbound-rtp” and “remote-outbound-rtp” types

What we had for outbound is there for inbound as well. “inbound-rtp” is what we actually received and processed, while “remote-outbound-rtp” is what the remote peer reported to us it sent (where some might have gotten lost in the void of the internet).
Here’s what we have for the “inbound-rtp” – RTCRtpStreamStats, RTCReceivedRtpStreamStats as well as additional fields:
Link to spec (RTCInboundRtpStreamStats)
For inbound RTP related stats, we have those that are specific to audio, those specific to video and those that relate to both.
Both audio and video:
Audio only:
Video only:
Now it is time to check what is being reported to us by the remote peer:
“remote-outbound-rtp”
The “remote-outbound-rtp” is what the remote peer tells us it sent. This is received on our end via the RTCP SR (Sender Report) and then incorporated into this stats block.

As usual, it is comprised of RTCRtpStreamStats, RTCSentRtpStreamStats and this additional block:
Link to spec (RTCRemoteOutboundRtpStreamStats)
Here we have:
“codec” type

The codec block holds information about the codec used – for both incoming and outgoing streams.
Frankly? There’s not much here to use for monitoring… The best thing here is the ability to resolve a nice name for the codec.
“media-source” type

The “media-source” is about what we’re sending. It is split into 3 parts: generic, audio and video. Obviously, we will find either audio or video for any specific media source.
The generic
Link to spec (RTCMediaSourceStats)
The kind field will indicate if we’re dealing with audio or video…
The audio
Link to spec (RTCAudioSourceStats)
Here we have a few metrics, of which audioLevel is the most interesting:
The video
Link to spec (RTCVideoSourceStats)
We’ve seen the metrics here elsewhere as well – but this time, it indicates what our source video metrics are – not those measured just before encoding or after being decoded on the other end.
Towards that end, we have:
“media-playout” type

Where “media-source” is about outgoing streams, “media-playout” is about incoming ones. That said, today at least, “media-playout” is limited to audio streams only.
Link to spec (RTCAudioPlayoutStats)
All of the fields here (besides the kind which is always set to “audio”) are accumulators.
Nothing much to add here.
Others? “peer-connection”, “data-channel” and “certificate” types

The other types of stats blocks don’t hold much in them. At least not in the form of something that is really useful when debugging.
The “peer-connection” has a running tally using accumulators for closed and opened data channels (dataChannelsOpened and dataChannelsClosed).
The “data-channel” one is built mostly of accumulators that you could also calculate yourself from the data sent and received on the channels. It might be easier to take them from here, but that doesn’t add much value beyond the convenience.
And the “certificate”? Well… it just gives you that – the certificates trail. Not something we’ve used so far.
Structure of a webrtc-internals file

When it comes to chrome://webrtc-internals, the file itself is a simple JSON text file. The format is not specified and subject to change. It has grown historically and does some things like double-encoding as JSON.
Sometimes you need to look at the format when you are looking for a specific value that is not visualized by your tooling such as the dtlsCipher.
If you open the content in a nice JSON viewer, you’ll get something like this:
There are 2 arrays in this JSON file:
The stats inside the PeerConnections objects is an array of calls into getStats(). Here’s what you’ll find there:
Here we see the id COT01_96. The field name of each item appears as the suffix of the key – transportId, payloadType, mimeType, clockRate, timestamp, …
For each, we have the startTime and endTime, denoting when the first and last samples were taken; the statsType – the object type this is collected for (“codec” in this case); and the values – an array of the values as taken over that period of time.
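Since each key in that stats array is just the object id with the field name appended, regrouping the flat entries back into per-object records is a short exercise. A best-effort sketch – the dump format is undocumented and may change, and the sample entries below are made up:

```javascript
// Group webrtc-internals stats entries by object id. Keys look like
// 'COT01_96-mimeType'; everything after the first '-' is the field name.
// The dump format is undocumented, so treat this as a best-effort parser.
function groupByObject(statsEntries) {
  const objects = {};
  for (const [key, entry] of Object.entries(statsEntries)) {
    const dash = key.indexOf('-');
    if (dash === -1) continue; // not an id-field key
    const id = key.slice(0, dash);
    const field = key.slice(dash + 1);
    // The values array is double-encoded as a JSON string in the dump.
    (objects[id] = objects[id] || {})[field] = JSON.parse(entry.values);
  }
  return objects;
}

// Shortened, made-up stand-in for the 'stats' section of a dump.
const entries = {
  'COT01_96-mimeType': { statsType: 'codec', values: '["audio/opus"]' },
  'COT01_96-clockRate': { statsType: 'codec', values: '[48000]' },
};
const objects = groupByObject(entries);
```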
The eventsLog… that’s left for another article down the road.
If you are lazy, and you should be, then reading this file should be done using a dedicated visualizer. The open one out there is fippo’s WebRTC dump importer. It parses the structure and then visualizes some of the data. I’ll leave it to you to try it out – it works great. Maybe we should do a video explainer for it at some point…
How can we help

WebRTC statistics is an important part of developing and maintaining WebRTC applications. We’re here to help.
You can check out my products and services on the menu at the top of this page.
The two immediate services that come to mind?
Something else is bugging you with WebRTC? Just reach out to me.
The post Making sense of getStats in WebRTC appeared first on BlogGeek.me.
Maximize your understanding of WebRTC stats and webrtc-internals to assist you in monitoring and analyzing WebRTC applications.
WebRTC is great. When it works.
When it doesn’t? A bit less so. Which is why there are tools at your disposal to debug and troubleshoot issues with your WebRTC application – be it connectivity failures, poor quality, bad use of the APIs or just a buggy implementation.
This article, as well as the other articles in this series, was written with the assistance of Philipp Hancke.
Interested in webrtc-internals and getStats? Then this series of articles is just for you:
This time? We’re focusing on WebRTC debugging 101. Or as it is more widely known: webrtc-internals and getStats.
WebRTC runs inside the browser. It has a set of JavaScript APIs so developers can build their applications with it. The thing is, networks are finicky and messy – they are unpredictable. Which is why developers need to monitor quality metrics. If you don’t do that in your application, then:
What is needed is observability, and that is done using an API that was available in WebRTC since its inception – known as getStats(). getStats exposes a surprisingly large amount of information about the internal performance of the underlying WebRTC library.
Calling getStats

getStats can either be called on the RTCPeerConnection object or on specific senders or receivers. Since calling it on senders or receivers only filters the result obtained for the whole connection, it is typically better to call it on the RTCPeerConnection:
```javascript
const stats = await pc.getStats();
```

Remember that getStats is an asynchronous method, so it returns a Promise which needs to be awaited. The Promise resolves with a “Maplike” object that is a key-value store in JavaScript.
You can iterate over this with a for-loop and log the contents:
```javascript
stats.forEach(report => console.log(report.id, report));
```

Please note that the “id” is an identifier, and while it has a certain structure in Chrome, do not attach any meaning to that structure as it is subject to change without notice (this has happened in the past already).
Alternatively you can get an array with the values which is useful if you are looking to filter for certain types of reports:
```javascript
[...stats.values()].filter(report => report.type === 'inbound-rtp');
```

The key of each key-value pair is a string that uniquely identifies the object and is consistent across calls. This means you can call getStats at two different points in time and compare the objects easily:
```javascript
// we assume `stats` has been obtained "a while ago"
const newStats = await pc.getStats();
```

Assuming we are interested in the audio bitrate, we would look for the “outbound-rtp” report with an “audio” kind:
```javascript
const currentAudio = [...newStats.values()]
  .find(report => report.type === 'outbound-rtp' && report.kind === 'audio');
```

We need to check that currentAudio exists and that stats.has(currentAudio.id) (i.e. the old result contains the same object), and then we can calculate the audio bitrate from the “bytesSent” values:
```javascript
// check currentAudio and stats.has(currentAudio.id) first
audioBitrate = 8 * (currentAudio.bytesSent - stats.get(currentAudio.id).bytesSent) /
    (currentAudio.timestamp - stats.get(currentAudio.id).timestamp);
```

The pattern of taking the difference in the cumulative measure and dividing it by the time difference is very common, see here for the underlying design principle.
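That arithmetic can be sketched as a small runnable helper, with hand-made snapshot objects standing in for two getStats() results (timestamps are in milliseconds, so the result comes out in kbit/s):

```javascript
// Average send bitrate between two snapshots of the same outbound-rtp report.
// getStats timestamps are in milliseconds, so bytes * 8 / ms yields kbit/s.
function bitrateKbps(previous, current) {
  return (8 * (current.bytesSent - previous.bytesSent)) /
         (current.timestamp - previous.timestamp);
}

// Hand-made snapshots: 8000 bytes sent over 2 seconds -> 32 kbit/s.
const previous = { id: 'OT01', timestamp: 1000, bytesSent: 0 };
const current = { id: 'OT01', timestamp: 3000, bytesSent: 8000 };
const kbps = bitrateKbps(previous, current); // 32
```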
What do the values inside getStats exactly mean? That’s what we’re covering in our reading getStats article.
getStats frequency

At what frequency should you be calling getStats()?
That’s up to you. For most metrics, calling it more often than once a second makes no sense. And intervals of more than 10 seconds will usually give you too little.
getStats() uses a JavaScript Promise – which means it is asynchronous in nature. You ask for stats, the browser (WebRTC) works on collecting them for you, and resolves the Promise once done.
Calling too frequently means eating up CPU for collecting statistics since getStats needs to query a lot of information from different parts of the system. If you don’t plan on using it for something important enough at such a frequency, then call the function less frequently.
One example of using getStats for the wrong task was calling it several times per second to get and display the audio level. This has since been replaced by a better API, and getStats simply returns the same result when it is called too frequently.
getStats returns aggregated values for many statistics, such as the number of bytes received. This lets you call getStats repeatedly, subtract the previous value from the current one, and divide by the time between the two measurements to get an average over that period.
Our suggestion? Once a second. For statistics that are a bit jittery, keep a 3-5 second old object around and average over the slightly larger window.
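That 3-5 second smoothing can be done by keeping a small window of snapshots of a cumulative counter and computing the rate between the oldest and newest retained samples. A sketch (names and numbers are made up, not any standard API):

```javascript
// Keep the last N snapshots of a cumulative counter and compute the rate
// over the whole retained window, smoothing out second-to-second jitter.
function makeWindowedRate(windowSize = 5) {
  const samples = []; // each: { timestamp (ms), value }
  return function push(timestamp, value) {
    samples.push({ timestamp, value });
    if (samples.length > windowSize) samples.shift(); // drop the oldest
    if (samples.length < 2) return null; // not enough data yet
    const oldest = samples[0];
    const newest = samples[samples.length - 1];
    // Rate in units per second across the retained window.
    return (newest.value - oldest.value) /
           ((newest.timestamp - oldest.timestamp) / 1000);
  };
}

const rate = makeWindowedRate(5);
rate(0, 0);                        // null - first sample
rate(1000, 1200);                  // 1200 units/s over the first second
const smoothed = rate(2000, 1800); // (1800 - 0) / 2s = 900 units/s
```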
Reading getStats results

How to read getStats results is a bigger topic and won’t fit here. Lucky for you, we’ve written a dedicated article just for this!
Head on and check out how to make sense of getStats results.
Why collect stats from the client anyway

In the past, in VoIP services, we often focused on collecting the metrics and statistics from the “network”. We collected the metrics from the media servers and other application servers. We also placed network packet analyzers near the media servers to measure these metrics for us.
This cannot be done anymore…
When WebRTC was introduced, it immediately lent itself to client side metrics collection. The bandwidth available to us was higher than ever before for the most part, many of the developers building WebRTC services were never indoctrinated as VoIP experts – they were just web developers. This meant that client side collection of stats was adopted and made common.
Whatever the reason is, today’s best practice is to collect the information from the client itself, and that makes total sense for WebRTC applications.
A word about rtcstats

A decade ago, Philipp Hancke created an open source project called rtcstats. It is a very lightweight approach: wrap the underlying RTCPeerConnection object, periodically call getStats and send the statistics (as well as other information about the API calls) to a server. On the server, one of the artifacts this produces is a “dump” with information equivalent to the webrtc-internals dump. While it has not been updated recently, there are friendly forks, e.g. from Jitsi. This project enables us to easily collect WebRTC related information from a WebRTC application without much integration effort. The library itself is simple enough that it does not require much maintenance or frequent updates.
If you are looking to build your own WebRTC statistics collection for your WebRTC application, then this project is highly recommended.
rtcstats collects everything – API calls and all getStats metrics, sending it to the server side of your data collection. It does so with some thoughts about reducing the traffic on the network by implementing a kind of a virtual sparse graph of the metrics collected (think of it as not collecting metric values that haven’t changed). This avoids stealing away the bandwidth needed for real time communications for uploading the logs.
Chrome and webrtc-internals

In a way, Chrome was always at the forefront of WebRTC (not surprising considering ALL modern browsers end up using Google’s libWebRTC implementation). They were the first to implement and adopt it into their browser and services as well (obviously).
What happened is that Google needed simple tooling to debug and troubleshoot issues related to WebRTC. So they created webrtc-internals.
What is webrtc-internals?

webrtc-internals is a browser tab (just type chrome://webrtc-internals in the address bar of your Chrome browser) that collects and shares WebRTC related information from the browser itself.
It has information about GetUserMedia, PeerConnection configuration, WebRTC API calls and events and calls to getStats – both latest and visualized on graphs.
This treasure trove of information is quite handy when you’re trying to figure out what’s going on with your WebRTC application.
The challenge? The data itself is transient. It is there as long as the peer connection is open – deleted and gone the moment it is closed.
This leaves us with two big challenges:
The first thing we need in order to “solve” the two challenges above is to “convert” webrtc-internals into a file (also known as a webrtc-internals dump).
The video above explains that visually. In essence:
You should now have a webrtc_internals_dump.txt file in your downloads folder.
Note that you still need to be purposeful about it, planning on obtaining that information to begin with, and actively downloading the file. Not fun, but very useful.
Reading webrtc-internals

How to read getStats results is a bigger topic and won’t fit here. Lucky for you, we’ve written a dedicated article just for this!
Head on and check out how to make sense of getStats results.
webrtc-internals alternative in Firefox

Mozilla has its own about:webrtc browser tab for Firefox.
They even outdid Google here and actually wrote about it: Debugging with about:webrtc in Firefox, Getting Data Out
What they are lacking though is relevance… not many developers (or users) are using Firefox, so the whole focus and effort is elsewhere.
Here’s the thing – at the end of the day, what we need is a robust solution/service across all browsers and devices. This usually translates to rtcstats based solutions. More on that… in a later article in this series.
👉 Still interested in debugging on Firefox? Check out this section from Olivier’s post on debugging WebRTC in browsers
webrtc-internals alternative in Safari

Debugging in Safari is close to nonexistent. You’d be better off collecting the data yourself via rtcstats.
Apple, being Apple, doesn’t care much about WebRTC or developers in general.
👉 Still interested in debugging on Safari? Check out this section from Olivier’s post on debugging WebRTC in browsers
Visualising WebRTC statistics

Having stats is great, but what do you make of them?
Being able to see anything here is hard. Which is why Philipp Hancke built and is still maintaining a tool called WebRTC dump importer – you take the webrtc-internals dump you’ve downloaded, upload it to this page, and magic happens. Go check it out.
There are other visualization tools available, but they are commercial and part of larger paid solutions (testRTC for example has great visualization, but it isn’t offered as a standalone).
How can we help

WebRTC statistics is an important part of developing and maintaining WebRTC applications. We’re here to help.
You can check out my products and services on the menu at the top of this page.
The two immediate services that come to mind?
Something else is bugging you with WebRTC? Just reach out to me.
The post Everything you wanted to know about webrtc-internals and getStats appeared first on BlogGeek.me.
Understanding API and SDK: Dive into their definitions and learn why both are crucial for effective software development with CPaaS and LLM.
An API and an SDK. They are similar but different. Both are interfaces used by services to expose their capabilities to developers and applications. For the most part, we’ve been happy enough with APIs that are based on REST, probably with an OpenAPI specification based definition for it.
But for things like WebRTC, communications and WebSocket based interfaces, an API just isn’t enough.
Let’s dive in to see why.
We will start with a quick definition of each.
Keep in mind that the actual definitions are rather fluid – the ones below are just those that are common today in our industry (networking software).
API

API stands for Application Programming Interface. In this day and age, such an interface is usually one that gets used by remote invocation – from one machine to another over an IP network.
The most common specification for an API? REST
REST is a rather simple mechanism built on top of HTTP. For me, it is a way to formalize how a URL can be used to retrieve values, push values or execute “stuff” on the server.
Why REST? Because it uses HTTP, making it easily accessible and usable by web applications running inside web browsers.
Then there’s OpenAPI which is simply a specification of how to express interfaces using REST in a formal way. This enables using software tools to create, document, deploy, test and use APIs.
While there are other types of APIs, which don’t rely on REST or OpenAPI, most do.
The unique thing about an API? It sits “inside” or “on top” of the service we want to interface with and we call/invoke that API by calling it from a separate process/machine.
SDK

SDK stands for Software Development Kit. For me, that’s a piece of code that gets embedded in your application as a library that you use directly.
Where an API gets communicated remotely, over the network; an SDK gets invoked directly, from inside a software application.
In many cases, an SDK is built on top of an API, to make it easier to integrate with.
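As a toy illustration of that layering – everything here (the service name, endpoint and token handling) is made up – an SDK is often little more than a thin client that hides the raw REST calls behind methods:

```javascript
// Hypothetical SDK wrapping a hypothetical REST API: the application calls
// methods; the SDK turns them into HTTP requests and hides auth and URLs.
class AcmeCommsClient {
  constructor(apiKey, baseUrl = 'https://api.example.com/v1') {
    this.apiKey = apiKey;
    this.baseUrl = baseUrl;
  }

  // Pure helper: builds the HTTP request the SDK would send.
  buildRequest(path, body) {
    return {
      url: `${this.baseUrl}${path}`,
      method: 'POST',
      headers: {
        Authorization: `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(body),
    };
  }

  // The SDK method an application would actually call.
  createRoom(name) {
    const req = this.buildRequest('/rooms', { name });
    return fetch(req.url, req); // network call left to the runtime
  }
}

const client = new AcmeCommsClient('secret-key');
const req = client.buildRequest('/rooms', { name: 'standup' });
```

The point of the sketch is the division of labor: the API is the HTTP contract; the SDK is the in-process convenience layer built on top of it.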
CPaaS and Programmable Communication interfaces

Let’s see what Twilio does as an example of the various interfaces they have on offer:
The moment in time that a client side SDK is needed is when explaining how to interact with the server’s interface (think REST) is going to be complicated. Remember – CPaaS vendors are there to simplify the development. So adding SDKs to simplify it further where needed makes total sense.
WebRTC almost forces us to create such client side SDKs. Especially since signaling isn’t defined for WebRTC – it is up to the vendor to decide, and here, the vendor is the CPaaS vendor. So if they define an interface, it is easier to implement the client side of it as an SDK than to document it well enough and assume customers will implement it properly without wasting too much time and too many support resources.
Programmable LLM interfaces

Time to look at LLM and Generative AI interfaces that are programmable. We do that by reviewing OpenAI’s developer platform documentation. Here’s what they have available at the moment:
With voice, OpenAI Realtime API started by offering a WebSocket interface. Google Gemini followed suit.
Why WebSocket?
Why no SDK? Because this is still in beta…
They quickly followed with a WebRTC interface. Which makes total sense – WebSocket isn’t really real time and comes with its own set of limitations for an interactive voice interface (on that, in another time).
What they didn’t do here was add an SDK either.
And while with WebSocket this is “acceptable”, for WebRTC… I believe it is less so.
Here’s what I wrote about OpenAI, LLMs, voice and WebRTC a few months back
Is an SDK critical to “hide” a WebRTC interface?

Yes it is.
WebRTC has an API surface that is quite extensive. It includes APIs, SDP, network configuration, etc.
Leaving all these exposed and even more – with no direct implementation other than an example in the documentation – isn’t going to help anyone.
WebRTC as a development interface suffers from a few big challenges:
This means that without having an SDK to a WebRTC interface (be it for a Programmable Video or Voice service, or for an LLM / Generative AI service), you are going to be left with a solution that is hard to adopt and easy to break:
Oh, and we didn’t go into the discussion of what to do with Android and iOS developers that might want to integrate with the services inside a native application (they need native SDKs…).
If you’re aiming to have an API for a WebRTC interface, then you should also work towards having an SDK for it. And if not, be very very clear to yourself why you don’t need an SDK.
The post CPaaS and LLMs both need APIs and SDKs appeared first on BlogGeek.me.
Learn what’s next for BlogGeek.me in the world of WebRTC, Generative AI and Programmable Communications.
January 5 2012. Around 13 years ago I published my first blog post here. That was when I left RADVISION and ended up in this route of consulting and entrepreneurship. That was after 13 years at RADVISION. And now it is 13 years later still.
For those who think of time in distinct eras (WebRTC evolution, anyone?), this might look like the beginning of a new era for me as well. In many ways, the past few months definitely feel like that.
I’ve spent the past two months trying to figure out what comes next for me and BlogGeek.me. Some of it is business as usual while others are about brand new initiatives.
Here are a few updates on my ongoing projects and work. Feel free to reach out to me.
More video

Last year I started producing more video content. There are now 3 different types of such content:
There are more videos that I create when needed – some are added to the tip and offer emails (see below). Others to glossary terms, FAQs and other areas.
Why am I doing it? To experiment with something new related to content creation.
You can subscribe to my YouTube channel
Tip and offer

This year I started a new thing that is only available on my newsletter (i.e. email subscribers): a tip and an offer.
The cadence of my writing is now a new article here every two weeks. In between, I write a shorter piece that includes a specific tip or thought, along with an offer to things that I do.
The tip can be about something new I am starting to flesh out in my mind or an insight that can’t fit a full article, but is important. So far, I have written about the 4 communication scenarios in WebRTC and 21 algorithms in WebRTC. Lined up are tips around AV1 adoption, UC studio and Clientless SFUs.
To receive these emails, I invite you to subscribe to my newsletter.
0px;}.tve_lg_input_container.tcb-plain-text{cursor: unset;}.tve-turnstile-container{display: table;position: relative;}.tve-turnstile-container[data-size]{margin: 10px auto;--tve-alignment: center;}.tve_lead_generated_inputs_container{--tcb-local-color-30800: rgb(59,136,253);--tcb-local-color-f2bba: rgba(59,136,253,0.1);--tcb-local-color-trewq: rgba(59,136,253,0.3);--tcb-local-color-poiuy: rgba(59,136,253,0.6);--tcb-local-color-f83d7: rgba(59,136,253,0.25);--tcb-local-color-3d798: rgba(59,136,253,0.4);--tcb-local-color-418a6: rgba(59,136,253,0.12);--tcb-local-color-a941t: rgba(59,136,253,0.05);--tcb-local-color-1ad9d: rgba(46,204,113,0.1);--tcb-local-color-2dbcc: rgb(136,231,253);--tcb-local-color-frty6: rgba(59,136,253,0.45);--tcb-local-color-flktr: rgba(59,136,253,0.8);--tcb-radio-size: 20px;--tcb-checkbox-size: 20px;--tve-color: var(--tcb-local-color-30800);}.tve-new-radio .tve_lg_radio_wrapper .tve-lg-error:not(:checked) + label:not(:hover) + .tve-checkmark,.tve-new-radio .tve_lg_radio_wrapper .tve-lg-error:not(:checked) + label:not(:hover) .tve-checkmark,.tve-new-checkbox .tve_lg_checkbox_wrapper .tve-lg-error:not(:checked) + label:not(:hover) + .tve-checkmark,.tve-new-checkbox .tve_lg_checkbox_wrapper .tve-lg-error:not(:checked) + label:not(:hover) .tve-checkmark{border-color: rgba(0,0,0,0);box-shadow: rgb(169,68,66) 0px 0px 4px inset;}.tve-new-radio .tve_lg_radio_wrapper .tve-lg-error:not(:checked) + label:not(:hover) + .tve-checkmark::after,.tve-new-radio .tve_lg_radio_wrapper .tve-lg-error:not(:checked) + label:not(:hover) .tve-checkmark::after,.tve-new-checkbox .tve_lg_checkbox_wrapper .tve-lg-error:not(:checked) + label:not(:hover) + .tve-checkmark::after,.tve-new-checkbox .tve_lg_checkbox_wrapper .tve-lg-error:not(:checked) + label:not(:hover) .tve-checkmark::after{box-shadow: rgb(169,68,66) 0px 0px 4px inset;}.tve-new-radio.tve_lg_radio.tve-lg-error-multiple::after{display: block;position: absolute;left: 16px;bottom: -10px;font-size: 16px;color: 
rgb(170,68,67);}.tve_lg_dropdown.tve-lg-error,.tcb-form-dropdown.tve-lg-error,.tve-dynamic-dropdown.tve-lg-error{border-radius: 6px;}.tve_lg_dropdown.tve-lg-error > a,.tcb-form-dropdown.tve-lg-error > a,.tve-dynamic-dropdown.tve-lg-error > a{box-shadow: rgb(169,68,66) 0px 0px 4px !important;}.tcb-file-list .tcb-file-loader .tcb-form-loader-icon{font-size: 16px;line-height: 16px;width: 16px;height: 16px;margin: -8px 0px 0px -8px;}.tve-form-button{max-width: 100%;margin-left: auto;margin-right: auto;display: table !important;}.tve-form-button.thrv_wrapper{padding: 0px;}.tve-form-button .tcb-plain-text{cursor: pointer;}.tve-form-button{position: relative;z-index: 1;}.tve-form-button:focus-within .tve-form-button-submit{box-shadow: rgba(142,142,142,0.5) 0px 2px 4px;}a.tcb-button-link{background-color: rgb(26,188,156);padding: 12px 15px;font-size: 18px;box-sizing: border-box;display: inline-flex;align-items: center;overflow: hidden;width: 100%;text-align: center;line-height: 1.2em;}a.tcb-button-link:hover{background-color: rgb(21,162,136);}.tve-form-button a.tcb-button-link{color: rgb(255,255,255);text-decoration: none !important;}a.tcb-button-link > span::before{position: absolute;content: "";display: none;top: -100px;bottom: -100px;width: 1px;left: 10px;background-color: rgb(0,121,0);}span.tcb-button-texts{color: inherit;display: block;flex: 1 1 0%;position: relative;}span.tcb-button-texts > span{display: block;padding: 0px;}.tcb-plain-text{cursor: text;}.thrv-login-element .tcb-form-loader-icon{z-index: 11;}.thrv-login-element .tcb-form-loader > span.tcb-form-loader-icon{animation: 0.7s linear 0s infinite normal none running tcb-loader;display: inline-block;font-size: 24px;line-height: 24px;height: 24px;width: 24px;position: absolute;top: 50%;left: 50%;margin: -12px 0px 0px -12px;opacity: 0.7;}.notifications-content-wrapper{position: fixed;opacity: 1;}.notifications-content-wrapper.thrv_wrapper{padding: 0px;margin: 
0px;}.notifications-content-wrapper.tcb-permanently-hidden{display: none !important;}.notifications-content-wrapper .notifications-content{display: none;flex-direction: column;}.notifications-content-wrapper:not(.notification-edit-mode){z-index: 9999993;}.notifications-content-wrapper[data-position*="top"]{top: 50px;}.notifications-content-wrapper[data-position*="middle"]{top: 50%;transform: translateY(-50%);}.notifications-content-wrapper[data-position*="bottom"]{bottom: 50px;}.notifications-content-wrapper[data-position*="left"]{left: 50px;}.notifications-content-wrapper[data-position*="center"]{left: 50%;transform: translateX(-50%);}.notifications-content-wrapper[data-position*="right"]{right: 50px;}.notifications-content-wrapper[data-position="middle-center"]{transform: translate(-50%,-50%);}.notifications-content-wrapper.tcb-animated:not(.notification-edit-mode),.notifications-content-wrapper.tcb-animated.editor-preview{transition: top 0.7s,bottom 0.7s,left 0.7s,right 0.7s,opacity 0.7s ease-in-out;}.notifications-content-wrapper.tcb-animated:not(.notification-edit-mode)[data-animation="down"][data-position*="bottom"],.notifications-content-wrapper.tcb-animated.editor-preview[data-animation="down"][data-position*="bottom"]{bottom: 150%;}.notifications-content-wrapper.tcb-animated:not(.notification-edit-mode)[data-animation="down"][data-position*="top"],.notifications-content-wrapper.tcb-animated:not(.notification-edit-mode)[data-animation="down"][data-position*="middle"],.notifications-content-wrapper.tcb-animated.editor-preview[data-animation="down"][data-position*="top"],.notifications-content-wrapper.tcb-animated.editor-preview[data-animation="down"][data-position*="middle"]{top: -100%;}.notifications-content-wrapper.tcb-animated:not(.notification-edit-mode)[data-animation="up"][data-position*="bottom"],.notifications-content-wrapper.tcb-animated.editor-preview[data-animation="up"][data-position*="bottom"]{bottom: 
-100%;}.notifications-content-wrapper.tcb-animated:not(.notification-edit-mode)[data-animation="up"][data-position*="top"],.notifications-content-wrapper.tcb-animated:not(.notification-edit-mode)[data-animation="up"][data-position*="middle"],.notifications-content-wrapper.tcb-animated.editor-preview[data-animation="up"][data-position*="top"],.notifications-content-wrapper.tcb-animated.editor-preview[data-animation="up"][data-position*="middle"]{top: 150%;}.notifications-content-wrapper.tcb-animated:not(.notification-edit-mode)[data-animation="left"][data-position*="right"],.notifications-content-wrapper.tcb-animated.editor-preview[data-animation="left"][data-position*="right"]{right: 150%;}.notifications-content-wrapper.tcb-animated:not(.notification-edit-mode)[data-animation="left"][data-position*="left"],.notifications-content-wrapper.tcb-animated:not(.notification-edit-mode)[data-animation="left"][data-position*="center"],.notifications-content-wrapper.tcb-animated.editor-preview[data-animation="left"][data-position*="left"],.notifications-content-wrapper.tcb-animated.editor-preview[data-animation="left"][data-position*="center"]{left: -100%;}.notifications-content-wrapper.tcb-animated:not(.notification-edit-mode)[data-animation="right"][data-position*="right"],.notifications-content-wrapper.tcb-animated.editor-preview[data-animation="right"][data-position*="right"]{right: -100%;}.notifications-content-wrapper.tcb-animated:not(.notification-edit-mode)[data-animation="right"][data-position*="left"],.notifications-content-wrapper.tcb-animated:not(.notification-edit-mode)[data-animation="right"][data-position*="center"],.notifications-content-wrapper.tcb-animated.editor-preview[data-animation="right"][data-position*="left"],.notifications-content-wrapper.tcb-animated.editor-preview[data-animation="right"][data-position*="center"]{left: 
150%;}.notifications-content-wrapper.tcb-animated:not(.notification-edit-mode)[data-animation],.notifications-content-wrapper.tcb-animated.editor-preview[data-animation]{opacity: 0;}.notifications-content-wrapper[data-state="success"] .notification-success{display: flex;}.notifications-content-wrapper[data-state="warning"] .notification-warning{display: flex;}.notifications-content-wrapper[data-state="error"] .notification-error{display: flex;}.tcb-permanently-hidden{display: none !important;}.tar-disabled{cursor: default;opacity: 0.4;pointer-events: none;}.tcb-flex-row{display: flex;flex-flow: row;align-items: stretch;justify-content: space-between;margin-top: 0px;margin-left: -15px;padding-bottom: 15px;padding-top: 15px;}.tcb-flex-col{flex: 1 1 auto;padding-top: 0px;padding-left: 15px;}.tcb-flex-row .tcb-flex-col{box-sizing: border-box;}.tcb-col{height: 100%;display: flex;flex-direction: column;position: relative;}.tcb-flex-row .tcb-col{box-sizing: border-box;}.thrv-svg-icon svg{width: 1em;height: 1em;stroke-width: 0;fill: currentcolor;stroke: currentcolor;}html{text-rendering: auto !important;}html body{text-rendering: auto !important;}body.tcb_symbol-template-default::before{content: none;}.thrv_wrapper{margin-top: 20px;margin-bottom: 20px;padding: 1px;}.thrv_wrapper div{box-sizing: content-box;}.thrv_symbol .thrv_wrapper:not(.thrv_icon){box-sizing: border-box !important;}.thrv_wrapper.thrv-columns{margin-top: 10px;margin-bottom: 10px;padding: 0px;}p{font-size: 1em;}.tve_clearfix::after{content: "";display: block;clear: both;visibility: hidden;line-height: 0;height: 0px;}.tvd-toast{justify-content: space-between;}.tvd-toast.tve-fe-message{top: 50px;width: 60%;padding: 0px;color: rgb(0,0,0);max-width: 500px;position: fixed;z-index: 9999993;left: 50%;}.tvd-toast.tve-fe-message .tve-toast-message{position: relative;left: -50%;background: rgb(255,255,255);box-shadow: rgb(167,167,167) 0px 0px 15px 0px;}.tvd-toast.tve-fe-message .tve-toast-icon-container{display: 
inline-block;width: 50px;background: green;color: rgb(255,255,255);height: 100%;position: absolute;}.tvd-toast.tve-fe-message .tve-toast-icon-container.tve-toast-error{background: red;}.tvd-toast.tve-fe-message .tve-toast-message-container{padding: 20px 10px 20px 70px;margin: auto 0px;font-family: Roboto,sans-serif;font-size: 16px;}.tvd-toast.tve-fe-message span{text-align: center;display: flex;justify-content: center;flex-direction: column;align-items: center;min-height: 50px;height: 100%;width: 100%;}:not(#_s):not(#_s) .tcb-conditional-display-placeholder{min-height: var(--tcb-container-height-d,100px) !important;position: relative;}:not(#_s):not(#_s) .tcb-conditional-display-placeholder.thrv-page-section{box-sizing: border-box;margin: 0px;}:not(#_s):not(#_s) .tcb-conditional-display-placeholder.thrv-content-box{box-sizing: border-box;}:not(#_s):not(#_s) .tcb-conditional-display-placeholder .tve-page-section-out,:not(#_s):not(#_s) .tcb-conditional-display-placeholder .tve-content-box-background{box-sizing: border-box;position: absolute;width: 100%;height: 100%;left: 0px;top: 0px;overflow: hidden;}a.tcb-plain-text{cursor: pointer;}@media (max-width: 1023px){:not(#_s):not(#_s) .tcb-conditional-display-placeholder{min-height: var(--tcb-container-height-t) !important;}}@media (max-width: 767px){html{overflow-x: hidden !important;}html,body{max-width: 100vw !important;}.notifications-content-wrapper{transform: translateX(-50%);left: 50% !important;right: unset !important;}.notifications-content-wrapper[data-position*="middle"]{transform: translate(-50%,-50%);}.notifications-content-wrapper[data-position*="top"]{top: 0px;}.notifications-content-wrapper[data-position*="bottom"]{bottom: 0px;}.tcb-flex-row{flex-direction: column;}.tcb-flex-row.v-2{flex-direction: row;}.tcb-flex-row.v-2:not(.tcb-mobile-no-wrap){flex-wrap: wrap;}.tcb-flex-row.v-2:not(.tcb-mobile-no-wrap) > .tcb-flex-col{width: 100%;flex: 1 0 390px;max-width: 100% !important;}:not(#_s):not(#_s) 
.tcb-conditional-display-placeholder{min-height: var(--tcb-container-height-m) !important;}}@media only screen and (max-width: 740px){.thrv_lead_generation .thrv_lead_generation_container .tve_lg_input_container.tve_lg_select_container .thrv_icon{margin-right: 14px;}}@media (max-width: 1023px) and (min-width: 768px){.notifications-content-wrapper[data-position*="top"]{top: 20px;}.notifications-content-wrapper[data-position*="bottom"]{bottom: 20px;}}@media screen and (-ms-high-contrast: active),(-ms-high-contrast: none){.tcb-flex-col{width: 100%;}.tcb-col{display: block;}}@media screen and (max-device-width: 480px){body{text-size-adjust: none;}}@keyframes tcb-loader{0%{transform: rotate(0deg);}100%{transform: rotate(359deg);}}@media (min-width: 300px){.thrv_symbol_74928 [data-css="tve-u-194ccc1e96a"]{max-width: 72.5%;}.thrv_symbol_74928 [data-css="tve-u-194ccc1e96c"]{max-width: 27.5%;}.thrv_symbol_74928 [data-css="tve-u-194ccc1e969"] > .tcb-flex-col > .tcb-col{justify-content: center;}.thrv_symbol_74928 [data-css="tve-u-194ccc1e968"]{margin-top: 0px !important;margin-bottom: 0px !important;}[data-css="tve-u-194ccc1e9df"]{--tcb-local-color-0a1ec: rgb(47,138,229);--tcb-local-color-909bc: rgba(47,138,229,0.08);--tcb-local-color-146a8: rgba(47,138,229,0.2);}.thrv_symbol_74928 [data-css="tve-u-194ccc1e9df"]{--tcb-local-color-bcd13: var(--tcb-local-color-0a1ec);--form-color: --tcb-skin-color-0;float: none;margin-left: auto !important;margin-right: auto !important;max-width: 700px !important;padding-left: 10px !important;padding-right: 10px !important;background-color: rgb(255,255,255) !important;--tve-applied-background-color: rgb(255,255,255) !important;--tcb-local-color-0a1ec: var(--tcb-skin-color-0) !important;--tcb-local-color-909bc: rgba(55,101,139,0.08) !important;--tcb-local-color-146a8: rgba(55,101,139,0.2) !important;}:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input:not(.tcb-excluded-from-group-item) input,:not(#tve) .thrv_symbol_74928 
#lg-m6parpwe .tve_lg_regular_input:not(.tcb-excluded-from-group-item) textarea{border: 1px solid var(--tcb-local-color-909bc);--tve-applied-border: 1px solid var$(--tcb-local-color-909bc);border-radius: 0px;box-shadow: 0px 0px 3px 0px var(--tcb-local-color-909bc) inset;--tve-applied-box-shadow: 0px 0px 3px 0px var$(--tcb-local-color-909bc) inset;background-color: rgb(251,251,251) !important;--tve-applied-background-color: rgb(251,251,251) !important;background-image: none !important;--tve-applied-background-image: none !important;}:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input:not(.tcb-excluded-from-group-item) input,:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input:not(.tcb-excluded-from-group-item) textarea,:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input:not(.tcb-excluded-from-group-item) ::placeholder{font-weight: var(--tve-font-weight,var(--g-regular-weight,normal));font-family: var(--tve-font-family,Arial,Helvetica,sans-serif);font-size: var(--tve-font-size,14px);line-height: var(--tve-line-height,1.2em);--tcb-applied-color: rgb(17,17,17);color: var(--tve-color,rgb(17,17,17)) !important;--tve-applied-color: var$(--tve-color,rgb(17,17,17)) !important;}.thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input:not(.tcb-excluded-from-group-item){--tve-font-weight: var(--g-regular-weight,normal);--tve-font-family: Arial,Helvetica,sans-serif;--tve-font-size: 14px;--tve-line-height: 1.2em;--tve-color: rgb(17,17,17);--tve-applied---tve-color: rgb(17,17,17);--tve-border-radius: 0px;}.thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input:not(.tcb-excluded-from-group-item) input,#lg-m6parpwe .tve_lg_regular_input:not(.tcb-excluded-from-group-item) textarea{padding: 18px !important;}:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input:not(.tcb-excluded-from-group-item):hover input,:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input:not(.tcb-excluded-from-group-item):hover textarea{box-shadow: 
rgba(47,138,229,0.08) 0px 0px 3px 0px inset !important;--tve-applied-box-shadow: 0px 0px 3px 0px rgba(47,138,229,0.08) inset !important;border: 1px solid var(--tcb-local-color-146a8) !important;--tve-applied-border: 1px solid var$(--tcb-local-color-146a8) !important;}:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input:not(.tcb-excluded-from-group-item):hover input,:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input:not(.tcb-excluded-from-group-item):hover textarea,:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input:not(.tcb-excluded-from-group-item):hover ::placeholder{color: var(--tve-color,var(--tcb-local-color-0a1ec)) !important;--tve-applied-color: var$(--tve-color,var$(--tcb-local-color-0a1ec)) !important;--tcb-applied-color: var$(--tcb-local-color-0a1ec) !important;}:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input:not(.tcb-excluded-from-group-item):hover{--tve-color: var(--tcb-local-color-0a1ec) !important;--tve-applied---tve-color: var$(--tcb-local-color-0a1ec) !important;}.thrv_symbol_74928 #lg-m6parpwe .tve_lg_dropdown:not(.tcb-excluded-from-group-item){--tcb-local-color-30800: rgb(47,138,229);--tcb-local-color-f2bba: rgba(59,156,253,0.1);--tcb-local-color-f83d7: rgba(59,156,253,0.25);--tcb-local-color-trewq: rgba(59,156,253,0.3);--tcb-local-color-3d798: rgba(59,156,253,0.4);--tcb-local-color-poiuy: rgba(59,156,253,0.6);--tcb-local-color-418a6: rgba(59,156,253,0.12);--tcb-local-color-a941t: rgba(59,156,253,0.05);--tcb-local-color-1ad9d: rgba(46,204,96,0.1);--tcb-local-color-2dbcc: rgb(131,188,123);--tve-font-weight: var(--g-regular-weight,normal);--tve-font-family: Arial,Helvetica,sans-serif;--tve-color: rgb(17,17,17);--tve-applied---tve-color: rgb(17,17,17);--tve-font-size: 14px;border: 1px solid var(--tcb-local-color-909bc);--tve-applied-border: 1px solid var$(--tcb-local-color-909bc);border-radius: 72px;overflow: hidden;--tve-line-height: 1.2em;box-shadow: 0px 0px 3px 0px 
var(--tcb-local-color-909bc) inset;--tve-applied-box-shadow: 0px 0px 3px 0px var$(--tcb-local-color-909bc) inset;padding: 18px !important;margin-top: 16px !important;margin-bottom: 16px !important;background-color: rgb(251,251,251) !important;--tve-applied-background-color: rgb(251,251,251) !important;}.thrv_symbol_74928 #lg-m6parpwe .tve_lg_checkbox:not(.tcb-excluded-from-group-item) .tve_lg_checkbox_wrapper:not(.tcb-excluded-from-group-item){--tcb-local-color-30800: rgb(47,138,229);--tcb-local-color-f2bba: rgba(59,156,253,0.1);--tcb-local-color-trewq: rgba(59,156,253,0.3);--tcb-local-color-frty6: rgba(59,156,253,0.45);--tcb-local-color-flktr: rgba(59,156,253,0.8);--tve-font-size: 14px;--tve-color: rgba(17,17,17,0.7);--tve-applied---tve-color: rgba(17,17,17,0.7);--tve-font-weight: var(--g-regular-weight,normal);--tve-font-family: Arial,Helvetica,sans-serif;}.thrv_symbol_74928 #lg-m6parpwe .tve_lg_radio:not(.tcb-excluded-from-group-item) .tve_lg_radio_wrapper:not(.tcb-excluded-from-group-item){--tcb-local-color-30800: rgb(47,138,229);--tcb-local-color-f2bba: rgba(59,156,253,0.1);--tcb-local-color-trewq: rgba(59,156,253,0.3);--tcb-local-color-frty6: rgba(59,156,253,0.45);--tcb-local-color-flktr: rgba(59,156,253,0.8);--tve-font-weight: var(--g-regular-weight,normal);--tve-font-family: Arial,Helvetica,sans-serif;--tve-font-size: 14px;--tve-color: rgba(17,17,17,0.7);--tve-applied---tve-color: rgba(17,17,17,0.7);}:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input#lg-kcond5nq input,:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input#lg-kcond5nq textarea{border: 1px solid var(--tcb-local-color-909bc);--tve-applied-border: 1px solid var$(--tcb-local-color-909bc);border-radius: 10px;overflow: hidden;box-shadow: 0px 0px 3px 0px var(--tcb-local-color-909bc) inset;--tve-applied-box-shadow: 0px 0px 3px 0px var$(--tcb-local-color-909bc) inset;background-color: rgb(251,251,251) !important;--tve-applied-background-color: rgb(251,251,251) 
!important;}:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input#lg-kcond5nq input,:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input#lg-kcond5nq textarea,:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input#lg-kcond5nq ::placeholder{font-weight: var(--tve-font-weight,var(--g-regular-weight,normal));font-family: var(--tve-font-family,Arial,Helvetica,sans-serif);font-size: var(--tve-font-size,14px);line-height: var(--tve-line-height,1.2em);}.thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input#lg-kcond5nq{--tve-font-weight: var(--g-regular-weight,normal);--tve-font-family: Arial,Helvetica,sans-serif;--tve-font-size: 14px;--tve-line-height: 1.2em;}.thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input#lg-kcond5nq input,#lg-m6parpwe .tve_lg_regular_input#lg-kcond5nq textarea{padding: 18px !important;}:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input#lg-kcond5nq:hover input,:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input#lg-kcond5nq:hover textarea{box-shadow: rgba(47,138,229,0.08) 0px 0px 3px 0px inset !important;--tve-applied-box-shadow: 0px 0px 3px 0px rgba(47,138,229,0.08) inset !important;border: 1px solid var(--tcb-local-color-146a8) !important;--tve-applied-border: 1px solid var$(--tcb-local-color-146a8) !important;}:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input#lg-kcond5nq:hover input,:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input#lg-kcond5nq:hover textarea,:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input#lg-kcond5nq:hover ::placeholder{color: var(--tve-color,var(--tcb-local-color-0a1ec)) !important;--tve-applied-color: var$(--tve-color,var$(--tcb-local-color-0a1ec)) !important;--tcb-applied-color: var$(--tcb-local-color-0a1ec) !important;}:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_regular_input#lg-kcond5nq:hover{--tve-color: var(--tcb-local-color-0a1ec) !important;--tve-applied---tve-color: var$(--tcb-local-color-0a1ec) 
!important;}:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_dropdown:not(.tcb-excluded-from-group-item) span{font-weight: var(--tve-font-weight,var(--g-regular-weight,normal));font-family: var(--tve-font-family,Arial,Helvetica,sans-serif);color: var(--tve-color,rgb(17,17,17));--tve-applied-color: var$(--tve-color,rgb(17,17,17));--tcb-applied-color: rgb(17,17,17);font-size: var(--tve-font-size,14px);line-height: var(--tve-line-height,1.2em);}:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_dropdown:not(.tcb-excluded-from-group-item):hover span{color: var(--tve-color,var(--tcb-local-color-0a1ec)) !important;--tve-applied-color: var$(--tve-color,var$(--tcb-local-color-0a1ec)) !important;--tcb-applied-color: var$(--tcb-local-color-0a1ec) !important;}:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_dropdown:not(.tcb-excluded-from-group-item):hover{--tve-color: var(--tcb-local-color-0a1ec) !important;--tve-applied---tve-color: var$(--tcb-local-color-0a1ec) !important;box-shadow: 0px 0px 3px 0px var(--tcb-local-color-146a8) inset !important;--tve-applied-box-shadow: 0px 0px 3px 0px var$(--tcb-local-color-146a8) inset !important;border: 1px solid var(--tcb-local-color-146a8) !important;--tve-applied-border: 1px solid var$(--tcb-local-color-146a8) !important;}:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_radio:not(.tcb-excluded-from-group-item) .tve_lg_radio_wrapper:not(.tcb-excluded-from-group-item) .tve-input-option-text{font-family: var(--tve-font-family,Arial,Helvetica,sans-serif);--tcb-applied-color: rgba(17,17,17,0.7);font-weight: var(--tve-font-weight,var(--g-regular-weight,normal) ) !important;font-size: var(--tve-font-size,14px) !important;color: var(--tve-color,rgba(17,17,17,0.7)) !important;--tve-applied-color: var$(--tve-color,rgba(17,17,17,0.7)) !important;}:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_checkbox:not(.tcb-excluded-from-group-item) .tve_lg_checkbox_wrapper:not(.tcb-excluded-from-group-item) 
.tve-input-option-text{--tcb-applied-color: rgba(17,17,17,0.7);font-family: var(--tve-font-family,Arial,Helvetica,sans-serif);font-size: var(--tve-font-size,14px) !important;color: var(--tve-color,rgba(17,17,17,0.7)) !important;--tve-applied-color: var$(--tve-color,rgba(17,17,17,0.7)) !important;font-weight: var(--tve-font-weight,var(--g-regular-weight,normal) ) !important;}:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_checkbox:not(.tcb-excluded-from-group-item) .tve_lg_checkbox_wrapper:not(.tcb-excluded-from-group-item):hover .tve-input-option-text{color: var(--tve-color,var(--tcb-local-color-0a1ec)) !important;--tve-applied-color: var$(--tve-color,var$(--tcb-local-color-0a1ec)) !important;--tcb-applied-color: var$(--tcb-local-color-0a1ec) !important;}:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_checkbox:not(.tcb-excluded-from-group-item) .tve_lg_checkbox_wrapper:not(.tcb-excluded-from-group-item):hover{--tve-color: var(--tcb-local-color-0a1ec) !important;--tve-applied---tve-color: var$(--tcb-local-color-0a1ec) !important;}:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_radio:not(.tcb-excluded-from-group-item) .tve_lg_radio_wrapper:not(.tcb-excluded-from-group-item):hover .tve-input-option-text{color: var(--tve-color,var(--tcb-local-color-0a1ec)) !important;--tve-applied-color: var$(--tve-color,var$(--tcb-local-color-0a1ec)) !important;--tcb-applied-color: var$(--tcb-local-color-0a1ec) !important;}:not(#tve) .thrv_symbol_74928 #lg-m6parpwe .tve_lg_radio:not(.tcb-excluded-from-group-item) .tve_lg_radio_wrapper:not(.tcb-excluded-from-group-item):hover{--tve-color: var(--tcb-local-color-0a1ec) !important;--tve-applied---tve-color: var$(--tcb-local-color-0a1ec) !important;}.thrv_symbol_74928 #lg-m6parpwe .tve_lg_radio:not(.tcb-excluded-from-group-item) .tve_lg_radio_wrapper:not(.tcb-excluded-from-group-item) .tve-checkmark{--tcb-radio-size: 14px;}.thrv_symbol_74928 #lg-m6parpwe .tve_lg_checkbox:not(.tcb-excluded-from-group-item) 
Free eBook: WebRTC for Business People
Earlier this month, I updated my WebRTC for Business People ebook.
Its last update took place in 2022, during the pandemic, when everything was about working from home. There was already some work around AI, but we didn't yet have the terms Generative AI or ChatGPT. So an updated version was born – rewriting and re-recording some of the content, as well as replacing some of the showcased vendors.
I’d like to thank Ant Media, Kaleyra, Nimble Ape and WebRTC.ventures for picking up the sponsorship for this work.
Download the WebRTC for Business People ebook for free
Workshop: Generative AI & WebRTC
This week I'll hold my latest Generative AI & WebRTC workshop. It deals with the challenges and best practices – still being figured out right now – of connecting Generative AI technologies (LLMs and ASR) with WebRTC.
There are a few spots left. If you wish to join, you can enroll in the workshop through this link.
WebRTC Training Courses
In the past year or two, 3 new courses were introduced: WebRTC Security & Privacy, Low-level WebRTC protocols and Higher-level WebRTC protocols.
I am considering whether it makes sense to introduce 1-2 more courses this year as well. I don't have a final decision yet.
In the meantime, my Supporting WebRTC course is going through a nice update – quizzes are going to be added to it this month. These are going to be useful for those starting a new support role with little WebRTC knowledge and experience.
Learn more about my WebRTC courses
WebRTC Insights
WebRTC Insights has been an integral part of my work for the past 4 years now. Along with Philipp Hancke, I've been offering a premium subscription service that gives engineering teams and their product managers everything they need to stay on top of WebRTC at all times.
We've recently celebrated another year of WebRTC Insights – if you'd like to join our service for the coming year and be updated on everything technical (and non-technical) about WebRTC, just let us know.
Consulting and fractional employment
I decided to go back to my roots as part of this change – I am now open to more consulting work, getting back to what I enjoyed doing before testRTC got acquired. This also includes fractional employment of a day a month or a day a week, in the domains of product management and architecture for WebRTC/Programmable Communications.
Things I do for my clients include:
Want to learn more? Just contact me.
Entering my next era
Recently I started mentoring here in Israel – school kids working on programming and networking projects, and product managers on their path to becoming CPOs. This is fun and fulfilling in ways that are hard to put into words. Will that take a bigger place in my future? I don't know yet. Time will tell.
In the meantime, I am going to continue making my site and courses the best place to learn and upskill on WebRTC – and definitely continue assisting vendors through my various consulting services.
The post WebRTC, BlogGeek.me, 2025 & 13 years of blogging appeared first on BlogGeek.me.
I have been maintaining a WebRTC in Open Source dataset derived from GitHub event data for 10 years now. I periodically update it to look for recent trends in WebRTC activity, popular repos, and new API usage. I hosted a live stream of my 2024 review back in December where Tsahi Levent-Levi joined to help […]
The post 2024 WebRTC in Open Source Review: A Quantitative Analysis appeared first on webrtcHacks.
Debugging WebRTC media issues, especially video, often requires access to the unencrypted RTP payloads. We talked about this back in 2017 already and had a great blog post on using the libWebRTC “video_replay” tool. While that post has aged remarkably well, video_replay has improved significantly, in particular since it is now possible to create the […]
The post Capture & Replay WebRTC video streams for debugging – video_replay 2025 update appeared first on webrtcHacks.
Master the art of effective WebRTC and Zoom meetings and boost your remote communication. Get practical tips and techniques for productive virtual meetings.
Everyone is talking about RTO (return-to-office), so what better time than this to talk about remote meetings and how to conduct them more effectively? For SEO purposes, this is titled effective Zoom meetings, but frankly? These are the same best practices I'd give for any WebRTC meeting.
This is usually a blog for creators of products and services around WebRTC and communication technology, so writing something for end users is rather new to me. That said, like everyone else I do a lot of remote meetings, and most of my career has revolved around video conferencing technologies.
I follow a great Hebrew podcast about sales and marketing. In a recent episode they discussed Zoom sales meetings (you can find it here, if your Hebrew is up for it). So I took it as a signal that it was time for me to dive into this topic and write about it as well. I’ve incorporated a lot of their suggestions here, and added a few of my own.
Fix your hair
Some things don't change between in-person and remote meetings. In both, you need to look your best.
Since what others see in a video meeting is your head, make sure your hair is presentable.
While at it, check if you need to shave…
For me this is a real issue. I never combed or fixed my hair in any way. Not when going to work physically and not when working from home. That has been true for all of my adult life. I don't own or use a comb… so fixing my hair means making sure I go to the barber at least once a month (with my son, whose hair grows at the same rate as mine)
Don't go for a striped or checkered shirt
Here is where we diverge a bit from real life. Yes, stripes and checkered patterns aren't the best choice most days of the week. But they are a lot worse in a video meeting. Why? Because WebRTC and Zoom software have a hard time dealing with stripes and checkered patterns – encoding them well takes up a lot of bandwidth. They also look really bad at low resolutions.
As much as possible, aim for a solid color shirt.
I decided to level up my game a bit here. I am trying to transition from “cool” T-shirts (which I love) to polo shirts (which are ok). We’ll see how that goes
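Since this is usually a blog for people building WebRTC products, here's a quick way to see that bandwidth point for yourself. This is a toy sketch (assuming Python with numpy; real encoders use DCT-based transforms and quantization, but the intuition is the same): a solid-color patch has no frequency detail for the encoder to spend bits on, while a striped one does.

```python
import numpy as np

# Two toy 8x8 "shirt patches", pixel values 0-255:
solid = np.full((8, 8), 128.0)            # a solid-color shirt
stripes = np.tile([0.0, 255.0], (8, 4))   # thin alternating vertical stripes

def nonzero_coefficients(block, tol=1e-6):
    """Count the frequency coefficients an encoder would have to spend bits on."""
    coeffs = np.fft.fft2(block - block.mean())  # drop the average (DC) level first
    return int(np.sum(np.abs(coeffs) > tol))

print(nonzero_coefficients(solid))    # solid patch: nothing left to encode
print(nonzero_coefficients(stripes))  # striped patch: extra detail to pay for
```

The striped block leaves residual frequency energy that has to be coded in every frame, which is exactly the extra bandwidth (and blockiness at low resolutions) mentioned above.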
Put on your pants
Video meeting. No one sees your pants. Maybe someone sees your belt – also questionable. But pants?
So why put them on?
When I started off with my consulting business years ago, I made it a point to get dressed when I start my day. Working from home is great, but I needed a way to separate myself from home while at home. Dressing in clothes instead of staying in my PJs is my small ritual to mark that separation
That said, I don't change my shoes… not even when I record videos at home.
Decide on your background
Are you doing the meeting from your laundry room? Is your bed a mess? Are the kids running around in the background tearing at each other? Maybe you should blur that background of yours. Or replace it.
If you replace it, then don’t go for a tacky vacation background. We’re doing serious work here after all. Brand that background with the company’s logo and colors. That’s assuming the company provided a few such background images to use. They have – haven’t they? And if they haven’t then go and demand one. While at it, the corporate backgrounds shouldn’t be boring and tacky either, which is a challenge these days, since most are.
Or just choose a setting where you don’t really need to replace your background.
I don’t like replacing the background because it is never polished enough as a solution –
That's my background. I should probably shave to fit in a wee bit better.
And that picture? It has been there for too long. I need to go to my mom's house and hunt for one of her newer paintings – my wife hinted that there's one or two I might fancy as a replacement.
Real or fake background – make sure it works well for you. It gives an impression to others about you. I find myself looking at people’s background in meetings all the time – as a kind of a made up sociology study in my head
Take care of your microphone (and speaker)
Don't believe all the AI hype about improving audio quality.
It isn't that it doesn't work. It does. But the results? Not as good as a great microphone and good acoustics.
Get and use a decent mic.
Try not to use the over-the-head / over-the-ear headphones that come with speakers. At least not if you plan on replacing or blurring your background (for the reasons why, check above…).
For best quality? The over-the-head headphones are likely your choice. Otherwise, find a good saucer-style speakerphone or a professional mic to do the work. They cost a bit, but are definitely worth it.
Meetings are there for a reason. And that’s communicating with each other. If you can’t be heard well enough, then what’s the point?
I use a Jabra Speak speakerphone. I got it when I found out I was spending hours a day in meetings and my ears started to ache. The best meetings purchase I made at the time.
I should probably start familiarizing myself with the new Speak2 and upgrade my game…
Use a decent camera
Cameras suck. At least all those with VGA resolution only. The old ones, or the ones sold during the pandemic when there were no web cameras available (my Logitech 1080p camera has a Chinese name on it – a parallel import shtick during the pandemic).
The camera I really use? The Obsbot… I love it
It has its quirks that I had to configure (gesture controls were messing around with zoom and stuff). The automatic tracking and framing of people is great – it is a nice conversation opener when needed – on top of the camera being good enough for my needs with built-in pan, tilt AND zoom.
Just make sure you have a good camera that gives great results. Especially if you plan on doing frequent video meetings
Room lighting should make you shine
You picked a shirt? Took care of your haircut and shave line? Decided on the background to use? Got a good camera? Great! What about lighting?
For my own recordings (not meetings mind you), I started using a light ring – it just gets better results.
If you can, make sure the lighting in the room you use puts you… in a positive light (pun intended). In general, back lighting is bad. Front light at an angle is good. Overhead light is questionable. Multiple light sources (placed strategically, which will be a challenge for most) are great.
There are enough resources online about lighting for video conferencing (which I don't usually follow but should). If you want to learn more, the Webex blog post about lighting is a good place to start
Location, location, location
Just like with hotels, your location matters.
I'd like to look at the location aspect from two different angles – network and ambiance. We already touched on the background angle, which is also about… location.
Network
The network connection where you plan to do the meeting matters. It needs to be great. Pristine. Flawless.
When we bought our apartment some 8 years ago, I decided in advance which room would be my office. I then made sure to have it wired to the home internet access point directly. And then I made sure to wire my desktop PC to it. To top it off, I placed the WiFi access point inside the room itself, so that if I need to use a laptop – it will have the best possible signal in that room as well.
Why go to all that fuss? The better the network, the higher the stability and quality of your online meetings are going to be.
Make sure the network you use is up for the task. For me, that’s about having fiber to the home with the highest profile I can get in my neighborhood
Ambiance
Do you live in a glass house? Or just a room with too many glass walls or windows? That’s going to affect the meeting with what is known as reverb (use that word whenever you want to sound smart about audio).
Reverb, I've been told, is the worst and the hardest to fix. Recently I recorded something with the wrong mic. Going to Fiverr for an expert to fix this manually, only 1 out of 4 agreed to do the work. The results were "fine". But then my video editor (=my son) said I didn't sound like myself and at points it was unintelligible. So I had to record it all over again – with the correct mic (did I mention you should have a good mic?).
I do these recordings in the living room because I want the bricks background behind me for some of the videos. But for that, I need to make sure I get a high quality lapel mic so that there’s no reverb effect.
Neighbors? Crying babies? Cars? Lawn mower? You don’t want these to cause issues. From time to time they’re fine, but if they become too frequent in your calls, you should consider switching rooms or places.
Think of the best place that is quiet enough and plays nice with your mics for your meetings
Have more prep time
Up until here, we dealt with the obvious and easy tips for better meetings. These are the things everyone tells you in one way or another (and most of us ignore). From here, I want to touch on a few points that were raised during the podcast episode I listened to, or that popped into my mind during that episode.
We’ll start with the prep time angle.
When we do remote meetings, we save the commute time – going to and coming back from that meeting. That time saved? It shouldn't be used for more meetings. Or for more work. It should first be used to prepare better for the meeting itself.
That’s important.
We can now come better prepared to meetings just because we have a bit more time for them.
An example? Giving a salesperson more meetings with more leads likely won't mean they make more money for the company. In the end, your company is better served by a salesperson who prepares a lot better for meetings with leads that have a higher potential of becoming customers.
Just increasing the funnel and bringing in more leads that might not be validated properly, just because sales now have more time since they're not commuting? Bad idea.
The same can apply to other types of meetings just as easily. The more prepared you are for a meeting, the more effective it will be and the better the outcome you'll get from it.
Quality and not quantity. Use the extra time to prepare better for the meetings you have
Ignore your smartphone
You are probably like me. You glance at your smartphone every few seconds. I bet you even did it just now, if you're reading this on your screen with the smartphone next to you. Trying to see if there's a new notification there waiting for you. The dopamine rush is intoxicating.
Some of us still don’t glance at our smartphone when we are sitting in a room talking to others in person. The same courtesy needs to be extended to the people on the screen in a remote meeting. Why? Because we care. And if we don’t, then why are we having that meeting in the first place?
WhatsApp messages, calls, Slack and all the rest? They can and should wait. Just put your smartphone face down. Ignore it during the meeting.
People know when you glance off the screen at something else, so if you do it, make sure you explain why and that the explanation makes enough sense
Shut off notifications on your device
Similar to your smartphone, but more nuanced.
Doing meetings from a laptop or a PC means there are lots and lots of applications vying for your attention. Mail applications, Slack and similar usually show a popup by default when there are incoming messages.
Disable them all.
Don't let anything pop up unattended from applications.
These might embarrass you the moment you share your screen. So just don’t let this be an option at all.
For good measure, whenever possible, share only a specific application or tab and not the whole screen – to avoid embarrassment.
Go now to the settings of Outlook and Slack. Change the configuration there to not show popups for anything. I'll wait
Create a summary… at the beginning of the meeting
Had time to prepare for the meeting? Use it also to create the summary.
Summary? Done in the preparation phase and not after the meeting? Yes.
What's the point in having a meeting if there's no objective or something you wish to achieve in it? Figuring out what your objective for the meeting is makes a great starting point. From there, go think about what a summary of a meeting that meets your objective would look like.
Why? Because doing so would:
So: you think about the meeting, its objective and how its summary looks when you prep for the meeting. You then go about conducting the meeting, always thinking of your summary and how it changes. And once done – you should have most of it already in your head. All that is left is to write it down. In some cases, it might even be better to record a video summary – especially for customer calls (that's upping the game).
Why go to all the trouble? Because we’re aiming for effectiveness. We’re here to get things done and not to just sit in meetings
That WebRTC angle
Sorry, but I couldn't help myself.
While this article has nothing directly to do with WebRTC (or Zoom, for that matter), I can't not say something about it.
Zoom was the exemplification of why WebRTC isn’t good – they were virtually the only vendor not using WebRTC even in the browser. Guess what? They are using it now…
Up your game
As the old saying goes – do what the teacher says, not what the teacher does
I don't always follow the advice in this article, and I probably should. We should all up our game in these online remote meetings.
What are you going to take from here in preparation for your next meeting?
The post Conducting effective Zoom and WebRTC meetings appeared first on BlogGeek.me.
What’s in store for WebRTC in 2025? Dive into the trends and predictions, from the rise of video use cases to the influence of Generative AI.
Time to look at what we accomplished in 2024 and think about what's ahead of us in 2025 when it comes to WebRTC.
When we look ahead, there are several notable things that glare at us immediately:
Last year I was a Senior Director of Product Management at Cyara for all things testRTC. I am still doing that but only part time. The rest? Writing stuff here, courses, workshops, reports and fractional employment consulting on WebRTC and communications (and yes, I can help you if you need me)
If you are interested, you can read my last year’s WebRTC predictions for 2024
Let’s get started here…
One thing that happened in 2024 was me doing more videos. It started with last year's WebRTC predictions video, continuing with my monthly chats with Arin Sime, weekly Q&As and programmable video videos.
It meant I was “forced” to do a video for my WebRTC trends and predictions for 2025:
Read on below to get into the details.
Welcome to the era of Gen AI in WebRTC
We are well into the era of differentiation:
Last year, I tried holding on to the era of differentiation in WebRTC… I even suggested it was still relevant. No more.
Everyone is now running after ChatGPT, LLM, OpenAI, … Connecting WebRTC to bots and agentic AI. All the money, attention, focus and resources are there.
The end result? We don’t care about differentiation as much. We care about generative AI and how to bolt it on and into our WebRTC products and services.
Oh, and since I am doing videos now, there’s a video (and an article) on this new Gen AI era of the WebRTC evolution:
What does WebRTC use look like?
We are now at 4 times pre-pandemic usage for the third year in a row. We can safely say that this is stable and not going away. Sadly, it isn't growing either. Will Generative AI in our new era change that? Probably. Time will tell.
Twilio, Programmable Video and the future of video services
Twilio announced sunsetting their Programmable Video service in 2024. Then extended it until 2026. Then retracted it. Twilio will be keeping Programmable Video and focusing on customer engagement. Yay.
This series of decisions has hurt the market:
Somehow, this is confusing would-be customers as well as pissing off existing ones – should they stay with Twilio? Build their own? Search for an alternative vendor?
The loser here is the market:
Now that Twilio is back to Programmable Video, we need to see how this will affect everyone else.
How did I do with my 2024 WebRTC predictions?
I spent a considerable amount of time on my predictions in 2024. Let's see how well I did.
#1 – libWebRTC (and the future of WebRTC)
My money here was on house cleaning. This is roughly what has been happening.
While I tried to argue for work around Insertable Streams, collaboration, AV1 and Voice AI, I believe the majority of the focus ended up around AV1… with the potential of seeing HEVC in 2025 in some limited shape and form.
Look at our own WebRTC Insights service, where we track everything and anything WebRTC related. For our recent 4-year review, we looked at the trend of how much we covered in each year:
In terms of issues and bugs, we’re at the lowest point in the past 4 years. This may indicate the stability and robustness of the code – but it may also indicate the level of investment that Google is putting into libWebRTC (=dwindling).
libWebRTC has seen better days than what it had in 2024.
#2 – Machine learning and media processing
Hard to be wrong by stating the obvious…
I indicated that machine learning, LLM and generative AI will be front and center for investments in 2024. And well… It turned out to be true. How (un)surprising.
#3 – The year of Lyra and AV1
Almost.
I decided to take a bet here and risk it a bit.
First "bet" here? That while AV1 is still too early a video codec for most, we would still see it in commercial services. And yes, Google Meet is using it. A bit at least.
Second "bet"? That Lyra or a similar AI voice codec would be launched by someone in web browsers. Didn't happen. We had proprietary AI voice codecs introduced by multiple vendors (Cisco Webex and Microsoft Teams), but none that run in web browsers.
#4 – WebTransport as a real alternative
Nope. Didn't happen.
I assumed we would see a commercial service using WebTransport instead of WebRTC in production for streaming… I don’t think that happened. And if it did, I am not aware of it.
We’ve also seen a “replacement” of the term WebTransport for media streaming use cases with the term MoQ – Media over QUIC. QUIC is the transport protocol used by WebTransport, so in a way, MoQ is Media over WebTransport… which is what I referred to here.
The end result? We’re now talking about MoQ and had a few people experiment with it, including Lorenzo Miniero from Meetecho in his great series of posts about MoQ (here’s the last one in the series).
One would expect Zoom to show enough interest to run this in production. But what really happened is that Zoom is now on track to support… WebRTC.
But then again, did this happen in 2024? Unfortunately, not yet.
#5 – M&As and shutdowns
Didn't see this one coming…
I predicted vendors will leave the market or make a small exit by selling or being acqui-hired.
I nailed it – we had Mux shut down its WebRTC video communication service. And Dolby.io keeping its live streaming service but shutting down its video communication service.
But then… we had Twilio double back on their decision to sunset Programmable Video, keeping it alive and switching focus towards customer engagement. Who would have thought… definitely not me
WebRTC predictions for 2025
Enough about 2024. That's old news. Let's see what's going to happen with WebRTC in 2025.
#1 – libWebRTC (and the future of WebRTC)
Like last year, I'll start with libWebRTC, Google's popular and important library implementing WebRTC.
Sadly, nothing is going to change this year. libWebRTC will stay in maintenance mode. Any improvements made will be because Google needs them for Google Meet.
The number of external contributors to libWebRTC will stay miserably low to non-existent (besides Philipp Hancke I guess).
This state cannot continue forever. One of two things will eventually happen:
Neither approach can or will happen in 2025. It is too early for that, due to multiple reasons that I won't list here just now.
What other scenarios are out there and truly unlikely?
Machine learning is nice. Generative AI, Conversational AI and Agentic AI are nicer.
We are still going to have developers use WebAssembly, but that won't be as interesting. It wasn't as interesting as I thought it would be in 2024, and will likely be less so in 2025.
Why?
Because LLM technologies and generative AI run predominantly on servers these days – in the cloud. Edge devices usually don't have the memory needed to hold the huge models – and when they do, the models still don't fit that well into web pages in browsers.
What we will see in 2025 is a continuation of the past couple of months – companies figuring out how best to connect to cloud LLMs and generative AI with voice and video in real time. That means focusing on lower latencies for these specific use cases.
I am focusing here on what companies will do and not the outcomes, because I think we won’t see too much of an outcome in Agentic AI in 2025 yet. It isn’t that these solutions won’t be in production in 2025 – they will. But they won’t amount to a large percentage of the interactions we will have. We are still in early days with this one.
Oh – and it will also mostly be around voice in 2025. A few video vendors may try to “play” this game with generated virtual humans. But the majority will be sticking to voice for now.
#3 – Audio and video codecs are interesting again
In a different way though.
Last year I thought it would be about AV1 and Lyra.
I think differently this year…
We have AV1. Vendors are adopting it. Things will take time here. But it is progressing nicely. As nice as can be expected from such a technology.
But… we see indications of HEVC coming to WebRTC as well.
So I’d like to make my prediction here slightly different – we are going to see quite a few vendors switching from one video codec generation to another in 2025:
The migration to AV1 won’t be full. Each vendor using it will also keep other video codecs – for certain scenarios, use cases and devices. AV1 will still be limited in its deployment, but the envelope of the use cases it will encompass will grow each year.
For audio, there are 3 possible scenarios:
My wish is for libopus to be integrated. This should be the least effort for the market as a whole to improve things. Philipp says it won't happen due to the binary size of this libopus version (~3MB). I hope he is wrong…
If Google doesn't do this for libWebRTC and Chrome, I'd suggest that Microsoft, Apple and Mozilla do it for their versions of libWebRTC in their browsers and stick it to Google, along with publicly announcing such support… Unlike Google, they have nothing to lose
#4 – WebTransport and MoQ will wait a wee bit
A reversal of last year's prediction…
WebTransport is still quaint. MoQ has a nice sound to it.
None of them are happening just yet.
And they won’t be happening in 2025 either.
These things need time. And I was a bit too optimistic about them last year.
Likely because what we see now are developers experimenting with the technology, which got me enthusiastic and hopeful to see something interesting. But it is too early. Hopefully, I'll change my mind for 2026.
#5 – M&As and shutdowns
2023. Then 2024. And now 2025. It is all one and the same…
The recession is still here. Trump is the next president. Inflation is still high. Interest rates are high, though there are indications they will continue to be lowered moving forward. But this is far from over. Most countries already carry too much debt. How they are going to solve this is beyond me. It is good that I am an analyst covering real time communications and not the economy.
So yes. More vendors are going to be shutting down or getting silently acqui-hired. Or sold at lower valuations than they expected, just to save face. For the most part, investments in and around WebRTC are going to be kept under tight budgets, enhanced only for those who can tell a compelling Generative AI story for their WebRTC application.
The end result? Still a turmoil. Still a market of buyers.
Welcome to 2025
2025 is going to be exciting, but only for those doing… Generative AI:
There are a couple of things that I am doing that you may want to consider at this point:
The post My WebRTC predictions for 2025 appeared first on BlogGeek.me.
Uncover the synergy between Programmable Video, Prebuilt, and marketplaces. Explore the role of video APIs in accelerating development.
Programmable Video is a known quantity. It is part of the CPaaS movement where in this case, video APIs are used to enable developers to build their applications faster. WebRTC is also a part of all this.
Prebuilt is another known concept, but unlike Programmable Video, it is still reshaping itself. Prebuilt is about embedding the UX/UI component of the video interactions, and not just using an API – it makes the faster development of Programmable Video… well… faster.
Then there are (or were) marketplaces. Taken to the domain of Programmable Video, they assume a slightly different shape still.
What’s there between Programmable Video, Prebuilt and marketplaces? Where are we headed with this? This is something I want to explore in this article.
Table of contents
This isn’t my first foray into the Prebuilt market. Even before WebRTC, I was fascinated by what a lowcode/nocode solution looks like for a Programmable Video offering.
At RADVISION, I’ve been in charge of defining our cloud vision for developers. What we did was license protocol stacks to developers building their own voice and video communication applications. That was before CPaaS was called CPaaS. And before WebSocket was part of the web.
Later on, I’ve written about embeddable solutions and then published an ebook on lowcode/nocode for video communications – and the Prebuilt solutions they prescribed for us. That ebook is still relevant today and can be downloaded freely.
Marketplaces 101
Let’s switch gears a bit. Before we head back to programmable video and lowcode/nocode solutions, I want to talk about a different topic: marketplaces.
When a vendor wants to build an ecosystem around its solution, one of the ways of doing that is to introduce a marketplace. The higher you go in the food chain, the more likely you are to find a marketplace as part of the complete offering.
I thought it would be best to explain what a marketplace is by looking at one – just searching for an example gives you the gist of it. This is what I got when searching for “AWS marketplace” on Google:
For me, a marketplace usually means:
The biggest marketplaces today are probably Apple’s App Store and Google Play. There’s the Microsoft Store for Windows applications.
Then there are the marketplaces of all the big IaaS vendors (Amazon AWS, Microsoft Azure and Google Cloud).
Cloud contact center vendors? All the big ones have marketplaces (I just searched for NICE, Five9 and Genesys to confirm).
Zoom has their own App Marketplace.
To complete this part – marketplaces appear once a vendor is big enough and looking to encompass third party solutions and services as part of its own offering.
Oh, and if you don’t understand why this is here, then just think what a marketplace for a Programmable Video Prebuilt offering may look like and mean.
Lowcode/nocode in Programmable Video: The next generation
When I wrote the lowcode/nocode ebook, most of the market for Prebuilt revolved around CPaaS vendors who were just adding a UI layer on top – anywhere from a source code reference application to a higher level of abstraction with an API and documentation.
Since then, the market has evolved. We are now seeing vendors coming from a different origin story into the domain of CPaaS and Programmable Video, and these come with a different view of Prebuilt. Here’s how I explained recently the origin stories:
The vendors with a SaaS origin story started life with a full fledged video meetings application – UX/UI – the whole shebang. For them, going down the food chain towards Programmable Video meant their focus was Prebuilt first and then the rest of the low level APIs. As such, they brought with them some new qualities and capabilities not often found in Prebuilt Programmable Video solutions up to that point.
That brings us to the next generation of what Prebuilt is in Programmable Video, and how this market is evolving and shaping up towards the future.
A few notable examples
Here are a few notable examples of what is changing in the Prebuilt space, and how this shapes what future Prebuilt solutions in the programmable video domain will look like.
Supporting multiple languages
Moving from the API layer to the UX/UI layer brings with it a need to deal with different languages and internationalization.
That means that the text messages displayed on the screen and shared with the user need to be conveyed in different languages. Which ones? That depends on the vendor. Each vendor offers a different set of languages, usually based on the customer base it has.
This is more than just text translation – there’s the right-to-left direction of the text in some languages (Arabic and Hebrew), numbering and date conventions, and the layout of the screen – placing text in a given area or on a specific button might require changing its size.
For a Prebuilt service, there’s also (maybe?) the ability to let the customer change the text being displayed, and that needs to be done – again – in multiple languages.
It may sound obvious, especially if you’ve built consumer applications. But for those focused on developing APIs for other developers, this is a new type of headache they need to deal with.
Prebuilt in Programmable Video is now coming in multiple languages from some vendors. Others may need to follow.
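Part of this burden is already covered by the browser’s built-in Intl APIs, which handle locale-aware dates and numbers. A minimal sketch of what a Prebuilt UI layer would lean on (the date and numbers here are arbitrary examples):

```javascript
// Sketch: the built-in Intl APIs cover part of the localization burden
// a Prebuilt UI layer takes on – date formats and number conventions.
const meetingDate = new Date(Date.UTC(2025, 0, 15, 14, 30));

function formatFor(locale) {
  return {
    date: new Intl.DateTimeFormat(locale, {
      dateStyle: "long",
      timeZone: "UTC",
    }).format(meetingDate),
    participants: new Intl.NumberFormat(locale).format(1234),
  };
}

formatFor("en-US"); // { date: "January 15, 2025", participants: "1,234" }
formatFor("ar-EG"); // Arabic month names and Eastern Arabic numerals
```

What Intl doesn’t cover – right-to-left layout, button sizing, customer-editable strings – is exactly the part each Prebuilt vendor has to build and maintain itself.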
A user construct
For the most part, CPaaS and Programmable Video vendors don’t think in terms of users – mostly minutes, and peers or devices. Got a meeting to connect to? You publish your streams and subscribe to the streams of others. They aren’t users in the sense that they aren’t identified or known users. Their identity, if any, is decided and managed by the application on top.
Programmable Video offers no memory or notion of the users, their preferences or history.
Prebuilt? Sometimes…
Some Prebuilt solutions are starting to show signs of dealing with users – their identification and authentication. Sometimes even offering different permission types within meetings based on who they are.
I am not sure how this will hold moving forward, but it is something to track and contemplate if you are investing in a Prebuilt offering.
Calendar integration
Meetings are sometimes done on a set schedule. And that schedule means there’s a calendar involved.
Programmable Video doesn’t have calendars integrated into it, so adding external ones via partnerships might make sense – and it does for some of the Prebuilt vendors.
The ones adding such an integration into their Prebuilt solution are mostly those with a SaaS origin story. They see such requirements from their users and then translate them to their embedded offering as well.
Transcriptions, translations, summaries and everything AI
Like calendars and multiple languages, there are other meeting features that aren’t a “classic” match for Programmable Video APIs but make sense for Prebuilt. These include the gamut of handling the speech to text side of things – the ability to transcribe, translate, generate summaries, extract action items, etc.
All these are things that Prebuilt solutions for Programmable Video are introducing now. And again, it comes mostly from those with a SaaS origin story.
While these are getting better and more accurate due to LLM and generative AI, I think it is worth separating the two. Which leads me to the next thing – LLMs.
How will LLM, conversational AI and bots fit in
With the introduction of generative AI, powered by the concept of LLMs, we’ve seen huge amounts of money poured into this space. This is geared and focused towards the creation of conversational AI solutions along with voice and video bots. How this will affect the programmable video space is yet to be seen.
Programmable Voice and Video goes generative AI
OpenAI just released a Realtime API for ChatGPT. This is WebSocket based and not easy enough to use for live interaction from browsers or end devices.
This left a kind of a gap in the market, which a lot of CPaaS vendors and Programmable Video vendors have rushed to fill with an interface of their own connecting to OpenAI’s Realtime API. We’re tracking these as part of the WebRTC Insights service:
The reason for this rush is threefold:
What is missing here, though, is an answer to this question: once OpenAI releases a decent realtime API that fixes the gaps (think WebRTC interface), where does that leave all the Programmable Voice (and Video) vendors for the 1:machine use case?
Will they need to compete head on with LLM technology vendors for developer mindshare (and pocket) or will they still be viewed as viable partners?
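To make the gap concrete, here is a minimal sketch of the kind of glue these vendors provide – framing raw audio into the JSON events OpenAI’s WebSocket-based Realtime API expects. The “input_audio_buffer.append” event type follows OpenAI’s published documentation; the helper function itself is our own illustration:

```javascript
// Sketch: framing captured PCM audio for a WebSocket-based realtime API.
// "input_audio_buffer.append" follows OpenAI's documented Realtime API
// event shape; treat the helper itself as illustrative glue code.
function pcmChunkToAppendEvent(pcmChunk) {
  return JSON.stringify({
    type: "input_audio_buffer.append",
    audio: Buffer.from(pcmChunk).toString("base64"),
  });
}

// A vendor's bridge would call this for every ~20ms audio frame coming
// off a WebRTC track, then ws.send() the result over the socket.
```

The hard parts – echo cancellation, jitter handling, interruption detection – live around this glue, which is the layer CPaaS vendors are racing to own.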
Prebuilt and generative AI
Would generative AI enable plugging machine intelligence instead of humans into these conversations, in an attempt to focus on specific industries and market niches? Or would it be more about interfacing with third parties who bring the machine intelligence piece from “elsewhere”?
More importantly to this article, how will that fit into the world of Prebuilt solutions? On the one hand, this can keep developers away from adopting Prebuilt approaches, as these may or may not be able to cater for the latest approach that comes along with generative AI. Using Prebuilt may be viewed as a way to stick with the best practices of the video conferencing domain. But we are at an inflection point, where trying to figure out what conversational AI really means and what it will look like in the future is practically like writing best practices from scratch. Staying at the forefront here might mean skipping Prebuilt and going at least one level lower in the abstraction stack.
On the other hand, going Prebuilt might mean having the ability and resources needed to figure out how to add conversational AI to such a solution, assuming it is flexible enough. But how does one know which Prebuilt solution is going to be flexible enough in a domain that is only now being defined?
And maybe, going Prebuilt might mean not needing to deal with this new technology front, and instead, having it provided by the Prebuilt vendor itself – at some (near) future point in time.
More questions here than answers.
The challenge of a niche focus
A word of caution though. Taking the Prebuilt strategy means diving into a niche market.
If you are developing a Prebuilt offering, then know that not all businesses are going to need or align with your offering. Each has its own unique requirements, many of which you are unlikely to be able to cater for. It means knowing and understanding that your potential target market is smaller, but also likely different in nature than the traditional programmable video market.
For those looking for a solution, choosing a Prebuilt alternative means subscribing to the set of features and capabilities provided by that specific vendor. At its core, a Prebuilt offering is less generic and more opinionated. You essentially get what the vendor thinks makes sense. It might be the common sense best practices that the vendor baked into the solution, but that doesn’t mean it fits your needs exactly. In some cases, using a more generic programmable video offering in the form of a video API might be the better option.
A hybrid approach
Some vendors have decided to enjoy both worlds. They do so by offering a low level generic API alongside a higher level Prebuilt construct.
How they go about doing that is different and interesting. It is also explained in the video above. They might start with a low level API, adding a Prebuilt solution on top. Or rather start with a kind of a SaaS offering of video communications, later on creating a Prebuilt solution from it and further down the road introduce a lower level API for it as well.
As time goes by and the market matures, we will see more vendors taking up the hybrid approach.
We are seeing this today with CPaaS where quite a few vendors offer both a generic API and a drag and drop Flow/Studio interface.
What’s next
If you are into this domain and need assistance – be it validating the work you are doing with your own APIs and lowcode/nocode solution, or deciding which vendor to work with for your application – reach out to me. I can help.
The post The future of Programmable Video: Prebuilt and marketplaces appeared first on BlogGeek.me.
Tuesday, December 10 @ 5PM CET / 11AM ET / 8AM PT / 16:00 UTC Join Chad Hart, Editor of webrtcHacks, for an analysis of WebRTC trends in GitHub, StackOverflow, and other open-source communities. Leveraging advanced quantitative analysis techniques, this talk examines millions of GitHub events and developer activity data to uncover key trends […]
The post Upcoming Livestream 10-Dec: 2024 WebRTC in Open Source Review appeared first on webrtcHacks.
Unlock the power of WebRTC in the era of generative AI. Explore the perfect partnership between these groundbreaking technologies.
I am working on my WebRTC for Business People update. Going through it, I saw that the slide I had depicting the evolution of WebRTC had to be updated and fit to today’s realities. These realities are… well… generative AI.
Here are some questions I want to cover this time:
Let’s dive into it, shall we?
Table of contents
I had the above done about 5 years ago for the first time. Obviously, it had only the first 3 eras in it: Exploration, Growth and Differentiation –
As we are nearing the end of 2024, we are also closing the chapter on the Differentiation era and starting the Generative AI one. The line isn’t as distinct as it was in the past, but it is there – you can feel the difference in the energy inside companies today and where they put their focus and resources: it is ALL about Generative AI.
Why Generative AI? Why now?
OpenAI introduced ChatGPT in November 2022, making LLMs (Large Language Models) popular. ChatGPT enabled users to write text prompts and have “the machine” reply with answers that were human in nature. The initial adoption of ChatGPT was… instant – faster than anything we’ve ever seen before.
Source: Kyle Hailey
This validated the use of AI and Generative AI in a back and forth “prompted” conversation between a human and a machine. From there, the market exploded.
If you look at the WebRTC domain these days, it is kinda “boring”. We’ve had the adrenaline rush of the pandemic, with everyone working on scaling, optimization and getting to a 49-view magic squares layout. But now? Crickets. The use of WebRTC has gone down drastically after the pandemic – still a lot higher than pre-pandemic times, but lower than the peak. With usage going back down, companies’ investments shrank with it. The world’s turmoils and instabilities aren’t helping here, and inflation stifles investments as well.
So a new story was needed. One that would attract investment. LLM and Generative AI were that story, powered by the popularity of OpenAI’s ChatGPT.
This is such a strong pull that I believe it is going to last for quite a few years, earning it an era of its own in my evolution of WebRTC view.
The need for speed: how GenAI and LLM fit so well with WebRTC
ChatGPT brought us prompting. You ask a question in text. You get a text answer back. A ping pong game. Conversations are somewhat like that, with a few distinct differences:
So there’s a race going on, where work is being invested everywhere in the Generative AI pipeline to reduce latency as much as possible. I touched on that when I wrote about Open AI, LLM and WebRTC a few months back.
Part of that pipeline is sending and receiving audio over the network, and that is best served today using WebRTC – WebRTC is low latency and available in web browsers. How vendors are designing their interfaces for audio and LLM interactions isn’t the most optimized or simple to use for actual conversations, which is why many CPaaS vendors are adding that layer on top today. I am not quite sure that this is the right approach, or how things will look a few months out. So many things are currently being experimented with and decided.
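To see why every stage of the race matters, here’s a back-of-the-envelope latency budget for a single voice-bot round trip. All stage numbers below are illustrative assumptions, not measurements:

```javascript
// Back-of-the-envelope latency budget for a voice-bot pipeline.
// The per-stage numbers are illustrative assumptions, not measurements.
const stagesMs = {
  captureAndEncode: 30,
  network: 50, // WebRTC transport, one way
  speechToText: 300,
  llmFirstToken: 500,
  textToSpeech: 200,
  playback: 30,
};

const total = Object.values(stagesMs).reduce((a, b) => a + b, 0);
// A response should land well under ~1 second to feel conversational;
// this hypothetical budget sums to 1110ms – hence the race to shave latency.
```

No single stage dominates, so shaving one only helps so much – the race is about attacking all of them, with WebRTC keeping the transport legs as short as possible.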
What does that mean for WebRTC in 2025 and beyond
WebRTC has been in a kind of maintenance status for quite some time now. This isn’t going to change much in 2025. The most we will see is developers figuring out how to best fit Generative AI with WebRTC.
Some of the time this will be about the best integration points and APIs to use. At other times, it is going to be about minor tweaks to WebRTC itself – maybe even introducing a new API or two to make it easier for WebRTC to work with Generative AI.
More on what’s in store for us in 2025, in a few weeks time. Once I actually sit down and work it out in a separate article.
I am here to help
If you are looking to figure out your own way with Generative AI and WebRTC, then contact me.
I am working on a brand new workshop titled “Generative AI and WebRTC” – you can register for the webinar now and reserve your spot.
The post Generative AI and WebRTC: The fourth era in the evolution of WebRTC appeared first on BlogGeek.me.
Stay informed about the latest trends and insights in WebRTC technology. Our unique service offers expert analysis and valuable information for anyone developing using WebRTC.
We are into our 5th year of WebRTC Insights, the premium biweekly newsletter I’ve been doing along with Philipp Hancke. Every two weeks, we send it out to our subscribers, covering anything and everything that WebRTC developers need to be aware of, guiding them on the things important to them. We include bug reports, upcoming features, Chrome experiments, security issues and market trends.
The purpose of it all? Letting developers and decision makers who develop with WebRTC focus on their own application, leaving a lot of the issues that might surprise them to us. We give them the insights they need before they get complaints from customers or get surprised by their competitors.
Each year Philipp asks me if this might be our last one, because, well, let’s face it – there are times when the newsletter is “only” 7 or 8 pages long without a lot of issues. The thing is, whatever is in there is important to someone. I myself took note of something Philipp indicated in issue #102 to be sure to integrate it into our testRTC products.
Why is WebRTC Insights so valuable to our clients?
It comes down to two key benefits:
We help engineers and product teams save time by quickly identifying WebRTC issues and market trends. Instead of spending hours searching the internet for clues or trying to piece together fragmented information, we deliver everything they need directly – often several days before their clients or management bring up the issue.
Beyond saving time, we help clients stay focused on what matters most. Whether it’s revisiting past issues, tracking security concerns, understanding Google’s ongoing experiments, or staying updated on areas where Google is investing, we make it easy for them to stay informed.
If I weren’t so humble, I’d say that for those truly dedicated to mastering WebRTC, we’re a force multiplier for their expertise.
WebRTC Insights by the numbers
Since this is the fourth year, you can also check out our past “year in review” posts:
This is what we’ve done in these 4 years:
26 Insights issued this year with 250 issues & bugs, 141 PSAs, 13 security vulnerabilities, 312 market insights all totaling 235 pages. We’re keeping ourselves busy making sure you can focus on your stuff.
We have covered well over a thousand issues and written close to 1,000 pages so far.
2024…
In the past year, we’ve seen quite a steep decline in issues and bugs that were filed and that we talked about – from our peak of ~450 a year in 2022, to ~320 in 2023 and now 250 in 2024:
Year      | Issues we reported on | Issues filed (libWebRTC / Chrome)
2020-2021 | 331                   | 658 / 579
2021-2022 | 447                   | 549 / 639
2022-2023 | 329                   | 515 / 557
2023-2024 | 250                   | 361 / 420
This correlates with the overall decline in the activity around libWebRTC, which has dropped below 200 commits per month in the last year:
This is more visible by looking at the last three years:
The Google team working on WebRTC is now just keeping the lights on. While commit numbers stayed roughly the same, external contributions now make up approximately 30% of the total. There’s little in the way of innovation and creativity. Most of the work is now technocratic maintenance, to put it bluntly…
The reality is that libWebRTC is mature and good enough. It is embedded inside Chrome, with over a billion installations, and any change in it has a wide-ranging effect on many applications and users. In the language of Werner Vogels, the CTO of AWS, the blast radius of a bug in libWebRTC can be rather big and impactful.
Let’s dive into the categories of our WebRTC Insights service, to figure out what we’ve had in our 4th year.
Bugs
In this section we track new issues filed and progress (in the form of code changes) for both libWebRTC and Chromium. We categorize the issues into regressions, for which developers need to take action, and insights and features, which inform developers about new capabilities or changes to existing behavior. We also assign a category such as “audio”, “video” or “bandwidth estimation” to make it easy for more specialized developers to only read about the issues affecting their area.
A good example of regressions this year were several regressions in the handling of H.264:
In a nutshell, relatively harmless and very reasonable changes to the way libWebRTC deals with H.264 packetization caused interop issues for services that use H.264 and rely on some of its more exotic features. And those changes made it all the way to Chrome stable which suggests a lack of testing in Beta and Canary versions.
We also track progress on feature work such as “corruption detection” and speculate on why Google is embarking on such projects:
Google migrating both Chromium and WebRTC from the Monorail issue tracker system to the more modern Buganizer caused us a little bit of a headache here.
PSAs & resources worth reading
In this section we track “public service announcements” on the discuss-webrtc mailing list, webrtc-related threads on the blink/chromium mailing list, W3C activity (where we often shake our heads) and highly technical blog posts which do not fit into the “market” category.
A good example of this is Google experimenting with a new way to put the device permissions into the page content which we noted in May, followed by seeing how Google Meet put this into action in November. The process for this is “open” but as a developer you need to be aware of what is possible and being experimented with by Google to keep up.
We also used to track libWebRTC release notes in this section but stopped sending those earlier this year when the migration from Monorail to Buganizer broke the tooling we had. Not many folks missed them so far.
Experiments in WebRTC
Chrome’s field trials for WebRTC are a good indicator of which large changes are rolling out – those that either carry some risk of subtle breakage or need A/B experimentation. Sometimes, those trials may explain behavior that only reproduces on some machines but not on others. We track the information from the chrome://version page over time, which gives us a pretty good picture of what is going on. Most recently we used it to track how Google is experimenting with a change in getUserMedia that alters how the “ideal” deviceId constraint behaves.
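The semantics at stake can be illustrated with the constraints themselves: an “ideal” deviceId lets the browser fall back to another device, while “exact” rejects the call if the requested device is unavailable. The helper below is our own illustrative sketch, not part of any API:

```javascript
// Illustrative helper contrasting "ideal" vs "exact" deviceId constraints.
// With "ideal", getUserMedia may pick a different device if the requested
// one is unavailable; with "exact", the call rejects instead.
function buildAudioConstraints(deviceId, { strict = false } = {}) {
  if (!deviceId) return { audio: true, video: false };
  return {
    audio: { deviceId: strict ? { exact: deviceId } : { ideal: deviceId } },
    video: false,
  };
}

// Usage (in a browser):
// navigator.mediaDevices.getUserMedia(buildAudioConstraints(savedId));
```

A field-trial change to how “ideal” is resolved is exactly the kind of thing that breaks applications that stored a deviceId and assumed they would get that device back.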
See this issue for more information about the change. We also waved goodbye to the longest-lasting field trial, which had been with us the entire four years – enabled at 100% and causing different behavior in Chrome versus Chromium-based browsers that don’t use Google’s field trial configuration, such as Microsoft Edge:
WebRTC-VP8ConferenceTemporalLayers
It was removed (without the default value changing) in this commit. Which is great, because it had side effects on other codecs like H.264.
WebRTC security alerts
We continued tracking WebRTC-related security issues announced in the Chrome release blog. We had eight of them this year – all but one related to how Chromium manages the underlying WebRTC objects, plus one vulnerability in the dav1d decoder (as we predicted last year, codec implementations are getting more eyes on them).
WebRTC market guidance
What is happening in the world of WebRTC? Who is doing what? Why? When? Where?
We’re looking at the leading vendors, but also at the small startups.
There are probably 3 main areas we cover here:
From time to time, you’ll see us looking at call centers, security and privacy, governance, open source, etc. All with a view from the prism of WebRTC developers and with an attempt to find an insight – something actionable for you to do with that information.
The purpose of it all? For you to understand the moves in the market as well as the best practices being defined. Things you can use to think over your own strategy and tactics. Ways for you to leave your company’s echo chamber for a bit. All in the service of improving your product at the end of the day.
With the shift towards an ever-maturing WebRTC market, the market insights section is growing as well. We expect this to continue in the coming year.
Join the WebRTC experts
We are now headed into our fifth year of WebRTC Insights.
On one hand, there are fewer technical issues you will bump into. But those you do hit are going to be more important than ever. Why? Because the market is maturing and competition is growing.
So if you’re working with WebRTC and not subscribed to WebRTC Insights yet – you need to ask yourself why that is. And if you might be interested, let me know – I’ll share a sample issue with you, so you can see what you’ve been missing out on.
The post Four years of WebRTC Insights appeared first on BlogGeek.me.
Twilio Programmable Video is back. Twilio decided not to sunset this service. Here’s where their new focus lies and what it means to you and to the industry.
A year ago, Twilio announced sunsetting its Programmable Video service. Now, it is back from the dead, like a phoenix rising up from the ashes. Or is that going to be more like a dead walking zombie?
Here’s what I think happened and what it means – to CPaaS, Twilio and other vendors.
👉 Twilio being central to CPaaS means they have a dedicated page of their own on my site – you can check it out here: Twilio
Let’s first look at two important aspects of Twilio’s decision to sunset their Programmable Video service. I did a couple of video recordings converting some of the visuals from my Video API report and placed them on YouTube (you should subscribe to my channel if you haven’t already).
The first one? A look at Twilio’s video services.
The second one? A look at how the market is going to figure this one out:
All in all, not good for the market.
Twilio Customers in the past year

To be frank, this started before the EOL announcement. If you look at the commits made to the Twilio Video SDK, you see this picture:
Half a year prior to the announcement, the SDK got no commits whatsoever. And then? The official EOL came.
This last year has been tough on Twilio’s customers who use Programmable Video.
They had to migrate away from Twilio, with the need to do it by the end of 2024.
The time wasn’t long enough for many of the customers, and they likely complained to Twilio. The EOL (End Of Life) date moved to 2026, giving two more years for these customers.
The development work needed to switch and migrate away from Twilio might not have been huge, but it wasn’t scheduled and came in as a critical requirement. In some cases, customers didn’t have the engineering team in place for it, because external outsourcing vendors and freelancers originally developed the integration. In other cases, the migration also required dealing with native mobile applications, which is always more expensive and time consuming.
In one case, a vendor complained to me that they couldn’t replace the code in the appliances they had deployed within a year even if they wanted to – they work in a regulated industry and environment with native mobile applications.
Twilio set its customers up for a royal mess and a real headache here.
Zag: Twilio Programmable Video back from the dead

Then came the zag. Twilio decided to revert its decision and keep Twilio Programmable Video going. Here’s the statement/announcement from Twilio’s blog.
Here’s how they start it off:
“Today, we’re excited to announce that Twilio Video will remain as a product that we are committed to investing in and growing to best meet the needs of our customers. […]
Twilio Video will not be discontinued, and instead, we are investing in its development moving forward to continue to enhance customer engagement by enabling businesses to embed Video calling into their unique customer experiences.”
In their “why the change” section of the post, Twilio is trying to build a case for video (again). In it, they are making an effort to explain that they aren’t going to sunset video in the future, which is an important signal to potential new customers as well as existing ones. Their explanation revolves around the customer engagement use cases – this is important.
The “what to expect moving forward” section is the interesting part. It is built out of 4 bullets. Here’s what I think about them:
All in all, Twilio is planning on focusing predominantly on 1:1 customer engagement use cases and connecting them to Segment. At least that’s my reading of things.
Sunk costs or a hidden opportunity for customers

What about Twilio Programmable Video customers?
They had a year to plan and move away from the service to something else. Many of them have either finished their migration or are close to that point.
Should they now revert back to using Twilio? Stick with the competition?
Those who are in the middle of migration – should they stick to Twilio or keep investing resources in migrating away from Twilio?
These customers spent time and money on moving away. Should they view that as sunk costs or as an opportunity?
From discussions with a few Twilio customers, it seems that the answers are varied. In some cases, what they’ve done is built an abstraction running on top of two vendors – Twilio and the new vendor they’re migrating to. This way, they can keep Twilio as a backup as long as Twilio runs the service.
Now? They have the option to pick and choose which of the two alternatives to use.
This works well for services that do 1:1 meetings. Less so for group meetings.
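Such an abstraction layer doesn’t have to be fancy – a common interface with per-vendor adapters behind a factory is enough to keep the switch a configuration change. Here’s a minimal sketch; the vendor names and method set are hypothetical placeholders, and each adapter would wrap the real calls of its vendor’s SDK:

```javascript
// Minimal sketch of a two-vendor abstraction layer. The adapter interface
// and vendor names are hypothetical - each adapter would wrap the actual
// SDK calls (connect/disconnect) of its vendor.
function createVideoClient(vendor) {
  const adapters = {
    twilio: {
      join: (room) => `twilio:joined:${room}`, // would call Twilio's SDK here
      leave: () => 'twilio:left',
    },
    other: {
      join: (room) => `other:joined:${room}`,  // would call the other SDK here
      leave: () => 'other:left',
    },
  };
  const adapter = adapters[vendor];
  if (!adapter) throw new Error(`unknown vendor: ${vendor}`);
  return adapter;
}

// The application codes against join()/leave() only, so flipping vendors
// is a one-line configuration change rather than another migration project.
const client = createVideoClient('twilio');
```

The design choice here is the usual one: the abstraction costs you access to vendor-specific features, which is why it tends to work better for simple 1:1 scenarios than for feature-rich group meetings.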
In a way, Twilio reverting back adds another layer of headache and decisions that customers now need to go through (again).
Twilio’s challenges ahead

This leads us to the challenges Twilio is about to face.
The 3 leading ones are:
All 3 are solvable, but will take time, attention and commitment on behalf of Twilio.
Zoom: The biggest winner of all

The big winner this past year? Zoom.
Zoom had an SDK and a Programmable Video offering, but it was known and popularized for its UCaaS service. Twilio sunsetting Programmable Video while at the same time pointing and sending customers to Zoom was third-party proof of quality that Zoom got to enjoy.
This cannot be taken back now. It rocketed the Zoom Video SDK into the shortlist of alternatives that potential buyers now need to review – and explain why they aren’t trialing it.
All in all, a good thing for Zoom.
This change of heart by Twilio? Not going to affect Zoom.
What should you do

If you are already using Twilio and were migrating away –
There’s also always my Video API report to help you out (contact me for a discount on it or if you want some more specific consultation)
The post Twilio Programmable Video is back from the dead appeared first on BlogGeek.me.
Struggling with WebRTC POC or demo development? Follow these best practices to save time and increase the success of your project.
I get approached by a lot of startups and developers who start on the path to building WebRTC applications. Oftentimes, they reach out to me when they can’t get their POC (Proof of Concept) or demo to work properly.
For those who don’t want to go through paid consulting, here are some best practices that can save you time and can considerably increase the success rate of your project.
I don’t want to delve too much into peer-to-peer type solutions here. These require no media server and are therefore “easier” to develop into a nice demo. The services that use media servers are often beefier, and they are the ones that fall into many challenging traps during POC development.
Media requires the use of ephemeral ports that get allocated dynamically. It needs to negotiate connections. There are more moving parts that can break and fail on you.
All of the following sections here include best practices that you should read before going on to implement your WebRTC demo. Best to use them during your design and planning phases.
👉 An introduction to WebRTC media servers
Use CPaaS

Let’s start with the most important question of all. If you’ve decided to install and host media servers in AWS or other locations – are you sure this is an important part of your demo?
I’ll try to explain this question. A demo or a POC comes to prove a point. It can be something like “we want to validate the technical viability of the project” or “we wanted to have something up and running quickly to start getting real customers’ feedback”.
If what you want is to build an MVP (Minimal Viable Product) with the intent of attracting a few friendly customers, go to a VC for funding or just test the waters before plunging in, then be sure to do that using CPaaS or a Programmable Video solution. These are usually based on usage pricing so they won’t be expensive when you’re just starting out. But they will reduce a lot of the headaches in development and maintenance of the infrastructure – so they’re more than worth it.
Sometimes, what you will be after is a POC that seeks to answer the question “what does it mean to build this on our own”. Not only due to costs but mainly due to the uniqueness of the requirements desired – these may include the need to run in a closed network, connect to certain restricted components, etc. Here, having the POC not use CPaaS and rely on open source self hosted components will make perfect sense.
First have the “official” media server demo work

Decided not to use CPaaS? Picked a few open source media servers and components that you’ll be using?
Make sure to install, run and validate the demo application of that open source media server.
You should do this because:
Using a 3rd party? Install and run its demo first.
Don’t. Use. Docker

Docker is great. Especially in production. Well… that’s what I’ve been told by DevOps people. It makes deploying easier. It is great for continuous integration. It is fairy dust on the code developers write.
But for WebRTC media servers? It is hell on earth to get configured properly for the first time. Too many ports need to be opened all over the place. Some TCP. Lots of them UDP. And if you miss the configuration – the media won’t get connected. Or it will. Sometimes. Which is worse.
My suggestion? Leave all the DevOps fairy dust for production. For your POC and demo? Go with operating systems on virtual machines or on bare metal. This will save you a lot of headaches by making sure things will fail less due to not having ports opened properly on your Docker configuration(s).
You don’t have time to waste when you’re developing that WebRTC POC.
Don’t do native. Go web

Remember that suggestion about doing the full session for your demo so you know the infrastructure is built properly? If you need native applications on mobile devices – don’t.
The easiest way to develop a demo for WebRTC would be by using a web browser for the client side. I’d go farther and say by using Chrome web browser. Ignore Firefox and Safari for the initial POC. Skip mobile – assume these are a lot of work but won’t validate anything architecturally. At least not for the majority of application types.
👉 Still need to go native and mobile? Here are your WebRTC mobile SDK alternatives
Use a 3rd party TURN service

Always always always configure TURN in your iceServers for the peer connections.
Your initial “hello world” moment is likely to take place on the local LAN or even on the same machine. But once you start placing the devices on different networks, things will start failing without TURN servers. To make sure you don’t get there, just have TURN configured.
And have it configured properly.
And don’t install and host your own TURN servers.
Just use a managed TURN service.
The ones I’d pick for this task are either Twilio or Cloudflare for this stage. They are easy to start with.
You can always replace them with your own later without any vendor lock-in risk. But starting off with your own is too much work and hassle and will bring with it a slew of potential bugs and blockers that you just don’t need at this point in time.
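In code, this boils down to passing the TURN details in the iceServers list when you create the peer connection. A minimal sketch – the URLs and credentials here are placeholders; a managed TURN service hands you real ones (usually short-lived) through an API call:

```javascript
// Placeholder ICE configuration - a managed TURN service (e.g. Twilio or
// Cloudflare) returns real, typically short-lived, credentials via its API.
// Offering TURN over both UDP and TCP/443 helps with restrictive firewalls.
const iceServers = [
  { urls: 'stun:stun.example.com:3478' },
  {
    urls: [
      'turn:turn.example.com:3478?transport=udp',
      'turn:turn.example.com:443?transport=tcp',
    ],
    username: 'user-from-service',
    credential: 'secret-from-service',
  },
];

// In the browser you would then create the peer connection with:
// const pc = new RTCPeerConnection({ iceServers });
```

Since the configuration is just data handed to RTCPeerConnection, swapping the managed service for your own coturn deployment later is a credentials change, not a code change.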
👉 More on NAT Traversal and TURN servers in WebRTC
Be very specific about your requirements (and “demo” them)

Don’t assume that connecting a single user to a meeting room in a demo application means you can connect 20 users into that meeting room.
Streaming a webcam to a viewer isn’t the same as streaming that same webcam to 100 viewers.
If you plan on doing a real proof of concept, be sure to define the exact media requirements you have and to implement them at the scale of the session you’re aiming for. Not doing so means you aren’t really validating anything in your architecture.
A 1:1 meeting uses a different architecture than a 4-way video meeting, which in turn uses a different architecture than a meeting with 20-50 participants, which is different once you think about 100 or 200 participants, which again looks different architecturally when you’re hitting 1,000-10,000 and then… you get the point on how to continue from here.
The same applies for things like using screen sharing, doing spatial audio, multiple video sharing, etc. Have all these as part of your POC. It can be clunky and kinda ugly, but it needs to be there. You must have an understanding of if and how it works – of what are the limits you are bound to hit with it.
For the larger and more complex applications, be sure you understand all of the suggestions in this article before diving in. If you don’t, then you should beef up your understanding and experience with WebRTC infrastructure and architecture…
Got a POC? Build it to scale for that single session you’re aiming for. I won’t care if you can do 2 of these in parallel or 1,000. That’s also important, but can wait for later stages.
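A quick back-of-the-envelope calculation shows why room size changes the architecture. In a naive SFU group call where everyone sends one stream and receives everyone else’s, stream counts grow quadratically with room size – the numbers below assume that naive model, with no simulcast or pausing of inactive participants:

```javascript
// Naive SFU model: each participant uploads 1 stream and downloads one
// stream from every other participant; the SFU relays all of them.
// Real deployments use simulcast/SVC and selective forwarding to cut
// these numbers down - this is only a sizing sketch.
function sfuStreamCounts(participants) {
  return {
    perParticipantDown: participants - 1,               // streams each client receives
    serverTotal: participants * (participants - 1),      // streams the SFU forwards
  };
}
```

At 4 participants the SFU forwards 12 streams; at 20 participants it is already 380 – which is why a demo validated with 2 users tells you nothing about a 20-user room.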
👉 More on scaling WebRTC meeting sizes
One step at a time

Setting up a WebRTC POC is a daunting task. There are multiple moving parts in there, each with its own quirks. If one thing goes wrong, nothing works.
This is true for all development projects, but it is a lot more relevant and apparent in WebRTC development projects. When you start these exploration steps with putting up a POC or a demo, there is a lot to get done right. Configurations, ports, servers, clients, communication channels.
Taking multiple installation or configuration steps at once will likely end up with a failure due to a bug in one of these steps. Tracing back to figure out what was the change causing this failure will take quite some time, leading to delays and frustrations. Better to take one step at a time. Validating each time that the step taken worked as expected.
I learned that the hard way at the age of 22, while being the lead integrator of an important project the company I worked for had with Cisco and HP. I blamed a change HP made for an issue we had with our VoIP implementation, which lost us a full week. It ended up being me… doing two steps instead of one. But that’s a story for another time.
Know your tooling

If you don’t know what webrtc-internals is and haven’t used webrtc-dump-importer, then you’re doing it wrong.
Not using these tools means that when things go wrong (and they will), you’re going to be totally blind as to why. These aren’t perfect tools, but they give you a lot of power and visibility that you wouldn’t have otherwise.
Here’s how you download a webrtc internals file:
You’ll need to do that if you want to view the results on fippo’s webrtc-dump-importer.
And if you’re serious about it, then you can read a bit about what the WebRTC statistics there really mean.
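Beyond the dump files, you can also poll getStats() yourself. A common pattern is diffing two readings of bytesReceived on the same inbound-rtp entry to get a live bitrate. The helper below works on plain snapshot objects shaped like the fields getStats() reports; the browser call that would feed it is sketched in a comment:

```javascript
// Compute receive bitrate (in kbps) from two snapshots of the same
// 'inbound-rtp' stats entry. Each snapshot is a plain object carrying
// bytesReceived and timestamp (milliseconds), as getStats() reports them.
function bitrateKbps(prev, curr) {
  const bits = (curr.bytesReceived - prev.bytesReceived) * 8;
  const seconds = (curr.timestamp - prev.timestamp) / 1000;
  return seconds > 0 ? Math.round(bits / seconds / 1000) : 0;
}

// In the browser, the snapshots would come from something like:
// const report = await pc.getStats();
// report.forEach((s) => { if (s.type === 'inbound-rtp') { /* snapshot s */ } });
```

Polling this every second or two and charting the result gives you the same kind of graph webrtc-internals shows – only now it is inside your own application, where you can log or alert on it.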
Now if you’re going to do this properly and with a budget, I can suggest using testRTC for both testing and monitoring.
Know more about WebRTC

Everything above will get you started. You’ll be able to get to a workable POC or demo. Is that fit for production? What will be missing there? Is the architecture selected the one that will work for you? How do you scale this properly?
You can read about it online or even ask ChatGPT as you go along. The thing is that a shallow understanding of WebRTC isn’t advisable here. Which is a nice segue to say that you should look at our WebRTC courses if you want to dig deeper into WebRTC and become skilled with using it.
The post Best practices for WebRTC POC/Demo development appeared first on BlogGeek.me.
Explore the world of video codecs and their significance in WebRTC. Understand the advantages and trade-offs of switching between different codec generations.
Technology grinds forward with endless improvements. I remember when I first came to video conferencing, over 20 years ago, the video codecs used were H.261, H.263 and H.263+ with all of its glorious variants. H.264 was starting to be discussed and deployed here and there.
Today? H.264 and VP8 are everywhere. We bump into VP9 in WebRTC applications and we talk about AV1.
What does it mean exactly to move from one video codec generation to another? What do we gain? What do we lose? This is what I want to cover in this article.
Table of contents
Don’t have time for my ramblings? This short video should have you mostly covered:
👉 I started recording these videos a few months back. If you like them, then don’t forget to like them 😉
The TL;DR:
A codec is a piece of software that compresses and decompresses data. A video codec consists of an encoder which compresses a raw video input and a decoder which decompresses the compressed bitstream of a video back to something that can be displayed.
👉 We are dealing here with lossy codecs: codecs that don’t preserve all of the data, but rather lose information, trying to keep as close to the original as possible while storing as little data as possible.
The way video codecs are defined is by their decoder:
Given a bitstream generated by a video encoder, the video codec specification indicates how to decompress that bitstream back into a viewable format.
What does that mean?
Video codecs require a lot of CPU and memory to operate. This means that in many cases, our preference would be to offload their job from the CPU to hardware acceleration. Most modern devices today have media acceleration components in the form of GPUs or other chipset components that are capable of bearing the brunt of this work. It is why mobile devices can shoot high quality videos with their internal camera for example.
Since video codecs are dictated by the specification of their decoder, defining and implementing hardware acceleration for video decoders is a lot easier than doing the same thing for video encoders. That’s because the decoders are deterministic.
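One way to probe whether a device is likely to decode a given codec with hardware acceleration is the Media Capabilities API, which accepts a 'webrtc' query type. A sketch – the codec string, resolution and bitrate below are illustrative assumptions:

```javascript
// Build a Media Capabilities decoding query for a WebRTC video codec.
function webrtcDecodeQuery(contentType) {
  return {
    type: 'webrtc',
    video: {
      contentType,       // e.g. 'video/VP9' or 'video/AV1'
      width: 1280,
      height: 720,
      bitrate: 1500000,
      framerate: 30,
    },
  };
}

// Browser usage:
// const info = await navigator.mediaCapabilities.decodingInfo(webrtcDecodeQuery('video/AV1'));
// console.log(info.supported, info.smooth, info.powerEfficient);
```

A `powerEfficient: true` answer is a strong hint that a hardware decoder will be used; note that browsers may still fall back to software at runtime.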
For the video encoder, you need to start asking questions –
This leads us to the fact that in many cases and scenarios, hardware acceleration of video codecs isn’t suitable for WebRTC at all – they are added to devices so people can watch YouTube videos of cats or create their own TikTok videos. Both of these activities are asynchronous ones – we don’t care how long the process of encoding and decoding takes (we do, but not in the range of milliseconds of latency).
Up until a few years ago, most hardware acceleration out there didn’t work well for WebRTC and video conferencing applications. This started to change with the Covid pandemic, which caused a shift in priorities. Remote work and remote collaboration scenarios climbed the priorities list for device manufacturers and their hardware acceleration components.
Where does that leave us?
The end result? Another headache to deal with… and we didn’t even start to talk about codec generations.
New video codec generation = newer, more sophisticated tools
I mentioned the tools that are the basis of a video codec. The decoder knows how to read a bitstream based on these tools. The encoder picks and chooses which tools to use when.
When moving to a newer codec generation what usually happens is that the tools we had are getting more flexible and sophisticated, introducing new features and capabilities. And new tools are also added.
More tools and features mean the encoder now has more decisions to make when it compresses. This usually means the encoder needs to use more memory and CPU to get the job done if what we’re aiming for is better compression.
Switching from one video codec generation to another means we need the devices to be able to carry that additional resource load…
A few hard facts about video codecs
Here are a few things to remember when dealing with video codecs:
It is time to start looking at WebRTC and its video codecs. We will begin with the MTI video codecs – the Mandatory To Implement ones. This was a big debate back in the day. The standardization organizations couldn’t decide if VP8 or H.264 should be the MTI codec.
To make a long story short – a decision was made that both are MTI.
What does this mean exactly?
These video codecs are rather comparable for their “price/performance”. There are differences though.
👉 If you’re contemplating which one to use, I’ve got a short free video course to guide you through this decision making process: H.264 or VP8 – What Shall it be?
The emergence of VP9 and rejection of HEVC
The descendants of VP8 and H.264 are VP9 and HEVC.
H.264 is a royalty bearing codec and so is HEVC. VP8 and VP9 are both royalty free codecs.
HEVC being newer and considerably more expensive made it harder to adopt for something like WebRTC. That’s because WebRTC requires a large ecosystem of vendors and agreements around how things are done. With a video codec, not knowing who needs to pay the royalties stifles its adoption.
And here, who should pay – the chipset vendor? The device manufacturer? The browser vendor? The application developer? No easy answer, so no decision.
This is why HEVC ended up being left out of WebRTC for the time being.
VP9 was an easy decision in comparison.
Today, you can find VP9 in applications such as Google Meet and Jitsi Meet among many others who decided to go for this video codec generation and not stay in the VP8/H.264 generation.
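If you want your own application to favor VP9 over VP8/H.264 in the browser, the usual route is RTCRtpTransceiver.setCodecPreferences(). A sketch, assuming an RTCPeerConnection `pc` and that this runs before createOffer():

```javascript
// Pure helper: reorder a codec capability list so entries whose mimeType
// matches `preferred` come first; the relative order of the rest is kept.
function preferCodec(codecs, preferred) {
  const match = codecs.filter((c) => c.mimeType.toLowerCase() === preferred.toLowerCase());
  const rest = codecs.filter((c) => c.mimeType.toLowerCase() !== preferred.toLowerCase());
  return [...match, ...rest];
}

// Browser usage:
// const { codecs } = RTCRtpReceiver.getCapabilities('video');
// pc.getTransceivers()
//   .filter((t) => t.receiver.track && t.receiver.track.kind === 'video')
//   .forEach((t) => t.setCodecPreferences(preferCodec(codecs, 'video/VP9')));
```

The preference only reorders what both sides support – if the remote peer lacks VP9, negotiation simply falls back to the next codec in the list.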
The big promise of VP9 was its SVC support.
Our brave new world of AV1
AV1 is our next gen of video codecs. The promise of a better world. Peace upon the earth. Well… no.
Just a fork in the road that puts the focus on a future that is mostly royalty free for video codecs (maybe).
What do we get from AV1 as a new video codec generation compared to VP9? Mainly what we got from VP9 compared to VP8: better quality for the same bitrate, at the price of more CPU and memory.
Where VP9 brought us the promise of SVC, AV1 is bringing with it the promise of better screen sharing of text. Why? Because its compression tools are better equipped for text, something that was/is lacking in previous video codecs.
AV1 has most of the industry behind it. Somehow, at a magical moment in the past, they got together and came to the conclusion that a royalty free video codec would benefit everyone, creating the Alliance of Open Media and with it the AV1 specification. This gave the codec the push it needed to become the most dominant video coding technology of our near future.
For WebRTC, it marks the 3rd generation of video codecs that we can now use:
Here’s an update of what Meta is doing with AV1 on mobile from their RTC@Scale event earlier this year.
This is a start. And a good one. You see experiments taking place as well as first steps towards productizing it (think Google Meet and Jitsi Meet here among others) in the following areas:
First things first. If you’re going to use a video codec of a newer generation than what you currently have, then this is what you’ll need to decide:
Do you focus on keeping the same bitrate you had in the past, effectively increasing the media quality of the session? Or alternatively, are you going to lower the bitrate from where it was, reducing your bandwidth requirements?
Obviously, you can also pick anything in between the two, reducing the bitrate used a bit and increasing the quality a bit.
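The bandwidth side of that trade-off can be controlled explicitly with RTCRtpSender.setParameters(). A sketch of capping the video sender’s bitrate – `pc` and the 800 kbps cap are illustrative assumptions:

```javascript
// Pure helper: return a copy of sender parameters with maxBitrate (in bps)
// set on every encoding; creates one encoding if none exist.
function withMaxBitrate(params, bps) {
  const base = params.encodings && params.encodings.length ? params.encodings : [{}];
  const encodings = base.map((e) => ({ ...e, maxBitrate: bps }));
  return { ...params, encodings };
}

// Browser usage:
// const sender = pc.getSenders().find((s) => s.track && s.track.kind === 'video');
// await sender.setParameters(withMaxBitrate(sender.getParameters(), 800000));
```

Note that setParameters() expects the object obtained from getParameters() with only permitted fields changed – copying it and touching just maxBitrate, as above, keeps the call valid.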
Starting to use another video codec though isn’t only about bitrate and quality. It is about understanding its tooling and availability as well:
There’s a lot more to be said about video codecs and how they get used in WebRTC.
For more, you can always enroll in my WebRTC courses.
The post WebRTC video codec generations: Moving from VP8 and H.264 to VP9 and AV1 appeared first on BlogGeek.me.
WebRTC’s peer connection includes a getStats method that provides a variety of low-level statistics. Basic apps don’t really need to worry about these stats but many more advanced WebRTC apps use getStats for passive monitoring and even to make active changes. Extracting meaning from the getStats data is not all that straightforward. Luckily return author […]
The post Power-up getStats for Client Monitoring appeared first on webrtcHacks.