Earlier last week, a friend at Google reached out to me asking: "Does Meet do anything weird with scalabilityMode?" Apparently, I am the go-to when it comes to Google Meet behaving weirdly :). Well, I do have a decade of history observing Meet's implementation, so this makes some sense! It turned out that this was […]
The post The Hidden AV1 Gift in Google Meet appeared first on webrtcHacks.
Twilio Programmable Video is no more. What should WebRTC Video API vendors and their customers do from here on?
This week, Twilio dropped a bombshell.
It decided to shut down its Programmable Video service and do a bit of downsizing and trimming around Segment and Flex.
I didn’t intend to write anything more until 2024, but this necessitated changing my plans.
The image above is an adaptation from a blog post on Twilio’s website from 2021…
Each year, Twilio hosts its Twilio Signal event. I've attended a couple of them in person and used to cover them here on a yearly basis.
That stopped with Twilio Signal 2021, which was the last time I covered that event here. The reason for that was the pivot Twilio made from CPaaS to CEP (Customer Engagement Platform).
Ever since, I’ve searched for things to talk about and share about Twilio Signal, but found nothing of real value or interest to my readers.
Remember – I cover WebRTC and CPaaS. CPaaS mainly from the point of view of WebRTC and modern communications and less from the SMS and legacy telephony sides of it.
The shift towards CEP meant a lot less investment and focus by Twilio on exactly these areas – WebRTC and CPaaS that are non-SMS/legacy telephony related.
What did Twilio have to show for its investment in video and WebRTC in 2022 and 2023? Nothing. Crickets. Oh… yes… they did integrate with Krisp for noise cancellation. Presumably only in their Video SDK and not the Voice SDK. So that’s down the drain as well.
The decision might be the right one for Twilio, if you look at where their investments and attention are going:
Video is likely 1% or less of their revenue. So why bother? Especially when it requires management attention to get it anywhere meaningful with so much else that is bigger and more important to deal with.
CPaaS vendors: Best of breed vs best of suite

I learned about the concepts of best of breed and best of suite when working at Amdocs.
Twilio started with SMS and voice. It later decided to expand and become "best of suite" by attaching to it email, video, IOT, social messaging, chat, …
What happened though is that in parallel, it worked hard on being best of breed in voice and SMS. Doing that by going upstream and introducing Flex. Flex reduced the effort of contact centers built on top of Twilio.
And then they pivoted. With the acquisition of Segment and the need to tightly integrate it with their CPaaS and Flex offering. Transitioning from taking care of communications to taking care of understanding the customer.
Today?
There are two types of CPaaS vendors:
Interestingly, both are circling like vultures around Twilio to see which customers are going to come out of there looking for alternatives. Some of these CPaaS vultures offer pure WebRTC video solutions. Others offer the whole suite. And there are those who don’t even offer video – but see this as an opportunity to poach customers from Twilio.
The cases of Twilio IOT and Twilio Live

I remember that in one of the first Twilio Signal events, Jeff Lawson stood on stage and proudly announced that they never deprecated an official API. The way this was later handled is by having beta and GA phases for products.
This cannot be said anymore… by the end of 2022, Twilio started sunsetting and shutting down services.
It started with a round of layoffs at Twilio. Jeff Lawson, Twilio’s CEO, wrote a message that got to the Twilio blog as well. Here’s what we shared about it at the time with our WebRTC Insights clients:
After the reduction in workforce came the reduction in product offerings. The first two on the chopping block were Twilio IOT and Twilio Live.
Twilio Live was announced dead in November 2022. Low traction of the service and little fit with the direction of Twilio meant this had to die. The way this was done? Let customers know. Officially suggest they go use Mux instead. Somehow, the fact that Mux at the time had a service competing directly with Twilio Programmable Video wasn't something that worried Twilio.
Twilio IOT was simply sold off to KORE Wireless in March 2023.
Remember that suggestion we gave about FUD in the market against using Twilio for video APIs? (I marked it in yellow above so you won’t miss it)
The demise of Twilio Programmable Video

Here's what the Twilio product menu looks like on their homepage:
This is likely going to change soon or by the time this gets published.
Each and every piece in the Communications part can be snugly fit into the products on the left and on the right (Customer Data and Applications).
Video is a bit of a stretch. At least if you look closely at traffic sizes and revenue numbers.
The two other oddballs – IOT and video streaming – were thrown out without too many objections and without hurting Twilio's bottom line.
What was left was to get rid of the video piece. It likely took too many resources but made no real dent in Twilio’s numbers.
To be frank – the problems likely started with the acquisition of Kurento. Kurento wasn’t fit for what they had in mind for it, and it was riddled with architectural and technical issues. This wasn’t a good starting point for multiparty calling in Twilio Programmable Video.
If I had to guess, a lot of technical debt went into the product to improve and repurpose the media server pieces of Kurento.
Twilio was slow to innovate on video, leaving room for other vendors – big and small. It missed the low-code and embeddable experiences that are now common in video APIs. It didn't invest much in AI integrations. It didn't optimize media quality enough to work well for its customers.
And then it left the door open for Amazon with their Chime SDK to threaten them in this domain.
I am guessing growth and revenue from Twilio Programmable Video weren't in line with expectations (unsurprisingly). The current market climate, the end of the pandemic, the headaches in Segment and Flex. All of it led them to the conclusion that it would be simpler to just sunset Twilio Programmable Video and move on.
A brave decision. Twilio Programmable Video couldn't have been sunset at a worse time (unless you consider a few months prior to the pandemic and the quarantines).
A week before this announcement from Twilio, Amazon announced support for video calling in Amazon Connect.
Amazon is investing in adding video to its contact center solution, and Twilio, who has Twilio Flex competing against Amazon Connect, is sunsetting video support for its video API.
Why was Twilio Programmable Video appealing to potential customers? I can think of two main reasons:
The reasons why not to? Quite a few:
All that Twilio had going for it was its brand name. And that in a market that was moving on.
Things other vendors have been doing in that period of time?
Twilio wasn’t able to keep up. Or even pick a direction it wanted to invest in.
The rise of the Zoom Video SDK

Twilio issued an email to its customers on December 5, stating the sunset will take a full year. From this email:
[…] we have decided to End of Life (EOL) our Programmable Video product on December 5, 2024, and we are recommending our customers migrate to the Zoom Video SDK for your video needs.
The official recommendation from Twilio is for their customers to migrate to the Zoom Video SDK.
The announcement can’t be found (yet) on any marketing material from Twilio. It can be found on social media accounts from Zoom.
Why Zoom?
They couldn’t suggest vendors that have SMS or voice services.
The rest are mostly smaller vendors – not something Twilio wanted to be identified with is my guess.
There’s only one problem with picking Zoom Video SDK here. Their web experience isn’t on par with the rest of the pack. They rely on WebTransport+WebCodecs+WebAssembly, which isn’t as stable or performant as just using WebRTC. For native, their SDKs should be fine, but for web browsers, I’d be reluctant to use them yet. Add to that the fact that this is a technology shift, requiring some relearning of terms and a reliance on proprietary technology, and you get some increased risk for the vendors switching.
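To make that technology difference concrete, here is a rough feature-detection sketch (my own illustration, not code from either SDK) of what each approach needs from the browser before a call can even start. A WebRTC-based client leans on APIs that have shipped everywhere for years, while an engine that brings its own media pipeline depends on several newer APIs being present and fast enough:

```typescript
// A rough feature-detection sketch - my own illustration, not Zoom's or Twilio's SDK code.

// A WebRTC-based web client mostly needs RTCPeerConnection and getUserMedia,
// which have been stable across major browsers for years.
function supportsWebRtcPath(): boolean {
  return "RTCPeerConnection" in globalThis &&
    !!navigator.mediaDevices?.getUserMedia;
}

// An engine that ships its own media stack needs all of these newer APIs to be
// available, and still runs encoding/decoding and congestion control in
// WebAssembly/JavaScript rather than in the browser's optimized native code.
function supportsCustomMediaEnginePath(): boolean {
  return "WebTransport" in globalThis &&   // QUIC-based transport
    "VideoEncoder" in globalThis &&        // WebCodecs encode
    "VideoDecoder" in globalThis &&        // WebCodecs decode
    typeof WebAssembly !== "undefined";
}
```

Even when all the checks pass, performance and battery life still depend on how well that custom pipeline is tuned – which is exactly where the browser's built-in WebRTC implementation has a head start.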
I wonder if Twilio and Zoom came to an agreement here (with Zoom maybe even paying for this suggestion to go out) or if Twilio simply decided to offer some kind of a recommendation and be done with it. Philipp’s bet: Eric had dinner with Jeff and paid for it.
Anyhow, customers have a full year to figure out a solution. Or less – depending on how much browsers' WebRTC implementations drift away from the current implementation of Twilio. What doesn't get maintained in WebRTC rots rather quickly.
The future of managed Video APIs (without Twilio)

I am not sure how much Twilio Programmable Video would be missed.
Developers certainly used it. Big and small. Its revenue was probably higher than that of some of the smaller video API vendors out there. These developers will figure out a way to migrate to other vendors. It won't be the first time a CPaaS vendor has exited the video API market (AddLive, vLine, ooVoo, SightCall, Respoke, Tropo, Forge, CafeX, Circuit and Bit6 all exited this market in the past).
3-4 years ago, we had 3 top dogs in this market: Vonage, Twilio, Agora
A year ago, I’d say I heard a lot more about Vonage, Amazon Chime SDK and Twilio. Less so Agora
Now, we have Vonage and Amazon Chime SDK
Who will take the third spot among the top three runners when it comes to developer mindshare in this industry?
We have Agora, Daily, Dolby, LiveKit and others who are all vying for that spot. Each has its own angle and differentiation.
Will Vonage keep its spot there?
Will Amazon continue investing in its Chime SDK enough?
I don’t have the answers to these questions, but I do have my own opinions.
Where should Twilio Video customers go from here?

That is the big question.
If you are using Twilio Programmable Video – who should you go to instead?
And if you are on the lookout for a CPaaS vendor now – who should you pick?
My WebRTC Developer Landscape infographic was last updated in 2022, but can still offer some guidance as to the alternatives available. Some of them I’ve listed throughout this article. Others are just as valid.
Here are a few questions you need to answer for yourself:
The post Twilio exits video APIs, further focusing on voice, SMS and Segment appeared first on BlogGeek.me.
Let’s look at what we’ve achieved with WebRTC Insights in the past three years and where we are headed with it.
Along with Philipp Hancke, I’ve been running multiple projects. WebRTC Insights is one of the main ones.
Three years ago, we decided to start a service – WebRTC Insights – where we send out an email every two weeks about everything and anything that WebRTC developers need to be aware of. This includes bug reports, upcoming features, Chrome experiments, security issues and market trends.
All of this with the intent of empowering you and letting you focus on what is really important – your application. We take care of giving you the information you need quicker and in a form that is already processed.
Three years into this initiative, this is still going strong. We've onboarded a new client recently, and this is what he had to share with us in the first week already:
“[The Insights] Newsletter has been great and very helpful. Wish we had subscribed 2 years ago.”
Sean MacIsaac, Founder and EVP, Engineering @ Roam
Why is the WebRTC Insights so useful for our clients?

It boils down to two main things:
We reduce the time it takes for engineers and product people to figure out issues they face and trends in the market. Instead of them searching the internet to sift through hints or trying to catch threads of information on things they care about, we give it straight to them – usually a few days before their clients (or management) complain about it.
On top of that, we increase their focus on what's important to them. Going back to past issues to find problems, search issues, look at security problems, know of experiments Google is doing or just be aware of the areas where Google is investing its efforts – all of these become really simple to do.
In the past few weeks we've been getting complaints from clients about audio issues on Mac (usually acoustic echo problems in Chrome). These were already hinted at in one of our previous issues, and the full details appeared in the more recent issues. In parallel, we've been able to sniff around for root causes almost in real time – enabling them to zero in on the problem and find a suitable workaround.
If I weren’t so modest, I would say that for those who are serious about WebRTC, we are a force multiplier in their WebRTC expertise.
WebRTC Insights by the numbers

Since this is the third year, you can also check out our past "year in review" posts:
This is what we’ve done in these 3 years:
26 Insights issued this year with 329 issues & bugs, 136 PSAs, 15 security vulnerabilities, 230 market insights all totaling 231 pages. That’s quite a few useful insights to digest and act upon.
We have covered over a thousand issues and written more than 650 pages.
WebRTC is still ever changing – both in the codebase and how it gets used by the market.
Activity on libWebRTC has cooled down yet again in the last year, dropping below 200 commits a month consistently:
This is more visible by looking at the last four years:
On one hand WebRTC is very mature now, on the other hand it seems to us that there is still a lot of work to be done and bugs to be fixed. External contributions were up. What is concerning is that the “big drop” in May happened three months after Google announced a round of layoffs but we have not seen many departures of long-time contributors.
Let’s dive into the categories, along with a few new initiatives we’ve taken this year as part of our WebRTC Insights service.
Bugs

The number of reported external bugs has dropped considerably, as has the number of issues tracking new work and initiatives. This correlates with the decreased commit activity.
The areas for bugs also shifted. We have seen a lot more issues related to hardware acceleration (since Google is eyeing that now to further reduce the CPU usage in Google Meet). Operating systems are starting to become a bigger issue; for example, macOS Sonoma caused quite a few audio issues and enabled overlaid emoji reactions (a bad choice with consequences described here) by default as part of a bigger push to move features like background blur to the OS layer. And of course, every autumn brings a new Safari on iOS release, which means a ton of regressions…
A good example of how Philipp himself uses Insights as a way to identify what change caused a regression was the lack of H.264 fallback on Android, which rolled out in Chrome 115 in August. We had commented on the original change at the end of May:
That said, we did not think of Android which remains complicated when it comes to H.264 support. Thankfully this rollout was guarded by a feature flag so the regression could be mitigated by the WebRTC team in less than two days.
PSAs & resources worth reading

In addition to the public service announcements done by Googlers (and Philipp) as part of making changes to the C++ API or network behavior, we continue to track Chromium-related "Intents" (which are a useful indicator of what is going to ship) and relevant W3C/IETF discussions in this section. We also moved more in-depth technical comments on relevant blog posts from the "Market" section, which made the overall decline in activity less visible here.
Experiments in WebRTC

Chrome's field trials for WebRTC are a good indicator of what large changes are rolling out which either carry some risk of subtle breakage or need A/B experimentation. Sometimes, those trials may explain behavior that only reproduces on some machines but not on others. We track the information from the chrome://version page over time, which gives us a pretty good picture of what is going on:
We have gotten a bit better and now track rollout percentages. We have not seen regressions from these rollouts in the last year which is good news.
WebRTC security alerts

This year we continued keeping track of WebRTC-related CVEs in Chrome (15 new ones in the past year). For each one, we determine whether they only affect Chromium or whether they affect native WebRTC and need to be cherry-picked to your own fork of libwebrtc when you use it that way.
In recent months we’ve seen a trend of looking more closely at the codec implementations to find security threats there. Our expectation is that this will continue in the coming year as well – expect more CVEs around this area.
A personal highlight was Google's Natalie Silvanovich following up on a silly SDP munging thing Philipp did with CVE-2023-4076, which affected WebRTC munging in Chrome (but not native applications):
If only anyone had told us that using SDP in the API, let alone having Javascript manipulate it in the input, is a bad idea…
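For readers who haven't run into the term: SDP munging is when the application edits the session description string the browser generated before handing it back, typically to tweak behavior the APIs don't expose. A generic sketch (not the specific code behind the CVE) looks something like this:

```typescript
// Generic illustration of SDP munging - not the specific code behind CVE-2023-4076.
// The application rewrites the SDP blob the browser produced, so the browser ends up
// re-parsing input it no longer fully controls.
async function negotiateWithMungedSdp(pc: RTCPeerConnection): Promise<void> {
  const offer = await pc.createOffer();

  // Naive string surgery on the SDP - here, forcing stereo Opus, a classic munging hack.
  // Any mistake produces SDP the browser itself would never have generated.
  const mungedSdp = offer.sdp?.replace("useinbandfec=1", "useinbandfec=1;stereo=1");

  await pc.setLocalDescription({ type: "offer", sdp: mungedSdp });
  // ... send pc.localDescription to the remote peer over your signaling channel ...
}
```

The safer path is to use the explicit APIs (setCodecPreferences, setParameters and friends) wherever they cover the need, and keep munging as a last resort.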
WebRTC market guidance

What are the leaders in video conferencing doing? What is Google doing with Meet, which directly affects WebRTC's implementation? Are they all headed in the same direction? Do they invest in different technologies and domains?
How about CPaaS vendors? How are they trying to differentiate from each other?
Other vendors who use WebRTC or delve into the communication space – where do they innovate?
Here’s a quick example we’ve noticed when Twilio worked on migrating their media servers to different IP and ports:
This ability to look at best practices of vendors, how they handled such challenges, or introduced new features is an eye opener. These are the things we cover in our market guidance. The intent here is to get you out of the echo chamber that is your own company, and see the bigger world. We do that in small doses, so that it won't defocus you. But we do it so you can take into account these trends and changes that are shaping our industry.
The interesting thing is that as WebRTC goes more and more into a kind of "maintenance mode" with its browser releases, the variance and the number of interesting newsworthy items we see in the market as a whole are growing. This is likely why our market insights section has seen rapid growth this year.
Insights automation

We've grown nicely in our client base, and up until recently, we sent the emails… manually.
It became a time consuming activity to say the least, and one that was also prone to errors. So we finally automated it.
The WebRTC Insights issue emails are now automated. They include the specific issue along with the latest collection of security issues. It has made life considerably simpler on our end.
Join the WebRTC experts

We are now headed into our fourth year of WebRTC Insights.
Our number of subscribers is growing. If you’ve got to this point, then the only question to ask is why aren’t you already subscribed to the WebRTC Insights if WebRTC interests you so much?
You can read more about the available plans for WebRTC Insights and if you have any questions – just contact Tsahi.
Oh – and you shouldn't take only our word for how great WebRTC Insights is – just see what Google's own Serge Lachapelle has to say about it:
Still not sure? Want to sample an issue? Just reach out to me.
The post Third time’s a charm: WebRTC Insights, 3 years in appeared first on BlogGeek.me.
As PCengines announced the end of sales of their famous APU platform, it’s time to look for alternative devices that can be utilized as firewalls or network probes or VPN appliances.
I recently bought a Qotom Q20321G9 mini-PC from AliExpress. The model is similar to their Q20331G9 model described on the Qotom website. The difference is a slower CPU and fewer SFP+ interfaces:
Model   Q20321G9                           Q20331G9
CPU     Intel Atom C3558R                  Intel Atom C3758R
TDP     17W                                26W
NICs    2x SFP+, 2x SFP, 5x 2.5Gbit LAN    4x SFP+, 5x 2.5Gbit LAN

Compared to the APU platform, this Qotom box is huge: 62mm high, compared to 30mm of the APU enclosure, 217mm wide, and much heavier because of the massive heatsink. But it has much more to offer.
Two M.2 NVME sockets allow a redundant storage setup out of the box. Also, it supports ECC RAM (although the model I received had a non-ECC DIMM), so it can serve as a reliable hardware platform if you need a long-term service. Also, it has an M.2 socket for an LTE modem, two antenna mounting holes, and a nano-SIM card slot.
A minor downside is that even when idling, with all CPU cores running at 800MHz, the device gets quite warm. The onboard sensors show the CPU core temperatures at around +42C to +44C, and the enclosure is rather hot to the touch.
I also ran a CPU stress test with the enclosure covered by a towel for about half an hour; the CPU temperature exceeded 60C and the device kept functioning well.
A minor inconvenience is that the power button is too easy to press accidentally while moving the device around during testing. But the button is easy to remove, so the power switch can be pressed with a pen when needed.
The SFP and SFP+ interfaces were recognized by Debian 12 out of the box.
The device arrived with a preinstalled Windows 10. The BIOS allows redirecting the console to the COM port, which is provided as an RJ-45 socket, with the same pinout as Cisco routers.
The NIC numbering is a bit non-intuitive, and the marking on the enclosure does not help much. Here are the interfaces as they’re seen by Debian, if you look at the device’s interface panel:
eno1 (SFP+)   eno3 (SFP)   enp7s0 (LAN)   enp6s0 (LAN)   enp8s0 (LAN)
eno2 (SFP+)   eno4 (SFP)   enp5s0 (LAN)   enp4s0 (LAN)

Some diagnostics output below:
root@qotom01:~# lscpu
Architecture:           x86_64
CPU op-mode(s):         32-bit, 64-bit
Address sizes:          39 bits physical, 48 bits virtual
Byte Order:             Little Endian
CPU(s):                 4
On-line CPU(s) list:    0-3
Vendor ID:              GenuineIntel
BIOS Vendor ID:         Intel(R) Corporation
Model name:             Intel(R) Atom(TM) CPU C3558R @ 2.40GHz
BIOS Model name:        Intel(R) Atom(TM) CPU C3558R @ 2.40GHz CPU @ 2.4GHz
BIOS CPU family:        178
CPU family:             6
Model:                  95
Thread(s) per core:     1
Core(s) per socket:     4
Socket(s):              1
Stepping:               1
CPU(s) scaling MHz:     52%
CPU max MHz:            2400.0000
CPU min MHz:            800.0000
BogoMIPS:               4800.00
Flags:                  fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand lahf_lm 3dnowprefetch cpuid_fault epb cat_l2 ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust smep erms mpx rdt_a rdseed smap clflushopt intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves dtherm arat pln pts md_clear arch_capabilities
Virtualization features:
  Virtualization:       VT-x
Caches (sum of all):
  L1d:                  96 KiB (4 instances)
  L1i:                  128 KiB (4 instances)
  L2:                   8 MiB (4 instances)
NUMA:
  NUMA node(s):         1
  NUMA node0 CPU(s):    0-3
Vulnerabilities:
  Gather data sampling: Not affected
  Itlb multihit:        Not affected
  L1tf:                 Not affected
  Mds:                  Not affected
  Meltdown:             Not affected
  Mmio stale data:      Not affected
  Retbleed:             Not affected
  Spec rstack overflow: Not affected
  Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl
  Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:           Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
  Srbds:                Not affected
  Tsx async abort:      Not affected

root@qotom01:~# lsusb
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 003: ID 05e3:0608 Genesys Logic, Inc. Hub
Bus 001 Device 002: ID 046d:c31c Logitech, Inc. Keyboard K120
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

root@qotom01:~# lspci
00:00.0 Host bridge: Intel Corporation Atom Processor C3000 Series System Agent (rev 11)
00:04.0 Host bridge: Intel Corporation Atom Processor C3000 Series Error Registers (rev 11)
00:05.0 Generic system peripheral [0807]: Intel Corporation Atom Processor C3000 Series Root Complex Event Collector (rev 11)
00:06.0 PCI bridge: Intel Corporation Atom Processor C3000 Series Integrated QAT Root Port (rev 11)
00:09.0 PCI bridge: Intel Corporation Atom Processor C3000 Series PCI Express Root Port #0 (rev 11)
00:0a.0 PCI bridge: Intel Corporation Atom Processor C3000 Series PCI Express Root Port #1 (rev 11)
00:0b.0 PCI bridge: Intel Corporation Atom Processor C3000 Series PCI Express Root Port #2 (rev 11)
00:0c.0 PCI bridge: Intel Corporation Atom Processor C3000 Series PCI Express Root Port #3 (rev 11)
00:0e.0 PCI bridge: Intel Corporation Atom Processor C3000 Series PCI Express Root Port #4 (rev 11)
00:0f.0 PCI bridge: Intel Corporation Atom Processor C3000 Series PCI Express Root Port #5 (rev 11)
00:10.0 PCI bridge: Intel Corporation Atom Processor C3000 Series PCI Express Root Port #6 (rev 11)
00:11.0 PCI bridge: Intel Corporation Atom Processor C3000 Series PCI Express Root Port #7 (rev 11)
00:12.0 System peripheral: Intel Corporation Atom Processor C3000 Series SMBus Contoller - Host (rev 11)
00:13.0 SATA controller: Intel Corporation Atom Processor C3000 Series SATA Controller 0 (rev 11)
00:14.0 SATA controller: Intel Corporation Atom Processor C3000 Series SATA Controller 1 (rev 11)
00:15.0 USB controller: Intel Corporation Atom Processor C3000 Series USB 3.0 xHCI Controller (rev 11)
00:16.0 PCI bridge: Intel Corporation Atom Processor C3000 Series Integrated LAN Root Port #0 (rev 11)
00:17.0 PCI bridge: Intel Corporation Atom Processor C3000 Series Integrated LAN Root Port #1 (rev 11)
00:18.0 Communication controller: Intel Corporation Atom Processor C3000 Series ME HECI 1 (rev 11)
00:1a.0 Serial controller: Intel Corporation Atom Processor C3000 Series HSUART Controller (rev 11)
00:1f.0 ISA bridge: Intel Corporation Atom Processor C3000 Series LPC or eSPI (rev 11)
00:1f.2 Memory controller: Intel Corporation Atom Processor C3000 Series Power Management Controller (rev 11)
00:1f.4 SMBus: Intel Corporation Atom Processor C3000 Series SMBus controller (rev 11)
00:1f.5 Serial bus controller: Intel Corporation Atom Processor C3000 Series SPI Controller (rev 11)
01:00.0 Co-processor: Intel Corporation Atom Processor C3000 Series QuickAssist Technology (rev 11)
02:00.0 Non-Volatile memory controller: Phison Electronics Corporation PS5013 E13 NVMe Controller (rev 01)
04:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V (rev 03)
05:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V (rev 03)
06:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V (rev 03)
07:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V (rev 03)
08:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V (rev 03)
09:00.0 PCI bridge: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge (rev 03)
0a:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 30)
0b:00.0 Ethernet controller: Intel Corporation Ethernet Connection X553 10 GbE SFP+ (rev 11)
0b:00.1 Ethernet controller: Intel Corporation Ethernet Connection X553 10 GbE SFP+ (rev 11)
0c:00.0 Ethernet controller: Intel Corporation Ethernet Connection X553 Backplane (rev 11)
0c:00.1 Ethernet controller: Intel Corporation Ethernet Connection X553 Backplane (rev 11)

root@qotom01:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp4s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 20:7c:14:f2:9c:76 brd ff:ff:ff:ff:ff:ff
3: enp5s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 20:7c:14:f2:9c:77 brd ff:ff:ff:ff:ff:ff
4: enp6s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 20:7c:14:f2:9c:78 brd ff:ff:ff:ff:ff:ff
5: enp7s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 20:7c:14:f2:9c:79 brd ff:ff:ff:ff:ff:ff
6: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 20:7c:14:f2:9c:7a brd ff:ff:ff:ff:ff:ff
7: eno1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
    link/ether 20:7c:14:f2:9c:7b brd ff:ff:ff:ff:ff:ff
    altname enp11s0f0
8: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 20:7c:14:f2:9c:7c brd ff:ff:ff:ff:ff:ff
    altname enp11s0f1
9: eno3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 20:7c:14:f2:9c:7d brd ff:ff:ff:ff:ff:ff
    altname enp12s0f0
10: eno4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
    link/ether 20:7c:14:f2:9c:7e brd ff:ff:ff:ff:ff:ff
    altname enp12s0f1

An overview of remote education and WebRTC. The market niches, challenges and solutions.
Whenever a video meetings company starts looking at verticals for the purpose of targeted marketing, one of the verticals that is always there is education. We’ve seen this during the pandemic – as the world went into quarantine mode, schools started figuring out how to teach kids remotely.
The remote education market is not just schools doing remote video calls. It is a lot more varied. I’d like to explore that market in this article.
There are around 2 billion children in the world. Over 80% of them attend schools.
Some 235 million higher education students are out there as well around the globe.
During the pandemic, a lot of them were online, taking classes remotely. For multiple hours each day.
The slide above is from Kranky Geek 2020. In this session, Google talked about their work on WebRTC in Chrome.
Here they shared the increase in video minutes during the initial quarantines. The huge spike there starts at around the August/September timeframe, when schools start.
Remote education is here to stay. Not with its increased usage of 10-100x, but definitely bigger than in the past. There are many places where remote education can fit – and not only for emergencies such as the pandemic.
Me? Remote education?

Like everyone else, my kids went through the process of remote education during the pandemic. Here, the Ministry of Education went all-in with Zoom for schools (along with Google Classroom and Microsoft Office – go figure). Since then, our kids have had private tutors on and off, sometimes doing lessons remotely. And now, with a war raging between Gaza and Israel, depending on where you live, you might be studying from home or physically in school.
I had my share of consulting with education organizations across the globe. Some focused on schools, others on universities and some on private tutoring. It was always fascinating to see how such markets are distinctly different from each other, and how remote education also takes different shapes and sizes based on the country.
And then there are my own online courses, with their associated office hours and AMAs.
The role of WebRTC in remote education

WebRTC plays an important role in the education market. Besides offering video communications, it also makes it possible to mesh the communication experience directly into the LMS (Learning Management System) or the SIS (School Information System), offering a seamless and tailored experience for both the teacher and the learners – one that enables educators to implement various pedagogies.
Remember here that WebRTC is a synchronous technology – live, real-time voice and video communications. A large chunk of the education market is leaning heavily on asynchronous learning (recorded videos, texts to read, etc). These are not covered in this article.
Here are some market niches and use cases where you will find WebRTC in remote education.
Group lessons

The simplest one to explain is probably group lessons. The classic one would be the pandemic use case, where during quarantine, schools went all virtual – classes were conducted online.
Remote group lessons aren’t limited to schools either – they are done in universities, private group tutoring, etc.
Main challenges here include:
Moderation tools for the teachers. Ones that are simple to use while conducting the lesson itself
Collaboration tools to make the lessons more engaging. Maintaining engagement in online group lessons is the biggest challenge at the moment, especially for younger learners
Authentication and authorization of users. Lots of anecdotal stories around this one throughout the pandemic
One thing that is raised time and again with group lessons, especially in schools, is the need (and inability) to get the students to keep their cameras on. This is a huge obstacle to effective learning, and something that needs to be taken into account.
Another important thing that needs to be fleshed out early on here is who the client is – the teacher or the students. Whoever the system is geared towards will set the tone for how the solution gets designed and implemented.
One-to-one tutoring

These are mainly one-on-one lessons conducted remotely.
Outside of the domain of classic education, a lot of classes are actually conducted in such a way. Here are a few anecdotal stories from recent years that I’ve learned about:
A dear friend who is learning to play the piano. Remotely. She travels a lot between the US and Israel, and takes her lessons from everywhere through her iPad
Another friend, taking 1:1 drawing lessons
Online chess lessons for kids in our community
My son’s friend, learning C++ on Unreal engine, taking 1:1 lessons
My son, a few years ago, when he was 10 or so, learning to build online games using nocode game engines from an 18 year-old who lived two cities away
My wife took online dance lessons to specialize in Salsa from a renowned instructor abroad
Besides the collaborative nature and engagement level of such lessons, it is important to note that they aren't suitable for everyone. Some teachers are more natural at these, and some students can learn effectively in such a manner while others struggle (I have both examples at home).
An interesting use case here that I’ve seen is math and English (!) tutors from India and China teaching remote kids in the UK and the US. Why? Simply because they are cheaper than using local teachers. Then there was the opposite – rich Chinese families getting one-to-one English tutoring for their kids from US teachers. Go figure.
One-to-one tutoring comes in a lot of different shapes and sizes.
MOOCs (Massive Open Online Courses)

MOOCs were all the rage 10 years ago. Their market is still consistently growing.
MOOCs are simply large online courses that are open for people around the globe. Some of them are collaborative, while others are mainly lecturer driven. Some allow for asynchronous learning while others are more synchronous in their nature. Both the asynchronous and synchronous learning modes in MOOCs offer self-paced learning (at least to some degree).
WebRTC finds its way into MOOCs for their synchronous part, when that requires live video sessions – either between lecturers and students or between student groups in the more collaborative courses.
Proctoring

Proctoring isn't about learning, but about taking exams. Remote proctoring enables taking exams in the comfort of one's home or office without going to the classroom.
With proctoring, the user is required to open up his camera and microphone as well as share his screen while taking the exam. The proctoring application takes care of checking that other tabs aren’t being opened and that nothing fishy is taking place (as much as possible). WebRTC is used to gather all that realtime audio and video data and record it. If needed, these recordings can be accessed by human proctors later on.
It should be noted that for proctoring, there are a lot of requirements around circumventing the ability to cheat on the exam. This includes things like monitoring applications used during the exam, maintaining focus on the exam page, etc. To achieve this, most proctoring solutions end up as PC applications (usually using Electron) which the student needs to install on his machine in order to take the exam. The innards of the proctoring application will end up using WebRTC in a web application – simply for its speed of development and the use of the WebRTC ecosystem.
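As a minimal sketch of that media-gathering piece (my own assumption of how such a client typically uses the browser APIs, not any vendor's actual code), the capture side can be as simple as grabbing camera, microphone and screen and recording them:

```typescript
// Minimal capture sketch for a proctoring-style client - an assumption of the typical
// approach, not code from any real proctoring product.
async function startProctoringCapture(): Promise<MediaRecorder[]> {
  // Camera + microphone of the test taker.
  const userMedia = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });

  // The screen used for the exam (prompts the student for consent).
  const screenMedia = await navigator.mediaDevices.getDisplayMedia({
    video: true,
  });

  // Record both streams; a real product would stream them over a peer connection
  // to a recording server or upload the chunks as they arrive.
  return [userMedia, screenMedia].map((stream) => {
    const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
    recorder.ondataavailable = (event) => {
      // event.data is a recorded chunk - buffer or upload it here.
      console.log("captured chunk:", event.data.size, "bytes");
    };
    recorder.start(5000); // emit a chunk every 5 seconds
    return recorder;
  });
}
```

Everything beyond that – locking the exam tab, watching for focus changes, packaging it all into an Electron shell – sits on top of this basic capture loop.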
Coaching

While similar to classic education, coaching is slightly different. In its essence, these can be 1:1 sessions or small group sessions where issues and challenges in certain areas get fleshed out. In group lessons and 1:1 tutoring, a lot of the focus is on collaboration features. Here, in many cases, it will be more on the video of the participants and the need to bring them together.
Another interesting aspect of coaching is the platform it gets attached to – either directly or indirectly. Coaching often comes bundled as a larger course/training offering, mixed with in-person meetings, reading/presented materials and the coaching sessions themselves.
LMS and SIS systems are usually also missing from coaching platforms. Instead, these platforms will be geared towards flexible use and, at times, an integrated payment system.
Webinars

Webinars are a form of lesson conducted over the internet, mostly used by businesses to assist in marketing and sales efforts. Depending on the level of interactivity of the webinar, WebRTC may or may not be needed.
In the past, webinars were usually conducted via specialized downloadable applications, where the content was mostly slide decks and the voice of the speakers. The interaction with the audience was done via text messages and organized Q&A. Over time, these solutions became richer and more sophisticated, adding video communications as well as the ability of the audience to “join the podium” if and when needed.
Using WebRTC here enabled getting rid of the application download requirement and increased the level of interactivity quite considerably.
The intersection of education and healthcare

Education and healthcare are bound together. I've shown that a bit in my WebRTC in telehealth article, looking at it from the perspective of remote training on healthcare topics. I want to take a different angle on the same topic here. I'll do that by showcasing two interesting use cases I've been privy to a few years back.
#1 – Dance lessons for cancer patients
I heard this one from a dancer who had cancer and recovered. Women with cancer have it hard. Chemo is brutal – it saps their energy and causes hair loss. This means they don't want to go outside that much. Here, being able to bring them remotely into a dance lesson can be a real benefit, especially if they love(d) dancing. They won't go physically – not wanting to meet people outside, plus the stairs that come with it and the energy it takes. But they might be willing to dance.
Remote dance lessons for this niche are beneficial. Not from an educational standpoint but more from a mental health one.
#2 – Video in class for students in hospitals
Another vendor I worked with briefly was assisting school kids who had to be treated in hospital or just stay home for prolonged periods of time (think weeks or months at a time). Their solution was to bring a video conferencing system and rig it in the physical classroom of the kid as well as where he is located, be it home or a hospital bed.
This way, the kid could join the classes as well as stay connected to other classmates during recesses. The main purpose here isn't really the teaching part, but rather to make sure the student stays in contact with peers in his age group and isn't secluded during that period of time.
Is this a use case in education? In healthcare? I can’t really say…
ERT (Emergency Remote Teaching)

The pandemic showed us that remote education is challenging but might be necessary. We were all quarantined for long periods of time, with schools across the globe going remote.
Here in Israel, when clashes with Gaza or Hezbollah in Lebanon flare, schools shift to remote learning. It isn’t frictionless or smooth, but it is the solution we have to try and continue educating kids here.
The most crucial aspect of ERT is that teachers are forced to change their teaching setting with no preparation. In Israel, at least, the pandemic didn't prepare teachers for the current war – it feels like the education system in Israel learned nothing from the pandemic with regard to remote teaching.
Top down decisions; sometimes

Education is interesting. Especially the institutional side of it – schools.
In some countries, decisions are made top down while in others, there’s more autonomy kept at the school level or the district level.
Here are a few things I learned by asking on LinkedIn what tool was used for virtual classes during the pandemic across the globe:
This is by no means complete or accurate, but it shows a few important aspects of education:
In some countries, decisions on the tools to use are made top down, while in others, each district or school is left to autonomously make a decision
Like in many industries, but probably more so, appearances matter. Losing Israel for Zoom was bad publicity. They had to fix that quickly by renewing the service for free. BTW – the damage is already done, my kids are now using Google Meet at school and there likely isn’t a way back
Live, online and in-person

Education is mixed. It isn't all virtual and isn't all in person.
My own WebRTC Courses are online, but not live. The lessons are pre-recorded. I offer monthly AMA meetings as part of them which are online and live.
I took a CPO course last year. It included in person meetings (3 full days), weekly live sessions as well as pre-recorded information.
My kids are now learning some days remotely and some days in person at school.
Some countries had recorded/broadcasted lessons alongside virtual live classes during the pandemic, creating from them a full set of learning materials that students can use moving forward.
The LMS (Learning Management System) used needs to take all these into account, enabling different learning strategies and different content types. Your own service needs to be able to figure out what works best.
Hybrid

The term Hybrid Learning refers to any form that incorporates online and offline learning. This is slightly different from how we define hybrid meetings.
Allowing a student to remotely join a class taking place in person is a real challenge, but one that needs to be dealt with as well. This isn't any different from hybrid meetings in enterprises in terms of the basic need. The difference is likely in size and complexity.
Most classes aren't geared for this. From the placement of the cameras in the class, to the way the lessons are conducted, to the way teachers need to split their attention between in-person and remote students.
In most places, going hybrid in education is an intentional decision that can be made only for select use cases and in a limited number and types of institutions.
Moderation

Who is allowed to join a virtual lesson? Should the teacher approve each student joining? How do you know who is online? Who is actively listening? Should anyone be automatically allowed to speak up? Share their screen? Is there a way to check if the student goes "off the reservation", doing other things in other browser tabs or on his phone in parallel?
All these are hard questions with no good answers.
Moderation in education must take place – especially for group lessons. This has two purposes:
Oftentimes, moderation tools deal with a semblance of order but less with the focus of the teacher or teaching.
The decision in Israel for example to go for Google Meet makes total sense simply because authentication and identity is managed by Google Classroom already. Classroom is acting as the LMS as well, or at least the hub for students and teachers. Having a tighter integration means some of the moderation requirements can more easily be met.
It isn't only about what can be moderated, but how and with what level of friction.
Assessment

How do assessments take place in online learning?
In the traditional classroom, teachers physically saw the students and could easily gauge their level of attentiveness. To that, home assignments and tests were added.
Once learning moves online, technology can assist teachers and students, adding a layer of information to the assessment process. Dashboards can be built to make this data accessible.
Where does WebRTC fit in here? The same way it does in online meetings, where we see today a growing focus on incorporating transcriptions, meeting summaries and action items automatically. Similar LLM/generative AI technologies can be used to glean insights out of online lessons.
In many ways, this isn’t done yet. Probably because we’re still struggling with engagement (see below).
Collaboration and whiteboarding

How is collaboration done in education? Do we need the classic blackboard/whiteboard for teaching? How does that get translated to the digital, remote scenario?
Are we looking here for something as powerful and flexible as a Miro board or something simpler and less feature rich?
Is teaching math or physics similar to teaching languages or literature when it comes to collaboration and whiteboard?
How about Kahoot or similar polling/quiz capabilities? Do we make them engaging or boring as hell?
A lot of thought and energy needs to be diverted towards these types of questions, in trying to figure out what works best to increase engagement and improve the learning experience (and by extension, the learning itself).
The challenge of engagement

How do you define engagement in online synchronous lessons?
Is students opening cameras considered engagement?
Maybe students can be engaged with their cameras turned off?
Getting students to open up their cameras, having them choose to do so and keep the cameras on is a big issue in schools and in higher education.
In my son's school, they are now shifting towards forcing students to turn on their cameras… but allowing them to point that camera at the ceiling
Once you have cameras on, how does a teacher gauge the level of engagement of a student? How does he spare the time to look at 20+ students (36 in Israeli classes) to understand if they are engaged or not, while trying to present his screen to teach something out of his slide deck?
"Feeling the crowd" to understand if a topic needs further explanation or whether the teacher can move on to new topics is harder to achieve online than it is in person.
The challenge of engagement (part 2)

How do you get students engaged?
What type of collaboration solution do you need?
Which experiences should be baked into the solution?
My son decided to take up Russian. His friend speaks Russian with his parents, so he decided he wants to understand when they talk to each other (go figure). He decided independently to install Duolingo on his phone and has been taking their lessons for almost a year now
He can now read Russian and knows quite a few words.
A good friend of mine is learning German using Duolingo. We did a roadtrip in the US in February. I had to hear him learn in our long hours on the road. It was an interesting experience to see it from the side, trying to figure out how this magic happens.
Engagement and “gamification” are a main part of how Duolingo works and how it gets students back into their app over and over again.
We haven’t quite cracked the formula of how to do this well in live virtual classes. There must be a way to get there, and when we find it, we will see great dividends from it.
Asymmetry in remote education

There are teachers and there are students. Who is the system designed to cater to?
A simple question. Answering with “both” is likely going to be wrong most of the time.
I had a meeting at a large and prominent university in Europe a few years back. They wanted to build a video conferencing system for lectures. Have the professor in front of a large digital board showing tens of students joining remotely. Call it extremely expensive and unique. That was before the pandemic, so unrelated to it.
The question I had was who this system is for. Is it to sell students on a great remote experience, or is it for the professor to feel important? I have my own answer here.
You need to decide who the service you are developing is really there to cater to – the teacher and his needs, assuming that students will simply join because they have little choice; or the students, focusing on enticing them to join, collaborate and interact.
Doing both at the same time is a real challenge, and one that most vendors aren’t prepared to take yet.
Figure out who your main user is. The teacher or the students. Or maybe the parents?
Training the educators

Someone needs to teach the teachers how to use the service. This is a real problem, especially when going mainstream.
When the pandemic started and Zoom was selected here in Israel, a lot of videos surfaced explaining how to use Zoom in the context of teaching with it. Last month, when Google Meet was the official solution, you started seeing the same occur for Google Meet here in Israel.
The differences between these two services may seem minor, but they are big for teachers who aren’t technically savvy.
Some private tutors, for example, shy away from remote lessons. Their reason is the inability to focus on the student during the lesson. Multiply that by 20-40 students in a single lesson, many of them acting like prisoners trying to break out and figuring out ways to game the system called a virtual lesson, and you get to the need for teachers who know their way around the service inside and out.
Onboarding and familiarizing teachers with the platform is just as important as the actual service, sometimes even more.
A matter of costs

This one might just be an opinion of mine.
Remote education is a huge market. During the pandemic, it encompassed almost all the world’s students. And yet, the amount of money available to spend per minute is quite low.
In many cases, the deals are large (with a state or a country). Sometimes, they are smallish, with a single school. There's money in these institutions, but in many cases, that money is spent elsewhere.
When going after the education market, it is vital to understand the buying habits and budget of the would-be purchaser beforehand.
Solutions in the education market need to be cost-effective and efficient from a WebRTC infrastructure point of view.
Where can I help, if at all?
Online WebRTC courses, to skill up engineers on this technology
Consulting, mostly around architecture decisions and technology stack selection
Testing and monitoring WebRTC systems, via my role as Senior Director at Cyara (and the co-founder of testRTC)
The post Zooming in on remote education and WebRTC appeared first on BlogGeek.me.
When it comes to WebRTC in telehealth, there are quite a few use cases and a lot of things to consider besides HIPAA compliance.
A thing that comes up in each and every discussion related to telehealth & WebRTC is the value of the call in telehealth. We’ve seen video meetings and calls go down to zero in their cost/value for the user. Especially during the pandemic. So whenever we find a nice market where there is high value for a call, it is heartening. Healthcare is such a place where we can easily explain why calls are important.
But what exactly does WebRTC in telehealth mean? It isn’t just a patient calling a doctor. There is a lot more to it than that. Let’s dive in together to see what we can find.
Table of contentsLike many others, my first real bump with telehealth took place during the COVID quarantines.
My son was sick with a high fever for over a week, and the doctors didn’t help any.
My wife was worried, needing more comfort by knowing someone was looking at him. Really looking at him.
So we used a kind of private service that a hospital near us was offering:
What can I say? It worked as advertised.
As a consultant and a product managerWe have quite a few healthcare clients using our various WebRTC services at testRTC.
Other than that:
That, and conversations with vendors, along with having this article reviewed by a few people who work on telehealth products and integrating their comments as well.
Does that make me an expert in telehealth? No.
But I can fill in the WebRTC angle of telehealth, which is a rather big one.
Finding WebRTC in TelehealthTelehealth for me is about the digital transformation of healthcare services.
It can start small, with things such as scheduling and viewing lab test results. And then it can grow towards virtualizing the actual patient-doctor interaction. Or any other interaction within the healthcare space between one or more people (emphasis on one here – not two).
I’ve listed here the main use cases that came to mind thinking of it in recent days.
Patients and doctorsThe most obvious use case is the patient and doctor scenario.
In this, the doctor visitation itself is remote and virtual.
This can be useful in many situations:
For many of these situations, this is the setup that takes place:
More on that – later.
In general – here’s where you’ll see such solution types deployed:
Hospitals and large healthcare organizations
Clinics hosting multiple doctors
Private clinic of a single doctor
Insurance companies
Also remember that the word doctor is a broad definition of the caretakers involved. These can be nurses, doctors, dietitians and other practitioners offering the treatment/session to the patient remotely.
The other thing to remember is that this is also asymmetric in scarcity: there are a lot more patients than there are caregivers.
Group therapy and counselingThen there’s group therapy.
One where one or more psychologists lead a larger group of patients. The same also applies to dietitians and speech therapists, as well as to groups of smokers, cancer patients and other patient groups.
Here again, the idea and intent is that the patients and the therapists can join remotely to a virtual meeting and conduct that meeting.
The main benefit? Not needing to drive and travel for the meeting and being able to conduct it from anywhere.
Notable here is the fact that this can be enhanced or taken to a slightly different perspective – this can encompass the allied health domain, where AA (Alcoholics Anonymous) groups for example fit in.
Nurse stationsThe nurse station is slightly different from the doctor-patient in my mind.
Here, the patient is situated physically next to the nurse, so the call/meeting isn’t virtual or remote but rather in person. The “twist” is that there is another caregiver or external authority that can be joined remotely to the session if and when needed. Say a doctor with a specialization that might not be available where the patient is located – this can be viewed as a way to democratize access to specialty care.
Envision a nurse moving inside a hospital ward. She has a mobile station moving around with her that can be used to conduct video meetings with doctors. It can also be used for other purposes such as adding a live translator into that interaction with the patient or the patient’s custodian.
The lack of specialized provider access in remote areas can be extremely critical, and here again, virtual meetings can assist. Taking this further, a nurse station of sorts can be placed inside an ambulance providing immediate care – even for cases of strokes or cardiac arrests.
OutpatientsOutpatient clinics belong to hospitals. These are designed for people who do not require a hospital bed or an overnight stay. Sometimes, this can be for minor surgeries. Mostly it is for diagnostics, treatments or follow-ups to hospital admissions.
These clinics are part of the overall treatment that patients get from the hospital or for things that are hard to obtain elsewhere due to scarcity of machinery and/or experience.
Some of the diagnostics done in an outpatient clinic can be done remotely. This reduces wait times and travel times for patients and also allows using doctors joining remotely and not physically inside the clinic.
While similar to the patients and doctors use case, there are differences. The main one being the organization behind it, the logistics and the network. Hospital networks are usually a lot more complex and restrictive when it comes to connectivity of WebRTC traffic, bringing with them a different set of headaches.
Taking care of the elderlyAs the human population is aging in general and people live longer, we’re also getting to a point where elderly care is different from other areas of healthcare. Another aspect of it is the breakdown of the family unit into smaller pieces where elderly people move to assisted living, nursing homes and hospices.
Here, the telehealth solutions seen include also things like:
Remote patient monitoring is another new field. Due to the scarcity of nurses, many hospitals and medical facilities are moving towards virtual monitoring of critical patients who require 24×7 observation.
Operating roomsThe operating room is at the heart of hospital care. It is where surgeons, anesthesiologists, nurses and other practitioners work together on a patient in an aseptic environment.
An obvious requirement here might be to have an expert join remotely to observe, instruct or consult during surgery. That expert can be someone who isn’t in the vicinity of the hospital, helping to bridge the gap in knowledge and expertise between central hospitals in large cities and rural ones.
It can also be used to have an expert who is situated in the hospital join in – entering an operating room requires the caregiver to scrub before entering. This process takes several minutes. By having the expert join remotely from another room at the hospital, we can have him jump from one surgery to another faster. Think of the supervisor of multiple surgery rooms at a hospital or a specialist. Saving scrubbing times can increase efficiency.
Then there is the option of getting external observers into the surgery rooms without having them in the surgery room itself. They can be silent or vocal participants. Joining in as trainees for example, as part of their learning process to become surgeons.
As we advance in this area, we see AR and VR technologies enter the space, either to assist the doctor locally in the surgery or have the external experts join remotely.
TrainingLearning in operating rooms is just part of training in the healthcare domain.
Training can take different shapes and sizes here, and in a way, it is also part of the education market.
Here are some of the examples I’ve seen:
Healthcare is a domain that has lots and lots and lots of devices and machinery. From simple thermometers to CT scanners and surgical robots.
What we are seeing in many areas is the remoting of these devices and machines. Having the patient being diagnosed or treated use a device (or have a device used on him), while having the technician, specialist, nurse or doctor operate or access the data of the device remotely.
This has many different reasons – from letting patients stay at home, to getting specialists from remote areas, to increasing the efficiency of the caregivers (reducing their travel time between visitations).
Here are a few examples:
Stethoscopes, Thermometers, Ophthalmoscopes, Otoscopes, etc. These devices can be made smart – having the patient use them on his own and have their measurements sent to remote nurses or doctors
X-ray, CT, MRI – different types of scans that can be done in one place while the operator or the person deciphering the results is located elsewhere
Surgical robots, that can be observed or operated remotely
Robots roaming hospitals, taking care of menial tasks such as sanitizing equipment and rooms
There is an ongoing increase in adding smarts into devices and the healthcare space is part of that trend. When caregivers need to interact with these devices or access their measurements in real time, this can be done using WebRTC technology.
Simultaneous translation and/or scribesDoctors are a scarce resource. As such, a critical part is having their time better utilized.
There are two telehealth solutions that are aiming to get that done in a similar fashion but totally different focus:
Translation – patients speaking a different language than that of a caregiver need a better way to communicate. Hospitals and clinics cannot always have a translator available on hand. In such cases, having a translator join remotely can be a good solution.
The purpose? Increase accessibility of doctors to patients who don’t speak the doctor’s language.
Scribes – doctors need to keep everything documented. The patient digital record (PDR) is an important part of treatment over time. The writing part takes time and is done in parallel to diagnosing the patient. It is quite common today to have a doctor sit in front of you, typing away on his PC without even looking at the patient (being on the receiving end of that treatment more than once, it does sometimes feel somewhat surreal). Remote scribes can alleviate that by taking part in the doctor visitation, taking care of filling in the PDR. A different approach making headway here is AI-based transcription and the automatic creation of the medical record entries – this alleviates the need for a human scribe.
The purpose? Increase efficiencies and enable doctors to treat more patients.
At the boundary between education and healthcareThen there is the education part adjacent to healthcare. Think of children who are treated for long periods of time where they either need to stay in the hospital or at home for treatment and rest. How do you make sure they don’t lose too much of the curriculum during that time? That they stay connected with their friends in class?
There are solutions for that, in the form of providing a PC at school and a tablet or laptop to the kid to remotely join such sessions.
This is probably more suitable for the education market, but I just wanted to add it here for completeness.
A game of numbersTelehealth is a relatively small WebRTC market.
If you take all physicians in the world, and try to figure out how many there are per the size of the population, you will get averages of 1:500 at most (see Wikipedia as a source for example).
Not all physicians practice telehealth. Of those who do, many do it seldom. The numbers here aren’t big when it comes to minutes or visitations conducted.
Compared to the number of minutes conducted every day on Facebook Messenger, the total telehealth minutes worldwide are minuscule.
The difference here though, is the importance and willingness to pay for each such minute.
When trying to do market sizing or value – be sure to remember this –
Total number of doctors, minutes and visits isn’t that large worldwide
Telehealth minutes are more valuable than social media minutes
WebRTC telehealth and HIPAA complianceWhenever telehealth is discussed, HIPAA compliance is thrown out in the air. At its heart, HIPAA compliance is about security and privacy of patients and their information, all wrapped up in a nice certification package:
Most countries have separate regulations for patient privacy, which are generally stricter than general personal privacy regulations. While there’s more to it than what I’ll share here, it usually boils down to encryption and all the management that goes around it.
WebRTC is encrypted, so all that is left is for the application to not ruin it… which isn’t always simple.
Sometimes, you will find vendors touting E2EE (End-to-End Encryption), which in most WebRTC jargon means the use of media servers that can’t access the media. Oftentimes, these vendors actually mean the use of P2P (Peer-to-Peer), where no media server is used at all.
Oh, and if you are using a third party video conferencing solution (say… a CPaaS vendor), then you will need to obtain a BAA (Business Associate Agreement) from that vendor, indicating that they comply with HIPAA. You will then need to certify your own application on top of it.
Network and firewall restrictionsHospitals and clinics usually end up with very restrictive internet networks. This stems from the need to maintain patient confidentiality and privacy. The increase in ransomware attacks on businesses and healthcare organizations is a source of worry as well.
In such a climate, adding WebRTC telehealth solutions requires opening more IP addresses and ports on the organizations’ firewalls.
A big challenge for vendors is to get their WebRTC applications to work in certain healthcare organizations. Usually because their services get blocked or throttled by deep packet inspection.
Vendors who can make this process smoother and simpler for customers will win the day.
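One practical mitigation worth sketching out: make sure your ICE configuration includes a TURN server that can relay over TLS on port 443 – the one port most restrictive firewalls still leave open for outbound traffic. A minimal sketch, assuming hypothetical TURN server URLs and credentials coming from your own backend:

```javascript
// Minimal sketch: configure WebRTC to fall back to TURN over TLS on port 443,
// which tends to survive restrictive hospital firewalls and deep packet inspection.
// The TURN hostname and credentials below are hypothetical placeholders.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.example-telehealth.com:3478' },
    {
      urls: 'turns:turn.example-telehealth.com:443?transport=tcp',
      username: 'user-from-your-backend',
      credential: 'short-lived-credential'
    }
  ]
  // For the strictest networks, you can force relay-only candidates:
  // iceTransportPolicy: 'relay'
});
```

This doesn’t replace working with the hospital’s IT department on allowlisting your service, but it raises the odds of a session connecting at all.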
Quality of mediaNot being able to see video well in a social interaction is acceptable.
Having a doctor not being able to see the mole on your skin is a totally different thing.
Quality of media can be critical in certain use cases of telehealth. Here, it might be a matter of resolution and sharpness of the image, but it can also be related to the latency of the session. Remote procedures conducted via WebRTC for telehealth might be a bit more sensitive to latency than your common meeting scenario.
Depending upon the use case, you have to prioritize resolution versus frame rate. A still patient needs higher resolution, while surgery or any motion-heavy activity requires a higher frame rate. The ability to switch between these two priorities is also a consideration.
At times, 4K requirements or specific color spaces and audio restrictions may be needed. Especially when dealing with analysis of sensor data from medical devices. These may require a bit more work to integrate properly with WebRTC.
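To make the resolution versus frame rate tradeoff concrete, here’s a minimal sketch using standard browser APIs. Treat it as a hint to the encoder only – exact behavior differs between browsers, and support for setting degradationPreference via setParameters varies:

```javascript
// Minimal sketch: hint to the browser whether sharpness or smooth motion matters
// more for this telehealth session.
async function setupExamVideo(preferDetail) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const [track] = stream.getVideoTracks();

  // 'detail' favors resolution (examining skin, wounds, rashes),
  // 'motion' favors frame rate (observing movement or a procedure).
  track.contentHint = preferDetail ? 'detail' : 'motion';

  const pc = new RTCPeerConnection();
  const sender = pc.addTrack(track, stream);

  // Tell the encoder what to sacrifice first when bandwidth or CPU runs short.
  const params = sender.getParameters();
  params.degradationPreference = preferDetail ? 'maintain-resolution' : 'maintain-framerate';
  await sender.setParameters(params);

  return pc;
}
```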
Asymmetric nature of users and devicesOne tidbit about telehealth is that sessions are almost always asymmetric in nature and for the majority, they are going to end up as a 2-way conversation.
By asymmetric I mean that the users have different devices:
This asymmetric nature affects how telehealth applications need to be designed and built, taking special care around permissions, privacy and the unique user experience of the various users.
Medical devices, sensors and telemetryModern healthcare has the most variety of devices and sensors out there from all industries (leaving out the defense industry). These devices are now being digitized and modernized. Part of this modernization is adding communication channels for them, and even more recently – being able to virtualize and remote their use – either partially or fully.
Medical devices sometimes generate images. Other times an audio stream. Or a video feed. Or other sensory data and information. WebRTC enables sending such data in real time, or the telehealth application can send this data out of band, via Websockets or HTTP messages.
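For the in-band option, WebRTC’s data channel is a natural fit for low-latency readings that ride alongside the audio and video of the visit. A minimal sketch – the device type and message format here are made up for illustration, not taken from any real device SDK, and `patientPc`/`clinicianPc` are assumed to be already-signaled peer connections:

```javascript
// Minimal sketch: send periodic readings from a (hypothetical) connected medical
// device over a WebRTC data channel, next to the existing audio/video session.
// Patient side:
const vitalsChannel = patientPc.createDataChannel('vitals', { ordered: true });

vitalsChannel.onopen = () => {
  setInterval(() => {
    // In a real application these values would come from the device itself.
    const reading = {
      type: 'pulse-oximeter',   // hypothetical device type
      spo2: 97,                 // illustrative values, not real measurements
      pulse: 72,
      timestamp: Date.now()
    };
    vitalsChannel.send(JSON.stringify(reading));
  }, 1000);
};

// Clinician side – the remote end of that same session:
clinicianPc.ondatachannel = (event) => {
  event.channel.onmessage = (msg) => {
    const reading = JSON.parse(msg.data);
    console.log('New reading from patient device:', reading);
  };
};
```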
It can be as simple as taking a measurement of a patient remotely, while he is holding the medical device and the nurse or doctor observes him and the results sent over inside the application.
That can progress to passively overseeing a procedure and commenting on it in a video session. Think of a doctor or a nurse consulting remotely with a specialist while giving a treatment or performing a surgical procedure.
And it can go to the extreme of remotely giving the procedure. A radiologist operating the CT machine remotely for example.
How these get connected and where WebRTC fits exactly is a tricky challenge. There’s latency to deal with, connectivity to physical devices, oftentimes without the ability to replace them, regulatory issues – this space has quite a few obstacles, which are also great barriers to entry and moats against competitors if one invests the effort here.
SaaS, CPaaS & open source: Build vs BuyTelehealth comes in different shapes and sizes.
Many of the CPaaS vendors have gone ahead and made themselves easy to use for telehealth, mainly by supporting HIPAA compliance requirements.
I’ve seen various telehealth solutions built on CPaaS while others build their own service from scratch using open source components. There is no single approach here that I can suggest, as each has its own advantages and challenges.
One of the biggest challenges in adopting CPaaS for telehealth is upholding the patient’s privacy. Functions of the CPaaS platform require it to know certain elements of PHI (Personal Health Information), especially if call recordings are implemented. At times, a telehealth platform may expose a patient name or other information to the CPaaS implementation. These invite additional security risks and may violate patient privacy laws. A BAA here helps, but may not be enough, since most patient privacy laws require exposing only the bare minimum needed to an external entity (in this case, the CPaaS vendor) when it comes to PHI.
Here, vendors should look at their core competencies and the actual requirements they have from their WebRTC infrastructure. And as always, my suggestion is to go with CPaaS unless there is a real reason not to.
Where can I help, if at all?
Online WebRTC courses, to skill up engineers on this technology
Consulting, mostly around architecture decisions and technology stack selection
Testing and monitoring WebRTC systems, via my role as Senior Director at Cyara (and the co-founder of testRTC)
The post WebRTC in telehealth: More than just HIPAA compliance appeared first on BlogGeek.me.
I’ve been meaning to write about a different topic about WebRTC, but somehow, this was more important.
There’s a war going on here where I live between Israel and Hamas. Or Israel and Gaza. Or Israel and the Palestinians. Or Israel and Iran’s proxies. Or Israel and Muslim extremists.
Or all of the above if we’re frank with ourselves.
We haven’t invited this war or wanted it, but it is what we need to face and deal with.
Others are explaining the situation better than I can on social media sites and in English. Here is one such example:
To those of you who reached out to me asking if I am ok, if me and my family are safe, I answered that we’re ok’ish mostly.
Well… I am not ok.
I. Am. Not. Ok.
No. I am not ok.
I am not ok.
I am not ok.
I am not ok.
I am not ok.
Physically? I am fine.
The rest? Not so much
–
If you know me or have been to this site before, then you know a bit about Israelis already.
We are here to create and innovate. To bring good to the world and to improve things.
In the 10+ years I’ve been running this blog, I shared my thoughts and helped my industry as much as I could. Many times, not asking for anything in return. It is what I do.
Two years ago, me and my other Israeli co-founders sold testRTC. Ever since I’ve been asking myself what I should do next.
One of my dreams recently has been to start teaching. Kids. Older ones. Show them the world of technology and entrepreneurship and what is possible. Be a mentor. Raise the next generation of creativity and innovation of Israelis.
I believe Israelis are a net positive to the world.
I act like this every day. I teach my kids in that way. I see that the floundering and ill equipped education system we have here in Israel does the same. There is no hatred in our teachings or in the way we raise our kids.
–
Palestinians. Hamas. Extremist Muslims.
How can they slaughter kids in cold blood? Murder whole families? Kill without discrimination whole communities? Then go and show it to the world on social media. And then praise it and celebrate on the streets.
This is inhumane.
In many ways, I see them as a net negative to the world.
I just can’t see it otherwise at the moment.
–
People who ask me what they can do to help – nothing. And everything.
Our dysfunctional government will find a way to help, and until then, the civilians here and the soldiers will figure it out. We always do. We don’t have a choice.
I don’t really need anything from you. We’re Israelis. We’ll survive. We have done so ever since the Holocaust and we know we can only depend on ourselves. So thanks for asking, but I don’t need a thing at the moment.
Here’s a few picks from the news:
What can you do?
Understand that there aren’t really two sides to this story.
This conflict isn’t symmetrical in any way. It is between people who want to live and people who want to kill and ruin.
If you don’t believe me, then just go on social media and see what the Palestinians are doing. How they parade dead Israeli soldiers, small kids and elderly on the streets of Gaza for all their people to see and enjoy. This is the 21st century.
So no. I am not ok.
We will prevail. And in the meantime, I will be working. Different than usual, but still working. Still making my small and modest contribution to the world. Trying to touch and better those I interact with.
The post No. I am not ok appeared first on BlogGeek.me.
WebRTC has its place in surveillance and security applications. It isn’t core to these industries, but it is critical in many deployments.
Surveillance has become near and dear to my heart. I had a few vendors consult with me in the past. There are a few using testRTC. And then there’s the personal level. The system we have in our apartment building.
This got me to think quite a lot about WebRTC in surveillance tech lately.
Table of contentsI live in an apartment building here in Israel:
23 floors
91 apartments
2 main entrances (and another side one)
3 elevators
3 levels of underground parking
…
And yes. We have a surveillance camera system. Like all of the other apartment buildings in my neighborhood:
The view from my apartment on a nice day

A year ago, I was in charge of the vendor selection and upgrade process for our cameras. We switched from an analog system to a hybrid analog/IP one.
This month, we’re looking into upgrading an elevator camera to an IP one, as well as adding WiFi to our underground parking. Having a chat with one of the vendors we’re reaching out to, he was fascinated with my work on WebRTC and the potential of using it for application-less viewing of cameras.
I’ve had my share of meetings and dealings with vendors building different types of surveillance and security solutions. From private security solutions to large scale, enterprise visual intelligence ones. Obviously, the subject of these interactions was WebRTC.
I am not an expert in surveillance, so take the market overview with a grain of salt
That said, I do know my way with WebRTC and where it fits nicely
Here are some of the things I learned over the years
Security and surveillance use cases in WebRTCI’ll start with the obvious – cameras, security and surveillance have multiple use cases. Some of them can be seen as classic to this domain while others are slightly newer or serve a specialized niche. Each of these use cases is a world unto its own, with its own requirements from WebRTC and the types of solutions emerging in it.
Small scale / cheap multiple surveillance camerasThis is where I’d frame my own experience of our apartment building. A system that requires 32 or fewer video cameras, spread across the location, connected to a DVR (Digital Video Recorder) or an NVR (Network Video Recorder).
In essence, you go install the cameras in sensitive locations, wire them up (with an analog cable, IP or even wireless) to the media server that is located onsite as well. That media server is a DVR if it is a closed loop system or an NVR if you’re living in modern times. I’ll just refer to these two as xVR from here on.
Once there, you hook them up to a local monitor that nobody ever looks at, and let the owner connect remotely from his PC or mobile phone.
Is WebRTC needed here? Not really.
Surveillance cameras today use RTP (and sometimes also RTSP). These are the new ones. Old ones are pure analog. They connect to that xVR media server, which handles them quite well today. It did so also before WebRTC came to our lives. The user then accesses the system to play the videos remotely using a dedicated application, which again, existed before WebRTC.
Since there’s no specific requirement to access this through a web browser, the use of WebRTC here is questionable.
You might say WebRTC would make things easier, but hey – if it ain’t broken, don’t fix it
These solutions are purchased from local vendors that install such systems. The buyer will usually reach out to an installer that will pick and choose the cameras and the surveillance system for the buyer. The buyer cares less about the technology and more about the local vendor’s ability to install and maintain the system when needed.
Enterprise / large scale surveillanceLarge scale surveillance systems for enterprises are more of the same as the small scale ones, but with a few main differences:
The two things that are making headway in this industry?
Like the small scale solutions, here too the buyer will look for local installers. These will be the local integrators who bring the systems and install them. At times, the decision of brand will come from the buyer, though this is less likely. It is important to remember that a considerable part of the cost goes towards the setup and installation and not necessarily to the cost of the equipment itself.
Personal/home surveillanceThis one is the residential one. It is a B2C space where the buyer is a person buying a camera for his own home security. The decision is made on price or brand mostly.
Here you’ll find also solutions that make use of old smartphones and tablets as cameras, or something like the one we purchased a few years back when our kids were younger:
A digital peephole camera

It gave them the ability to see who was outside our door back when they were shorter.
Here too, the market is going in multiple directions:
Where does WebRTC play here? It might make things smoother to develop for the companies, but this doesn’t seem to be the case.
One thing that goes through all use cases above, is the existence of another solution – the video doorbell. Taken into buildings, this becomes an intercom system, which again – can make use of WebRTC. And why? Because it needs bidirectional support for audio at the very least, making WebRTC a suitable alternative.
Personal securityA totally different niche is the one of personal security.
This manifests itself as apps (and services) people can use to increase their security while going about in their daily tasks. Some of these apps connect you to friends and family while others to personal security agents. The WebRTC requirement here is the same for all cases – be able to conduct voice and video calls in real time.
Taken more broadly from the personal level, the same can be implemented in campuses, cities, events, etc.
Unique (?) challenges for WebRTC with camera hardwareThere are some unique challenges for WebRTC when it comes to the surveillance space, and that’s mostly a matter of hardware.
Most of these issues won’t plague a software solution. But here, we end up in the real world simply because someone needs to go and install the physical cameras.
When figuring out the hardware platform to use, it is important to think of future trends and technology improvements that affect your implementation
In the case of surveillance, there’s WebRTC, future video codecs (AV1) and machine learning in the vision domain to think about. Probably also programmable photography, which has been bringing innovation to smartphones for a few years now
Ingress, egress and the concept of real timeWhere to place WebRTC in the solution?
Since I write a lot about WebRTC, and this article is mostly about WebRTC in surveillance markets, it is THE biggest question to answer here.
There are two different places, and both are suitable, but not necessarily together in the same system.
Surveillance needs real time. Sometimes.
Egress
In our own residential building, I seldom care about the live feed from the cameras. When I do, it is to check if the front door to the building is open or not, or if there’s some area that got dirty (usually dog pee). Most of the time is spent rewinding to figure out who caused the problem. Nothing here is considered real time in nature or requires sub second latency.
Elsewhere, real time might be critical on the viewer side (egress), which brings with it the question of whether WebRTC fits here well.
Ingress
Web cameras that directly stream out WebRTC to the world (or the xVR). Is that a benefit? What’s the value of it versus the existing camera technologies used?
I am not quite for or against this, as I am not really sure here. I’d say that a benefit here can be in the fact that it makes the whole technology stack simpler if you end up using WebRTC end-to-end instead of needing to switch protocols from the camera to the viewer. Just remember here that rewind and playback will likely require something other than WebRTC.
The main advantage of WebRTC here might be the removal of the need to transcode and translate across protocols and codecs. It makes xVR software simpler to write and reduces a lot of its CPU requirements, making the systems lighter and cheaper (the xVR – not the camera itself).
One more thing to think of is cameras that also require bidirectional audio. Because a security guard wants to announce or warn perpetrators, or because this is a video doorbell. There, WebRTC fits nicely, though again – not mandatory (I’d still try using it there more than elsewhere).
Going to introduce WebRTC to a surveillance system? Great. Check first where exactly within the whole architecture WebRTC fits and ask yourself why
Mobile or desktop?Another important aspect of a surveillance system is where people go to watch the videos.
When we installed our own system, we were told that the mobile app is better than the PC app. Both were applications, but somehow for consumers it meant using the smartphone. It sucks. But yes – it sucks more on the desktop. Which is crazy, considering that what you’re trying to do is watch output coming from 4K cameras in order to identify people.
Then again, who is your customer?
If this is a large enterprise, where there’s going to be a fancy video wall of video feeds with a bored security guard looking at it, then should this be an application, or would it be preferable to use a web application for it, with the help of WebRTC? It seems that much of the industry on the client side is looking for lightweight solutions that require fewer software installations, favoring browsers and… WebRTC.
And if you’re already doing WebRTC for one egress destination, you can use it for all others – browser and app based.
One more thing to consider – it is easier today to develop a web application than it is a native PC application. Cheaper and faster. Which means that supporting WebRTC if the desktop is your primary viewing device might be the right decision to make.
See if there’s a strong need for a zero-install or desktop viewing. This might well lead you towards WebRTC on the egress side
The age of Artificial Intelligence in surveillance techThe biggest driver in this industry is machine learning and artificial intelligence. And not necessarily the Generative AI kind, but rather the kind that deals with object classification.
The challenge with surveillance is watching the damn cameras. You need eyeballs on screens. The good old motion detection removes a lot of noise (or more accurately, static), but it leaves much to be desired.
One of the elevators in my building, along with the video you get most hours of the day – empty. The bar at the bottom with the blue stripes marks when there’s actual movement.
Using machine learning, it will be easier to search for dogs, people, colors, items and other tidbits to figure out times of interest in the thousands of hours of boring videos, as well as act as “Google search” on recorded video feeds.
Doing all that in the cloud is possible, but expensive and tedious – how do you ship all the video, decode it, process it again, etc.
Doing it on the edge, on the device itself (the camera or the xVR) is preferable, but requires new hardware, which means another technology leap and refresh.
WebRTC isn’t core for surveillance but it is criticalThis is something to remember.
WebRTC isn’t core to surveillance. You don’t really need it to get surveillance cameras working, installed or connected to their xVR media servers. You don’t even need it to view videos – either “live” or as playback.
But, and that’s a big one – in some cases, having WebRTC is critical. Because your customer may want to be able to use web browsers and install nothing. He may want to be able to get bidirectional media. There might be a need to get video feeds that are at sub second latencies.
For these, WebRTC might not be a core competency, but it is critical to the successful delivery and deployment of your product. This translates into having a need to have that skill set in your team or be able to outsource it to someone with that skill set.
Where can I help, if at all?
Online WebRTC courses, to skill up engineers on this technology
Consulting, mostly around architecture decisions and technology stack selection
Testing and monitoring WebRTC systems, via my role as Senior Director at Cyara (and the co-founder of testRTC)
The post Fitting WebRTC in the brave new world of webcams, security, surveillance and visual intelligence appeared first on BlogGeek.me.
How to think and plan for CPaaS vendor lock-in when it comes to your WebRTC application implementation.
How can/should CPaaS vendors compete on winning customers? More than that, how can/should CPaaS vendors poach customers from other CPaaS vendors?
What prompted this article is the various techniques CPaaS vendors use and what they mean to customers – how should customers react to these techniques. I’ll focus on the Video API part of CPaaS – or to be more specific, the part that deals with WebRTC implementation.
Table of contentsFor me CPaaS (or Communication Platform as a Service) is a service that lets companies build their own communication experiences in a flexible manner. Usually done via APIs and requires developers, but recently, also via lowcode/nocode interactions (such as embedding an iframe).
A CPaaS vendor ends up defining its own interface of APIs which his customers are using to create these communication experiences.
That API interface is proprietary. There is no standard specification for how CPaaS APIs need to look or behave. This means that if you used such an API, and you want to switch to another CPaaS vendor – you’re going to need to do all that integration work all over again.
Think of it like switching from an Android phone to an iPhone or vice versa:
In a way, you want the same experience (only better), but there’s going to be a learning curve and an adaptation curve where you familiarize yourself with the new CPaaS vendor and “make yourself at home”.
The vendor lock-in part is how much effort and risk you will need to invest and overcome in order to switch from one vendor to another – to call that other vendor your new home.
Vendor lock-in has 3 aspects to it in CPaaS:
Vendor lock-in is scary. Not because of the technical effort involved but because of the risks from the unknowns. The more years and the more interfaces, scenarios and code you have running on a CPaaS vendor, the higher the lock-in and risk of migration you are at.
The innovation in WebRTC that CPaaS is “killing”Before WebRTC, we had other standards. RTP and RTCP came a lot before WebRTC.
We had RTMP, RTSP, SIP and H.323.
The main theme of all these standard specifications was that their focus has always been about standardizing what goes on over the network. They didn’t care or fret about the interface for the developer. The idea behind this was to enable using the standard on whatever hardware, operating system and programming language. Just read the spec and implement it any way you like.
WebRTC changed all that (ignoring Flash here). We now have a specification where the API interface for the developer of a web application is also predefined.
WebRTC specifies what goes on the network, but also the JavaScript API in web browsers.
Here’s how I like explaining it in my slides:
One of the main advantages of WebRTC is that a developer who uses WebRTC in one project for one company can relatively easily switch to implement a different WebRTC project for another company. (that’s not really correct, but bear with me a little here)
We now could think of WebRTC just like other technologies – someone proficient in WebRTC is “comparable” to someone who worked with Node.js or SQL or other technologies. Whereas working with SIP or H.323 begs the question – which framework or implementation was used – learning a new one has its own learning curve.
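To make that concrete, here’s roughly what the standardized browser API looks like – a minimal sketch, with signaling left out since that part was never standardized (the `signaling` object here is an assumption, standing in for whatever transport you use):

```javascript
// The standardized WebRTC JavaScript API – the same calls in every browser,
// regardless of which company's project you happen to be working on.
async function startCall(signaling) {
  const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.example.org' }] });

  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // How the offer reaches the other side is NOT part of the standard – and CPaaS
  // SDKs hide all of the above behind their own proprietary interfaces anyway.
  signaling.send(pc.localDescription);
  return pc;
}
```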
Enter CPaaS…
And now the WebRTC API interface is no longer relevant. The CPaaS vendor’s SDK has its own interface indicating how things get done. And these may or may not bear any resemblance to the WebRTC API. Moreover – it might even try very hard to hide the WebRTC stack implementation from the developer.
This piece of innovation, where a developer using WebRTC can jump into new code of another project quickly is gone now. Because the interfaces of different CPaaS vendors aren’t standardized and don’t adhere to the standard WebRTC API interface (and they shouldn’t be – it isn’t because they are mean – it is because they offer a higher level of abstraction with more complex and complete functionality).
Not having the same interface across CPaaS vendors is one of the reasons we’ve started down this rabbit hole of exploring what CPaaS vendor lock-in is exactly.
CPaaS vendor poaching techniques and how to react to themEvery so often, you see one or more CPaaS vendors trying to grab a bit more market share in this space. Sometimes, it is about enticing customers who want to start using a CPaaS vendor. Other times it is focused on trying to poach customers from other CPaaS vendors.
When looking at the latter, here are the CPaaS vendor poaching techniques I’ve seen, how effective they are, and what you as a target company should think about them.
#1 – Feature list comparisonsThe easiest technique to implement (and to review) is the feature list comparison.
In it, a CPaaS vendor would simply generate and share a comparison table showing how its feature set is preferable to the popular alternatives.
For a company looking to switch, this would be a great place to start. You can skim through the feature list and see exactly what’s there in the platform you are currently using and the one you are thinking of switching to.
When looking at such a list, remember and ask yourself the following questions:
I’ve had my fair share of reading, writing and responding to comparison tables. A long time ago (pre-WebRTC), we received inputs that our competitor can do almost 10 times the number of concurrent calls we are able to do with much higher throughput. Obviously, we created a task force to deal with it. The conclusion was simple – the competitor didn’t measure the network time at all – just CPU time in the machine. We weren’t measuring the same thing and his choice of metric meant he always looked better
Your role in this? To read between the lines and understand what wasn’t written. Always remember that this isn’t an objective comparison – it is highly skewed towards the author of it (otherwise, he wouldn’t be publishing it)
#2 – Performance comparisonsHere the intent of the CPaaS vendor is to show that his platform is superior in its performance. It can offer better quality, at lower bitrates and CPU use for larger groups.
If a vendor does it on his own, then potential customers will immediately view the results as suspect. This is why most of them use third party objective vendors to do these performance comparisons for them (at a cost).
We’ve done this at testRTC a couple of times – some publicly shared (for this one, I’ve placed my own reputation and testRTC’s reputation on the frontline, insisting not to name the other vendors) and others privately done. It is a fun project since it requires working towards a goal of figuring out how different CPaaS vendors behave in different scenarios.
Zoom did this as well, comparing itself to other CPaaS vendors. Agora answered in kind with a series of posts comparing themselves back to Zoom (where Zoom didn’t look as shiny).
Just remember a few things when reading such comparisons:
In the end, the fact that a CPaaS vendor performs better than another in a scenario you don’t need tells you nothing. Make sure to give more weight to the results of actual scenarios relevant to you, and be sure you understand what is really being compared
#3 – Guides, how-to’s and success storiesHow do you smooth the migration of a customer from a different CPaaS vendor to your own? You write a migration document about it. A guide. Or a how-to. Or you get a testimonial or a success story from a customer willing to share publicly that he migrated and how life is so much better for him now.
These are mainly targeted at raising the confidence level for those who are contemplating switching, signaling them that the process isn’t risky and that others have taken this path successfully already.
As someone thinking of moving from one vendor to another, I’d seriously consider reaching out to the CPaaS vendor and ask the hard questions:
Anecdotes and recipes are nice. What you are after is having more data points.
Read these guides and success stories. Try reading between the lines in them. Check if you have any open questions and then ask these questions directly. Gather as much information as you can to get a clearer picture
#4 – Reference applicationsI wasn’t sure if this fits for migrating customers because it is a bit broader in nature. But here we are
In many cases, CPaaS vendors have reference applications available. Usually hosted on github. Just pull the code, compile, host and run it. You get an app that is “almost” ready for deployment.
You see how easy that was? Think how easy it is going to be to migrate to us with this great reference.
Remember a few things here:
From my point of view, reference apps are nice to get a taste of what’s possible and how the API of a CPaaS vendor gets used. But that’s about it. They are unlikely to be useful during the migration process itself
#5 – Shims and adaptorsThey say imitation is the highest form of flattery. If that is true, then shims and adapters would fit well here.
In CPaaS, the most common one was supporting TwiML (that’s Twilio’s XML “language” for actions on telephony events). There’s also the idea/intent of having the whole API interface of another CPaaS vendor (or parts of it) supported directly by the poacher. The purpose of which is to make it easy to switch over.
Clearing things up a bit:
The result? If you’re using vendor A, theoretically, you can take the shim created by vendor B and magically, without any investment, migrate to vendor B. Problem solved
While this looks great on paper, I am afraid it has little chance of holding up in the real world. Here’s why:
The thing is that using a shim still means a ton of testing and headaches – ones that are hard to overcome.
If I had to switch between vendors, I’d ignore such shims altogether. For me they’re more of a trap than anything else.
Someone suggesting you use their shim for switching over to their CPaaS? Ignore them and just analyze what needs to be done as if there’s no shim available. You’ll thank me later
Build vs Buy – my first preference is ALWAYS buy (=CPaaS)We’ve seen 5 different techniques CPaaS vendors use to try and poach customers from one another. For the most part, they are of the type of “buyers beware”. And yet, we do need to migrate from time to time from one CPaaS vendor to another. Market dynamics might force us to do so or just the need to switch to a better platform or offering.
Does that mean it would be best to go it alone and build your own platform instead of using a third party CPaaS vendor?
No.
Vendor lock-in isn’t necessarily a bad thing. My first preference is always to adopt a CPaaS vendor. And if not to adopt one, then to articulate very clearly why the decision to build is made.
What should you do when you start using a CPaaS vendor to make the transition to another vendor (or to your own platform) smoother in the distant future? Here are a few things to consider.
The post Solving CPaaS vendor lock-in (as a customer and as a CPaaS vendor) appeared first on BlogGeek.me.
Open Broadcast Studio or OBS is an extremely popular open-source program used for streaming to broadcast platforms and for local recording. WebRTC is the open-source real time video communications stack built into every modern browser and used by billions for their regular video communications needs. Somehow these two have not formally intersected – that is […]
The post WebRTC cracks the WHIP on OBS appeared first on webrtcHacks.
How do you choose the right architecture for a WebRTC audio conferencing service?
Last month, Lorenzo Miniero published an update post on work he is doing on Janus to improve its AudioBridge plugin. It touched a point that I failed to write about for a long time (if at all), so I wanted to share my thoughts and views on it as well.
I’ll start with a quick explanation – Lorenzo is adding to Janus a lot of layers and flexibility that is needed by developers who are taking the route of mixing audio in WebRTC conferences. What I want to discuss here is when to use audio mixing and when not to use it. And as everything else, there usually isn’t a clear cut decision here.
Table of contentsGroup calls in WebRTC can take different shapes and sizes. For the most part, there are 3 dominant architectures for WebRTC multiparty calling: mesh, mixing and routing.
I’ll be focusing on mixing and routing here since they scale well to hundreds of users or more.
Let’s start with the basics.
Assume there’s a conversation between 5 people. Each of these people can speak his mind and the others can hear him speaking. If all of these people are remote with each other and we now need to model it in WebRTC, we might think of it as something like this illustration:
This is known as a mesh network. Its biggest disadvantage for us (though there are others) is the messiness of it all – the number of connections between participants grows quadratically with the number of users. The fact that we need to send out the same audio stream to all participants individually is another huge disadvantage. Usually, we assume (and for good reasons) that the network available to us is limited.
The immediate obvious solution is to get a central media server to mix all audio inputs, reducing all network traffic and processing from the users:
This media server is usually called an MCU (or a conferencing bridge). Users here “feel” as if they are in a session with only a single entity/user and the MCU is in charge of all the headaches on behalf of the users.
This mixer approach can be a wee bit expensive for the service provider and at times, not the most flexible of approaches. Which is why the SFU routed model was introduced, though mostly for video meetings. Here, we try to enjoy both worlds – we have the SFU route the media around, to try and keep bitrates and network use at reasonable levels while trying to reduce our hosting and media processing costs as service providers:
The SFU has become commonplace and the winning architecture model for video meetings almost everywhere. Voice only meetings though, have been somewhere in-between. Probably due to the existence and use of audio bridges a lot before WebRTC came to our lives.
This begs the question then, which architecture should we be using for our audio in group calls? Should we mix it in our media servers or just route it around like we do with video?
Before I go ahead to try and answer this question, there’s one more thing I’d like to go through, and that’s the set of media processing tools available to us today for audio in WebRTC.
Audio processing tools available for us in WebRTCEncoding and decoding audio is the baseline thing. But other than that, there are quite a few media processing and network related algorithms that can assist applications in getting to the desired scale and quality of audio they need.
Before I list them, here are a few thoughts that came to mind when I collected them all:
There is an RTP header extension for audio level. This allows a WebRTC client to indicate what is the volume that can be found inside the encoded audio packet being sent.
The receiver can then use that information without decoding the packet at all.
What can one do with it?
Decide if you need to decode the packet at all or just discard it if there’s no or little voice activity or if the audio level is too low (no one’s going to hear what’s in there anyway).
You can replace it with DTX (see below) or not forward the packet in a Last-N architecture (see below).
Not mix its content with other audio channels (it doesn’t hold enough information to be useful to anyone).
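Here’s a minimal browser-side sketch of the above, assuming an existing RTCPeerConnection `pc`. In the browser you can observe the negotiated extension and the reported levels; an SFU would read the very same value straight from the RTP header extension, without decoding the Opus payload at all:

```javascript
// Minimal sketch: check that the audio-level header extension was negotiated and
// poll the reported audio level of each incoming audio stream.
const AUDIO_LEVEL_URI = 'urn:ietf:params:rtp-hdrext:ssrc-audio-level';

pc.getTransceivers().forEach((t) => {
  if (!t.receiver.track || t.receiver.track.kind !== 'audio') return;

  const negotiated = t.receiver.getParameters().headerExtensions
    .some((ext) => ext.uri === AUDIO_LEVEL_URI);
  console.log('audio-level extension negotiated:', negotiated);

  // audioLevel is reported between 0.0 (silence) and 1.0 (loudest).
  setInterval(() => {
    for (const source of t.receiver.getSynchronizationSources()) {
      console.log('ssrc', source.source, 'audioLevel', source.audioLevel);
    }
  }, 1000);
});
```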
DTXIf there’s nothing really to send – the person isn’t speaking but the microphone is open – then send “silence” but with fewer packets over the network.
That’s what DTX is about, and it is great.
In larger meetings, most people will listen and not speak over one another. So most audio streams will just be “silence” or muted. If they aren’t muted, then sending DTX instead of actual audio reduces the traffic generated. This can be a boon to SFUs that end up processing fewer packets.
An SFU media server can also decide to “replace” actual audio it receives from users (because it has a low audio level or because of Last-N decisions it is making) with DTX data when routing media around.
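In browsers, Opus DTX is usually switched on through the SDP. Here’s a minimal sketch – crude SDP munging, but that is still the common way to toggle it today (assumes an existing RTCPeerConnection `pc`, inside an async function):

```javascript
// Minimal sketch: enable Opus DTX by appending usedtx=1 to the Opus fmtp line.
function enableOpusDtx(sdp) {
  // Find the Opus payload type from the rtpmap line (e.g. "a=rtpmap:111 opus/48000/2")
  const rtpmap = sdp.match(/a=rtpmap:(\d+) opus\/48000\/2/i);
  if (!rtpmap) return sdp;
  const pt = rtpmap[1];

  // Append usedtx=1 to that payload type's fmtp line, unless it is already there
  const fmtpRegex = new RegExp(`a=fmtp:${pt} .*`);
  return sdp.replace(fmtpRegex, (line) =>
    line.includes('usedtx=') ? line : `${line};usedtx=1`);
}

const offer = await pc.createOffer();
await pc.setLocalDescription({ type: 'offer', sdp: enableOpusDtx(offer.sdp) });
```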
PLCPackets are going to be lost, but there is still content that needs to be played back to the user.
You can decide to play silence, a repeat of the last heard packet, lower its volume a bit, etc.
This can be done both on the server side (especially in the case of an MCU mixer) or on the client side – where such algorithms are implemented in the browser already. SFUs can ignore this one, mostly since they don’t decode and process the actual media anyway.
At times, these can be done using machine learning, like Google’s proprietary WaveNetEQ, which tries to estimate and predict what was in the missing packet based on past packets received.
Packet loss concealment isn’t great at all times, but it is a necessary evil.
RTX & NACKTheoretically, you could use retransmissions for lost packets.
WebRTC does that mostly for video packets, but this can also find a home for audio.
It is/was a rather neglected area because PLC and Opus inband FEC techniques worked nicely.
For the time being, you’re likely to skip this tool, but it is one I’d keep an eye on if I were highly interested in audio quality advancements.
FEC and REDForward Error Correction is about sending redundant data that can be used to reconstruct lost packets. Redundancy coding is what we usually do for audio, which is duplicating encoded frames.
Audio bandwidth requirements are low, so duplicating frames doesn’t end up taxing much of our network, especially in a video call.
This approach enables us at a “low cost” to gain higher resiliency to packet losses.
This can be employed by the client sender, or even from the server side, beefing up what it received – both as an SFU or an MCU.
Check Philipp Hancke’s talk at Kranky Geek about Advances in Audio Codecs
Then there are the nuances and headaches of when to duplicate and how much, but that’s for another article.
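As a rough sketch of how this looks from a browser client (support for audio/red varies between browsers, and Opus in-band FEC is negotiated separately via the useinbandfec fmtp parameter, which browsers typically offer by default):

```javascript
// Minimal sketch: prefer audio/red (redundant audio) when the browser supports it.
// Assumes an existing RTCPeerConnection `pc`; if RED isn't available, nothing changes.
const transceiver = pc.addTransceiver('audio');
const { codecs } = RTCRtpReceiver.getCapabilities('audio');

const red = codecs.filter((c) => c.mimeType.toLowerCase() === 'audio/red');
const others = codecs.filter((c) => c.mimeType.toLowerCase() !== 'audio/red');

if (red.length > 0) {
  // Putting RED first makes it the preferred audio codec in the negotiation.
  transceiver.setCodecPreferences([...red, ...others]);
}
```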
Last-NA known technicality in WebRTC’s implementation is that it only mixes the 3 loudest incoming audio channels before playing back the audio.
Why 3? Because 2 wasn’t enough and 4 seemed unnecessary is my guess. Also, the more sources you mix, the higher the noise levels are going to be, especially without good noise suppression (more on that below).
Well… Google just decided to remove that restriction. Based on the announcement, that’s because the audio decoding takes place in any case, so there’s not much of a performance optimization not to mix them all.
So now, you can decide if you want to mix everything (which you just couldn’t before) or if you want to mix or route only the few loudest (or most important) audio streams if that’s what you’re after. This reduces CPU and network load (depending on which architecture you are using).
Google Meet for example, is employing Last-3 technique, only sending up to 3 loudest audio streams to users in a meeting.
Oh, and if you want to dig deeper into the reasoning, there’s a nice Jitsi paper from 2016 explaining Last N.
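To illustrate the idea, here’s a generic sketch of Last-N selection logic – not tied to any specific SFU’s API, and assuming the audioLevel of each incoming stream comes from the ssrc-audio-level header extension discussed above:

```javascript
// Generic sketch of Last-N selection inside an SFU (or a selective mixer).
function selectLastN(incomingStreams, n = 3) {
  return incomingStreams
    .filter((s) => !s.muted)                      // ignore muted participants
    .sort((a, b) => b.audioLevel - a.audioLevel)  // loudest first
    .slice(0, n);                                 // keep only the N loudest
}

// Everything outside the selection is simply not forwarded (or replaced with DTX),
// saving downlink bandwidth and decoding CPU on every participant.
```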
Noise suppression: RNNoise and other machine learning algorithmsNoise suppression is all the rage these days.
RNNoise is a veteran among the ML-based noise suppression algorithms that is quite popular these days.
Janus, for example, has added it to its AudioBridge, implementing optional RNNoise logic to handle channel-based noise suppression in its MCU mixer for each incoming stream.
Google added this in its Google Meet cloud – its SFU implementation passes the audio to dedicated servers that handle the noise suppression – likely by decoding, suppressing noise and re-encoding the audio.
Many vendors today are introducing proprietary noise suppression to their solutions on the client side. These include Krisp, Dolby, Daily, Jitsi, Twilio and Agora – some via partnerships and others via self development.
Mixing keeps the headaches away from the browserWhy use an MCU for mixing your audio call? Because it takes all the implementation headaches and details away from the browser.
To understand some of what it entails on the server though, I’d refer you again to read Lorenzo’s post.
The great thing about this is that for the most part, adding more users means throwing more cloud hardware on the problem to solve it. At least up to a degree this can work well without thinking of scaling out, decentralization and other big words.
It is also how this was conducted for many years now.
Here are the tools I’d aim to use for an audio MCU:
Audio level – Decoding fewer streams gets you higher performance density for the server. Use this together with Last-N logic
DTX – Both when decoding and while encoding
PLC – On each incoming audio stream separately
RTX & NACK – Too early to do this today
FEC and RED – Today, for an MCU, this would be rare to see as a supported feature. Consider it on outgoing audio streams, as well as enabling it for incoming streams from devices
Last-N – Last-3 is a good default unless you have a specific user experience in mind (see the examples below)
Noise suppression – On incoming channels that passed Last-N filtering, to clean them up before mixing the incoming streams together

A thing to note with an audio MCU is that it needs to generate quite a few different outgoing streams. For 10 participants with 4 speakers (at a Last-4 configuration), it would be something like this:
We have 5 separate mixers at play here: one for each of the 4 active speakers (each hears the other speakers but not himself), and one shared mix of all 4 speakers for the passive participants.
Why do we use an SFU for audio conferences? Because we use it for video already… or because we believe this is the modern way of doing things these days.
When it comes to routing audio, the thing to remember is that we have a delicate balance between the SFU and the participants, each playing a part here to get a better experience at the end of the day.
Here are the tools I’d use for an audio SFU:
- Audio level – we must have this implemented and enabled, especially since we really really really want to be able to conduct Last-N logic and not send each user all audio channels from all other participants
- DTX – we can use this to detect silence here as well (and remove it from the Last-N logic). On the sending side, the SFU can decide to DTX the channels in Last-N that are silent or at a low volume to save a bit of extra bandwidth (a minor optimization)
- PLC – not needed. We route the audio packets and let the participants fix any losses that take place
- RTX & NACK – too early to do this today
- FEC and RED – this can be added on the receiver and sender side in the SFU to improve audio quality. Adding logic to dynamically decide when and how much redundancy to use based on network conditions is also an advantage here
- Last-N – Last-3 is a good default. Probably best to keep this at most at Last-5, since the decision here means more CPU use on the participants’ side
- Noise suppression – not needed. This can be done on the participants’ side

In many ways, an audio SFU is simpler to implement than an audio MCU, but tweaking it just right to gain all the benefits and optimizations from the client implementation is the tricky part.
Where the rubber hits the road – let’s talk use cases
As with everything else I deal with, which approach to use depends on the circumstances. One of the main deciding criteria in this case is going to be the use case you are dealing with and the scenario you are solving this for.
Here are a few that came to mind.
Gateway to the old worldThe first one is borderline “obvious”.
Before WebRTC, no one really did an audio conference using an SFU architecture. And if they did, it was unique, proprietary and special. The world revolved and still revolves around MCU and mixing audio bridges.
If your service needs to connect to legacy telephony services, existing deployments of VoIP services running over SIP (or god forbid H.323), connect to a large XMPP network – whatever it may be – that “other” world is going to be running as an MCU. Each device is likely capable of handling only one incoming audio stream.
So connecting a few users from your service (no matter if you are using an SFU or an MCU) means you will need to mix their audio when connecting them to the legacy service.
Video meetings with mixed audio
There are services that decide to use an SFU to route video streams and an MCU for the audio streams.
Sometimes, it is because the main service started as an audio service (so an audio bridge was/is at the heart of the service already) and video was bolted on the platform. Sometimes it is because gatewaying to the old world is central to the service and its mindset.
Other times, it is due to an effort to reduce the number of audio streams being sent around, or to reduce the technical requirements of audio only participants.
Whatever the reason, this is something you might bump into.
The big downside of such an approach is the loss of lip synchronization. There is no practical way you can synchronize a single audio stream that represents mixed content of multiple video streams. In fact, no lip synchronization with any of the video streams takes place…
Usually, the excuse I hear is that the latency difference isn’t noticeable and no one complained. Which begs the question – why do we bother with lip synchronization mechanisms at all then? (we do because it does matter and it is noticeable – especially when the network is slightly bumpier than usual)
Experience the crowd
Think of a soccer game. 50,000 people in a stadium. Roaring when there’s a goal or a miss.
With Last-3 audio streams mixed, you wouldn’t be hearing anything interesting when this takes place “remotely” for the viewers.
The same applies to a virtual online concert.
Part of the experience you are trying to convey is the crowds and the noises and voices they generate.
If we’re all busy reducing noise levels, suppressing it, picking and choosing the 2-3 voices in the crowd to mix, then we just degrade the experience.
Crowds matter in some scenarios. And preserving their experience properly cannot be done by routing audio streams around. Especially not when we’re starting to talk about hundreds of active participants or more.
This case necessitates the use of MCU audio bridging. And likely a distributed approach the moment the numbers of users climb higher.
Metaverse and spatial audioThe metaverse is coming. Or will be. Maybe. Now that Apple Vision Pro is upon us. But even before that, we’ve seen some metaverse use cases.
One thing that comes to mind here is the immersion part of it, which leads to spatial audio. The intent of hearing multiple sounds coming from different directions – based on where the speaker is.
This means several things:
Do you do that on the client side by way of an SFU implementation, or would it be preferable to do this in an MCU implementation?
And what about trying to run concerts in the metaverse? How do you give the notion of the crowds on the audio side?
These are questions that definitely don’t have a single answer.
In all likelihood, in some metaverse cases, the SFU model will be the best architectural approach while in others an MCU would work better.
Recording it allNot exactly a use case in its own right, but rather a feature that is needed a lot.
When we need to record a session, how do we go about doing that?
Today, at least 99% of the time, that would be done by mixing all audio and video sources and creating a single stream that can be played as a “regular” mp4 file (or similar).
Recording as a single stream means using an MCU-like solution. Sometimes by implementing it in a headless browser (as if this is a silent participant in the session) and other times by way of dedicated media servers. The result is similar – mixing the multiple incoming streams into a single outgoing one that goes directly to storage.
The downside of this, besides needing to spend energy on mixing something that people might never watch (which is a decision point for which architecture to pick, for example), is that you get to view and hear only a single viewpoint of a single user – the mixed recording is already “opinionated” based on the viewpoint it took.
We can theoretically “record” the streams separately and then play them back separately, but that’s not that simple to achieve, and for the most part, it isn’t commonplace.
A kind of compromise we see today with professional recording and podcast services is to record both mixed and separate audio streams. This allows post production to take either, based on the mixing needs, but done manually.
Which will it be? MCU or SFU for your next audio meeting?
We started with this, and we will end with this.
It depends.
You need to understand your requirements and from there see if the solution you need will be based on an MCU, an SFU, or both. And if you need help figuring that out, that’s what my WebRTC courses are for – check them out.
The post WebRTC conferences – to mix or to route audio appeared first on BlogGeek.me.
webrtcHacks celebrates our 10th birthday today 🎂. To commemorate this day, I’ll cover 2 topics here: Our new merch store Some stats and trends looking back on 10 years of posts We have the Merch In the early days of webrtcHacks, co-founder Reid Stidolph ordered a bunch of stickers which proved to be extremely popular. […]
The post 10 Years of webrtcHacks – merch and stats appeared first on webrtcHacks.
Explore the future of Real-Time Communications with WebrtcHacks as we delve into the use of WebCodecs and WebTransport as alternatives to WebRTC's RTCPeerConnection. This comprehensive blog post features interviews with industry experts, a review of potential WebCodecs+WebTransport architecture, and a discussion on real-time media processing challenges. We also examine performance measurements, hardware encoder issues, and the practicality of these new technologies.
The post WebCodecs, WebTransport, and the Future of WebRTC appeared first on webrtcHacks.
A new Higher-level WebRTC protocols course and discounts, available for a limited period of time.
Over a year ago, Philipp Hancke came to me with the idea of creating a new set of courses. Ones that will dig deeper into the heart of the protocols used in WebRTC. This being a huge undertaking, we decided to split it into several courses, and focus on the first one – Low-level WebRTC protocols.
We received positive feedback about it, so we ended up working on our second course in this series – Higher-level WebRTC protocols.
Why the need for additional WebRTC courses?
There is always something more to learn.
The initial courses at WebRTC Course were focused on giving an understanding of the different components of WebRTC itself and on getting developers to be able to design and then implement their application.
What was missing in all that was a closer look at the protocols themselves – looking at what goes on in the network, and being able to understand what goes over the wire. Which is why we started off with the protocols courses.
Where the Low-level WebRTC protocols course looks directly at what goes on the network with WebRTC, our newer Higher-level WebRTC protocols course takes it up one level:
This time, we’re looking at the protocols that make use of RTP and RTCP to make the job of real time communications manageable.
If you don’t know exactly what header extensions are, and how they work (and why), or the types of bandwidth estimation algorithms that WebRTC uses – and again – how and why – then this course is for you.
If you know RTP and RTCP really well, because you’ve worked in the video conferencing industry, or have done SIP for years – then this course is definitely for you.
Just understanding the types of RTP header extensions that WebRTC ends up using, many of them proprietary, is going to be quite a surprise for you.
Our WebRTC Protocols courses
Got a use case where you need to render remote machines using WebRTC? These require sitting at the cutting edge of WebRTC, or more accurately, at a slightly skewed angle versus what the general population does with WebRTC (including Google).
Taking upon yourself such a use case means you’ll need to rely more heavily on your own expertise and understanding of WebRTC.
There are now 2 available protocols courses for you:
And there are 2 different ways to purchase them:
You should probably hurry though…
Check out my WebRTC courses
The post New: Higher-Level WebRTC Protocols course appeared first on BlogGeek.me.
WebRTC is an important technology for cloud gaming and virtual desktop type use cases. Here are the reasons and the challenges associated with it.
Google launched and shut down Stadia. A cloud gaming platform. It used WebRTC (yay), but it didn’t quite fit into Google’s future it seems.
That said, it does shed light on a use case that I’ve been “neglecting” in my writing here, though it was and is definitely top of mind in discussions with vendors and developers.
What I want to put in writing this time is cloud gaming as a concept, and then alongside it, all virtual desktops and cloud rendering use cases.
Let’s dig in
Google Stadia started life as Project Stream inside Google.
Technically, it made perfect sense. But at least in hindsight, the business plan wasn’t really there. Google is far removed from gaming, game developers and gamers.
On the technical side, the intent was to run high end games on cloud machines that would render the game and then have someone play the game “remotely”. The user gets a live video rendering of the game and sends back console signals. This meant games could be as complex as they need be and get their compute power from cloud servers, while keeping the user’s device at the same spec no matter the game.
Source: Google
I’ve added the WebRTC text on the diagram from Google – WebRTC was called upon so that the player could use a modern browser to play the game. No installation needed. This can work nicely even on iOS devices, where Apple is adamant about their part of the revenue sharing on anything that goes through the app store.
Stadia wanted to solve quite a few technological challenges:
And likely quite a few other challenges as well (scaling this whole thing and figuring out how to obtain and keep so many GPUs for example).
Technically, Stadia was a success. Businesswise… well… it shut down a little over 3 years after its launch – so not so much.
What Stadia did though, was show that this is most definitely possible.
WebRTC, Cloud gaming and the challenges of real timeTo get cloud gaming right, Google had to do a few things with WebRTC. Things they haven’t really needed too much when the main thing for WebRTC at Google was Google Meet. These were lowering the latency, dealing with a larger color space and aiming for 4K resolution at 60 fps. What they got virtually for “free” with WebRTC was its data channel – the means to send game controller signals quickly from the player to the gaming machine in the cloud.
Let’s see what it meant to add the other three things:
4K resolution at 60 fps
Google aimed for high end games, which meant higher resolutions and frame rates.
WebRTC is/was great for video conferencing resolutions. VGA, 720p and even 1080p. 4K was another jump up that scale. It requires more CPU and more bandwidth.
Luckily, for cloud gaming, the browser only needs to decode the video and not encode it. Which meant the real issue, besides making sure the browser can actually decode 4K resolutions efficiently, was to conduct efficient bandwidth estimation.
As an algorithm, bandwidth estimation is finely tuned and optimized for given scenarios. 4K cloud gaming being a new scenario meant that the bitrates needed weren’t 2mbps or even 4mbps, but rather in the range of 10-35mbps.
The built-in bandwidth estimator in WebRTC can’t handle this… but the one Google built for the Stadia servers can. On the technical side, this was made possible by Google relying on sender-side bandwidth estimation techniques using transport-cc.
Lower latency: playout delay
Remember this diagram?
It can be found in my article titled With media delivery, you can optimize for quality or latency. Not both.
WebRTC is designed and built for lower latency, but in the sub-second latency, how would you sort the latency requirements of these 3 activities?
WebRTC’s main focus over the years has been online meetings. This means having 100 milliseconds or 200 milliseconds delay would be just fine.
With an online game? 100 milliseconds is the difference between winning and losing.
So Google tried to reduce latency even further with WebRTC by adding a concept of Playout Delay. The intent here is to let WebRTC know that the application and use case prefers playing out the media earlier and sacrificing even further in quality, versus waiting a bit for the benefit of maybe getting better quality.
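On the receiving application side, Chromium has exposed knobs in this spirit – the older non-standard playoutDelayHint and the newer jitterBufferTarget on RTCRtpReceiver. Here’s a hedged sketch of asking the jitter buffer for the lowest possible playout delay; treat the property names and their availability as something to verify against the browser versions you target:

```typescript
// Sketch: hint the receiver's jitter buffer to favor low latency over smoothness.
// Both properties are applied on a best-effort basis and may not exist in every browser.
function preferLowLatency(pc: RTCPeerConnection): void {
  for (const receiver of pc.getReceivers()) {
    const r = receiver as RTCRtpReceiver & {
      jitterBufferTarget?: number | null; // milliseconds, the newer standardized knob
      playoutDelayHint?: number | null;   // seconds, the older non-standard Chrome knob
    };
    if ("jitterBufferTarget" in r) {
      r.jitterBufferTarget = 0; // ask for the smallest playout delay the browser allows
    } else if ("playoutDelayHint" in r) {
      r.playoutDelayHint = 0;
    }
  }
}
```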
Larger color space
Video conferencing and talking heads don’t need much. If you recall, with video compression what we’re after is to lose as much as we can out of the original video signal and then compress. The idea here is that whatever the eye won’t notice – we can make do without.
Apparently, for talking heads we can lose more of the “color” and still be happy versus doing something similar for an online game.
To make a point, if you’ve watched Game of Thrones at home, then you may remember the botch in the last season, where some of the episodes ended up being too dark for television. That was due to compression done by service providers…
So far this is my favorite screenshot from #BattleForWinterfell #GameofThrones pic.twitter.com/6uI45SjPG7
— Lady Emily (@GreatCheshire) April 29, 2019
While different from the color space issue here, it goes to show that how you treat color in video encoding matters. And it differs from one scenario to another.
When it comes to games, a different treatment of color space was needed. Specifically, moving from SDR to HDR, adding an RTP header extension in the process to express that additional information.
–
Oh, and if you want to learn more about these changes (especially resolution and color space), then make sure to watch this Kranky Geek session by YouTube about the changes they had to make to support Stadia:
What’s in cloud gaming anyway?Here’s the thing. Google Stadia is one end of the spectrum in gaming and in cloud gaming.
Throughout the years, I’ve seen quite a few other reasons and market targets for cloud gaming.
Types of cloud games
Here are the ones that come off the top of my head:
Why not even play these games with others remotely?
My son recently had a sit down with 4 other friends, all playing a TMNT game together on Xbox. It was great having them all over, but you could do it remotely as well. If the game doesn’t offer remote play, pushing it to the cloud can get you that feature simply because all users immediately become remote players.
At this stage, you can even add a voice conference or a video call to the game between the players. Just to give them the level of collaboration they can get out of playing the likes of Fortnite. Granted, this requires more than just game rendering in the cloud, but it is possible and I do see it happen with some of the vendors in this space.
Beyond cloud gaming – virtual desktop, remote desktop and cloud rendering
Lower latencies. Bigger color space. Higher resolutions. Rendering in the cloud and consuming remotely.
All these aren’t specific to cloud gaming. They can easily be extended to virtual desktop and remote desktop scenarios.
You have a machine in the cloud – big or small or even a cluster. That “machine” handles computations and ends up rendering the result to a virtual display. You then grab that display and send it to a remote user.
One use case can just be a remote desktop a-la VNC. Here we’re actually trying to get connected from one machine to another, usually in a private and secure peer-to-peer fashion, which is different from what I am aiming for here.
Another, less talked about, is doing things like Photoshop operations in the cloud. Poor sad people like me, who don’t have the latest Mac Pro with the shiny M2 Ultra chip, might just want to “rent” the compute power online for image or video editing jobs.
I might want to open a rendered 3D view of a sports car I’d like to buy, directly from the browser, having the ability to move my view around the car.
Or it might just be a simple VDI scenario, where the company (usually a large financial institute, but not only) would like the employees to work on Chromebook machines but have nothing installed or stored in them – all consumed by accessing the actual machine and data in their own corporate data center or secure cloud environment.
A good friend of mine asked me what PC to buy for himself. He needed it for work. He is a lawyer. My answer was the lowest end machine you can find would do the job. That saved him quite a lot of money I am guessing, and he wouldn’t even notice the difference for what he needs it for.
But what if he needs a bit more juice and power every once in a while? Can renting that in the cloud make a difference?
What about the need to use specialized software that is hard to install and configure? Or that requires a lot of collaboration on large amounts of data that need to be shared across the collaborators?
Taking the notion and capabilities of cloud gaming and applying them to non-gaming use cases can help us with multiple other requirements:
Do these have to happen with WebRTC? No
Can they happen with WebRTC? Yes
Would changing from proprietary VDI environments to open standard WebRTC in browsers improve things? Probably
Why use WebRTC in cloud gaming
Why even use WebRTC for cloud gaming or more general cloud rendering then?
With cloud gaming, we’re fine doing it from inside a dedicated app. So WebRTC isn’t really necessary. Or is it?
In one of our recent WebRTC Insights issues we’ve highlighted that Amazon Luna is dropping the dedicated apps in favor of the web (=WebRTC). From that article:
“We saw customers were spending significantly more time playing games on Luna using their web browsers than on native PC and Mac apps. When we see customers love something, we double down. We optimized the web browser experience with the full features and capabilities offered in Luna’s native desktop apps so customers now have the same exact Luna experience when using Luna on their web browsers.”
Browsers are still a popular enough alternative for many users. Are these your users too?
If you need or want web browser access for a cloud gaming / cloud rendering application, then WebRTC is the way to go. It is a slightly different opinion than the one I had with the future of live streaming, where I stated the opposite:
“The reason WebRTC is used at the moment is because it was the only game in town. Soon that will change with the adoption of solutions based on WebTransport+WebCodecs+WebAssembly where an alternative to WebRTC for live streaming in browsers will introduce itself.”
Why the difference? It is all about the latency we are willing to accommodate:
Your mileage may vary when it comes to the specific latency you’re aiming for, but in general – live streaming can live with slightly higher latency than our online meetings. So something other than WebRTC can cater for that better – we can fine tune and tweak it more.
Cloud gaming needs even lower latency than WebRTC typically targets. And WebRTC can accommodate that. Using something else that is as yet unproven (and currently suffers a bit from performance and latency issues) is the wrong approach. At least today.
Enter our WebRTC Protocols courses
Got a use case where you need to render remote machines using WebRTC? These require sitting at the cutting edge of WebRTC, or more accurately, at a slightly skewed angle versus what the general population does with WebRTC (including Google).
Taking upon yourself such a use case means you’ll need to rely more heavily on your own expertise and understanding of WebRTC.
Over a year ago I launched with Philipp Hancke the Low-level WebRTC Protocols course. We’re now recording our next course – Higher-level WebRTC Protocols.
If you are interested in learning more about this, be sure to join our waiting list for once we launch the course
Join the course waiting list
Oh, and I’d like to thank Midjourney for releasing version 5.2 – awesome images
The post Cloud gaming, virtual desktops and WebRTC appeared first on BlogGeek.me.
The Apple Vision Pro is a new VR/AR headset. Here are my thoughts on whether and how it will affect the metaverse and WebRTC.
There were quite a few interesting announcements and advances made in recent months that got me thinking about this whole area of the metaverse, augmented reality and virtual reality. All of which culminated with Apple’s unveiling last week of the Apple Vision Pro. For me, the prism from which I analyze things is the one of communication technologies, and predominantly WebRTC.
A quick disclaimer: I have no clue about what the future holds here or how it affects WebRTC. The whole purpose of this article is for me to try and sort my own thoughts by putting them “down on paper”.
Let’s get started then
Apple just announced its Vision Pro VR/AR headset. If you’re reading this blog, then you know about this already, so there isn’t much to say about it.
For me? This is the first time that I had this nagging feeling for a few seconds that I just might want to go and purchase an Apple product.
Most articles I’ve read were raving about this – especially the ones who got a few minutes to play with it at Apple’s headquarters.
AR/VR headsets thus far have been taking one of the two approaches:
Apple took the middle ground – their headset is a VR headset since it replaces what you see with two high resolution displays – one for each eye. But it acts as an AR headset – because it uses external cameras on the headset to project the world on these displays.
The end result? Expensive, but probably with better utility than any other alternative, especially once you couple it with Apple’s software.
Video calling, FaceTime, televisions and ARAlmost at the sidelines of all the talks and discussions around Apple Vision Pro and the new Mac machines, there have been a few announcements around things that interest me the most – video calling.
FaceTime and Apple TV
One of the challenges of video calling has been to put it on the television. This used to be called a lean back experience for video calling, in a world predominantly focused on lean forward when it comes to video calling. I remember working on such proofs of concept and product demos with customers ~15 years ago or more.
These never caught on.
The main reason was somewhere between the cost of the hardware, maintaining privacy with a living room camera, and microphone positioning/noise.
By tethering the iPhone to the television, the cost of hardware along with maintaining privacy gets solved. The microphones are now a lot better than they used to be – mostly due to better software.
Apple, being Apple, can offer a unique experience because they own and control the hardware – both of the phone and the set-top box. Something that is hard for other vendors to pull off.
There’s a nice concept video on the Apple press release for this, which reminded me of this Facebook (now Meta) Portal presentation from Kranky Geek:
Can Android devices pull the same thing, connected to Chromecast enabled devices maybe? Or is that too much to ask?
Do television and/or set-top box vendors put an effort into a similar solution? Should they be worried in any way?
Where could/should WebRTC play a role in such solutions, if at all?
FaceTime and Apple Vision Pro
How do you manage video calls with a clunky AR/VR headset plastered on your face?
First off, there’s no external camera “watching you”, unless you add one. And then there’s the nagging thing of… well… the headset:
Apple has this “figured out” by way of generating a realistic avatar of you in a meeting. What is interesting to note here is that in the Apple Vision Pro announcement video itself, Apple made three important omissions:
What do the people at the meeting see of her? Do they see her looking at them, or the side of her head? Do they see the context of her real-life surroundings or a virtual background?
I couldn’t find any person who played with the Apple Vision Pro headset and reported using FaceTime, so I am assuming this one is still a work in progress. It will be really interesting to see what they come up with once this is released to market, and what real life use looks and feels like.
Lifelike video meetings: Just like being there
Then there’s telepresence. This amorphous thing which for me translates into: “expensive video conferencing meeting rooms no one can purchase unless they are too rich”.
Or if I am a wee bit less sarcastic – it is where we strive to get to with video conferencing – the ultimate experience of “just like being there” done remotely, if we had the best technology money can buy today.
Google Project Starline is the current poster child of this telepresence technology.
The current iteration of telepresence strives to provide a lifelike 3D experience (with eye contact, obviously). To do so while keeping hardware costs down and fitting more environments and hardware devices, it will rely on AI – like everything else these days.
The result as I understand it?
Now look at what FaceTime on an Apple Vision Pro really means:
Generate a hyper realistic avatar representation of the person – this sounds really similar to removing the background and using cameras to generate a 3D representation of the speaker (just with a bit more work and a bit less accuracy).
Both Vision Pro and Starline strive for lifelike experiences between remote people. Starline goes for a meeting room experience, capturing the essence of the real world. Vision Pro goes after a mix between augmented and virtual reality here – can’t really say this is augmented, but can’t say this is virtual either.
A telepresence system may end up selling a million units a year (a gross exaggeration on my part as to the size of the market, if you take the most optimistic outcome), whereas a headset will end up selling in the tens of millions or more once it is successful (and this is probably a realistic estimate).
What both of these ends of the same continuum of a video meeting experience do is they add the notion of 3D, which in video is referred to as volumetric video (we need to use big fancy words to show off our smarts).
And yes, that does lead me to the next topic I’d like to cover – volumetric video encoding.
Volumetric video coding
We have the metaverse now. Virtual reality. Augmented reality. The works.
How do we communicate on top of it? What does a video look like now?
The obvious answer today would be “it’s a 3D video”. And now we need to be able to compress it and send it over the network – just like any other 2D video.
The Alliance for Open Media, who has been behind the publication and promotion of the AV1 video codec, just published a call for proposals related to volumetric video compression. From the proposal, I want to focus on the following tidbits:
This being promoted now, on the same week Apple Vision Pro comes out might be a coincidence. Or it might not.
The founding members include all the relevant vendors interested in AR/VR that you’d assume:
The rest also have vested interest in the metaverse, so this all boils down to this:
AR/VR requires new video coding techniques to enable better and more efficient communications in 3D (among other things)
Apple Vision Pro isn’t alone in this, but likely the one taking the first bold steps
The big question for me is this – will Apple go off with its own volumetric video codecs here, touting how open they are (think FaceTime open) or will they embrace the Alliance of Open Media work that they themselves are co-chairing?
And if they do go for the open standard here, will they also make it available for other developers to use? Me thinking… WebRTC
Is the metaverse web based?
Before tackling the notion of bringing WebRTC into the metaverse, there’s one more prerequisite – the web itself.
Would we be accessing the metaverse via a web browser, or a similar construct?
For an open metaverse, this would be something we’d like to have – the ability to have our own identity(ies) in the metaverse go with us wherever we go – between Facebook, to Roblox, through Fortnite or whatever other “domain” we go to.
Last week also got us this sideline announcement from Matrix: Introducing Third Room TP2: The Creator Update
Matrix, an open source and open standard for decentralized communications, has been working on Third Room, which for me is a kind of metaverse infrastructure for the web. Like everything related to the metaverse, this is mostly a work in progress.
I’d love the metaverse itself to be web based and open, but it seems most vendors would rather have it limited to their own closed gardens (Apple and Meta certainly would love it that way. So would many others). I definitely see how open standards might end up being used in the metaverse (like the work the Alliance of Open Media is doing), but the vendors who will adopt these open standards will end up deciding how open to make their implementations – and will the web be the place to do it all or not.
Where would one fit WebRTC in the metaverse, AR and VR?
Maybe. Maybe not.
The unbundling of WebRTC makes it both an option while taking us farther away from having WebRTC as part of the future metaverse.
Not having the web means no real reliance on WebRTC.
Having the tooling in WebRTC to assist developers in the metaverse means there’s incentive to use and adopt it even without the web browser angle of it.
WebRTC will need at some point to deal with some new technical requirements to properly support metaverse use cases:
We’re still far away from that target, and there will be a lot of other technologies that will need to be crammed in alongside WebRTC itself to make this whole thing happen.
Apple’s new Vision Pro might accelerate that trajectory of WebRTC – or it might just do the opposite – solidify the world of the metaverse inside native apps.
—
I want to finish this off with this short piece by Jason Fried: The visions of the future
It looks at AR/VR and generative AI, and how they are two exact opposites in many ways.
Recently I also covered ChatGPT and WebRTC – you might want to take a look at that while at it.
The post Apple Vision, VR/AR, the metaverse and what it means to the web and WebRTC appeared first on BlogGeek.me.
Here at webrtcHacks we are always exploring what’s next in the world of Real Time Communications. One area we have touched on a few times is the use of WebCodecs with WebTransport as an alternative to WebRTC’s RTCPeerConnection. There have been several recent experiments by Bernard Aboba – WebRTC & WebTransport Co-Chair and webrtcHacks regular, […]
The post Livestream this Friday: WebCodecs, WebTransport, and the Future of WebRTC appeared first on webrtcHacks.
Is WebRTC really free? It is open source and widely used due to it. But it isn’t free when it comes to running and hosting your own WebRTC applications.
If you are new to WebRTC, then start here – What is WebRTC?
Time to answer this nagging question:
Is WebRTC really free?
One of the reasons that WebRTC is the most widely used developer technology for real time communications in the world is that it is open source. It helps a lot that it comes embedded and available in all modern browsers. That means that anyone can use WebRTC for any purpose they see fit, without paying any upfront licensing fee or later on royalties. This has enabled thousands of companies to develop and launch their own applications.
But does that mean every web application built with WebRTC is free? No. WebRTC may well be free, but whatever is bolted on top of it might not be. And then there are still costs involved with getting a web application online and dealing with traffic costs.
For that reason, in this article, I’ll be touching on why WebRTC really is free, and what you have to factor in for it if you want to get your own WebRTC application.
Since I am sure you didn’t really go read that other article – I’ll suggest it here again: What is WebRTC?
The TL;DR version of it?
The WebRTC software library is open sourced under a permissive open source license. That means its source code is available to everyone AND that individuals and companies can modify and use it anywhere they wish without needing to contribute back their changes. It makes it easier for commercial software to be developed with it (even when no changes or improvements are made to the base WebRTC library – just because of how corporate lawyers are).
You see? WebRTC really is free.
Google “owns” and maintains the main WebRTC library implementation. Everyone benefits from this. That said, they aren’t doing this only out of the goodness of their heart – they have their own uses for WebRTC that they focus on.
However, there are costs involved with running a WebRTC application
While you don’t have to pay anything for WebRTC itself, there’s the application you develop, publish and then maintain. There are costs that come into play here – and considerable ones. These costs can vary depending on your requirements.
I’d like to split the costs here into 3 components:
The first thing you can put as a cost is to build the WebRTC application itself.
Here, as in all other areas, there’s more demand than supply when it comes to skilled WebRTC engineers. So much so that I had to write an article about hiring WebRTC developers – and I still send this link multiple times a month when asked about this.
Here too, you should split the cost into two parts:
Since everything done in WebRTC requires skilled engineers (that are scarce when it comes to WebRTC expertise), you can safely assume it is going to be a wee bit more expensive than you estimate it to be.
2. How expensive it is to optimize a WebRTC implementation
I know what you’re going to say. Your WebRTC application is going to be awesome. Glorious. Superb. It is going to be so good that it will wipe the floor with the existing solutions such as Zoom, Google Meet and Microsoft Teams.
That kind of a mentality is healthy in an entrepreneur, but a dose of reality is necessary here:
This brings me to the need to optimize what you’re doing on an ongoing basis.
Ever since the pandemic, we’ve seen a growing effort in the leading vendors in this space to improve and optimize quality. This manifests itself in the research they publish as well as features they bring to the market. Here are a few examples:
You should plan for ongoing optimization of your own as well. Your customers are going to expect you to keep up with the industry. The notion of “good enough” works well here, but the bar of what is “good enough” is rising all the time.
Such optimizations are also needed not only to improve quality, but also to reduce costs.
Factor these costs in…
3. Hosting and maintenance costs of a WebRTC application
I had a meeting the other day. A founder of a startup who had to use WebRTC because customers needed something live and interactive. That component wasn’t at the core of his application, but not having it meant lost deals and revenue. It was a mandatory capability needed for a specific feature.
He complained about WebRTC being expensive to operate. Mainly because of bandwidth costs.
We can split WebRTC maintenance costs here into two categories: cloud costs and keeping-the-lights-on costs.
Cloud costs
That startup founder was focused on cloud costs.
When we look at the infrastructure costs of web applications, there’s the usual CPU, memory, storage and network. We might be paying these directly, or indirectly via other managed and serverless services.
With WebRTC, the network component is the biggest hurt. Especially for video applications. You can reduce these costs by going to 2nd tier IaaS vendors or by hosting in “no-name” local data centers, but if you are like most vendors, you’re likely to end up on Amazon, Microsoft or Google cloud. And there, bandwidth costs for outgoing traffic are high.
WebRTC is peer to peer, but:
And the more successful you become – the more bandwidth you’ll consume – and the higher your cloud costs are going to be.
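To get a feel for why bandwidth dominates, here’s a back-of-the-envelope sketch. The $0.09/GB egress price and the usage numbers are assumptions for illustration only – plug in your own provider’s pricing and traffic profile:

```typescript
// Back-of-the-envelope estimate of SFU egress cost for group video calls.
// All inputs are assumptions for illustration only.
function monthlyEgressCostUSD(opts: {
  participantsPerCall: number;
  bitrateMbpsPerDownlink: number; // video + audio each participant receives
  callMinutesPerMonth: number;    // total session minutes across the service
  pricePerGB: number;             // cloud egress price, e.g. ~$0.09/GB
}): number {
  const { participantsPerCall, bitrateMbpsPerDownlink, callMinutesPerMonth, pricePerGB } = opts;
  const mbpsTotal = participantsPerCall * bitrateMbpsPerDownlink; // egress during one call minute
  const gbPerMinute = (mbpsTotal / 8) * 60 / 1000; // Mbps -> MB/s -> GB per minute
  return gbPerMinute * callMinutesPerMonth * pricePerGB;
}

// Example: 4-way calls, 2.5Mbps per downlink, 100,000 call minutes a month
console.log(monthlyEgressCostUSD({
  participantsPerCall: 4,
  bitrateMbpsPerDownlink: 2.5,
  callMinutesPerMonth: 100_000,
  pricePerGB: 0.09,
})); // ≈ $675
```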
You will need to factor this in when developing your application, especially deciding when to start optimizing for costs and bandwidth use.
Keeping the lights on costs
Then there are the “keeping the lights on” costs.
WebRTC changes all the time. Things get deprecated and removed. Features change behavior over time. New features are added. You continually need to test that your application does not break in the upcoming Chrome release. Who is going to take care of all that in your WebRTC application?
You will also need to understand the way your WebRTC application is used. Are users happy? Are there areas you need to invest in with further optimization? Observability (=monitoring) is key here.
Keeping the lights on has its own set of costs associated with it.
Build vs buy a WebRTC infrastructure
Buying your WebRTC infrastructure by using managed services like CPaaS vendors is expensive. But then again, building your own (along with optimizing and maintaining it) is also expensive.
Roughly speaking, this is the kind of decision table you’ll see in front of you:

Build vs Buy – the main pro of building is that the result is customized to your specific need.

There’s also a middleground, where you can source/buy certain pieces and build others. Here are a few examples/suggestions:
You can also start with a CPaaS vendor and once you scale and grow, invest the time and money needed to build your own infrastructure – once you’ve proven your application and got to product-market-fit.
So, how free is WebRTC, really?
Part of WebRTC’s claim to fame is its nature as an open source and thus free software for building interactive web applications. While the technology itself is indeed free of charge and offers numerous freedoms, there are still costs associated with running a WebRTC application.
When we had to launch our own video conferencing service some 25 years ago, we had to invest several millions of dollars, along with an engineering team, over a couple of years – only to get to the implementation of a media engine.
WebRTC gives that to you for “free”. And it is also kind enough to be pre-integrated in all modern browsers.
What Google did with WebRTC was to reduce the barrier of entry to real time communication drastically.
Creating a WebRTC application isn’t free – not really. But it does come with a lot of alternatives that bring with them freedom and flexibility.
The post Is WebRTC really free? The costs of running a WebRTC application appeared first on BlogGeek.me.
How WebRTC media resilience works – what FEC, RED, PLC, RTX are and why they are needed to improve media quality in real-time communications.
Networks are finicky in nature, and media codecs even more so.
With networks, not everything sent is received on the other end, which means we have one more thing to deal with and care about when it comes to handling WebRTC media. Luckily for us, there are quite a few built-in tools that are available to us. But which one should we use at each point and what benefits do they bring?
This is what I’ll be focusing on in this article.
Communication networks are lossy in nature. This means that if you send a packet through a network – there’s no guarantee of that packet reaching the other side. There’s also no guarantee that packets arrive in the order you’ve sent them or in a timely fashion, but that’s for another article.
This is why almost everything you do over the internet has this nice retransmission mechanism tucked away somewhere deep inside as an assumption. That retransmission mechanism is part of how TCP works – and for that matter, almost every other transport protocol implemented inside browsers.
The assumption here is that if something is lost, you simply send it again and you’re done. It may take a wee bit longer for the receiver to receive it, but it will get there. And if it doesn’t, we can simply announce that connection as severed and closed.
We call and measure that “something is lost” aspect of networks as packet loss.
Stripping away that automatic assumption that networks are reliable and everything you send over them is received on the other side is the first important step in understanding WebRTC but also in understanding real-time transport protocols and their underlying concepts.
Media codecs are lossy (and sensitive)
Media codecs are also lossy, but in a different way. When an audio codec or a video codec needs to encode (=compress) the raw input from a microphone or a camera, what it does is strip out of the data the things it deems unnecessary. These things are levels of perceived quality of the original media.
I remember many years ago, sitting at the dorms in the university and talking about albums and CDs. One of the roommates there was an audiophile. He always explained how vinyl albums have better audio quality than CDs and how MP3 just ruins audio quality. Me? I never heard the difference.
Perceived quality might be different between different people. The better the codec implementation, the more people will not notice degraded quality.
Back to codecs.
Most media codecs are lossy in nature. There are a few lossless ones, but these are rarely used for real time communications and not used in WebRTC at all. The reason we use lossy codecs is to have better compression rates:
Taking 1080p (Full HD) video at 30 frames per second will result in roughly 1.5Gbps of data. Without compressing it – it just won’t work. We’re trying to squeeze a lot of raw data over networks, and as always, we need to balance our needs with the resources available to us.
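For those who like to see the math behind that figure: 1920 × 1080 pixels × 24 bits per pixel (uncompressed RGB) × 30 frames per second ≈ 1.49 Gbps. Getting that down to the 1-2Mbps we typically send in a video call means a compression ratio on the order of 1000:1 – which is exactly where the lossy techniques come in.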
To compress more, we need:
That last one is where media codecs become really sensitive.
If every bit matters, then losing a bit matters. And if losing a bit matters, then losing a whole packet matters even more.
Since networks are bound to lose packets, we’re going to need to deal with media packets missing and our system (in the decoder or elsewhere) needing to fill that gap somehow
More on lossy codecs
More on the future of audio codecs (lossy and lossless ones)
Types of WebRTC media correction
Media packets get lost. Our media decoders – or our WebRTC system as a whole – need to deal with this fact. This is done using different media correction mechanisms. Here’s a quick illustration of the available choices in front of us:
Each such media correction technique has its advantages and challenges. Let’s review them so we can understand them better.
PLC: Packet Loss Concealment
Every WebRTC implementation needs a packet loss concealment strategy. Why? Because at some point, in some cases, you won’t have the packets you need to play NOW. And since WebRTC is all about real-time, there’s no waiting with NOW for too long.
What does packet loss concealment mean? It means that if we lost one or more packets, we need to somehow overcome that problem and continue to run to the best of our ability.
Before we dive a bit deeper, it is important to state: not losing packets is always better than needing to conceal lost packets. More on that – later.
This is done differently between audio and video:
Audio PLC
For the most part, audio packets are decoded frame-by-frame and usually also packet-by-packet. If one is lost, we can try various ways to solve that. Here are the most common approaches:
Packet loss on video streams has its own headaches and challenges.
In video, most of the frames are dependent on previous ones, creating chains of dependencies:
I-frames or keyframes (whatever they are called depending on the video codec used) break these dependency chains, and then one can use techniques like temporal scalability to reduce the dependencies for some of the frames that follow.
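As a rough illustration of what temporal scalability buys us (not tied to any specific codec), here’s a two-layer pattern in which every other frame can be lost or dropped without breaking the decode chain for the remaining frames:

```typescript
// Rough illustration of a 2-layer temporal scalability pattern:
// even frames (layer 0) only depend on previous even frames, so odd frames
// (layer 1) can be lost or dropped without breaking decoding of layer 0.
function temporalLayer(frameIndex: number): 0 | 1 {
  return frameIndex % 2 === 0 ? 0 : 1;
}

function dependsOn(frameIndex: number): number | null {
  if (frameIndex === 0) return null; // keyframe, no dependency
  // layer 0 frames depend on the previous layer 0 frame; layer 1 frames on the frame before them
  return temporalLayer(frameIndex) === 0 ? frameIndex - 2 : frameIndex - 1;
}

for (let i = 0; i < 6; i++) {
  console.log(`frame ${i}: layer ${temporalLayer(i)}, depends on ${dependsOn(i)}`);
}
```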
When you lose a packet, the question isn’t only what to do with the current video frame and how to display it, but rather what is going to happen to future frames depending on the frame with the lost packet.
In the past, the focus was on displaying every bit that got decoded, which ended up with video played back with smears as well as greens and pinks.
Check it for yourself, with our most recent WebRTC fiddle around frame loss.
Today, we mostly don’t display frames until we have a clean enough bitstream, opting to freeze the video a bit or skip video frames rather than show something that isn’t accurate enough. With the advances in machine learning, this may change in the future.
–
PLC is great, but there’s a lot to be done to get back the lost packets as opposed to trying to make do with what we have. Next, we will see the additional techniques available to us.
RTX: Retransmissions
Here’s a simple mechanism (used everywhere) to deal with packet loss – retransmission.
In whatever protocol you use, make sure to either acknowledge receiving what is sent to you or NACKing (sending a negative acknowledgement) when not receiving what you should have received. This way, the sender can retransmit whatever was lost and you will have it readily available.
This works well if there’s enough time for another round trip of data until you must play it back. Or when the data can help you out in future decoding (think the dependency across frames in video codecs). It is why retransmissions don’t always work that well in WebRTC media correction – we’re dealing with real time and low latency.
Another variation of this in video streams is asking for a new I-frame. This way, the receiver can signal the sender to “reset” the video stream and start encoding it from scratch, which essentially means a request to break the dependency between the old frames and the new ones that should be sent after the packet loss.
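As a rough sketch of the receiver side of this bookkeeping (my own simplification – libwebrtc’s real jitter buffer and NACK logic are considerably more involved, and real RTP sequence numbers wrap around at 16 bits), NACKing boils down to tracking which sequence numbers never showed up:

```typescript
// Simplified NACK bookkeeping: track the highest sequence number seen and
// report any gaps so the sender can retransmit (via RTX) the missing packets.
// Note: this sketch ignores 16-bit sequence number wraparound.
class NackTracker {
  private highestSeq: number | null = null;
  private missing = new Set<number>();

  onPacket(seq: number): void {
    if (this.highestSeq === null) {
      this.highestSeq = seq;
      return;
    }
    if (seq > this.highestSeq) {
      // every sequence number we jumped over is (for now) considered lost
      for (let s = this.highestSeq + 1; s < seq; s++) this.missing.add(s);
      this.highestSeq = seq;
    } else {
      this.missing.delete(seq); // a late or retransmitted packet arrived
    }
  }

  // sequence numbers to put in the next NACK feedback message
  pendingNacks(): number[] {
    return [...this.missing].sort((a, b) => a - b);
  }
}

const tracker = new NackTracker();
[100, 101, 104, 102].forEach(seq => tracker.onPacket(seq));
console.log(tracker.pendingNacks()); // [103]
```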
RED: REDundancy Encoding
Retransmission means we overcome packet losses after the fact. But what if we could solve things without retransmissions? We can do that by sending the same packet more than once and be done with it.
Double or triple the bitstream by flooding it with the same information to add more robustness to the whole thing.
RED is exactly that. It concatenates older audio frames into fresh packets that are being sent, effectively doubling or tripling the packet size.
If a packet gets lost, the new frame it was meant to deliver will be found in one of the following packets that should be received.
Yes, it eats up our bandwidth budget, but in a video call where we send 1Mbps of video data or more, more than doubling the audio bitrate from 40kbps to 90kbps might be a sacrifice worth making for cleaner audio.
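Conceptually (ignoring the actual RFC 2198 wire format), redundancy encoding means every outgoing packet carries the current audio frame plus copies of the previous one or two, so a single lost packet doesn’t lose any frame for good. A minimal sketch:

```typescript
// Sketch of redundancy encoding: each outgoing packet carries the current frame
// plus the previous `redundancy` frames. This ignores the real RED (RFC 2198)
// wire format and just shows why the bitrate roughly doubles or triples.
function buildRedundantPackets(frames: Uint8Array[], redundancy: number): Uint8Array[][] {
  return frames.map((_, i) => {
    const start = Math.max(0, i - redundancy);
    return frames.slice(start, i + 1); // older frames first, current frame last
  });
}

// With redundancy = 2, losing any single packet still leaves every frame
// recoverable from one of the two packets that follow it.
```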
FEC: Forward Error Correction
Redundancy encoding requires an additional 100% or more of bitrate. We can do better using other means, usually referred to as Forward Error Correction.
Mind you, redundancy encoding is just another type of forward error correction mechanism
With FEC, we are going to add more packets that can be used to restore other packets that are lost. The most common approach for FEC is by taking multiple packets, XORing them and sending the XORed result as an additional packet of data.
If one of the packets is lost, we can use the XORed packet to recreate the lost one.
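Here’s a minimal sketch of the XOR idea – not WebRTC’s actual ULPFEC/FlexFEC implementation, which also deals with headers, unequal packet sizes and protection masks – protecting a group of packets with one recovery packet and recovering a single loss:

```typescript
// Minimal XOR FEC sketch: one recovery packet protects a group of packets.
// Assumes equally sized packets; real ULPFEC/FlexFEC also handle headers and masks.
function xorPackets(packets: Uint8Array[]): Uint8Array {
  const fec = new Uint8Array(packets[0].length);
  for (const packet of packets) {
    for (let i = 0; i < fec.length; i++) fec[i] ^= packet[i];
  }
  return fec;
}

// Recover a single lost packet from the survivors plus the FEC packet:
// the XOR of everything that did arrive equals the packet that didn't.
function recoverLost(received: Uint8Array[], fec: Uint8Array): Uint8Array {
  return xorPackets([...received, fec]);
}

const group = [new Uint8Array([1, 2, 3]), new Uint8Array([4, 5, 6]), new Uint8Array([7, 8, 9])];
const fec = xorPackets(group);
const recovered = recoverLost([group[0], group[2]], fec); // group[1] was "lost"
console.log(recovered); // Uint8Array [4, 5, 6]
```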
There are other means of correction algorithms that are a wee bit more complex mathematically (google about Reed-Solomon if you’re interested), but the one used in WebRTC for this purpose is XOR.
FEC is still an expensive thing since it increases the bitrate considerably. Which is why it is used only sparingly:
PLC, RTX, FEC, RED, …
How is each one signaled over the network? When would it make sense to use it? How does WebRTC implement it in the browser and what exactly can you expect out of it?
All that is mostly arcane knowledge. Something that is passed from one generation of WebRTC developers to another it seems.
Lucky for you, Philipp Hancke and I are working on a new course – Higher-level WebRTC Protocols. In it, we cover these specific topics, as well as quite a few others, in a level of detail that isn’t found anywhere else out there.
Most of the material is already written down. We just need to prettify it a bit and record it.
If you are interested in learning more about this, be sure to join our waiting list for once we launch the course
Join the course waiting list
The post WebRTC media resilience: the role FEC, RED, PLC, RTX and other acronyms play appeared first on BlogGeek.me.
ChatGPT is changing computing and as an extension how we interact with machines. Here’s how it is going to affect WebRTC.
ChatGPT became the service with the highest growth rate of any internet application, reaching 100 million active users within the first two months of its existence. A few are using it daily. Others are experimenting with it. Many have heard about it. All of us will be affected by it in one way or another.
I’ve been trying to figure out what exactly a “ChatGPT WebRTC” duo means – or in other words – what ChatGPT means for those of us working with and on WebRTC.
Here are my thoughts so far.
Let’s start with a quick look at what ChatGPT really is (in layman terms, with a lot of hand waving, and probably more than a few mistakes along the way).
BI, AI and Generative AI
I’ll start with a few slides I cobbled up for a presentation I did for a group of friends who wanted to understand this.
ChatGPT is a product/service that makes use of machine learning. Machine learning is something that has been marketed a lot as AI – Artificial Intelligence. If you look at how this field has evolved, it would be something like the below:
We started with simple statistics – take a few numbers, sum them up, divide by their count and you get an average. You complicate that a bit with weighted average. Add a bit more statistics on top of it, collect more data points and cobble up a nice BI (Business Intelligence) system.
At some point, we started looking at deep learning:
Here, we train a model by using a lot of data points, to a point that the model can infer things about new data given to it. Things like “do you see a dog in this picture?” or “what is the text being said in this audio recording?”.
Here, a lot of 3 letter acronyms are used like HMM, ANN, CNN, RNN, GNN…
What deep learning did in the past decade or two was enable machines to describe things – be able to identify objects in images and videos, convert speech to text, etc.
It made it the ultimate classifier, improving the way we search and catalog things.
And then came a new field of solutions in the form of Generative AI. Here, machine learning is used to generate new data, as opposed to classifying existing data:
Here what we’re doing is creating a random input vector, pushing it into a generator model. The generator model creates a sample for us – something that *should* result in the type of thing we want created (say a picture of a dog). That sample that was generated is then passed to the “traditional” inference model that checks if this is indeed what we wanted to generate. If it isn’t, we iteratively try to fine tune it until we get to a result that is “real”.
This is time consuming and resource intensive – but it works rather well for many use cases (like some of the images on this site’s articles that are now generated with the help of Midjourney).
So…
The thing is, all of this that I just explained wouldn’t be interesting without ChatGPT – a service that came into our lives only recently, becoming the hottest thing out there:
The Most Important Chart In 100 Years https://t.co/Ypcsqi0AWJ #AI #GPT #ChatGPT #technology @JohnNosta pic.twitter.com/QjMroVZ7cG
— Kyle Hailey (@kylelf_) February 16, 2023
ChatGPT is based on LLMs – Large Language Models – and it is fast becoming the hottest thing around. No other service grew as fast as ChatGPT, which is why every business in the world now is trying to figure out if and how ChatGPT will fit into their world and services.
Why ChatGPT and WebRTC are like oil and water
So it begged the question: what can you do with ChatGPT and WebRTC?
Problem is, ChatGPT and WebRTC are like oil and water – they don’t mix that well.
ChatGPT generates data whereas WebRTC enables people to communicate with each other. The “generation” part in WebRTC is taken care of by the humans that interact mostly with each other on it.
On one hand, this makes ChatGPT kinda useless for WebRTC – or at least not that obvious to use for it.
But on the other hand, if someone succeeds in cracking this one properly – they will have something innovative and unique.
What have people done with ChatGPT and WebRTC so far?
It is interesting to see what people and companies have done with ChatGPT and WebRTC in the last couple of months. Here are a few things that I’ve noticed:
In LiveKit’s and Twilio’s examples, the concept is to use the audio source from humans as part of prompts for ChatGPT after converting them using Speech to Text and then converting the ChatGPT response using Text to Speech and pass it back to the humans in the conversation.
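In pseudo-form, that pipeline looks roughly like the sketch below. speechToText, textToSpeech and playToRoom are hypothetical stand-ins for whatever STT/TTS and media-injection services you would actually wire in; the OpenAI endpoint and payload shape follow the public chat completions API:

```typescript
// Rough sketch of the "talk to an LLM over WebRTC" pipeline described above.
// speechToText(), textToSpeech() and playToRoom() are hypothetical placeholders –
// replace them with your actual STT/TTS services and media-injection logic.
async function speechToText(audio: Blob): Promise<string> {
  throw new Error("plug in your speech-to-text service here");
}
async function textToSpeech(text: string): Promise<Blob> {
  throw new Error("plug in your text-to-speech service here");
}
async function playToRoom(audio: Blob): Promise<void> {
  throw new Error("inject the synthesized audio back into the call here");
}

async function handleUtterance(audio: Blob, apiKey: string): Promise<void> {
  // 1. transcribe the user's speech
  const prompt = await speechToText(audio);

  // 2. send the transcript to the LLM (OpenAI chat completions API)
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const reply = (await response.json()).choices[0].message.content;

  // 3. synthesize the reply and play it back into the conversation
  await playToRoom(await textToSpeech(reply));
}
```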
Broadening the scope: Generative AI
ChatGPT is one of many generative AI services. Its focus is on text. Other generative AI solutions deal with images or sound or video or practically any other data that needs to be generated.
I have been using MidJourney for the past several months to help me with the creation of many images in this blog.
Today it seems that in any field where new data or information needs to be created, a generative AI algorithm can be a good place to investigate. And in marketing-speak – AI is overused and a new overhyped term was needed to explain what innovation and cutting edge is – so the word “generative” was added to AI for that purpose.
Fitting Generative AI to the world of RTC
How does one go about connecting generative AI technologies with communications then? The answer to this question isn’t an obvious or simple one. From what I’ve seen, there are 3 main areas where you can make use of generative AI with WebRTC (or just RTC):
Here’s what it means
Conversations and bots
In this area, we either have a conversation with a bot or have a bot “eavesdrop” on a conversation.
The LiveKit and Twilio examples earlier are about striking a conversation with a bot – much like how you’d use ChatGPT’s prompts.
A bot eavesdropping on a conversation can offer assistance throughout a meeting or after the meeting –
As I stated above, this has little to do with WebRTC itself – it takes place elsewhere in the pipeline; and to me, this is mostly an application capability.
Media compression
An interesting domain where AI is starting to be investigated and used is media compression. I’ve written about Lyra, Google’s AI enabled speech codec in the past. Lyra makes assumptions on how human speech sounds and behaves in order to send less data over the network (effectively compressing it) and letting the receiving end figure out and fill out the gaps using machine learning. Can this approach be seen as a case of generative AI? Maybe
Would investigating such approaches where the speakers are known to better compress their audio and even video makes sense?
How about the whole super resolution angle? Where you send video at WVGA or 720p resolution and then have the decoder scale it up to 1080p or 4K, losing little in the process. We’re generating data out of thin air, though probably not in the “classic” sense of generative AI.
I’d also argue that if you know the initial raw content was generated using generative AI, there might be a better way in which the data can be compressed and sent at lower bitrates. Is that something worth pursuing or investigating? I don’t know.
Media processing
Similar to how we can have AI based codecs such as Lyra, we can also use AI algorithms to improve quality – better packet loss concealment that learns the speech patterns in real time and then mimics them when there’s packet loss. This is what Google is doing with their WaveNetEQ, something I mentioned in my WebRTC unbundling article from 2020.
Here again, the main question is how much of this is generative AI versus simply AI – and does that even matter?
Is the future of WebRTC generative (AI)?
ChatGPT and other generative AI services are growing and evolving rapidly. While WebRTC isn’t directly linked to this trend, it certainly is affected by it:
Like any other person and business out there, you too should see if and how does generative AI affects your own plans.
The post ChatGPT meets WebRTC: What Generative AI means to Real Time Communications appeared first on BlogGeek.me.