The next evolution of my WebRTC training program is here.
A few years ago I wanted to try something new, so I spent a few months creating the Advanced WebRTC Architecture course. 3 years and 300 students later, it is time for a refresh.
While I keep my course up to date – hosting office hours, adding links on a monthly basis and modifying existing lessons when the need arises – there were things that I just never got around to. Which is why three months ago, I sat down and planned the next stage for my course, thinking of how to add more content without imploding the course and its price point.
The end result?
4 separate courses. 3 of them are available starting this month, and the fourth one? Once I am done creating it.
I’ve renamed them a bit, at least on the higher level, for simplicity, while keeping the Advanced WebRTC Architecture course mouthful-name inside the course itself (it made no sense to record it all again just for a “name-change”). Here is the new structure:
The WebRTC Basics course is something I’ve been thinking about on and off for quite some time. The content of this course is quite simple – it is the first module of my Advanced WebRTC Architecture course.
I even made that module free to access in my existing course in the past few months, though it is hard to tell how many people understood that it is free to access. For this reason, and a few others, I’ve decided to split it from the main course and offer it as a stand-alone free course.
Interested in learning the basics of WebRTC? You can enroll in this new course today for free and watch the lessons at your own pace.
Advanced WebRTC (Architecture course)
This is my signature WebRTC course. It got a facelift in this round:
If you are a student of this course already, log in today and see if you can notice the difference.
One thing that didn’t make it in the migration is your course progress… all in the name of… progress
WebRTC Tooling – a brand new “course”
This one’s brand new and is geared to become a rich library of resources.
Today, it includes two modules:
In each of these modules there are already over 8 “lessons”, and I plan to grow the list on a monthly basis – especially by request/demand of the students who enroll in it.
For this week only, the All Included bundle comes with the Tooling course for free (it is priced like the Advanced WebRTC course).
Supporting WebRTC – coming soon
This is a new course I’ve been thinking of on and off over the last year. It seems like I am getting more and more requests for such a thing, and in some of my consulting engagements I end up working directly with support teams on figuring out what they see in WebRTC dumps.
The intention of this course is to focus on support teams and what they need to know about WebRTC to effectively assist their customers.
This is in the ideation phase for me, but it will soon go into the creation phase. If you are interested in learning more or participating – contact me.
All Included – a bundled offering
This is a bundle of the Advanced WebRTC and WebRTC Tooling courses into one package.
It costs less than enrolling in each separately. And for the coming week, it is priced like the Advanced WebRTC course. Which means large savings.
In the one-week launch period, there are 3 eBooks that will be supplied for free as well. Which leads me to the next part –
eBooks
While we’re at it, I’ve written a new eBook and made two other eBooks available for purchase:
During the coming week, through the launch period of the course, these eBooks will be freely available as part of the All Included bundle. If you’re not interested in the courses, but interested in one or all of the eBooks, you can purchase them separately.
Q&A about this WebRTC course restructuring
I understand that this might be a bit confusing, especially for students who are already enrolled in the course. I’ll try to address these issues and other questions here –
What happens to those who enrolled in the WebRTC course in the last 12 months?
Nothing special.
They get to enjoy the new tools available for them in the Advanced WebRTC course. If you are one of these people and you have difficulties logging in – contact me.
What if I enrolled more than 12 months ago?
Then your subscription to the course is over. If you still want access, contact me.
When is the next office hours round taking place?
After the summer vacation.
I plan on starting these come September.
When will this restructuring take place?
It already has.
The courses and eBooks are now all available on webrtccourse.com.
Where can I learn more about the WebRTC courses?
On the course website.
You can find there testimonials from people who took the course, an FAQ, the list of partners, the syllabus and other details.
If you have specific questions, feel free to reach out to me and ask them.
The post WebRTC Courses: Free, Advanced and Tooling appeared first on BlogGeek.me.
Connecting WebRTC sessions effectively isn’t overly complicated, but it is something you need to be mindful of.
Every other day someone asks somewhere on the internet why their sessions don’t get connected with WebRTC. This can happen on discuss-webrtc, through my contact page, on open source WebRTC related forums, etc. Here’s one that was published on Stack Overflow this month:
I am working on video calling functionality using webRTC. I have used “Google webRTC” framework instead of libJingle.Once my peerconnection established it remains always in “RTCICEConnectionChecking” state.
I have few question.
1) Peerconnection state always remain in “RTCICEConnectionChecking”.
2) When network is different (3g/4g) video call is not working.
3) Same network it is working fine.
I have used many turn server but could not get success.
Please, suggest me ,thanks in advance.
The usual complaint?
WebRTC works fine on a local network, but stops working when trying to run it on other networks.
That’s so common you’d think people would know what to do with it by now.
That nice question has another angle to it – “I have used many turn server but could not get success”. Hmm… someone here feels WebRTC should be free.
If you haven’t read about it already, then please do – Why Doesn’t Google Provide a Free TURN Server? It turns out that TURN costs real money to operate. And at scale, even serious money. Which is why finding “turn server” and “get success” is rather hard (and probably impossible in the long run).
This continuous, unstoppable flow of similar questions in the past couple of years got me to the point where it was time to put out a proper answer to it. Which is why I created my latest video mini-course – 3 short videos that explain how we got to this ridiculous point: being unable to connect simple use cases with WebRTC.
In these videos, I’ll be teaching you the problem that is causing this to happen, the mistakes developers usually make when trying to solve that problem (think “used many turn server”), and then 2 actionable solutions that will guarantee that more of your WebRTC sessions get connected.
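To give a taste of where part of the fix lives in the code, here’s a rough sketch of configuring TURN properly on the client side. The server URLs and credentials below are made-up placeholders – you’d point this at your own (paid-for) TURN deployment:

```javascript
// A minimal sketch: configure STUN + TURN for a peer connection.
// The URLs, username and credential here are placeholders, not real servers.
const config = {
  iceServers: [
    { urls: 'stun:stun.example.com:3478' },
    {
      urls: [
        'turn:turn.example.com:3478',               // plain TURN
        'turns:turn.example.com:443?transport=tcp'  // TURN over TLS on 443, for strict firewalls
      ],
      username: 'user',
      credential: 'secret'
    }
  ],
  // Forcing relay-only is a quick way to verify the TURN server actually works:
  // if the call still connects with this set, TURN is doing its job.
  iceTransportPolicy: 'relay'
};

const pc = new RTCPeerConnection(config);
```

Once you’ve verified TURN works, switch iceTransportPolicy back to its default ('all') so ICE can pick the most direct route on its own.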
Why am I doing this?
First, because I like receiving emails from people saying “thank you“ (so if you find this course useful – be sure to reply with a thank you note).
But also because another round of office hours will take place soon for my WebRTC course. For this one, I am making a lot of changes in the structure of my WebRTC course and creating almost 3 additional hours worth of content.
Want to know how to get more WebRTC sessions connected?
Learn how to effectively connect WebRTC sessions
The post WebRTC connectivity is challenging (a free video course) appeared first on BlogGeek.me.
It must have been a fun week for Zoom. It showed why WebRTC is needed if you value security.
For those who haven’t followed the tech news, a week ago a serious vulnerability in Zoom was publicly disclosed by Jonathan Leitschuh. If you have a Mac and installed Zoom to join a meeting, then people could use web pages and links to force your machine to open up your Zoom client and camera. To make things worse, uninstalling Zoom was… impossible. That same link would forcefully reinstall Zoom as well.
I don’t want to get into how serious the actual vulnerability is, but rather discuss what got Zoom there, and to some extent, why WebRTC is the better technical choice.
What caused the Zoom vulnerability?
The road to hell is paved with good intentions.
When the Zoom app installs on your machine, it tries to integrate itself with the browser, in an effort to make it really quick to respond. The idea behind it is to reduce friction to the user.
An installation process is usually a multistep process these days:
Anything can go wrong in each step along the way – and when things can go wrong, they usually do. At scale, this means a lot of frustration for users.
I’ve been at this game myself. Before the good days of WebRTC, when I worked at a video conferencing company, this was a real pain for us. My company at the time developed its own desktop client, as an app that gets downloaded as a browser plugin. Lots of issues and bugs in getting this installed properly and removing friction.
These days, you can’t install browser plugins, so you’re left with installing an app.
Zoom tried to do two things here:
That first thing? Everyone tries to do it these days. We’re in the business of removing friction for users – remember?
The second one? That’s something that people consider outrageous. You uninstall the Zoom app, and if you open a web page with a link to a Zoom meeting, it will go about silently installing it in the background for the user. Why? Because there’s a “virus” left on your system by the Zoom installation: a web server that waits for commands, one of which is installing the Zoom client.
Here’s how joining a Zoom call looks on my Chrome browser in Linux:
The Zoom URL for joining a meeting opens the above window. Sometimes, it pops up a dialog and sometimes it doesn’t. When it doesn’t, you’re stuck on the page with either the need to “download & run Zoom” (which is weird, since it is already installed on my machine), “join from your browser” which we already know gives crappy quality or “click here”.
Since I am used to this weirdly broken behavior, I already know that I need to “click here”. This will bring about this lovely pop up:
This isn’t Zoom – it is Chrome opening a dialog of its own indicating that the browser page is trying to open a natively installed Linux application. It took me quite some time to decide to click that “Open xdg-open” button for these kinds of installed apps. For the most part, this is friction. Ugly friction at its best.
Does the Google Chrome team care? No. Why should they? Companies taking the experience out of the browser’s domain and into native-land is something they’d prefer didn’t happen.
Does Zoom care? It does. Not on Linux apparently (otherwise, this page would have been way better in its explanation of what to do). But on Mac? It cares so much that it went above and beyond to reduce that friction, going as far as trying to hack its way around security measures set by the Safari team.
Is the Zoom vulnerability really serious?
Maybe. Probably. I don’t know.
It was disclosed as a zero-day vulnerability, which is considered rather serious.
The original analysis of the vulnerability indicated quite a few avenues of attack:
Some of these issues have been patched by Zoom already, but the thing that remains here is the responsibility of developers for the applications they write. We will get to it a bit later.
While I am no security expert, this got the attention of Apple, who decided to automate the process and simply remove the Zoom web server from all Mac machines remotely and be done with it. It was serious enough for Apple.
Security is a game of cat and mouse
There are 3 main arms races taking place on the internet these days:
Zoom fell for that 3rd one.
Assume that every application and service you use has security issues and unknown bugs that might be exploited. The many data breaches we’ve had in the last few years at companies large and small indicate that clearly. So do the ransomware attacks on US cities.
Unified communications and video conferencing services are no different. As video use and popularity grows, so will the breaches and security exploits that will be found.
There were security breaches for these services before and there will be after. This isn’t the first or the last time we will be seeing this.
Could Zoom or any other company minimize its exposure? Sure.
Zoom’s response
My friend Chris thinks Zoom handled this nicely, with Eric Yuan joining a video call with security hackers. I see it more as a PR stunt. One that ended up backfiring, or at least not helping Zoom’s case here.
The end result? This post from Zoom, signed by the CEO as the author. This one resonates here:
Our current escalation process clearly wasn’t good enough in this instance. We have taken steps to improve our process for receiving, escalating, and closing the loop on all future security-related concerns
In the end, this won’t reduce the number of people using Zoom or even slow Zoom’s growth. Users like the service and are unlikely to switch. A few people might heed John Gruber’s suggestion to “eradicate it and never install it again”, but I don’t see this happening en masse.
Zoom got scorched by the fire and I have a feeling they’ll be doing better than most in this space from now on.
Competitors’ dancing moves
A few competitors of Zoom were quick to respond. The 3 that got to my email and RSS feed?
LogMeIn had a post on the GoToMeeting website, taking this stance:
Lifesize issued a message from their CEO:
Apizee decided to join the party:
The truth? I’d do the same if I were a competitor and comfortable with my security solution.
The challenge? Jonathan Leitschuh or some other security researcher might well go check them out, and who knows what they will find.
Why does WebRTC improve security?
For those who don’t know, WebRTC offers voice and video communications from inside the browser. Most vendors today use WebRTC, and for some reason, Zoom doesn’t.
There are two main reasons why WebRTC improves security of real time communication apps:
Many have complained about WebRTC and the fact that you cannot send unencrypted media with it. All VoIP services prior to WebRTC ran unencrypted by default, with encryption as an optional feature.
Unencrypted media is easier to debug and record, but it also enables eavesdropping. Encrypted media is thought to be a CPU hog due to the encryption process – a notion that should be outdated by 2019.
When Zoom decided not to use WebRTC, they essentially decided to take full responsibility and ownership of all security issues. They did that from a point of view and stance of an application developer or maybe a video conferencing vendor. They didn’t view it from a point of view of a browser vendor.
Browsers are secured by default, or at least try to be. Since they are general purpose containers for web applications that users end up using, they run with sandboxed environments and they do their best to mitigate any security risks and issues. They do it so often that I’d be surprised if there are any other teams (barring the operating system vendors themselves) who have better processes and technologies in place to handle security.
By striving for frictionless interactions, Zoom came head-on into an area where browser vendors handle security threats of unknown code execution. Zoom made the mistake of trying to hack its way through the security fence that the Safari browser team put in place instead of working within the boundaries provided.
Why did they take that approach? Company DNA.
Zoom “just works”, or so the legend goes. So anything that Zoom developers can do to perpetuate that is something they will go the length to do.
WebRTC has a large set of security tools and measures in place. These enable running it frictionlessly without the compromises Zoom had to make to get similar behavior.
Where may WebRTC fail?
There are several places where WebRTC falls short when it comes to security. Some of these are issues that are being addressed, while others are rather debatable.
I’d like to mention 4 areas here:
#1 – WebRTC IP leak
Like any other VoIP solution, WebRTC requires access to the local IP addresses of devices to work. Unlike any other VoIP solution, WebRTC exposes these IP addresses to the web application on top of it in JavaScript in order to work. Why? Because it has no other way to do this.
This has been known as the WebRTC IP leak issue, which is a minor issue if you compare it to the Zoom zero day exploit. It is also one that is being addressed with the introduction of mDNS, which I wrote about last time.
A few months from now, the WebRTC IP leak will be a distant problem.
I also wouldn’t categorize it as a security threat. At most it is a privacy issue.
#2 – Default access to web camera and microphone
When you use WebRTC, the browser is going to ask you to allow access to your camera and microphone, which is great. It shows that users need to agree to that.
But they only need to agree once per domain.
Go to the Google AppRTC demo page. If it is the first time you’re using it, it will ask you to allow access to your camera and microphone. Close the page and reopen it – and it won’t ask again. That’s at least the behavior on Chrome. Each browser takes its own approach here.
Clicking the Allow button above would cause all requests for camera and microphone access from appr.tc to be approved from now on without the need for an explicit user consent.
Is that a good thing? A bad thing?
It reduces friction, but ends up doing exactly what Jonathan Leitschuh complained about with Zoom as well – being able to open a user’s webcam without explicit consent just by clicking on a web link.
This today is considered standard practice with WebRTC and with video meetings in general. I’d go further and say that if there’s anything that pisses me off, it is video conferencing services that make you join with muted video, requiring me to explicitly unmute my video.
As I said, I am not a security expert, so I leave this for you to decide.
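If you want to see this persistence for yourself, here’s a small sketch using the Permissions API – assuming a browser (like Chrome) that supports querying the 'camera' permission:

```javascript
// A small sketch: check whether camera access was already granted for this origin,
// before ever calling getUserMedia(). Not all browsers support querying 'camera',
// hence the try/catch fallback.
async function checkCameraPermission() {
  try {
    const status = await navigator.permissions.query({ name: 'camera' });
    console.log('camera permission is', status.state); // 'granted', 'denied' or 'prompt'
    return status.state;
  } catch (e) {
    return 'prompt'; // unsupported - the browser will just prompt on getUserMedia()
  }
}
```

If this returns 'granted', a page on that origin can call getUserMedia() and open the camera without any further dialog being shown.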
#3 – Ugly exploits
Did I say a cat and mouse game? Advertising and ad blockers are there as well.
Advertisers try to push their ads, sometimes aggressively, which brought into the world the ad blockers, who then deal with cleaning up the mess. So advertisers try to hack their way through ad blockers.
Since there’s big advertising money involved, there are those who try to game the system. Either by using machines to automate ad viewing and clicking to increase revenue, by getting real humans in poor countries to manually click ads for the same reason, or by injecting their own code and ads instead of the ads that should have appeared.
That last one was found using WebRTC to inject its code, by placing it in the data channel. There’s some more information on the DEVCON website. Interestingly, this exploit works best via Webview inside apps like Facebook that open web pages internally instead of through the browser. It makes it a lot harder to research and find in that game of cat and mouse.
I don’t know if this is being addressed at all at the moment by browser vendors or the standards bodies.
#4 – Lazy developers
This is the biggest threat by far.
Developers using WebRTC who don’t know better, or who just assume that WebRTC protects them, do their best not to take responsibility for their part of the application.
Remember that WebRTC is a building block – a piece of browser based technology that you use in your own application. Also, it has no signaling protocol of its own, so it is up to you to decide, implement and operate that signaling protocol yourself.
Whatever you do on top of WebRTC needs to be done securely as well, but it is your responsibility. I’ve written a WebRTC security checklist. Check it out:
Download the WebRTC security checklist
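To make concrete what is and isn’t WebRTC’s job here, below is a bare-bones sketch of the signaling piece you own. The wss:// URL, the auth token and the message format are all made up – that’s the point, since WebRTC leaves them for you to define and secure:

```javascript
// A minimal signaling sketch. The URL, token and JSON message shape are assumptions -
// WebRTC doesn't define them. Securing this channel (TLS, authentication, validating
// incoming messages) is entirely the application's responsibility.
const signaling = new WebSocket('wss://signaling.example.com/?token=YOUR_AUTH_TOKEN');
const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.example.com:3478' }] });

// Send our ICE candidates to the other side over our own channel
pc.onicecandidate = ({ candidate }) => {
  if (candidate) signaling.send(JSON.stringify({ type: 'candidate', candidate }));
};

// Handle whatever arrives from the other side
signaling.onmessage = async ({ data }) => {
  const msg = JSON.parse(data);
  if (msg.type === 'offer') {
    await pc.setRemoteDescription(msg);
    await pc.setLocalDescription(await pc.createAnswer());
    signaling.send(JSON.stringify(pc.localDescription));
  } else if (msg.type === 'answer') {
    await pc.setRemoteDescription(msg);
  } else if (msg.type === 'candidate') {
    await pc.addIceCandidate(msg.candidate);
  }
};
```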
Why isn’t Zoom using WebRTC?
Zoom was founded in 2011.
WebRTC was just announced in 2011.
At the time it started, WebRTC wasn’t a thing.
When WebRTC became a thing, Zoom were probably already too invested in their own technology to be bothered with switching over to WebRTC.
While Zoom wanted frictionless communications for its customers, it probably had, and still has, to pay too big a price to switch to WebRTC. This is probably why when Zoom decided to support browsers directly with no downloads, they went for WebAssembly rather than WebRTC. The results are a lot poorer, but it allowed Zoom to stay within the technology stack it already had.
The biggest headache for Zoom here is probably the video codec implementation. I’ll take a guess and assume that Zoom is using its own proprietary video codec derived from H.264. The closest indication I could find for it was this post on the Zoom website:
We have better coding and compression for our screen sharing than any other software on the market
If Zoom had codecs that are compatible with WebRTC or that can easily be made compatible with WebRTC they would have adopted WebRTC already.
Zoom took the approach of using this as a differentiator and focusing on improving their codecs, most probably thinking that media quality was the leading factor for people to choose Zoom over alternative solutions.
Where do we go from here?
It is 2019.
If you are debating using WebRTC or a proprietary technology then stop debating. Use WebRTC.
It will save you time and improve the security as well as many other aspects of your application.
If you’re still not sure, you can always contact me.
The post Zoom app vulnerability shows why WebRTC is important appeared first on BlogGeek.me.
Another destabilizing WebRTC experiment in Chrome is about to become reality.
I’ve had clients approaching me in the past month or two with questions about a new type of address cropping up as ICE candidates. As it so happens, these new candidates have caused some broken experiences.
In this article, I’ll try to untangle how local ICE candidates work, what is mDNS, how it is used in WebRTC, why it breaks WebRTC and how this could have been handled better.
How do local ICE candidates work in WebRTC?
Before we go into mDNS, let’s start with understanding why we’re headed there with WebRTC.
When trying to connect a session over WebRTC, there are 3 types of addresses that a WebRTC client tries to negotiate:
During the ICE negotiation process, your browser (or app) will contact its configured STUN and TURN servers, asking them for addresses. It will also check with the operating system what local IP addresses it has at its disposal.
Why do we need a local IP address?
If both machines that need to connect to each other using WebRTC sit within the same private network, then there’s no need for the communication to leave the local network either.
Why do we need a public IP address through STUN?
If the machines are on different networks, then by punching a hole through the NAT/firewall, we might be able to use the public IP address that gets allocated to our machine to communicate with the remote peer.
Why do we need a public IP address on a TURN server?
If all else fails, then we need to relay our media through a “third party”. That third party is a TURN server.
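Here’s a small sketch that logs these three address types as the browser gathers them. The STUN server in it is Google’s public one; relay candidates would only show up if a TURN server (with your own credentials) were configured as well:

```javascript
// A small sketch: log each gathered ICE candidate and its type.
// 'host' = local IP, 'srflx' = public IP learned via STUN, 'relay' = TURN allocation.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
});

pc.onicecandidate = ({ candidate }) => {
  if (candidate) {
    console.log(candidate.type, '->', candidate.candidate);
  }
};

// Creating a data channel and an offer kicks off ICE gathering
pc.createDataChannel('probe');
pc.createOffer().then((offer) => pc.setLocalDescription(offer));
```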
Local IP addresses as a privacy risk
That part about sharing local IP addresses? It can really improve things in getting calls connected.
It is also something that is widely used and common in VoIP services. The difference though is that VoIP services that aren’t WebRTC and don’t run in the browsers are a bit harder to hack or abuse. They need to be installed first.
WebRTC gives web developers “superpowers” in knowing your local IP address. That scares privacy advocates, who see this as a breach of privacy and even gave it a name – the “WebRTC Leak”.
A few things about that:
Yes, we have known about that problem ever since the NY Times used a WebRTC-based script to gather IP addresses back in 2015. “WebRTC IP leak” is one of the most common search terms (SEO hacking at its best).
Luckily for us, Google is collecting anonymous usage statistics from Chrome, making the information available through a public chromestatus metrics site. We can use that to see on what percentage of page loads WebRTC is used. The numbers are quite… big:
RTCPeerConnection calls on % of Chrome page loads (see here)
Currently, 8% of page loads create an RTCPeerConnection. 8%. That is quite a bit. We can see two large increases, one in early 2018 when 4% of pageloads used RTCPeerConnection and then another jump in November to 8%.
Now that just means RTCPeerConnection is used. In order to gather local IPs the setLocalDescription call is required. There are statistics for this one as well:
setLocalDescription calls on % of Chrome page loads (see here)
The numbers here are significantly lower than for the constructor. This means a lot of peer connections are constructed but not used. It is somewhat unclear why this happens. We can see a really big increase in November 2018 to 4%, at about the same time that RTCPeerConnection calls jumped to 7-8%. While it makes no sense, this is what we have to work with.
Now, WebRTC could be used legitimately to establish a peer-to-peer connection. For that we need both setLocalDescription and setRemoteDescription and we have statistics for the latter as well:
setRemoteDescription calls on % of Chrome page loads (see here)
Since the big jump in late 2017 (which is explained by a different way of gathering data) the usage of setRemoteDescription hovers between 0.03% and 0.04% of pageloads. That’s roughly 1% of the pages on which a peer connection is actually created.
We can get another idea about how popular WebRTC is from the getUserMedia statistics:
getUserMedia calls on % of Chrome page loads (see here)
This is consistently around 0.05% of pageloads. A bit more than RTCPeerConnection being used to actually open a session (that setRemoteDescription graph) but there are use-cases such as taking a photo which do not require WebRTC.
Here’s what we’ve arrived at, assuming the metrics collection of chromestats reflects real use behavior. We have 0.04% of pageloads compared to 4%. This shows that a considerable percentage of the RTCPeerConnections are potentially used for a purpose other than what WebRTC was designed for. That is a problem that needs to be solved.
* credits and thanks to Philipp Hancke for assisting in collecting and analyzing the chromestats metrics
What is mDNS?
Switching to a different topic before we go back to WebRTC leaks and local IP addresses.
mDNS stands for Multicast DNS. It is defined in IETF RFC 6762.
mDNS is meant to deal with having names for machines on local networks without needing to register them on DNS servers. This is especially useful when there are no DNS servers you can control – think of a home with a couple of devices that need to interact locally without going out to the internet – Chromecast and network printers are good examples. What we want is something lightweight that requires no administration to make that magic work.
And how does it work exactly? In a similar fashion to DNS itself, just without any global registration – no DNS server.
At its most basic, when a machine wants to know the local-network IP address of a device with a given name (let’s say tsahi-laptop), it sends out an mDNS query on a known multicast IP address (the exact address and details can be found in the spec) asking for “tsahi-laptop.local”. There’s a separate registration mechanism whereby devices can claim their mDNS names by announcing them within the local network.
Since the request is sent over a multicast address, all machines within the local network receive it. The machine with that name (probably my laptop, assuming it supports mDNS and is discoverable in the local network), will return back with its IP address, doing that also over multicast.
That means all machines in the local network hear the response and can now cache that fact – the IP address on the local network of the machine called tsahi-laptop.
How is mDNS used in WebRTC?
Back to that WebRTC leak and how mDNS can help us.
Why do we need local IP addresses? So that sessions that need to take place in a local network don’t need to use public IP addresses. This makes routing a lot simpler and efficient in such cases.
But we also don’t want to share these local IP addresses with the JavaScript application running in the browser. That would be considered a breach of privacy.
Which is why mDNS was suggested as a solution. There is a new IETF draft known as draft-ietf-rtcweb-mdns-ice-candidates-03. The authors behind it? Developers at both Apple and Google.
The reason for it? Fixing the longstanding complaint about WebRTC leaking out IP addresses. From its abstract:
WebRTC applications collect ICE candidates as part of the process of creating peer-to-peer connections. To maximize the probability of a direct peer-to-peer connection, client private IP addresses are included in this candidate collection. However, disclosure of these addresses has privacy implications. This document describes a way to share local IP addresses with other clients while preserving client privacy. This is achieved by concealing IP addresses with dynamically generated Multicast DNS (mDNS) names.
How does this work?
Assuming WebRTC needs to share a local IP address which it deduces is private, it will use an mDNS address for it instead. If there is no mDNS address for it, it will generate and register a random one with the local network. That random mDNS name will then be used as a replacement for the local IP address in all SDP and ICE negotiation messages.
The result?
Here’s the rub though. mDNS breaks WebRTC implementations.
mDNS is supposed to be innocuous:
With WebRTC though, mDNS names are shared instead of IP addresses. And they are sent over the public network, inside a protocol that expects to receive only IP addresses and not DNS names.
The result? Questions like this recent one on discuss-webrtc:
Weird address format in c= line from browser
I am getting an offer SDP from browser with a connection line as such:
c=IN IP4 3db1cebd-e606-4dc1-b561-e0af5b4fd327.local
This is causing trouble in a webrtc server that we have since the parser is bad (it is expecting a normal ipv4 address format)
[…]
This isn’t a singular occurrence. I’ve had multiple clients approach me with similar complaints.
What happens here, and in many other cases, is that the IP addresses that are expected to be in SDP messages are replaced with mDNS names – instead of x.x.x.x:yyyy the servers receive <random-ugly-something>.local and the parsing of that information is totally different.
This applies to all types of media servers – the common SFU media server used for group video calls, gateways to other systems, PBX products, recording servers, etc.
Some of these have been updated to support mDNS addresses inside ICE candidates already. Others probably haven’t, like the recent one above. But more importantly, many existing deployments that don’t want, need or care to upgrade their server software frequently are now broken as well, and will have to be upgraded.
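The fix itself isn’t complicated once you know it is needed. A rough sketch of the parsing change, in JavaScript for illustration (think a Node.js based signaling or media server): accept either an IP literal or an mDNS hostname ending in “.local” in the candidate’s address field, instead of assuming an IPv4 address:

```javascript
// A rough sketch of a more forgiving candidate parser. The connection address is the
// 5th space-separated field of a candidate line, and with mDNS it may be a ".local" name.
function parseCandidateAddress(candidateLine) {
  // e.g. "candidate:1 1 udp 2122260223 3db1cebd-e606-4dc1-b561-e0af5b4fd327.local 54400 typ host ..."
  const parts = candidateLine.split(' ');
  const address = parts[4];
  const port = parseInt(parts[5], 10);

  if (address.endsWith('.local')) {
    // Either resolve it via mDNS on the local network, or skip it and
    // rely on the srflx/relay candidates that carry real IP addresses.
    return { kind: 'mdns', address, port };
  }
  return { kind: 'ip', address, port };
}
```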
Could Google have handled this better?
In January, Google announced this new experiment on discuss-webrtc. More importantly, it stated that:
No application code is affected by this feature, so there are no actions for developers with regard to this experiment.
Within a week, it got this in a reply:
As it stands right now, most ICE libraries will fail to parse a session description with FQDN in the candidate address and will fail to negotiate.
More importantly, current experiment does not work with anything except Chrome due to c= line population error. It would break on the basic session setup with Firefox. I would assume at least some testing should be attempted before releasing something as “experiment” to the public. I understand the purpose of this experiment, but since it was released without testing, all we got as a result are guaranteed failures whenever it is enabled.
The interesting discussion that ensued for some reason focused on how people interpret the various DNS and ICE related standards, and on whether libnice (an open source implementation of ICE) breaks or doesn’t break due to mDNS.
But it failed to encompass the much bigger issue – developers were somehow expected to write their code in a way that won’t break with the introduction of mDNS in WebRTC – without even being aware that this was going to happen at some point in the future.
Ignoring that fact, Google has been running mDNS as an experiment for a few Chrome releases already. As an experiment, two things were decided:
The bigger issue here is that many view-only WebRTC solutions are developed and deployed by people who aren’t “in the know” when it comes to WebRTC. They know the standard, they may know how to implement with it, but most times they don’t roam the discuss-webrtc mailing list, and their names and faces aren’t known within the tight-knit WebRTC community. They have no voice in front of those who make such decisions.
In that same thread discussion, Google also shared the following statement:
FWIW, we are also considering to add an option to let user force this feature on regardless of getUserMedia permissions.
Mind you – that statement was a one-liner inside a forum discussion thread, from a person who didn’t identify himself in the message with a title, or note that he speaks for Google and is a decision maker.
Which is the reason I sat down to write this article.
mDNS is GREAT. AWESOME. Really. It is simple, elegant and gets the job done better than any other solution people would come up with. But it is a breaking change. And that is a fact that seems to be lost on Google for some reason.
By enforcing mDNS addresses on all local IP addresses (which is a very good thing to do), Chrome will undoubtedly break a lot of services out there. Most of them might be small, and not part of the select few in the billion-minutes club.
Google needs to be a lot more transparent and public about such a change. This is by no means a singular case.
Just digging into what mDNS is, how it affects WebRTC negotiation and what might break took me time. The initial messages about an mDNS experiment are just not enough to get people to do anything about it. Google did a way better job with their explanation about the migration from Plan B to Unified Plan as well as the ensuing changes in getStats().
My main worry is that this type of transparency doesn’t happen as part of a planned rollout program. It is done ad-hoc with each initiative finding its own creative solution to convey the changes to the ecosystem.
This just isn’t enough.
WebRTC is huge today. Many businesses rely on it. It should be treated as the mission critical system that developers who use it see in it.
It is time for Google to step up its game here and put the mechanisms in place for that.
What should you do as a developer?
First? Go check if mDNS breaks your app. You can enable this functionality at chrome://flags/#enable-webrtc-hide-local-ips-with-mdns
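Here’s a quick way, in sketch form, to see whether your app starts gathering mDNS candidates once that flag is enabled – and therefore whether anything downstream that parses SDP or candidates needs to cope with them:

```javascript
// A quick sanity check: gather candidates and list any host candidates that now
// carry an mDNS ".local" name instead of a local IP address.
const pc = new RTCPeerConnection();
pc.createDataChannel('probe'); // anything that triggers ICE gathering

pc.onicegatheringstatechange = () => {
  if (pc.iceGatheringState === 'complete') {
    const mdnsCandidates = pc.localDescription.sdp
      .split('\r\n')
      .filter((line) => line.startsWith('a=candidate') && line.includes('.local'));
    console.log('mDNS candidates found:', mdnsCandidates);
  }
};

pc.createOffer().then((offer) => pc.setLocalDescription(offer));
```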
In the long run? My best suggestion would be to follow messages coming out of Google in discuss-webrtc about their implementation of WebRTC. To actively read them. Read the replies and discussions that take place around them. To understand what they mean. And to engage in that conversation instead of silently reading the threads.
Test your applications on the beta and Canary releases of Chrome. Collect WebRTC behavior related metrics from your deployment to find unexpected changes there.
Apart from that? Nothing much you can do.
As for mDNS, it is a great improvement. I’ll be adding a snippet explanation about it to my WebRTC Tools course – something new that is coming to the WebRTC Course next month. Stay tuned!
The post PSA: mDNS and .local ICE candidates are coming appeared first on BlogGeek.me.
Marketing automation isn’t easy.
I’ve been doing that for a few years now on BlogGeek.me, trying to figure it out as I go along. My newsletter service configuration and settings look like a large ball of spaghetti at this point, with little way for me to handle things in it. This, as well as a few other reasons, got me to switch my marketing automation provider as part of a larger project I am running.
It has taken its toll. Mainly a lot of time and energy spent on figuring things out yet again and cleaning up stuff. Along the way, I’ve enrolled in an online course and learned some more about what I can do without pissing off subscribers. Hopefully, I’ll be headed down that road a bit more in the coming months.
Anyways, a few quick notes:
See you on the other end of my infrastructure nightmare
The post Migrating BlogGeek.me and why it is quiet here lately appeared first on BlogGeek.me.
In 2019, WebRTC is ready, but there’s still work ahead.
When I wrote that WebRTC is ready over 6 months ago it pissed a few people off.
Here’s the thing – WebRTC is ready simply because the industry deems it ready and companies are deploying products that rely on WebRTC to work for them.
Are there challenges along the way? Sure.
Do things break? Sure.
But if you are thinking of whether you should start using WebRTC and build an application on top of it or wait for the next fad to come by for your video calling service, then don’t. Use WebRTC as nothing else will do today.
Trying to understand where WebRTC is available? Download my free cheat sheet
WebRTC 1.0 – the specification
In 2015, I remember someone telling me that WebRTC 1.0 would be closed and published by year end.
I heard the same in 2016. And later in 2017.
In 2018 I ignored such promises.
2019? There is a small chance that things will be ready. Why? Because the spec is almost completed. That almost is the sticking point.
But then again, who cares?
Everyone is already using WebRTC as if it is a done deal. Because it is.
We’ve agreed on the technology (WebRTC). We’ve agreed on the larger picture and the way things are going to look (peer connection and how browsers implement it today). We’re left with the nitty gritty details of how to make the experience easier and uniform across browsers for developers. We will get there, but just remember – users expect it to work, and it does.
Chrome and WebRTC
Consider Chrome to be the de facto specification for WebRTC. It isn’t WebRTC 1.0 compliant. Yet. According to Statista, 69% of the desktop internet is driven by Chrome. On this website? 74% of the viewers use Chrome.
The thing about Chrome is that it is slowly getting the missing WebRTC 1.0 support, and in moving there it breaks things with each release. Usually because the way it works today isn’t exactly spec compliant, so things have to break – or just because the additions are delicate and the work done breaks behavior that developers relied on in the past. At times, it is because Google has no qualms when it comes to technical debt and code rewrites, and when it sees a need to optimize something it usually does that (we’re now in the 3rd generation of echo canceller in WebRTC, each one a complete rewrite of the previous one).
If you are developing anything that needs to run in the browser and use WebRTC, then Chrome is the first thing you should be developing for.
Firefox and WebRTC
Firefox is close to being spec compliant when it comes to WebRTC.
They had it easy with the recent decision to adopt Unified Plan instead of Plan B in the WebRTC specification. Where Google had to shift from Plan B to Unified Plan, Firefox had only slight modifications to make.
The problem is that Firefox is a distant second to Chrome in market share. At times, developers actively decide not to support Firefox just because they consider it a waste of time. This is doubly true for those who use Chrome for guest access and as a stepping stone to getting their users to download their Electron app instead.
Safari and WebRTC
Safari now supports WebRTC. That includes things like simulcast and both VP8 and H.264. Which is to say that most WebRTC features already work in Safari, but not all of them.
You won’t find VP9, which isn’t mandatory or popular yet but is more than desirable. And some of the more complicated scenarios, such as multiparty sessions, have more pending open issues of both functionality and interoperability than Chrome or Firefox have.
The challenge is that Safari is important to developers. Both because it is the only way to get on iOS devices and because it is the default browser for Mac, a desktop/laptop that for some reason is becoming a fad with developers (go figure).
Edge and WebRTC
Edge was once its own browser with its own technology stack, but is now becoming just another flavor of Chrome. Microsoft announced that Edge will be using Chromium as its browser engine. This has gotten Edge to work on Mac already with rumors of a possible Linux release.
Edge runs on Chromium.
Chrome runs on Chromium.
Chrome isn’t WebRTC spec compliant because Chromium isn’t WebRTC spec compliant.
So Edge isn’t spec compliant either. But it is well… the same as Chrome.
This all relates to the upcoming official release of Edge.
Microsoft IE and WebRTC
Still dream about Internet Explorer at night?
Stop it.
IE won’t be supporting WebRTC. Not now and not ever.
Use a plugin or just use Electron. Or better yet – update to a more modern browser.
Opera/Brave/whoever and WebRTC
Most of the other browsers out there, be they Opera, Brave or anything else, are just forks of Chromium or skins on top of Chromium.
For all intents and purposes, they are Chrome, offering the same spec compliance to WebRTC as Chrome does. At least if they haven’t gone and intentionally made changes to it (like disabling it in the name of privacy).
Android and WebRTC
Android has WebRTC support.
The Chrome browser that ships with Android has WebRTC support.
Other browsers shipping on Android have WebRTC support (such as Firefox).
Sometimes, a device manufacturer ends up shipping its own browser (Samsung, for example). Then WebRTC compliance and availability are somewhat questionable.
The good thing is that the Webview in Android also supports WebRTC. So built-in application browsers such as the one used by Facebook or Slack also end up supporting WebRTC experiences.
And if you write your own app, you can use the Webview, a precompiled version of WebRTC for Android or compile it on your own.
iOS and WebRTC
On iOS things are slightly trickier.
Safari supports WebRTC on iOS and there are companies making commercial use of it already.
Other browsers don’t and can’t support WebRTC on iOS. That’s because the supplied iOS Webview still doesn’t support WebRTC (or disables it on purpose).
If you write your own app, you can use a precompiled version of WebRTC for iOS or compile it on your own. No Webview for you yet.
Your Next Steps?
Haven’t started with WebRTC yet? Now’s the time. I can help.
Trying to understand where WebRTC is available? Download my free cheat sheet
The post What’s the status of WebRTC in 2019? appeared first on BlogGeek.me.
Video recording using WebRTC can be a lot more lucrative a business than WebRTC video calling.
There’s been an ongoing rumble around WebRTC in a lot of the discussions I’ve had about it, and sometimes in what you read online – what’s the market size of WebRTC? How do you make money out of it? Who is making money out of it?
Questions that are really hard to answer. Usually because people don’t like to hear the answers to them.
Looking to understand where and how to fit WebRTC into your business? Let’s talk
The Zoom IPO
Is there money in video conferencing or video calling?
The service today is practically free, spread across a multitude of different service types:
Social
An unending list of social communication services that happen to have video calling in them. I’ve bunched Apple and Google in here simply because they “own” the smartphones we use today.
Business
Here you’ll find services that are free to a certain extent. They are either time limited, feature limited, or just bundled into bigger offerings.
Zoom were probably the first to go this route with a well-featured product where the biggest limit for a free account was time – 40 minutes per session. Long enough for a lot of uses.
Consumer/Soho
There are many consumer-type services that got built using WebRTC and gained traction. The services started as free offerings, and each grew of its own accord. Jitsi Meet got acquired by Atlassian and then 8×8 acquired it from Atlassian. Appear.in started offering paid Pro accounts and got acquired by Videonor. Talky became a showcase for SimpleWebRTC.
Others started with a free service, ending with a paid service, like Gruveo.
Show me the money
This is where things got complicated.
No one saw a way to make money out of WebRTC. Or video.
At least not until Zoom IPO’d. ~$425 million annual run rate, growing at over 100% a year. Alex Clayton has a nice breakdown of their filing:
The moment this happened, both BlueJeans and LifeSize decided to publish their numbers – BlueJeans reached $100m ARR while Lifesize reached $100m in bookings. Their message? Zoom isn’t alone.
For the record, and to make this clear:
The thing here is the video conferencing service, and how you make money out of it. You can, if you’re big enough, though it will be hard to join the game now and try to outdo Zoom in video conferencing using their own playbook.
The challenge is probably that everyone is looking under the light post.
You’ve got practically hundreds of developers, startups, enterprises and whatnot vying to disrupt the video conferencing market with WebRTC. The challenge is that with so many players coming in with the same technology, only a few will stay standing.
Differentiation is tough in this space. Why would someone pick up your service and not another? How will they find you? Why should they pay?
Which brings me to the reason I started writing this in the first place –
Not video calling – WebRTC video recording
I went to AppSumo this week, deciding to purchase another deal on their site. Every once in a while I find some great deals there and new services to use for my business. The latest featured offer on that site? Dubb (now sold out)
Dubb
This is a service that runs as a Chrome extension enabling its users to record a short video and share it with customers over SMS, email or other networks.
I don’t know if Dubb supports WebRTC or not, but –
In all likelihood, this is using WebRTC’s MediaRecorder to record locally and upload the result to the Dubb cloud service.
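For the curious, here’s roughly what such a flow looks like in the browser. This is my own guess at the mechanics, not Dubb’s actual code, and the upload endpoint is made up:

```javascript
// A plausible sketch: capture the camera with getUserMedia, record locally with
// MediaRecorder, then upload the resulting blob to a (hypothetical) backend.
async function recordClip(durationMs) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const chunks = [];

  recorder.ondataavailable = (e) => chunks.push(e.data);
  const stopped = new Promise((resolve) => (recorder.onstop = resolve));

  recorder.start();
  await new Promise((resolve) => setTimeout(resolve, durationMs));
  recorder.stop();
  await stopped;

  stream.getTracks().forEach((track) => track.stop()); // release the camera
  const blob = new Blob(chunks, { type: 'video/webm' });
  await fetch('/api/upload-recording', { method: 'POST', body: blob }); // made-up endpoint
  return blob;
}
```

The recording happens entirely on the client, which keeps the server side to little more than storage and sharing.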
Dubb is positioned as a sales tool to build rapport – not as a video conferencing or a communication tool. There’s no “real time”, “collaboration” or “conferencing” here.
Seeing it got me thinking of another tool I bumped into recently – Loom
Loom
I started a coaching program a few months back. My WebRTC Course has shown success in the 3 years of its existence and I wanted to grow it in size – have more people enroll and learn WebRTC in the process. The coaching program is interesting. I am learning a ton in it; some of it has already found its way into the course, and a lot more will be coming in the next course launch in a few months’ time.
Anyways, when I ask questions via email, I usually get back video recordings of my coach reviewing the question and answering it, thinking through the issues I raise. I can see him and his screen, which is great. The link and tool he uses? Loom.
So I checked it out:
Similarly to Dubb, this one is about recording videos from the browser, with no installation needed. In Loom’s case, they are even trying to showcase the various uses of their tool.
WebRTC isn’t only about calling
WebRTC isn’t only about calling.
It has other capabilities. There’s the data channel, there’s the simple access to the camera and mic and there’s the ability to record media on the client side to name a few.
That client-side recording enables these services – Dubb and Loom. There’s also Ziggeo and Pipe for those looking for a managed API for it.
I am wondering: when everyone is looking so closely at video calling, trying to figure out how to make $$$ out of that space, does the real usefulness of WebRTC lie elsewhere altogether?
Looking to understand where and how to fit WebRTC into your business? Let’s talk
The post WebRTC video recording may be more useful than WebRTC video calling appeared first on BlogGeek.me.
Video recording using WebRTC can be a lot more lucrative a business than WebRTC video calling.
There’s been an ongoing rumble around WebRTC in a lot of discussions I had about it and sometimes from what you read online – What’s the market size of WebRTC? How do you make money out of it? Who is making money out of it?
Questions that are really hard to answer. Usually because people don’t like to hear the answers to them.
Looking to understand where and how to fit WebRTC into your business? Let’s talk
The Zoom IPOIs there money in video conferencing or video calling?
The service today is practically free, spread across a multitude of different service types:
SocialAn unending list of social communication services that happen to have video calling in them. I’ve bunched Apple and Google in here simply because they “own” the smartphones we use today.
BusinessHere you’ll find services that are free to a certain extent. They are either time limited, feature limited, or just bundled up to bigger offerings.
Zoom were probably the first to go this route with a well-featured product where the biggest limit for a free account was time – 40 minutes per session. Long enough for a lot of uses.
Consumer/SohoThere are many consumer-type services that got built using WebRTC and gained traction. The services started as free offerings, and each grew of its own accord. Jitsi Meet got acquired by Atlassian and then 8×8 acquired it from Atlassian. Appear.in started offering paid Pro accounts and got acquired by Videonor. Talky became a showcase for SimpleWebRTC.
Others started with a free service, ending with a paid service, like Gruveo.
Show me the moneyThis is where things got complicated.
No one saw a way to make money out of WebRTC. Or video.
At least not until Zoom IPO’d. ~$425 million annual run rate, growing at over 100% a year. Alex Clayton has a nice breakdown of their filing:
The moment this happened, both BlueJeans and LifeSize decided to publish their numbers – BlueJeans reached $100m ARR while Lifesize reached $100m in bookings. Their message? Zoom isn’t alone.
For the record, and to make this clear:
The thing here is the video conferencing service itself, and how you make money out of it. You can, if you’re big enough, though it will be hard to join the game now and try to outdo Zoom at video conferencing using their own playbook.
The challenge is probably that everyone is looking under the light post.
You’ve got hundreds of developers, startups, enterprises and whatnot vying to disrupt the video conferencing market with WebRTC. The challenge is that with so many players coming in with the same technology, only a few will stay standing.
Differentiation is tough in this space. Why would someone pick up your service and not another? How will they find you? Why should they pay?
Which brings me to the reason I started writing this in the first place –
Not video calling – WebRTC video recording
I went to AppSumo this week, deciding to purchase another deal on their site. Every once in a while I find some great deals and new services there to use for my business. The latest featured offer on that site? Dubb (now sold out)
Dubb
This is a service that runs as a Chrome extension enabling its users to record a short video and share it with customers over SMS, email or other networks.
I don’t know if Dubb supports WebRTC or not, but –
In all likelihood, this is using WebRTC’s MediaRecorder to record locally and upload the result to the Dubb cloud service.
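Just to make that concrete, here’s a rough, hypothetical sketch of what such a flow could look like in the browser – getUserMedia for the camera and mic, MediaRecorder to capture locally, and a plain upload of the resulting blob. The upload endpoint is made up, and Dubb’s actual implementation may well differ:

// Hypothetical sketch only – capture, record locally, then upload the result
async function recordAndUpload(durationMs) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const chunks = [];

  recorder.ondataavailable = (event) => chunks.push(event.data);
  const stopped = new Promise((resolve) => (recorder.onstop = resolve));

  recorder.start();
  setTimeout(() => recorder.stop(), durationMs);
  await stopped;

  // Only now does any media leave the device – everything above happened client side
  const blob = new Blob(chunks, { type: 'video/webm' });
  await fetch('https://example.com/upload', { method: 'POST', body: blob }); // made-up endpoint

  stream.getTracks().forEach((track) => track.stop());
}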
Dubb is positioned as a sales tool to build rapport – not as a video conferencing or a communication tool. There’s no “real time”, “collaboration” or “conferencing” here.
Seeing it got me thinking of another tool I bumped into recently – Loom
Loom
I started a coaching program a few months back. My WebRTC Course showed success in the last 3 years of its existence and I wanted to grow it in size – have more people enroll and learn WebRTC in the process. The coaching program is interesting. I am learning a ton in it, some of it already found its way into the course and a lot more will be coming in the next course launch in a few months time.
Anyways, when I ask questions via email, I usually get back video recordings of my coach reviewing the question and answering it, thinking through the issues I raise. I can see him and his screen, which is great. The link and tool he uses? Loom.
So I checked it out:
Similarly to Dubb, this one is about recording videos from the browser, with no installation needed. In Loom’s case, they are even trying to showcase the various uses of their tool.
WebRTC isn’t only about calling
WebRTC isn’t only about calling.
It has other capabilities. There’s the data channel, there’s the simple access to the camera and mic and there’s the ability to record media on the client side to name a few.
That client side recording enables these services – Dubb and Loom. There’s also Ziggeo and Pipe for those looking for a managed API for it.
I am wondering: when everyone is looking closely at video calling, trying to figure out how to make $$$ out of that space, does the real value of WebRTC lie elsewhere altogether?
Looking to understand where and how to fit WebRTC into your business? Let’s talk
The post WebRTC video recording may be more useful than WebRTC video calling appeared first on BlogGeek.me.
WebRTC vs WebSockets: They. Are. Not. The. Same.
Sometimes, there are things that seem obvious once you’re “in the know” but just aren’t when you’re new to the topic. It seems that the difference between WebRTC and WebSockets is one such thing. Philipp Hancke pinged me the other day, asking if I have an article about WebRTC vs WebSockets, and I didn’t – it made no sense to me. That is, until I asked Google about it:
It seems like Google believes the most pressing (and popular) comparison with WebRTC is the one against WebSockets. I should probably also write about the other comparisons there, but for now, let’s focus on that first one.
Need to learn WebRTC? Check out my online course – the first module is free.
What are WebSockets?
WebSockets are a bidirectional mechanism for browser communication.
There are two types of transport channels for communication in browsers: HTTP and WebSockets.
HTTP is what gets used to fetch web pages, images, stylesheets and javascript files as well as other resources. In essence, HTTP is a client-server protocol, where the browser is the client and the web server is the server:
My WebRTC course covers this in detail, but suffice to say here that with HTTP, your browser connects to a web server and requests *something* of it. The server then sends a response to that request and that’s the end of it.
The challenge starts when you want to send an unsolicited message from the server to the client. You can’t do it if you don’t send a request from the web browser to the web server, and while you can use different schemes such as XHR and SSE to do that, they end up feeling like hacks or workarounds more than solutions.
Enter WebSockets, which are meant to solve exactly that – the web browser connects to the web server by establishing a WebSocket connection. Over that connection, both the browser and the server can send each other unsolicited messages. Not only that, they can send binary (gasp!) messages – something impossible without yet another hack (known as base64) in HTTP.
Because WebSockets are built for purpose and not the alternative XHR/SSE hacks, they perform better both in terms of speed and the resources they eat up on both browsers and servers.
WebSockets are rather simple to use as a web developer – you’ve got a straightforward WebSocket API for them, which are nicely illustrated by HPBN:
var ws = new WebSocket('wss://example.com/socket');

ws.onerror = function (error) { ... }
ws.onclose = function () { ... }

ws.onopen = function () {
  ws.send("Connection established. Hello server!");
}

ws.onmessage = function(msg) {
  if (msg.data instanceof Blob) {
    processBlob(msg.data);
  } else {
    processText(msg.data);
  }
}

You’ve got calls for send and close and callbacks for onopen, onerror, onclose and onmessage. Of course there’s more to it than that, but this holds the essence of WebSockets.
It leads us to what we usually use WebSockets for, and I’d like to explain it this time not by actual scenarios and use cases but rather by the keywords I’ve seen associated with WebSockets:
Funnily, a lot of this sometimes gets associated with WebRTC as well, which might be the cause of the comparison that is made between the two.
WebRTC, in the context of WebSockets
There are numerous articles here about WebRTC, including a What is WebRTC one.
In the context of WebRTC vs WebSockets, WebRTC enables sending arbitrary data across browsers without the need to relay that data through a server (most of the time). That data can be voice, video or just data.
Here’s where things get interesting –
WebRTC has no signaling channel
When starting a WebRTC session, you need to negotiate the capabilities for the session and the connection itself. That is done out of the scope of WebRTC, by whatever means you deem fit. And in a browser, this can either be HTTP or… WebSocket.
So from this point of view, WebSocket isn’t a replacement for WebRTC but rather complementary – an enabler.
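To make that enabler role tangible, here’s a minimal, hypothetical sketch of signaling over a WebSocket. The server URL and the JSON message format are my own invention – WebRTC doesn’t dictate any of this:

const signaling = new WebSocket('wss://example.com/signaling'); // hypothetical signaling server
const pc = new RTCPeerConnection();

// Trickle our ICE candidates to the other side over the WebSocket
pc.onicecandidate = (event) => {
  if (event.candidate) {
    signaling.send(JSON.stringify({ type: 'candidate', candidate: event.candidate }));
  }
};

// Create an offer and pass it to the other side over the WebSocket
async function call() {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send(JSON.stringify({ type: 'offer', sdp: pc.localDescription }));
}

// Whatever comes back (answer, remote candidates) gets applied to the peer connection
signaling.onmessage = async ({ data }) => {
  const msg = JSON.parse(data);
  if (msg.type === 'answer') {
    await pc.setRemoteDescription(msg.sdp);
  } else if (msg.type === 'candidate') {
    await pc.addIceCandidate(msg.candidate);
  }
};

Once the offer/answer and candidates have been exchanged this way, the media (or data channel) flows between the peers – the WebSocket only carried the negotiation.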
You can send media over a WebSocket
Sort of.
I’ll start with an example. If you want to connect to a cloud based speech to text API and you happen to use IBM Watson, then you can use its WebSocket interface. The first sentence in the first paragraph of the documentation?
The WebSocket interface of the Speech to Text service is the most natural way for a client to interact with the service.
So, you stream the speech (=voice) over a WebSocket to connect it to the cloud API service.
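In broad strokes – and only as an assumption of how such an integration might look, since the endpoint and framing below are made up rather than Watson’s actual API – the client side could capture the microphone and push audio chunks over a WebSocket:

const audioSocket = new WebSocket('wss://example.com/speech-to-text'); // hypothetical endpoint

async function streamMicOverWebSocket() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream, { mimeType: 'audio/webm;codecs=opus' });

  // Ship each recorded audio chunk to the server as a binary WebSocket message
  recorder.ondataavailable = (event) => {
    if (event.data.size > 0 && audioSocket.readyState === WebSocket.OPEN) {
      audioSocket.send(event.data);
    }
  };

  // Transcription results come back on the same socket, in whatever format the service defines
  audioSocket.onmessage = (msg) => console.log('transcript:', msg.data);

  recorder.start(250); // emit a chunk roughly every 250ms
}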
That said, sending media over a WebSocket is highly unlikely to be done for much else.
In most cases, real time media will get sent over WebRTC or other protocols such as RTSP, RTMP, HLS, etc.
WebRTC’s data channel
WebRTC has a data channel. It has many different uses. In some cases, it is used in place of a WebSocket connection:
The illustration above shows how a message would pass from one browser to another over a WebSocket versus doing the same over a WebRTC data channel. Each has its advantages and challenges.
Funnily, the data channel in WebRTC shares a similar set of APIs to the WebSocket ones:
const peerConnection = new RTCPeerConnection();
const dataChannel = peerConnection.createDataChannel("myLabel", dataChannelOptions);

dataChannel.onerror = (error) => { … };
dataChannel.onclose = () => { … };

dataChannel.onopen = () => {
  dataChannel.send("Hello World!");
};

dataChannel.onmessage = (event) => { … };

Again, we’ve got calls for send and close and callbacks for onopen, onerror, onclose and onmessage.
This makes an awful lot of sense, but can be a bit confusing.
There’s this one tiny detail – to get the data channel working, you first need to negotiate the connection. And that you do either with HTTP or with a WebSocket.
When should you use WebRTC instead of a WebSocket?
Almost never. That’s the truth.
If you’re contemplating between the two and you don’t know a lot about WebRTC, then you’re probably in need of WebSockets, or will be better off using WebSockets.
I’d think of data channels either when there are things you want to pass directly across browsers without any server intervention in the message itself (and these use cases are quite scarce), or you are in need of a low latency messaging solution across browsers where a relay via a WebSocket will be too time consuming.
Need to learn WebRTC? Check out my online course – the first module is free.
The post WebRTC vs WebSockets appeared first on BlogGeek.me.
WebRTC simulcast and ABR are all about offering choice to “viewers”.
I’ve been dealing recently with more clients who are looking to create live broadcast experiences. Solutions where one or more users have to broadcast their streams from a single session to a large audience. Large is a somewhat lenient target number, stretching anywhere from 100 to 1,000,000 viewers. And yes, most of these clients want viewers to have near-instantaneous access to the stream(s) – a lag of 1-2 seconds at most, as opposed to the 10 or more seconds of latency you get from HLS.
Simulcast, ABR – need a quick reference to understand their similarities and differences? Download the free cheatsheet:
Compare simulcast to ABR
What I started seeing more and more recently are solutions that make use of ABR. What’s ABR? It is just like simulcast, but… different.
What’s Simulcast?
Simulcast is a mechanism in WebRTC by which a device/client/user sends a video stream that contains multiple bitrates in it. I explained it a bit in my WebRTC Multiparty Architectures post last month.
With simulcast, a WebRTC client will generate these multiple bitrates, where each offers a different video quality – the higher the bitrate the higher the quality.
These video streams are then received by the SFU, and the SFU can pick and choose which stream to send to which participant/viewer. This decision is usually made based on the available bandwidth, but it can (and should) make use of a lot of other factors as well – display size and video layout on the viewer device, CPU utilization of the viewer, etc.
The great thing about simulcast? The SFU doesn’t work too hard. It just selects what to send where.
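For a sense of what this looks like on the client, here’s a minimal sketch of publishing a video track with three simulcast layers – the rid names and bitrate caps are arbitrary example values, not anything a specific SFU mandates:

async function publishWithSimulcast() {
  const pc = new RTCPeerConnection();
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });

  // Ask the browser to send three encodings of the same track;
  // the SFU later picks which layer to forward to each viewer
  pc.addTransceiver(stream.getVideoTracks()[0], {
    direction: 'sendonly',
    streams: [stream],
    sendEncodings: [
      { rid: 'low', scaleResolutionDownBy: 4.0, maxBitrate: 150000 },
      { rid: 'mid', scaleResolutionDownBy: 2.0, maxBitrate: 500000 },
      { rid: 'high', maxBitrate: 1500000 },
    ],
  });

  // The usual offer/answer exchange with the SFU happens over the signaling channel from here
  return pc;
}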
What’s ABR?
ABR stands for Adaptive Bitrate Streaming. Don’t ask me why R and not S in the acronym – probably because they didn’t want to mix this up with car brakes. Anyways, ABR comes from streaming, long before WebRTC was introduced to our lives.
With streaming, you’ve got a user watching a recorded (or “live”) video online. The server then streams that media towards the user. What happens if the available bitrate from the server to the user is low? Buffering.
Streaming technology uses TCP, which in turn uses retransmissions. It isn’t designed for real-time, and well… we want to SEE the content and would rather wait a bit than not see it at all.
Today, with 1080p and 4K resolutions, streaming at high quality requires lots and lots of bandwidth. If the network isn’t capable, would users rather wait and be buffered or would it be better to just lower the quality?
Most prefer lowering the quality.
But how do you do that with “static” content? A pre-recorded video file is what it is.
You use ABR:
With ABR, you segment bandwidth into ranges. Each range will be receiving a different media stream. Each such stream has a different bitrate.
Say you have a media stream of 300kbps – you define the segment bandwidth for it as 300-500kbps. Why? Because from 500kbps there’s another media stream available.
These media streams all contain the same content, just in different bitrates, denoting different quality levels. What you try doing is sending the highest quality range to each viewer without getting into that dreaded buffering state. Since the available bitrate is dynamic in nature (as the illustration above shows), you can end up switching across media streams based on the bitrate available to the viewer at any given point in time. That’s why they call it adaptive.
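As a toy illustration of that selection logic – the renditions and thresholds are made-up numbers, and real players use more elaborate heuristics – the decision can boil down to something like this:

// Hypothetical renditions the server prepared for the same content, highest quality first
const renditions = [
  { name: '1080p', bitrate: 4500 }, // kbps
  { name: '720p', bitrate: 2500 },
  { name: '480p', bitrate: 1000 },
  { name: '360p', bitrate: 500 },
  { name: '240p', bitrate: 300 },
];

// Pick the highest quality rendition that fits the measured bandwidth,
// keeping some headroom so we don't flirt with buffering
function pickRendition(measuredKbps, headroom = 0.8) {
  const budget = measuredKbps * headroom;
  return renditions.find((r) => r.bitrate <= budget) || renditions[renditions.length - 1];
}

pickRendition(700); // ~700kbps measured → falls back to the 360p stream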
And it sounds rather similar to simulcast… just on the server side, as ABR is something a server generates – the original media gets to a server, which creates multiple output streams from it in different bitrates, to use when needed.
The ABR challenge for WebRTC media servers
Recently, I’ve seen more discussions and solutions looking at using ABR and similar techniques with WebRTC. Mainly to scale a session beyond 10k viewers and to support low latency broadcasting in CDNs.
Why these two areas?
But here’s the problem.
We’ve been doing SFUs with WebRTC for most of the time that WebRTC existed. Around 7-8 years. We’re all quite comfortable now with the concept of paying on bandwidth and not eating too much CPU – which is the performance profile of an SFU.
Simulcast fits right into that philosophy – the one creating the alternate streams is the client and not the SFU – it is sending more media towards the SFU who now has more options. The client pays the price of higher bitrates and higher CPU use.
ABR places that burden on the server, which needs to generate the additional alternate streams on its own, and it needs to do so in real time – there’s no offline pre-processing activity for generating these streams from a pre-existing media file as there is with CDNs. This means that SFUs now need to think about CPU loads, muck around with transcoding, experiment with GPU acceleration – the works. Things they haven’t done so far.
Is this in our future? Sure it is. For some, it is already their present.
Simulcast, ABR – need a quick reference to understand their similarities and differences? Download the free cheatsheet:
Compare simulcast to ABR
What’s next?
WebRTC is growing and evolving. The ecosystem around it is becoming much richer as time goes by. Today, you can find different media servers of different types and characteristics, and the solutions available are quite different from one another.
If you are planning on developing your own application using a media server – make sure you pick a media server that fits your use case.
The post WebRTC simulcast and ABR – two sides of the same coin appeared first on BlogGeek.me.
As you may have heard, WhatsApp discovered a security issue in their client which was actively exploited in the wild. The exploit did not require the target to pick up the call, which is really scary.
Since there are not many facts to go on, let’s do some tea reading…
The security advisory issued by Facebook says
A buffer overflow vulnerability in WhatsApp VOIP stack allowed remote code execution via specially crafted series of SRTCP packets sent to a target phone number.
Continue reading The WhatsApp RTCP exploit – what might have happened? at webrtcHacks.
When running WebRTC at scale, you end up hitting issues and frequent regressions. Being able to quickly identify what exactly broke is key to either preventing a regression from landing in Chrome Stable or adapting your own code to avoid the problem. Chrome’s bisect-builds.py tool makes this process much easier than you would suspect. Arne from appear.in gives you an example of how he used this to workaround an issue that came up recently.
{“editor”, “Philipp Hancke“}
In this post I am going to provide a blow-by-blow account of how a change to Chrome triggered a bug in appear.in and how we went about determining exactly what that change was.
Continue reading Bisecting Browser Bugs (Arne Georg Gisnås Gleditsch) at webrtcHacks.
At Google I/O 2019, the advances Google made in AI and machine learning were put to use for improving privacy and accessibility.
I’ve attended Google I/O in person only once. It was in 2014. I’ve been following this event from afar ever since, making it a point to watch the keynote each year, trying to figure out where Google is headed – and how that will affect the industry.
This weekend I spent some time going over the Google I/O 2019 keynote. If you haven’t seen it, you can watch it over on YouTube – I’ve embedded it here as well.
The main theme of Google I/O 2019
Here’s how I ended my review about Google I/O 2018:
Where are we headed?
That’s the big question I guess.
More machine learning and AI. Expect Google I/O 2019 to be on the same theme.
If you don’t have it in your roadmap, time to see how to fit it in.
In many ways, this can easily be the end of this article as well – the tl;dr version.
Google got to the heart of their keynote only in around the 36 minute mark. Sundar Pichai, CEO of Google, talked about the “For Everyone” theme of this event and where Google is headed. For Everyone – not only for the rich (Apple?) or the people in developed countries, but For Everyone.
The first thing he talked about in this For Everyone context? AI:
From there, everything Google does is about how the AI research work and breakthroughs that they are doing at their scale can fit into the direction they want to take.
This year, that direction was defined by the words privacy, security and accessibility.
Privacy because they are being scrutinized over their data collection, which is directly linked to their business model. But more so because of a recent breakthrough that enables them to run accurate speech to text on devices (more on that later).
Security because of the growing number of hacking and malware attacks we hear about all the time. But more so because the work Google has put into Android from all aspects is placing them ahead of the competition (think Apple) based on third party reports (Gartner in this case).
Interestingly, Apple is attacking Google around both privacy and security.
Accessibility because that’s the next billion users. The bigger market. The way to grow by reaching ever larger audiences. But also because it fits well with that breakthrough in speech to text and with machine learning as a whole. And somewhat because of diversity and inclusion which are big words and concepts in tech and silicon valley these days (and you need to appease the crowds and your own employees). And also because it films well and it really does benefit the world and people – though that’s secondary for companies.
The big reveal for me at Google I/O 2019? Definitely its advances in speech analytics by getting speech to text minimized enough to fit into a mobile device. It was the main pillar of this show and for things to come in the future if you ask me.
A lot of the AI innovations Google is talking about are around real time communications. Check out the recent report I’ve written with Chad Hart on the subject:
Event Timeline
I wanted to understand what is important to Google this year, so I took a rough timeline of the event, breaking it down into the minutes spent on each topic. In each and every topic discussed, machine learning and AI were apparent.
Time spent – Topic
10 min – Search; introduction of new feature(s)
8 min – Google Lens; introduction of new feature(s) – related to speech to text
16 min – Google assistant (Duplex on the web, assistant, driving mode)
19 min – For Everyone (AI, bias, privacy+security, accessibility)
14 min – Android Q enhancements and innovations (software)
9 min – Nest (home)
9 min – Pixel (smartphone hardware)
16 min – Google AI

Let’s put this in perspective: out of roughly 100 minutes, 51 were spent directly on AI (assistant, For Everyone and Google AI) and the rest of the time was spent on… AI, though indirectly.
Watching the event, I must say it got me thinking of my time at the university. I had a neighbor at the dorms who was a professional juggler. Maybe not professional, but he did get paid for juggling from time to time. He was able to juggle 5 torches or clubs, 5 apples (while eating one) and anywhere between 7-11 balls (I didn’t keep track).
One evening he comes storming into our room, asking us all to watch a new trick he was working on and just perfected. We all looked. And found it boring. Not because it wasn’t hard or impressive, but because we all knew that this was most definitely within his comfort zone and the things he can do. Funny thing is – he visited us here in Israel a few weeks back. My wife asked him if he juggles anymore. He said a bit, and said his kids aren’t impressed. How could they when it is obvious to them that he can?
Anyways, there’s no wow factor in what Google is doing with machine learning anymore. It is obvious that each year, in every Google I/O event, some new innovation around this topic will be introduced.
This time, it was all about voice and text.
Time to dive into what went on @ Google I/O 2019 keynote.
Speech to text on device
We had a glimpse of this piece of technology late last year when Google introduced call screening to its Pixel 3 devices. This capability allows people to let the Pixel answer calls on their behalf, see what people are saying using live transcription and decide how to act.
This was all done on device. At Google I/O 2019, this technology was just added across the board on Android 10 to anything and everything.
On stage, the explanation given was that the model used for speech to text in the cloud is 2.5Gb in size, and Google was able to squeeze it down to 80Mb, which meant being able to run it on devices. It was not indicated if this is for any language other than English, which probably meant this is an English only capability for now.
What does Google gain from this capability?
For now, Google will be rolling this out to Android devices and not just Google Pixel devices. No mention of if or when this gets to iOS devices.
What have they done with it?
The origins of Google came from Search, and Google decided to start the keynote with search.
Nothing super interesting there in the announcements made, besides the continuous improvements. What was showcased was news and podcasts.
How Google decided to handle fake news and news coverage is now coming to search directly. Podcasts are now made searchable and more accessible directly from search.
Other than that?
A new shiny object – the ability to show 3D models in search results and in augmented reality.
Nice, but not earth shattering. At least not yet.
Google Lens
After Search, Google Lens was showcased.
The main theme around it? The ability to capture text in real time on images and do stuff with it. Usually either text to speech or translation.
In the screenshot above, Google Lens marks the recommended dishes off a menu. While nice, this probably requires each and every such feature to be baked into lens, much like new actions need to be baked into the Google Assistant (or skills in Amazon Alexa).
This falls nicely into the For Everyone / Accessibility theme of the keynote. Aparna Chennapragada, Head of Product for Lens, had the following to say (after an emotional video of a woman who can’t read using the new Lens):
“The power to read is the power to buy a train ticket. To shop in a store. To follow the news. It is the power to get things done. So we want to make this feature to be as accessible to as many people as possible, so it already works in a dozen of languages.”
It actually is. People can’t really be part of our world without the power to read.
It is also the only announcement I remember that the number of languages covered was mentioned (which is why I believe speech to text on device is English only).
Google made the case here and in almost every part of the keynote in favor of using AI for the greater good – for accessibility and inclusion.
Google assistant
Google assistant had its share of the keynote with 4 main announcements:
Duplex on the web is a smarter auto fill feature for web forms.
Next generation Assistant is faster and smarter than its predecessor. There were two main aspects of it that were really interesting to me:
Every year Google seems to be making Assistant more conversational, able to handle more intents and actions – and understand a lot more of the context necessary for complex tasks.
For Everyone
I’ve written about For Everyone earlier in this article.
I want to cover two more aspects of it, federated learning and Project Euphonia.
Federated Learning
Machine learning requires tons of data. The more data the better the resulting model is at predicting new inputs. Google is often criticized for collecting that data, but it needs it not only for monetization but also a lot for improving its AI models.
Enter federated learning, a way to learn a bit at the edge of the network, directly inside the devices, and share what gets learned in a secure fashion with the central model that is being created in the cloud.
This was so important for Google to show and explain that Sundar Pichai himself showed and gave that spiel instead of leaving it to the final part of the keynote where Google AI was discussed almost separately.
At Google, this feels like an initiative that is only getting started, with the first public implementation embedded in Google’s predictive keyboard on Android, where the keyboard learns new words and trends.
Project Euphonia
Project Euphonia was also introduced here. This project is about enhancing speech recognition models to handle hard-to-understand speech.
Here Google stressed the work and effort it is putting on collecting recorded phrases from people with such problems. The main issue here being the creation or improvement of a model more than anything else.
Android Q
Or Android 10 – pick your name for it.
This one was more than anything else a shopping list of features.
Statistics were given at the beginning:
Live captions was again explained and introduced, along with on-device learning capabilities. AI at its best baked into the OS itself.
For some reason, the Android Q segment wasn’t followed with the Pixel one but rather with the Nest one.
Nest (helpful home)
Google rebranded all of its smart home devices under Nest.
While at it, they decided to try and differentiate from the rest of the pack by coining their solution the “helpful home” as opposed to the “smart home”.
As with everything else, AI and the assistant took center stage, as well as a new device, the Nest Hub Max, which is Google’s answer to the Facebook Portal.
The solution for video calling on the Nest Hub Max was built around Google Duo (obviously), with a similar ability to auto zoom that Facebook Portal has, at least on paper – it wasn’t really demoed or showcased on stage.
The reason no demo was really given is that this device will ship “later this summer”, which means it wasn’t really ready for prime time – or Google just didn’t want to spend more precious minutes on it during the keynote.
Interestingly, Google Duo’s recent addition of group video calling wasn’t mentioned throughout the keynote at all.
Pixel (phone)
The Pixel section of the keynote showcased new Pixel phones, the Pixel 3a and 3a XL. These are low cost devices, which try to make do with a lower hardware spec by offering better software and AI capabilities. To drive that point home, Google had this slide to show:
Google is continuing with its investment in computational photography, and if the results are as good as this example, I am sold.
The other nice feature shown was call screening:
The neat thing is that your phone can act as your personal secretary, checking for you who’s calling and why, and also conversing with the caller based on your instructions. This obviously makes use of the same innovations in Android around speech to text and smart reply.
My current phone is Xiaomi Mi A1, an Android One device. My next one may well be the Pixel 3a – at $399, it will probably be the best phone on the market at that price point.
Google AI
The last section of the keynote was given by Jeff Dean, head of Google.ai. He was also the one closing the keynote, instead of handing this back to Sundar Pichai. I found that nuance interesting.
In his part he discussed the advancements in natural language understanding (NLU) at Google, the growth of TensorFlow, where Google is putting its efforts in healthcare (this time it was oncology and lung cancer), as well as the AI for Social Good initiative, where flood forecasting was explained.
That finishing touch of Google AI in the keynote, taking 16 full minutes (about 15% of the time) shows that Google was aiming to impress and to focus on the good they are making in the world, trying to reduce the growing fear factor of their power and data collection capabilities.
It was impressive…
Next year?
More of the same is my guess.
Google will need to find some new innovation to build their event around. Speech to text on device is great, especially with the many use cases it enabled and the privacy angle to it. Not sure how they’d top that next year.
What’s certain is that AI and privacy will still be at the forefront for Google during 2019 and well into 2020.
A lot of the AI innovations Google is talking about is around real time communications. Check out the recent report I’ve written with Chad Hart on the subject:
The post Google I/O 2019 was all about AI, Privacy and Accessibility appeared first on BlogGeek.me.