As you may have heard, WhatsApp discovered a security issue in their client which was actively exploited in the wild. The exploit did not require the target to pick up the call, which is really scary.
Since there are not many facts to go on, let's do some tea-leaf reading…
The security advisory issued by Facebook says:
A buffer overflow vulnerability in WhatsApp VOIP stack allowed remote code execution via specially crafted series of SRTCP packets sent to a target phone number.
Continue reading The WhatsApp RTCP exploit – what might have happened? at webrtcHacks.
When running WebRTC at scale, you end up hitting issues and frequent regressions. Being able to quickly identify what exactly broke is key to either preventing a regression from landing in Chrome Stable or adapting your own code to avoid the problem. Chrome’s bisect-builds.py tool makes this process much easier than you would suspect. Arne from appear.in gives you an example of how he used this to work around an issue that came up recently.
{“editor”, “Philipp Hancke“}
In this post I am going to provide a blow-by-blow account of how a change to Chrome triggered a bug in appear.in and how we went about determining exactly what that change was.
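Conceptually, bisecting builds is just a binary search over Chromium snapshot revisions. Here is a minimal Python sketch of that idea, assuming a hypothetical is_broken() check that stands in for "launch this build and see if the bug reproduces"; the revision numbers are made up, and the real bisect-builds.py also takes care of downloading and launching each build for you:

```python
def bisect(good: int, bad: int, is_broken) -> int:
    """Return the first revision in (good, bad] for which is_broken() is True,
    assuming revision 'good' works and revision 'bad' does not."""
    while bad - good > 1:
        mid = (good + bad) // 2
        if is_broken(mid):
            bad = mid      # bug reproduces: the regression landed at mid or earlier
        else:
            good = mid     # build still works: the regression landed later
    return bad

if __name__ == "__main__":
    # Pretend the breaking change landed at revision 640000 (a made-up number).
    print(bisect(630000, 650000, lambda rev: rev >= 640000))  # -> 640000
```

Each iteration halves the range, so even tens of thousands of revisions take only a handful of manual good/bad checks.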
Continue reading Bisecting Browser Bugs (Arne Georg Gisnås Gleditsch) at webrtcHacks.
At Google I/O 2019, the advances Google made in AI and machine learning were put to use for improving privacy and accessibility.
I’ve attended Google I/O in person only once. It was in 2014. I’ve been following this event from afar ever since, making it a point to watch the keynote each year, trying to figure out where Google is headed – and how that will affect the industry.
This weekend I spent some time going over the Google I/O 2019 keynote. If you haven’t seen it, you can watch it over on YouTube – I’ve embedded it here as well.
The main theme of Google I/O 2019
Here’s how I ended my review of Google I/O 2018:
Where are we headed?
That’s the big question I guess.
More machine learning and AI. Expect Google I/O 2019 to be on the same theme.
If you don’t have it in your roadmap, time to see how to fit it in.
In many ways, this can easily be the end of this article as well – the tl;dr version.
Google got to the heart of its keynote only around the 36-minute mark. Sundar Pichai, CEO of Google, talked about the “For Everyone” theme of this event and where Google is headed. For Everyone – not only for the rich (Apple?) or for people in developed countries, but For Everyone.
The first thing he talked about in this For Everyone context? AI:
From there, everything Google does is about how the AI research and breakthroughs it is making at its scale can fit into the direction it wants to take.
This year, that direction was defined by the words privacy, security and accessibility.
Privacy because they are being scrutinized over their data collection, which is directly linked to their business model. But more so because of a recent breakthrough that enables them to run accurate speech to text on devices (more on that later).
Security because of the growing number of hacking and malware attacks we hear about all the time. But more so because the work Google has put into Android from all aspects is placing them ahead of the competition (think Apple) based on third-party reports (Gartner in this case).
Interestingly, Apple is attacking Google around both privacy and security.
Accessibility because that’s the next billion users. The bigger market. The way to grow by reaching ever larger audiences. But also because it fits well with that breakthrough in speech to text and with machine learning as a whole. And somewhat because of diversity and inclusion, which are big words and concepts in tech and Silicon Valley these days (and you need to appease the crowds and your own employees). And also because it films well and it really does benefit the world and people – though that’s secondary for companies.
The big reveal for me at Google I/O 2019? Definitely its advances in speech analytics by getting speech to text minimized enough to fit into a mobile device. It was the main pillar of this show and for things to come in the future if you ask me.
A lot of the AI innovations Google is talking about are around real-time communications. Check out the recent report I’ve written with Chad Hart on the subject:
Event Timeline
I wanted to understand what is important to Google this year, so I took a rough timeline of the event, breaking it down into the minutes spent on each topic. In each and every topic discussed, machine learning and AI were apparent.
Time spent – Topic
10 min – Search; introduction of new feature(s)
8 min – Google Lens; introduction of new feature(s) – related to speech to text
16 min – Google Assistant (Duplex on the web, Assistant, driving mode)
19 min – For Everyone (AI, bias, privacy+security, accessibility)
14 min – Android Q enhancements and innovations (software)
9 min – Nest (home)
9 min – Pixel (smartphone hardware)
16 min – Google AI
Let’s put this in perspective: out of roughly 100 minutes, 51 were spent directly on AI (Assistant, For Everyone and Google AI) and the rest of the time was about… AI as well, though indirectly.
Watching the event, I must say it got me thinking of my time at the university. I had a neighbor in the dorms who was a professional juggler. Maybe not professional, but he did get paid for juggling from time to time. He was able to juggle 5 torches or clubs, 5 apples (while eating one) and anywhere between 7 and 11 balls (I didn’t keep track).
One evening he came storming into our room, asking us all to watch a new trick he had been working on and just perfected. We all looked. And found it boring. Not because it wasn’t hard or impressive, but because we all knew that this was most definitely within his comfort zone and the things he can do. Funny thing is – he visited us here in Israel a few weeks back. My wife asked him if he juggles anymore. He said a bit, and said his kids aren’t impressed. How could they be, when it is obvious to them that he can?
Anyways, there’s no wow factor in what Google is doing with machine learning anymore. It is obvious that each year, in every Google I/O event, some new innovation around this topic will be introduced.
This time, it was all about voice and text.
Time to dive into what went on @ Google I/O 2019 keynote.
Speech to text on device
We had a glimpse of this piece of technology late last year when Google introduced call screening to its Pixel 3 devices. This capability allows people to let the Pixel answer calls on their behalf, see what people are saying using live transcription and decide how to act.
This was all done on device. At Google I/O 2019, this technology was just added across the board on Android 10 to anything and everything.
On stage, the explanation given was that the model used for speech to text in the cloud is 2.5GB in size, and Google was able to squeeze it down to 80MB, which meant being able to run it on device. It was not indicated whether this works for any language other than English, which probably means this is an English-only capability for now.
What does Google gain from this capability?
For now, Google will be rolling this out to Android devices and not just Google Pixel devices. No mention of whether or when this gets to iOS devices.
What have they done with it?
Google’s origins are in Search, and Google decided to start the keynote with Search.
Nothing super interesting there in the announcements made, besides the continuous improvements. What was showcased were news and podcasts.
How Google decided to handle fake news and news coverage is now coming to Search directly. Podcasts are now searchable and more accessible directly from Search.
Other than that?
A new shiny object – the ability to show 3D models in search results and in augmented reality.
Nice, but not earth shattering. At least not yet.
Google Lens
After Search, Google Lens was showcased.
The main theme around it? The ability to capture text in real time on images and do stuff with it. Usually either text to speech or translation.
In the screenshot above, Google Lens marks the recommended dishes on a menu. While nice, this probably requires each and every such feature to be baked into Lens, much like new actions need to be baked into the Google Assistant (or skills into Amazon Alexa).
This falls nicely into the For Everyone / Accessibility theme of the keynote. Aparna Chennapragada, Head of Product for Lens, had the following to say (after an emotional video of a woman who can’t read, using the new Lens):
“The power to read is the power to buy a train ticket. To shop in a store. To follow the news. It is the power to get things done. So we want to make this feature to be as accessible to as many people as possible, so it already works in a dozen of languages.”
It actually is. People can’t really be part of our world without the power to read.
It is also the only announcement I remember in which the number of languages covered was mentioned (which is why I believe speech to text on device is English-only).
Google made the case here and in almost every part of the keynote in favor of using AI for the greater good – for accessibility and inclusion.
Google Assistant
Google Assistant had its share of the keynote with 4 main announcements:
Duplex on the web is a smarter auto-fill feature for web forms.
Next generation Assistant is faster and smarter than its predecessor. There were two main aspects of it that were really interesting to me:
Every year Google seems to be making Assistant more conversational, able to handle more intents and actions – and understand a lot more of the context necessary for complex tasks.
For Everyone
I’ve written about For Everyone earlier in this article.
I want to cover two more aspects of it: federated learning and Project Euphonia.
Federated Learning
Machine learning requires tons of data. The more data, the better the resulting model is at predicting new inputs. Google is often criticized for collecting that data, but it needs it not only for monetization but also for improving its AI models.
Enter federated learning, a way to learn a bit at the edge of the network, directly inside the devices, and share what gets learned in a secure fashion with the central model that is being created in the cloud.
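To make the concept a bit more concrete, here is a toy federated averaging sketch in Python. It is my own illustration of the general idea, not Google’s implementation (which adds secure aggregation and much more): each “device” computes an update on data that never leaves it, and only the averaged updates reach the central model.

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.1):
    """One device: improve the current global model a bit on local data
    and return only the resulting weight delta, never the raw data."""
    X, y = local_data
    grad = X.T @ (X @ global_weights - y) / len(y)  # gradient of a simple linear model
    return -lr * grad

def federated_round(global_weights, devices):
    """The cloud: average the updates from all devices and apply them."""
    updates = [local_update(global_weights, data) for data in devices]
    return global_weights + np.mean(updates, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Three "devices", each holding its own private dataset.
    devices = []
    for _ in range(3):
        X = rng.normal(size=(50, 2))
        devices.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))
    w = np.zeros(2)
    for _ in range(100):
        w = federated_round(w, devices)
    print(w)  # converges towards [2.0, -1.0] without pooling the raw data
```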
This was so important for Google to show and explain that Sundar Pichai himself gave that spiel, instead of leaving it to the final part of the keynote where Google AI was discussed almost separately.
At Google, this feels like an initiative that is just getting started, with its first public implementation embedded in Google’s predictive keyboard on Android, which uses it to learn new words and trends.
Project Euphonia
Project Euphonia was also introduced here. This project is about adapting speech recognition models to hard-to-understand speech.
Here Google stressed the work and effort it is putting into collecting recorded phrases from people with such speech impairments. The main challenge here is the creation or improvement of a model more than anything else.
Android Q
Or Android 10 – pick your name for it.
This one was more than anything else a shopping list of features.
Statistics were given at the beginning:
Live Captions were again explained and introduced, along with on-device learning capabilities. AI at its best, baked into the OS itself.
For some reason, the Android Q segment wasn’t followed by the Pixel one but rather by the Nest one.
Nest (helpful home)
Google rebranded all of its smart home devices under Nest.
While at it, they decided to try and differentiate from the rest of the pack by coining their solution the “helpful home” as opposed to the “smart home”.
As with everything else, AI and the assistant took center stage, as well as a new device, the Nest Hub Max, which is Google’s answer to the Facebook Portal.
The solution for video calling on the Nest Hub Max was built around Google Duo (obviously), with an auto-zoom ability similar to the one Facebook Portal has – at least on paper, as it wasn’t really demoed or showcased on stage.
The reason no demo was really given is that this device will ship “later this summer”, which means it wasn’t really ready for prime time – or Google just didn’t want to spend more precious minutes on it during the keynote.
Interestingly, Google Duo’s recent addition of group video calling wasn’t mentioned throughout the keynote at all.
Pixel (phone)
The Pixel section of the keynote showcased new Pixel phones, the Pixel 3a and 3a XL. These are low-cost devices that try to make do with a lower hardware spec by offering better software and AI capabilities. To drive that point home, Google had this slide to show:
Google is continuing with its investment in computational photography, and if the results are as good as this example, I am sold.
The other nice feature shown was call screening:
The neat thing is that your phone can act as your personal secretary, checking for you who’s calling and why, and also conversing with the caller based on your instructions. This obviously makes use of the same innovations in Android around speech to text and smart reply.
My current phone is a Xiaomi Mi A1, an Android One device. My next one may well be the Pixel 3a – at $399, it will probably be the best phone on the market at that price point.
Google AI
The last section of the keynote was given by Jeff Dean, head of Google.ai. He was also the one closing the keynote, instead of handing it back to Sundar Pichai. I found that nuance interesting.
In his part he discussed the advancements in natural language understanding (NLU) at Google, the growth of TensorFlow, where Google is putting its efforts in healthcare (this time it was oncology and lung cancer), as well as the AI for Social Good initiative, where flood forecasting was explained.
That finishing touch of Google AI in the keynote, taking 16 full minutes (about 15% of the time), shows that Google was aiming to impress and to focus on the good they are doing in the world, trying to reduce the growing fear factor of their power and data collection capabilities.
It was impressive…
Next year?
More of the same is my guess.
Google will need to find some new innovation to build their event around. Speech to text on device is great, especially with the many use cases it enabled and the privacy angle to it. Not sure how they’d top that next year.
What’s certain is that AI and privacy will still be at the forefront for Google during 2019 and well into 2020.
A lot of the AI innovations Google is talking about are around real-time communications. Check out the recent report I’ve written with Chad Hart on the subject:
The post Google I/O 2019 was all about AI, Privacy and Accessibility appeared first on BlogGeek.me.
ML/AI is coming to communications really fast. It is going to manifest itself as automation in communications but also in other ways.
Me? I wanted to talk about automation and communications. But then Google released CallJoy, which was… automation and communications. And it shows where we’re headed quite clearly with a service that is butt simple, and yet… Google seems to be the first at it, at least when it comes to aiming for simplicity and a powerful MVP. Here’s where I took this article –
Ever since Google launched Duplex at I/O 2018 I’ve been wondering what’s next. Google came out with a new service called CallJoy – a kind of a voice assistant/agent for small businesses. Before I go into the age of automation and communications, let’s try to find out where machine learning and artificial intelligence can be found in CallJoy.
Interested in AI in communications? Tomorrow I’ll be hosting a webinar with Chad Hart on this topic – join us:
CallJoy and AI
What does CallJoy do, exactly?
From the CallJoy website, it looks like the following takes place: you subscribe to the service, pick a local phone number to use and you’re good to go.
When people call your business, they get greeted by a message (“this call is being recorded for whatever purposes” kind of a thing). Next, it can “share” information such as business hours and ask if the caller wants to do stuff over a web link instead of talking to a human. If a web link is what you want (think a “yes please” answer to whatever you hear on the phone when you call), then you’ll get an SMS with a URL. Otherwise, you’ll just get routed to the business’ “real” phone number to be handled by a human. All calls get recorded.
What machine learning aspects does this service use?
#1 – Block unwanted spam calls
Incoming spam calls can really harass small businesses. Being able to get fewer of these is always a blessing. It is also becoming a big issue in the US, one that brings a lot of attention and some attempts at solving it by carriers as well as other vendors.
I am not sure what blocking Google does here and whether it makes direct use of machine learning or not – it certainly can. The fact that all calls get handled by a chatbot means that there’s some kind of a “gating” process that a spam call needs to pass first. This in itself blocks at least some of these spam calls.
#2 – Call deflection, using a voice bot
Call deflection means taking calls and deflecting them – having automation or self service handle the calls instead of getting them to human agents. In the case of CallJoy, a call comes in, a message plays out to the caller (“this call is being recorded”), and the caller is asked if they want to do something over a text message:
If the caller is happy with that, then an SMS gets sent to them and they can continue from there.
There’s a voicebot here that handles the caller’s answer (yes, yep, yes please, sure, …) and makes that decision. Nothing too fancy.
This part was probably implemented by using Google’s Dialogflow.
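As a rough illustration of how little natural language understanding this particular flow needs, here is a toy Python sketch of the deflection decision. It is only my guess at the logic, not CallJoy’s or Dialogflow’s actual implementation, and the phone numbers and the send_sms/forward_call actions are hypothetical placeholders:

```python
YES_WORDS = {"yes", "yeah", "yep", "yup", "sure", "ok", "okay", "please"}

def caller_agreed(utterance: str) -> bool:
    """Tiny 'intent matcher': did the caller agree to get a text link?"""
    words = {w.strip(".,!?") for w in utterance.lower().split()}
    return bool(words & YES_WORDS)

def handle_call(caller_number: str, utterance: str, business_number: str):
    """Deflect to SMS if the caller agreed, otherwise route to a human."""
    if caller_agreed(utterance):
        # In a real service this would be an SMS API call with an ordering link.
        return ("send_sms", caller_number, "https://example.com/order")
    # Otherwise connect the caller to the business's real phone line.
    return ("forward_call", caller_number, business_number)

if __name__ == "__main__":
    print(handle_call("+15550100", "yes please", "+15550199"))
    print(handle_call("+15550100", "no, I want to talk to someone", "+15550199"))
```

A production voicebot would use a proper speech-to-intent model (which is where Dialogflow comes in), but for a single yes/no step the decision itself really is this simple.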
Today, the focus is on restaurants and on order-taking for the call deflection part. It can be used for other scenarios, but that’s the one Google is starting with:
Notice how there’s “LEARN MORE” only on restaurants? All other verticals in the examples on the CallJoy website make use of the rest of CallJoy’s capabilities. Restaurants is the only one where call deflection is highlighted, through an integration with a third party, The Ordering.app, who are, for all intents and purposes, an unknown vendor. Here’s what LinkedIn knows about them:
(one has to wonder how and why this partner was picked – and whose cousin owns this company)
Anyways – call deflection is now done via SMS and an integration with a third party. Future releases will probably have more integrations and third parties to work with – and with that, more use cases covered.
Another future aspect might be deciding where to route a caller – what link to send based on their intent. This is something larger businesses focus on today in their automation initiatives.
#3 – Call transcription
This one seems like table stakes.
Transcription is the source of gaining insights from voice.
CallJoy offers transcription of all calls made.
The purpose? Enable analytics for the small business, which is based on tags and BI (below).
This most certainly makes use of Google’s speech-to-text service.
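If you wanted to build that transcription piece yourself on top of the same Google service, it would look roughly like the sketch below. This uses the google-cloud-speech Python client (2.x style); the storage URI, telephony sample rate and language are my assumptions, not details CallJoy has published:

```python
from google.cloud import speech  # pip install google-cloud-speech

def transcribe_call(gcs_uri: str) -> str:
    """Transcribe a recorded call stored in Google Cloud Storage."""
    client = speech.SpeechClient()
    audio = speech.RecognitionAudio(uri=gcs_uri)
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=8000,          # typical telephony sample rate
        language_code="en-US",
        enable_automatic_punctuation=True,
    )
    # Long-running recognition suits recordings longer than about a minute.
    operation = client.long_running_recognize(config=config, audio=audio)
    response = operation.result(timeout=300)
    return " ".join(r.alternatives[0].transcript for r in response.results)

if __name__ == "__main__":
    print(transcribe_call("gs://my-bucket/calls/example-call.wav"))
```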
#4 – Automated tagging on call transcripts
It seems CallJoy offers tagging of the transcripts or finding specific keywords.
There’s not much explanation or information about tags, but it seems to work by specifying search words and these become tags across the recordings of calls that were made.
Identifying tags might be a manual process or an automated one (it isn’t really indicated anywhere). The intent here is to allow businesses to indicate what they are interested in (order, inventory, reservation, etc.).
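A keyword-based version of that tagging would be almost trivial to build on top of the transcripts, something along the lines of this sketch (the tag vocabulary here is made up for illustration; whether CallJoy does anything smarter is not documented):

```python
# Hypothetical tag vocabulary a business owner might configure.
TAGS = {
    "order": ["order", "takeout", "delivery"],
    "reservation": ["reservation", "book a table", "reserve"],
    "inventory": ["in stock", "available", "carry"],
}

def tag_transcript(transcript: str) -> list:
    """Return every configured tag whose keywords appear in the transcript."""
    text = transcript.lower()
    return [tag for tag, keywords in TAGS.items()
            if any(keyword in text for keyword in keywords)]

if __name__ == "__main__":
    call = "Hi, do you have the blue chairs in stock, and can I place an order?"
    print(tag_transcript(call))  # -> ['order', 'inventory']
```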
#5 – Metrics and dashboards
Then there’s the BI part – business intelligence.
Take the information collected, place it on nice dashboards to show the users.
This gives small businesses insights on who is calling them, when and for what purpose. Sounds trivial and obvious, but how many small businesses have that data today?
No machine learning or AI here – just old school BI. The main difference is that the data collected along with the insights gleaned make use of machine learning.
Sum it up
To sum things up, CallJoy uses transcription and makes basic use of Dialogflow to build a simple voicebot (probably single step – question+answer) and wraps it up in a solution that is pretty darn useful for businesses.
It does that for $39 a month per location. Very little to lose by trying it out…
A different route
Where most AI vendors are targeting large enterprises, Google decided to take the route of the small business. Trying to solve their problems. The challenge here is that there’s not enough data within a single business – and not enough money for running a data science project.
Google figured out how to cater for this audience with the tools it had at hand, without using the industry’s gold standard for call centers or trying a fancy catch-all solution to answer and manage all calls.
The industry’s gold standard? An IVR. Put a person through menu-hell until they reach what they need.
Catch-all solution? Put in an AI that can handle 90%+ of the call scenarios on its own, automatically.
Both an IVR and mapping call scenarios mean customizing the solution, which suggests longer onboarding and a more complicated solution. By taking the route of simplification, Google made it possible to cater for small businesses.
A virtuous cycle
Google gains here twice.
Once by attracting small businesses to its service.
Twice by collecting these calls and the intents and tags businesses add. This ends up producing more insights for Google, which turn into additional features, which later on attract yet more businesses to a better CallJoy.
It is all about automation
Here’s what you’ll find on the FAQ page of CallJoy:
With CallJoy, you’ll be able to:
Most of it talks about improving a service by automating much of what takes place. Which is what the whole notion of AI and machine learning in communications is about. Well… mostly. There are a few other areas, like quality optimization.
The whole AI gold rush we see today in the communications space boils down to the next level of automation we’re getting into with communications. In many cases this is about machines helping humans and not really machines replacing humans – not for many of the use cases and interactions. That will probably come later.
Interested in AI in communications? Tomorrow I’ll be hosting a webinar with Chad Hart on this topic – join us:
The post Google CallJoy & the age of automation in communications appeared first on BlogGeek.me.
The landscape of WebRTC developer tools is ever-changing. Here’s where we are at now.
It was time. Over a year has passed since I last updated my WebRTC PaaS report. The main changes that occurred since December 2017?
While working on the report, there were a few things that I needed to do:
A chapter in the report deals with the WebRTC Developer Tools landscape – the vendors, frameworks, products and services that developers can use when building their WebRTC applications. And that was from June 2017… a long time ago in WebRTC-time.
So I got that updated as well.
You can download the WebRTC Developer Tools landscape infographic.
Helping developers decide
A theme that occurs almost on a daily basis is people asking what to use for their project.
Someone asked about a PHP signaling server in 2017. That question was raised again this month. I got a kind of a similar question over email about Python. Others use one CPaaS vendor and want to switch to another (because they are unhappy about quality, support, pricing, …). Or they want to try and build the infrastructure on their own.
The WebRTC Index is there to cater for that need. Guide people through the process of finding the tools they can use. It is great, but it isn’t detailed enough in some cases – it gives you the list of vendors to research, but you still need to go and research them to check their feature list and capabilities.
That’s why I created my paid report – Choosing a WebRTC API Platform. This report covers the CPaaS vendors who have WebRTC capabilities. And now with the updated edition, it is again up to date with the most current information on all vendors.
Thinking of using a 3rd party?
Trying to determine a different vendor to use?
Want to know how committed a certain vendor is to its platform?
All that can be found in the report, in a way that is easily reachable and digestible.
The report is available at a discounted price until the end of April (only 2 days left).
If you want to learn more about the report, you can:
You can purchase the report online.
Shout out to Agora.io
The reason that 4-pager from Agora.io is openly available is that they sponsored this report.
Agora.io is one of the interesting vendors in this space. They have their own network and coding technologies, and they hook them up to WebRTC. Their solution is also capable of dealing with live broadcasts at scale (think a million viewers of a single video stream).
Check them out, and if you’re in San Francisco – attend their AllThingsRTC event.
The post Latest WebRTC Developer Tools Landscape (and report) appeared first on BlogGeek.me.
Suddenly, there are so many good WebRTC events you can attend.
My kids are still young, and for some reason, still consider me somewhat important in their lives. It is great, but also sad – I found myself this year needing to decline so many good events. Here’s a list of all the places that I am not going to be at, but you should be if you’re interested in WebRTC.
BTW – Some of these events are still in their call for papers stage – why not go as a speaker?
AllThingsRTC
When? 13 June
Where? San Francisco
Call for speakers: https://www.papercall.io/allthingsrtc
AllThingsRTC is hosted by Agora.io. The event they did in China a few years back was great (I haven’t attended but got good feedback about it), and this one is taking the right direction. They have room for more speakers – so be sure to add your name if you wish to present.
Sadly, I won’t be able to join this event as I am just finishing a family holiday in London.
CommCon
URL: https://2019.commcon.xyz/
When? 7-11 July
Where? Buckinghamshire, UK
CommCon was started last year by Dan Jenkins from Nimble Ape.
It takes a view of the communications market as a whole from the point of view of the developers in that market. The event runs in two tracks with a good deal of sessions around WebRTC.
I couldn’t attend last year’s event and can’t attend this year’s event (extended family trip to Eastern Europe). What I’ve heard from last year’s attendees was that the event was really good – and as a testament, the people I know are going to attend this year’s event as well.
ClueCon
When? 5-8 August
Where? Downtown Chicago
Call for speakers: https://www.cluecon.com/speakers/
This is the 15th year that ClueCon will be held. This event is about open source projects in VoIP, with the team behind the event being the FreeSWITCH team.
This one is just after that extended family trip to Eastern Europe, and I’d rather not be on another airplane so soon.
Twilio Signal
URL: https://signal.twilio.com/
When? 6-7 August
Where? San Francisco
Call for speakers: https://eegeventsite.secure.force.com/twiliosignal/twiliosignalcfpreghome
Twilio Signal is a lot of fun. Twilio is the biggest CPaaS vendor out there and their event is quite large. I’ve been to two such events and found them really interesting. They deal a lot with Twilio products and new launches, which tend to define a lot of the industry, but they have technical and business sessions as well.
Can’t make it this year. Falls at roughly the same time as ClueCon which I am skipping as well.
JanusCon
When? 23-25 September
Where? Napoli, Italy
Call for papers: https://www.papercall.io/januscon2019
The Meetecho team behind Janus decided to create a conference around Janus.
Janus is one of the most popular open source WebRTC media servers today, and this is a leap of faith when it comes to creating an event – always a risky business.
I might end up attending it. For Janus (and for the food, obviously). The only challenge is that my daughter is starting a new school that month, so I need to see if and how that will fit.
IIT RTC
URL: https://www.rtc-conference.com/2019/
When? 14-16 October
Where? Chicago
Call for speakers: https://www.rtc-conference.com/2019/submit-presentation-for-conference/
The IIT RTC conference is a mixture of an academic and an industry event around real-time communications. I’ve taken part in it twice without really being there in person, through a video conference session. The event runs multiple tracks with WebRTC in a track of its own. As with many of the other larger industry events, IIT RTC is preceded by a TADHack event and one of its tracks is TAD Summit.
I’ll be skipping this one due to Sukkot holiday here in Israel.
Kranky Geek
URL: https://www.krankygeek.com/
When? 15 November
Where? San Francisco
Call for speakers: just contact me
That’s the event I am hosting with Chris Koehncke and Chad Hart. Our focus is WebRTC and ML/AI in real time communications. We’re still figuring out the sponsors and agenda for this year (just started planning the event).
Obviously, I’ll be attending this event…
Which event should you attend?
This is a question I’ve been asked quite a few times, and somehow, this year, there are just so many events that I want to attend and can’t. If you think of going to an event to learn about WebRTC and communications in general, then any of these will be great.
Go to a few – why settle for one?
Next Month
Next month, I’ll be hosting a webinar along with Chad Hart. We will be reviewing the changing domain of machine learning and artificial intelligence in real time communications. We’ve published a report about it a few months back, and it is time to take another look at the topic. If you’re interested – join us.
The post Upcoming WebRTC events in 2019 appeared first on BlogGeek.me.
Suddenly, there are so many good WebRTC events you can attend.
My kids are still young, and for some reason, still consider me somewhat important in their lives. It is great, but also sad – I found myself this year needing to decline so many good events to attend. Here’s a list of all the places that I am not going to be at, but you should if you’re interested in WebRTC
BTW – Some of these events are still in their call for papers stage – why not go as a speaker?
AllThingsRTCWhen? 13 June
Where? San Francisco
Call for speakers: https://www.papercall.io/allthingsrtc
AllThingsRTC is hosted by Agora.io. The event they did in China a few years back was great (I haven’t attended but got good feedback about it), and this one is taking the right direction. They have room for more speakers – so be sure to add your name if you wish to present.
Sadly, I won’t be able to join this event as I am just finishing a family holiday in London.
URL: https://2019.commcon.xyz/
When? 7-11 July
Where? Buckinghamshire, UK
CommCon started last year by Dan Jenkins from Nimble Ape.
It takes a view of the communications market as a whole from the point of view of the developers in that market. The event runs in two tracks with a good deal of sessions around WebRTC.
I couldn’t attend last year’s even and can’t attend this year’s event (extended family trip to Eastern Europe). What I’ve heard from last year’s attendees was that the event was really good – and as testament, the people I know are going to attend this year’s event as well.
ClueConWhen? 5-8 August
Where? Downtown Chicago
Call for speakers: https://www.cluecon.com/speakers/
This is the 15th year that ClueCon will be held. This event is about open source projects in VoIP, with the team behind the event being the FreeSWITCH team.
This one is just after that extended family trip to Eastern Europe, and I’d rather not be on another airplane so soon.
Twilio SignalURL: https://signal.twilio.com/
When? 6-7 August
Where? San Francisco
Call for speakers: https://eegeventsite.secure.force.com/twiliosignal/twiliosignalcfpreghome
Twilio Signal is a lot of fun. Twilio is the biggest CPaaS vendor out there and their event is quite large. I’ve been to two such events and found them really interesting. They deal a lot about Twilio products and new launches which tend to define a lot of the industry, but they have technical and business sessions as well.
Can’t make it this year. Falls at roughly the same time as ClueCon which I am skipping as well.
JanusConWhen? 23-25 September
Where? Napoli, Italy
Call for papers: https://www.papercall.io/januscon2019
The meetecho team behind Janus decided to create a conference around Janus.
Janus is one of the most popular open source WebRTC media servers today, and this is a leap of faith when it comes to creating an event – always a risky business.
I might end up attending it. For Janus (and for the food obviously). Only challenge is my daughter is starting a new school that month, so need to see if and how will that fit.
IIT RTC
URL: https://www.rtc-conference.com/2019/
When? 14-16 October
Where? Chicago
Call for speakers: https://www.rtc-conference.com/2019/submit-presentation-for-conference/
The IIT RTC is a mixture of academic and industry event around real time communications. I’ve taken part in it twice without really being there in person, through a video conference session. The event runs multiple tracks with WebRTC in a track of its own. As with many of the other larger industry events, IIT RTC is preceded by a TADHack event and one of its tracks is TAD Summit.
I’ll be skipping this one due to Sukkot holiday here in Israel.
Kranky Geek
URL: https://www.krankygeek.com/
When? 15 November
Where? San Francisco
Call for speakers: just contact me
That’s the event I am hosting with Chris Koehncke and Chad Hart. Our focus is WebRTC and ML/AI in real time communications. We’re still figuring out the sponsors and agenda for this year (just started planning the event).
Obviously, I’ll be attending this event…
Which event should you attend?
This is a question I've been asked quite a few times, and somehow, this year, there are just so many of them that I want to attend but can't. If you're thinking of going to an event to learn about WebRTC and communications in general, then any of these will be great.
Go to a few – why settle for one?
Next Month
Next month, I'll be hosting a webinar along with Chad Hart. We will be reviewing the changing domain of machine learning and artificial intelligence in real time communications. We published a report about it a few months back, and it is time to take another look at the topic. If you're interested – join us.
The post Upcoming WebRTC events in 2019 appeared first on BlogGeek.me.
There are multiple ways to implement WebRTC multiparty sessions. These in turn are built around mesh, mixing and routing.
In the past few days I’ve been sick to the bone. Fever, headache, cough – the works. I couldn’t do much which meant no writing an article either. Good thing I had to remove an appendix from my upcoming WebRTC API Platforms report to make room for a new one.
I wanted to touch on the topic of Flow and Embed in Communication APIs, and how they fit into the WebRTC space. This topic will replace an appendix in the report about multiparty architectures in WebRTC, which is what follows here – a copy+paste of that appendix:
Multiparty conferences of either voice or video can be supported in one of three ways: mesh, mixing and routing.
The quality of the solution will rely heavily on the type of architecture used. In routing, we see further refinement of video routing between multi-unicast, simulcast and SVC.
WebRTC API Platform vendors who offer multiparty conferencing will have different implementations of this technology. For those who need multiparty calling, make sure you know which technology is used by the vendor you choose.
Mesh
In a mesh architecture, all users connect directly to all others and send their media to them. While there is no overhead on a media server, this option usually falls short of offering any meaningful media quality and starts breaking down at 4 or more users.
Mesh topology
For the most part, consider vendors offering mesh topology for their video service as limited at best.
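To make the mesh idea concrete, here is a minimal TypeScript sketch (not from the original appendix) of a client opening one RTCPeerConnection per remote participant; the signaling object is a hypothetical helper that relays SDP and ICE candidates between users.

// Hypothetical signaling helper – how messages reach the other users is up to your app.
declare const signaling: { send(to: string, msg: unknown): void };

const peers = new Map<string, RTCPeerConnection>();

async function connectToPeer(remoteId: string, localStream: MediaStream): Promise<void> {
  const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] });
  peers.set(remoteId, pc);

  // In a mesh, the same local tracks are encoded and sent separately to every peer.
  localStream.getTracks().forEach(track => pc.addTrack(track, localStream));

  pc.onicecandidate = event => {
    if (event.candidate) signaling.send(remoteId, { candidate: event.candidate });
  };

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send(remoteId, { sdp: pc.localDescription });
}

// With N participants, each client runs connectToPeer() N-1 times and encodes its media
// N-1 times – which is exactly why mesh starts breaking down at 4 or more users.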
Mixing
MCUs were quite common before WebRTC came into the market. MCU stands for Multipoint Conferencing Unit, and it acts as a mixing point.
MCU mixing topology
An MCU receives the incoming media streams from all users, decodes them all, composes a new layout of everything and sends it out to all users as a single stream.
This has the added benefit of being easy on the user devices, which see the MCU as a single peer they need to operate in front of; but it comes at a high compute cost on the server and inflexibility on the user side.
Routing
SFUs were a novelty before WebRTC, but are now an extremely popular solution. SFU stands for Selective Forwarding Unit, and it acts like a router of media.
SFU routing topology
An SFU receives the incoming media streams from all users, and then decides which streams to send to which users.
This approach leaves flexibility on the user side while reducing the computational cost on the server side, making it the popular and cost-effective choice in WebRTC deployments.
To route media, an SFU can employ one of three distinct approaches: multi-unicast, simulcast and SVC.
Multi-unicast
This is the naïve approach to routing media. Each user sends a single video stream toward the SFU, which then decides to whom to route this stream.
If there is a need to lower bitrates or resolutions, it is done either at the source, by forcing a user to change the stream he sends, or at the receiving end, by having the receiving user throw away data it has already received and processed.
It is also how most implementations of WebRTC SFUs were done until recently. [UPDATE: That was true when this article was originally written in 2017. In 2019, most are actually using simulcast.]
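For illustration only, here is a small TypeScript sketch of the "change the sent stream at the source" option: capping the video sender's bitrate with RTCRtpSender.setParameters(). The capSenderBitrate name and the example bitrate are made up for this sketch.

// Cap the outgoing video bitrate on an existing peer connection.
async function capSenderBitrate(pc: RTCPeerConnection, maxBitrate: number): Promise<void> {
  const sender = pc.getSenders().find(s => s.track?.kind === 'video');
  if (!sender) return;

  const params = sender.getParameters();
  if (!params.encodings || params.encodings.length === 0) return; // nothing negotiated yet

  params.encodings[0].maxBitrate = maxBitrate; // in bits per second
  await sender.setParameters(params);
}

// e.g. capSenderBitrate(pc, 300_000) to ask the encoder for roughly 300kbps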
Simulcast
Simulcast is an approach where the user sends multiple video streams towards the SFU. These streams are compressed versions of the exact same media, but at different quality levels – usually different resolutions and bitrates.
The SFU can then select which of the streams it received to send to each participant, based on their device capability, available network or screen layout.
Simulcast has started to crop up in commercial WebRTC SFUs only recently.
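As a rough illustration (not taken from any specific SFU), this TypeScript sketch shows how a browser client can offer simulcast using addTransceiver() with several encodings; the rid names, scaling factors and bitrates are arbitrary, and the SFU you use dictates what it actually expects.

// Offer three simulcast encodings of the same camera track to an SFU.
async function addSimulcastVideo(pc: RTCPeerConnection, stream: MediaStream): Promise<void> {
  const [videoTrack] = stream.getVideoTracks();

  pc.addTransceiver(videoTrack, {
    direction: 'sendonly',
    streams: [stream],
    sendEncodings: [
      { rid: 'f', maxBitrate: 1_200_000 },                         // full resolution
      { rid: 'h', maxBitrate: 500_000, scaleResolutionDownBy: 2 }, // half resolution
      { rid: 'q', maxBitrate: 150_000, scaleResolutionDownBy: 4 }, // quarter resolution
    ],
  });

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // ...hand the offer to the SFU over your signaling channel
}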
SVC
SVC stands for Scalable Video Coding. It is a technique where a single encoded video stream is created in a layered fashion, with each layer adding to the quality of the previous one.
When an SFU receives a media stream that uses SVC, it can peel layers off that stream to fit the outgoing stream to the quality, device, network and UI expectations of the receiving user. It offers better performance than simulcast in both compute and network resources.
SVC has the added benefit of higher resiliency to network impairments, by allowing error correction to be applied only to the base layer. This works well over mobile networks, even for 1:1 calling.
SVC is very new to WebRTC and is only now being introduced as part of the VP9 video codec.
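Since VP9 is mentioned, here is a hedged sketch of checking whether the browser can send VP9 and preferring it via setCodecPreferences() (a relatively recent API, so check availability). Configuring the SVC layers themselves was not exposed through a standard browser API at the time of writing, so that part is left to the media server and browser internals.

// Prefer VP9 on a video transceiver, if the browser supports sending it.
function preferVP9(transceiver: RTCRtpTransceiver): void {
  const capabilities = RTCRtpSender.getCapabilities('video');
  if (!capabilities) return;

  const isVP9 = (c: RTCRtpCodecCapability) => c.mimeType.toLowerCase() === 'video/vp9';
  const vp9 = capabilities.codecs.filter(isVP9);
  if (vp9.length === 0) return; // VP9 not available – keep the default codec order

  const others = capabilities.codecs.filter(c => !isVP9(c));
  transceiver.setCodecPreferences([...vp9, ...others]);
}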
The post WebRTC Multiparty Architectures appeared first on BlogGeek.me.
A while ago we looked at how Zoom was avoiding WebRTC by using WebAssembly to ship their own audio and video codecs instead of using the ones built into the browser’s WebRTC. I found an interesting branch in Google’s main (and sadly mostly abandoned) WebRTC sample application apprtc this past January. The branch is named wartc… a name which is going to stick as warts!
The repo contains a number of experiments related to compiling the webrtc.org library as WebAssembly and evaluating the performance.
Continue reading Finding the Warts in WebAssembly+WebRTC at webrtcHacks.
WebRTC disconnections are quite common, but you can “fix” many of them just by careful planning and proper development.
Years ago, I developed the H.323 Protocol Stack at RADVISION (later turned Avaya, then Spirent, then Softil). I was there as a developer, R&D manager and then the product manager. My code is probably still in that codebase, lovingly causing products around the globe to crash from time to time – as any other developer, I have my share of bugs left behind.
Anyways, why am I mentioning this?
I had a client asking me recently about disconnections in WebRTC. And it kinda reminded me of a similar issue (or set of issues) we had with the H.323 stack and protocol years back.
If you bear with me a bit – I promise it will be worth your while.
This week I am starting the office hours for my WebRTC course. The next office hour (after the initial "hi everyone") will cover WebRTC disconnections.
Check out the course – and maybe go over the first module for free:
A quick intro to H.323 signaling and transport
H.323 is like SIP, just better and more complex. At least for me, who started his way in VoIP with H.323 (I will always have a soft spot for it). For many years, the way H.323 worked was by opening two separate TCP connections for transporting its signaling: the first for passing what is called the Q.931 protocol and the second for passing the H.245 protocol.
If you would like to compare it to the way WebRTC handles things, then Q.931 is how you set up the connection – have the users find each other. H.245 is similar to what SDP and JSEP are for (I am blatantly ignoring H.225 here, another protocol in H.323 which takes care of registration and authentication).
Once Q.931 and H.245 get connected, you start adding the RTP/RTCP stuff over UDP, which gets you quite a lot of connections.
Add to that complexities like tunneling H.245 over Q.931, using something called faststart instead of H.245 (or before H.245), then sprinkle a dash of “parallel H.245” and then a bit of NAT traversal and/or security and you get a lot of places that require testing and a huge number of edge cases.
Where can H.323 get "stuck" or disconnected?
With so many connections, there are a lot of places where things can go wrong. There are multiple state machines (one for the Q.931 state, one for the H.245 state) and there are different connections that can get severed for one reason or another.
Oh – and in H.323 (at least in the earlier specifications that I had the joy to work with), when the Q.931 or H.245 connection gets severed, the whole session is considered disconnected, so you go and kill the RTP/RTCP sessions.
At the time, we suffered a lot from zombie sessions due to different edge cases. We ended up with solutions that were either based on the H.323 specification itself or best practices we created along the way.
Here are a few of these:
H.323 existed before smartphones. Systems were usually tethered to an ethernet cable, or at most connected over WiFi from a static location. There was no notion of roaming or moving between networks, which meant there was no need to ask yourself whether a connection got severed because of a network switch or because there's a real issue.
Life was simple:
And if you were really insistent then maybe this:
(in real life scenarios, these two simplistic state machines were a lot bigger and more complicated, but their essence was based on these concepts)
Back to WebRTC signaling and transport
WebRTC is simpler and more complicated than H.323 at the same time.
It is simpler, as there is only SRTP. There's no signaling that is standardized or preselected for WebRTC, and for the most part, the one you use will probably require only a single connection (as opposed to the two in H.323). It also has a lot fewer alternatives built into the specification itself than H.323 has.
It is more complicated, as you own the signaling part. You make that selection, so you better make a good one. And while at it, implement it reasonably well and handle all of its edge cases. This is never a simple task even for simple signaling protocols. And it’s now on you.
Then there's the fact that networks today are more complex. Users expect to move around while communicating, and you should expect scenarios where users switch networks mid-session.
If you use WebRTC in a browser, then you get these interesting aspects associated with your implementation:
A lot of dying takes place in the browser, and the server – or the other client – will need to "sniff out" these scenarios, as they might not end in a graceful disconnect, and decide what to do about them.
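As a rough sketch of how a client can surface these not-so-graceful endings, the TypeScript snippet below watches the ICE connection state; the 5-second grace period and the notifySignaling callback are arbitrary choices for this example.

// Watch a peer connection for disconnections and report them to the app/signaling layer.
function watchConnection(pc: RTCPeerConnection, notifySignaling: (state: string) => void): void {
  let disconnectTimer: ReturnType<typeof setTimeout> | undefined;

  pc.oniceconnectionstatechange = () => {
    const state = pc.iceConnectionState;

    if (state === 'disconnected') {
      // 'disconnected' is often transient (a network switch, a brief WiFi drop) – wait a bit
      disconnectTimer = setTimeout(() => notifySignaling('disconnected'), 5000);
    } else if (state === 'failed') {
      notifySignaling('failed'); // unlikely to recover without an ICE restart
    } else if (state === 'connected' || state === 'completed') {
      if (disconnectTimer !== undefined) clearTimeout(disconnectTimer);
    }
  };
}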
Where can WebRTC get "stuck" or disconnected?
We can split WebRTC disconnections into 3 broad categories: failure to connect at all, media disconnections and signaling disconnections.
In each, there will be multiple scenarios, defining the reasons for failure as well as how to handle and overcome such issues.
In broad strokes, here’s what I’d do in each of these 3 categories:
#1 – Failure to connect at all
There's a decent number of failures happening when trying to connect WebRTC sessions. They range from not being able to even send out an SDP, through interoperability issues across browsers and devices, to ICE negotiation failing to connect media.
In many of these cases, better configuration of the service as well as focus on edge cases would improve the situation.
If you experience connection failures for 10% or more of the sessions – you’re doing something wrong. Some can get it as low as 1% or less, but oftentimes that depends on the type of users your service attracts.
This leads to another very important aspect of using WebRTC:
Measure what you can if you want to be able to improve it in the future
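A minimal sketch of what "measure what you can" might look like in the client, assuming a hypothetical report() function that ships samples to your analytics backend; the fields used here come from the standard getStats() dictionaries, though their availability varies a bit between browsers.

// Sample a few statistics from an active peer connection.
async function sampleStats(pc: RTCPeerConnection, report: (sample: object) => void): Promise<void> {
  const stats = await pc.getStats();

  stats.forEach(entry => {
    if (entry.type === 'candidate-pair' && entry.nominated) {
      report({
        type: 'candidate-pair',
        rttSeconds: entry.currentRoundTripTime,
        bytesSent: entry.bytesSent,
      });
    }
    if (entry.type === 'inbound-rtp') {
      report({
        type: 'inbound-rtp',
        kind: entry.kind,
        packetsLost: entry.packetsLost,
        jitter: entry.jitter,
      });
    }
  });
}

// e.g. setInterval(() => sampleStats(pc, console.log), 10_000);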
#2 – Media disconnections
Sometimes, your sessions will simply disconnect.
There are many reasons why that can happen:
Each of these requires different handling – some in code, while others need manual handling (think of customer support working out the configuration with a customer to resolve a firewall issue).
#3 – Signaling disconnections
Unlike H.323, if signaling gets disconnected, WebRTC doesn't even know about it, so it won't immediately cause the session itself to disconnect.
The first thing you'll need to do is decide how you want to proceed in such cases – do you treat this as a session failure/disconnection, or do you let the show go on?
If you treat these as failures, then I suggest killing peer connections based on the status of your websocket connection to the server. If you are on the server side, then once a connection is lost, you should probably go ahead and kill the media paths – either from your media server towards the “dead” session leg or from the other participant on a P2P connection/session.
If you want to make sure the show goes on, you will need to try to reconnect the peer connection towards the same user/session somehow. In that case, additional signaling logic in your connection state machine, along with additional timers to manage it, will be necessary.
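Here is a hedged TypeScript sketch of both options over a WebSocket signaling channel; reopenSignaling() is a hypothetical reconnect helper, and whether an ICE restart is actually needed after reconnecting depends on what broke.

// Hypothetical helper that re-establishes the signaling connection.
declare function reopenSignaling(): Promise<WebSocket>;

function handleSignalingClose(ws: WebSocket, pc: RTCPeerConnection, keepSessionAlive: boolean): void {
  ws.onclose = async () => {
    if (!keepSessionAlive) {
      pc.close(); // option 1: treat the signaling loss as a session failure
      return;
    }

    // Option 2: let the show go on – reopen signaling and renegotiate,
    // using an ICE restart in case the media path died together with the network.
    const newWs = await reopenSignaling();
    const offer = await pc.createOffer({ iceRestart: true });
    await pc.setLocalDescription(offer);
    newWs.send(JSON.stringify({ sdp: pc.localDescription }));
  };
}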
Announcing the WebRTC course snippets module
Here's the thing.
My online WebRTC training has everything in it already. Well… not everything, but it is rather complete. What I've noticed is that I get repeat questions from different students and clients on very specific topics. They are mostly covered within lessons of the course, but they sometimes feel "buried" within the hours and hours of content.
This is why I decided to start creating course snippets. These are “lessons” that are 3-5 minutes long (as opposed to 20-40 minutes long), with a purpose to give an answer to one specific question at a time. Most of the snippets will be actionable and may contain additional materials to assist you in your development. This library of snippets will make up a new course module.
Here are the first 3 snippets that will be added:
While we’re at it, office hours for the course start today. If you want to learn WebRTC, now is the best time to enroll.
The post Handling session disconnections in WebRTC appeared first on BlogGeek.me.