News from Industry

What’s the status of WebRTC in 2019?

bloggeek - Mon, 06/17/2019 - 12:00

In 2019, WebRTC is ready, but there’s still work ahead.

When I wrote that WebRTC is ready over 6 months ago it pissed a few people off.

Here’s the thing – WebRTC is ready simply because the industry deems it ready and companies are deploying products that rely on WebRTC to work for them.

Are there challenges along the way? Sure.

Do things break? Sure.

But if you are weighing whether to start using WebRTC and build an application on top of it, or to wait for the next fad to come by for your video calling service – don’t wait. Use WebRTC, as nothing else will do today.

Trying to understand where WebRTC is available? Download my free cheat sheet

WebRTC device cheat sheet

WebRTC 1.0 – the specification

In 2015 I remember someone telling me that WebRTC 1.0 will be closed and published by year end.

I heard the same in 2016. And later in 2017.

In 2018 I ignored such promises.

2019? There is a small chance that things will be ready. Why? Because the spec is almost completed. That almost is the sticking point.

But then again, who cares?

Everyone is already using WebRTC as if it is a done deal. Because it is.

We’ve agreed on the technology (WebRTC). We’ve agreed on the larger picture and the way things are going to look (peer connection and how browsers implement it today). We’re left with the nitty gritty details of how to make the experience easier and uniform across browsers for developers. We will get there, but just remember – users expect it to work, and it does.

Chrome and WebRTC

Consider Chrome to be the de facto specification for WebRTC. It isn’t WebRTC 1.0 compliant. Yet. According to Statista, 69% of the desktop internet is driven by Chrome. On this website? 74% of the viewers use Chrome.

The thing about Chrome is that it is slowly getting the missing WebRTC 1.0 support, and by moving there it is breaking things with each release. Usually that’s because the way it works today isn’t exactly spec compliant, so things have to break – or because the additions are delicate and the work done breaks behavior that developers relied on in the past. At times, it is because Google has no qualms when it comes to technical debt and code rewrites: when it sees a need to optimize something, it usually does (we’re now on the 3rd generation of echo canceller in WebRTC, each one a complete rewrite of the previous one).

If you are developing anything that needs to run in the browser and use WebRTC, then Chrome is the first thing you should be developing for.
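
Regardless of which browser you target first, a quick way to sanity check what a given browser exposes is to probe for the relevant APIs. A rough sketch (not exhaustive, and no substitute for real interoperability testing):

// Rough sketch: probe for the main WebRTC entry points in the browser.
function webrtcSupport() {
  return {
    peerConnection: typeof RTCPeerConnection !== 'undefined',
    getUserMedia: !!(navigator.mediaDevices && navigator.mediaDevices.getUserMedia),
    dataChannel: typeof RTCPeerConnection !== 'undefined' &&
                 'createDataChannel' in RTCPeerConnection.prototype,
  };
}

console.log(webrtcSupport());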

Firefox and WebRTC

Firefox is close to being spec compliant when it comes to WebRTC.

They had it easy with the recent decision to adopt Unified Plan instead of Plan B in the WebRTC specification. Where Google had to shift from Plan B to Unified Plan, Firefox had only slight modifications to make.

The problem is that Firefox is a distant second to Chrome in market share. At times, developers actively decide not to support Firefox just because they consider it a waste of time. This is doubly true for those who use Chrome for guest access and as a stepping stone to getting their users to download their Electron app instead.

Safari and WebRTC

Safari now supports WebRTC. That includes things like simulcast and both VP8 and H.264. Which is to say that most WebRTC features already work in Safari, but not all of them.

You won’t find VP9, which isn’t mandatory or widely adopted yet, but is more than desirable. And some of the more complicated scenarios, such as multiparty sessions, have more open issues around functionality and interoperability than Chrome or Firefox have.

The challenge is that Safari is important to developers. Both because it is the only way to get on iOS devices and because it is the default browser for Mac, a desktop/laptop that for some reason is becoming a fad with developers (go figure).

Edge and WebRTC

Edge was once its own browser with its own technology stack, but is now becoming just another flavor of Chrome. Microsoft announced that Edge will be using Chromium as its browser engine. This has gotten Edge to work on Mac already with rumors of a possible Linux release.

Edge runs on Chromium.

Chrome runs on Chromium.

Chrome isn’t WebRTC spec compliant because Chromium isn’t WebRTC spec compliant.

So Edge isn’t spec compliant either. But it is well… the same as Chrome.

This all relates to the upcoming official release of Edge.

Microsoft IE and WebRTC

Still dream about Internet Explorer at night?

Stop it.

IE won’t be supporting WebRTC. Not now and not ever.

Use a plugin or just use Electron. Or better yet – update to a more modern browser.

Opera/Brave/whoever and WebRTC

Most of the other browsers out there, be it Opera, Brave or anything else, are just forks of Chromium or skins on top of Chromium.

For all intents and purposes, they are Chrome, offering the same spec compliance to WebRTC as Chrome does. At least if they haven’t gone and intentionally made changes to it (like disabling it in the name of privacy).

Android and WebRTC

Android has support for WebRTC.

The Chrome browser that ships with Android has WebRTC support.

Other browsers shipping on Android have WebRTC support as well (Firefox, for example).

Sometimes, a device manufacturer ends up shipping its own browser (Samsung, for example). In that case, WebRTC compliance and availability are somewhat questionable.

The good thing is that the Webview in Android also supports WebRTC. So built-in application browsers such as the one used by Facebook or Slack also end up supporting WebRTC experiences.

And if you write your own app, you can use the Webview, a precompiled version of WebRTC for Android or compile it on your own.

iOS and WebRTC

On iOS things are slightly trickier.

Safari supports WebRTC on iOS and there are companies making commercial use of it already.

Other browsers don’t and can’t support WebRTC on iOS. That’s because the supplied iOS Webview still doesn’t support WebRTC (or disables it on purpose).

If you write your own app, you can use a precompiled version of WebRTC for iOS or compile it on your own. No Webview for you yet.

Your Next Steps?

Haven’t started with WebRTC yet? Now’s the time. I can help.

Trying to understand where WebRTC is available? Download my free cheat sheet

WebRTC device cheat sheet

The post What’s the status of WebRTC in 2019? appeared first on BlogGeek.me.

WebRTC video recording may be more useful than WebRTC video calling

bloggeek - Mon, 06/03/2019 - 12:00

Video recording using WebRTC can be a lot more lucrative a business than WebRTC video calling.

There’s been an ongoing rumble around WebRTC in a lot of discussions I had about it and sometimes from what you read online – What’s the market size of WebRTC? How do you make money out of it? Who is making money out of it?

Questions that are really hard to answer. Usually because people don’t like to hear the answers to them.

Looking to understand where and how to fit WebRTC into your business? Let’s talk

Contact Tsahi

The Zoom IPO

Is there money in video conferencing or video calling?

The service today is practically free, spread across a multitude of different service types:

Social
  • Apple FaceTime
  • Google Duo & Google Hangouts
  • Facebook Messenger
  • WhatsApp
  • Skype
  • Houseparty

An unending list of social communication services that happen to have video calling in them. I’ve bunched Apple and Google in here simply because they “own” the smartphones we use today.

Business
  • Google Meet
  • Zoom

Here you’ll find services that are free to a certain extent. They are either time limited, feature limited, or just bundled up to bigger offerings.

Zoom was probably the first to go this route with a well-featured product where the biggest limit for a free account was time – 40 minutes per session. Long enough for a lot of uses.

Consumer/Soho

There are many consumer-type services that got built using WebRTC and gained traction. The services started as free offerings, and each grew of its own accord. Jitsi Meet got acquired by Atlassian and then 8×8 acquired it from Atlassian. Appear.in started offering paid Pro accounts and got acquired by Videonor. Talky became a showcase for SimpleWebRTC.

Others started with a free service, ending with a paid service, like Gruveo.

Show me the money

This is where things got complicated.

No one saw a way to make money out of WebRTC. Or video.

At least not until Zoom IPO’d. ~$425 million annual run rate, growing at over 100% a year. Alex Clayton has a nice breakdown of their filing:

The moment this happened, both BlueJeans and LifeSize decided to publish their numbers – BlueJeans reached $100m ARR while Lifesize reached $100m in bookings. Their message? Zoom isn’t alone.

For the record, and to make this clear:

  • Zoom doesn’t use WebRTC
  • BlueJeans and Lifesize use WebRTC though both existed before WebRTC

The question here is the video conferencing service, and how you make money out of it. You can, if you’re big enough, though it will be hard to join the game now and try to outdo Zoom in video conferencing using their own playbook.

The challenge is probably that everyone is looking under the lamppost.

You’ve got hundreds of developers, startups, enterprises and whatnot vying to disrupt the video conferencing market with WebRTC. The challenge is that with so many players coming in with the same technology, only a few will be left standing.

Differentiation is tough in this space. Why would someone pick up your service and not another? How will they find you? Why should they pay?

Which brings me to the reason I started writing this in the first place –

Not video calling – WebRTC video recording

I went to AppSumo this week, deciding to purchase another deal on their site. Every once in a while I find some great deals and new services there to use for my business. The latest featured offer on that site? Dubb (now sold out)

Dubb

This is a service that runs as a Chrome extension enabling its users to record a short video and share it with customers over SMS, email or other networks.

I don’t know if Dubb supports WebRTC or not, but –

  1. It works in the browser with no need to install anything (besides a Chrome extension)
  2. It records video and voice right there inside the browser

In all likelihood, this is using WebRTC’s MediaRecorder to record locally and upload the result to the Dubb cloud service.
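
For reference, here’s a minimal sketch of what such browser-side recording might look like with MediaRecorder. The /upload endpoint is hypothetical, and this is not necessarily how Dubb does it:

// Minimal client-side recording sketch; the /upload endpoint is hypothetical.
async function recordClip(durationMs) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const chunks = [];

  recorder.ondataavailable = (event) => chunks.push(event.data);
  recorder.onstop = async () => {
    // Bundle the recorded chunks and upload them to the backend.
    const blob = new Blob(chunks, { type: 'video/webm' });
    await fetch('/upload', { method: 'POST', body: blob });
    stream.getTracks().forEach((track) => track.stop());
  };

  recorder.start();
  setTimeout(() => recorder.stop(), durationMs);
}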

Dubb is positioned as a sales tool to build rapport – not as a video conferencing or a communication tool. There’s no “real time”, “collaboration” or “conferencing” here.

Seeing it got me thinking of another tool I bumped into recently – Loom

Loom

I started a coaching program a few months back. My WebRTC Course has shown success in the last 3 years of its existence and I wanted to grow it in size – have more people enroll and learn WebRTC in the process. The coaching program is interesting. I am learning a ton in it; some of it has already found its way into the course, and a lot more will be coming in the next course launch in a few months’ time.

Anyways, when I ask questions via email, I usually get back video recordings of my coach reviewing the question and answering it, thinking through the issues I raise. I can see him and his screen, which is great. The link and tool he uses? Loom.

So I checked it out:

Similarly to Dubb, this one is about recording videos from the browser, with no installation needed. In Loom’s case, they are even trying to showcase the various uses of their tool.

WebRTC isn’t only about calling

WebRTC isn’t only about calling.

It has other capabilities. There’s the data channel, there’s the simple access to the camera and mic and there’s the ability to record media on the client side to name a few.

That client side recording enables these services – Dubb and Loom. There’s also Ziggeo and Pipe for those looking for a managed API for it.

I am wondering: when everyone is closely looking at video calling, trying to figure out how to make $$$ out of that space, does the real value of WebRTC lie elsewhere altogether?

Looking to understand where and how to fit WebRTC into your business? Let’s talk

Contact Tsahi

The post WebRTC video recording may be more useful than WebRTC video calling appeared first on BlogGeek.me.

New Kamailio module – app_lua_sr

miconda - Fri, 05/31/2019 - 13:25
A new module named app_lua_sr has been pushed to the git master branch. It collects the functions that correspond to the Lua sr library, previously exported from the app_lua module.

The Lua sr library is the old way of exposing the Kamailio API to Lua scripting. With the introduction of KEMI in Kamailio v5.0, the KSR library has been exported to Lua with a larger set of functions, therefore over time app_lua_sr will be deprecated and removed. Splitting out of app_lua the code that is no longer needed for KEMI is the first step in this process.

If you are using the sr library in your Lua script, consider migrating to the KEMI alternatives offered by the KSR library. For now you can still keep your old Lua script with the sr library, requiring the following updates to kamailio.cfg:

# old config
loadmodule "app_lua.so"
modparam("app_lua", "register", "sl")

# new config
loadmodule "app_lua.so"
loadmodule "app_lua_sr.so"
modparam("app_lua_sr", "register", "sl")

If you find a function available in the Lua sr library but not in the Lua KSR library, contact us via the mailing lists or open an issue on the GitHub tracker.

Thanks for flying Kamailio!

WebRTC vs WebSockets

bloggeek - Tue, 05/28/2019 - 12:00

WebRTC vs WebSockets: They. Are. Not. The. Same.

Sometimes, there are things that seem obvious once you’re “in the know” but just aren’t when you’re new to the topic. It seems that the difference between WebRTC and WebSockets is one such thing. Philipp Hancke pinged me the other day, asking if I have an article about WebRTC vs WebSockets, and I didn’t – it made no sense to me. That is, until I asked Google about it:

It seems like Google believes the most pressing (and popular) search for comparisons of WebRTC is between WebRTC and WebSockets. I should probably also write about those other comparisons, but for now, let’s focus on the first one.

Need to learn WebRTC? Check out my online course – the first module is free.

Learn WebRTC

What are WebSockets?

WebSockets are a bidirectional mechanism for browser communication.

There are two types of transport channels for communication in browsers: HTTP and WebSockets.

HTTP is what gets used to fetch web pages, images, stylesheets and javascript files as well as other resources. In essence, HTTP is a client-server protocol, where the browser is the client and the web server is the server:

My WebRTC course covers this in detail, but suffice to say here that with HTTP, your browser connects to a web server and requests *something* of it. The server then sends a response to that request and that’s the end of it.

The challenge starts when you want to send an unsolicited message from the server to the client. You can’t do it if you don’t send a request from the web browser to the web server, and while you can use different schemes such as XHR and SSE to do that, they end up feeling like hacks or workarounds more than solutions.

Enter WebSockets, which are meant to solve exactly that – the web browser connects to the web server by establishing a WebSocket connection. Over that connection, both the browser and the server can send each other unsolicited messages. Not only that, they can send binary (gasp!) messages – something impossible in HTTP without yet another hack (known as base64).

Because WebSockets are built for purpose and not the alternative XHR/SSE hacks, they perform better both in terms of speed and the resources they eat up on both browsers and servers.

WebSockets are rather simple to use as a web developer – you’ve got a straightforward WebSocket API for them, which is nicely illustrated by HPBN:

var ws = new WebSocket('wss://example.com/socket');

ws.onerror = function (error) { ... }
ws.onclose = function () { ... }

ws.onopen = function () {
  ws.send("Connection established. Hello server!");
}

ws.onmessage = function (msg) {
  if (msg.data instanceof Blob) {
    processBlob(msg.data);
  } else {
    processText(msg.data);
  }
}

You’ve got calls for send and close and callbacks for onopen, onerror, onclose and onmessage. Of course there’s more to it than that, but this holds the essence of WebSockets.

It leads us to what we usually use WebSockets for, and I’d like to explain it this time not by actual scenarios and use cases but rather by the keywords I’ve seen associated with WebSockets:

  • Bi-directional, full-duplex
  • Signaling
  • Real-time data transfer
  • Low latency
  • Interactive
  • High performance
  • Chat, two way conversation

Funnily, a lot of this sometimes gets associated with WebRTC as well, which might be the cause of the comparison that is made between the two.

WebRTC, in the context of WebSockets

There are numerous articles here about WebRTC, including a What is WebRTC one.

In the context of WebRTC vs WebSockets, WebRTC enables sending arbitrary data across browsers without the need to relay that data through a server (most of the time). That data can be voice, video or just data.

Here’s where things get interesting –

WebRTC has no signaling channel

When starting a WebRTC session, you need to negotiate the capabilities for the session and the connection itself. That is done outside the scope of WebRTC, by whatever means you deem fit. And in a browser, this can be either HTTP or… WebSocket.

So from this point of view, WebSocket isn’t a replacement to WebRTC but rather complementary – as an enabler.
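
To make that concrete, here’s a minimal sketch of one peer’s side of WebRTC signaling carried over a WebSocket. The server at wss://example.com/signaling is hypothetical and is assumed to simply relay JSON messages to the other peer:

// Minimal signaling sketch: the relay server is assumed, not part of WebRTC.
const signaling = new WebSocket('wss://example.com/signaling');
const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] });

// Trickle our ICE candidates to the other peer through the signaling channel
pc.onicecandidate = (event) => {
  if (event.candidate) {
    signaling.send(JSON.stringify({ type: 'candidate', candidate: event.candidate }));
  }
};

// Handle offers, answers and candidates relayed from the other peer
signaling.onmessage = async (msg) => {
  const data = JSON.parse(msg.data);
  if (data.type === 'offer') {
    await pc.setRemoteDescription(data);
    await pc.setLocalDescription(await pc.createAnswer());
    signaling.send(JSON.stringify(pc.localDescription));
  } else if (data.type === 'answer') {
    await pc.setRemoteDescription(data);
  } else if (data.type === 'candidate') {
    await pc.addIceCandidate(data.candidate);
  }
};

// The side that initiates the session creates an offer and sends it across
async function call() {
  await pc.setLocalDescription(await pc.createOffer());
  signaling.send(JSON.stringify(pc.localDescription));
}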

You can send media over a WebSocket

Sort of.

I’ll start with an example. If you want to connect to a cloud based speech to text API and you happen to use IBM Watson, then you can use its WebSocket interface. The first sentence in the first paragraph of the documentation?

The WebSocket interface of the Speech to Text service is the most natural way for a client to interact with the service.

So you stream the speech (=voice) over a WebSocket to connect it to the cloud API service.

That said, sending media over a WebSocket is highly unlikely to be used for much else.

In most cases, real time media will get sent over WebRTC or other protocols such as RTSP, RTMP, HLS, etc.

WebRTC’s data channel

WebRTC has a data channel. It has many different uses. In some cases, it is used in place of a WebSocket connection:

The illustration above shows how a message would pass from one browser to another over a WebSocket versus doing the same over a WebRTC data channel. Each has its advantages and challenges.

Funnily, the data channel in WebRTC shares a similar set of APIs to the WebSocket ones:

const peerConnection = new RTCPeerConnection();
const dataChannel = peerConnection.createDataChannel("myLabel", dataChannelOptions);

dataChannel.onerror = (error) => { … };
dataChannel.onclose = () => { … };

dataChannel.onopen = () => {
  dataChannel.send("Hello World!");
};

dataChannel.onmessage = (event) => { … };

Again, we’ve got calls for send and close and callbacks for onopen, onerror, onclose and onmessage.

This makes an awful lot of sense, but can be a bit confusing.

There’s one tiny detail – to get the data channel working, you first need to negotiate the connection. And that you do either over HTTP or over a WebSocket.

When should you use WebRTC instead of a WebSocket?

Almost never. That’s the truth.

If you’re contemplating between the two and you don’t know a lot about WebRTC, then you’re probably in need of WebSockets, or will be better off using WebSockets.

I’d think of data channels either when there are things you want to pass directly across browsers without any server intervention in the message itself (and these use cases are quite scarce), or you are in need of a low latency messaging solution across browsers where a relay via a WebSocket will be too time consuming.

Need to learn WebRTC? Check out my online course – the first module is free.

Learn WebRTC

The post WebRTC vs WebSockets appeared first on BlogGeek.me.

Kamailio v5.2.3 Released

miconda - Wed, 05/22/2019 - 19:30
Kamailio SIP Server v5.2.3 stable is out – a minor release including fixes in code and documentation since v5.2.2. The configuration file and database schema compatibility is preserved, which means you don’t have to change anything to update.

Kamailio® v5.2.3 is based on the latest source code of GIT branch 5.2 and it represents the latest stable version. We recommend those running previous 5.2.x or older versions to upgrade. There is no change that has to be done to configuration file or database structure comparing with the previous releases of the v5.2 branch.

Resources for Kamailio version 5.2.3

Source tarballs are available at:

Detailed changelog:

Download via GIT:

# git clone https://github.com/kamailio/kamailio kamailio
# cd kamailio
# git checkout -b 5.2 origin/5.2

Relevant notes, binaries and packages will be uploaded at:

Modules’ documentation:

What is new in 5.2.x release series is summarized in the announcement of v5.2.0:

Thanks for flying Kamailio!

WebRTC simulcast and ABR – two sides of the same coin

bloggeek - Mon, 05/20/2019 - 12:00

WebRTC simulcast and ABR are all about offering choice to “viewers”.

I’ve been dealing recently with more clients who are looking to create live broadcast experiences. Solutions where one or more users have to broadcast their streams from a single session to a large audience. Large is a somewhat lenient target number, which seems to stretch anywhere from 100 to 1,000,000 viewers. And yes, most of these clients want viewers to have instantaneous access to the stream(s) – a lag of 1-2 seconds at most, as opposed to the 10 or more seconds of latency you get from HLS.

Simulcast, ABR – need a quick reference to understand their similarities and differences? Download the free cheatsheet:

Compare simulcast to ABR

What I started seeing more and more recently are solutions that make use of ABR. What’s ABR? It is just like simulcast, but… different.

What’s Simulcast?

Simulcast is a mechanism in WebRTC by which a device/client/user will be sending a video stream that contains multiple bitrates in it. I explained it a bit in my WebRTC Multiparty Architectures last month.

With simulcast, a WebRTC client will generate these multiple bitrates, where each offers a different video quality – the higher the bitrate, the higher the quality.

These video streams are then received by the SFU, and the SFU can pick and choose which stream to send to which participant/viewer. This decision is usually made based on the available bandwidth, but it can (and should) make use of a lot of other factors as well – display size and video layout on the viewer device, CPU utilization of the viewer, etc.

The great thing about simulcast? The SFU doesn’t work too hard. It just selects what to send where.
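
As a rough illustration, here’s a minimal sketch of how a publishing client might enable simulcast in the browser. The rid names and scaling factors are arbitrary, and the signaling object is a placeholder for whatever you use to reach your SFU:

// Minimal simulcast sketch: three encodings of the same camera track.
async function publishSimulcast(signaling) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const pc = new RTCPeerConnection();

  pc.addTransceiver(stream.getVideoTracks()[0], {
    direction: 'sendonly',
    streams: [stream],
    sendEncodings: [
      { rid: 'h' },                              // full resolution
      { rid: 'm', scaleResolutionDownBy: 2.0 },  // half resolution
      { rid: 'l', scaleResolutionDownBy: 4.0 },  // quarter resolution
    ],
  });

  // The offer goes to the SFU over your signaling channel;
  // the SFU then forwards one of the layers to each viewer.
  await pc.setLocalDescription(await pc.createOffer());
  signaling.send(JSON.stringify(pc.localDescription));
}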

What’s ABR?

ABR stands for Adaptive Bitrate Streaming. Don’t ask me why R and not S in the acronym – probably because they didn’t want to mix this up with car brakes. Anyways, ABR comes from streaming, long before WebRTC was introduced to our lives.

With streaming, you’ve got a user watching a recorded (or “live”) video online. The server then streams that media towards the user. What happens if the available bitrate from the server to the user is low? Buffering.

Streaming technology uses TCP, which in turn uses retransmissions. It isn’t designed for real-time, and well… we want to SEE the content and would rather wait a bit than not see it at all.

Today, with 1080p and 4K resolutions, streaming at high quality requires lots and lots of bandwidth. If the network isn’t capable, would users rather wait and be buffered or would it be better to just lower the quality?

Most prefer lowering the quality.

But how do you do that with “static” content? A pre-recorded video file is what it is.

You use ABR:

With ABR, you segment bandwidth into ranges. Each range will be receiving a different media stream. Each such stream has a different bitrate.

Say you have a media stream of 300kbps – you define the segment bandwidth for it as 300-500kbps. Why? Because from 500kbps there’s another media stream available.

These media streams all contain the same content, just in different bitrates, denoting different quality levels. What you try doing is sending the highest quality range to each viewer without getting into that dreaded buffering state. Since the available bitrate is dynamic in nature (as the illustration above shows), you can end up switching across media streams based on the bitrate available to the viewer at any given point in time. That’s why they call it adaptive.
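
To make the range idea concrete, here’s a minimal sketch of the selection logic a player might run. The renditions, bitrates and thresholds are hypothetical:

// Hypothetical renditions: the bitrate each was encoded at and the minimum
// estimated bandwidth (kbps) we require before serving it.
const renditions = [
  { name: '1080p', bitrateKbps: 4500, minBandwidthKbps: 5500 },
  { name: '720p',  bitrateKbps: 2500, minBandwidthKbps: 3000 },
  { name: '480p',  bitrateKbps: 1000, minBandwidthKbps: 1200 },
  { name: '360p',  bitrateKbps: 500,  minBandwidthKbps: 600 },
  { name: '240p',  bitrateKbps: 300,  minBandwidthKbps: 0 },  // always-available fallback
];

// Pick the highest quality rendition whose bandwidth range the viewer falls into.
function pickRendition(estimatedBandwidthKbps) {
  return renditions.find((r) => estimatedBandwidthKbps >= r.minBandwidthKbps);
}

// Re-evaluate whenever the bandwidth estimate changes and switch streams if needed.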

And it sounds rather similar to simulcast… just on the server side, as ABR is something a server generates – the original media gets to a server, which creates multiple output streams from it in different bitrates, to use when needed.

The ABR challenge for WebRTC media servers

Recently, I’ve seen more discussions and solutions looking at using ABR and similar techniques with WebRTC. Mainly to scale a session beyond 10k viewers and to support low latency broadcasting in CDNs.

Why these two areas?

  1. Because beyond 10k viewers, simulcast isn’t enough anymore. Simulcast today supports up to 3 media streams and the variety you get with 10k viewers is higher than that. There are a few other reasons as well, but that’s for another time
  2. Because CDNs and video streaming have been comfortable with ABR for years now, so them shifting towards WebRTC or low latency means they are looking for much the same technologies and mechanisms they already know

But here’s the problem.

We’ve been doing SFUs with WebRTC for most of the time that WebRTC has existed. Around 7-8 years. We’re all quite comfortable now with the concept of paying for bandwidth and not eating too much CPU – which is the performance profile of an SFU.

Simulcast fits right into that philosophy – the one creating the alternate streams is the client and not the SFU – the client sends more media towards the SFU, which now has more options. The client pays the price of higher bitrates and higher CPU use.

ABR places that burden on the server, which needs to generate the additional alternate streams on its own, and it needs to do so in real time – there’s no offline pre-processing step for generating these streams from a pre-existing media file as there is with CDNs. This means that SFUs now need to think about CPU loads, muck around with transcoding, experiment with GPU acceleration – the works. Things they haven’t done so far.

Is this in our future? Sure it is. For some, it is already their present.

Simulcast, ABR – need a quick reference to understand their similarities and differences? Download the free cheatsheet:

Compare simulcast to ABR

What’s next?

WebRTC is growing and evolving. The ecosystem around it is becoming much richer as time goes by. Today, you can find different media servers of different types and characteristics, and the solutions available are quite different from one another.

If you are planning on developing your own application using a media server – make sure you pick a media server that fits your use case.

The post WebRTC simulcast and ABR – two sides of the same coin appeared first on BlogGeek.me.

Kamailio – Updates To Command Line Arguments

miconda - Fri, 05/17/2019 - 13:23
Recently, a series of commits focused on updating the command line arguments for Kamailio. With the increased popularity of deploying Kamailio in containerised environments, the flexibility of using command line arguments when starting the SIP server can simplify the orchestration.

For a long time, Kamailio accepted only arguments with short names (single character argument names), so one of these new additions was the option to handle arguments with long names, opening the door to add a lot more variants.

The list of command line arguments is printed by running ‘kamailio -h‘. With the version built from the latest Git master branch, these are:

# kamailio -h

version: kamailio 5.3.0-dev5 (x86_64/darwin) 104147-dirty
Usage: kamailio [options]
Options:
-a mode Auto aliases mode: enable with yes or on,
disable with no or off
--alias=val Add an alias, the value has to be '[proto:]hostname[:port]'
(like for 'alias' global parameter)
-A define Add config pre-processor define (e.g., -A WITH_AUTH,
-A 'FLT_ACC=1', -A 'DEFVAL="str-val"')
-b nr Maximum receive buffer size which will not be exceeded by
auto-probing procedure even if OS allows
-c Check configuration file for syntax errors
-d Debugging mode (multiple -d increase the level)
-D Control how daemonize is done:
-D..do not fork (almost) anyway;
-DD..do not daemonize creator;
-DDD..daemonize (default)
-e Log messages printed in terminal colors (requires -E)
-E Log to stderr
-f file Configuration file (default: /tmp/kamailio-dev/etc/kamailio/kamailio.cfg)
-g gid Change gid (group id)
-G file Create a pgid file
-h This help message
--help Long option for `-h`
-I Print more internal compile flags and options
-K Turn on "via:" host checking when forwarding replies
-l address Listen on the specified address/interface (multiple -l
mean listening on more addresses). The address format is
[proto:]addr_lst[:port][/advaddr],
where proto=udp|tcp|tls|sctp,
addr_lst= addr|(addr, addr_lst),
addr=host|ip_address|interface_name and
advaddr=addr[:port] (advertised address).
E.g: -l localhost, -l udp:127.0.0.1:5080, -l eth0:5062,
-l udp:127.0.0.1:5080/1.2.3.4:5060,
-l "sctp:(eth0)", -l "(eth0, eth1, 127.0.0.1):5065".
The default behaviour is to listen on all the interfaces.
-L path Modules search path (default: /tmp/kamailio-dev/lib64/kamailio/modules)
-m nr Size of shared memory allocated in Megabytes
-M nr Size of private memory allocated, in Megabytes
-n processes Number of child processes to fork per interface
(default: 8)
-N Number of tcp child processes (default: equal to `-n')
-O nr Script optimization level (debugging option)
-P file Create a pid file
-Q Number of sctp child processes (default: equal to `-n')
-r Use dns to check if is necessary to add a "received="
field to a via
-R Same as `-r` but use reverse dns;
(to use both use `-rR`)
--server-id=num set the value for server_id
--subst=exp set a subst preprocessor directive
--substdef=exp set a substdef preprocessor directive
--substdefs=exp set a substdefs preprocessor directive
-S disable sctp
-t dir Chroot to "dir"
-T Disable tcp
-u uid Change uid (user id)
-v Version number
--version Long option for `-v`
-V Alternative for `-v`
-x name Specify internal manager for shared memory (shm)
- can be: fm, qm or tlsf
-X name Specify internal manager for private memory (pkg)
- if omitted, the one for shm is used
-Y dir Runtime dir path
-w dir Change the working directory to "dir" (default: "/")
-W type poll method (depending on support in OS, it can be: poll,
epoll_lt, epoll_et, sigio_rt, select, kqueue, /dev/poll)

Among the latest argument additions:
  • add domain aliases with --alias
  • set the advertised address for listen sockets specified with -l socket/advertise
  • set the server id with --server-id
  • set a subst, substdef or substdefs preprocessor expression with --subst, --substdef or --substdefs

A couple more will be added in the future, aiming to make it easier to control Kamailio from the command line. If you have suggestions, do not hesitate to propose them to the sr-users mailing list.

Thanks for flying Kamailio!

The WhatsApp RTCP exploit – what might have happened?

webrtchacks - Fri, 05/17/2019 - 10:15

As you may have heard, Whatsapp discovered a security issue in their client which was actively exploited in the wild. The exploit did not require the target to pick up the call which is really scary.
Since there are not many facts to go on, let’s do some tea-leaf reading…

The security advisory issued by Facebook says

A buffer overflow vulnerability in WhatsApp VOIP stack allowed remote code execution via specially crafted series of SRTCP packets sent to a target phone number.

Continue reading The WhatsApp RTCP exploit – what might have happened? at webrtcHacks.

Bisecting Browser Bugs (Arne Georg Gisnås Gleditsch)

webrtchacks - Tue, 05/14/2019 - 13:55

When running WebRTC at scale, you end up hitting issues and frequent regressions. Being able to quickly identify what exactly broke is key to either preventing a regression from landing in Chrome Stable or adapting your own code to avoid the problem. Chrome’s bisect-builds.py tool makes this process much easier than you would suspect. Arne from appear.in gives you an example of how he used this to workaround an issue that came up recently.
{“editor”, “Philipp Hancke“}

In this post I am going to provide a blow-by-blow account of how a change to Chrome triggered a bug in appear.in and how we went about determining exactly what that change was.

Continue reading Bisecting Browser Bugs (Arne Georg Gisnås Gleditsch) at webrtcHacks.

Kamailio – Winner Of Google Open Source Peer Bonus Award

miconda - Tue, 05/14/2019 - 13:21
Recently Google announced the first group of Open Source Peer Bonus Award winners for 2019 and we are thrilled to see Daniel-Constantin Mierla and Kamailio among them.

The Google Open Source Peer Bonus program is described as:

In the same way that a Google Peer Bonus is used to recognize a fellow Googler who has gone above and beyond, an Open Source Peer Bonus recognizes external people who have made exceptional contributions to open source.

The announcement for the 2019 winners is available at:

Daniel and Kamailio are listed among open source developers and projects that have a relevant impact out there, like Linux Kernel, Kubernetes, Angular, Pip, LLVM/CLang, Apache projects, Git or Gerrit.

We are glad to see Kamailio recognized in this way for its contribution to the open source real time communications ecosystem!

Thanks for flying Kamailio!

Google I/O 2019 was all about AI, Privacy and Accessibility

bloggeek - Mon, 05/13/2019 - 12:00

At Google I/O 2019, the advances Google made in AI and machine learning were put to use for improving privacy and accessibility.

I’ve attended Google I/O in person only once. It was in 2014. I’ve been following this event from afar ever since, making it a point to watch the keynote each year, trying to figure out where Google is headed – and how will that affect the industry.

This weekend I spent some time going over the Google I/O 2019 keynote. If you haven’t seen it, you can watch it over on YouTube – I’ve embedded it here as well.

The main theme of Google I/O 2019

Here’s how I ended my review about Google I/O 2018:

Where are we headed?

That’s the big question I guess.

More machine learning and AI. Expect Google I/O 2019 to be on the same theme.

If you don’t have it in your roadmap, time to see how to fit it in.

In many ways, this can easily be the end of this article as well – the tl;dr version.

Google got to the heart of their keynote only in around the 36 minute mark. Sundar Pichai, CEO of Google, talked about the “For Everyone” theme of this event and where Google is headed. For Everyone – not only for the rich (Apple?) or the people in developed countries, but For Everyone.

The first thing he talked about in this For Everyone context? AI:

From there, everything Google does is about how the AI research work and breakthroughs that they are doing at their scale can fit into the direction they want to take.

This year, that direction was defined by the words privacy, security and accessibility.

Privacy because they are being scrutinized over their data collection, which is directly linked to their business model. But more so because of a recent breakthrough that enables them to run accurate speech to text on devices (more on that later).

Security because of the growing number of hacking and malware attacks we hear about all the time. But more so because the work Google has put into Android from all aspects is placing them ahead of the competition (think Apple), based on third party reports (Gartner in this case).

Interestingly, Apple is attacking Google around both privacy and security.

Accessibility because that’s the next billion users. The bigger market. The way to grow by reaching ever larger audiences. But also because it fits well with that breakthrough in speech to text and with machine learning as a whole. And somewhat because of diversity and inclusion which are big words and concepts in tech and silicon valley these days (and you need to appease the crowds and your own employees). And also because it films well and it really does benefit the world and people – though that’s secondary for companies.

The big reveal for me at Google I/O 2019? Definitely its advances in speech analytics by getting speech to text minimized enough to fit into a mobile device. It was the main pillar of this show and for things to come in the future if you ask me.

A lot of the AI innovations Google is talking about is around real time communications. Check out the recent report I’ve written with Chad Hart on the subject:

AI in RTC report

Event Timeline

I wanted to understand what is important to Google this year, so I took a rough timeline of the event, breaking it down into the minutes spent on each topic. In each and every topic discussed, machine learning and AI were apparent.

Time spent   Topic
10 min       Search; introduction of new feature(s)
8 min        Google Lens; introduction of new feature(s) – related to speech to text
16 min       Google assistant (Duplex on the web, assistant, driving mode)
19 min       For Everyone (AI, bias, privacy+security, accessibility)
14 min       Android Q enhancements and innovations (software)
9 min        Nest (home)
9 min        Pixel (smartphone hardware)
16 min       Google AI

Let’s put this in perspective: out of roughly 100 minutes, 51 were spent directly on AI (assistant, For Everyone and AI) and the rest of the time was spent on… AI as well, though indirectly.

Watching the event, I must say it got me thinking of my time at university. I had a neighbor in the dorms who was a professional juggler. Maybe not professional, but he did get paid for juggling from time to time. He was able to juggle 5 torches or clubs, 5 apples (while eating one) and anywhere between 7 and 11 balls (I didn’t keep track).

One evening he came storming into our room, asking us all to watch a new trick he had been working on and just perfected. We all looked. And found it boring. Not because it wasn’t hard or impressive, but because we all knew it was well within his comfort zone and the things he could do. Funny thing is – he visited us here in Israel a few weeks back. My wife asked him if he juggles anymore. He said a bit, and that his kids aren’t impressed. How could they be, when it is obvious to them that he can?

Anyways, there’s no wow factor in what Google is doing with machine learning anymore. It is obvious that each year, in every Google I/O event, some new innovation around this topic will be introduced.

This time, it was all about voice and text.

Time to dive into what went on @ Google I/O 2019 keynote.

Speech to text on device

We had a glimpse of this piece of technology late last year when Google introduced call screening to its Pixel 3 devices. This capability allows people to let the Pixel answer calls on their behalf, see what people are saying using live transcription and decide how to act.

This was all done on device. At Google I/O 2019, this technology was just added across the board on Android 10 to anything and everything.

On stage, the explanation given was that the model used for speech to text in the cloud is 2.5GB in size, and Google was able to squeeze it down to 80MB, which makes it possible to run it on devices. It was not indicated whether this works for any language other than English, which probably means it is an English-only capability for now.
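
Google didn’t say on stage how the model was shrunk from 2.5GB to 80MB. One common family of techniques for fitting models on device is post-training quantization; the sketch below shows what that looks like with TensorFlow Lite. It is a generic illustration under my own assumptions – not Google’s actual speech model or pipeline, and the “speech_model/” path is made up.

```python
# Generic post-training quantization sketch with TensorFlow Lite.
# Illustrative only - not Google's actual speech model or pipeline;
# "speech_model/" is a hypothetical SavedModel directory.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("speech_model/")

# Apply the converter's default optimizations (weight quantization),
# trading a little accuracy for a much smaller on-device model.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()

with open("speech_model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```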

What does Google gain from this capability?

  1. Faster speech to text. There’s no need to send audio to the cloud and get text back from it
  2. Ability to run it with no network or with poor network conditions
  3. Privacy of what’s being said

For now, Google will be rolling this out to Android devices and not just Google Pixel devices. No mention of if or when this gets to iOS devices.

What have they done with it?

  • Made the Google assistant more responsive (due to faster speech to text)
  • Created system-wide automatic captioning for everything that runs on Android. Anywhere, on any app

Search

The origins of Google came from Search, and Google decided to start the keynote with search.

Nothing super interesting there in the announcements made, besides the continuous improvements. What was showcased was news and podcasts.

Google’s approach to handling fake news and news coverage is now coming to search directly. Podcasts are now searchable and more accessible directly from search.

Other than that?

A new shiny object – the ability to show 3D models in search results and in augmented reality.

Nice, but not earth shattering. At least not yet.

Google Lens

After Search, Google Lens was showcased.

The main theme around it? The ability to capture text in real time on images and do stuff with it. Usually either text to speech or translation.

In the screenshot above, Google Lens marks the recommended dishes off a menu. While nice, this probably requires each and every such feature to be baked into Lens, much like new actions need to be baked into the Google Assistant (or skills into Amazon Alexa).

This falls nicely into the For Everyone / Accessibility theme of the keynote. Aparna Chennapragada, Head of Product for Lens, had the following to say (after an emotional video of a woman who can’t read using the new Lens):

“The power to read is the power to buy a train ticket. To shop in a store. To follow the news. It is the power to get things done. So we want to make this feature to be as accessible to as many people as possible, so it already works in a dozen of languages.”

It actually is. People can’t really be part of our world without the power to read.

It is also the only announcement I remember in which the number of languages covered was mentioned (which is why I believe speech to text on device is English-only).

Google made the case here and in almost every part of the keynote in favor of using AI for the greater good – for accessibility and inclusion.

Google assistant

Google assistant had its share of the keynote with 4 main announcements:

Duplex on the web is a smarter auto fill feature for web forms.

Next generation Assistant is faster and smarter than its predecessor. There were two main aspects of it that were really interesting to me:

  1. It is “10 times faster”, most probably due to speech to text on the phone which doesn’t necessitate the cloud for many tasks
  2. It works across tabs and apps. A demo was shown where a woman instructed the Assistant to search for a photo, picked one out and then asked the phone to send it in an ongoing chat conversation just by saying “send it to Justin”

Every year Google seems to be making Assistant more conversational, able to handle more intents and actions – and understand a lot more of the context necessary for complex tasks.

For Everyone

I’ve written about For Everyone earlier in this article.

I want to cover two more aspects of it: federated learning and Project Euphonia.

Federated Learning

Machine learning requires tons of data. The more data, the better the resulting model is at predicting new inputs. Google is often criticized for collecting that data, but it needs it not only for monetization but also for improving its AI models.

Enter federated learning, a way to learn a bit at the edge of the network, directly inside the devices, and share what gets learned in a secure fashion with the central model that is being created in the cloud.
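
To make the idea a bit more concrete, here is a toy federated-averaging sketch: each device computes an update on its own private data, only the model deltas leave the device, and the server averages them into the shared model. This is my own simplification of the concept, not Google’s implementation.

```python
# Toy federated averaging: devices "train" locally on private data and
# only their model deltas are sent back; the server averages the deltas
# into the shared model. A conceptual sketch, not Google's implementation.
import numpy as np

def local_update(global_weights, private_data, lr=0.1):
    # Stand-in for a real local training step on one device.
    gradient = np.mean(private_data, axis=0) - global_weights
    return lr * gradient  # the only thing that leaves the device

def federated_round(global_weights, devices):
    updates = [local_update(global_weights, data) for data in devices]
    return global_weights + np.mean(updates, axis=0)

global_weights = np.zeros(3)
devices = [np.random.rand(20, 3) for _ in range(5)]  # 5 devices, each with private data
for _ in range(10):
    global_weights = federated_round(global_weights, devices)
print(global_weights)
```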

This was so important for Google to show and explain that Sundar Pichai himself gave that spiel, instead of leaving it to the final part of the keynote, where Google AI was discussed almost separately.

At Google, this feels like an initiative that is only getting started, with its first public implementation embedded in Google’s predictive keyboard on Android, where it is used to learn new words and trends.

Project Euphonia

Project Euphonia was also introduced here. This project is about adapting speech recognition models to hard-to-understand speech.

Here Google stressed the work and effort it is putting into collecting recorded phrases from people with such speech impairments. The main challenge here is the creation or improvement of a model more than anything else.

Android Q

Or Android 10 – pick your name for it.

This one was more than anything else a shopping list of features.

Statistics were given at the beginning:

  • 2.5 billion active devices
  • Over 180 device makers

Live Caption was again explained and introduced, along with on-device learning capabilities. AI at its best, baked into the OS itself.

For some reason, the Android Q segment wasn’t followed by the Pixel one but rather by the Nest one.

Nest (helpful home)

Google rebranded all of its smart home devices under Nest.

While at it, they decided to try and differentiate from the rest of the pack by coining their solution the “helpful home” as opposed to the “smart home”.

As with everything else, AI and the assistant took center stage, as well as a new device, the Nest Hub Max, which is Google’s answer to the Facebook Portal.

The solution for video calling on the Nest Hub Max was built around Google Duo (obviously), with an auto-zoom ability similar to what Facebook Portal has – at least on paper, as it wasn’t really demoed or showcased on stage.

The reason no demo was really given is that this device will ship “later this summer”, which means it wasn’t really ready for prime time – or Google just didn’t want to spend more precious minutes on it during the keynote.

Interestingly, Google Duo’s recent addition of group video calling wasn’t mentioned throughout the keynote at all.

Pixel (phone)

The Pixel section of the keynote showcased new Pixel phones, the Pixel 3a and 3a XL. These are low cost devices that try to make do with a lower hardware spec by offering better software and AI capabilities. To drive that point home, Google had this slide to show:

Google is continuing with its investment in computational photography, and if the results are as good as this example, I am sold.

The other nice feature shown was call screening:

The neat thing is that your phone can act as your personal secretary, checking for you who’s calling and why, and also conversing with the caller based on your instructions. This obviously makes use of the same innovations in Android around speech to text and smart reply.

My current phone is a Xiaomi Mi A1, an Android One device. My next one may well be the Pixel 3a – at $399, it will probably be the best phone on the market at that price point.

Google AI

The last section of the keynote was given by Jeff Dean, head of Google.ai. He was also the one closing the keynote, instead of handing this back to Sundar Pichai. I found that nuance interesting.

In his part he discussed the advancements in natural language understanding (NLU) at Google, the growth of TensorFlow, where Google is putting its efforts in healthcare (this time it was oncology and lung cancer), as well as the AI for Social Good initiative, where flood forecasting was explained.

That finishing touch of Google AI in the keynote, taking 16 full minutes (about 15% of the time), shows that Google was aiming to impress and to focus on the good they are doing in the world, trying to reduce the growing fear factor of their power and data collection capabilities.

It was impressive…

Next year?

More of the same is my guess.

Google will need to find some new innovation to build their event around. Speech to text on device is great, especially with the many use cases it enabled and the privacy angle to it. Not sure how they’d top that next year.

What’s certain is that AI and privacy will still be at the forefront for Google during 2019 and well into 2020.

A lot of the AI innovations Google is talking about are around real time communications. Check out the recent report I’ve written with Chad Hart on the subject:

AI in RTC report

The post Google I/O 2019 was all about AI, Privacy and Accessibility appeared first on BlogGeek.me.

Google CallJoy & the age of automation in communications

bloggeek - Mon, 05/06/2019 - 12:00

ML/AI is coming to communications really fast. It is going to manifest as automation in communications, but also in other ways.

Me? I wanted to talk about automation and communications. But then Google released CallJoy, which was… automation and communications. And it shows where we’re headed quite clearly with a service that is butt simple, and yet… Google seems to be the first at it, at least when it comes to aiming for simplicity and a powerful MVP. Here’s where I took this article –

Ever since Google launched Duplex at I/O 2018 I’ve been wondering what’s next. Google came out with a new service called CallJoy – a kind of a voice assistant/agent for small businesses. Before I go into the age of automation and communications, let’s try to find out where machine learning and artificial intelligence can be found in CallJoy.

Interested in AI in communications? Tomorrow I’ll be hosting a webinar with Chad Hart on this topic – join us:

Register to the webinar

CallJoy and AI

What does CallJoy do exactly?

From the CallJoy website, it looks like the following takes place: you subscribe to the service, pick a local phone number to use and you’re good to go.

When people call your business, they get greeted by a message (“this call is being recorded for whatever purposes” kind of a thing). Next, it can “share” information such as business hours and ask if the caller wants to do stuff over a web link instead of talking to a human. If a web link is what you want (think a “yes please” answer to whatever you hear on the phone when you call), then you’ll get an SMS with a URL. Otherwise, you’ll just get routed to the business’ “real” phone number to be handled by a human. All calls get recorded.
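
Put as code, that flow is roughly the decision logic below. To be clear, this is my own reconstruction of the behavior described on the CallJoy website, not Google’s implementation – the URL, phone number and greeting text are made up.

```python
# Rough sketch of the CallJoy decision flow as described above.
# My own reconstruction - not Google's code; the URL, phone number
# and greeting text are made up for illustration.
ORDER_URL = "https://example.com/order"
BUSINESS_NUMBER = "+1-555-0100"

GREETING = ("This call is being recorded. We are open 9am to 5pm. "
            "Would you like a link to order online instead?")

def decide(caller_reply: str) -> dict:
    """Return the next action for the call, given the caller's spoken reply."""
    affirmative = {"yes", "yes please", "yap", "sure", "ok"}
    if caller_reply.strip().lower() in affirmative:
        return {"action": "send_sms", "body": f"Order here: {ORDER_URL}"}
    return {"action": "forward_call", "to": BUSINESS_NUMBER}

if __name__ == "__main__":
    print(GREETING)
    print(decide("Yes please"))  # -> SMS with the ordering link
    print(decide("No thanks"))   # -> forward to the business' real number
```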

What machine learning aspects does this service use?

#1 – Block unwanted spam calls

Incoming spam calls can really harass small businesses. Being able to get fewer of these is always a blessing. It is also becoming a big issue in the US, one that brings a lot of attention and some attempts at solving it by carriers as well as other vendors.

I am not sure what blocking Google does here and if it makes direct use of machine learning or not – it certainly can. The fact that all calls get handled by a chatbot means that there’s some kind of a “gating” process that a spam call needs to pass first. This in itself blocks at least some of these spam calls.

#2 – Call deflection, using a voice bot

Call deflection means taking calls and deflecting them – having automation or self service handle the calls instead of getting them to human agents. In the case of CallJoy, a call comes in, a message plays out to the caller (“this call is being recorded”), and the caller is asked if he wants to do something over a text message:

If the user is happy with that, then an SMS gets sent to the caller and he can continue from there.

There’s a voicebot here that handles the user’s answer (yes, yap, yes please, sure, …) and makes that decision. Nothing too fancy.

This part was probably implemented by using Google’s Dialogflow.
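
If it is indeed Dialogflow under the hood, detecting that yes/no answer could look something like the sketch below, assuming the Dialogflow v2 Python client. The project ID and session ID are placeholders, and this is my illustration rather than CallJoy’s actual code.

```python
# Detecting the caller's reply ("yes please" vs. anything else) with
# Dialogflow's v2 Python client. Illustrative only - the project ID,
# session ID and agent are placeholders, not CallJoy's real setup.
import dialogflow_v2 as dialogflow

def detect_intent(project_id, session_id, text, language_code="en-US"):
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    text_input = dialogflow.types.TextInput(text=text, language_code=language_code)
    query_input = dialogflow.types.QueryInput(text=text_input)

    response = session_client.detect_intent(session=session, query_input=query_input)
    result = response.query_result
    return result.intent.display_name, result.intent_detection_confidence

intent, confidence = detect_intent("my-gcp-project", "caller-1234", "yes please")
print(intent, confidence)
```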

Today, the focus is on restaurants and on order-taking for the call deflection part. It can be used for other scenarios, but that’s the one Google is starting with:

Notice how there’s “LEARN MORE” only on restaurants? All other verticals in the examples on the CallJoy website make use of the rest of CallJoy’s capabilities. Restaurants is the only vertical where call deflection is highlighted, through an integration with a third party, The Ordering.app, who are, for all intents and purposes, an unknown vendor. Here’s what LinkedIn knows about them:

(one has to wonder how and why this partner was picked – and whose cousin owns this company)

Anyways – call deflection today is done via SMS and an integration with a third party. Future releases will probably have more integrations and third parties to work with – and with that, more use cases covered.

Another aspect in the future might be deciding where to route a caller – what link to send him based on his intent. This is something larger businesses are already focusing on today in their automation initiatives.

#3 – Call transcription

This one seems like table stakes.

Transcription is the source of gaining insights from voice.

CallJoy offers transcription of all calls made.

The purpose? Enable analytics for the small business, which is based on tags and BI (below).

This most certainly makes use of Google’s speech to text service.
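
For reference, transcribing a recorded call with the Cloud Speech-to-Text Python client looks roughly like the sketch below – the storage bucket is a placeholder, and the exact client surface varies a bit between library versions, so treat this as illustrative rather than CallJoy’s actual pipeline.

```python
# Transcribing a recorded call with Google Cloud Speech-to-Text.
# Illustrative sketch - the bucket URI is a placeholder and the exact
# client surface differs slightly between library versions.
from google.cloud import speech_v1 as speech

client = speech.SpeechClient()

config = {
    "language_code": "en-US",
    "sample_rate_hertz": 8000,  # typical telephony audio
    "encoding": speech.enums.RecognitionConfig.AudioEncoding.LINEAR16,
}
audio = {"uri": "gs://my-bucket/recordings/call-1234.wav"}

response = client.recognize(config, audio)
transcript = " ".join(result.alternatives[0].transcript for result in response.results)
print(transcript)
```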

#4 – Automated tagging on call transcripts

It seems CallJoy offers tagging of the transcripts or finding specific keywords.

There’s not much explanation or information about tags, but it seems to work by specifying search words and these become tags across the recordings of calls that were made.

Identifying tags might be a manual process or an automated one (it isn’t really indicated anywhere). The intent here is to allow businesses to indicate what they are interested in (order, inventory, reservation, etc.).
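
Whether or not Google does anything fancier, keyword-based tagging over call transcripts can be as simple as the following sketch – my own illustration with made-up tags, not CallJoy’s implementation.

```python
# Simple keyword-based tagging of call transcripts.
# My own illustration of the idea, with made-up tags - not CallJoy's code.
TAGS = {
    "order": ["order", "delivery", "take out"],
    "reservation": ["reservation", "book a table", "reserve"],
    "inventory": ["in stock", "available", "do you have"],
}

def tag_transcript(transcript: str) -> set:
    text = transcript.lower()
    return {tag for tag, keywords in TAGS.items()
            if any(keyword in text for keyword in keywords)}

print(tag_transcript("Hi, do you have the blue one in stock? I'd like to order two."))
# -> {'inventory', 'order'}
```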

#5 – Metrics and dashboards

Then there’s the BI part – business intelligence.

Take the information collected, place it on nice dashboards to show the users.

This gives small businesses insights on who is calling them, when and for what purpose. Sounds trivial and obvious, but how many small businesses have that data today?

No machine learning or AI here – just old school BI. The main difference is that the data collected along with the insights gleaned make use of machine learning.

Sum it up

To sum things up, CallJoy uses transcription and makes basic use of Dialogflow to build a simple voicebot (probably single step – question+answer) and wraps it up in a solution that is pretty darn useful for businesses.

It does that for $39 a month per location. Very little to lose by trying it out…

A different route

Where most AI vendors are targeting large enterprises, Google decided to take the route of the small business. Trying to solve their problems. The challenge here is that there’s not enough data within a single business – and not enough money for running a data science project.

Google figured out how to cater for this audience with the tools they had at hand, without using the industry’s gold standard for call centers or trying a fancy catch-all solution to answer and manage all calls.

The industry’s gold standard? An IVR. Send a person into menu-hell until he reaches what he needs.

Catch-all solution? Put an AI that can handle 90%+ of the call scenarios on its own, automatically.

Both an IVR and mapping call scenarios mean customizing the solution, which suggests longer onboarding with a more complicated solution. By taking the route of simplification, Google made it possible to cater for small businesses.

A virtuous cycle

Google gains here twice.

Once by attracting small businesses to its service.

Twice by collecting these calls and the intents and tags businesses define. This ends up giving Google more insights, which turn into additional features, which later on attract yet more businesses to a better CallJoy service.

It is all about automation

Here’s what you’ll find on the FAQ page of CallJoy:

With CallJoy, you’ll be able to:

  • Gain powerful insights with audio recordings and searchable text transcripts of all connected incoming calls.
  • Make better business decisions with metrics such as peak call times, new vs. returning callers, and conversation topics.
  • Easily direct callers via text message to place an order or schedule an appointment online, increasing sales while freeing up your staff.

Most of it talks about improving a service by automating much of what takes place. Which is what the whole notion of AI and machine learning is with communications. Well… mostly. There are a few other areas like quality optimization.

The whole AI gold rush we see today in the communications space boils down to the next level of automation we’re getting into with communications. In many cases this is about machines helping humans and not really machines replacing humans – not for many of the use cases and interactions. That will probably come later.

Interested in AI in communications? Tomorrow I’ll be hosting a webinar with Chad Hart on this topic – join us:

Register to the webinar

The post Google CallJoy & the age of automation in communications appeared first on BlogGeek.me.

Google CallJoy & the age of automation in communications

bloggeek - Mon, 05/06/2019 - 12:00

ML/AI is coming to communications really fast. It is going to manifest is as automation in communications but also in other ways.

Me? I wanted to talk about automation and communications. But then Google released CallJoy, which was… automation and communications. And it shows where we’re headed quite clearly with a service that is butt simple, and yet… Google seems to be the first at it, at least when it comes to aiming for simplicity and a powerful MVP. Here’s where I took this article –

Ever since Google launched Duplex at I/O 2018 I’ve been wondering what’s next. Google came out with a new service called CallJoy – a kind of a voice assistant/agent for small businesses. Before I go into the age of automation and communications, let’s try to find out where machine learning and artificial intelligence can be found in CallJoy.

Interested in AI in communications? Tomorrow I’ll be hosting a webinar with Chad Hart on this topic – join us:

Register to the webinar

CallJoy and AI

What CallJoy does exactly?

From the CallJoy website, it looks that the following takes place: you subscribe for the service, pick a local phone number to use and you’re good to go.

When people call your business, they get greeted by a message (“this call is being recorded for whatever purposes” kind of a thing). Next, it can “share” information such as business hours and ask if the caller wants to do stuff over a web link instead of talking to a human. If a web link is what you want (think a “yes please” answer to whatever you hear on the phone when you call), then you’ll get an SMS with a URL. Otherwise, you’ll just get routed to the business’ “real” phone number to be handled by a human. All calls get recorded.

What machine learning aspects does this service use?

#1 – Block unwanted spam calls

Incoming spam calls can really harass small businesses. Being able to get less of these is always a blessing. It is also becoming a big issue in the US, one that brings a lot of attention and some attempts at solving it by carriers as well as other vendors.

I am not sure what blocking does Google do here and if it makes direct use of machine learning or not – it certainly can. The fact that all calls get handled by a chatbot means that there’s some kind of a “gating” process that a spam call needs to pass first. This in itself blocks at least some of them spam calls.

#2 – Call deflection, using a voice bot

Call deflection means taking calls and deflecting them – having automation or self service handle the calls instead of getting them to human agents. In the case of CallJoy, a call comes in. message plays out to the user (“this call is being recorded”). User is asked if he wants to do something over a text message:

If the user is happy with that, then an SMS gets sent to the caller and he can continue from there.

There’s a voicebot here that handles the user’s answer (yes, yap, yes please, sure, …) and makes that decision. Nothing too fancy.

This part was probably implemented by using Google’s Dialogflow.

Today, the focus is on restaurants and in order-taking for the call deflection part. It can be used for other scenarios, but that’s the one Google is starting with:

Notice how there’s “LEARN MORE” only on restaurants? All other verticals in the examples on the CallJoy websites make use of the rest of CallJoy’s capabilities. Restaurants is the only one where call deflection is highlighted through an integration with a third party The Ordering.app, who are, for all intent and purpose an unknown vendor. Here’s what LinkedIn knows about them:

(one has to wonder how and why this partner was picked – and who’s cousin owns this company)

Anyways – call deflection now is done via SMS, and integration with a third party. Future releases will probably have more integrations and third parties to work with – and with that more use cases covered.

Another aspect in the future might be making a decision of where to route a user to – what link to send him based on his intent. This is something that happens in terms of a focus in larger businesses today in their automation initiatives.

#3 – Call transcription

This one seems like table stakes.

Transcription is the source of gaining insights from voice.

CallJoy offers transcription of all calls made.

The purpose? Enable analytics for the small business, which is based on tags and BI (below).

This most certainly makes use of Google’s speech to text service

#4- Automated tagging on call transcripts

It seems CallJoy offers tagging of the transcripts or finding specific keywords.

There’s not much explanation or information about tags, but it seems to work by specifying search words and these become tags across the recordings of calls that were made.

Identifying tags might be a manual process or an automated one (it isn’t really indicated anywhere). The intent here is to allow businesses to indicate what they are interested in (order, inventory, reservation, etc.).

#5- Metrics and dashboards

Then there’s the BI part – business intelligence.

Take the information collected, place it on nice dashboards to show the users.

This gives small businesses insights on who is calling them, when and for what purpose. Sounds trivial and obvious, but how many small businesses have that data today?

No machine learning or AI here – just old school BI. The main difference is that the data collected along with the insights gleaned make use of machine learning.

Sum it up

To sum things up, CallJoy uses transcription and makes basic use of Dialogflow to build a simple voicebot (probably single step – question+answer) and wraps it up in a solution that is pretty darn useful for businesses.

It does that for $39 a month per location. Very little to lose by trying it out…

A different route

Where most AI vendors are targeting large enterprises, Google decided to take the route of the small business. Trying to solve their problems. The challenge here is that there’s not enough data within a single business – and not enough money for running a data science project.

Google figured out how to cater for this audience with the tools they had at hand, without using the industry’s gold standard for call centers or try a fancy catch-all solution to answer and manage all calls.

The industry’s gold standard? An IVR. Get a person to menu-hell until he reaches what he needs.

Catch-all solution? Put an AI that can handle 90%+ if the call scenarios on its own automatically.

Both an IVR and mapping call scenarios means customizing the solution, which suggests longer onboarding with a more complicated solution. By taking the route of simplification Google made it possible to cater for small businesses.

A virtuous cycle

Google gains here twice.

Once by attracting small businesses to its service.

Twice by collecting these calls and the intents and tags businesses put. This ends up gaining more insights for Google, turning them into additional features, which later on attracts yet more businesses to a better CallJoy business.

It is all about automation

Here’s what you’ll find on the FAQ page of CallJoy:

With CallJoy, you’ll be able to:

  • Gain powerful insights with audio recordings and searchable text transcripts of all connected incoming calls.
  • Make better business decisions with metrics such as peak call times, new vs. returning callers, and conversation topics.
  • Easily direct callers via text message text to place an order or schedule an appointment online, increasing sales while freeing up your staff.

Most of it talks about improving a service by automating much of what takes place. Which is what the whole notion of AI and machine learning is with communications. Well… mostly. There are a few other areas like quality optimization.

The whole AI gold rush we see today in the communications space boils down to the next level of automation we’re getting into with communications. In many cases this is about machine helping humans and not really machine replacing humans – not for many of the use cases and interactions. That will probably come later  

Interested in AI in communications? Tomorrow I’ll be hosting a webinar with Chad Hart on this topic – join us:

Register to the webinar

The post Google CallJoy & the age of automation in communications appeared first on BlogGeek.me.

Latest WebRTC Developer Tools Landscape (and report)

bloggeek - Mon, 04/29/2019 - 12:00

The landscape of WebRTC developer tools is ever-changing. Here’s where we are at now.

It was time. Over a year has passed since I last updated my WebRTC PaaS report. The main changes that occurred since December 2017?

While working on the report, there were a few things that I needed to do:

  1. Update all 21 vendors with relevant information. Some progressed more than others. Some haven’t made any significant changes.
  2. Refresh all references, links and information in the report, to fit the status of WebRTC in 2019
  3. Publicize the appendix on group calling architectures, to give room for a new appendix on Flow and Embedded – two trends that are taking shape

WebRTC Developer Tools landscape

A chapter in the report deals with the WebRTC Developer Tools landscape – the vendors, frameworks, products and services that developers can use when building their WebRTC applications. And that was from June 2017… a long time ago in WebRTC-time.

So I got that updated as well.

You can download the WebRTC Developer Tools landscape infographic.

Helping developers decide

A theme that comes up almost daily is people asking what to use for their project.

Someone asked about a PHP signaling server in 2017. That question was raised again this month. I got a similar kind of question over email about Python. Others use one CPaaS vendor and want to switch to another (because they are unhappy about quality, support, pricing, …). Or they want to try and build the infrastructure on their own.
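
For what it’s worth, the signaling piece itself can be tiny. Here’s a minimal WebSocket relay in Python using the websockets package – a bare-bones sketch with no rooms, authentication or error handling, and not a recommendation of any particular stack.

```python
# Minimal WebRTC signaling relay: every message a client sends (SDP offers,
# answers, ICE candidates) is forwarded to all other connected clients.
# A bare-bones sketch using the "websockets" package - no rooms, no auth.
import asyncio
import websockets

clients = set()

async def relay(websocket, path):
    clients.add(websocket)
    try:
        async for message in websocket:
            others = clients - {websocket}
            if others:
                await asyncio.gather(*(client.send(message) for client in others))
    finally:
        clients.remove(websocket)

start_server = websockets.serve(relay, "0.0.0.0", 8765)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
```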

The WebRTC Index is there to cater for that need. Guide people through the process of finding the tools they can use. It is great, but it isn’t detailed enough in some cases – it gives you the list of vendors to research, but you still need to go and research them to check their feature list and capabilities.

That’s why I created my paid report – Choosing a WebRTC API Platform. This report covers the CPaaS vendors who have WebRTC capabilities. And now, with the updated edition, it is again up to date with the most current information on all vendors.

Thinking of using a 3rd party?

Trying to determine a different vendor to use?

Want to know how committed a certain vendor is to its platform?

All that can be found in the report, in a way that is easily reachable and digestible.

The report is available at a discounted price until the end of April (only 2 days left).

If you want to learn more about the report, you can:

  1. Download the table of contents and introduction
  2. Check out Agora.io’s 4-pager from the report (each vendor profiled in the report has such a 4-pager)
  3. Contact me to ask questions

You can purchase the report online.

Shout out to Agora.io

The reason that 4-pager from Agora.io is openly available is that they sponsored this report.

Agora.io is one of the interesting vendors in this space. They have their own network and coding technologies, and they hook it up to WebRTC. Their solution is also capable of dealing with live broadcasts at scale (think a million viewers on a single video stream).

Check them out, and if you’re in San Francisco – attend their AllThingsRTC event.

The post Latest WebRTC Developer Tools Landscape (and report) appeared first on BlogGeek.me.

Upcoming WebRTC events in 2019

bloggeek - Mon, 04/22/2019 - 12:00

Suddenly, there are so many good WebRTC events you can attend.

My kids are still young, and for some reason, still consider me somewhat important in their lives. It is great, but also sad – I found myself this year needing to decline so many good events. Here’s a list of all the places that I am not going to be at, but where you should be if you’re interested in WebRTC.

BTW – Some of these events are still in their call for papers stage – why not go as a speaker?

AllThingsRTC

URL: http://allthingsrtc.org/

When? 13 June

Where? San Francisco

Call for speakers: https://www.papercall.io/allthingsrtc

AllThingsRTC is hosted by Agora.io. The event they did in China a few years back was great (I didn’t attend but got good feedback about it), and this one is taking the right direction. They have room for more speakers – so be sure to add your name if you wish to present.

Sadly, I won’t be able to join this event as I am just finishing a family holiday in London.

CommCon 2019

URL: https://2019.commcon.xyz/

When? 7-11 July

Where? Buckinghamshire, UK

CommCon started last year by Dan Jenkins from Nimble Ape.

It takes a view of the communications market as a whole from the point of view of the developers in that market. The event runs in two tracks with a good deal of sessions around WebRTC.

I couldn’t attend last year’s event and can’t attend this year’s either (extended family trip to Eastern Europe). What I’ve heard from last year’s attendees was that the event was really good – and as a testament, the people I know are going to attend this year as well.

ClueCon

URL: https://www.cluecon.com/

When? 5-8 August

Where? Downtown Chicago

Call for speakers: https://www.cluecon.com/speakers/

This is the 15th year that ClueCon will be held. This event is about open source projects in VoIP, with the team behind the event being the FreeSWITCH team.

This one is just after that extended family trip to Eastern Europe, and I’d rather not be on another airplane so soon.

Twilio Signal

URL: https://signal.twilio.com/

When? 6-7 August

Where? San Francisco

Call for speakers: https://eegeventsite.secure.force.com/twiliosignal/twiliosignalcfpreghome

Twilio Signal is a lot of fun. Twilio is the biggest CPaaS vendor out there and their event is quite large. I’ve been to two such events and found them really interesting. They deal a lot with Twilio products and new launches, which tend to define a lot of the industry, but they have technical and business sessions as well.

Can’t make it this year. Falls at roughly the same time as ClueCon which I am skipping as well.

JanusCon

URL: https://www.januscon.it/

When? 23-25 September

Where? Napoli, Italy

Call for papers: https://www.papercall.io/januscon2019

The meetecho team behind Janus decided to create a conference around Janus.

Janus is one of the most popular open source WebRTC media servers today, and this is a leap of faith when it comes to creating an event – always a risky business.

I might end up attending it. For Janus (and for the food, obviously). The only challenge is that my daughter is starting a new school that month, so I need to see if and how that will fit.

IIT RTC

URL: https://www.rtc-conference.com/2019/

When? 14-16 October

Where? Chicago

Call for speakers: https://www.rtc-conference.com/2019/submit-presentation-for-conference/

The IIT RTC conference is a mixture of an academic and industry event around real time communications. I’ve taken part in it twice without really being there in person, through a video conference session. The event runs multiple tracks with WebRTC in a track of its own. As with many of the other larger industry events, IIT RTC is preceded by a TADHack event and one of its tracks is TAD Summit.

I’ll be skipping this one due to Sukkot holiday here in Israel.

Kranky Geek

URL: https://www.krankygeek.com/

When? 15 November

Where? San Francisco

Call for speakers: just contact me

That’s the event I am hosting with Chris Koehncke and Chad Hart. Our focus is WebRTC and ML/AI in real time communications. We’re still figuring out the sponsors and agenda for this year (just started planning the event).

Obviously, I’ll be attending this event…

Which event should you attend?

This is a question I’ve been asked quite a few times, and somehow, this year, there are just so many events that I want to attend and can’t. If you are thinking of going to an event to learn about WebRTC and communications in general, then any of these will be great.

Go to a few – why settle for one?

Next Month

Next month, I’ll be hosting a webinar along with Chad Hart. We will be reviewing the changing domain of machine learning and artificial intelligence in real time communications. We published a report about it a few months back, and it is time to take another look at the topic. If you’re interested – join us.

The post Upcoming WebRTC events in 2019 appeared first on BlogGeek.me.
