bloggeek

The leading authority on WebRTC

Should you use Kurento or Jitsi for your multiparty WebRTC video conference product?

Mon, 09/05/2016 - 12:00

Kurento or Jitsi; Kurento vs Jitsi – is this the ultimate head-to-head comparison of open source media servers for WebRTC?

Yes and no. And if you want an easy answer of “Kurento is the way to go” or “Jitsi will solve all of your headaches” then you’ve come to the wrong place. As with everything else here, the answer depends a lot on what it is you are trying to achieve.

Since this is something that gets raised quite often these days by the people I chat with, I decided to share my views here. The best way I know to do that is to start by explaining how I compartmentalize these two projects in my mind:

Jitsi Videobridge

The Jitsi Videobridge is an SFU. It is an open source one, which is currently owned and maintained by Atlassian.

The acquisition of the Jitsi Videobridge serves Atlassian in two ways:

  1. Integrating Jitsi Videobridge into HipChat while owning the technology (it took the better part of the last 18 months)
  2. Showing some open source love – they did change the license of Jitsi from LGPL to APL

Here’s the intro of Jitsi from its github page:

Jitsi Videobridge is an XMPP server component that allows for multiuser video communication. Unlike the expensive dedicated hardware videobridges, Jitsi Videobridge does not mix the video channels into a composite video stream, but only relays the received video channels to all call participants. Therefore, while it does need to run on a server with good network bandwidth, CPU horsepower is not that critical for performance.

I emphasized the important parts for you. Here’s what they mean:

  • XMPP server component – a decision was made as to the signaling of Jitsi. It was made years ago, when the idea was to “compete” head-to-head with Google Hangouts, so the choice was made to use XMPP signaling. This means that if you need/want/desire anything else, you are in for a world of pain – doable, but not fun
  • does not mix the video channels – it doesn’t look into the media at all and cannot process raw video in any way
  • only relays the received video – it is an SFU

Put simply – Jitsi is an SFU with XMPP signaling.
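
To make the “relay, don’t mix” point a bit more tangible, here’s a rough conceptual sketch of what an SFU does per incoming packet. This is not Jitsi’s code – the Participant abstraction and its send() method are assumptions for illustration only:

```typescript
// Conceptual SFU relay loop -- illustrative only, not Jitsi's implementation.
// The Participant interface and its send() method are assumed abstractions.
interface Participant {
  id: string;
  send(packet: Uint8Array): void; // hands the packet to this participant's transport
}

class RelayOnlySfu {
  private participants = new Map<string, Participant>();

  add(p: Participant): void {
    this.participants.set(p.id, p);
  }

  // Called for every media packet received from `senderId`. The packet is
  // forwarded untouched -- no decoding, no compositing -- which is why
  // bandwidth matters more than CPU horsepower for an SFU.
  onPacket(senderId: string, packet: Uint8Array): void {
    for (const [id, p] of this.participants) {
      if (id !== senderId) {
        p.send(packet);
      }
    }
  }
}
```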

If this is what you’re looking for then this baby is for you. If you don’t want/need an SFU, or you use a different signaling protocol, you’d better start elsewhere.

You can find outsourcing vendors who are happy to use Jitsi and have it customized or integrated to your use case.

Kurento

Kurento is a kind of a media server framework. This too is open source, but it is maintained by Kurento Technologies.

With Kurento you can essentially build whatever you want when it comes to backend media processing: SFU, MCU, recording, transcoding, gateway, etc.

This is an advantage and a disadvantage.

An advantage because it means you can practically use it for any type of use case you have.

A disadvantage because there’s more work to be done with it than with something that is single purpose and focused.
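
To give a feel for what “build whatever you want” looks like in practice, here’s a hedged sketch that wires a WebRTC endpoint to a recorder inside a Kurento media pipeline. It assumes the kurento-client Node.js package with its promise-returning calls, a media server at the placeholder WebSocket URI, and it leaves signaling and ICE candidate exchange out:

```typescript
// Sketch only: record the media arriving on a WebRTC endpoint with Kurento.
// Assumes a Kurento Media Server at the WebSocket URI below and the
// promise-returning flavor of the kurento-client Node.js package.
const kurento = require('kurento-client');

async function recordIncomingStream(sdpOffer: string): Promise<string> {
  const client = await kurento('ws://localhost:8888/kurento'); // placeholder URI
  const pipeline = await client.create('MediaPipeline');

  const webRtcEndpoint = await pipeline.create('WebRtcEndpoint');
  const recorder = await pipeline.create('RecorderEndpoint', {
    uri: 'file:///tmp/recording.webm', // placeholder path on the media server
  });

  // Whatever media flows into the WebRTC endpoint is fed to the recorder.
  await webRtcEndpoint.connect(recorder);
  await recorder.record();

  // Answer the browser's SDP offer; ICE candidate handling is omitted here.
  const sdpAnswer = await webRtcEndpoint.processOffer(sdpOffer);
  await webRtcEndpoint.gatherCandidates();
  return sdpAnswer;
}
```

Swap the recorder for other endpoints and you get the transcoding, gateway or MCU-like behaviors mentioned above – which is exactly where the extra work goes.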

Kurento has its own set of vendors who are happy to support, customize and integrate it for you, one of which is the team that actually authors and maintains the Kurento code base.

Which one’s for you? Kurento or Jitsi?

Both frameworks are very popular, with each having at the very least tens of independent installations and integrations built on top of them and running in production services.

Kurento or Jitsi? Kurento or Jitsi? Not always an easy choice, but here’s where I draw the line:

If what you need is a pure SFU with XMPP on top, then go with Jitsi. Or find some other “out of the box” SFU that you like.

If what you need is more complex, or necessitates more integration points, then you are probably better off using Kurento.

What about Janus?

Janus is… somewhat tougher to explain.

Their website states that it is a “general purpose WebRTC Gateway”. So in my mind it will mostly fit into the role of a WebRTC-SIP gateway.

That said, I’ve seen more than a single vendor using it in totally other ways – anything from an SFU to an IOT gateway.

Before I can suggest it as a solid alternative, I need to see more evidence of production services using it for multiparty, as opposed to using it as a gateway component.

Oh – and there are other frameworks out there as well – open source or commercial.

Where can I learn more?

Multiparty and server components are a small part of what is needed when going about building a WebRTC infrastructure for a communication service.

In the past few months, I’ve noticed growing challenges and misunderstandings around how WebRTC works and what it really is. People tend to focus on the obvious side of the browser APIs that WebRTC has, and forget to think about the backend infrastructure for it – something that is just as important, if not more.

It is why I’ve decided to launch an online WebRTC Architecture course that tackles these types of questions.

Course starts October 24, priced at $247 USD per student. If you enroll before October 10, there’s a $50 discount – so why wait?

The post Should you use Kurento or Jitsi for your multiparty WebRTC video conference product? appeared first on BlogGeek.me.

Will there ever be a decentralized web?

Mon, 08/29/2016 - 12:00

No. Yes. Don’t know.

I’ve recently read an article at iSchool@Syracuse in which, for lack of a better term on my part, pundits opine about the decentralized web.

It is an interesting read. Going through the opinions there, you can divide the crowd into 3 factions:

  1. We want privacy. Also we hate governments and monopolies. This is the largest group
  2. There’s this great tech we can put in place to make the internet more robust
  3. We actually don’t know

I am… somewhat split across all of these three groups.

#1 – Privacy, Gatekeepers and Monopolies

Like any other person, I want privacy. On the other hand, I want security, which in many cases (and especially today) comes at the price of privacy. I also want convenience, and at the age of artificial intelligence and chat bots – this can easily mean less privacy.

As for governments and monopolies – I don’t think these will change due to a new protocol or a decentralized web. The web started as something decentralized and utopian to some extent. It degraded to what it is today because governments caught on and because companies grew inside the internet to become monopolies. Can we redesign it all in a way that will not allow for governments to rule over the data going into them or for monopolies to not exist? I doubt it.

I am taking part now in a few projects where location matters. Where you position your servers, how you architect your network, and even how you communicate your intent with governments – all these can make or break your service. I just can’t envision how protocols can change that on a global scale – or how the powers that be that need to promote and push these things will actively do so.

I think it is a good thing to strive for, but something that is going to be very challenging to achieve:

  • Most powerful services today rely on big data = no real privacy (at least not in front of the service you end up using). This will always cause tension between our design for privacy versus our desire for personalization and automation
  • Most governments can enforce rules in the long run in ways that catch up with protocols – or simply abuse weaknesses in products
  • Popular services bubble to the top, in the long run making them into monopolies and gatekeepers by choice – no one forces us to use Google for search, and yet most of us view search on the web and Google as synonymous

#2 – Tech

Yes. Our web is client-server for the most part, with browsers getting their data fix from backend servers.

We now have technologies that can work differently (WebRTC’s data channel is one of them, and there are others still).
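
As a quick illustration of why the data channel gets mentioned in this context, here’s a minimal sketch of two browsers exchanging data directly, with a server needed only for the initial signaling. The sendSignal/onSignal helpers and the STUN server URL are placeholders, not part of any real API:

```typescript
// Minimal sketch: once signaling has connected two browsers, data flows
// peer-to-peer over an RTCDataChannel with no server in the media path.
// sendSignal/onSignal are assumed helpers -- any signaling channel works.
declare function sendSignal(msg: unknown): void;
declare function onSignal(handler: (msg: any) => void): void;

const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.example.org' }] });
const channel = pc.createDataChannel('p2p');

channel.onopen = () => channel.send('hello, no web server in this path');
channel.onmessage = (e) => console.log('peer says:', e.data);

pc.onicecandidate = (e) => { if (e.candidate) sendSignal({ candidate: e.candidate }); };

async function startOffer(): Promise<void> {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendSignal({ sdp: pc.localDescription });
}

onSignal(async (msg) => {
  if (msg.sdp) await pc.setRemoteDescription(msg.sdp);
  if (msg.candidate) await pc.addIceCandidate(msg.candidate);
});
```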

We can and should work on making our infrastructure more robust – more impregnable to malicious attackers and less prone to errors. We should make it scale better. And yes, decentralization is usually a good design pattern to achieve these goals.

But if at the end of the day, the decentralized web is only about maintaining the same user experience, then this is just a slow evolution of what we’re already doing.

Tech is great. I love tech. Most people don’t really care.

#3 – We just don’t know

As with many other definitions out there, there’s no clear definition of what the decentralized web is or should be. Just a set of opinions by different pundits – most with an agenda for putting out that specific definition.

I really don’t know what that is or what it should be. I just know that our web today is centralized in many ways, but in other ways it is already rather decentralized. The fact that I have this website hosted somewhere (I am clueless as to where), while I write these words from my home in Israel, and that it is served either directly or from a CDN to different locations around the globe – all through a set of intermediaries, some of which I specifically selected (and pay for or use for free) – to me, that’s rather decentralized.

At the end of the day, the work being done by researchers for finding ways to utilize our existing protocols to offer decentralized, robust services or to define and develop new protocols that are inherently decentralized is fascinating. I’ve had my share of it in my university days. This field is a great place to research and learn about networks and communications. I can’t wait to see how these will evolve our every day networks.

 

 

The post Will there ever be a decentralized web? appeared first on BlogGeek.me.

Are WebRTC room systems interesting again?

Mon, 08/22/2016 - 12:00

I get a feeling that the room system is actually about to change. And that’s probably a good thing.

For many years, video conferencing was defined by the “codec”. The “codec” in this case wasn’t H.264 or any other specification of a video compression standard. It was the term given to the grey box sitting inside a meeting room connected to a camera. For me, a better term for it was always the “room system”. The first ones started as purpose-designed, proprietary hardware, running proprietary embedded operating systems. They were connected to a specific camera that was either a part of the box or connected to the box externally – but in most cases was again a proprietary camera.

There have been attempts in the past to replace the room system with something less expensive. I even remember GIPS (remember them? Google acquired them 6 years ago and made WebRTC out of them) writing a post on their blog on how to build your own video conferencing system from an Intel machine and a Logitech webcam. It was nice, but it really didn’t change the industry.

Little has changed in the video conferencing room system. When I stopped following that industry closely, which was a few years ago, things were still in the same trajectory:

  • Use proprietary hardware (the industry leaned towards the TI DSP at the time)
  • Use Embedded Linux as the OS (at the time, this was actually a refreshing sidestep from VxWorks)
  • Use an external proprietary camera (sourced from Sony if you wanted expensive highend or from another vendor if you wanted expensive “lowend”)

Software was taking the same design concepts of embedded platforms and closed systems at the time. You wrote ugly proprietary code from scratch with specialized UI frameworks. No fun at all.

When I decided to write my first posts about WebRTC, I wanted to share my views of what WebRTC would do to the video conferencing room system. I noted three changes we will see:

So how will we handle it now?

  1. Commodity hardware, probably still with proprietary cameras
  2. Android operating system
  3. WebRTC multimedia and a web browser for signaling and everything else

I wrote it more than 4 years ago. And it still hasn’t happened. What I did fail to see was how two additional changes were going to affect this industry:

  1. Migration towards cloud based deployments, services and business models (specifically in the video conferencing industry)
  2. Open hardware. Or at the very least, the constant grind of Moore’s Law and the stupidly capable hardware we have today

Hardware is cool again. IoT (the Internet of Things) made sure of that. Everything from wristbands, to drones, to self driving cars. Somehow, hardware startups had to also look at the video conferencing system.

Highfive was an early indication of that. A company conceived in 2012, just about the time I wrote my own thoughts on the video conferencing room system. To some extent, also Double Robotics, who made use of an iPad and a Segway-like device. Both employed the cloud for their distribution, selling a service around their devices. They were pioneers in selling their own video “codec” (=room system) coupled with a service they host and manage.

In the past month, things seem to be progressing in this same trajectory. Three items on the news recently caught my attention:

#1 – HELLO

HELLO is a video conferencing room system created by Solaborate. Solaborate is a social business/collaboration platform that has been around for several years now. Their CEO, Labinot Bytyqi, was interviewed here a few years ago about Solaborate. I am not sure how they have been faring since then, but they must have been busy.

It seems that they are now adding a hardware component to the Solaborate platform in the form of HELLO. And what better place to go about doing that than a Kickstarter campaign?

HELLO Kickstarter

The thing I liked most is the image they shared of their first prototype:

For the uninitiated, that’s the Logitech C920 webcam, cut out of its plastic casing and glued onto something that looks like one of those Linux or Android-on-a-stick devices – probably what holds the quad-core ARM processor. Commodity hardware at its best.

Solaborate set a low goal for their Kickstarter campaign and passed it, and then some. They will probably end up below the million dollar mark, but with a rather solid number of backers considering this is, at the end of the day, an enterprise product.

Oh – and did I mention they use WebRTC?

#2 – Pluot

Pluot is a new startup I came across on TechCrunch when it reported that Pluot raised $2.5 million.

The idea isn’t any different than the previous set of vendors. You get a small box and a camera, connected to the Pluot service.

From a hardware standpoint, it isn’t much different than the HELLO box. The camera from the picture is a Logitech C920 one.

The box, if you ask me, is too similar to an Intel NUC.

And it is actually running on off-the-shelf Intel commodity hardware:

The Pluot device is an Intel NUC running Ubuntu Core. […]

All the WebRTC media streams are peer-to-peer. […] That’s why we’re using an Intel Core i3 instead of a cheaper ARM option.

And yes. It is using WebRTC. And guess what? As with Skype, Pluot is also based on Electron (and, by extension, Chromium):

So we scratched our own itch and built a little appliance, using WebRTC and atom-shell (which is now electron).

Pluot took a different business model approach – one used extensively by mobile operators: the box is free and you pay for the monthly subscription service only.

Commodity hardware, commodity software, commodity video conferencing core inside a Chromium shell, powering the whole video conferencing service.

#3 – Cisco trimming its workforce

In seemingly unrelated news, Cisco is trimming down its workforce. Everywhere in the news that this is mentioned, it also comes with an indication that the cuts are mainly on the hardware side of the house. There’s a need to focus more on software these days.

Cisco being one of the biggest players in video conferencing room systems, I wonder what that means. Is it a move towards leaner, more software-focused room systems? Are the room systems at Cisco considered hardware or software in essence? Will we see a shift in business models?

The room system is slowly starting to change and take a new shape.

This change isn’t just a technical one in the specification of the hardware and software – it goes a lot deeper than that. These changes come with a change in how the room system is built, which parts are developed and which are “sourced” from open source alternatives (or paid third parties), who offers the service and what the business model looks like.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post Are WebRTC room systems interesting again? appeared first on BlogGeek.me.

Microsoft Acquires Beam, Showing the Value of WebRTC to Interactive Live Streaming

Mon, 08/15/2016 - 12:00

Low latency is critical for interactive live streaming.

Microsoft last week acquired Beam, a company focused on an interactive live streaming service for gamers.

According to CrunchBase, Beam had been around for almost 2 years before getting plucked by Microsoft. The investment in them was less than $0.5M USD.

For some reason unknown to me, there are people who love watching other people play games. I guess it is similar to some extent to people sitting down to watch a soccer game. Another thing I can’t really understand. It is the reason why Twitch was acquired by Amazon for almost a billion dollars – a month prior to Beam’s founding.

What Beam worked on was a way to enable viewers to be a part of the game and up their engagement. You do this by allowing viewers to push feedback to the gamers – add challenges to them, buy virtual goods for them, etc. From Beam’s website:

We make it possible for streamers to involve viewers in their gameplay, no matter what game they’re playing.

Want to let your viewers choose your weapon, make quests for you, or even fly a drone around your room? You can do that, all in realtime. Our SDK allows developers to create interactive experiences for existing games with as few as 25 lines of code.

In the console world, there are two major players – Microsoft Xbox and Sony PlayStation. With the acquisition of Beam, Microsoft is trying to build an ecosystem of viewers around the gamers and games offered in Xbox. Will they share the SDK and platform with Sony? It is too soon to tell, especially now that Microsoft is opening up and trying to build large ecosystem around its services as opposed to its operating systems. It might just be that Microsoft is trying to become a big player in gaming in general – not just console ones but also mobile.

Back to Beam and video streaming.

To enable richer interactions between viewers and gamers, latency higher than a second is detrimental. This makes the HLS and MPEG-DASH protocols irrelevant. Flash is on its way out the window. The only technology left that can get to sub-second latency for real time video streaming is WebRTC.

 

WebRTC is exactly what Beam has been using in its “protocol” dubbed FTL. It uses WebRTC to stream video to the viewers instead of the more traditional mechanism of Flash.

I have been a believer in WebRTC for live streaming and broadcast for over a year now. It is just another place where WebRTC makes a lot of sense, but it will take time for us to get there. The main reason for that is that current implementations are too focused on video chat scenarios – trying to leverage the WebRTC implementation found in Chrome and hooking it up to backend media servers that are again geared towards video chat use cases.

There are 4 different ways in which WebRTC can be leveraged for interactive live streaming (or streaming at all):

  1. Use WebRTC’s data channel as a replacement for HTTP(S) to send video packets
    • Theoretically, this should be faster than HTTP and enables buffering optimizations
    • No one has taken that route yet as far as I can tell
  2. Build a kind of P2P CDN on top of WebRTC’s data channel
    • Think BitTorrent inside the browser
    • Peer5 and a few other vendors are doing just that
  3. Use WebRTC in its full glory – voice and video channels opened and streamed
    • Acquire the original live stream using WebRTC or some other mechanism, and then use WebRTC to connect the viewers via a VOD like architecture to the broadcast
    • Probably the most wasteful of all the approaches
    • And the one I am guessing Beam is currently employing
  4. Optimize on (3) to offer something akin to a Flash/HLS streamer
    • Handle multiple bitrates and resolutions
    • Be able to get high density of streams in a single machine

Options (1) and (2) require knowledge of networking.

Option (2) requires knowledge of P2P networks.

Option (3) requires WebRTC knowledge at its basic level.

Option (4) means you practically implement a WebRTC stack of your own with a focus on live streaming.

My guess is that with time, we will see vendors implementing options (2) and (4) which will be the winning architectures for live streaming.

Option (2) will be deployed to support today’s use cases, while option (4) will be deployed to support future use cases, where interactivity between viewer and broadcaster is important.
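
For options (1) and (2), here’s a rough sketch of the browser side – assuming media segments arrive as binary data channel messages (how they are produced, ordered and which codec string applies are all assumptions here) and are fed into playback via Media Source Extensions:

```typescript
// Rough sketch for options (1)/(2): media segments received over a
// data channel are appended to a MediaSource buffer for playback.
// Segment ordering, buffering strategy and the codec string are assumptions.
function playFromDataChannel(channel: RTCDataChannel, video: HTMLVideoElement): void {
  const mediaSource = new MediaSource();
  video.src = URL.createObjectURL(mediaSource);

  mediaSource.addEventListener('sourceopen', () => {
    const buffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8"');
    const queue: ArrayBuffer[] = [];

    const appendNext = () => {
      if (!buffer.updating && queue.length > 0) {
        buffer.appendBuffer(queue.shift()!);
      }
    };

    channel.binaryType = 'arraybuffer';
    channel.onmessage = (e: MessageEvent) => {
      queue.push(e.data as ArrayBuffer); // one message == one media segment (assumption)
      appendNext();
    };
    buffer.addEventListener('updateend', appendNext);
  });
}
```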

Beam took the right challenge upon itself. It got acquired within a short timespan and, in a way, redefined live streaming and low latency.

For Microsoft, this is yet another acquisition in the WebRTC space, and another area in which it now relies on this technology – even without supporting it on IE.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post Microsoft Acquires Beam, Showing the Value of WebRTC to Interactive Live Streaming appeared first on BlogGeek.me.

WebRTC Plugin? An Electron WebRTC app is the only viable fallback

Mon, 08/08/2016 - 12:00

I was meaning to write something about Skype, Linux and WebRTC. But never got around to it. Until now.

The reason why I decided to write about it eventually? This tweet by Alex:

IMTC (Microsoft, Cisco, polycom, unify, sonus, …) to provide free (no cost) and free (do what you want) webrtc plugin for I.E. And Safari.

— Dr. Alex. Gouaillard (@agouaillard) August 3, 2016

Hmm. The IMTC is planning to offer a FREE plugin for IE and Safari.

Sounds a lot like Temasys – and it comes from the person who worked at Temasys at the time it released its plugin, which is now a commercial offering rather than a free one.

While some like this plugin, others don’t. They tried it and decided that the warning messages it pops up when being installed aren’t worth the effort.

The Electron WebRTC app approach

What did catch my eye was the Skype for Linux announcement. This is an alpha release of the Skype app for Linux – something that Microsoft has been neglecting for quite some time now.

The interesting bit isn’t that Microsoft is actively investing in a Linux version for Skype and acknowledging this part of the user base, but rather how they did that and the stance they have.

Here are a few lines from the announcement on the Skype community site:

The new version of Skype for Linux is a brand new client using WebRTC, the launch of which ensures we can continue to support our Linux users in the years to come.

[…] you’ll be using the latest, fastest and most responsive Skype UI, so you can share files, photos, videos and a whole new range of new emoticons with your friends.

The emphasis in the quoted text is my own.

Here are my thoughts:

  • This is implemented on top of WebRTC and not ORTC. In a way, we’ve gone full circle with Microsoft – from ORTC, to adding WebRTC support in Edge to using WebRTC to develop their own products where needed
  • Microsoft gives the best reasoning behind using WebRTC in its own development: to ensure continued support for Linux
    • For the most part, using WebRTC equates better support for more devices and platforms than any other technology out there today
    • Yes. You still need to put some effort into getting it working on some platforms – but with a lot less of a hassle than any other technology and at a lower cost
  • Responsive Skype UI = HTML5. So there’s some browser engine / rendering engine for HTML in there somewhere
  • Latest and fastest…

It turns out Microsoft decided to use Electron.

What is Electron? It is a framework around Chromium that can be used to create desktop apps from web apps. And it is the most popular platform for doing so these days.

The irony.

Microsoft. Who owns, develops and promotes IE and Edge. Who was against WebRTC and for ORTC. That Microsoft used Chromium (effectively Chrome) to bring its Linux Skype app to market.

A few years ago, that would have been unheard of. Today? It makes too much sense – it actually increased the value of Microsoft in my eyes. Making the most practical decision of all and putting the ego aside.

Back to a WebRTC Plugin

So.

The IMTC is now investing its time and effort in a WebRTC plugin. Call me skeptic, but I can’t see this heading in the right direction.

Here’s why:

  • The IMTC is an interoperability group. Its strength lies in getting multiple vendors into the same room and having them test their products against each other. “their products” being products that follow the same specification and end up being deployed in the same network and service
  • Companies put their money into the IMTC to gain access to that testing
  • The problem with WebRTC and the IMTC is that WebRTC doesn’t really require interoperability per se – besides that between browser vendors. And browser vendors aren’t exactly the type of audience the IMTC caters for. To be exact, Microsoft is the only browser vendor who is part of the IMTC – and that’s probably for their Skype for Business product and not Edge or IE
  • Writing and maintaining a WebRTC plugin is hard work. It gets updated too frequently to be considered a one-time effort, so maintaining it comes at a cost – a type of cost that is new to the IMTC and its member companies

I believe it will be hard for the IMTC to maintain such a plugin on their own, and if the idea is to open source it so the larger community can take it up and continue to work on and maintain it for the IMTC, then that’s just wishful thinking. Open source projects are not synonymous with community development – they don’t all get picked up, adopted, used and maintained by the masses. The webrtc-everywhere project on github shows that – 2 contributors, a few forks, but not much of a collaboration or community around it.

Since the IMTC is a group of vendors who all seek reaching interoperability of the spec while maintaining a technical advantage on the rest of the vendors (I was there once), I can’t see them cooperating for a long term development of such a thing and putting the resources into it while contributing back to the community.

Furthermore, do we really need a WebRTC plugin?

Yes. I know. Safari. Important. IE. All those poor enterprise guys forced to use it. You can’t live without it and such.

But guess what? That same target market? How receptive do you think it will be for a plugin? What will be the install rate and usage rate for a plugin in such environments?

I have a warm place in my heart for the IMTC, but I think it is losing its way when it comes to WebRTC. I can’t see how a free plugin for WebRTC today will make a change. There are better things to focus on.

What to do in 2016 with WebRTC on IE/Safari?

There are two use cases here:

  1. I need to use the service daily
  2. I just want to get on a URL and do whatever needs to be done (call a doctor for example)

The first one can be solved with an installed PC app. A quaint choice maybe, but one which seems to be popular with comms vendors who started from the web. Think Slack or even Whatsapp – they both have a PC app. If you are using a service daily, the idea goes, you might as well just have it somewhere handy in the background of your PC instead of having to keep it open in a browser tab all the time.

The second one is where things get nasty. Asking for a plugin installation for it is just like asking for an app installation for it. Maybe worse if the installer of the plugin comes with a large set of browser warnings (because browsers now hate plugins). So you might just rethink the app option – or just ask the user to come back with a better browser.

My suggestion?

Explore the option of using Electron instead of a plugin.
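
Here’s a minimal sketch of that option – an Electron main process that wraps an existing WebRTC web app in a desktop shell, with the URL being a placeholder for your own service:

```typescript
// Minimal Electron main process: wrap an existing WebRTC web app in a
// desktop shell. The Chromium bundled with Electron supplies the WebRTC
// stack, so no browser plugin is involved. The URL is a placeholder.
import { app, BrowserWindow } from 'electron';

function createWindow(): void {
  const win = new BrowserWindow({ width: 1024, height: 768 });
  win.loadURL('https://app.example.com/call'); // your existing WebRTC web app
}

app.on('ready', createWindow);
app.on('window-all-closed', () => app.quit());
```

The same web code that runs in Chrome runs inside the shell – getUserMedia, peer connections and all – which is exactly the trick Skype for Linux is relying on.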

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post WebRTC Plugin? An Electron WebRTC app is the only viable fallback appeared first on BlogGeek.me.

Surprise: Free Video Calling is no Guarantee for Success (or Adoption)

Mon, 08/01/2016 - 12:00

Guess what? Mozilla is removing Hello from Firefox.

It will still be available as an add-on, but it seems to have degraded in its importance to Mozilla, which is understandable.

Goodbye Hello

What is/was Hello?

Hello was Mozilla’s attempt to build a video calling service. Something that is baked right into the browser, but can be used by any browser supporting WebRTC. Think FaceTime or Hangouts but without the app or even a website.

Mozilla partnered for Hello with TokBox (a Telefonica company), which provided the backend to the service – mainly NAT traversal as far as I can tell.

When Hello was announced, I had my doubts and questions about it.

What went wrong?

A few things were wrong from the onset in Firefox Hello:

  1. While it debuted on a desktop browser, its main purpose was mobile. The problem is that Firefox OS got scrapped/pivoted, leaving Hello with no real use
  2. It came at a low point in Mozilla’s history. Mozilla partnered during 2014 with 3 vendors, trying to reduce Google’s hold on it: Yahoo, Cisco and Telefonica
    • Yahoo is all but dead – it just got acquired by Verizon
    • Telefonica needed Firefox OS on mobile, and now that that hasn’t matured, my guess is that its interests lie elsewhere these days, so having Telefonica/TokBox as part of Hello probably isn’t helping too much today
    • Cisco only wanted to protect its H.264 investments, which it succeeded in doing
    • This cost Mozilla in focus and diluted its brand from being a pure open alternative
  3. Firefox has no real network effect or user base to rely on. It doesn’t connect users to one another but rather connects viewers to web pages. Having hundreds of millions of viewers doesn’t equate to monthly active users for a personal communication tool that is baked into the same product
  4. Hello was simple, but offered nothing interesting/innovative/new/needed. People who used apps continued to use apps. Those that wanted to meet over URLs used URLs. Having the button in the browser wasn’t enough to make people leap for the opportunity to use it
  5. While available in all WebRTC supporting browsers (=Chrome & Firefox), it was really a Firefox thing. This limited the user base, and especially the ability to start or to really receive a call over a mobile device

The main issue though is that a free video calling service isn’t that much of a deal these days (if this surprises you – just ask Google).

So Mozilla started by embedding Hello right into the browser. Then making it into a system add-on. And now it is making it into just another add-on. I assume it has a lot to do with the usage they’ve seen over the past year for Hello (and its non-adoption). It makes no sense to continue investing the time and effort in it if no one is using it – and having it officially released with the browser once every few months is a waste. Better throw it out of the browser and simplify the browser releases.

The next step might be to sunset the add-on/service altogether and say goodbye to Hello.

Is this predictive to Google’s Duo app?

Google announced Duo and is about to release it. Simplifying things a bit (and dumbing it down), Duo is a FaceTime clone. I covered Allo/Duo a few months back.

On face value, there’s no reason why Google Duo won’t meet a similar fate as Mozilla Hello.

That said, there are a few notable differences:

  • Duo is a mobile only app, whereas Hello focused on desktop browsers
  • Duo will probably be released on Android and iOS, covering 100% of the mobile market from day one
  • Google has a large user base on Android and the ability to get Duo in front of users. It also has the social graph of these people – via the phone’s address book
  • While Google kept Duo simple, it did bake two features into it:
    • Speed of connectivity, taking it to the extreme by adding QUIC into the mix
    • Caller’s video sent even before you accept the call

Will this be enough for Google Duo to get the adoption? I don’t know.

Where do we go from here?

In 2016 there should be no doubt anymore:

If you plan to monetize a video calling service, you need a serious business plan.

Most services I see launched have no business plan. They attempt to grow to millions of users. There’s a lot of dumb luck involved in it.

I’ve had my doubts about the viability of Wire as a company for the same reasons. The only progress made by Wire is open sourcing their app – this doesn’t strike me as a business plan or a signal of strength and healthy growth.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post Surprise: Free Video Calling is no Guarantee for Success (or Adoption) appeared first on BlogGeek.me.

VP9 Hardware Acceleration is Real

Mon, 06/20/2016 - 12:00

Hardware acceleration for video codecs is almost mandatory.

VP9 is getting a performance boost

There are three things that keep VP8 in the game when compared to H.264:

  1. It was the only video codec in Chrome for WebRTC in the last 5 years, giving it a headstart in deployments
  2. H.264, while available in mobile chipsets, isn’t always accessible to the developer (or doesn’t always work as it should when it is accessible)
  3. VP8 and H.264 are rather old now, so software implementations of them are quite decent

 

With VP9, the main worry was that it would be left behind and not get the love and attention from chipset vendors – leading it to the same fate as VP8 – abysmal, if any, hardware acceleration support. It is probably why Google went to great lengths to get it running on YouTube so soon and is publicizing its stats all the time.

This worry is now rather behind us. Recent signs show some serious adoption from the companies that we should really care about:

#1 – ARM

Mobile=ARM

Without checking stats, I’d say that 99% or more of all smartphones sold in the past 5 years are based on ARM.

If and when ARM decides to support a feature directly, that brings said feature very close towards world domination in future smartpones.

Which is more or less what happened last week – ARM announced its Mali Egil Video Processor with VP9 acceleration.

Here’s a deck they shared:

ARM Mali "Egil" technical preview from Phil Hughes

Being farther away from chipsets than I was 5 years ago, it is hard for me to say whether this is an integral part of an ARM processor, but I believe it isn’t. It is an add-on component that takes care of video processing, which chipset vendors add next to their ARM core. They can source the design from ARM or other suppliers – or they can develop their own.

Not sure how popular the ARM alternative is for video processing, but they have the advantage of being the first alternative for any chipset vendor (hell – they already source the ARM core itself, so why not bundle?). Which also means every other vendor needs to match up to their feature set – and improve on it.

Now that VP9 encode/decode capabilities are front and center in the ARM Mali Egil, it has become a mandatory checkmark for everyone else as well.

#2 – Intel

If ARM is the king of mobile, then Intel rules the desktop.

As with ARM, I haven’t been following up on Intel CPU acceleration lately. And as with ARM, it was Fippo who got my attention with this link here: the new Intel Media SDK.

For those who don’t know, Intel provides several interesting software packages that make direct use of its chipset capabilities, especially when it comes to optimizing different types of workloads. The Intel IPP and Media SDKs handle media related processing, and are quite popular with low level developers who need access to such facilities.

From the release page itself:

With this release we are happy to announce new full hardware accelerated support for HEVC and VP9.

  • HEVC Main 10 (10-bit) encoder and decoder support
  • VP9 8-bit and 10-bit decoder support

So… HEVC (=H.265) has encode and decode while VP9 only has decode support.

Probably because HEVC has been in the works for a lot longer than VP9, but there’s hope still.
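
From a web developer’s seat, the practical question is simply whether the platform in front of you can decode VP9 at all. A minimal, hedged check (it tells you nothing about whether the decoding is hardware accelerated):

```typescript
// Sketch: feature-detect whether this browser can decode VP9 (and VP8) for
// playback. MediaSource.isTypeSupported answers "can it be decoded at all";
// it does not reveal whether the decoder is hardware accelerated.
const vp9Supported = MediaSource.isTypeSupported('video/webm; codecs="vp9"');
const vp8Supported = MediaSource.isTypeSupported('video/webm; codecs="vp8"');

console.log(`VP9 playback: ${vp9Supported}, VP8 playback: ${vp8Supported}`);
```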

#3 – Alliance of Open Media

The Alliance of Open Media. I’ve published a recent update on the alliance.

Intel was there from the start. The recent additions include ARM, AMD and NVIDIA.

I am sure additional chipset vendors will be joining in the coming months – there seems to be a ramp up in membership there, with Ateme and Adobe added to their logos just last week.

While the alliance is about what comes after VP9, it is easy to see how these vendors may sway to using VP9 in the interim.

The Future

The future is most definitely one of royalty free video codecs. We got there with voice, now that we have Opus (though Speex and SILK were there before to pave the way). We will get there with video as well.

Coding technologies need to be accessible and available to everyone – freely – if we are to achieve Benedict Evans’ latest claims: Video is the new HTML. But for that, I’ll need another post.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post VP9 Hardware Acceleration is Real appeared first on BlogGeek.me.

Will Microsoft’s Acquisition of LinkedIn Change the WebRTC Landscape?

Tue, 06/14/2016 - 12:00

It’s good to have Fippo around when there’s a lack of ideas in your head.

While synergies abound, flawless execution is necessary

Yap. Fippo again prodded me about a topic, so here comes the post for it.

If you missed it, yesterday Microsoft acquired LinkedIn. $26.2B.

In some ways, Microsoft now rules the enterprise space – communication, collaboration and creation:

  • Microsoft Office suite (Excel, PowerPoint and Word as the main pillars)
  • Microsoft Outlook and the Exchange server (Email)
  • Yammer (Enterprise communications)
  • Skype (Voice and video communications)
  • LinkedIn (User identities and profiles)

Dean Bubley puts it nicely:

The @microsoft / @linkedin deal has nailed enterprise comms federation. Complete map of who knows whom. Add Skype4B & goodbye telephony

— Dean Bubley (@disruptivedean) June 13, 2016

There’s a longform here, but I am less convinced.

I am more inclined to how Radio Free Mobile sees this:

However, for all of this to work, LinkedIn’s systems and data has to become deeply integrated with those of Microsoft which with the companies remaining independent, will be orders of magnitude more difficult.

Microsoft of late has an issue with the ability to execute and follow through.

Skype, while huge, hasn’t grown since Microsoft’s acquisition. It is actually letting others take its place.

Same with Yammer. Have you heard anything about it in the last few years? The news is all about Slack, and worse still – it is about how Atlassian’s HipChat is struggling because of Slack – Yammer isn’t even mentioned as a competitor/contender in this space.

Which brings us to LinkedIn, Microsoft’s intents for it and its ability and willingness to follow through.

Back to LinkedIn

I wrote about LinkedIn exactly a year ago. It was about their acquisition at the time of Lynda, a learning company, and me griping on why LinkedIn isn’t doing anything about comms (and WebRTC).

The people at LinkedIn aren’t stupid. They are $26.2B smarter than I am. And frankly, that’s also $17.7B smarter than Skype.

What does that tell us?

  • LinkedIn saw no real value in real time communications
    • Not enough to invest in it and build something with WebRTC
    • Not enough to acquire someone outright
    • Not enough to partner and integrate someone like Skype (Facebook did that in the past for example)
  • That decision played well for LinkedIn – they just got acquired
  • Messaging isn’t that important to LinkedIn either
    • They have rudimentary messaging capability in their platform
    • But it is lacking in so many ways that it is hard to enumerate them
    • And you can’t call its messaging anything similar to… messaging. It feels more like email

If LinkedIn can’t find value in real time communications for its platform on its own, can Microsoft do a better job at it?

I don’t know.

Now let’s look at the Microsoft assets that can be integrated with LinkedIn.

Skype and LinkedIn

As Dean suggested, there is some synergy in Skype connecting to LinkedIn.

LinkedIn can slap a Skype button on its profiles, making it easy to connect to the people you’re connected with on LinkedIn.

While that’s great, most communication today happens OUTSIDE of LinkedIn. You reach out to people on it, connect with them, and then shift to email and other means of communications. Especially once you know a person to some extent.

To make a point – I wouldn’t send a message to Dean over LinkedIn – I’d do it over email. Or just ping him on Skype, because that’s where he is.

When someone asks me for an introduction, it usually goes like this: “I saw you are connected to John Doe on LinkedIn. Can you send an intro email for me?”. It happens a lot less on LinkedIn even when it is driven from LinkedIn.

Getting the communication back to LinkedIn will be hard. Getting slightly more communications from LinkedIn directly to Skype is possible, though I am not sure it will be widely accepted.

Yammer and LinkedIn

Yammer isn’t best of breed in enterprise messaging. Not even sure if doing anything with it and LinkedIn is worth the effort.

My suggestion is to open the coffers and take out a few more billions of dollars and acquire Slack. Then throw out all voice integrations and bolt Skype in there. But that has nothing to do with LinkedIn.

Outlook/Exchange and LinkedIn

Email is what drives LinkedIn in the most effective way.

Having the ability to embed and merge profiles properly into Outlook – without any ugly add-ons – that’s great.

But nothing earth shattering that we haven’t seen before with Rapportive on Gmail.

Office and LinkedIn

I guess that having a tighter integration between PowerPoint and Slideshare would be great. But that isn’t the reason LinkedIn was acquired.

Sarah Perez of TechCrunch wrote about the integration of Office and LinkedIn. It includes Outlook. Focuses on Outlook.

And mostly goes one-way: how LinkedIn can enrich Office/Outlook related information. A bit on how Office can enrich LinkedIn data by adding more users. But nothing about how LinkedIn’s functionality can grow. A shame.

If this is where things are headed – growing Office but not growing LinkedIn – then I am afraid LinkedIn can expect a similar fate to Yammer and Skype. Its days of greatness will be behind it, and its level of innovation and introduction of powerful features that can compete in the market will come to an end.

Other Domains

Cortana and Microsoft’s CRM are areas I missed. You can read more about them in Richard’s analysis on Radio Free Mobile.

The Corporate Structure

It seems that LinkedIn will sit as an independent entity within Microsoft under Satya Nadella directly.

I wonder how that will make things easy for the tight integrations envisioned for LinkedIn and the rest of Microsoft’s assets. How easy will it be to get the Skype team to cooperate and assist the LinkedIn team in integrating Skype for Web? What will the Office team want in return for the data they will be passing to LinkedIn? Will legal even authorize it?

There will be a lot of coordination taking place here, and I do hope that along the way they won’t lose sight of what needs to be done – there are a lot of synergies and a lot of power here, but this will require a lot of agility from a huge company.

Back to WebRTC

This affects larger players in the UC space. If (and that’s a big if) Microsoft can connect the dots of Office, Exchange, Skype and LinkedIn – this makes for a very compelling offering. One that can differentiate and top Cisco and Google.

If Microsoft can make LinkedIn into the congregation point of people across enterprises – and not only a place to find CVs – it will be in a position to expand its offering towards real time communications in ways that others will find hard to compete against. LinkedIn lacked this vision. I wonder if Microsoft can follow through – or will they as well see it as unnecessary.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post Will Microsoft’s Acquisition of LinkedIn Change the WebRTC Landscape? appeared first on BlogGeek.me.

The Alliance of Open Media – 10 Months in

Thu, 06/09/2016 - 12:00

How time flies.

About 10 months ago, the announcement of the creation of a new alliance caught me off guard.

Somehow, Google, Microsoft and a few other companies put their differences aside and decided to create the Alliance of Open Media. The intent – create a royalty free video codec to rival H.265/HEVC. I’ve written about the Alliance of Open Media before. It is time to revisit the topic.

A few things happened these last few months that are worth mentioning:

  1. We’ve learned more about the alliance – Jan Ozer  wrote a good progress report
  2. AMD, ARM and Nvidia joined the alliance
  3. Ittiam joined the alliance
  4. Vidyo joined the alliance

I am told work is being done on the actual codec itself. From the report Jan Ozer wrote, the following is apparent:

  • Baseline for the codec is VP10 (Google)
  • Most contributions of technologies on top of it come from Mozilla and Cisco; though I assume Microsoft is contributing there as well
  • Hardware vendors are putting their weight to make sure the algorithms used are easy to place in a hardware design
  • There’s a focus on GPU acceleration, which is important
  • Intent is to have it integrated into a browser by the beginning of 2017 and have hardware acceleration a year later

All the right moves.

ARM and Nvidia

Adding ARM and Nvidia is quite a catch.

ARM is in charge of the architecture of most smartphones on the market today, along with many of the IOT devices out there. Having them on board means that considerations for mobile and low power devices are taken into consideration by the alliance – but also that the work of the alliance will find its way into future designs of ARM.

Nvidia is where you find GPU processing power. They complement the presence of Intel, bringing the important GPU players to the table. In a recent whitepaper I wrote for Surf, I touched on the GPU issue briefly. I’ve done some research in that domain, and it does seem like the GPU is the best candidate to handle our future video coding – having the relevant GPUs involved in this next generation codec from the start is an important catch for the alliance.

Ittiam

Ittiam is a recent addition to the alliance.

I got to know Ittiam a decade ago, while competing head to head with their VoIP software. They have expertise in the multimedia space and in video compression, but they are still the smallest (or least relevant) player in this alliance. Having them is required to fill in the ranks and grow in numbers.

It would be nice to see others join such as Imagination Technologies (who are larger and a lot more meaningful).

Vidyo

Vidyo just joined the alliance. On one hand, it surprised me. On the other hand, it shouldn’t have.

Vidyo has been collaborating with Google for a long time now on VPx and WebRTC. Recently it reiterated that with the work it is doing on VP9 SVC for WebRTC (you can find out more about it in a guest post Alex Eleftheriadis shared here on scalability and VP9).

Their addition to the alliance means several things:

  • Vidyo is making itself an integral part of every initiative related to future video codecs. This is a smart move, as it maintains its lead on the backend side and the smarts placed on top of SVC capabilities
  • This future codec will have SVC support in it, hopefully from the moment it is released to market
  • While a smaller company compared to the other members, Vidyo’s contribution to the alliance can be larger than that of many other members

Qualcomm

Qualcomm is missing.

So is Samsung.

And a few other smaller mobile chipset vendors.

I think it is their loss, as well as a missed opportunity.

They both should have joined the alliance at its inception.

Apple

Apple being Apple, they aren’t a part of it. Putting ads in the App Store and changing subscription revenue sharing models were more important to them, which is understandable.

The thing I don’t understand here is that Apple has removed most of its support for H.265. What does it have to lose by joining the alliance?

There are three paths available to Apple:

  1. Go with H.265. The current reduction in its support of H.265 can only be explained as a negotiation tactic in such a case
  2. Go with the Alliance of Open Media. Which it could do at any point in time. But if that is the case, then why wait?
  3. Release its own unique iCodec. Apple knows best, and it is time to lock in its customers a bit further anyway

I wonder which route they are taking here.

Content Creators and Service Providers

We’ve got YouTube, Netflix and Amazon already covered. The internet may rejoice.

But what about Game of Thrones? Or the next movie blockbuster? Are they staying on the route of H.265 or will they veer away from it towards the alliance?

Hard to tell, though for the life of me, I can’t understand a long term decision of staying with H.265.

It would be nice to see the large studios and even Bollywood join the alliance – or at the very least back it publicly.

Timeline

If we look at the VP9 timeline, we have the following estimates:

  • 1 year – Chrome decoding, along with a small percentage of YouTube videos supported
  • 2 years – First chipsets and reference designs support. My bet is on Nvidia and Intel here
  • 2.5 years – Chrome official support of it for WebRTC

H.264 in WebRTC

H.264 is here to stay. More worrying – H.264 will grow in popularity in WebRTC services during 2016.

This progress and success of the alliance changes nothing in the current ecosystem and the current video technology.

The future of H.265

The future of H.265 does look grim. I do hope the alliance will kill it.

H.265 is on a collision course with VP9. It is still the more “popular” choice in legacy businesses, but that may change, as commercial deployments of it are small or non-existent.

The alliance simply means that a future codec is based on the VPx line of codecs instead of the H.26x ones. Now developers shifting from H.264 to a better codec will need to decide if they switch codec lines now or just later.

The royalty issues around H.265 along with the progress made in the alliance should tip the scales towards VP9 on this one.

What’s next?

Money time.

Where does that leave us all?

  • Vendors who handle codecs directly should join the alliance. The benefits outweigh the risks.
  • Consumers and users can continue not caring
  • Developers, especially those of backend media servers, need to decide if they shift towards VP9 or wait for the next generation to switch to a royalty free codec. They also need to decide if they want to use VP8 or H.264 today

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post The Alliance of Open Media – 10 Months in appeared first on BlogGeek.me.

4 Reasons to Choose H.264 for your WebRTC Service (or why H.264 Just won over VP8)

Mon, 05/30/2016 - 12:00

H.264 is set to replace VP8 for WebRTC services.

You can thank Fippo for making me write this one.

Microsoft ended last week with an announcement of sorts on their Edge dev blog, indicating that H.264/AVC support for ORTC is now available in Edge.

  • Yes. It is ORTC and not WebRTC
  • Yes. It is only behind a runtime flag
  • Yes. It is only on Edge. No IE

But then again, it is the only way today (or at least tomorrow) to get a video call running cross browser between Firefox, Chrome and Edge. VP8 or VP9 gets you as far as Chrome and Firefox.

Which got me to this one over here. Edge support for H.264 in ORTC isn’t much. It isn’t even interesting in the bigger scheme of things (Edge has literally no market share compared to the other browsers, so why bother with it?). And still it marks a turning point – one in which we can all ask ourselves what video codec should we be leaning towards if we started developing a product that uses WebRTC today?

Last year, the answer would have been “VP8”.

A few months ago, it was, “it depends”.

Today, it will lean towards “H.264, unless you must use VP8”.

Here are 4 reasons why this is happening:

#1 – Browser interop baseline

If you want your service to get the most coverage on as many browsers as possible and you need video, then H.264 is the way to go. In a few months, H.264 will get official support from all of these vendors and that will be the end of it. Furthermore, you can expect Apple to use H.264 first and contemplate VP8 – same as Microsoft is doing now with Edge.

#2 – Mobile

Mobile devices like H.264 more than they like VP8. Video codecs take up a lot of resources. To overcome this, mobile handsets use hardware acceleration for video codecs. They all have H.264 video acceleration (though you can’t always gain access to it as a developer). Many of them don’t even know how to spell VP8. This boils down to WebRTC implementations on mobile needing to implement VP8 using software.

Some developers ended up replacing VP8 with H.264 on mobile just because of this reason. Especially for mobile only products.

While I am sure support for VP8 is improving in new chipsets, there’s this pesky issue of supporting the billion and more devices that are already out there. And now that all browsers support H.264 in one way or another, what incentive do developers needing to support mobile apps have to use VP8?

#3 – Legacy video systems

All them video conferencing systems? They use H.264. Most don’t have VP8. Not even in their latest released products. The way they end up supporting WebRTC until today is via a specialized gateway, on the MCU or not at all.

Transcoding was one of the main barriers to getting WebRTC to legacy video systems. It just costs a lot. It would have been easier to just go H.264 all the way. Which is what is now available.

It is one of the reasons why Cisco first worked on Firefox with Spark. It made a decision to use H.264 for WebRTC instead of transcoding from VP8.

#4 – Streaming

Over 60% of the Internet traffic is video. Most of it isn’t real time video, but rather the YouTube or Netflix kind. Passive consumption.

Video streaming today is predominantly H.264 based, and at times VP9 (=YouTube whenever possible).

To get video content on an iPhone device, HLS is required, and that again means H.264.

So again we are left with the alternative of either transcoding our WebRTC generated content to H.264 when we want to stream it out – or creating it using H.264 to begin with.

Do you even care?

If your service is a 1:1 calling service with no server side media processing, then you shouldn’t even care. In such a case, whatever the browsers end up negotiating will be good enough for you (and most probably the best alternative for that specific situation).

Those who invested in server side media processing – be it recording, mixing or routing – have made investments that are targeted at VP8. Modifying these to work with H.264 as well may not be trivial. For them, the decision to switch to H.264 is a harder one to make, but one that needs to be addressed.
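
For those leaning towards H.264, here’s a hedged sketch of how a web app can nudge the negotiation that way. Note that setCodecPreferences is a later addition to the WebRTC API and isn’t available in every browser (it certainly wasn’t at the time of writing) – SDP munging was the older workaround:

```typescript
// Sketch: ask the browser to prefer H.264 over VP8/VP9 for an outgoing
// video track. setCodecPreferences is a later addition to the WebRTC API
// and may not exist in older browsers, in which case SDP munging is needed.
function preferH264(pc: RTCPeerConnection, track: MediaStreamTrack): void {
  const transceiver = pc.addTransceiver(track, { direction: 'sendrecv' });
  const capabilities = RTCRtpSender.getCapabilities('video');
  if (!capabilities || typeof transceiver.setCodecPreferences !== 'function') {
    return; // fall back to whatever the browsers negotiate on their own
  }
  // Reorder the codec list so H.264 entries come first.
  const h264First = [
    ...capabilities.codecs.filter((c) => c.mimeType.toLowerCase() === 'video/h264'),
    ...capabilities.codecs.filter((c) => c.mimeType.toLowerCase() !== 'video/h264'),
  ];
  transceiver.setCodecPreferences(h264First);
}
```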

The Future of Video Coding in WebRTC

Once we step into the future, we see VP9. And the SVC flavor of VP9.

And then there’s the Alliance of Open Media and the work they are doing towards a widely accepted next gen royalty free video codec. I’ve touched on the progress they are making in my recent Virtual Coffee session.

For the record, I rather hate H.264 and what it stands for. But now I must accept that it is here to stay and grow with WebRTC.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post 4 Reasons to Choose H.264 for your WebRTC Service (or why H.264 Just won over VP8) appeared first on BlogGeek.me.

NUBOMEDIA: the first open source WebRTC PaaS

Wed, 05/25/2016 - 12:00

[Luis Lopez is the face in front of Kurento, one of the popular open source media servers that can handle WebRTC. He wanted to share here the story of the new open source WebRTC PaaS – NUBOMEDIA]

When I first heard about WebRTC back in 2011, I was fascinated by the idea of standardized APIs and protocols enabling the creation of interoperable RTC applications for the Web. However, I noticed very soon that my peer-to-peer services were too limited and that, as a developer, I was hungry for further features that could only be provided by a WebRTC infrastructure. This is why I got involved in the Kurento project for creating a media server. Kurento got nice traction but, as it was maturing, we saw an increasing number of feature requests related to its scalability. The message was quite clear: a cloudification of Kurento was necessary.

With this in mind, by 2014 we got down to work and, with the financial support of the European Commission, we worked hard for a couple of years in cooperation with some of the most remarkable cloud experts around Europe. These efforts paid off: NUBOMEDIA, the first open source WebRTC PaaS, is now a reality.

NUBOMEDIA: the first WebRTC PaaS

In the WebRTC ecosystem, scalable clouds for developers are not new. Providers such as Tokbox, Kandy, Twilio and many others offer them. These solutions are commonly called “WebRTC API PaaS”, “WebRTC Cloud APIs”, or just “Cloud APIs”, as they expose a number of WebRTC capabilities through custom APIs that exhibit all the nice “-ilities” of cloud services (scalability, security, reliability, etc.).

For NUBOMEDIA we also considered this “Cloud API” concept as a solution. However, although APIs are the main building block developers use for creating applications, applications are more than just a set of API calls. After analyzing WebRTC developers’ needs, we found the concept of a platform more appealing than the concept of an API. A platform is more than an API in the sense that it provides all the required facilities for executing applications. These typically include an operating system, some programming-language-specific runtime environments and some service APIs. The cloud version of a platform is commonly called a PaaS, which is (literally) a platform that is offered “as a Service”.

There are many such PaaSes in the market, including Heroku, Google App Engine or AWS Elastic Beanstalk. All of them give developers the ability to upload, deploy, execute and manage applications written in different programming languages. These PaaS services are quite convenient as they let developers concentrate on creating their applications’ logic, while all the complex aspects of provisioning, scaling and securing them are taken care of by the PaaS. Despite the wide range of PaaS offerings, we noticed that most common PaaS providers did not expose WebRTC capabilities as part of their APIs. Hence, WebRTC developers were not able to enjoy all the advantages of full PaaSes.

The main difference between a WebRTC cloud API and a full WebRTC PaaS is illustrated in the following figure. As can be observed, WebRTC Cloud API providers (left) do not host developers’ applications, but just expose some WebRTC capabilities through a network API that applications consume. On the other hand, full WebRTC PaaSes host applications and take responsibility for executing, scaling and managing them.

Based on these ideas, the NUBOMEDIA idea emerged clearly: instead of evolving Kurento into a cloud API we should rather create a full PaaS out of it, so that developers could enjoy the nice features of PaaSes (i.e. application deployment, execution, scaling, etc.) while consuming the Kurento APIs in a scalable and secure way.

Why NUBOMEDIA may be interesting for you

NUBOMEDIA is now a reality and it can be enjoyed openly by developers worldwide. Like solutions such as OpenShift, Cloud Foundry or Apprenda, NUBOMEDIA is a private PaaS in the sense that it consists of an open source software stack that can be downloaded, installed and executed on top of any OpenStack IaaS cloud.

If you are a developer, you may be interested in trying NUBOMEDIA for your next application as it combines the simplicity and ease of development of WebRTC Cloud APIs with the flexibility of full PaaSes. When doing so, consider that NUBOMEDIA is a Java PaaS. Hence, you will be able to leverage all the capabilities of the Java platform for creating your WebRTC application. The only difference with other Java PaaS services is that NUBOMEDIA will provide you with a specific SDK through which you will be able to access the complete feature set of Kurento in a scalable way.

From a practical perspective, the main differences between NUBOMEDIA and other WebRTC cloud solutions are illustrated in the next figure. As can be seen, there is a trade-off between flexibility and simplicity: the simpler the development, the less flexible the application is and the more difficult it is to adapt it to custom needs and requirements.

For example, the most flexible solutions (IaaS, on the bottom left corner of the image) require complex developments for creating fully operational WebRTC applications. On the other hand, SaaS solutions (top right corner) do not require much development effort, but the developers’ ability to customize and adapt them to special requirements is typically very limited. For this reason, WebRTC developers tend to prefer WebRTC Cloud APIs that provide some flexibility but, at the same time, enable simple developments.

NUBOMEDIA also sits within this balance, but gives more weight to flexibility. This makes NUBOMEDIA more suitable for developments that need to comply with special or rare requirements. Just for illustration, these are some of the things you can do with NUBOMEDIA that are complex to achieve using common WebRTC Cloud APIs:

  • To use the signaling protocols you prefer (e.g. SIP, XMPP, custom, etc.)
  • To have special communication topologies. For example, imagine that you need a videoconferencing room with “spy participants” who can view others without being noticed by the rest; or simultaneous translators who are not viewed, but need to listen to some participants while being heard by others (see the sketch after this list).
  • To have custom AAA (Authentication, Authorization and Accounting). For example, imagine that you wish to implement rules customizing who can access the media capabilities (e.g. recording, viewing a specific stream, etc.) so that they depend on some non-trivial logic (e.g. context information, time-of-day, time-in-call, etc.).
  • To go beyond calls. We may imagine lots of use-cases where WebRTC might be used beyond plain calls. For example, person-to-machine or machine-to-machine scenarios where you need cameras to connect to users or to other systems in a flexible way, without being restricted to the typical room videoconferencing models commonly exposed by WebRTC Cloud APIs.
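To give a flavor of the “spy participant” topology mentioned in the list above, here is a rough sketch of how it looks with the Kurento media pipeline model. The API names follow the Kurento Node.js tutorials (the kurento-client package); NUBOMEDIA’s own SDK is Java-based, so treat this strictly as an illustration of the concept rather than NUBOMEDIA code:

  // Sketch: a one-way "spy" topology - the spy receives the presenter's media,
  // but nothing is ever connected back, so the presenter never notices the spy.
  // ICE candidate exchange, error handling and TypeScript declarations for the
  // kurento-client package are omitted for brevity.
  import kurentoClient from 'kurento-client';

  async function spyOnPresenter(kmsUri: string, presenterOffer: string, spyOffer: string) {
    const client = await kurentoClient(kmsUri);
    const pipeline = await client.create('MediaPipeline');

    const presenter = await pipeline.create('WebRtcEndpoint');
    const spy = await pipeline.create('WebRtcEndpoint');

    // Media flows presenter -> spy only
    await presenter.connect(spy);

    const presenterAnswer = await presenter.processOffer(presenterOffer);
    const spyAnswer = await spy.processOffer(spyOffer);

    // Return the SDP answers to each participant over your signaling channel
    return { presenterAnswer, spyAnswer };
  }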

As another interesting property, since NUBOMEDIA is a private PaaS, it can run on any OpenStack infrastructure. This means that the operational costs of an application running in NUBOMEDIA are fully under your control, as you can decide in which IaaS to deploy the PaaS. This significantly reduces the operational costs compared to an equivalent application consuming a Cloud API, as the Cloud API provider’s margins disappear.

The NUBOMEDIA Open Source Community

We have created NUBOMEDIA following the same open philosophy we used with Kurento. Currently, it is supported by an active and vibrant open source software community that is structured as an association of several projects providing different technological enablers including: the cloud orchestration mechanisms, the PaaS management technologies, the media server, many media processing modules and client SDKs for Android, iOS and Web.

If you are interested in knowing more about NUBOMEDIA you can check the community documentation where you will be able to find detailed information showing how to install and manage the platform and how to develop and deploy applications into the PaaS. You can also check the community YouTube channel and see one of the many videos with demos and tutorials illustrating how to develop and deploy NUBOMEDIA applications. If you want to know about the latest news of the NUBOMEDIA Community, you may follow it on Twitter.

 

Want to make the best decision on the right WebRTC platform for your company? Now you can! Check out my WebRTC PaaS report, written specifically to assist you with this task.

Get your Choosing a WebRTC Platform report at a $700 discount. Valid until the beginning of May.

The post NUBOMEDIA: the first open source WebRTC PaaS appeared first on BlogGeek.me.

With WebRTC, Vendors Must Embrace True Agile

Mon, 05/23/2016 - 12:00

And not only the development.

For too many years now we’ve been enamored with Agile. Supposedly the successor of the waterfall development model, agile is all about short iterations and faster feedback.

In larger places, agile is usually just the next undertaking of the program manager – or whatever equivalent you have in the company that deals with processes. I remember hearing the term “we must be agile”. With the end result being… 18- to 24-month product release cycles.

That’s nice, but it isn’t really agile – at least not more than the Geek & Poke caricature above.

I had an interesting discussion with a consultant during the London WebRTC conference two months ago. He complained that browsers are moving too fast, making it hard for enterprises to follow suit and adopt WebRTC.

Here’s a quick reminder – WebRTC doesn’t care about enterprises. It cares about innovation and forward moving. If something breaks, then you’re just out of luck.

WebRTC today forces enterprises to think and act Agile

Why is this the case?

  • Browsers are updating at the speed of light – every 6 to 8 weeks
    • Each time they do, something gets deprecated
    • And other things can get broken
    • This is doubly so with WebRTC, which is essentially a perpetual work in progress
    • And will stay that way well into 2017
    • Enterprises need to be prepared for it and willing to update their own deployments to keep pace
  • WebRTC’s codecs are changing – and upgrading
    • VP9 is upon us
    • H.264 is here to stay
    • R&D teams need to adopt new codecs to keep their service pristine
    • Otherwise, competitors will do it and win the market simply by offering better user experience and media quality
  • New capabilities
    • Browser side recording?
    • Playing video from a canvas?
    • Pipelining media?
    • WebRTC has it all, and things are only improving
    • Do these affect your product? Do you need someone to define how this changes things for you?
What Needs to Change

Enterprises need to change their stance. They aren’t in control anymore. They should act accordingly.

This means having product managers, developers, testers, support and IT all working in concert in an agile way – thinking about launched products as living and breathing entities that must be updated continuously.

Thinking of launching a WebRTC based product? Especially if it is an on-premise one – you must make sure you understand the implications AND that your customers understand the implications as well.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post With WebRTC, Vendors Must Embrace True Agile appeared first on BlogGeek.me.

Allo, Duo, Hangouts or Jibe? Help…

Thu, 05/19/2016 - 12:00

Wasn’t there enough complications already?

I use Hangouts all the time. At testRTC, we use it for most of our demos and customer meetings. As good and complete as Hangouts is in terms of the feature set that I need, it can be quite confusing at times. Something that probably stems from its dual use nature: Google Hangouts is both a consumer messaging app and an enterprise unified communications app. And while the two rely on the same technology – they are not the same.

If there is one other similar service that does this, it is Skype – and even there, it is mostly through branding rather than the service itself (I am not sure how uniform the Skype and Skype for Business apps and infrastructure are, but they sure have been getting worse in the last year or two).

Can a single app rule them all? By the way things look today – no.

And yet this latest move by Google leaves me somewhat baffled.

At Google I/O’s keynote yesterday, Google came out with a slew of announcements. The ones interesting for me here are those related to messaging or to WebRTC:

  • Allo – a new messaging app to fend off Facebook Messenger
  • Duo – a new video chat app to fend off Apple FaceTime
  • Firebase – a new version which I won’t be covering here
Allo

Allo is Google’s “Smart Messaging App”.

It is yet-another-messaging-app – until you see the suggestions it gives you.

I use SwiftKey as my Android keyboard, and it “learns” what you type so future typing is shorter. The smart replies in Allo are the next step for me – instead of doing it at the word level, it does it at the conversation level.

The smarts in Allo seem to be split into two parts – what Allo does on its own, which is suggestions inside the conversation. On top of that, Google added something they call Google Assistant, which goes “out” of the conversation to offer suggestions for external actions. The example in the I/O keynote was a restaurant reservation.

This competes directly with messaging and bots. Specifically Facebook. Maybe others.

Where can this lead us?

  • If I were Google, I’d make this into a bot or a layer that can be stitched into everything
  • Messaging services could use it directly, which will allow Google to sift every interaction and offer their suggestions and automation – no matter the app
  • Would messaging apps adopt it? I don’t know, but why shouldn’t they try it out?
Duo

Duo IS WebRTC. Or at least what you can do with it.

A note about Duo, WebRTC and purism – Duo is mobile only (for now), closed app, running on Android and iOS.

I’ll repeat that.

Duo is mobile only (for now), closed app, running on Android and iOS.

No web browser. No complaints about unsupportive Safari or IE browsers. And from Google.

To those who decide to skip WebRTC just because it doesn’t run on IE or isn’t supported by Safari (without really understanding what WebRTC means) – this should be the best wake up call. Coming directly from Google, the company that wants everything running in the browser.

Recognize anyone in the Duo app?

If tech media outlets taught me anything this time, it is that you should be suspicious of what they write.

Ingrid Lunden on TechCrunch did a nice write up on Duo, offering the gist of it:

  • 1:1 video chat app, like FaceTime
  • Focus is on super fast (responsiveness) and media quality
  • You see the caller’s video before you answer a call. A nice gimmick I guess
  • Based on WebRTC

This is where things fall apart a bit in her coverage:

The other thing that Duo is touting is the engineering that has gone into making the video in the app work. Google says it will work the same whether your network is superfast or patchy. This in itself, if it really bears out, would be amazing for anyone who has cursed his or her way through a bad Hangout or Skype call.

Duo was built by the same team that created WebRTC and it uses WebRTC, engineering director Erik Kay said today on stage at I/O. It was built using a new programming protocol, Quic, which Google unveiled last year as a route to speeding up data-heavy applications that travel over the web.

So Duo has this magic of working better than Hangouts and Skype. Great. So why didn’t Google just build it into Hangouts? Especially considering both use WebRTC…

That reference to the QUIC protocol – to be sure – does NOTHING to the actual media. It only affects the time you wait until the smartphone “dials”. You shave a few hundred milliseconds there, but that won’t move the needle in the industry either way.

Mashable’s Raymond Wong explains QUIC and how it is a serious advantage:

Google says people don’t place as many video calls with their friends and family because connections can sometimes be spotty and drop. Duo uses a new protocol called QUIC that’s supposed to be more robust than any other video calling infrastructure out there.

QUIC won’t make the call more robust or make calls work better. It will just make the initial connection faster, or have the mute button appear QUICker on the other end’s device. QUIC is a nice example of how Google can go to extremes sometimes with optimizing the technology. Sometimes it makes a lot of sense, other times less so. QUIC is definitely a step forward from TCP, but its effect on video calling isn’t huge.

What do we have here? Apple FaceTime, done by Google, working on both Android and iOS. Nothing more and nothing less.

 

There’s also Jibe

An acquisition from last year, placing Google as a serious RCS player.

No mention of it in I/O. Probably because its focus is on “fixing”/”improving”/”popularizing” the basic Google Messenger app, which does SMS.

Since this is something that needs to be synchronized with carriers, it will take time to materialize.

The future of Hangouts

Is the enterprise.

With Allo and Duo, why should consumers even care about Hangouts from now on?

Can this succeed?

Can such an approach succeed for Google? Having multiple communication apps, two of them announced in the same day.

Can they reach mass adoption?

Google is taking the path of unbundling here, but doing it to what was until now the same service – communications. They split it into multiple smaller apps, tearing real time voice and video calling from current messaging apps. It feels somewhat like iMessage and FaceTime, but Allo is more capable than iMessage (sans SMS) and Duo is a bit more capable than FaceTime (the knock knock feature).

I can’t really decide if taking this unbundling approach is better or worse. Will it increase user engagement with these services or hurt them? And where does Google Hangouts fit in here, if at all?

The post Allo, Duo, Hangouts or Jibe? Help… appeared first on BlogGeek.me.

WebRTC Signaling Protocols and WebRTC Transport Protocols Demystified

Mon, 05/16/2016 - 12:00

A refresher on what I’ve written in 2014 (here and here).

Can you guess the signaling and transport here?

WebRTC as a protocol comes without signaling. This means that you as a developer will need to take care of it.

The first step will be selecting the protocol for it. Or more accurately – two protocols: transport and signaling. In many cases, we don’t see the distinction (or just don’t care), but sometimes they are important. A recent question in the comments section of one of the two posts mentioned at the beginning got me to write this explanation. Probably yet again.

WebRTC Transport Protocols and Browsers

This actually fits any browser transport protocol.

A transport protocol is necessary for us to send a message from one device to another. I don’t care what is in that message or how the message is structured at this point – just that it can be sent – and then received.

HTTP/1.1

5 years ago browsers were simple when it came to transport protocols. We essentially had HTTP/1.1 and all the hacks on top of it, known as XHR, SSE, BOSH, Comet, etc. If you are interested in the exact mechanics of it, then leave a comment and I’ll do my best to explain in a future post (though there’s a lot of existing explanation around the internet already).

I call the group of solutions on top of HTTP/1.1 workarounds. They make use of HTTP/1.1 because there was no alternative at the time, but they do it in a way that makes no technical sense.

Oh – and you can even use REST to some extent, which is again a minor “detail” above HTTP/1.1.

Since then, three more techniques have materialized: WebSocket, WebRTC and, recently, HTTP/2.

WebSocket

WebSocket was added to do what HTTP/1.1 can’t: provide a bidirectional mechanism where both the client and the web server can send each other messages. What these messages are, what they mean and what format they follow was left to the implementer of the web page to decide.

There’s also socket.io, or the less popular SockJS. Both offer client side implementations that simulate WebSocket in cases where it cannot be used (the browser or a proxy doesn’t support it). If you hear that the transport is socket.io – for the most part you can just think about it as WebSocket.

When WebSocket works, it is great. But sometimes it doesn’t (more on that below, in the HTTP/2 part).
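To make the transport/signaling distinction concrete, here is a minimal sketch of WebSocket acting purely as the transport, shuttling whatever JSON messages your own signaling protocol defines (the URL and message types here are made up for illustration):

  // Sketch: WebSocket is only the pipe; the JSON payload is whatever signaling
  // protocol you define on top of it. The endpoint below is hypothetical.
  const socket = new WebSocket('wss://example.com/signaling');

  function send(message: object): void {
    socket.send(JSON.stringify(message));
  }

  socket.onopen = () => send({ type: 'join', room: 'demo' });

  socket.onmessage = (event: MessageEvent) => {
    const message = JSON.parse(event.data);
    switch (message.type) {
      case 'offer':  /* pc.setRemoteDescription(...), then create and send an answer */ break;
      case 'answer': /* pc.setRemoteDescription(...) */ break;
      case 'ice':    /* pc.addIceCandidate(...) */ break;
    }
  };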

WebRTC’s Data Channel

To some extent, the Data Channel in WebRTC can be used for signaling.

Yes. You’ll need to negotiate IP addresses and use ICE first – and for that you’ll need an additional layer of signaling and transport (from the list in this post here), but once connected, you can use the data channel for it.

This can be done either directly between the two peers, or through intermediaries (for multiple reasons).

Where would you want to do that?

  1. To reduce latency in your signaling – this is theoretically the fastest you can go
  2. To reduce load on the server – now it won’t receive all messages just to route them around – you’ll be sending it things it really needs
  3. To increase privacy – not sending messages through the server means the server can’t be privy to their content – or even the fact there was communication

For the most part, this is quite rare as transport for signaling in WebRTC.
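If you do go down this path, the usual pattern is to bootstrap the first connection over a regular transport, and only then move subsequent signaling – renegotiations, application events – onto a dedicated data channel. A rough sketch of what that might look like:

  // Sketch: a dedicated data channel carrying later signaling messages
  // (renegotiations, app-level events). The initial offer/answer still has
  // to travel over a regular transport such as HTTP or WebSocket.
  const pc = new RTCPeerConnection();
  const signaling = pc.createDataChannel('signaling');

  pc.onnegotiationneeded = async () => {
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    if (signaling.readyState !== 'open') {
      return; // first negotiation: send this offer over your regular signaling instead
    }
    // Later renegotiations can ride on the data channel itself
    signaling.send(JSON.stringify({ type: 'offer', sdp: offer.sdp }));
  };

  signaling.onmessage = async (event) => {
    const message = JSON.parse(event.data);
    if (message.type === 'answer') {
      await pc.setRemoteDescription({ type: 'answer', sdp: message.sdp });
    }
  };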

HTTP/2

I’ve written about HTTP/2 before. Since then, HTTP/2 has grown in its popularity and spread.

HTTP/2 fixes a lot of the limitations in HTTP/1.1, which can make it a good long term candidate for transport of signaling protocols.

A good read here would be Allan Denis’ writeup on how HTTP/2 may affect the need for WebSocket.

 

WebRTC Signaling Protocols

Signaling is where you express yourself. Or rather, your service does. You want one user to be able to reach out to another one. Or a group of people to join a virtual room of sorts. To that end, you decide what types of messages you need, what they mean, what they look like, etc.

That’s your signaling protocol.

As opposed to the transport protocol, you aren’t really limited by what the browser allows, but rather by what you are trying to achieve.

Here are the 3 main signaling protocols out there in common use with WebRTC:

SIP

I hate SIP.

Never really cared for it.

It has its uses, especially when it comes to telephony and connecting to legacy voice and video services.

Other than that, I find it too bloated, complex and unnecessary. At least for most of the use cases people approach me with.

SIP comes from the telephony world. Its main transport was UDP. Then TCP and TLS were added as transport protocols for it. Later on SCTP. You don’t care about any of these, as you can’t really access them directly with a browser. So what was done was to add WebSocket as a SIP transport and just call it “SIP over WebSocket”. Before WebRTC got standardized (it hasn’t yet), SIP over WebSocket got standardized and already has an RFC of its own. Why is it important? Because the only use of SIP over WebSocket is to enable it to use WebRTC.

So there’s SIP. And if you know it, like it or need it – you can use it as your WebRTC signaling protocol.

XMPP

I hate XMPP.

Not really sure why. Probably because any time I say something bad about it, a few hard core fans/followers/fanatics of XMPP come rushing in to its rescue in the comments section. It makes things fun.

XMPP has a worldview revolving around presence and instant messaging, and use cases that need it can really benefit from it – especially if the developer already knows XMPP and what he is doing.

If you like it enough – make sure to slam me in the comments – you’ll find their section at the end of this post…

Proprietary

I hate NIH. And yet a proprietary signaling protocol has a lot of benefits in my view.

In many cases, you just want to get the two darn users onto the “same page”. Not much more. I know I am dumbing it down, but the alternative is to carry around extra protocol messages you don’t need or intend to use.

In many other cases, you don’t really want to add another web server to handle signaling. You want your web server to host the whole site. So you end up with a proprietary signaling protocol. You might not even call it that, or think of it as a signaling protocol at all.
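For what it’s worth, a proprietary protocol can be embarrassingly small. A hedged sketch of the kind of message vocabulary that covers most 1:1 calling services – all the names here are illustrative, shape them to your own service:

  // Sketch of a tiny proprietary signaling vocabulary. All names are
  // illustrative; the point is that 4-5 message types often suffice.
  type SignalingMessage =
    | { type: 'join';   room: string; displayName: string }
    | { type: 'offer';  room: string; sdp: string }
    | { type: 'answer'; room: string; sdp: string }
    | { type: 'ice';    room: string; candidate: RTCIceCandidateInit }
    | { type: 'leave';  room: string };

  async function handle(message: SignalingMessage, pc: RTCPeerConnection): Promise<void> {
    switch (message.type) {
      case 'offer':
        await pc.setRemoteDescription({ type: 'offer', sdp: message.sdp });
        // create an answer here and send it back as { type: 'answer', ... }
        break;
      case 'ice':
        await pc.addIceCandidate(message.candidate);
        break;
      // 'join', 'answer' and 'leave' map onto your own application logic
    }
  }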

How to Choose?

Always start from the signaling protocol.

If there’s reason to use SIP due to existing infrastructure or external systems you need to connect to – then use it. If there’s no such need, then my suggestion would be to skip it.

If you like XMPP, or need its presence and instant messaging capabilities – then go use it.

If the service you are adding WebRTC to already has some logic of its own, it probably has signaling in there. So you just add the relevant messages you need to that proprietary signaling.

In any other case, my advice would be to use a proprietary signaling solution that fits your exact need. If you’re fine with it, I’d even go as far as picking a SaaS vendor for signaling.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post WebRTC Signaling Protocols and WebRTC Transport Protocols Demystified appeared first on BlogGeek.me.

Last Chance to Enjoy a $700 Discount on my WebRTC PaaS Report

Fri, 05/13/2016 - 14:00

Grab your copy now.

I am in the last stretch of updates for my Choosing a WebRTC API Platform report. In the past month, the report has been available at a discounted price – from $1950 down to $1250. Purchasing the report includes 1 year of updates, which means that if you get your copy now – you’ll be receiving the new update next week.

What’s new in the report?

Things are in constant change within the WebRTC ecosystem, and the best place to see it is in the API space. Since the last update, we’ve experienced the rebranding of Comverse as XURA, which affected their Forge platform as well.

Here’s what you will find in the updated report, due next week:

  • Updated all vendor profiles and feature sets, so they now reflect their current state
  • Added a new vendor – QuickBlox. This brings us to 24 covered platforms in the report
  • I added a new KPI to the report – investment level – where I indicate, for the period between updates, how much investment was made in new features and capabilities in the platform. This can be an indicator of the level of commitment the vendor has to its platform and what to expect moving forward when it comes to new features being introduced
  • I’ve written a new Vendor Selection Blueprint. This document can assist you in the vendor selection process by guiding you through it. It includes an Excel sheet as well as a mockup example of such a process for an imaginary use case
  • Presentation deck of the visuals has been redesigned and improved, so now if you need visuals – they will be even more professional looking
What do you get when you purchase the report?

The report itself isn’t only a PDF file you print and put on your manager’s table. It includes a lot more than that:

  • The report, in PDF format (obviously)
  • 1 year of free updates, these will cover 1-2 more updates (I tend to publish them every 6-8 months or so)
  • Site membership access to additional materials
  • Online comparison matrix, to make quick comparisons easy to handle
  • Presentation visuals, which you can use in your own presentations
  • Vendor Selection Blueprint, to guide you through the vendor selection process
  • Access to the monthly Virtual Coffee sessions as well as the archived sessions
How to purchase?

Online.

  1. Go to the WebRTC PaaS report page
  2. Scroll down to the end of the page
  3. Select the Premium option and press the BUY NOW button
  4. Use your PayPal account or a credit card to make the purchase

If you do this in the next couple of days – you are guaranteed to enjoy the discounted price.

The post Last Chance to Enjoy a $700 Discount on my WebRTC PaaS Report appeared first on BlogGeek.me.

What will Happen when iOS Webviews Adopt WebRTC?

Thu, 05/12/2016 - 12:00

The real benefits of Apple adopting WebRTC have been left out of the conversation.

Can you help Apple find WebRTC?

There’s been too much chatter recently about Apple adding WebRTC. I am definitely in the opinion of Fippo here:

Things are going wild on the twitter #webrtc tag. Not a day without someone writing about Apple and WebRTC. Usually with little actual information.

I am not one to say I have inside information – I don’t. I don’t even know personally any Apple employee.

What I can say for sure is that the real discussion on why Apple is important in the WebRTC ecosystem has been ignored – as has the only place where it really matters.

Apple can add WebRTC in 3 places:

  1. Safari on Mac OS X
  2. Safari on iOS
  3. Webview on iOS

Just as a point of reference, when Google adopted WebRTC, it added it to Chrome on the desktop, then to Chrome on Android and somewhat later to Android webview. Not surprisingly, the priorities were decided based on the complexity and risk of the tasks (from the “easiest” to the most complex).

WebRTC in Mac OS X Safari

Safari on Mac OS X is nice, but at this point it won’t matter much. For the most part, Chrome is the leading browser these days – surpassing even IE; and from asking around, it seems that Mac users are used to switching from Safari to Chrome when needed – it isn’t unheard of.

Adding support for WebRTC to Safari on Mac OS X is nice, but for the most part, it won’t change things in any meaningful way.

WebRTC in iOS Safari

Safari on iOS is interesting. For iOS, there’s only a single alternative today, and that’s to port WebRTC on your own and integrate it into your app.

While this works well for most use cases, there are a few edge cases where this isn’t desirable.

Here are 4 such areas:

#1 – Porn

I had an interesting discussion two years ago with a porn vendor who wanted to start exploring WebRTC as a long term solution and a migration path from Flash.

Their main concern was that porn viewers were migrating from the desktop to their smartphones. I guess it has something to do with phone use in restrooms.

Surprisingly (or not), the main reason for him to want WebRTC was iOS. Applications on iOS are required to be puritan to a large degree. If an app’s content doesn’t abide by the App Store submission rules (and porn doesn’t), then the app won’t get approved. This becomes a kind of headache if what you do is serve porn.

There are two ways to “fix” that today, both run in the browser (Safari on iOS that is):

  1. Use HLS to stream the video, but the latency wasn’t good for this vendor. He needed low latency. In his words, “the viewer needs to be able to interact with the performer and tell her what to do”. Apparently, waiting 20 or 30 seconds until the performer responds to the viewer’s whims isn’t fast enough
  2. Capture JPEG images and send them over HTTP as if they were video. It means 3 frames per second or something just as stupid, but it seems porn viewers are happy with it. At least to some extent

Having WebRTC in Safari for iOS means no need to go through the approval process of Apple’s App Store, something impossible for such companies.

As you might have guessed, I learned a lot in the meeting with that company.

#2 – Click to dial

There are times when WebRTC is used for customers and potential customers to reach out to the vendor. They happen to search the internet, bump into your travel agency, and want to make a call to book a flight. Or they might have bumped into an issue with the toaster they purchased and want to ask for assistance. Whatever the case is, that person has no inclination to install an app on his phone just to make that one time interaction.

WebRTC in Safari on iOS means this is now achievable for iOS users – and not only Android ones.

#3 – Guest Access

In many cases, there’s a UC (Unified Communications) system already in use. While its focus is on employees communicating with each other, these systems also allow for guest access. Think of joining one of them GoToMeeting or WebEx sessions. The first thing you do is install a client to be able to join – or fumble around with a phone number and a PIN code. Both ugly practices.

WebRTC lets you leave that behind by sending the guest a URL in an email – not a URL with instructions in it, or an installation link – but a URL to the actual session. Along the way, you can also make that URL unique per participant if you wish. This is already available today – unless you use an iOS device.

#4 – Gaming (the gambling kind)

Apple takes 30% of all purchases made through apps. 30%

Gambling and booking usually work on profit margins lower than 10%.

That being the case, how, if at all, can they make money out of gamblers using apps without letting them pay through the app? The whole idea of gambling is the reduction of friction.

Now, getting gamblers to open a URL and play from there, with whatever interactivity they wish to add through the use of WebRTC – that’s a useful capability.

WebRTC in iOS Webview

This is where things get really interesting.

As stated earlier, most consumption on mobile today is done via apps. For WebRTC, most of these apps are developed as native apps. This happens for a couple of reasons:

  1. Force of habit. That’s just life as we know it today
  2. Native tends to work slightly better than HTML5 apps
  3. WebRTC dictates porting it and wrapping it as an SDK on iOS today

While native is great, HTML5 is even better. It offers cross platform development capabilities across desktop, browser and mobile – and in them, across ALL operating systems. WebRTC isn’t there yet because Apple isn’t there yet.

Add to that the new technique behind PWAs (Progressive Web Applications), and you may well find HTML5 enticing for your service. To support such a thing, WebRTC on iOS isn’t enough, but it is still needed. What got this piece of technology onto my radar was this write-up by Henrik Joreteg – Why I switched to Android after 7 years of iOS. It goes into detail on the user experience of PWAs.

Even if you decide to stick with native development, having WebRTC implemented, optimized and fine tuned by Google and Apple for their respective mobile operating systems – and then just slapping a Webview in place to use it in your app – is a worthwhile investment.

Will Apple add WebRTC to its products? Probably yes

Do we know when it will happen? No

Should we prepare for it? Maybe. You tell me

 

The post What will Happen when iOS Webviews Adopt WebRTC? appeared first on BlogGeek.me.

VP8 vs VP9 – Is this about Quality or Bitrate?

Mon, 05/09/2016 - 12:00

Both.

VP8 and VP9 are video codecs developed and pushed by Google. Up until recently, we had only VP8 in Chrome’s WebRTC implementation and now we have both VP8 and VP9. This led me to several interesting conversations with customers around if and when to adopt VP9 – or whether they should use H.264 instead (but that’s a story for another post).

This whole VP8 vs VP9 topic is usually misunderstood, so let me try to put some order in things.

First things first:

  1. VP8 is currently the default video codec in WebRTC. Without checking, it is probably safe to say that 90% or more of all WebRTC video sessions use VP8
  2. VP9 is officially and publicly available from Chrome 49 or so (give or take a version). But it isn’t the default codec in WebRTC. Yet
  3. VP8 is on par with H.264
  4. VP9 is better than VP8 when it comes to resultant quality of the compressed video
  5. VP8 takes up less resources (=CPU) to compress video

With that in mind, the following can be deduced:

You can use the migration to VP9 for one of two things (or both):

  1. Improve the quality of your video experience
  2. Reduce the bitrate required

Let’s check these two alternatives then.

1. Improve the quality of your video experience

If you are happy with the amount of bandwidth required by your service, then you can use the same amount of bandwidth but now that you are using VP9 and not VP8 – the quality of the video will be better.

When is this useful?

  • When the bandwidth available to your users is limited. Think 500 kbps or less – cellular and congested networks come to mind here
  • When you plan on supporting higher resolutions/better cameras etc.
2. Reduce the bitrate required

The other option is to switch to VP9 and strive to stay with the same quality you had with VP8. Since VP9 is more efficient, it will be able to maintain the same quality using less bitrate.

When is this useful?

  • When you want to go “down market” to areas where bandwidth is limited. Think of a service from a developed country expanding into developing countries
  • When you want to serve enterprises, who need to conduct multiple parallel video conferences from the same facility (bandwidth towards the internet becomes rather scarce in such a use case)
How is bitrate/quality handled in WebRTC by default?

There is something that is often missed here. I used to know it about a decade ago and then forgot, until recently, when I did the comparison between VP8 and VP9 in WebRTC on the network.

The standard practice in enterprise video conferencing is to never use more than you need. If you are trying to send a VGA resolution video, any reputable video conferencing system will not take more than 1 Mbps of bitrate – and I am being rather generous. The reason for that stems from the target market and timing.

Enterprise video conferencing has been with us for around two decades. When it started, a 1 Mbps connection was but a dream for most. Companies who purchased video conferencing equipment needed (as they do today) to support multiple video conferencing sessions happening in parallel between their facilities AND maintain a reasonable internet connection for everyone in the office at the same time. It was common practice, for example, to throttle the internet connection for everyone in the company at the quarterly analyst call – to make sure bandwidth was properly allocated for that one video call.

Even today, most enterprise video conferencing services with legacy in their veins will limit the bitrate that WebRTC takes up in the browser – just because.

WebRTC was developed with internet thinking. And there, you take what you are given. This is why WebRTC deals less with maximum bandwidth and more with available bandwidth. You’ll see it using VP8 with Chrome – it will take up 1.77 Mbps (!) when the camera source is VGA.

This difference means that without any interference on your part, WebRTC will lean towards improving the quality of your video experience when you switch to VP9.

One thing to note here – this all changes with backend media processing, where more often than not, you’ll be more sensitive to bandwidth and might work towards limiting its amount on a per-session basis anyway.
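If you do want to put a ceiling on what WebRTC takes, the common approach today is to insert a bandwidth line into the video section of the SDP before setting it. A hedged sketch (it assumes the video m-section carries its own c= line, as Chrome’s offers typically do, and the kbps figure is just an example):

  // Sketch: cap video bandwidth by inserting a b=AS line (kbps) into the
  // video m-section of the SDP. This is SDP munging; verify behavior on
  // the browsers you target.
  function capVideoBandwidth(sdp: string, kbps: number): string {
    const lines = sdp.split('\r\n');
    const out: string[] = [];
    let inVideo = false;
    for (const line of lines) {
      if (line.startsWith('m=')) inVideo = line.startsWith('m=video');
      // Drop any existing bandwidth line in the video section
      if (inVideo && line.startsWith('b=AS:')) continue;
      out.push(line);
      // Insert our cap right after the connection line of the video section
      if (inVideo && line.startsWith('c=')) out.push(`b=AS:${kbps}`);
    }
    return out.join('\r\n');
  }

Calling capVideoBandwidth(offer.sdp, 500) on the SDP before setting it would cap the session at roughly 500 kbps of video.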

All Magic Comes with a Price

We haven’t even discussed SVC here and it all looks like pure magic. You switch from VP8 to VP9 and life is beautiful.

Well… like all magic, VP9 also comes with a price. For a start, VP9 isn’t as stable as VP8 yet. And while this is definitely going to improve in the coming months, you should also consider the following challenges:

  • If you thought VP8 is a resource hog, then expect VP9 to be a lot more voracious with its CPU requirements
  • It isn’t yet available in hardware acceleration, so this is going to be challenging (VP8 usually isn’t either, but we’re coping with it)
  • Mobile won’t be so welcoming to VP9 now I assume, but I might be mistaken
  • Microsoft Edge won’t support it any time soon (assuming you care about this Edge case)

This is a price I am willing to pay at times – it all depends on the use case in question.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

 

 

The post VP8 vs VP9 – Is this about Quality or Bitrate? appeared first on BlogGeek.me.

What if WebRTC SDP Munging was Prohibited?

Thu, 05/05/2016 - 12:00

How will we be able to live in a world without… SDP?

The thing I love best about the WebRTC Standards website is that it looks at a place I neglect most of the time – the IETF and W3C. While I had my share of dealings with standardization organizations when I was young and pretty, it isn’t something I like doing much these days.

Last month, it seems a decision was made – or is in the process of being made – to prohibit SDP munging. As these things go, if this happens at all it will take a VERY long time to happen. That said, such a change will have a huge impact on a lot of services that make use of this practice.

What’s WebRTC SDP munging?

SDP munging is the process of a WebRTC application taking its future into its own hands and deciding to change the SDP. With WebRTC, once the application sets the user media and connects it with a peer connection (=setting up to start a session), it receives the SDP blob that needs to be sent to the other participant in the session. This blob holds all of its capabilities and intents for the session.

If you want to learn more about the contents of the SDP, then this article on webrtcHacks will get you started.

Here’s a quick flow of what happens:

Where SDP munging takes place in WebRTC

Now that the application holds the SDP blob, the question that must be asked is what the application should/can do with this SDP blob:

  1. The application should pass it to the other participant. Probably by placing it in an HTTP request or a Websocket message
  2. The application can change it (=mung it) before well… setting and sending it

The problem is in that second part.
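In code, the munging pattern usually looks like this: intercept the SDP between createOffer and setLocalDescription, edit the text blob, then hand the edited version both to the browser and to the other side. A simplified sketch – mungeSdp and sendToPeer stand in for whatever text surgery and signaling transport your application actually uses:

  // Sketch of the munging flow. mungeSdp() is a placeholder for whatever
  // textual changes the application makes (codec preferences, bitrate caps,
  // removing lines, ...). sendToPeer() is your signaling transport.
  declare function mungeSdp(sdp: string): string;
  declare function sendToPeer(description: RTCSessionDescriptionInit): void;

  async function callWithMunging(pc: RTCPeerConnection): Promise<void> {
    const offer = await pc.createOffer();
    const munged = { type: offer.type, sdp: mungeSdp(offer.sdp!) };

    // The browser now has to re-interpret our hand-edited blob...
    await pc.setLocalDescription(munged);
    // ...and so does the remote browser.
    sendToPeer(munged);
  }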

What’s the problem with WebRTC SDP munging?

SDP embodies everything that is wrong about SIP. Or at least some of what’s wrong about SIP

There are several aspects to it:

  1. Being a textual kind of protocol that is open as hell, it is open to interpretation by humans, making it hard to use. Interoperability is a headache with it, and now we’re leaving it in the hands of web developers. It becomes doubly hard, as there are extensions to SDP – some standardized, some in process and some just proprietary – and you need to sift through them all to decide what to do on the SDP level
  2. When you modify the SDP, it is assumed that the browser needs to interpret your modifications. Since it already created an SDP, it had its own understanding of what you want, but now it needs to interpret it yet again – and instead of doing that through an API, it has to do it via an ugly text blob. And browsers are created by humans, so they might not interpret it the same way you did when you munged it – or different browsers might interpret it differently
  3. New browser versions might not be able to interpret what you munged, simply because that isn’t part of their main focus. The smaller you are, the more susceptible you will be when practicing SDP munging – what you do there might not be as popular as you thought (or not defined as popular by browser vendors) – and it will break in some future version
  4. SDP isn’t fun to modify with JavaScript. So it frustrates developers, which ends up leading to more bugs and inconsistencies
What happens if and when it gets prohibited?

When SDP munging gets banned, existing applications that rely on it will break.

They might break completely, but mostly, they’ll break in ways that are less predictable – codecs won’t be configured in the exact way the developer intended, bitrates won’t be controlled properly, etc.

The whole idea behind SDP munging is to get more control over what the browser decides to do by default, so disabling it means losing that control you had.

When is this change expected?

Not soon, if at all.

That said, I wouldn’t recommend ignoring it.

What I’ve understood is that there’s little chatter about this on the standards mailing lists, so this just might die out.

The reason I think it is important is because at the end of the day, munging the SDP leaves you prone to whims of browser vendors as well as leaves you open to this future option of banning SDP munging.

What should you do about it?

First of all – don’t worry. This one will take time. That said, better plan ahead of time and not be surprised in the future. Here’s what I’d do:

  1. Refrain from practicing SDP munging as much as possible
  2. Since we’re already starting to see some of the ORTC APIs trickling into WebRTC, you should make an active investment now and in the near future to use these APIs whenever you feel the urge to make changes in the SDP – assuming what you need is supported at the API level and not only via the SDP (see the sketch below for one example)
  3. If you aren’t sure, then check the code you have to see if you are practicing SDP munging, and if you are, make some kind of plan on how to wean yourself off it
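As one example of point 2 above, bitrate is an area where an API path already exists: RTCRtpSender.setParameters() can cap the encoder without touching the SDP. Browser support for it varies, so treat this as a hedged sketch and verify on the browsers you target:

  // Sketch: capping outgoing video bitrate via the sender API instead of
  // munging b=AS into the SDP. Check browser support before relying on it.
  async function capOutgoingVideo(pc: RTCPeerConnection, maxBitrateBps: number): Promise<void> {
    const sender = pc.getSenders().find((s) => s.track?.kind === 'video');
    if (!sender) return;

    const parameters = sender.getParameters();
    if (!parameters.encodings?.length) {
      parameters.encodings = [{}]; // some implementations return an empty list here
    }
    parameters.encodings[0].maxBitrate = maxBitrateBps;
    await sender.setParameters(parameters);
  }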

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post What if WebRTC SDP Munging was Prohibited? appeared first on BlogGeek.me.

The WebRTC Slack-Rush

Mon, 05/02/2016 - 12:00

If the only thing you have is IP calling, then why are you investing in a Slack integration this late in the game?

Looking for gold in Slack by adding WebRTC calls to it?

Slack is a rising star. It has a small and growing set of users, some of whom are happy to pay for the service. When it works, it is great. When it doesn’t, well… it then just feels like any other UC or enterprise communication service. I find myself using Slack more and more. Not necessarily because I need to, but rather because I am drawn to it by the teams I collaborate with. I like the experience.

In the last few months it seems that everyone is rushing to Slack, trying to build their own WebRTC integration with it. The latest casualty? LyteSpark.

Browsing Slack’s App Directory, I found the following WebRTC based services under the Communications category:

  • Google Hangouts
  • Skype
  • appear.in
  • GoToMeeting free
  • Room
  • UberConference
  • Limnu
  • Blue Jeans
  • Screenleap
  • Yodel
  • Videolink2.me
  • Quickchat
  • KOMASO

There are others not in the marketplace, and probably a few more in other categories or ones that I just missed.

The problem with many of them is that Slack is actively adding VoIP now – using WebRTC of course.

As I always stated, WebRTC downgrades real time communications from a service to a feature. And now, Slack is adding this feature themselves.

The problem now becomes that these WebRTC services are competing with the built-in feature of Slack – something that will be infinitely easier and simpler to use – especially on mobile, where it is just there. What would be the incentive then to use a Hangouts bot when I can just start the same functionality from Slack without any integration? This is doubly so for free accounts, which are limited to 10 integrations.

The only WebRTC services that can make sense in such a case are those that have some distinct added value that isn’t available (or won’t easily become available through Slack’s roadmap). It boils down to two capabilities:

  1. Seamless integration with PSTN calling. This is what OttSpott does. I think this is defensible simply because I don’t see Slack going after that market. They will be more inclined to focus on IP based solutions. Just a gut feeling – nothing more
  2. Solving a higher level problem than pure voice or video calling. Maybe a widget integration with the customer’s website for click-to-call capabilities, though it can be some other capabilities that focus on a smaller niche or vertical

This Slack-rush of WebRTC services seems a bit unchecked. Basking under the light of WebRTC doesn’t work anymore, so time to move to some other hype-rich territory, and what better place than Slack? Problem is, without a real business problem to solve (conducting a video call over the web isn’t a business problem), Slack won’t be the solution.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post The WebRTC Slack-Rush appeared first on BlogGeek.me.

WebRTC and Server GPUs? A whitepaper

Fri, 04/29/2016 - 13:00

GPUs are most probably where we’re headed.

A couple of months ago, I was approached by SURF, an Israeli vendor specializing in server-side media processing. Like many of its peers, SURF has been migrating from hardware based DSP systems to software systems in their architecture. As they’ve entered the WebRTC space, they wanted to have a whitepaper on the topic, and I accepted the challenge.

The end result? WebRTC Server Side Media Processing: Simplified

Download the whitepaper

Two things that I wanted to share here:

#1 – WebRTC Server Side Media Processing is real

What made writing this whitepaper so interesting for me was the fact that there really is a transition happening – not to using WebRTC – that already happened as far as I can tell. It is something different. A transition from simple WebRTC services that require a bit of signaling to services that process the media in the backend. This processing can be anything from recording to gatewaying, streaming, interoperating or modifying media in transit. And it seems like many commercial use cases that start simple end up requiring some kind of server side media processing.

In the span of the last two months, I’ve seen quite a few services that ended up building some WebRTC server side media processing for their use case. Maybe it is just related to the research I did around this area for the whitepaper, but I think it is more than that.

#2 – The Future Lies in GPUs

As I was working on the whitepaper, this one got published by Jeff Atwood – it is about AI winning a game of Go. Or more accurately, how GPUs are a part of it:

GPUs are still doubling in performance every few years

The whole piece is really interesting and a great read. It also fits well with my own understanding and knowledge of video compression (=not that much).

Two decades ago, video compression was a game of ASICs – the ugliest piece of technology possible. Hard to design and develop. You wanted to implement a new video codec? Great. Carve out a few years for the task and a couple of millions to get there. They are hard to design and hard to program for.

Later it was all DSPs. Still hard and ugly, but somewhat cheaper and with some flexibility as to what can get done with them. DSPs are what power most of our phones when it comes to recording and playing back videos. It works pretty well and makes it seem as if the device in our pocket is really powerful – until you try using its CPU like a real PC.

GPUs were always there, but mostly for gaming. They do well with polygons and things related to 2D and 3D graphics, but were never really utilized for video compression. Or at least that’s what I thought. I heard of CUDA in passing. Heard it was hard to program for. That was something like 5 years ago I believe.

Then I read about GPUs being used to break hashes, which was an indication of their use elsewhere. The Jeff Atwood piece indicated that there are other workloads that can benefit from GPUs. Especially ones that can be parallelized, and to some extent, video compression is such a task. It is also where SURF is focusing with its own server media processing, which places them in the future of that field.

GPUs are no longer used only for gaming or in our PCs and laptops – they are also being deployed in the cloud. They assist companies running AutoCAD in the cloud (I heard such a story at the recent WebRTC Global Summit event), so why not use them for video compression when possible?

If you are interested in WebRTC and how media processing is finding its way to the server, and how that fits in with words like cloud and GPU, then take a look at this new whitepaper. I hope you’ll enjoy reading it as much as I’ve enjoyed writing it.

Download and read this new WebRTC whitepaper.

The post WebRTC and Server GPUs? A whitepaper appeared first on BlogGeek.me.
