The leading authority on WebRTC

Upcoming: WebRTC Summit and my Next Virtual Coffee

Sat, 10/24/2015 - 15:30

Here’s what to expect during November.

Just wanted to share two things during this weekend.

WebRTC Summit, testing and San Francisco

I am traveling on the first week of November to San Francisco. The idea is to talk about WebRTC testing (and testRTC) at the WebRTC Summit.

I’ll be touching on the challenges of testing WebRTC, which is something that isn’t discussed much out there. The common assumptions are that:

  1. Either there’s no challenge or problem and all is well
  2. Or we’re still in the exploration phase with WebRTC, with little commercial value to it

I think there needs to be more focus in that area, and not just because I co-founded a WebRTC testing company.

I plan on being at the WebRTC Summit in Santa Clara on November 3-4. Here’s more about my session if you’re interested. I am already filling up my days around that summit with meetings in both Santa Clara and San Francisco – if you wish to meet, contact me and I’ll see if I can still squeeze you into my agenda.

Virtual Coffee with Tsahi

The first Virtual Coffee event took place a bit over a week ago. The recording of that session still isn’t available, but will be in a week or two.

It went well and I truly enjoyed the experience – the ability to handpick the people who can participate, get them signed in through my membership area on this website, and do it all under my own brand – it was great.

I’d like to thank (again) Drum’s team with their Share Anywhere service. It is as close to what I needed as could be – and easily customizable. Their team is great to work with as well (and no – they haven’t paid for me to say this).

The next session

When? November 11, 13:30 EDT

Where? Online, of course


What?

  • Microsoft Edge and ORTC – what you should know about it, and how to prepare for 2016
  • Open Q&A – on the topic above, or on any other topic

Who? These are closed sessions, available to the following groups:

  • Employees of companies with an active subscription to my WebRTC API Platforms report
  • Employees of companies I currently consult for

Last but not least

I noticed recently people contacting me and asking me not to share their stories on this blog.

To make it clear – there are three reasons for me to share stories here:

  1. I heard or read about it online, in a public setting. So the assumption is that the information is already public and sharable
  2. I specifically asked if this can be shared – and got permission. Usually this ends up as an interview on my site
  3. I share a story, but not the details about the specific company or the people involved

I put bread on the table mainly through consulting. This means being able to assist vendors, and that requires doing things in confidence – without sharing strategies, roadmaps, status and intents with others. If you contact me through my site, my immediate assumption is that what you share is private unless you say otherwise.


Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.


The What’s Next for WebRTC Can Wait Until We Deal With What’s Now

Thu, 10/22/2015 - 12:00

Why reminisce in the future when we’ve got so much to do in the here and now.

This week Chad wrote a post titled What’s Next for WebRTC? It is a good post, so don’t take this one as a rant or a critique of Chad. It is just that the moment I saw the title and some of the words on the accompanying visual (AR, VR, drones, Industrial, Computer Vision, 3D, Connected Cars), I immediately knew something was bugging me.

It wasn’t about the fact that WebRTC isn’t used for any of these things. It was due to two reasons:

  1. We’re still not scratching the surface of WebRTC yet, so what’s the rush with what’s next?
  2. I hate it when people stick a technology on anything remotely/marginally related. This is the case for the soup of words I saw in the visual…

On the second one, buzzword abuse, I can only say this: WebRTC may play a role in each and every one of these buzzwords, but its place in these markets will be minuscule compared to the markets themselves. For many use cases in these markets, it won’t be needed at all.

For the first one, I have decided to write this.

There are many challenges for those who wish to use WebRTC today. This is something I tried to address in the last Kranky Geek event – WebRTC is both easy and hard – depending on your pedigree.

VoIP developers will see it as the easiest way to implement VoIP. Web developers will find it hard – it is the hardest thing that you can add to a browser these days, with many moving parts.

Here’s the whole session if you are interested:

Here’s what I think we should strive for with WebRTC and even ask those who work to make it available for us as a technology:

#1 – Become TCP

TCP works. We expect it to work. There are no interoperability issues with TCP. And if there are, they are limited to a minuscule number of people who need to deal with it. WebRTC isn’t like it today.

WebRTC requires a lot of care and attention. This fresh interview with Dan about the WebRTC standard shows that. You’ll find there words about versioning, deprecation, spec changes, etc. – and the problem is they affect us all.

This brings us to this minor nagging issue – if you want to use and practice WebRTC, you need to be on top of your game and have your hand on the WebRTC pulse at all times – it isn’t going to be a one-off project where you invest in developing a web app or an app and then monetize and bask in the sun for years.

The other alternative is to use a WebRTC API vendor, who needs to take care of all that on his own. This can’t be easily achieved by those who need an on premise deployment or more control over the data. This alternative also speaks louder to developers than it does to IT managers in enterprises, leaving out part of the industry of potential adopters of WebRTC.

The faster WebRTC becomes like TCP the better.

#2 – More success stories of a variety of simple use cases

There are a lot of areas where I see vendors using WebRTC. Healthcare, learning, marketplaces, contact centers, etc.

In many cases, these are startups trying to create a new market or change how the market works today. While great, it isn’t enough. What we really need is stories of enterprises who took the plunge – like the story told by AMEX last year. We also need to see these startups grow and become profitable companies – or larger vendors who acquire technology (I am talking to you Slack, Atlassian and Blackboard) use them in their products.

The stories that I am interested in? They should be about the business side of things – how using WebRTC transformed the business, improved it, got adopted by the end customers.

Where are we?

With all the impressive numbers of WebRTC flying around, we still are in the early adopters phase.

We are also still struggling with the basics.

There are many great areas to explore with WebRTC – the large scale streaming space is extremely interesting to me. So is the potential of where WebRTC fits in IoT – which is even further out than large scale streaming. I love being a part of these projects, and those that seek them are at the forefront of this technology.

We’re not there yet.

But we will be.

There’s no stopping this train any time soon.


Test and Monitor your WebRTC Service like a pro - check out how testRTC can improve your service's stability and performance.



The Future of Messaging is…

Tue, 10/20/2015 - 12:00

A lot more than pure messaging.

Messaging used to be about presence and IM. Then the VoIP people came and placed the voice and video chat stickers on it. That then became unified communications. Which is all nice and well, but it is both boring and useless at this point. Useless not because the utility of the service isn’t there, but because the expectation of such a service is to be free – or close to that. Or as I like saying, it has now become a feature within another service more than a service in its own right.

While this is killing unified communications, it doesn’t seem to be making much of a dent on messaging just yet. And the reason I think is the two very different trajectories these are taking:

  • Unified Communications is focused on being the one true source of everything that gets federated with all other communication means
  • Messaging shifted towards becoming platforms, where the size of the ecosystem and its utility outweighs any desire or need to federate with other similar services

This migration of messaging towards becoming platforms isn’t so easy to explain. There’s no silver bullet of how this is done. No secret recipe that gets you there.

Here are a few strategies that different messaging platforms are employing in their attempt to gain future growth.

Whatsapp and Simplicity

Whatsapp is all about simplicity. It offers pure messaging that replaces the SMS for many, coupled with group messaging that makes it sticky and viral in many countries.

Features don’t make it into Whatsapp fast. The only thing that was added in the past two years of any notable value is voice calling.

With this approach, Whatsapp still is the largest player in town when it comes to messaging; and it is probably doing so with the smallest possible team size.

The problem with such an approach, is that there isn’t enough room for many such players – and soon, to be a viable player in this domain will require a billion monthly active users.

Apple and iMessage

In that same token, the Apple iMessage is similar. It is simple, and it is impossible to miss or ignore if you have an iPhone.

But it is limited to Apple’s ecosystem and runs only on iOS devices.

Google Hangout (and now Jibe Mobile)

Google Hangouts was supposed to do the same/similar on Android, but didn’t live up to the expectation:

  • Unlike Whatsapp, group chat is available in Hangouts, but isn’t viral or “mandatory”
  • Unlike Apple iMessage, the user needs to make a mental note of using Hangouts instead of the SMS app. There are two of those, and as a user, you are free to choose which one to use. Choice adds friction and complexity

With the acquisition of Jibe Mobile, this may change in the future. Will others follow suit? Is there enough utility and need in connecting messaging with Telco messaging, and especially with RCS, that many (myself included, at least until this acquisition) see as dead on arrival?

Facebook and Artificial Intelligence

Facebook is experimenting with artificial intelligence that is embedded into their Facebook Messenger service – not the social network where e-commerce is the current focus.

This new AI initiative is called Facebook M and is planned to be driven part by machines, part by humans.

In many ways, this is akin to the integration LivePerson (a chat widget for contact centers) has with knowledge bases that can cater to customer’s needs without “harassing” live agents in some cases. But this one is built into the messaging service the customer uses.

It is compared to Siri and Cortana, but you can also compare it to Google Now – once Facebook fleshes out the service, they can open up APIs for third parties to integrate to it, making it a platform for engaging with businesses.

WeChat and the Digital Life Platform

WeChat is large in Asia and dominant in many ways. It is an e-commerce platform and a digital life ecosystem.

Connie Chan of Andreessen Horowitz gives a good overview of what makes WeChat a platform:

Along with its basic communication features, WeChat users in China can access services to hail a taxi, order food delivery, buy movie tickets, play casual games, check in for a flight, send money to friends, access fitness tracker data, book a doctor appointment, get banking statements, pay the water bill, find geo-targeted coupons, recognize music, search for a book at the local library, meet strangers around you, follow celebrity news, read magazine articles, and even donate to charity … all in a single, integrated app.

WeChat transitioned from being a communication tool to becoming a platform. It has APIs that make it easy for third parties to integrate with it and offer their own services on top of WeChat’s platform.

While I use the term “from service to feature” when talking about VoIP and WebRTC, Connie Chan uses “where social is just a feature” to explain the transition WeChat has made in this space.

The ability to send messages back and forth and communicate in real time via voice and video is now considered table stakes. It is also not expected to be a paid service but a feature that gets monetized elsewhere.

Meanwhile in Enterprise Messaging

Slack, which Connie Chan also briefly notes in her account of WeChat, is the guiding light of enterprise messaging these days.

Unlike other players in this space, Slack has built itself around the premise of three strong characteristics:

  • Integration – third parties can integrate their apps into Slack, and in many cases, Slack integrates automatically through links that get shared inside messages. Integrations that make sense and bring value to larger audiences get wrapped into Slack itself – the acquisition of Screenhero and the plans to extend it to video conferencing show this route
  • Omnisearch – everything in Slack is searchable. Including the content of links shared on Slack. This makes for a powerful search capability
  • Slackbot – the slackbot is a Slack bot you can interact with inside the service. It offers guidance and some automation – and is about to enjoy artificial intelligence (or at the very least machine learning)

The enterprise platform is all about utility.

Slack is introducing AI and has its own marketplace of third party apps via integrations. The more enterprises use it, the more effect these two capabilities will have in reinforcing its growth and effectiveness.

While the fight seems to be these days between Unified Communications and Enterprise Messaging, I believe that fight is already behind us. The winner will be Enterprise Messaging – either because UC vendors will evolve into Enterprise Messaging (or acquire such vendors) or because they will lose ground fast to Enterprise Messaging vendors.

The real fight will be between modern Enterprise Messaging platforms such as Slack and consumer messaging platforms such as WeChat – enterprises will choose one over the other to manage and run their internal workforce.


Kranky and I are planning the next Kranky Geek - Q1 2016. Interested in speaking? Just ping me through my contact page.


WebRTC Basics: What’s a Video Codec Anyway?

Mon, 10/19/2015 - 12:00

Time for another WebRTC Basics: Video Codecs

I’ve been yapping about video codecs more than once here on this blog. But what is a video codec exactly?

If you’re a web developer and you are starting to use WebRTC, then there’s little reason (until now) for you to know about it. Consider this your primer to video coding.


A video codec takes the raw video stream, which can be of different resolution, color depth, frame rate, etc. – and compresses it.

This compression can be lossless, where all data is maintained (so when you decompress it you get the exact same content), BUT it is almost always going to be lossy. The notion is that we can lose data that our human eye doesn’t notice anyway. So when we compress video, we take that into account, and throw stuff out relative to the quality we wish to get. The more we throw – the less quality we end up with.
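To make that lossy trade-off concrete, here is a toy sketch – not any real codec, just coarse quantization of 8-bit sample values, where the step size of 16 is an arbitrary illustrative choice:

```javascript
// Toy lossy "compression": store coarse quantization levels instead of raw
// samples. Larger steps mean fewer distinct values to encode - and more loss.
function quantize(samples, step) {
  return samples.map(s => Math.round(s / step)); // throw away fine detail
}

function dequantize(levels, step) {
  return levels.map(l => l * step); // reconstruct approximate samples
}

const original = [12, 57, 130, 200, 255];
const decoded = dequantize(quantize(original, 16), 16);
console.log(decoded); // [16, 64, 128, 208, 256] - close to the original, but not equal
```

The decoded values are near the originals but never identical – that gap is the quality we traded away for a smaller stream.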

The video codec comes in two pieces:

  1. Encoder – takes the raw video data and compresses it
  2. Decoder – takes the compressed data created by an encoder and decompresses it

The decoded stream will be different from the original one. It will be degraded in its quality.

The Decoder is the Spec

The thing many miss is that in order to define a video codec, the only thing we have is a specification for a decoder:

Given a compressed video stream, what actions need to take place to decompress it.

There is no encoder specification. It is assumed that if you know what the compressed result needs to look like, it is up to you to compress it as you see fit. Which brings us to the next point.

Generally speaking, decoders will differ from each other by their performance: how much CPU they take to run, how much memory they need, etc.

The Encoder is… Magic

Or more like a large set of heuristics.

In a video codec, you need to decide many things. How much time and effort to invest in motion estimation, how aggressive to be when compressing each part of the current frame, etc.

You can’t really get to the ultimate compression, as that would take too long to achieve. So you end up with a set of heuristics – some “guidelines” or “shortcuts” that your encoder is going to take when it compresses the video image.

Oftentimes, the encoder is based on experience, a lot of trial and error and tweaking done by the codec developers. The result is as much art as it is science.

Encoders will differ from each other not only by their performance but also by how well they end up compressing (and how well can’t be summed up in a single metric value).

Hardware Acceleration

A large piece of what a codec does is brute force.

As an example, most modern codecs today split an image into macroblocks, each requiring a DCT. With well over 3,000 macroblocks in each frame at 720p resolution, that’s a lot that needs to get processed every second.
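The arithmetic behind that number is easy to sketch, assuming the classic 16×16 macroblock size:

```javascript
// Back-of-the-envelope macroblock count per frame, assuming 16x16 macroblocks.
function macroblocksPerFrame(width, height, blockSize = 16) {
  return Math.ceil(width / blockSize) * Math.ceil(height / blockSize);
}

const perFrame = macroblocksPerFrame(1280, 720); // 720p
console.log(perFrame);       // 3600 macroblocks per frame
console.log(perFrame * 30);  // 108000 macroblocks per second at 30 fps
```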

Same goes for motion estimation and other bits and pieces of the video codec.

To that end, many video codec implementations are hardware accelerated – either the codec runs completely in dedicated hardware, or the ugly pieces of it do, with software managing the larger picture of the codec implementation itself.

It is also why hardware support for a codec is critical for its market success and adoption.

Bandwidth Management

A video codec doesn’t work in a void. Especially not when the purpose of it all is to send the video over a network.

Networks have different characteristics of available bandwidth, packet loss, latency, jitter, etc.

When a video encoder is running, it has to take these things into account and compensate for them – reducing the bitrate it produces when there’s network congestion, reset its encoding and send a full frame instead of partial ones, etc.
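The kind of decision this implies can be sketched as a toy rate controller. The thresholds and multipliers below are invented for illustration – the real congestion control inside WebRTC is far more elaborate:

```javascript
// Toy rate-adaptation heuristic: react to packet loss by adjusting the
// encoder's target bitrate. All numbers here are illustrative, not from
// any real algorithm.
function nextBitrate(currentKbps, { packetLoss, minKbps = 100, maxKbps = 2000 }) {
  if (packetLoss > 0.10) return Math.max(minKbps, currentKbps * 0.5); // heavy loss: back off hard
  if (packetLoss > 0.02) return Math.max(minKbps, currentKbps * 0.9); // mild loss: trim the bitrate
  return Math.min(maxKbps, currentKbps * 1.05);                       // clean network: probe upward
}

console.log(nextBitrate(1000, { packetLoss: 0.15 })); // 500
console.log(nextBitrate(1000, { packetLoss: 0 }));    // 1050
```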

There are also different implementations for a codec on how to “invest” its bitrate. Which again brings us to the next topic.

Different Implementations for Different Content Types (and use cases)

Not all video codec implementations are created equal. It is important to understand this when picking a codec to use.

When Google added VP9 to YouTube, it essentially made two compromises:

  1. Having to implement only a decoder inside a browser
  2. Stating the encoder runs offline and not in real-time

Real-time encoding is hard. It means you can’t think twice about how to encode things. You can’t go back to fix things you’ve done. There’s just not enough time. So you use single-pass encoders. These encoders look at the incoming raw video stream only once and decide, upon seeing a block of data, how to compress it. They don’t have the option of waiting a few frames to decide how best to compress, for example.

Is your content mostly static, coming from a PowerPoint presentation with mouse movements on top? That’s different from the head-shot video common in web meetings, which is in turn different from the motion in the latest James Bond Spectre trailer.

And in many ways – you pick your codec implementation based on the content type.

A Word about WebRTC

WebRTC brings with it a huge challenge to the browser vendors.

They need to create a codec that is smart enough to deal with all these different types of content while running on a variety of hardware types and configurations.

From what we’ve seen in the past several years – it does quite well (though there’s always room for improvement).


Next time you wonder why use WebRTC instead of building your own – having someone implement this video codec for you is one of the reasons.


Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.




3 Advantages of WebRTC Embedded in the OS

Thu, 10/15/2015 - 12:00

Here’s a thought. Why not get WebRTC to the operating system level and be done with it?

Today, there are different ways to get WebRTC going:

  1. Use a browser…
  2. Compile the code and link it to your own app (PC or mobile)
  3. Wrap the browser within an app (PC)
  4. Use a webview (Android)

That last option? This is the closest one to an OS level integration of WebRTC. You assume it is there and available, and use it in your app somehow.

But what if we could miraculously get the WebRTC APIs (JavaScript or whatever) from the operating system itself? No compilation needed. No Cordova plugins to muck around with. Just good ol’ “system calls”?

While I don’t really expect this to happen, here’s what we’d gain from having that:

#1 – Smaller app sizes

Not needing to get WebRTC on a device means your app takes up less space. With the average app size on the increase, this is always a good thing.

The OpenH264 codec implementation binary alone is around 300KB, depending on the platform. Assume you need 3-4 more codecs (and that number will be growing every couple of years), plus the other media algorithms, all the network implementation, code to integrate with device drivers, WebRTC specific wrappers, … – lots and lots of size.

And a smaller app size means more space for other apps and less data to send over the network when installing the app.

#2 – Less variability

While the first one is obvious, it is also somewhat minor – so it takes a second more to install an app – who cares?

This point has a lot more of a reason for it.

If there’s a single implementation of WebRTC, maintained by the OS itself, there’s a lot less hassle of dealing with the variance.

When people port WebRTC on their own and use it – they make changes and tweaks. They convince themselves (with or without any real reason) that they must make that small fix in that piece of algorithm in WebRTC – after all, they know their use case best.

But with an OS-level integration it is simply there, so you make do with what you have. And that piece of code gets updated magically and improves with time – you don’t need to upgrade it manually and re-integrate all the changes you’ve made to it.

Less variability here is better.

#3 – Shorter TTM

Since you don’t need to muck around with the work of porting and integration – it takes less time to implement.

I’ve been working with many vendors on how to get WebRTC to work in their use case. Oftentimes, that requires getting a WebRTC implementation into that nasty app of theirs. There’s no straightforward solution to it. Yes – it is getting easier with every passing day, but it is still work that needs to be done and taken into account.

Back to reality

This isn’t going to happen anytime soon.

Unless… it already has to some extent and in some operating systems.

Chrome is an OS – not only Chrome OS but Chrome itself. It has WebRTC built in – in newer Android versions as well, where you can open up webviews with it.

For the rest, it is unlikely to be the path this technology will be taking.


Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.


Google Goes All in for Messaging, Invests in Symphony

Tue, 10/13/2015 - 12:00

Something is brewing at Google.

Last week it was announced that Symphony just raised another $100M led by Google. Not Google Ventures mind you – Google Inc.

Who is Symphony?
  • High profile Silicon Valley startup (obviously), soon to become a unicorn, if it isn’t already
  • Well known founder from the Unified Communications industry – David Gurle
  • Have been around for only a year
  • Already has over 100 employees, most of them engineers
  • Focused on enterprise messaging, and targeting highly regulated and security sensitive industries

The Symphony Service

The service itself is targeted at the enterprise, but a free variant of it is available. I tried logging into it, to see what it is all about. It is a variant of the usual desktop messaging app, with bits and pieces of Facebook and Slack.

On face value, not much different than many other services.

Symphony Foundation

Symphony decided to build its service on top of an open source platform of its own, which it calls Symphony Foundation. It includes all the relevant washed-out words required in a good marketing brochure, but little else for now: a mission statement, some set of values. That’s about it.

It will be open source, when the time comes. It will be licensed under the Apache license (permissive enough). And you can leave an inquiry on the site. In the name of openness… that’s as open as Apple’s FaceTime protocol is/was supposed to be. I’ll believe it when I see it.

Why Invest in Symphony?

This is the bigger question here. Both for why Google put money in it, as well as others.

With a total of $166M of investment in two rounds and over 100 employees recruited in its first year of existence, there seems to be a gold rush happening. One that is hard to explain.

As a glaring reminder – Whatsapp on acquisition day had 32 developers and around 50 employees. Symphony has twice that already, but no active user base to back it up.

It might be because of its high profile. After all, this is David Gurle we’re talking about. But then again, Talko has Ray Ozzie. But they only raised $4M in the past 3 years, and have less than 10 employees (if you believe LinkedIn).

The only other reason I can see is the niche they went for.

The financial industry deals with money, so it has money. It also has regulations and laws, making it a hard nut to crack. While most other players are focused on bringing consumer technology to the SMB, Symphony is trying to start from the top and trickle to the bottom with a solution.

The feature set they are putting in place, based on their website, includes:

  • Connectivity across organizations, while maintaining “organizational compliance”
  • Security and privacy
  • Policy control on the enterprise level
  • Oh… and it’s a platform – with APIs – and developers and partners

The challenge will be keeping a simple interface while maintaining the complex feature set regulated industries need (especially ones that love customization and believe they are somehow special in how they work and communicate).

On Messaging and Regulation

The smartphone is now 8 years old, if you count it from the launch of the iPhone.

Much has changed in 8 years, and most of it is left unregulated still.

Messaging has moved from SMS to IP based messaging services like Whatsapp in many countries of the world. Businesses are trying to kill email with tools like Slack. We now face the BYOD phenomenon, where employees use whatever device and tools they see fit to get their work done – and enterprises find it hard to force them to use specific tools.

If Hillary Clinton can use her own private email server during the course of her workday, what should others say or do?

While regulation is slow to catch up, I think some believe the time is ripe for that to happen. And having a messaging system that is fit for duty in those industries that are sensitive today means being able to support future regulation in other/all industries later.

This trend might explain the urgency – and the capital – that Symphony has been able to attract.


Why did Google invest here? Why not Google Ventures? It doesn’t look like an Alphabet investment but rather a Google one. And why invest and not acquire?

Google’s assets in messaging include today:

Jibe/RCS is about consumer and an SMS replacement in the long run. It may be targeted at Apple. Or Facebook. Or Skype. Or all of them.

None of its current assets is making a huge impact. They aren’t dominant in their markets.

And messaging may be big in the consumer space, but the money is in the enterprise – be it connectivity to enterprises, ecommerce or pure service. Google is finding it difficult there as well.

Symphony is a different approach to the same problem: it targets the enterprise directly, focusing on highly regulated customers. Putting money into it as an investment is a no-brainer, especially if it includes down-the-road rights of first refusal on an acquisition proposal, for example. So Google sits and waits, sees what happens with this market, and decides how to continue.

Is this a part of a bigger picture? A bigger move of Google in the messaging space? Who knows? I still can’t figure out the motivation behind this one…

Messaging and me

I’ve been writing on general messaging topics on and off throughout the years on this blog.

It seems this space is becoming a lot more active recently.

Expect more articles here about this topic of messaging from various angles in the near future.


Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.


Do you Need to test a WebRTC P2P Service?

Mon, 10/12/2015 - 12:00


It is a question I get from time to time, especially now that I am a few months into the WebRTC testing venture as a co-founder with a few partners – testRTC.

The logic usually goes like this: the browsers already support WebRTC. They do their own testing, so what we end up getting is a solid solution we can use.

If only life were that easy… Here are a few things you need to take care of when testing even the most simple of WebRTC services:

#1 – Future proofing browser versions

Guess what? Things break. They also change. Especially when it comes to WebRTC.

A few interesting tidbits for you:

  • Google is dropping HTTP support for getUserMedia, so services must migrate to HTTPS. Before year end
  • The echo canceller inside WebRTC? It was rewritten. From scratch. Using a new algorithm. That is now running on a billion devices. Different devices. And it works! Most times
  • WebRTC’s getStats() API is changing, breaking its previous functionality

And the list goes on.
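Take the HTTPS change as an example. A minimal sketch of a pre-flight check you might run before even attempting getUserMedia – the origin rules below are my reading of the announced policy, not an official API:

```javascript
// Hedged sketch: will getUserMedia likely be allowed at all once Chrome
// requires secure origins? https, file and localhost are assumed to pass.
function isLikelySecureOrigin(pageUrl) {
  const { protocol, hostname } = new URL(pageUrl);
  if (protocol === 'https:' || protocol === 'file:') return true;
  return hostname === 'localhost' || hostname === '127.0.0.1';
}

console.log(isLikelySecureOrigin('https://example.com/call')); // true
console.log(isLikelySecureOrigin('http://example.com/call'));  // false
```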

WebRTC is a great technology, but browsers are running at breakneck speeds of 6-8 weeks between releases (for each browser) – and every new release has the potential to break a service in a multitude of ways – be it a change in the spec, deprecation of a capability or just bugs.

Takeaway: Make sure your service works not only on the stable version of the browsers, but also on their beta or even dev versions as well.

#2 – Media relay

Your service might be a P2P service, but at times, you will need to relay media through TURN servers.

The word on the street is that around 15% of sessions require relay. For some it can be 50% and for others 8% (real numbers I heard from running services).

Media relay is tricky:

  • You need to configure it properly (many fall at this one)
  • You need to test it in front of different firewall and NAT configurations
  • You need to make it close to your users (you don’t want a local session in Paris to get relayed through a server in San Francisco)
  • You need to test it for scale (check the next point for more on that)
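The configuration part usually boils down to the iceServers list handed to RTCPeerConnection. Here is a sketch – the hostnames, ports and credentials are placeholders, not a real deployment:

```javascript
// Illustrative RTCPeerConnection configuration with both STUN and TURN.
// Point the URLs and credentials at your own deployment.
const config = {
  iceServers: [
    { urls: 'stun:stun.example.com:3478' },
    {
      urls: [
        'turn:turn.example.com:3478?transport=udp',
        'turn:turn.example.com:443?transport=tcp' // TCP on 443 helps with strict firewalls
      ],
      username: 'demo-user',
      credential: 'demo-secret'
    }
  ]
};

// In the browser: const pc = new RTCPeerConnection(config);
```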

Takeaway: Don’t treat WebRTC as a browser side technology only, or something devoid of media handling. Even if the browser does most of the heavy lifting, some of the effort (and responsibility) will lie on your service.

#3 – Server scale

Can your server cater for 200 sessions in parallel to fit that contact center? What about 1,000?

What will happen if you get a horde effect due to a specific event? Can you handle that number of browsers hitting your service at once? Does your website operate with the same efficiency for the 1,000th person as it does for the first?

This relates both to your signaling server – which is not part of WebRTC, but is there as part of your service – AND to your media server from the previous point.

Takeaway: Make sure your service scales to the capacities that it needs to scale. Oh – and you won’t be able to test it manually with the people you have with you in your office…
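Since manual testing won't cut it, a scale test ends up automated. A minimal sketch of a ramp test, assuming `connectClient` is a stand-in for whatever opens a session against your own signaling server (a WebSocket handshake, a REST login, etc.):

```javascript
// A hedged sketch of a scale test: ramp up N simulated clients against a
// signaling server and record how long each wave takes to fully connect.
async function rampTest(connectClient, counts = [10, 100, 1000]) {
  const results = {};
  for (const n of counts) {
    const start = Date.now();
    // Fire n connection attempts in parallel and wait for all of them.
    await Promise.all(Array.from({ length: n }, (_, i) => connectClient(i)));
    results[n] = Date.now() - start; // milliseconds to connect n clients
  }
  return results;
}
```

Comparing the per-wave timings tells you whether the 1,000th client gets the same service as the first.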

#4 – Service uptime

You tested it all. You have the perfect release. The service is up and running.

How do you make sure it stays running?

Manually? Every morning come in to the office and run a session?

Use Pingdom to make sure your site is up? Go to the extreme of using New Relic to check the servers are up, the CPUs aren't overloaded and the memory use seems reasonable? Great. But does that mean your service is running and people can actually connect sessions? Not necessarily.

Takeaway: End-to-end monitoring. Make sure your service works as advertised.
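What end-to-end monitoring adds over a Pingdom-style check can be sketched as a verdict over a full session probe: the probe runs a real session and reports on each stage, so you can distinguish a dead site from broken signaling or silent media. The probe fields here are assumptions for illustration, not from any particular monitoring product:

```javascript
// A hedged sketch of classifying an end-to-end probe result. Field names
// are illustrative.
function probeVerdict(probe) {
  if (!probe.pageLoaded) return 'site down';          // what Pingdom sees
  if (!probe.sessionConnected) return 'signaling broken';
  if (!(probe.mediaBitrateKbps > 0)) return 'media not flowing';
  return 'healthy';
}
```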

The ugly truth about testing

The current norm in many cases is to test manually. Or not test at all. Or rely on unit testing done by developers.

None of this can work if what you are trying to do is create a commercial service, so take it seriously. Make testing a part of your development and deployment process.

And while we’re at it…

Check us out at testRTC

If you don’t know, I am a co-founder with a few colleagues at a company called testRTC. It can help you with all of the above – and more.

Leave us a note on the contact page there if you are interested in our paid service – it can cater to your testing needs with WebRTC as well as offering end-to-end monitoring.


Need to test WebRTC?


The post Do you Need to test a WebRTC P2P Service? appeared first on

Fone.Do and WebRTC: An Interview With Moshe Maeir

Thu, 10/08/2015 - 12:00

Fone.Do: Moshe Maeir

October 2015

SMB phone system

Disrupting the hosted PBX system with WebRTC.

[If you are new around here, then you should know I’ve been writing about WebRTC lately. You can skim through the WebRTC post series or just read what WebRTC is all about.]


There’s no doubt that WebRTC is disrupting many industries. One of the obvious ones is enterprise communications, and in this space, an area that has got little attention on my end (sorry) is the SMB – where a small company needs a phone system to use and wants to look big while at it.

Moshe Maeir, Founder at Fone.Do, just launched the service out of Alpha. I have been aware of what they were doing for quite some time and Moshe took the time now that their service is public to answer a few of my questions.


What is Fone.Do all about?

Fone.Do is a WebRTC based phone system for small businesses that anyone can set up in 3 minutes. It replaces both legacy PBX systems that were traditionally based in your communications closet and also popular Hosted PBX systems. Businesses today are mobile and the traditional fixed office model is changing. So while you can connect a SIP based IP phone to our system, we are focused on meeting the needs of the changing business world.


Why do small businesses need WebRTC at all? What’s the benefit for them?

You could ask the same question about email, social networks etc. Why use web based services at all? Does anyone want to go back to the days of “computer programs” that you downloaded and installed on your computer? Unfortunately, many still see telephony and communications as a stand alone application. WebRTC changes this. Small businesses can communicate from any place and any device as long as they have a compatible platform.


What excites you about working in WebRTC?

Two things. Not sure which is more exciting. First of all. If I build something great – the whole world is my potential market. All they need is a browser and they are using our system in 3 minutes. The other exciting aspect is that telephony is no longer a closed network. Once you are on the web the potential is unlimited. You can easily connect your phone system to the wealth of data and services that already exist on the web and take communications to a new level. In fact, that is why we hired developers who knew nothing about telephony but were experienced in web development. The results are eye opening for traditional telecom people.


I know you’re a telecom guy yourself. Can you give an example how working with web developers was an eye opener to you?

There are many. The general attitude is just do it. With legacy telecom, everything has the accepted way of doing things and you don’t want to try  anything new without extended testing procedures. A small example – in the old VoIP days writing a “dial plan” was a big thing. When we came to this issue on Fone.Do, one of the programmers naturally googled the issue and found a Google service that will automatically adapt the dial plan based on the users’ mobile number. 1-2-3 done.


Backend. What technologies and architecture are you using there?

Our main objective was to build an architecture that will work well and easily scale in the cloud (we are currently using AWS). So while we have integrated components such as the Dialogic XMS and the open source Restcomm, we wrote our own app server which manages everything. This enables us to freely change back end components if we need to.


Can you tell us a bit about your team? When we talked about it a little over a year ago, I suggested a mixture of VoIP and web developers. What did you end up doing and how did it play out?

All our developers are experienced front end and back end web programmers with no telecom experience. However, our CTO who designed the system has over 15 years of experience in Telecom, so he is there to fill in any missing pieces. There were some bumps at the beginning, but I am very happy we did it this way. You can teach a web guy about Telephony, but it is very hard to get a Telecom guy to change his way of thinking. Telecom is all about “five nines” and minimizing risk. Web development is more about innovation and new functionality. With today's technology it is possible to innovate and be almost as reliable as traditional telephony.


Where do you see WebRTC going in 2-5 years?

Adoption is slower than I expected, but eventually I see it as just another group of functions in your browser that developers can access as needed.


If you had one piece of advice for those thinking of adopting WebRTC, what would it be?

WebRTC is here. It makes your user experience better – so what are you waiting for?


What’s next for

We recently released our alpha product and we are looking to launch an open beta in the next couple of months. Besides a web based “application”, we also have applications for Android and iOS.

The interviews are intended to give different viewpoints than my own – you can read more WebRTC interviews.

The post Fone.Do and WebRTC: An Interview With Moshe Maeir appeared first on

4 Good Reasons for Using HTTP/2

Tue, 10/06/2015 - 12:00

HTTP/2 is too good to pass up.

If you don’t know much about HTTP/2 then check this HTTP/2 101 I’ve written half a year ago.

In essence, it is the next version of how we all get to consume the web over a browser – and it has been standardized and deployed already. My own website here doesn’t yet use it because I am dependent on the third parties that host my service. I hope they will upgrade to HTTP/2 soon.

Watching this from the sidelines, here are 4 good reasons why you should be using HTTP/2. Not tomorrow. Today.

#1 – Page Load Speed

This one is a no-brainer.

A modern web page isn’t a single resource that gets pulled towards your browser for the pleasure of your viewing. Websites today are built with many different layers:

  • The core of the site itself, comprising your good old HTML and CSS files
  • Additional JavaScript files – either because you picked them yourself (JQuery or some other piece of interactive code) or through a third party (Angular framework, ad network, site tracking code, etc.)
  • Additional JavaScript and CSS files coming from different add-ons and plugins (WordPress is fond of these)
  • Images and videos. These may be served from your server or via a CDN

At the time of writing, my own website’s homepage takes 116 requests to render. These requests don’t come from a single source, but rather from a multitude of them, and that’s when I am using weird hacks such as CSS sprites to reduce the number of resources that get loaded.
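A rough way to see why the request count matters, sketched under the simplifying assumption of 6 parallel connections per host (real browsers and pages vary):

```javascript
// A back-of-envelope illustration, not a real performance model: with
// HTTP/1.1 a browser typically opens at most ~6 parallel connections per
// host, so N resources arrive in sequential "waves"; HTTP/2 multiplexes
// them all over a single connection.
function http1Waves(resources, maxConnectionsPerHost = 6) {
  return Math.ceil(resources / maxConnectionsPerHost);
}

// For a page with 116 requests: http1Waves(116) gives 20 waves of
// requests, while HTTP/2 can have all 116 in flight at once.
```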

There’s no running away from it – as we move towards richer experiences, the resources required to render them grows.

A small HTTP/2 demo that CDN77 put in place shows exactly that difference – loading the same 200 small images on a page over either HTTP/1.1 or HTTP/2 demonstrates the improved load times HTTP/2 brings.

HTTP/2 has some more features that can be used to speed up web page serving – we just need to collectively start adopting it.

#2 – Avoiding Content Injection

In August, AT&T was caught using ad injection. Apparently, AT&T ran a pilot where people accessing the internet via its WiFi hotspots in airports got ads injected to the pages they browsed over the internet.

This means that your website's ads could be replaced with those of a third party – who will get the income and insights coming from the served ads. It can also mean that your website, which doesn't really have ads, now shows them. Control freak that I am, this doesn't sound right to me.

While HTTP/2 allows both encrypted and unencrypted content to be served, only the encrypted variant is supported by browsers today. You get the added benefits of encryption when you deploy HTTP/2. This makes it hard, if not impossible, to inject 3rd party ads or content into your site.

#3 – Granularity

During that same August (which was the reason this post was planned to begin with), Russia took the stupid step of blocking Wikipedia. This move lasted less than a week.

The reason? Apparently inappropriate content in a Wikipedia page about drugs. Why was the ban lifted? You can’t really block a site like Wikipedia and get away with it. Now, since Wikipedia uses encryption (SPDY, the predecessor of HTTP/2 in a way), Russia couldn’t really block specific pages on the site – it is an all or nothing game.

When you shift towards an encrypted website, external third parties can’t see what pages get served to viewers. They can’t monetize this information without your assistance and they can’t block (or modify) specific pages either.

And again, HTTP/2 is encrypted by default.

#4 – SEO Juice

Three things that make HTTP/2 good for your site’s SEO:

  1. Encrypted by default. Google is making moves towards giving higher ranking for encrypted sites
  2. Shorter page load times translate to better SEO
  3. As Google migrates its own sites to HTTP/2, expect to see them giving it higher ranking as well – Google is all about furthering the web in this area, so they will place either a carrot or a stick in front of business owners with websites


Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post 4 Good Reasons for Using HTTP/2 appeared first on

How NOT to Compete in the WebRTC API Space

Mon, 10/05/2015 - 12:00

Some aspects are now table stakes for WebRTC API Platforms.

There are 20+ vendors out there who are after your communications. They are willing to take up the complexity and maintenance involved with running real time voice and video that you may need in your business or app. Some are succeeding more than others, as it always has been.

So how are you, as a potential customer, going to choose between them?

Here are a few things I’ve noticed in the two years since I first published my report on this WebRTC API space:

  1. Vendors are finding it hard to differentiate from one another. Answering the question to themselves of what they do better than anyone else in this space (or at least from the vendors they see as their main competitors) isn’t easy
  2. Vendors often don't focus. They try to be everything to everyone, ending up being nothing to most. You can see what they are good for if you look from the sidelines – feel how they pitch, operate, think – but they can't see it themselves
  3. Vendors attempt to differentiate over price, quality and ease of use. This is useless.

Table Stakes

Most vendors today have pretty decent quality with a set of APIs that are easy to use. Pricing varies, but is usually reasonable. While some customers are sensitive to pricing, others are more focused on getting their proof of concept or initial beta going – and there, the price differences don't matter in the short to medium term anyway.

The problem is mainly vendor lock-in, where starting to use a specific vendor means sticking with it due to high switching costs later on. But then, savvy developers use multiple vendors or prepare adapter layers to abstract that vendor lock-in.

Vendors need to think more creatively at how they end up differentiating themselves. From carving a niche to offering unique value.

My Virtual Coffee

This is the topic for my first Virtual Coffee session, which takes place on October 14.

It is something new that I am trying out – a monthly meeting of sorts. Not really a webinar. But not a conference either.

Every month, I will be hosting an hour long session:

  • It will take place over a WebRTC service – I am dogfooding
  • It will cover a topic related to the WebRTC ecosystem (first one will be differentiation of WebRTC API Platform vendors)
  • It will include time for Q&A. On anything
  • Sessions will be recorded and available for playback later on
  • It is open to my consulting customers and those who purchased my report in the past year

If you are not sure if you are eligible to join, just contact me and we’ll sort things out.

I’d like to thank the team at Drum for letting me use their ShareAnywhere service for these sessions – they were super responsive and working with them on this new project was a real joy for me.

Virtual Coffee #1

Title: WebRTC PaaS Growth Strategies – How WebRTC API vendors differentiate and attempt to grow their business

When: Oct 14, 13:30 EDT (add to calendar)

Where: Members only

What's next?

Want to learn more about this space? The latest update of my report is just what you need


The post How NOT to Compete in the WebRTC API Space appeared first on

Android Does… RCS !? What About WebRTC? Hangouts?

Thu, 10/01/2015 - 10:10

Some people are fidgeting on their chairs now, while others are happier than they should be.

I’ll start by a quick disclaimer: I like Google. They know when you acquire companies to fit my schedule – just got back from vacation – so I actually have time to cover this one properly.

Let’s start from the end:

Google and Apple are the only companies that can make RCS a reality.

To all intents and purposes, Google just gave RCS the kiss of life it needed.

Google just acquired Jibe Mobile, a company specializing in RCS. The news made it to the Android official blog. To understand the state of RCS, just look at what TechCrunch had to say about it – a pure regurgitation of the announcement, with no additional value or insights. This isn’t just TechCrunch. Most news outlets out there are doing the same.

Dataset subscribers have the acquisitions table updated with this latest information

Why on earth is Google investing in something like RCS?


RCS stands for Rich Communication Suite. It is a GSMA standard that has been around for a decade or so. It is already in version 5.2 or so with little adoption around the world.

What it has on offer are OTT-style messaging capabilities – you know the drill – an address book, some presence information, the ability to send text and other messages between buddies. Designed by committee, it has taken a long time to stabilize – longer than it took Whatsapp to get from 0 to 800. Million. Monthly active users.

The challenge with RCS is the ecosystem it lives in – something that mires other parts of our telecom world as well.

Put simply, in order to launch such a service that needs to take any two devices in the world and connect them, we need the following vendors to agree on the need, on the priority, on the implementation details and on the business aspects:

  • Chipset vendors
  • Handset vendors
  • Mobile OS vendors
  • Telco vendors
  • Telcos

Call it an impossible feat.

In a world where Internet speeds dictate innovation and undercut slower players, how can a Telco standard succeed and thrive? The moment it gets out the door it feels old.

Google and Messaging

Google has many assets today related to messaging:

  • Android, the OS powering 1.4 billion devices, where 1 billion of them call home to Google’s Play service on a monthly basis
  • Hangouts, their own chat/voice/video service that is targeted at both consumers and enterprises. It is part of Android, but also works as an app or through the browser virtually everywhere
  • Firebase, a year-old acquisition that is all about powering messaging (and storage) for developing apps

As Kranky puts it, they were missing an iMessage service. But not exactly.

Google thrives from large ecosystems. The larger the better – these are the ones you can analyze, optimize and monetize. And not only by building an AdWords network on top of it.

The biggest threats to Google today, besides regulators around the globe, would be:

  1. Apple, who is doing its darnedest today to show off their better privacy policies compared to Google
  2. Facebook, who is vying after Google’s AdWords money with its own social network/ads empire
  3. Telcos, who can at a whim decide to shut off Google’s ambitions – by not promoting Android, making it hard for YouTube or other services to run, etc.

Getting into RCS and committing to it, as opposed to doing a half-witted job at an RCS client on vanilla Android, gives Google several advantages:

  • It puts them at the good side of Telcos, which can’t be bad
  • Improves Android’s standing as an ecosystem, and making it easier for Google to force the hands of handset manufacturers and chipset vendors in adjacent domains
    • Maybe getting the codecs they want embedded as part of the device for example?
    • Forcing improvements on mobile chipset designs that offer better power management/performance for all messaging apps
  • Opens the door to deeply integrating Hangouts with RCS/Telco messaging
  • Enabling Google to become the gateway to the telco messaging space
    • Got a device running Android? An RCS client is already there and running
    • Don’t have Android? Connect through your browser from everywhere
    • Or just install that Google RCS app – it already has a billion downloads on it, as opposed to a measly 5,000 downloads of an operator-brand app
  • Becoming the glue between consumer and enterprise
    • Hangouts may well be a consumer type of a product, but it is part of the Google Apps offering to enterprises
    • Carriers are struggling in monetizing consumer services these days besides connectivity, and Google is fine with giving consumers a free ride while making money elsewhere
    • Google is struggling with getting into the enterprise space. Hangouts is marginal compared to Microsoft Lync/Skype and Cisco
    • Offering direct connectivity to the carrier’s messaging for consumers can bridge that gap. It increases the value of RCS to the enterprise, making Google a player that can integrate better with it than competition
Why Acquire Jibe?

Besides being a nice signal to the market about seriousness, Jibe offers a few advantages for Google.

  1. They are already deployed through carriers
  2. Their service is cloud based, which sits well with Google. It means traffic goes through Jibe/Google – something which places Google as the gateway between the customer and the Telco – a nice position to be in

In a way, Jibe isn’t caught up in the old engineering mentality of telco vendors – it provides a cloud service to its customers, as opposed to doing things only on premise. While Google may not need the architecture or code base of Jibe Mobile, it can use its business contracts to its advantage – and grow it tenfold.

When your next RCS message will be sent out, Google will know about it. Not because it sits on your device, but because it sits between the device and the network.

Why will Telcos Accept this?

They have no choice in the matter.

RCS has been dead for many years now. Standardization continues. Engineers fly around the world. But adoption is slow. Painfully slow. So slow that mid-sized OTT players are capable of attracting more users to their services. It doesn't look good.

And the problem isn’t just the service or the UI – it is the challenge for a carrier to build the whole backend infrastructure, build the clients for most/all devices on its network and then launch and attract customers to it.

Google embedding the client front end directly into Android and a part of the devices means there’s no headache in getting the service to the hands of customers and putting it as their default means of communications.

Google offering the backend for telcos in a cloud service means they no longer have to deal with the nasty setup and federation aspects of deploying RCS.

Only thing they need to do is sign a contract and hit the ground running.

An easy way out of all the sunk costs placed in RCS so far. It comes at a price, but who cares at this point?

The End Game

There are three main benefits for Google in this:

  1. Selling more Google devices
    • If these devices come equipped with RCS, and their backend comes from the same Telco and operated by Google, then why should a Telco promote another device to its customers?
    • It isn’t limited to Android versus an iOS device – it also relates to Chrome OS versus Windows 10
    • When mobility needs will hit tablets and laptops and the requirement to be connected everywhere with these devices will grow, we might start seeing Telcos actually succeeding in selling such devices with connectivity to their network. Having RCS embedded in these devices becomes interesting
  2. The next billion
    • Facebook and Google are furiously thinking of the next billion users. How to reach them and get them connected
    • With RCS as part of the messaging service a Telco has on offer, they are less dependent on third party apps to connect
    • With Google having both RCS and Hangouts, it increases the size of their applicable user base and the size of their ecosystem
  3. Carrier foothold
    • Carriers are reluctant when it comes to Google. They aren’t direct competitors, but somehow, it can feel that way at times – Google Fiber and Google Fi are prime examples of what Google can do and is doing
    • This is why having cloud services owned by Google and connected to the heart of a Telco is enticing to Google. It gives them a better foothold inside the carrier’s network
Where’s WebRTC?

Not really here. Or almost not. It isn’t about WebRTC. It is about telecom and messaging. Getting federated access that really works to the billions of mobile handsets out there.

Jibe has its own capabilities in WebRTC, a gateway of sorts that enables communicating with the carrier’s own network from a browser. How far along is it? I don’t know, and I don’t think it even matters. Connecting Jibe RCS cloud offering to Google Hangouts will include a WebRTC gateway. If it will or won’t be opened and accessible to others is another question (my guess is that it won’t be in the first year or two).

An interesting and unexpected move by Google that can give RCS the boost it desperately needs to succeed.


Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post Android Does… RCS !? What About WebRTC? Hangouts? appeared first on

WebRTC Book Review: Multiplayer Game Development with HTML5

Mon, 09/28/2015 - 12:00

Lots of Node. Little of WebRTC.

It has been quite some time since my last WebRTC book review. So when I got an indication that there is another book with WebRTC inside it, I had to read it. Which is what got me to Multiplayer Game Development with HTML5 by Rodrigo Silveira.

The promise of WebRTC in this book? Learning to “create peer-to-peer gaming using WebRTC”. I was intrigued. I spent a few hours reading it – and was happy about it, even though the WebRTC part of it was limited in its value.

This book takes the reader into a “Hello World” implementation of an online HTML5 multiplayer game. It is done by taking a step-by-step approach to implementing the classic snake game: first in HTML5, using a backend, and then building all the rest on top of it.

The book itself is focused on Node.js development of the game, taking care to explain and use concepts of authoritative game servers – servers that make the main decisions in a game. It connects that to responsiveness and fluidity of the game, etc.

To those interested in real time communications, this is an interesting book. It has a lot of the same thought processes of developing signaling protocols and implementing their backend, dealing with responsiveness, latency and causality of message passing. It also handles the game lobby – the place where you connect players – you can view this as a conferencing server (the signaling part of it).

Rodrigo mentions WebRTC almost in passing – as a way of reducing latency by making use of the data channel in WebRTC, but that’s about it. There’s no real discussion or example of how to integrate it in a multiplayer game where you have an authoritative server and clients that communicate directly with each other at the same time.

That said, I felt the book is an interesting one for those developing WebRTC – and it wasn’t because of the WebRTC parts of it.

If you are interested in architecture, design, signaling or just programming – this book is a really interesting read.

I warmly recommend it.

The post WebRTC Book Review: Multiplayer Game Development with HTML5 appeared first on

My WebRTC API Platforms report Gains a Membership Portal

Thu, 09/24/2015 - 12:00

An update to my WebRTC API Platforms report is now available.

Updates in the reports

The last time I published an update of my Choosing a WebRTC API Platform report was 6 months ago. Since that time the market has changed quite a bit. If I had to note the most important aspects of that change, they would be:

Other notables include Atlassian acquiring BlueJimp and non-WebRTC API platforms joining the game.

These frequent changes made it into the latest update to my report, along with an addition of 4 vendors (AT&T, Bistri, Bit6 and Circuit). This with the updates of what vendors are doing didn’t seem enough to cover the market properly. Which is why I have decided to open a membership section on my website to go along with the report.

New membership area (and tools)

What does this membership section include?

  • An online vendor matrix – one that will get updated a bit more frequently than the report itself. Purchasers of the report, under a valid subscription account, will be able to access this online vendor matrix whenever they want and see what features companies have on offer. It should be a quick way to decide which vendors to look at for the feature set you need
  • Visuals – I’ve taken the visuals and tables from the report, compiling them into a simple Powerpoint deck. You can download the deck and copy+paste from it whatever you need for your own presentations
  • Monthly Virtual Coffee with Tsahi – a new monthly “webinar” of sorts open only to the report subscribers and my consulting clients, where I’ll be discussing the ecosystem as well as open the floor to any questions related to real time communications

So. If you purchased the report within the last year or have renewed your report’s subscription, you’ll be getting immediate access to the membership area and its tools – an email will be sent to you today with the necessary details.

Overview and sample vendor

If you want to learn more about the report, you can download the report’s table of contents and introduction section.

This time, I also wanted to give people a taste of what they’ll find in the report itself. To that end, I’ve asked AT&T to sponsor the vendor analysis section covering their platform and WebRTC APIs and they accepted. There are 23 vendors covered in the report in detail. The AT&T one is now freely available to download – you can expect the same level of detail on all other vendors in the report.

New pricing

This brings me to the last item, which is pricing.

There are now two price points for the report:

  • Basic, at $1700, which is what was included until today for a higher price point (essentially, the report and any updates within a year from purchase)
  • Premium, at $1950, which is the lowest earlier price point, which grants access to the new membership area
Join the Free Webinar

Want to learn more, or understand how the market is changing by non-API players? I am hosting a free webinar later today:

Development Approaches of WebRTC Based Services: There are many ways in which people approach adding real-time communications with WebRTC to their service. While the dominant approaches are probably self development and using a WebRTC PaaS vendor, there’s a wider range of approaches.

Register and join me there.

Got questions? Feel free to ask them in the comments area below or by contacting me directly.

The post My WebRTC API Platforms report Gains a Membership Portal appeared first on

Apple TV, Amazon Fire TV or a new Google Chromecast Dongle – 4K Won’t Matter

Tue, 09/22/2015 - 12:00

4K isn’t part of the current round of fighting.

A quick disclaimer: I own a Chromecast dongle. I don’t use it much. My daughter plays Just Dance Now every couple of days on it. And sometimes we watch our pictures on the large screen. So I can’t be called a true user of these devices.

That said, these devices are heavily used for streaming, which means video, which means a video codec. Which means I am a bit interested in them lately. Especially now with the H.265 crisis and the newly founded Alliance for Open Media.

We had two launches lately and rumors of a third one. Let’s look at each one of them through the prism of codec support and resolution.

Apple TV

Apple TV has its issues with the web. The spec of this upcoming device, from Apple’s website, includes the following video formats:

H.264 video up to 1080p, 60 frames per second, High or Main Profile level 4.2 or lower

H.264 Baseline Profile level 3.0 or lower with AAC-LC audio up to 160 Kbps per channel, 48kHz, stereo audio in .m4v, .mp4, and .mov file formats

MPEG-4 video up to 2.5 Mbps, 640 by 480 pixels, 30 frames per second, Simple Profile with AAC-LC audio up to 160 Kbps, 48kHz, stereo audio in .m4v, .mp4, and .mov file formats

Since it runs an A8 chip, we can deduce that it might actually have H.265 capabilities, but Apple decided not to use them for the time being – the same way it removed H.265 from FaceTime on the iPhone 6.

They also aren’t going overboard with the resolution, sticking to 1080p, streamed with H.264. The nice thing here is their 60 fps support.

There’s no 4K though. And no H.265.

Amazon Fire TV

Amazon announced its own response to the Apple TV a day after the Apple TV announcement. As with any classic post-Apple announcement, this one had the two obvious features: a lower price point and better hardware.

The better hardware part boils down to support for 4K resolutions.

The specs indicate the following content formats:

Video: H.265, H.264, Audio: AAC-LC, AC3, eAC3 (Dolby Digital Plus), FLAC, MP3, PCM/Wave, Vorbis, Dolby Atmos (EC3_JOC), Photo: JPEG, PNG, GIF, BMP

So higher resolutions probably get streamed at H.265 while everything else is H.264.

Here’s the rub though:

  1. Amazon is now part of the Alliance for Open Media – created to ditch royalty bearing codecs such as H.265
  2. HEVC Advance announced their intent to ask for payment based on streamed content and not only on devices sold. How does that get calculated into a low-margin retailer such as Amazon?

This is a hardware device. There is no real option to add or replace video codecs easily – at least not at such high resolutions. They worked on this one for over a year, so they couldn’t have foretold the mess that H.265 patents have become today. They didn’t want to (or couldn’t) risk it with VP9. So now what?

Will this 4K device be useful for watching Amazon video movies at 4K? How much higher will these need to be priced to deal with the royalty headaches of H.265?

Google’s YouTube service certainly isn’t going to support H.265 for its 4K streams anytime soon.

I can’t see 4K using H.265 on a hardware device in 2015 being the right choice. Sorry.

Google Chromecast

Only rumors for now, but it seems this one will be announced on September 29th. We will know soon enough how stupid my estimates really are.

Here we go – these are my own estimates:

  • We really know little about the Chromecast’s specs. Even for the one on the market there’s no clue on the video codec in it. It might be VP8 or H.264. My bet is on H.264 for the older model
  • The new Chromecast won’t support H.265. It will have support for H.264 and VP9
  • It won’t do 4K. It will focus on software related features to beat competition
  • VP9 will be there to better work with YouTube’s new VP9 support and reduce bandwidth strains on both Google and the end customer

We will see in a week how I fared on this one.

Bottom Line

While 4K is a higher resolution than 1080p, it is too new and too niche at this point:

  • There aren’t enough TVs out there supporting 4K
  • There’s not enough content available
  • No agreed-upon way of compressing such resolutions (with a nice patent minefield to go along with it)
  • And there aren’t many viewers who will be able to see the difference anyway


Kranky and I are planning the next Kranky Geek - Q1 2016. Interested in speaking? Just ping me through my contact page.

The post Apple TV, Amazon Fire TV or a new Google Chromecast Dongle – 4K Won’t Matter appeared first on

Do we Care about ORTC on Edge?

Mon, 09/21/2015 - 12:00

Yes and no.

Microsoft just announced officially that they have added ORTC to Edge. ORTC is… well… it’s kind’a like’a WebRTC. But not exactly.

Someone is doing his best NOT to mention WebRTC in all this…

Here are a few random thoughts I had on the subject:

  • It is more about WebRTC than it is about ORTC. Even though WebRTC was mentioned only half as much as ORTC in the text and never in the title (god forbid)
  • Getting “Hello World” to work on ORTC is harder than with WebRTC. Or it might just be me knowing WebRTC better than ORTC
  • It was perfectly timed to coincide with Skype’s own support for it
  • Voice using Opus is a win. I wonder when we will see interoperability for a voice call between Edge and Chrome
  • Video using H.264UC (=proprietary) and later H.264 with no mention of VP8 or VP9 is a loss. Not for Microsoft but for the industry
  • Codecs, especially video ones, are going to cause major headaches moving forward. I wonder how web developers will swallow this sour pill
  • Will developers start using H.264 instead of VP8 now that it is apparent all browsers supporting WebRTC in 2016 will have H.264, but some won’t have VP8?
  • While Windows 10 is showing promise in its adoption (and aggressive push by Microsoft), the adoption of Edge is worrying. If numbers don’t increase, will it even matter if ORTC is there or which codecs Microsoft chose to incorporate?
  • The whole idea of getting Microsoft onboard is to get WebRTC market share in the enterprise – where no browser other than Microsoft’s can penetrate. But if Edge isn’t there – then who really cares? It may well be like testing that your service runs well on Opera (I am sure you did)
  • Here’s the rub though:
    • ORTC by the way isn’t a standard. It is a W3C Community Group
    • To get things into the HTML5 spec, ORTC needs to contribute their proposals to the W3C WebRTC Working Group
    • This process means that the APIs may change until they actually get standardized by the W3C
    • It makes ORTC APIs less stable than those of WebRTC, and we’ve seen how people complained about the frequent changes in the browser APIs of WebRTC
    • Can Microsoft maintain this process?
  • This means that the next version of Edge will have different APIs for ORTC than the current one, and that this will continue for at least a year if not longer
  • Microsoft will need to release Edge at the same frequency that Google releases Chrome – every month or two
  • It will also need to handle deprecation of APIs at a fast pace – can its target customers (enterprise) handle that?
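The H.264-or-VP8 question above ultimately shows up in the SDP a service sends: whichever payload type is listed first on the m=video line wins the negotiation when both sides support both codecs. Here is a minimal sketch of that reordering – the function name and sample SDP are my own illustrative assumptions, not anyone’s actual code:

```javascript
// Hypothetical sketch: reorder the m=video line of an SDP offer so that
// a given codec (e.g. H.264) is negotiated ahead of the others.
function preferCodec(sdp, codecName) {
  const lines = sdp.split('\r\n');
  const mIndex = lines.findIndex(l => l.startsWith('m=video'));
  if (mIndex === -1) return sdp;

  // Collect payload types whose a=rtpmap entry matches the requested codec.
  const preferred = lines
    .filter(l => /^a=rtpmap:\d+ /.test(l) && l.includes(codecName + '/'))
    .map(l => l.match(/^a=rtpmap:(\d+)/)[1]);

  const parts = lines[mIndex].split(' ');
  const header = parts.slice(0, 3);   // "m=video <port> <proto>"
  const payloads = parts.slice(3);
  const reordered = [
    ...payloads.filter(pt => preferred.includes(pt)),
    ...payloads.filter(pt => !preferred.includes(pt)),
  ];
  lines[mIndex] = [...header, ...reordered].join(' ');
  return lines.join('\r\n');
}

// A toy two-codec offer, trimmed down for illustration.
const offer = [
  'm=video 9 UDP/TLS/RTP/SAVPF 100 101',
  'a=rtpmap:100 VP8/90000',
  'a=rtpmap:101 H264/90000',
].join('\r\n');

console.log(preferCodec(offer, 'H264').split('\r\n')[0]);
// "m=video 9 UDP/TLS/RTP/SAVPF 101 100"
```

A developer betting on H.264 across 2016 browsers would apply something like this to the offer before calling setLocalDescription.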

All in all, another good indicator for the health of this community and real time communications in the web.

For a real analysis, read Alex’s ruminations on ORTC in Edge.


Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post Do we Care about ORTC on Edge? appeared first on

Thoughts on Apple, WebRTC, HTML5, H.265 and VP9

Thu, 09/17/2015 - 12:00

There are a few side stories around Apple lately that relate to WebRTC. I wanted to share them here.

Apple TV ships with no HTML5 support

It seems that in the Apple TV reboot, there are going to be apps. But not ones that can make use of HTML5. Just native apps. John Gruber points to a post around that topic titled Everything but the Web, concluding that Web views won’t find their way to the TV screen if Apple has anything to do with it.

He doesn’t state the reason though. If you ask me, this has nothing to do with the Apple/Flash war of the past. It also has nothing to do with design or aesthetics. It has everything to do with ecosystem control. For Apple, the ability to do cross-platform development is an aberration. Why on earth enable developers to write their code once and then run it elsewhere? There’s nothing outside the closed garden of Apple, so why bother?

Killing HTML5 on the Mac is impossible. Killing it on the iPhone or iPad is also rather hard – too many apps already use it, and there’s that pesky browser people use. But on the TV? That’s greenfield! So why not just forget about HTML5 altogether?

If you ask me, the good people in Apple see only one reason for HTML5 to exist – and that is for people to be able to go to websites. Other than that? Useless.

The future of WebKit

There has been a lot of back and forth lately about the future of the web. Should we run full steam ahead with it, or sit and wait? Some people prefer having it change and progress less. I can’t see why – when every year thousands of new APIs are rained on us by Apple at WWDC and Google at I/O – why can’t the Web improve? Why should it stay static?

WebKit, on the other hand, is a rather dead rendering engine at this point in time. It might be fast and optimized, but it is becoming a bit old when it comes to adopting and supporting standards. WebRTC isn’t there, and multiple other technologies aren’t either. It seems to be keeping up with HTML5 and CSS notation, but the programmatic parts of JavaScript? Falling behind the other browsers.

I’ve written before on how Microsoft Edge is getting way better. Mozilla is getting their act together and modernizing the older parts of their Firefox browser (extensions, for example), and Google is speeding up and optimizing Chrome now that it has become huge. But Safari? Microsoft Edge will keep Google and Mozilla on edge and get them to improve. I don’t think the other browser vendors are too worried about Safari getting good anytime soon.

I wonder how much care and affection the Safari/WebKit team gets inside Apple these days. Probably not that much.

This goes somewhat counter to the positive assertions Alex made here about Apple and WebRTC.

H.265 / VP9

Apple and H.265 take center stage in my video codec sessions lately. You can see the video codec wars session I gave at TokBox last week.

My usual spiel?

  • Apple is a part of MPEG-LA
  • Apple owns H.265 related patents. It may well wish to enforce them to make it difficult on others
  • Apple builds hardware, so changing a video codec isn’t an easy feat – it requires time and getting old hardware off the market
  • Apple has H.265 running on FaceTime since the iPhone 6
  • So Apple is on the H.265 camp

But then I get directed to this interesting post in 9to5Mac:

Another interesting detail: 4K videos are being recorded in H.264, and Apple is no longer making reference to H.265 support for any purpose, FaceTime or otherwise


Is it only me, or did Apple just drop H.265 support and is shifting camps? Or at the very least, sitting on the fence. It might have something to do with the HEVC Advance stupidity that brought the gang to open up the Alliance for Open Media. They might be edging away from royalty bearing codecs and moving to the free alternative. Or they might be using it as leverage over HEVC Advance to make their licensing terms more palatable.

How do you do 4K video resolutions with a camera if not by using H.265? Use H.264? Ridiculous. But that’s exactly what seems to be happening now with the new iPhone 6s.

Should they be moving to VP9 instead? Probably, but it will be hard on Apple. They rely heavily on hardware acceleration and they don’t seem to have it on their chipsets at the moment.

This is a loss to the H.26x camp at the moment.

Where is this all headed?

I am not sure, but here are a couple of things I’d plan if I had that task given to me.

  • Rely on native development on mobile. Especially when it comes to anything Apple
  • Use HTML5 for browser development. Wrap it using Chrome Embedded Framework if a standalone desktop app is needed
  • Tread carefully in choosing what I end up using for my video codec. No simple answers there


Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post Thoughts on Apple, WebRTC, HTML5, H.265 and VP9 appeared first on

Tellybean and WebRTC: An Interview With Cami Hongell

Thu, 09/10/2015 - 12:00
Check out all webRTC interviews >>

Tellybean: Cami Hongell

September 2015

WebRTC on the big screen.

[If you are new around here, then you should know I’ve been writing about WebRTC lately. You can skim through the WebRTC post series or just read what WebRTC is all about.]

I am not a fan of video calling in the living room. Not because I have real issues with it, but because I think it is a steep mountain to climb – I am more of the low-hanging-fruit kind of a guy.

That’s not the case with Tellybean, a company focused on TV video calling and recently doing that using WebRTC.

Cami Hongell, CEO of Tellybean, found the time to chat with me and answer a few questions about what they are doing and the challenges they are facing.
What is Tellybean all about?

Tellybean is all about easy video calling on the TV. My two co-founders are Aussies living in Finland and they had a problem. A software update or a forgotten password too often got in the way of their weekly Skype call with grandma Down Under. Once audio and video were finally working both ways, there were four people fighting for a spot in front of the 13” screen.

We realised that modern life tends to separate families and our problem was far from unique. That’s when we decided to build an easy video calling service for the TV. It had to be so easy that even grandma could use it from the comfort of her couch. At the same time as we worked hard to eliminate complexity, we also needed to keep it affordable and build a channel which would provide users an easy way of getting the service.

Today we have an app which allows easy video calls on Android TV devices of our TV and set-top box partners. Currently you can make calls between selected Tellybean enabled Android TV devices and our web app. To make it as easy as possible to call somebody from your TV, we will release apps for Android and iOS mobiles and tablets in the future.


 You started by building your TV solution using Skype. What made you switch to WebRTC?

When we founded Tellybean four years ago, the tech landscape looked very different from today. WebRTC wasn’t there. Android TV and Tizen weren’t there – the TV operating systems were all over the place. So initially we set out to build an easy service which would run on our own dedicated Linux box. Our intention was to allow our service to connect with other existing services by putting our own UI on top of headless clients developed using the SDKs provided by some of those services. We started with SkypeKit and had a first version of it ready a few years ago. We were going to continue by adding Gtalk.

However, Skype decided to wind down the support of 3rd party developers and Google stopped Gtalk development. This happened almost at the same time as WebRTC was starting to gain traction. Switching to WebRTC turned out to be an easy decision once we looked into it and moved over to working on Android and 3rd party hardware only.


What excites you about working in WebRTC?

Having tried different VoIP platforms in the past, we have learned to appreciate the fact that working with WebRTC has allowed us to focus our resources on the more important UX and UI development. Since WebRTC offers a plugin-free, no-download alternative for video calling with modern browsers, combined with our TV and upcoming mobile device approach, we are able to provide easy use for a huge audience, with almost all entry barriers removed.

We are excited about having a great service which is getting a lot of interest from everybody in the Android TV value chain, from the chip manufacturers to the TV and STB manufacturers as well as the operators. We’ve announced co-operation with TP Vision / Philips TVs and Nvidia, with much more in the pipeline. The great support and resources available in the WebRTC community, coupled with the support from the hardware manufacturers, means that WebRTC is truly becoming a compelling open source alternative for service developers such as ourselves.


Can you tell me a bit about the challenges of getting WebRTC to operate properly in an embedded environment fit for the TV?

An overall problem has been that we are moving slightly ahead of the curve.

Firstly, we need access to a regular USB camera. Unfortunately the Android TV platform and most devices lack UVC camera support. So we have been pushing everybody, Google, the device manufacturers and the chip suppliers, to add camera support. The powerful Nvidia Shield Console has camera support and we already have a few of the other major players implementing it for us.

Secondly, there are still devices that are underpowered and/or lack support for VP8 HW encoding, meaning that it is hard for us to provide a satisfactory call quality. Luckily again, most of the devices launched this year can handle video calling and our app.

The third problem relates to fine tuning the audio for our use case where the distance between the USB camera’s mic and the TV’s speakers is not a constant. Third time lucky: WebRTC provides us pretty good echo cancellation and other tools to optimize this and produce good audio quality.


What signaling have you decided to integrate on top of WebRTC?

Wanting to support browsers for user convenience and to get going quickly, we started out building our own solution with Socket.IO, but we are transitioning to MQTT for two reasons. Firstly, we came to the conclusion that MQTT provided us much more efficient scalability. Secondly, MQTT is much easier on the battery for mobile devices.

Current implementations of MQTT also allow us to use websockets for persistent connections in browsers, so it suits our purposes well. Additionally, some transaction-like functionality is done using REST. We are writing our own custom protocol as we go, which allows us to grow the service organically instead of trying to match a specification set forth by another party that doesn’t match our requirements or introduces undue complexity in architecture or implementation.
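A custom signaling envelope over MQTT, like the one described above, could look roughly like the sketch below. The topic layout and field names are my own assumptions for illustration, not Tellybean’s actual protocol, and the MQTT.js calls appear only as comments since they depend on a broker and client library:

```javascript
// Hypothetical sketch of a signaling envelope carried over MQTT topics.
// Topic layout and field names are assumptions, not Tellybean's protocol.
function makeSignal(from, to, type, payload) {
  return {
    topic: `signal/${to}`,   // each endpoint subscribes to signal/<own-id>
    message: JSON.stringify({ from, type, payload, ts: Date.now() }),
  };
}

// With an MQTT.js client connected over websockets, sending would be e.g.:
//   const client = mqtt.connect('wss://broker.example.com');
//   const { topic, message } = makeSignal('tv-1', 'web-2', 'offer', sdp);
//   client.publish(topic, message);

const { topic, message } = makeSignal('tv-1', 'web-2', 'offer', { sdp: '...' });
console.log(topic);                    // "signal/web-2"
console.log(JSON.parse(message).type); // "offer"
```

The same envelope can carry offers, answers and ICE candidates, which is what makes a persistent pub/sub channel a natural fit for WebRTC signaling.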


Backend. What technologies and architecture are you using there?

We have server instances on Amazon Web Services, running our MQTT brokers and REST API, as well as the TURN/STUN service required for WebRTC. We use Node.js on the servers and MongoDB from a cloud service which allows us easy distributed access to shared data.
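The TURN/STUN piece mentioned above typically surfaces client-side as an ICE server configuration handed to the peer connection. A sketch with placeholder hostnames and credentials (my own assumptions, not Tellybean’s actual deployment):

```javascript
// Hypothetical sketch: ICE server configuration pointing at a self-hosted
// TURN/STUN service. Hostnames and credentials are placeholders.
function buildRtcConfig(turnUser, turnCredential) {
  return {
    iceServers: [
      { urls: 'stun:stun.example.com:3478' },
      {
        urls: 'turn:turn.example.com:3478?transport=udp',
        username: turnUser,
        credential: turnCredential,
      },
    ],
  };
}

// In the browser this would feed straight into the peer connection:
//   const pc = new RTCPeerConnection(buildRtcConfig(user, cred));

const config = buildRtcConfig('alice', 'secret');
console.log(config.iceServers.length);      // 2
console.log(config.iceServers[1].username); // "alice"
```

STUN lets peers discover their public addresses; TURN relays media when a direct path fails, which matters for set-top boxes sitting behind home NATs.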


Where do you see WebRTC going in 2-5 years?

The recent inclusion of H.264 will lead to broader adoption of WebRTC in online services, and also in dedicated hardware devices since H.264 decoders are readily available. Microsoft is also starting to adopt WebRTC in their new Edge browser, so it seems like there’s a bright future for rich communication using WebRTC once all the players have started moving. Like everybody else, we would naturally like full WebRTC support from Microsoft and Apple sooner rather than later, and it will be hard for them to ignore it with all the support it is already receiving. In this timeframe, at least high-end mobile devices should have powerful enough hardware to support WebRTC in the native browsers without issues. With this kind of background infrastructure a lot of online services will be starting to use WebRTC in some form, instead of more isolated projects. With everyone moving towards a new infrastructure, hopefully any interoperability issues between different endpoints have been sorted out, which allows service developers to focus on their core ideas.


If you had one piece of advice for those thinking of adopting WebRTC, what would it be?

WebRTC is still an emerging technology that will surely have an impact on developers and businesses going forward, but it’s not completely mature yet. We’ve seen a lot of good development over time, so for a specific use case it might be a plug-and-play experience, or, in a more advanced case, you may need a lot of development work.


Given the opportunity, what would you change in WebRTC?

WebRTC has been improving a lot during the time that we’ve worked with it, so we believe that current issues will be improved on and disappear over time. The big issue right now on the browser side is obviously adoption, with Microsoft and especially Apple not up to speed yet. We would also like to see good support for all WebRTC codecs from involved parties, to avoid transcoding and to be able to use existing hardware components for a great user experience.


What’s next for Tellybean?

We’ve recently launched our Android TV app and are seeing the first users on the Nvidia Shield console, the first compatible device. We are now learning a lot and have a chance to fine tune our app. From a business point of view we currently have full focus on building a partner network which will provide us the platform for 100+ million TV installations in the coming years. Next we are starting development of mobile apps for Android and iOS. Later we will need to decide if moving to other TV operating systems or e.g. enabling other video calling services to connect to Tellybean TVs will be the next most important step towards achieving our aim of becoming THE video calling solution for the TV.

The interviews are intended to give different viewpoints than my own – you can read more WebRTC interviews.

The post Tellybean and WebRTC: An Interview With Cami Hongell appeared first on

WebRTC Plugin Free World is Almost Here: Apple and Microsoft joining the crowd

Tue, 09/08/2015 - 12:00

There are indications out there that soon we won’t be needing plugins to support WebRTC in some of the browsers out there.

[Alexandre Gouaillard decided to drop by here, offering his analysis on recent news coming out of Apple and Microsoft – news that affects how these two players will end up supporting WebRTC real soon.]

Apple Safari news

  • When the webrtc-in-webkit project was first announced through the voice of its main members: Stefan H. from Ericsson, and myself, not everybody was a believer.
  • For some, it was even an exercise in Futility
  • A further post was even written in webrtcHacks to explain how to let apple know about one’s interest in having webrtc on iOS
  • In my November 2014 presentation (slide 20) with JO Hache about the Temasys Plugin, we indicated that the goal was indeed to have an implementation in the Apple version of things by the end of August 2015, for a chance to see it in the next version of Safari, which traditionally ships alongside the new OS every September/October
  • In early July, the webrtc-in-webkit project delivered the media streams foundation and getUserMedia in WebKit, with a fully functional implementation in the Linux browser WebKitGTK+
  • Right after that, Apple put its own resources into supporting getUserMedia in the Apple-specific version of the code, and worked on it until mid-August. A detailed analysis of the code changes by Apple and their technical implications can be found here

It is still unknown when this (getUserMedia only) will find its way into Safari, and more specifically into Safari on iOS. Hopefully before the end of the year (high, but probably unrealistic, hopes for a Sept. 9 announcement).

We can also only hope that the WebView framework – which apps are forced to use to display web pages according to the Apple App Store rules – will be updated accordingly, which would open WebRTC to iOS apps directly, catching up a little bit with the WebView on Android.

How much the webrtcinwebkit project helped make this happen is also open to debate, but I want to believe it did (yes, I am uber-biased). It is also possible that the Device APIs specification being stable (last call status at W3C) motivated Apple to go for it.

In any case, what is important is that it is now undeniable that Apple is bringing WebRTC to its browsers and devices, and that answers a question left open for almost 4 years now!

Microsoft Edge news
  • In May 2015, the Edge team announced support for the same media streams and getUserMedia APIs we spoke about earlier
  • In June 2015, Philipp Hancke extended Google and Mozilla’s adapter.js to support Edge
  • More recently, some action has been visible on the ORTC side of things, but with nothing testable so far (release version 10525)
  • Today (Sep. 1st) three separate announcements were made:
    • support for webm container is now in development (here)
    • support for VP9 is now in development (here)
    • support for opus is still under consideration but is now high priority (here)

Here again, a lot of good news. For a long time, it was unclear if Microsoft would do anything at all, and even now, nobody is clear on exactly what API they are going to implement and how compatible it will be with the WebRTC specs. The convergence of the WebRTC Working Group and the ORTC Community Group within the W3C raised hopes of better interoperability, but it was not substantiated. Until today, that is. There is no web API other than WebRTC that would use VP9. OK, it could be to provide a better YouTube experience, but Opus is WebRTC-only.

So here again, while the exact date is unknown, it is undeniable that Microsoft Edge is moving in the right direction. Moreover, it’s been moving faster than most expected lately.

All good news for the ecosystem, and surely more news to come at the Kranky Geek WebRTC Live event on September 11th, where three browser vendors will be present to make their announcements.

Toward a plugin free experience (finally)

During my original announcement of a plugin for IE and Safari, I stated that the “goal [was] to remove the ‘what about IE and Safari’ question from the Devs’ and Investors’ table long enough for a native implementation to land.”

I also stated that “We hate plugins, with a passion. Some browser vendors put us in the situation where we do not have the luxury to wait for them to act on a critical and loudly expressed need from their user base. We sincerely hope that this is only a temporary solution, and we don’t want people to get the impression that plugins are the magical way to bypass what browser vendors do (or don’t do). Native implementation is always best.”

I truly believe that the day we can get rid of plugins for WebRTC is now very, very close. If I’m lucky, Santa Claus will bring it to me for Xmas (after all, I’ve been a good boy all year). There will still be a need for some help, but it will be in the form of a JS library and not a heavy-duty plugin. Of course, you still have to support some older versions of Windows here and there, especially for the enterprise market, but Microsoft – and I am writing this from Redmond, next to the Microsoft campus ;-) – is putting a lot of resources behind moving people to auto-updating versions of its software, be it Windows itself or the browsers. Nowadays, OSes do not bring much value by themselves, but they bring in a lot of maintenance burden. It is in everybody’s interest to have short update cycles, and MS knows that.

For those who need to support older versions of IE for some time (Apple users will never be seen with an old Apple device :-D), there are today several options, all converging toward the same feature set, and a zero price tag. You can see more about this here.

Tsahi and I have this in common: we hate plug-ins, especially for video communication. I think we are seeing the end of this problem.

The post WebRTC Plugin Free World is Almost Here: Apple and Microsoft joining the crowd appeared first on

Upcoming Sep-Oct Events

Fri, 09/04/2015 - 13:30

A quick note.

Just wanted to list out the events and venues where you’ll be able to find me and meet with me in the next month or two.

Me in San Francisco

I’ll be in San Francisco 9-11 September, mainly for the Kranky Geek event. If you want to meet and have a chat with me – contact me and let’s see if we can schedule a time together.

WebRTC Codec Wars: Rebooted

When? Wednesday, September 9, 18:00

Where? TokBox’ office – 501 2nd Street, San Francisco

TokBox were kind enough to invite me to their upcoming TechTok meetup event. Codecs are becoming a hot topic now – to the point that I had to rearrange my writing schedule and find the time to write about the new Alliance for Open Media. It also meant changing my slides for this event.

Would be great to see you there, and if you can’t make it, I am assuming the video of the session will be available on YouTube later on.

Attendance is free, but you need to register.

Kranky Geek WebRTC Show

When? Friday, September 11, 12:00

Where? Google – 6th floor 345 Spear St, San Francisco

This is our second Kranky Geek event in San Francisco, and we’re trying to make it better than the successful event we had a year ago.

Check out our roster of speakers – while registration has closed, we do have a waiting list, so if you still want to join – register for the waiting list and you might just make it to our event.

Development Approaches of WebRTC Based Services

When? September 24, 14:00 EDT

Where? Online

It is becoming a yearly thing for me, having a webinar on the BrightTALK platform.

This time, I wanted to focus on the various development approaches companies take when building WebRTC based services. This has recently changed with one or two new techniques that I have seen.

The event is free and takes place online, so be sure to register and join.

Video+Conference 2015

When? Thursday, October 15, 11:00

Where? Congress Centre Hotel “Alfa”, Moscow

I have never been to Russia before, and I won’t be this time. I will be joining this one remotely. TrueConf have asked me to give a presentation about WebRTC.

The topic selected for this event is WebRTC Extremes and how different vendors adopt and use WebRTC to fit their business needs.

If you happen to be in Moscow at that time, it would be great to virtually meet you on video.


Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post Upcoming Sep-Oct Events appeared first on

WebRTC Codec Wars: Rebooted

Thu, 09/03/2015 - 12:00

The beginning of the end of HEVC/H.265 video codec.

On September 1st the news got out. There’s a new group called the Alliance for Open Media. There has been some interesting coverage of it in the media, and some of its ramifications have already been published. The three most relevant pieces of news I found are these:

I wrote about the pending codec wars just a week ago on SearchUC, concluding that all roads lead to a future with royalty-free video codecs. That was before I had any knowledge of the announcement of the open media alliance. This announcement makes that future a lot more likely.

What I’d like to do here is cover some aspects of where this is headed and what it tells us about the players in this alliance and the pending codec wars.

The Press Release

Let’s start off with the alliance’s initial press release:

This initial project will create a new, open royalty-free video codec specification based on the contributions of members, along with binding specifications for media format, content encryption and adaptive streaming, thereby creating opportunities for next-generation media experiences.

So the idea is to invent a new codec that is royalty free. As Chris pointed out, this is hard to impossible. Cisco, in the announcement of their new Thor codec, made it quite clear what the main challenge is. As Jonathan Rosenberg puts it:

We also hired patent lawyers and consultants familiar with this technology area. We created a new codec development process which would allow us to work through the long list of patents in this space, and continually evolve our codec to work around or avoid those patents.

The closest thing to a “finished good” here is VP9 at the moment.

Is the alliance planning on banking on VP9 and use it as their baseline for the specification of this new codec, or will they be aiming at VP10 and a clean slate? Mozilla, a member company in this alliance, stated that they “believe that Daala, Cisco’s Thor, and Google’s VP10 combine to form an excellent basis for a truly world-class royalty-free codec.”

Daala takes a lot of its technologies from VP9. Thor is too new to count, and VP10 is just a thought compared to VP9. It makes more sense that VP9 would be used as the baseline; and Microsoft’s adoption of VP9 at that same timeframe may indicate just that intent. Or not.

The other tidbit I found interesting is the initial focus in the statement:

The Alliance’s initial focus is to deliver a next-generation video format that is:

  • Interoperable and open;
  • Optimized for the web;
  • Scalable to any modern device at any bandwidth;
  • Designed with a low computational footprint and optimized for hardware;
  • Capable of consistent, highest-quality, real-time video delivery; and
  • Flexible for both commercial and non-commercial content, including user-generated content.

It would be easier to just bio-engineer Superman.

Jokes aside, the bulleted list above is just table stakes today:

  • Interoperable and open
    • Without interoperability a codec has no life
    • Openness is what you do in an initiative like this one
  • Optimized for the web
    • People consume video over IP these days. This is where the focus should be
    • It also hints at embeddability in web browsers, and having Google, Microsoft and Mozilla in this alliance couldn’t hurt
  • Scalable to any modern device at any bandwidth
    • Scalability here means many things. SVC for one, but that’s just one feature out of a longer list of needs
    • Modern devices means that anything built before roughly 2012, or even 2014, is going to be ignored. With the current lifecycle of smartphones, that seems reasonable
    • Any bandwidth means it needs to support crappy internet connections but also 4K resolutions and above
  • Designed with a low computational footprint and optimized for hardware
    • This one is going to be tough. Each codec generation takes 2-3 times the computational footprint of its predecessor. I am not sure this can be met if the idea is to displace something like H.265
    • Optimized for hardware is a wink to hardware vendors that they need to support this as well. Having Intel is nice, but they are almost a non-player in this market (more on that later)
  • Capable of consistent, highest-quality, real-time video delivery
    • Guess what? Everyone wants that for any type of video codec
  • Flexible for both commercial and non-commercial content, including user-generated content
    • This talks about licensing and royalties. Free should be the business model to aim for, though the language may well translate into royalty payments at a lower rate than what MPEG-LA and HEVC Advance are trying to get

High goals for a committee to work on.

It will require Cisco’s “cookbook”: a team of codec engineers and lawyers working together.

The Members

What can we learn from the 7 initial alliance members? That this was an impossible feat, and someone achieved it anyway. Getting these players to the same table while leaving the egos out of the room wasn’t easy.


Amazon is new to video codecs – and to codecs and media in general. They have their own video streaming service, but that’s about it.

Their addition into this group is interesting in several aspects:

  • The Amazon Instant Video service has its place. Not the dominant service, but probably big enough that it isn’t ignored. Added to Netflix and YouTube, it carries weight
  • More interestingly, how will AWS be affected? Their Amazon Elastic Transcoder, for example, or the ability to host real-time media processing services on top of AWS

Cisco is a big player in network gear and in unified communications. It has backed H.264 to date, mainly due to its own deployed systems. That said, it is free to pick and choose next generation codecs. While it supports H.265 in its high-end telepresence units, it probably saw the futility of the exercise continuing down this path.

Cisco, though, has very little say over future codec adoption.


Google needs free codecs. This is why it acquired On2 in the first place – to have VP8, VP9 and now VP10 compete with H.26x. To some extent, you can trace the roots of this alliance back to the On2 acquisition and the creation of WebM as the first turning point in this story.

For Google, this means ditching the VPx codec branding, but having what they want – a free video codec.

The main uses for Google here are first and foremost YouTube and later on WebRTC. Chrome is the obvious vehicle of delivery for both.

I don’t see Google slowing down on their adoption of VP9 in WebRTC or reducing its use on YouTube – on the contrary. I assume the model played out here will be the same one Google used with SPDY and HTTP/2:

  • SPDY was Google’s proprietary transport mechanism to replace HTTP/1.1. It was later used as the baseline of HTTP/2
  • VP9 is Google’s proprietary video codec to replace H.26x. It is now being used as the baseline of the next generation video codec to displace H.265

To that end, Google may well increase their team size to try and speed up their technology advancement here.
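Until the alliance’s new codec materializes, VP9 preference is something WebRTC developers can already act on today. The sketch below is plain SDP string manipulation (“munging”), not any official API; the payload type numbers and sample SDP in the usage note are simplified and hypothetical. It reorders the payload types on the video m-line so that a given codec is negotiated first:

```javascript
// Sketch: prefer a codec (e.g. VP9) in a WebRTC offer by reordering the
// payload types on the video m-line. Assumes a standard SDP layout.
function preferCodec(sdp, codecName) {
  const lines = sdp.split("\r\n");
  const mIndex = lines.findIndex((l) => l.startsWith("m=video"));
  if (mIndex === -1) return sdp; // no video section, nothing to do

  // Collect payload types whose a=rtpmap line mentions the requested codec.
  const preferred = lines
    .filter((l) =>
      l.startsWith("a=rtpmap:") &&
      l.toLowerCase().includes(codecName.toLowerCase()))
    .map((l) => l.slice("a=rtpmap:".length).split(" ")[0]);
  if (preferred.length === 0) return sdp; // codec not offered

  // m=video <port> <proto> <pt1> <pt2> ... -> move preferred PTs to the front.
  const parts = lines[mIndex].split(" ");
  const header = parts.slice(0, 3);
  const pts = parts.slice(3);
  const reordered = [...preferred, ...pts.filter((pt) => !preferred.includes(pt))];
  lines[mIndex] = [...header, ...reordered].join(" ");
  return lines.join("\r\n");
}
```

In a real application you would run this over the offer’s `sdp` string before calling `setLocalDescription` – a common technique, though fragile if the browser’s SDP layout differs from what the code expects.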


Intel has been trying for years now to conquer mobile, with little to show for its efforts. When it comes to mobile, ARM chipsets rule.

Intel can’t really help with the “any modern device” part of the alliance’s charter, but having them on board is a good start. They are currently the only chipset vendor in the alliance, and until others join, there’s a real risk of this being a futile effort.

The companies we need here are ARM, Qualcomm, Broadcom and Samsung to begin with.


Microsoft decided to leave the H.26x world here. This is great news. It is also making the moves towards adopting WebRTC.

Having Google Chrome and Microsoft Edge behind this initiative is what is necessary to succeed. Apple is sorely missing, which will most definitely cause market challenges moving forward – if Apple doesn’t include hardware acceleration for this codec in their iOS devices, then a large (and wealthy) chunk of the consumer market will be missing.

Every day that passes, it seems that Microsoft is acting like a modern company ready for this day and age, as opposed to the dinosaur of the ’90s.


Mozilla somehow manages to plug itself into every possible initiative. This alliance is an obvious fit for a company like Mozilla. It is also good for the alliance – 3 out of 4 major browser players behind this initiative is more than we’ve seen for many years in this area.


Netflix started by adopting H.265 for their 4K video streaming. It seemed weird to me that they adopted H.265 and not VP9 at the time. I am sure the latest announcements coming out of HEVC Advance about licensing costs for content streaming have caused a lot of headaches at Netflix and tipped the scale towards joining this alliance.

If you are a content provider operating at Netflix scale, with their margins and business model, HEVC Advance’s greedy 0.5% of gross revenue licensing fee becomes debilitating.
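A quick back-of-the-envelope calculation shows why. The 0.5% content rate is HEVC Advance’s announced figure; the revenue number below is purely hypothetical, for illustration only:

```javascript
// Royalty at 0.5% of gross revenue, per HEVC Advance's announced content fee.
// The revenue figure is hypothetical, for illustration only.
const annualGrossRevenueUSD = 6_000_000_000; // hypothetical streaming revenue
const contentRoyaltyRate = 0.005;            // 0.5% of gross revenue

const annualRoyaltyUSD = annualGrossRevenueUSD * contentRoyaltyRate;
console.log(annualRoyaltyUSD); // 30000000 -> 30M USD a year, before any per-device fees
```

Tens of millions of dollars a year, taken off the top line rather than profits – for a low-margin streaming business, that is a strong incentive to back a royalty-free alternative.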

With YouTube, Amazon and Netflix behind this alliance, you can safely say that web video streaming has voiced its opinion and placed itself behind this alliance and against HEVC/H.265.

Missing in Action

Who’s missing?

We have 3 out of 4 browser vendors, so no Apple.

We have the web streaming vendors. No Facebook, but that is probably because Facebook isn’t as into the details of these things as either Netflix or Google. Yet.

We don’t have the traditional content providers – cable companies and IPTV companies.

We don’t have the large studios – the content creators.

We don’t have the chipset vendors.


Apple is an enigma. They make no announcements about their intent, but the little we know isn’t promising.

  • They have already-sold devices to think of. These devices support H.265 hardware acceleration, so Apple is somewhat committed to it. It is hard to switch to another horse as a vertical integrator
  • Safari and WebKit are lagging behind when it comes to many of the modern web technologies – WebRTC being one of them
  • Apple owns patents in H.265 and is part of MPEG-LA. Would they place their bets on another alliance? On both at the same time? Contribute their H.265 patents to the Alliance for Open Media? Probably not

Once this initiative and its video codec come to the W3C and IETF for standardization, will they object? Join? Implement? Ignore? Adopt?

Content providers

Content providers are banking on H.265 for now. They are using the outdated MPEG-2 video codec or the current H.264 video codec, so for them, migrating to H.265 seems reasonable. Until you look at the licensing costs for content providers (see Netflix above).

That said, some of them, in Korea and Japan, actually own patents around H.265.

Where will they be headed with this?

Content creators

Content creators couldn’t care less. Or maybe they could, as some of them are now also becoming content providers, streaming their own content direct-to-consumer in trials around unbundling and cord cutting.

They should be counting themselves as part of the Alliance for Open Media if you ask me.

Chipset vendors

Chipset vendors are the real missing piece here. Some of them (Samsung) hold patents around H.265. Will they be happy to ditch those efforts and move to a new royalty free codec? Hard to say.

The problem is that without the chipset vendors behind this initiative, it will not succeed. One of the main complaints around WebRTC is the lack of chipset support for its codecs. This will need to change for this codec to succeed. It is also where the alliance needs to focus its political effort to increase its membership.

The Beginning of the End for HEVC/H.265

This announcement came as a surprise to me. I had just finished writing my presentation for an upcoming TechTok with the same title as this post: WebRTC Codec Wars Rebooted. I will now need to rewrite that presentation.

This announcement, if played right, can mean the end of the line for the H.26x video codecs and the beginning of a new era of royalty-free video codecs, making them the norm. The enormity of this can be compared to the creation of Linux and its effect on server operating systems and the Internet itself.

Making video codecs free is important for the future of our digital life.

Kudos to the people who dared to dream up this initiative and made it happen.


The post WebRTC Codec Wars: Rebooted appeared first on

