It seems like we’re on an ongoing roller coaster when it comes to CPaaS. Tropo has just closed its doors, at least if you want to be a new customer (which is doubtful at best at this point in time).
Looking for the public announcement that got deleted? You can find it here.

Tropo and Cisco – Where it all started?
Two years ago (give or take a month), Cisco acquired Tropo. I wrote about it at the time, trying to figure out what Cisco would focus on when it comes to Tropo’s roadmap. I guess I was quite wrong in where I saw this headed and where it actually went…
The two questions I did ask in that article that were probably the most important were:
Will developers be attracted to Tropo or will they look for their solution elsewhere?
Was this acquisition about the technology? Was it about the existing customer base? Was it about the developer ecosystem?
We’ve got an answer to the first question – they went elsewhere. If this were a growing business, developers would have been attracted to it, but it wasn’t.
Looking back at my recent update to my WebRTC API report, it made perfect sense that this would be shut down soon enough. Just look at the investment made by Tropo and then Cisco in the platform:
Nothing serious was done for developers on the WebRTC part of the platform since September 2014, and after the acquisition, the focus of the Tropo developers shifted to Cisco Spark.
This answers my second question – this wasn’t about technology or customers or even the existing developer ecosystem of Tropo. It was about the developers of Tropo and their understanding of developer ecosystems. It was about talent acquisition for Cisco Spark. Why kill the running business that is Tropo? I really don’t know or understand. I also fail to see how such a signal can be a good one to developers who plan on using other Cisco APIs.
I don’t know what Cisco’s rationale was for acquiring Tropo in the first place. If all they wanted was the talent, then this seems like too big of a deal to make. If it was for the platform, then they sure didn’t invest the needed resources in it to make it grow and flourish.
Now, here’s why I think this move took place, and it probably happened at around the time of the acquisition and in parallel to it.

UCaaS or CPaaS? The Shifting Focus
We’ve got two worlds in a collision course.
UCaaS – Unified Communications as a Service – the “modern” day voice and video communications sold to enterprises.
CPaaS – Communications Platform as a Service – the APIs used by developers to embed communications into their products.
UCaaS has a problem. Its problem is that it always saw the world in monochrome. If you were an organization, then ALL of your communication needs get met by UCaaS (or UC, or convergence or whatever name it had before). And it worked. Up to a point. Go to most UC or UCaaS vendors’ websites and look at their solutions or industries section.
The screenshot above was taken from Polycom’s homepage (couldn’t find one on Cisco since their move to Cisco Spark). Now, notice the industries? Education, Healthcare, …
All of these industries are now shifting focus. Many of the solutions found in these industries that used to go to expensive UC vendors are now going to niche vendors offering solutions specifically for these niches. And they are doing it by way of WebRTC. Here are a few examples from interviews I’ve had done over the years on this site:
You’ll even find some of these as distinct verticals in my updated WebRTC for Business People free report.
The result? The UCaaS market is shrinking because of WebRTC and CPaaS.
Why is this happening? Three reasons that come to play here:
- Cloud. And the availability of CPaaS to build whatever it is that I want
- WebRTC (and CPaaS), which is downsizing this thing we call communications from a service into a feature – something to be embedded in other services
- The rise of the gig economy. In many cases, a business is offering communications not between its employees at all, but rather between its users. Think Uber – drivers and riders. Airbnb – hosts and travelers. Upwork – employers and freelancers.
With the rising popularity of the dominant CPaaS vendors, there’s a threat to UCaaS: at the end of the day, UCaaS is just a single example of a service that can be developed using CPaaS. And you start to see a few entrepreneurs actually trying to build UCaaS on top of CPaaS vendors.
If I had to draw the relationship here, I’d do it in this way probably:
Just the opposite of what UC vendors have imagined in the last two decades. And a really worrying possibility of how the future may look.
What should a UCaaS vendor do in such a case?
So now UCaaS vendors are trying to add APIs to serve as their CPaaS layer on top of their existing business. Most of them do it with the intent of making sure their UCaaS customers just don’t go elsewhere for communications.
Think of a bank that has an internal UCaaS deployment. It now wants to build a contact center. It has 3 ways of doing that:
- Go to a contact center vendor. Probably a cloud based one – CCaaS
- Build something using CPaaS. By itself. Using external developers. Or just someone that is already integrated with CPaaS
- Try and do something with its UCaaS vendor
If (3) happens, then great.
If (1) happens, then it is fine, assuming the UCaaS vendor offers contact center capabilities as well. And if it doesn’t, then there’s at least a very high probability that there’s no real competition between it and that CCaaS vendor
If (2) happens… well… that can be a threat in the long run.
And did we talk about Uber, Upwork and Airbnb? Yes. They have internal communications between their employees. Maybe there’s some UCaaS in there. But most of the communications generated by these companies don’t even involve their employees – only their customers and users. Where the hell do you fit UCaaS in there?
The APIs are there as a moat against the CPaaS vendors. Which is just where Cisco decided to place Tropo. As Cisco’s moat for its UCaaS offering – Cisco Spark. The Tropo team are there to make sure the APIs are approachable and create the buzz and developer ecosystem around it.
Quite an expensive moat if the end result was just killing Tropo in the process (the asset that was acquired).

The Surprise Announcement
Last week, Tropo published a blog post on its site and removed it a few hours later.
Since Tropo publishes the full post content to its RSS feed, and Feedly grabs the content and archives it, the information in that post was not lost. I copied it from Feedly and placed it in a shared Google Doc – you can find it here.
The post is no longer on Tropo’s website. A few reasons why this may be:
- Someone higher up probably decided to do this quietly by contacting the most active customers only
- The post was published too early, and Cisco wanted to break the news a bit later
- The post got too many queries from angry developers, so they took it down, until they rewrite it to a more positive note
That last one is what I think is the real reason. And that’s because after reading it more than once, I still find it rather confusing and not really clear.
This announcement from Tropo has two parts to it:
- We’ve got your back with our great Cisco Spark commitment to developers (an API portal, a marketplace, ambassadors program and an innovation fund)
- Tropo is dead if you aren’t moving on to Cisco Spark. But we will honor our SLA terms (because we must)
That first part is about CPaaS serving UCaaS. Cisco Spark is UCaaS with a layer of APIs that Cisco is now trying to grow.
It is also about damage control – if you’re a Tropo user, then this whole acquisition thing by Cisco probably ended badly for you. So they want to remind you about the alternatives they now offer.
The second part is about how they see current Tropo developers:
- If you want to onboard to Tropo as a new customer, then tough luck. Doors are closed. Go elsewhere. Or enroll in the Cisco Spark ISV Program
- If you are just trying the system out and have an account in there with no apps in a production environment (i.e. you are a freeloader and not a paying customer) – you will be thrown out, no questions asked
- If you are an existing paying customer running in production, then Tropo “will honor all contractual obligations”
This reads to me like the acquisition of AddLive by Snapchat. First, the service was acquired and closed its doors to new customers. It honored existing contracts. And it shut down its platform to all existing customers this year – once all contracts had ended.
- Don’t expect Tropo to renew existing paying contracts
- Expect Cisco to push paying Tropo customers to Cisco Spark instead
- Tropo won’t add new features, and the existing ones, including any support for them, will deteriorate – but will still fit into the SLA they have in place
Tropo was predominantly about voice and SMS.
It was also the biggest competitor of Twilio some 6-7 years ago, when both were comparable in size and focus. Or at least until Tropo shifted its focus from developers to carriers.
There are two places where Tropo customers can go:
- Cisco Spark. I think this won’t happen in droves. Some will be headed there, but most will probably go look for a different home
- Other CPaaS vendors. Mostly Twilio, but some will go to Nexmo, Plivo or Sinch. Maybe even TeleSign
The big winner here is Twilio.
The big losers are the developers who put their faith in Tropo.

CPaaS: Buyers Beware
As I always say, the CPaaS market is a buyers beware one.
Developers can and should use CPaaS to build their applications. It will reduce a lot of the upfront costs and the risks involved in starting a new initiative. It gives you the ability to explore an idea, fine-tuning and tweaking it until you find the right path in the market. But it comes at a price. And I don’t mean the money you need to pay in order to use CPaaS. I mean the risk involved in the fluidity of this market.
The trick, then, is to pick a CPaaS vendor that doesn’t only fit your feature requirements, but is also here to stay. And that is hard to do.
Large companies are not necessarily the right approach – Cisco/Tropo as an example. Or AT&T shutting down its WebRTC APIs.
Small startups aren’t necessarily the answer either – they might shut down or pivot. SightCall pivoted. OpenClove has silently closed doors.
Dominant players might not be a viable alternative either – if you consider Tropo such a player, and Cisco a sure bet of an acquirer – look where it ended up.
You can reduce these risks if you know who these CPaaS vendors are and where they are headed. If you follow them for quite some time and understand if and what they are investing in. And if you need help with that, just follow my blog or contact me.
The post Another One Bites the Dust: Tropo Closes its Doors to New Customers appeared first on BlogGeek.me.
Too many WebRTC open source projects. Not enough good ones.
Ever gone to GitHub to search for something you needed for your WebRTC project? Great. Today, there are almost as many WebRTC GitHub projects as there are WebRTC LinkedIn profiles.
Some of these code repositories really are popular and useful while others? Less so.
Here’s the most glaring example for me –
When you just search for WebRTC on github, and let it select the “Best match” by default for you, you’ll get PubNub’s sample of using PubNub as your signaling for a simple 1:1 video call using WebRTC. And here’s the funny thing – it doesn’t even work any longer. Because it uses an old PubNub WebRTC SDK. And that’s for an area that requires less of an effort from you anyway (signaling).
Let’s assume you actually did find a WebRTC open source media server that you like (on github of course). How do you know that it is any good? Here are 10 different signals (not WebRTC ones) that you can use to make that decision.
Need to pick a WebRTC media server framework? Why not use my Free Media Server Framework Selection Worksheet when checking your alternatives?

Get the worksheet

1. Do You Grok the Code?
If you are going to adopt an open source media server for your WebRTC project, then expect to need to dive into its code every once in a while.
This is something you’ll have to do either to get the darn thing to work, fix a bug, tweak a setting or even write the functionality you need in a plugin/add-on/extension or whatever name that media server uses for making it work.
In many of the cases I see when vendors rely on an open source WebRTC media server framework, they end up having to dig in its code, sometimes to the point of taking full ownership of it and forking it from its baseline forever.
To make sure you’re making the right decision – check first that you understand what you’re getting yourself into and try to grok the code first.
My own personal preference would be code that has comments in it (I know, I have high standards). I just can’t get myself behind the notion of code that explains itself (it never does). So be sure to check if the non-obvious parts (think bandwidth estimation) are commented properly while you’re at it.

2. Is the Code Fresh?
Apple just landed with WebRTC. And yes. We’re all happyyyyy.
But now we all need to shift our focus to H.264. Including that WebRTC media server you’ve been planning to use.
Oh – and Google? They just announced they will be migrating slowly from Plan B to Unified Plan. Don’t worry about the details – it just changes the way multiple streams are handled. Affecting most group calling implementations out there.
And there was that getStats() API change recently, since the draft specification of WebRTC finally decided on its correct definition, which was different from its Chrome implementation.
The end result?
Code written a year or two ago has serious issues actually running.
Without even talking about upgrades, updates, security patches, etc.
Just baseline “make it work” stuff.
When you check the GitHub page of the WebRTC media server you plan on adopting – make sure to look at when it was last updated. Bonus points if you check what that update was about and how frequently updates take place.

3. Anyone Using It?
Nothing like making the same mistakes others are making.
Err… wrong expression.
What you want is a popular open source WebRTC media server. Anything less popular is so for a reason, and that reason won’t be that you found a diamond in the rough.
Go for a popular framework. One that is battle tested. Preferably used by big names. In production. Inside commercial products.
Why? Because it gives you two things you need:
It gives you validation that this thing is worth something – others have already used it.
And it gives you an ecosystem of knowledge and experience around it. This can be leveraged sometimes for finding some freelancers who have already used it or to get assistance from more people in the “community”.
I wouldn’t pick a platform only based on popularity, but I would use it as a strong signal.

4. Is This Thing Documented?
What you get in a media framework for WebRTC is a sort of an engine. Something that has to be connected to your whole app. To do that, you need to integrate with it somehow – either through its APIs when you link against it – or, a lot more commonly these days, via REST APIs (or GraphQL or whatever). You might need both if you plan on writing your own addon/extension for it.
And to do that, you need to know the interface that was designed especially for you.
Which means something written. Documentation.
When it comes to open source frameworks, documentation is not guaranteed to be of specific quality – it will vary greatly from one project to another.
Just make sure the one you’re using is documented to a level that makes it… understandable.
If possible, check that the documentation includes:
- Some introductory material to the makeup and architecture of the project
- An API reference
- A few demos and examples of how to use this thing
- Some information about installation, configuration, maintenance, scaling, …
The more documentation there is, the better off you will be a year down the road.

5. Is It Debuggable?
WebRTC is real time and real time is hard to debug.
It gets harder still if what you need to look at isn’t the signaling part but rather the media part.
I know you just LOVE adding your own printf and cout statements in your C++ code and try reproducing that nagging bug. Or better yet – start collecting PCAP files and… err… read them.
It would be nice, though, if some of that logging, debugging, etc. were available without you having to always add these statements into the code. Maybe even have a mechanism in place with different log levels – and have sensible information written into the logs, so the time you’ll need to invest in finding bugs will be reduced.
Also – make sure it is easy to plug a monitoring system to the “application” metrics of the media server you are going to use. If there is no easy way to test the health of this thing in production, it is either not used in production or requires some modifications to get there. You don’t want to be in that predicament.
While at it – make sure the code itself is well documented. There’s nothing as frustrating (and as stupid) as the self-explanatory code notion. People – code can’t explain itself – it does what it does. I know that the person who wrote that media server was god incarnate, but the rest of us aren’t. Your programmers are excellent, but trust me – not that good. Pick something maintainable. Something that is self-explanatory because someone took the time to write some damn good comments in the tricky parts of the code. I know this one is also part of grokking the code, but it is that important – check it twice.
For me, the ability to debug, troubleshoot and find issues faster is usually a critical one for something that is going to get into my own business. Just a thought.

6. Does it Scale?
Media servers are resource hogs (check this video mini series for a quick explanation).
This means that in all likelihood, if your business becomes successful in any way, you will need more than a single media server to run at “scale”.
You can always crank it up on Amazon AWS from m4.large up to m4.16xlarge, but then what’s next?
At the end of the day, scaling of media servers comes down to adding more machines. Which is simple enough until you start dealing with group calls.
Here’s an example.
- Assume a single machine can handle 100 participants, split into any group type (I am simplifying here)
- And we have 10 participants on average in each call
- Each group call can have 2 participants, up to… 50 participants
Now… how do we scale this thing?
How many machines do we need to put out there? When do we decide that we don’t add new calls into a running machine? Do we cascade these machines when they are fully booked? Throw out calls and try to shift them to other machines?
I don’t know, but you should. At least if you’re serious about your product.
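To make the question concrete, here’s a rough back-of-the-envelope sketch. The numbers are the hypothetical ones from the example above, and the first-fit strategy is just one possible scheduling policy, not a recommendation:

```python
# Hypothetical capacity-planning sketch. Assumption: a call cannot be split
# across machines (no cascading), so each call must be packed whole onto a
# single media server.

MACHINE_CAPACITY = 100  # participants one media server handles (assumed)

def servers_needed(call_sizes):
    """First-fit packing: place each call on the first machine with room,
    spinning up a new machine when none fits."""
    machines = []  # free capacity remaining on each machine
    for size in sorted(call_sizes, reverse=True):
        for i, free in enumerate(machines):
            if free >= size:
                machines[i] -= size
                break
        else:
            machines.append(MACHINE_CAPACITY - size)
    return len(machines)

# 20 calls of 10 participants = 200 participants -> exactly 2 machines
print(servers_needed([10] * 20))             # 2
# The same 200 participants in uneven calls -> packing forces a 3rd machine
print(servers_needed([50, 45, 45, 40, 20]))  # 3
```

Even this toy model shows that average capacity math (200 participants divided by 100 per machine equals 2) breaks down once call sizes get uneven – which is exactly the kind of question the documentation rarely answers for you.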
You probably won’t find the answers to this in the open source WebRTC media server’s documentation, but you should at least make sure you have some reasonable documentation of how to run it at scale, and not as a one-off instance only.

7. What Languages Does it Use?
They wrote it in Kotlin.
Because that’s the greatest language ever. Here. Even Google just made it an official one for Android.
What can go wrong?
Two things I look for in the language being used:
- That the end result will be highly performant. Remember it’s a resource hog we’re dealing with here, so better start with something that is even remotely optimized
- That I know how to use and have it covered by my developers
Some of these things are Node.js based while others are written in Java. Which one are your developers more comfortable with? Which one fits better with your technology plans for your company over the next 5 years?
If you need to make a decision between two media servers and the main difference between them for you is the language – then go with the one that works better for your organization.

8. Does It Fit Your Signaling Paradigm?
Three things you need in your WebRTC product:
- Signaling server
- STUN/TURN server
- Media server
That 3rd one isn’t mandatory, but it is if you’re reading this.
That media server needs to interact with the signaling server and the STUN/TURN server.
Sure, you can use the signaling capabilities of the media server, but they aren’t really meant for that, and my own suggestion is not to put the media server publicly out there for everything – have it controlled internally in your service. It just doesn’t make architectural sense to me.
So you’ll need to have it interacting with your signaling server. Check that they both share similar paradigms and notions, otherwise, this in itself is going to be quite a headache.
While at it – check that there’s easy integration between the media server you’re selecting and the STUN/TURN server that you’ve decided to use. This one is usually simple enough, but just make sure you’re not in for a surprise here.

9. Is the License Right For You?
BSD? MIT? APL? GPL? AGPL?
What license does this open source WebRTC media server framework come with, exactly?
Interestingly, some projects switch their license somewhere along the way. Jitsi did it after its acquisition by Atlassian – moving from LGPL to a more permissive APL.
What your business model looks like, and the way you plan to deploy the service, are going to affect the types of open source licenses you will want – and will be able – to adopt inside your service.
There are different types of free when it comes to open source licenses.
Every piece of code you pick and use needs to adhere to these internal requirements and constraints that you have, and that also includes the media server. Since the media server isn’t something you’ll be replacing in and out of place like a used battery, my suggestion is to pick something that comes with a permissive open source license – or go for a commercial product instead (it will cost you, but will solve this nagging issue).
I’ve written about open source licenses before – check it out as well.

10. Is Anyone Offering Paid Support For It?
Yes. I know.
You’re using open source to avoid paying anyone.
And yet. If you check many of the successful and well maintained open source projects – especially the small and mid-sized ones – you will see a business model behind them. A way for those who invest their time in it to make a living. In many cases, that business model is in support and custom development.
Having paid support as an option means two things:
- Someone is willing to take ownership and improve this thing, and they are doing it as a day job – not as a hobby
- If you’ll need help ASAP – you can always pay to get it
If no one is offering any paid support, then who is maintaining this code and to what end? What would encourage them to continue with this investment? Will they just decide to stop investing in it next month? Last month? Next year?

Making the Decision
I am not sure about you, but I love making decisions. I really do.
Taking in the requirements and the constraints, understanding that there are always unknowns and partial information. And from there distilling a decision. A decisive selection of what to go with.
You can find more technical information on media servers in this great compilation made by Gustavo Garcia.
After you take the functional requirements that you have, and find a few suitable open source WebRTC media frameworks that can fit these requirements, go over this list.
See how each of them addresses the points raised. It should help you get to the answer you are seeking about which framework to go with.
Towards that goal, I also created a Media Server Framework Selection Sheet. Use it when the need comes to select an open source WebRTC media server framework for your project.

Get the worksheet
The post 10 Tips for Choosing the Right WebRTC Open Source Media Server Framework appeared first on BlogGeek.me.
Last week WWDC was a happy occasion for those who deal with WebRTC. For the first time, we got the official word – and code – of WebRTC on Apple devices – WebRTC iOS and WebRTC Safari support is finally here.
I spent the time since then talking to a couple of developers and managers who either tinkered with Safari 11 already or have to make plans in their products for this development.
I came out a bit undecided about this whole thing. Here is where things stand.

Apple’s Coverage
The WebKit site has its own announcement – as close as we’ll ever get to an official announcement from Apple with any detail it seems.
What I find interesting with this one (go read it – I’ll wait here):
- Google isn’t mentioned (god forbid). But their “LibWebRTC open source framework” is. Laughed my pants off at this one. The lengths companies will go to avoid mentioning one another are ridiculous
- WebRTC in WebKit isn’t mentioned. Two years ago I opined it wouldn’t move the needle. And it seems like it didn’t – or at least not enough for Apple to mention it in this WebKit announcement
- TokBox and BlueJeans are mentioned as early beta. TokBox I understand. Can’t miss one of the dominant players in this space. But BlueJeans? That was a surprise to me
First things first, here are some posts that were published already about Apple’s release of WebRTC Safari (in alphabetical order):
- Peer5 – great for the industry. And how it makes sense when Flash is dying and an alternative low latency solution is needed in the browser
- Quobis – checked their Sippo status. Interesting to see what works and doesn’t “out of the box”
- The New Dial Tone – hold on with the party. Amir Zmora details what’s included in Apple’s WebRTC as we know it now
- TokBox – excited about WebRTC in iOS 11 and Safari. Mostly due to the fact that this solves spontaneous use cases
- WebRTC.is – WebRTC ships in MacOS and iOS 11. Eric Lagerway wakes up after a year of non-activity on his blog
- WebRTC.ventures – WebRTC support in Safari 11. Arin Sime weighs in on what to expect in the short-mid term (=nothing yet), but sees a bright future
I’ve ignored a few generic posts that just indicated WebRTC is out there.
Most relevant Twitter thread on this?
— Saúl Ibarra Corretgé (@saghul) June 6, 2017

Who’s Missing?
Well… the general media.
I haven’t really seen the keynote. I find the Apple keynotes irrelevant to my needs, so I prefer not to waste my time on them. My understanding is that WebRTC appeared there as a word on a slide. And that’s HUGE! Especially considering the fact that none of the general technology media outlets cared to mention it in any way.
Not even the general developers media.
What was mentioned instead were things like the iOS file manager. That seems a lot more transformative.
This isn’t a rant about general media and how they miss WebRTC. It is a rant about how WebRTC as a technology has not caught anyone’s attention outside a little circle of people.
Is it because communications is no longer interesting?
At a large developers event here in Israel that same week, there were 6 different tracks: Architecture, Full Stack, Philosophy, Big Data, IoT, Security. No comms.
Not sure what the answer is…
WebRTC Safari Support
So what did get included in the WebRTC Safari implementation by Apple?
- The basics. GetUserMedia and PeerConnection
- One-on-one voice and video calls seem to work well across browsers AND devices (including things like Safari to Edge)
- Opus is there for audio
- H.264 only for video. There is no VP8 or VP9. More on that later
- The code supports Plan B, if you are into multiparty media routing
- Data Channel is supported, but too buggy to be used apparently
- No screen sharing. Yet
- This is both for the desktop and mobile:
- WebRTC Safari support is there for macOS in the new version of Safari
- WebRTC iOS support is there for iOS 11 – via the Safari web browser, and maybe(?) inside WebViews
This is mostly expected. See below why.

WebKit, Blink and WebRTC
A long time ago, in a galaxy far, far away, there was a browser rendering engine called WebKit. Everyone used WebKit. Especially Google and Apple. Google used WebKit for Chrome. And Apple used WebKit for Safari.
And then one day, in the middle of the development of WebRTC, Google decided enough is enough. They forked WebKit, renamed it to Blink, removed all excess code baggage and never looked back.
Apple never did care about WebRTC. At least not enough to make this thing happen until last week. I hope this is a move forward and a change of pace in Apple’s adoption of WebRTC.
Here’s what I think Apple did:
- Seems like they just took the Google code for WebRTC and hammered at it until it fit nicely back into WebKit (ignoring WebRTC in WebKit in the process)
- How did they modify it? Remove VP8. Add H.264 by hooking it up with the hardware codec in iOS and on the Mac
- And did the rest of the porting work – connecting devices, etc.
- Plan B is there because. Well. Google uses Plan B at the moment. And it just stands to reason that the code Apple had was Plan B
When it comes to WebRTC, the question is one of browser interoperability. There aren’t many browser vendors (I am counting four major ones).
The basics seem to work fine. If you run a simple peer-to-peer call between any of the 4 browsers, you’ll get a call going. Voice and video. The lowest common denominator for that seems to be Opus+H.264 due to Safari. Otherwise, Opus+VP8 would also be a possibility.
The challenge starts when what you’re trying to do is multiparty. While H.264 is supported by all browsers, the ability to use simulcast with H.264 isn’t. At the moment, Chrome doesn’t support simulcast with H.264. The current market perception is that multiparty video without simulcast is meh.
Doing group calling?
- Go for audio only
- Force everyone to use H.264 if you need Safari (either as a general rule or when the specific session has someone using Safari) – and understand that simulcast won’t be available to you
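In practice, “forcing H.264” usually means munging the SDP so the H.264 payload types come first in the video m= line before the offer or answer is applied. Here is a minimal sketch of that idea – the function name and the exact SDP handling are my own illustration, not code from any specific library:

```typescript
// Reorder the codec list in the video m= line so H.264 payload types come
// first. Payload type numbers vary per session, so they are read from the
// a=rtpmap lines rather than hardcoded.
function preferH264(sdp: string): string {
  const lines = sdp.split("\r\n");
  // Collect the payload types advertised as H264
  const h264Pts = lines
    .filter((l) => /^a=rtpmap:\d+ H264\//i.test(l))
    .map((l) => l.match(/^a=rtpmap:(\d+)/)![1]);
  return lines
    .map((l) => {
      if (!l.startsWith("m=video")) return l;
      const parts = l.split(" ");
      const header = parts.slice(0, 3); // "m=video <port> <proto>"
      const pts = parts.slice(3);
      const reordered = [
        ...pts.filter((pt) => h264Pts.includes(pt)),
        ...pts.filter((pt) => !h264Pts.includes(pt)),
      ];
      return [...header, ...reordered].join(" ");
    })
    .join("\r\n");
}
```

You would run the offer’s SDP through something like this before applying it when a Safari endpoint is in the session. A blunt instrument, but as of today SDP munging is still how codec preferences get expressed in the browser.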
Now it is going to become a matter of who blinks first: Google by adding H.264 simulcasting to Chrome; or Apple by adding VP8 to Safari.

The Next Video Codec War
Which leads us to the next frontier in the video codec wars.
If you came in late to the game, then know that we had over 3 years of continuous bickering regarding the mandatory video codec in WebRTC. Here’s the last I wrote about the codec wars when the Alliance of Open Media formed some two years ago.
At the time, both VP8 and H.264 were defined as mandatory video codecs in WebRTC. The trajectory was also quite obvious:
After H.264 and VP8, there was going to be a shift towards VP9 and then a leap towards AV1 – the new video codec currently being defined by the Alliance of Open Media.
Who isn’t in the alliance? Apple.
And it seems that Apple decided to bank on HEVC (H.265) – the successor of H.264. This is true for both iOS and macOS. This is done through hardware acceleration for both the encoder and the decoder, with the purpose of reducing storage requirements for video files and reducing bandwidth requirements for streaming.
But it goes to show where Apple will be going with WebRTC next. They will be adding HEVC support to it, ignoring VP9 altogether.
There’s a hefty cost in taking that route:
- H.264 is simple yet expensive – when you use it, you need to pay royalties to a single patent pool – MPEG-LA
- HEVC is complex AND expensive – when you use it, you need to pay royalties to MULTIPLE patent pools – MPEG-LA, HEVC Advance, Velos Media. Wondering which one you’ll need to pay and how much? Me too
Which is why I think Apple is taking this route in the first place.
Apple has its own patents in HEVC, and is part of the MPEG-LA patent pool.
And it knows royalties are going to be complex and expensive. Which makes this a barrier for other vendors. Especially those who aren’t as vertically integrated – who needs to pay royalties here? The chipset vendor? The OS vendor? The handset manufacturer?
By embedding HEVC in iOS 11 and macOS High Sierra, Apple is doing what it does best – differentiating itself from everyone else based on quality:
- It has hardware acceleration for HEVC. Both encoding and decoding
- It starts using it today on its devices, and “magically” media quality improves and/or storage/network requirements decrease
Google, and Android by extension, won’t be adding HEVC. Google has taken the VP9 route. But with VP9, most solutions are software based – especially the encoder. Which means that using VP9 eats up CPU. Results look just as good as HEVC, but the cost is higher in CPU and memory needs. Which means an “inferior” solution.
Which is exactly what Apple wants and needs. It just doesn’t care if interoperability with other devices requires lowering quality, as the blame will fall on Android and Google, not on Apple.
Don’t expect to see VP9 or AV1 anytime soon in Apple’s roadmap. Not for WebRTC and not for anything else.
Dan Rayburn covers the streaming side (non-WebRTC) of this HEVC decision quite nicely on StreamingMedia.
Oh, and while at it, Jeremy Noring wrote a well-thought-out rant about this lack of support for VP8 and VP9. His suggestion? Go vote for bug #173141 on WebKit. It probably won’t help, but it will make you feel a bit better.
The only way I see this being resolved? Google retracting its support for H.264, blatantly removing it from Chrome until Apple adds VP8 to Safari. I’d be happy to see that happen.
FaceTime, Multiparty and WebRTC
Apple has FaceTime.
And FaceTime probably doesn’t use WebRTC. I am not sure if it ever will.
When Google came out with WebRTC, they had the Hangouts (now Meet) team about a year behind in its adoption of WebRTC as their underlying technology – but the intent and later execution was there.
When Microsoft came out with WebRTC, Skype didn’t support WebRTC. But they did launch Skype for Linux which is built somewhat on top of WebRTC, and Skype for Web which is taking the same route. Call it ORTC instead of WebRTC – they are one and the same as they are set to merge anyway.
Apple? Will they place FaceTime on top of WebRTC? I see no incentive there whatsoever.
Can Cisco change this? Rowan Trollope broke the news, under the headline “Cisco and Apple Announce New Features”, that WebEx and Cisco Spark now work on Safari – no download needed. I’ll translate it for you by adding the missing keyword here – WebRTC. Cisco is using WebRTC to do that. And since its stack is built atop H.264, it got that working on Safari.
Cisco and Apple here is interesting. While Cisco mentions this in the headline as if these new features were done jointly, Apple isn’t really acknowledging it. There’s no quote from an Apple representative, and at the same time, Cisco isn’t mentioned in the WebKit announcement – TokBox and BlueJeans are.
Back to FaceTime. Which is a 1:1 video chat service.
Meanwhile, many are looking into group video chat and other multiparty video configurations.
Will Apple care enough to support it well in WebRTC? Will it move from Plan B to Unified Plan? Will it care about simulcast? Will it invest in SVC? Will it listen and work with Cisco on its multiparty needs for the benefit of us all?
Older Devices
Apple will not be upgrading iPhone 5 devices to iOS 11. That’s a five-year-old device.
In many requirements documents I still see a request to support the iPhone 4.
Will this bump the general audience out there to focus on iPhone 6 and upwards, seeing what Apple is doing as well? Will this mean vendors will need to port WebRTC on their own to support older devices?
Time will tell, but I think that switching to iPhone 6 and above and focusing there makes sense.
Chrome/Firefox support on iOS
Here’s a question for you.
If you use Chrome or Firefox on iOS, and open a URL to a site using WebRTC – will that work?
Here’s the catch.
The reason there was no real browser for iOS that supported WebRTC until today? Apple mandates WebKit as the rendering engine in any browser app that comes to its App Store.
Now that WebKit is getting official WebRTC support – will Chrome and Firefox add WebRTC support to their iOS browsers?
And if they do – you’ll be getting the Apple restrictions. I can just see the WebRTC developer teams at Google and Mozilla cringing at this thought.
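Either way, apps shouldn’t assume anything here – feature detection is the only safe play. A rough sketch (the function name is mine; in a real page you’d pass in `window`):

```javascript
// Sketch: detect whether the current environment exposes the WebRTC
// APIs at all. On iOS, Chrome and Firefox wrap WebKit, so what you're
// really detecting is WebKit's WebRTC support, not Chrome's own.
function hasWebRTC(globalObj) {
  return Boolean(
    globalObj.RTCPeerConnection &&
    globalObj.navigator &&
    globalObj.navigator.mediaDevices &&
    globalObj.navigator.mediaDevices.getUserMedia
  );
}

// In a browser: if (!hasWebRTC(window)) { /* fall back gracefully */ }
```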
This can get really interesting if and when Apple decides to add HEVC support to WebRTC in its WebKit implementation of iOS. You’ll get Chrome on iOS with H.264 and HEVC and Chrome everywhere else with H.264, VP8 and VP9.
Fun times.
What Should Developers Do?
Here’s what you’ve been waiting for. The question I’ve been asked multiple times already:
Do I need to build an app? Should I wait?
My suggestion at the moment is to wait. The question is for what, and until when exactly.
If you are looking for a closed app and planning on developing native, then go with whatever worked for you until today. This news item just isn’t for you.
If you are looking for browser support on iOS, then go with Safari and plan on enabling H.264 video codec in your service. Don’t wait up for VP8.
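In modern browsers you can prefer H.264 via `RTCRtpTransceiver.setCodecPreferences()`; otherwise SDP munging is the fallback. A hedged sketch – the reordering helper is mine, and only the commented-out lines touch actual browser APIs:

```javascript
// Sketch: reorder a codec capability list so H.264 is negotiated first.
// The array logic is plain JavaScript and runs anywhere.
function preferCodec(codecs, mimeType) {
  const wanted = (c) => c.mimeType.toLowerCase() === mimeType.toLowerCase();
  return codecs.filter(wanted).concat(codecs.filter((c) => !wanted(c)));
}

// In a browser (assumed variable names):
// const caps = RTCRtpSender.getCapabilities('video');
// transceiver.setCodecPreferences(preferCodec(caps.codecs, 'video/H264'));
```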
If you want cross-platform app development using HTML5, then wait. WebView WebRTC support in iOS is still an unknown. If it gets there in the coming months, then waiting a bit longer probably won’t make a real difference for you anyway.
My Updated Cheat Sheet
As it is, this change with Safari, iOS and macOS required some updates to my resources.
First to update is the WebRTC Device Cheat Sheet. You can find the updated one on the same download page.
One last thing –
Join me and Philipp Hancke for Virtual Coffee
I planned for a different Virtual Coffee session this month. One about developer tools. It got bumped to July.
The session takes place on Monday, June 19 at 15:30 EDT.
It is free to join, but will not be available later as a recording (unless you are a customer).
And while at it – don’t mix signaling with NAT traversal.
Somehow, many people keep asking this question with different phrasings, taking different angles and approaches to it. The thing is, if you want to build a robust, production-worthy service using WebRTC, you need to split these three entities.
If you haven’t already, then I suggest you check out my free 3-part video mini course on WebRTC servers.
Now, let’s dive into the details a bit –
Signaling Servers
Signaling servers are something we all have in our WebRTC products.
Because without them there’s no call. At all. Not even a Hello World example.
It is that simple.
You can co-locate the signaling server with your application server.
Here are a few things that you probably surmised about these servers already:
- You can scale a single server to handle 1,000s or even 100,000s of connections and sessions in parallel
- These servers must maintain state for each user connected to them, making them hard to scale out
- Oftentimes, decisions that take place in these servers rely on external databases
- Latency of a couple hundred milliseconds is fine for these servers, but it is rather easy to be abusive and have that blown out of proportion if not designed and implemented properly (a few high profile services that I use daily come to mind here)
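To make the statefulness point concrete, here’s a toy in-memory signaling relay in Node.js. All names here are made up for illustration – a real one would sit behind WebSockets, handle reconnects, and likely lean on that external database:

```javascript
// Toy signaling relay: holds per-user state in memory, which is exactly
// what makes signaling servers hard to scale out. Illustrative only.
class SignalingRelay {
  constructor() {
    this.peers = new Map(); // userId -> delivery callback (the "connection")
  }
  connect(userId, deliver) {
    this.peers.set(userId, deliver); // state accumulates per connected user
  }
  disconnect(userId) {
    this.peers.delete(userId);
  }
  // Forward an SDP offer/answer or ICE candidate to another user.
  relay(fromId, toId, payload) {
    const deliver = this.peers.get(toId);
    if (!deliver) return false; // target isn't connected to THIS server
    deliver({ from: fromId, payload });
    return true;
  }
}
```

That `peers` map is the scale-out problem in a nutshell: once users spread across multiple servers, the map has to be shared or routed around.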
Did I mention signaling servers are written in higher level languages? Java, Node.js, Rails, Python, PHP (god forbid), …
NAT Traversal Servers
STUN and TURN are what I mean here.
And yes, we usually cram STUN in along with TURN. TURN is the resource hog of the two, but STUN can be attached to the same server simply because they both have the same general purpose in life (getting the media flowing properly).
This is why I will ignore STUN here and focus on TURN.
Sometimes, people forget TURN. They do so because WebRTC works great between two browser tabs, or between two people in the same office, without the need for TURN – and putting in Google’s STUN server URL is just so simple to do… that this is how they “ship” the product. Until all hell breaks loose.
TURN ends up relaying media between session participants. It does that when the participants can’t reach each other directly for one reason or another. This kind of a relay mechanism dictates two things:
- TURN will eat up bandwidth. And a lot of it
- Your preference is to place your TURN server as close to the participant as possible. It is the only way to improve media quality and reduce latency, as from that TURN server onwards you have more control over the network (you can pay for better routes, for example)
While you might not need many TURN servers, you probably want one at each availability zone of the cloud provider you are using.
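Wiring your own TURN servers into WebRTC is just a configuration object on the client. A sketch with placeholder hostnames and credentials (not real servers):

```javascript
// Sketch: an RTCPeerConnection configuration pointing at your own
// STUN + TURN deployment instead of Google's public STUN server.
// The hostname, username and credential below are placeholders.
const iceConfig = {
  iceServers: [
    { urls: 'stun:turn.example.com:3478' },
    {
      urls: [
        'turn:turn.example.com:3478?transport=udp',
        'turn:turn.example.com:443?transport=tcp', // helps behind strict firewalls
      ],
      username: 'demo-user',
      credential: 'demo-secret',
    },
  ],
};

// In a browser: const pc = new RTCPeerConnection(iceConfig);
```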
Oh – and most NAT traversal servers I know are written in C/C++.
Media Servers
Media Servers are optional. So much so that they aren’t really a part of the specification – they’re just something you’d add in order to support certain functions. Group calls and recording are good examples of features that almost always translate into needing media servers.
The problem is that media servers are resource hogs compared to any of the other servers you’ll be needing with WebRTC.
This means that they end up scaling quite differently – a lot faster to be exact. And when they fail or crash (which happens), you still want to be able to reconnect the session nicely in front of the customer.
But the main thing is that they have different specs than the other servers here.
Which is why in most cases, media servers are placed in “isolation”.
There’s a point in placing media servers co-located with TURN servers – they scale somewhat together when TURN is needed. But I am not in favor of this approach most times, because TURN is a lot more Internet facing than the media server. And while I haven’t seen any publicity around hackers attacking media servers, it is probably only a matter of time.
And guess what? Media Servers? They are usually implemented in C/C++. They say it’s for speed.
Why Split them up?
Because they are different.
They serve different purposes.
And most likely, they need to be located in different parts of your deployment.
So just don’t mix them. Place them on separate machines. Or VMs. Or Docker containers. Or whatever. Just have them logically separated, and be prepared to separate them physically when the need arises.
If you want to understand more about WebRTC servers, then try out my free WebRTC server side mini course. You won’t regret it.
The post With WebRTC, Don’t Never Ever Mix Media and Signaling appeared first on BlogGeek.me.
Twilio’s Jeff Lawson had a really interesting keynote at their Signal event. I think Twilio is trying to redefine what CPaaS is. If this works for them, it will make it doubly hard for their competitors.
This is going to be long, as the keynote was long and packed full of information and details that pave the road to what CPaaS is going to be in 2020.
I suggest you watch this keynote yourself –
What I loved the most? The beginning, where Jeff refers to code as making art. I have to agree. In my developer days, that was the feeling. Coding was like building with lego bricks without the instructions or sitting down to paint on T-shirts (yes – I did that in my youth). When a CEO of a company talks about coding as art and you see he truly believes it – you know that what that company is doing must be… art.
Before we Begin
One term you didn’t hear at the keynote:
One term that was there every other slide:
This was about developers, who is the buyer and how software APIs are everywhere.
It was also about how CPaaS is changing and Twilio is now much bigger than that – in the traditional sense of what CPaaS means.
It wasn’t said out loud, but the low level APIs that everyone is haggling over – SMS and voice – are nice, but not where the future lies.
Twilio by the Numbers
The numbers game was reserved for the first 13 minutes of the keynote, where Jeff asserted Twilio’s distinct leadership in this market:
- 28 billion interactions (a year)
- 70 countries (with numbers in them) today; plans to grow to numbers in 100 countries “by this summer”
- Over 900 employees; Over 400 in R&D
- 30,000 annual deployments (the number of times a year Twilio introduces new releases… in a continuous delivery mechanism to an operational cloud service) – this enables shipping faster with fewer mistakes. The result? 3.5 days between new product or feature launches
- 1.6 million developer accounts on Twilio; 600,000 new accounts in the past year; doubling YoY
- 99.999% API availability – and 99.999% success rate
- 0 APIs killed
More about the last two bullets later.
Here’s what Twilio deployed in the past year:
To me, this is becoming hard to follow and grasp, especially when I need to look at other vendors as well.
If you look at it, you’ll see that Twilio has been working hard in multiple vectors. The main ones are Enterprise, IP communications and “legacy” telephony.
The main messages?
- Twilio is the largest player in the communication API space
- Twilio is stable and solid for the enterprise
- Twilio is globally available and rapidly expanding to new countries
- Twilio is fast to introduce new features
All this boils down to stating that a competitive advantage can be best achieved on top of Twilio.
Twilio’s New Layering Model
If you’ve been watching this space, you might have noticed that I tend to use this model to explain CPaaS feature sets:
And this is how Jeff explained it on stage in Twilio’s Signal event 2016:
Building blocks. Unrelated. Could have been placed horizontally one next to the other to get the same concept. But piling them on top of each other is great – it shows there’s lots and lots of services and features to use.
2017 brings with it a change in the paradigm and a new layers model that Jeff explained, and was later expanded with more details:
The funny thing is that this reminded me of how we explained the portfolio and API layers in our VoIP products at RADVISION more than 10 years ago. It is great to see how this translates well when shifting from on premise APIs to cloud APIs. But I digress.
Back to the layering model.
The Super Network wasn’t given much attention this time around. There were announcements and improvements in this area, but these are a given by now. For those who wish to outmaneuver Twilio by offering a better network – that’s going to be tough without the layers above it.
Then there’s the Programmable Communications Cloud, which is where most of the CPaaS vendors are. This is what I drew as my own perspective of CPaaS services. The names have changed a bit for Twilio’s services – we’ve got Programmable Chat now instead of IP Messaging. SMS has 3 separate building blocks here instead of one, and the baseline one is called Programmable SMS – keeping the lower level Communications APIs with a nice naming convention of Programmable X.
The interesting part of this story comes in the Engagement Cloud. Jeff made a point of explaining the three aspects of it: Systems, Departments and Individuals. And the thing about the Engagement Cloud is that services there are actually best practices – they aren’t “functional” in their nature. So Twilio are referring to the APIs in this layer as Declarative APIs.
The Engagement Cloud
The main difference between what’s in the Engagement Cloud and the Programmable Communications Cloud? In the Programmable Communications Cloud you know as a developer what will happen – you ask to send an SMS and the SMS is sent. With the Engagement Cloud, you ask for a message to reach someone – and you don’t really care how it is done – just that it will be done in any channel that fits best.
No Channel To Rule Them All
What is that “any channel that fits best”?
That’s based on what Twilio decides in the modules they offer in the Engagement Cloud, and it is where the words “best practices” were used during the event.
Best practices are powerful. As a supplier, they show you know your customer’s business to the point where you can assist them beyond just giving them the thing they think they need. It often places you as a trusted advisor – the one to go to when deciding what to do next. After all – you own the best practices, so why not follow them?
It is also where the most value is to be made moving forward.
SMS is probably still king when it comes to revenue in CPaaS. Not only for Twilio, but for all players in the market. And while this is nice and true, it is also a real threat to them all:
Yes. SMS is growing in use.
Yes. The stupid term A2P (Application 2 Person) is growing rapidly and it is done using SMS.
Yes. People prefer that over installing apps, receiving emails and getting push notifications.
Yes. People do read SMS messages. But I am not sure if they trust them.
Here’s a quick story for you.
I use them once in a while. I was just planning a trip with the family for July. Found the dates. Booked the flights. Found an Airbnb to stay at. Reserved a place – and was asked if I am cool with push notifications. I clicked yes. And here’s what I got the next moment on my phone:
Businesses might be recommended to use SMS to reach their customers, but the price of SMS urges businesses to seek other, cheaper channels of communications at the same time.
There is no money to be had in Communications APIs in the long term.
There is already a price war at this level. Vendors trying to be “cheaper than X”. Developers complaining about the high prices of CPaaS, not understanding the real costs of developing and maintaining such systems.
What’s in Twilio’s Engagement Cloud
Which is where the Engagement Cloud comes in – or more accurately – the best practices and smarts on top of just calling communications APIs.
Twilio are now offering 4 APIs in that domain:
- Authy – handling authentication type scenarios, including but not limited to the classic 2FA
- Notify – a notification service from applications and services to users. It started as SMS, continued with the addition of push notifications, but now supports multiple channels
- TaskRouter – a way to connect incoming tasks with workers who can handle them. Can be used to build queuing systems in contact centers
- Proxy – a mechanism to connect people across two separate groups – riders and drivers, for example. Suitable for the sharing economy, or when the “agents” aren’t “employees” or are only loosely “controlled” by the organization
The interesting bit here is that these all started as functional building blocks. But now the stories behind them are all about multi-channel.
SMS is great, but it isn’t the answer.
IP messaging is great, but it isn’t the answer.
Facebook messenger with its billion+ users is great, but it isn’t the answer.
XKCD says it best:
In such a world, programmable communications need to be able to keep track of the best means to reach a person. And so Twilio’s Engagement Cloud is about becoming omnichannel (=everywhere), with the smarts needed to pick and choose the best channel per interaction.
Are we there yet with the current Twilio offering? I don’t know. But the positioning, intent, roadmap and vision are crystal clear. And with Twilio’s current speed of execution, it is going to happen sooner rather than later.
Vendor Lock-in
The great thing about this layer of Engagement Cloud for Twilio is that it is going to be hard to replace once you start using it.
How hard is it to replace an API that sends out an SMS to a phone number with another API that does the same? Not very.
But how hard is it to replace best practices wrapped inside an API that decides what to do on its own based on context? Harder. And getting even more so as time goes by and that piece of module gets smarter.
Twilio gets a better handle on its customers with the Engagement Cloud. It makes it a lot harder for developers to go for a multi-vendor strategy where they use SMS from whichever CPaaS vendor’s price is lowest.
Developer’s Benefits
Why would developers use these Engagement Cloud modules from Twilio?
Because they save them a ton of time and even a lot more in headaches.
Today, there are 3 huge benefits for developers:
- Logic – the logic in these modules is one you get for “free” here. No need to write it on your own and make the decisions on your own
- Best practices – I know that your contact center is unique, but it probably adheres to similar queuing mechanisms as all the rest. You end up using similar/same best practices as others in your domain, and having these already pre-built is great. Maybe you can even improve your own business simply by using best practices and not do it alone
- Multiple vendors – adding channels means figuring out the APIs and craziness of multiple providers. No fun. You get a single API to rule them all. What’s not to like?
These are areas that developers usually don’t like to deal with. That third one especially is a real pain – after you’ve done it for 2 vendors/channels – connected it to SMS and maybe Facebook Messenger – it feels boring to add the next channel. But now you don’t have to anymore. And don’t get me started on how the APIs there get deprecated and changed over time.
Machine Learning and its CPaaS Role
The Speech Recognition one is a bit less interesting. It is done in partnership with Google, using Google’s engine for it. The smarts on Twilio’s side here are the integration and how they are stitching these speech-to-text capabilities throughout their line of products.
Here, what Twilio is doing is acting in the most Twilio-like way – instead of developing its own speech recognition tech, or using a 3rd party that gets installed on premise, it decided to partner with Google and use their cloud based speech recognition technology. And then make it easier for developers to consume as part of the bigger Twilio offering.
The real story lies elsewhere though – in Twilio Understand.
While Speech Recognition is a functional piece where you feed the machine with voice and get text, Understand is about modeling your use case and then having the machine parse text based on that model.
It is also where Twilio seems to have gone it alone (or embedded a third party internally), building its first real customer-facing Machine Learning based product.
In the past few years we’ve seen huge growth in this space. It started with Big Data, turned Analytics, turned Real Time Analytics, turned Decision Engines, turned Machine Learning.
Companies use these types of capabilities in many ways. Mostly internally, where Twilio has probably been doing that already. But embedding machine learning and big data to make products smarter is where we’re headed. And for me, this is the first instance I’ve seen of a CPaaS vendor taking this route.
It is still a small step, as Understand is another piece of API – a module – that you can use. And just like many of Twilio’s other APIs, you can use it as a building block integrated with its other building blocks. It is a move in the right direction to evolving into something much bigger.
LinkedIn shows that Twilio has several data scientists (the manpower you need for such tasks), though none of them was “kind enough” to offer details of their role or doings at Twilio.
Moving forward, I’d expect Twilio to hire several more people in that domain, beefing up its chops and starting to offer these capabilities elsewhere.
The only competitor at the moment who is seeing that is Cisco Spark – with their recent acquisition of MindMeld.
The great thing about machine learning? People feel and assume that it is super hard. Which means it is worth paying for.
The Enterprise
Here’s where enterprises find a home at Twilio’s Signal 2017 keynote. Best to just show it in slides:
Twilio’s API call success rate. This goes on top of its 99.999% API availability, and this is where Jeff wants you to focus – not on getting an API returning an error (which would still fall under availability) but rather on how many successful results you get from the APIs.
Since Twilio launched, none of its APIs was ever deprecated or killed (I haven’t checked it myself, but this is what Jeff wants you to remember).
Twilio has been working hard on reaching out to enterprises. It introduced an Enterprise plan last year. Implemented ISO 27001. Added Public Key Validation. Introduced support for Enterprise SSO.
All these are great, but what I think resonates here the most are the above two items.
99.999% Success Rate
Enterprises LOVE this.
SLAs. Guarantees. All the rage.
Twilio is operating at 99.999% uptime and is happy to offer a 99.99% guarantee in its enterprise SLA:
For an enterprise to go for Twilio requires two leaps of faith:
- Leaping from on premise to the cloud. Yes. I am told everyone’s migrating to the cloud, but this is still painful
- Picking Twilio as its vendor of choice and not its natural enterprise vendors (known as ISVs)
When you pick Twilio, who’s giving you any guarantees?
Well… Twilio does. At 99.99% while maintaining 99.999% across all of its services to all of its customers.
That’s a powerful message. Especially if you couple it with 30,000 deployments a year.
0 APIs Killed
This one is REALLY interesting.
In the world of APIs where everything is in the cloud with a single copy running (it isn’t, but bear with me a second), having someone say that they offer backward compatibility to all of their APIs is huge.
The number of changes you usually need to follow with APIs on the internet is huge. If you have a product using third party APIs, then every year or two, you need to make some changes to have it continue to work properly – because the APIs you use change.
0 APIs killed means that if an enterprise writes its code today for a project, it won’t need to worry about changes to it due to Twilio. Now, in many cases, enterprises develop a project, launch it, and then are happy to continue with it as-is without further investment (or budget). Which means that this kind of a soft guarantee is important.
How does Twilio do it?
They launch products in beta and run the beta for long periods of time. During that time, they get developers to use and tinker with the APIs, collect feedback and when they feel ready, they officially launch it – at which point the API is deemed stable.
It works well because Twilio has lots and lots of customers, some willing to jump on new offerings and take the risk of having things break a bit during those beta periods.
The end result? 0 APIs killed.
Will it Blend?
I believe it will.
Twilio has introduced a new paradigm for the way it is layering its product offerings.
In the process, it repositioned all of its higher level APIs as the Engagement Cloud. It stitched these APIs to use its lower Programmable Communications APIs, adding business logic and best practices. And it is now looking into machine learning as well.
It is a powerful package with nothing comparable on the market.
Twilio is the best-of-suite approach to CPaaS – offering the largest breadth of support across this space. And it is making sure to offer powerful building blocks that make developers think twice before going for an alternative.
Twilio isn’t for everyone. And other CPaaS vendors do have their place. But increasingly, these places become niches.
Is there more?
This analysis is long, but by no means full.
There were a lot of other aspects of the announcements and Twilio’s moves that require more thought and detail. The pricing model of group Programmable Video is one of them. Third party add-ons in certain domains (especially analytics) are another. Or Twilio heading into the UI layer. And then there’s serverless via Twilio Functions. This isn’t even an exhaustive list…
I won’t be going into these here, but these are things that I am actively looking at.
Contact me if you are interested in understanding more about this space.
Want to make the best decision on the right WebRTC platform for your company? Now you can! Check out my WebRTC PaaS report, written specifically to assist you with this task.
WebRTC for Business People – The 2017 Edition
The revamped WebRTC for Business People report is now published.
I’ve been reviewing the stats of the content here lately, and noticed that people still find their way and download my WebRTC for Business People report.
My only problem with it is that it was already old – and showing it: There’s no serious mention of the advances made by Microsoft Edge towards WebRTC support, nothing about the difficulty of finding experienced WebRTC developers.
But most of all – the use cases it mentioned. Some of them were companies that got acquired. Others were shut down. Some stagnated in place, or are now on life support.
That’s not a good way to introduce someone to the topic of WebRTC.
So a rewrite was in order here, which brought me to work with a few sponsors:
They were kind enough to make the investment needed for me to put the time and effort into it.
WebRTC for Business People – what’s in the 2017 edition?
First off – the report is still free. You can download it, print it, read it online – practically do whatever you want with it.
I’ve refreshed the visuals and updated the analysis part with data from 2015 until 2017 (the time that has passed since the last update to the report). Did you know that WebRTC is still growing linearly in all relevant parameters that you can check?
No hype here. Just solid, steady growth. The minor change in the GitHub projects trajectory? That started when Google moved their WebRTC samples and demos from Google Code to GitHub.
I wonder if this will change when Apple adds WebRTC to Safari and iOS.
I removed all the nonsense in the report about SIP and H.323. These protocols still exist, but more often than not, people don’t look at them and compare them with WebRTC – because WebRTC has gone way beyond these signaling protocols.
Oh – yes – and I completely rewrote all the vendor use cases and the segments I look at in this report. Here’s the new set of vendors in the report:
If you are interested in WebRTC and the ecosystem around it, and want to understand how companies are using it today – in real life, making real commercial use of it – then check out this report.
Download the report
Time to start another ongoing project. This time – my Monthly Virtual Coffee sessions about WebRTC, CPaaS, APIs and comms in general.
Some time in 2015-2016, I decided to host Virtual Coffee sessions. Once a month, I’d pick a subject, create a presentation and host a meeting with my customers. All of them. It was open for questions and it was fun. It stopped because… I don’t know. It just did.
Ever since then, I wanted to do something similar. I found I like talking and interacting with people, and I want to do it more.
Which is why I am now announcing the new Virtual Coffee with Tsahi.
Here’s how it will go down:
- The live sessions are free to join. For anyone
- Recording of the session will be available to my customers only
- Topic will be selected and announced a week or more in advance
- This will happen once a month. Hopefully
I won’t be using this blog to publish future sessions – sorry.
I really don’t know…
If you want something specific – drop me a line.
Our 1st Virtual Coffee together
The first topic I want to tackle?
CPaaS, WebRTC, Differentiation and M&A
When? May 23 @ 15:30 EDT
There are over 20 different CPaaS vendors out there, and that number is growing and shrinking at the same time:
- AT&T closing their Enhanced WebRTC APIs
- Cisco acquiring Tropo
- Vonage acquiring Nexmo
- CLX Communication acquiring Sinch
- RingCentral adding APIs to its UCaaS
- TeleSign announcing self service CPaaS, only to get acquired
I want to take the time to review some of these M&A activities, as well as show how different vendors are trying to differentiate themselves from the rest of the crowd.
Oh – if you have questions for this already – just ask them on Crowdcast once you register.
See you there!
What is WebRTC and What is it Good For? This 7-minute video provides a quick introduction to WebRTC and demonstrates why it is growing in importance and popularity.
Covered in this video:
- What is WebRTC?
- Current state of adoption of WebRTC
- Why is it so much more than just a video chat enabler
- The power of “Open Source” in WebRTC
- How WebRTC works
- Five reasons to choose WebRTC
WebRTC is an HTML5 specification that you can use to add real time media communications directly between browsers and devices.
WebRTC enables voice and video communication to work inside web pages.
And you can do that without requiring any plugins to be installed in the browser.
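To get a feel for how little code this takes, here is a minimal sketch of requesting the camera and microphone from a page. The getUserMedia call is the standard API; `buildConstraints()` and `startCamera()` are names I made up for illustration.

```javascript
// Hypothetical helper: build a MediaStreamConstraints object.
// audio is always requested; video optionally, with a resolution hint.
function buildConstraints(wantVideo) {
  return {
    audio: true,
    video: wantVideo
      ? { width: { ideal: 1280 }, height: { ideal: 720 } }
      : false
  };
}

// Ask the browser for devices and show a local preview.
// Note: Chrome requires an HTTPS page for this to work.
function startCamera(videoElement) {
  return navigator.mediaDevices.getUserMedia(buildConstraints(true))
    .then(function (stream) {
      videoElement.srcObject = stream; // local preview of the captured media
      return stream;
    });
}
```

From a page you’d just call `startCamera(document.querySelector("video"))` – that’s the whole “plugin” story in WebRTC.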
WebRTC was announced in 2011 and since then it has steadily grown in popularity and adoption.
By 2016, an estimated 2 billion installed browsers were enabled to work with WebRTC. From a traffic perspective, WebRTC has seen an estimated billion-plus minutes and 500 terabytes of data transmitted every week from browser communications alone. Today, WebRTC is widely popular for video calling, but it is capable of so much more.
A few things worth mentioning:
- WebRTC is also completely free
- It comes as an open source project that has been embedded in browsers, but you can take it and adapt it for your own needs
- This, in turn, has created a vibrant and dynamic ecosystem around WebRTC – a variety of open source projects and frameworks, as well as commercial offerings from companies that help you build your products
- WebRTC is constantly evolving and improving, so you need to keep an eye on it
It is important to understand where we are coming from: if you wanted to build anything that allowed voice or video calling a few years ago, you most probably used C/C++ for that. This meant long development cycles and higher development costs.
WebRTC today is available in most modern browsers. Chrome, Firefox and Microsoft Edge support it already, while Apple is rumored to be in the process of adding WebRTC to Safari.
You can also take WebRTC and embed it into an application without the need of a browser at all.
Media and access
What WebRTC does is allow access to devices. You can access the microphone of your device, the camera you have on your phone or laptop – or the screen itself. You can capture the screen of the user and have that screen shared or recorded remotely.
Whatever WebRTC does, it does in real time, enabling live interactions.
WebRTC isn’t limited to voice and video. It allows sending any type of arbitrary data.
There are several reasons WebRTC is a great choice for real time communications:
- First of all, WebRTC is an open source project
- It is completely free for commercial or private use, so why not use it?
- Since it is constantly evolving and improving, you are banking on a technology that will serve you for years to come
- WebRTC is a pretty solid choice – it has already created a vibrant ecosystem of different vendors and companies that can assist you with your application
- Most modern browsers today support WebRTC
- This has enabled and empowered the creation of new use cases and business models
- From taking a guitar or a yoga lesson – to medical clowns or group therapy – to hosting large scale professional webinars. WebRTC is capable of serving all of them and more
- WebRTC is not limited to browsers, because it is also available for mobile applications
- The source code is portable and has been used already in a lot of mobile apps
- SDKs are available for both mobile and embedded environments, so you can use WebRTC to run anywhere
- WebRTC is not only for voice or video calling
- It is quite powerful and versatile
- You can use it to build a group calling service, add recording to it, or use it only for data delivery
- It is up to you to decide what to do with WebRTC
- WebRTC takes the notion of a communication service and turns it into a feature inside a different type of service. So now you can take WebRTC and simply add communications to the business processes you need within your application or business
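The data delivery point above can be sketched like this: a data channel opened on a peer connection, plus a tiny chunking helper, since data channel messages should be kept small. The channel label, the helper names, and the 16KB chunk size are my own assumptions for illustration.

```javascript
// Data channel messages should stay small; 16KB is a commonly used
// safe chunk size (an assumption here, not a spec-mandated limit).
var CHUNK_SIZE = 16 * 1024;

// Hypothetical helper: split a string into fixed-size chunks.
function chunkString(data, chunkSize) {
  var size = chunkSize || CHUNK_SIZE;
  var chunks = [];
  for (var i = 0; i < data.length; i += size) {
    chunks.push(data.slice(i, i + size));
  }
  return chunks;
}

// Sketch: open a data channel on an existing RTCPeerConnection
// and send arbitrary data, chunk by chunk, once it opens.
function sendOverChannel(peerConnection, data) {
  var channel = peerConnection.createDataChannel("file-transfer");
  channel.onopen = function () {
    chunkString(data).forEach(function (chunk) {
      channel.send(chunk);
    });
  };
  return channel;
}
```

The receiving side just listens for `ondatachannel` on its own peer connection and reassembles the chunks.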
So what other choice do you really have besides using WebRTC?
The ideas around WebRTC and what you can use it for are limitless. So go on – start building whatever you need and use WebRTC for that.
Embed this video on your own site for free! Just copy and paste the code below…<iframe src="https://player.vimeo.com/video/217448338" width="640" height="360" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>
Contact centers are the main adopters of WebRTC still. This is clearly reflected by my infographic of the WebRTC state of the market 2017.
Motto:“This ‘telephone’ has too many shortcomings to be seriously considered as a means of communication. The device is inherently of no value to us.”
Western Union telegraph company memo, 1877.
Think you know how WebRTC fits in a contact center? Check it out with The Complete WebRTC Contact Center Uses Swipefile.
Get the swipefile
Recently, Jaroslav from iCORD told me the stats they now see from the contact center deployment they have in O2 Czech Republic, who also happen to be their parent company.
How is O2 CZ making use of WebRTC in their Contact Center?
What they did isn’t the classic approach you will see to WebRTC in contact centers, but rather something slightly different. If you are a customer of O2 CZ and you are thinking of making a purchase on their website, you have the option to leave a number for them to immediately get back to you:
And yes – there is also an “exit intent” on that sales page, so if you try to leave the page, this offer will appear as a popup.
How is a phone call related to WebRTC you ask? Well… it isn’t. Unless you factor in the fact that we now know what web page the user is on.
What happens next is that a contact center agent will call the user back, and the user will see something new in his browser – a shared space between him and the agent who just called.
This shared space will enable the agent to browse the same page the customer was on, and move on from there elsewhere. It also includes annotations – the agent can draw or mark things on the screen. One last thing – the user will see the video of the agent, but will not share his video.
See? They even haggle and write down discount prices right on the webpage.
Now, if the interaction started with a phone call, the agent in the contact center can instruct the customer to go to the O2 CZ website and enter a PIN code there – and magically get to the same experience.
Here’s a diagram to show the communication channels we now have between the customer and the contact center agent:
Why this approach?
- Customers continue using the phone. Same way they did in the past
- There’s no reliance on the customer’s camera, microphone or muted speaker volume
- It got stitched right into the existing contact center O2 CZ already had in place
But was this effective? Was it worth the effort?
O2 CZ have been running this contact center service throughout 2016, and took the time to analyze the results. They did so only for sales related calls – the money makers.
Here’s what they found out:
Using this approach is much more efficient than a simple phone call.
Let’s stop right here for a second and let that statement sink in.
We’re talking about a contact center.
Of a mid-sized European carrier (4 million subscribers).
The type of those where I am told over and over would NOT adopt WebRTC because it does not support Internet Explorer 4. Oh. And this specific service falls back to Flash if the customer’s browser doesn’t support WebRTC and even decreases further in feature set to static screenshot and PDF file sharing for those who don’t even support Flash.
And they are already doing it for a full year.
In front of live customers.
Who would have thought that a non-startup company, not located in Silicon Valley and not operated by 16-year olds, would be capable of doing such a ridiculous thing as deploying WebRTC in production – directly where money gets negotiated with customers.
— end of rant —
Back to the results.
Call length on average dropped
It takes 30% less time to negotiate and close a deal than a regular phone call and considerably shorter than text chat. This may seem a bit backwards – the fact that chat takes the longest and a video session the shortest, but that’s the experience of this contact center.
How about succeeding to close a deal and make a sale? WebRTC closes 25% more deals than regular phone calls. Chat is slightly less successful than WebRTC, but more successful than phone. These values were measured on sessions landing at the sales agents’ desks, once the irrelevant and redirected ones were filtered out.
And customer satisfaction? Over 20% rate the service 5 stars at the end of the interaction, and 7% leave a positive textual evaluation of the service. Compared to the traditional IVR system, that’s really high.
Where does this lead us?
- There are many ways to deploy WebRTC in a contact center
- WebRTC is already being used by contact centers – successfully
- Done right, there’s huge value in adding WebRTC to your customer engagement. It adds efficiency and improves customer satisfaction, resulting in higher value to both sales and care
And if you are looking for more information about the O2 CZ deployment details – especially the technical ones, Jaroslav will be happy to have a conversation with you.
The post What to Expect when Deploying WebRTC in Contact Centers? appeared first on BlogGeek.me.
How do you find good WebRTC outsourcing talent?
At least once a week.
That’s about the current rate in which I bump into a hiring or talent question related to WebRTC.
Recently, I got a few calls with companies that went through the process of working with an outsourcing vendor who developed their app and got stuck.
Sometimes it was due to bad blood between the two companies. But more often than not, it was because the company that approached me wasn’t happy with the delivered results. The application that was developed just didn’t work as expected. Looking at some of these apps, it was easy to see that the developers were clueless about WebRTC. Things like wrong NAT traversal configurations (or none at all), or the use of mesh media delivery for large multiparty video sessions, are the most obvious warning signs here.
If I had to think why this is so, my guess is it boils down to three reasons:
- WebRTC is still rather new. 5 years. So there’s still not enough mileage on it for most developers
- It isn’t Web and it isn’t VoIP. But it is also Web and VoIP together. Which means many seem to misunderstand it
- Skilled WebRTC developers are hard to find. Fewer than 12,000 profiles mention the term out of LinkedIn’s now 500 million profiles
When you ask an outsourcing vendor to build you a service, the answer you will get is “sure thing”. And then a price and a timeline. That’s their business, and most would happily use that project as a springboard towards another domain of expertise for them. Many of these outsourcing vendors won’t invest in learning new technologies without a customer paying for that investment.
This means that a lot of the market for WebRTC outsourcing is a market of lemons. Which is why it is so important you check and validate your prospective WebRTC outsourcing vendor before signing an agreement with him.
Picked a WebRTC outsourcing vendor? Here are a few quick telltale signs that will help you determine just how knowledgeable he is about WebRTC:
Get the WebRTC Outsourcing Vendor Signals swipefile
Here are 6 questions to ask yourself before you hire a WebRTC outsourcing vendor.
#1 – Do I know my own requirements?
There are two parts to knowing your requirements from the product:
- Knowing and understanding your business and the interactions you want for it
- Understanding what’s realistic for you with WebRTC to set your expectations accordingly – this also means understanding the costs of certain features versus how important they are for you
For that, I suggest you use something like my WebRTC requirements template.
#2 – Am I their first WebRTC customer?
This is a biggie.
Try. Not. To be. Their FIRST. Customer. That does. WebRTC.
Don’t be their first customer doing WebRTC.
Make sure you’re not the first one they build a WebRTC product for.
Their first WebRTC project? You shouldn’t be the one they do it for.
Got the point?
One more time if you missed it:
I knew that picture (and font) would come in handy some day.
#3 – Has the team working for me built a WebRTC product before?
This one is somewhat tricky, and I must say – a bit new in my list of top questions to a WebRTC outsourcing vendor.
If you’ve been reading this from the start instead of skimming through, you might have seen the number 12,000. That number is higher than the actual number of profiles on LinkedIn that have the term WebRTC in them anywhere. It means that with some of these WebRTC outsourcing vendors, the people placed on your project might not be the ones who know WebRTC – those are already fully booked by other clients – or they might have gone elsewhere (with the demand for WebRTC developers, I wouldn’t be surprised to see them learn the trade at one vendor and move on to the next).
I’ve seen it happen once or twice before.
So make sure that not only does the vendor know WebRTC well – he is also placing the right people on your project. And understand that there are times when not the whole team must know WebRTC to deliver a successful project.
#4 – Can I validate what they build for me?
Developers who don’t know and understand WebRTC won’t be able to deliver a commercial product for you.
If they don’t understand the server side of WebRTC and its implications (check my free mini course on WebRTC server side), then the end result will run great between you and your pal sitting next to you, but when you take it to production it will fail spectacularly.
Things to look for:
- Not configuring NAT traversal properly (public STUN servers, no TURN servers, no TCP or TLS configurations for TURN)
- Using mesh instead of mixing or routing the media (see here) – in plain English, not using a media server in scenarios that beg for it to be used
- Not testing for scale (see here)
- Not checking the result in varying network conditions
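To make the first warning sign concrete, here is a sketch of what a sane ICE configuration might look like – STUN for discovering the public address, plus TURN over UDP, TCP and TLS as fallbacks for restrictive networks. The server URLs and credentials are placeholders, not real servers.

```javascript
// Sketch of a proper NAT traversal configuration for RTCPeerConnection.
// stun/turn.example.com and the credentials are placeholders - in a real
// deployment you run your own TURN servers and issue short-lived credentials.
function buildIceConfig(turnUser, turnPassword) {
  return {
    iceServers: [
      { urls: "stun:stun.example.com:3478" },
      {
        urls: [
          "turn:turn.example.com:3478?transport=udp",
          "turn:turn.example.com:3478?transport=tcp",
          "turns:turn.example.com:443?transport=tcp" // TLS on 443 for strict firewalls
        ],
        username: turnUser,
        credential: turnPassword
      }
    ]
  };
}

// In the browser: new RTCPeerConnection(buildIceConfig("user", "secret"));
```

A config with only a public STUN server and no TURN entries at all is exactly the kind of thing this checklist is meant to catch.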
While some of these can be solved just by more testing (and focused testing – one where the tester actually knows what to look for), there are times when the architecture selected for the product is just all wrong. It should have been apparent from the get go that it wouldn’t hold water.
But anyways – make sure you’ve got a plan in place on what and how to test, to validate that the thing given to you as the finished good is actually the finished good – and not finished for good.
#5 – Should I ask for something On Premise or CPaaS based?
This goes back to #1, but slightly different. Probably should have placed it as #2.
Developing your own product from scratch will be more expensive than using a CPaaS vendor. CPaaS vendors are those vendors that take the whole hassle of real time communications, wrap it with their nice API and manage it all for you (and yes, I wrote a report about them).
Whenever I sit down with an entrepreneur who wants a product, that is where I start when it comes to vendor and technology stack selection – trying to understand his restrictions and requirements. Oftentimes, entrepreneurs are deterred by the seemingly high pricing of CPaaS vendors. Especially at the beginning – when they believe they will get to a million monthly active subscribers within a month. Well… it won’t happen to you. And if it does, a VC or two will probably be happy to foot that bill, understanding you probably found a real boon.
What should you do?
- Read this one. And then read this one from Chris Kranky
- Make your decision on that build vs buy decision (in both you will be building – don’t worry)
- Revisit your initial requirements
- Revisit that vendor you plan on working with
#6 – Who will own this project on my end?
Someone needs to be the owner of this project on your end.
Yes. You have a WebRTC outsourcing vendor developing this thing for you, but you need someone to make that vendor behave and deliver.
That someone needs to understand WebRTC well enough to handle the requirements and the discussions with the vendor on all the issues that will arise along the way.
I’d also recommend having that someone on the payroll and not external.
If you don’t have such a someone, then you have effectively selected yourself for that job. Congrats!
Do Your Homework
If you plan on starting a project that makes use of WebRTC, and you plan on using a WebRTC outsourcing vendor for it, start by doing your homework.
Make sure you have the answers to the questions above.
And if you need help along the way – with the requirements, the architecture, the vendor to select, the process – you know where to find me.
Picked a WebRTC outsourcing vendor? Here are a few quick telltale signs that will help you determine just how knowledgeable he is about WebRTC:
Get the WebRTC Outsourcing Vendor Signals swipefile
The post 6 Questions to Ask Yourself BEFORE Hiring a WebRTC Outsourcing Vendor appeared first on BlogGeek.me.
Security is… complex. Even with WebRTC.
I’ve always been one to praise the security measures placed in WebRTC.
While WebRTC is a secure protocol by nature, it seems that browsers take different approaches to who needs to take responsibility of any additional means of security.
The gist of it:
- WebRTC is secure by default
- Whenever a developer’s mistake can be thwarted by tweaking WebRTC – it gets tweaked
- Whenever a security hole is found, it gets fixed and deployed by the browser vendors faster than most other companies in the industry can even perceive the notion of a threat
Seriously – what’s not to like?
Recently though, I started thinking about it. How do browser vendors think about security? How much do they take it upon themselves to be the guardians of their users? Their trusted guide in the big bad world that is the Internet?
Which brings me to the big one –
Are browser vendors responsible for the actions of their users when it comes to WebRTC?
It seems that they have different approaches and concepts to this one.
Google Chrome
Motto: Users are stupid and should be protected
That’s how I’d put their mindset into words.
getUserMedia
Chrome has long been one to clamp down on where and when WebRTC can be used.
They started off with voice and video working on both HTTP and HTTPS, but on HTTP, access grants to the camera and microphone were never remembered, and required the user’s approval each and every time.
Then they shifted towards HTTPS only. You can’t access the microphone or the camera on an HTTP page.
Persistence
The decision a user makes is persistent. If you granted a domain access to your microphone or camera – Chrome remembers it – for eternity. Your only way of revoking that is by clicking the camera icon in the address bar (if you even notice it):
Oh, and for persistency – Chrome offers you two choices:
- Ask when there’s a need (and Chrome will remember the answer for that domain for you)
- Never ever share your device
No middle-ground here.
Screen sharing
You can share your screen with Chrome.
But it will ask the user each time for his permission.
And to enable screen sharing, you will first need to create a Chrome Extension for your web app and have the user install it. Not a biggie, but a hurdle.
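For illustration, here is a hedged sketch of the extension side of that flow, as it worked at the time: `chrome.desktopCapture` hands back a stream id that the page then feeds into getUserMedia. Only `chrome.desktopCapture.chooseDesktopMedia()` is the actual extension API; the promise wrapper is my own, with the API injected as a parameter so it stays testable outside an extension.

```javascript
// Sketch: wrap chrome.desktopCapture in a promise. In an extension you
// would call getScreenId(chrome.desktopCapture); injecting the API here
// is an illustration choice, not part of the Chrome API itself.
function getScreenId(desktopCapture) {
  return new Promise(function (resolve, reject) {
    desktopCapture.chooseDesktopMedia(["screen", "window"], function (streamId) {
      if (!streamId) {
        // An empty id means the user dismissed the picker
        reject(new Error("User cancelled screen sharing"));
      } else {
        resolve(streamId);
      }
    });
  });
}

// The page then passes the id to getUserMedia, roughly:
// { video: { mandatory: { chromeMediaSource: "desktop",
//                         chromeMediaSourceId: streamId } } }
```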
Now, to publish a Chrome Extension on the Chrome Web Store, you’ll need to pay a small $5 fee.
Why? Fraud – obviously:
You see, screen sharing is considered by Google (and most other browsers) as more of a security threat than camera and microphone access.
By forcing the Chrome Extension, Google raises the bar against abuse, and can theoretically remove any abusive accounts and extensions, with better traceability to their source.
The only real downside? I now have over 10 icons on my Chrome toolbar, and most of them are for screen sharing on different services. Once in a while I remove a few of them to declutter my browser. Yuck.
Mozilla Firefox
Motto: Users are intelligent
Maybe. But not all of humanity. Or even the billion or two that use browsers.
getUserMedia
In Firefox, getUserMedia will work in HTTP.
Not sure if persistence can be configured in Firefox for HTTP websites. I guess it is akin to herd immunity in vaccination: since Chrome is THE browser, developers make sure their WebRTC service works on Chrome (let’s call it Chrome first?), so their service starts by running only on HTTPS anyway.
Persistence
Anyways, Philipp Hancke wrote a great post about getUserMedia and timing with browsers. Here’s how timing looks for appear.in from the moment getUserMedia is called and until it is completed:
Firefox tends to take longer to complete its getUserMedia calls. Philipp attributes it to this little UI design in Firefox:
In Firefox, if you want the decision (allow/disallow) to be persisted, you need to opt in for it. And on appear.in, most people don’t opt in.
This is great, especially for the Don’t Allow option (it is quite a hassle to remove that restriction in Chrome once you’ve decided not to allow such access in a session).
Screen sharing
For screen sharing, Firefox used to have a whitelist of domains you had to register on to get screen sharing to work.
From Firefox 52, this restriction has been removed. Mozilla wrote a post about it, explaining to their millions of users around the world the dangers.
I am not sure about you, but I’ve learned early on as a developer catering to developers that other developers are stupid (if you are a developer, then I am sorry, but bear with me – and read this one while you’re at it). So when I wrote code for developers, I made sure that if they screw things up, we crash spectacularly. The reasoning was, the sooner we crash the faster our customers (who are developers) will fix their bugs – and do that during development – so they won’t get into deadlocks or weird crashes in production that are way harder to find. These were the good old days of C programming.
Now… if developers are stupid, then what would mere users do about their understanding of security and threats?
In Firefox, they need to read and understand that yellowish warning when all they want to do is share their screen now – after all – people are waiting for them to do so in the session already.
With such a warning… I am not sure I am going to be in a trusting mood no matter the site.
While I mostly prefer Firefox’s approach for getUserMedia permissions, I think Chrome does a better job on screen sharing with its extensions mechanism.
Microsoft Edge
Microsoft Edge has started to support WebRTC (finally).
While I am in the process of installing my Creators Update (where I am promised proper support for WebRTC), it will take more time than I have to get some nice screenshots of what Edge is doing.
So I asked Philipp Hancke (like I do about these things).
Here’s what I got:
- Edge enables persistence for getUserMedia
- It has a model similar to Firefox – you need to opt in for persistency
- It doesn’t support screen sharing yet
Download the WebRTC Device Cheat Sheet to learn more on how to get WebRTC to as many devices and environments as possible.
Are Browser Vendors Responsible for Our WebRTC Actions?
Yes they are.
In the same way that browser vendors are pushing HTTPS everywhere, removing Flash from the web, protecting against known phishing sites, etc., they need to also protect users from the abuse of WebRTC.
The first step is not allowing developers to do stupid (by forcing encryption and DTLS-SRTP, for example). The second, and just as important, is not allowing users to do stupid.
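You can actually see that enforced encryption in any SDP offer a browser generates – it shows up as the DTLS-SRTP media profile and a certificate fingerprint. This checker is a hypothetical sketch of my own, not an official API:

```javascript
// Sketch: verify an SDP blob carries the markers of DTLS-SRTP.
// Browsers won't generate unencrypted offers, so both markers
// should always be present in a real offer.
function isEncryptedOffer(sdp) {
  var hasSecureProfile = sdp.indexOf("RTP/SAVPF") !== -1;   // encrypted RTP profile
  var hasFingerprint = sdp.indexOf("a=fingerprint:") !== -1; // DTLS certificate hash
  return hasSecureProfile && hasFingerprint;
}
```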
The post Should Browser Vendors be Responsible for their User’s WebRTC Actions? appeared first on BlogGeek.me.
And I have a couple of bonuses waiting for you in this WebRTC course launch.
I’ve been thinking lately about how to make this course available throughout the year, but still “launch” it as a live program once or twice every year. The idea here is to get as many people as possible into the course and improve our current market state (which is rather abysmal):
I always say that WebRTC sits between Web and VoIP, but I guess this says it best.
You can find a million people whose profiles contain either “VoIP” or “HTML5”. If you go into specifics, you’ll find hundreds of thousands of people with either “SIP” or “Node.js”. But “WebRTC”? Only 11,874 righteous people. We’re a pretty small industry. And those with enough understanding and knowledge of WebRTC? Probably fewer than that.
What are people challenged with?
The request that comes up almost every time someone contacts me through the blog? Finding an experienced WebRTC developer. Here are a few “sound bites” from the emails I am getting:
if we were to hire someone to build our own platform – what qualifications in a programmer would I need to look for?!!
We are needing to develop video chat and having a difficult time finding a qualified developer to create this
I am seeking a WebRTC engineer to do a peer review on a WebRTC app I had developed in oversees (west Russia.)
A couple of thoughts about this
- If you are a developer and you know WebRTC well, then your talents are in high demand – and if you aren’t conversant in WebRTC, this can be an opportunity for you to learn and grow
- If you are an employer and you need someone to build a real time comms product, you’re going to be hard pressed to find good talent. Your three best choices are:
- Outsource the whole project to a company who is skilled in WebRTC
- Hire a freelancer to help your team with the WebRTC parts
- Grow your in-house team to make them skilled with WebRTC
- If you are an outsourcing vendor and you have WebRTC talent, then you’ve got a different set of challenges:
- The more projects you take, the more WebRTC talent you need, which means you are back to the hiring challenge as anyone else
- Your best WebRTC talent is always in high demand outside, getting job proposals and wondering how happy they are (so you have a retention issue on your hands, which gets worse due to the high demand for the skillset you are nurturing)
And since the market is so slim on resources (around 12,000 people know WebRTC out of a million who know VoIP – when all VoIP projects are adding WebRTC these days), demand and supply don’t match.
My WebRTC course and its bonuses
Tomorrow, my Advanced WebRTC Architecture course officially launches. If you haven’t enrolled already, then you should seriously consider doing so.
The previous round had almost 100 students going through it with some very positive feedback.
There are going to be a few bonus materials that I will be giving to anyone who enrolls today (or already enrolled):
#1 – 2 live lessons
There are going to be 2 special live lessons taking place. They will be recorded for those who can’t join live. But the lessons as well as the recordings will only be available as part of the course bonuses.
LIVE Lesson 1: Philipp Hancke – Video Quality in WebRTC: The audio and video quality WebRTC provides is amazing. Well, most of the time at least. Sometimes the video gets pixelated and the audio even starts dropping out. What is going on here, and why is bandwidth estimation still a problem?
LIVE Lesson 2: Bradley T. Hughes – How to deploy TURN on AWS? TURN servers are boring. They do nothing but relay data. However, they are necessary in WebRTC. Here’s how appear.in’s global TURN infrastructure works – and what you should think of when deploying your own.
2 live lessons.
With top industry experts.
Recorded and available only for you.
#2 – The Perfect WebRTC Developer Profile ebook
Recently I’ve been asked multiple times about CVs and profiles and stuff. It goes both ways:
- Recruiters want to know what experience to look for in order to find experienced WebRTC developers
- Developers want to know what to learn and put in their CV to be attractive
I had my own thoughts about it, but decided to take a different route on this one. I went and asked top developers and “recruiters” who have been working with WebRTC for quite some time now. I asked them about the ideal WebRTC developer and what they’d look for in a CV. Collected the answers and created an ebook out of it:
Who’s in there? Amir Zmora, Arin Sime, Chad Hart, Emil Ivov, Gustavo García, Iñaki Baz Castillo and Philipp Hancke.
You’ll get to see what they think about WebRTC developers and what it means to be a WebRTC professional.
#3 – WebRTC Course FAQ
There are a lot of popular questions out there about WebRTC. You can find them lurking on the discuss-webrtc forum, Stack Overflow, Quora and elsewhere. But what are the answers? And how should you go about finding them?
What I did in the past few weeks was collect questions and map them to the course lessons. To these questions I provided short and clear answers for you, packaging it all in a neat document.
Now, you can use these questions to tackle specific issues you bump into – or to check how much you understood from the lessons of the course. Hell – if you need to recruit someone – you might as well use them as good questions to gauge experience.
What if you are not sure?
Besides looking at the testimonials from previous students, I can suggest checking out two things:
- My free WebRTC server side mini-course. You can expect this kind of content in the course itself, just on a deeper level, on a lot more WebRTC related topics AND with the option of asking questions on the online course forum or during the live Office Hours
- Join me for the WebRTC Course AMA on Wednesday this week. I will be answering any questions related to WebRTC or the course, so you can make your decision about enrolling in the course (or just get some free advice for your current project)
Bonuses will go away in 48 hours.
After that, the only price plan available for the course will be the Plus price plan and it will only include the Office Hours for the initial duration of this course.
Have questions about my course? Here’s a WebRTC Course AMA for you.
Later this week, I will be opening my Advanced WebRTC Architecture course for enrollment again.
Last year, I decided to launch a course to teach WebRTC. Something different than just going through the WebRTC APIs or explaining the network specification. The end result? Almost 100 people enrolled and went through the course (!) And more than that – people seemed to be genuinely satisfied with it (!!)
It was fun, so it is time to do it again.
While I am changing and adding stuff to the course, the baseline material is going to stay the same – most of it is “timeless” anyway.
I am adding a couple of things in this round, and I want to mention two of them here:
#1 – Corporate Plans
The course now has a corporate plan, for larger teams who need to use WebRTC. I’ve got a couple of companies already enrolled to it, which is great.
Corporate plans include a private Slack forum for Q&A alongside the course’s forum. They also include a corporate badge that you can use on your own site, along with your logo on my own site as a Corporate Partner.
If you want to learn more about the corporate plans, check out the course syllabus (PDF).
#2 – Course AMA
Philipp forced my hand on this one…
— Philipp Hancke (@HCornflower) April 7, 2017
Only thing left to do is…
I am trying to make this the best place for people to get their WebRTC education.
For those who aren’t sure yet, I’ll be hosting a WebRTC Course AMA, where you can Ask Me Anything. About the course. About WebRTC. About me. About the weather (though I know nothing interesting about the weather).
The WebRTC Course AMA is free to attend. It will be part webinar, part Q&A, but mostly fun.
Philipp – you are hereby cordially invited to join as well
Register for the WebRTC Course AMA – and even write down your questions on the event’s page right now – no need to wait until the 19th for that!
#3 – A few more launch bonuses
For those who end up enrolling early, I’ll have a few additional launch bonuses, but that’s for later.
On a personal note, today is Passover here in Israel. If I seemed somewhat “off” in the past couple of days (or will seem like that in the coming days), then it probably has to do with me eating too much food and spending some time with my family.
The post My Advanced WebRTC Architecture Course is back with an AMA appeared first on BlogGeek.me.
No such thing as free lunch. Or a free TURN server.
It is now 2017 and WebRTC has been with us for over 5 years. You’d think that by now people would know enough about WebRTC for the noob questions to go away. But that just isn’t the case.
Want to learn more about WebRTC server requirements and specifications? Enroll now in my 3-part video mini-course for free:
One question that comes up from time to time is why doesn’t Google (or anyone else for that matter) offer a free TURN server?
Besides the fact that you shouldn’t be using the free STUN or TURN servers that are out there – simply because you have zero way to control them when things go wrong – let’s first understand the difference between these two servers. Or more accurately, protocols, since STUN and TURN usually end up being deployed together.
How STUN works
The illustration below should give you the gist of how STUN works:
When STUN is used, the browser or any other WebRTC enabled device sends out a message to the STUN server asking it “who am I?”. The idea is that STUN is used to find out your public IP address. This is something your machine doesn’t know on its own, as this “allocation” is done by the NAT you are behind (and you will almost always be behind a NAT). That information is also dynamic in nature – you can’t really rely on the same answer being received each time – or on the pinhole generated by the query itself staying open.
This is a simple question that the STUN server can provide a single answer for. Furthermore, this takes place over UDP, making it lightweight and quick – not even requiring a longstanding connection or any context to be kept in place.
Once the browser has the answer, it can share it, and if all else works as expected, it will be receiving media directly.
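To make the “who am I?” answer concrete, here’s a small sketch of pulling the public address out of an ICE candidate line – the kind the browser emits once the STUN server responds. The sample candidate string below is illustrative, not from a real session:

```javascript
// Sketch: extract the public (server reflexive) address from an ICE candidate
// line. A "srflx" candidate carries the public IP:port the STUN server saw.
function publicAddressFromCandidate(candidateLine) {
  // candidate format: foundation component transport priority ip port 'typ' type ...
  const parts = candidateLine.split(' ');
  const typIndex = parts.indexOf('typ');
  if (typIndex === -1 || parts[typIndex + 1] !== 'srflx') return null;
  return { ip: parts[4], port: Number(parts[5]) };
}

const sample =
  'candidate:842163049 1 udp 1677729535 203.0.113.7 46154 typ srflx raddr 10.0.0.17 rport 46154';
console.log(publicAddressFromCandidate(sample)); // → { ip: '203.0.113.7', port: 46154 }
```

In a real application you’d get these candidate lines from the `onicecandidate` callback of an `RTCPeerConnection` configured with a STUN server.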
The STUN server’s role here was limited to this single question at the beginning.
How TURN works
Here’s how TURN works:
When it comes to TURN, we start with a request for binding – our browser is practically asking the TURN server whether it can be used as a relay point. If the TURN server obliges, it can now be used to receive all media from the other device in the session and relay it to our own browser.
While the initial binding request isn’t taxing (though still more expensive on our TURN server than the query sent to the STUN server), the real issue is the media that gets relayed.
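Incidentally, you can force a session to go through TURN only – handy for verifying your relay actually works and for seeing the relayed-traffic cost firsthand. Here’s a sketch of such a configuration; the server URL and credentials are placeholders, not a real deployment:

```javascript
// Sketch: an RTCPeerConnection configuration that forces all media through
// TURN. turn.example.com and the credentials below are placeholders.
const rtcConfig = {
  iceServers: [{
    urls: ['turn:turn.example.com:443?transport=tcp'],
    username: 'ephemeral-username',
    credential: 'ephemeral-password',
  }],
  // 'relay' drops host and srflx candidates, leaving only relayed ones
  iceTransportPolicy: 'relay',
};
// In the browser you would then do: const pc = new RTCPeerConnection(rtcConfig);
```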
If you take a simple WebRTC video session that gets limited to 500kbps or so, then a 15 minute session will end up eating…
That ends up being over 50MB in traffic. Assuming we do only 10 sessions an hour on average on that TURN server, we end up with 360GB in traffic per month. And that is for quite a small service. It isn’t really expensive, but it adds up once you scale: use more bandwidth per session, have more sessions per hour on average – and you’re going to end up with lots of data traffic.
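The arithmetic behind these numbers, as a quick sanity check (using the same session assumptions as above):

```javascript
// Relayed traffic for a single 500kbps session, 15 minutes long.
const kbps = 500;
const sessionSeconds = 15 * 60;
const bytesPerSession = (kbps * 1000 / 8) * sessionSeconds; // bits/s to bytes
console.log(bytesPerSession / 1e6); // 56.25 MB per session, i.e. "over 50MB"

// 10 sessions/hour, 24 hours a day, 30 days, counting ~50MB per session:
const gbPerMonth = (50 * 10 * 24 * 30) / 1000; // MB to GB
console.log(gbPerMonth); // 360 GB relayed per month
```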
Here’s how a recent stress run we’ve had on testRTC ended up:
For a stress test with 500 participants, split into groups of 5 browsers per multiparty call, running for only 6.5 minutes, we ended up with 52Gb of media traffic in each direction. Less than 10 minutes.
Now think what happens if all that traffic needs to go through a TURN server. And that TURN server is free for all.
Putting it all together
STUN and TURN are drastically different from each other. We need both in real production WebRTC services. And we usually think of them as a single server entity deployed in the backend – for STUN we simply don’t fret about its resource needs, and focus instead on what we need to get TURN running at scale and in multiple geographical locations.
It is also standard practice to clamp down on your TURN server and have credentials configured for it. For WebRTC, these credentials need to be ephemeral in nature – created per session on demand and not per user (as is often the case in SIP).
- If you are wondering why there are no free TURN servers out there, or good code on github that comes with a working TURN server already configured – don’t. It makes no sense for anyone to give that away for free
- If you happen to bump into a TURN server with user/password credentials that work, then please don’t make use of them – someone ends up footing the bill for you – and they are probably doing it without even knowing (unaware of the abuse potential, I am assuming)
- And if you still end up using that TURN server (nasty you) – expect that person to find out at some point and just shut you out of their server – not something you want happening if you have users for your service
Tomorrow, I will be launching a free video mini-course. This course explains what servers you will need to deploy for your WebRTC product, what their machine specifications are, and what tools are out there to assist you in developing them faster.
Want to learn more about WebRTC server requirements and specifications? Enroll now in my 3-part video mini-course for free:
So you’ve been using WebRTC for a while, or even not at all. You might have checked out AppRTC or took a piece of code from github and forked it, running your own server (yay!). But now you’re feeling stuck, unable to become a serious WebRTC developer.
WebRTC developers come in different shapes and sizes. They usually have one of two origin stories:
- They were VoIP developers 10 years ago (that can mean anything from played with the configuration of an Asterisk installation to wrote their own RTP stack)
- They are web developers (which can easily be implementing WordPress sites on top of premium themes, but can just as well be building backends used by the Facebooks of the world)
The challenge though is that WebRTC sits somewhere between these two very different disciplines that are VoIP and Web:
Me? I’ve got a VoIP developer origin story. I wrote my own static-memory, no-recursion implementation of an ASN.1 PER encoder/decoder. Dealt with scaling a UDP/TCP sockets implementation linearly across different operating systems. Handled multi-threading in C code. Low level stuff that most developers today don’t even grok. While these are great starting points, they don’t really offer any way of making the transition to WebRTC.
This isn’t about WebRTC. It is about understanding the different mindsets and approaches of developing VoIP products and developing Internet web applications. And it requires being able to learn new techniques and new ways of thinking.
While I won’t take you through that journey here, I can help you formulate a plan. In this article, we’ll explore together three things you can start doing today to make the jump from VoIP or Web to experienced WebRTC developer.
What Don’t You Know?
To really call yourself a WebRTC developer, you’ll need to get a handle on a wide range of topics:
- WebRTC APIs: That’s quite an understandable requirement from a WebRTC developer
- Backend development: Node.js or some other modern asynchronous development platform
- Networking: TCP, UDP, HTTP, WebSocket and anything in-between these concepts
- Codecs and media processing: How codecs are designed and implemented, what are the algorithms and techniques used to send them over a network – using SRTP. Extra credit for understanding Simulcast and SVC
- Server side media processing: Knowing the different techniques that can be used to support group calling, live broadcasts, recording, etc.
- Troubleshooting and monitoring tools: to include, at a minimum, how to read webrtc-internals dump files and understand ICE failures
- Common frameworks: know the popular frameworks that are out there and what they are good for. It doesn’t hurt to have some understanding of Jitsi, Kurento and Janus for example
With these skills and knowledge base in hand, you’ll be able to create virtually any real time communications product. That might seem like a lot to learn, but I want to share some resources below that you can use to learn about each of these topics relatively quickly. In addition, you don’t have to be an expert in every one of these topics, but as a WebRTC developer, you do need to have some basic familiarity with each and every one of these topics. I know I am not an expert in any of these…
Now that you know what you need to know, let’s look at the top three ways you can learn what you need to learn.
#1 – Learn WebRTC Development by Reading
Subscribing to WebRTC related blogs is one of the ways you can keep your development skills up-to-date
One of the best things you can do to grow in your knowledge about any topic is to read and follow along with posts and tutorials written by other experts in the industry. WebRTC is no different in this regard.
There aren’t many WebRTC blogs out there, but you’ll still want to focus to get maximum value here. Pick just a few quality blogs to follow. I try subscribing to ALL of them, mainly because it is my job to know as much as I can about the WebRTC market AND because I need to curate the WebRTC Weekly. What I do follow closely and make sure I read and understand when it comes to hard core WebRTC development amounts to just a handful of sources:
- webrtcHacks: High quality content about WebRTC for developers. That sums it up…
- Mozilla’s Advancing WebRTC: Mozilla recently started a new blog dedicated to WebRTC. Quality there is top notch and the information is really useful. It covers small issues related to WebRTC and offers code snippets that are very useful
- Philipp Hancke: If there’s anyone who knows… everything (?) about WebRTC it is Philipp. He is methodical in his approach to WebRTC and to analyzing browser behavior. Every time he writes and publishes something, I end up learning at least one new thing
- WebRTC Weekly: Yep. I curate this one with Kranky. This one gives you the pulse of what’s happening with WebRTC, so following it can get you faster to the content that others are writing – especially those who don’t publish as much as I’d like them to
Pick a few blogs that you want to follow and subscribe to them. You don’t have to read everything they publish, but make sure that on a regular basis you read technical articles that stretch and challenge your developer muscles. That’s my approach here, and I don’t even consider myself a developer anymore. You should probably take the next step as well: open a code editor and try the code snippets that the articles you read publish.
#2 – Learn WebRTC Development by Studying
The Advanced WebRTC Architecture course offers a holistic view of WebRTC development
Reading about WebRTC is ongoing maintenance that you must do to keep up-to-date – especially because WebRTC changes all the time and there’s no solid specification out there just yet. To really develop new skills quickly, some formal education is in order.
There are a couple of online courses about WebRTC out there that you can find. Those that I’ve seen focus on the WebRTC APIs piece, which is great to start with. The challenge though is that WebRTC is still not standardized, so things change too quickly for such content to be kept up to date.
Out of these, here are a few places where you can learn extensively about WebRTC:
- Google’s WebRTC Codelab: This was put together by Google as an introductory course on using WebRTC. It will take you through the basic concepts of WebRTC with a fast time-to-results when it comes to getting a first simple proof of concept out there
- WebRTC School: The focus of the WebRTC classes of the WebRTC school is the WebRTC API and how to use it. For those who need to develop tomorrow with WebRTC directly, this is a great resource to begin with
- Advanced WebRTC Architecture: This is the course that I offer here on this site. I decided NOT to focus on the WebRTC APIs and just skim through them. I tried covering more of the backend part in my course and just getting students to a point where they can build their own product architectures with all of its backend glory
- Kranky Geek: for the visual types, I’d suggest checking out the topics curated by Kranky Geek (I am a part of that team). You’ll get high quality content there on specific topics in WebRTC development
Other than WebRTC, make sure to look at other courses as well – things around fullstack web development or Node.js development at the likes of Udemy, Codecademy or Pluralsight.
How about books?
I have a nagging feeling that has been with me for a few months now.
There hasn’t been any new book published about WebRTC in over a year.
I believe June 2015 was the last time a new WebRTC book got published.
Now, if reading and learning from books is still a thing for you, then check out this roundup of WebRTC books – it is still as valid today as it was then.
Oh – and make sure you read High Performance Browser Networking if you are doing anything with web browsers (and especially if you are planning on using WebRTC and cobbling up your own signaling).
#3 – Learn WebRTC Development by Doing
Working on and writing about your projects can further help cement what you read and learn about
Reading about WebRTC development will keep you sharp. Studying WebRTC development will help you develop new skills and give you solid understanding of the technology. But to really grow as a developer you’ve got to find opportunities to put all of that education to good use.
I can think of four different ways you can put your education to good use:
- Build out personal projects – a blog, a personal portfolio, a hobby site. Create personal projects that really stretch you and that you’ll have to figure out as you go. I guess Muaz Khan does this best. I really liked how Brian Ho shared his experience recently on using WebRTC in a client-server web game
- Build products for paying clients – You wouldn’t believe how many vendors are out there looking for capable hands when it comes to WebRTC development
- Write about the new skills you’re learning and publish them on your own blog or better yet – submit it to webrtcHacks. There’s nothing that will ensure you really understand a topic like writing a tutorial about it
- Become a WebRTC professional – Once you have the requisite knowledge and a bit of experience, you might want to join a WebRTC outsourcing company that is actively building products with WebRTC. Here’s how Germán Goldenstein puts it – you really can’t ask for more than this as a developer. It reminded me of what I love so much about developing and working with developers
There’s a paradox in our industry.
For one, WebRTC is stupidly simple (if you compare it to doing the same things before WebRTC). But at the same time, there aren’t a lot of experienced developers out there who know how to use it. Talk to employers who are looking for WebRTC skills and you’ll see how hard it is to come by – most end up growing internal resources into WebRTC developers – sometimes after a really bad experience with an external outsourcing vendor that knows how to build websites or mobile apps, but knows nothing about WebRTC.
If you’re ready to move past copy+paste Hello World implementations from github and become a real WebRTC developer, all you need is a plan and the self-discipline to stick to it.
Your plan should include three core learning activities: reading, studying, and doing. Work on those three activities on a consistent basis and it won’t be long before you’ve left the hobby behind and grown into a WebRTC professional developer.
Learn how to design the best architecture for your WebRTC service in this new Advanced WebRTC Architecture course.
WebRTC, Chrome, Market share. These are all intertwined, but WebRTC isn’t the only reason Chrome is dominant today.
The nagging questions about support for WebRTC in Internet Explorer and Safari refuse to go away. I hear them almost on a daily basis in meetings I have, emails I receive and posts I read. Things being what they are for the past 5 years or so, I have decided in recent months to ask a slightly different question –
If a browser doesn’t support WebRTC – will it hurt the browser’s adoption?
My inclination until recently was that WebRTC doesn’t matter that much. It is a great technology, but it won’t really hurt the market share of browser vendors that decide not to adopt it wholeheartedly.
Want to run WebRTC on anything? Check out my free WebRTC Device Cheat Sheet.
And then I read this story: Safari browser sheds users, mimicking IE
According to California-based analytics vendor Net Applications, in March 2015, an estimated 69% of all Mac owners used Safari to go online. But by last month, that number had dropped to 56%, a drop of 13 percentage points — representing a decline of nearly a fifth of the share of two years prior.
I took the liberty of using StatCounter for my own check, focusing on the desktop browser and operating system (mobile is a whole different opera).
Here’s what you’ll find when you look from the beginning of 2012 and up until today – roughly the time frame of WebRTC’s existence:
- Edge starting to see “some” traction, but a lot less than what you’d expect. I’ve covered that one before
- Internet Explorer is on the decline, with the bitter end already known
- Firefox shedding users
- Safari keeping to around 5% market share – we will get back to this one in a second
- Chrome is the only browser that is increasing its market share, apparently grabbing it from all other browsers
- If you look at Chrome’s market share for WebRTC traffic specifically, I am sure the numbers will look a lot more in favor of Chrome – and not only because Safari and IE don’t support WebRTC
Is Safari really keeping its market share or losing market share? To answer that question, we need to look at the desktop operating systems market share for that same period:
I’ll make it simple for you. In the past 5 years, OS X grew from around 7% to over 11%. That’s a growth of over 50% in its market share.
While at the same time, Safari – available only on OS X AND the default browser on OS X – didn’t grow its market share.
Which means that people using Mac OS X are now more inclined towards using Chrome than they were 5 years ago.
Today, people CHOOSE their browser on the desktop and don’t use the default one provided to them by the operating system.
And when they choose, they more often than not pick Chrome.
- Statistics are just that, and it all depends on what you count – but there’s a trend here, measured over a long period of time, that is hard to ignore
- It depends on regions, as there are areas that are still predominantly IE and countries with people who love Firefox
Why do people choose Chrome? This is a tough question to answer. For me, it is simply the browser I am most comfortable with. Since I switched to it from Firefox years ago, I never looked back – and when I do look back, I just return to Chrome.
If I had to estimate, it has to do with developers. It isn’t quite related to APIs, apps or the creation of developer ecosystems where innovation and value creation happens.
No. It is about developers as early adopters.
The people who spread the word and get their friends and family to switch browsers when things don’t work for them.
The people who end up building websites and working on them daily on… Chrome… which brings it all into a virtuous cycle, as now sites work better on Chrome, so you end up having more users use it, which means developers will target it first, increasing its popularity.
Google has done a lot to get it there. It started by making a lean and mean browser that had tabs crash independently of each other, and from there it grew with the set of rich developer tools it has and the technologies packed into it.
Other browsers have been trying recently to innovate as well, but it seems this hasn’t caught on with the mainstream crowds just yet.
Is this a good thing?
Today, Chrome is the de facto standard for browser behavior.
There are other browsers as well, but their market share is declining – not growing or even stagnating. If this continues, Chrome will become the only game in town on the desktop.
This isn’t a good thing. It gives too much power in the hands of Google.
That said, Chrome is mostly open source (Chromium), it adheres to standards as much as can be expected from a product used by so many people daily.
But they can change their minds at some point in the future.
On the other hand, we’re now all mobile, and there, the browser is a lot less important.
What does that mean for WebRTC?
As stated, Chrome is the de facto standard for browser behavior.
So much so that Edge is doing its best to look like a Chrome browser to developers, making it easier to interoperate with.
Should you develop based on the WebRTC specification? No.
You should develop based on what’s available in Chrome and what is planned for it in the mid term, as well as make use of adapter.js.
There’s the W3C flavor of WebRTC and then there’s the Chrome implementation of WebRTC.
Developers should stick to the Chrome implementation of WebRTC.
If you need WebRTC to work for you, you’ll need to understand how to get it running on any device and browser. My WebRTC Device Cheat Sheet is still as relevant as ever. It’s free, so go ahead and download it.
The post Does WebRTC has a Role in Chrome’s Market Dominance? appeared first on BlogGeek.me.
Is it finally the year of video? Who knows.
Up until recently, there wasn’t much out there for anyone who wanted to really use video in their use case. In the past few months we’ve seen so many announcements and moves that video as a service seems almost commonplace.
Who are these new and old players and what is it that they are doing in 2017?
TokBox
TokBox has been the main player when it comes to multiparty video with WebRTC. In 2012, they got acquired by Telefonica, but were left as an independent entity. This gives them stability of sorts.
Their main competitor – AddLive – got acquired in 2014 and taken off market. Other vendors tried to enter that same niche, but no one caught on in the same way.
What’s interesting is the path that TokBox selected for itself. It is resisting the path of connecting to legacy directly. Their offering doesn’t include PSTN or phone numbers. On the other hand, they have been progressing and expanding their video support into broadcast.
Their recent announcement tells a story of large scale live broadcast:
- 3,000 real-time interactive users. TokBox is probably achieving that by cascading their SFU infrastructure – not an easy feat by any means – especially if you consider the large variety of customers and the dynamics of sessions that they need to endure
- Adding RTMP support. TokBox had HLS support already. HLS makes it easy to stream video at scale, but its latency doesn’t work well for live broadcasts. RTMP does a better job at that – and enables connecting to YouTube Live, Facebook Live and others
It seems that any type of workload that relates to real time and video is where TokBox is.
Twilio
Twilio is getting ready to become a serious video player.
In 2015, almost two years ago, Twilio came out with video chat capabilities. Since then, it stayed mostly in beta, with 1:1 video chat support only.
Last year, Twilio acquired Kurento (or at least some important parts of it, along with its developers). This acquisition of Kurento was about “programmable video” – the ability to do multiparty, but also much more. Last month? Twilio introduced a new Rooms API – a first step in offering multiparty video.
For Twilio, this is a catch-up game, trying to fit the many requirements of video. It starts with multiparty video, moves on to recording, then hybrid support of video/voice/telephony and from there to live broadcast and whoever knows what comes next.
In 2017, Twilio probably won’t be the best of breed solution for video, but hey – if you’re already using them for voice and/or messaging – it makes sense to source video from the same vendor.
Vidyo.io
Vidyo.io is new to CPaaS but they are not new to video.
Vidyo has been working on video conferencing for over a decade. Longer than other CPaaS vendors. They did that in the enterprise, offering multiparty video using SVC technology. This gave them an edge in media quality when competing with enterprise video conferencing giants Cisco and Polycom at the time.
Now they are bringing this technology and their know-how to the cloud and to developers via Vidyo.io. And it doesn’t hurt that they are collaborating with Google on getting SVC into VP9…
They are also an enterprise player, which is where the action seems to be today for CPaaS.
One has to wonder how this is going to affect the other players in this space.
Others?
There are other players out there in CPaaS. Some have offered video from the start while others are adding it onto their voice offering. Notable names include Agora, TrueVoice and VoxImplant. Each with its own flavor. Each with its own story.
How big is this pond and is there room for so many players?
Want to make the best decision on the right WebRTC platform for your company? Now you can! Check out my WebRTC PaaS report, written specifically to assist you with this task.
Is this only about CPaaS?
While we’re on this topic, what about enablers? Vendors or frameworks who offer the ability to use WebRTC video in your own installation – cloud or on prem? There we had only Jitsi in the past, but now?
- We have Jitsi and Kurento as the leading open source frameworks
- There’s mediasoup and medooze, building SFUs
- There’s the Intel platform
- There’s SwitchRTC
- And FrozenMountain, who is working on their own multiparty offering
Here’s the thing though… There is still no one-size-fits-all.
Time for another refresh of my WebRTC CPaaS report.
Call it what you want – WebRTC PaaS, CPaaS, API Platform, Communication API – they all end up doing a couple of things for the industry:
- Simplify and streamline the use of programmable communications that can be embedded virtually anywhere
- Make use of WebRTC as one of their building blocks. Usually a significant one
- Run in the cloud. Though not necessarily and not always
Today, I am launching the update of my Choosing a WebRTC API Platform report. It is now in its 6th edition with almost 4 years behind it already.
Before I begin – a thanks to my sponsor on this one
This time, I have a launch sponsor for the report – one of the new vendors that are profiled in it – Vidyo.
Vidyo recently launched a CPaaS offering that enables you to embed real time video experiences into applications (here’s my recent analysis). While they are not the first in the market to do so, they bring with them some interesting SVC capabilities.
You can read more about SVC in this guest post by Alex Eleftheriadis – Chief Scientist & Co-founder of Vidyo.
SVC is a mechanism by which you can layer a video you are encoding in a way that allows peeling off the layers along the path the video takes – without needing to decode the video. This seemingly innocuous capability brings with it two very powerful capabilities:
- Better multiparty conferencing routing and control – because we can now easily decide in our SFU/router which layers to send based on the dynamic conditions of the call (network, CPU, screen size, etc)
- Better resiliency to bad network conditions – because we can take the lower layer(s) and add things like forward error correction to them to make them more resilient to packet losses (we can’t do it to all layers because that will just eat too much bandwidth)
That second capability is often forgotten – it is what can make or break great video experiences over things like WiFi or 4G networks.
SVC is starting to make its way into WebRTC, and you see developers tinkering with it. Vidyo has been at it longer than most, and has it baked into their product. Vidyo already has SVC as an integrated part of their backend as well as their mobile and PC SDKs. When that will get enabled in the WebRTC part of their offering is a question for them, but you can assume it will have all the bells and whistles required.
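To illustrate the first of those two capabilities, here is a sketch of how an SVC-aware router might pick which layers to forward to each receiver. The layer ladder and bitrates are made-up illustrative numbers, not Vidyo’s actual implementation:

```javascript
// Illustrative SVC layer ladder: each entry is a cumulative decodable point
// (spatial + temporal layers up to here) with the total bitrate it costs.
const layers = [
  { spatial: 0, temporal: 0, kbps: 150 },  // base layer, the most protected one
  { spatial: 0, temporal: 1, kbps: 200 },
  { spatial: 1, temporal: 0, kbps: 500 },
  { spatial: 1, temporal: 1, kbps: 700 },
  { spatial: 2, temporal: 1, kbps: 1500 },
];

// Forward the highest layer that fits this receiver's bandwidth estimate.
// No transcoding needed: upper layers are simply peeled off and dropped.
function pickLayer(availableKbps) {
  let chosen = layers[0]; // never drop below the base layer
  for (const layer of layers) {
    if (layer.kbps <= availableKbps) chosen = layer;
  }
  return chosen;
}

console.log(pickLayer(600)); // → { spatial: 1, temporal: 0, kbps: 500 }
```

The point being that the routing decision is a cheap comparison per receiver – which is what makes SVC-based SFUs attractive for multiparty.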
Anyway, you can learn more about Vidyo.io in this report sample.
This report will guide you through the process of deciding how to tackle the task of developing your product:
- It explains what WebRTC is briefly, and reviews the WebRTC ecosystem actors
- It lists the challenges you will face when developing products that make use of WebRTC
- It explores the various approaches to developing with WebRTC – anything from self-development up to full CPaaS
- It shows you the key performance indicators that can guide you through your selection decision
- It profiles all relevant WebRTC PaaS vendors to make the selection process simpler for you
Having this kind of information can reduce the risks of your project by enabling you to make a more informed decision.What’s new in this report?
You still won’t find marketing BS in this one, such as weird market sizing estimates running to billions of dollars or devices by 2030.
That said, there are 4 important changes I made to the report:
#1 – New Vendors
The main drivers for these report updates are two: major changes in existing vendors (multiparty video support in Twilio for example) and new entrants to the market.
In this update, we have 3 new entrants to the market, and they are all interesting:
- Agora.io, who are now adding WebRTC support to their platform
- TrueVoice, who started with 3D voice and are now introducing video capabilities
- Vidyo.io, who are also sponsoring this report launch – check out their free profile (have I mentioned that already?)
I will be writing some more about the new video players in CPaaS next week.
#2 – Outlier Vendors
As time goes by, vendors and their products change and shift focus. Since my first release of the report this was bound to happen. Which is why I “migrated” some vendor profiles to a new section about outliers.
In some cases, you can use an outlier vendor instead of a CPaaS one, assuming your use case fits what they have on sale. These vendors essentially cover a specific type of behavior (such as contact centers that offer visual assistance, or a click to dial consultation service for independent professionals). They might also target a very niche set of customers that just doesn’t fit what I am looking for in this report.
While they are SaaS companies in nature, they do have APIs and do offer some deep integration capabilities – and they are viable approaches to research for your product.
Two vendors who were profiled in this report in the past now find their home in this section:
- Respoke by Digium, who now focus on Digium’s Asterisk customers and cater to their needs only
- SightCall, who now focus on Visual Assist and no longer cater to the more general purpose CPaaS market
#3 – The Deadpool
And then there are platform vendors that got acquired, shut down or stopped catering to developers altogether. They were once profiled in the report, and are no longer with us.
They now have a whole section for themselves.
The reason I am not removing these profiles? It isn’t due to sentiment. It is because this stuff is important. I believe that understanding the past and how these dynamics work will help you make a better decision. It will give some more insights into the stability and trajectory of the “living” vendors in the report.
Who’s gone to my deadpool section?
- AddLive – they were acquired in 2014 by Snapchat (yes, that company with the successful IPO). They immediately closed their doors to new customers and are shuttering their whole API service this year
- AT&T WebRTC Enhanced APIs – AT&T is still alive and kicking obviously. Their API developer platform is also active. But the WebRTC parts of it were shut down. While no reason was given, it is probably due to low traction
- ooVoo – they tried appealing to developers by opening up their existing platform, only to shut this initiative down recently. I am assuming similar reasons to AT&T’s decision
- OpenClove – I am not really sure what happened to them. The site is live, though copyright is from 2014. Up until last year, I got replies from my contacts there, but no more. There’s no indication if this is alive or dead, which is dead enough for me
- Requestec – who got acquired by Blackboard and taken off the API market
Each vendor section here is frozen in time, at the last point of its update. Reasons for its death have been added.
Hopefully, future updates of this report won’t make this section grow any bigger.
#4 – Multiparty Explainer
It is 2017, and it seems that many of the WebRTC API Platform vendors are now offering multiparty support of one kind or another.
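The difference between these multiparty architectures boils down to how many media streams each client has to send and receive. A rough back-of-the-envelope sketch (the function and its simplifications are mine, video streams only, ignoring simulcast layers):

```python
def stream_counts(participants: int, topology: str) -> dict:
    """Video streams each client sends (up) and receives (down).

    mesh:    every client connects directly to every other client
    mixing:  an MCU decodes everything and sends back one mixed stream
    routing: an SFU takes one upstream and forwards everyone else's media
    """
    n = participants
    if topology == "mesh":
        return {"up": n - 1, "down": n - 1}
    if topology == "mixing":
        return {"up": 1, "down": 1}
    if topology == "routing":
        return {"up": 1, "down": n - 1}
    raise ValueError(f"unknown topology: {topology}")

# A 6-way call: mesh gets expensive on the uplink very quickly,
# mixing keeps clients cheap but burns server CPU, routing sits in between.
for topo in ("mesh", "mixing", "routing"):
    print(topo, stream_counts(6, topo))
```

This is exactly why the choice of topology shows up in pricing and quality: in mesh the client pays (in bandwidth and CPU), in mixing the vendor pays, and in routing the cost is split.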
When it comes to video, there’s a variety of alternative technologies for getting there. These affect the pricing, quality and flexibility of the solution. It was appropriate to add that as an additional factor in the report, which means that there’s now a new appendix explaining the differences between mesh, mixing, routing, simulcast and scalability.
The Timespan Aspect of the report
This is something I added in the previous update and just… updated again in this round.
When I got to updating the vendor profiles this time around, I found this addition really useful for myself – and I try to follow the industry and gauge its heartbeat on a daily basis.
This is a small section added at the end of each vendor’s profile – an “investment” section – an indication of what the vendor introduced to the market since the previous report. This can really show what these vendors are focusing on for growth, and whether they are in the process of pivoting out.
Take for example the investment section I have for SightCall:
Green indicates investment in the API platform. Yellow indicates some investment. White with no text means nothing to write home about. And red means a defocus of investment – putting money into non-API activities.
In this case, it is understandable that after SightCall’s investment in its Visual Assist service (May 2016), which was probably successful for them, they pivoted out of the general API market. Which is also the reason there is no update for it in March 2017.
This is now part of each vendor’s profile in the report, so you can now factor it in with your vendor selection process (I know I would).
How about the price of this report?
The report’s price is $1950 USD. That includes:
- The written report (all 200+ pages of it)
- Online vendor selection matrix to use for comparison purposes
- Yearly updates – you’ll be eligible to receive the next update of the report when it comes out
If you make your buy decision today, you’ll get a 35% discount by using the FASTMOVER coupon.
How do you do that?
Click to purchase the Premium package of the report.
Then on the shopping cart, make sure to indicate the coupon.
Prices go back up tomorrow at midnight, so better purchase this report now.
The post Check the Pulse of WebRTC and CPaaS in my Updated Report appeared first on BlogGeek.me.
Who really knows?
Local or global for CPaaS?
In the past two months I’ve been conducting briefings with a lot of the CPaaS vendors out there. Some of them are now being added to my Choosing a WebRTC API Platform report that is getting a refresh next month.
During these chats, I came to realize there’s a relatively lesser-known type of CPaaS – the localized one.
To me, it was almost always about two types of players: you were either a “telco” or a global player. Telcos were based in their geography of operation, both limited and protected in their environment due to rules and regulations. Global players were the pure XaaS cloud vendors. I even created a nice graphic for that in 2013, comparing telcos to OTTs:
What we see now are mainly UCaaS vendors who are trying to join the CPaaS world. Put simply – vendors offering phone systems as a service to enterprises adding APIs for developers.
There are two reasons why these vendors are local in nature:
- Their offering is still tethered to a phone number for the most part, making them akin to the telco
- They prefer it that way, probably because of where most/all of their business is taking place
For UCaaS, this localization might not be a real issue. Corporations are still thinking locally, even the global ones. You work with people. And use phone numbers, so you tend to follow the “law of the land” in every country – from how you allocate phone numbers to what exactly BYOD means in that country.
The problem starts when you want to serve one of two markets as your customers:
- Large multinational corporations, whose footprint is global
- Small entrepreneurial teams who are spread out globally
That first one always existed. But I believe there are more of them today – smaller-sized companies are reaching out globally, making them “multinational corporations”.
The second? More of them as well, but probably focused on specific industries – high-tech comes to mind immediately.
What changes when moving to CPaaS while staying local?
The first thing you’ll notice with these vendors is that while they may be using AWS for their data centers and are offering WebRTC connectivity and VoIP, their distribution across AWS data centers ends up looking something like this:
This means that WebRTC calling is going to be affected – and not for the better. If I need to get a call going in France (two people in Paris, for example), then they get connected via US East – maybe even routing their media through US East. Which lends itself to a bad user experience.
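To see why relaying media halfway across the world hurts, here is a crude propagation-delay estimate. The distances and fiber speed are my own rough assumptions, and this ignores queuing, real routing paths and processing time:

```python
FIBER_SPEED_KM_PER_MS = 200.0  # light in fiber travels at roughly 2/3 of c

def propagation_ms(distance_km: float) -> float:
    """One-way propagation delay only -- real latency is always higher."""
    return distance_km / FIBER_SPEED_KM_PER_MS

# Two users in Paris, with media relayed through a US East data center.
PARIS_TO_US_EAST_KM = 6_200   # approximate great-circle distance
PARIS_METRO_KM = 20           # a direct, in-city media path

direct = propagation_ms(PARIS_METRO_KM)
relayed = propagation_ms(PARIS_TO_US_EAST_KM) * 2  # Paris -> US East -> Paris

print(f"direct path:  ~{direct:.1f} ms one-way")
print(f"via US East:  ~{relayed:.0f} ms one-way")
```

Even under these optimistic assumptions, the transatlantic detour adds tens of milliseconds each way before the network even gets congested – noticeable in a real-time conversation.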
For UCaaS we might not care, as this would be an outlier – but for CPaaS?
The difference now is that we are API driven in what we do. We build processes around our company to offer programmatic communications. And these tend to be wider spread than the local mindset of corporate communications.
Are we using CPaaS to enable our customers to interact directly with each other? Are our customers as local as our own company is?
The end result
In most cases, the end result is simple. CPaaS is there as an API initiative for the UCaaS vendor.
As companies learn the importance and strength of integrations (see Slack and many of the new startups who offer B2B services and capabilities), they tend to offer APIs on top of their own infrastructure. One that their customers can use to integrate better. This makes CPaaS just that API layer.
In the same way that a movie rating company would offer an API that exposes its ratings, a UCaaS vendor exposes an API that enables communications – and then coins it CPaaS.
Only it may not really be CPaaS – just an API layer on top of its UCaaS offering.
The business models here are also different – they tend to be per seat and not per transaction. They tend to rely on the notion that the customer has already paid for UCaaS, so there’s no need to double dip when they use CPaaS, as long as they do so reasonably.
Does this make it any worse?
It just makes it different, but still something you’d like to try out and see if it fits your needs.
Is this confusing? The whole UCaaS/CPaaS area is murky and mired with doubletalk and marketing speak. It is really hard to dig deeper and understand what you’re getting before trying it out.
Tomorrow I’ll be starting out a short series of emails about the decision making process of build vs buy – building a comms infrastructure versus adopting a CPaaS offering. If you aren’t subscribed to my newsletter, then you should subscribe now, as these emails will not be available here on the blog.
All paths lead to the enterprise.
My report was titled Choosing a WebRTC API Platform vendor. Later, I leaned towards calling it WebRTC PaaS. Last year, a new industry term started to get used a lot – CPaaS – Communications Platform as a Service. These in many cases include WebRTC, which leads me to call the vendors in my report CPaaS from time to time.
I am currently working on the 6th edition of my report. It has come a long way since it was first introduced. A theme that has grown over the years and especially in the past several months is the way vendors are vying towards the enterprise. It makes sense in a way.
Here are 3 different approaches CPaaS vendors are taking when targeting the enterprise.
#1 – Special Pricing
The current notion is that WebRTC is free. This in turn leads vendors in this space toward a race to the bottom when it comes to pricing, which pushes them to look at other alternatives – bringing them towards the enterprise.
Because enterprises are willing to pay. Maybe.
This special pricing for the enterprise means they pay more for a package that may accommodate them a bit better.
Check out for example Twilio’s Enterprise Plan. It starts at $15,000/month, offering more control over the account – user roles, single sign on, events auditing, etc.
It is a great way to grow from the penny-pinching game of all the platform users to some serious numbers. The only question is – would enterprises see the extra value and pay a premium for it? I sure hope they do; otherwise, we may have a big problem.
#2 – Extending UCaaS to CPaaS
The second set of players vying for a place in the enterprise CPaaS game? The UCaaS players.
If you do UCaaS, Unified Communications as a Service, then you already have an infrastructure that can be leveraged for CPaaS. Or at least that’s what the story says.
Who do we have here?
- ooVoo, who went from running a communication service towards a developer play (not an enterprise one, mind you), only to shut it down later
- Cisco Spark, coming up with its own API
- Skype, looking at developers from time to time, trying to get their attention and then failing to give them the tools they need
- Vonage, with its acquisition of Nexmo
- RingCentral, adding a developer platform
- Vidyo, to some extent, with VidyoCloud and Vidyo.io
The challenge here is the change in the nature of the business and the expectations of the customers. Developers aren’t IT. They have different decision processes, different values they live by. Selling to them is different.
#3 – Simplifying and Customizing
There are those who try to simplify things. They build widgets, modules and reference applications on top of their CPaaS. They customize it for the customer. They try to target customers who aren’t developers – assuming that enterprises lack the capacity and willingness to develop.
They augment that with system integrators they work with. Ones who speak and talk the language of the enterprise. Ones who can fill in all the integration gaps.
This is a slow process though. It is elephant hunting at its best – slow compared to the CPaaS game.
Where does this all lead us?
There is no one-size-fits-all.
No silver bullet for winning the enterprise.
But there are a few approaches out there that are worth looking at.
For me? I am just looking at it from the sidelines and documenting this process. It is part of what gets captured in my upcoming WebRTC PaaS report – these changing dynamics. The new entrants, those who exit, and the progress and change that others are experiencing.
Want to make the best decision on the right WebRTC platform for your company? Now you can! Check out my WebRTC PaaS report, written specifically to assist you with this task.
The post When CPaaS Target Enterprises. 3 Different Approaches appeared first on BlogGeek.me.