Time to stop placing things on the internet and start building the internet of things.
We’ve been using that stupid IOT acronym for quite some time. Probably a decade. The notion that every object can be network-enabled, share the data it collects and receive commands remotely is quite exciting. I think we’re still far from that vision.
It isn’t that we’re not making progress. We are. The apartment building I now live in is 3 years old. It is more automated than the previous apartment building I lived in, which was 15 years old. I wouldn’t call it IOT or a smart building quite yet. And I don’t think there’s a simple way to turn a dumb building into a smart one either.
When we moved to our new apartment we renovated a bit. There was an opportunity to add smart-home capabilities to the apartment. There were just a few teeny problems here:
And to top it all, it felt like a one-time undertaking that would be hard or impossible to upgrade or modify later on without a complete overhaul. That wasn’t what I was aiming for.
Mozilla just announced their Things Gateway, which can be installed on a Raspberry Pi 3. It is a rather interesting project, especially since its learnings are then applied to the W3C Web of Things Interest Group with the intent of reducing the fragmentation of IOT. They’ve got their hands full.
IOT today is a patchwork of devices and companies, each trying to become a dominant player. The end result is that we’re living in a world where things can be placed on the internet, but they don’t amount to an internet of things.
Here are a few questions/hurdles that I think we’ll need to answer as an industry before we can reach that vision of IOT.
Security
I am putting security here first. Here’s why:
I’ve seen it happen with VoIP and it is definitely happening today with IOT.
Until this becomes a priority, IOT will not really happen.
Security has many different aspects to it:
Most vendors won’t be able to get these done properly to begin with. And they don’t have any real incentive to do so either.
Standardization
There’s a need for standardization in this space. One that tackles all levels of the IOT food-chain.
Off the top of my head, here are a few areas:
I don’t believe we’ll get this thing standardized properly in our industry for quite some time.
Automation
I’ve seen a lot of rules engines when it comes to IOT. You can program them to create sequences of events – if the presence sensor indicates someone is at home, turn on the lights.
The problem is that you need to program them. This can’t scale.
The other problem is the issue of what to do with all that sensor data? Someone needs to collect it, aggregate it, process it, analyze it and make decisions out of it.
Simple rule engines are nice, but they won’t get us far down the IOT path.
We also need to add machine learning and AI into the mix.
The end result? Probably similar in nature to AWS DeepLens. The only problem is that it needs to be really generic and flexible.
Different Industries, Different Requirements and Ecosystems
There are different markets in IOT. They have different needs and different customers. They will have different ecosystems around them.
In broad strokes, we can split this into consumer and enterprise. Enterprise here includes industrial, smart cities, etc. Consumer is all about the home, the car and the self.
Who will be the players here?
From Smartphones to Smart Speakers
This is where I think we made the most progress.
Up until a year ago, IOT was something you ended up delivering to customers via apps on a smartphone. You purchase a lightbulb, you get an app. You get a new TV, there’s an app. Refrigerator? App.
Amazon Alexa did something miraculous. It moved the discussion over the home from an app towards a stationary home device with voice activation and control. No screen or touch screen needed.
Since then, Google and Apple have joined and voice assistants in the home are all the rage now.
In some ways, I expect this to find its way into the enterprise as well. First via conference rooms and later – who knows?
This is one more piece in the IOT puzzle.
Where do we go from here?
I have no clue.
To me, it seems that we’re still in the era of things on the internet, and we will be there for a lot longer.
The post The Internet of Things or Things on the Internet? appeared first on BlogGeek.me.
There are things you don’t want to do when you are NIH’ing your way to a stellar WebRTC application.
Here’s a true, sad story. This month, the unimaginable happened. Rain (!) dropped from the sky here in Israel. The end of it was that 6 apartments in my building are suffering from moisture due to a leakage from a balcony of the penthouse. Being a new building, we’re at the mercies of the contractor to fix it.
Nothing in the construction market moves fast in Israel – or without threats – so we had to start sending official-sounding letters to the contractor about the leak. I took charge, and immediately said we needed to lawyer up and have a professional assist us in writing the letter from us to the contractor. Others were of the opinion that we could do it on our own, and that we’d need a lawyer only if he were to sign the document directly.
And then it hit me. The reason I wanted to lawyer up is that I see many smart people failing with WebRTC. They are making rookie mistakes, and I didn’t want to make rookie mistakes when it comes to the moisture problems in my apartment.
Why are we Failing with WebRTC?
I am not sure that smart people fail a lot more with WebRTC than they do with other technologies, but it certainly feels that way.
A famous Mark Twain quote goes like this:
“There is no such thing as a new idea. It is impossible. We simply take a lot of old ideas and put them into a sort of mental kaleidoscope. We give them a turn and they make new and curious combinations. We keep on turning and making new combinations indefinitely; but they are the same old pieces of colored glass that have been in use through all the ages.”
Many of the rookie mistakes people make with WebRTC stem from this. WebRTC is exactly that kind of new. It is simply a lot of old ideas meshed into a new and curious combination. So we know it. And we assume we know how to handle ourselves around it.
Entrepreneurs? Skype is 14 years old. It shouldn’t be that hard to build something like Skype today.
VoIP developers? SIP we know. WebRTC is just SIP without the signaling. So we force SIP onto it and we’re done.
Web developers? WebRTC is part of HTML5. A few lines of JS code and we’re practically ready to go live.
Video developers? We can just take the WebRTC video feeds and put them on a CDN. Can’t we?
The result?
My biggest gripe recently is people who decide in 2018 that peerJS is what they need for their WebRTC application. A project with 402 lines of code, last updated in 2015 (!). You can’t use such code with WebRTC. Code older than a year is stale or dead already. WebRTC is still too new and too dynamic.
That said, it isn’t as if you have a choice anymore. Flash is dying, and there’s no other serious alternative to WebRTC. If you’re thinking of adopting WebRTC, then here are five mistakes to avoid.
Mistake #1: Failing to Configure STUN/TURN
You wouldn’t believe how often developers fail to configure NAT traversal servers. Just yesterday I had someone ask me over the chat widget of my website how he can run his application by hosting his signaling and web servers on HostGator without any STUN/TURN servers. It just doesn’t work.
The simple answer is that you can’t – barring some esoteric use cases, you will definitely need STUN servers. And for most use cases, TURN servers will also be mandatory if you want sessions to connect.
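To make this concrete, here is a minimal sketch of passing STUN/TURN servers to a peer connection. The hostnames and credentials are placeholders – you would point these at servers you actually run (or rent).

```javascript
// Minimal sketch: NAT traversal servers are configured when creating the RTCPeerConnection.
// 'turn.example.com', the username and the credential are placeholders.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },           // a public STUN server, fine for testing
    {
      urls: 'turn:turn.example.com:443?transport=tcp',  // your own TURN server
      username: 'user',
      credential: 'secret'
    }
  ]
});
```

Without the TURN entry, sessions between users behind restrictive NATs or firewalls simply won’t connect.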
In the past month, I found myself explaining quite a lot about NAT traversal:
There’s more, but this should get you started.
Mistake #2: Selecting the WRONG Signaling Framework
PeerJS anyone? PeerJS feels like a tourist trap:
With 1,693 stars and 499 forks, PeerJS is one of the most popular WebRTC projects on github. What can go wrong?
Maybe the fact that it is older than the internet?
A WebRTC project that had its last commit 3 years ago can’t be used today.
Same goes for using Muaz Khan’s code snippets and expecting them to be commercial grade, stable, highly scalable products. They’re not. They’re just very useful code snippets.
Planning to use some open source project? Make sure that:
Don’t take the selection process here lightly. Not when it comes to a signaling server and not when it comes to a media server.
Mistake #3: Not Using Media Servers When You Should
I know what you’re thinking. WebRTC is peer-to-peer so there’s no need for servers. Some think that even signaling and web servers aren’t needed – I hope they can explain how participants are going to find each other.
To some, this peer-to-peer concept also means that you can run ridiculously large-scale sessions with no servers carrying the media.
Here are two such “architectures” I come across:
Mesh. It’s great. Don’t assume you can get it to run properly this year or the next. Move on.
Live broadcasting by forwarding content. It can be done, but most probably not the way you expect it to grow to a million users with no infrastructure and zero latency.
For many of the use cases out there, you will need a media server to process and route the media for you. Now that you are aware of it, go search for an open source media server. Or a commercial one.
Mistake #4: Thinking Short-Term
You get an outsourcing vendor. Write him a nice requirements doc. Pay him. Get something implemented. And you’re done.
Not really.
WebRTC is still at its infancy. The spec is changing. Browser implementations are changing. It is all in flux all the time. If you’re going to use WebRTC, either:
WebRTC code rots faster than most other HTML5 code. It will eventually change, but we’re not there yet.
It is also the reason I started testRTC with a few colleagues a few years ago – to help with the lifecycle of WebRTC applications, especially in the area of testing and monitoring.
Mistake #5: Failing to Understand WebRTC
They say assumption is the mother of all mistakes. Google seems to agree with it. Almost.
WebRTC isn’t trivial. It sits somewhere between VoIP and the web. It is new, and the information out there on the Internet about it is scattered and somewhat dynamic (which means lots of it isn’t accurate).
If you plan on using WebRTC, make sure you first understand it and its intricacies. Understand the servers that are needed to deploy a WebRTC application. Understand the signaling mechanisms that are built into WebRTC. Understand how media is processed and sent over the network. Understand the rich ecosystem of solutions that can be used with WebRTC to build a production-ready system.
Lots of things to learn here. Don’t assume you know WebRTC just because you know web development or because you know VoIP or video processing.
If you are looking to seriously learn WebRTC, why not enroll in my Advanced WebRTC Architecture course?
–
What about my apartment? We’ve lawyered up, and now I have someone reviewing and fixing all the official-sounding letters we’re sending out. Hopefully, it will get us to a resolution faster.
The post 5 Mistakes to Avoid When Developing WebRTC Applications appeared first on BlogGeek.me.
For WebRTC, mobile and PC are moving in different directions. On the desktop, WebRTC Electron apps are gaining momentum.
In the good old days, people used to complain that WebRTC isn’t available on all browsers. Mobile was less of an issue for most as mobile application developers port WebRTC and use it natively on both iOS and Android.
How times change.
Need to know where WebRTC is available? Download this free WebRTC Device Cheat Sheet.
Today? All modern browsers support WebRTC. We’ve got Chrome, Firefox, Edge and Safari with official WebRTC implementations.
The challenge? None of the browsers are ready:
What’s a developer to do?
Use adapter.js. Or go for a plugin. Or just ignore a few browsers.
Or maybe. Just maybe you should treat PCs and laptops the same way you do mobile? And build an app.
If that’s what you plan on doing then you’re not alone.
The most popular way to build an app for the desktop is by using Electron. There are other ways, like CEF and actual native development, but Electron is by far the most common approach.
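For those who haven’t touched it, here is roughly what the starting point of such a desktop wrapper looks like – a sketch only, with a placeholder URL standing in for your own web app.

```javascript
// main.js - a minimal Electron main process wrapping an existing WebRTC web app.
// Chromium (and its WebRTC stack) ships inside Electron, so the web app just works.
const { app, BrowserWindow } = require('electron');

function createWindow() {
  const win = new BrowserWindow({ width: 1280, height: 800 });
  win.loadURL('https://app.example.com'); // placeholder - your own web app goes here
}

app.whenReady().then(createWindow);
```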
Here are 3 vendors making use of Electron (and WebRTC) for their desktop application:
#1 – Slack
Slack is a popular team collaboration application. I’ve been using it in the browser for the last 3 years, but switched to their desktop Electron app on both my Ubuntu desktop and my Windows 10 laptop.
Why didn’t I use the app for so long? Because I don’t like installing things.
Why have I installed it now? Because I need to track 3+ Slack accounts in parallel at all times now. This means a tab per Slack account in my browser. With the desktop app, they don’t “eat up” multiple tabs. It isn’t a matter of memory or performance for me. Just one of “aesthetics” – trying to keep my Chrome on a tabs diet.
And that’s how Slack likes it. During the last Kranky Geek, the Slack team gave an interesting presentation about their current plans. It had about a minute dedicated to Electron, around the 2:30 mark of the session:
This recording lacks the Q&A part of the session. In an answer to a question regarding browser support, Andrew MacDonald of Slack said their focus is on their desktop app – not the browser. They make sure everything works on Chrome. They invest less time and effort in the other browsers. And they focus a lot on their Slack desktop application.
It was telling.
If you are looking for desktop-application-only-features in Slack, then besides having a single window for all projects, there’s the collaboration they offer during screen sharing that isn’t available in the browser (yet another reason for me to switch – to check it out).
During that session, at the 2:30 minute mark, Andrew explains why Electron is so useful to Slack, and it is in the domain of cross-platform development and time to market – with their team size, they can’t update as fast as Electron does, so they took it “as is”, along with its built-in WebRTC implementation.
#2 – Discord
Discord is a kind of Slack, but different. A social network targeting gamers. You can also find non-gaming groups there. Discord is doing all it can to get you from the comfort of your browser right into their native application.
Here’s what the homepage looks like:
From the get go their call to action is to either Open Discord (in the browser) or Download for your operating system. On mobile, if you’re curious, the only alternative is to download the app.
Here’s the interesting part, though.
Discord’s green call-to-action buttons suggest you open Discord in the browser. That’s the lower-friction action. You select a user name. Then pick an email and password (or use an unclaimed channel until you add your username and password). And now that you’re signed up for the service, it is time to suggest again that you use their app:
And… if you skip this one, you’ll get a top bar reminder as well (that orange strip at the top):
You can do with Discord almost anything inside the browser, but they really really really want to get you off that damn internet and into their desktop app.
And it is working for them!
#3 – TalkDesk
TalkDesk has its own reason for adopting Electron.
TalkDesk is a contact center solution that integrates with CRMs and third party systems. Towards that goal, you can:
That third option is going the way of the dodo, along with Chrome apps. TalkDesk solved that by introducing Callbar Electron.
What we see here differs slightly from the previous two examples.
Where Slack and Discord try getting people off the web and into their desktop application, TalkDesk is just trying to be everywhere for them. Using HTML5 and Electron means they need not write yet-another-application for the desktop – they can reuse parts of their web app.
They are NOT Alone
There are other vendors I know of that are using Electron for their WebRTC applications. They do it for one of the following reasons:
Add to that CPaaS vendors officially supporting Electron. Vidyo.io and TokBox are such examples. They do it not because they think it is nice, but because there’s customer demand for it.
This shift towards Electron apps makes it harder to estimate the real usage base of WebRTC. If most communications are shifting from the Chrome browser (let’s face it, most WebRTC comms happen in Chrome today if you only care about browsers) towards applications, then the statistics and trends collected by Google about WebRTC use are skewed. That said, it makes Chrome all the more dominant, as Electron use can be attributed back to Chromium.
Expect vendors to continue adopting Electron for their WebRTC applications. This trend is on 🔥.
Need to know where WebRTC is available? Download this free WebRTC Device Cheat Sheet.
The post WebRTC Electron Implementations are on 🔥 appeared first on BlogGeek.me.
Are AI cameras in our future?
In last year’s AWS re:invent event, which took place end of November, Amazon unveiled an interesting product: AWS DeepLens
There’s decent information about this new device on Amazon’s own website but very little of anything else out there. I decided to put my own thoughts on “paper” here as well.
Interested in AI, vision and where it meets communications? I am going to cover this topic in future articles, so you might want to sign up for my newsletter.
What is AWS DeepLens?
AWS DeepLens is the combination of 3 components: hardware (camera + machine), software and cloud. These 3 come in a tight integration that I haven’t seen before in a device that is first and foremost targeting developers.
With DeepLens, you can handle inference of video (and probably audio) inputs in the camera itself, without shipping the captured media towards the cloud.
The hype words that go along with this device? Machine Vision (or Computer Vision), Deep Learning (or Machine Learning), Serverless, IoT, Edge Computing.
It is all these words and probably more, but it is also somewhat less. It is a first tentative step towards what a camera module will look like 5 years from today.
I’d like to go over the hardware and software and see how they combine into a solution.
AWS DeepLens Hardware
AWS DeepLens hardware is essentially a camera that has been glued to an Intel NUC device:
Neither the camera nor the compute are on the higher end of the scale, which is just fine considering where we’re headed here – gazillion of low cost devices that can see.
The device itself was built in collaboration with Intel. Like all chipset vendors, Intel is plunging into AI and deep learning as well. More on AWS+Intel vs Google later.
Here’s what’s in this package, based on the AWS blog post on DeepLens:
The hardware tries to look somewhat polished, but it isn’t. Although this isn’t written anywhere, this is:
In a way, this is just a more polished hardware version of Google’s computer vision kit. The real difference comes with the available tooling and workflow that Amazon baked into AWS DeepLens.
AWS DeepLens Software
The AWS DeepLens software is where things get really interesting.
Before we get there, we need to understand a bit about how machine learning works. At its most basic, machine learning is about giving a “machine” a large dataset, letting it learn the data in one way or another, and then when you introduce similar new data, it will be able to classify it.
Dumbing the whole process and theory, at the end of the day, machine learning is built out of two main steps:
With AWS DeepLens, the intent is to run the training in the AWS cloud (obviously), and then run the deployment step for real time classification directly on the AWS DeepLens device. This also means that we can run this while being disconnected from the cloud and from any other network.
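As a toy illustration of that split (and nothing like the deep learning models DeepLens actually runs), here is a trivial nearest-centroid classifier: the training step produces a model, and the classification step can then run anywhere – including on a device that is disconnected from the cloud.

```javascript
// Toy illustration of the two machine learning steps: "train" on labeled data,
// then "classify" new samples. Real systems use frameworks like MXNet or TensorFlow;
// this only shows where the split between the two steps lies.

// Step 1 - training: compute one centroid per label from the dataset.
function train(samples) {           // samples: [{ label, features: [x, y] }]
  const sums = {};
  for (const { label, features } of samples) {
    const s = sums[label] || (sums[label] = { count: 0, acc: features.map(() => 0) });
    s.count += 1;
    features.forEach((v, i) => (s.acc[i] += v));
  }
  const model = {};
  for (const label of Object.keys(sums)) {
    model[label] = sums[label].acc.map(v => v / sums[label].count);
  }
  return model;                     // this is what you would "deploy" to the device
}

// Step 2 - classification (inference): runs locally, no cloud needed.
function classify(model, features) {
  let best = null, bestDist = Infinity;
  for (const [label, centroid] of Object.entries(model)) {
    const dist = centroid.reduce((d, c, i) => d + (c - features[i]) ** 2, 0);
    if (dist < bestDist) { bestDist = dist; best = label; }
  }
  return best;
}

const model = train([
  { label: 'cat', features: [1, 1] },
  { label: 'dog', features: [8, 9] }
]);
console.log(classify(model, [7, 8])); // 'dog'
```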
How does all this come to play in AWS DeepLens software stack?
On device
On the device, AWS DeepLens runs two main packages:
Why MXNet and not TensorFlow?
The main component here is the new Amazon SageMaker:
SageMaker takes the effort out of managing the training of machine learning models, streamlining the whole process. The last step in that process, Deploy, takes place in this case directly on AWS DeepLens.
Besides SageMaker, when using DeepLens you will probably make use of Amazon S3 for storage, AWS Lambda when running serverless in the cloud, as well as other AWS services. Amazon even suggests using AWS DeepLens along with the newly announced Amazon Rekognition Video service.
To top it all, Amazon has a few pre-trained models and sample projects, shortening the path from getting a hold of an AWS DeepLens device to seeing it in action.
AWS+Intel vs Google
So we’ve got AWS DeepLens. With its set of on-device and cloud software tools. Time to see what that means in the bigger picture.
I’d like to start with the main players in this story. Amazon, Intel and Google. Obviously, Google wasn’t part of the announcement. Its TensorFlow project was mentioned in various places and can be made to work with AWS DeepLens. But that’s about it.
Google is interesting here because it is THE company today that is synonymous to AI. And there’s the increasing rivalry between Amazon and Google that seems to be going on multiple fronts.
When Google came out with TensorFlow, it was with the intent of creating a baseline for artificial intelligence modeling that everyone will be using. It open sourced the code and let people play with it. That part succeeded nicely. TensorFlow is definitely one of the first projects developers would try to dabble with when it comes to machine learning. The problem with TensorFlow seems to be the amount of memory and CPU it requires for its computations compared to other frameworks. That is probably one of the main reasons why Amazon decided to place its own managed AI services on a different framework, ending up with MXNet which is said to be leaner with good scaling capabilities.
Google did one more thing though. It created its own special Tensor processing unit, calling it TPU. This is an ASIC type of a chip, designed specifically for high performance of machine learning calculations. In a research paper released by Google earlier last year, they show how their TPUs perform better than GPUs when it comes to TensorFlow machine learning work loads:
And if you’re wondering – you can get Cloud TPU on the Google Cloud Platform, albeit still in alpha stage.
This gives Google an advantage in hosting managed TensorFlow jobs, posing a threat to AWS when it comes to AI heavy applications (which is where we’re all headed anyway). So Amazon couldn’t really pick TensorFlow as its winning horse here.
Intel? They don’t sell TPUs at the moment. And like any other chip vendor, they are banking on and investing heavily in AI. Which made working with AWS on an optimized, end-to-end machine learning solution for the internet of things, in the form of AWS DeepLens, an obvious choice.
Artificial Intelligence and Vision
These days, it seems that every possible action or task is being scrutinized to see if artificial intelligence can be used to improve it. Vision is no different. You’ll find it called computer vision or machine vision, and it covers a broad set of capabilities and algorithms.
Roughly speaking, there are two types of use cases here:
As with anything else in artificial intelligence and analytics, none of this is workable at the moment for a broad spectrum of classifications. You need to be very specific in what you are searching and aiming for, and this isn’t going to change in the near future.
On the other hand, there are many many cases where what you need is a camera to classify a very specific and narrow vision problem. The usual things include person detection for security cameras, counting people at an entrance to a store, etc. There are other areas you hear about today such as using drones for visual inspection of facilities and robots being more flexible in assembly lines.
We’re at a point where we already have billions of cameras out there. They are in our smartphones and are considered a commodity. These cameras and sensors are now headed into a lot of devices to power the IOT world and allow it to “see”. The AWS DeepLens is one such tool that just happened to package and streamline the whole process of machine vision.
Pricing
On the price side, the AWS DeepLens is far from a cheap product.
The baseline cost of an AWS DeepLens camera? $249.
But as with other connected devices, that’s only a small part of the story. The device is intended to be connected to the AWS cloud, and that’s where the real story (and costs) takes place.
The two leading cost centers after the device itself are going to be AWS Greengrass and Amazon SageMaker.
AWS Greengrass starts at $1.49 per year per device. Amazon SageMaker costs 20-25% on top of the usual AWS EC2 machine prices. To that, add the usual bandwidth and storage pricing of AWS, higher prices for certain regions and discounts on large quantities.
It isn’t cheap.
This is a new service that is quite generic and is aimed at tinkerers. Startups looking to try out and experiment with new ideas. It is also the first iteration of Amazon with such an intriguing device.
I, for one, can’t wait to see where this is leading us.
3 Different Compute Models for Machine Vision
AWS DeepLens is one of 3 different compute models that I see in this space of machine vision.
Here are all 3 of them:
#1 – Cloud
In a cloud based model, the expectation is that the actual media is streamed towards the cloud:
The data can be a video stream, or more often than not, it is just a set of captured images.
And that data gets classified in the cloud.
Here are two recent examples from a domain close to my heart – WebRTC.
At the last Kranky Geek event, Philipp Hancke shared how appear.in is trying to determine NSFW (Not Safe For Work):
The way this is done is by using Yahoo’s Open NSFW open source package. They had to resize images, send them to a server and there, using Python, classify the image, determining if it is safe for work or not. Watch the video – it really is insightful about how to tackle such a project in the real world.
The other one comes from Chad Hart, who wrote a lengthy post about connecting WebRTC to TensorFlow for machine vision. The same technique was used – one of capturing still images from the stream and sending them towards a server for classification.
These approaches are nice, but they have their challenges:
#2 – In the Device
This alternative is what we have today in smartphones and probably in modern room-based video conferencing devices.
The camera is just the optics; the heavy lifting takes place in the main processor, which is doing other things as well. Most modern CPUs already have GPUs embedded as part of the SoC, and chip vendors are actively working on AI-specific additions to chips (think Apple’s AI chip in the iPhone X or Google’s computational photography packed into the Pixel phones).
The underlying concept here is that the camera is always tethered or embedded in a device that is powerful enough to handle the machine learning algorithms necessary.
They aren’t part of the camera but rather the camera is part of the device.
This works rather well, but you end up with a pricy device which doesn’t always make sense. Remember that our purpose here is to aim at having a larger number of camera sensors deployed and having an expensive computing device attached to it won’t make sense for many of the use cases.
#3 – In the Camera
This is the AWS DeepLens model.
The computing power needed to run the classification algorithms is made part of the camera instead of taking place on another CPU.
We’re talking about $249 right now, but assuming this approach becomes popular, prices should go down. I can easily see such devices retailing at $49 on the low end in 2-3 technology cycles (5 years or so). And when that happens, the power developers will have over what use cases can be created are endless.
Think about a home surveillance system that costs below $1,000 to purchase and install. It is smart enough to have a lot less false positives in alerting its users. AND can be upgraded in its classification as time goes by. There can be a service put in place behind it with a monthly fee that includes such things. You can add face detection and classification of certain people – alerting you when the kids come home or leave for example. Ignoring a stray cat that came into view of the camera. And this system is independent of an external network to run on a regular basis. You can update it when an external network is connected, but other than that, it can live “offline” quite nicely.
No Winning Model
Yet.
All of the 3 models have their place in the world today. Amazon just made it a lot easier to get us to that third alternative of “in the camera”.
IoT and the Cloud
Edge computing. Fog computing. Cloud computing. You hear these words thrown in the air when talking about the billions of devices that will comprise the internet of things.
For IoT to scale, there are a few main computing concepts that will need to be decided sooner rather than later:
I was reading The Meridian Ascent recently. A science fiction book in a long series. There’s a large AI machine there called Big John which sifts through the world’s digital data:
“The most impressive thing about Big John was that nobody comprehended exactly how it worked. The scientists who had designed the core network of processors understood the fundamentals: feed sufficient information to uniquely identify a target, and then allow Big John to scan all known information – financial transactions, medical records, jobs, photographs, DNA, fingerprints, known associates, acquaintances, and so on.
But that’s where things shifted into another realm. Using the vast network of processors at its disposal, Big John began sifting external information through its nodes, allowing individual neurons to apply weight to data that had no apparent relation to the target, each node making its own relevance and correlation calculations.”
I’ve emphasized that sentence. To me, this shows the view of the same IoT network looking at it from a cloud perspective. There, the individual sensors and nodes need to be smart enough to make their own decisions and take their own actions.
–
All these words for a device that will only be launched in April 2018…
We’re not there yet when it comes to IoT and the cloud, but developers are working on getting the pieces of the puzzle in place.
Interested in AI, vision and where it meets communications? I am going to cover this topic in future articles, so you might want to sign up for my newsletter.
The post AWS DeepLens and the Future of AI Cameras and Vision appeared first on BlogGeek.me.
As many as you like. You can cram anywhere from one to a million users into a WebRTC call.
You’ve been asked to create a group video call, and obviously, the technology selected for the project was WebRTC. It is almost the only alternative out there and certainly the one with the best price-performance ratio. Here’s the big question: How many users can we fit into that single group WebRTC call?
Need to understand your WebRTC group calling application backend? Take this free video mini-course on the untold story of WebRTC’s server side.
At least once a week I get approached by someone saying WebRTC is peer-to-peer and asking me if you can use it for larger groups, as the technology might not fit for such use cases. Well… WebRTC fits well into larger group calls.
You need to think of WebRTC as a set of technological building blocks that you mix and match as you see fit, and the browser implementation of WebRTC is just one building block.
The most common building block today in WebRTC for supporting group video calls is the SFU (Selective Forwarding Unit) – a media router that receives media streams from all participants in a session and decides whom to route that media to.
What I want to do in this article is review a few of the aspects and decisions you’ll need to make when trying to create applications that support large group video sessions using WebRTC.
Analyze the Complexity
The first step in our journey today will be to analyze the complexity of our use case.
With WebRTC, and real time video communications in general, it all boils down to speeds and feeds:
Let’s start with an example.
Assume you want to run a group calling service for the enterprise. It runs globally. People will join work sessions together. You plan on limiting group sessions to 4 people. I know you want more, but I am trying to keep things simple here for us.
The illustration above shows you how a 4 participants conference would look like.
Magic Squares: 720p
If the layout you want for this conference is the magic squares one, we’re in the domain of:
You want high quality video. That’s what everyone wants. So you plan on having all participants send out 720p video resolution, aiming for WQHD monitors (that’s 2560×1440). Say that eats up 1.5Mbps (I am stingy here – it can take more), so:
Summing it up in a simple table, we get:
Resolution: 720p
Bitrate: 1.5Mbps
User outgoing: 1.5Mbps (1 stream)
User incoming: 4.5Mbps (3 streams)
SFU outgoing: 18Mbps (12 streams)
SFU incoming: 6Mbps (4 streams)

Magic Squares: VGA
If you’re not interested in resolution that much, you can aim for VGA resolution and even limit bitrates to 600Kbps:
Resolution: VGA
Bitrate: 600Kbps
User outgoing: 0.6Mbps (1 stream)
User incoming: 1.8Mbps (3 streams)
SFU outgoing: 7.2Mbps (12 streams)
SFU incoming: 2.4Mbps (4 streams)
The thing you may want to avoid when going VGA is the need to upscale the resolution on the display – it can look ugly, especially on the larger 4K displays.
With crude back of the napkin calculations, you can potentially cram 3 VGA conferences for the “price” of 1 720p conference.
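If you want to redo this back-of-the-napkin math for other group sizes and bitrates, the arithmetic is simple enough to script. A small sketch that reproduces the tables above:

```javascript
// Speeds and feeds for an N-way "magic squares" call routed through an SFU.
// Matches the tables above: 4 users at 1.5Mbps -> SFU outgoing 18Mbps.
function sfuLoad(participants, bitrateMbps) {
  return {
    userOutgoing: bitrateMbps,                                    // 1 stream up
    userIncoming: (participants - 1) * bitrateMbps,               // N-1 streams down
    sfuIncoming: participants * bitrateMbps,                      // N streams in
    sfuOutgoing: participants * (participants - 1) * bitrateMbps  // N*(N-1) streams out
  };
}

console.log(sfuLoad(4, 1.5)); // { userOutgoing: 1.5, userIncoming: 4.5, sfuIncoming: 6, sfuOutgoing: 18 }
console.log(sfuLoad(4, 0.6)); // the VGA variant
```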
Hangouts Style
But what if our layout is a bit different? A main speaker and smaller viewports for the other participants:
I call it Hangouts style, because Hangouts is pretty well known for this layout and was one of the first to use it exclusively, without offering a larger set of additional layouts.
This time, we will be using simulcast, with the plan of having everyone send out high quality video and letting the SFU decide which incoming stream is the dominant speaker – picking the higher resolution for it and the lower resolution for the rest.
You will be aiming for 720p, because after a few experiments, you decided that lower resolutions when scaled to the larger displays don’t look that good. You end up with this:
0.3Mbps (2 streams)
SFU outgoing: 8.4Mbps (12 streams)
SFU incoming: 8.8Mbps (4 streams)
This is what we have learned:
Different use cases of group video with the same number of users translate into different workloads on the media server.
And if it wasn’t mentioned specifically, simulcast works great and improves the effectiveness and quality of group calls (simulcast is what we used in our Hangouts Style meeting).
Across the 3 scenarios we depicted here for a 4-way video call, we got this variety of activity in the SFU:
              Magic Squares: 720p   Magic Squares: VGA   Hangouts Style
SFU outgoing  18Mbps                7.2Mbps              8.4Mbps
SFU incoming  6Mbps                 2.4Mbps              8.8Mbps
Here’s your homework – now assume we want to do a 2-way session that gets broadcast to 100 people over WebRTC. Calculate the number of streams and the bandwidth you’ll need on the server side.
How Many Users Can be Active in a WebRTC Call?
That’s a tough one.
If you use an MCU, you can get as many users on a call as your MCU can handle.
If you are using an SFU, it depends on 3 different parameters:
We’re going to review them in a sec.
Same Scenario, Different Implementations
Anything above 8-10 users in a single call becomes complicated. Here’s an example of a publicly available service I want to share here.
The scenario:
The media server decided here how to limit and gauge traffic.
And here’s another service with an online demo running the exact same scenario:
Now the incoming bitrate on average per browser was only 2.7Mbps – almost a fourth of the other service.
Same scenario. Different implementations.
What About Some Popular Services?
What about some popular services that do video conferencing in an SFU routed model? What kind of size restrictions do they put on their applications?
Here’s what I found browsing around:
Does this mean you can’t get above 50?
My take on it is that there’s an increasing degree of difficulty as the meeting size increases:
The CPaaS Limit on Size
When you look at CPaaS platforms, those supporting video and group calling often have limits to their meeting size. In most cases, they give out an arbitrary number they have tested against or are comfortable with. As we’ve seen, that number is suitable for a very specific scenario, which might not be the one you are thinking about.
In CPaaS, these numbers vary from 10 participants to hundreds of participants in a single session. Usually, if you can go higher, the additional participants will be view-only.
Key Points to Remember
A few things to keep in mind:
Sizing and media servers is something I have been doing lately at testRTC. We’ve played a bit with Kurento in the past and are planning to tinker with other media servers. I get this question on every other project I am involved with:
How many sessions / users / streams can we cram into a single media server?
Given what we’ve seen above about speeds and feeds, it is safe to say that it really really really depends on what it is that you are doing.
If what you are looking for is group calling where everyone’s active, you should aim for 100-500 participants in total on a single server. The numbers will vary based on the machine you pick for the media server and the bitrates you are planning per stream on average.
If what you are looking for is a broadcast of a single person to a larger audience, all done over WebRTC to maintain low latency, 200-1,000 is probably a better estimate. Maybe even more.
Big Machines or Small Machines?
Another thing you will need to address is which machines you are going to host your media server on. Will that be the biggest, baddest machines available or will you be comfortable with smaller ones?
Going for big machines means you’ll be able to cram larger audiences and sessions into a single machine, so the complexity of your service will be lower. If something crashes (media servers do crash), more users will be impacted. And when you’ll need to upgrade your media server (and you will), that process can cost you more or become somewhat more complicated as well.
The bigger the machine, the more cores it will have. Which results in media servers that need to run in multithreaded mode. Which means they are more complicated to build, debug and fix. More moving parts.
Going for small machines means you’ll hit scale problems earlier and they will require algorithms and heuristics that are more elaborate. You’ll have more edge cases in the way you load balance your service.
Scale Based on Streams, Bandwidth or CPU?
How do you decide that your media server has reached full capacity? How do you decide if the next session needs to be crammed into a new machine or placed on the current media server you’re using? If you use the current one, and new participants want to join a session actively running on this media server, will there be enough room for them?
These aren’t easy questions to answer.
I’ve seen 3 different metrics used to decide when to scale out from a single media server to others. Here are the general alternatives:
Based on CPU – when the CPU hits a certain percentage, it means the machine is “full”. It works best when you use smaller machines, as CPU would be one of the first resources you’ll deplete.
Based on Bandwidth – SFUs eat up lots of networking resources. If you are using bigger machines, you probably won’t hit the CPU limit, but you’ll end up eating too much bandwidth. So you’ll end up determining the available capacity by way of bandwidth monitoring.
Based on Streams – the challenge sometimes with CPU and bandwidth is that the number of sessions and streams that can be supported may vary, depending on dynamic conditions. Your scaling strategy might not be able to cope with that and you may want more control over the calculations. This will lead you to size the machine using either CPU or bandwidth, but to put rules in place that are based on the number of streams the server can support.
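Whichever metric you pick, the allocation logic ends up looking something like the sketch below. The thresholds are made up for illustration – real numbers come from load testing your specific media server and machine type.

```javascript
// A sketch of a capacity check combining the three metrics above.
// LIMITS are illustrative placeholders, not recommendations.
const LIMITS = { cpuPercent: 80, bandwidthMbps: 800, streams: 1000 };

function hasRoomFor(serverStats, newStreams, newBandwidthMbps) {
  return (
    serverStats.cpuPercent < LIMITS.cpuPercent &&
    serverStats.bandwidthMbps + newBandwidthMbps <= LIMITS.bandwidthMbps &&
    serverStats.streams + newStreams <= LIMITS.streams
  );
}

// Example: place a 4-way 720p session (4 incoming + 12 outgoing streams, ~24Mbps).
const stats = { cpuPercent: 55, bandwidthMbps: 430, streams: 610 };
console.log(hasRoomFor(stats, 16, 24)); // true -> reuse this server; otherwise pick or spin up another
```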
–
The challenge here is that whatever scenario you pick, sizing is something you’ll need to be doing on your own. I see many who come to use testRTC when they need to address this problem.
Cascading a Single Session
Cascading is the process of connecting one media server to another. The diagram below shows what I mean:
We have a 4-way group video call that is spread across 3 different media servers. The servers route the media between them as needed to get it connected. Why would you want to do this?
#1 – Geographical Distribution
When you run a global service and have SFUs as part of it, the question that is raised immediately is: for a new session, which SFU will you allocate for it? In which of the data centers? Since we want to get our media servers as close as possible to the users, we either have pre-knowledge about the session and know where to allocate it, or decide by some reasonable means, like geolocation – we pick the data center closest to the user that created the meeting.
Assume 4 people are on a call. 3 of them join from New York, while the 4th person is from France. What happens if the French guy joins first?
The server will be hosted in France. 3 out of 4 people will be located far from the media server. Not the best approach…
One solution is to conduct the meeting by spreading it across servers closest to each of the participants:
We use more server resources to get this session served, but we have a lot more control over the media routes, so we can optimize them better. This improves the media quality of the session.
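A naive sketch of that per-participant allocation – the regions and coordinates are invented, and a real service would use proper geo-IP data and latency measurements rather than raw distance:

```javascript
// Pick, per participant, the media server region closest to them.
// Region list and coordinates are made up for illustration.
const regions = [
  { name: 'us-east',  lat: 40.7, lon: -74.0 },
  { name: 'eu-west',  lat: 48.9, lon:   2.4 },
  { name: 'ap-south', lat: 19.1, lon:  72.9 }
];

function closestRegion(user) {   // user: { lat, lon }
  let best = null, bestDist = Infinity;
  for (const r of regions) {
    const dist = (r.lat - user.lat) ** 2 + (r.lon - user.lon) ** 2; // crude, but enough here
    if (dist < bestDist) { bestDist = dist; best = r.name; }
  }
  return best;
}

console.log(closestRegion({ lat: 32.1, lon: 34.8 })); // a user in Israel -> 'eu-west'
```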
#2 – Fragmented Allocations
Assume that we can connect up to 100 participants in a single media server. Furthermore, every meeting can hold up to 10 participants. Ideally, we won’t want to assign more than 10 meetings per media server.
But what if I told you the average meeting size is 2 participants? It can get us to this type of an allocation:
This causes a lot of wasted server resources. How can we solve that?
That last option – cascading? You can do that by reserving some of a media server’s resources for cascading existing sessions to other media servers.
#3 – Larger Meetings
Assuming you want to create larger meetings than a single media server can handle, your only choice is to cascade.
If your media server can hold 100 participants and you want meetings at the size of 5,000 participants, then you’ll need to be able to cascade to support them. This isn’t easy, which explains why there aren’t many such solutions available, but it definitely is possible.
Mind you, in such large meetings, the media flow won’t be bidirectional. You’ll have fewer participants sending media and a lot more only receiving media. For the pure broadcasting scenario, I’ve written a guest post on the scaling challenges on Red5 Pro’s blog.
Recap
We’ve touched on a lot of areas here. Here’s what you should do when trying to decide how many users can fit in your WebRTC calls:
What’s the size of your WebRTC meetings?
Need to understand your WebRTC group calling application backend? Take this free video mini-course on the untold story of WebRTC’s server side.
The post How Many Users Can Fit in a WebRTC Call? appeared first on BlogGeek.me.
Here are CPaaS trends you should be expecting this year.
There’s no doubt about it. CPaaS is growing, and it is doing so rapidly. It is a multi-billion dollar industry, and while still small, there’s no sign of its growth stopping anytime soon. You’ll see the numbers $4 billion and $8 billion a year appearing in different reports and estimates that are flying around when talking about the near future of the CPaaS market size and growth potential. I have no clue if the numbers are correct – I’ve never been one to play with estimates.
What I do know, is that we’ve got multiple CPaaS vendors now with ARR (Annual Run Rate) higher than $100 million. Most of it may still come from good old SMS and phone calls, but I think this will change along with how consumers communicate.
This change will make CPaaS a lot more interesting and diversified than the boring race to the bottom that seems to be prevalent in some of the players’ offering and messaging in this market. The problem with CPaaS today is twofold:
Which brings me to what you can expect in 2018. Here are 7 CPaaS trends that will grow and become important this year – and more importantly – what they mean.
Planning on selecting a CPaaS vendor? Check out this shortlist of CPaaS vendor selection metrics.
#1 – Serverless
Serverless is also known as Functions.
You might know about serverless from AWS Lambda, Azure Functions, Google’s Cloud Functions and Apache’s OpenWhisk. The list here isn’t random – it goes to show that all big cloud platforms are now offering serverless capabilities.
This still isn’t prevalent in CPaaS, where for the most part, developers are expected to develop, maintain and operate their own servers that communicate with the CPaaS vendor’s infrastructure. But we do see signs of serverless making its way here.
I’ve covered that last year, when I took a deeper look into the Twilio Functions offering and what that means to the CPaaS market.
At the time, Twilio stated that Functions is already Twilio’s fastest growing product ever. Here’s where they explain what it does:
Twilio being the market leader in CPaaS, and Functions being a fast growing product of theirs means that other CPaaS vendors will follow. Simply because demand here is obvious.
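To give a feel for the model, here is roughly what such a hosted function looks like, loosely modeled on Twilio Functions’ Node.js handler – treat it as a sketch rather than copy-paste production code:

```javascript
// A serverless CPaaS function: the vendor hosts, scales and runs this for you.
// The Twilio helper object is injected by the runtime - there is no server of yours involved.
exports.handler = function (context, event, callback) {
  // 'event' carries the incoming call or message parameters (From, To, Body, ...)
  const twiml = new Twilio.twiml.VoiceResponse();
  twiml.say(`Thanks for calling from ${event.From || 'an unknown number'}.`);
  callback(null, twiml); // returning TwiML tells the platform how to handle the call
};
```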
#2 – Omnichannel
When SMS just isn’t enough.
Not sure when you last used SMS for personal reasons – I know that I rarely end up inside that app on my smartphone. The way things are going, SMS can be considered the spam channel of 2018. Or maybe the channel used by businesses who’ve been told that this is the best way to reach customers and interrupt them.
While I definitely see value in SMS, I also think that businesses should strive to communicate with their customers on other channels – channels their users are now focusing on with their social life. In Israel that would be Whatsapp. In the US probably a mixture of Facebook and iMessage will work better. Telegram would be the choice for Russia.
Whatever that channel is, to support it, someone needs to integrate with it. And then decide which channel to use for which customer and for what interaction. For CPaaS, that’s what Omnichannel is about. Enabling developers, and by extension businesses to communicate with their customers on the customer’s preferred channel.
2018 is going to be the year Omnichannel becomes a serious requirement.
Why?
Because now we can actually use it.
Apple’s own Business Chat service is planned to make its public debut this year.
Facebook has its own APIs already, and Whatsapp announced business accounts (=APIs).
That alone covers a large majority of customer bases.
Throw in SMS, mix and choose the ones you want. And voila! Omnichannel.
For businesses, relying on CPaaS for omnichannel makes sense, as the hassle of adding all of these channels and maintaining them is expensive. Omnichannel CPaaS APIs will abstract that away.
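A toy sketch of what that abstraction buys you – the channel names, providers and function here are all hypothetical, but the idea is that the application states intent and the API picks the channel:

```javascript
// Hypothetical omnichannel abstraction: the app sends a message, the layer below
// chooses the delivery channel. Real CPaaS APIs differ in names and shape.
const providers = {
  sms:       (to, text) => console.log(`[SMS]       ${to}: ${text}`),
  whatsapp:  (to, text) => console.log(`[WhatsApp]  ${to}: ${text}`),
  messenger: (to, text) => console.log(`[Messenger] ${to}: ${text}`)
};

// Channel selection (user preference, fallback order, business rules) lives behind the API.
function sendMessage(user, text) {
  const channel = user.preferredChannel in providers ? user.preferredChannel : 'sms';
  providers[channel](user.address, text);
}

sendMessage({ preferredChannel: 'whatsapp', address: '+972500000000' }, 'Your order shipped');
```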
For CPaaS vendors, this is a way to differentiate and make switching between vendors harder.
A win-win.
The ones offering that already? Nexmo with their Chat App and Twilio through their Engagement Cloud.
#3 – Visual / IDE
From code, to REST, to point-and-click.
We used to use DOS as an “operating system”. I worked at a small computer shop as a kid when I grew up. For a couple of years, my role was to go to people’s homes and explain to them how to use the new computer they just purchased. How to put the DOS disk inside the floppy drive, list the files in a floppy, run games and other applications.
Then came Windows (along with Mac and OS/2 and others) and we all just moved to using a visual operating system and a mouse.
As a kid, I programmed using Logo and Basic. Then Turbo Pascal – in a decent IDE for the first time. At university, I got acquainted with Tcl/Tk. And then UI development seemed fun, even if it was done by writing code by hand. Then one day, vtcl came to life – a visual editor. Things got easier.
Developing communications is taking the same path now.
It started by needing to build your own stuff from scratch, then with open source frameworks and later CPaaS and REST (or god forbid SOAP) APIs.
In 2017, Twilio Studio was announced – a visual IDE to use on top of the Twilio functionality. In that corner, you can also count Amazon Connect – not CPaaS, but still in the domain of communications – which has a visual IDE of its own.
At a recent VoxImplant event in Russia that I was invited to speak at, VoxImplant introduced a new service in beta called Smartcalls – a visual IDE on top of their CPaaS offering. Albeit… in Russian.
The concept of using visual tools requiring less coding can greatly increase productivity and the target audience of these tools. They are no longer restricted to developers “who code”. Hell – I can use these tools. I played with Twilio Studio a bit – it was fun and intuitive. It guides the way you think about what needs to be done. About the flow of the service.
I really can’t see how other CPaaS vendors are going to ignore this trend and not work on their own visual offerings during 2018.
#4 – Machine Learning and Artificial Intelligence
It is time to be smart about communications.
When I worked at Amdocs some years ago, we looked into the area of Big Data Analytics. It was all about how you take the boatloads of information telecommunication companies have and do something with it. You start by analyzing and visualizing it, moving towards the domain of the actionable.
It frustrated the hell out of me to understand how little communication vendors are doing with their data compared to enterprises in other markets. Or at least that was my impression looking from inside a vendor.
Fast forward to today, and what you find with CPaaS vendors is that they are offering a well oiled machine that provides generic communications. You can do whatever you want with it, and the smart ones are adding analytics on top for their own needs.
But what about the CPaaS vendors themselves? Shouldn’t they be doing something about analytics? Or its better-branded colleague known as machine learning?
Gustavo Garcia wrote a good article about it – improving real time communications with machine learning. This is where most CPaaS vendors are probably looking today, optimizing their network to offer a better service.
But it is just scratching the surface.
The obvious is adding things around NLP – speech to text, text to speech, translation. All those are being done by integrating with third parties today, and many of the CPaaS vendors offer these out of the box.
To move the needle and differentiate, more needs to be done:
If you are a CPaaS vendor and you don’t have at least a data scientist, a machine learning developer and a product manager savvy in this domain yet, then start recruiting.
#5 – AR/VR
Time to connect ARKit and ARCore to communications.
Augmented reality and virtual reality have been around for the better part of the last decade or two. But somehow, they are only now becoming interesting.
I guess the popularity of AR has grown a lot, and where it fits directly into smartphones today (and not the bulky 3D headsets) is with things like Pokemon Go and camera filters (popularized by Snapchat and found everywhere today).
With the introduction of Apple ARKit and Google ARCore, this is only going to get more commonplace. And what we see now is CPaaS vendors finding their way around this technology.
The most interesting one yet is Twilio’s work with ARKit, which they showcased at last year’s Kranky Geek event:
With all the focus put in this domain, I am sure we’ll see more CPaaS vendors looking into it.
#6 – Bots
Omnichannel + Machine Learning + Automation = Bots
Chatbots are all the rage. Search the internet and you’d think humans no longer talk to customers at all. It is all taken care of by bots.
I’ve added a chat widget to certain pages on my website. And every once in a while I get a question there asking if it’s a human they’re interacting with.
Bots require integration and APIs. They are also about communications. Which is probably why CPaaS vendors are taking a step in this direction as well. The ones adding Omnichannel offerings across multiple channels are in effect enabling bots to be created across those channels.
That’s a first step though, as the next would be to cater to this market better by enabling conversational interfaces and easing the work of packaging the bots for the various channels. A rule-based sketch of the basic idea follows below.
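At its simplest, the bot side is just a handler you hook into such an omnichannel webhook. The intents and replies here are purely illustrative – real bots would plug in NLP and conversational engines:

```javascript
// A minimal rule-based chat bot handler of the kind you'd wire into a messaging webhook.
const intents = [
  { match: /hours|open/i,  reply: 'We are open 9:00-18:00, Sunday to Thursday.' },
  { match: /human|agent/i, reply: 'Connecting you to a human agent...' }
];

function botReply(messageText) {
  const intent = intents.find(i => i.match.test(messageText));
  return intent ? intent.reply : "Sorry, I didn't get that. Try asking about our opening hours.";
}

console.log(botReply('Are you open on Friday?'));
```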
Expect to see a few announcements around bots made by CPaaS vendors this year. A lot of it will revolve around Amazon Alexa and Google Home.
#7 – GDPR
The governance headache we’ve all been waiting for.
GDPR stands for General Data Protection Regulation. It is a new set of EU rules that have been put in place to protect the data related to EU citizens that is collected and stored.
While it is easy to assume that CPaaS vendors store no data – they “live” in real time – that isn’t accurate.
Stored meta data and logs may fall into the GDPR black hole, and definitely recording services. With the introduction of Omnichannel and Bots comes chat history storage.
Twilio jumped on this bandwagon last year with a GDPR program. Other vendors such as MessageBird have indicated future support of GDPR. All global CPaaS vendors will need to support GDPR, and since these regulations come into force this year, 2018 will be the year GDPR gets more attention and focus from CPaaS vendors.
2018 – The Year CPaaS Vendors Differentiated
In the past few years, we’ve seen CPaaS vendors struggling in two directions:
That second point is important. Up until recently, CPaaS equated to running one or two data centers (or the equivalent of running from a small number of cloud-based data centers), connecting developers via REST APIs to the telecom backend. With the introduction of IP-based communications (and WebRTC), there was a growing need for client-side SDKs along with more points of presence closer to the end user.
We seem to be past that hurdle for most CPaaS vendors. Most of them have grown their footprint to include a global infrastructure.
The next frontier is going to happen elsewhere:
CPaaS will move in rapid pace in the next few years. Vendors who won’t invest and grow their offerings and business will not stay with us for long.
Planning on selecting a CPaaS vendor? Check out this shortlist of CPaaS vendor selection metrics.
The post 7 CPaaS Trends to Follow in 2018 appeared first on BlogGeek.me.
adapter.js is the glue that sticks your code to the different browser implementations of WebRTC.
This article was co-written with Philipp Hancke. He has been the driving force behind adapter.js in the last two years, so it seemed like the best approach to have him contribute large portions of it. You can follow his writing here.
One of the visuals I created when I started out with WebRTC was this one:
It had several incarnations, and the main concept here is to show how WebRTC is different than traditional VoIP.
With traditional VoIP, you have multiple vendors implementing the specification, in the hope (backed by active interoperability testing) that the implementations will work in front of each other. Knowing one VoIP implementation said nothing about your ability to wield another.
WebRTC was different. It brought to the table the concept of free, but also HTML5; and by that, I mean having a single API that every developer can use to add interactive voice and video to his application.
getUserMedia, PeerConnection and the data channel are all APIs specified in WebRTC. We’re now all speaking the same language when we’re implementing applications. And that, in turn, creates an ecosystem around it. One that was never there with such force with traditional VoIP.
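That shared API is small enough to show in a few lines. A minimal sketch of capturing media and setting up a peer connection – the signaling object here is an assumption, standing in for whatever transport you chose, since WebRTC leaves that part to you:

```javascript
// Minimal sketch of the standard WebRTC API: capture media, create a peer connection,
// and hand offer/candidates to your own signaling channel (passed in as 'signaling').
async function startCall(signaling) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.example.org' }] });
  stream.getTracks().forEach(track => pc.addTrack(track, stream));
  pc.onicecandidate = e => e.candidate && signaling.send({ candidate: e.candidate });
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send({ sdp: pc.localDescription }); // WebRTC specifies the API, not the signaling
  return pc;
}
```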
Problem is, you can think of the WebRTC API as a suggestion only. That’s because today, version 1.0 of the specification isn’t yet a reality. We’ve got a candidate for it, but that says nothing about the implementations. Browser implementations of WebRTC are more like dialects of the same language. When you speak one, you understand another, but not fully. Not its nuances. And bad things can happen if two people with different dialects try to talk to each other without patience or understanding.
Which is probably where adapter.js comes into play.
Before we ask ourselves if adapter.js is needed today (it is), it would be worthwhile to understand how it came to be.
adapter.js Origin Story
adapter.js has been around since the early days of WebRTC in late 2012 and early 2013. It was originally part of Google’s apprtc sample application. The original version can still be found in the Chrome tree. It was a very small project, less than 150 lines. The main job was to hide prefix differences like webkitRTCPeerConnection and mozRTCPeerConnection and to provide helper functions to attach a MediaStream to an HTML <audio> or <video> element.
During those wild west days of WebRTC, everyone wrote their own library to make WebRTC easier. This started to change in mid-2015 when Microsoft Edge came along. While Edge did not require prefixes for getUserMedia, attaching the MediaStream to a video element still worked in three different ways in as many implementations. This showed that there was a need to move to standardized behaviour. Also, as Microsoft’s Bernard Aboba pointed out, books were printed that showed the prefixed versions of the APIs — which is the wrong thing to teach.
Preferring ORTC over the WebRTC 1.0 API, Microsoft was extremely happy to support the addition of a shim of the RTCPeerConnection API on top of ORTC. This enabled early interoperability tests and allowed ironing out some bugs before the first public ORTC-enabled Edge version.
MS showing love for our #webrtc polyfill (adapter.js) and sample codehttps://t.co/YhHstGjQps
(thanks @HCornflower) pic.twitter.com/qPzwZEA3VK
— Justin Uberti (@juberti) April 4, 2016
A bit later, Promise support was added to adapter.js. Moving to Promises was one of the first big changes in the WebRTC specification, and while Firefox added them swiftly, Chrome was lagging behind. At that point, the “mission statement” for adapter changed. Instead of just trying to fill the gaps, it became an enabler, allowing developers to write modern WebRTC JavaScript. Mozilla’s Jan-Ivar Bruaroey recognized that and started contributing more elaborate pieces like a shim for the getUserMedia constraints.
When Safari started shipping WebRTC, they contributed a shim for the “legacy” bits of the WebRTC API that they did not want to ship. This was an interesting attempt to get developers to write modern, promise-based WebRTC code. However, it does not seem to have worked out, as sadly the release version shipped with the legacy API enabled by default.
With growing complexity (currently over 2,200 lines of code) and being in the “hot path”, testing changes to the adapter.js code itself became more of an issue. Initially powered by Selenium, the tests have been split into unit tests and end-to-end tests that use standard testing tools like karma, mocha and chai to make assertions while running in a multitude of browsers on Travis-CI for every pull request, comparing the results to previous runs. This shows the state of the art for testing WebRTC libraries and has been adopted by other projects as well.
During much of 2017, the main focus was on shimming the track-based API in Chrome. This is one of the bigger pieces of the move toward the WebRTC 1.0 API, described in this blog post by Mozilla, and it landed in adapter.js as well. The tests proved useful to ensure the consistency of the API, which is particularly tricky since existing code might rely on certain interactions with the legacy API, and that API (along with the interactions) is not specified. As is usual with large changes, there were a number of regressions — however, it is much better to discover those regressions in a JavaScript library, where the version can be pinned, than to have Chrome ship them natively. Early in 2018, Chrome 64 will become stable and the native addTrack version will take over from the shimmed variant. Note: addTrack turned out not to be quite ready for production yet due to a bug related to getStats. The shim will continue to be preferred until Chrome M65 — make sure your adapter version is updated after that change.
adapter.js Today
For a quick and dirty project you can simply include https://webrtc.github.io/adapter/adapter-latest.js in your code.
This will give you the latest published version. Note, however, that your application will automatically pull in any changes, so this is not recommended for larger applications.
The main source of adapter.js downloads is NPM. In most JavaScript projects, you install webrtc-adapter as follows:

npm install webrtc-adapter

Note: since adapter.js manipulates the core WebRTC JavaScript APIs, upgrading it is somewhat risky. It is therefore recommended to keep the exact version specified in your package.json file and to test a lot when upgrading that version.
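If it helps, one way to get that exact pinning (assuming a reasonably recent npm) is the --save-exact flag, which writes the precise version into package.json instead of a semver range:

npm install --save-exact webrtc-adapter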
To use it, just require the module in one of your JavaScript files:

const adapter = require('webrtc-adapter');

Since it is a polyfill, it transparently modifies the window object by default. The adapter object gives you information about the browser variant and version it detected in the browserDetails object:
console.log(adapter.browserDetails.browser);
console.log(adapter.browserDetails.version);
This is slightly different from a version detection library like platform, as it treats Chromium-based browsers like Opera as Chrome — since they run the same WebRTC engine, that makes sense.
You can use the detected browser and version to add your own logic for working around bugs present in certain Chrome versions (e.g. the Chrome 61/Android video freeze or the Chrome 58 TURN/TCP issue).
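As a minimal sketch of that kind of gating (the browser and version cutoff below are illustrative, not a definitive bug matrix):

const adapter = require('webrtc-adapter');

// Illustrative only: enable a workaround for older Chrome versions.
// adapter.browserDetails.version holds the major version number, or null
// when detection fails, so guard against null before comparing.
function needsTurnTcpWorkaround() {
  const { browser, version } = adapter.browserDetails;
  return browser === 'chrome' && version !== null && version <= 58;
}

if (needsTurnTcpWorkaround()) {
  console.log('Older Chrome detected - applying the TURN/TCP workaround');
}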
To check WebRTC support you will need to check that RTCPeerConnection is defined:

!!window.RTCPeerConnection

and, if your use-case requires it, getUserMedia:

!!(navigator.mediaDevices && navigator.mediaDevices.getUserMedia)

or the createDataChannel method of RTCPeerConnection:

'createDataChannel' in RTCPeerConnection.prototype

After that you can simply write your WebRTC code as shown in the specification:
http://w3c.github.io/webrtc-pc/#simple-peer-to-peer-example
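To give a feel for that spec-style code, here is a minimal local loopback sketch of my own (not the spec's exact sample): two peer connections in the same page, no signaling server, assuming adapter.js is loaded and the page contains a <video> element with autoplay set.

const pc1 = new RTCPeerConnection();
const pc2 = new RTCPeerConnection();

// Hand ICE candidates straight to the other peer instead of using a signaling channel.
pc1.onicecandidate = e => {
  if (e.candidate) pc2.addIceCandidate(e.candidate).catch(console.error);
};
pc2.onicecandidate = e => {
  if (e.candidate) pc1.addIceCandidate(e.candidate).catch(console.error);
};

// Render whatever pc2 receives in the page's <video> element.
pc2.ontrack = e => {
  document.querySelector('video').srcObject = e.streams[0];
};

navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(stream => {
    stream.getTracks().forEach(track => pc1.addTrack(track, stream));
    return pc1.createOffer();
  })
  .then(offer => pc1.setLocalDescription(offer))
  .then(() => pc2.setRemoteDescription(pc1.localDescription))
  .then(() => pc2.createAnswer())
  .then(answer => pc2.setLocalDescription(answer))
  .then(() => pc1.setRemoteDescription(pc2.localDescription))
  .catch(err => console.error('WebRTC setup failed:', err));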
The official WebRTC samples are a great way to get started, as they show a lot of use-cases and the maintainers ensure that they are semantically correct. Most of the shims are written in such a way that they get out of the way once the native variant is available.
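One more small convenience: the three support checks above can be wrapped into a single helper. The function name and return shape here are my own, not part of adapter.js:

function checkWebrtcSupport() {
  const hasPeerConnection = !!window.RTCPeerConnection;
  return {
    peerConnection: hasPeerConnection,
    getUserMedia: !!(navigator.mediaDevices && navigator.mediaDevices.getUserMedia),
    dataChannel: hasPeerConnection && 'createDataChannel' in RTCPeerConnection.prototype
  };
}

console.log(checkWebrtcSupport());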
Moving Forward
There are 4 forces at play with adapter.js:
Will a day come when we no longer need adapter.js?
Definitely.
But don’t wait up for it.
If the lifespan of jQuery is any indication (11 years and still going strong, the last 4 of them accompanied by articles on why we don’t need jQuery anymore), we will be using adapter.js for many years to come.
The post What is WebRTC adapter.js and Why do we Need it? appeared first on BlogGeek.me.
As the year 2017 came to an end, there was a small present. Hangouts started to support Firefox with WebRTC instead of a plug-in. While it had been public for a while that the Firefox WebRTC team had been testing this, it was a nice Christmas present to see this shipped. Tsahi Levent-Levi was one […]
The post All I want for Christmas is Hangouts to use WebRTC on Firefox appeared first on webrtcHacks.