bloggeek
Quick vacation
It is time for a quick vacation.
In the past, I tried publishing here while on vacation; I’ll refrain from doing so this time.
Please do your best not to acquire anyone until the end of August.
See you all next month!
The post Quick vacation appeared first on BlogGeek.me.
What WebRTC Tool are you using for your Service?
I need your help to gain better visibility.
If you are developing something with WebRTC, there’s a good chance you are using existing tools and frameworks already. Be it signaling or messaging frameworks, a media engine in the backend, a third party mobile library.
As I work on my research around the tools enabling the vibrant ecosystem that is WebRTC, I find myself more than once wondering about a specific tool – how much is it used? What do people think about it? Are they happy with it? What are its limitations? While I know the answers in some cases, in others not so much. This is where you come in.
If you are willing to share your story with a third party tool – one you purchased or an open source one – I’d like to hear about it. Even if it is only the name of the tool or a one-liner.
Feel free to comment below or just use my contact form if you wish this to stay private between us.
I really appreciate your help in this.
If Microsoft can Deliver Windows 10 P2P, Why Can’t we with WebRTC?
What do you know? Peer-assisted delivery, a la the WebRTC data channel, is acceptable.
Whenever I write something about the potential of using WebRTC’s data channel to augment CDN delivery – getting peers who want to access the same content to assist each other – there are those who immediately push back. The main reasons? It eats into data caps and drains the battery.
It was hard to give any real user story besides something like BitTorrent for consumers or how Twitter uses BitTorrent internally to upgrade its servers. Not enough to convince many of my readers here that P2P is huge and WebRTC will be a part of it.
The “WebRTC will be a part of it” piece has been covered on this blog many times. “P2P is huge” is a different story. At least it was, until last month.
Windows 10 was officially released at the end of July. And with it, millions of PCs around the world got updated. I ran into this article on TheNextWeb by Owen Williams:
by default, Windows 10 uses your internet connection to share updates with others across the internet.
The feature, called Windows Update Delivery Optimization is designed to help users get updates faster and is enabled by default in Windows 10 Home and Pro editions. Windows 10 Enterprise and Education have the feature enabled, but only for the local network.
It’s basically how torrents work: your computer is used as part of a peer to peer network to deliver updates faster to others. It’s a great idea, unless your connection is restricted.
So. Microsoft decided to go with peer-assisted delivery, and not only a CDN setup, to get the Windows 10 installation across the wires to its millions of users. That’s a 2-3 GB download.
Probably the first large-scale commercial use of P2P delivery of this kind – and great validation for the technique.
I know – they received backlash and complaints for doing so, but what I haven’t seen is Microsoft stopping this practice. This is another step in the Internet decentralization trend that is happening.
I wonder who will be next.
Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.
48 Hours left for the WebRTC PaaS Summer Sale
Grab your copy of my WebRTC PaaS report at a $450 discount.
If you are subscribed to my monthly newsletter, then you already know about this two-week summer sale:
- My Choosing a WebRTC API Platform report is available at a discount
- $1,500 instead of $1,950
- Time limited until the 17th of August
- Which leaves you 48 hours to purchase it
The reasons?
- I am heading towards vacation, making August a short month for me
- In September, an update to this report will be released
- This update will include a real membership/subscription service with a few interesting additions:
- Online vendor comparison matrix that will be updated periodically
- Monthly web meetings to discuss recent changes and any questions you may have on the subject
If you hurry and purchase it in the next two days, you’ll enjoy the lower price point as well as the membership perks – so why wait? Get your copy of the report now.
WIT Software and WebRTC: An Interview With André Silva
Telco vendor’s offering · Telephony · Medium · Voice, Video
WebRTC at the hands of a telecom vendor.
The Telecom world has its own set of standards and needs. At times, they seem far removed from the way the Internet and WebRTC operate.
How do you bridge between the two? André Silva, Team Leader & WebRTC Product Manager at WIT Software tries to explain in this interview.
What is WIT Software all about?
WIT is a software development company specialized in advanced solutions for mobile telecommunications companies. The company has over 14 years of experience and deep expertise in mobile communications and network technologies including IP Multimedia Subsystem (IMS), mobile voice (Mobile VoIP and Voice over LTE), messaging (SMS, MMS and IM), Rich Communication Suite (RCS) and Multimedia Telephony Services (MMTel). Located in Portugal, UK, Germany and California, the company has over 230 full-time employees and a blue chip industry client base.
You’ve been working in the Telco space offering IMS and RCS products. What brought you towards WebRTC?
Back in 2008, WIT started the development of a Flash-to-SIP Gateway to support voice calls from web browsers to mobile phones. The first commercial deployment came in 2011, enabling calls from a Facebook app to mobile subscribers connected to the Vodafone Portugal network. This first version included features like an enhanced address book, presence, IP messaging, IP voice calls and video calls.
When Google released the WebRTC project back in 2011, WIT started following the technology, and as soon as it stabilized we implemented a new release of our Web Gateway with support for all the browsers on the market: Chrome, Firefox and Opera, which are WebRTC-compliant, but also Safari and Internet Explorer, where we use the Flash-to-SIP capabilities.
How are your customers responding to the WebRTC capabilities you have?
Our customers are searching for ways to extend their mobile/fixed networks to web browsers and IP devices, either to extend voice calling with supplementary services and SMS, or to make more services available to off-net users. We are providing our WebRTC Gateway and our RCS capabilities to provide richer messaging and voice calling use-cases for the consumer and the enterprise market.
One capability that is much appreciated is the support for non-WebRTC browsers. The conversion of protocols (DTLS-SRTP and RTMP) to RTP is done by our Gateway and is transparent to the network.
For codec transcoding, we support the standard JSR-309 to integrate with MRFs in order to support extra codecs that are not natively available in WebRTC.
Recently we announced a partnership with Radisys, a leading provider of products and solutions, to address emerging media processing challenges for network operators and solution vendors.
What signaling have you decided to integrate on top of WebRTC?
We are using a proprietary JSON protocol over WebSockets. This is a lightweight protocol that exploits the asynchronous nature of WebSockets and provides the best security for web apps.
We have built a JavaScript SDK that abstracts away the heterogeneity of the different browsers and the technology used to establish calls. The JavaScript SDK loads a Flash plugin when WebRTC is not available in the browser.
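WIT’s protocol is proprietary and undocumented, but a minimal sketch can show the general shape of JSON-over-WebSocket signaling. The message names, fields, and dispatcher below are invented for illustration:

```javascript
// Hypothetical sketch of a JSON signaling protocol over WebSockets.
// Message names and fields are invented; WIT's actual protocol is proprietary.

// Build a signaling message as a JSON string, ready for ws.send().
function makeSignal(type, callId, payload) {
  return JSON.stringify({ type, callId, ts: Date.now(), payload });
}

// A tiny dispatcher: routes incoming messages to handlers by type,
// the way a client SDK would react to WebSocket 'message' events.
function makeDispatcher(handlers) {
  return function onMessage(raw) {
    const msg = JSON.parse(raw);
    const handler = handlers[msg.type];
    if (!handler) throw new Error('unknown signal: ' + msg.type);
    return handler(msg);
  };
}

// Example: an "invite" carrying an SDP offer, answered with the callee's SDP.
const dispatch = makeDispatcher({
  invite: (msg) => makeSignal('accept', msg.callId, { sdp: 'v=0...answer' }),
});

const reply = dispatch(makeSignal('invite', 'call-42', { sdp: 'v=0...offer' }));
console.log(JSON.parse(reply).type); // "accept"
```

The point of such a dispatcher is that the transport stays dumb: the WebSocket just ferries opaque JSON, and all call logic lives in the handlers.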
Backend. What technologies and architecture are you using there?
The WIT WebRTC Gateway is a Java-based application server that can run in several containers and can be scaled horizontally over several instances. The Gateway integrates with SIP Servlet containers, for integration with standard media servers, and with streaming servers, to make the media available over RTMP. Our media engine copes with the WebRTC media and contains a STUN/TURN server to solve NAT traversal issues.
Where do you see WebRTC going in 2-5 years?
I think WebRTC will become the standard for IP communications that every VoIP application and server will support, either because they use the WebRTC native APIs, or because they will be improved to also support the extras brought by the WebRTC specification.
In 2-5 years I expect to see web developers using the WebRTC JavaScript API to create new applications and simply assume that WebRTC is accessible in every browser, since Microsoft is moving forward to add WebRTC to its new browser.
On the negative side, I also expect browsers to continue having distinct implementations, which will force developers to write specific code for each browser. Unfortunately, web development has always been like this.
If you had one piece of advice for those thinking of adopting WebRTC, what would it be?
WebRTC aims to enable VoIP without plugins. So you need to think about alternatives to WebRTC for the cases where it is not available, because from our experience, end users don’t really care what’s underneath the application – they just want it to work.
So you should not filter out the browsers or systems where your application will run, or force the user to download a new browser.
Given the opportunity, what would you change in WebRTC?
Since H.264 is now one of the video codecs in the specification, a great step would be to add some audio codecs like AMR-WB and G.729 to avoid transcoding with some of the common codecs in existing services.
Also, I would give more focus to the advanced cases that depend on the renegotiation of the WebRTC sessions. We provide supplementary services like call hold, upgrade and downgrade and there are still some limitations in the APIs to allow us to have full control across browsers.
What’s next for WIT-Software?
We are creating WebRTC applications that will be launched later this year for the consumer market, and we are preparing a solution for the enterprise market that will leverage the best of WebRTC technology.
Our latest implementation adds support for voice calls between web browsers and VoLTE devices, and this is a major breakthrough for the convergence of web apps and new generation mobile networks.
For more information, please visit our product page at http://webrtc.gw
–
The interviews are intended to give different viewpoints than my own – you can read more WebRTC interviews.
It’s Time to Remove GSM Call Prioritization from Smartphones
Smartphones are more laptops than phones.
What’s more important to you? That your smartphone is with you so people can call your phone number to reach you, and you can call their phone numbers to reach them? Or the fact that you have your apps and the internet available at your fingertips thanks to your data package or the WiFi you are connected to?
For me the answer is simple. I don’t really care much about my phone number anymore. It is there. It is used. There are hours a month that I am “on the phone”, but it isn’t as important as it used to be. Oftentimes, the most important conversations I conduct are done elsewhere.
This special treatment smartphones give GSM calls is getting tiresome. The notion of call waiting, hold and switching between calls – who cares anymore?
I had a meeting the other day. As usual, it took place on my desktop machine, with a video camera attached. In the middle, the person I talked to had to answer his phone to say he was busy. Another call that came in, he decided not to answer. Apparently, that meeting with me was less important than his daughter and more important than the other caller.
On another day, I had a meeting. Again, on my desktop. The house phone rang (a novelty here). When it stopped ringing, my smartphone rang. The call was from an international number. I didn’t answer. The meeting I was already in was important enough. Whoever was looking for me pinged me by email as well.
Interactions today happen not only on multiple apps and services – they also happen on multiple devices. The concept of one number or service that aggregates all of our communication, handles a calling queue and is prioritized over everything else is no longer valid. It doesn’t fit our world anymore.
Time to let go of that quaint idea of GSM call prioritization. Treat its notifications and app as just another smartphone app and be done with it.
Kranky and I are planning the next Kranky Geek in San Francisco sometime during the fall. Interested in speaking? Just ping me through my contact page.
WebRTC’s Extremes. Aggregation or Embedability? Federated or Siloed?
WebRTC is but a technology. Its adoption happens at the edges.
It is interesting to see what people do with WebRTC – which use cases they tackle and what kind of solutions they come up with.
Here are a few opposite trends that are shaping up to be mainstream approaches to wielding WebRTC.
1. Aggregation
In many cases, WebRTC is used to aggregate. The most common example is expert marketplaces.
Popexpert and 24sessions are good examples of such aggregators. You open up your own page on these services, state what services you offer and your asking price. People can search for you and schedule a video session with you. It is interesting to see LiveNinja in this space, who recently shut down their aggregation service, shifting towards an embedability alternative.
2. Embedability
The opposite of aggregating everyone into a single domain is to enable embedding the service onto the expert’s own website.
The company will offer a piece of JavaScript code or a widget that can be placed on any website, providing the necessary functionality.
Aggregation or Embedability?
Which one would be preferred, and by whom?
The Vendor, in our case, has more power as an aggregator. He is in charge of the entire interaction, offering the gateway into his domain. Succeeding here places him in a position of power, usually well above the people and companies he serves.
The Expert may enjoy an aggregator when he is unknown. Having an easy way to manage his online presence and be reachable is an advantage. For someone who is already known, or who has spent the time to build a personal brand online, being aggregated on someone else’s site may dilute his value or position him too close to his competitors – not something you’d want.
The Customer, on one hand, can easily find his way through an aggregator. On the other hand, it places the expert or service he is reaching out to at a distance – one which may or may not be desired, depending on the specific industry and the level of trust in it.
Ben Thompson has a good read about aggregation theory which I warmly suggest reading.
3. Silo
Most WebRTC services live in their own silo world. You envision a service, you build the use case with WebRTC, and that’s it. If someone wants to connect through your service – he must use your service – he can’t connect from anywhere else. Unless you add gateways into the system, but that is done for specific needs and monetization.
I talked about WebRTC islands two years ago. Here’s a presentation about it:
WebRTC Islands from Tsahi Levent-levi
WebRTC makes it too easy to build your own island, so many end up doing so. Others are hung up on the idea of federations:
4. Federation
Why not allow me to use whatever service I want to call you, and let you use whatever service you prefer to receive that call?
Think calling from Skype to WeChat. Or ooVoo to Hangouts. What a wonderful world that would be.
Apparently, it doesn’t happen, because the business need of these vendors isn’t there – they’d rather be their own silos.
Who is federating then?
- Some connect to the PSTN in order to “federate” – or to enjoy the network effect of the legacy phone system
- Those who have a network already (federated or not), end up using WebRTC as an access point. That’s what Polycom did recently with their RealPresence Web Suite.
- Solutions such as Matrix look to offer a framework that enables federated signaling suitable for WebRTC as well
At the end of the day, WebRTC is a building block. A piece of technology. Different people and companies end up doing different things with it.
Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.
Upcoming Webinar: Five Advantages WebRTC Brings to Your Video Conferencing Solution
WebRTC has more to offer in video conferencing than just an access point.
My roots are in video conferencing. I’ve been working in that industry for 13 years of my adult life, most of it spent dealing with signaling protocols and enabling others to build their VoIP solutions. You can say I have a special place in my heart for this industry.
This is why I immediately said yes when LifeSize wanted me to join them for a webinar. We’ve got a mouthful as a title:
Five Advantages WebRTC Brings to Your Video Conferencing Solution
Truth be told – there’s a lot more that WebRTC has to offer the video conferencing space than the mere “additional access point as a browser to our great video conferencing products”. It starts with taking cloud and video seriously, and continues with unlocking the value that a technology like WebRTC can bring to video conferencing solutions.
If you want to learn more, then be sure to join LifeSize and me in this webinar.
When? Aug 18 2015 11:00 am EDT
The Day Adobe Adds WebRTC is the Day we Kill Flash
Adobe Migrating to WebRTC?
The company behind the abomination called Flash? Adobe.
The logic, then, is that when Adobe moves to WebRTC, there’s no reason anymore to try to run real time communications use cases with Flash. Correct?
Well… it is already happening.
Guillaume Privat, Director and General Manager of the Adobe Connect business unit, spilled the beans: Adobe Connect “plans to be ready to support HTML5” “when WebRTC matures”.
AT&T. Cisco. Microsoft. Comcast. Facebook. And now Adobe. An interesting 2015.
Some thoughts about this partial announcement by Adobe (read it all – it is short and rather interesting):
- Why did Adobe go with WebRTC?
- The promise of HTML5 content working across desktops and mobile devices
- Portability – cross platform development and deployment
- The same thing I always say – if you plan on developing any communication service these days, WebRTC needs to be your first choice, and the question should be why you haven’t picked it
- Their future plans are broad, and rather simplistic – use HTML5/WebRTC wherever to bring feature compatibility to what Adobe Connect is capable of today
- Somehow, Adobe places WebRTC as an immature technology. While I see this type of thinking in many places, I believe it is short sighted at best. Those who deem it immature probably aren’t wielding it correctly
- Worse – maturity in WebRTC means “once HTML5 can support large scale collaboration across browsers”
- Will Microsoft Edge’s imminent support of it be enough?
- Will Adobe wait until benevolent Apple introduces WebRTC in Safari?
- Will Adobe be adamant that WebRTC must run on the older Internet Explorer browsers before it is mature enough for Adobe?
- Or is there some other arbitrary rule at play? Maybe the development time inside Adobe Connect’s team?
- The context of the announcement is odd
- It resides on a blog that talks about use cases deployed by Adobe Connect and features introduced
- This announcement is neither
- It says “we know there’s WebRTC and we plan on using it. One day. When we think it is time. Maybe. If we get there. And browsers support it”
- I’m not sure how I should respond to it. Should I use Adobe Connect now that I know they plan on using WebRTC in some far-flung future? Should I sit and wait until they do? Should I rejoice? Should I be worried about my existing Adobe Connect integration?
At least we have another incumbent openly validating WebRTC as a technology. I wonder when the rest of the ostriches out there with their heads buried in the sand will come to their senses.
Adobe is abandoning Flash. Shouldn’t you be doing the same?
Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.
Should WebRTC Data Channels be Explicitly Approved by the User?
I don’t think so.
There has been a lot of chatter lately about the NY Times and local IP address use. A rather old Mozilla bug got renewed attention because of it, with some interesting comments:
I’ve said this before and I’ll say it again. Data channels should require user consent just the same as video and audio (getUserMedia). I haven’t yet heard a good reason on why a silent P2P data channel connection is required.
We are considering adding an extension to restrict the use of WebRTC but are still studying what would be most effective.
I would like to second this observation. I have not attempted to dig into the details of the spec, but it *sounds* like the entire problem goes away if creating any sort of channel requires explicit user authorization.
The rants go on.
What do they all share in common? The notion that leaking IP addresses is wrong and shouldn’t be done. Not without a user’s consent.
I’d like to break the problem into two parts here:
- IP leakage
- Consent
The issue of leaking a local IP address is disconcerting to some. While I understand the issue for VPN configurations, I find it a useless debate for the rest of us.
My own local IP address at the moment is 10.0.0.3. Feel free to store this information for future dealings with me. Now that you know it – have you gained anything? Probably not.
Oh, and if you have a mobile phone, you probably installed a bunch of apps. These apps are just as complex as any web page – they connect to third parties, most likely use an ad network, etc. How hard is it to get the local IP address inside an app and send it to someone else? Do you need special permissions for it? Do users actually approve it in any way? Do you think the NY Times app uses this for anything? How about Candy Crush? Or Angry Birds?
Local IPs are compromised already. Everywhere. They are easy to guess. They are easy to obtain in apps. Why is the web so different? And what huge secret do they store?
Consent
When someone wants access to my camera, microphone or screen – I understand the need for consent. I welcome it.
But when it comes to the data channel I am not so sure. There are differences here. My thinking about it runs in multiple paths.
1. Content
Microphone, camera and screen actually give new data to JavaScript code to work with. The data channel is a transport, not the data itself.
The browser doesn’t ask permission to download 50+ resources from a web page when we only asked for the web page. It doesn’t ask for permission when 40+ of these resources are located at domains other than the one we asked for. It doesn’t ask for permission when a web page wants to open a WebSocket either. It doesn’t ask for permission when a web page uses other bidirectional methods to connect to our browser – SSE or XHR – it just runs them.
As we are trying to protect content, permission on the data channel level seems unnecessary.
If we want to protect local IP address exposure, we should find other means of doing that – or accept that in many use cases, they aren’t worth the protection.
2. User experience
For a video call, a request to allow access is fine – there’s a human involved. But for a programmatic interface, that’s a bit of overkill. With many WebRTC data channel use cases targeting CDN augmentation or replacement, would users be willing to take the additional approval step? Would content providers be willing to risk losing customers?
Let’s assume GIS and mapping on the internet adopts the WebRTC data channel – similar to what PeerMesh are doing. Would you be happy with the need to allow each and every web page that has a Google Map on it to have access to the data channel?
Would you want your games to ask you to allow connecting to others when switching to multiplayer?
Do you want websites powered by Akamai (a CDN) to ask you to allow them to speed up page loads?
This doesn’t work.
Stop thinking about the data channel as a trojan horse – it is just another hammer in our toolbox.
3. Web trends
In many ways, we are at a phase where we are trying to decentralize the web – enabling browsers to reach each other and dis-intermediating the servers from the communications. FireChat has been doing this for a while now, but it is far from alone.
This kind of decentralization cannot work properly without letting browsers talk sideways to each other instead of via web servers. While in the future we may want such connections to be as low level as TCP and other network building blocks, this isn’t the case today.
We need to find other solutions than placing a permission request on every data channel we try opening.
Why is it important?
We need to be able to distinguish between FUD and reality.
Data channels by themselves aren’t a threat. They may change the way browsers operate on the network level, which may expose vulnerabilities, but the solution shouldn’t be disabling data channels or putting manual roadblocks in front of them in the browser – it should be better architecting the solutions around them.
As WebRTC grows and matures, these issues will be polished out. For now, I still believe WebRTC is the most secure VoIP technology out there to build your services. Trust, on the other hand, will always depend on the web service’s developers.
WebRTC Monitoring: Do you Monitor your Servers or Your Service?
WebRTC monitoring the right way.
When we started out developing testRTC, what we had in mind was a service that helps QA people test their product before heading to production. We built a sleek web app that enables simulating virtually any type of WebRTC use case. Testers can specify or record their script, then run and scale it in their tests using testRTC. What we quickly found out was that some were looking for a solution that helps them monitor their service, as opposed to manually (or even automatically and continuously) testing their latest build.
The request we got was something like this: “can you make this test we just defined run periodically? Every few minutes maybe? Oh – and if something goes awfully wrong – can you send me an alert about it?”
What some realized before we did was that the tests they were defining could easily be used to monitor their production service. The reasoning behind this request is that there’s no easy way to run an end-to-end monitor on a WebRTC service.
The alternatives we’ve seen out there?
- Pray that it works, and wait for a user to complain
- Using Pingdom to check that the domain is up and running and that the server is alive
- Using New Relic or its poor man’s alternative – Nagios – to handle application monitoring. It boils down to testing that the servers are up and running, CPU and memory load look reasonable and maybe a bit of your server’s metrics
But does that mean the service is up and running, or just that the machines and maybe even processes are there? In many cases, what IT people are really looking to monitor is the service itself – they want to make sure that if a call is made via WebRTC – it actually gets through – and media is sent and received – with a certain expected quality. And that’s where most monitoring tools break down and fail to deliver.
This is why, a few weeks ago, we decided to add WebRTC monitoring capabilities to testRTC. As a user, you set it up by defining a test case, indicating where in the world you want it to run from, and defining the run intervals along with quality thresholds. And that’s it.
What you’ll get is a continuously running test that knows when to alert you on issues AND collects all of the reports. For all calls – the bad ones and the good ones. So you can drill down in a post mortem to see what went wrong and why.
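The threshold logic described above can be sketched in a few lines. This is an illustrative assumption of how a monitor might score each scheduled run – the metric names and limits are invented, not testRTC’s actual API:

```javascript
// Hypothetical sketch of a per-run quality check in a WebRTC monitor.
// Metric and threshold names are illustrative assumptions.

function evaluateRun(metrics, thresholds) {
  const alerts = [];
  if (!metrics.connected) alerts.push('call failed to connect');
  if (metrics.packetLossPct > thresholds.maxPacketLossPct)
    alerts.push(`packet loss ${metrics.packetLossPct}% above limit`);
  if (metrics.avgBitrateKbps < thresholds.minBitrateKbps)
    alerts.push(`bitrate ${metrics.avgBitrateKbps}kbps below limit`);
  return { ok: alerts.length === 0, alerts };
}

// A healthy run passes; a lossy one raises exactly one alert.
const thresholds = { maxPacketLossPct: 2, minBitrateKbps: 250 };
const good = evaluateRun(
  { connected: true, packetLossPct: 0.4, avgBitrateKbps: 600 }, thresholds);
const bad = evaluateRun(
  { connected: true, packetLossPct: 7, avgBitrateKbps: 600 }, thresholds);
console.log(good.ok, bad.alerts.length); // true 1
```

The important design point is that every run is evaluated and stored, not just the failing ones – that is what makes the post mortem drill-down possible.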
If you need something like this, contact us on testRTC – the team would love to show you around our tool and set you up with a WebRTC monitor of your own.
Test and monitor your WebRTC service like a pro – check out how testRTC can improve your service’s stability and performance.
Who Needs WebSockets in an HTTP/2 World?
I don’t know the answer to this one…
I attended an interesting meetup last month. Sergei Koren, Product Architect at LivePerson explained about HTTP/2 and what it means for those deploying services. The video is available online:
One thing that really interests me is how these various transports are going to be used. We essentially now have both HTTP/2 and WebSocket capable of pretty much the same things:
- Headers: HTTP/2 – binary + compression; WebSocket – binary, lightweight
- Content: HTTP/2 – mostly text + compression; WebSocket – binary or text
- Multiplexed sessions: supported by both
- Direction: HTTP/2 – client to server & server push; WebSocket – bidirectional
What HTTP/2 lacks in binary content, it provides in compression.
Assuming you needed to send messages back and forth between your server and its browser clients, you’ve probably been considering using HTTP based technologies – XHR, SSE, etc. A recent addition was WebSocket. While the other alternatives are mostly hacks and workarounds on top of HTTP, a WebSocket essentially hijacks an HTTP connection transforming it into a WebSocket – something defined specifically for the task of sending messages back and forth. It made WebSocket optimized for the task and a lot more scalable than other alternatives.
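The “hijack” is literal: a WebSocket begins life as a plain HTTP request carrying an Upgrade header, which the server answers with 101 Switching Protocols, after which the same TCP connection speaks the WebSocket framing instead of HTTP (sample key/accept values taken from RFC 6455):

```http
GET /chat HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

From that point on, no HTTP request/response overhead is involved – only lightweight message frames in both directions, which is what made WebSocket so much more scalable than the HTTP hacks that preceded it.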
With HTTP/2, most of the restrictions that existed in HTTP that required these hacks will be gone. This opens up the opportunity for some to skip WebSockets and stay on board with HTTP based signaling.
Last year I wrote about the need for WebSockets for realtime and WebRTC use cases. I am now wondering if that is still true with HTTP/2.
Why is it important?
- BOSH, Comet, XHR, SSE – these hacks can now be considered legacy. When you try to build a new service, you should think hard before adopting them
- WebSocket is what people use today. HTTP/2 is an interesting alternative
- When architecting a solution or picking a vendor, my suggestion would be to understand what transports they use today and what’s in their short-term and mid-term roadmap. These will end up affecting the performance of your service
Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.
WebRTC Basics: How (and Why) WebRTC Uses your Browser’s IP Address
To reach out to you.
I’ve been asked recently to write a few more posts on the topic of WebRTC basics – explaining how it works. This is one of those posts.
There’s been a recent frenzy around the NY Times’ use of WebRTC. The fraud detection mechanism for the ads there used WebRTC to find local addresses and determine if the user is real or a bot. Being a cat and mouse game over ad money means this will continue with every piece of arsenal both sides have at their disposal, and WebRTC plays an interesting role in it. The question was raised though – why does WebRTC need the browser’s IP address to begin with? What does it use it for?
To answer this question, we need to first define how the web normally operates (that is, before WebRTC came to be).
The illustration above explains it all. There’s a web server somewhere in the cloud. You reach it by knowing its IP address, but more often than not you reach it by knowing its domain name and obtaining its IP address from that domain name. The browser then goes on to send its requests to the server and all is good in the world.
Now, assume this is a social network of sorts, and one user wants to interact with another. The one and only way to achieve that with browsers is by having the web server proxy all of these messages – whatever is being sent from A to B is routed through the web server. This is true even if the web server has no real wish to store the messages or even know about them.
WebRTC allows working differently. It uses peer-to-peer technology, also known as P2P.
The illustration above is not new to VoIP developers, but it has a very important difference than how the web worked until the introduction of WebRTC. That line running directly between the two web browsers? That’s the first time that a web browser using HTML could communicate with another web browser directly without needing to go through a web server.
This is what makes all the difference in the need for IP addresses.
When you communicate with a web server, your browser is the one initiating the communication. It sends a request to the server, which will then respond through that same connection your browser created. So there’s no real need for your browser to announce its IP address in any way. But when one browser needs to send messages to another – how can it do that without an IP address?
So IP addresses need to be exchanged between browsers. The web server in the illustration does pass messages between browsers. These messages contain SDP, which among other things contains IP addresses to use for the exchange of data directly between the browsers in the future.
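To make this concrete, here is a small sketch of where those addresses actually sit inside the SDP that the web server relays – a function that pulls out the addresses the other browser advertises. The parsing logic and the SDP sample are illustrative, not taken from any specific stack:

```javascript
// Extract the addresses a peer advertises in its SDP: the "c=" connection
// line and the address field of each "a=candidate" line.
function advertisedAddresses(sdp) {
  const addrs = new Set();
  for (const line of sdp.split(/\r?\n/)) {
    let m = line.match(/^c=IN IP4 (\S+)/);
    if (m) addrs.add(m[1]);
    // candidate:<foundation> <component> <transport> <priority> <address> <port> ...
    m = line.match(/^a=candidate:\S+ \d+ \S+ \d+ (\S+) \d+/);
    if (m) addrs.add(m[1]);
  }
  return [...addrs];
}
```

In a real session the browser produces this SDP for you via createOffer/createAnswer – the point here is simply that the IP addresses ride inside it, through the signaling server, to the other browser.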
Why do we need P2P? Can’t we just go through a server?
Sure we can go through a server. In fact, a lot of use cases will end up using a server for various needs – things like recording the session, multiparty calling or connecting to other networks necessitate the use of a server.
But in many cases you may want to skip that server part:
- Voice and video means lots of bandwidth. Placing the burden on the server means the service will end up costing more
- Voice and video means lots of CPU power. Placing the burden on the server means the service will end up costing more
- Routing voice and video through the server means latency and more chance of packet losses, which will degrade the media quality
- Privacy concerns, as when we send media through a server, it is privy to the information or at the very least to the fact that communication took place
So there are times when we want the media or our messages to go peer-to-peer and not through a server. And for that we can use WebRTC, but we need to exchange IP addresses across browsers to make it happen.
Now, this exchange may not always translate into two web browsers communicating directly – we may still end up relaying messages and media. If you want to learn more about it, then check out the introduction to NATs and Firewalls on webrtcHacks.
Kranky and I are planning the next Kranky Geek in San Francisco sometime during the fall. Interested in speaking? Just ping me through my contact page.
The post WebRTC Basics: How (and Why) WebRTC Uses your Browser’s IP Address appeared first on BlogGeek.me.
Will Patents Kill H.265 or Will H.265’s Patents Kill WebRTC?
To H.265 (=HEVC) or not to H.265? That is the question. And the answer will be determined by the browser vendors.
I gave a keynote at a UC event here in Israel last week. I really enjoyed it. One of the other speakers made it a point to state that their new top of the line telepresence system now supports… H.265. And 4K. I was underwhelmed.
H.265 is the latest and greatest in video compression. Unless you count VP9. I’ve written about these codecs before.
If you think about WebRTC in 2016 or even 2017, you need to think beyond the current video codecs – H.264 and VP8. This is important, because you need to decide how much to invest in the media side of your service, and what implications these new codecs will bring to your architecture and development efforts.
I think H.265 is going to have a hard time in the market, and not just because VP9 is already out there, streamed over YouTube to most Chrome and Firefox browsers. It will be the case due to patents.
In March this year, MPEG-LA, the good folks counting money from H.264 patents, have announced a new patent pool for HEVC (=H.265). Two interesting posts to read about this are Jan Ozer‘s and Faultline‘s. Some things to note:
- There currently are 27 patent holders
- Over 500 essential patents are in the pool
- Not everyone with patents around H.265 has joined the pool, so licensing H.265 may end up being a nightmare
- Missing are Google and Microsoft from the patent pool
- Missing are also video conferencing vendors: Polycom, Avaya and Cisco
- Unit cost for encoder or decoder is $0.20/unit
- There’s an annual cap of $25M
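A quick back-of-envelope on those published numbers – nothing here beyond the figures listed above:

```javascript
// Published MPEG-LA HEVC pool terms (from the list above).
const perUnit = 0.2;     // $ per encoder or decoder unit
const annualCap = 25e6;  // $ annual cap

// How many units before the cap kicks in?
const unitsToHitCap = annualCap / perUnit; // 125 million units a year

// What would 300 million Firefox installs cost without the cap?
const uncappedFirefoxBill = 300e6 * perUnit; // roughly $60M – well past the cap

console.log(unitsToHitCap, uncappedFirefoxBill);
```

So any browser shipping more than about 125 million units a year hits the cap, which is why the $25M figure is the one that matters for browser vendors.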
What does that mean to WebRTC?
- Internet users are estimated at above 3 billion people, and Firefox has an estimated market share of around 12%. With over 300 million Firefox users, that places Mozilla way above the cap. Can Mozilla pay $25M a year to get H.265? Probably not
- It also means every successful browser vendor will need to shell out $25M a year to MPEG-LA. I can’t see this happening any time soon
- Google has their own VP9, probably with a slew of relevant patents associated with it. These will be used in the upcoming battle with H.265 and the MPEG-LA I assume
- Microsoft not joining… not sure what that means, but it can’t be good. Microsoft might just end up adopting VP9 and going with Google here, something that might actually look reasonable
- Apple being Apple, if they decide to support WebRTC (and that’s still a big if in 2015 and 2016), they won’t go with the VPx side of the house. They will go with H.265 – they are part of that patent pool
- Cisco isn’t part of this pool. I don’t see them shelling $25M a year on top of the estimated $6M they are already “contributing” for OpenH264 towards MPEG-LA
This is good news for Google and VP9, which is the competing video technology.
When we get to the WebRTC wars around H.265 and VP9, there will be more companies on the VP9 camp. The patents and hassles around H.265 will not make things easy:
- If WebRTC votes for VP9, it doesn’t bode well for H.265
  - WebRTC is already the largest deployment of video technology
  - Deciding to ignore it as a video codec isn’t a good thing to do
- If WebRTC votes for H.265, unlikely as it may seem, it may well kill standards based high quality video support across browsers in WebRTC
  - Most browsers will probably prefer ignoring it and going with VP9
  - Handsets might go with H.265 due to a political push by 3GPP (a large portion of the patent owners in H.265 are telecom operators and their vendors)
  - This disparity between browsers and handsets won’t be good for the market or for WebRTC
The codec wars are not behind us. Interesting times ahead. Better be prepared.
Kranky and I are planning the next Kranky Geek in San Francisco sometime during the fall. Interested in speaking? Just ping me through my contact page.
The post Will Patents Kill H.265 or Will H.265’s Patents Kill WebRTC? appeared first on BlogGeek.me.
Is Microsoft Edge Going to be the Best Browser Around?
The newest game in town.
Apple’s Safari. Haven’t used it so can’t say anything. Just that most people I know are really comfortable using Chrome on Macs.
Chrome? Word around is that it’s bloated and kills your CPU. I know. On a machine with 4GB of memory, I need to switch and use Firefox instead. Otherwise, the machine won’t survive the default tabs I have open.
Firefox? Hmm. Some would say that their Hello service is bloatware. I don’t really have an opinion. I am fine with using Firefox, but I prefer Chrome. No specific reason.
From a recent blog post from Microsoft, it seems like Microsoft Edge is faster than Chrome:
In this build, Microsoft Edge is even better and is beating Chrome and Safari on their own JavaScript benchmarks:
- On WebKit Sunspider, Edge is 112% faster than Chrome
- On Google Octane, Edge is 11% faster than Chrome
- On Apple JetStream, Edge is 37% faster than Chrome
Coming from Microsoft’s dev team, I wouldn’t believe it. Not immediately. Others have slightly different results:
Here’s the rundown (click on an individual test to see the nitty-gritty details):
- SunSpider: Edge wins!
- Octane: Chrome wins!
- Kraken: Chrome wins!
- JetStream: Chrome wins!
- Oort Online: Chrome wins!
- Peacekeeper: Firefox wins!
- WebXPRT: Chrome wins!
- HTML5Test: Chrome wins!
Some already want to switch from Chrome to Edge.
Edge is even showing signs of WebRTC support, so by year end, who knows? I might be using it regularly as well.
–
Edge is the new shiny browser.
Firefox is old news. Search Google for Firefox redesign – they’ve had a major one on a yearly basis. Next in line is their UI framework for extensions, as far as I can tell.
Safari is based on WebKit. WebKit was ditched by Google so Chrome could be developed faster. As such, Chrome is built on the ashes of WebKit.
Internet Explorer anyone?
Edge started from a clean slate: a design from 2014, where developers could think about how to build a browser today, as opposed to teams doing that before smartphones, responsive design or life without Flash.
Can Edge be the best next thing? A real threat to Chrome on Windows devices? Yes.
Kranky and I are planning the next Kranky Geek in San Francisco sometime during the fall. Interested in speaking? Just ping me through my contact page.
The post Is Microsoft Edge Going to be the Best Browser Around? appeared first on BlogGeek.me.
Now That Flash and Plugins are out the Door, What’s Holding you from Adopting WebRTC?
All routes are leading towards WebRTC.
Somehow, people are still complaining about adoption of WebRTC in browsers instead of checking their alternatives.
Before WebRTC came to our lives, we had pretty much 3 ways of getting voice and video calling into our machines:
- Build an application and have users install it on their PCs
- Use Flash to have it all inside the browser
- Develop a plugin for the service and have users install it on their browsers
We’re now in 2015, and 3 (again that number) distinct things have changed:
- On our PCs we are less tolerant of installing “stuff”
  - As more and more services migrate towards the cloud, so do our habits – we use browsers as our window to the world instead of installed software
  - Chromebooks are becoming popular in some areas, and installing software on them is close to impossible
- Plugins are dying. Microsoft is banning plugins in Edge, joining Google’s Chrome announcement on the same topic
- Flash is being thrown out the window, which is what I want to focus on here
There have been a lot of recent publicity around a new round of zero day exploits and vulnerabilities in Flash. It started with a group called The Hacking Team being hacked, and their techniques exposed. They used a few Flash vulnerabilities among other mechanisms. While Adobe is actively fixing these issues, some decided to vocalize their discontent with Flash:
Facebook’s Chief Security Officer wants Adobe to declare an end-of-life date for Flash.
It is time for Adobe to announce the end-of-life date for Flash and to ask the browsers to set killbits on the same day.
— Alex Stamos (@alexstamos) July 12, 2015
Mozilla decided to ban Flash from its browser until the recent known vulnerabilities are patched.
Don’t get me wrong here. Flash will continue being with us for a long time. Browsers will block Flash and then re-enable it, dealing with continuing waves of vulnerabilities that will be found. But the question then becomes – why should you be using it any longer?
- You can acquire camera and microphone using WebRTC today, so no need for Flash
- You can show videos using HTML5 and MPEG-DASH, so no need for Flash
- You can use WebGL and a slew of other web technologies to build interactivity into sites, so no need for Flash
- You can run voice and video calls at a higher quality than what Flash ever could with WebRTC
- And you can do all of the above within environments that are superior to Flash in their architecture, quality and security
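As an example of how little code the Flash-replacement path needs, here is a sketch of acquiring the camera and microphone with WebRTC's getUserMedia. The constraints shape is the standard one; the element wiring is illustrative:

```javascript
// Pure helper building a standard getUserMedia constraints object.
function cameraConstraints(width = 1280, height = 720) {
  return {
    audio: true,
    video: { width: { ideal: width }, height: { ideal: height } },
  };
}

// Browser-only part: needs a securely served page and the user's permission.
async function startCamera(videoElement) {
  const stream = await navigator.mediaDevices.getUserMedia(cameraConstraints());
  videoElement.srcObject = stream; // no plugin, no install, no Flash
  return stream;
}
```

The browser handles the permission prompt, device selection and capture pipeline – everything Flash needed its own runtime for.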
Without Flash and Plugin support in your future, why would you NOT use WebRTC for your next service?
Kranky and I are planning the next Kranky Geek in San Francisco sometime during the fall. Interested in speaking? Just ping me through my contact page.
The post Now That Flash and Plugins are out the Door, What’s Holding you from Adopting WebRTC? appeared first on BlogGeek.me.
What I Learned About the WebRTC Market from a Webinar on WebRTC Testing
We’re a lot more than I had known.
One of my recent “projects” is co-founding a startup called testRTC which offers testing and monitoring services for WebRTC based services. The “real” public announcement made about this service was here in these last couple of days and through a webinar we did along with SmartBear on the impact of WebRTC on testing.
I actively monitor and maintain a dataset of WebRTC vendors. I use it to understand the WebRTC ecosystem better. I make it a point to know as many vendors as possible through various means. I thought I had this space pretty much covered.
What surprised me was the barrage of requests for information and demos by vendors with real services out there that came into our testRTC contact page that I just wasn’t aware of. About 50% of the requests from vendors came from someone I didn’t know existed.
My current dataset size is now reaching 700 vendors and projects. There might be twice that many out there.
Why is this important?
- A lot of the vendors out there are rather silent about what they are doing. This isn’t about the technology – it is about solving a problem for a specific customer
- There are enough vendors today to require a solid, dedicated testing tool focused on WebRTC. I am more confident about this decision we made with testRTC
- If you are building something, be sure to let me know about it or to add it to the WebRTC Index
Oh – and if you want to see a demo of testRTC in action, we will be introducing it and demoing it at the upcoming VUC meeting tomorrow.
Want to make the best decision on the right WebRTC platform for your company? Now you can! Check out my WebRTC PaaS report, written specifically to assist you with this task.
The post What I Learned About the WebRTC Market from a Webinar on WebRTC Testing appeared first on BlogGeek.me.
Is the Web Finally Growing up and Going Binary?
Maybe.
I remember the good old days. I was hired to work on this signaling protocol called H.323. It used an interesting notation called ASN.1 with a binary encoding, capable of using a bit of data for a boolean of information. Life was good.
Then came SIP. With its “simple” text notation, it conquered the market. Everyone could just use and debug it by looking at the network. It made things so much easier for developers. So they told me. What they forgot to tell us then was how hard it is to parse text properly – especially for mere machines.
Anyway, it is now 2015. We live in a textual internet world. We use HTML to describe our web pages, CSS to express their design, and we code using JavaScript and JSON. All of these formats are textual in nature. Our expectation is that this text that humans write (and read to debug) will be read and processed by machines.
This verbosity of text that we use over the internet is slowing us down twice:
- Text takes more space than binary information, so we end up sending more data over the network
- Computers need to work harder to parse text than they do binary
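A quick sketch of the first point – the same numeric payload serialized as JSON text versus raw binary. The numbers are illustrative; the exact ratio depends on the data:

```javascript
// 1,000 floating point readings, e.g. sensor samples.
const readings = Array.from({ length: 1000 }, () => Math.random());

// As JSON text: every digit of every number costs a byte on the wire.
const textBytes = new TextEncoder().encode(JSON.stringify(readings)).length;

// As raw binary: a 64-bit float is always exactly 8 bytes.
const binaryBytes = new Float64Array(readings).byteLength;

console.log(`JSON: ${textBytes} bytes, binary: ${binaryBytes} bytes`);
```

For full-precision floats the JSON form comes out at more than double the binary size – before the CPU cost of parsing all those digits back is even counted.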
So we’ve struggled through the years to fix these issues. We minify the text, rendering it unreadable to humans. We use compression on the network, rendering it unreadable to humans over the network. We cache data. We use JIT (Just In Time) compilation on JavaScript to speed it up. We essentially lost most of the benefits of text along the way, yet remained with the performance issues.
This last year, several initiatives have been put in place that are about to change all that – to move us from a textual web into a binary one. Users won’t feel the difference. Most web developers won’t feel it either. But things are about to change for the better.
Here are the two initiatives that are making all the difference here.
HTTP/2
HTTP/2 is the latest and greatest in internet transport protocols. It has been an official standard (RFC 7540) for almost 2 full months now.
Its main objective is to speed up the web and to remove a lot of the hacks we had to use to build web pages and run interactive websites (BOSH, Comet and CSS sprites come to mind here).
Oh – and it is binary. From the RFC:
Finally, HTTP/2 also enables more efficient processing of messages through use of binary message framing.
While the content of our web pages will remain textual and verbose (HTML), the transport protocol used to send them, with its multitude of headers, is becoming binary.
To make things “worse”, HTTP/2 is about to encrypt everything by default, simply because the browsers who implemented it so far (Chrome and Firefox) decided not to support non-encrypted connections with HTTP/2. So the verbosity and the ability to watch messages on the network and debug things has gone down the drain.
WebAssembly
I’ve recently covered WebAssembly, comparing the decisions around it to those of WebRTC.
WebAssembly is a binary format meant to replace the use of JavaScript in the browser.
Developers will write their frontend code in JavaScript or whatever other language they fancy, and will have to compile it to WebAssembly. The browser will then execute WebAssembly without the need to parse too much text as it needs to do today. The end result? A faster web, with more languages available to developers.
This is going to take a few years to materialize and many more years to become dominant and maybe replace JavaScript, but it is the intent here that matters.
Why is it important?
We need to wean ourselves from textual protocols and shift to binary ones.
Yes. Machines are becoming faster. Processing power more available. Bandwidth abundant. And we still have clogged networks and overloaded CPUs.
The Internet of Things won’t make things any easier on us – we need ever smaller devices to connect and communicate. We need low power with great performance. We cannot afford to ruin it all with architectures and designs based on text protocols.
The binary web is coming. Better be prepared for it.
The post Is the Web Finally Growing up and Going Binary? appeared first on BlogGeek.me.
WebRTC on the New York Times – Not as an Article or a Video Chat Feature
WebRTC has been mentioned with regards to the New York Times. It isn’t about an article covering it – or a new video chat service they now offer.
I was greeted this weekend by this interesting tweet:
WebRTC being used now by embedded 3rd party on http://t.co/AaD7p3qKrE to report visitors' local IP addresses. pic.twitter.com/xPdh9v7VQW
— Mike O'Neill (@incloud) July 10, 2015
I haven’t been able to confirm it – didn’t find the culprit code piece in the several minutes I searched for it, but it may well be genuine.
The New York Times may well be using WebRTC to (gasp) find your private IP address.
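If that is indeed what's happening, the underlying trick is well known and takes only a few lines: open a throwaway RTCPeerConnection and read the host candidates it gathers – no camera or microphone permission prompt involved. A sketch (note that browsers have since tightened this behavior considerably):

```javascript
// Pure helper: is this an RFC 1918 private address? (testable anywhere)
function isPrivateIPv4(ip) {
  return /^(10\.|192\.168\.|172\.(1[6-9]|2\d|3[01])\.)/.test(ip);
}

// Browser-only part: harvest local addresses from ICE host candidates.
function harvestLocalIPs(onIP) {
  const pc = new RTCPeerConnection({ iceServers: [] });
  pc.createDataChannel("probe"); // forces ICE gathering without any media
  pc.onicecandidate = (e) => {
    const m =
      e.candidate &&
      e.candidate.candidate.match(/ (\d+\.\d+\.\d+\.\d+) \d+ typ host/);
    if (m) onIP(m[1]);
  };
  pc.createOffer().then((offer) => pc.setLocalDescription(offer));
}
```

Combine the private addresses this yields with the public IP the server already sees, and you can tell apart individual machines behind the same NAT – which is exactly the fingerprinting use discussed below.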
In the WebRTC Forum on Facebook, a short exchange took place between Cullen Jennings (Cisco) and Michael Jerris (FreeSWITCH):
Cullen: I’ve been watching this for months now – Google adds served on slash dot for example and many other sites do this. I don’t think it is to exactly get the local ip. I agree they get that but I think there is more interesting things gathered as straight up fingerprinting.
Michael: local ip doesn’t seem that useful for marketers except as a user fingerprinting tool. They already have your public ip, this helps them differentiate between people behind nat. it’s a bit icky but not such a big deal. This issue blows up again when someone starts using it maliciously, which I’m sure will happen soon enough. I don’t get why exactly we don’t just prompt for this the same way we do camera and mic, it wouldn’t be a huge deal to work that into the spec. That being said, I don’t think it’s actually as big of a deal as it has been made either
Cullen: It’s not exactly clear to me exactly how one uses this maliciously. I can tell you most people’s IP address right now – 192.168.0.1 – and knowing that a large percentage of the world has that local IP doesn’t directly help you hack much. To me the key thing is browsers need to not allow network connections to random stuff inside the firewall that is not prepared to talk to a browser. I think the browser vendors are very aware of this and doing the right thing.
My local IP address is 10.0.0.1 which is also quite popular.
In recent months, we’ve seen a lot of FUD going on about WebRTC and the fact that it leaks local IP addresses. I’ve been struggling myself in trying to understand what the fuss is. It does seem bad, a web page knowing too much about me. But how is that hurting me in any way? I am not a security expert, so I can’t really say, but I do believe the noise levels around this topic are higher than they should be.
When coming to analyze this, there are a couple of things to remember:
- As Cullen Jennings points out, for the most part, the local IP address is mostly known. At least for the consumers at home
- We are already sharing so much about ourselves of our own volition that I don’t see how this is such an important piece of information to be giving away now
- The alternative isn’t any good either: I currently have installed on my relatively new laptop at least 4 different communication apps that have “forced” themselves on my browser. They know my local IP address and probably a lot more than that. No one seems to care about it. I can install them easily on most/all enterprise machines as well
- Browser fingerprinting isn’t new. It is the process of finding out who you are and singling you out as you surf across multiple websites. Does it need WebRTC? Probably not. Go on and check if your browser has a unique fingerprint – all of the 4 browsers I checked (on 3 devices, including my smartphone) turned out rather unique – without the use of WebRTC
- The imminent death of plugins and the commonality of browsers on popular smartphones means that browser fingerprints may become less unique, reducing their usefulness. WebRTC “fixes” that by adding the coupling of the additional local and public IP address information. Is that a good thing? A bad thing?
One thing is clear. WebRTC has a lot more uses than its original intended capability of simply connecting a call.
The post WebRTC on the New York Times – Not as an Article or a Video Chat Feature appeared first on BlogGeek.me.
3CX and WebRTC: An Interview With Nick Galea
WebRTC video conferencing for the enterprise.
[If you are new around here, then you should know I’ve been writing about WebRTC lately. You can skim through the WebRTC post series or just read what WebRTC is all about.]
I have been following 3CX for several years. They were one of the first in the enterprise communication solution vendors that offered WebRTC. Recently, they introduced a new standalone service called 3CX WebMeeting. It has all the expected features of an enterprise multiparty video calling service. And it uses WebRTC.
I had a chat with Nick Galea, CEO of 3CX. I wanted to know what are they doing with WebRTC and what are his impressions of it.
Here are his answers.
What is 3CX all about?
3CX provides a straightforward and easy to use & manage communication solution that doesn’t lack in functionality or features and is still highly affordable. We recognised that there was a need for a Windows-based software PBX and so this is where 3CX began.
Given the fact that the majority of businesses already use Windows, 3CX provides a solution that is easy to configure and manage for IT Admins. There’s no need for any additional training that can be time-consuming and costly. We also help businesses save money on phone bills with the use of SIP trunking and free interoffice calls, and travel costs can be reduced by making use of video conferencing with 3CX WebMeeting. As a UC solutions provider, we focus on cost savings, management, productivity and mobility, and we help our customers to achieve improvements in all four aspects.
Our focus is on innovation and thus, our development team works nonstop to bring our customers and partners the very best. We are always looking out for the latest great technologies and how we can use them to make 3CX Phone System even better and so of course, WebRTC was a technology that we just had to implement.
You decided to plunge into the waters and use WebRTC. Why is that?
To us, unified communications is not only about bringing all methods of communication into one user-friendly interface, but about making those methods of communication as seamless, enjoyable and productive for all involved, whether that be for the organisation that invested in the system, or a partner or client that simply has a computer and internet connection to work with.
Running a business is not an easy feat, and the whole purpose of solutions such as 3CX Phone System and 3CX WebMeeting is to make everyday business processes easier. So, for us, WebRTC was a no-brainer. We believe in plugin-free unified communications and with such technology available for us to leverage, the days of inconvenient downloads and time-consuming preparation in order to successfully (or in some cases, unsuccessfully) hold a meeting are over.
What signaling have you decided to integrate on top of WebRTC?
Signalling is performed over WebSocket for maximum compatibility. Messages and commands are enveloped in JSON objects. ICE candidates are generated by our server library, while SDPs are parsed and translated by the MCU. This allows full control over SDP features like FEC and RTX in order to achieve the best video performance.
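The envelope pattern Nick describes can be sketched roughly like this – the field names are my own invention for illustration, not 3CX's actual wire format:

```javascript
// Wrap a signaling command and its payload in a versioned JSON envelope
// before sending it over the WebSocket.
function envelope(command, payload) {
  return JSON.stringify({ v: 1, command, payload, ts: Date.now() });
}

// Unwrap a received message, rejecting envelope versions we don't speak.
function open(msg) {
  const { v, command, payload } = JSON.parse(msg);
  if (v !== 1) throw new Error("unsupported envelope version");
  return { command, payload };
}
```

With every message self-describing like this, the server can route SDP, ICE candidates and application commands over the one WebSocket without caring about their contents.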
Backend. What technologies and architecture are you using there?
The platform is based on a web application written in PHP. We developed a custom MCU service (actually it’s a Selective Forwarding Unit, aka SFU). This service allows us to handle a very large number of media streams in real time. Performance is optimized to reduce latency to a minimum. Raw media streams can be saved to disk, and our Converter Service then automatically produces a standard video file of the meeting recording.
A key component of the web application is the MCU Cluster Manager, which is able to handle several MCUs scattered in different areas, distribute load and manage user location preference.
Since you cater the enterprise, can you tell me a bit about your experience with Internet Explorer, WebRTC and customers?
So far most people are using Chrome without any complaints so it doesn’t concern me that WebRTC is not supported by Internet Explorer. We haven’t come across any issues with customers as they are aware that this is a limitation of the technology and not the software and actually our stats show that 95% of people connect or reconnect with Chrome after receiving the warning message, so for most users Chrome is not a problem.
Where do you see WebRTC going in 2-5 years?
I think that WebRTC will become the de facto communications standard for video conferencing, and maybe even for calls. WebRTC is a part of how technology is evolving and we may even see some surprising uses for it outside the realms of what we’re imagining right now. It’s incredibly easy to use and no other technology is able to compete. It’s what the developers are able to do with it that is really going to make the difference and I believe there is still so much more to come in terms of how WebRTC can be utilised.
If you had one piece of advice for those thinking of adopting WebRTC, what would it be?
That they should have adopted it earlier :).
Given the opportunity, what would you change in WebRTC?
Nothing really, but the technology is still growing so I’m looking forward to seeing what’s in store for WebRTC and how it’s going to improve.
What’s next for 3CX?
We’re working on tighter integration between 3CX WebMeeting and 3CX Phone System and integrating our platform more closely with other vendors of third-party apps such as CRM systems and so on.
–
The interviews are intended to give different viewpoints than my own – you can read more WebRTC interviews.
The post 3CX and WebRTC: An Interview With Nick Galea appeared first on BlogGeek.me.