News from Industry

2600Hz Case Study: A Look At Slable’s Partnership and How 2600Hz’s Offerings Improved Their Business

2600hz - Tue, 08/04/2015 - 21:08

About Slable

Slable provides affordable enterprise I.T. and communication solutions for small-to-medium businesses (SMB) in the Washington, D.C. metropolitan area. Their goal is to eliminate all I.T.-related hassle from work environments of various types ranging from veterinarians to marketing/PR firms. Their network infrastructure enables companies to host their applications on a reliable and secure network so that their customers can focus on what they do best. Slable’s team of 14 employees is able to provide stellar support around the clock for all customers.

Challenges

Slable needed a reliable VoIP platform that could be easily used by their customers. Slable’s previous VoIP solution was not powerful enough to handle complex features and began to experience weekly outages, resulting in a loss of faith in their services. Facing the prospect of losing customers, Slable decided to find a reliable VoIP platform that would scale with their growing customer base and provide feature-rich applications for advanced users.


2600Hz’s Solution

Implementation

When Slable started researching VoIP platforms, they realized that there
was little focus on SMB customers. Most providers lacked the training and documentation for Slable to implement their solution and had minimums that were too high. 2600Hz was able to provide a reliable platform, partner support, quick onboarding/training and customized solutions for advanced users.

Business Outcome

2600Hz’s customizable platform enhanced Slable’s VoIP capabilities and freed time for customer outreach, service, and support. Slable achieved a better ROI due to prompt support responses, reduced downtime, easy migration, and better tracking of customers.

2600Hz is continuously adding unique and bleeding-edge features, creating a one-of-a-kind telecom experience for Slable’s clients. As a result, Slable has been able to grow their VoIP business and is aggressively pushing VoIP in the local Washington, D.C. market with cost savings and increased features.


Key Improvements

  • Comprehensive Training 
  • Competitive Pricing 
  • True Support 
  • Customizable Solutions 
  • Scalability
  • Mobility

How to stop a leak – the WebRTC notifier

webrtchacks - Tue, 08/04/2015 - 11:15

The “IP Address Leakage” topic has turned into a public relations issue for WebRTC. It is a fact that the WebRTC APIs can be used today to share one’s private IP address(es) without any user consent. Nefarious websites could potentially use this information to fingerprint individuals who do not want to be tracked. Why is this an issue? Can this be stopped? Can I tell when someone is trying to use WebRTC without my knowledge? We try to cover those questions below, along with a walkthrough of a Chrome extension that you can install or modify for yourself that provides a notification if WebRTC is being used without your knowledge.

Creative solutions for leaks

The “IP Leakage” problem

Why does WebRTC need a local IP address?

As Reid explained long ago in his An Intro to WebRTC’s NAT/Firewall Problem, peer-to-peer communications cannot occur without providing the peer your IP address. The ICE protocol gathers and checks all the addresses that can be used to communicate to a peer. IP addresses come in a few flavors:

  • host IP address – this is usually the local LAN IP address and is the one being exposed that is causing all the fuss
  • server-reflexive – this is the outside address that the web server hosting the page will see
  • relay – this will show up if you have a TURN server

Why not just use the server reflexive and relay addresses? If you have 2 peers that want to talk to each other on the same LAN, then the most effective way to do this is to use the host IP address and keep all the traffic local. Otherwise you might end up sending the traffic out to the WAN and then back into the LAN, adding a lot of latency and degrading quality. The host IP address is the best address to use for this situation.

Relay addresses require that you set up a TURN server to relay your media. Use of relay means you are no longer truly peer-to-peer. Relay use is typically temporary, to speed up connection time, or a last resort when a direct peer-to-peer connection cannot be made. Relay is generally avoided since just passing along a lot of media with no added value is expensive in terms of bandwidth costs and added latency.

This is why the WebRTC designers do not consider the exposure of the host IP address a bug – they built WebRTC this way on purpose. The challenge is that this mechanism can be used to help with fingerprinting, providing a datapoint on your local addresses that you and your network administrator might not be happy about. The concern over this issue is illustrated by the enormous response to the Dear NY Times, if you’re going to hack people, at least do it cleanly! post last month.
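
To make the mechanism concrete, here is a minimal sketch – not taken from the post – of how a page can start ICE gathering through a data-channel-only connection, with no consent prompt. The constructor prefix and callback-style createOffer match 2015-era Chrome:

var PeerConnection = window.RTCPeerConnection || window.webkitRTCPeerConnection;
var pc = new PeerConnection({ iceServers: [] });   // no STUN/TURN needed to see host candidates
pc.createDataChannel('probe');                     // data channel only, so no getUserMedia prompt
pc.onicecandidate = function (event) {
  if (event.candidate) {
    // "typ host" candidates carry the local LAN IP address
    console.log(event.candidate.candidate);
  }
};
pc.createOffer(function (offer) {
  pc.setLocalDescription(offer);                   // setting the local description starts ICE gathering
}, function (error) {
  console.error(error);
});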

Why not just ask when someone wants your local IP address?

When you want to share a video or audio stream, a WebRTC application uses the getUserMedia API. The getUserMedia API requires user consent to access the camera & microphone. However, there is no requirement to do this when using a dataChannel. So why not require consent here?

Let’s look at the use cases. For a typical WebRTC videochat, user consent is required for the camera permission. The question “do you want to allow this site to access your camera and microphone” is easy for users to understand. One might require consent here, or impose the requirement that a mediastream originating from a camera is attached to the peerconnection.

What about a webinar? Participants might want to join just to listen. No permission is asked currently. Is that bad? Well… is there a permission prompt when you connect to a streaming server to watch a video? No. What is the question that should be asked here?

There are use cases like file transfer which involve datachannel-only connections without the requirement of local media. Since you can upload the file to any HTTP server without the browser asking for any permission, what is the question to ask here?

Last but not least, there are use cases like peer-to-peer CDNs where visitors of a website form a CDN to reduce the server load for high-bandwidth resources like videos. While many people claim this is a new use case enabled by WebRTC, Adobe showed this capability in Flash at MAX 2008 and 2009.

As a side note, the RTMFP protocol in Flash has leaked the same information since then. It was just a lot less obvious to acquire.

There is an additional caveat here. Adobe required user consent before using the user’s upstream to share data — even if peer-to-peer connections did not require consent. Apparently, this consent dialog completely killed the use-case for Flash, at a time when it was still the best way to deliver video. What is the question that the user must answer here? And does the user understand the question?

Photo courtesy flickr user Nisha A under Creative Commons 2.0

What are the browser vendors and the W3C doing about it?

Last week Google created an extension with source code to limit WebRTC to only using public addresses. There have been some technical concerns about breaking applications and degrading performance.
Mozilla is considering similar capabilities for Firefox as discussed here. This should hit the nightly build soon.
The W3C also discussed the issue at their recent meeting in Berlin and will likely address this as part of the unsanctioned tracking group.

 

How do I know if a site is trying to run WebRTC?

We usually have chrome://webrtc-internals open all the time and occasionally we do see sites using WebRTC in unexpected ways. I wondered if there was an easier way to see if a site was covertly using WebRTC, so I asked Fippo how hard it would be to make an extension to show peerConnection attempts. In usual fashion he had some working sample code back to me in a couple of hours. Let’s take a look…

How the extension works

The extension source code is available on github.
It consists of a content script, snoop.js, which is run at document start (as specified in the manifest.json file), and a background script, background.js.
The background script sits idle, waiting for messages sent via the Message Passing API.
When it receives a message in the right format, it prints that message to the background page’s console and shows the page action.

chrome.runtime.onConnect.addListener(function (channel) {
  channel.onMessage.addListener(function (message, port) {
    if (message[0] !== 'WebRTCSnoop') return;
    console.log(new Date(), message[1], message[2]);
    chrome.pageAction.show(port.sender.tab.id);
  });
});

Pretty simple, eh? You can inspect the background page console from the chrome://extensions page.
Let’s look at the content script as well. It consists of three blocks.
The first block does the important work. It overloads the createOffer, createAnswer, setLocalDescription and setRemoteDescription methods of webkitRTCPeerConnection using a technique also used by adapter.js. Whenever one of these methods is called, it does a window.postMessage which then triggers a call to the background page.

var inject = '(' + function() {
  // taken from adapter.js, written by me
  ['createOffer', 'createAnswer', 'setLocalDescription', 'setRemoteDescription'].forEach(function(method) {
    var nativeMethod = webkitRTCPeerConnection.prototype[method];
    webkitRTCPeerConnection.prototype[method] = function() {
      // TODO: serialize arguments
      var self = this;
      this.addEventListener('icecandidate', function() {
        //console.log('ice candidate', arguments);
      }, false);
      window.postMessage(['WebRTCSnoop', window.location.href, method], '*');
      return nativeMethod.apply(this, arguments);
    };
  });
} + ')();';

The code snippet also shows how to listen for the ICE candidates by adding an event listener, without replacing any handler the page itself installs.
The second part, inspired by the WebRTCBlock extension, injects the JavaScript into the page by creating a script element, inserting the code and removing it immediately.

var script = document.createElement('script');
script.textContent = inject;
(document.head || document.documentElement).appendChild(script);
script.parentNode.removeChild(script);

Last but not least, a message channel is set up that listens to the events generated in the first part and sends them to the background page:

var channel = chrome.runtime.connect();
window.addEventListener('message', function (event) {
  if (typeof(event.data) === 'string') return;
  if (event.data[0] !== 'WebRTCSnoop') return;
  channel.postMessage(event.data);
});

There is a caveat here. The code is not executed for iframes that use the sandbox attribute as described here so it does not detect all usages of WebRTC. That is outside our control. Hey Google… can you fix this?

Ok, but how do I install it?

If you are not familiar with side-loading Chrome extensions, the instructions are easy:

  1. Download the zip from github
  2. Unzip it to a folder of your choice
  3. go to chrome://extensions
  4. Click on “Developer mode”
  5. Then click “Load unpacked extension”
  6. Find the webrtcnotify-master folder that you unzipped

View of the WebRTC Notifier extension

That’s it! If you want to see more details from the extension then it is helpful to load the extension’s console log. To do this just click on “background page” by “Inspect views”.

If you are familiar with Chrome Extensions and have improvement ideas, please contribute to the project!

What do I do if I find an offending site?

No one really knows how big of a problem this is yet, so let’s try to crowd source it. If you find a site that appears to be using WebRTC to gather your IP address in a suspicious way then post a comment about it here. If we get a bunch of these and others in the community confirm then we will create a public list.

With some more time we could potentially combine Selenium with this extension to do something like a survey of the most popular 100k websites. We are not trying to start a witch hunt here, but having data to illustrate how big a problem this is would help inform the optimal path forward enormously.

{“authors”: [“Chad Hart“, “Philipp Hancke“]}

Want to keep up on our latest posts? Please click here to subscribe to our mailing list if you have not already. We only email post updates. You can also follow us on twitter at @webrtcHacks for blog updates and news of technical WebRTC topics or our individual feeds @chadwallacehart, @victorpascual and @tsahil.

The post How to stop a leak – the WebRTC notifier appeared first on webrtcHacks.

Should WebRTC Data Channels be Explicitly Approved by the User?

bloggeek - Mon, 08/03/2015 - 10:00

I don’t think so.

There has been a lot of chatter lately about the NY Times and local IP address use. A rather old Mozilla bug got some attention due to it, with some interesting comments:

Daniel Roesler #106:

I’ve said this before and I’ll say it again. Data channels should require user consent just the same as video and audio (getUserMedia). I haven’t yet heard a good reason on why a silent P2P data channel connection is required.

Eric Rescorla #116:

We are considering adding an extension to restrict the use of WebRTC but are still studying what would be most effective.

Zack Weinberg (:zwol) #117:

I would like to second this observation. I have not attempted to dig into the details of the spec, but it *sounds* like the entire problem goes away if creating any sort of channel requires explicit user authorization.

The rants go on.

What they all share in common? Leak of IP addresses is wrong and shouldn’t be done. Not without a user’s consent.

I’d like to break the problem into two parts here:

  1. IP leakage
  2. Consent
IP leakage

The issue of leaking a local IP address is disconcerting to some. While I understand the issue for VPN configurations, I find it a useless debate for the rest of us.

My own local IP address at the moment is 10.0.0.3. Feel free to store this information for future dealings with me. Now that you know it – have you gained anything? Probably not.

Oh, and if you have a mobile phone, you probably installed a bunch of apps. These apps are just as complex as any web page – they connect to third parties, they most likely use an ad network, etc. How hard is it to get the local IP address inside an app and send it to someone else? Do you need special permissions for it? Do users actually approve it in any way? Do you think the NY Times app uses this for anything? How about Candy Crush? Or Angry Birds?

Local IPs are compromised already. Everywhere. They are easy to guess. They are easy to obtain in apps. Why is the web so different? And what huge secret do they store?

Consent

When someone wants access to my camera, microphone or screen – I understand the need for consent. I welcome it.

But when it comes to the data channel I am not so sure. There are differences here. My thinking about it runs in multiple paths.

1. Content

Microphone, Camera and Screen actually give new data to JavaScript code to work with. The Data Channel is a transport and not the data itself.

The browser doesn’t ask permission to download 50+ resources from a web page when we only asked for the web page. It doesn’t ask for permission when 40+ of these resources are located at other domains than the one we asked for. It doesn’t ask for permission when a web page wants to open a WebSocket either. It doesn’t ask for permission when a web page uses other bidirectional methods to connect to our browser – SSE or XHR – it just lets them run.
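
As a quick illustration (with hypothetical URLs), none of these transports triggers any consent dialog – which is the comparison being drawn to the data channel:

var ws = new WebSocket('wss://example.com/socket');        // bidirectional channel, no prompt
var es = new EventSource('https://example.com/stream');    // SSE server push, no prompt
var xhr = new XMLHttpRequest();                            // plain HTTP request, no prompt
xhr.open('GET', 'https://example.com/data');
xhr.send();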

As we are trying to protect content, permission on the data channel level seems unnecessary.

If we want to protect local IP address exposure, we should find other means of doing that – or accept that in many use cases, they aren’t worth the protection.

2. User experience

For a video call, a request to allow access is fine – there’s a human involved. But for a programmatic interface that’s a bit of an overkill. With many WebRTC data channel use cases targeting CDN augmentation or replacement, would users be willing to take the additional approval step? Would content providers be willing to take the risk of losing customers?

Let’s assume GIS and mapping on the internet adopts the WebRTC data channel – similar to what PeerMesh are doing. Would you be happy with the need to allow each and every web page that has a Google Map on it to have access to the data channel?

Would you want your games to ask you to allow connecting to others when switching to multiplayer?

Do you want Akamai (a CDN) powered websites to ask you to allow them to work to speed up page loads?

This doesn’t work.

Stop thinking about the data channel as a trojan horse – it is just another hammer in our toolbox.

3. Web trends

In many ways, we are at a phase where we are trying to decentralize the web – enabling browsers to reach each other and to dis-intermediate the servers from the communications. FireChat has been doing it for a while now, but they are far from being alone in it.

This kind of decentralization cannot work properly without letting browsers chat sideways instead of via web servers. While in the future we may want to make such connections available as low-level TCP and other network building blocks, this isn’t the case today.

We need to find other solutions than placing a permission request on every data channel we try opening.

Why is it important?

We need to be able to distinguish between FUD and reality.

Data channels by themselves aren’t a threat. They may change the way browsers operate on the network level, which may expose vulnerabilities, but the solution shouldn’t be disabling data channels or putting manual roadblocks in front of them in the browser – it should be better architecting of the solutions around them.

As WebRTC grows and matures, these issues will be polished out. For now, I still believe WebRTC is the most secure VoIP technology out there to build your services. Trust, on the other hand, will always depend on the web service’s developers.

The post Should WebRTC Data Channels be Explicitly Approved by the User? appeared first on BlogGeek.me.

Kamailio v4.2.6 Released

miconda - Thu, 07/30/2015 - 13:34
Kamailio SIP Server v4.2.6 stable is out! This is a minor release including fixes in code and documentation since v4.2.5. Kamailio (former OpenSER) v4.2.6 is based on the latest version of GIT branch 4.2, therefore those running previous 4.2.x versions are advised to upgrade to 4.2.6 (or to the 4.3.x series). If you upgrade from an older 4.2.x to 4.2.6, no changes have to be made to the configuration file or database structure compared with older v4.2.x.

Resources for Kamailio version 4.2.6

Source tarballs are available at:

Detailed changelog:

Download via GIT:

# git clone git://git.kamailio.org/kamailio kamailio
# cd kamailio
# git checkout -b 4.2 origin/4.2

Binaries and packages will be uploaded at:

Modules’ documentation:

What is new in the 4.2.x release series is summarized in the announcement of v4.2.0:

Note: branch 4.2 is the previous stable branch. The latest stable branch is 4.3, at this time with v4.3.1 released out of it. The project officially maintains the last two stable branches, 4.3 and 4.2. Therefore an alternative is to upgrade to the latest 4.3.x – be aware that you may need to change the configuration file from 4.2.x to 4.3.x. See more details about it at:

WebRTC Monitoring: Do you Monitor your Servers or Your Service?

bloggeek - Thu, 07/30/2015 - 12:00

WebRTC monitoring the right way.

When we started out developing testRTC, what we had in mind was a service that helps QA people test their service prior to heading to production. We’ve built a sleek webapp that enables us to simulate virtually any type of WebRTC use case. Testers can then just specify or record their script, and from there run it and scale it in their tests using testRTC. What we quickly found out was that some were looking for a solution that helps them monitor their service, as opposed to manually (or even automatically and continuously) testing their latest build.

The request we got was something like this: “can you make this test we just defined run periodically? Every few minutes maybe? Oh – and if something goes awfully wrong – can you send me an alert about it?”

What some realized before we did was that the tests they were defining can easily be used to monitor their production service. The reasoning behind this request is that there’s no easy way to run an end-to-end monitor on a WebRTC service.

The alternatives we’ve seen out there?

  • Pray that it works, and wait for a user to complain
  • Using Pingdom to check that the domain is up and running and that the server is alive
  • Using New Relic or its poor man’s alternative – Nagios – to handle application monitoring. It boils down to testing that the servers are up and running, CPU and memory load look reasonable and maybe a bit of your server’s metrics

But does that mean the service is up and running, or just that the machines and maybe even processes are there? In many cases, what IT people are really looking to monitor is the service itself – they want to make sure that if a call is made via WebRTC – it actually gets through – and media is sent and received – with a certain expected quality. And that’s where most monitoring tools break down and fail to deliver.

This is why, a few weeks ago, we decided to add WebRTC monitoring capabilities to testRTC. As a user, you set it up by defining a test case, indicating from where in the world you want it to run, and defining the intervals to run it at along with thresholds on quality. And that’s it.

What you’ll get is a continuously running test that will know when to alert you on issues AND collect all of the reports. For all calls. The bad ones and the good ones. So you can drill down in post mortem to see what went wrong and why.

If you need something like this, contact us on testRTC – the team would love to show you around our tool and set you up with a WebRTC monitor of your own.

 

Test and Monitor your WebRTC Service like a pro - check out how testRTC can improve your service’s stability and performance.

The post WebRTC Monitoring: Do you Monitor your Servers or Your Service? appeared first on BlogGeek.me.

About new variables in Kamailio v4.3

miconda - Tue, 07/28/2015 - 20:35
Each new major version of Kamailio brings a new set of configuration file variables, which adds to the existing long list (see the Pseudo-Variables Cookbook), enabling more flexibility or making it easier to approach some specific needs.

v4.3 introduced several new variables as well; here we touch on a few of them.
$var(name) is an old variable, storing its value in private memory and persistent per process. It is very fast when used in operations (no locking needed), and therefore popular across config files. One of its properties is that the initial value is 0 (no need to initialize it explicitly), and setting it to $null results in resetting it to the value 0.
As requested by the community, a new variable class $vn(name) was introduced in v4.3 by the pv module. It has the properties of $var(name), but its initial value is ‘null’. Setting it to 0 requires an explicit assignment ‘$vn(name) = 0’, and setting it to ‘$null’ no longer resets the value to 0, but to ‘null’.
The pv module also added $sbranch(key), a class of variables that allows managing all the attributes of outgoing branches, including the first branch corresponding to the request URI. It is like a temporary container where the attributes are stored before being pushed to the branches. A set of three functions helps with these operations: sbranch_set_ruri(), sbranch_append() and sbranch_reset(). A use case that is now possible is setting the Path of a branch (the next hops until the final destination), including the one for the R-URI branch.
Related to XAVP variables, a function named xavp_explode_params() can now be used to take the names and values of a parameters string and add them as XAVPs.
The rr module introduced variables to get the direction of the request – $rdir(name) returns ‘downstream’ if the request is from caller to callee and ‘upstream’ if the request is from callee to caller. $rdir(id) is the variant that returns 1 for ‘downstream’ and 2 for ‘upstream’. From the same module come $fti and $tti – the From and To tags as for the initial INVITE transaction, no matter the direction of the request. For example, using the dialog 3-tuple identifier (call-id, from-tag, to-tag) in the config (e.g., as an htable key) is now simpler, with no need to care anymore about the direction of the request.
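
Putting a couple of these together, here is a hedged kamailio.cfg sketch (the ‘calls’ htable is made up for illustration and would need to be defined via the htable module) showing the $vn() null semantics and a direction-independent dialog key built from $ci, $fti and $tti:

# $var() falls back to 0, while $vn() really holds null
$var(a) = $null;
xlog("a = $var(a)\n");          # prints: a = 0
$vn(b) = $null;
if ($vn(b) == $null) {
    xlog("b is null\n");        # this branch is taken
}

# direction-independent dialog key, usable e.g. as an htable key
$var(dlgkey) = $ci + ":" + $fti + ":" + $tti;
$sht(calls=>$var(dlgkey)) = $Ts;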

Presence and other IMS modules are among the components introducing new variables. You can see the full list of variables available for Kamailio v4.3 at:
What is new in v4.3 is summarized at:
Enjoy the summer!

FreeSWITCH Week in Review (Master Branch) July 18th-July 24th

FreeSWITCH - Tue, 07/28/2015 - 19:15

Hello, again. This past week in the FreeSWITCH master branch we had 46 commits. The new features this week are: the addition of getcputime to retrieve FreeSWITCH process CPU usage, support for 80 ms, 100 ms, and 120 ms packetization in mod_opus, and H.263 codec support in mod_av.

Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.

New features that were added:

Improvements in build system, cross platform support, and packaging:

  • FS-7860 Prevent a switch_rtp header conflict
  • FS-7130 Make /run/freeswitch persistent, so it will start under systemd

The following bugs were squashed:

Who Needs WebSockets in an HTTP/2 World?

bloggeek - Tue, 07/28/2015 - 12:00

I don’t know the answer to this one…

I attended an interesting meetup last month. Sergei Koren, Product Architect at LivePerson explained about HTTP/2 and what it means for those deploying services. The video is available online:

One thing that really interests me is how these various transports are going to be used. We essentially now have both HTTP/2 and WebSocket capable of pretty much the same things:

                       HTTP/2                            WebSocket
Headers                Binary + compression              Binary, lightweight
Content                Mostly text + compression         Binary or text
Multiplexed sessions   Supported                         Supported
Direction              Client to server & server push    Bidirectional

What HTTP/2 lacks in binary content, it provides in compression.

Assuming you needed to send messages back and forth between your server and its browser clients, you’ve probably been considering using HTTP based technologies – XHR, SSE, etc. A recent addition was WebSocket. While the other alternatives are mostly hacks and workarounds on top of HTTP, a WebSocket essentially hijacks an HTTP connection transforming it into a WebSocket – something defined specifically for the task of sending messages back and forth. It made WebSocket optimized for the task and a lot more scalable than other alternatives.
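
For reference, here is a bare-bones sketch of what that WebSocket message channel looks like from the browser side (the endpoint and message format are made up for illustration):

var signaling = new WebSocket('wss://example.com/signaling');
signaling.onopen = function () {
  signaling.send(JSON.stringify({ type: 'join', room: 'demo' }));
};
signaling.onmessage = function (event) {
  var message = JSON.parse(event.data);
  // hand the decoded messages to the rest of the application here
  console.log('received', message.type);
};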

With HTTP/2, most of the restrictions that existed in HTTP that required these hacks will be gone. This opens up the opportunity for some to skip WebSockets and stay on board with HTTP based signaling.

Last year I wrote about the need for WebSockets for realtime and WebRTC use cases. I am now wondering if that is still true with HTTP/2.

Why is it important?
  • BOSH, Comet, XHR, SSE – these hacks can now be considered legacy. When you try to build a new service, you should think hard before adopting them
  • WebSocket is what people use today. HTTP/2 is an interesting alternative
  • When architecting a solution or picking a vendor, my suggestion would be to understand what transports they use today and what’s in their short-term and mid-term roadmap. These will end up affecting the performance of your service

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post Who Needs WebSockets in an HTTP/2 World? appeared first on BlogGeek.me.

New WebRTC WG Charter

webrtc.is - Mon, 07/27/2015 - 21:36

The new charter for the WebRTC Working Group has been approved. Current members will need to re-join. From the WebRTC WG mailing list…

Hi all,

Great news, the new W3C WebRTC Working Group charter [1] has been officially approved by the W3C Director [2].

The revised charter adds a deliverable for the next version of WebRTC, has an updated list of deliverables based on the work started under the previous charter, clarifies its decision policy, and extends the group
until March 2018.

The charter of this Working Group includes a new deliverable that requires W3C Patent Policy licensing commitments from all Participants.

Consequently, all Participants must join or re-join the group, which involves agreeing to participate under the terms of the revised charter and the W3C Patent Policy. Current Participants may continue to attend meetings (teleconferences and face-to-face meetings) for 45 days after this announcement, even if they have not yet re-joined the group. After 45 days (ie. September 10, 2015), ongoing participation (including meeting attendance and voting) is only permitted for those who have re-joined the group.

Use this form to (re)join:
https://www.w3.org/2004/01/pp-impl/47318/join

Instructions to join the group are available at:
http://www.w3.org/2004/01/pp-impl/47318/instructions

Thanks,
Vivien on behalf of the WebRTC WG Chairs and Staff contacts

[1] http://www.w3.org/2015/07/webrtc-charter.html
[2] https://lists.w3.org/Archives/Member/w3c-ac-members/2015JulSep/0024.html


WebRTC Basics: How (and Why) WebRTC Uses your Browser’s IP Address

bloggeek - Mon, 07/27/2015 - 12:00

To reach out to you.

I’ve been asked recently to write a few more on the topic of WebRTC basics – explaining how it works. This is one of these posts.

There’s been a recent frenzy around the NY Times’ use of WebRTC. The fraud detection mechanism for the ads there used WebRTC to find local addresses and to determine if the user is real or a bot. Being a cat and mouse game over ad money means this will continue with every piece of arsenal both sides have at their disposal, and WebRTC plays an interesting role in it. The question was raised though – why does WebRTC need the browser’s IP address to begin with? What does it use it for?

To answer this question, we first need to define how the web normally operates (that is, before WebRTC came to be).

The illustration above explains it all. There’s a web server somewhere in the cloud. You reach it by knowing its IP address, but more often than not you reach it by knowing its domain name and obtaining its IP address from that domain name. The browser then goes on to send its requests to the server and all is good in the world.

Now, assume this is a social network of sorts, and one user wants to interact with another. The one and only way to achieve that with browsers is by having the web server proxy all of these messages – whatever is being sent from A to B is routed through the web server. This is true even if the web server has no real wish to store the messages or even know about them.

WebRTC allows working differently. It uses peer-to-peer technology, also known as P2P.

The illustration above is not new to VoIP developers, but it has a very important difference from how the web worked until the introduction of WebRTC. That line running directly between the two web browsers? That’s the first time that a web browser using HTML could communicate with another web browser directly without needing to go through a web server.

This is what makes all the difference in the need for IP addresses.

When you communicate with a web server, your browser is the one initiating the communication. It sends a request to the server, which will then respond through that same connection your browser creates. So there’s no real need for your browser to announce its IP address in any way. But when one browser needs to send messages to another – how can it do that without an IP address?

So IP addresses need to be exchanged between browsers. The web server in the illustration does pass messages between browsers. These messages contain SDP, which among other things contains IP addresses to use for the exchange of data directly between the browsers in the future.
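
A minimal sketch of that exchange is shown below. The sendToPeerViaServer() helper is hypothetical and stands in for whatever signaling transport the application uses; the prefix handling and callback-style createOffer match 2015-era browsers:

var pc = new (window.RTCPeerConnection || window.webkitRTCPeerConnection)(
  { iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] }
);

// every gathered candidate (host, server-reflexive, relay) carries an address
// and is forwarded to the other browser through the web server
pc.onicecandidate = function (event) {
  if (event.candidate) {
    sendToPeerViaServer({ candidate: event.candidate });   // hypothetical helper
  }
};

pc.createOffer(function (offer) {
  pc.setLocalDescription(offer);                           // the SDP itself also lists addresses to use
  sendToPeerViaServer({ sdp: offer });                     // hypothetical helper
}, function (error) {
  console.error(error);
});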

Why do we need P2P? Can’t we just go through a server?

Sure we can go through a server. In fact, a lot of use cases will end up using a server for various needs – things like recording the session, multiparty or connecting to other networks necessitates the use of a server.

But in many cases you may want to skip that server part:

  • Voice and video means lots of bandwidth. Placing the burden on the server means the service will end up costing more
  • Voice and video means lots of CPU power. Placing the burden on the server means the service will end up costing more
  • Routing voice and video through the server means latency and more chance of packet losses, which will degrade the media quality
  • Privacy concerns, as when we send media through a server, it is privy to the information or at the very least to the fact that communication took place

So there are times when we want the media or our messages to go peer-to-peer and not through a server. And for that we can use WebRTC, but we need to exchange IP addresses across browsers to make it happen.

Now, this exchange may not always translate into two web browsers communicating directly – we may still end up relaying messages and media. If you want to learn more about it, then check out the introduction to NATs and Firewalls on webrtcHacks.

 

Kranky and I are planning the next Kranky Geek in San Francisco sometime during the fall. Interested in speaking? Just ping me through my contact page.

The post WebRTC Basics: How (and Why) WebRTC Uses your Browser’s IP Address appeared first on BlogGeek.me.

Will Patents Kill H.265 or Will H.265’s Patents Kill WebRTC?

bloggeek - Thu, 07/23/2015 - 12:00

To H.265 (=HEVC) or not to H.265? That is the question. And the answer will be determined by the browser vendors.

I gave a keynote at a UC event here in Israel last week. I really enjoyed it. One of the other speakers made it a point to state that their new top-of-the-line telepresence system now supports… H.265. And 4K. I was unimpressed.

H.265 is the latest and greatest in video compression. Unless you count VP9. I’ve written about these codecs before.

If you think about WebRTC in 2016 or even 2017, you need to think beyond the current video codecs – H.264 and VP8. This is important, because you need to decide how much to invest in the media side of your service, and what implications these new codecs will bring to your architecture and development efforts.

I think H.265 is going to have a hard time in the market, and not just because VP9 is already out there, streamed over YouTube to most Chrome and Firefox browsers. It will be the case due to patents.

In March this year, MPEG-LA, the good folks counting money from H.264 patents, have announced a new patent pool for HEVC (=H.265). Two interesting posts to read about this are Jan Ozer‘s and Faultline‘s. Some things to note:

  • There currently are 27 patent holders
  • Over 500 essential patents are in the pool
  • Not everyone with patents around H.265 has joined the pool, so licensing H.265 may end up being a nightmare
  • Missing are Google and Microsoft from the patent pool
  • Missing are also video conferencing vendors: Polycom, Avaya and Cisco
  • Unit cost for encoder or decoder is $0.20/unit
  • There’s an annual cap of $25M

What does that mean to WebRTC?

  • Internet users are estimated above 3 billion people and Firefox has an estimated market share of around 12%. With over 300 million Firefox users, that places Mozilla way above the cap. Can Mozilla pay $25M a year to get H.265? Probably not
  • It also means every successful browser vendor will need to shell out these $25M a year to MPEG-LA. I can’t see this happening any time soon
  • Google has their own VP9, probably with a slew of relevant patents associated with it. These will be used in the upcoming battle with H.265 and the MPEG-LA I assume
  • Microsoft not joining… not sure what that means, but it can’t be good. Microsoft might just end up adopting VP9 and going with Google here, something that might actually look reasonable
  • Apple being Apple, if they decide to support WebRTC (and that’s still a big if in 2015 and 2016), they won’t go with the VPx side of the house. They will go with H.265 – they are part of that patent pool
  • Cisco isn’t part of this pool. I don’t see them shelling $25M a year on top of the estimated $6M they are already “contributing” for OpenH264 towards MPEG-LA

This is good news for Google and VP9, which is the competing video technology.

When we get to the WebRTC wars around H.265 and VP9, there will be more companies on the VP9 camp. The patents and hassles around H.265 will not make things easy:

  • If WebRTC votes for VP9, it doesn’t bode well for H.265
    • WebRTC is the largest deployment of video technology already
    • Deciding to ignore it as a video codec isn’t a good thing to do
  • If WebRTC votes for H.265, unlikely as it may seem, it may well kill standards-based high quality video support across browsers in WebRTC
    • Most browsers will probably prefer ignoring it and go with VP9
    • Handsets might go with H.265 due to a political push by 3GPP (a large portion of the patent owners in H.265 are telecom operators and their vendors)
    • This disparity between browsers and handsets won’t be good for the market or for WebRTC

The codec wars are not behind us. Interesting times ahead. Better be prepared.

 

Kranky and I are planning the next Kranky Geek in San Francisco sometime during the fall. Interested in speaking? Just ping me through my contact page.

The post Will Patents Kill H.265 or Will H.265’s Patents Kill WebRTC? appeared first on BlogGeek.me.

Announcing the ClueCon Coder Games!

FreeSWITCH - Wed, 07/22/2015 - 03:09
So You Think You Can Code?

You’ve seen the presentations, you’ve asked your questions, you have the resources, now it is your time to shine by using the sponsor APIs to create something exciting! We want to see what you can do! Bonus points for each API you can incorporate! Go check out the APIs now to get a head start on the competition and get those creative juices flowing! You have less than two weeks to prepare!

Sponsor APIs: FreeSWITCH, Tropo, Kandy, Twilio, Plivo, and more…

IPv6 Round Table

IPv6 and why you should deploy it ASAP: John Brzozowski, Fellow and Chief Architect, IPv6 at Comcast; Bill Sandiford, President of CNOC and Member of the Board at ARIN.

Flowroute – Jeopardy

Think you know about SIP? Do you know enough to beat the competition? Flowroute is hosting a SIP themed game of Jeopardy! Put your brain to the test and come see how much you really know!

DTMF-u

Do you like all things games? Well, then this is the game for you! But there is a twist! Before you can win the game you must build it! Using your choice of language or API you must build a game! There will be three categories of DTMF-u: Tic-tac-toe, DTMF pattern recognition, or Freestyle. You could build something that plays a random DTMF sequence, receives player input, and then either continues or fails the player. Or maybe a WebRTC based game of tic-tac-toe? Or surprise us! Have fun with it! Creative ways to fail a player may give you bonus points. The top three games will be played by everyone and the winner of each will take home a prize! All gaming bots will be screened via a Turing test to ensure no unintended apocalyptic consequences.

Show and Tell

Alright, now is your chance! You have been playing with the code all day and this is your chance to show off! We want to know what you’ve done and how you did it. Use your creativity and skills as a programmer to impress the judges and win a prize! This is a no holds barred all out free for all! Any language doing anything! Knock our socks off and take home a fabulous prize and a year’s supply of bragging rights!

Raffle Grand Prize!

The grand prize is a laser engraved commemorative FreeSWITCH 1.6 Edition dual-core 13″ Retina MacBook Pro!

 

Is Microsoft Edge Going to be the Best Browser Around?

bloggeek - Tue, 07/21/2015 - 12:00

The newest game in town.

Apple’s Safari. Haven’t used it so can’t say anything. Just that most people I know are really comfortable using Chrome on Macs.

Chrome? Word around is that it is bloated and kills your CPU. I know. On a machine with 4GB of memory, you need to switch and use Firefox instead. Otherwise, the machine won’t survive the default tabs I have open.

Firefox? Hmm. Some would say that their Hello service is bloatware. I don’t really have an opinion. I am fine with using Firefox, but I prefer Chrome. No specific reason.

From a recent blog post from Microsoft, it seems like Microsoft Edge is faster than Chrome:

In this build, Microsoft Edge is even better and is beating Chrome and Safari on their own JavaScript benchmarks:

  • On WebKit Sunspider, Edge is 112% faster than Chrome
  • On Google Octane, Edge is 11% faster than Chrome
  • On Apple JetStream, Edge is 37% faster than Chrome

Coming from Microsoft’s dev team, I wouldn’t believe it. Not immediately. Others have slightly different results:

Here’s the rundown (click on an individual test to see the nitty-gritty details):

Some already want to switch from Chrome to Edge.

Edge is even showing signs of WebRTC support, so by year end, who knows? I might be using it regularly as well.

Edge is the new shiny browser.

Firefox is old news. Search Google for Firefox redesign. They had a major one on a yearly basis. Next in line is their UI framework for extensions as far as I can tell.

Safari is based on WebKit. WebKit was ditched by Google so Chrome can be developed faster. As such, Chrome is built on the ashes of WebKit.

Internet Explorer anyone?

Edge started from a clean slate. A design from 2014, where developers thought of how to build a browser, as opposed to teams doing that before smartphones, responsive design or life without Flash.

Can Edge be the best next thing? A real threat to Chrome on Windows devices? Yes.

Kranky and I are planning the next Kranky Geek in San Francisco sometime during the fall. Interested in speaking? Just ping me through my contact page.

The post Is Microsoft Edge Going to be the Best Browser Around? appeared first on BlogGeek.me.

FreeSWITCH Week in Review (Master Branch) July 11th-July 17th

FreeSWITCH - Tue, 07/21/2015 - 01:18

Hello, again. This past week in the FreeSWITCH master branch we had 43 commits. We had a number of cool new features this week, including: functionality for capturing screenshots from both legs via uuid_write_png, new multi-canvas and telepresence features in mod_conference, a vmute member flag in mod_conference, and an API for removing an active ladspa effect on a channel.

Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.

New features that were added:

  • FS-7769 [mod_conference] Add new multi-canvas and telepresence features
  • FS-7847 [mod_conference] Add layers that do not match the aspect ratio of the conference by using the new hscale layer param for horizontal scale, add a zoom=true param to crop the layer instead of letterboxing, add a grid-zoom layout group that demonstrates these layouts, and fix logo ratios and add borders too.
  • FS-7813 [mod_conference] Add vmute member flag.
  • FS-7846 [mod_dptools] Add eavesdrop_whisper_aleg=true and eavesdrop_whisper_bleg=true channel variables to allow you to start eavesdrop in whisper mode of specific call leg
  • FS-7760 [mod_sofia] Revise channel fetch on nightmare transfer and add dial-prefix and absolute-dial-string to the nightmare xml
  • FS-7829 [mod_opus] Add sprop-stereo fmtp param to specify if a sender is likely to send stereo or not so the receiver can safely downmix to mono to avoid wasting receiver resources
  • FS-7830 [mod_opus] Added use-dtx param in config file (enables DTX on the encoder, announces in fmtp)
  • FS-7824 [mod_png] Add functionality for capturing screenshots from both legs to uuid_write_png
  • FS-7549 [mod_ladspa] Added an API for removing an active ladspa effect on a channel. For conformance reasons, the uuid_ladspa command now accepts ‘stop’ and ‘start’, while the previous functionality (without any verb) which will simply add ladspa remains intact.

Improvements in build system, cross platform support, and packaging:

  • FS-7845 [mod_conference] Break up mod_conference into multiple source files to improve build performance
  • FS-7769 [mod_conference] Fixed a build issue
  • FS-7820 Fix build system typo. Don’t assign the same variable twice.
  • FS-7043 Fixed apr1 unresolved symbols in libfreeswitch.so.1.0.0
  • FS-7130 Make /run/freeswitch persistent in the Debian packages, so it will start under systemd

The following bugs were squashed:

  • FS-7849 [verto] Remove extra div breaking full screen in html
  • FS-7832 [mod_opus] Fixes when comparing local and remote fmtp params
  • FS-7731 [mod_xml_cdr] Fixed a curl default connection timeout
  • FS-7844 Fix packet loss fraction when calculating loss average

Kamailio v4.3.1 Released

miconda - Mon, 07/20/2015 - 23:10
Kamailio SIP Server v4.3.1 stable is out – a minor release including fixes in code and documentation since v4.3.0. Configuration file and database compatibility is preserved. Kamailio (former OpenSER) v4.3.1 is based on the latest version of GIT branch 4.3, therefore those running previous 4.3.x versions are advised to upgrade. No changes have to be made to the configuration file or database structure compared with older v4.3.x.

Resources for Kamailio version 4.3.1

Source tarballs are available at:

Detailed changelog:

Download via GIT:

# git clone git://git.kamailio.org/kamailio kamailio
# cd kamailio
# git checkout -b 4.3 origin/4.3

Binaries and packages will be uploaded at:

Modules’ documentation:

What is new in the 4.3.x release series is summarized in the announcement of v4.3.0:

Now That Flash and Plugins are out the Door, What’s Holding you from Adopting WebRTC?

bloggeek - Mon, 07/20/2015 - 12:00

All routes are leading towards WebRTC.

Somehow, people are still complaining about adoption of WebRTC in browsers instead of checking their alternatives.

Before WebRTC came to our lives, we had pretty much 3 ways of getting voice and video calling into our machines:

  1. Build an application and have users install it on their PCs
  2. Use Flash to have it all inside the browser
  3. Develop a plugin for the service and have users install it on their browsers

We’re now in 2015, and 3 (again that number) distinct things have changed:

  1. On our PCs we are less tolerant to installing “stuff”
    • As more and more services migrate towards the cloud, so are our habits of using browsers as our window to the world instead of installed software
    • Chromebooks are becoming popular in some areas, so installing software is close to impossible in them
  2. Plugins are dying. Microsoft is banning plugins in Edge, joining Google’s Chrome announcement on the same topic
  3. Flash is being thrown out the window, which is what I want to focus about here

There has been a lot of recent publicity around a new round of zero-day exploits and vulnerabilities in Flash. It started with a group called The Hacking Team being hacked, and their techniques exposed. They used a few Flash vulnerabilities among other mechanisms. While Adobe is actively fixing these issues, some decided to vocalize their discontent with Flash:

Facebook’s Chief Security Officer wants Adobe to declare an end-of-life date for Flash.

It is time for Adobe to announce the end-of-life date for Flash and to ask the browsers to set killbits on the same day.

— Alex Stamos (@alexstamos) July 12, 2015

Mozilla decided to ban Flash from its browser until the recent known vulnerabilities are patched.

Don’t get me wrong here. Flash will continue being with us for a long time. Browsers will block Flash and then re-enable it, dealing with continuing waves of vulnerabilities that will be found. But the question then becomes – why should you be using it any longer?

  • You can acquire camera and microphone using WebRTC today, so no need for Flash (see the short sketch after this list)
  • You can show videos using HTML5 and MPEG-DASH, so no need for Flash
  • You can use WebGL and a slew of other web technologies to build interactivity into sites, so no need for Flash
  • You can run voice and video calls at a higher quality than what Flash ever could with WebRTC
  • And you can do all of the above within environments that are superior to Flash in both their architecture, quality and security
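
As a taste of that first point, here is a hedged sketch of acquiring the camera and showing it in a plain <video> element, using the prefixed getUserMedia calls and object-URL attachment that 2015-era browsers ship (the page is assumed to contain a <video> tag):

var getUserMedia = navigator.getUserMedia ||
                   navigator.webkitGetUserMedia ||
                   navigator.mozGetUserMedia;

getUserMedia.call(navigator, { video: true, audio: true }, function (stream) {
  var video = document.querySelector('video');      // assumes a <video> element on the page
  video.src = URL.createObjectURL(stream);          // 2015-era way to attach a MediaStream
  video.play();
}, function (error) {
  console.error('getUserMedia failed', error);
});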

Without Flash and Plugin support in your future, why would you NOT use WebRTC for your next service?

 

Kranky and I are planning the next Kranky Geek in San Francisco sometime during the fall. Interested in speaking? Just ping me through my contact page.

The post Now That Flash and Plugins are out the Door, What’s Holding you from Adopting WebRTC? appeared first on BlogGeek.me.

Wiresharking Wire

webrtchacks - Thu, 07/16/2015 - 21:07

This is the next decode and analysis in Philipp Hancke’s Blackbox Exploration series conducted by &yet in collaboration with Google. Please see our previous posts covering WhatsApp, Facebook Messenger and FaceTime for more details on these services and this series. {“editor”: “chad“}

Wire is an attempt to reimagine communications for the mobile age. It is a messaging app available for Android, iOS, Mac, and now the web, that supports audio calls, group messaging and picture sharing. One of its often-quoted features is the elegant design. As usual, this report will focus on the low level VoIP aspects, and leave the design aspects for the users to judge.

As part of the series of deconstructions, the full analysis is available for download here, including the wireshark dumps.

Half a year after launching, the Wire Android app has currently been downloaded between 100k and 500k times. They also recently launched a web version, powered by WebRTC. Based on this, it seems to be stuck with what Dan York calls the directory dilemma.

What makes Wire more interesting from a technical point of view is that they’re strong proponents of the Opus codec for audio calls. Maybe there is something to learn here…

The wire blog explains some of the problems that they are facing in creating a good audio experience on mobile and wifi networks:

The WiFi and mobile networks we all use are “best effort” — they offer no quality of service guarantees. Devices and apps are all competing for bandwidth. Therefore, real-time communications apps need to be adaptive. Network adaptation means working around parameters such as variable throughput, latency, and variable latency, known as jitter. To do this, we need to measure the variations and adjust to them in as close to real-time as possible.

Given the preference of ISAC over Opus by Facebook Messenger, the question which led to investigating Wire was whether they can show how to successfully use Opus on mobile.

Results

The blog post mentioned above also describes the Wire stack as “a derivate of WebRTC and the IETF standardized Opus codec”. It’s not quite clear what exactly “derivate of WebRTC” means. What we found when looking at Wire, in comparison to the other apps reviewed, was a more “out of the box” WebRTC app, using the protocols as defined in the standards body.

Comparison with WebRTC

Feature        WebRTC/RTCWeb Specifications    Wire
SDES           MUST NOT offer SDES             does not offer SDES
ICE            RFC 5245                        RFC 5245
TURN usage     used as last resort             used as last resort
Audio codec    Opus or G.711                   Opus
Video codec    H.264 or VP8                    none (yet?)

Quality of experience

Audio quality did turn out to be top notch, as our unscientific tests on various networks showed.
Testing on simulated 2G and 3G networks showed some adaptivity to the situations there.

Implementation

The STUN implementation turned out to be based on the BSD-licensed libre by creytiv.com, which is compatible with both the Chrome and Firefox implementations of WebRTC. Binary analysis showed that the webrtc.org media engine along with libopus 1.1 is used for the upper layer.

Privacy

Wire is a company that prides itself on the user privacy protection that comes from having its HQ in Switzerland, yet it has its signaling and TURN servers in Ireland. They get strong kudos for using DTLS-SRTP. To sum it up, Wire offers a case study in how to fully adopt WebRTC for both web and native mobile.

    Want to keep up on our latest posts? Please click here to subscribe to our mailing list if you have not already. We only email post updates. You can also follow us on twitter at @webrtcHacks for blog updates and news of technical WebRTC topics or our individual feeds @chadwallacehart, @victorpascual and @tsahil.

    The post Wiresharking Wire appeared first on webrtcHacks.

    What I Learned About the WebRTC Market from a Webinar on WebRTC Testing

    bloggeek - Thu, 07/16/2015 - 12:00

    We’re a lot more than I had known.

One of my recent “projects” is co-founding a startup called testRTC, which offers testing and monitoring services for WebRTC based services. The “real” public announcement made about this service was here in these last couple of days, through a webinar we did along with SmartBear on the impact of WebRTC on testing.

    I actively monitor and maintain a dataset of WebRTC vendors. I use it to understand the WebRTC ecosystem better. I make it a point to know as many vendors as possible  through various means. I thought I had this space pretty much covered.

What surprised me was the barrage of requests for information and demos that came into our testRTC contact page from vendors with real services out there that I just wasn’t aware of. About 50% of the requests came from vendors I didn’t know existed.

    My current dataset size is now reaching 700 vendors and projects. There might be twice that much out there.

    Why is this important?
    • A lot of the vendors out there are rather silent about what they are doing. This isn’t about the technology – it is about solving a problem for a specific customer
    • There are enough vendors today to require a solid, dedicated testing tool focused on WebRTC. I am more confident about this decision we made with testRTC
    • If you are building something, be sure to let me know about it or to add it to the WebRTC Index

    Oh – and if you want to see a demo of testRTC in action, we will be introducing it and demoing it at the upcoming VUC meeting tomorrow.

     

    Want to make the best decision on the right WebRTC platform for your company? Now you can! Check out my WebRTC PaaS report, written specifically to assist you with this task.

    The post What I Learned About the WebRTC Market from a Webinar on WebRTC Testing appeared first on BlogGeek.me.

    Cluecon 2015

    miconda - Wed, 07/15/2015 - 21:51
During August 3-6, 2015, a new edition of the ClueCon conference takes place in Chicago, USA. Backed mainly by the developers of the FreeSWITCH project, the topics at the event cover many other open source real time communication projects as well as open discussion round tables.

I will present about Kamailio on Tuesday, August 4, 2015.

ClueCon is a place gathering lots of VoIP folks from around the world, many from the Kamailio community; it is one of those events that one should not miss.

If you are at the event or around the Chicago area during that time and want to meet to discuss Kamailio, get in touch via the sr-dev mailing list. If there are many interested, we can have some ad-hoc sessions and group meetings (e.g., dinner) to approach various topics about Kamailio.

For private discussions, you can contact me directly (email to miconda at gmail dot com).


    Is the Web Finally Growing up and Going Binary?

    bloggeek - Tue, 07/14/2015 - 12:00

    Maybe.

    I remember the good old days. I was hired to work on this signaling protocol called H.323. It used an interesting notation called ASN.1 with a binary encoding, capable of using a bit of data for a boolean of information. Life was good.

    Then came SIP. With its “simple” text notation, it conquered the market. Everyone could just use and debug it by looking at the network. It made things so much easier for developers. So they told me. What they forgot to tell us then was how hard it is to parse text properly – especially for mere machines.

Anyway, it is now 2015. We live in a textual internet world. We use HTML to describe our web pages, CSS to express their design, and we code using JavaScript and JSON. All of these formats are textual in nature. Our expectation is that this text that humans write (and read to debug) will be read and processed by machines.

    This verbosity of text that we use over the internet is slowing us down twice:

    1. Text takes more space than binary information, so we end up sending more data over the network
    2. Computers need to work harder to parse text than they do binary

So we’ve struggled through the years to fix these issues. We minify the text, rendering it unreadable to humans. We use compression on the network, rendering it unreadable to humans over the network. We cache data. We use JIT (Just In Time) compilation on JavaScript to speed it up. We essentially lost most of the benefits of text along the way, but are still left with the performance issues.

This last year, several initiatives have been put in place that are about to change all that. To move us from a textual web into a binary one. Users won’t feel the difference. Most web developers won’t feel it either. But things are about to change for the better.

    Here are the two initiatives that are making all the difference here.

    HTTP/2

HTTP/2 is the latest and greatest in internet transport protocols. It has been an official standard (RFC 7540) for almost 2 full months now.

    Its main objective is to speed up the web and to remove a lot of the hacks we had to use to build web pages and run interactive websites (BOSH, Comet and CSS sprites come to mind here).

    Oh – and it is binary. From the RFC:

    Finally, HTTP/2 also enables more efficient processing of messages through use of binary message framing.

    While the content of our web pages will remain textual and verbose (HTML), the transport protocol used to send them, with its multitude of headers, is becoming binary.

    To make things “worse”, HTTP/2 is about to encrypt everything by default, simply because the browsers who implemented it so far (Chrome and Firefox) decided not to support non-encrypted connections with HTTP/2. So the verbosity and the ability to watch messages on the network and debug things has gone down the drain.

    WebAssembly

    I’ve recently covered WebAssembly, comparing the decisions around it to those of WebRTC.

    WebAssembly is a binary format meant to replace the use of JavaScript in the browser.

    Developers will write their frontend code in JavaScript or whatever other language they fancy, and will have to compile it to WebAssembly. The browser will then execute WebAssembly without the need to parse too much text as it needs to do today. The end result? A faster web, with more languages available to developers.

    This is going to take a few years to materialize and many more years to become dominant and maybe replace JavaScript, but it is the intent here that matters.

    Why is it important?

    We need to wean ourselves from textual protocols and shift to binary ones.

    Yes. Machines are becoming faster. Processing power more available. Bandwidth abundant. And we still have clogged networks and overloaded CPUs.

    The Internet of Things won’t make things any easier on us – we need smaller devices to start and communicate. We need low power with great performance. We cannot afford to ruin it all by architectures and designs based on text protocols.

    The binary web is coming. Better be prepared for it.

    The post Is the Web Finally Growing up and Going Binary? appeared first on BlogGeek.me.
