bloggeek

The leading authority on WebRTC

Can Apple Succeed with Two Operating Systems When Google and Microsoft are Consolidating?

Tue, 11/17/2015 - 12:00

One OS to rule them all?

It seems like Apple has decided to leave its devices split between two operating systems – Mac and iOS. If you are to believe Tim Cook’s statement, that is. More specifically, MacBook (=laptop) and iPad (=tablet) are separate devices in the eyes of Apple.

This is a strong statement considering current market trends and Apple’s own moves.

The iPad Pro

Apple’s latest iPad Pro is a 12.9 inch device. That isn’t far from my Lenovo Yoga 2 Pro with its 13.1-inch screen. And it has an optional keyboard.

How far is this device from a laptop? Does it compete head to head in the laptop category?

Assume a developer wants to build a business application for Apple owners – one that requires content creation (i.e., a real keyboard). Should he be writing it for the Mac or for iOS?

Tim Cook may say there’s no such intent, but the lines between Apple’s own devices are blurring. Where one operating system ends and the other begins is up for interpretation from now on – an interpretation that will change with time and customer feedback.

Apple had no real intent of releasing larger iPhones or smaller iPads. It ended up doing both.

Microsoft Windows 10

Windows 10 is supposed to be an all-encompassing operating system.

You write your app for it, and it miraculously fits smartphones, tablets, laptops and PCs. That’s the intent at least – I haven’t seen much feedback on it yet.

And I am not even mentioning the Surface Tablet/Laptop combo.

Google Chrome OS / Android

Google has its own two operating systems – Android and Chrome OS. Last month Alistair Barr reported on plans within Google to merge the two operating systems.

The idea does have merit. Why invest twice in two places? Google needs to maintain and support two operating systems, while developers need to decide which one to build their app for – or develop for both.

Taking this further, Google could attempt to make Android apps available inside Chrome browsers, opening them up to an even larger ecosystem that doesn’t rely only on Google’s own OS footprint. Angular and Material Design are initiatives for putting apps on the web. A new initiative might be interpreting Android’s Java bytecode in Chrome OS, and later in Chrome itself.

Who to believe?

On one hand, both Microsoft and Google are consolidating their operating systems. On the other, Apple doesn’t play by the same rule book. Same as we’ve seen lately in analytics.

I wonder which approach will win in the end – a single operating system to rule them all, or multiple operating systems based on device type.


WebRTC Demand isn’t Exponentially Growing

Mon, 11/16/2015 - 12:00

A long, boring straight line.

In some ways, WebRTC now feels like video did a decade ago, when every year we said “next year will be the year of video”. For WebRTC? Next year will be the year of adoption.

Adoption is hard to define though. What does it really mean when it comes to WebRTC?

WebRTC has been picked up by carriers (AT&T, Comcast and others, if you care about name dropping), by most (all?) video conferencing and unified communications vendors, by the education, banking and healthcare industries, and by contact centers.

While all is well in the world of WebRTC, there is no hype. A year and a half ago I wrote about it – the fact that there is no hype in WebRTC. It still holds true. Too true. And too steadily.

The chart below is a collection of 2 years of data from some of the data points I follow for WebRTC. I hand-picked 4 of them:

  • The number of github projects mentioning WebRTC
  • The number of questions/answers on Stack Overflow mentioning WebRTC
  • The number of users subscribed to the discuss-webrtc Google group
  • The number of LinkedIn profiles of people deciding to add WebRTC to their “resume”

In all of these cases (as well as other metrics I collect and follow), the trend is very stable. There’s growth, but that growth is linear in nature.

There are two minor areas worth mentioning:

  1. LinkedIn had a correction during September/October – a sharp increase and then an immediate decrease. This was probably due to spam accounts that got caught by LinkedIn. I saw this play out with Google+ account stats about a year ago as well
  2. GitHub and Stack Overflow had a slight change in their line angle from the beginning of 2015. This coincides with Google’s decision to host its samples and apprtc on GitHub instead of on code.google.com – probably a wise decision, though not a game changer

Some believe that the addition of Microsoft Edge will change the picture. Statistics of Edge adoption and the statistics I’ve collected in the past two months show no such signs. If anything, I believe most still ignore Microsoft Edge.

Where does that put us?

Don’t be discouraged. This situation isn’t really bad. 2015 has been a great year for WebRTC. We’ve seen public announcements coming from larger vendors (call it adoption) as well as the addition of Microsoft into this game.

Will 2016 be any different? Will it be the breakout year? The year of WebRTC?

I doubt it. And not because WebRTC won’t happen. It already is. We just don’t talk that much about it.

If you are a developer, all this should be great news for you – there aren’t many others in this space yet, so the demand versus supply of experienced WebRTC developers favors developers at the moment – go hone your skills. Make yourself more valuable to potential employers.

If you are a vendor, then find the most experienced team you can and hold on to them – they are your main advantage in the coming years when it comes to outperforming your competitors in building a solid service.

We’re not in a hyped-up industry like the Internet of Things or Big Data – but we sure make great experiences.


WebRTC Data Channel Finds a Home in Context

Thu, 11/12/2015 - 12:00

There’s a new home for the WebRTC Data Channel – it has lately found its use in context.

Ever since WebRTC was announced, I’ve been watching the data channel closely – looking to see what developers end up doing with it. There are many interesting use cases out there, but for the most part, it is still early days to decide where this is headed. In the last couple of weeks though, I’ve seen more and more evidence that there’s one place where the WebRTC Data Channel is being used – a lot more than I’d expect. That place is in adding context to a voice or video call.

Where did my skepticism come from?

Look at this diagram, depicting a simplified contact center using WebRTC:

We have a customer interacting with an agent, and there are almost always two servers involved:

  1. The web server, which got the two browsers connected. It acts as the signaling server for us, and uses HTTP or WebSockets for its transport
  2. The media server, which can be an SBC connecting both worlds, or just a media server that is there to handle call queuing, record calls, etc.

The logic here is that the connection to the web server should suffice to provide context – why go through all the trouble of opening up a data channel here? For some reason though, I’ve seen evidence that many are adopting the data channel to pass context in such scenarios – and they are terminating it on their server side rather than passing it directly between the browsers.

The question then is why? Why invest in yet another connection?

#1 – Latency

If you do need to go from browser to browser, then why make the additional leg through the signaling server?

Going direct reduces the latency, and while it might not be much of an issue, there are use cases where this is going to be important. When the type of context we are passing is collaboration related, such as sharing mouse movements or whiteboarding activity – we would like to have it shared as soon as possible.

#2 – Firewalls

We might not want to go through the signaling server for the type of data we wish to share as context. If this is the case, then the need to muck around with yet another separate server to handle a WebSocket connection might be somewhat tedious and out of context. Having the WebRTC data channel be part of the peer connection object, created and torn down at the same time, can be easier to manage.

It also has built-in NAT and firewall traversal mechanisms in place, so if the call passes – so will the context. No need to engineer, configure and test another system for it.
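
To make this concrete, here’s a minimal sketch (browser TypeScript/JavaScript) of piggybacking a context channel on the same peer connection used for the call. The channel label, payload shape and STUN server are made up for illustration – this isn’t any specific vendor’s implementation.

```typescript
// Sketch only: ride a "context" data channel on the same RTCPeerConnection
// used for the call. Label, payload and STUN server are illustrative.
const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.example.org" }] });

// Created before the offer, so it shares the call's ICE/DTLS session.
const contextChannel = pc.createDataChannel("context");

contextChannel.onopen = () => {
  // Whatever the application considers "context" for the agent side.
  contextChannel.send(JSON.stringify({
    page: location.href,
    customerId: "hypothetical-id-123",
  }));
};

// On the receiving end (agent browser, or a server terminating the channel):
pc.ondatachannel = (event) => {
  event.channel.onmessage = (msg) => console.log("context:", JSON.parse(msg.data));
};
```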

#3 – Asymmetry

At times, both sides of the session won’t be using WebRTC. The agent may well be sitting on a PSTN phone while looking at the CRM screen on his monitor, or the session may be gatewayed into a SIP network, where the call is received.

In such cases, the media server will be a gateway – a device that translates signaling and media from one end to the other, bridging the two worlds. If we break that apart and place our context in a separate WebSocket, then we have one more server to handle and one more protocol to gateway and translate. Doing it all in the gateway that already handles the translation of the media makes more sense for many use cases.

#4 – Load Management

That web server doing signaling? You need it to manage all sessions in the system. It probably holds all text chats, active calls, incoming calls waiting in the IVR queue, etc.

If the context we have to pass is just some login information and a URL, then this is a non-issue. But what if we need to pass things like screenshots, images or files? These eat up bandwidth and clog a server that needs to deal with other things. Trying to scale and load balance servers with workloads that aren’t uniform is harder than scaling uniform workloads.
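
As a rough sketch of why a separate channel helps here: a heavier piece of context (say a screenshot) can be pushed over the already-open data channel in chunks instead of through the signaling server. The chunk size and thresholds below are arbitrary picks, not a recommendation.

```typescript
// Sketch: send a screenshot/file over an open RTCDataChannel in chunks,
// keeping the send buffer bounded. Sizes below are arbitrary picks.
const CHUNK_SIZE = 16 * 1024;       // 16 KB per message
const MAX_BUFFERED = 1024 * 1024;   // back off when ~1 MB is queued

async function sendFileOverChannel(channel: RTCDataChannel, file: Blob): Promise<void> {
  const buffer = await file.arrayBuffer();
  let offset = 0;

  while (offset < buffer.byteLength) {
    if (channel.bufferedAmount > MAX_BUFFERED) {
      // Wait for the channel to drain before queuing more data.
      channel.bufferedAmountLowThreshold = MAX_BUFFERED / 2;
      await new Promise<void>((resolve) => (channel.onbufferedamountlow = () => resolve()));
    }
    channel.send(buffer.slice(offset, offset + CHUNK_SIZE));
    offset += CHUNK_SIZE;
  }
}
```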

#5 – Because We Can

Let’s face it – WebRTC is a new toy. And the data channel in WebRTC is our new shiny object. Why not use it? Developers like shiny new toys…

The Humble WebRTC Data Channel

The data channel has been around for as long as WebRTC, but it hasn’t gotten the same love and attention. There’s very little done with it today. The new home it has found in passing session context is an interesting development.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.


Is there any Room for WebRTC in Gaming?

Mon, 11/09/2015 - 12:00

A few use cases where WebRTC can be found in gaming.

When WebRTC first came out, everyone was in a frenzy trying to figure out which verticals would end up using WebRTC. One of the verticals that keeps popping up, but never sticks around for long, is gaming.

When discussing WebRTC and gaming, there’s more than a single use case – there are a few dominant ones; and I wanted to share them here this time.

#1 – Social Games

Remember Cube Slam? Google’s first demo of WebRTC, where you can play a game with someone else and see him on the other side?

That was a demo. Jocly Games is the best example I have. Jocly Games offers turn-by-turn board games where your opponent is another player somewhere else. If you wish, you can see each other during the game with the help of WebRTC. I interviewed Michel Gutierrez, the CEO of Jocly Games, two years ago.

Roll20 does a similar thing for multiplayer RPG games.

#2 – Motion Sensor

While I haven’t seen any serious game using this, the fact that you can get a camera feed into a game means you can track movement. And if you can track movement – you can use it to control something.

How about a game of Snake?

#3 – Multiplayer Gaming

Multiplayer games require synchronization between players. The better the connection, the more responsive the game. And where latency is important, there’s room for WebRTC’s data channel.
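
As a hedged sketch of what this often looks like in practice: position updates go over a channel configured as unordered and without retransmissions, since a late packet is superseded by the next one anyway. The label and message shape here are made up.

```typescript
// Sketch: a data channel tuned for frequent game-state updates.
// Late position packets are useless, so ordering and retransmits are off.
const pc = new RTCPeerConnection();
const gameChannel = pc.createDataChannel("game-state", {
  ordered: false,     // don't stall on a lost packet
  maxRetransmits: 0,  // never retransmit; the next update supersedes it
});

function sendPlayerState(x: number, y: number, heading: number): void {
  if (gameChannel.readyState === "open") {
    gameChannel.send(JSON.stringify({ x, y, heading, t: performance.now() }));
  }
}

gameChannel.onmessage = (event) => {
  const remote = JSON.parse(event.data);
  // Feed the remote player's state into the game loop (interpolate as needed).
  console.log("remote state", remote);
};
```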

Two and a half years ago, Mozilla released a proof of concept of sorts – its own WebRTC demo, focused on the data channel. It was a game called BananaBread, a first-person shooter where the players communicate their positions and actions directly with each other using the data channel.

This year, I reviewed a book about multiplayer game development in HTML5. While the WebRTC part of it was thin compared to the rest, it did cover this capability.

In the wild, I haven’t seen any evidence of this being used a lot. I assume it is due to the relative complexity of implementing it and taking care of cases where some players can’t use the data channel or must relay it via TURN servers.

#4 – Controller and Display

This is something I haven’t seen up until recently, and now I’ve seen it several times in the same month.

AirConsole uses this technique. To some extent, Ericsson’s Remote Excavation demo takes the same approach.

The idea is that one device holds the controls for the other. In our case, a game controller and the PC/console running the game (in a browser, of course). Once the two pair up using a WebRTC data channel, the latency involved in passing commands from the controller to the device is minimized.
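
A rough sketch of the controller side, assuming the pairing already produced an open data channel; the polling rate and message format are illustrative:

```typescript
// Sketch: the "controller" device streams its input state over a data channel
// that was opened during pairing. Rate and message shape are illustrative.
function streamControllerInput(channel: RTCDataChannel): void {
  setInterval(() => {
    const pad = navigator.getGamepads()[0]; // or touch/tilt input on a phone
    if (pad && channel.readyState === "open") {
      channel.send(JSON.stringify({
        buttons: pad.buttons.map((b) => b.pressed),
        axes: pad.axes,
      }));
    }
  }, 16); // roughly 60 updates per second
}
```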

What am I missing?

4 different typical use cases. None used in any popular game. None considered “best practices” or common approaches to game development.

  • Are there more use cases for gaming with WebRTC?
  • Is any of them making headway in large scale commercial games that I am unaware of?
  • Is there a reason why none of them is catching on?

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.


WebRTC Testing Challenges: An Upcoming Webinar and a Recent Session

Thu, 11/05/2015 - 12:00

Announcing an upcoming free webinar on the challenges of WebRTC testing.

This week I took a trip to San Francisco, where the main goal was to attend the WebRTC Summit and talk there about the challenges of WebRTC testing. This was part of the marketing effort we’re making at testRTC, a company I co-founded with a few colleagues alongside my consulting business.

During the past year, we’ve gained a lot of interesting insights regarding the current state of testing in the WebRTC ecosystem, which made for good presentation material. The session at the WebRTC Summit went rather well, with a lot of positive feedback. One such comment was this one, which I received by email later that day:

I liked much your presentation which indeed digs into one of the most relevant problems of WebRTC applications, which is not generally discussed in conferences.

My own favorite is what you can see in the image I added above – many of the vendors out there just don’t make the effort to test their WebRTC implementations properly – not even when they go to production.

I’ve identified 5 main challenges that are facing WebRTC service developers:

  1. Browser vendor changes (hint: they are many, and they break things)
  2. NAT traversal (testing it isn’t trivial)
  3. Server scale (many just ignore this one)
  4. Service uptime (checking for the wrong metric)
  5. Orchestration (a general challenge in WebRTC testing)

The slides from my session are here below:

Overcoming the Challenges in Testing WebRTC Services from Tsahi Levent-levi

 

Two weeks from now, I will be hosting a webinar with the assistance of Amir Zmora on this same topic. While some of the content may change, most of it will still be there. If you are interested, be sure to join us online at no cost. To make things easier for you, there are two sessions, to fit any timezone.

When? Wednesday, November 18

Session 1: 8 AM GMT, 9 AM CET, 5 PM Tokyo

Session 2: 4 PM GMT, 11 AM EDT, 8 AM PDT

Register now

 

Test and Monitor your WebRTC Service like a pro - check out how testRTC can improve your service's stability and performance.


Can Apple’s On-Device Analytics Compete with Google and Facebook?

Tue, 11/03/2015 - 12:00

I wonder. Can Apple maintain its lead without getting deep and dirty in analytics?

Apple has decided to “take the higher ground”. It has pivoted this year, focusing a lot on privacy – not holding user keys, for one, but also collecting little or no information from devices and doing as much analytics as possible on the device. For now, it seems to be working.

But can it last?

Let’s head 5 or 10 years into the future.

Now let’s look at Google and Facebook. Both have a voracious appetite for data. Both are analytics driven to the extreme – they will analyze everything and anything possible to improve their service. Where improving it may mean increasing its stickiness, increasing ROI and ARPU, etc.

As time goes by, computing power increases, and so do the technology and understanding we have at our disposal for sifting through and sorting out huge amounts of data. We call it Big Data and it is changing all the time. A year or two ago, most discussions on big data were around Hadoop and its workloads. This year it was all about real time and Spark. There’s now a shift happening towards machine learning (as opposed to pure analytics), and from there we will probably head towards artificial intelligence.

To get better at it, there are a few things that need to be in place as well as ingrained into a company’s culture:

  1. You need to have lots and lots of data. The more the merrier
  2. The data needs to be available, and the algorithms put in place need to be tweaked and optimized daily. Think about how Google changes its search ranking algorithm all the time
  3. You need to be analytics driven. It needs to be part and parcel of your products and services – not something done as an afterthought in a data warehouse to generate a daily report to a manager

These traits are already there for Google and Facebook. I am less certain regarding Apple.

Fast forward 5 to 10 years.

  • Large companies collect even more data
  • Technologies and algorithms for analytics improve
  • Services become a lot smarter, more personalized and more useful

Where would that leave Apple?

If a smartphone (or whatever device we will have at that time) really becomes smart – would you pick the shiny toy with the eye-candy UI or the one that gets things done?

Can Apple maintain its stance on data collection in the long term, or will it end up collecting more data and analyzing it the way other companies do?


Where’s the Socket.io of WebRTC’s Data Channel?

Mon, 11/02/2015 - 12:00

Someone should build a generic fallback…

If you don’t know Socket.io then here’s the gist of it:

  • Socket.io is a piece of JS client code, and a server side implementation
  • It enables writing message passing code between a client and a server
  • It decides on its own what transport to use – WebSocket, XHR, SSE, Flash, pigeons, …

It is also very popular – as a developer, it lets you assume a WebSocket-like interface and develop on top of it; and it takes care of all the mess of answering the question “but what if my browser/proxy/whatever doesn’t support WebSocket?”

I guess there are use cases where the WebRTC data channel is like that – you’d love to have the qualities it gives you, such as reduced server load and latency, but you can live without it if you must. It would be nice if we had a popular Socket.io-like interface to do just that – attempt first to use WebRTC’s data channel, then fall back to either a TURN relay for it or to WebSocket (and degrade from there further along the line of older transport technologies).
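
To illustrate the kind of abstraction I mean, here’s a minimal sketch – not an existing library – that exposes one send/receive interface, tries the data channel first and degrades to a WebSocket if the channel never opens. All the names and the timeout are made up.

```typescript
// Sketch of the missing "Socket.io for the data channel": one interface,
// data channel first, WebSocket fallback. Names and wiring are made up.
interface Transport {
  send(data: string): void;
  onmessage: ((data: string) => void) | null;
}

function wrapDataChannel(channel: RTCDataChannel): Transport {
  const t: Transport = { send: (d) => channel.send(d), onmessage: null };
  channel.onmessage = (e) => t.onmessage?.(e.data);
  return t;
}

function wrapWebSocket(url: string): Transport {
  const ws = new WebSocket(url);
  const t: Transport = { send: (d) => ws.send(d), onmessage: null };
  ws.onmessage = (e) => t.onmessage?.(e.data);
  return t;
}

// If the data channel never opens (blocked UDP, no TURN, old browser),
// degrade to the WebSocket path after a timeout.
function connect(channel: RTCDataChannel | null, fallbackUrl: string): Promise<Transport> {
  return new Promise((resolve) => {
    if (!channel) return resolve(wrapWebSocket(fallbackUrl));
    const timer = setTimeout(() => resolve(wrapWebSocket(fallbackUrl)), 5000);
    channel.onopen = () => {
      clearTimeout(timer);
      resolve(wrapDataChannel(channel));
    };
  });
}
```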

The closest I’ve seen to it is what AirConsole is doing. They enable a smartphone to become the gamepad of a browser. You get a smartphone and your PC connected so that whatever you do on the phone can be used to control what’s on the PC. Such a thing requires low latency, especially for gaming purposes; and WebRTC is probably the most suitable solution. But WebRTC isn’t always available to us, so AirConsole just falls back to other mechanisms.

While a gaming console is an obvious use case, and I did see it in more instances lately, I think there’s merit to such a generic framework in other instances as well.

It’s time someone implemented it.


Apple WebRTC Won’t Happen Soon

Thu, 10/29/2015 - 12:00

Don’t wait up for Apple to get you WebRTC in the near future.

Like many others, I’ve seen the minor Twitter storm in our minuscule world of WebRTC – the one in which a screenshot of an official Apple job description had the word WebRTC in it. Amir Zmora does a good job of outlining what’s ahead for Apple in adding WebRTC. The thing he forgot to mention is when we should be expecting anything.

The below are generally guesses of mine. They are the roadmap I’d set for Apple if I were the one calling the shots.

When will we see an Apple WebRTC implementation?

Like anyone else, I am clueless about the inner workings of Apple. If the job postings tell us anything, it is that Apple is just starting out. Based on my experience with media engine implementations, and the time it took Google, Mozilla and Microsoft to put out a decent release, I’d say:

We are at least 1 year away from a first, stable implementation

It takes time to implement WebRTC. And it needs to be done across a growing range of devices and hardware when it comes to the Apple ecosystem.

Where will we see an Apple WebRTC implementation?

Safari on Mac OS X. The next official release of it.

  • This one is the easiest to implement for with the least amount of headache and hardware variance
  • I am assuming iOS, iPhone and iPad get a lot more stress and focus in Apple, so getting something like WebRTC into them would be more challenging

The Safari browser on iPad and iPhone will come next, appearing on the iPhone 6 and onwards. Maybe the iPhone 5, but I wouldn’t bet on it.

We will later see it supported in iOS WebViews – probably 9-12 months after the release of Safari on iOS.

The Apple TV will be left out of the WebRTC party. So will the Apple Watch.

Which Codecs will Apple be using?

H.264, AAC-ELD and G.711. Essentially, what they use in FaceTime with the addition of G.711 for interoperability.

  • Apple won’t care about media quality between Apple devices and the rest of the world, so doing Opus will be considered a waste of time – especially for a first release
  • H.264 and AAC-ELD are what you get in FaceTime today, so they will just be used in WebRTC as well
  • G.711 will be added for good measure to get interoperability going
  • VP8 will be skipped. Microsoft is skipping it, and H.264 should be enough to support all browsers a year from now

Will they aim for ORTC or WebRTC APIs?

Apple has set its sights on Google. It now holds Microsoft as a best friend, with Office releasing on iOS.

On one hand, going with ORTC would be great:

  • Apple will interoperate with Microsoft Edge on the API and network level, with Chrome and Firefox on the network level only
  • Apple gets to poke a finger in Google’s eye

On the other hand, going with WebRTC might be better:

  • Safari tends to do any serious upgrades with new releases of the OS. Anything in-between is mostly security updates. This won’t work well with ORTC and will work better with WebRTC (WebRTC is expected to be finalized in a few months’ time – well ahead of the 1 year estimate I have for the Apple WebRTC implementation)
  • Microsoft Edge isn’t gaining much ground yet, so aligning with it instead of with the majority of WebRTC-enabled browsers might not make the impact that Apple could make (assuming they are serious about WebRTC and not just adding it as an afterthought)

Being adventurous, I’d go for ORTC if I were Apple. Vindictiveness goes a long way in decision making.

Extra

On launch day, I am sure that Bono will be available on stage with Tim Cook. They will promise a personal video call over WebRTC running in WebKit inside Safari to the first 10 people who stand in line in Australia to purchase the next iPhone.

Then again, I might be mistaken, and tomorrow WebRTC will be soft launched on the Mac. Just don’t build your strategy on that really happening.

 

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.


IOT Messaging – Should we Head for the Cloud or P2P?

Tue, 10/27/2015 - 12:00

A clash of worlds.

With the gazillions of devices said to be part of the IOT world, how they interact and speak to each other is going to be important. When we talk about the Internet of Things, there are generally 3 network architectures that are available:

  • Star topology
  • P2P
  • Hubs and spokes
1# – Star Topology

The star topology means that each device gets connected to the data center – the cloud service. This is how most of our internet works today anyway – when you came to this website, you got connected to my server and its hosting company to read this post. When you chat on Facebook, your messages go through Facebook’s data centers. When your heat sensor has something to say… it will probably tell it to its server in the cloud.

Pros
  • We know how it works. We’ve been doing it for a long time now
  • Centralized management and control makes it easier to… manage and control
  • Devices can be left as stupid as can be
  • Data gets collected, processed and analyzed in a single place. These humongous amounts of data mean we can derive and deduce more from it (if we take the time to do so)
Cons
  • Privacy concerns. There’s a cloud server out there that knows everything and collects everything
  • Security. Assuming the server gets hacked… the whole network of devices gets compromised
  • As the number of devices grows and the amount of data collected grows – so do our costs to maintain this architecture and the cloud service
  • Latency. At times, we need to communicate across devices in the same network. Sending that information towards the cloud is wasteful and slower
2# – P2P

P2P means devices communicate directly with each other. No need for mediation. The garage sensor needs to turn on the lights in the house and start the heating? Sure thing – it just tells them to do so. No need to go through the cloud.

Pros
  • Privacy. Data gets shared only by the devices that need direct access to it
  • Security. You need to hack more devices to gain access to more data, as there’s no central server
  • Low latency. When you communicate directly, the middleman isn’t going to waste your time
  • Scale. It is probably easier to scale, as more devices out there doesn’t necessarily mean more processing power is required on any single device to handle the network load
Cons
  • Complicated management and control. How do these devices find each other? How do they know each other’s language? How the hell do you know what goes on in your network?
  • There’s more research than real deployments here. It’s the wild west
  • Hard to build real smarts on top of it. With less data being aggregated and stored in a central location, how do you make sense of it and exploit big data analytics?
3# – Hubs and Spokes

As with all technology, there are middle ground alternatives. In this case, a hubs and spokes model. In most connected home initiatives today, there’s a hub device that sits somewhere in the house. For example, Samsung’s SmartThings revolves around a Hub, where all devices connect to it locally. While I am sure this hub connects to the cloud, it could send less or more data to the cloud, based on whatever Samsung decided to do with it. It serves as a gateway to the home devices that reduces the load on the cloud service and makes it easier to develop and communicate locally across home devices.

Pros
  • Most of what we’d say is advantageous for P2P works here as well
  • The manageability and familiarity of this model are an added bonus
Cons
  • Single point of failure. Usually, you won’t build high availability and redundancy for a home hub device. If that device dies…
  • Whose hub will you acquire? What can you connect to it? Does that mean you commit to a specific vendor? A hub might be harder to replace than a cloud service
  • An additional device is one more thing we need to deal with in our system. Another moving part
But there’s more

At the recent Kranky Geek event, Tim Panton, our magician, decided to show how WebRTC’s data channel can be used to couple devices using a duckling protocol. To keep things short, he showed how a device you just purchased can be hooked up to your phone, making that phone the only way to control and access the purchased device.

You can watch the video below – it is definitely interesting.

To me this means that:

  1. We don’t talk enough about the network architectures and topologies that are necessary to make IOT a reality
  2. The result will be hybrid in nature, though I can’t say where it will lead us

 

Kranky and I are planning the next Kranky Geek - Q1 2016. Interested in speaking? Just ping me through my contact page.


WebRTC Mobile to Web? Make Sure You Think at Web Speeds

Mon, 10/26/2015 - 12:00

Learn to run faster.

WebRTC isn’t yet standardized. It is on the way there. That said, there are already more than 800 different vendors and services out there that are making use of it – many in production, with commercial services.

There are 3 main approaches to a WebRTC-based service:

  1. Browser based service, where the user interacts with the service solely through a web browser
  2. App based service, where users interact with the service via WebRTC mobile apps
  3. Hybrid approach, where the users can interact via a web browser or a WebRTC mobile app

That third alternative is the most challenging. The reason for the challenge isn’t a technical one, but rather one of mindset.

Fippo, who knows about the WebRTC testing service I am a part of, sends me issues he bumps into every once in a while. This one that he shared with me recently from the discuss-webrtc group was an eye opener: someone running a native C++ app got WebRTC compiled and linked into his own app, assuming it would work with browsers. And it did. Up until recently:

Chrome 46 started sending UDP/TLS/RTP/SAVPF in the profile field of the m-line as has been announced a while back in https://groups.google.com/forum/#!topic/discuss-webrtc/ZOjSMolpP40

Your library version has to be pretty old to be affected by this (parsing this should have been supported since mid-2014).
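
For illustration, the change in question is just the transport profile token in the SDP m-line – roughly this (payload type numbers are illustrative):

```
Old offers:    m=audio 9 RTP/SAVPF 111 103 104
Chrome 46+:    m=audio 9 UDP/TLS/RTP/SAVPF 111 103 104
```

An SDP parser that expects a fixed profile string chokes on the new token, even though the codecs and the rest of the offer stay the same.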

Here are some thoughts about this one:

  • If you run WebRTC in browsers, your main concern about interoperability is around
    • Browsers changing their APIs and deprecating past capabilities
    • Working the kinks of interoperability across browser vendors
  • If you wrap WebRTC in your app and use it there alone, then your concerns are minor – you live in a rather “protected” world where you control everything
  • If you connect from an app to a browser with WebRTC, you’ll need to maintain the WebRTC library in your own app
    • Making sure it works with the latest browser
    • Updating and patching it as you move along

It means that mobile apps must run at the speed of the browser – whenever a new browser version gets released, you must be sure it works with your own version of WebRTC in your app. You should probably get your version updated at the same speed (that’s every 6 weeks or even less, once we have 3 full browsers supporting it properly).

What are you to do if that’s your use case? Here are some suggestions:

#1 – DIY only if you can commit

Don’t just put someone on your team to port WebRTC on your own.

If you do, then make sure you know this isn’t a one-time effort. You’ll need to make investments in upgrading the ported library quite frequently.

To be on the safe side, I’d suggest putting the ongoing investment (not the initial porting) at around 50% of a developer’s capacity.

Also remember you have two platforms to deal with – Android and iOS.

Can’t commit to the ongoing maintenance effort? This alternative isn’t for you.

#2 – Outsource to an independent developer with care

If you decide to use someone externally, who would take the WebRTC library, port it for you, assist you in integrating and optimizing it to your needs – make sure he will be there for the long run.

Just as you’ll need to invest internally to maintain this code, you’ll need to be able to call upon that person in the future.

Things to consider:

  • Place an exact price for future maintenance work in the proposal – you don’t want to do the initial work just to find out the price hikes later, when you need that contractor the most
  • Make sure your agreement with him covers his availability to you
  • Budget appropriately for this additional future work

#3 – Use an official product

The other alternative? Use an official product that gets you WebRTC as an SDK to mobile. Frozen Mountain’s IceLink comes to mind as a good solution.

You essentially outsource the headache of maintaining WebRTC’s interoperability with browsers to someone who does that for a living in a product package.

Make sure in the agreement with such vendors that you get regular updates and upgrades – and that these are guaranteed to work with the latest versions of the browsers (and even with the available beta versions of the browsers).

Check how regularly the vendor releases a new SDK and which ones are mandatory to upgrade to due to browser interoperability issues. Plan accordingly on your end.

#4 – Go for a WebRTC API Platform

Have your worries of this whole mess outsourced to someone else. Not only the mobile SDK part, but the whole real time comms platform.

You need to pick a vendor out of a very large set of potential vendors, which is why I’ve written and updated my own report on WebRTC APIs over the years.

Make sure to take a look at how well the vendor you select works with mobile and is committed to upgrading his own support for it.

It ain’t easy

Getting WebRTC to work well for the long run on mobile and web at the same time isn’t easy. It requires commitment as opposed to a one-time effort. Be prepared, and make sure you take the approach that fits you best.

At least until WebRTC stabilizes (no reason for this to happen in the coming year), you’ll need to keep running at the speed of the browsers.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.


Upcoming: WebRTC Summit and my Next Virtual Coffee

Sat, 10/24/2015 - 15:30

Here’s what to expect during November.

Just wanted to share two things during this weekend.

WebRTC Summit, testing and San Francisco

I am traveling to San Francisco in the first week of November. The idea is to talk about WebRTC testing (and testRTC) at the WebRTC Summit.

I’ll be touching on the challenges of testing WebRTC, which is something that isn’t discussed a lot out there:

  1. Either there’s no challenge or problem and all is well
  2. Or we’re still in the exploration phase with WebRTC, with little commercial value to it

I think there needs to be more focus in that area, and not just because I co-founded a WebRTC testing company.

I plan on being at the WebRTC Summit in Santa Clara on November 3-4. Here’s more about my session if you need it. I am already filling up my days around that summit with meetings in both Santa Clara and San Francisco – if you wish to meet, contact me and I’ll see if I can still squeeze you into my agenda.

Virtual Coffee with Tsahi

The first Virtual Coffee event took place a bit over a week ago. The recording of that session still isn’t available, but will be in a week or two.

It went well and I truly enjoyed the experience – the ability to handpick the people who can participate, get them signed in through my membership area on this website, and do it all under my own brand – it was great.

I’d like to thank (again) Drum’s team for their Share Anywhere service. It is as close to what I needed as could be – and easily customizable. Their team is great to work with as well (and no – they haven’t paid me to say this).

The next session

When? November 11, 13:30 EDT

Where? Online, of course

Agenda:

  • Microsoft Edge, ORTC – what you should know about it, and how to prepare for 2016?
  • Open Q&A – on the topic above, or on any other topic

Who?

These sessions are closed sessions. They are available to the following groups:

  • Employees of companies who have an active subscription to my WebRTC API Platforms report
  • Employees of companies I currently consult for
Last but not least

I noticed recently people contacting me and asking me not to share their stories on this blog.

To make it clear – there are three reasons for me to share stories here:

  1. I heard or read about it online, in a public setting. So the assumption is that the information is already public and sharable
  2. I specifically asked if this can be shared – and got permission. Usually this ends up as an interview on my site
  3. I share a story, but not the details about the specific company or the people involved

I put bread on the table mainly through consulting. This means being able to assist vendors, and that requires doing things in confidence and without sharing strategies, roadmaps, status and intents with others. If you contact me through my site, my immediate assumption is that what you share is private unless you say otherwise.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.


The What’s Next for WebRTC Can Wait Until We Deal With What’s Now

Thu, 10/22/2015 - 12:00

Why muse about the future when we’ve got so much to do in the here and now?

This week Chad wrote a post titled What’s Next for WebRTC? It is a good post, so don’t take this one as a rant or a critique of Chad. It’s just that the moment I saw the title and some of the words on the accompanying visual (AR, VR, drones, Industrial, Computer Vision, 3D, Connected Cars) – I immediately knew there was something that bugged me.

It wasn’t about the fact that WebRTC isn’t used for any of these things. It was due to two reasons:

  1. We’re still not scratching the surface of WebRTC yet, so what’s the rush with what’s next?
  2. I hate it when people stick a technology on anything remotely/marginally related. This is the case for the soup of words I saw in the visual…

On the second one, of buzzword abuse, I can only say this: WebRTC may play a role in each and every one of these buzzwords, but its place in these markets will be minuscule compared to the markets themselves. For many use cases in these markets, it won’t be needed at all.

For the first one, I have decided to write this.

There are many challenges for those who wish to use WebRTC today. This is something I tried to address in the last Kranky Geek event – WebRTC is both easy and hard – depending on your pedigree.

VoIP developers will see it as the easiest way to implement VoIP. Web developers will find it hard – it is the hardest thing that you can add to a browser these days, with many moving parts.

Here’s the whole session if you are interested:

Here’s what I think we should strive for with WebRTC and even ask those who work to make it available for us as a technology:

#1 – Become TCP

TCP works. We expect it to work. There are no interoperability issues with TCP. And if there are, they are limited to a minuscule number of people who need to deal with them. WebRTC isn’t like that today.

WebRTC requires a lot of care and attention. This fresh interview with Dan about the WebRTC standard shows that. You’ll find there words about versioning, deprecation, spec changes, etc. – and the problem is they affect us all.

This brings us to this minor nagging issue – if you want to use and practice WebRTC, you need to be on top of your game and have your hand on the WebRTC pulse at all times – it isn’t going to be a one-off project where you invest in developing a web app or an app and then monetize and bask in the sun for years.

The other alternative is to use a WebRTC API vendor, who takes care of all that on your behalf. This can’t be easily done by those who need an on-premise deployment or more control over the data. This alternative also speaks louder to developers than it does to IT managers in enterprises, leaving out part of the industry of potential adopters of WebRTC.

The faster WebRTC becomes like TCP the better.

#2 – More success stories of a variety of simple use cases

There are a lot of areas where I see vendors using WebRTC. Healthcare, learning, marketplaces, contact centers, etc.

In many cases, these are startups trying to create a new market or change how the market works today. While great, it isn’t enough. What we really need is stories of enterprises who took the plunge – like the story told by AMEX last year. We also need to see these startups grow and become profitable companies – or see larger vendors who acquire the technology (I am talking to you, Slack, Atlassian and Blackboard) use it in their products.

These stories that I am interested in? They should be about the business side of things – how using WebRTC transformed the business, improved it, got adopted by the end customers.

Where are we?

With all the impressive WebRTC numbers flying around, we are still in the early adopter phase.

We are also still struggling with the basics.

There are many great areas to explore with WebRTC – the large scale streaming space is extremely interesting to me. So is the potential of where WebRTC fits in IOT – which is even further out than large scale streaming. I love being a part of these projects, and those who seek them out are at the forefront of this technology.

We’re not there yet.

But we will be.

There’s no stopping this train any time soon.

 

Test and Monitor your WebRTC Service like a pro - check out how testRTC can improve your service's stability and performance.

 


The Future of Messaging is…

Tue, 10/20/2015 - 12:00

A lot more than pure messaging.

Messaging used to be about presence and IM. Then the VoIP people came and placed the voice and video chat stickers on it. That then became unified communications. Which is all nice and well, but it is both boring and useless at this point. Useless not because the utility of the service isn’t there, but because the expectation of such a service is that it will be free – or close to it. Or as I like to say, it has now become a feature within another service more than a service in its own right.

While this is killing unified communications, it doesn’t seem to be making much of a dent on messaging just yet. And the reason I think is the two very different trajectories these are taking:

  • Unified Communications is focused on being the one true source of everything that gets federated with all other communication means
  • Messaging shifted towards becoming platforms, where the size of the ecosystem and its utility outweighs any desire or need to federate with other similar services

This migration of messaging towards becoming platforms isn’t so easy to explain. There’s no silver bullet of how this is done. No secret recipe that gets you there.

Here are a few strategies that different messaging platforms are employing in their attempt to gain future growth.

Whatsapp and Simplicity

Whatsapp is all about simplicity. It offers pure messaging that replaces the SMS for many, coupled with group messaging that makes it sticky and viral in many countries.

Features don’t make it into Whatsapp fast. The only thing of any notable value added in the past two years is voice calling.

With this approach, Whatsapp still is the largest player in town when it comes to messaging; and it is probably doing so with the smallest possible team size.

The problem with such an approach is that there isn’t enough room for many such players – and soon, being a viable player in this domain will require a billion monthly active users.

Apple and iMessage

By the same token, Apple’s iMessage is similar. It is simple, and it is impossible to miss or ignore if you have an iPhone.

But it is limited to Apple’s ecosystem, running only on Apple’s own devices.

Google Hangout (and now Jibe Mobile)

Google Hangouts was supposed to do the same/similar on Android, but didn’t live up to the expectation:

  • Unlike Whatsapp, group chat is available in Hangouts, but isn’t viral or “mandatory”
  • Unlike Apple iMessage, the user needs to make a mental note to use Hangouts instead of the SMS app. There are two of those, and as a user, you are free to choose which one to use. Choice adds friction and complexity

With the acquisition of Jibe Mobile, this may change in the future. Will others follow suit? Is there enough utility and need in connecting messaging with telco messaging – and especially with RCS, which many (myself included, at least until this acquisition) see as dead on arrival?

Facebook and Artificial Intelligence

Facebook is experimenting with artificial intelligence that is embedded into their Facebook Messenger service – not the social network where e-commerce is the current focus.

This new AI initiative is called Facebook M and is planned to be driven partly by machines and partly by humans.

In many ways, this is akin to the integration LivePerson (a chat widget for contact centers) has with knowledge bases that can cater to customers’ needs without “harassing” live agents in some cases. But this one is built into the messaging service the customer uses.

It is compared to Siri and Cortana, but you can also compare it to Google Now – once Facebook fleshes out the service, they can open up APIs for third parties to integrate with it, making it a platform for engaging with businesses.

WeChat and the Digital Life Platform

WeChat is large in Asia and dominant in many ways. It is an e-commerce platform and a digital life ecosystem.

Connie Chan of Andreessen Horowitz gives a good overview of what makes WeChat a platform:

Along with its basic communication features, WeChat users in China can access services to hail a taxi, order food delivery, buy movie tickets, play casual games, check in for a flight, send money to friends, access fitness tracker data, book a doctor appointment, get banking statements, pay the water bill, find geo-targeted coupons, recognize music, search for a book at the local library, meet strangers around you, follow celebrity news, read magazine articles, and even donate to charity … all in a single, integrated app.

WeChat transitioned from being a communication tool to becoming a platform. It has APIs that make it easy for third parties to integrate with it and offer their own services on top of WeChat’s platform.

While I use the term “from service to feature” when talking about VoIP and WebRTC, Connie Chan uses “where social is just a feature” to explain the transition WeChat has made in this space.

The ability to send messages back and forth and communicate in real time via voice and video is now considered table stakes. It is also not expected to be a paid service but a feature that gets monetized elsewhere.

Meanwhile in Enterprise Messaging

Slack, which Connie Chan also briefly notes in her account of WeChat, is the guiding light of enterprise messaging these days.

Unlike other players in this space, Slack has built itself around the premise of three strong characteristics:

  • Integration – third parties can integrate their apps into Slack, and in many cases, Slack integrates automatically through links that get shared inside messages. Integrations that make sense and bring value to larger audiences of Slack get wrapped into Slack itself – the acquisition of Screenhero and the plans to extend it to video conferencing show this route
  • Omnisearch – everything in Slack is searchable, including the content of links shared on Slack. This makes for a powerful search capability
  • Slackbot – a bot you can interact with inside the service. It offers guidance and some automation – and is about to enjoy artificial intelligence (or at the very least machine learning)

The enterprise platform is all about utility.

Slack is introducing AI and has its own marketplace of third party apps via integrations. The more enterprises use it, the more these two capabilities will reinforce its growth and effectiveness.

While the fight seems to be these days between Unified Communications and Enterprise Messaging, I believe that fight is already behind us. The winner will be Enterprise Messaging – either because UC vendors will evolve into Enterprise Messaging (or acquire such vendors) or because they will lose ground fast to Enterprise Messaging vendors.

The real fight will be between modern Enterprise Messaging platforms such as Slack and consumer messaging platforms such as WeChat – enterprises will choose one over the other to manage and run their internal workforce.

 

Kranky and I are planning the next Kranky Geek - Q1 2016. Interested in speaking? Just ping me through my contact page.


WebRTC Basics: What’s a Video Codec Anyway?

Mon, 10/19/2015 - 12:00

Time for another WebRTC Basics: Video Codecs

I’ve been yapping about video codecs more than once here on this blog. But what is a video codec exactly?

If you’re a web developer and you are starting to use WebRTC, then there’s little reason (until now) for you to know about it. Consider this your primer to video coding.

Definition

A video codec takes the raw video stream, which can be of different resolution, color depth, frame rate, etc. – and compresses it.

This compression can be lossless, where all data is maintained (so when you decompress it you get the exact same content), BUT it is almost always going to be lossy. The notion is that we can lose data that the human eye doesn’t notice anyway. So when we compress video, we take that into account, and throw stuff out relative to the quality we wish to get. The more we throw out – the less quality we end up with.

The video codec comes in two pieces:

  1. Encoder – takes the raw video data and compresses it
  2. Decoder – takes the compressed data created by an encoder and decompresses it

The decoded stream will be different from the original one. It will be degraded in its quality.

The Decoder is the Spec

The thing many miss is that in order to define a video codec, the only thing we have is a specification for a decoder:

Given a compressed video stream, what actions need to take place to decompress it.

There is no encoder specification. It is assumed that if you know how the compressed result needs to look, it is up to you to compress it as you see fit. Which brings us to the next point.

Generally speaking, decoders will differ from each other by their performance: how much CPU they take to run, how much memory they need, etc.

The Encoder is… Magic

Or more like a large set of heuristics.

In a video codec, you need to decide many things. How much time and effort to invest in motion estimation, how aggressive to be when compressing each part of the current frame, etc.

You can’t really get to the ultimate compression, as that would take too long to achieve. So you end up with a set of heuristics – some “guidelines” or “shortcuts” that your encoder is going to take when it compresses the video image.

Oftentimes, the encoder is based on experience, a lot of trial and error and tweaking done by the codec developers. The result is as much art as it is science.

Encoders will differ from each other not only by their performance but also by how well they end up compressing (and how well can’t be summed up in a single metric value).

Hardware Acceleration

A large piece of what a codec does is brute force.

As an example, most modern codecs today split an image into macroblocks, each requiring a DCT (discrete cosine transform). With well over 3,000 macroblocks in each frame at 720p resolution, that’s a lot that needs to get processed every second.
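
Here’s the back-of-the-envelope arithmetic behind that figure, assuming classic 16×16 macroblocks and 30 frames per second:

```typescript
// 720p frame split into classic 16x16 macroblocks, at 30 fps.
const macroblocksPerFrame = (1280 / 16) * (720 / 16); // 80 * 45 = 3,600
const macroblocksPerSecond = macroblocksPerFrame * 30; // 108,000 every second
```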

Same goes for motion estimation and other bits and pieces of the video codec.

To that end, many video codec implementations are hardware accelerated – either the codec runs completely in accelerated hardware, or the ugly pieces of it do, with “software” managing the larger picture of the codec implementation itself.

It is also why hardware support for a codec is critical for its market success and adoption.

Bandwidth Management

A video codec doesn’t work in a void. Especially not when the purpose of it all is to send the video over a network.

Networks have different characteristics of available bandwidth, packet loss, latency, jitter, etc.

When a video encoder is running, it has to take these things into account and compensate for them – reducing the bitrate it produces when there’s network congestion, resetting its encoding and sending a full frame instead of partial ones, etc.

There are also differences between implementations in how a codec should “invest” its bitrate. Which again brings us to the next topic.

Different Implementations for Different Content Types (and use cases)

Not all video codec implementations are created equal. It is important to understand this when picking a codec to use.

When Google added VP9 to YouTube, it essentially made two compromises:

  1. Having to implement only a decoder inside a browser
  2. Stating the encoder runs offline and not in real-time

Real-time encoding is hard. It means you can’t think twice about how to encode things. You can’t go back to fix things you’ve done. There’s just not enough time. So you use single-pass encoders. These encoders look at the incoming raw video stream only once and decide, upon seeing a block of data, how to compress it. They don’t have the option of waiting a few frames to decide how best to compress, for example.

Your content is mostly static, coming from a PowerPoint presentation with mouse movements on top? That’s different from the head-shot video common in web meetings, which is in turn different from the motion in the latest James Bond Spectre trailer.

And in many ways – you pick your codec implementation based on the content type.

A Word about WebRTC

WebRTC brings with it a huge challenge to the browser vendors.

They need to create a codec that is smart enough to deal with all these different types of content while running on a variety of hardware types and configurations.

From what we’ve seen in the past several years – it does quite well (though there’s always room for improvement).

 

Next time you wonder why use WebRTC and not build your own – someone implementing this video codec for you is one of the reasons.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

 

 


3 Advantages of WebRTC Embedded in the OS

Thu, 10/15/2015 - 12:00

Here’s a thought. Why not get WebRTC to the operating system level and be done with it?

Today, there are different ways to get WebRTC going:

  1. Use a browser…
  2. Compile the code and link it to your own app (PC or mobile)
  3. Wrap the browser within an app (PC)
  4. Use a webview (Android)

That last option? This is the closest one to an OS level integration of WebRTC. You assume it is there and available, and use it in your app somehow.

But what if we could miraculously get the WebRTC APIs (JavaScript or whatever) from the operating system itself? No compilation needed. No Cordova plugins to muck around with. Just good ol’ “system calls”?

While I don’t really expect this to happen, here’s what we’d gain from having that:

1# Smaller app sizes

Not needing to get WebRTC on a device means your app takes up less space. With the average app size on the increase, this is always a good thing.

The OpenH264 codec implementation binary alone is around 300k, depending on the platform. Assume you need 3-4 more codecs (and that number will be growing every couple of years), the other media algorithms, all the network implementation, code to integrate with device drivers, WebRTC-specific wrappers, … – lots and lots of size.

And a smaller app size means more space for other apps and less data to send over the network when installing the app.

2# Less variability

While the first point is obvious, it is also somewhat trivial – so it takes a second more to install an app, who cares?

This point carries a lot more weight.

If there’s a single implementation of WebRTC, maintained by the OS itself, there’s a lot less hassle of dealing with the variance.

When people port WebRTC on their own and use it – they make changes and tweaks. They convince themselves (with or without any real reason) that they must make that small fix in that piece of algorithm in WebRTC – after all, they know their use case best.

But with an OS-level implementation, it is simply there, so you make do with what you have. And that piece of code gets updated magically and improves with time – you don't need to upgrade it manually and re-integrate all the changes you've made to it.

Less variability here is better.

3# Shorter TTM

Since you don’t need to muck around with the work of porting and integration – it takes less time to implement.

I’ve been working with many vendors on how to get WebRTC to work in their use case. Oftentimes, that requires that nasty app to get a WebRTC implementation into it. There’s no straightforward solution to it. Yes – it is getting easier with every passing day, but it is still work that needs to be done and taken into account.

Back to reality

This isn’t going to happen anytime soon.

Unless… it already has to some extent and in some operating systems.

Chrome is practically an OS in its own right – not only Chrome OS but Chrome itself. It has WebRTC built in – and so do newer Android versions, where you can open up webviews with it.

For the rest, it is unlikely to be the path this technology will be taking.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post 3 Advantages of WebRTC Embedded in the OS appeared first on BlogGeek.me.

Google Goes All in for Messaging, Invests in Symphony

Tue, 10/13/2015 - 12:00

Something is brewing at Google.

Last week it was announced that Symphony just raised another $100M led by Google. Not Google Ventures mind you – Google Inc.

Who is Symphony?
  • High profile Silicon Valley startup (obviously), soon to become a unicorn, if it isn’t already
  • Well known founder from the Unified Communications industry – David Gurle
  • Have been around for only a year
  • Already has over 100 employees, most of them engineers
  • Focused on enterprise messaging, and targeting highly regulated and security sensitive industries

The Symphony Service

The service itself is targeted at the enterprise, but a free variant of it is available. I tried logging into it, to see what it is all about. It is a variant of the usual messaging app on the desktop, with bits and pieces of Facebook and Slack.

On face value, not much different than many other services.

Symphony Foundation

Symphony decided to build its service on top of an open source platform of its own, which it calls Symphony Foundation. It includes all the relevant washed-out words required in a good marketing brochure, but little else for now: a mission statement, some set of values. That’s about it.

It will be open source, when the time comes. It will be licensed under the Apache license (permissive enough). And you can leave an inquiry on the site. In the name of openness… that’s as open as Apple’s FaceTime protocol is/was supposed to be. I’ll believe it when I see it.

Why Invest in Symphony?

This is the bigger question here. Both for why Google put money in it, as well as others.

With a total of $166M of investment in two rounds and over 100 employees recruited in its first year of existence, there seems to be a gold rush happening. One that is hard to explain.

As a glaring reminder – Whatsapp on acquisition day had 32 developers and around 50 employees. Symphony has twice that already, but no active user base to back it up.

It might be because of its high profile. After all, this is David Gurle we're talking about. But then again, Talko has Ray Ozzie – and they only raised $4M in the past 3 years, and have fewer than 10 employees (if you believe LinkedIn).

The only other reason I can see is the niche they went for.

The financial industry deals with money, so it has money. It also has regulations and laws, making it a hard nut to crack. While most other players are focused on bringing consumer technology to the SMB, Symphony is trying to start from the top and trickle to the bottom with a solution.

The feature set they are putting in place, based on their website, includes:

  • Connectivity across organizations, while maintaining “organizational compliance”
  • Security and privacy
  • Policy control on the enterprise level
  • Oh… and it’s a platform – with APIs – and developers and partners

The challenge will be keeping a simple interface while maintaining the complex feature set regulated industries need (especially ones that love customization and believe they are somehow special in how they work and communicate).

On Messaging and Regulation

The smartphone is now 8 years old, if you count it from the launch of the iPhone.

Much has changed in 8 years, and most of it is left unregulated still.

Messaging has moved from SMS to IP based messaging services like Whatsapp in many countries of the world. Businesses are trying to kill email with tools like Slack. We now face the BYOD phenomenon, where employees use whatever device and tools they see fit to get their work done – and enterprises find it hard to force them to use specific tools.

If Hillary Clinton can use her own private email server during the course of her workday, what should others say or do?

While regulation is slow to catch up, I think some believe the time is ripe for that to happen. And having a messaging system that is fit for duty in those industries that are sensitive today means being able to support future regulation in other/all industries later.

This trend might explain the urgency, and the amount of capital, that Symphony has been able to attract.

Google

Why did Google invest here? Why not Google Ventures? It doesn’t look like an Alphabet investment but rather a Google one. And why invest and not acquire?

Google’s assets in messaging include today:

Jibe/RCS is about the consumer space and an SMS replacement in the long run. It may be targeted at Apple. Or Facebook. Or Skype. Or all of them.

None of its current assets is making a huge impact. They aren’t dominant in their markets.

And messaging may be big in the consumer space, but the money is in the enterprise – it can be connectivity to enterprises, ecommerce or pure service. Google is finding it difficult there as well.

Symphony is a different approach to the same problem. It targets the enterprise directly, focusing on highly regulated customers. Putting money into it as an investment is a no-brainer, especially if it includes, say, a right of first refusal on a future acquisition. So Google sits and waits, sees what happens with this market, and decides how to continue.

Is this a part of a bigger picture? A bigger move of Google in the messaging space? Who knows? I still can’t figure out the motivation behind this one…

Messaging and me

I’ve been writing on general messaging topics on and off throughout the years on this blog.

It seems this space is becoming a lot more active recently.

Expect more articles here about this topic of messaging from various angles in the near future.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post Google Goes All in for Messaging, Invests in Symphony appeared first on BlogGeek.me.

Do you Need to test a WebRTC P2P Service?

Mon, 10/12/2015 - 12:00

Yes.

It is a question I get from time to time, especially now that I am a few months into the WebRTC testing venture as a co-founder with a few partners – testRTC.

The logic usually goes like this: the browsers already support WebRTC. They do their own testing, so what we end up getting is a solid solution we can use.

If only life were that easy… here are a few things you need to take care of when it comes to testing even the most simple of WebRTC services:

#1 – Future proofing browser versions

Guess what? Things break. They also change. Especially when it comes to WebRTC.

A few interesting tidbits for you:

  • Google is dropping HTTP support for getUserMedia, so services must migrate to HTTPS. Before year end
  • The echo canceller inside WebRTC? It was rewritten. From scratch. Using a new algorithm. That is now running on a billion devices. Different devices. And it works! Most times
  • WebRTC’s getStats() API is changing. Breaking its previous functionality

And the list goes on.
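The getStats() change is a good example of what these breaking changes look like in code. A minimal sketch of the spec-compliant, promise-based API – which behaves differently from the older callback-based variant that many services were written against:

```javascript
// Minimal sketch: the promise-based, spec-compliant getStats().
// Services written against the older callback-based variant (with differently
// named report fields) break when they assume the old behavior.
async function logInboundVideoStats(pc) {
  const stats = await pc.getStats();
  stats.forEach(report => {
    if (report.type === 'inbound-rtp' && report.kind === 'video') {
      console.log('packets lost:', report.packetsLost, 'jitter:', report.jitter);
    }
  });
}
```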

WebRTC is a great technology, but browsers are running at a breakneck speed of 6-8 weeks between releases (for each browser) – and every new release has the potential to break a service in a multitude of ways: a change in the spec, deprecation of a capability or just plain bugs.

Takeaway: Make sure your service works not only on the stable version of the browsers, but also on their beta or even dev versions.

#2 – Media relay

Your service might be a P2P service, but at times, you will need to relay media through TURN servers.

The word on the street is that around 15% of sessions require relay. For some services it is closer to 50%, and for others 8% (real numbers I heard from running services).

Media relay is tricky:

  • You need to configure it properly (many fall at this one)
  • You need to test it in front of different firewall and NAT configurations
  • You need to make it close to your users (you don’t want a local session in Paris to get relayed through a server in San Francisco)
  • You need to test it for scale (check the next point for more on that)
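On the configuration side, here is a minimal sketch – the STUN/TURN URLs and credentials below are placeholders – and forcing relay-only is a handy way to verify that your TURN setup actually works:

```javascript
// Minimal sketch – server URLs, username and credential are placeholders.
const config = {
  iceServers: [
    { urls: 'stun:stun.example.org:3478' },
    {
      urls: ['turn:turn.example.org:3478', 'turn:turn.example.org:443?transport=tcp'],
      username: 'user',
      credential: 'secret'
    }
  ],
  // For testing only: force all media through the TURN server so you know relay works.
  // In production, relay should be the fallback, not the rule.
  iceTransportPolicy: 'relay'
};

const pc = new RTCPeerConnection(config);
```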

Takeaway: Don’t treat WebRTC as a browser side technology only, or something devoid of media handling. Even if the browser does most of the heavy lifting, some of the effort (and responsibility) will lie on your service.

#3 – Server scale

Can your server cater for 200 sessions in parallel to fit that contact center? What about 1,000?

What will happen if you’ll have a horde effect due to a specific event? Can you handle that number of browsers hitting your service at once? Does your website operate in the same efficiency for the 1000th person as it does for the first?

This relates both to your signaling server – which is not part of WebRTC, but is there as part of your service – and to your media server from the previous point.

Takeaway: Make sure your service scales to the capacities that it needs to scale. Oh – and you won’t be able to test it manually with the people you have with you in your office…

#4 – Service uptime

You tested it all. You have the perfect release. The service is up and running.

How do you make sure it stays running?

Manually? Every morning come in to the office and run a session?

Use Pingdom to make sure your site is up? Go to the extreme of using New Relic to check that the servers are up, the CPUs aren't overloaded and the memory use seems reasonable? Great. But does that mean your service is running and people can actually connect sessions? Not necessarily.

Takeaway: End-to-end monitoring. Make sure your service works as advertised.
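For illustration, here is a bare-bones connectivity probe you could run periodically – a minimal sketch that only checks whether two local peer connections can negotiate and connect; a real end-to-end monitor would go through your production signaling and TURN servers instead:

```javascript
// Minimal sketch: a loopback probe that fails if two local peer connections
// cannot reach the 'connected' state within the timeout. A real monitor would
// exercise your actual signaling path, TURN servers and media flow.
async function probeWebRTC(timeoutMs = 10000) {
  const a = new RTCPeerConnection();
  const b = new RTCPeerConnection();
  a.onicecandidate = e => e.candidate && b.addIceCandidate(e.candidate);
  b.onicecandidate = e => e.candidate && a.addIceCandidate(e.candidate);
  a.createDataChannel('probe');

  const connected = new Promise((resolve, reject) => {
    setTimeout(() => reject(new Error('WebRTC probe timed out')), timeoutMs);
    a.oniceconnectionstatechange = () => {
      if (['connected', 'completed'].includes(a.iceConnectionState)) resolve('ok');
    };
  });

  await a.setLocalDescription(await a.createOffer());
  await b.setRemoteDescription(a.localDescription);
  await b.setLocalDescription(await b.createAnswer());
  await a.setRemoteDescription(b.localDescription);

  return connected;
}
```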

The ugly truth about testing

The current norm in many cases is to test manually. Or not test at all. Or rely on unit testing done by developers.

None of this can work if what you are trying to do is create a commercial service, so take it seriously. Make testing a part of your development and deployment process.

And while we’re at it…

Check us out at testRTC

If you don’t know, I am a co-founder with a few colleagues at a company called testRTC. It can help you with all of the above – and more.

Leave us a note on the contact page there if you are interested in our paid service – it can cater to your testing needs with WebRTC as well as offering end-to-end monitoring.

 

Need to test WebRTC?

 

The post Do you Need to test a WebRTC P2P Service? appeared first on BlogGeek.me.

Fone.do and WebRTC: An Interview With Moshe Maeir

Thu, 10/08/2015 - 12:00

Fone.Do: Moshe Maeir

October 2015

SMB phone system

Disrupting the hosted PBX system with WebRTC.

[If you are new around here, then you should know I’ve been writing about WebRTC lately. You can skim through the WebRTC post series or just read what WebRTC is all about.]

 

There’s no doubt that WebRTC is disrupting many industries. One of the obvious ones is enterprise communications, and in this space, an area that has got little attention on my end (sorry) is the SMB – where a small company needs a phone system to use and wants to look big while at it.

Moshe Maeir, Founder at Fone.do, just launched the service out of Alpha. I have been aware of what they were doing for quite some time and Moshe took the time now that their service is public to answer a few of my questions.

 

What is Fone.do all about?

Fone.do is a WebRTC based phone system for small businesses that anyone can set up in 3 minutes. It replaces both legacy PBX systems that were traditionally based in your communications closet and also popular Hosted PBX systems. Businesses today are mobile and the traditional fixed office model is changing. So while you can connect a SIP based IP phone to our system, we are focused on meeting the needs of the changing business world.

 

Why do small businesses need WebRTC at all? What’s the benefit for them?

You could ask the same question about email, social networks etc. Why use web based services at all? Does anyone want to go back to the days of “computer programs” that you downloaded and installed on your computer? Unfortunately, many still see telephony and communications as a stand alone application. WebRTC changes this. Small businesses can communicate from any place and any device as long as they have a compatible platform.

 

What excites you about working in WebRTC?

Two things. Not sure which is more exciting. First of all. If I build something great – the whole world is my potential market. All they need is a browser and they are using our system in 3 minutes. The other exciting aspect is that telephony is no longer a closed network. Once you are on the web the potential is unlimited. You can easily connect your phone system to the wealth of data and services that already exist on the web and take communications to a new level. In fact, that is why we hired developers who knew nothing about telephony but were experienced in web development. The results are eye opening for traditional telecom people.

 

I know you’re a telecom guy yourself. Can you give an example how working with web developers was an eye opener to you?

There are many. The general attitude is just do it. With legacy telecom, everything has the accepted way of doing things and you don’t want to try  anything new without extended testing procedures. A small example – in the old VoIP days writing a “dial plan” was a big thing. When we came to this issue on Fone.Do, one of the programmers naturally googled the issue and found a Google service that will automatically adapt the dial plan based on the users’ mobile number. 1-2-3 done.

 

Backend. What technologies and architecture are you using there?

Our main objective was to build an architecture that will work well and easily scale in the cloud (we are currently using AWS). So while we have integrated components such as the Dialogic XMS and the open source Restcomm, we wrote our own app server which manages everything. This enables us to freely change back end components if we need to.

 

Can you tell us a bit about your team? When we talked about it a little over a year ago, I suggested a mixture of VoIP and web developers. What did you end up doing and how did it play out?

All our developers are experienced front end and backend web programmers with no telecom experience. However, our CTO who designed the system has over 15 years of experience in Telecom, so he is there to fill in any missing pieces. There were some bumps at the beginning, but I am very happy we did it this way. You can teach a web guy about Telephony, but it is very hard to get a Telecom guy to change his way of thinking. Telecom is all about “five nines” and minimizing risk. Web development is more about innovation and new functionality. With today's technology it is possible to innovate and be almost as reliable as traditional telephony.

 

Where do you see WebRTC going in 2-5 years?

Adoption is slower than I expected, but eventually I see it as just another group of functions in your browser that developers can access as needed.

 

If you had one piece of advice for those thinking of adopting WebRTC, what would it be?

WebRTC is here. It makes your user experience better – so what are you waiting for?

 

What’s next for Fone.do?

We recently released our alpha product and we are looking to launch an open beta in the next couple of months. Besides a web based “application”, we also have applications for Android and iOS.

The interviews are intended to give different viewpoints than my own – you can read more WebRTC interviews.

The post Fone.do and WebRTC: An Interview With Moshe Maeir appeared first on BlogGeek.me.

4 Good Reasons for Using HTTP/2

Tue, 10/06/2015 - 12:00

HTTP/2 is too good to pass up.

If you don’t know much about HTTP/2 then check this HTTP/2 101 I’ve written half a year ago.

In essence, it is the next version of how we all get to consume the web over a browser – and it has been standardized and deployed already. My own website here doesn’t yet use it because I am dependent on the third parties that host my service. I hope they will upgrade to HTTP/2 soon.

Watching this from the sidelines, here are 4 good reasons why you should be using HTTP/2. Not tomorrow. Today.

#1 – Page Load Speed

This one is a no-brainer.

A modern web page isn’t a single resource that gets pulled towards your browser for the pleasure of your viewing. Websites today are built with many different layers:

  • The core of the site itself, comprising your good old HTML and CSS files
  • Additional JavaScript files – either because you picked them yourself (JQuery or some other piece of interactive code) or through a third party (Angular framework, ad network, site tracking code, etc.)
  • Additional JavaScript and CSS files coming from different add-ons and plugins (WordPress is fond of these)
  • Images and videos. These may be served from your server or via a CDN

At the time of writing, my own website’s homepage takes 116 requests to render. These requests don’t come from a single source, but rather from a multitude of them, and that’s when I am using weird hacks such as CSS sprites to reduce the number of resources that get loaded.

There’s no running away from it – as we move towards richer experiences, the resources required to render them grows.

A small HTTP/2 demo that CDN77 put in place shows exactly that difference – it loads 200 small images onto a page over either HTTP/1.1 or HTTP/2, demonstrating the improved load time of the HTTP/2 page.

HTTP/2 has some more features that can be used to speed up web page serving – we just need to collectively start adopting it.
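If you control your own server rather than waiting for a hosting provider, turning HTTP/2 on can be fairly painless. A minimal sketch using Node.js' built-in http2 module (available in newer Node versions); the certificate paths are placeholders, since browsers only speak HTTP/2 over TLS:

```javascript
// Minimal sketch: an HTTP/2 server using Node.js' built-in http2 module.
// Certificate paths are placeholders – browsers require TLS for HTTP/2.
const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('/path/to/privkey.pem'),
  cert: fs.readFileSync('/path/to/cert.pem'),
});

server.on('stream', (stream, headers) => {
  // All requests for a page share one multiplexed connection – no more
  // juggling a handful of parallel HTTP/1.1 connections per host.
  stream.respond({ ':status': 200, 'content-type': 'text/html; charset=utf-8' });
  stream.end('<h1>Hello over HTTP/2</h1>');
});

server.listen(8443);
```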

#2 – Avoiding Content Injection

In August, AT&T was caught using ad injection. Apparently, AT&T ran a pilot where people accessing the internet via its WiFi hotspots in airports got ads injected into the pages they browsed.

This means that your website’s ads could be replaced with those used by a third party – who will get the income and insights coming from the served ads. It can also mean that your website, that doesn’t really have ads – now shows them. The control freak that I am, this doesn’t sound right to me.

While HTTP/2 allows both encrypted and unencrypted content to be served, only the encrypted variant is supported by browsers today – so when you deploy HTTP/2, you get the added benefits of encryption. This makes it hard to impossible to inject third-party ads or content into your site.

#3 – Granularity

During that same August (which was the reason this post was planned to begin with), Russia took the stupid step of blocking Wikipedia. This move lasted less than a week.

The reason? Apparently inappropriate content on a Wikipedia page about drugs. Why was the ban lifted? You can't really block a site like Wikipedia and get away with it. And since Wikipedia uses encryption (SPDY, a predecessor of HTTP/2 in a way), Russia couldn't block specific pages on the site – it is an all-or-nothing game.

When you shift towards an encrypted website, external third parties can’t see what pages get served to viewers. They can’t monetize this information without your assistance and they can’t block (or modify) specific pages either.

And again, HTTP/2 as browsers support it is encrypted by default.

#4 – SEO Juice

Three things that make HTTP/2 good for your site’s SEO:

  1. Encrypted by default. Google is making moves towards giving higher ranking for encrypted sites
  2. Shorter page load times translate to better SEO
  3. As Google migrates its own sites to HTTP/2, expect to see them giving it higher ranking as well – Google is all about furthering the web in this area, so they will place either a carrot or a stick in front of business owners with websites

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post 4 Good Reasons for Using HTTP/2 appeared first on BlogGeek.me.

How NOT to Compete in the WebRTC API Space

Mon, 10/05/2015 - 12:00

Some aspects are now table stakes for WebRTC API Platforms.

There are 20+ vendors out there who are after your communications. They are willing to take on the complexity and maintenance involved with running the real-time voice and video that you may need in your business or app. Some are succeeding more than others, as it always has been.

So how are you, as a potential customer, going to choose between them?

Here are a few things I’ve noticed in the two years since I first published my report on this WebRTC API space:

  1. Vendors are finding it hard to differentiate from one another. Answering for themselves the question of what they do better than anyone else in this space (or at least better than the vendors they see as their main competitors) isn't easy
  2. Vendors oftentimes don't focus. They try to be everything to everyone, ending up being nothing to most. You can see what they are good at if you look from the sidelines – how they pitch, operate, think – but they can't see it themselves
  3. Vendors attempt to differentiate over price, quality and ease of use. This is useless.

Table Stakes

Most vendors today have pretty decent quality with a set of APIs that are easy to use. Pricing varies, but is usually reasonable. While some customers are sensitive to pricing, others are more focused on getting their proof of concept or initial beta going – and there, the price differences don't matter in the short to medium term anyway.

The problem is mainly vendor lock-in, where starting to use a specific vendor means sticking with it due to high switching costs later on. But then, savvy developers use multiple vendors or prepare adapter layers to abstract that vendor lock-in.
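As a rough sketch of what such an adapter layer can look like (the vendor SDK names and methods below are made up for illustration), the application codes against one thin interface and each vendor hides behind its own adapter:

```javascript
// Rough sketch – vendorASdk / vendorBSdk and their methods are hypothetical,
// standing in for whichever WebRTC API platforms you actually integrate.
function createCallProvider(vendor) {
  switch (vendor) {
    case 'vendorA':
      return {
        join:  (roomId, localStream) => vendorASdk.connect(roomId, { stream: localStream }),
        leave: () => vendorASdk.disconnect(),
        onRemoteStream: cb => vendorASdk.on('stream', cb),
      };
    case 'vendorB':
      return {
        join:  (roomId, localStream) => vendorBSdk.rooms.join(roomId, localStream),
        leave: () => vendorBSdk.rooms.leaveAll(),
        onRemoteStream: cb => vendorBSdk.onTrack(cb),
      };
    default:
      throw new Error('Unknown vendor: ' + vendor);
  }
}

// Application code stays vendor-agnostic, which keeps switching costs down.
// attachToVideoElement and localStream are assumed to be your own UI code and captured stream.
const call = createCallProvider('vendorA');
call.onRemoteStream(stream => attachToVideoElement(stream));
call.join('daily-standup', localStream);
```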

Vendors need to think more creatively about how they differentiate themselves – from carving out a niche to offering unique value.

My Virtual Coffee

This is the topic for my first Virtual Coffee session, which takes place on October 14.

It is something new that I am trying out – a monthly meeting of sorts. Not really a webinar. But not a conference either.

Every month, I will be hosting an hour long session:

  • It will take place over a WebRTC service – I am dogfooding
  • It will cover a topic related to the WebRTC ecosystem (first one will be differentiation of WebRTC API Platform vendors)
  • It will include time for Q&A. On anything
  • Sessions will be recorded and available for playback later on
  • It is open to my consulting customers and those who purchased my report in the past year

If you are not sure if you are eligible to join, just contact me and we’ll sort things out.

I’d like to thank the team at Drum for letting me use their ShareAnywhere service for these sessions – they were super responsive and working with them on this new project was a real joy for me.

Virtual Coffee #1

Title: WebRTC PaaS Growth Strategies – how WebRTC API vendors differentiate and attempt to grow their business
When: Oct 14, 13:30 EDT
Where: Members only

What's next?

Want to learn more about this space? The latest update of my report is just what you need

 

The post How NOT to Compete in the WebRTC API Space appeared first on BlogGeek.me.
