BlogGeek.me – The leading authority on WebRTC

Unified Communications is Overrated

Tue, 01/12/2016 - 12:00

Who needs to communicate in enterprises anyway?

Everyone.

Communication is… overrated

But do we really need to treat it as if it is the most critical piece of the enterprise world?

I use multiple systems to make my calls these days. They are WebRTC based or proprietary apps such as Skype, WebEx or GoToMeeting. I grumble when I have to use a proprietary system and install stuff on my laptop, but that’s life.

It was like that for me even when working for enterprises in the past – big and small. Somehow, you always need to have a “phone system” and be reachable. But other than that? I’d say “omnichannel” as a buzzword has stuck to the contact center but is just as important in unified communications.

But in Unified Communications, Omnichannel means something really different – it means that you can now reach out to people on lots of different channels and mediums – picking the ones most suitable for the task – which more often than not ends up being different from what the corporate IT has decided you should be using.

And you know what? I couldn’t be bothered with it.

The essence of Unified Communications is the here and now. Real time communications. If a minute passed, it is no longer interesting. It is lost.

Hangouts. Talky. A phone call (international or otherwise). Skype. Anything else.

Just pick one and let’s meet.

Enterprise Messaging though is a different story.

It isn’t focused on the here and now, but rather on collecting data and making it accessible. It is about synchronizing teams and aligning them – asynchronously.

And “omnichannel” there? It means integrations with anything and everything that is enterprise software.

Which makes it the point of access for an employee to his daily life in the office.

It is a lot more sticky these days than unified communications.

Unified Communications is on another rebranding rampage. We used to call it “Convergence” a decade or two ago. And when that felt old, we started calling it Unified Communications. There are analysts that are now coining the term WCC – Workstream Communications and Collaboration. A mouthful that simply says Unified Communications needs to look at the Enterprise Messaging space and copy it.

The end result will still be a focus on the here and now. And it will still be overrated.

 


The post Unified Communications is Overrated appeared first on BlogGeek.me.

Can Wire Succeed Where Talko Failed?

Mon, 01/11/2016 - 12:00

Challenges ahead.

A little over a year ago, I wrote about 3 startups: Talko, Wire and Switch.

All of them looked promising. All were using WebRTC.

In 2015, Switch raised $35 million, along with quite a few successful deployments in businesses big and small.

A month ago, Talko got acquired by Microsoft. I’ve interviewed the Talko team here in the past. Selling to Microsoft, shutting the company down, with few objections from customers – it all points to a single conclusion: Talko was a failure when it came to the business side of things. It probably had solid technology – otherwise, why would Microsoft acquihire the team and fold it into Skype? I am sure Ray Ozzie and the team of Microsoft veterans in Talko added to this acquisition, but there was no other value in this transaction.

The Talko Team expresses it best on their updated homepage:

However, as engaged as many of you have been, the reality is that the broad-based success of communications apps tends to be binary: A small number of apps earn and achieve great viral growth, while most fall into some stable niche.

Talko didn’t grow fast enough or big enough. Clementine’s acquisition by Dropbox is similar. A communication solution geared towards team/group/enterprise communications gets acquired for its team with the service being left behind, never to be seen again.

And that’s in the less competitive domain of the enterprise. What will happen with Wire, the third company I wrote about?

On Android, Wire reportedly has 100K-500K installs. Assuming iOS has twice as many (I am trying to be positive), that still falls way short of any of the messaging services we usually hear about – they are measured in the hundreds of millions. Of monthly active users – not installs.

It is hard to see how Wire can change its abysmal future without a serious pivot or a drastic change in current market trends. Some will say this is a matter of a directory service and network effects. I think it is a matter of strategy and luck. Where Wire failed to attract the crowds, a different messaging service – Telegram – succeeded, with 50M-100M installs on Android and a reported 60M monthly active users.

Wire was formed in 2012 and Telegram in 2013. So we can’t say Telegram had any head start here.

WebRTC makes it too easy to build and launch a communication service, which in turn, makes it hard to build a viable business with it. The role of product managers and people who need to think of the business case is more important than the technologists building the service when it comes to WebRTC. At the same time, finding good developers who grok WebRTC isn’t easy either.

2016 is going to be crucial for Wire.

What do you see for your initiative in 2016? Do you have a business case and route to market and money, or are you tinkering with the technology, assuming that if you build it they will come?

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post Can Wire Succeed Where Talko Failed? appeared first on BlogGeek.me.

When is 44.5 Billion a Small Number?

Thu, 01/07/2016 - 12:00

When it is the wrong metric to track.

Microsoft playing the number games with Edge adoption stats

44.5 billion.

minutes.

That’s how long people have been using Microsoft Edge “just last month”, according to Microsoft:

Over 44.5 billion minutes spent in Microsoft Edge across Windows 10 devices in just the last month.

That other number of 200 million monthly active devices using Windows 10 is much more impressive.

I am interested in Edge due to WebRTC and ORTC. It is one of the missing pieces of our puzzle to get adoption (or at least that’s what we’ve been told).

So how can 44.5 billion minutes be so unimpressive?

Do the math.

Let’s assume only half of Windows 10 users make use of Microsoft Edge.

This gets us to an average of 445 minutes a month per user, placing it at less than 15 minutes a day (!)
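
Here is the back-of-the-envelope arithmetic – the 50% Edge usage share and the 30 day month below are my assumptions, not Microsoft’s numbers:

```javascript
// Back-of-the-envelope: 44.5 billion minutes spread across the assumed Edge
// user base (half of 200 million Windows 10 devices), over a 30 day month.
const totalMinutes = 44.5e9;
const assumedEdgeUsers = 200e6 * 0.5;
const minutesPerUserPerMonth = totalMinutes / assumedEdgeUsers; // ~445
const minutesPerUserPerDay = minutesPerUserPerMonth / 30;       // ~14.8
console.log(minutesPerUserPerMonth, minutesPerUserPerDay);
```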

How many of these minutes are spent with an idle browser? I have a laptop and a desktop open 24/7 with Chrome running on them – even when I am engaged in other applications.

Microsoft decided to announce a largish number to hide the fact that Microsoft Edge isn’t really getting the love and adoption they expected, which is sad. I’ve used it a couple of times on my own Windows 10 laptop. It does what it is supposed to do and does it well, but that’s about it.

The challenge is migrating from Chrome. It stores my credentials to the many sites I visit, it has that nice search bar that oftentimes just finds the URL I need without really searching (it automatically completes from its history), and there are the few extensions I’ve got installed. All in all, it does the work. It is bloated and a memory hog, but the time when this mattered (a year or two ago) has passed already, so there’s very little incentive for me to switch browsers.

Microsoft is killing Internet Explorer 8, 9 and 10 on the same day next week, pushing businesses towards Internet Explorer 11 or Microsoft Edge. This might gain them a percentage point or two more in adoption of Microsoft Edge – not nearly enough. Microsoft will probably announce end of life for Internet Explorer 11 in a year or two – the sooner the better if they want Microsoft Edge to grow.

What else can Microsoft do to improve its position? I don’t know. I don’t believe they can. The opportunity to surpass Google Chrome has come and gone. They will need to wait for the next opening when Google falters with Chrome, or make something enticing enough for people to switch. It is sad, as Microsoft Edge is technically sound – it made browsers interesting again.

For WebRTC, Microsoft Edge still makes no difference at all. We’ve seen a few announcements of ORTC support by some vendors, but that’s about it. There’s no urgency in vendors to support it. The discussions are still about Internet Explorer when it comes to WebRTC.

Where does that leave us?

  • Companies waiting for Microsoft to adopt WebRTC will continue to wait
  • Those who haven’t waited have made the correct choice – deal with what’s available and don’t wait for the forces that be to save you
  • While Apple might get WebRTC right, Microsoft hasn’t. Introducing ORTC into Internet Explorer is what the market needs, but it won’t be done by Microsoft
  • Mobile is unaffected, as consumption there is done by apps, so browser adoption issues are irrelevant for most

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post When is 44.5 Billion a Small Number? appeared first on BlogGeek.me.

Messaging and Push Notifications: Best Practices

Tue, 01/05/2016 - 12:00

Time to fix the stupid notifications of all them apps. Especially messaging ones.

Push notification for messaging apps – not as easy as you thought

When it comes to push notifications, messaging apps are probably the most chatty applications out there (not including Candy Crush).

I’ve had my share of bad experiences with messaging and notifications to know what works for me and what annoys the hell out of me. This is also where you see the true leaders shine and the rest slumbering along.

IP messaging is considered by most developers a rather simple thing to implement. It isn’t.

Here are a few things you should incorporate into your push notifications implementation when you want to deal with messaging capabilities.

#1 – Synchronize devices

Your service is sending me messages? Great.

You are aware I am the proud owner of a smartphone, tablet, laptop and PC? And that I generally connect through all of them interchangeably.

So when I am receiving a message (or sending one for that matter), it would be nice if said messages would magically appear in all of my devices. And in a timely fashion.

One of the reasons I’ve been using Skype less and less this year is that it just didn’t synchronize properly – not showing messages on all devices, or popping notifications on the app on my laptop a day or two after I’ve already received them on my phone or on the PC. It seemed like Skype just wasn’t seriously prepared for this world of multiple devices per person.

Assume that if I sign in from a new device, I don’t want a “fresh” start – I want all of the data and context that is available to me on my other devices to be available in this new device of mine as well.

#2 – Clear notifications. Everywhere

You know that fuzzy feeling inside when you receive an email? My whole house is pinging (or used to ping), each device trying its best to be the first to announce that incoming email.

The main problem is that handling the notification (=opening it) on one device didn’t necessarily clear it from all other devices.

Google Mail got it right after a year or two on Android. WhatsApp got it right the first time – it was almost a magical feeling when they came out with their web interface and messages got cleared on the web or on the phone automatically – and FAST.

The most annoying thing is an app that doesn’t clear its notifications. I know there are many who don’t care, but I like my notifications windows clean. Going over multiple devices to clean the same message is a show stopper for me these days (and again, up until a few months ago, Skype didn’t get this one right).

#3 – Mobile and web

Notifications should occur both in mobile apps and in web browsers. Modern browsers already support notifications, so make sure to utilize them when needed.

You need to remember that knowledge workers may sit all day in front of a computer – so why not leverage that to show notifications there instead of making them pick up the phone?
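
On the web side, a minimal sketch of what this can look like with the browser’s Notification API – assuming an HTTPS page and a user who grants permission:

```javascript
// Minimal sketch: show a browser notification for an incoming chat message.
// Assumes an HTTPS page and that the user grants permission.
async function notifyNewMessage(from, text) {
  if (!("Notification" in window)) return;              // no browser support
  if (Notification.permission !== "granted") {
    const permission = await Notification.requestPermission();
    if (permission !== "granted") return;               // user said no
  }
  const n = new Notification("New message from " + from, {
    body: text,
    tag: "chat-" + from,  // reusing the tag replaces older notifications from this sender
  });
  n.onclick = () => window.focus();                     // bring the app back into view
}
```

Reusing the tag per conversation keeps the notification area clean instead of stacking one entry per message – which ties in nicely with the point about clearing notifications above.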

#4 – How urgent is it again?

Not. Every. Single. Message. Is. The. Same.

How are you going to report them? Or even notify them?

You may have them notified separately. Or bunch them up under a single icon.

Slack just added a Do Not Disturb feature. Great. I can now silence notifications in Slack. The problem is, they decided that my work day is 8am-10pm. Anything outside this timeframe doesn’t get notified to me. It would have been fine if, when 8am arrived, they popped up a notification about the things I’ve missed.

Groups in WhatsApp can be silenced, and so can specific people. You can even do it for a period of time (I don’t really care about kindergarten related chatter when I am abroad). But it is manual. It would have been so much better if WhatsApp somehow magically decided what I prefer and what I don’t when it comes to notifications.

#5 – How do I reply?

The vanilla Android SMS application enabled me to mark messages as read – right from the notification. No need to enter the app just so it knows I’ve read it. Some other apps enable replying to notifications without getting into the app.

What are you doing regarding your app? Is entering the app the only thing I can do, or can I act from the notification itself? (guess what I prefer)

#6 – Where in the view stack will I be landing?

Got the notification on my phone. Pressed it. Where will it lead me?

LinkedIn’s terrible app (even the latest incarnation of it) does a great job at putting you in the wrong view – try accepting an invite to connect and you’ll end up preferring to open it inside your browser.

Skype gets you to the conversation. Pressing the back button on Android leaves the app. But if you then enter the Skype app explicitly, after several incoming notifications of a group conversation there – it will lead you to the same conversation over and over again – at least as many times as you pressed on the notification of new messages in that group. Something is terribly wrong there.

WhatsApp does a decent job here – there’s a single WhatsApp notification for everything. If all notifications are from the same conversation – that’s where you’ll land. If there are multiple conversations you are being notified of – you’ll land at the WhatsApp homepage. Oh – and if you press back? It takes you from the conversation view to the homepage of WhatsApp before letting you leave the app. Gmail does the same.

#7 – Think Offline

Bonus points for handling unconnected use cases. Many miss this one when it comes to notifications.

When you press the notification, the app is launched and it goes to the server to grab the actual reason for notification. But what if I am INSIDE an elevator? Some apps do a miserable job at making sure that the launched app can show me the message without being connected (you already got me that notification – why not get the whole damn message while at it?)

Why is it important?

IP Messaging is probably one of those areas where developers go NIH. They know it all. How can sending a couple of messages be hard? Oh – you also need push notifications on top? No worries! There’s that simple API in iOS that does that.

But that’s usually only the beginning of the saga when it comes to IP Messaging and push notifications. If you decide to develop it in-house – you had better be ready to write down the exact spec in detail to get it right. Otherwise, find someone who does that for a living.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post Messaging and Push Notifications: Best Practices appeared first on BlogGeek.me.

WebRTC State of the Market: End of 2015

Wed, 12/30/2015 - 12:00

Consider this my end of year review for WebRTC in 2015.

Tomorrow will mark the last day of 2015. As we head into 2016, it is time to review what we had this year in WebRTC. For me this year proved to be a real rollercoaster, but somehow I get a feeling 2016 won’t be any different.

I dug up some of the statistics I regularly collect, with differences and trends in 2015 in mind. From there, the road to an infographic about the WebRTC State of the Market was a short one. For those who have membership access to my site, I will be spending the next Virtual Coffee discussing these findings in detail.

Feel free to share and embed this infographic (click to enlarge or download the PDF) if you wish:

See you all in 2016!

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

 

The post WebRTC State of the Market: End of 2015 appeared first on BlogGeek.me.

The Rise of WebRTC Broadcast and Live Streaming

Mon, 12/28/2015 - 12:00

WebRTC Broadcast will be all the rage in 2016.

As I am working my way through analyzing the various use case categories for WebRTC, I decided to check what was important in 2015. The “winner” in attention was a relatively new category of WebRTC broadcast – one in which WebRTC is used to send a video stream to many viewers. These viewers can be passive, or they can interact with the creator of the broadcast.

Up until 2014, I had 4 such vendors in my list. 2015 brought 15 new vendors to it – call it “the fastest growing category”. And this is predominantly a US phenomenon – only 3 of the new vendors aren’t US based startups.

Periscope and Meerkat are partly to “blame” here. The noise they made in the market stirred others to join the fray – especially if you consider many of them are based in San Francisco as well.

TokBox just introduced Spotlight – their own live broadcast APIs – for those who need them. At its heart, Spotlight enables the types of interactions that we see on the market today for these kinds of solutions:

  1. Ability to produce video by using WebRTC – either from a browser or a mobile app
  2. Ability to view the video content as a passive participant – usually via CDN by way of Flash, HLS or MPEG-DASH
  3. Ability to “join” the producer, creating a 1:1 video chat or a video conference that gets broadcasted to others

Here are some of my thoughts on this new emerging category:

  • Most of the focus today is using WebRTC broadcast on the producer’s side. The reasons are clear:
    • Flash is dying. HLS and MPEG-DASH are replacing it on the viewer side, but what is going to replace it on the producer side? Some go for specialized broadcasting applications, but WebRTC seems like a good alternative for many
    • This is where vendors have more control – they can force producers to use a certain browser – it is much harder to force the viewers
    • WebRTC plays nice in browsers and mobile. No other technology can achieve that today
  • The producer side is also where most constraints/requirements come from today:
    • You may want to “pull in” a viewer for an interview during a session
    • Or have a panel of possible speakers
    • You may wish to split the producer from the “actor”, facilitating larger crews
    • All these fit well with the capabilities that WebRTC brings to the table versus the proprietary alternatives out there
  • H.264 is the predominant requirement on the viewer side at the moment. VP9 is interesting. This means:
    • Transcoding is necessary in the backend prior to sending the video to viewers
    • H.264 in Chrome can improve things for the vendors
  • There’s a race towards zero-latency. Vendors are looking to reduce the 10-60 seconds of delay inherent in video streaming technologies to a second or less (not sure why)
    • This would require replacing the viewer end of the architecture with a WebRTC one
    • It will also necessitate someone building a backend that is optimized for this use case – something that wasn’t researched enough up until today
  • Peer assisted delivery vendors such as Peer5 and Streamroot are another aspect. These kinds of technologies sit “on top” of a video CDN and use WebRTC’s data channel to improve performance (see the sketch right after this list)
  • I’ve started noticing a few audio-only vendors joining the game as well. This will grow as a trend. The audio based solutions tend to be slightly different than the video ones and the technologies they employ are radically different. The technologies and architectures may converge, but not in 2016
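
To make the data channel piece concrete, here’s a minimal, self-contained loopback sketch – two peer connections in the same page exchanging a binary chunk. Real peer-assisted CDNs run this between different users, with their own segment-scheduling logic on top:

```javascript
// Self-contained loopback sketch: two RTCPeerConnections in the same page
// exchange a binary chunk over a data channel - no signaling server needed here.
const pc1 = new RTCPeerConnection();
const pc2 = new RTCPeerConnection();

// Hand ICE candidates straight to the other peer connection
pc1.onicecandidate = e => e.candidate && pc2.addIceCandidate(e.candidate);
pc2.onicecandidate = e => e.candidate && pc1.addIceCandidate(e.candidate);

const channel = pc1.createDataChannel("segments");
channel.binaryType = "arraybuffer";
channel.onopen = () => {
  const fakeSegment = new Uint8Array(64 * 1024);        // stand-in for a media chunk
  channel.send(fakeSegment.buffer);
};

pc2.ondatachannel = e => {
  e.channel.onmessage = msg =>
    console.log("received", msg.data.byteLength, "bytes from the peer");
};

// In-page offer/answer handshake
(async () => {
  const offer = await pc1.createOffer();
  await pc1.setLocalDescription(offer);
  await pc2.setRemoteDescription(offer);
  const answer = await pc2.createAnswer();
  await pc2.setLocalDescription(answer);
  await pc1.setRemoteDescription(answer);
})();
```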

2016 will be a continuation of what we’ve seen during 2015. More companies trying to define what live WebRTC broadcast looks like and aiming for different types of architectures to support it. In most cases, these architectures will combine WebRTC in them.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post The Rise of WebRTC Broadcast and Live Streaming appeared first on BlogGeek.me.

The Browser Wars are Back

Tue, 12/22/2015 - 12:00

Is it just me or are browsers fun again?

Who would have believed? Microsoft releasing their JavaScript engine as open source. And under a permissive MIT license.

While there are many browsers and vendors out there, there are probably only 4 that matter: Chrome (Google), Firefox (Mozilla), Edge (Microsoft) and Safari (Apple).

Who haven’t I included?

  • Microsoft’s Internet Explorer. Microsoft is actively transitioning away from it to Edge
  • The rest of the pack, as far as I can tell, are nice wrappers around Chromium – and are negligible in their market share anyway

What should we expect in 2016 from the browsers? A lot.

Google Chrome

For Google, Chrome is an important piece of the puzzle. It lives in the web and the more control points it has over access to information the better positioned it is.

The ongoing activity of Google in WebRTC is part of the picture, and probably not the biggest one.

Google is the company with the least regard for legacy code out there. When something requires fixing, Google developers are not afraid to rewrite and refactor large components, and management allows and probably even encourages this behavior – something I haven’t seen anywhere else.

A few examples for recent years:

  • Google forked WebKit into Blink, essentially replacing the page rendering inside the browser. The first order of the day after the fork? Spring cleaning – removing code that isn’t necessary for Chrome
  • Google switched EVERYTHING from OpenSSL to BoringSSL. OpenSSL seemed to have some vulnerabilities lately, so Googlers took the time to fork it, clean it up – and deploy the new project across Google
  • Introducing SPDY and getting HTTP/2 out the door

That said, it seems that Google has been somewhat complacent in the areas of speed and size with Chrome. I am sure the Chrome team is aware of it and working hard to fix it, but the results haven’t been encouraging enough. This will change – mostly because of the actions of the other browser vendors.

Mozilla Firefox

Mozilla is in transition. From relying on Google as its main benefactor to spreading the risks.

In the past few months though, Mozilla has started trimming down its projects.

These changes indicate that Mozilla understood it can’t just try and replicate every cool new Google project and open source it – it will now focus on making Firefox better. This is a much needed focus, with Firefox slipping in market share for quite some time now.

On the browser front, the notable changes Firefox is making are around privacy and the private browsing mode.

Microsoft Edge

Edge is new. It is a complete rewrite of what a browser is. It is speedy, clean and with huge potential. It has its own adoption challenges to overcome (mainly people comfortable enough with Chrome and not caring to try out Edge).

What to do? Microsoft just open sourced the JavaScript engine in Edge – Chakra. It shows some interesting performance results that seem to rival Chrome’s V8. The more interesting aspect of it is the clear intent to get Chakra into Node.js as a V8 alternative. Not sure if it will work, but it does have merit. It shows me that:

  1. A browser/webapp today is split into two – frontend and backend (we already knew that). More often than not, these days the backend is based on a Node.js framework. Microsoft wants to be a part of that backend, probably to end up licensing Windows 10 on servers
  2. JavaScript today is more than a browser scripting language. A JavaScript engine’s health/popularity/importance relies on the ecosystem around it – which is why Microsoft ended up open sourcing it

I am sure there’s an engineer at Google already tasked with reviewing the code of Chakra once it gets a public git repository.

Edge is trying to push the envelope. This will challenge Google further with Chrome – always a good thing.

Apple Safari

Safari seems to take second place at Apple. It is working, but not much is said or done about it.

We hear a lot of rumblings about WebRTC in Safari lately. How this will find its way into Safari, iOS and Mac is anyone’s guess. The bigger question is whether this will be the only significant browser change introduced by Apple, or part of a larger overhaul.

Why is this important?

The web isn’t standing still. It is evolving and changing. Earlier this year, WebAssembly was announced – an effort to speed up the interactive web.

While many believe that apps have won over the web when it comes to development, we need to remember two things:

  1. There are times when an app won’t do – as Benedict Evans phrases it well in Apps versus the web: “Do people want to put your icon on their home screen?” – and sometimes they just don’t
  2. Apps are sometimes built using HTML5 – usually because a developer wants a single code base for all platforms or just needs easier access for his service from a browser and mobile apps at the same time

An interesting road ahead of us.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post The Browser Wars are Back appeared first on BlogGeek.me.

Where to find Quality WebRTC Resources

Mon, 12/21/2015 - 12:00

It’s easy, as long as you know where to look for it.

This was published yesterday. Oftentimes, the things I read out there about WebRTC sound just like this conversation from Dilbert’s life.

WebRTC is elusive. It is located in the cracks between VoIP and the web – a place where most people are just clueless. My own pedigree is VoIP. About 6 years ago, as an “aging” CTO trying to build a cloud service with an API for developers running a VoIP service, I was given an important lesson – there’s much to be learned from a 24 year old kid with milk teeth. In the span of a year and a half I got introduced to agile methodologies, internet scale, continuous deployment and a slew of other techniques – none of them were given the terms we use today – but they were all there. It helped me later in understanding how and why WebRTC is so transformative.

As we head into 2016, I guess it is time to state a few of the great resources out there for WebRTC – the places I rely on in my own reading about WebRTC.

The Bloggers

Out of the people out there that cover WebRTC, there are 3 that I make it a point to read. All of them are good friends of mine.

The Vendors

Most company blogs suck. Big time. They are boring, and usually read like brochures or press releases. There are a few decent corporate blogs covering WebRTC – some of them can be considered mandatory reading.

TokBox

TokBox has the best corporate blog all around if what you are looking for is WebRTC related information. Now that they have recruited Philipp Hancke, they will probably improve further.

Between their new offerings and feature announcements are gems of information in the form of whitepapers on certain verticals and insights on WebRTC from the service they operate. They also run TechToks that get recorded and published on YouTube.

callstats.io

The callstats.io blog is another great resource, especially when it comes to covering getstats() related stuff and media quality. Highly recommended.

AT&T

I’ve written my own guest post on the AT&T Developers blog once or twice, so I know how they operate. While being a large corporation has a lot of limitations, when they publish content about WebRTC or adjacent technologies – it is worth the time to read.

A testament to that is the recent series of WebRTC UX/UI posts they have commissioned from &yet – mandatory reading for anyone who delves into web apps for WebRTC.

Sinch

While Sinch’s blog hasn’t been too interesting when it comes to WebRTC lately, earlier this year they had great content to share. Lately, it tends to be around use cases of their customers – totally interesting, but from a different angle.

I’d register on their blog if I were you, to stay posted. I am sure they’ll have interesting articles for us next year as well.

WebRTC Digest & Blacc Spot Media

Blacc Spot Media started WebRTC Digest, and they also run their own Blacc Spot Media blog. Both are great resources with good content.

The digest site is all about acquisitions and money raising in the space, while Blacc Spot Media tries to cover the industry and the ecosystem.

At times, there needs to be some further validation of the vendors being written about there (some aren’t really doing WebRTC but are in the real time space), but all in all, it is one of the better resources out there.

webrtcHacks

By far the best place for WebRTC developers to go.

In-depth and timely content.

If you aren’t subscribed – then please do.

WebRTC Weekly

If you don’t want to subscribe to too many resources, and are in need of a single source, then Chris Kranky and I run the WebRTC Weekly. Subscribe by email to receive one email a week with links to the relevant articles and posts from all over the web related to WebRTC.

There are three reasons why something doesn’t get included in the WebRTC Weekly:

  1. Trash content, which either isn’t accurate or is just too shallow
  2. Repetitive content – something that was already covered in the weekly (usually at a higher quality)
  3. We missed it… email us with things you think we should include

The post Where to find Quality WebRTC Resources appeared first on BlogGeek.me.

Twilio and WebRTC: An Interview with Al Cook

Thu, 12/17/2015 - 20:55

Twilio: Al Cook

December 2015

Communication API

Cloud Communication APIs.

[If you are new around here, then you should know I’ve been writing about WebRTC lately. You can skim through the WebRTC post series or just read what WebRTC is all about.]

API platforms fascinate me. Especially communication API platforms. You can’t get any bigger than Twilio these days. This year, they’ve announced and launched a slew of new capabilities – task routing, video calling, IP messaging and a lot of enhancements to their existing services.

I’ve been wanting to land an interview with Twilio for quite some time. I was happy when Al Cook, Director of Product Marketing at Twilio, obliged. Here’s what he had to say.

 

What is Twilio all about?

Twilio is a cloud communications platform. We provide programmable building blocks that developers use to embed communications into their mobile and web apps – from voice, messaging, and video to authentication. So when you are communicating with your Uber driver via text or anonymous phone call, calling Hulu customer support, or shopping via text with the help of your Nordstrom personal shopper, that’s Twilio. Or to give a WebRTC example – when you call a customer support team powered by Zendesk, the agent is talking to you over a WebRTC connection powered by Twilio. We have over 700,000 developers generating over 50 billion API transactions a year. In WebRTC we’ve powered over half a billion minutes of WebRTC to date.

 

Twilio Video went to public beta today. You’ve been in private beta for a while. How is it going? What have you learned?

That’s right, the private beta started in May and we collaborated with developers to build the right solution, with the right developer experience. Video is in public beta as of now. Now anyone can sign up for immediate access to our WebRTC-powered web and mobile SDKs, and the cloud-based signaling/media services that power them.

During the private beta we onboarded several thousand developers from our base. This group size was critical for gaining useful feedback and insights, while still allowing meaningful interactions.

Interesting. Did you check what users do during the private beta?

During the private beta onboarding, we asked participants to tell us about their use cases. I read every single entry and categorized the use cases. The top categories break out as follows:

  • 21% healthcare
  • 14% support (in-app enterprise customer support, visual customer support)
  • 12% tutoring
  • 10% collaboration
  • 5% recruiting
  • 5% call an expert
  • 4% marketplace / sharing economy
  • 4% interpretation services (including assistive deaf/blind services)

Two of the big areas we spent considerable time refining during the beta were improving the mobile media stack performance, and building a signaling model that allows us to continue to add new capabilities for multi-party, multi-endpoint IP and carrier communications.

 

I have to ask. These developers in the private beta – how many of them were existing Twilio developers who just added video versus new ones?

It’s a mix. A lot of folks are with us because they want multiple channels of communication, and so video is a natural extension for them. But we’ve also had a lot of people who were new to Twilio, and excited to have a better alternative than their current video solution.

 

How is your video offering different from other alternatives that are out there today?

We believe this solution is not available anywhere else. Here’s some insight on the areas where we invested the most time to ensure we were building the right solution for needs that had not been addressed.

  • Without this, each communication capability would either have to be built from scratch or individually purchased and pieced together, if possible. And that’s just the beginning. Our SDKs are designed as a platform to add more communication channels over time.
  • We designed a conversation model that scales in volume, use case and breadth of different endpoint types. Conversations can be either call-based or room-based; start peer-to-peer and move to network-mixed; and interoperate with SIP endpoints and carrier endpoints. Our signaling model is built to fulfill this vision. Some features are enabled today; others are coming. The important thing is we’ve laid the foundation for one platform that can power all communications needs.
  • Our pricing makes it accessible to everyone, and to scale to the very largest deployments. Most video services require per user fees, which are expensive for starting-up and scaling. Twilio video is aimed at infrastructure level pricing where it’s faster and cheaper than building and operating your own service at any scale. And users get the benefit of our ongoing work to deliver high quality and resiliency.

 

What excites you about working in WebRTC?

To me, the most exciting aspect of WebRTC – and really programmable real-time communications more generally – is that it stands to fundamentally change the way we communicate. Through every iteration of the phone, the basic interaction hasn’t really changed. Historically, there has been little-to-no ability to gain immediate context of why the caller is calling, what they were doing beforehand, and what they may need. Embedding communications into applications allows for a far more meaningful and relevant communication. Imagine calling your car insurance company from your car insurance app following an accident, and instantly the call is routed with the right prioritization based on the GPS of your phone to an agent who speaks your preferred language. The app enables you to instantly share a video feed of the accident scene and collaboratively annotate the video using the app. All this while the agent captures the information in their record system to avoid a separate visit from a damage appraiser.

We believe every single app will have communications built into it. Every. Single. App.

 

Where do you see WebRTC going in 2-5 years?

WebRTC/ORTC is moving at such a velocity that 5 years out is pretty hard to forecast. But we believe:

  • In this timeframe, browser support should be ubiquitous. We’ve seen Microsoft Edge get there already (barring video codec support), and we know Apple is working on it for Safari.
  • Ubiquitous doesn’t mean standardized or non-contentious. We expect to continue to see differences in implementation of particular features that the developer will either have to keep track of and deal with directly, or use an SDK such as Twilio Video.
  • Media quality requires continuous improvement. We’ll continue to make it better and more resilient to bad networks.  However, in this timeframe, there will remain some networks that are not viable for real-time video.
  • Mobile in-app usage will be the most important use case for consumers. This means that most consumers won’t be using Google’s latest WebRTC engine off the shelf, but rather a version that has been packaged – and often modified and enhanced – along the way.
  • B2C Communications will focus on high-value, contextual interactions. Low-value B2C interactions will be increasingly handled through self-service channels. WebRTC will be one of the core technologies powering the high value segment.

 

If you had one piece of advice for those thinking of adopting WebRTC, what would it be?

Experiment – and think about how you scale the experiments that find success. It’s relatively simple to get a basic WebRTC call working. But plan for what happens if your new service finds success. Consider how you will scale, maintain and operate your TURN media relay. How you will collect and analyze voice quality diagnostics from all your endpoints. How you will interoperate with SIP networks and PSTN networks.

 

Given the opportunity, what would you change in WebRTC?

Some improvements have been addressed by ORTC. We’re big fans of these improvements and we look forward to the standards combining.

We would like more control over the media stack in a browser environment, if the browser makers could figure out a secure way to enable this. We spend a considerable amount of time testing and measuring voice quality in impaired networks. In fact, we open-sourced the testing tool we use. On the mobile side, we operate the media stack and we do a lot of fine tuning to constantly improve the media quality. This includes taking into account the performance of different networks and hardware configurations. Whether it’s adding codecs to use in particular scenarios, adding Forward Error Correction (FEC) techniques, or other areas we are working on. But when our endpoints call a browser-based endpoint, they have to fall back to the default media stack and it is not possible to layer on additional media enhancements, which is why we’d like more control in the browser environment.

In the more immediate time frame, the subject of handling QoS in WebRTC is tricky, and far from standardized. Plus, QoS behavior, like with much of WebRTC, tends to require significant reverse engineering to establish the exact behavior in different scenarios. We’re happy we can provide this capability on behalf of our customers – but we’d like more control over the experience.

 

What’s next for Twilio?

We’ve talked about a few of them – interoperability with SIP endpoints and PSTN endpoints for example. Of course we’re also working on SFU functionality for large scale video conferences – that should be no surprise to our customers. But we want to provide this capability in such a way that a developer doesn’t have to choose between either peer-to-peer routing or SFU mixed. The solution should intelligently move from one to another as the call topology requires. We also want a solution that scales beyond any existing solutions. And then, well…that’s enough to keep us busy for now Tsahi.

The interviews are intended to give different viewpoints than my own – you can read more WebRTC interviews.

 

The post Twilio and WebRTC: An Interview with Al Cook appeared first on BlogGeek.me.

SaferMobility and WebRTC: An Interview With Matthew Mah

Thu, 12/10/2015 - 12:00

Your private 911 system.

[If you are new around here, then you should know I’ve been writing about WebRTC lately. You can skim through the WebRTC post series or just read what WebRTC is all about.]

I have seen a lot of applications lately that target public safety. Some offer you a “ghost” partner to “walk” with you home, while others focus on the reporting aspects.

SaferMobility targets the authorities as the owners of the system (college campuses, municipalities, business zones, etc.) and provides a mobile application to the users. It is reimagining what a 911 service would look like if it were specified today.

Matthew Mah, CTO of SaferMobility, was kind enough to answer my questions on what role WebRTC plays in their service.

 

What is SaferMobility all about?

SaferMobility focuses on using the capabilities of modern smartphones for enhancing safety. The public safety system in the United States is built around wired telephones, and it is more difficult for authorities to respond to mobile phones because they are harder to locate than fixed telephones. The modern smartphone has audio, video, location, and text capabilities that just are not being used efficiently yet.

 

There are many other safety related apps out there. What differentiates you from the rest of the pack?

Our systems focus on real-time interaction with authorities. Authorities receive enhanced calls with audio, video, location, and text information in real-time without it having to filter through friends or storage systems.

 

You told me you launched your service using Flash. Why did you migrate to WebRTC?

WebRTC is a huge improvement over Flash in terms of security, support, and capability. Adobe is not really interested in supporting Flash for mobile devices, so capabilities like acoustic echo suppression are not available. This makes a huge difference in communication quality.

 

What signaling have you decided to integrate on top of WebRTC?

We use a proprietary message system built on WebSockets.
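
[For readers less familiar with the pattern: here is a generic sketch of what client-side WebRTC signaling over a WebSocket can look like. It is purely an illustration – the URL and message format below are made up, not SaferMobility’s actual protocol.]

```javascript
// Generic sketch of WebRTC signaling over a WebSocket (illustrative only).
const pc = new RTCPeerConnection();
const ws = new WebSocket("wss://example.com/signaling");   // hypothetical endpoint

// Send local ICE candidates to the other side as they are gathered
pc.onicecandidate = e => {
  if (e.candidate) ws.send(JSON.stringify({ type: "candidate", candidate: e.candidate }));
};

ws.onmessage = async event => {
  const msg = JSON.parse(event.data);
  if (msg.type === "offer") {
    // Answer an incoming call
    await pc.setRemoteDescription(msg.offer);
    const answer = await pc.createAnswer();
    await pc.setLocalDescription(answer);
    ws.send(JSON.stringify({ type: "answer", answer }));
  } else if (msg.type === "candidate") {
    await pc.addIceCandidate(msg.candidate);
  }
};
```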

 

Backend. What technologies and architecture are you using there?

Our Java application server runs Tomcat with a PostgreSQL database. It handles the signaling and issues commands to a media server for recording capabilities. We currently run on Dialogic’s Extended Media Server (XMS).

Mobile. You decided to port WebRTC to iOS and Android on your own. How was the experience?

Porting was difficult because of compatibility issues between our WebRTC media server with web, iOS, and Android clients. We would get two clients to work with the server, then upgrade the server and have two different clients work.

For stability on the web side, the nwjs project has been very helpful for producing an application that works even while the web browser updates are racing ahead and frequently breaking things.

 

Where do you see WebRTC going in 2-5 years?

WebRTC will replace stagnant technologies like Flash. The ability to communicate through the browser will also lower the barrier for application development.

 

If you had one piece of advice for those thinking of adopting WebRTC, what would it be?

Be prepared for things to change quickly because WebRTC is still growing and maturing.

 

Given the opportunity, what would you change in WebRTC?

Aside from the expected growing pains, I am pleased with WebRTC.

 

What’s next for SaferMobility?

There’s a huge opportunity to improve public safety, security services, and general communication with modern mobile devices, and SaferMobility will be part of making those improvements.

The interviews are intended to give different viewpoints than my own – you can read more WebRTC interviews.

The post SaferMobility and WebRTC: An Interview With Matthew Mah appeared first on BlogGeek.me.

The Hidden Gems of WebRTC Goodness May Well Lie Within GetUserMedia Itself

Wed, 12/09/2015 - 12:00

WebRTC GetUserMedia is more important than the rest of this communication stack.

Who would have believed? With all the magic and distraction that video calling from a browser brings with it, the real treasure trove resides in the basics – WebRTC GetUserMedia.

Simplifying things, WebRTC has 3 distinct areas/APIs to it:

  1. GetUserMedia, allowing access to camera and microphone inside the browser
  2. PeerConnection, taking care of all the mess that is a voice/video call
  3. Data Channel, making it possible to send any arbitrary message across browsers directly

I’ve pointed out in the past how WebRTC GetUserMedia gets used by Mailchimp and WhatsApp. Taking a camera snapshot is nice, but what else can we achieve with this access we’ve been given?
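
As a reminder of how little is needed for that kind of use, here’s a minimal snapshot sketch – getUserMedia plus a canvas, no PeerConnection anywhere (the element IDs are made up for the example):

```javascript
// Minimal snapshot sketch: webcam to canvas to JPEG, with no PeerConnection.
// Assumes an HTTPS page containing <video id="preview"> and <canvas id="shot">.
async function takeSnapshot() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.getElementById("preview");
  video.srcObject = stream;
  await video.play();

  const canvas = document.getElementById("shot");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d").drawImage(video, 0, 0);

  stream.getTracks().forEach(track => track.stop());    // release the camera
  return canvas.toDataURL("image/jpeg");                // e.g. use as a profile photo
}
```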

TalkLessNow

Chris Kranky had an idea a few weeks ago. Measuring how much you’re yapping in a call as opposed to listening. So he made it happen. On a shoestring budget, some connections and a bit of time and TalkLessNow was born.

How does it work?

The website is quite spartan. When you go on a phone call (not a WebRTC one), you just press the green Call button on talklessnow.com.

The code on the site “listens” through the machine’s microphone to your call. Whenever it hears enough volume – it assumes you’re talking. If the volume is lower than its configured threshold – you’re listening.

Just WebRTC GetUserMedia. No PeerConnection or any other fuss.
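
I haven’t seen Chris’ code, but conceptually something like the following is all it takes – getUserMedia feeding the Web Audio API, with an arbitrary volume threshold deciding between “talking” and “listening”:

```javascript
// Conceptual sketch: microphone via getUserMedia, volume via the Web Audio API.
// The 0.05 threshold and the 250ms sampling interval are arbitrary choices.
async function measureTalkTime() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const audioCtx = new AudioContext();
  const analyser = audioCtx.createAnalyser();
  analyser.fftSize = 256;
  audioCtx.createMediaStreamSource(stream).connect(analyser);

  const samples = new Uint8Array(analyser.fftSize);
  let talking = 0, total = 0;

  setInterval(() => {
    analyser.getByteTimeDomainData(samples);
    // Rough RMS of the waveform around its 128 midpoint
    const rms = Math.sqrt(
      samples.reduce((sum, v) => sum + ((v - 128) / 128) ** 2, 0) / samples.length
    );
    total++;
    if (rms > 0.05) talking++;                          // loud enough: assume talking
    console.log("talking " + Math.round((100 * talking) / total) + "% of the time");
  }, 250);
}
```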

Will it work?

Here in Israel, I am sure the results won’t be good. We’re used to talking over each other and interrupting. Efficiency at its best. If in a call between Israelis it shows less than 70% of talk time per participant, I’ll crown that session a success.

Seriously though, we should be listening a lot more than we’re talking.

Same but different

The now defunct Guitar Tuner works the same way. It doesn’t work anymore because the site is served on HTTP and WebRTC GetUserMedia now requires HTTPS to work with the latest Chrome release (progress, you know).

Ziggeo

Here’s another example.

Ziggeo is making use of WebRTC to record videos. They do that by employing WebRTC GetUserMedia, storing the resulting media locally and at the end of the recording sending it to their servers. The sending part doesn’t occur via WebRTC.
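
Ziggeo’s own implementation details are covered in the interview below, but as a general illustration, a “record locally, upload over plain HTTP” flow can look roughly like this (MediaRecorder and the /upload endpoint here are my assumptions, not Ziggeo’s API):

```javascript
// Illustrative "record locally, upload over HTTP" flow using MediaRecorder.
// The /upload endpoint is made up; Ziggeo's actual pipeline may differ.
async function recordClip(durationMs) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  const recorder = new MediaRecorder(stream);
  const chunks = [];
  recorder.ondataavailable = e => chunks.push(e.data);

  const stopped = new Promise(resolve => (recorder.onstop = resolve));
  recorder.start();
  setTimeout(() => recorder.stop(), durationMs);
  await stopped;

  stream.getTracks().forEach(track => track.stop());     // turn the camera off
  const blob = new Blob(chunks, { type: recorder.mimeType });
  await fetch("/upload", { method: "POST", body: blob }); // plain HTTP, no PeerConnection
}
```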

There’s an interesting interview with Susan Danziger, CEO of Ziggeo from last week that you should read.

Is this Real Time Communications?

WHO CARES?

It works. It gives business value – and in ways that weren’t really possible up until today.

There’s a lot more to WebRTC than classic VoIP.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post The Hidden Gems of WebRTC Goodness May Well Lie Within GetUserMedia Itself appeared first on BlogGeek.me.

The First WebRTC Earthquake in Video Conferencing: Acano vs Polycom

Mon, 12/07/2015 - 12:00

The future isn’t what it used to be.

I’ve been babbling here a lot about the enterprise video conferencing market and WebRTC’s role in disrupting it. When it first came out, I believed the existing companies were going to struggle with it. I was mostly ignored by these companies – it is hard to see what’s just around the corner when you’re stuck in the echo chamber of your company and its immediate industry.

When I meet old colleagues of mine from the video conferencing industry and see them working in the same companies, I suggest they leave. Find another company or industry, because the outcome is known – just the timing factor is missing. They dismiss it, probably thinking that I am saying it out of a grudge against the company. I am not.

What happened in November should hit home.

We had two separate news items that in some cosmic way happened in the same week:

  1. Cisco acquired Acano. For $700M USD. A company with around 350 employees (that’s $2M per employee)
  2. Polycom announced closing its Israeli office. Moving the operations to India. That’s 200 employees + 80 contractors

Dumbing things down a bit:

  • Acano was about building a cloud MCU. Polycom Israel was about building an on-premise MCU
  • Acano started life in 2012, making immediate use of WebRTC. Polycom just launched their first MCU to support WebRTC this year (2015)

It isn’t that WebRTC is the reason why Acano succeeded and Polycom Israel failed. It is that the mindset of these two companies was different. Acano looked into what can be done in this modern age and made use of WebRTC to get there. Polycom looked at how to slowly evolve their product offering. I am sure people in Polycom knew about WebRTC. It was probably on roadmaps and in discussions since 2012, never to be given priority, because who needs it? It can’t compete with the high end systems of Polycom. But then the basis of competition changed. What customers care about changed. It isn’t about resolutions and frame rates anymore. It’s about utility and usability – something most video conferencing companies never knew how to handle.

Polycom Israel didn’t have the foresight to make themselves attractive enough to their corporate overlords in San Jose. Probably because they weren’t given the opportunity to do so. The end result? They just weren’t important. Their technology and architecture is now stable and understood enough to move it to countries with lower salaries.

I remember giving a training to developers about WebRTC in 2014. I asked people in the room what they do. There were media engineers and signaling protocol developers. I told them that they are going to be out of work. They took it as a joke. Some of them are now updating their resumes.

What is it that you are doing for a living? What is your company developing? Does it make sense? Do you take the effect WebRTC (and other technologies) have on your job seriously?

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post The First WebRTC Earthquake in Video Conferencing: Acano vs Polycom appeared first on BlogGeek.me.

Ziggeo and WebRTC: An Interview With Susan Danziger

Thu, 12/03/2015 - 12:00

Ziggeo: Susan Danziger

December 2015

Video recording

Asynchronous video meets WebRTC.

[If you are new around here, then you should know I’ve been writing about WebRTC lately. You can skim through the WebRTC post series or just read what WebRTC is all about.]

One area where WebRTC is making strides recently is video streaming. Some of the hyped use cases today are those that enable broadcasting in real time, but there’s another interesting approach – one where WebRTC is employed when the video consumption is asynchronous from its creation.

Ziggeo is an API provider in this specific niche. I met with Susan Danziger, CEO of Ziggeo, and asked her to share a bit of what it is they do with WebRTC and how it is being adopted by their customers.

 

What is Ziggeo all about?

Ziggeo is the leader in asynchronous (recorded) video offering a programmable video recorder/player through our API/native SDKs.

 

You started by working on an HR interviews platform. What made you pivot towards a video recording API platform instead?

In building our own video recording/playback solution for the platform, we realized what a complicated and time-consuming process building our own solution was. We had to make sure that videos could be recorded and played across all devices and browsers (even as new ones were released) and build a permissions-based security solution that would withstand hackers. We were surprised there were no off-the-shelf solutions available, so we decided a bigger opportunity would be to release our technology as an API – and then native SDKs (and shortly thereafter closed our B2C platform).

 

On the same token – you have Flash there. Why did you add WebRTC? Wasn’t Flash enough for your needs?

For the most part our customers hate Flash.  And no wonder: browsers that support Flash have an awful user experience in which you need to basically hit 3 different buttons before you can begin recording from your web camera (once to resume the suspended Flash applet and twice to access the camera).

We added WebRTC to avoid Flash whenever possible.  That said, for certain browsers, e.g. Safari and Internet Explorer we need to default to Flash as they don’t yet support WebRTC.

 

How are customers reacting to the introduction of WebRTC to Ziggeo?

Customers love it!  In fact, our customers seek us out in part because we’re the only API for asynchronous video recording that supports WebRTC.

 

Can you share a few ways customers are using Ziggeo?

In addition to recruiting (where candidates introduce themselves on video), we’ve seen Ziggeo used for training (e.g. trainees record video sales pitches for feedback); dating (potential dates exchange video messages); “Ask Me Anything” (both questions and responses on video); e-commerce (products introduced on video and video reviews recorded); advertising (user-generated videos submitted for contests or for use in commercials); and journalism (crowd-sourcing videos for news from around the world).  I’m still waiting for someone to create a video version of Wikipedia where pieces of knowledge are recorded on video from around the world — that would be the most amazing use case of all.

 

A video version of Wikipedia. Have it in Hebrew and I’ll sign up my daughter on it.

You don’t use the Peer Connection APIs at all – just getUserMedia. Why did you make the decision to record locally rather than use the Peer Connection and record on the server?

Folks like to re-record locally so we chose not to use unnecessary resources.  We pride ourselves on making our technology as efficient and seamless as possible.
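
Ziggeo doesn’t share its recording code here, but a minimal sketch of local, PeerConnection-free capture – using just getUserMedia together with the MediaRecorder API – could look something like this:

    // Minimal sketch: record locally from the camera, no PeerConnection involved.
    // The resulting Blob can be previewed, re-recorded, or uploaded later.
    async function recordLocally(durationMs) {
      const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
      const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
      const chunks = [];
      recorder.ondataavailable = (e) => { if (e.data.size > 0) chunks.push(e.data); };

      const done = new Promise((resolve) => {
        recorder.onstop = () => resolve(new Blob(chunks, { type: 'video/webm' }));
      });

      recorder.start(1000);                            // emit a chunk every second
      setTimeout(() => {
        recorder.stop();
        stream.getTracks().forEach((t) => t.stop());   // release the camera
      }, durationMs);

      return done;                                     // resolves with the recorded Blob
    }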

How do you store the file locally and how do you then get it to your data centers?

We use IndexedDB to store the file locally and then push it using chunked http.
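
As a rough illustration of that flow (the /upload endpoint and the 1 MB chunk size below are my own assumptions, not Ziggeo’s API), one could persist the recorded blob in IndexedDB and later push it up chunk by chunk:

    // Rough sketch: persist a recorded Blob in IndexedDB, then upload it in chunks.
    // '/upload' and the 1 MB chunk size are illustrative assumptions.
    const CHUNK_SIZE = 1024 * 1024;

    function saveRecording(blob) {
      return new Promise((resolve, reject) => {
        const open = indexedDB.open('recordings', 1);
        open.onupgradeneeded = () => open.result.createObjectStore('videos');
        open.onsuccess = () => {
          const tx = open.result.transaction('videos', 'readwrite');
          tx.objectStore('videos').put(blob, 'pending');   // out-of-line key
          tx.oncomplete = () => resolve();
          tx.onerror = () => reject(tx.error);
        };
        open.onerror = () => reject(open.error);
      });
    }

    async function uploadInChunks(blob) {
      for (let offset = 0; offset < blob.size; offset += CHUNK_SIZE) {
        const chunk = blob.slice(offset, offset + CHUNK_SIZE);
        await fetch('/upload?offset=' + offset, { method: 'POST', body: chunk });
      }
    }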

 

Viewing. Over what protocols do you do it, and how do you handle the different codecs and file formats?

Protocols: HTTP pseudo-streaming, HLS, RTMP, RTSP

Formats: we transcode videos to different formats (MP4, WebM) and resolutions

 

Where do you see WebRTC going in 2-5 years?

We imagine there will be full support of WebRTC across all browsers and devices as well as better support for client-side encoding of video data.

 

Given the opportunity, what would you change in WebRTC?

We’d like to see improved support for consistent resolution settings, as well as for encoding.

 

What’s next?

We’re planning the 2nd Annual Video Hack Day in NYC for this coming May. You can find more information at videohackday.com or follow @videohacknyc on Twitter.

The interviews are intended to give different viewpoints than my own – you can read more WebRTC interviews.

The post Ziggeo and WebRTC: An Interview With Susan Danziger appeared first on BlogGeek.me.

The Unconnected Messaging World

Tue, 12/01/2015 - 12:00

You are not always connected.

Truly you aren’t.

I know you like to think you are, but get over it – this isn’t the case.

From the unveiling of AWS IOT platform @ re:Invent 2015

Every week I need to take my daughter to her artistic gymnastics lessons. And then I have 90 minutes of quality time. With myself. While I usually use it to continue reading on my Kindle, I try once in a while to actually work during that time. The problem is that the cellular reception in the waiting hall is less than satisfactory, and the mosquitoes make it impossible to sit outside – where it is a lot nicer with much better reception.

I quickly learned that working there is close to impossible, as the reception is flaky – not something I can rely on in my line of work, which requires an intravenous internet connection at all times. But there are quick things that I can do in that time – which more often than not means messaging.

Offline Messaging

Here’s what I found out about the 3 top messaging apps on my phone recently:

 

WhatsApp

WhatsApp rocks when it comes to being able to send messages even when I am offline. It uses the store and forward technique both on the client and on the server:

  • If the sender has no internet connection, the message is stored locally until the connection is restored. This approach works only for text messages – you can’t share images or videos this way
  • If the receiver has no internet connection, then the message is stored on the WhatsApp server until a point in time when the recipient is available – this works for all types of messages

You just can’t ask for more.

Google Hangouts

Google Hangouts is rather poor when it comes to offline behavior. It does manage its own store and forward mechanism on the server side, which means that if you send a message when you are online – the recipient will see it when he becomes online.

But, you can’t send anything if you aren’t online. Hangouts isn’t kind enough to store it locally until you are online.

This makes for a poor experience for me in that gymnasium waiting room, where the network comes and goes as it pleases. Or when I am riding the elevator going downstairs from my apartment and need to send some quick messages.

Slack

Slack needs to be connected. At least as far as I understand it.

If you open the app, it tries to connect. If you send a message while it is connected – great.

If you try sending when it isn’t connected – it will fail.

But sometimes, it believes that it is connected and it isn’t. In such a case, killing the Android app and restarting it will be the only remedy to be able to send anything out.

Yuck.

Offline Frameworks

Communication frameworks are tricky. The idea is that you have a network to be able to communicate, but as we’ve just seen – this isn’t always the case.

So where do we stand with the different frameworks? I had these 3 examples readily available off the top of my head for you:

Matrix History Storage

Matrix (interviewed here in the past) also went to great lengths to deal with offline scenarios. In the case of Matrix, it was about decentralization of the network itself, and how you can "self heal" and synchronize servers that go down mid-conversation.

This makes it easy to add and remove servers during runtime, but it doesn’t help me in my daughter’s gymnasium class. I haven’t found any information stating that Matrix can (or can’t) send a message while the sender client is offline.

Twilio and Message History

Twilio announced its own IP Messaging capability. While this isn’t yet generally available, the concepts behind these APIs are outlined on that page.

To make things simple – it includes store and forward on the server (recipient can be offline when sender sends and vice versa); but it probably doesn’t include sending while the sender is offline.

As this is still under development/testing, my suggestion would be to add the “sender is offline” scenario and support it from the SDK.

Amazon Device Shadow

At AWS re:Invent 2015, Amazon unveiled its IOT platform – the building blocks it has on offer for the Internet of Things.

In many ways, the Internet of Things is… connected. But in many other ways it might not be connected at all times. I’ve seen several interesting IOT frameworks overcome this in various ways. Here’s AWS’s take on it – they create what they call a device shadow.

 

Werner Vogels does a great job of explaining this. I suggest viewing the whole session and not just the 1 minute explainer on device shadow.
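
The core idea is that the cloud keeps a small "shadow" document per device, holding the last reported state alongside the desired state, so applications can keep interacting with the shadow even while the device itself is offline. Roughly (with illustrative values), such a document looks like this:

    // Roughly what a device shadow document holds (illustrative values only):
    // the app writes to "desired", the device updates "reported" when it reconnects.
    const lightBulbShadow = {
      state: {
        desired:  { power: 'on',  brightness: 80 },  // what the application asked for
        reported: { power: 'off', brightness: 0 }    // last state the device reported
      },
      version: 42   // used to detect conflicting updates
    };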

Why is it important?

We are never truly always online. As messaging becomes one of the central means of communicating – both between people and between devices – it needs to take this into account. This means covering as many offline use cases as possible and not just assuming everything is connected.

Doing this can be tricky to get right, and in many cases, it would be preferable for developers to go with a solid framework or a service as opposed to building it on their own. What most frameworks still miss today is that nagging ability to send a message while the user is offline – storing it locally and sending it once he comes online.
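
That missing piece – an outbox that queues messages while the sender is offline and flushes them on reconnect – is simple to sketch. In the snippet below, sendToServer() is a hypothetical stand-in for whatever transport the messaging app actually uses:

    // Sketch of a client-side outbox: queue while offline, flush when back online.
    // sendToServer() is a hypothetical stand-in for the app's real transport.
    const outbox = [];

    function sendMessage(msg) {
      if (navigator.onLine) {
        return sendToServer(msg).catch(() => outbox.push(msg)); // keep it on failure
      }
      outbox.push(msg);          // store locally until connectivity returns
      return Promise.resolve();
    }

    window.addEventListener('online', async () => {
      while (outbox.length > 0) {
        const msg = outbox.shift();
        try {
          await sendToServer(msg);
        } catch (err) {
          outbox.unshift(msg);   // still flaky – put it back and retry later
          break;
        }
      }
    });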

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post The Unconnected Messaging World appeared first on BlogGeek.me.

The Role of Artificial Intelligence in Messaging

Tue, 11/24/2015 - 12:00

Machine learning and artificial intelligence in messaging will become commonplace.

Who would have thought that the most personal and manual form of interaction between humans could be mechanized? Years ago, it started with presence and instant messaging. People found ways to communicate other than the phone call. Today, messaging is so prevalent that you have to take it seriously:

  • In the consumer space, we’re talking about a billion users for these platforms. WhatsApp, at 900 million, is the closest to reaching its first billion soon enough
  • In the enterprise space, a single hiccup of Slack yesterday sent many to vent on Twitter

What is interesting, is how artificial intelligence is starting to find a home in messaging apps – consumer or enterprise ones – and where this all is headed.

I couldn’t care less at this moment if the interface is textual or speech driven. I might cover this in a later article, but for now, let’s just assume it is a means to an end.

Here are a few examples of what artificial intelligence in messaging really means:

The Silent Administrator

You are in a conversation with a friend. Chatting along, discussing that restaurant you want to go to. You end up deciding to meet there next week for lunch.

I do this once a month with my buddies from school. We meet for lunch together, talking about nothing and everything at the same time. For me, this conversation takes place on WhatsApp and ends up as an event on my Google Calendar.

Wouldn’t it be nice to have that event created auto-magically just because I’ve agreed with my friends on the date, time and place of this lunch?

This isn’t as far fetched as it seems – Google is already doing similar stuff in Google Now:

  • Prodding me when the time comes to start the commute to a meeting
  • Tracking flight delays when it finds an itinerary in my Gmail
  • Giving me the weather forecast on mornings, and indicating “drastic” weather changes the night before
  • Providing multiple time zones when I travel

Google Now is currently connecting to apps on the phone through its Google Now on Tap, giving it smarts over a larger portion of our activities on our phones.

Why shouldn’t it connect to Hangouts or any other messaging service scouring it for action items to take for me? Be my trusted silent administrator in the back.

A few years ago, a startup here in Israel, whose name I fail to remember, tried doing something similar with the phone call – get you on a call, then serve ads based on what is being said. Ads here are supposed to be contextual and very relevant to what it is you are looking for. I think this is happening sans ads – by giving me directly what I need from my own conversations, the utility of these messaging services grows. With a billion users to tap into, this can be monetized in other ways (such as revenue sharing with service providers that get promoted/used via conversations – booking an Uber taxi or a restaurant table are the obvious examples).

In the enterprise space, the best example is the Slackbot, which can automate interactions on Slack for a user. No wonder they are beefing up their machine learning and data science teams around it.

Knowledge base Connectivity

That “chat with us” button/widget that gets embedded into enterprise websites, connecting users with agents? Is it really meant to connect you to a live agent?

When you interact with a company through such a widget, you sometimes interact today with a bot. An automated type of an “answering machine” texting you back. It reduces the load on the live agents and enables greater scalability.

This bot isn’t only used to collect information – it can also be used to offer answers – by scouring the website for you, indexing and searching knowledge bases, or drawing on past interactions the live agents had with other users.
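
The routing logic behind such a bot can be boiled down to a few lines – try the knowledge base first, escalate to a human only when no confident answer is found. The searchKnowledgeBase() and handOffToAgent() helpers below are hypothetical:

    // Toy sketch of the routing idea: try the knowledge base first, hand off to a human
    // only when no good answer is found. searchKnowledgeBase() and handOffToAgent()
    // are hypothetical helpers.
    async function handleVisitorMessage(message, session) {
      const answer = await searchKnowledgeBase(message.text);
      if (answer && answer.confidence > 0.8) {
        return session.reply(answer.text);        // bot answers on its own
      }
      return handOffToAgent(session, message);    // fall back to a live agent
    }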

I recently did a seminar for a large company in the contact center space. There was a rather strong statement made there – that the IVR of the future will replace the human agents completely, offering people the answers and support they need. This is achieved by artificial intelligence. And in a way, it is part of the future of messaging.

Speaking with Brands

If you take the previous alternative and enhance it a bit, the future of messaging may lie with us talking to brands through it.

As messaging apps are becoming platforms, ones where brands and developers can connect to the user base and interact with them – we are bound to see this turning into yet another channel in our path towards omnichannel interactions with customers. The beauty of this channel is its ability to automate far better than all the rest – it is designed and built in a way that makes it easier to achieve.

Due to the need to scale this, brands will opt for automation – artificial intelligence used for these interactions, as opposed to putting “humans on the line”.

This can enable an airline to sell its flight tickets through a messaging service and continue the conversation around those flight plans with the customer throughout the experience – all within the same context.

The Virtual Assistant / Concierge

Siri? Cortana? Facebook M? Google Search?

These are all geared towards answering a question. You voice your needs. And they go searching for an answer.

These virtual assistants, as well as many other such assistants cropping up from startups, can find a home inside messaging platforms – this is where we chat and voice our requests anyway, so why not do these interactions there?

Today they are mostly separate, as they come from the operating system vendors. For Facebook, though, Messenger is the tool of choice for delivering Facebook M, their concierge service. It is easy to see how this gets wrapped into the largest messaging platforms as an additional capability – one that will grow and improve with time.

Why is this important?

Artificial Intelligence is becoming cool again. Google just open sourced their machine learning project called TensorFlow. Three days go by, and Microsoft answers with an open source project of its own – DMTK (Microsoft Distributed Machine Learning Toolkit). Newspapers are experimenting with machine written news articles.

Messaging platforms have shown us the way both in the consumer market and in the enterprise. They are already integrating decision engines, proactive components and bots. The next step is machine learning, and from there the road to artificial intelligence in messaging isn’t a long one.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post The Role of Artificial Intelligence in Messaging appeared first on BlogGeek.me.

For Cisco, Slack Would Have Been a Better Acquisition than Acano

Mon, 11/23/2015 - 12:00

Why buy into legacy?

Last week, Cisco made another acquisition in the WebRTC space. This time, Cisco acquired Acano. Acano is a rather new company that started life in 2012 – close to WebRTC’s announcement.

Acano makes use of WebRTC, though I am not sure to what extent. There are 2 reasons Cisco lists for this acquisition:

  1. Interoperability – support for “legacy” video conferencing, Microsoft Skype and WebRTC
  2. Scalability

To me, scalability comes from thinking of video conferencing in the mindset of WebRTC – WebRTC services are mostly cloud based and built to scale (or at least should be). Old video conferencing models were designed at the scale of a single company at best, with business models fitting only the high end of the market.

That brings me to why. Why is Cisco buying into legacy here?

If there’s anything that is interesting these days it is what happens in the realm of messaging. And for Cisco, this should mean Enterprise Messaging. I already stated earlier this year that Enterprise Messaging is a threat to Unified Communications.

Don’t believe me? How about these interesting moves:

  1. Atlassian, owner of HipChat (=Enterprise Messaging) acquiring BlueJimp, authors of the popular open source Jitsi Video bridge
  2. HipChat (yes, the same one) writes a cheeky post comparing Skype (=Unified Communications) to HipChat (=Enterprise Messaging). Guess who they favor?
  3. Slack searching for developers to “build audio conferencing, video conferencing and screen sharing into Slack”
  4. Cisco launching its own Cisco-Spark – a video conferencing service modeled around messaging
  5. Unify launching circuit – a video conferencing service modeled around messaging
  6. Broadsoft announcing UC-one – a video conferencing service modeled around messaging

Which brings me back to the question.

Why buy into legacy? At scale. With interoperability. Using fresh technology. But legacy nonetheless.

Why not go after Slack and just acquire it outright?

When Cisco wanted a piece of video conferencing, they didn’t acquire RADVISION – its main supplier at the time. It went after TANDBERG – the market leader.

Then why this time not buy the market leader of enterprise messaging and just get on with it?

Congrats to the Acano team on being acquired.

For Cisco, though, I think the challenges lie elsewhere.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post For Cisco, Slack Would Have Been a Better Acquisition than Acano appeared first on BlogGeek.me.

4 Reasons Vendors Neglect Testing WebRTC Services

Thu, 11/19/2015 - 12:00

Testing WebRTC is tricky.

If there’s one thing I learned this past year from talking to companies while showcasing the testRTC service, it is that vendors don’t really test their WebRTC products.

Not all of them skip testing, but most do.

Why is that? Here are a few reasons that I think explain it.

#1 – WebRTC is a niche for them – an experiment

You’ve got a business to run. It does something. And then someone decided to add communications to it. With WebRTC no less.

So you let them play. It isn’t much of an effort anyway. Just a few engineers hammering away. Once you launch, you think, you’ll see adoption and then decide if it is worthwhile to upgrade it from a hobby to a full time business.

The thing is, there’s a chicken and egg problem going on here. If you don’t do it properly, how will adoption really look? Will it give you the KPIs you need to make a reasonable decision?

WebRTC is rather new. As an industry, we still don’t have best practices of how to develop, test and deploy such services.

#2 – It’s a startup. Features get priority over stability

Many of the vendors using WebRTC out there are startups. They need to get a product out the door.

It can be a proof of concept, a demo, an alpha version, a beta one or a production version. In all cases, there’s more pressure to cram features into the product and show off its capabilities than there are complaints about its stability or bugs.

Once these companies start seeing customers, they tend to lean more towards stability – and testing.

As we are seeing ourselves by running testRTC (=startup), there’s always a balancing act you need to do between features and stability.

#3 – They just don’t know how

How do you test WebRTC anyway?

VoIP?

If you view it as a VoIP technology, then you are bound to fail – the VoIP testing tools out there don’t really have the mentality and mindset to help you:

  • Testing browsers continuously because they get updated so frequently isn’t something they do
  • They don’t really know how to handle the fact that there’s no signaling protocol defined

The flexibility and fast paced nature of the web and WebRTC aren’t ingrained into their DNA.

Web?

If you view this as a web technology, then you’ll miss all the real time and media aspects of it. The web testing tools are more interested in GUI variability across browsers than they are in latencies and packet loss.

  • How do you test different network configurations? Does a firewall affect your results?
  • You do know that you need multiple browsers for even the simplest WebRTC use case testing. How do you synchronize them within the test?

While web tools are great for testing web apps, they don’t fit the VoIP nature that exists in WebRTC.
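
To make the "multiple browsers, synchronized" point concrete, here is a rough sketch – not testRTC’s implementation – that launches two Chrome instances with fake media devices and points both at the same room; the room URL and the readiness selector are assumptions:

    // Rough sketch: drive two Chrome instances into the same WebRTC room.
    // Uses Puppeteer with Chrome's fake-media flags; the URL and selector are assumptions.
    const puppeteer = require('puppeteer');

    async function runTwoWayCallTest(roomUrl) {
      const launch = () => puppeteer.launch({
        args: [
          '--use-fake-ui-for-media-stream',      // auto-accept camera/mic prompts
          '--use-fake-device-for-media-stream'   // synthetic audio/video sources
        ]
      });

      const [caller, callee] = await Promise.all([launch(), launch()]);
      const pageA = await caller.newPage();
      const pageB = await callee.newPage();

      await pageA.goto(roomUrl);
      await pageB.goto(roomUrl);

      // Wait until both sides show a remote video element (assumed selector).
      await Promise.all([
        pageA.waitForSelector('video.remote', { timeout: 30000 }),
        pageB.waitForSelector('video.remote', { timeout: 30000 })
      ]);

      await Promise.all([caller.close(), callee.close()]);
    }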

#4 – They don’t have the tools

You know, if you wanted to test WebRTC a year or two ago, your best alternative was to use QA teams that click manually on buttons – or build your own test infrastructure for it.

Both alternatives are wasteful in resources and time.

So people sidestepped the issue and waited.

These days, there are a few sporadic tools that can test WebRTC – changing the picture for those who want to be serious about testing their service.

Don’t take WebRTC testing lightly

I just did a webinar with Upperside Conferences. If you want to listen in on the recording, you can register for it online.

Whatever your decision ends up being – using testRTC or not – please don’t take testing WebRTC implementations lightly.

The post 4 Reasons Vendors Neglect Testing WebRTC Services appeared first on BlogGeek.me.

Can Apple Succeed with Two Operating Systems When Google and Microsoft are Consolidating?

Tue, 11/17/2015 - 12:00

One OS to rule them all?

It seems like Apple has decided to leave its devices split between two operating systems – Mac and iOS. If you are to believe Tim Cook’s statement, that is. More specifically, MacBook (=laptop) and iPad (=tablet) are separate devices in the eyes of Apple.

This is a strong statement considering current market trends and Apple’s own moves.

The iPad Pro

Apple’s latest iPad Pro is a 12.9 inch device. That isn’t far from my Lenovo Yoga 2 Pro with its 13.1 inch screen. And it has an optional keyboard.

How far is this device from a laptop? Does it compete head to head in the laptop category?

Assume a developer wants to build a business application for Apple owners – one that requires content creation (i.e., a real keyboard). Should he be writing it for the Mac or for iOS?

Tim Cook may say there’s no such intent, but the lines between Apple’s own devices are blurring. Where one operating system ends and the other begins is up for interpretation from now on – one which will change with time and customer feedback.

Apple had no real intent of releasing larger iPhones or smaller iPads. It ended up doing both.

Microsoft Windows 10

Windows 10 is supposed to be an all-encompassing operating system.

You write your app for it, and it miraculously fits smartphones, tablets, laptops and PCs. That’s at least the intent – haven’t seen much feedback on it yet.

And I am not even mentioning the Surface Tablet/Laptop combo.

Google Chrome OS / Android

Google has its own two operating systems – Android and Chrome OS. Last month Alistair Barr reported on plans at Google to merge the two operating systems.

The idea does have merit. Why invest twice in two places? Google needs to maintain and support two operating systems, while developers need to decide which one to build their app for – or to develop for both.

Taking this further, Google could attempt to make Android apps available inside Chrome browsers, opening them up to an even larger ecosystem that doesn’t rely only on their own OS footprint. Angular and Material Design are initiatives for putting apps on the web. A new initiative might be interpreting Android’s Java bytecode in Chrome OS, and later in Chrome itself.

Who to believe?

On one hand, both Microsoft and Google are consolidating their operating systems. On the other, Apple doesn’t play by the same rule book. Same as we’ve seen lately in analytics.

I wonder which approach will win in the end – a single operating system to rule them all, or multiple ones based on device type.

The post Can Apple Succeed with Two Operating Systems When Google and Microsoft are Consolidating? appeared first on BlogGeek.me.

WebRTC Demand isn’t Exponentially Growing

Mon, 11/16/2015 - 12:00

A long, boring straight line.

In some ways, WebRTC now feels like video conferencing did a decade ago, when every year we said "next year will be the year of video". For WebRTC? Next year will be the year of adoption.

Adoption is hard to define though. What does it really mean when it comes to WebRTC?

WebRTC has been picked up by carriers (AT&T, Comcast and others, if you care about name dropping), most (all?) video conferencing and unified communications vendors, the education, banking and healthcare industries, and contact centers.

While all is well in the world of WebRTC, there is no hype. A year and a half ago I wrote about it – the fact that there is no hype in WebRTC. It still holds true. Too true. And too steadily.

The chart below is a collection of 2 years of data of some of the data points I follow with WebRTC. I hand picked here 4 of them:

  • The number of github projects mentioning WebRTC
  • The number of questions/answers on Stack Overflow mentioning WebRTC
  • The number of users subscribed to the discuss-webrtc Google group
  • The number of LinkedIn profiles of people deciding to add WebRTC to their “resume”

In all of these cases (as well as other metrics I collect and follow), the trend is very stable. There’s growth, but that growth is linear in nature.

There are two minor areas worth mentioning:

  1. LinkedIn had a correction during September/October – a high increase and then an immediate decrease. Probably due to spam accounts that got caught by LinkedIn. I’ve seen this play out on Google+ account stats as well about a year ago
  2. github and StackOverflow had a slight change in their line angle from the beginning of 2015. This coincides with Google’s decision to host its samples and apprtc on github instead of on code.google.com – probably a wise decision, though not a game changer

Some believe that the addition of Microsoft Edge will change the picture. Statistics of Edge adoption and the statistics I’ve collected in the past two months show no such signs. If anything, I believe most still ignore Microsoft Edge.

Where does that put us?

Don’t be discouraged. This situation isn’t really bad. 2015 has been a great year for WebRTC. We’ve seen public announcements coming from larger vendors (call it adoption) as well as the addition of Microsoft into this game.

Will 2016 be any different? Will it be the breakout year? The year of WebRTC?

I doubt it. And not because WebRTC won’t happen. It already is. We just don’t talk that much about it.

If you are a developer, all this should be great news for you – there aren’t many others in this space yet, so demand versus supply of experienced WebRTC developers favors developers at the moment – go hone your skill. Make yourself more valuable to potential employers.

If you are a vendor, then find the most experienced team you can and hold on to them – they are your main advantage in the coming years when it comes to outperforming your competitors in building a solid service.

We’re not in as hyped up an industry as the Internet of Things or Big Data – but we sure make great experiences.

The post WebRTC Demand isn’t Exponentially Growing appeared first on BlogGeek.me.

WebRTC Data Channel finds a home in Context

Thu, 11/12/2015 - 12:00

There’s a new home for the WebRTC Data Channel – it found its use lately in context.

Ever since WebRTC was announced, I’ve been watching the data channel closely – looking to see what developers end up doing with it. There are many interesting use cases out there, but for the most part, it is still early days to decide where this is headed. In the last couple of weeks though, I’ve seen more and more evidence that there’s one place where the WebRTC Data Channel is being used – a lot more than I’d expect. That place is in adding context to a voice or video call.

Where did my skepticism come from?

Look at this diagram, depicting a simplified contact center using WebRTC:

We have a customer interacting with an agent, and there are almost always two servers involved:

  1. The web server, which got the two browsers connected. It acts as the signaling server for us, and uses HTTP or Websockets for its transport
  2. The media server, which can be an SBC, connecting both worlds or just a media server that is there to handle call queuing, record calls, etc.

The logic here is that the connection to the web server should suffice for providing context – why go through all the trouble of opening up a data channel here? For some reason though, I’ve seen evidence that many are adopting the data channel to pass context in such scenarios – and they are terminating it on their server side and not passing it directly between the browsers.

The question then is why? Why invest in yet another connection?

#1 – Latency

If you do need to go from browser to browser, then why make the additional leg through the signaling server?

Going direct reduces the latency, and while it might not be much of an issue, there are use cases when this is going to be important. When the type of context we are passing is collaboration related, such as sharing mouse movements or whiteboarding activity – then we would like to have it shared as soon as possible.

#2 – Firewalls

We might not want to go through the signaling server for the type of data we wish to share as context. If this is the case, then the need to muck around with yet another separate server to handle a Websocket connection might be somewhat tedious and out of context. Having the WebRTC data channel part of the peer connection object, created and torn down at the same time can be easier to manage.

It also has built in NAT and Firewall traversal mechanisms in place, so if the call passes – so will the context – no need to engineer, configure and test another system for it.

#3 – Asymmetry

At times, not both sides of the session are going to use WebRTC. The agent may well sit on a PSTN phone looking at the CRM screen on his monitor, or the session may be gatewayed into a SIP network, where the call is received.

In such cases, the media server will be a gateway – a device that translates signaling and media from one end to the other, bridging the two worlds. If we break that apart and place our context in a separate Websocket, then we have one more server to handle and one more protocol to gateway and translate. Doing it all in the gateway that already handles the translation of the media makes more sense for many use cases.

#4 – Load Management

That web server doing signaling? You need it to manage all sessions in the system. It probably holds all text chats, active calls, incoming calls waiting in the IVR queue, etc.

If the context we have to pass is just some login information and a URL, then this is a non-issue. But what if we need to pass things like screenshots, images or files? These eat up bandwidth and clog a server that needs to deal with other things. Trying to scale and load balance servers with non-uniform workloads is harder than scaling uniform workloads.

#5 – Because We Can

Let’s face it – WebRTC is a new toy. And the data channel in WebRTC is our new shiny object. Why not use it? Developers like shiny new toys…

The Humble WebRTC Data Channel

The data channel has been around as long as WebRTC, but it hasn’t got the same love and attention. There’s very little done with it today. This new home it found with passing context of sessions is an interesting development.
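
As a small illustration of how little code it takes to ride context over the same peer connection (the channel name and payload below are made up for the example):

    // Minimal sketch: open a data channel on an existing RTCPeerConnection
    // and use it to pass call context. Channel name and payload are illustrative.
    function attachContextChannel(pc, onContext) {
      // The channel is negotiated in-band with the rest of the call – no extra server.
      const channel = pc.createDataChannel('context', { ordered: true });

      channel.onopen = () => {
        channel.send(JSON.stringify({
          page: window.location.href,     // where the customer is browsing
          cart: ['sku-123', 'sku-456']    // example context payload
        }));
      };
      channel.onmessage = (msg) => onContext(JSON.parse(msg.data));

      // If the other side opened the channel first, pick it up here instead.
      pc.ondatachannel = (event) => {
        event.channel.onmessage = (msg) => onContext(JSON.parse(msg.data));
      };

      return channel;
    }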

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post WebRTC Data Channel finds a home in Context appeared first on BlogGeek.me.
