News from Industry

Telnexus - Quote to Cash – KazooCon 2015

2600hz - Wed, 10/14/2015 - 20:15
Telnexus - Quote to Cash – KazooCon 2015 from 2600Hz

Telnexus CEO Vernon Keenan discusses how he built the managed service provider Telnexus from the ground up and the lessons he has learned in the process.

ThinQ - Least Cost Routing in the Cloud - KazooCon 2015

2600hz - Wed, 10/14/2015 - 01:43
Least Cost Routing in the Cloud from 2600Hz

The ThinQ team discusses how to set up your routing profile, carrier selection, high-volume traffic management, and LCR routing.

VirtualPBX - Back Office, Delivering Voice in a Competitive Market - KazooCon 2015

2600hz - Wed, 10/14/2015 - 01:36
VirtualPBX - Back Office, Delivering Voice in a Competitive Market - KazooCon 2015 from 2600Hz

In a competitive market, high quality voice services alone are rarely enough. Lon will speak about the customer lifecycle, back office systems from Sales to CRM to deployment, and how to drive profitable growth while delivering an excellent customer experience.

Google Goes All in for Messaging, Invests in Symphony

bloggeek - Tue, 10/13/2015 - 12:00

Something is brewing at Google.

Last week it was announced that Symphony just raised another $100M led by Google. Not Google Ventures mind you – Google Inc.

Who is Symphony?
  • High profile Silicon Valley startup (obviously), soon to become a unicorn, if it isn’t already
  • Well known founder from the Unified Communications industry – David Gurle
  • Have been around for only a year
  • Already has over 100 employees, most of them engineers
  • Focused on enterprise messaging, and targeting highly regulated and security sensitive industries
The Symphony Service

The service itself is targeted at the enterprise, but a free variant of it is available. I tried logging into it, to see what it is all about. It is a variant of the usual messaging app on the desktop, with bits and pieces of Facebook and Slack.

At face value, not much different than many other services.

Symphony Foundation

Symphony decided to build its service on top of an open source platform of its own, which it calls Symphony Foundation. It includes all the relevant washed-out words required in a good marketing brochure, but little else for now: a mission statement, some set of values. That’s about it.

It will be open source, when the time comes. It will be licensed under the Apache license (permissive enough). And you can leave an inquiry on the site. In the name of openness… that’s as open as Apple’s FaceTime protocol is/was supposed to be. I’ll believe it when I see it.

Why Invest in Symphony?

This is the bigger question here. Both for why Google put money in it, as well as others.

With a total of $166M of investment in two rounds and over 100 employees recruited in its first year of existence, there seems to be a gold rush happening. One that is hard to explain.

As a glaring reminder – Whatsapp on acquisition day had 32 developers and around 50 employees. Symphony has twice that already, but no active user base to back it up.

It might be because of its high profile. After all, this is David Gurle we’re talking about. But then again, Talko has Ray Ozzie. But they only raised $4M in the past 3 years, and have less than 10 employees (if you believe LinkedIn).

The only other reason I can see is the niche they went for.

The financial industry deals with money, so it has money. It also has regulations and laws, making it a hard nut to crack. While most other players are focused on bringing consumer technology to the SMB, Symphony is trying to start from the top and trickle to the bottom with a solution.

The feature set they are putting in place, based on their website, includes:

  • Connectivity across organizations, while maintaining “organizational compliance”
  • Security and privacy
  • Policy control on the enterprise level
  • Oh… and it’s a platform – with APIs – and developers and partners

The challenge will be keeping a simple interface while maintaining the complex feature set regulated industries need (especially ones that love customization and believe they are somehow special in how they work and communicate).

On Messaging and Regulation

The smartphone is now 8 years old, if you count it from the launch of the iPhone.

Much has changed in 8 years, and most of it is left unregulated still.

Messaging has moved from SMS to IP based messaging services like Whatsapp in many countries of the world. Businesses are trying to kill email with tools like Slack. We now face the BYOD phenomenon, where employees use whatever device and tools they see fit to get their work done – and enterprises find it hard to force them to use specific tools.

If Hillary Clinton can use her own private email server during the course of her workday, what should others say or do?

While regulation is slow to catch up, I think some believe the time is ripe for that to happen. And having a messaging system that is fit for duty in those industries that are sensitive today means being able to support future regulation in other/all industries later.

This trend might explain the urgency – and the amount of capital – that Symphony has been able to attract.

Google

Why did Google invest here? Why not Google Ventures? It doesn’t look like an Alphabet investment but rather a Google one. And why invest and not acquire?

Google’s messaging assets today include:

Jibe/RCS is about the consumer space and an SMS replacement in the long run. It may be targeted at Apple. Or Facebook. Or Skype. Or all of them.

None of its current assets is making a huge impact. They aren’t dominant in their markets.

And messaging may be big in the consumer space, but the money is in the enterprise – it can be connectivity to enterprises, ecommerce or pure service. Google is finding it difficult there as well.

Symphony is a different approach to the same problem. It targets the enterprise directly, focusing on highly regulated customers. Putting money into it as an investment is a no-brainer, especially if it includes, for example, a right of first refusal on any future acquisition proposal. So Google sits and waits, sees what happens with this market, and decides how to continue.

Is this a part of a bigger picture? A bigger move of Google in the messaging space? Who knows? I still can’t figure out the motivation behind this one…

Messaging and me

I’ve been writing on general messaging topics on and off throughout the years on this blog.

It seems this space is becoming a lot more active recently.

Expect more articles here about this topic of messaging from various angles in the near future.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post Google Goes All in for Messaging, Invests in Symphony appeared first on BlogGeek.me.

FreeSWITCH Week in Review (Master Branch) October 3rd-October 10th

FreeSWITCH - Tue, 10/13/2015 - 00:53

Hello, again. This past week in the FreeSWITCH master branch we had 51 commits! There were some very important changes this week to the Debian packaging system. The default is now set to build packages with the upstream FS package repos. Since the system dependencies have been removed from the FS codebase, the 1.6 branch is now required to use the FS public repo for dependencies. The notable feature for this week is the addition of the variable media_mix_inbound_outbound_codecs, which mixes inbound and outbound codecs; note that this is a behavior change.

Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.

New features that were added:

  • FS-8290 [verto_communicator] Automatically mark dedicated encoder if out/in bandwidth isn’t set to ‘Server default’
  • FS-8290 [verto_communicator] Adding help text on how to enable dedicated remote encoder
  • FS-8321 [core] Add variable media_mix_inbound_outbound_codecs to mix inbound and outbound codecs. BEHAVIOR CHANGE

Improvements in build system, cross platform support, and packaging:

  • FS-8316 [build][Debian] Fixed new build warning from latest clang and resolved the build warnings in the modules too
  • FS-8271 [Debian] Added some logging and more cautious handling of spaces in parameters. The default is now to build packages with the upstream FS package repos. This is a change in the default behavior of the Debian packaging system, with the justification that 1.6 now requires the FS public repo for dependencies, because the system dependencies that used to be included have been removed from the FS codebase. Binary dependencies are now downloaded automatically by default, because without major changes to package building in cowbuilder (the primary supported method of building FS packages) you can’t access the network to build the binary packages from the source package. If using the system apt repo list, the supplementary ones are included too
  • FS-8233 [automation] In order to clean up build dependencies for the automated tests, convert the tests/*/Makefile.am into an include file for the top level Makefile.am. This will greatly simplify dependency tracking, and allow tests to be rerun easily on FS source code changes.
  • FS-7820 [automation] Use a more appropriate function for printing diagnostics

The following bugs were squashed:

  • FS-8243 [mod_opus] Adding back the missing part removed in 8b088c2 so FEC works in most surroundings
  • FS-8295 [mod_opus] FMTP fixes to continue the cleanup of FEC
  • FS-8302 [mod_opus] Fix some printing/logging because switch_opus_show_audio_bandwidth() was not returning TRUE/FALSE as expected
  • FS-8130 FS-8305 [mod_opus] Fix some warnings and errors caused by dtx and/or jittery webrtc, refactor of last patch, and add suppression of scary harmless message about opus FEC
  • FS-8296 [mod_opus] Improve the way Opus is initialized when a call comes in
  • FS-8179 [mod_opus] Fixed a regression setting fec_decode breaking output on stereo calls
  • FS-8297 [mod_conference] A fix for auto STUN switching IPs quickly and WebRTC video not working
  • FS-8130 [mod_conference] Fix for micro cut-offs and unstable voice issues
  • FS-8317 [mod_conference] Fix for playing multiple files at once: they are now stacked for immediate playback instead of sometimes breaking and leaving the floor layer unusable for the rest of the conference
  • FS-8067 [verto_communicator] When no email is present, make sure mm is the default avatar in the circle; this way the talk indicator works for PSTN and SIP callers
  • FS-8247 [verto_communicator] When websocket disconnects go to splash screen to wait for the reconnect
  • FS-8300 [verto_communicator] Fixing reload bug so reloading twice is no longer needed
  • FS-8315 [core] Fix for rtp_media_timeout not working
  • FS-8304 [core] Fix for choppy audio during calls
  • FS-8320 [core] Fixed broken ZRTP not responding to HELLO packet
  • FS-8311 [mod_voicemail] Fix for leave-message event not containing verbose data for a forwarded voicemail
  • FS-8318 [mod_av] Fix for recording being out of sync when video from chrome has packet loss
  • FS-7929 [mod_sofia] Fixed an issue when processing SIP messages while using camp-on
  • FS-6833 [mod_sofia] Add content-type header to ack with sdp

 

And, this past week in the FreeSWITCH 1.4 branch we had 3 new commits merged in from master. And the FreeSWITCH 1.4.23 release is here! Go check it out!

The following bugs were squashed:

  • FS-8246 [mod_json_cdr] Use seconds as default value for delay parameter
  • FS-8282 [core] Fix for sleep is not allowing interruption by uuid_transfer
  • FS-8166 [core] Mute/unmute while shout is playing audio fails because the channel “has a media bug, hard mute not allowed”

 

Microsoft’s ORTC Edge for WebRTC – Q&A with Bernard Aboba

webrtchacks - Mon, 10/12/2015 - 17:56

We have been waiting a long time for Microsoft to add WebRTC to its browser portfolio. That day finally came last month when Microsoft announced its new Windows 10 Edge browser had ORTC. This certainly does not immediately address the Internet Explorer population and ORTC is still new to many (which is why we cover it often). On the positive side, interoperability between Edge, Chrome, and Firefox on the audio side was proven within days by multiple parties. Much of ORTC is finding its way into the WebRTC 1.0 specification and browser implementations.

I was with Bernard Aboba, Microsoft’s WebRTC lead at the IIT Real Time Communications Conference (IIT-RTC) and asked him for an interview to cover the Edge implementation and where Microsoft is headed. The conversation below has been edited for readability and technical accuracy. The full, unedited audio recording is also available below if you would rather listen than read. Warning – we recorded our casual conversation in an open room off my notebook microphone, so please do not expect high production value.

https://webrtchacks.com/wp-content/uploads/2015/10/Bernard-Aboba-QA.mp3

We cover what exactly is in Edge ORTC implementation, why ORTC in the first place, the roadmap, and much more.

You can view the IIT-RTC ORTC Update presentation slides given by Bernard, Robin Raymond of Hookflash, and Peter Thatcher of Google here.

{"editor": "chad hart"}

Microsoft’s Edge is hungry for WebRTC

Intro to Bernard

webrtcHacks: Hi Bernard. To start out, can you please describe your role at Microsoft and the projects you’ve been working on? Can you give a little bit of background about your long time involvement in WebRTC Standards, ORTC, and also your new W3C responsibilities?

Bernard: I’m a Principal Architect at Skype within Microsoft, and I work on the Edge ORTC project primarily, but also help out other groups within the company that are interested in WebRTC. I have been involved in ORTC since the very beginning as one of the co-authors of ORTC, and very recently, signed up as an Editor of WebRTC 1.0.

webrtcHacks:  That’s concurrent with some of the agreement around merging more of ORTC into WebRTC going forward. Is that accurate?

Bernard: One of the reasons I signed up was that I found that I was having to file WebRTC 1.0 API issues and follow them. Because many of the remaining bugs in ORTC related to WebRTC 1.0, and of course we wanted the object models to be synced between WebRTC 1.0 and ORTC, I had to review pull requests for WebRTC 1.0 anyway, and reflect the changes within ORTC. Since I had to be aware of WebRTC 1.0 Issues and Pull Requests to manage the ORTC Issues and Pull Requests, I might as well be an editor of WebRTC 1.0.

Bernard Aboba of Microsoft and Robin Raymond of Hookflash discussing ORTC at the IIT Real Time Communications Conference (IIT-RTC)

What’s in Edge

webrtcHacks:  Then I guess we’ll move on to Edge then. Edge and Edge Preview are out there with varying forms of WebRTC. Can you walk through a little bit of that?

Bernard: Just also to clarify for people, Edge ORTC is in what’s called Windows Insider Preview.  Windows Insider Preview builds are only available to people who specifically sign up to receive them.  If you sign up for the Windows Insider Preview program and install the most recent build 10547, then you will have access to the ORTC API in Edge. In terms of what is in it, the audio is relatively complete. We have:

  • G.711,
  • G.722,
  • Opus,
  • Comfort Noise,
  • DTMF, as well as the
  • SILK codec.

Then on the video side, we have an implementation of H.264/SVC, which does both simulcast and scalable video coding, as well as forward error correction (FEC); it is known as H.264UC. I should also mention, we support RED and forward error correction for audio as well.

That’s what you will find in the Edge ORTC API within Windows Insider Preview, as well as support for “half-trickle” ICE, DTLS 1.0, etc.
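For developers poking at the preview, a quick way to see this list for yourself is to query the ORTC capability objects rather than parse SDP. The sketch below follows the ORTC draft’s RTCRtpSender.getCapabilities(); the exact codec name strings (e.g. “H264UC”) and the way Edge exposes the constructor are assumptions to verify against the preview build.

```typescript
// Minimal capability probe (a sketch, not official Edge documentation).
// The ORTC constructors live on window in Edge's preview; cast through any so
// this compiles regardless of the TypeScript DOM typings in use.
const OrtcRtpSender: any = (window as any).RTCRtpSender;

function hasCodec(kind: 'audio' | 'video', codecName: string): boolean {
  // getCapabilities() is a static, SDP-free capability query in the ORTC API
  const caps = OrtcRtpSender.getCapabilities(kind);
  return caps.codecs.some((c: any) => c.name.toLowerCase() === codecName.toLowerCase());
}

console.log('Opus?  ', hasCodec('audio', 'opus'));
console.log('SILK?  ', hasCodec('audio', 'SILK'));
console.log('H264UC?', hasCodec('video', 'H264UC'));
```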

webrtcHacks: I’ll include the slide from your presentation for everyone to reference because there’s a lot of stuff to go through. I do have a couple of questions on a few things for follow up. One was about support on the video side of things. I think you mentioned external FEC and also talked about other aspects of robustness, such as retransmission?

Bernard’s slide from IIT-RTC 2015 showing Edge’s ORTC coverage

Bernard: Currently in Edge ORTC Insider Preview, we do not support generic NACK or re-transmission.  We do support external forward error correction (FEC), both for audio and video.  Within Opus as well as SILK we do not support internal FEC, but you can configure RED with FEC externally.  Also, we do not support internal Discontinuous Operation (DTX) within Opus or SILK, but you can configure Comfort Noise (CN) for use with the audio codecs, including Opus and SILK.

Video interoperability

webrtcHacks: Then could you explain H.264UC, for the majority of people out there who aren’t familiar with the old Lync, or Skype for Business as it is now called?

Bernard: Basically, H.264 UC supports spatial simulcast along with temporal scalability in H.264/SVC, handled automatically “under the covers”.  These are basically the same technologies that are in Hangouts with VP8.   While the ORTC API offers detailed control of things like simulcast and SVC, in many cases, the developer just basically wants the stack to do the right thing, such as figuring out how many layers it can send. That’s what H.264UC does.  It can adapt to network conditions by dropping or adding simulcast streams or temporal layers, based on the bandwidth it feels is available. Currently, the H.264UC codec is only supported by Edge.
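To make the “detailed control” point concrete, here is a rough sketch of what asking a sender for simulcast plus a temporal layer looks like in ORTC terms. Field names follow the ORTC draft’s RTCRtpEncodingParameters; treating this as an exact Edge/H.264UC recipe would be an assumption, and as Bernard notes the stack can work the layering out on its own.

```typescript
declare const videoSender: any; // an ORTC RTCRtpSender already built on ICE/DTLS transports

const videoEncodings = [
  // two spatial simulcast layers...
  { encodingId: 'q', active: true, resolutionScale: 2.0, maxBitrate: 300000 },
  { encodingId: 'h', active: true, resolutionScale: 1.0, maxBitrate: 1200000 },
  // ...plus a temporal layer that depends on the 'h' stream
  { encodingId: 'h-t1', active: true, dependencyEncodingIds: ['h'], framerateScale: 2.0 },
];

// The codec portion of the parameters would normally be derived from
// RTCRtpSender.getCapabilities('video'); only the encodings are shown here.
videoSender.send({ encodings: videoEncodings });
```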

webrtcHacks:  Is the base layer H.264?

Bernard: Yes, the base layer is H.264 but RFC 6190 specifies additional NAL Unit types for SVC, so that an implementation that only understands the base layer would not be able to understand extension layers.  Also, our implementation of RFC 6190 sends layers using distinct SSRCs, which is known as Multiple RTP stream Single Transport (MRST).  In contrast, VP8 uses Single RTP stream Single Transport (SRST).

We are going to work on an implementation of H.264/AVC in order to interoperate.  As specified in RFC 6184 and RFC 6190, H.264/AVC and H.264/SVC have different codec names.

webrtcHacks:  For Skype, at least, in the architecture that was published, they showed a gateway. Would you expect other people to do similar gateways?

Bernard: Once we support H.264/AVC, developers should be able to configure that codec, and use it to communicate with other browsers supporting H.264/AVC.  That would be the preferred way to interoperate peer-to-peer.  There might be some conferencing scenarios where it might make sense to configure H.264UC and have the SFU or mixer strip off layers to speak to H.264/AVC-only browsers, but that would require a centralized conferencing server or media relay that could handle that. 

Roadmap

webrtcHacks:  What can you say about the future roadmap? Is it basically what’s on the dev.modern.ie page?

Bernard: In general, people should look at the dev.modern.ie web page for status, because that has the most up-to-date information. In fact, I often learn about things from the page. As I mentioned, the Screen Sharing and Media Recorder specifications are now under consideration, along with features that are in preview or are under development.  The website breaks down each feature.  If the feature is in Preview, then you can get access to it via the Windows Insider Preview.  If it is under development, this means that it is not yet in Preview.  Features that are supported have already been released, so if you have Windows 10, you should already have access to them.

Slide from Bernard’s IIT-RTC 2015 presentation covering What’s in Edge

In terms of our roadmap, we made a roadmap announcement in October 2014 and are still executing on things such as H.264, which we have not delivered yet.  Supporting interoperable H.264 is about more than just providing an encoder/decoder, which we have already delivered as part of H.264UC.  The IETF RTCWEB Video specification provides guidance on what is needed to provide interoperable H.264/AVC, but that is not all that a developer needs to implement – there are aspects that are not yet specified, such as bandwidth estimation and congestion control.

Beyond the codec bitstream, RTP transport and congestion control there are other aspects as well.  For example, I mentioned robustness features such as Forward Error Correction and Retransmission.   A Flexible FEC draft is under development in IETF which will handle burst loss (distances greater than one).  That is important for robust operation on wireless networks, for both audio and video.  Today we have internal FEC within Opus, but that does not handle burst loss well.

webrtcHacks: Do you see Edge pushing the boundaries in this area? 

Bernard: One of the areas where Edge ORTC has advanced the state of the art is in external forward error correction (FEC) as well as in statistics.  Enabling external FEC to handle burst loss provides additional robustness for both audio and video.  We also support additional statistics which provide information on burst loss and FEC operation.  What we have found is that burst loss is a fact of life on wireless networks, so being able to measure this and to address it is important. The end result of this work is that Edge should be more robust than existing implementations with respect to burst loss (at least with larger RTTs where retransmission would not be available).  We can also provide burst loss metrics, which other implementations cannot currently do.  I should also mention that metrics have been developed in the XRBLOCK WG to address issues of burst loss, concealment, error correction, etc.

Why ORTC?

webrtcHacks:  You have been a long time advocate for ORTC. Maybe you can summarize why ORTC was a good fit for Edge? Why did you start with that spec versus something else? What does it enable you to do now as a result?

Bernard: Some of the advantages of ORTC were indeed advantages, but in implementation we found there were also other advantages we didn’t think of at the time.

Interoperability

Bernard: ORTC doesn’t have SDP [like WebRTC 1.0]; the irony is ORTC allowed us to get to WebRTC 1.0 compatibility and interoperability faster than we would have otherwise. If you look at adapter.js, it’s actually interesting to read that code – the actual code for Edge is actually smaller than for some of the other browsers. One might think that’s weird – why would it take less adaptation for Edge than for anything else? Are we really more 1.0 compatible than 1.0? The answer is, in some respects, we are, because we don’t generate SDP that somebody needs to parse and reformat. It certainly saves a lot of development to not have to write that code, to have control in JavaScript, and to be able to modify it easily in case people find bugs in it.

The irony is ORTC allowed us to get to WebRTC 1.0 compatibility and interoperability faster than we would have otherwise

Connection State Details

The other thing we found about ORTC that we didn’t quite understand early on was that it gives you detailed status of each of the transports – each of your ICE transports. Particularly when you’re dealing with situations like multiple interfaces, you actually get information about failure conditions that you don’t get out of WebRTC 1.0.

It’s interesting to look at 1.0 – one of the reasons that I think people will find the objects interesting in 1.0 is because you actually need that kind of diagnostic information. The current connection state [in the current WebRTC] is not really enough – it’s not even clear what it means. It says in the spec that it’s about ICE, but it really combines ICE and DTLS. With the object model, you know exactly which ICE transport went down or if DTLS is in some weird state. Actually for diagnostics, details of the connection state are pretty important. It’s one of the most frequently requested statistical things. That was a benefit we didn’t anticipate, that we found is pretty valuable and will be coming into 1.0.
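As an illustration of that per-transport visibility, the sketch below wires up the separate ORTC objects and listens to each one’s state. Constructor and event-handler names follow the ORTC draft as implemented in Edge’s preview; treat them (and the omitted signaling and start() wiring) as assumptions rather than a complete recipe.

```typescript
const win = window as any; // ORTC constructors are exposed as globals in Edge's preview
declare const ortcCertificates: any[]; // from RTCCertificate.generateCertificate(); omitted here

const gatherer = new win.RTCIceGatherer({ gatherPolicy: 'all', iceServers: [] });
const ice = new win.RTCIceTransport(gatherer);
const dtls = new win.RTCDtlsTransport(ice, ortcCertificates);

ice.onicestatechange = () => {
  // per-transport ICE state, instead of one blended "connection state"
  console.log('ICE transport state:', ice.state);
};
dtls.ondtlsstatechange = () => {
  // DTLS has its own lifecycle, which is exactly the ICE/DTLS distinction
  // that the single WebRTC 1.0 connection state blurs
  console.log('DTLS transport state:', dtls.state);
};
```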

Many simple scenarios

Bernard: Then there were the simple scenarios. Everyone said, “I don’t need ORTC because I don’t do scalable video coding and simulcast.” Do you ever do hold? Do you ever do changing owners of codecs? All illustrations that Peter [Thatcher] showed in his WebRTC 1.0 presentation. The answer is, a lot of those things are, in fact, common, and were not possible in 1.0. There are a lot of fairly basic benefits that you get as well.

How is Edge’s Media Engine built

webrtcHacks:  In building and putting this in the Edge, you had a few different media engines you could choose from. You had the Skype media engine and a Lync media – you combine them or go and build a new one. Can you reveal the Edge media architecture and how you put that together?

Bernard: What we chose to do in Skype is move to a unified media engine. What we’ve done is, we’ve added WebRTC capabilities into that media engine. That’s a good thing because, for example, things like RTCP MUX and things like BUNDLE are now part of the Skype media engine so we can use them. The idea was to produce something that was unified and would have all the capabilities in one. It took a little bit longer to do it that way, but the benefit is that we get to produce a standardized compliant browser and we also get to use those technologies internally. Now we do not have 3 or 4 different stacks that we would have to rationalize later.

right now, our focus is very much on video, and trying to get that more solid, and more interoperable

Also, I should mention that one thing that is interesting about the way we work is we produce stacks that are both client and server capable. We don’t just produce pure client code that wouldn’t, for example, be able to handle load. Some of those things can go into back-end components as well. That is also true for DTLS and all that. Whether or not we use all those things in Skype is another issue, but it is part of the repertoire for apps. 

More than Edge

webrtcHacks: Is there anything else that’s not on dev.modern.ie that is exposed that a developer would care about? Any NuGet packages with these API’s for example?

Bernard: There are a couple of things. dev.modern.ie does not cover non-browser things in the Windows platform. For example, currently we support DTLS 1.0. We do want to support 1.2, because there are additional cipher suites that are important. For example, the Elliptic Curve stuff we’re seeing going into all the browsers. I think Mozilla already has it, or Chrome has it, or if they don’t, they will very soon. That is actually very important. Elliptic Curve turned out to be more than just a cipher suite issue – the time and effort it takes to generate more secure certificates is large. For RSA-2048 you can actually block the UI thread if you thread the object. Anyway, those are very important things that we don’t cover on dev.modern.ie, but those are the things we obviously have to do.
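As a side note on the certificate cost Bernard mentions: the standard WebRTC 1.0 surface already lets an application opt into an ECDSA certificate instead of the default RSA-2048, which sidesteps the expensive key generation. A minimal sketch using the standard API (not an Edge/ORTC-specific call):

```typescript
async function peerConnectionWithEcCert(): Promise<RTCPeerConnection> {
  // generateCertificate() runs the key generation ahead of time; ECDSA P-256 is
  // far cheaper to generate than RSA-2048
  const cert = await RTCPeerConnection.generateCertificate({
    name: 'ECDSA',
    namedCurve: 'P-256',
  } as EcKeyGenParams);
  return new RTCPeerConnection({ certificates: [cert] });
}
```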

There’s a lot of work and a lot of thinking that’s been going on in the IETF relating to ICE and how to make it better for mobile scenarios. Some of that I don’t think is converged yet, but there’s a new ICE working group. Some of that is in the ortc-lib implementation already. Robin [Raymond] likes to be on the cutting edge so he has done basically the first implementation of a lot of those new technologies. That’s something I think is of general interest – particularly as ORTC moves to mobile.

I should mention, by the way, that the Edge Insider Preview was only for desktop. It does not run on Windows Phone just to clarify that. 

webrtcHacks:  Any plans for embedding the Edge ORTC engine as an IE plugin?

Bernard: An external plugin or something?

webrtcHacks:  Yeah, or a Microsoft plugin for IE that would implement ORTC. 

Bernard: Basically at this point, IE is frozen technology. All the new features, if you look on the website, they all go into Edge. That’s what we’ve been developing for. I never say Microsoft will never do anything, but currently that’s not the thinking. Windows 10 for consumers is a free upgrade. Hopefully, people will take advantage of that and get all the new stuff, including Edge.

Is there an @MSEdgeDev post on the relationship between this and InPrivate? pic.twitter.com/bbu0Mdz0Yd

— Eric Lawrence (@ericlaw) September 22, 2015

A setting discovered in Internet Explorer that appears to address the IP address leakage issue.

Validating ORTC

webrtcHacks:  Is there anything you want to share?

Bernard: I do want to clarify a little bit, I think adapter.js is a very important thing because it validates our original idea that essentially WebRTC 1.0 could be built into the JavaScript layer with ORTC. 

webrtcHacks:  And that happened pretty quick – with Fippo‘s help. Really quick. 

Bernard: Fippo has written all the pull requests. We’re paying a lot of attention to the bugs he’s finding. Obviously, he’s finding bugs in Edge, which hopefully we’ll fix, but he’s also finding spec bugs. It really helps make sure that this compatibility that we’ve promised is actually real. It’s a very interesting process to actually reduce that to code so that it’s not just a vague promise. It has to be demonstrated in software.

Of course what we’ve done is currently with audio. We know that video is more complicated, particularly as you start adding lots and lots of codecs to get that level of compatibility. I wouldn’t say that when Fippo is done with audio it will be the last word. I think we’ll have to pay even more attention to interoperability stuff in the video cases. It will be interesting because video is a lot more complicated.

adapter.js is a very important thing because it validates our original idea that essentially WebRTC 1.0 could be built into the JavaScript layer with ORTC.

What does the Microsoft WebRTC team look like

webrtcHacks:  Can you comment on how big the team is that’s working on ORTC in Edge? You have a lot of moving pieces in different aspects …

Bernard: There’s the people in Edge. There’s the people in Skype. In the Windows system there’s the people on the S-channel team that worked on the DTLS. There’s people all over – for example, the VP9 work that we talked about, was not done by either Skype or the conventional Edge people. It’s the whole Windows Media team. I don’t really know how to get my hands around this, because if you look at all the code we’re using, it’s written by probably, I don’t know, hundreds and hundreds of people. 

webrtcHacks:  And you need to pull it together for purposes of WebRTC/ORTC, is that right?

Bernard: Yeah. We have to pull it together, but there’s a lot there. There’s a lot of teams. There will probably be more teams going forward. People say, “Why don’t you have the datachannel?” The dataChannel isn’t something that would be in Skype’s specific area of expertise. That’s a transfer protocol; it should really be written by people who are experts in transfer protocols, which isn’t either Edge or Skype. It’s not some decision that was made by either of our groups not to do it. We have to find somebody who proves that they can do that work, to take ownership of that.

Feedback please

webrtcHacks:  Any final comments?

Bernard: No. I just encourage people to download the preview, run it, file bugs, and let us know what you think. You can actually vote on the website for new features, which is cool.

We do listen to the input. WebRTC is an expanding thing. There’s a ton of things you can do – there’s all that stuff on the dev.modern.ie site and then there’s internal improvement. Getting a sense of priority – what’s most important to people – is not that easy, because there’s so much that you could possibly focus on. I’d say right now, our focus is very much on video, and trying to get that more solid, and more interoperable, at least for the moment. We can walk and chew gum at the same time. We can do more than just one thing. Conceivably, especially when you look at IE and other teams.

webrtcHacks:  This is great and very insightful. I think it will be a big help to all the developers out there. Thanks!

{
  "Q&A": {
    "interviewer": "chad hart",
    "interviewee": "Bernard Aboba"
  }
}

Want to keep up on our latest posts? Please click here to subscribe to our mailing list if you have not already. We only email post updates. You can also follow us on twitter at @webrtcHacks for blog updates and news of technical WebRTC topics or our individual feeds @chadwallacehart, @victorpascual and @tsahil.

The post Microsoft’s ORTC Edge for WebRTC – Q&A with Bernard Aboba appeared first on webrtcHacks.

Do you Need to test a WebRTC P2P Service?

bloggeek - Mon, 10/12/2015 - 12:00

Yes.

It is a question I get from time to time, especially now, that I am a few months into the WebRTC testing venture as a co-founder with a few partners – testRTC.

The logic usually goes like this: the browsers already support WebRTC. They do their own testing, so what we end up getting is a solid solution we can use. Fippo would say that

If life was that easy… here are a few things you need to take care of when it comes to testing the most simple of WebRTC services:

#1 – Future proofing browser versions

Guess what? Things break. They also change. Especially when it comes to WebRTC.

A few interesting tidbits for you:

  • Google is dropping HTTP support for getUserMedia, so services must migrate to HTTPS. Before year end
  • The echo canceller inside WebRTC? It was rewritten. From scratch. Using a new algorithm. That is now running on a billion devices. Different devices. And it works! Most times
  • WebRTC’s getStats() API is changing. Breaking its previous functionality

And the list goes on.

WebRTC is a great technology, but browsers are running at breakneck speeds of 6-8 weeks between releases (for each browser) – and every new release has the potential to break a service in a multitude of ways – either because of a change in the spec, deprecation of a capability or just bugs.
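One defensive pattern against this kind of churn is to feature-detect the API shape at runtime instead of assuming it. The sketch below, built around the getStats() change mentioned above, prefers the promise-based form and falls back to the older callback form; the exact legacy shape varies by browser version, so the fallback is an assumption to verify against the browsers you target.

```typescript
async function collectStats(pc: RTCPeerConnection): Promise<any[]> {
  const reports: any[] = [];
  try {
    // promise-based form, per the evolving spec
    const stats = await pc.getStats();
    stats.forEach((report) => reports.push(report));
  } catch (e) {
    // legacy callback form (roughly what Chrome exposed at the time);
    // report shapes differ, so everything is kept untyped here
    await new Promise<void>((resolve) => {
      (pc as any).getStats((response: any) => {
        response.result().forEach((r: any) => reports.push(r));
        resolve();
      });
    });
  }
  return reports;
}
```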

Takeaway: Make sure your service works not only on the stable version of the browsers, but also on their beta or even dev versions.

#2 – Media relay

Your service might be a P2P service, but at times, you will need to relay media through TURN servers.

The word on the street is that around 15% of sessions require relay. To some it can be 50% and to others 8% (real numbers I heard from running services).

Media relay is tricky:

  • You need to configure it properly (many fall at this one)
  • You need to test it in front of different firewall and NAT configurations
  • You need to make it close to your users (you don’t want a local session in Paris to get relayed through a server in San Francisco)
  • You need to test it for scale (check the next point for more on that)

Takeaway: Don’t treat WebRTC as a browser side technology only, or something devoid of media handling. Even if the browser does most of the heavy lifting, some of the effort (and responsibility) will lie on your service.
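One practical way to exercise the relay path during testing is to force the browser to use only TURN candidates, so every session goes through your servers and a misconfiguration shows up immediately. A minimal sketch of such a client configuration follows; the TURN URLs and credentials are placeholders, and real deployments usually hand out short-lived credentials from their own backend.

```typescript
const relayOnlyConfig: RTCConfiguration = {
  iceServers: [
    {
      // offer TURN over UDP, TCP and TLS so restrictive firewalls still have a path
      urls: [
        'turn:turn-eu.example.com:3478?transport=udp',
        'turn:turn-eu.example.com:443?transport=tcp',
        'turns:turn-eu.example.com:443',
      ],
      username: 'test-user',
      credential: 'test-secret',
    },
  ],
  // 'relay' forces TURN-only ICE candidates for this connection
  iceTransportPolicy: 'relay',
};

const pc = new RTCPeerConnection(relayOnlyConfig);
```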

#3 – Server scale

Can your server cater for 200 sessions in parallel to fit that contact center? What about 1,000?

What will happen if you have a horde effect due to a specific event? Can you handle that number of browsers hitting your service at once? Does your website operate with the same efficiency for the 1,000th person as it does for the first?

This relates both to your signaling server – which is not part of WebRTC, but is there as part of your service – AND to your media server from my previous point.

Takeaway: Make sure your service scales to the capacities that it needs to scale. Oh – and you won’t be able to test it manually with the people you have with you in your office…

#4 – Service uptime

You tested it all. You have the perfect release. The service is up and running.

How do you make sure it stays running?

Manually? Every morning come in to the office and run a session?

Use Pingdom to make sure your site is up? Go to the extreme of using New Relic to check the servers are up, the CPUs aren’t overloaded and the memory use seems reasonable? Great. But does that mean your service is running and people can actually connect sessions? Not necessarily.

Takeaway: End-to-end monitoring. Make sure your service works as advertised.
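A sketch of what such an end-to-end check could look like: two local RTCPeerConnections are wired back-to-back against the same ICE servers your clients use, and the probe only passes if ICE actually reaches the connected state within a timeout. This is an illustrative pattern, not a description of any particular monitoring product; combining it with the relay-only configuration from earlier also exercises your TURN servers.

```typescript
async function probeMediaPath(config: RTCConfiguration, timeoutMs = 10000): Promise<boolean> {
  const a = new RTCPeerConnection(config);
  const b = new RTCPeerConnection(config);

  // trickle candidates directly between the two local peers
  a.onicecandidate = (e) => { if (e.candidate) b.addIceCandidate(e.candidate); };
  b.onicecandidate = (e) => { if (e.candidate) a.addIceCandidate(e.candidate); };

  // a data channel is enough to drive ICE/DTLS without capturing any media
  a.createDataChannel('probe');

  const offer = await a.createOffer();
  await a.setLocalDescription(offer);
  await b.setRemoteDescription(offer);
  const answer = await b.createAnswer();
  await b.setLocalDescription(answer);
  await a.setRemoteDescription(answer);

  return new Promise<boolean>((resolve) => {
    const finish = (ok: boolean) => { a.close(); b.close(); resolve(ok); };
    const timer = setTimeout(() => finish(false), timeoutMs);
    a.oniceconnectionstatechange = () => {
      if (a.iceConnectionState === 'connected' || a.iceConnectionState === 'completed') {
        clearTimeout(timer);
        finish(true);
      }
    };
  });
}
```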

The ugly truth about testing

The current norm in many cases is to test manually. Or not test at all. Or rely on unit testing done by developers.

None of this can work if what you are trying to do is create a commercial service, so take it seriously. Make testing a part of your development and deployment process.

And while we’re at it…

Check us out at testRTC

If you don’t know, I am a co-founder with a few colleagues at a company called testRTC. It can help you with all of the above – and more.

Leave us a note on the contact page there if you are interested in our paid service – it can cater to your testing needs with WebRTC as well as offering end-to-end monitoring.

 

Need to test WebRTC?

 

The post Do you Need to test a WebRTC P2P Service? appeared first on BlogGeek.me.

Fone.do and WebRTC: An Interview With Moshe Maeir

bloggeek - Thu, 10/08/2015 - 12:00
Check out all webRTC interviews >>

Fone.Do: Moshe Maeir

October 2015

SMB phone system

Disrupting the hosted PBX system with WebRTC.

[If you are new around here, then you should know I’ve been writing about WebRTC lately. You can skim through the WebRTC post series or just read what WebRTC is all about.]

 

There’s no doubt that WebRTC is disrupting many industries. One of the obvious ones is enterprise communications, and in this space, an area that has got little attention on my end (sorry) is the SMB – where a small company needs a phone system to use and wants to look big while at it.

Moshe Maeir, Founder at Fone.do, just launched the service out of Alpha. I have been aware of what they were doing for quite some time and Moshe took the time now that their service is public to answer a few of my questions.

 

What is Fone.do all about?

Fone.do is a WebRTC based phone system for small businesses that anyone can set up in 3 minutes. It replaces both legacy PBX systems that were traditionally based in your communications closet and also popular Hosted PBX systems. Businesses today are mobile and the traditional fixed office model is changing. So while you can connect a SIP based IP phone to our system, we are focused on meeting the needs of the changing business world.

 

Why do small businesses need WebRTC at all? What’s the benefit for them?

You could ask the same question about email, social networks etc. Why use web based services at all? Does anyone want to go back to the days of “computer programs” that you downloaded and installed on your computer? Unfortunately, many still see telephony and communications as a stand alone application. WebRTC changes this. Small businesses can communicate from any place and any device as long as they have a compatible platform.

 

What excites you about working in WebRTC?

Two things. Not sure which is more exciting. First of all. If I build something great – the whole world is my potential market. All they need is a browser and they are using our system in 3 minutes. The other exciting aspect is that telephony is no longer a closed network. Once you are on the web the potential is unlimited. You can easily connect your phone system to the wealth of data and services that already exist on the web and take communications to a new level. In fact, that is why we hired developers who knew nothing about telephony but were experienced in web development. The results are eye opening for traditional telecom people.

 

I know you’re a telecom guy yourself. Can you give an example how working with web developers was an eye opener to you?

There are many. The general attitude is just do it. With legacy telecom, everything has the accepted way of doing things and you don’t want to try anything new without extended testing procedures. A small example – in the old VoIP days writing a “dial plan” was a big thing. When we came to this issue on Fone.Do, one of the programmers naturally googled the issue and found a Google service that will automatically adapt the dial plan based on the user’s mobile number. 1-2-3 done.

 

Backend. What technologies and architecture are you using there?

Our main objective was to build an architecture that will work well and easily scale in the cloud (we are currently using AWS). So while we have integrated components such as the Dialogic XMS and the open source Restcomm, we wrote our own app server which manages everything. This enables us to freely change back-end components if we need to.

 

Can you tell us a bit about your team? When we talked about it a little over a year ago, I suggested a mixture of VoIP and web developers. What did you end up doing and how did it play out?

All our developers are experienced front end and backend web programmers with no telecom experience. However, our CTO who designed the system has over 15 years of experience in Telecom, so he is there to fill in any missing pieces. There were some bumps at the beginning, but I am very happy we did it this way. You can teach a web guy about telephony, but it is very hard to get a telecom guy to change his way of thinking. Telecom is all about “five nines” and minimizing risk. Web development is more about innovation and new functionality. With today’s technology it is possible to innovate and be almost as reliable as traditional telephony.

 

Where do you see WebRTC going in 2-5 years?

Adoption is slower than I expected, but eventually I see it as just another group of functions in your browser that developers can access as needed.

 

If you had one piece of advice for those thinking of adopting WebRTC, what would it be?

WebRTC is here. It makes your user experience better – so what are you waiting for?

 

What’s next for Fone.do?

We recently released our alpha product and we are looking to launch an open beta in the next couple of months. Besides a web based “application”, we also have applications for Android and iOS.

The interviews are intended to give different viewpoints than my own – you can read more WebRTC interviews.

The post Fone.do and WebRTC: An Interview With Moshe Maeir appeared first on BlogGeek.me.

The Next Wave - KazooCon 2015

2600hz - Thu, 10/08/2015 - 11:47
The Next Wave - KazooCon 2015 from James Solada

CTO Karl Anderson discusses the state of Kazoo. This includes integrations with FreeSWITCH, Erlang, and Kamailio. Reseller milestones include the release of whitelabeling, webhooks, migration, carriers, debugging, account management and more.

Detecting and Managing VoIP Fraud - KazooCon 2015

2600hz - Thu, 10/08/2015 - 11:34
Detecting and Managing VoIP Fraud from James Solada

This is an overview of VoIP fraud, different types of fraud and what telecommunication carriers are doing to combat this issue. Types of fraud include International / Premium Number Fraud, Impersonation / Social Engineering, and Service Degradation / Denial of Service. Presented by Mark Magnusson at KazooCon 2015.

Telecom Rating and Limits - KazooCon 2015

2600hz - Thu, 10/08/2015 - 11:20
Telecom Rating and Limits from James Solada

James Aimonetti discusses VoIP Rates, Routing, and Services at KazooCon 2015.

Billing Data with Kazoo - KazooCon 2015

2600hz - Thu, 10/08/2015 - 11:12

Billing Data with Kazoo from James Solada

Product Director Aaron Gunn discusses billing options for SaaS and IaaS customers. This includes the CDR API, AMQP, and integrating VoIP billing platforms.

Tuning Kazoo to 10,000 Handsets - KazooCon 2015

2600hz - Thu, 10/08/2015 - 11:09
Tuning Kazoo to 10,000 Handsets - KazooCon 2015 from James Solada

People love to talk about scale. Some vendors pitch that their systems easily support 100,000 simultaneous calls, or 500 calls per second, etc. The reality is, in the real world, people’s behaviors vary and the feature sets they use can cut these numbers down quickly. For example, ask that same vendor claiming 100,000 simultaneous calls if it can be done while call recording, call statistics and other features are turned on at the same time, and you’ll usually get a very different, cautious, qualified response.

In this presentation, we’ll show you how to set up your infrastructure to support 100,000 simultaneous calls.


4 Good Reasons for Using HTTP/2

bloggeek - Tue, 10/06/2015 - 12:00

HTTP/2 is too good to pass.

If you don’t know much about HTTP/2 then check out this HTTP/2 101 I wrote half a year ago.

In essence, it is the next version of how we all get to consume the web over a browser – and it has been standardized and deployed already. My own website here doesn’t yet use it because I am dependent on the third parties that host my service. I hope they will upgrade to HTTP/2 soon.

Watching this from the sidelines, here are 4 good reasons why you should be using HTTP/2. Not tomorrow. Today.

#1 – Page Load Speed

This one is a no-brainer.

A modern web page isn’t a single resource that gets pulled towards your browser for the pleasure of your viewing. Websites today are built with many different layers:

  • The core of the site itself, comprising your good old HTML and CSS files
  • Additional JavaScript files – either because you picked them yourself (jQuery or some other piece of interactive code) or through a third party (Angular framework, ad network, site tracking code, etc.)
  • Additional JavaScript and CSS files coming from different add-ons and plugins (WordPress is fond of these)
  • Images and videos. These may be served from your server or via a CDN

At the time of writing, my own website’s homepage takes 116 requests to render. These requests don’t come from a single source, but rather from a multitude of them, and that’s when I am using weird hacks such as CSS sprites to reduce the number of resources that get loaded.

There’s no running away from it – as we move towards richer experiences, the resources required to render them grow.

A small HTTP/2 demo that CDN77 put in place shows exactly that difference – loading 200 small images onto a page over either HTTP/1.1 or HTTP/2 shows the improved load times of the latter.
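You can reproduce a rough version of that comparison yourself by timing how long the same batch of small images takes to load from an HTTP/1.1 host versus an HTTP/2 host. The host names and image paths in this sketch are placeholders.

```typescript
function timeImageLoad(baseUrl: string, count: number): Promise<number> {
  const start = performance.now();
  const loads = Array.from({ length: count }, (_, i) =>
    new Promise<void>((resolve, reject) => {
      const img = new Image();
      img.onload = () => resolve();
      img.onerror = () => reject(new Error(`failed to load ${img.src}`));
      // cache-buster so repeated runs actually hit the network
      img.src = `${baseUrl}/tile-${i}.png?nocache=${Date.now()}`;
    })
  );
  return Promise.all(loads).then(() => performance.now() - start);
}

// usage with hypothetical hosts:
// timeImageLoad('https://http1.example.com/images', 200).then((ms) => console.log('HTTP/1.1:', ms));
// timeImageLoad('https://http2.example.com/images', 200).then((ms) => console.log('HTTP/2:', ms));
```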

HTTP/2 has some more features that can be used to speed up web page serving – we just need to collectively start adopting it.

#2 – Avoiding Content Injection

In August, AT&T was caught using ad injection. Apparently, AT&T ran a pilot where people accessing the internet via its WiFi hotspots in airports got ads injected into the pages they browsed.

This means that your website’s ads could be replaced with those used by a third party – who will get the income and insights coming from the served ads. It can also mean that your website, which doesn’t really have ads, now shows them. The control freak that I am, this doesn’t sound right to me.

While HTTP/2 allows both encrypted and unencrypted content to be served, only the encrypted variant is supported by browsers today. You get the added benefits of encryption when you deploy HTTP/2. This makes it hard, if not impossible, to inject third-party ads or content into your site.

#3 – Granularity

During that same August (which was the reason this post was planned to begin with), Russia took the stupid step of blocking Wikipedia. This move lasted less than a week.

The reason? Apparently inappropriate content in a Wikipedia page about drugs. Why was the ban lifted? You can’t really block a site like Wikipedia and get away with it. Now, since Wikipedia uses encryption (SPDY, the predecessor of HTTP/2 in a way), Russia couldn’t really block specific pages on the site – it is an all or nothing game.

When you shift towards an encrypted website, external third parties can’t see what pages get served to viewers. They can’t monetize this information without your assistance and they can’t block (or modify) specific pages either.

And again, HTTP/2 is encrypted by default.

#4 – SEO Juice

Three things that make HTTP/2 good for your site’s SEO:

  1. Encrypted by default. Google is making moves towards giving higher ranking for encrypted sites
  2. Shorter page load times translate to better SEO
  3. As Google migrates its own sites to HTTP/2, expect to see them giving it higher ranking as well – Google is all about furthering the web in this area, so they will place either a carrot or a stick in front of business owners with websites

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post 4 Good Reasons for Using HTTP/2 appeared first on BlogGeek.me.

FreeSWITCH Week in Review (Master Branch) September 26th-October 2nd

FreeSWITCH - Mon, 10/05/2015 - 22:58

Hello, again. This past week in the FreeSWITCH master branch we had 90 commits! Most of the features for this week went toward the Verto Communicator: a source map file was created; the reset banner action, floor and presenter badges, and a locked icon in floorLocked status were added; an About screen was added with version information, links to FS.org, and a link to the Confluence documentation for VC; and mute/unmute audio/video was made clickable. Other features this week include: refactoring the local_stream API to be more consistent and add auto complete, compatibility with Solaris 11 process privileges, improvements to the way FEC info is detected within frames by adding support for ptimes higher than 20 ms for FEC detection, and improvements to jitter buffer debugging in mod_opus.

Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.

New features that were added:

  • FS-8243 [mod_opus] Improve the way FEC info is detected within frames by adding support for ptimes higher than 20 ms for FEC detection
  • FS-8161 [mod_opus] Keep FEC enabled only if loss > 10 (otherwise PLC is supposed to be better)
  • FS-8179 [mod_opus] Improvement on new jitter buffer debugging (debug lookahead FEC)
  • FS-8254 [verto_communicator] Create a source map file
  • FS-8263 [verto_communicator] Created the reset banner action, floor and presenter badges, and lock icon in floorLocked status
  • FS-8288 [verto_communicator] Added an About screen with version information and links to FS.org and added a link to Confluence with documentation for VC
  • FS-8289 [verto_communicator] Make mute/unmute audio/video clickable
  • FS-8287 [mod_local_stream] Refactor local_stream API to be more consistent and add auto complete
  • FS-8195 [core] Compatibility with Solaris 11 process privileges

Improvements in build system, cross platform support, and packaging:

  • FS-8236 Fixed build without libyuv on compilers that error on unused static function and fixed ifdefs for building without libyuv
  • FS-8239 [mod_av] Fixed the default value to avoid failed build on CentOS 7
  • FS-8255 [Debian] Fixed codename changes since Jessie was released as stable
  • FS-8271 [Debian] Simplify package building for the default case
  • FS-8270 [Debian] Fix for package installation failing if /etc/freeswitch/tls is missing
  • FS-8285 [Debian] Removed heart attack inducing warning message when updating deb packages
  • FS-7817 Removed use of _NONSTD for Windows builds of spandsp, so (hopefully) eliminate compatibility problem

The following bugs were squashed:

  • FS-8221 [verto_communicator] Fix number in call history
  • FS-8223 [verto_communicator] Fixing members list layout when callerid is too long
  • FS-8225 [verto_communicator] Avoid duplicate members when recovering calls
  • FS-8214 [verto_communicator] Better handling calls in VC, answering them respecting useVideo param
  • FS-8291 [verto_communicator] Fixed contributors url
  • FS-8229 [verto_communicator] Changing moderator actions bullet menu color to #333
  • FS-8219 [verto_communicator] Fix for camera not deactivating after init or after hangup
  • FS-8245 [verto_communicator] Fix for Video Resolutions available in “Video Quality” drop down not always correct
  • FS-8251 [verto_communicator] Factory reset now clears all local storage
  • FS-8257 [verto_communicator] Fixed configuration provision url because configuration doesn’t work with `grunt serve` and non pathname urls
  • FS-8273 [verto] [verto_communicator] Clear the CF_RECOVERING flag in a spot that was missed
  • FS-8260 [verto_communicator] Prompt for banner text
  • FS-8232 [mod_conference] Conference sending too many video refresh requests
  • FS-8241 [mod_conference] Fix for conference stops playing video when local_stream changes source
  • FS-8261 [mod_conference] Fixed the conference segfaulting when trying to reset the banner
  • FS-8220 [core] Fix for DTMF not working between telephone-event/48000 A leg and telephone-event/8000 B leg
  • FS-8166 [core] Mute/unmute while shout is playing audio fails because the channel “has a media bug, hard mute not allowed”
  • FS-8252 [core] Fixed a crash in rtp stack on dtls pointer
  • FS-8283 [core] Handle RTP Contributing Source Identifiers (CSRC)
  • FS-8275 [core] Fix for broken DTMF
  • FS-8282 [core] Fix for sleep is not allowing interruption by uuid_transfer
  • FS-8240 [mod_local_stream] Fixed a/v getting out of sync when running in the background and added video profile parameter for recording 264 and made it default
  • FS-8216 [mod_av] Fixed a regression in hup_local_stream from last commit
  • FS-8274 [mod_av] Fixed a memory leak caused by images not being freed in video_thread_run
  • FS-7989 [fixbug.pl] Escape double quotes from summary and added more debugging data
  • FS-8246 [mod_json_cdr] Use seconds as default value for delay parameter
  • FS-8256 [mod_opus] More FMTP cleanup
  • FS-8284 [mod_opus] Use use-dtx setting from config in request to callee.
  • FS-8234 [mod_opus] Send correct (configured) fmtp ptime,minptime,maxptime when originating call

 

And, this past week in the FreeSWITCH 1.4 branch we had 6 new commits merged in from master. And the FreeSWITCH 1.4.23 release is here! Go check it out!

The following bugs were squashed:

  • FS-8190 [mod_event_socket] When using nixevent, freeswitch stops sending us certain custom events that were NOT part of the nixevent command
  • FS-7673 [mod_v8] ODBC NULL value incorrectly evaluated
  • FS-8215 Fixed the accuracy of MacOSX nanosleep
  • FS-8269 [mod_sms] Fixed a build issue
  • FS-8244 [mod_dptools] Fixed a compilation issue

 

 

How NOT to Compete in the WebRTC API Space

bloggeek - Mon, 10/05/2015 - 12:00

Some aspects are now table stakes for WebRTC API Platforms.

There are 20+ vendors out there who are after your communications. They are willing to take up the complexity and maintenance involved with running real time voice and video that you may need in your business or app. Some are succeeding more than others, as it always has been.

So how are you, as a potential customer, going to choose between them?

Here are a few things I’ve noticed in the two years since I first published my report on this WebRTC API space:

  1. Vendors are finding it hard to differentiate from one another. Answering for themselves the question of what they do better than anyone else in this space (or at least better than the vendors they see as their main competitors) isn’t easy
  2. Vendors oftentimes don’t focus. They try to be everything to everyone, ending up being nothing to most. From the sidelines you can see what they are good for – in how they pitch, operate and think – but they can’t see it themselves
  3. Vendors attempt to differentiate over price, quality and ease of use. This is useless.
Table Stakes

Most vendors today have pretty decent quality with a set of APIs that are easy to use. Pricing varies, but is usually reasonable. While some customers are sensitive to pricing, others are more focused on getting their proof of concept or initial beta going – and there, the price differences don’t matter in the short to medium term anyway.

The problem is mainly vendor lock-in: starting to use a specific vendor means sticking with it, due to high switching costs later on. Savvy developers work around this by using multiple vendors or by preparing an adapter layer that abstracts the vendor away.
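To make that adapter idea concrete, here is a minimal sketch in TypeScript. Everything in it is hypothetical: the interface, the method names and the VendorAAdapter are mine for illustration, not any platform’s actual SDK. The point is simply that the application codes against a thin interface it owns, and each vendor’s SDK stays confined to a single adapter module.

    // Session handle returned once a call/room has been joined.
    interface CallSession {
      onRemoteTrack(handler: (track: MediaStreamTrack) => void): void;
      leave(): Promise<void>;
    }

    // A thin, provider-agnostic interface the application codes against.
    // All names here are illustrative, not any vendor's actual API.
    interface CallProvider {
      connect(authToken: string): Promise<void>;
      joinRoom(roomId: string): Promise<CallSession>;
      disconnect(): Promise<void>;
    }

    // One adapter per vendor wraps that vendor's SDK behind the shared interface.
    // "VendorA" is a placeholder, not a real platform.
    class VendorAAdapter implements CallProvider {
      async connect(authToken: string): Promise<void> {
        // initialize Vendor A's SDK with the token here
      }
      async joinRoom(roomId: string): Promise<CallSession> {
        // map Vendor A's room/session object onto CallSession here
        return {
          onRemoteTrack: () => {},
          leave: async () => {},
        };
      }
      async disconnect(): Promise<void> {
        // tear down Vendor A's SDK here
      }
    }

    // Application code only ever sees CallProvider, so moving to another vendor
    // means writing one more adapter, not rewriting the call logic.
    async function startCall(provider: CallProvider, authToken: string, roomId: string): Promise<CallSession> {
      await provider.connect(authToken);
      const session = await provider.joinRoom(roomId);
      session.onRemoteTrack((track) => {
        // attach the remote MediaStreamTrack to a <video> element, etc.
      });
      return session;
    }

The switching cost never disappears completely (signaling semantics, media features and pricing still differ between platforms), but it drops from rewriting the application to filling in one more adapter.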

Vendors need to think more creatively about how they differentiate themselves, from carving out a niche to offering unique value.

My Virtual Coffee

This is the topic for my first Virtual Coffee session, which takes place on October 14.

It is something new that I am trying out – a monthly meeting of sorts. Not really a webinar. But not a conference either.

Every month, I will be hosting an hour long session:

  • It will take place over a WebRTC service – I am dogfooding
  • It will cover a topic related to the WebRTC ecosystem (first one will be differentiation of WebRTC API Platform vendors)
  • It will include time for Q&A. On anything
  • Sessions will be recorded and available for playback later on
  • It is open to my consulting customers and those who purchased my report in the past year

If you are not sure if you are eligible to join, just contact me and we’ll sort things out.

I’d like to thank the team at Drum for letting me use their ShareAnywhere service for these sessions – they were super responsive and working with them on this new project was a real joy for me.

Virtual Coffee #1

Title: WebRTC PaaS Growth Strategies – how WebRTC API vendors differentiate and attempt to grow their business

When: Oct 14, 13:30 EDT

Where: Members only

What’s next?

Want to learn more about this space? The latest update of my report is just what you need

 

The post How NOT to Compete in the WebRTC API Space appeared first on BlogGeek.me.

Kamailio v4.3.3 Released

miconda - Fri, 10/02/2015 - 21:00
Kamailio SIP Server v4.3.3 stable is out – a minor release including fixes in code and documentation since v4.3.2. Configuration file and database compatibility is preserved.

Kamailio (former OpenSER) v4.3.3 is based on the latest version of GIT branch 4.3, therefore those running previous 4.3.x versions are advised to upgrade. There is no change that has to be done to the configuration file or database structure compared with older v4.3.x.

Resources for Kamailio version 4.3.3:

Source tarballs are available at:

Detailed changelog:

Download via GIT:

# git clone git://git.kamailio.org/kamailio kamailio
# cd kamailio
# git checkout -b 4.3 origin/4.3

Binaries and packages will be uploaded at:

Modules’ documentation:

What is new in the 4.3.x release series is summarized in the announcement of v4.3.0:

Android Does… RCS !? What About WebRTC? Hangouts?

bloggeek - Thu, 10/01/2015 - 10:10

Some people are fidgeting on their chairs now, while others are happier than they should be.

I’ll start with a quick disclaimer: I like Google. They know to time their acquisitions to fit my schedule – I just got back from vacation – so I actually have time to cover this one properly.

Let’s start from the end:

Google and Apple are the only companies that can make RCS a reality.

To all intents and purposes, Google just gave RCS the kiss of life it needed.

Google just acquired Jibe Mobile, a company specializing in RCS. The news made it to the Android official blog. To understand the state of RCS, just look at what TechCrunch had to say about it – a pure regurgitation of the announcement, with no additional value or insights. This isn’t just TechCrunch. Most news outlets out there are doing the same.

Dataset subscribers will find the acquisitions table updated with this latest information.

Why on earth is Google investing in something like RCS?

RCS

RCS stands for Rich Communication Suite. It is a GSMA standard that has been around for a decade or so. It is already in version 5.2 or so with little adoption around the world.

What it has on offer is OTT-style messaging capabilities – you know the drill – an address book, some presence information, the ability to send text and other messages between buddies. Designed by committee, it has taken a long time to stabilize – longer than it took Whatsapp to get from 0 to 800. Million. Monthly active users.

The challenge with RCS is the ecosystem it lives in – something that mires other parts of our telecom world as well.

Put simply, in order to launch such a service that needs to take any two devices in the world and connect them, we need the following vendors to agree on the need, on the priority, on the implementation details and on the business aspects:

  • Chipset vendors
  • Handset vendors
  • Mobile OS vendors
  • Telco vendors
  • Telcos

Call it an impossible feat.

In a world where Internet speeds dictate innovation and undercut slower players, how can a Telco standard succeed and thrive? The moment it gets out the door it feels old.

Google and Messaging

Google has many assets today related to messaging:

  • Android, the OS powering 1.4 billion devices, where 1 billion of them call home to Google’s Play service on a monthly basis
  • Hangouts, their own chat/voice/video service that is targeted at both consumers and enterprises. It is part of Android, but also works as an app or through the browser virtually everywhere
  • Firebase, a year-old acquisition that is all about powering messaging (and storage) for developing apps

As Kranky puts it, they were missing an iMessage service. But not exactly.

Google thrives on large ecosystems. The larger the better – these are the ones you can analyze, optimize and monetize. And not only by building an AdWords network on top of them.

The biggest threats to Google today, besides regulators around the globe, would be:

  1. Apple, who is doing its darnedest today to show off their better privacy policies compared to Google
  2. Facebook, who is vying for Google’s AdWords money with its own social network/ads empire
  3. Telcos, who can at a whim decide to shut off Google’s ambitions – by not promoting Android, making it hard for YouTube or other services to run, etc.

Getting into RCS and committing to it, as opposed to doing a half-witted job of an RCS client on vanilla Android, gives Google several advantages:

  • It puts them at the good side of Telcos, which can’t be bad
  • Improves Android’s standing as an ecosystem, making it easier for Google to force the hands of handset manufacturers and chipset vendors in adjacent domains
    • Maybe getting the codecs they want embedded as part of the device for example?
    • Forcing improvements on mobile chipset designs that offer better power management/performance for all messaging apps
  • Opens the door to deeply integrating Hangouts with RCS/Telco messaging
  • Enabling Google to become the gateway to the telco messaging space
    • Got a device running Android? An RCS client is already there and running
    • Don’t have Android? Connect through your browser from everywhere
    • Or just install that Google RCS app – it already has a billion downloads, as opposed to the measly 5,000 downloads of an operator-branded app
  • Becoming the glue between consumer and enterprise
    • Hangouts may well be a consumer type of a product, but it is part of the Google Apps offering to enterprises
    • Carriers are struggling to monetize consumer services beyond connectivity these days, and Google is fine with giving consumers a free ride while making money elsewhere
    • Google is struggling with getting into the enterprise space. Hangouts is marginal compared to Microsoft Lync/Skype and Cisco
    • Offering direct connectivity to the carrier’s messaging for consumers can bridge that gap. It increases the value of RCS to the enterprise, making Google a player that can integrate better with it than competition
Why Acquire Jibe?

Besides being a nice signal to the market about seriousness, Jibe offers a few advantages for Google.

  1. They are already deployed through carriers
  2. Their service is cloud based, which sits well with Google. It means traffic goes through Jibe/Google – something which places Google as the gateway between the customer and the Telco – a nice position to be in

In a way, Jibe isn’t caught up in the old engineering mentality of telco vendors – it provides a cloud service to its customers, as opposed to doing things only on premise. While Google may not need the architecture or code base of Jibe Mobile, it can use its business contracts to its advantage – and grow it tenfold.

When your next RCS message will be sent out, Google will know about it. Not because it sits on your device, but because it sits between the device and the network.

Why will Telcos Accept this?

They have no choice in the matter.

RCS has been dead for many years now. Standardization continues. Engineers fly around the world, but adoption is slow. Painfully slow. So slow that mid-sized OTT players are capable of attracting more users to their services. It doesn’t look good.

And the problem isn’t just the service or the UI – it is the challenge for a carrier to build the whole backend infrastructure, build the clients for most/all devices on its network and then launch and attract customers to it.

Google embedding the client front end directly into Android, as part of the devices, means there’s no headache in getting the service into the hands of customers and making it their default means of communications.

Google offering the backend for telcos in a cloud service means they no longer have to deal with the nasty setup and federation aspects of deploying RCS.

The only thing they need to do is sign a contract and hit the ground running.

An easy way out of all the sunk costs placed in RCS so far. It comes at a price, but who cares at this point?

The End Game

There are three main benefits for Google in this:

  1. Selling more Google devices
    • If these devices come equipped with RCS, and their backend comes from the same Telco and is operated by Google, then why should a Telco promote another device to its customers?
    • It isn’t limited to Android versus an iOS device – it also relates to Chrome OS versus Windows 10
    • When mobility needs will hit tablets and laptops and the requirement to be connected everywhere with these devices will grow, we might start seeing Telcos actually succeeding in selling such devices with connectivity to their network. Having RCS embedded in these devices becomes interesting
  2. The next billion
    • Facebook and Google are furiously thinking of the next billion users. How to reach them and get them connected
    • With RCS as part of the messaging service a Telco has on offer, they are less dependent on third party apps to connect
    • With Google having both RCS and Hangouts, it increases the size of their applicable user base and the size of their ecosystem
  3. Carrier foothold
    • Carriers are reluctant when it comes to Google. They aren’t direct competitors, but somehow, it can feel that way at times – Google Fiber and Google Fi are prime examples of what Google can do and is doing
    • This is why having cloud services owned by Google and connected to the heart of a Telco is enticing to Google. It gives them a better foothold inside the carrier’s network
Where’s WebRTC?

Not really here. Or almost not. It isn’t about WebRTC. It is about telecom and messaging. Getting federated access that really works to the billions of mobile handsets out there.

Jibe has its own capabilities in WebRTC, a gateway of sorts that enables communicating with the carrier’s own network from a browser. How far along is it? I don’t know, and I don’t think it even matters. Connecting Jibe’s RCS cloud offering to Google Hangouts will include a WebRTC gateway. Whether it will be opened and accessible to others is another question (my guess is that it won’t be in the first year or two).

An interesting and unexpected move by Google that can give RCS the boost it desperately needs to succeed.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post Android Does… RCS !? What About WebRTC? Hangouts? appeared first on BlogGeek.me.
