News from Industry

What Comes Next in Communications?

bloggeek - Mon, 05/07/2018 - 12:00

There are opposite forces at play when it comes to the next wave of communication technologies.

There are a lot of changes being introduced into the world of communications at the moment. If I had to make a shopping list of these technologies, I’d probably end up with something like this:

  1. Cloud, as a Service
  2. APIs and programmability
  3. Business messaging, social messaging
  4. “Teams”, enterprise messaging
  5. Contextual everything
  6. Artificial Intelligence, NLP, NLU, ML
  7. X Reality – virtual, augmented, mixed, …

Each item is worthy of technobabble marketing in its own right, but the thing is, they all affect communications. The only question is in what ways.

I have been looking at this a lot lately, trying to figure out where things are headed and building different models to explain them. I have also been looking at a few models suggested by other industry experts.

Communication domains – simplified

Ignoring outliers, there are 3 main distinct communication domains within enterprises:

  1. UC – Unified Communications
  2. CC – Contact Center
  3. CP – Communications Platform

Usually, we append the obligatory “aaS” to them: UCaaS, CCaaS and CPaaS.

I’ll give my own simplified view on each of these acronyms before we proceed.

UCaaS

Unified Communications looks inwardly inside the company.

A company has employees. They need ways and means to communicate with each other. They also need to communicate with external entities such as suppliers, partners and customers. But predominantly, this is about internal communications. External communications usually take a second-class-citizen position, with limited capabilities and accessibility; oftentimes, external communications will be limited to email, phone calls and SMS.

What will interest us here will be collaboration and communication.

CCaaS

Contact Centers are about customers. Or leads, which are potential customers.

We’ve got agents in the contact center, be it sales or customer care (=support), and they need to talk to customers.

Things we care about in contact centers? Handling time, customer satisfaction, …

CPaaS

Communication Platform as a Service is different.

It is a recent entry to the communications space, even if some would argue it has always been there.

CPaaS is a set of building blocks that enable us to use communications wherever we may need them. Both CCaaS and UCaaS can be built on top of CPaaS. But CPaaS is much more flexible than that. It can fit itself to almost any use case and scenario where communications is needed.

Communications in Consolidation

There’s a consolidation occurring in communications, one where vendors in different parts of the space are growing their offerings into adjacent domains.

We are in a migration from analog to digital when it comes to communications. And from pure telecom/telephony towards browser based, internet communications. Part of it is the introduction of WebRTC technology (couldn’t hold myself back from mentioning WebRTC).

This migration opens up a lot of opportunities, and even raises the question of how we should define these communication domains, and whether they are separate at all.

There have been some interesting moves lately in this space. Here are a few examples of where these lines get blurred and redefined:

  • Dialpad just introduced a contact center, tightly integrated and made a seamless part of its unified communications platform
  • Vonage acquired Nexmo, one of the leading CPaaS vendors. Other UC vendors have added APIs and developer portals to their UC offerings
  • Twilio just announced Flex, its first foray out of CPaaS and into the contact center realm

These are just examples. There are other vendors in the communication space who are going after adjacent domains.

The idea here is communication vendors looking at the communications Venn diagram and reaching out to an adjacency, with the end result being a consolidation throughout the whole communications space.

External disruption to communications

This is where things get really interesting. The forces at play are pushing communications outwards:

UCaaS, CCaaS, CPaaS. It was almost always about real time. Communications happening between people in real time. When the moment is over, the content of that communications is lost – or more accurately – it becomes another person’s problem. Like a contact center recording calls for governance or quality reasons only, or having the calls transcribed to be pushed towards a CRM database.

Anything that isn’t real time has been treated as transient and unimportant in communications. Up until now.

We are now connecting the real time with the asynchronous communications. Adding messaging and textual conversations. We are thinking about context, which isn’t just the here and now, but also the history of it all.

Here’s what’s changing though:

UC and Teams

Unified Communications is ever changing. We’ve added collaboration to it, calling it UC&C. Then we’ve pushed it to the cloud and got UCaaS. Now we’re adding messaging to it. Well… we’re mostly adding UC to messaging (it goes the other way around). So we’re calling it Teams. Or Team Collaboration. Or Workstream Collaboration (WSC). Or Workstream Communication and Collaboration (WCC). I usually call it Enterprise Messaging.

The end result is simple. We focus on collaboration between teams in an organization, and we do that via group chat (=messaging) as our prime modality for communications.

Let’s give it a generic name that everyone understands: Slack

The question now is this: will UC gobble up Team communication vendors such as Slack (and now Workplace by Facebook; as well as many other “project management” and messaging type tools) OR will Slack and the likes of it gobble up UC?

I don’t really know the answer.

CC and CRMs

What about contact centers? These live in the world of CRM. The most important customer data resides in CRMs. And now, with the introduction of WebRTC, and to an extent CPaaS vendors, a CRM vendor can decide to add contact center capabilities as part of its offering. Not through partnerships, but through direct implementation.

Can contact centers do the same? Can they expand towards the CRM domain, starting to handle the customer data itself?

If Salesforce starts offering a solid cloud contact center solution as part of its offering, highly integrated with the Salesforce experience and adding a layer of sophistication that contact center vendors will find hard to implement – what will customers do? Ignore it in favor of another contact center vendor, or source it all from Salesforce? Just a thought.

There’s an additional trend taking place. That’s one of context and analytics. We’re adding context and analytics into “customer journeys”, sales funnels and marketing campaigns. These buzzwords happen to be part of what contact centers are, what modern CRMs can offer, and what dedicated tools do.

For example, most chat widget applications for websites today offer a backend CRM-like dashboard that also acts as a messaging contact center. At the same time, these same tools act similarly to Google Analytics, following users as they visit your website and trying to derive insights from their journey so the contact center agent can use them throughout the conversation. Altocloud did something similar and was recently acquired by Genesys, a large contact center vendor.

CP and PaaS

CPaaS is a bit different. We’re dealing with communication APIs here.

The CPaaS market is evolving and changing. There are many reasons for this:

  1. SMS and voice are commoditized, with a lot of vendors offering these services
  2. IP-based services are considered “easier” to implement, eroding their price point and popularity
  3. UCaaS vendors are adding APIs, at times hoping to capture some of the market after Twilio’s success
  4. As the market grows, there’s a looming question of what the tech giants will do – would Amazon add more CPaaS capabilities to AWS?

That last one is key. We’ve seen the large cloud vendors enhancing their platforms. Moving from pure CPU and storage services up the food chain. Amazon AWS has so many services today that it is hard to keep up. The question here is when will we reach an inflection point where AWS, GCE and Azure start adding serious CPaaS capabilities to their cloud platforms and compete directly with the CPaaS vendors?

Where is CPaaS headed anyway?

  • Does the future of CPaaS lie in attacking adjacent communication markets, as Twilio is doing with Flex?
  • Will CPaaS end up being wrapped and baked into UC and “be done with it”?
  • Is CPaaS bound to be gobbled up by cloud providers as just another set of features?
  • Will CPaaS stay a distinct market on its own?

The Future of Communications

The future can unfold in three different ways when it comes to communications:

  1. Specialization in different communication domains continues and deepens
    • UC, CC and CP remain distinct domains
    • Maybe a 4th domain emerges (highly unlikely)
  2. Communication domains merge and we refer to it all as communications
    • UC does CC
    • CP used to build UC and CC
    • Customers going for best of suite (=single vendor) who can offer UC, CC and CP in a single platform
  3. Communication domains get gobbled up by their adjacencies
    • CC gets wrapped into CRM tools
    • UC being eaten by messaging and teams experiences (probably to be called UC again at the end of the process)
    • CP becoming part of larger, more generic cloud platforms

How do you think the future will unfold?

The post What Comes Next in Communications? appeared first on BlogGeek.me.

WAN backup routing via LTE

TXLAB - Sat, 05/05/2018 - 22:16

A Linux device, such as PC Engines APU, can be equipped with an LTE modem, but sometimes it’s desirable to use the mobile connection only if the wired connection is unavailable.

The following scenario is for Debian 9 on an APU box, but it’s also applicable to any other Linux device.

The DHCP client is tweaked to ignore the DNS server addresses that come with the DHCP offer. Otherwise, the LTE provider may provide DNS addresses that are not usable via the Ethernet WAN link.
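With the stock ISC dhclient on Debian, one way to sketch this is a `supersede` statement in /etc/dhcp/dhclient.conf (the resolver address below is an illustrative choice, not taken from the linked instructions):

```
# /etc/dhcp/dhclient.conf (sketch)
# Pin our own resolver and ignore whatever DNS servers the DHCP
# offer carries, so name resolution keeps working over either uplink:
supersede domain-name-servers 9.9.9.9;
```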

The “ifmetric” package allows setting metrics in interface definitions in Debian. This way we can have two default routes, with the preferred (lower) metric on the wired interface; the default route with the lower metric is chosen for outbound traffic.
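With ifmetric installed, the two default routes can be expressed directly in /etc/network/interfaces, roughly like this (interface names and metric values are illustrative; the complete configuration is in the linked gist):

```
# /etc/network/interfaces fragment (sketch)
auto eth0
iface eth0 inet dhcp
    metric 10    # wired WAN: preferred default route

auto wwan0
iface wwan0 inet dhcp
    metric 20    # LTE backup: used only while the wired route is gone
```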

The watchdog process checks availability of a well-known public IP address over each of the uplinks and, on failure, shuts the corresponding interface down and brings it up again. This only protects against next-hop failures. If you want to protect against failures of the whole WAN service, you need to increase the Ethernet port’s metric when it fails, keep checking connectivity, and lower the metric again once it is stable.
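The watchdog could be sketched as a small shell script along these lines (interface names, probe address and timings are illustrative assumptions, not taken from the linked instructions):

```shell
#!/bin/sh
# Minimal next-hop watchdog sketch.
PROBE_IP=8.8.8.8    # well-known public address to probe
WIRED=eth0          # wired WAN interface
LTE=wwan0           # LTE modem interface

# Return 0 if the probe address answers over the given interface.
uplink_ok() {
  ping -c 3 -W 2 -I "$1" "$PROBE_IP" >/dev/null 2>&1
}

# Shut the interface down and bring it up again.
bounce() {
  ifdown "$1" && sleep 5 && ifup "$1"
}

# Check each uplink once; bounce the ones whose next hop is silent.
watchdog_pass() {
  for iface in "$WIRED" "$LTE"; do
    if ! uplink_ok "$iface"; then
      logger -t wan-watchdog "next hop unreachable via $iface, bouncing"
      bounce "$iface"
    fi
  done
}

# In production this would run from cron or in a loop:
# while true; do watchdog_pass; sleep 60; done
```

As noted above, this only catches next-hop failures; surviving an upstream outage would additionally require raising the wired interface’s metric until connectivity is stable again.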

The second NIC on the box is configured to provide DHCP addresses and to NAT all outbound traffic.

Detailed installation instructions are presented here: https://gist.github.com/ssinyagin/1afad07f8c2f58d9d5cc58b2ddbba0a7

 

Ubiquiti EdgeRouter X, a powerful $50 device

TXLAB - Sat, 05/05/2018 - 01:47

Ubiquiti EdgeRouter X is a tiny and cheap (around $50) router with a decent amount of memory: 256MB RAM and 256MB flash. The router offers 5 GigE copper ports, and there’s also a model with an additional SFP port. The device has been produced since 2014, and it’s still up to date and good value for money.

At the hardware level, the device consists of a Gigabit Ethernet switch, with one GigE port attached to the MIPS CPU and used as an 802.1q trunk. Also inside the enclosure, a serial console port is available for easy debugging or for manipulating the boot loader.

The router comes with stock Ubiquiti software which is based on Debian Wheezy, so many files date from 2013-2014. An OpenVPN package is pre-installed, but only version 2.3 is available. The software offers a nice GUI and SSH access.

OpenWRT provides excellent support for this hardware. The router is able to perform IP routing at more than 400Mbps (I haven’t tested it with a back-to-back connection, so I don’t know the limit).

With OpenVPN 2.4, available in up-to-date OpenWRT packages, the box performs at up to 20Mbps with 256-bit AES encryption, and at about 55Mbps with encryption and authentication disabled.

In default OpenWRT configuration, the switch port 0 is dedicated to WAN link, and ports 1-4 are used as a LAN bridge. The WAN link acts as a DHCP client, and LAN is configured with DHCP service in 192.168.1.0/24 range. The command-line configuration utilities are quite straightforward, and there’s a Web UI as well.
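That default layout corresponds roughly to the following /etc/config/network fragments (a sketch only: VLAN and port numbering vary by OpenWRT release, and newer builds model the switch via DSA instead of swconfig):

```
# LAN: switch ports 1-4 plus the tagged CPU port
config switch_vlan
    option device 'switch0'
    option vlan '1'
    option ports '1 2 3 4 6t'

# WAN: switch port 0 plus the tagged CPU port
config switch_vlan
    option device 'switch0'
    option vlan '2'
    option ports '0 6t'

config interface 'lan'
    option type 'bridge'
    option proto 'static'
    option ipaddr '192.168.1.1'
    option netmask '255.255.255.0'

config interface 'wan'
    option proto 'dhcp'
```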

OpenVPN scenarios and scripts

TXLAB - Mon, 04/30/2018 - 12:09

Here’s a new repository for OpenVPN deployment scenarios and example configurations:

https://github.com/txlab/ovpn-scripts

At the moment it lists two scenarios with configuration generation scripts:

  1. routed VPN for remote management
  2. bridged VPN for anti-censorship isolation of a home LAN
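For flavor, a routed management VPN of the first kind typically boils down to a server configuration like this minimal sketch (illustrative values; the repository’s generator scripts produce the complete, working configurations):

```
# Minimal routed ("dev tun") OpenVPN server sketch
dev tun
proto udp
port 1194
server 10.8.0.0 255.255.255.0   # VPN subnet handed out to clients
ca ca.crt
cert server.crt
key server.key
dh dh.pem
keepalive 10 60
persist-key
persist-tun
```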

 

In Search of WebRTC Developers

bloggeek - Mon, 04/30/2018 - 12:00

WebRTC developers are really hard to come by. I want to improve my ability to help companies in search of such skill.

If there’s something that occurs time and again, it is entrepreneurs and vendors who ask me if I know of anyone who can build their application. Some are looking to outsource the project as a whole or part of it, and then they are looking for agencies to work with. Others are looking for a single expert to work with on a specific task, or someone they could hire for long stretches of time who has WebRTC skills.

You a WebRTC Developer?

Great!

I’d like to know more about you IF you are looking for projects or for a new employer.

Here are a few things first:

  1. Even if you think I know you, please fill out the form
  2. No agencies. If you are an agency, contact me and we can have a chat. I know a few that I am comfortable working with
  3. Only starting out with WebRTC? Don’t fill out the form. Mark this page, get some experience and then fill it out
  4. The form is short, so it shouldn’t take more than 5 minutes of your time to fill out
  5. Don’t beautify things more than they are – that will just get you knocked off my radar. Tell things as they are

Fill out this form for me please (or via this link):


I won’t be reaching out to you immediately (or at all). I’ll be using this list when others ask for talent that fits your profile.

You looking for WebRTC Developers?

Got a need for developers that have WebRTC skills?

I am not sure exactly how to find them and where, but I am trying to get there.

Two ways to get there:

  1. I am thinking of opening up a job listing on WebRTC Weekly
    1. Payment will be needed to place a listing on the WebRTC Weekly, which reaches over 2,500 subscribers at the moment
    2. Cost will be kept low, especially considering the cost of talent acquisition elsewhere and the lack of available WebRTC developers out there
    3. I had a job listing sub-site in the past; it didn’t work. This is another attempt I am trying out. If you want to try it with me, I’ll be happy to take the leap
    4. Interested? Contact me
  2. Need a bit more than just finding a developer? I offer consulting services
    1. There are hourly rates available, as well as one-off consulting sessions
    2. I’ll be using the list I’ll be collecting of the WebRTC developers above to match you up with a candidate if you need – or just connect you with the agencies I am comfortable working with

 

The post In Search of WebRTC Developers appeared first on BlogGeek.me.

Kamailio World 2018 – Event Preview

miconda - Thu, 04/26/2018 - 23:29
Two weeks and a few more days till the start of Kamailio World Conference 2018, taking place again in Berlin, during May 14-16. It will be the 6th edition of the event, hosted like all previous ones by Fraunhofer Forum, courtesy of the Fraunhofer Fokus Research Institute, in the beautiful city center of Berlin, just across the river from the Berlin Cathedral and a few minutes’ walk from Alexanderplatz.

For this edition we had an increased number of speaking proposals. We tried to accommodate as many of the very interesting ones as possible, therefore we are introducing a group of Lightning Talks, most of them on Monday, May 14, in the afternoon.

The range of topics is also richer, covering the common use cases for IP telephony, VoLTE/5G and WebRTC, scalability on demand with Docker, blockchains in telecom, use of a Redis backend for data sharing among many Kamailio instances, leveraging Lua for call routing and testing, and VoIP security.

Related projects in the RTC world are again very well represented: Asterisk, FreeSwitch, Kazoo, SIP:Provider, Homer SIP Capture, Janus Gateway, CGRateS, FusionPBX and reSIProcate.

We continue to have the two interactive sessions that have never missed a Kamailio World edition so far:
  • VUC Visions, coordinated by Randy Resnick – expect again an engaging debate about the future of RTC, the impact of sharing personal data and privacy matters with the internet giants’ services, or surprise topics brought to the table by panelists and the audience
  • Dangerous Demos, coordinated by James Body – prepare your demo and be part of a very entertaining contest that can make you famous, as well as reward your work with a prize
A novelty for this edition is an open discussion with Kamailio developers – Ask Me Anything – aiming to get the participants face to face with several main Kamailio developers to chat and clarify any doubts about using or developing the project.

To see the full schedule of the event, visit:

As usual, there will be a couple of expo spots where the sponsors are going to demo their products and services.

We are very grateful to all our sponsors and partners that made this edition of Kamailio World Conference possible: FhG Fokus, FhG Forum, Asipto, Evosip, Netaxis Solutions, 2600hz and KazooCon, Sipwise, Sipgate, Simwood, LOD, Digium, Pascom, Evariste Systems, NG Voice, Core Network Dynamics and VUC.

Should you plan to participate at Kamailio World 2018, do not delay your registration – we expect to be fully booked again. Secure your registration now:

Looking forward to meeting many of you in Berlin at Kamailio World 2018!

Thanks for flying Kamailio!

Kamailio v5.1.3 Released

miconda - Tue, 04/24/2018 - 23:28
Kamailio SIP Server v5.1.3 stable is out – a minor release including fixes in code and documentation since v5.1.2. Configuration file and database schema compatibility is preserved, which means you don’t have to change anything to update.

Kamailio® v5.1.3 is based on the latest source code of GIT branch 5.1 and represents the latest stable version. We recommend those running previous 5.1.x or older versions to upgrade. There is no change that has to be done to the configuration file or database structure compared with the previous releases of the v5.1 branch.

Resources for Kamailio version 5.1.3

Source tarballs are available at:

Detailed changelog:

Download via GIT:

  # git clone https://github.com/kamailio/kamailio kamailio
  # cd kamailio
  # git checkout -b 5.1 origin/5.1

Relevant notes, binaries and packages will be uploaded at:

Modules’ documentation:

What is new in the 5.1.x release series is summarized in the announcement of v5.1.0.

Do not forget about the next Kamailio World Conference, taking place in Berlin, Germany, during May 14-16, 2018. The schedule has been published, registration is open!

Thanks for flying Kamailio!

RCS now Google Messages. What’s Next in Consumer Messaging?

bloggeek - Mon, 04/23/2018 - 12:00

Chat won’t bring carriers to their SMS-glory days.

The Verge came out with an exclusive last week that everyone out there is regurgitating. This is my attempt at doing the same.

We’re talking about Google unveiling its plans for the consumer chat experience. To put things in quick bulleted points:

  • There’s a new service called “Chat”, which is supposed to be Google’s and the carrier’s answer to Apple iMessage, Facebook Messenger and the rest
  • Google’s default messages app on Android for SMS is getting an upgrade to support RCS, turning it into a modern messaging application
  • When this happens will vary between the different carriers, who are, by the way, the ones who make the decision and who control and own the service
  • Samsung and other Android handset manufacturers will probably come out with their own messaging app instead of the one provided by Google
  • This is a risky plan with a lot of challenges ahead of it

I’d like to share my viewpoints and where things are going to get interesting.

SMS is dead

I liked Mashable’s title for their take on this:

Google’s plan to fix texting on Android is really about the death of SMS

While an apt title, my guess is that beyond carriers and reports written to them, we all know that already.

SMS has long been dead. The A2P (Application 2 Person) SMS messages are all that’s left of it. Businesses text us PIN codes and passwords for 2FA (2 Factor Authentication) and OTP (One Time Passwords), or just send us marketing junk to ignore.

I asked a few friends of mine on a group chat yesterday (over Whatsapp, of course) when, how and why they use SMS. Here are the replies I got (translated to English):

  • I prefer Whatsapp. It is the most lightweight and friendly alternative. I only use SMS when they are automatically sent to me on missed calls
  • Whatsapp is accessible. It has quick indicators and it is lightweight. It remembers everything in an orderly fashion
  • I noticed that people take too long to respond on SMS while they respond a lot faster over Whatsapp. Since SMS is more formal to me, I use it when sending messages for the first time to people I don’t know
  • I send SMS only to people I don’t know. I feel that Whatsapp is more personal
  • I use iMessage only with my boss. She’s ultra religious so she doesn’t have Whatsapp installed. For everything else I use Whatsapp
  • I mostly use Whatsapp for messages. I text via SMS only with my wife when I am flooded with Whatsapp messages and just want her notifications to be more prominent
  • SMS is dead for me. I don’t even have it on my home screen, and that says anything. I use SMS only to receive PIN codes from businesses
  • SMS is the new fax

These are 40-year-olds in Israel, most working in the IT domain. The answers will probably vary elsewhere, but here in Israel most will give you similar answers. Whatsapp has become the go-to app for communications. So much so that we were forced to give our daughter her first smartphone at the age of 8, only so she could communicate with her friends via Whatsapp and not stay behind. Everyone uses it here in Israel.

You should also know that plans upwards of 2Gb of monthly data, including unlimited voice and SMS, cost less than $15 a month in Israel, so this has nothing to do with price pressure anymore. It has to do with network effects and simple user experience.

SMS is no longer ubiquitous across the globe. I can’t attest to other countries, but I guess Israel isn’t alone in this. SMS is just the last alternative to use when all else has failed.

Why is SMS interesting in this context?

Because a lot of what’s at stake here for Google relates to the benefits and characteristics of SMS.

RCS is (still) dead

RCS is the successor of SMS, meant to get carriers into the 21st century. It has been discussed for many years now, and it will most definitely, utterly, completely, unquestionably get people back from Messenger, WhatsApp and WeChat into the clutches of the carriers. NOT.

RCS is a design-by-committee solution, envisioned by people my age and older, targeting a younger audience across the globe in an attempt to kill fast-moving social networks with a standardized, ubiquitous, agreed-upon specification that then needs to be implemented by multiple vendors, handset manufacturers and carriers globally to make any sense.

Not going to happen.

Google’s take on this was to acquire an RCS vendor – Jibe – two years ago for this purpose. The idea was probably to provide a combination of an infrastructure and a mobile client to speed up RCS deployments around the globe and make them interoperable faster than the carriers will ever achieve on their own.

Two years passed, and we’ve got nothing but a slide (and the article on The Verge) to show for this effort:

An impressive list of operators, OEMs and OS providers that are behind this RCS initiative. Is that due to Google? To some part, probably so.

In a way, this reminds me also of Google’s other industry initiative – the Alliance of Open Media, where it is one of 7 original founding members that just recently came out with AV1, a royalty free video codec. It is a different undertaking:

  • RCS will be controlled by carriers, who were never kind or benevolent to their users
  • For carriers, the incentive can be found in the GSMA’s announcement: “GSMAi estimate that this will open up an A2P RCS business worth an estimated $74bn by 2021”
    • This is about securing A2P SMS revenues by migrating to RCS
    • The sentences before this one in that announcement explain how they plan on reaching there: “The Universal Profile ensures the telecoms industry remains at the centre of digital communications by enabling Operators, OEMs and OS Providers to deliver this exciting new messaging service consistently, quickly and simply.”
    • Problem is, they are not the centre of digital communications, so this isn’t about ensuring or remaining. It is about winning back. And you can’t do that if your focus is A2P
  • This isn’t about an open platform for innovation, or a level playing field for all. And that makes it starkly different from the AV1 initiative. It is probably closer to MPEG-LA’s response by way of a new video codec initiative

Why is Google going into bed with the carriers on this one?

Google had no choice

The Verge had an exclusive interview with Anil Sabharwal, the Google VP leading this effort. This led to the long article about this initiative. The numbers that Anil shared were eye opening as to the abysmal state of Google’s messaging efforts thus far.

I went ahead and placed these numbers next to other announced messaging services for comparison:

A few things to note here:

  • Telegram, Facebook Messenger and Whatsapp are all apps users make a decision to install, and they are making that decision en masse
  • Apple has upwards of 1.3 billion active devices, which indicate the general size of its iMessage service
  • Google Messages is the default app on Android for SMS, unless:
    • Carriers replace it with their own app
    • Handset manufacturers replace it with their own app
    • Users replace it with another app they install
  • Google Messages sees around 100 million monthly active users – the table-stakes entry number to be relevant in this market, but rather low for a ubiquitous, default app
  • Google Allo has fewer than 50 million downloads. That’s not even monthly active users
  • Google Hangouts stopped announcing its user base years ago, and frankly, Google stopped investing in it as well. The mobile app has been defunct (for me) for quite some time now, with unusual slowness and unresponsiveness

Google failed to entice its billion+ Android users to install or even use its messaging applications.

Without the numbers, it couldn’t really come up with a strategy similar to Apple iMessage, where it essentially hijacks the messaging traffic from carriers, onboarding the users to its own social messaging experience.

Trying to do that would alienate the carriers, which Google relies on for Android device sales. Some would argue that Google has the clout and size to do it anyway, but that is not the case.

Android is open, so handset manufacturers and carriers could use it without Google’s direct approval, throwing away the default messaging app. They would do that in an effort to gain more control over Android, and it would kill the user experience, as most such apps by handset manufacturers and carriers do. The end result? More users purchasing iPhones, as carriers try to punish Google for the move.

What could Google do?

  1. Double down on their own social messaging app – hasn’t worked multiple times now. What can they do different?
  2. Build their own iMessage – alienating the Android ecosystem, with the risk of failing to attract users as they have failed in the past
  3. Partner with carriers on RCS

Two years ago, Google decided to go for alternatives (1) and (3). Allo was their own social messaging app. Had it succeeded, my guess is that Google would have gone towards approach (2). In parallel, Google acquired Jibe in an effort to take route (3), which is now the strategy the company is behind for its consumer messaging.

The big risk here is that the plan itself relies on carriers and their decisions. We don’t even know when this will get launched. Reading between the lines of The Verge’s article, Google has already completed the development and has the mobile client ready and deployed. It just isn’t enabled unless the carrier being used approves. Estimates indicate 6-12 months until that happens, but for which of the carriers? And will they use the stock Android app, or their own ambitious better-than-Whatsapp app?

E2EE can kill this initiative and hurt Google

The biggest risk to Google is the lack of E2EE (end to end encryption).

This is emphasized in The Verge’s article itself and in each and every post regurgitating it. Walt Mossberg’s tweet was mentioned multiple times as well:

Bottom line: Google builds an insecure messaging system controlled by carriers who are in bed with governments everywhere at exactly the time when world publics are more worried about data collection and theft than ever.

— Walt Mossberg (@waltmossberg) April 20, 2018


The problem for Google is that the news outlets are noticing and giving this a lot of publicity. And it couldn’t come at a less convenient time, with Facebook being scrutinized for how it uses and protects user data in the Cambridge Analytica scandal. Google has, for the most part, come out of that unscathed, but will this move put more of the spotlight on Google?

The other problem is that all the other messaging apps already have E2EE supported in one way or another. The apps usually mentioned here are Apple iMessage, Signal and Telegram. Whatsapp switched to E2EE by default two years ago. And Facebook Messenger has it as an option (though you do need to enable it manually per conversation).

Will customers accept using “Chat” (=RCS) when they know it isn’t encrypted end to end?

On the other hand, Russia is attempting to close Telegram by blocking millions of IP addresses in the country, and taking down with it other large services. If this succeeds, then Russia will do the same to all other popular messaging applications. And then other countries will follow. The end result will be the need to use the carrier (and Google’s) alternative instead. Thankfully, Russia is unsuccessful. For the time being.

Who owns the data?

Carriers do.

With RCS, the carriers are the ones intercepting, processing and forwarding the messages. In a way, this suggests that Google isn’t going to be the one reading these messages, at least not on the server.

This means that either Google decided there’s not enough value in these messages and in monetizing them – or – that they have other means to gain access to these messages.

Here are a few ways Google can still gain access to these messages:

  1. Through licensing and operating the servers on behalf of carriers. Not all carriers will roll their own and may prefer using Google as a service here. Having the messages in unencrypted format on the server side is beneficial for Google in a way, especially when they can “blame” the carriers and regulations
  2. Via Google’s Messages app. While messages might be sent via a carrier’s network, the client sending and receiving these messages is developed and maintained by Google, giving them the needed access. This can be coupled with features like backing up the messages in Google Drive or letting Google read the messages to improve its services for the user
  3. By coupling features such as Google Assistant and Smart Replies into it, which means Google needs to read the messages to offer the service

Google might have figured it has other means to get to the messages besides owning and controlling the whole experience – similar to how Google Photos is one of the top camera apps in Apple iTunes.

By offering a better experience than competing RCS clients, it might entice users to download Google’s Chat app on devices that don’t have it by default. Who knows? It might even get people to download and use it on an iPhone one day.

The success of Google here will translate into RCS being a vehicle for Google to get back to messaging more than the means for carriers to gain relevance again.

Ubiquity is here already, but not via SMS or RCS

I’ll put the graph here again – to make a point.

1.5 billion people is ubiquitous enough for me. Especially when the penetration rates in Israel are 100% in my network of connections.

People tend to talk about the ubiquity of SMS and how RCS will inherit that ubiquity.

They fail to take into account the following:

  1. SMS is ubiquitous, but it took it many years to get there
  2. It is used for marketing and 2FA mostly
  3. The marketing part is less valuable
    1. It can be treated as spam by consumers for the most part
    2. It is one-way in nature, whereas social networks are built around conversations
    3. Spam and unsolicited messages don’t work that well in social networks
  4. 2FA will be shifting away from SMS (see here)
    1. Google does a lot of its 2FA without SMS today
    2. Google can open it up to third parties at any point in time
    3. Apple can do the same with the iPhone
  5. The shift towards RCS won’t be done in a single day. It will be done in a patchwork fashion across the globe by different carriers
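A side note on point 4 above: the usual SMS-free second factor is an app-generated time-based one-time password, which is what authenticator apps produce. A minimal illustrative sketch of the RFC 6238 TOTP algorithm (HMAC-SHA1 variant), not tied to any particular vendor’s implementation:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    # Number of 30-second windows since the Unix epoch
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: at T=59s the 8-digit SHA1 code is "94287082"
totp(b"12345678901234567890", t=59, digits=8)
```

The same shared secret on the server and in the app yields the same code for each 30-second window, with no SMS (and no carrier) involved at all.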

Think about it.

You can now send out an RCS message from your device. To anyone. If the other party has no RCS installed, the message gets converted to SMS. Sweet.

But what happens when the person you are sending that RCS message to is located abroad? Are you seriously happy to get a payment request from your carrier for a silly international SMS message, or a full conversation’s worth of them, for something you could just as easily have done over WhatsApp instead? And for free.

Ubiquity isn’t the word that comes to my mind when thinking about RCS.

The holy grail is business messaging

Consumer messaging is free these days. There is no direct monetary value to be gained by offering this service to consumers. Carriers won’t be able to put that genie back into its bottle and start collecting money from users. Their only play here might be to zero-rate RCS traffic, but that isn’t very interesting to most consumers either – at least not here in Israel.

The GSMA already suggested where the money is – in business messaging. They see this as a $74bn opportunity by 2021. The problem is that rolling out RCS 6-12 months from now, and only by some of the carriers, isn’t going to cut it. Apple Business Chat was just released, vertically integrated, with a lot of thought put into businesses and their discovery process, and free of charge.

Then there’s the rest of the social networks opening their APIs towards the businesses, and contact center solutions driving the concept of omnichannel experiences for customers.

Carriers are getting into this game late and unprepared. On top of that, they will try to get money out of this market similar to how they do with SMS. But the price points they are used to make no sense anymore. Something will need to change for the carriers to be successful here.

Will carriers be able to succeed with RCS? I doubt it.

Will Google be able to succeed with Chat? Maybe. But it is up to the carriers to allow that to happen.

The post RCS now Google Messages. What’s Next in Consumer Messaging? appeared first on BlogGeek.me.

Upcoming Kamailio Events – Spring To Autumn 2018

miconda - Thu, 04/19/2018 - 23:25
With a vibrant community worldwide, the Kamailio project is going to be represented at many events during the rest of the year. Among those confirmed at this moment are:
  • Hannover Messe – April 23-26, 2018, in Hannover, Germany
  • KazooCon – April 30-May 02, 2018, in San Jose, USA
  • Kamailio World – May 14-16, 2018, in Berlin, Germany
  • CeBIT – June 11-15, 2018, in Hannover, Germany
  • CommCon – June 25-29, 2018, in Wotton House, Surrey, UK
  • ClueCon – July 23-26, 2018, in Chicago, USA
  • AstriCon – October 9-11, 2018, in Orlando, USA
Should you plan to go to any of these events, get in touch with the rest of the community via the sr-users mailing list and try to organise meetups with other Kamailio folks!

Thanks for flying Kamailio!

Kamailio Unit Testing Framework With Docker

miconda - Mon, 04/16/2018 - 21:00
The Kamailio source tree includes a few dozen shell-based unit tests developed several years ago, residing inside test/unit/ of the source code tree. They have been more or less unmaintained during the past few years.

Based on interest from the community and discussions during past IRC development meetings, as well as panels at the Kamailio World Conference, a new effort was started recently to build a unit testing framework leveraging Docker.

The first version has been published on GitHub, being available at:

It has only a few tests by now, but we hope to grow their number significantly in the near future. As a matter of fact, Giacomo Vacca is going to do a workshop about it at the next Kamailio World Conference, during May 14-16, in Berlin, Germany.

The unit tests have been run when releasing new stable versions during the past months. They leverage tools such as sipp or sipsak for generating SIP traffic and testing routing scenarios, but some of them go beyond SIP and detect source code issues such as missing symbols or broken dependencies.

The architecture of the unit testing framework is still a moving target. We aim to provide something that is easy for community members to contribute to, and to require that newly added modules be accompanied by a set of basic tests.

One of the main benefits would be having reproducible issues reported along with a unit test. That would make them easier to troubleshoot and, once fixed, always tested before releases in order to avoid regressions.

Right now, a good way for the community to help would be converting the old unit tests to the new framework, after which we can decommission test/unit. The conversion should be rather easy, as we still rely on shell and SIP tools like sipp/sipsak. If you want to help here but need clarifications or get stuck somewhere, just write to the mailing list and I would be more than happy to assist.

Such contributions should be submitted as pull requests in order to be easy to review:

This post opens the discussion about the unit testing framework to the broad community. To participate, write to us on the sr-users mailing list.

Looking forward to meeting many of you at Kamailio World Conference 2018, it is only one month away!
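To give a flavor of what such tests assert: sipp/sipsak drive SIP traffic through Kamailio and the test then checks the responses. A tiny illustrative check in Python (not part of the framework itself, just the idea behind this kind of assertion):

```python
def parse_status_line(line: str):
    """Split a SIP response status line, e.g. 'SIP/2.0 200 OK',
    into (version, status code, reason phrase)."""
    version, code, reason = line.split(" ", 2)
    return version, int(code), reason

def routing_test_passed(final_response: str) -> bool:
    """A routing scenario passes if the final response is a 2xx."""
    _, code, _ = parse_status_line(final_response)
    return 200 <= code < 300

# routing_test_passed("SIP/2.0 200 OK") is True;
# routing_test_passed("SIP/2.0 404 Not Found") is False
```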

New Kamailio module: acc_json

miconda - Wed, 04/11/2018 - 18:43
Julien Chavanton from Flowroute recently added a new module: acc_json.

The module builds JSON documents from the accounting records and can send them to mqueue, to be consumed by other processes, or write them to syslog. For example, when configured with mqueue, the consumers (e.g., started with the rtimer module) can send the accounting JSON document to an external system via HTTP (see the http_client or http_async_client modules), RabbitMQ, NSQ, or even as the payload of a new SIP message (see the uac module).

More details about the acc_json module can be read at:

And do not forget about the next Kamailio World Conference, taking place in Berlin, Germany, during May 14-16, 2018. It is the place to network with Kamailio developers and community members!

Thanks for flying Kamailio!
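For illustration, here is what an external consumer on the other end of that pipeline might do: take one JSON accounting record and build an HTTP request to push it to a CDR collector. The field names and URL below are hypothetical; the actual fields depend on your acc configuration.

```python
import json
import urllib.request

# Hypothetical accounting record, in the spirit of what acc_json emits;
# the real field set depends on the module/acc configuration.
record = {
    "method": "INVITE",
    "callid": "12345@example.com",
    "sip_code": "200",
    "sip_reason": "OK",
    "time": 1523450000,
}

def build_post(rec, url):
    """Build the HTTP POST a consumer might send to a CDR collector."""
    data = json.dumps(rec).encode("utf-8")
    return urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )

req = build_post(record, "http://cdr-collector.example.com/acc")
# req is ready to be sent with urllib.request.urlopen(req)
```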

WebRTC 1.0 Training and Free Webinar Tomorrow (on Tuesday)

bloggeek - Sun, 04/08/2018 - 12:00

Join Philipp Hancke and me for a free training on WebRTC 1.0, prior to the relaunch of my advanced WebRTC training.

Here’s something that I get at least once a week through my website’s chat widget:

It is one of the main reasons why I’ve created my advanced WebRTC course. It is a paid WebRTC course that is designed to fill in the gaps and answer the many questions developers face when needing to deal with WebRTC.

Elephants, blind men, alligators and WebRTC

I wanted to connect it to the parable of the six blind men and an elephant, explaining how wherever you go on the Internet, you are going to get a glimpse of WebRTC and never a full, clear picture. I even searched for a good illustration to use for it. Then I bumped into this illustration:

It depicts what happens with WebRTC and developers all too well.

If you haven’t guessed it, the elephants here are WebRTC and the requirements of the application and that flat person is the developer.

This fits well with another joke I heard yesterday from a friend’s kid:

Q: Why can’t you go into the woods between 14:00-16:00?

A: Because the elephants are skydiving

There’s a follow up joke as well:

Q: Why are the alligators flat?

A: Because they entered the woods between 14:00-16:00

WebRTC development has a lot of rules. Many of which are unwritten.

WebRTC 1.0

There are a lot of nuances to WebRTC. A lot of written material, old and new – some of it irrelevant now, the rest possibly correct but jumbled. And WebRTC is a moving target; it is hard to keep track of all the changes. There’s a lot of knowledge around WebRTC that is required – knowledge that doesn’t look like an API call and isn’t written in the standard specification.

This means that I get to update my course every few months just to keep up.

With WebRTC 1.0, there’s both a real challenge as well as an opportunity.

It is a challenge:

  • WebRTC 1.0 still isn’t here. There’s a working draft, which should get standardized *soon* (=soon started in 2015, and probably ends in 2018, hopefully)
  • Browser implementations lag behind the latest WebRTC 1.0 draft
  • Browser implementations don’t behave the same, or implement the same parts of the latest WebRTC 1.0 draft

It is an opportunity:

We might actually get to a point where we have a stable API with stable implementations.

But we’re still not there

Should you wait?

No.

We’re 6-7 years into WebRTC (depending on who does the counting), and this hasn’t stopped well over 1,000 vendors from jumping in and making use of WebRTC in production services.

There’s already massive use of WebRTC.

Me and WebRTC 1.0

For me, WebRTC 1.0 is somewhat of a new topic.

I try to avoid the discussions going on around WebRTC in the standardization bodies. The work they do is important and critical, but often tedious. I had my fair share of it in the past with other standards and it isn’t something I enjoy these days.

This posed a kind of a challenge for me as well. How can I teach WebRTC in a premium course without explaining WebRTC 1.0, a topic that needs to be addressed as developers prepare for the changes that are coming?

The answer was to ask Philipp Hancke to help out here and create a course lesson for me on WebRTC 1.0. I like doing projects with Philipp, and do so on many fronts, so this is one additional project. It isn’t the first time either – the bonus materials of my WebRTC course include a recorded lesson by Philipp about video quality in WebRTC.

Free WebRTC 1.0 Webinar

Tomorrow, we will be recording the WebRTC 1.0 lesson together for my course. I’ll be there, and this time partially as a student.

To make things a bit more interesting, as well as promoting the whole course, this lesson will be given live in the form of a free webinar:

  • Anyone can join for free to learn about WebRTC 1.0
  • The recording will only be available as part of the advanced WebRTC course

This webinar/lesson will take place on

Tuesday, April 10

2-3PM EST (view in your timezone)

Save your seat →

The session’s recording will NOT be available after the event itself. While this lesson is free to attend live, the recording will become an integral part of the course’s lessons.

The post WebRTC 1.0 Training and Free Webinar Tomorrow (on Tuesday) appeared first on BlogGeek.me.

So your VPN is leaking because of Chrome’s WebRTC…

webrtchacks - Tue, 04/03/2018 - 03:14

We have covered the “WebRTC is leaking your IP address” topic a few times, like when I reported what the NY Times was doing and in my WebRTC-Notifier. This topic comes up now and again in the blogosphere, generally with great shock and horror. It happened again recently, so here is an updated look […]

The post So your VPN is leaking because of Chrome’s WebRTC… appeared first on webrtcHacks.

AV1 Specification Released: Can we kiss goodbye to HEVC and royalty bearing video codecs?

bloggeek - Mon, 04/02/2018 - 12:00

AV1 for video coding is what Opus is for audio coding.

The Alliance of Open Media (AOMedia) issued last week a press release announcing its public release of the AV1 specification.

Last time I wrote about AOMedia was over a year ago. AOMedia is a very interesting organization. Which got me to sit down with Alex Eleftheriadis, Chief Scientist and Co-founder of Vidyo, for a talk about AV1, AOMedia and the future of real time video codecs. It was really timely, as I’ve been meaning to write about AV1 at some point. The press release, and my chat with Alex pushed me towards this subject.

TL;DR:

  • We are moving towards a future of royalty free video codecs
  • This is due to the drastic changes in our industry in the last decade
  • It won’t happen tomorrow, but we won’t be waiting too long either

Before you start, if you need to make a decision today on your video codec, then check out this free online mini video course

H.264 or VP8?

Now let’s start, shall we?

AOMedia and AV1 are the result of greed

When AOMedia was announced I was pleasantly surprised. It wasn’t at all apparent that the founding members of AOMedia would find the strength to put their differences aside for the greater good of the video coding industry.

Video codec royalties 101

You see, video codecs at that point in time were a profit center for companies. You invested in research around video coding with the main focus of inventing new patents that would be incorporated into video codecs that would then be used globally. The vendors adopting these video codecs would pay royalties.

With H.264, said royalties came with a cap – if you distributed above a certain number of devices that use H.264, you didn’t have to pay more. And the same scheme was put in place when it came to HEVC (H.265) – just with a higher cap.

Why do we need this cap?

  1. Companies want to cap their commitment and expense. In many cases, you don’t see direct revenue per device, so no cap means it is harder to match with asymmetric business models and applications that scale today to hundreds of millions of users
  2. If a company needs to pay based on the number of devices they sell, then the one holding the patents and getting the payment for royalties knows that number exactly – something which is considered trade secret for many companies

So how much money did MPEG-LA take in?

Being a private company, this is hard to know. I’ve seen estimates of $10M-50M, as well as $17.5B on Quora. The truth is probably somewhere in the middle. Which is still a considerable amount of money that was funnelled to the patent owners.

With royalty revenues flowing in, is it any wonder that companies wanted more?

An interesting tidbit about this greed (or shall we say rightfulness) can be found in the Wikipedia page of VP8:

In February 2011, MPEG LA invited patent holders to identify patents that may be essential to VP8 in order to form a joint VP8 patent pool. As a result, in March the United States Department of Justice (DoJ) started an investigation into MPEG LA for its role in possibly attempting to stifle competition. In July 2011, MPEG LA announced that 12 patent holders had responded to its call to form a VP8 patent pool, without revealing the patents in question, and despite On2 having gone to great lengths to avoid such patents.

So… we have a licensing company whose members are after royalty payments on patents. They are blinded by the success of H.264 and its royalty scheme and payments, so they go after anything and everything that looks and smells like competition. And they are working towards maintaining their market position and revenue in the upcoming HEVC specification.

The HEVC/H.265 royalties mess

Leonardo Chiariglione, founder and chairman of MPEG, attests in a rather revealing post:

Good stories have an end, so the MPEG business model could not last forever. Over the years proprietary and “royalty free” products have emerged but have not been able to dent the success of MPEG standards. More importantly IP holders – often companies not interested in exploiting MPEG standards, so called Non Practicing Entities (NPE) – have become more and more aggressive in extracting value from their IP.

HEVC, being a new playing ground, meant that there were new patents to be had – new areas where companies could claim having IP. And MPEG-LA found itself one of many patent holder groups:

MPEG-LA indicated its wish to take home $0.2 per device using HEVC, with a high cap of around $25M.

HEVC Advance started with a ridiculously greedy target of $0.8 per device AND 0.5% of the gross margin of streaming services (unheard of at the time) – with no cap. It has since backtracked, making things somewhat better. It did it a bit too late in the game though.

Velos Media spent money on a clean and positive website. Their Q&A indicates that they haven’t yet made a decision on royalties, caps and content royalties. Which gives great confidence to those wanting to use HEVC today.
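To see why the cap matters so much, here is a back-of-the-envelope comparison of the two per-device schemes quoted above. The rates and cap are the figures mentioned in this article; everything else is illustrative:

```python
def mpegla_hevc(devices, rate=0.20, cap=25_000_000):
    """MPEG-LA style: per-device rate with an overall cap."""
    return min(devices * rate, cap)

def hevc_advance_initial(devices, streaming_gross_margin,
                         rate=0.80, margin_cut=0.005):
    """HEVC Advance's initial scheme: a higher per-device rate plus
    0.5% of streaming gross margin, with no cap."""
    return devices * rate + streaming_gross_margin * margin_cut

# A vendor shipping 200M devices a year:
mpegla_hevc(200_000_000)              # capped at $25M
hevc_advance_initial(200_000_000, 0)  # ~$160M from devices alone
```

At scale, the uncapped scheme costs more than six times the capped one before a single streaming dollar is counted, which is why the initial HEVC Advance terms were received so badly.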

And then there are the unaffiliated. Companies claiming patents related to HEVC who are not in any pool. And if you think they won’t be suing anyone then think again – Blackberry just sued Facebook for messaging related patents – easy to see them suing for HEVC patents in their current position. Who can blame them? They have been repeatedly sued by patent trolls in the past.

HEVC is said to be the next big thing in video coding. The successor of our aging H.264 technology. And yet, there are too many unknowns about the true price of using it. Should one pay royalties to MPEG-LA, HEVC Advance and Velos Media, or only one of them? Would paying royalties protect from patent litigation?

Is it even economically viable to use HEVC?

Yes. Apple has introduced HEVC in iOS 11 and the iPhone X. My guess is that they are willing to pay the price as long as this keeps the headache and mess in the Android camp (I can’t see the vendors there coming to terms on who in the value chain will end up paying the royalties for it).

With such greed and uncertainty, a void was left. One that got filled by AOMedia and AV1.

AOMedia – The who’s who of our industry

AOMedia is a who’s who list of our industry. It started small, with just 7 big names, and now has 12 founding members and 22 promoter members.

Some of these members are members of MPEG-LA or already have patents in HEVC and video coding. And this is important. Members of AOMedia effectively allow free access to essential patents in the implementation of AOMedia related specifications. I am sure there are restrictions applied here, but the intent is to have the codecs coming out of AOMedia royalty free.

A few interesting things to note about these members:

  • All browser vendors are there: Google, Mozilla, Microsoft and Apple
  • All large online streaming vendors are there: Google (=YouTube), Amazon and Netflix
  • From that same streaming industry, we also have Hulu, Bitmovin and Videolan
  • Most of the important chipset vendors are there: Intel, AMD, NVidia, Arm and Broadcom
  • Facebook is there
  • Of the enterprise video conferencing vendors we have Cisco, Vidyo and Polycom
  • Qualcomm is missing

AOMedia is at a point that stopping it will be hard.

Here’s how AOMedia visualize its members’ products:

What’s in AV1?

AV1 is a video codec specification, similar to VP8, H.264, VP9 and HEVC.

AV1 is built on 3 main premises:

  1. Royalty free – what gets boiled into the specification is either based on patents of the members of AOMedia or uses techniques that aren’t patented. It doesn’t mean that companies can’t claim IP on AV1, but as far as the effort on developing AV1 goes, they aren’t knowingly letting in patents
  2. Open source reference implementation – AV1 comes with an open source implementation that you can take and start using. So it isn’t just a specification that you need to read and build with a codec from scratch
  3. Simple – similar to how WebRTC is way simpler than other real time media protocols, AV1 is designed to be simple

Simple probably needs a bit more elaboration here. It is probably the best news I heard from Alex about AV1.

Simplicity in AV1

You see, in standardization organizations, you’ll have competing vendors vying for an advantage over one another. I’ve been there during the glorious days of H.323 and 3G-324M. What happens is that a company comes up with a suggestion. Oftentimes, it will have patents on that specific suggestion. So other vendors will try to block it from getting into the spec, or at the very least delay it as much as they can. Another vendor will come up with a similar but different enough approach, with its own patents, of course. And now you’re in a deadlock – which one do you choose? Coalitions start emerging around each approach, with the end result being that both approaches get accepted with some modifications and added into the specification.

But do we really need both of these approaches? The more alternatives we have to do something similar, the more complex the end result. The more complex the end result, the harder it is to implement. The harder it is to implement, well… the closer it looks like HEVC.

Here’s the thing.

From what I understand (I am not privy to the intricate details, but I’ve seen specifications in the past and been part of making them happen), HEVC is your standard design-by-committee specification. HEVC was conceived by MPEG-LA, which in the last 20 years has given us MPEG-2, H.264 and HEVC. The number of members in MPEG-LA with an interest in getting some skin in this game is large and growing. I am sure that HEVC was a mess of a headache to contend with.

This is where AV1 diverges. I think there’s a lot less politics going on in AOMedia at the moment than in MPEG-LA. Probably due to 2 main reasons:

  1. It is a newer organization, starting fresh. There are politics there, as there are multiple companies and many people involved, but since it is newer, the amount of politics is lower than in an organization that has been around for 20+ years
  2. There’s less money involved. No royalties means no pie to split between patent holders, so fewer fights about who gets their tools and techniques incorporated into the specification

The end result? The design is simpler, which makes for better implementations that are just easier to develop.

AV1 IRL

In real life, we’re yet to see if AV1 performs better than HEVC and in what ways.

Current estimates are that AV1 performs equal to or better than HEVC when it comes to real time. That’s because AV1 has better tools for a similar computation load than what can be found in HEVC.

So… if you have all the time in the world to analyze the video and pick your tools, HEVC might end up with better compression quality, but for the most part, we can’t really wait that long when we encode video – unless we encode the latest movie coming out from Hollywood. For the rest of us, faster will be better, so AV1 wins.

The exact comparison isn’t there yet, but I was told that experiments done on the implementations of both AV1 and HEVC show that AV1 is equal to or better than HEVC.

Streaming, Real Time and SVC

There is something to be said about real time, which brings me back to WebRTC.

Real time low delay considerations of AV1 were discussed from the onset. There are many who focus on streaming and offline encoding of videos within AOMedia, like Netflix and Hulu. But some of the founding members are really interested in real time video coding – Google, Facebook, Cisco, Polycom and Vidyo to name a few.

Polycom and Vidyo are chairing the real time work group, and SVC is considered a first class citizen within AV1. It is being incorporated into the specification from the start, instead of being bolted onto it as was done with H.264 and VP9.

Low bitrate

Then there’s the aspect of working at low bitrates.

With the newer codecs, you see a real desire to push the envelope. In many cases, this means increasing the resolutions and frame rates a video codec supports.

As far as I understand, there’s a lot of effort being put into AV1 at the other end of the scale – working at low resolutions and doing that really well. This is important for Google, for example, if you look at what they decided to share about VP9 on YouTube:

For YouTube, it isn’t only about 4K and UHD – it is on getting videos to be streamed everywhere.

Based on many of the projects I am involved with today, I can say that there are a lot of developers out there who don’t care too much about HD or 4K – they just want to get decent video being sent and that means VGA resolutions or even less. Being able to do that with lower bitrates is a boon.

Is AV1 “next gen”?

I have always considered AV1 to be the next next generation:

We have H.264 and VP8 as the current generation of video codecs, then HEVC and VP9 as the next generation, and then there’s AV1 as the next next generation.

In my mind, this is what you’d get when it comes to compression vs power requirements:

Alex opened my eyes here, explaining that reality is slightly different. If I try translating his words to a diagram, here’s what I get:

AV1 is an improvement over HEVC but probably isn’t a next generation video codec. And this is an advantage. When you start working on a new generation of a codec, the work necessary is long and arduous. Look at H.261, H.263, H.264 and HEVC codec generations:

Here are some interesting things that occurred to me while placing the video codecs on a timeline:

  • The year indicated for each codec is the year in which an initial official release was published
  • Understand that each video codec went through iterations of improvements, annexes, appendices and versions (HEVC already has 4 versions)
  • It takes 7-10 years from one version until the next one gets released. On the H.26x track, the number of years between versions has grown over time
  • VP8 and VP9 have only 4 years between one and the other. It makes sense, as VP8 came late in the game, playing catch-up with H.264 and VP9 is timed nicely with HEVC
  • AV1 comes only 6 years after HEVC. Not enough time for research breakthroughs that would suggest a brand new video codec generation, but probably enough to make improvements on HEVC and VP9

About the latest press release

AOMedia has been working towards this important milestone for quite some time – the 1.0 version specification of AV1.

The first thing I thought when seeing it: they got there faster than WebRTC 1.0. WebRTC was announced 6 years ago and we’re only now about to have 1.0 finalized (an effort going on since 2015). AOMedia started in 2015 and already has its 1.0 ready.

The second one? I was interested in the quotes at the end of that release. They show the viewpoints of the various members involved.

  • Amazon – great viewing experience
  • Arm – bringing high-quality video to mobile and consumer markets
  • Cisco – ongoing success of collaboration products and services
  • Facebook – video being watched and shared online
  • Google – future of media experiences consumers love to watch, upload and stream
  • Intel – unmatched video quality and lower delivery costs across consumer and business devices as well as the cloud’s video delivery infrastructure
  • NVIDIA – server-generated content to consumers. […] streaming video at a higher quality […] over networks with limited bandwidth
  • Mozilla – making state-of-the-art video compression technology royalty-free and accessible to creators and consumers everywhere
  • Netflix – better streaming quality
  • Microsoft – empowering the media and entertainment industry
  • Adobe – faster and higher resolution content is on its way at a lower cost to the consumer
  • AMD – best media experiences for consumers
  • Amlogic – watch more streaming media
  • Argon Design – streaming media ecosystem
  • Bitmovin – greater innovation in the way we watch content
  • Broadcom – enhance the video experience across all forms of viewing
  • Hulu – Improving streaming quality
  • Ittiam Systems – the future of online video and video compression
  • NGCodec – higher quality and more immersive video experiences
  • Vidyo – solve the ongoing WebRTC browser fragmentation problem, and achieve universal video interoperability across all browsers and communication devices
  • Xilinx – royalty-free video across the entire streaming media ecosystem

Apple decided not to share a quote in the press release.

Most of the quotes there are about media streaming, with only a few looking at collaboration and social. This somewhat saddens me when it comes from the likes of Broadcom.

I am glad to see Intel and Arm taking active roles. Both as founding members and in their quotes to the press release. It is bad that Qualcomm and Samsung aren’t here, but you can’t have it all.

I also think Vidyo are spot-on. More about that later.

What’s next for AOMedia?

There’s work to be done within AOMedia with AV1. This is but a first release. There are bound to be some updates to it in the coming year.

Current plans are to have some meaningful software implementation of AV1 encoder/decoder by the end of 2018, and somewhere during 2019 (end of most probably) have hardware implementations available. Here’s the announced timeline from AOMedia:

Rather ambitious.

Realistically, mass adoption would happen somewhere in 2020-2022. Until then, we’ll be chugging along with VP8/H.264 and fighting it out around HEVC and VP9.

There are talks about adding a still image format based on the work done in AV1, which makes sense. It wouldn’t be farfetched to also incorporate future voice codecs into AOMedia. This organization has shown it can bring the industry leaders to the table and come up with royalty free codecs that benefit everyone.

AV1 and WebRTC

Will we see AV1 in WebRTC? Definitely.

When? Probably after WebRTC 1.0. Or maybe not.

It will take time, but the benefits are quite clear, which is what Alex of Vidyo alluded to in the quote given in the press release:

“solve the ongoing WebRTC browser fragmentation problem, and achieve universal video interoperability across all browsers and communication devices”

We’re still stuck in the challenge of which video codec to select in WebRTC applications.

  • Should we go for VP8, just because everyone does, it is there and it is royalty free?
  • Or should we opt for H.264, because Safari supports it and it has better hardware support?
  • Maybe we should go for VP9, as it offers better quality, and “suffer” the computational hit that comes with it?

AV1 is to video coding what Opus is to audio coding. The article I wrote back in 2013? It is now becoming true for video. Once AV1 adoption hits (and it will in the next 3-5 years), the dilemma of which video codec to select will be gone.
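Until then, the selection logic behind those three questions has to live in application code. Here is a minimal sketch of it – a hypothetical helper, not part of any WebRTC API – that picks the first codec from a preference-ordered list that the browser also supports:

```javascript
// Hypothetical helper: pick the first preferred codec the browser supports.
// In a real browser app, the `supported` list could be derived from
// RTCRtpSender.getCapabilities('video').codecs (their mimeType values).
function pickVideoCodec(preferences, supported) {
  const names = new Set(supported.map((c) => c.toLowerCase()));
  for (const codec of preferences) {
    if (names.has(codec.toLowerCase())) {
      return codec;
    }
  }
  return null; // no overlap - video would fail to negotiate
}

// Example: prefer VP9 for quality, fall back to VP8, then H.264.
const choice = pickVideoCodec(['VP9', 'VP8', 'H264'], ['H264', 'VP8']);
// choice === 'VP8'
```

Once AV1 is everywhere, that preference list collapses to a single obvious entry.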

Until then, check out this free mini course on how to select the video codec for your application

Sign up for free

The post AV1 Specification Released: Can we kiss goodbye to HEVC and royalty bearing video codecs? appeared first on BlogGeek.me.

Progressive Web Apps (PWA) for WebRTC (Trond Kjetil Bremnes)

webrtchacks - Wed, 03/28/2018 - 13:30

One of WebRTC’s biggest challenges has been providing consistent, reliable support across platforms. For most apps, especially those that started on the web, this generally means developing a native or hybrid mobile app in addition to supporting the web app.  Progressive Web Apps (PWA) is a new concept that promises to unify the web for […]

The post Progressive Web Apps (PWA) for WebRTC (Trond Kjetil Bremnes) appeared first on webrtcHacks.

Kamailio World 2018 – Participation Grants

miconda - Tue, 03/27/2018 - 19:30
Once again, we are committing to the program from past years of giving free event passes to the next Kamailio World (May 14-16, 2018) to several people from the academic environment (universities or research institutes – bachelor, master or PhD programs qualify), as well as people from underrepresented groups.

Kamailio has its origin in the academic environment, being started by the FhG Fokus Research Institute, Berlin, Germany, and evolving over time into a worldwide developed project with an open and friendly community.

If you think you are eligible and want to participate, email <registration [at] kamailio.org>. Participation in all the content of the event (workshops, conference and social event) is free, but you will have to cover your own travel and accommodation expenses. Write a short description of your interest in real time communications and, where applicable, the university or research institute you are affiliated with.

Also, if you are not a student but are in touch with some, or have access to student forums/mailing lists, it would be very appreciated if you forward these details.

All this is possible thanks to the Kamailio World Conference sponsors: Evosip, 2600hz, Sipwise, Netaxis, Sipgate, FhG Fokus, Asipto, Simwood, LOD.com, NG-Voice, Evariste Systems, Digium, VoiceTel, Pascom and Core Network Dynamics.

More information about Kamailio World Conference 2018 is available on the web site.

Thanks for flying Kamailio!

Get trained to be your company’s WebRTC guy

bloggeek - Mon, 03/26/2018 - 12:00

Demand for WebRTC developers is stronger than supply.

My inbox is filled with requests for experienced WebRTC developers on a daily basis. They range from entrepreneurs looking for a technical partner to managers searching for outsourcing vendors to help them out. My only challenge here is that developers and testers who know a thing or two about WebRTC are hard to find. Developers who are aware of the media stack in WebRTC, and haven’t just dabbled with a github “hello world” demo – these are truly rare.

This is why I created my WebRTC course almost 2 years ago. The idea was to try and share my knowledge and experience around VoIP, media processing and of course WebRTC, with people who need it. This WebRTC training has been a pleasant success, with over 200 people who took it already. And now it is time for the 4th round of office hours for this course.

Who is this WebRTC training for?

This WebRTC course is for anyone who uses WebRTC in their daily work, directly or indirectly. Developers, testers, software architects and product managers will benefit from it the most.

It has been designed to give you the information necessary from the ground up.

If you are clueless about VoIP and networking, then this course will guide you through the steps needed to get to WebRTC: explaining what TCP and UDP are, how HTTP and WebSockets fit on top of them, and moving on to the acronyms used by WebRTC (SRTP, STUN, TURN and many others).

If you have VoIP knowledge and experience, then this course will cover the missing parts – where WebRTC fits into your world, and what to take special attention to, assuming a VoIP background (WebRTC brings with it a different mindset to the development process).

What I didn’t want to do, is have a course that is so focused on the specification that: (1) it becomes irrelevant the moment the next Chrome browser is released; (2) it doesn’t explain the ecosystem around WebRTC or give you design patterns of common use cases. Which is why I baked into the course a lot of materials around higher level media processing, the WebRTC ecosystem and common architectures in WebRTC.

TL;DR – if you follow this blog and find it useful, then this course is for you.

Why take it?

The question should be why not?

There are so many mistakes and bad decisions I see companies making with WebRTC: from how to model their media routes, to where to place (and how to configure) their TURN servers, to how to design for scale, to which open source frameworks to pick. Such mistakes end up a lot more expensive than any online course would ever be.

In April, next month, I will be starting the next round of office hours.

While the course is pre-recorded and available online, I conduct office hours for a span of 3-4 months twice a year. In these live office hours I go through parts of the course, share new content and answer any questions.

What does it include?

The course includes:

  • 40+ lessons split into 7 different modules with an additional bonus module
  • 15 hours of video content, along with additional links for extra reading material
  • Several e-books available only as part of the course, like how the Jitsi team scales Jitsi Meet, and what are sought after characteristics in WebRTC developers
  • A private online forum
  • The office hours

In the past two months I’ve been working on refreshing some of the content, getting it up to date with recent developments. We’ve seen Edge and Safari introduce WebRTC during that time, for example. These refreshed lessons will be added to the course before the official launch.

When can I start?

Whenever you want. In April, I will be officially launching the office hours for this course round. At that point in time, the updated lessons will be part of the course.

What’s more, there will be a new lesson added – this one about WebRTC 1.0. Philipp Hancke was kind enough to host this lesson with me as a live webinar (free to attend live) that will become an integral lesson in the course.

If you are interested in joining this lesson live:

Free WebRTC 1.0 Live Lesson

What if I am not ready?

You can always take it later on, but I won’t be able to guarantee pricing or availability of the office hours at that point in time.

If you plan on doing anything with WebRTC in the next 6 months, you should probably enroll today.

And by the way – if you need to come as a team to up the knowledge and experience in WebRTC in your company, then there are corporate plans for the course as well.

CONTENT UPGRADE: If you are serious about learning WebRTC, then check out my online WebRTC training:

Enroll to course

The post Get trained to be your company’s WebRTC guy appeared first on BlogGeek.me.

YouTube Does WebRTC – Here’s How

webrtchacks - Fri, 03/23/2018 - 15:22

I logged into YouTube on Tuesday and noticed this new camera icon in the upper right corner, with a “Go Live (New)” option, so I clicked on it to try. It turns out you can now live stream directly from the browser. This smelled a lot like WebRTC, so I loaded up chrome://webrtc-internals to see […]

The post YouTube Does WebRTC – Here’s How appeared first on webrtcHacks.

New Kamailio module: app_python3

miconda - Tue, 03/20/2018 - 21:00
A while ago the app_python3 module was added to Kamailio’s GIT master branch (to be released as part of stable version 5.2.0 in several months), thanks to the development efforts of Anthony Alba.

Although it started from the old app_python, besides being implemented to work with Python3, the new module adds a lot of improvements, leveraging the Python3 architecture for better performance, as well as including support for Python script reload at runtime via an RPC command – so there is no need to restart Kamailio (the feature was ported to app_python meanwhile). The readme of the module is available at:

Now all the Kemi interpreter modules can reload the SIP routing scripts without restarting Kamailio – it works for the Lua, JavaScript, Python2/3 and Squirrel languages.

Happy SIP routing in Python3! You can learn more about the Kemi scripting languages at Kamailio World Conference 2018 – a workshop is dedicated to this topic!

Thanks for flying Kamailio!

How WebRTC Statistics and Performance Monitoring Changed VoIP Monitoring

bloggeek - Mon, 03/19/2018 - 12:00

Monitoring focus is shifting from server-side to client-side in WebRTC statistics collection.

WebRTC happens to decentralize everything when it comes to VoIP. We’re on a journey here to shift the weight from the backend to the edge devices. While the technology in WebRTC isn’t any different than most other VoIP solutions, the way we end up using it and architecting our services around it is vastly different.

One of the prime examples here is how we shifted focus for group calling from an MCU mixing model to an SFU routing model. Suddenly, almost overnight, the notion of deploying an MCU started to seem ridiculous. And believe me – I should know – I worked at a company where 60%+ came from MCUs.

The shift towards SFU means we’re leaning more on the capabilities and performance of the edge device, giving it more power in the interaction when it comes to how to layout the display, instead of doing all the heavy lifting in the backend using an MCU. The next step here will be to build mesh networks, though I can’t see that future materializing any time soon.

VoIP != WebRTC. Maybe not from a direct technical point, but definitely from how we end up using it. If you need to learn more about WebRTC, then my WebRTC training is exactly what you need:

Enroll to course

What I wanted to mention here is something else that is happening, playing towards the same trend exactly – we are moving the collection of VoIP performance statistics (or more accurately WebRTC statistics) from the backend to the edge – we now prefer doing it directly from the browser/device.

VoIP Statistics Collection and Monitoring

If you are not familiar with VoIP statistics collecting and monitoring, then here’s a quick explainer for you:

VoIP is built on the notion of interoperability. Developers build their products and then test them against the spec and in interoperability events. Then those deploying them integrate, install and run a service. Sometimes this ends up using a single vendor, but more often than not, multiple vendors’ products run in the same deployment.

There is no real specification or standard for how monitoring needs to happen or what kind of statistics can or should be collected. There are a few means of collecting that data, and one of the most common approaches is employing HEP/EEP. As the specification states:

The Extensible Encapsulation protocol (“EEP”) provides a method to duplicate an IP datagram to a collector by encapsulating the original datagram and its relative header properties (as payload, in form of concatenated chunks) within a new IP datagram transmitted over UDP/TCP/SCTP connections for remote collection. Encapsulation allows for the original content to be transmitted without altering the original IP datagram and header contents and provides flexible allocation of additional chunks containing additional arbitrary data. The method is NOT designed or intended for “tunneling” of IP datagrams over network segments, and best serves as vector for passive duplication of packets intended for remote or centralized collection and long term storage and analysis.

Translating this to plain English: media packets are duplicated for the purpose of sending them off to be analyzed via a monitoring service.
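As a rough illustration – a deliberately simplified sketch, not the actual HEP/EEP wire format – encapsulation amounts to keeping the original datagram intact as a payload and prefixing it with metadata chunks before shipping the result off to a collector:

```javascript
// Simplified sketch of EEP-style encapsulation (NOT the real HEP wire
// format): the original datagram is kept untouched as the payload, and
// metadata is prepended as length-prefixed chunks.
function encapsulate(datagram, meta) {
  const chunks = [];
  for (const [key, value] of Object.entries(meta)) {
    const body = Buffer.from(`${key}=${value}`, 'utf8');
    const header = Buffer.alloc(2);
    header.writeUInt16BE(body.length, 0);
    chunks.push(header, body);
  }
  // Payload chunk last: the duplicated original datagram, unmodified.
  const payloadHeader = Buffer.alloc(2);
  payloadHeader.writeUInt16BE(datagram.length, 0);
  chunks.push(payloadHeader, Buffer.from(datagram));
  return Buffer.concat(chunks);
}

// The collector parses the chunks back out; meanwhile the original
// packet still flows to its real destination untouched.
const packet = encapsulate(Buffer.from('RTP...'), {
  srcIp: '10.0.0.1',
  dstIp: '10.0.0.2',
});
```

The key design point survives the simplification: duplication is passive, so monitoring never alters the media path itself.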

The duplication of the packets happens in the backend, through the different media servers that can be found in a VoIP network. Here’s how it is depicted on HOMER/SIPCAPTURE’s website:

HOMER collects its data directly from the servers – OpenSIPS, FreeSWITCH, Asterisk, Kamailio – there’s no user devices here – just backend servers.

Other systems rely on the switches, routers and network devices that again reside in the backend infrastructure. Since in VoIP production networks we almost always route the media through the backend servers, the assumption is that it is easier to collect it there, where we have more control, than from the devices.

This works great, but not really needed or helpful with WebRTC.

WebRTC Statistics Collection and Monitoring

With WebRTC, there are only a handful of browsers (4 to be exact), and they all adhere to the same API (that would be WebRTC). And they all have that thing called getStats() implemented. It gives you the same information you find in chrome://webrtc-internals.

Many deployments end up running peer-to-peer, having the media traverse directly through the internet and not through the backend of the service itself. Google Hangouts decided to take that route two years ago. Jitsi added this capability under the name Jitsi P2P4121. How do these services control and understand the quality of their users?

If you look at other media servers out there, most of them are only a few years old. WebRTC itself is just 6 years old now. So everyone is focused on features and stability right now; quality and monitoring are not in their focus area just yet.

Last, but not least, WebRTC is encrypted. Always. And everywhere. So sniffing packets and deducing quality from them isn’t that easy or accurate any longer.

This led to the focus of WebRTC applications in gathering WebRTC statistics from the browsers and devices directly, and not trying to get that information from the media servers.

The end result? Open source projects such as rtcstats and commercial services such as callstats.io. At the heart of these, WebRTC statistics are collected using the getStats() API at an interval of one or more seconds, then sent over to a monitoring server, where they are stored, aggregated and analyzed. We use a similar mechanism at testRTC to collect, analyze and visualize the results of our own probes.
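The collection loop itself is trivial; the useful part is turning two successive snapshots into rates before (or after) shipping them to the monitoring server. A minimal sketch, assuming a simplified stats shape rather than the full RTCStatsReport:

```javascript
// In the browser, snapshots would come from pc.getStats() once per second.
// This pure helper turns two successive snapshots into the metrics a
// monitoring backend typically stores (simplified stat fields assumed).
function computeMetrics(prev, curr, intervalSeconds) {
  const sent = curr.packetsSent - prev.packetsSent;
  const lost = curr.packetsLost - prev.packetsLost;
  const bytes = curr.bytesSent - prev.bytesSent;
  return {
    bitrateKbps: (bytes * 8) / 1000 / intervalSeconds,
    lossPercent: sent > 0 ? (100 * lost) / (sent + lost) : 0,
  };
}

// One second apart: 100 packets sent, 5 lost, 60 KB transferred.
const metrics = computeMetrics(
  { packetsSent: 1000, packetsLost: 10, bytesSent: 120000 },
  { packetsSent: 1100, packetsLost: 15, bytesSent: 180000 },
  1
);
// metrics.bitrateKbps === 480
```

Because this runs on the device itself, the numbers reflect exactly what the user experienced, with no backend guesswork in between.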

What does that give us?

  1. The most accurate indication of performance for the end user – since the statistics are collected directly on the user’s device, there’s no loss of information from backend collection
  2. Easy access to the information – there’s a uniform means of data collection here taking place. One you can also implement inside native mobile and desktop apps that use WebRTC
  3. Increased reliance on the edge, a trend we see everywhere with WebRTC anyway
What’s Next?

WebRTC changes a lot of things when it comes to how we think about and architect VoIP networks. How and why this plays out for statistics and monitoring is something I haven’t seen discussed much, so I wanted to share it here.

The reason for that is threefold:

  1. Someone asked me a similar question on my contact page in the last couple of days, so it made sense to write a longform answer as well
  2. We’re contemplating at testRTC offering a passive monitoring product to use “on premise”. If you want to collect, store and analyze your own WebRTC statistics without giving it to any third party cloud service, then ping us at testRTC
  3. My online WebRTC training is getting a refresher and a new round of office hours. This all starts in April. Time to enroll if you want to educate yourself on WebRTC


The post How WebRTC Statistics and Performance Monitoring Changed VoIP Monitoring appeared first on BlogGeek.me.
