News from Industry

FUSECO Forum 2017

miconda - Tue, 10/31/2017 - 18:01
The 8th edition of the FUSECO Forum conference is organized by the Fraunhofer Fokus Institute in Berlin, during Nov 9-10, 2017. The event’s chairman is once again Prof. Dr. Thomas Magedanz, from TU Berlin and Fraunhofer Fokus. This year’s event will again feature three dedicated tracks consisting of tutorials and interactive workshops on the first day, namely:
  1. Multi-access Network Technologies in 5G-Ready Networks
  2. 5G Edge and Core Software Networks and Emerging 5G Applications
  3. Network Virtualization and Network Slicing for 5G-Ready Networks
The second day features a full-day conference uniting these topics under one umbrella: “The 5G Reality Check: 5G-Ready Applications and Technological Enablers for 5G Implementation”.

Daniel-Constantin Mierla, co-founder of the Kamailio project, will participate in the event, being part of the panel “Practical Experiences in Moving to NFV Infrastructures and Open Challenges” during the first day. The agenda of the two days is available at:

Fraunhofer Fokus, a research institute in next generation communication technologies, is the place where the Kamailio project was started back in 2001 (as SIP Express Router, aka SER). If you want to learn about what’s going to happen next in RTC, this is a must-attend event. Besides using it in research projects, the Fraunhofer Fokus Institute keeps close to the Kamailio project, hosting and co-organizing all the editions so far of the Kamailio World Conference.

Thanks for flying Kamailio!

Kranky Geek 2017: What Does the Pulse of WebRTC Tell Us?

bloggeek - Mon, 10/30/2017 - 12:00

Kranky Geek 2017 has been a roller coaster event for me. Time to discuss what I learned about WebRTC last week.

Yap. We had a full room.

Well… More like 2 full rooms.

When talking to Lawrence some time in the afternoon, he joked with me, saying that apparently we have a problem – the overflow room is overflowing.

The best problem an event organizer could ever ask for.

If you are looking for the event videos, then they are already on YouTube.

I want to share some of my thoughts from before the event and during it. And if possible, try and shed some light on where we’re headed from here.

Want to keep abreast of the WebRTC ecosystem? Join the WebRTC Weekly

Challenges Abound

Putting up an event is a stressful undertaking. There are a lot of aspects that need to be covered, with this constant worry that you’ll end up forgetting something or that something will screw you over. Both are guaranteed to happen no matter how much planning and effort you put into it.

This time, our challenges started early on. It was somewhat harder than usual to decide how to price the event to make it worthwhile doing. Kranky Geek events are expensive to run. From the beginning, we’ve aimed for events that are free to attend (I consider a $10 admission fee that gets donated a free-to-attend event). This leaves covering our expenses and making some revenue out of it as something that relies on sponsors.

Kranky Geek is all about quality content. High quality content. Top notch. The best you can find.

Which means that we select the topics we want. We then hunt for the speakers that fit into that. And we work with our speakers to make them shine.

This process doesn’t always work with sponsors… it is sometimes hard to explain how we operate and why. And at times, sponsors can focus on hard selling their warez, which doesn’t fit into the Kranky Geek spirit (and definitely not to our audience).

This time, it took us slightly longer than usual to get the sponsors onboard and to be certain that we can pull off the event.

It also caused some more stress than usual among us partners. Kranky Geek is a joint effort of 3 people: Chris Koehncke (aka Chris Kranky), Chad Hart (the living spirit behind webrtcHacks) and me.

We don’t always agree, but somehow we fit well together, each one covering the other one’s shortcomings. We make a good team for getting these events done. I hope.

Why am I sharing all this?

To set the stage for what comes next for Kranky Geek, but also to explain the amount of work, effort, time, stress, pain and love that has been put into the Kranky Geek events in general and into this one in particular.

It hasn’t been all happy, but I am proud of the result and happy that we did this.

We Had a Fire Drill!

During the day, we’ve had our share of technical challenges.

The projectors in the main room didn’t work at the beginning (that was before we started the day), and then a few other issues cropped up on us.

Doing this event in Google’s San Francisco office meant we had the best A/V team in the world on site to help us. The crew Google is working with there is top notch. The best I worked with. They made the problems seem easy to solve.

We had this to deal with…

Great @KrankyGeek schedule at #webrtclive this year includes exercise and fresh air, with @Google providing simulated earthquakes & flames! pic.twitter.com/r0QHATG5Wj

— Lawrence Byrd (@LawrenceByrd) October 27, 2017

A week before the event we were told we would have a fire drill in the building on the day of the event. The time kept moving around, settling at 2pm. We scheduled our breaks and sessions around it, with a huge worry of having people leave once the fire drill started.

(that’s Kranky going down the staircase during the drill)

We decided to embrace the fire drill and tried to celebrate it with our audience, and I hope we succeeded. Back from the fire drill, we had almost everyone back.

We should probably make fire drills an integral part of Kranky Geek events.

Time to stop rambling.

The Event Recordings

The recordings are available online.

You can find them here.

We’ve had to reorder the sessions from our original agenda due to constraints we had with some of our speakers – late arrivals and early exits.

So I’ve reordered the sessions here. Following this are the 13 sessions we had, in the original order we wanted (not that it really mattered).

I added some of my commentary on what I liked and learned in each of the sessions.

Kranky Geek Team

Nothing to say here really, besides the fact that I envy Chad’s ability to create slides and present them.

Facebook

This is the first time we had Facebook join us and share a story at Kranky Geek. We had the pleasure of having Li-Tal Mashiach, an Engineering Manager at Facebook, do the talk.

The numbers there are impressive as hell. 400 million monthly active users doing voice and video calls on Facebook Messenger using WebRTC. 400 million.

The next one who asks me if WebRTC is being adopted – I’ll just say 400 million. And then he’ll complain that this isn’t an enterprise application…

Anyways, what I found really interesting is how Facebook is dealing with optimization. The effort placed in the decision making process around video codecs, bitrates, etc.

WebRTC comes in a neat open source package that anyone can use. But it needs a lot more love and care when it comes to making it work at scale – just like any other technology.

TokBox

Badri Rajasekar, CTO of TokBox, shared an experiment that TokBox has been running recently. It was about using head tracking technology to improve video quality.

The idea behind it is that you can scale up a region of interest in an image while sacrificing other regions, which ends up devoting more encoded pixels to the region that matters.

The great thing here is that you do it without touching the encoder or the decoder. Why do we want that? Because the more generic you can make an encoder, the easier it is to implement it in hardware.
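
To make the idea concrete, here is a minimal sketch of the general technique (not TokBox’s actual implementation): the region of interest is redrawn enlarged onto a canvas, and the canvas stream is what gets sent, so the unmodified encoder simply spends more of its bitrate on that region. The coordinates below are hypothetical placeholders.

    // Hypothetical region of interest, e.g. as reported by a head tracker.
    const roi = { x: 200, y: 120, width: 320, height: 240 };

    async function sendWithRegionOfInterest(pc: RTCPeerConnection): Promise<void> {
      const camera = await navigator.mediaDevices.getUserMedia({
        video: { width: 1280, height: 720 },
      });

      // Play the camera into an offscreen <video> element so frames can be read.
      const video = document.createElement('video');
      video.srcObject = camera;
      await video.play();

      const canvas = document.createElement('canvas');
      canvas.width = 1280;
      canvas.height = 720;
      const ctx = canvas.getContext('2d')!;

      // Redraw every frame with the region of interest scaled up to fill the
      // canvas - the rest of the image is simply dropped in this naive version.
      function draw(): void {
        ctx.drawImage(video, roi.x, roi.y, roi.width, roi.height, 0, 0, canvas.width, canvas.height);
        requestAnimationFrame(draw);
      }
      draw();

      // Send the processed canvas stream instead of the raw camera track.
      const processed = canvas.captureStream(30);
      processed.getVideoTracks().forEach((track) => pc.addTrack(track, processed));
    }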

VoiceBase

Walter Bachtiger, Co-founder and CEO of VoiceBase, talked about NLP (Natural Language Processing), and how great insights can be derived out of voice.

It was a bit creepy, understanding how accurate machine learning can be at scale in a contact center.

The part I liked best in this one was how a contact center can decide within 30 seconds how likely you are to buy – if only the people who call me would have used it… it would have saved me a lot of time as a customer.

Atlassian

Emil Ivov, Chief Video Architect at Atlassian and a serial speaker at Kranky Geek, gave a very interesting talk about machine learning and bandwidth estimation.

The team at Jitsi now uses TensorFlow to sift through the metadata they have of calls, to try and understand how the network behaves and what strategy would work best in improving network quality.

It seems like reducing bitrate doesn’t always have the necessary effect on things, and FEC might end up working better.

Vidyo

Roi Sasson, CTO of Vidyo, talked about scale.

This wasn’t about how to scale a service, but rather how to scale a single call. Want 10 people on a call? You may not need to worry, but if you go to a 100 or a 1,000 – you need to think differently about it.

Which is where taking SFUs and cascading them, both within a single data center and geographically, starts making a lot of sense.

WebKit

For the first time, we had a representative from Safari. We got to hear from Youenn Fablet, a contributor to WebKit, what Apple’s default browser does with WebRTC and how.

It was great to have WebKit join us at Kranky Geek, and to hear their fresh thinking about privacy in WebRTC and how they’ve taken care of that in Safari.

Peer5

Hadar Weiss, Co-founder and CEO of Peer5, talked about P2P CDN and using the WebRTC data channel.

We never did have a talk focused on the data channel at Kranky Geek, so this was a first.

I found it really interesting how Peer5 does things differently than the rest of the WebRTC community. Mostly because they care less about call setup times and TURN connectivity and a lot more about throughput.

Hadar showed a few techniques I really liked, like the simple compression of SDP messages (which starts to make sense when you process and send millions of these a day).
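
For illustration only, here is a minimal sketch of the general idea, assuming the pako deflate library is available (this is not necessarily how Peer5 does it): SDP is verbose, repetitive text, so even generic compression shrinks it considerably before it goes over the signaling channel.

    import * as pako from 'pako';

    // Compress an SDP string before sending it over the signaling channel.
    function compressSdp(sdp: string): Uint8Array {
      return pako.deflate(new TextEncoder().encode(sdp));
    }

    // Restore the SDP string on the receiving side.
    function decompressSdp(data: Uint8Array): string {
      return new TextDecoder().decode(pako.inflate(data));
    }

    // Usage: compress the local offer before handing it to the signaling layer.
    async function sendOffer(
      pc: RTCPeerConnection,
      send: (payload: Uint8Array) => void,
    ): Promise<void> {
      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);
      send(compressSdp(pc.localDescription!.sdp));
    }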

Slack

From Slack we had Lynsey Haynes and Andrew MacDonald.

Two things interesting about this session:

  1. The shift they made from a custom WebRTC implementation towards the use of Electron with a vanilla WebRTC implementation in Chromium – all due to maintenance costs
  2. Switching from a custom Janus media server towards a self-developed one written in Elixir

During the Q&A (which didn’t make it to the recording), Slack were asked about their support of Firefox. Andrew answered that support for Firefox is unlikely to come, due to the shift of Slack towards focusing on fewer browsers and on their Electron-based desktop application. I see this thought process taking place elsewhere as well – it doesn’t bode well for the future of browsers.

Twilio

Rob Brazier from Twilio showed an AR (Augmented Reality) use case.

I’ve never been a fan of these acronyms such as IOT, AR, VR. Marrying them with WebRTC always seemed to me somewhat forced.

That said, Rob did a great job in making a case for AR in communication interactions. I am sure more exist.

Frozen Mountain

Anton Venema, CTO of Frozen Mountain, was there to give an interesting demo.

He cobbled together text-to-speech, translation and speech-to-text on top of their media server platform, doing a demo of live language translation taking place in a WebRTC session.

Google

Niklas Blum, Huib Kleinhout and Justin Uberti from Google shared the progress made in WebRTC towards WebRTC 1.0.

This one had a lot of details for developers about things they need to know with the latest versions of Chrome and what to prepare for moving forward.

Appear.in

This year’s closing session was given by Philipp Hancke of appear.in. He’s a repeat speaker at Kranky Geek.

Philipp delved into NSFW (Not Safe For Work) related technologies, experimenting with recognizing such content and deciding what to do with it.

It was an interesting mix of technologies, human behavior and compromises.

Our Event Sponsors

Did I already say that Kranky Geek relies on its sponsors?

This year we had 6 of them:

I’d like to again thank our sponsors.

Diversity and Kranky Geek

For the first time, we had female speakers. Great female speakers.

I want more of this.

If you are a woman, or know of a woman, with technical WebRTC chops and a desire to share your experiences – contact me…

What’s Next for Kranky Geek?

We weren’t sure if we would have another Kranky Geek event. But due to the success of the one we just had, there’s a high probability that we will do another one next year.

So…

Get ready for Kranky Geek 2018.

With more great content, and maybe – a fire drill.

And while at it, if you want to increase your visibility in the market, know that sponsoring a Kranky Geek event is a great way to go about it. So put some budget aside for it. Q3/Q4 2018 is when it will take place.

Want to keep abreast of the WebRTC ecosystem? Join the WebRTC Weekly

The post Kranky Geek 2017: What Does the Pulse of WebRTC Tell Us? appeared first on BlogGeek.me.

Kamailio v5.0.4 Released

miconda - Wed, 10/25/2017 - 19:39
Kamailio SIP Server v5.0.4 stable is out – a minor release including fixes in code and documentation since v5.0.3. The configuration file and database schema compatibility is preserved, which means you don’t have to change anything to update.

Kamailio v5.0.4 is based on the latest version of GIT branch 5.0. We recommend those running previous 5.0.x or older versions to upgrade. There is no change that has to be done to the configuration file or database structure compared with the previous release of the v5.0 branch.

Resources for Kamailio version 5.0.4

Source tarballs are available at:

Detailed changelog:

Download via GIT:

# git clone https://github.com/kamailio/kamailio kamailio
# cd kamailio
# git checkout -b 5.0 origin/5.0

Relevant notes, binaries and packages will be uploaded at:

Modules’ documentation:

What is new in the 5.0.x release series is summarized in the announcement of v5.0.0:

Thanks for flying Kamailio!

Kamailio Autumn Events Summary

miconda - Mon, 10/23/2017 - 14:33
Time flies – it feels like just returning from summer vacation, but it’s already past the middle of autumn, and Kamailio members and community members have been present at several events worldwide.

After Cluecon in Chicago, USA, which ended the summer events in August, it was time for TADHack Global 2017, running two rounds during Sep 22-24 and Sep 29-Oct 1. The IIT RTC Conference happened during Sep 25-28, 2017, in Chicago, hosted as usual by the Illinois Institute of Technology. Astricon 2017 (Orlando, FL, Oct 3-5) was a big hit for the Kamailio project, a good location and a great time meeting many kamailians and friends from the VoIP world.

November is rather busy; next are the events where you can meet people from the Kamailio project:

New events may be added to the list, keep an eye on our website! Should you participate in a local or global event and involve Kamailio in some way, contact us, we are more than happy to publish an article about the event.

Thanks for flying Kamailio!

6 Ways Vendors Sell WebRTC Developer Tools

bloggeek - Mon, 10/23/2017 - 12:30

How can you make a living from WebRTC? You offer WebRTC developer tools.

One of the interesting questions is around monetizing WebRTC. The truth is, it is hard to monetize a concept, or a piece of technology. Kranky said it well over 3 years ago – WebRTC Market Size (is 0).

What does this mean? That you can either make money by selling tools to developers who need WebRTC. Or you make money by offering a service that makes use of WebRTC, but we can now debate if that’s WebRTC or not.

Anything that isn’t WebRTC developer tools falls into other market niches – healthcare, education, gaming, … all these compete and create business far from the WebRTC core itself.

Want to learn who’s offering WebRTC Developer Tools? Check out my WebRTC Developer Tools Landscape infographic.

WebRTC developer tools though – that’s where a small WebRTC market niche exists. And there are several ways to make money in this market. Here are 6 different types of services you can offer to sell WebRTC to developers – some vendors will offer multiple services.

#1 – Sell a Managed Service (SaaS)

You can sell a managed service.

Find something that developers need.

Create a service that offers that solution.

Sell it in XaaS model.

  • We do it at testRTC for testing and monitoring WebRTC services.
  • Callstats.io does that for monitoring.
  • XirSys and a few others offer a managed service for NAT Traversal (=someone else hosts the TURN and STUN servers that your application uses – a minimal configuration sketch follows this list)
  • Mobilinq and others offer a customized hosted offering
  • And then there are CPaaS vendors. Many of them offering WebRTC as well (check out this report on WebRTC CPaaS)
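
To show what “your application uses” means in practice, here is a minimal sketch of pointing a WebRTC app at a hosted TURN/STUN service. The URLs and credentials are placeholders – a managed provider typically hands you real, short-lived ones via an API call.

    // Placeholder ICE server details - a managed NAT traversal service
    // supplies the real URLs and (usually short-lived) credentials.
    const pc = new RTCPeerConnection({
      iceServers: [
        { urls: 'stun:stun.example.com:3478' },
        {
          urls: 'turn:turn.example.com:443?transport=tcp',
          username: 'generated-username',
          credential: 'generated-credential',
        },
      ],
    });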

This market is rather challenging, as the name of the game is scale, and getting there is hard. For some reason, this is also where most customers end up penny pinching.

#2 – License Software

You can develop a product that others need and offer it under a commercial license.

There are those who want or need to run their own service, not relying on managed services. And at times, they are happy to pay for a commercial license that comes with an SLA and someone you can shout at and threaten.

The best thing about most commercially licensed software is that the people behind it work on that software. And once they have paying customers, they are bound by contracts to support and maintain it, usually for long periods of time.

In this category, you can find companies such as Dialogic, Frozen Mountain and SwitchRTC.

#3 – Support and Customization of Open Source

Open Source doesn’t mean free.

People need to be able to make money out of their work – even if they are idealists who are just contributing to the community as a whole.

The way to go about doing that is by writing software that then gets distributed freely under an open source license. This allows anyone to take that software, use it, modify it and even try and contribute back to it and improve upon it.

For popular open source projects, this creates a nice feedback loop that everyone enjoys. For the most obscure projects, it remains the work of a single maintainer.

So how can someone make a living out of open source? By offering one of three different alternatives (usually a mix of them):

  1. Support contracts – if you’re the owner and main maintainer of the open source, then you can sell support contracts. Those who use your open source project may have questions, and giving them priority support can be an income source. For companies, having support available on the open source projects they use can be an important aspect of choosing one open source project over another
  2. Customization work – companies who adopt open source projects sometimes need modifications to these projects. They can attempt to do it on their own, or they can just have the main maintainer of the project do it for them at a price
  3. Commercial license – LGPL, GPL, AGPL and other open source licenses are often considered as cancerous licenses for commercial products. The reason for that is that they “contaminate” the code written around them forcing their license terms on that code as well. There are other open source licenses that are more tolerable to companies (more about it here). Which is why in many cases, a company would prefer paying to get a commercial license instead of using the free open source licenses of a project. Dual licensing is another way of making a living

Jitsi, for example, was distributed under an LGPL license. This allowed the team behind it to make a living through all 3 approaches: support contracts, customization work and offering commercial licenses. After its acquisition by Atlassian, it switched from LGPL to a more lenient APL license. The main reason? Atlassian had other objectives for Jitsi and they weren’t about deriving direct monetary value from it. The Jitsi team no longer offers paid support or customization – it doesn’t mean they don’t support the code base, it just means that you can’t pay them for priority support.

Kurento got acquired by Twilio. Naevatec, the company behind Kurento, made most of its direct revenue from Kurento by offering support and customization work. After the acquisition, Naevatec was left without its engineers who were experienced with Kurento and has since been struggling to maintain the Kurento codebase.

Janus is still an open source project. The company behind it offers support and customization work if someone needs it.

To be able to make a living out of an open source project, it needs to be one that is mission critical to the companies who use it, and it needs to be popular enough. If you plan on taking that route, remember that maintaining such a project can make you proud at the number of companies that end up adopting it, but may well frustrate you if you look at how many of these companies won’t be willing to pay for it at all.

#4 – Conduct Analysis

This is something I wasn’t aware of up until several months ago.

There’s this interesting market niche in WebRTC, and I am not sure how prevalent it is with other technologies.

It is of companies and entrepreneurs who set out building a product without enough knowledge and experience in WebRTC. They try to learn as they go along, floundering while at it. There are many reasons why this happens:

  • They are doing it with an internal team that doesn’t have the skill set
  • They outsourced the project to an outsourcing vendor who knows nothing about WebRTC, but knows how to build a mobile app, a website or even a VoIP service
  • They outsource the project but don’t scope it properly, getting a product that isn’t what they really wanted – and then blaming the outsourcing company about it
Need to beef up your WebRTC experience? Enroll your developers in the Advanced WebRTC Architecture course.

Enroll to the WebRTC course

When this happens, companies start looking for alternatives. And there really are only 4 things to do here:

  1. Close shop and go home. Consider this a failure and just move on to other projects
  2. Reboot. Look at all of it as sunk costs and start from scratch
  3. Fix. Get your team or pay the outsourcing vendor (or other outsourcing vendors) to continue working on the project until it is working
  4. Salvage. Get an expert to look at the existing codebase, analyse it, offer his advice and even let him do the fixing

Salvage is somewhat different from fixing, as it focuses on analyzing the whole architecture along with the implementation instead of just diving right in and continuing with the same approach that brought you to where you are in the first place.

And there are companies who offer such packaged services. Look at Blacc Spot Media and WebRTC.ventures for that if this is what you’re after.

#5 – Outsource Your R&D Skills

You’re good with coding and know WebRTC?

Great.

Outsource it to others.

Many of the people who contact me are after developers with WebRTC experience. Some of them want to have these developers work as freelancers. Others want to outsource to a company. Others still are looking to recruit skilled workers, but understand they may end up outsourcing anyway.

There are quite a few companies and individuals who offer their outsourcing services around WebRTC.

The known freelancers who do WebRTC work are usually fully booked. It is hard to get their attention and time for new projects, but it is worth a try.

The outsourcing companies come in different shapes and sizes. Many don’t have the relevant skillset. Some will place inexperienced developers on your project. Some will do the best work for you.

Quality here varies greatly, so you should take the time to pick the right outsourcing vendor to work with.

In many cases, my role in such projects is to assist in deciding on the exact requirements, selecting the outsourcing vendor and “translating” the requirements between the company and the outsourcing vendor.

#6 – Consult

There are those who simply offer consulting (I do that by the way).

Their role is to assist in the thought processes – be it the initial phases of helping in fleshing out the product’s roadmap and differentiation, assisting in the competitive analysis, in writing down the RFPs (or the response to an RFP), selecting vendors, suggesting architecture, etc.

Many of the experienced outsourcing vendors will usually add a consulting component into their service, and their customers will usually benefit from that consulting.

What’s Next?

Looking to start a WebRTC project? Trying to understand how to get that done? Know that the market is dynamic and always changes.

Which is why I am in the process of updating two resources on my site:

  1. Choosing a WebRTC API Platform report
    1. If you think a vendor that isn’t in the report needs to be added to it – tell me
    2. If you plan on purchasing this report, then the best time would be from now until the publication of the update (see below)
  2. WebRTC Developer Tools Landscape will be updated soon – if you miss vendors here – tell me
My WebRTC API Report is getting an update and you’re getting a discount. From now, until the report gets updated during December, there’s a 20% discount. The discount will include the upcoming update (and a full year of updates).

Get your discounted report

 

The post 6 Ways Vendors Sell WebRTC Developer Tools appeared first on BlogGeek.me.

AstriCon 2017 Remarks

miconda - Wed, 10/18/2017 - 14:31
The 2017 edition of AstriCon was very intense, or at least it was for me (Daniel-Constantin Mierla, Asipto) and the Kamailio presence at the event. Three days without any time to rest!

Before summarising the event from a personal perspective, I want to give credit to the people that helped at the Kamailio booth and around it. Big thanks to Fred Posner (The Palner Group / LOD), who did the heavy lifting on all booth logistics, from preparing the required things in advance to setting up the space, banners and rollups, stickers, a.s.o. Of course, Yeni from DreamDayCakes again baked the famous Kamailio and Asterisk cookies, very delicious bits that people could taste at our booth.

Carsten Bock from NG-Voice was there with his Kamailio-VoLTE demos and devices. Torrey Searle’s giveaways from Voxbone were very popular again. Alex Balashov from Evariste Systems ensured that anyone in doubt understands properly the role of Kamailio in a VoIP network and the benefits of using it along with PBX systems. Joran Vinzens from Sipgate completed our team, being around as we needed.

It was a great time to catch up with many friends, VoIP projects and companies in the expo area, sharing the space with Dan Bogos from the CGRateS project, chatting with the guys from Obihai, IssabelPBX, FreeSwitch, Janus WebRTC Gateway, Simwood, Bicom, Homer Sipcapture, Telnyx, Greenfield…

There were four presentations by the people at the booth — Carsten, Fred, Joran and I had talks on Wednesday or Thursday. On Tuesday, Fred, Torrey and I participated in AstriDevCon, as always a very good full-day session with technical debates, with Mathew Friedrikson and Matt Jordan coordinating and talking about what’s expected next in Asterisk.

Close to the end of the event on Thursday was the open source project management panel, with me among the panelists. Being completely warmed up, and with some pressure from James Body, I also did the Dangerous Demos, where the Ubuntu Phone decided to reboot as I was on stage, leading me towards the Riskiest Demo Prize (aka Crash & Burn). Carsten and Torrey did dangerous demos as well, with Carsten being the runner-up on one of the tracks, which secured him a nice prize as well.

Kamailio related presentations will be collected at:

I expect that recordings of the sessions will become available in the near future from the organizers of Astricon.

During the breaks and evenings, I enjoyed amazing time with friends and kamailians from around the world. There is no time to get bored with people such as James Body, Simon Woodhead, Susanne Bowen, David Duffett, Nir Simionovich, Lorenzo Miniero … and many others that I fail to remember at this moment…

Definitely it was one of the best AstriCon editions ever, credits to Digium and the organizing team! Kamailio had a great time there, see you at the next editions!

Thanks for flying Kamailio!

Development For Kamailio v5.1 Series Is Frozen

miconda - Tue, 10/17/2017 - 14:30
A short note to mark the freezing of development for the Kamailio v5.1 series.

For a few weeks, no new features will be pushed into the master branch. Once the branch 5.1 is created (expected to happen in 3-4 weeks from now), the master branch becomes open again for new features. Meanwhile the focus is going to be on testing the current code.

Work on related tools (e.g., kamctl) or documentation can still be done, as well as getting the new modules in 5.1 into good shape, plus adding exports to the kemi interface (which should not interfere with old code).

The entire testing phase is expected to be 4 to 6 weeks, then the release of v5.1.0 – likely by the end of November – should be out.

What is new in the current master branch compared with the previous stable series (v5.0.x) will be collected at:

Changes required to do the update from v5.0.x will be made available at:

Helping with testing is always very appreciated; should you find any problem in the current master branch, just open an issue on the bug tracker at:

Thanks for flying Kamailio!

Do We Need WebRTC Events?

bloggeek - Mon, 10/16/2017 - 12:00

Yes. We do need WebRTC events. Which is why you should join us at Kranky Geek next week.

I’ve been asked a few times in the past several months by people about events to go to.

Should I go to that event? Will it help me with my current WebRTC project?

What event should I go to, considering I am in need of WebRTC technology?

Where can I travel to learn about WebRTC? Is there a specific event?

Which event will guide me towards what I need with WebRTC? Have me understand the market dynamics? Be a place to mingle with the industry?

Register for a Kranky Geek AMA webinar – a week ahead of our event, Chad Hart will be joining me to discuss WebRTC statistics and what to expect from this year’s Kranky Geek event

Register to the pre-event AMA webinar

The problem with events and WebRTC

If you’re in telecom, then this is how you see WebRTC:

For telecom, WebRTC is just a piece of telecom. An evolution of it. Some way of getting the telecom and VoIP infrastructure into a web browser.

If you’re in web development, then this is how you see WebRTC:

For web developers, WebRTC seems just like another piece of the HTML5 technology stack. You learn a few JS APIs. Maybe some nifty CSS and a few HTML5 tags and you’re done.

And this is how I see WebRTC:

Now, most WebRTC related events so far have been initiated by people in the telecom industry. The end result is usually a very narrow prism of what WebRTC is and what it is capable of achieving. And the side tracks done in the web related events? Most of them end up explaining what WebRTC is, not going nearly deep enough.

The end result has been unsatisfying. At least for me.

This was one of the reasons I started Kranky Geek along with the help of Chris Koehncke some 4 years ago. We’ve since had Chad Hart join.

4 years into it, the question starts to crop up – do we still need WebRTC events?

Why do we still need WebRTC events?

Is there still room with a WebRTC centric theme to it?

Shouldn’t WebRTC just be wrapped into all the telecom, communications and web events out there and be done with it?

I mean, we’ve got enough meetup groups around the world for this technology, but who wants to attend a longer event on WebRTC?

I think it boils down to that illustration up there – the one where WebRTC is smack in the middle of VoIP (telecom) and the web (internet). In a way, we’re still figuring out what that means exactly. How does the infrastructure of such a thing need to be designed; how do you scale it; what kind of monitoring mechanisms do you need to have in place; what are the team sizes, resources and time needed to get something from a proof of concept to production.

WebRTC might not be new, but the fact that it relies on a mix of technologies and disciplines makes for a rather complex and interesting ecosystem.

Join us at Kranky Geek SF 2017

Our next Kranky Geek event takes place on October 27 in San Francisco.

Kranky Geek is about WebRTC developers. Our role is to educate and share the experience coming from developers to developers.

The theme we’ve selected this time is twofold: implementation and beyond RTC.

  1. Implementation: Production ready systems. Those that have battle scars and live to tell their story. We have companies who’ve been running WebRTC in production, at scale for quite some time, and now they are here to explain what they are doing – the challenges they faced and the solutions they came up with
  2. Beyond RTC: You’ve probably heard a word or two about VR, AR, NLP, AI – acronyms that seem to be capturing the news and the imagination lately. We’ve decided to bring in a few experts in this field to explain how that fits into the story of WebRTC

We reached out to Youenn Fablet, who works on the WebKit WebRTC implementation. He will be speaking about iOS and Safari support of WebRTC.

Google will talk about their progress and roadmap of WebRTC.

Talking about Implementations, we will have Atlassian, Facebook, Peer5, Slack and Vidyo – each talking about different aspects of implementations and scaling.

Affectiva, TokBox, Twilio and VoiceBase will cover issues beyond RTC.

For our end-of-day session, we will have a repeat speaker at Kranky Geek – Philipp Hancke from appear.in – working his way around NSFW. Knowing Philipp (and seeing his draft slides), you definitely want to stick around for this one.

Register for a Kranky Geek AMA webinar – a week ahead of our event, Chad Hart will be joining me to discuss WebRTC statistics and what to expect from this year’s Kranky Geek event

Register to the pre-event AMA webinar

There’s a token admission fee in place, to control headcount and showups (free events tend to be under-attended, and we’re shifting away from that). The way this event ends up being funded is by our sponsors, who make this thing happen at all. They are part of our speakers and play an important role in the event itself.

This time, we’ve got Frozen Mountain, Google, Tokbox, Twilio, Vidyo and VoiceBase as our sponsors.

See you at Kranky Geek.

 

The post Do We Need WebRTC Events? appeared first on BlogGeek.me.

Freezing Development For Kamailio v5.1

miconda - Mon, 10/09/2017 - 23:25
The development of new features for the next major release, Kamailio v5.1, is going to be frozen on Monday, October 16, 2017. The master branch has received plenty of new features since the release of v5.0, which was out by the end of February 2017.

The next release will bring at least 7 new modules (although we are expecting one or more to make it in during the next days). The list of new features, not really up to date, is collected in the wiki page at:

After the freeze date, we start the testing phase, which is expected to last for 4 to 6 weeks, then we will have the first release in the 5.1 series, respectively version 5.1.0.

Should you have plans to include new features in v5.1, it is time to hurry up and have the commit or pull request ready by the end of next Monday.

Thanks for flying Kamailio!

Thoughts about Twilio Studio and the Future of CPaaS

bloggeek - Mon, 10/09/2017 - 12:00

How does Twilio Studio fit into Twilio’s Ask Your Developer campaign?

Last month I participated in Twilio’s Signal event that took place in London. I was invited to speak there on test automation in WebRTC. You can watch my video session on YouTube. That isn’t the point of this article though.

Signal is where Twilio announces most of its major new releases. Last time, earlier this year, it was all about the engagement cloud – a restructuring of how Twilio explains its services – and a migration from a single channel world into an omnichannel one. I’ve written at length about it in Is Twilio Redefining CPaaS (hint: it is). I wrote there:

Twilio has introduced a new paradigm for the way it is layering its product offerings.

In the process, it repositioned all of its higher level APIs as the Engagement Cloud. It stitched these APIs to use its lower Programmable Communications APIs, adding business logic and best practices. And it is now looking into machine learning as well.

It is a powerful package with nothing comparable on the market.

Twilio are the best of suite approach of CPaaS – offering the largest breadth of support across this space. And it is making sure to offer powerful building blocks to make developers think twice before going for an alternative.

I think that at Signal London 2017, they outdid that with the introduction of Twilio Studio.

Trying to figure out the best approach for developing your application? Check out this free WebRTC Development Paths Matrix to understand your alternatives

Get your WebRTC Development Paths Matrix

Before We Begin

You might want to take the time to watch Signal London 2017 keynote by Jeff Lawson.

A large part of the London keynote was a rehash of what was said in San Francisco earlier this year. It was about the shift towards omnichannel and the engagement cloud. The words that stuck with me when explaining the engagement cloud were BEST PRACTICES, BUSINESS PROCESSES, REINVENT THE WHEEL (=what not to do).

I’d like to touch in this article on a few main themes and approaches that Twilio is taking, which are shaping its vision and execution at the moment.

“Ask Your Developer” is The Wrong Approach

I’ll start with where I think Twilio is missing the mark.

Ask Your Developer took center stage. Jeff Lawson wanted companies and the business people inside it to go ask their developers what they can do. How they can improve the business.

It gives us developers a great feeling of being in control. Of being valued. But for the most part, and for most developers, this is probably the wrong approach.

Most developers would be happy to work by spec.

The few that aren’t will be promoted quite fast to system architects, managerial roles in development or god forbid to product managers. Why? Because they can see the big picture.

They are the people that get asked. Or the people that answer without asking.

We should be asking our developers, but it should not be our strategy.

Which is where the miss came.

Twilio announced later on in the keynote Twilio Studio. A tool that takes some of that control from developers, putting it at the hands of decision makers.

You no longer have to ask your developer. You can work with him. Together.

More about this later.

The Code that Counts

Some 20 minutes into the keynote, Jeff Lawson invited Patrick Malatack. He started with this:

It was core to how Twilio approaches its customers. Patrick explained that this is the most important code – it is the code that counts.

The idea being that your life as a developer should be made easy, so Twilio is adding not only APIs that serve the functions you need, but also a runtime behind it to facilitate rapid development and deployment – from helper libraries, to logging and debugging facilities, the new Twilio Functions, etc.

I think the code that counts here is developers focusing on their specific business problem – abstracting everything else.

It ended up being a concept of what Twilio Runtime is:

The yellow parts in that screenshot above are the newest announcements. The rest were there earlier. Twilio isn’t only adding more features to its platform – it is beefing up its runtime, making it another competitive advantage in front of many others where it comes to pure SMS and voice capabilities.

The message here is an interesting one, but it wasn’t polished enough. I think this is where we will see more in future Signal events from Twilio.

Twilio Studio

At about 1:24:00 of the keynote, Jeff Lawson introduces Twilio Studio.

It starts by explaining that building is fun but maintaining isn’t (he is correct).

The goal, based on Jeff Lawson, is to massively accelerate roadmaps of Twilio’s customers.

I think it is a lot more than that.

Because this is so new and fresh, still in developer preview (and something I’ve started playing with a bit), it is hard to write this in an ordered fashion. Which means I’ll be going for a bulleted list instead

  • This is a really cool tool. From the demos and the time I’ve spent with Twilio Studio, it is really powerful
  • Getting UI tools that handle state machines for developers is not easy. The Twilio Studio experience has a nice feel to it – I liked the experience
  • Twilio Studio reminds me of Zapier. But where Zapier has a 1D linear approach to tooling and integration, Studio is its big brother, offering 2D visualization to communication state machines
  • There’s no support for the visible communication parts in Twilio Studio. Yet
    • You can send and receive programmable SMS and voice with it
    • A bit of messaging as well
    • But you can’t connect it to the voice in your SDK or manage a video chat room with it
    • This will need to be added later at some point to complete the puzzle
  • Is Twilio Studio the centerpoint of a customer’s flow or a corner piece of it?
    • Twilio Studio can be used to express your whole business process, fleshing out the important parts and branching away to your integrations
    • It can also be used to solve a minor piece of your bigger puzzle
    • It is up to you to decide how you use it
  • In the hands of an experienced architect, Twilio Studio will offer super powers
    • There are many ways to define and template what you need
    • Some approaches will work better, offering more flexibility
    • The focus should be around inclusion of as many stakeholders in the company as possible – being able to show them and interact with them by looking at a Twilio Studio Flow
  • Here’s a question: Is Twilio Studio a tool for Developers? Designers? Implementers? Analysts?
    • Twilio Studio today is fit for developers, but it won’t stay that way long
    • It can be used by implementers that know a bit about code but aren’t developers
    • It can be used to open a discussion between a developer and a business analyst
    • This is a way for expanding the target market within a Twilio’s customer from solely one of developers towards a larger audience. The motto is no longer “Ask your developer”
  • Twilio Studio can be enhanced
    • It is a great first step, but the next ones are a lot more interesting
    • They are also a lot more threatening to competitors
    • If Twilio succeeds here, it will dominate this space with the companies that matter the most
  • Twilio Studio is the ultimate vendor lock-in
    • Enterprises will adopt it, due to its many benefits
    • They will find it hard to switch because of these benefits
    • Enterprises won’t want to switch… Twilio Studio will be too valuable. Too transformative

This tool can do to contact centers what marketing automation is doing to email newsletters. If I were a contact center vendor… I’d consider Twilio Studio my biggest threat moving forward.

Pricing

There were 3 price points for Studio:

  1. FREE – up to 1,000 Engagements. To get developers hooked up to this tool and make them not bother with actually “developing” using “code”. It is also a great way of getting developers to NOT look at other competing vendors
  2. The minimal plan, at +$100/month price point. Covers up to 20,000 Engagements. This is probably where most small companies will be “living”, which is just fine
  3. The enterprise, unlimited plan, at $10,000/month or more. Expensive, but it depends how much traffic you’re handling

Then there’s the question of what an Engagement is exactly. Is it a flow of a single event in a Flow? Is it a widget being accessed inside a Flow? In a 2-way bot conversation, each message exchange is probably an Engagement, I am assuming – the more talkative your app, the more Engagements it will eat up.

Not sure if I am missing a tier between PLUS and ENTERPRISE here. There seems to be too big of a gap in there.

Positioning

One last thing – Twilio Studio has been positioned by Jeff Lawson inside the Engagement Cloud, below all of its current logical components:

I’d place it as a vertical bar next to the whole Twilio stack, probably adding Functions right next to it:

My guess? Product management had a lot of internal discussions on this one, trying to decide where to place Studio – inside the engagement cloud, above it, right next to it. They ended up picking inside it.

A Word About GDPR

GDPR stands for General Data Protection Regulation. It is a piece of legislation that will become effective May 2018, in less than a year. A period of two years of grace has been given to reach that date.

It deals with the protection and processing of private information of citizens of the EU, which practically covers any global player out there, and even many who aren’t.

In a nutshell, it is a headache. Especially if you’re making use of analytics, personalization, automation, chat bots, AI or any other big data related technology. It is also relevant if you just hold an SQL database of your customers.

If you were working in a specific regulated vertical, such as healthcare or finance, then you might be used to such things. If you’re not, then you should start paying attention. Especially with the communication part of whatever it is that you do – this is where personal information gets passed along with the metadata that needs to be handled with care.

Twilio pushing GDPR this early on means two things to me:

  1. They are looking at the enterprise, and making sure their platform is fit for their purpose (large multinational enterprises will be the first to adopt and adhere to something like GDPR)
  2. They are making sure that they are leading the CPaaS pack here. I am unaware of any other CPaaS vendor who has been pushing GDPR besides stating that they will be ready by May 2018. Twilio is trying to make sure it is synonymous with “GDPR compliant CPaaS”.

It also means that communication – telecom or IP based – is becoming slightly harder to handle. Something that works well for a vendor like Twilio whose purpose in life is simplifying complexity (=the more complexity the more value derived by Twilio).

Where do we go from here?

Twilio was and still is the undisputed CPaaS king. They are bigger than anyone else by a large margin and they are working hard on maintaining a technology edge on everyone else.

Twilio’s stock has been somewhat volatile lately with Uber’s announcement and later Amazon’s text messaging announcement (which ended up being about Amazon using Twilio). Twilio seems vulnerable.

The two main announcements here were Studio and GDPR. Studio brings Twilio to a larger audience and increases their vendor lock-in, thereby reducing the effectiveness of their competition. GDPR is put in place as another headache Twilio solves for its customers – the more regulation and bureaucracy like GDPR the better for a company like Twilio, as it reduces the competition from in-house developers – which is doubly important now.

These two announcements are there to deal with its perceived vulnerability. They make developing using Twilio easier than ever – almost risk-free. And it makes it harder for competition to succeed in future land grabs trying to go after Twilio’s bigger accounts.

It will be interesting to see how competitors would react to this in the long run, and even more interesting to see what will Twilio Studio grow into.

Trying to figure out the best approach for developing your application? Check out this free WebRTC Development Paths Matrix to understand your alternatives

Get your WebRTC Development Paths Matrix

The post Thoughts about Twilio Studio and the Future of CPaaS appeared first on BlogGeek.me.

H.264 or VP8 in Your WebRTC Application?

bloggeek - Mon, 10/02/2017 - 12:00

No simple answer.

Apple recently announced that Safari will be supporting WebRTC. That support isn’t there yet to the point where it is stable enough, but we already know one thing:

Safari supports only the H.264 video codec.

Codec wars are over? 2 MTI (mandatory to implement) codecs in the form of VP8 and H.264?

Who cares?

Reality is that Apple decided at this stage not to support VP8 – and it hasn’t said anything about plans to support or not support VP8 in the future. That said, all signals indicate that support for VP8 in Safari is unlikely to happen.

This brings us to a simple yet challenging question:

When writing a WebRTC application, should you make use of VP8 or H.264?

The answer isn’t a simple one. Choosing VP8 will leave you without Safari. Choosing H.264 will leave you without other important features and capabilities, as well as create a potential legal headache.
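
If you do need to decide at runtime, here is a minimal sketch (my own illustration, not part of the course material) for checking which video codecs the browser can send and nudging the offer toward one of them. Note that setCodecPreferences() support varies across browsers, so treat this as best-effort.

    // List the video codecs this browser can send (e.g. video/VP8, video/H264).
    function supportedVideoCodecs(): Set<string> {
      const caps = RTCRtpSender.getCapabilities('video');
      return new Set((caps?.codecs ?? []).map((c) => c.mimeType.toLowerCase()));
    }

    // Reorder codec preferences on every video transceiver so the given mime
    // type is offered first; does nothing where the codec or API is missing.
    function preferCodec(pc: RTCPeerConnection, mimeType: string): void {
      const want = mimeType.toLowerCase();
      const codecs = RTCRtpSender.getCapabilities('video')?.codecs ?? [];
      const preferred = codecs.filter((c) => c.mimeType.toLowerCase() === want);
      const rest = codecs.filter((c) => c.mimeType.toLowerCase() !== want);
      if (!preferred.length) return;
      for (const transceiver of pc.getTransceivers()) {
        if (transceiver.receiver.track.kind === 'video' &&
            typeof transceiver.setCodecPreferences === 'function') {
          transceiver.setCodecPreferences([...preferred, ...rest]);
        }
      }
    }

    // Usage: prefer VP8 when the browser can send it, otherwise go with H.264.
    // pc is assumed to be an RTCPeerConnection with video tracks already added.
    // preferCodec(pc, supportedVideoCodecs().has('video/vp8') ? 'video/vp8' : 'video/h264');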

This is why I decided to create a new free video mini course – to guide you through the process and help you make the best decision here.

This video course, Picking a WebRTC Video Codec, is free and includes 4 lessons and a cheat sheet.

Find out which codec to use: VP8 or H.264

The post H.264 or VP8 in Your WebRTC Application? appeared first on BlogGeek.me.

What’s in my Online WebRTC Course?

bloggeek - Mon, 09/25/2017 - 12:00

Looking for a WebRTC training? Search no more. My online WebRTC course is here.

I will be relaunching my Advanced WebRTC Architecture Course next week, so it is time to see what you’ll find in this WebRTC training program I’ve created and fine-tuned for over a year now.

Prefer watching and listening more than reading? Join my free webinar on Wednesday for a quick lesson on WebRTC architecture related topics, where I’ll also be explaining the WebRTC training course and its contents.

Register and Grok media in WebRTC

The sections below explain the various parts of this unique WebRTC training. These are decidedly focused on delivering the best learning experience possible.

WebRTC Training Main Modules

The course is designed and built around 7 main modules:

Each module includes multiple lessons, and each lesson is a recorded video session of anywhere between 10 and 40 minutes in length. Most lessons also include additional links and some written content.

Module 1 gives you the baseline information about what WebRTC is. Consider it your introduction to the topic.

Modules 2-3 focus on signaling. They’ll take you from an understanding of UDP and TCP up to deciding what signaling protocol to use in each case and why.

Modules 4-5 are all about media. They explain voice and video codecs – in the context of their relevance to WebRTC. They also deal with the various media architectures available in group calling and recording scenarios.

Module 6 is all about the ecosystem. It lists the different strategies developers have in front of them when designing a WebRTC application, and then goes into details of each one of these.

Module 7 brings it all together. It takes different scenarios and use cases, analyzes them and builds the necessary architectures to support each use case. This is where the theory comes into practice.

The total length of the recordings in all modules and lessons? Over 15 hours.

You progress with the material at your own pace, jumping between lessons as you see fit, or through the original order they were laid out in.

If you’re looking for something to print and share, there’s a PDF version of the WebRTC course syllabus available.

Get Your WebRTC Questions Answered in the Course Forum

The course itself is supplemented with an online forum.

I’ve been contemplating making that forum a Slack channel or a Facebook group. Decided against it. While that may change with time, the course does have a forum built into it.

When you enroll to the course, you also gain access to the forum, which is where you can ask questions and get answers to them.

At any point in time.

Be it about a specific lesson, or a challenge you have in what you’re currently doing at work with WebRTC.

And if sharing openly isn’t your thing, you can always just email me directly.

WebRTC Training Office Hours

Twice a year, a series of office hours are provided for the course.

There are 12 such live sessions, taking place on roughly a weekly basis. They happen in 2 different times of the day, to fit different timezones.

These office hours include two parts in them:

  1. Me rambling about a topic. Call it a live lesson. It can be something from the actual course, or just thoughts and updates on what’s been going on lately with WebRTC out there
  2. Q&A. In this part, those enrolled to the course can ask anything they want. It is a part of the course which not many use, but those that do seem to enjoy it and derive benefit from it

The office hours are recorded and available for playback as well, so if you miss a session – you can always return to it and play it back.

WebRTC Course Bonus Materials

Besides all the 7 course module, I’ve added a bonus module.

This one contains some extra lessons as well as cheat sheets and templates that are spread all over my site in an easy to reach location.

What lessons are in the bonus materials?

4 recorded lessons

  1. WebRTC standardization
  2. Writing RFP requirements for WebRTC
  3. Media algorithms
  4. Using testRTC

The media algorithms lesson is really important. It covers topics that I touch only lightly during the course, such as echo cancellation and the jitter buffer.

2 recorded guest lessons

In my last round of the course, appear.in, who took the corporate plan, were also kind enough to share two new guest lessons:

  1. Video Quality in WebRTC
  2. Deploying (co)TURN on AWS

Philipp Hancke and Bradley T. Hughes were the instructors for these two and I found myself learning a lot in these lessons as well. Now, they are part of the course bonus materials.

What’s New in This Round of the WebRTC Course?

This is the third time I am running this course, and the second round of updates to it.

  1. I’ve updated some of the materials where appropriate (someone told me recently that Apple is doing something with WebRTC, so it had to find its way to the course)
  2. I also recorded a session from scratch because apparently, the audio recording of that one wasn’t the best
  3. The bonus materials (described above), are going to go away. They will be available only during course launch periods (=this week) or for corporate plans
  4. There’s a new eBook that is going to be added as a bonus to the course. It is called “Built to Scale”, and it is a look behind the scenes of how meet.jit.si is… built to scale

A Few Questions Answered About the WebRTC Course

I am now adding an option to take my WebRTC training as part of every consulting project I take. Sometimes, the customer takes me up on the offer, and other times they don’t. There are questions that get asked almost all the time about the course by these customers, so I decided to answer the most common ones here.

How long will it take to work through the WebRTC course?

It is entirely up to you.

There’s over 15 hours of recorded content in the course. More if you start going through the links, external slide decks and videos that I share in the course lessons.

But at the end of the day:

  1. You decide on the pace of your WebRTC studies
  2. You decide which lessons to start with first
  3. You decide if there are lessons you prefer skipping
  4. You decide if you want to watch to a specific lesson again

If you take a lesson in each working day, then 2 months is approximately what you’ll need to get from start to end.

Is there any prerequisite to taking this WebRTC training?

This WebRTC training program assumes you have some good understanding of technology. The rest – it fills in with the various modules of the course.

You don’t need to have knowledge of VoIP to take this course. You don’t need to be a web developer either. What you do need is some technical grasp and understanding.

If you already have prior knowledge, then that’s fine – this WebRTC course isn’t forcing you to take its modules and lessons by their order, so you can skip to the relevant topics that interest you.

Is there a certificate?

As most online learning courses go, so too the WebRTC course offers a certificate.

Once you’ve completed the course, you will be receiving a WebRTC certificate indicating you’ve passed the course.

For companies, there’s a separate plan, which enables them to hold a badge of the WebRTC course. You can find the vendors that have taken this plan in the corporate partners page.

What’s Next?

Want to learn more about media in WebRTC? Join this free webinar to see an analysis of a real case study I came across recently: what the company had in mind to build, and how they botched their architecture along the way.

Register and Grok media in WebRTC

And if you’re really serious, enroll in my Advanced WebRTC Architecture Course.

The post What’s in my Online WebRTC Course? appeared first on BlogGeek.me.

AstriCon 2017

miconda - Fri, 09/22/2017 - 15:07
Kamailio is going to be present at AstriCon 2017, the Asterisk User Conference and Exhibition organized by Digium, taking place in Orlando, FL, USA, during October 3-5, 2017. Carsten Bock, Daniel-Constantin Mierla, Fred Posner and Jöran Vinzens will have presentations about Kamailio (see the schedule here). Besides the conference sessions, the Kamailio project has a booth in the expo area – be sure to stop by for a chat and some cool demos of using Kamailio alone or together with Asterisk. We expect a sizable group of people from the Kamailio community, the event being a great chance to meet with many worldwide friends, especially the North American kamailians. Looking forward to meeting many of you in Orlando at the beginning of October! Thanks for flying Kamailio!

Next Kamailio IRC Devel Meeting

miconda - Wed, 09/20/2017 - 15:06
To sync properly for the next major release of Kamailio (v5.1.0) and ongoing development, we propose an IRC devel meeting for next week, on Wednesday, Sep 27, 2017. An alternative would be the following day, Sep 28, or, if there are many devs that want to attend and cannot make it on those days, we can look at another date. Just propose a new day and time via the sr-dev mailing list. The meeting is going to be held as usual in the #kamailio channel on the freenode.net IRC network. A wiki page was created to collect the topics to be discussed: feel free to add topics there or reply to the mailing list with whatever you think is relevant to discuss. Thanks for flying Kamailio!

SDP: Your Fears Are Unleashed (Iñaki Baz Castillo)

webrtchacks - Wed, 09/20/2017 - 12:55

We have had many posts on Session Description Protocol (SDP) here at webrtcHacks. Why? Because it is often one of the most confusing yet critical aspects of WebRTC. It has also been among the most controversial. Early in WebRTC’s history, debates over SDP led to the development of the parallel ORTC standard, which is now largely merging back into the […]

The post SDP: Your Fears Are Unleashed (Iñaki Baz Castillo) appeared first on webrtcHacks.

Grokking Media in WebRTC (a free webinar for my WebRTC Course)

bloggeek - Mon, 09/18/2017 - 12:00

Media in WebRTC.

What makes it so challenging?

I guess it can be attributed to the many disciplines and different areas of knowledge that you are expected to grok.

My last two articles? They were about the differences between VoIP, WebRTC and the web.

By now, you probably recognize this:

If you’ve got some VoIP background, then you should know how WebRTC is different than VoIP.

If you’ve got a solid web background, then you should know why WebRTC development is different than web development.

When it comes to media, media flows and media related architectures, there seems to be an even bigger gap. People with VoIP background might have some understanding of voice, but little in the way of video. People with web background are usually clueless about real time media processing.

The result is that in too many cases, I see WebRTC architectures that make no sense given what the vendor had in mind to create.

Want to learn more about media in WebRTC? Join this free webinar to see an analysis of a real case study I came across recently. What did the company have in mind to build, and how did they botch their architecture along the way?

Register and Grok media in WebRTC

Here are 4 reasons why media is so challenging:

#1 – Media is as Real Time as it Gets

Page load speed is important. People leave if your site doesn’t load fast. Google incorporates it as an SEO ranking parameter.

This is how it is depicted today:

So… every second counts. And the post slug is “your-website-design-should-load-in-4-seconds”.

From a WebRTC point of view, here’s what I have to say about that:

If I were given a full second to get things done with WebRTC I’d be… (fill in the blank)

Seriously though, we’re talking about real time conversations between people.

Not this conversation:

But the one that requires me to be able to hold a real, live one. With a person that needs to listen to me with his ears, see me with his eyes, and react back by talking to me directly.

400 milliseconds of a roundtrip or less (that’s 200 milliseconds to get media from your camera to the display on the other side) is what we’re aiming for. A full second would be disastrous and not really usable.

Real time.

For real.
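
If you want to see how you are doing against that budget, WebRTC exposes the measured round trip time through getStats(). Here is a minimal sketch, assuming an already-connected RTCPeerConnection stored in a variable called pc (the variable name is mine, not something WebRTC defines):

  async function logRoundTripTime(pc) {
    const stats = await pc.getStats();
    stats.forEach((report) => {
      // the nominated ICE candidate pair carries the live RTT estimate, in seconds
      if (report.type === 'candidate-pair' && report.nominated &&
          report.currentRoundTripTime !== undefined) {
        console.log('round trip time:', report.currentRoundTripTime * 1000, 'ms');
      }
    });
  }

  setInterval(() => logRoundTripTime(pc), 1000);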

#2 – Media Requires Bandwidth. Lots and Lots of Bandwidth

This one seems obvious but it isn’t.

Here’s a typical ADSL line:

Most people live in countries where this is the type of connection you have into your home. You’ll have a 20, 40 or maybe 100Mbps downlink – that’s the maximum bitrate you can receive. And then you’ll have a 1, 2 or god forbid 3Mbps uplink – that’s the maximum bitrate you can send.

You see, most of the home use of the internet is based on the premise that you consume more than you generate. But with WebRTC, you’re generating media at all times (if it isn’t a live streaming type of a use case). And that media generation is going to eat on your bandwidth.

Here’s how much it takes to deliver this page to your browser (text+code, text+code+images) versus running 5 minutes of audio (I went for 40kbps) and 5 minutes of video (I went for 1Mbps). I made sure the browser wasn’t caching any page elements.

There’s no competition here.

Especially if you remember that with the page it is only you downloading it, while with audio and video you’re both sending and receiving – and it is relentless: as long as the conversation goes on, the data use will grow.
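
The back-of-the-envelope math for that 5 minute example is simple enough (remember bitrates are bits, not bytes):

  const seconds = 5 * 60;
  const audioMB = 40000 * seconds / 8 / 1e6;    // 40kbps audio -> ~1.5 MB
  const videoMB = 1000000 * seconds / 8 / 1e6;  // 1Mbps video  -> ~37.5 MB
  console.log(audioMB, videoMB);                // and that's per direction, per participant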

Three more things to consider here:

  1. Usually, the assumption is that you need twice the bandwidth available than what you’re going to effectively send or receive (overheads, congestion and pure magic)
  2. You’re not alone on your network. There are more activities running on your devices competing over the same bandwidth. There can be more people in your house competing over the same bandwidth
  3. If you’re connecting over WiFi, you need to factor in stupid issues such as reception, air interference, etc. These affect the effective bandwidth you’ll have as well as the quality of the network
#3 – Media is a Resource Hog

So it’s real time and it eats bandwidth. But that’s only half the story.

The second half involves anything else running on your device.

To encode and decode you’re going to need resources on that device.

CPU. Something capable. Usable hardware acceleration for the codecs is welcome.

Memory. Encoding and decoding are taxing processes. They need lots and lots of memory to work well. And also remember that the higher the resolution and frame rate of the video you’re pumping out – the more memory you’ll need to process it.

Bus. Usually neglected, there’s the device’s bus. Data needs to flow through your device. And video processing takes its toll.

Doing this in real time, means opening dedicated threads, running algorithms that are time sensitive (acoustic echo cancellation for example), synchronizing devices (lip syncing). This is hard. And doing it while maintaining a sleek UI and letting other unrelated processes run in the background as well makes it a tad harder.

So if you’re thinking of running multiple encoders and decoders on the device, working in mesh topologies in front of a large number of other users, or any other tricks, your plans need to account for these challenges. And they need to keep in focus the fact that browser vendors have to be aware of these topologies and use cases and take the time to optimize WebRTC to support them.
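
One practical knob you do control is how much media you capture in the first place. A hedged sketch: asking getUserMedia for a lower resolution and frame rate, which directly reduces the CPU, memory and bus load of encoding (exact constraint behavior varies a bit per browser):

  navigator.mediaDevices.getUserMedia({
    audio: true,
    video: {
      width:     { ideal: 640 },
      height:    { ideal: 360 },
      frameRate: { max: 15 }
    }
  }).then((stream) => {
    // attach this lower-cost stream to your peer connection / video element
  });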

#4 – Media is Just… Different

Then there’s this minor fact of media just being so darn different.

It isn’t TCP, like HTTP and Websocket.

It requires 3 (!) different servers to just get a peer to peer session going (and they dare call it peer to peer).

Here’s how most websites would indicate their interaction with the browser:

And this is how a basic one would look like for WebRTC:

We’ve got here two browsers to make it interesting. Then there’s the web server and a STUN/TURN server.

It gets more complicated when we want to add some media servers into the mix.

In essence, it is just different than what we’re used to in the web – or in VoIP (who decided to do signaling with HTTP anyway? Or rely on STUN and TURN instead of placing an SBC?).
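
To make that difference concrete, here is a minimal sketch of how those extra servers show up in code: the web server only carries your signaling, while STUN/TURN are configured directly on the peer connection. The URLs and credentials below are placeholders, not real servers:

  const pc = new RTCPeerConnection({
    iceServers: [
      { urls: 'stun:stun.example.org:3478' },
      { urls: 'turn:turn.example.org:3478', username: 'user', credential: 'secret' }
    ]
  });
  // offers, answers and ICE candidates still travel over your own signaling channel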

What’s Next?

These reasons of media being challenging? Real time, bandwidth-needy, resource hog and just different. And that’s on the browser/client side only. Servers that need to process media suffer from the same challenges and a few more. One that comes to mind is handling scale.

So we’ve only touched the tip of the iceberg here.

This is why I created my Advanced WebRTC Architecture Course a bit over a year ago. It is a WebRTC training that aims at improving the WebRTC understanding of developers (and the semi-technical people around them).

In the coming weeks, I’ll be relaunching the office hours that run alongside the course for its third round. Towards that goal, I’ll be hosting a free webinar about media in WebRTC.

I’ll be doing something different this time.

I had an interesting call recently with a company moving away from CPaaS towards self development. Their mistake was making that decision with little understanding of WebRTC.

Here’s what we’ll do during the webinar:

  1. Introduce the requirements they had
  2. Explain the architecture and technology stack they selected
  3. Show what went wrong
  4. Suggest an alternate route

Similar to my last launch, there will be a couple of time limited bonuses available to those who decide to enroll for the course.

Want to learn more about media in WebRTC? Join this free webinar to see an analysis of a real case study I came across recently. What did the company have in mind to build, and how did they botch their architecture along the way?

Register and Grok media in WebRTC

And if you’re really serious, enroll in my Advanced WebRTC Architecture Course.

 

The post Grokking Media in WebRTC (a free webinar for my WebRTC Course) appeared first on BlogGeek.me.

PyFreeBilling – OSS Billing Platform

miconda - Mon, 09/11/2017 - 20:53
We want to highlight another project that uses Kamailio: PyFreeBilling, an open source billing platform targeting VoIP wholesale, in which Kamailio works together with FreeSWITCH. It is released under AGPLv3. The project sources are hosted on GitHub at: The project has its own website at: While we have not tried it yet here, the screenshots show a modern design and the list of features is quite impressive — next is an excerpt taken from the project’s docs:
  • Customer add/modify/delete
    • IP termination
    • SIP authentication
    • Prepaid and/or postpaid
    • Realtime billing
    • Block calls on negative balance (prepaid) or balance under credit limit (postpaid)
    • Block / allow negative margin calls
    • Email alerts
    • Daily balance email to customer
    • Limit the maximum number of calls per customer and/or per gateway
    • Multiple contexts
    • Tons of media handling options
    • Powerful ratecard engine
  • Provider add/modify/delete
    • Powerful LCR engine
    • Routing based on area code
    • CLI Routing
    • Routing decision based on quality, reliability, cost or load balancing (equal)
    • Limit max channels by each provider gateway
  • Extensive call and financial reporting screens (TBD)
  • CDR export to CSV
  • Customer panel
  • Design for scalability
Definitely worth a try! Enjoy! Thanks for flying Kamailio! PS. Should you develop a project related to Kamailio or be aware of such a project, do not hesitate to contact us – we are glad to publish articles about them!

Why Developing With WebRTC is Different than Web Development?

bloggeek - Mon, 09/11/2017 - 12:00

Soda and Mentos.

Last week I wrote about the difference between WebRTC and VoIP development. This week let’s see how WebRTC development is different from web development.

Let’s start by saying this for starters:

WebRTC is about Web Development

Well, mostly. It is more about doing RTC (real time communications). And enabling you to do it over the web. And elsewhere. And not necessarily RTC.

WebRTC is quite powerful and versatile. It can be used virtually everywhere and it can be used for things other than VoIP or web.

When we do want to develop WebRTC for a web application, there are still differences – in the process, tools and infrastructure we will need to use.

Why is that?

Because real time media is different and tougher than most of the rest of the things you happen to be doing on the browser itself.

It boils down to this illustration (from last week):

So yes. WebRTC happens to run in the web browser. But it does a lot of things the way VoIP works (it is VoIP after all).

WebRTC dev != Web dev. And one of the critical parts is the servers we need to make it work. Join my free mini video WebRTC course that explains the server story of WebRTC.

Join the free server side WebRTC course

If you plan on doing anything with WebRTC besides a quick hello world page, then there’s lots of new things for you to learn if you’re coming from a web development background. Which brings me to the purpose of this article.

Here are 10 major differences between developing with WebRTC and web development:

#1 – WebRTC is P2P

Seriously. You can send voice, video and any other arbitrary data you wish directly from one browser to another. On a secure connection. Not going through any backend server (unless you need a relay – more on that in #6).

That triangle you see there? For VoIP that’s obvious. But for the web that’s magical. It opens up a lot of avenues for new types of services that are unrelated to VoIP – things like WebTorrent and Peer5; the ability to send direct private messages; low latency game controllers; the possibilities here are endless.

But what does this triangle mean exactly?

It means that you are not going to send your media through a web server. You are either going to send it directly between the browsers, or you are going to send it to a media server dedicated to this task.

This also means that a lot of the things you’ll need to keep track of and monitor don’t even get to your servers unless you do something about it to make it happen.
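
As a rough sketch of what that P2P-ness looks like in code: once the peer connection is up (signaling omitted here), arbitrary data flows browser to browser over a data channel without ever touching your web server. The channel name and payload below are made up for illustration:

  const pc = new RTCPeerConnection();
  const channel = pc.createDataChannel('game-controller');

  channel.onopen = () => channel.send(JSON.stringify({ x: 10, y: 42 }));
  channel.onmessage = (event) => console.log('got', event.data);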

#2 – It isn’t all Javascript and JSON

Yes. I know last time I said it is all Javascript.

But if what you know is limited to Javascript then life is going to be a world of pain for you with WebRTC.

Media servers for example are almost always developed using C/C++ or Java. If you’ll need to debug them (and the serious companies do that), then you’ll need to understand these languages as well.

The second part is more JSON and less Javascript related – there’s one part of WebRTC that is ugly as hell but working. That’s the SDP that is used in the offer-answer negotiation process.

Besides being hard to interpret (different people understand SDP differently, which later means they develop parsers and code for it differently), SDP is also hard to parse using Javascript. It isn’t built as a JSON blob, so the code to fetch or modify a field in SDP isn’t trivial (doable, but a pain).
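
To get a feel for it, here is the kind of string surgery SDP pushes you into – a small hedged example (the helper name is mine) that fetches a single field out of an offer’s SDP:

  // SDP is line-oriented text separated by CRLF, so even reading one field
  // means string parsing rather than the simple property access JSON would give you
  function getSdpField(sdp, prefix) {
    const line = sdp.split('\r\n').find((l) => l.startsWith(prefix));
    return line ? line.slice(prefix.length) : null;
  }

  // e.g. getSdpField(offer.sdp, 'a=ice-ufrag:')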

#3 – There’s This Thing Called UDP

I guess this is the start of the following points as well, so here we go.

Today, the web is built on top of TCP. It started with HTTP. Moved to Websockets (also on top of TCP). And now HTTP/2 (also TCP).

There are attempts to allow for UDP type of traffic – QUIC is an example of it. But that isn’t there yet. And for most web developers that’s just under the hood anyway.

With WebRTC, all media is sent over UDP as much as possible. It can work over TCP if needed (I sent you to #6 didn’t I?), but we try to refrain from it – you get better media quality with UDP.

The table above shows the differences between UDP and TCP. This lies at the heart of how media is sent. We use unreliable connections with best effort.

#4 – Compromise is the Name of the Game

That UDP thing? It adds unreliability into the mix. Which also means that what you send isn’t what you get. Coupled with the fact that codecs are resource hogs, we get into a game of compromise.

In VoIP (and WebRTC), almost any decision we make to improve things in one axis will end up costing us in another axis.

Want better compression? Lose quality.

Don’t want to lose quality? Use more CPU to compress.

Want to lower the latency? Lose quality (or invest more CPU).

On and on it goes.

While CPUs are getting better all the time, and available bandwidth seems to be getting higher as well, our demands of our media systems are growing just as fast. At times even a lot faster.

That ends up with the need to compromise.

All the time.

You’ll need to know and understand media and networking in order to be able to decide where to compromise and where to invest.

#5 – Best Effort is the Other Name

Here’s something I heard once in a call I had:

“We want our video quality to be a lot better than Skype and Hangouts”.

I am fine with such an approach.

But this is something I heard from:

  • 2 entrepreneurs with no experience or understanding of video compression
  • For a use case that needs to run in developing countries, with choppy cellular reception at best
  • And they assumed they will be able to do it all by themselves using WebRTC

It just doesn’t work.

WebRTC (and VoIP) are a best effort kind of a play.

You make do with what you get, trying to make the best of it.

This is why WebRTC tries to estimate the bandwidth available to it, and will then commence eating up all that available bandwidth to improve the video quality.

This is why when the network starts to act up (packet loss), WebRTC will reduce the bitrate it uses, and with it the media quality, in order to accommodate what is now available to it.

Sometimes these approaches work well. Other times not so well.

And yes. A lot of the end result will be reliant on how well you’ve designed and laid out your infrastructure for the service.
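
You can also put your own ceiling on that behavior. A hedged sketch, assuming an existing RTCPeerConnection named pc: capping the video sender’s bitrate through setParameters(), so the bandwidth estimator never climbs above what your design budgets for:

  const sender = pc.getSenders().find((s) => s.track && s.track.kind === 'video');
  const params = sender.getParameters();
  if (!params.encodings || !params.encodings.length) {
    params.encodings = [{}];                // some browser versions return this empty
  }
  params.encodings[0].maxBitrate = 500000;  // cap video at ~500kbps
  sender.setParameters(params).catch(console.error);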

#6 – NAT Traversal Rules Your Life

Networks have NATs and Firewalls. These are nothing new, but if you are a web developer, then most likely they never made life difficult for you.

That’s because in the “normal” web, the browser will reach out to the server to connect to it. And being the main concept of our current day web, NATs and Firewalls expect that and allow this to happen.

Peer to peer communications, direct across browsers, is how WebRTC operates. And with the use of UDP no less (again, something that isn’t usually done in the web browser)… these are things that firewalls and the IT personnel configuring them aren’t used to dealing with.

For WebRTC, this means the addition of STUN/TURN servers. Sometimes, you’ll hear the word ICE. ICE is an algorithm and not a server. ICE makes use of STUN and TURN. STUN and TURN are two protocols for NAT traversal, each using its own server. And usually, STUN and TURN servers are implemented in the same code and deployed as a single process.

WebRTC puts in a lot of effort to make sure its sessions get connected. But at the end of the day, even that isn’t always enough. There are times when sessions just can’t get connected – whoever configured the firewall made sure of it.
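
When a session does connect, it is worth knowing how it connected. A hedged sketch (the function name is mine) that digs into getStats() to check whether the selected candidate pair went through a TURN relay:

  async function usedTurnRelay(pc) {
    const stats = await pc.getStats();
    for (const report of stats.values()) {
      if (report.type === 'candidate-pair' && report.nominated && report.state === 'succeeded') {
        // look up the local candidate of the pair that actually carries the media
        const local = stats.get(report.localCandidateId);
        return !!local && local.candidateType === 'relay';
      }
    }
    return false;
  }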

#7 – Server Scaling is Ridiculous

Server scaling with WebRTC is slightly different than that of regular web.

There are two main reasons for that:

  1. The numbers are usually way smaller. While web servers can handle 5 digit connections or more, their WebRTC counterparts will often struggle with the higher end of 3 digits. There’s a considerable cost of hosting HD video and media server processing
  2. WebRTC requires statefulness. Severing a connection and restarting it will always be noticeable – a lot more than in most other web related use cases. This makes high availability, fault tolerance, upgrading and similar activities harder to manage with WebRTC

You’ll need to understand how each of the WebRTC servers work in order to understand how to scale it.

#8 – Bandwidth is Expensive

With web pages things are rather simple. The average web page size is growing year to year. We’ve got above 2.3MB in 2016. But that page is constructed out of different resources pulled from different servers. Some can be cached locally in the browser.

A 5 minute HD video at 2Mbps (not unheard of and rather common) will take up 75 MB during that 5 minutes.

If you are just doing 1:1 video calls with a 10% TURN relay factor, that can be quite taxing – running just 1,000 calls a day with an average of 5 minutes each will eat up 15 GB a day in your TURN server bandwidth costs. You probably want more calls a day and you want them running for longer periods of time as well.
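
The rough math behind that 15 GB figure, if you want to plug in your own numbers:

  const mbPerStream  = 2e6 * 5 * 60 / 8 / 1e6;  // 2Mbps for 5 minutes -> 75 MB one way
  const relayedCalls = 1000 * 0.10;             // 10% of 1,000 daily calls hit the TURN server
  const gbPerDay     = relayedCalls * mbPerStream * 2 / 1000;  // roughly two media streams per relayed call
  console.log(gbPerDay);                        // -> 15 GB of TURN traffic per day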

Using a media server for group calling or recording makes this even higher.

As an example, at testRTC we can end up with tests that run into the 100’s of GBs of data per test. Easily…

When you start to work out your business model, be sure to factor in your bandwidth costs.

#9 – Geography is Everything for Media Delivery

For the most part, and for most services, you can get away with running your service off a specific data center.

This website of mine is hosted somewhere in the US (I don’t even care where) and hooked up to CDN services that take care of the static files. It has never been an issue for me. And performance is reasonable.

When it comes to real time live media, which is where WebRTC comes in, this won’t always do.

Getting data from New York to Paris can easily take 100 milliseconds or more, and since one of the things we’re striving for is real time – we’d like to be able to reduce that as much as we can.

Which gets us to the illustration above. Imagine two people in Paris having a WebRTC conversation that gets relayed through a TURN server in New York. Not even mentioning the higher possibility of packet losses, there’s clearly a degradation in the quality of the call just by the added delay of this route taken.

WebRTC, even for a small scale service, may need a global deployment of its infrastructure servers.

#10 – Different Browsers Behave Differently

Well… you know this one.

As a web developer, I am sure you’ve bumped into browsers acting differently with your HTML and CSS. Just recently, I tried to use <button> outside of a form element, only to find out the link that I placed inside it got ignored by Firefox.

The same is true for WebRTC. The difference is that it is a lot easier to bump into and it messes things up in two different levels:

  1. The API behavior – not all browsers support the exact same set of APIs (WebRTC isn’t really an official standard specification yet – just a draft; and browser implementations mostly adhere to recent variants of that draft)
  2. The network behavior – WebRTC means you communicate between browsers. At times, you might not get a session connected properly from one browser to another if they are different. They process SDP differently, they may not support the same codecs, etc.

As time goes by, this should get resolved. Browser vendors will shift focus from adding features and running after the specification towards making sure things interoperate across browsers.

But until then, we as developers will need to run after the browsers and expect things to break from time to time.
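
One hedged way to soften the API-level differences is the webrtc-adapter shim, which papers over prefixes and behavior differences and tells you which browser you are actually running in:

  import adapter from 'webrtc-adapter';

  console.log(adapter.browserDetails.browser, adapter.browserDetails.version);

  if (!('RTCPeerConnection' in window)) {
    // no WebRTC at all - fall back or show an "unsupported browser" message
  }

It won’t help with the network-level differences, though – those you still need to test across browser pairs.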

#11 – You Know More Than You Think

The majority of WebRTC is related to VoIP. That’s because at the end of the day, it is a variant of VoIP (one of many). This means that VoIP developers have a huge head start on you when it comes to understanding WebRTC.

The problem for them is that they have a different education than you do. Someone taught them that a call has a caller and a callee. That you need to be able to put a call on hold. To transfer the call. To support blind transfer. Lots and lots of notions that are relevant to telephony but not necessarily to communications.

You aren’t “tainted” in this way. You don’t have to unlearn things – so that nagging part of an ego telling you how things are done with VoIP – it doesn’t exist. I had my share of training sessions where most of my time was spent on this unlearning part.

This means that in a way you already know one important thing with WebRTC – that there’s no right and wrong in how sessions are created – and you are free to experiment and break things with it before coming to a conclusion of how to use it.

That’s powerful.

What’s Next?

If you have web development background, then there’s much you need to learn about how VoIP is done in order to understand WebRTC better.

WebRTC looks simple when you start with it. Most web developers will complain after a day or two of how complex it is. What they don’t really understand is how much more complicated VoIP is without WebRTC. We’ve been given a very powerful and capable tool with WebRTC.

Need to warm up to WebRTC? Try my free WebRTC server side mini course.

And if you’re really serious, enroll in my Advanced WebRTC Architecture Course.

 

The post Why Developing With WebRTC is Different than Web Development? appeared first on BlogGeek.me.

Why Developing With WebRTC is Different than VoIP Development?

bloggeek - Mon, 09/04/2017 - 12:00

Water and oil?

Let’s start by saying this for starters:

WebRTC is VoIP

That said, it is different than VoIP in the most important of ways:

  1. In the ways entrepreneurs make use of it to bring their ideas to life
  2. In the ways developers wield it to build applications

Why is that?

Because WebRTC lends itself to two very different worlds, all running over the Internet: The World Wide Web. And VoIP.

And these two worlds? They don’t mix much. Besides the fact that they both run over IP, there’s not a lot of resemblance between them. Well, that and the fact that both SIP and HTTP have a 200 OK message.

Everyone is focused on the browser implementation of WebRTC. But what of the needed backend? Join my free mini video WebRTC course that explains the server story of WebRTC.

Join the free server side WebRTC course

If you ever developed anything in the world of VoIP, then you know how calls get connected. You’re all about ring tones and the many features that comprise a Class 5 softswitch. The truth of the matter is that this kind of knowledge can often be your undoing when it comes to WebRTC.

Here are 10 major differences between developing with WebRTC and developing with VoIP:

#1 – You are No Longer in Control

With VoIP, life was simple. All pieces of the solution were yours.

The server, the clients, whatever.

When something didn’t work, you’d go in, analyze it, fix the relevant piece of software, and be done with it.

WebRTC is different.

You’ve got this nagging thing known as the “browser”.

4 of them.

And they change. And update. A lot.

Here’s what happened in the past year with Chrome and Firefox:

A version every 6-8 weeks. For each of them.

And these versions? They tend to change how the browsers behave when it comes to WebRTC. These changes may cause services to falter.

These changes mean that:

  1. You are not in control over the whole software running your service
  2. You are not in control of when pieces of your deployment get upgraded (browsers will upgrade without you having a say in it)

VoIP doesn’t work this way.

You develop, integrate, deploy and then you decide when to upgrade or modify things. With WebRTC that isn’t the case any longer.

You must continuously test against future browser versions (beta, unstable, Canary and nightly should become part of your vocabulary). You need to have the means to easily and quickly upgrade a production service – at runtime. And be prepared to do it rather frequently.

#2 – Javascript is King

My pedigree comes from VoIP.

I am a VoIP developer.

I did development, project management, product management and then was the CTO of a business unit where we developed VoIP software SDKs that were used (and are still used) in many communication products.

I am a great developer. Really. One of the best I know. At least when it comes to coding in C.

VoIP was traditionally developed in C/C++ and Java.

With Javascript I know my way around, but by no means am I even an average developer. My guess is that a lot of VoIP engineers have a similar background to me.

WebRTC is all about Javascript.

In WebRTC, JavaScript is King

Yes. WebRTC has a Javascript API. But that’s half the story. Many of the backend systems written for use with WebRTC end up using Node.js. Which uses Javascript. (A minimal sketch of such a backend follows the list below.)

WebRTC isn’t limited to Javascript. There are systems written in C, Java, Python, C#, Erlang, Dart and even PHP that are used. There are .Net systems. On mobile, native apps use Objective C, Swift or Java in their implementations of client-side WebRTC SDKs.

But the majority? That’s Javascript.

Three main reasons I can see for it:

  1. Fashion. Node.js is fashionable and new. WebRTC is also new, so there’s a fit
  2. Asynchronous. The signaling in WebRTC needs to be snappy and interactive. It needs to have a backend that can fit nicely with its model of asynchronous interactions and interfaces. Node.js offers just that and makes it easier to think of signaling on the frontend and backend at the same time. Which leads us to the third and probably most important reason –
  3. Javascript. You use it in the frontend and backend. Easier for developers to use a single language for both. Easier to shift bits and pieces of code from one side to the other if and when needed
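
A minimal sketch of such a Node.js signaling backend, assuming the ws package and a made-up message format with join/offer/answer/candidate types. It does nothing WebRTC-specific – it just relays messages between the peers in a room (cleanup on disconnect is omitted for brevity):

  const WebSocket = require('ws');
  const wss = new WebSocket.Server({ port: 8080 });
  const rooms = new Map();   // roomId -> Set of sockets

  wss.on('connection', (socket) => {
    socket.on('message', (data) => {
      const msg = JSON.parse(data);
      if (msg.type === 'join') {
        if (!rooms.has(msg.room)) rooms.set(msg.room, new Set());
        rooms.get(msg.room).add(socket);
        socket.room = msg.room;
        return;
      }
      // relay offers, answers and ICE candidates to everyone else in the room
      for (const peer of rooms.get(socket.room) || []) {
        if (peer !== socket && peer.readyState === WebSocket.OPEN) {
          peer.send(JSON.stringify(msg));
        }
      }
    });
  });
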
#3 – A Big Island

VoIP is all about interoperability. A big happy family of vendors. All collaborating and cooperating. The idea is that if you purchase a phone from one vendor, you *should* be able to dial another vendor’s phone with it via a third vendor’s PBX. It works. Sometimes. And it requires a lot of effort in interoperability testing and tweaking. An ongoing arduous task. The end result though is a system where you end up testing a small set of vendors that are approved to work within a certain deployment.

VoIP and interoperability abhors the idea of islands. Different communication services that can’t connect to each other.

WebRTC is rather different. You no longer build one VoIP product or device that is designed to communicate with VoIP devices of other vendors. You build the whole shebang.

An island of sorts, but a rather big one. One where you can offer access through all browsers, operating systems and mobile devices.

You no longer care about interoperability with other vendors – just with interoperability of your service with the browsers you are relying on. It simplifies things some while complicating the whole issue of being in control (see #1 above).

#4 – It is Cloudy

It seems like VoIP was always meant to run in local deployments. There are a few cases where you see it deployed globally, but they aren’t many. Usually, there’s a geography that goes into the process.

This is probably rooted in the origins of VoIP – as a replacement / digital copy of what you did in telecom before. It also relates to the fact that the world was bigger in the past – the cloud as we know it today (AWS and the many other cloud providers that followed) didn’t really exist.

Skype is said to have succeeded as much as it did because it had a great speech codec at the time that was error resilient (it had FEC built in at a time when companies were still bickering in the IETF and ITU standards bodies about adding FEC at the RTP layer). It also had NAT traversal that just worked (again, when STUN and TURN were just ideas). The rest of the world? We were all happy enough to instruct customers to install their gatekeepers and B2BUAs in the DMZ.

Since then VoIP has evolved a lot. It turned towards the SBC (more on this in #10).

WebRTC has bigger challenges and requirements ahead of it.

For the most part, and with most deployments of WebRTC, there are three things that almost always are apparent:

  1. Deployments are global. You never know from where the users will be joining, or what type of network they’ll be on
  2. Networks are unmanaged. This is similar to the above. You have zero control over the networks, but your users will still complain about the quality (just check out any of Fippo’s analysis posts)
  3. We deploy them on AWS. All the time. On virtual machines. Inside Docker containers. Layers and layers of abstraction. For a real time service. And it seems to work
#5 – Bring Your Own Signaling

VoIP is easy. It is standardized. Be it SIP, H.323, XMPP or whatever you bring to the table. You are meant to use a signaling protocol. Something someone else has thought of in the far dark rooms in some standards organization. It is meant to keep you safe. To support the notion and model of interoperability. To allow for vendor agnostic deployments.

WebRTC did away with all this, opting to not have a signaling protocol at all out of the box.

Some complain about it (mostly VoIP people). I’ve written about it some 4 years ago – about the death of signaling.

With WebRTC you make the decision on what signaling protocol you will be using. You can decide to go for a standards based solution such as SIP over WebSocket, XMPP over BOSH or WebSocket – or you can use a newly created signaling protocol invented only for your specific scenario – or use whatever you already have in your app to signal people.

As with anything in WebRTC, it opens up a few immediate questions:

  1. Should you use a standards based signaling protocol or a proprietary one?
  2. Should you build it on your own from scratch or use a third party framework for it?
  3. Should you host and manage it on your own or use it as a service instead?

All answers are now valid.

#6 – Encryption and Privacy are MANDATORY

With VoIP, encryption was always optional. Seldom used.

I remember going to these interoperability events as a developer. The tests that almost never really succeeded were the ones that used security. Why? You got to them last during the week long event, and nobody got that part quite the same as others.

That has definitely changed over the years, but the notion of using encryption hasn’t. VoIP products are shipped to customers and deployed without encryption. The encryption piece is an optional configuration that many skip. Encryption makes it hard to use Wireshark to understand what goes on in the network, it takes up CPU (not much anymore, but conceptually it still does), it complicates things.

WebRTC, on the other hand, has only encryption configured into it. No way to use it with clear RTP. Even if you really, really want to. Even if you swear all browsers and their communications run inside a secure network. Nope. Can’t take security out of WebRTC.

#7 – If it is New, WebRTC Will be Using it

When WebRTC came out, it made use of the latest most recent RFCs that were VoIP related in the media domain.

Ability to bundle RTP and RTCP on the same stream? Check.

Ability to multiplex audio and video on the same stream? Check.

Ability to send FIR commands over RTCP and not signaling? Check.

Ability to negotiate keys over DTLS-SRTP instead of SDES? Check.

There are many other examples for it.

And in many cases, WebRTC went to the extreme of banning the other, more common, older mechanisms of doing things.

VoIP was always made with options in mind. You have at least 10 different ways in the standard to do something. And all are acceptable.

WebRTC takes what makes sense to it, throwing the rest out the window, leaving the standard slightly cleaner in the end of it.

Just recently, a decision was made to stop supporting non-multiplexed streams. This forced Asterisk and all of its users to upgrade.

VoIP and SIP were never really that important to WebRTC. Live with it.

#8 – Identity Management and Authorization are Tricky

There’s no identity management in WebRTC.

There’s also no clear authorization model to be heard of.

Here’s a simple one:

With SIP, the way you handle users is giving them usernames and passwords.

The user keys that into the client and this gets used to sign in to the service.

With regular apps, it is easy to set that username/password as your TURN credentials as well. But doing it with WebRTC inside a browser opens up a world of pain with the potential of harvesting that information to piggyback on your TURN servers, costing you money.

So instead you end up using ephemeral passwords in TURN with WebRTC. Here’s an explanation how to do just that.
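
As a hedged sketch of that scheme (this is the time-limited credentials mechanism supported by TURN servers such as coturn via a shared secret – details may differ in your setup): the username embeds an expiry timestamp and the password is an HMAC over it, so leaked credentials stop working quickly.

  const crypto = require('crypto');

  function turnCredentials(userId, sharedSecret, ttlSeconds = 3600) {
    // username = expiry timestamp : user id
    const username = Math.floor(Date.now() / 1000) + ttlSeconds + ':' + userId;
    // password = base64(HMAC-SHA1(username)) keyed with the secret shared with the TURN server
    const credential = crypto.createHmac('sha1', sharedSecret)
                             .update(username)
                             .digest('base64');
    return { username, credential };  // hand these to the browser for its iceServers entry
  }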

In many other cases, you simply don’t care. If the user already logged into the page, and identified and authenticated himself in front of your service, then why have an additional set of credentials for him? You can just as easily piggyback a mechanism such as Facebook connect, Twitter, LinkedIn or Google accounts to get the authentication part going for you.

#9 – Route. Don’t Mix

If you come from VoIP, then you know that for more than two participants in a call you mix the media. You do it usually for audio, but also for the video. That’s just how things are (were) done.

But for WebRTC, routing media through an SFU is how you do things.

It makes the most sense because of a multitude of reasons:

  1. For many use cases, this is the only thing that can work when it comes to meeting your business model. It strikes that balance between usability and costs
  2. This in turn, brings a lot of developers and researchers to this domain, improving media routing and SFU related technologies, making it even better as time goes by
  3. In WebRTC, the client belongs to the server – the server sends the client as HTML/JS code. With the added flexibility of getting multiple media streams, comes an added flexibility to the UI’s look and feel as well as behavior

There are those who are still resistant to the routing model. When these people have a VoIP pedigree, they’ll lean towards the mixing model of an MCU, calling it superior. It will usually cost 10 times or more to deploy an MCU instead of an SFU.

Be sure to know and understand SFUs if you plan on using WebRTC.

#10 – SBCs are Useless

Or at least not mandatory anymore.

Every. SBC. vendor. out. there. is. adding. WebRTC.

And I get it. If you’re building an SBC – a Session Border Controller – then you should also make sure it supports WebRTC so all these pesky people looking to get access through the browser can actually get it.

An SBC was an abomination added to VoIP. It was a necessary evil.

It served the purpose of sitting in the DMZ, making sure your internal network is protected against malicious VoIP access. A firewall for VoIP traffic.

Later people bolted on that SBC the ability to handle interoperability, because different vendor products never really worked well with one another (we’ve already seen that in #3). Then transcoding was added, because we could. And then other functions.

And at some point, it was just obvious to place SBCs in VoIP infrastructure. Well… WebRTC doesn’t need an SBC.

VoIP needs an SBC that handles WebRTC. But if you’re planning on doing a WebRTC based application that doesn’t have much of VoIP in it, you can skip the SBC.

#11 – Ecosystem Created by the API and Not the Specification

Did I say 10 differences? So here’s a bonus difference.

Ecosystems in VoIP are created around the network protocol.

You get people to understand the standard specification of the network protocol, and from there you build products.

In WebRTC, the center is not the network protocol (yes, it is important and everything) – it is the WebRTC APIs. The ones implemented in the browsers that enable you to build a client on top. One that theoretically should run across all browsers.

That’s a huge distinction.

Many of the developers in WebRTC are clueless about the network, which is a shame.  On the other hand, many VoIP developers think they understand the network but fail to understand the nuanced differences between how the network works in VoIP and in WebRTC.

What’s Next?

If you have VoIP background, then there are things for you to learn when shifting your focus towards WebRTC. And you need to come at it with an open mind.

WebRTC seems very similar to VoIP – and it is – because it is VoIP. But it is also very different. In the ways it is designed, thought of and used.

Knowing VoIP, you should have a head start on others. But only if you grok the differences.

Need to warm up to WebRTC? Try my free WebRTC server side mini course.

And if you’re really serious, enroll in my Advanced WebRTC Architecture Course.

 

The post Why Developing With WebRTC is Different than VoIP Development? appeared first on BlogGeek.me.

Kamailio v5.0.3 Released

miconda - Fri, 09/01/2017 - 21:00
Kamailio SIP Server v5.0.3 stable is out – a minor release including fixes in code and documentation since v5.0.2. The configuration file and database schema compatibility is preserved, which means you don’t have to change anything to update. Kamailio v5.0.3 is based on the latest version of GIT branch 5.0. We recommend those running previous 5.0.x or older versions to upgrade. There is no change that has to be done to the configuration file or database structure compared with the previous release of the v5.0 branch.

Resources for Kamailio version 5.0.3

Source tarballs are available at:

Detailed changelog:

Download via GIT:

# git clone https://github.com/kamailio/kamailio kamailio
# cd kamailio
# git checkout -b 5.0 origin/5.0

Relevant notes, binaries and packages will be uploaded at:

Modules’ documentation:

What is new in the 5.0.x release series is summarized in the announcement of v5.0.0:

Thanks for flying Kamailio!
