News from Industry

Who Needs WebSockets in an HTTP/2 World?

bloggeek - Tue, 07/28/2015 - 12:00

I don’t know the answer to this one…

I attended an interesting meetup last month. Sergei Koren, Product Architect at LivePerson, explained HTTP/2 and what it means for those deploying services. The video is available online:

One thing that really interests me is how these various transports are going to be used. We essentially now have both HTTP/2 and WebSocket capable of pretty much the same things:

Feature               HTTP/2                           WebSocket
Headers               Binary + compression             Binary, lightweight
Content               Mostly text + compression        Binary or text
Multiplexed sessions  Supported                        Supported
Direction             Client to server & server push   Bidirectional

What HTTP/2 lacks in binary content, it provides in compression.

If you need to send messages back and forth between your server and its browser clients, you’ve probably been considering HTTP based technologies – XHR, SSE, etc. A recent addition was WebSocket. While the other alternatives are mostly hacks and workarounds on top of HTTP, WebSocket essentially hijacks an HTTP connection and transforms it into a WebSocket connection – something defined specifically for the task of sending messages back and forth. That makes WebSocket optimized for the task and a lot more scalable than the alternatives.
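That “hijack” is literally just an HTTP handshake: per RFC 6455, the client sends an Upgrade: websocket request with a random key, and the server proves it speaks the protocol by hashing that key together with a fixed GUID. A minimal sketch of the server-side computation:

```python
import base64
import hashlib

# RFC 6455: the server concatenates the client's Sec-WebSocket-Key with this
# fixed GUID, SHA-1 hashes the result, and base64-encodes the digest.
WS_MAGIC_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    digest = hashlib.sha1((sec_websocket_key + WS_MAGIC_GUID).encode()).digest()
    return base64.b64encode(digest).decode()

# The example key/accept pair from RFC 6455 itself:
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))
# → s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After this exchange the TCP connection stops being HTTP entirely and carries lightweight WebSocket frames instead.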

With HTTP/2, most of the restrictions that existed in HTTP that required these hacks will be gone. This opens up the opportunity for some to skip WebSockets and stay on board with HTTP based signaling.

Last year I wrote about the need for WebSockets for realtime and WebRTC use cases. I am now wondering if that is still true with HTTP/2.

Why is it important?
  • BOSH, Comet, XHR, SSE – these hacks can now be considered legacy. When you try to build a new service, you should think hard before adopting them
  • WebSocket is what people use today. HTTP/2 is an interesting alternative
  • When architecting a solution or picking a vendor, my suggestion would be to understand what transports they use today and what’s in their short-term and mid-term roadmap. These will end up affecting the performance of your service

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post Who Needs WebSockets in an HTTP/2 World? appeared first on BlogGeek.me.

New WebRTC WG Charter

webrtc.is - Mon, 07/27/2015 - 21:36

The new charter for the WebRTC Working Group has been approved. Current members will need to re-join. From the WebRTC WG mailing list…

Hi all,

Great news, the new W3C WebRTC Working Group charter [1] has been officially approved by the W3C Director [2].

The revised charter adds a deliverable for the next version of WebRTC, has an updated list of deliverables based on the work started under the previous charter, clarifies its decision policy, and extends the group until March 2018.

The charter of this Working Group includes a new deliverable that requires W3C Patent Policy licensing commitments from all Participants.

Consequently, all Participants must join or re-join the group, which involves agreeing to participate under the terms of the revised charter and the W3C Patent Policy. Current Participants may continue to attend meetings (teleconferences and face-to-face meetings) for 45 days after this announcement, even if they have not yet re-joined the group. After 45 days (ie. September 10, 2015), ongoing participation (including meeting attendance and voting) is only permitted for those who have re-joined the group.

Use this form to (re)join:
https://www.w3.org/2004/01/pp-impl/47318/join

Instructions to join the group are available at:
http://www.w3.org/2004/01/pp-impl/47318/instructions

Thanks,
Vivien on behalf of the WebRTC WG Chairs and Staff contacts

[1] http://www.w3.org/2015/07/webrtc-charter.html
[2] https://lists.w3.org/Archives/Member/w3c-ac-members/2015JulSep/0024.html


WebRTC Basics: How (and Why) WebRTC Uses your Browser’s IP Address

bloggeek - Mon, 07/27/2015 - 12:00

To reach out to you.

I’ve been asked recently to write a few more posts on the topic of WebRTC basics – explaining how it works. This is one of those posts.

There’s been a recent frenzy around the NY Times’ use of WebRTC. The fraud detection mechanism for the ads there used WebRTC to find local addresses and to determine if the user is real or a bot. Being a cat and mouse game over ad money means this will continue with every piece of arsenal both sides have at their disposal, and WebRTC plays an interesting role in it. The question was raised though – why does WebRTC need the browser’s IP address to begin with? What does it use it for?

To answer this question, we first need to define how the web normally operates (that is, before WebRTC came to be).

The illustration above explains it all. There’s a web server somewhere in the cloud. You reach it by knowing its IP address, but more often than not you reach it by knowing its domain name and obtaining its IP address from that domain name. The browser then goes on to send its requests to the server and all is good in the world.

Now, assume this is a social network of sorts, and one user wants to interact with another. The one and only way to achieve that with browsers is by having the web server proxy all of these messages – whatever is being sent from A to B is routed through the web server. This is true even if the web server has no real wish to store the messages or even know about them.

WebRTC allows working differently. It uses peer-to-peer technology, also known as P2P.

The illustration above is not new to VoIP developers, but it has a very important difference from how the web worked until the introduction of WebRTC. That line running directly between the two web browsers? That’s the first time that a web browser using HTML could communicate with another web browser directly, without needing to go through a web server.

This is what makes all the difference in the need for IP addresses.

When you communicate with a web server, your browser is the one initiating the communication. It sends a request to the server, which will then respond through that same connection your browser creates. So there’s no real need for your browser to announce its IP address in any way. But when one browser needs to send messages to another – how can it do that without an IP address?

So IP addresses need to be exchanged between browsers. The web server in the illustration does pass messages between browsers. These messages contain SDP, which among other things contains IP addresses to use for the exchange of data directly between the browsers in the future.
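Those addresses travel inside the SDP’s ICE candidate lines. A rough sketch of pulling them out – the SDP fragment below is invented for illustration, and real SDP carries many more fields:

```python
import re

# Hypothetical SDP fragment; a=candidate lines are where ICE carries addresses.
sdp = """v=0
o=- 46117317 2 IN IP4 127.0.0.1
a=candidate:1 1 udp 2122260223 192.168.1.17 54321 typ host
a=candidate:2 1 udp 1686052607 203.0.113.9 54321 typ srflx raddr 192.168.1.17 rport 54321
"""

def candidate_ips(sdp_text):
    """Collect (candidate type, connection address) pairs from a=candidate lines."""
    ips = []
    for line in sdp_text.splitlines():
        match = re.match(r"a=candidate:\S+ \d+ \S+ \d+ (\S+) \d+ typ (\S+)", line)
        if match:
            ips.append((match.group(2), match.group(1)))
    return ips

print(candidate_ips(sdp))
# → [('host', '192.168.1.17'), ('srflx', '203.0.113.9')]
```

The `host` candidate is the browser’s local address – exactly the kind of address the fraud detection script harvested.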

Why do we need P2P? Can’t we just go through a server?

Sure we can go through a server. In fact, a lot of use cases will end up using a server for various needs – things like recording the session, multiparty sessions, or connecting to other networks necessitate the use of a server.

But in many cases you may want to skip that server part:

  • Voice and video means lots of bandwidth. Placing the burden on the server means the service will end up costing more
  • Voice and video means lots of CPU power. Placing the burden on the server means the service will end up costing more
  • Routing voice and video through the server means latency and more chance of packet losses, which will degrade the media quality
  • Privacy concerns, as when we send media through a server, it is privy to the information or at the very least to the fact that communication took place

So there are times when we want the media or our messages to go peer-to-peer and not through a server. And for that we can use WebRTC, but we need to exchange IP addresses across browsers to make it happen.

Now, this exchange may not always translate into two web browsers communicating directly – we may still end up relaying messages and media. If you want to learn more about it, then check out the introduction to NATs and Firewalls on webrtcHacks.

 

Kranky and I are planning the next Kranky Geek in San Francisco sometime during the fall. Interested in speaking? Just ping me through my contact page.

The post WebRTC Basics: How (and Why) WebRTC Uses your Browser’s IP Address appeared first on BlogGeek.me.

Will Patents Kill H.265 or Will H.265’s Patents Kill WebRTC?

bloggeek - Thu, 07/23/2015 - 12:00

To H.265 (=HEVC) or not to H.265? That is the question. And the answer will be determined by the browser vendors.

I gave a keynote at a UC event here in Israel last week. I really enjoyed it. One of the other speakers made it a point to state that their new top of the line telepresence system now supports… H.265. And 4K. I was underwhelmed.

H.265 is the latest and greatest in video compression. Unless you count VP9. I’ve written about these codecs before.

If you think about WebRTC in 2016 or even 2017, you need to think beyond the current video codecs – H.264 and VP8. This is important, because you need to decide how much to invest in the media side of your service, and what implications these new codecs will bring to your architecture and development efforts.

I think H.265 is going to have a hard time in the market, and not just because VP9 is already out there, streamed over YouTube to most Chrome and Firefox browsers. That will be the case due to patents.

In March this year, MPEG-LA, the good folks counting money from H.264 patents, announced a new patent pool for HEVC (=H.265). Two interesting posts to read about this are Jan Ozer‘s and Faultline‘s. Some things to note:

  • There currently are 27 patent holders
  • Over 500 essential patents are in the pool
  • Not everyone with patents around H.265 has joined the pool, so licensing H.265 may end up being a nightmare
  • Google and Microsoft are missing from the patent pool
  • Also missing are video conferencing vendors: Polycom, Avaya and Cisco
  • Unit cost for encoder or decoder is $0.20/unit
  • There’s an annual cap of $25M

What does that mean to WebRTC?

  • Internet users are estimated at above 3 billion people, and Firefox has an estimated market share of around 12%. With over 300 million Firefox users, that places Mozilla way above the cap. Can Mozilla pay $25M a year to get H.265? Probably not
  • It also means every successful browser vendor will need to shell out $25M a year to MPEG-LA. I can’t see this happening any time soon
  • Google has their own VP9, probably with a slew of relevant patents associated with it. These will be used in the upcoming battle with H.265 and the MPEG-LA I assume
  • Microsoft not joining… not sure what that means, but it can’t be good. Microsoft might just end up adopting VP9 and going with Google here, something that might actually look reasonable
  • Apple being Apple, if they decide to support WebRTC (and that’s still a big if in 2015 and 2016), they won’t go with the VPx side of the house. They will go with H.265 – they are part of that patent pool
  • Cisco isn’t part of this pool. I don’t see them shelling out $25M a year on top of the estimated $6M they are already “contributing” for OpenH264 towards MPEG-LA
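The back-of-the-envelope math above is easy to check against the pool’s published terms ($0.20 per encoder/decoder unit, $25M annual cap). The Firefox install base below is a rough assumption derived from the user and market-share estimates, not an official figure:

```python
def annual_hevc_royalty(units, per_unit=0.20, cap=25_000_000):
    """MPEG-LA HEVC pool terms: $0.20 per encoder/decoder unit, capped at $25M/year."""
    return min(units * per_unit, cap)

# ~3 billion internet users x ~12% Firefox market share => ~360M browsers
firefox_units = int(3_000_000_000 * 0.12)
print(annual_hevc_royalty(firefox_units))
# → 25000000.0 (the cap kicks in at 125M units)
```

Any vendor shipping more than 125 million units a year hits the cap, which is why every major browser vendor faces the same $25M question.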

This is good news for Google and VP9, which is the competing video technology.

When we get to the WebRTC wars around H.265 and VP9, there will be more companies in the VP9 camp. The patents and hassles around H.265 will not make things easy:

  • If WebRTC votes for VP9, it doesn’t bode well for H.265
    • WebRTC is the largest deployment of video technology already
    • Deciding to ignore it as a video codec isn’t a good thing to do
  • If WebRTC votes for H.265, unlikely as that may seem, it may well kill standards based high quality video support across browsers in WebRTC
    • Most browsers will probably prefer ignoring it and go with VP9
    • Handsets might go with H.265 due to a political push by 3GPP (a large portion of the patent owners in H.265 are telecom operators and their vendors)
    • This disparity between browsers and handsets won’t be good for the market or for WebRTC

The codec wars are not behind us. Interesting times ahead. Better be prepared.

 


The post Will Patents Kill H.265 or Will H.265’s Patents Kill WebRTC? appeared first on BlogGeek.me.

Announcing the ClueCon Coder Games!

FreeSWITCH - Wed, 07/22/2015 - 03:09
So You Think You Can Code?

You’ve seen the presentations, you’ve asked your questions, you have the resources, now it is your time to shine by using the sponsor APIs to create something exciting! We want to see what you can do! Bonus points for each API you can incorporate! Go check out the APIs now to get a head start on the competition and get those creative juices flowing! You have less than two weeks to prepare!

Sponsor APIs: FreeSWITCH, Tropo, Kandy, Twilio, Plivo, and more…

IPv6 Round Table

IPv6 and why you should deploy it ASAP: John Brzozowski, Fellow and Chief Architect, IPv6 at Comcast; Bill Sandiford, President of CNOC and member of the board at ARIN.

Flowroute – Jeopardy

Think you know about SIP? Do you know enough to beat the competition? Flowroute is hosting a SIP themed game of Jeopardy! Put your brain to the test and come see how much you really know!

DTMF-u

Do you like all things games? Well, then this is the game for you! But there is a twist! Before you can win the game you must build it! Using your choice of language or API you must build a game! There will be three categories of DTMF-u: tic-tac-toe, DTMF pattern recognition, or freestyle. You could build something that plays a random DTMF sequence, receives player input, and then either continues or fails the player. Or maybe a WebRTC based game of tic-tac-toe? Or surprise us! Have fun with it! Creative ways to fail a player may give you bonus points. The top three games will be played by everyone and the winner of each will take home a prize! All gaming bots will be screened via a Turing test to ensure no unintended apocalyptic consequences.

Show and Tell

Alright, now is your chance! You have been playing with the code all day and this is your chance to show off! We want to know what you’ve done and how you did it. Use your creativity and skills as a programmer to impress the judges and win a prize! This is a no holds barred all out free for all! Any language doing anything! Knock our socks off and take home a fabulous prize and a year’s supply of bragging rights!

Raffle Grand Prize!

The grand prize is a laser engraved commemorative FreeSWITCH 1.6 Edition dual-core 13″ Retina MacBook Pro!

 

Is Microsoft Edge Going to be the Best Browser Around?

bloggeek - Tue, 07/21/2015 - 12:00

The newest game in town.

Apple’s Safari. Haven’t used it so can’t say anything. Just that most people I know are really comfortable using Chrome on Macs.

Chrome? Word around is that it is bloated and kills your CPU. I know. On a machine with 4GB of memory, you need to switch and use Firefox instead. Otherwise, the machine won’t survive the default tabs I have open.

Firefox? Hmm. Some would say that their Hello service is bloatware. I don’t really have an opinion. I am fine with using Firefox, but I prefer Chrome. No specific reason.

From a recent blog post from Microsoft, it seems like Microsoft Edge is faster than Chrome:

In this build, Microsoft Edge is even better and is beating Chrome and Safari on their own JavaScript benchmarks:

  • On WebKit Sunspider, Edge is 112% faster than Chrome
  • On Google Octane, Edge is 11% faster than Chrome
  • On Apple JetStream, Edge is 37% faster than Chrome

Coming from Microsoft’s dev team, I wouldn’t believe it. Not immediately. Others have slightly different results:

Here’s the rundown (click on an individual test to see the nitty-gritty details):

Some already want to switch from Chrome to Edge.

Edge is even showing signs of WebRTC support, so by year end, who knows? I might be using it regularly as well.

Edge is the new shiny browser.

Firefox is old news. Search Google for Firefox redesign – they’ve had a major one every year. Next in line is their UI framework for extensions, as far as I can tell.

Safari is based on WebKit. WebKit was ditched by Google so Chrome could be developed faster. As such, Chrome is built on the ashes of WebKit.

Internet Explorer anyone?

Edge started from a clean slate. A design from 2014, where developers thought of how to build a browser, as opposed to teams doing that before smartphones, responsive design or life without Flash.

Can Edge be the best next thing? A real threat to Chrome on Windows devices? Yes.


The post Is Microsoft Edge Going to be the Best Browser Around? appeared first on BlogGeek.me.

FreeSWITCH Week in Review (Master Branch) July 11th-July 17th

FreeSWITCH - Tue, 07/21/2015 - 01:18

Hello, again. This past week in the FreeSWITCH master branch we had 43 commits. We had a number of cool new features this week, including: added functionality for capturing screenshots from both legs to uuid_write_png, the addition of new multi-canvas and telepresence features in mod_conference, the addition of the vmute member flag to mod_conference, and an API for removing an active ladspa effect on a channel.

Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.

New features that were added:

  • FS-7769 [mod_conference] Add new multi-canvas and telepresence features
  • FS-7847 [mod_conference] Add layers that do not match the aspect ratio of the conference by using the new hscale layer param for horizontal scale, add a zoom=true param to crop the layer instead of letterboxing, add a grid-zoom layout group that demonstrates these layouts, and fix logo ratios and add borders too.
  • FS-7813 [mod_conference] Add vmute member flag.
  • FS-7846 [mod_dptools] Add eavesdrop_whisper_aleg=true and eavesdrop_whisper_bleg=true channel variables to allow you to start eavesdrop in whisper mode of specific call leg
  • FS-7760 [mod_sofia] Revise channel fetch on nightmare transfer and add dial-prefix and absolute-dial-string to the nightmare xml
  • FS-7829 [mod_opus] Add sprop-stereo fmtp param to specify if a sender is likely to send stereo or not so the receiver can safely downmix to mono to avoid wasting receiver resources
  • FS-7830 [mod_opus] Added use-dtx param in config file (enables DTX on the encoder, announces in fmtp)
  • FS-7824 [mod_png] Add functionality for capturing screenshots from both legs to uuid_write_png
  • FS-7549 [mod_ladspa] Added an API for removing an active ladspa effect on a channel. For conformance reasons, the uuid_ladspa command now accepts ‘stop’ and ‘start’, while the previous functionality (without any verb) which will simply add ladspa remains intact.

Improvements in build system, cross platform support, and packaging:

  • FS-7845 [mod_conference] Break up mod_conference into multiple source files to improve build performance
  • FS-7769 [mod_conference] Fixed a build issue
  • FS-7820 Fix build system typo. Don’t assign the same variable twice.
  • FS-7043 Fixed apr1 unresolved symbols in libfreeswitch.so.1.0.0
  • FS-7130 Make /run/freeswitch persistent in the Debian packages, so it will start under systemd

The following bugs were squashed:

  • FS-7849 [verto] Remove extra div breaking full screen in html
  • FS-7832 [mod_opus] Fixes when comparing local and remote fmtp params
  • FS-7731 [mod_xml_cdr] Fixed a curl default connection timeout
  • FS-7844 Fix packet loss fraction when calculating loss average

Kamailio v4.3.1 Released

miconda - Mon, 07/20/2015 - 23:10
Kamailio SIP Server v4.3.1 stable is out – a minor release including fixes in code and documentation since v4.3.0 – configuration file and database compatibility is preserved.

Kamailio (former OpenSER) v4.3.1 is based on the latest version of GIT branch 4.3, therefore those running previous 4.3.x versions are advised to upgrade. There is no change that has to be done to the configuration file or database structure compared with older v4.3.x.

Resources for Kamailio version 4.3.1

Source tarballs are available at:

Detailed changelog:

Download via GIT:

# git clone git://git.kamailio.org/kamailio kamailio
# cd kamailio
# git checkout -b 4.3 origin/4.3

Binaries and packages will be uploaded at:

Modules’ documentation:

What is new in the 4.3.x release series is summarized in the announcement of v4.3.0:

Now That Flash and Plugins are out the Door, What’s Holding you from Adopting WebRTC?

bloggeek - Mon, 07/20/2015 - 12:00

All routes are leading towards WebRTC.

Somehow, people are still complaining about adoption of WebRTC in browsers instead of checking their alternatives.

Before WebRTC came to our lives, we had pretty much 3 ways of getting voice and video calling into our machines:

  1. Build an application and have users install it on their PCs
  2. Use Flash to have it all inside the browser
  3. Develop a plugin for the service and have users install it on their browsers

We’re now in 2015, and 3 (again that number) distinct things have changed:

  1. On our PCs we are less tolerant to installing “stuff”
    • As more and more services migrate towards the cloud, so are our habits of using browsers as our window to the world instead of installed software
    • Chromebooks are becoming popular in some areas, so installing software is close to impossible in them
  2. Plugins are dying. Microsoft is banning plugins in Edge, joining Google’s Chrome announcement on the same topic
  3. Flash is being thrown out the window, which is what I want to focus on here

There has been a lot of recent publicity around a new round of zero-day exploits and vulnerabilities in Flash. It started with a group called Hacking Team being hacked, and their techniques exposed. They used a few Flash vulnerabilities among other mechanisms. While Adobe is actively fixing these issues, some decided to vocalize their discontent with Flash:

Facebook’s Chief Security Officer wants Adobe to declare an end-of-life date for Flash.

It is time for Adobe to announce the end-of-life date for Flash and to ask the browsers to set killbits on the same day.

— Alex Stamos (@alexstamos) July 12, 2015

Mozilla decided to ban Flash from its browser until the recent known vulnerabilities are patched.

Don’t get me wrong here. Flash will continue being with us for a long time. Browsers will block Flash and then re-enable it, dealing with continuing waves of vulnerabilities that will be found. But the question then becomes – why should you be using it any longer?

  • You can acquire camera and microphone using WebRTC today, so no need for Flash
  • You can show videos using HTML5 and MPEG-DASH, so no need for Flash
  • You can use WebGL and a slew of other web technologies to build interactivity into sites, so no need for Flash
  • You can run voice and video calls at a higher quality than what Flash ever could with WebRTC
  • And you can do all of the above within environments that are superior to Flash in their architecture, quality and security

Without Flash and Plugin support in your future, why would you NOT use WebRTC for your next service?

 


The post Now That Flash and Plugins are out the Door, What’s Holding you from Adopting WebRTC? appeared first on BlogGeek.me.

Wiresharking Wire

webrtchacks - Thu, 07/16/2015 - 21:07

This is the next decode and analysis in Philipp Hancke’s Blackbox Exploration series conducted by &yet in collaboration with Google. Please see our previous posts covering WhatsApp, Facebook Messenger and FaceTime for more details on these services and this series. {“editor”: “chad“}

Wire is an attempt to reimagine communications for the mobile age. It is a messaging app available for Android, iOS, Mac, and now web, that supports audio calls, group messaging and picture sharing. One of its often quoted features is the elegant design. As usual, this report will focus on the low level VoIP aspects, and leave the design aspects up for the users to judge.

As part of the series of deconstructions, the full analysis is available for download here, including the wireshark dumps.

Half a year after launching, the Wire Android app has been downloaded between 100k and 500k times. They also recently launched a web version, powered by WebRTC. Based on this, Wire seems to be stuck with what Dan York calls the directory dilemma.

What makes Wire more interesting from a technical point of view is that they’re strong proponents of the Opus codec for audio calls. Maybe there is something to learn here…

The Wire blog explains some of the problems that they are facing in creating a good audio experience on mobile and wifi networks:

The WiFi and mobile networks we all use are “best effort” — they offer no quality of service guarantees. Devices and apps are all competing for bandwidth. Therefore, real-time communications apps need to be adaptive. Network adaptation means working around parameters such as variable throughput, latency, and variable latency, known as jitter. To do this, we need to measure the variations and adjust to them in as close to real-time as possible.
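The “measure the variations” part is standardized for RTP: RFC 3550 defines a running interarrival jitter estimate that receivers compute and feed back to senders. A simplified sketch (transit times in milliseconds are invented for illustration):

```python
def interarrival_jitter(transit_times):
    """RFC 3550 interarrival jitter: a smoothed estimate of transit-time variation.
    transit_times = (arrival time - RTP timestamp) per packet, here in ms."""
    jitter = 0.0
    for prev, curr in zip(transit_times, transit_times[1:]):
        d = abs(curr - prev)
        jitter += (d - jitter) / 16  # 1/16 gain factor, per the RFC
    return jitter

# Steady transit (no jitter) vs. a network with variable latency:
print(interarrival_jitter([50, 50, 50, 50]))             # → 0.0
print(round(interarrival_jitter([50, 70, 45, 90]), 2))   # → 5.38
```

An adaptive app watches this estimate climb and reacts – growing its jitter buffer, or asking the codec (Opus, in Wire’s case) to trade bitrate for resilience.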

Given Facebook Messenger’s preference of ISAC over Opus, the question which led to investigating Wire was whether they could show how to successfully use Opus on mobile.

Results

The blog post mentioned above also describes the Wire stack as “a derivate of WebRTC and the IETF standardized Opus codec”. It’s not quite clear what exactly “derivate of WebRTC” means. What we found when looking at Wire, in comparison to the other apps reviewed, was a more “out of the box” WebRTC app, using the protocols as defined in the standards body.

Comparison with WebRTC

Feature       WebRTC/RTCWeb Specifications   Wire
SDES          MUST NOT offer SDES            does not offer SDES
ICE           RFC 5245                       RFC 5245
TURN usage    used as last resort            used as last resort
Audio codec   Opus or G.711                  Opus
Video codec   H.264 or VP8                   none (yet?)

Quality of experience

Audio quality did turn out to be top notch, as our unscientific tests on various networks showed.
Testing on simulated 2G and 3G networks showed some adaptivity to the situations there.

Implementation

The STUN implementation turned out to be based on the BSD-licensed libre by creytiv.com, which is compatible with both the Chrome and Firefox implementations of WebRTC. Binary analysis showed that the webrtc.org media engine along with libopus 1.1 is used for the upper layer.

Privacy

Wire is a company that prides itself on the user privacy protection that comes from having its HQ in Switzerland, yet it has its signalling and TURN servers in Ireland. They get strong kudos for using DTLS-SRTP. To sum it up, Wire offers a case study in how to fully adopt WebRTC for both web and native mobile.
Related articles across the web


The post Wiresharking Wire appeared first on webrtcHacks.

What I Learned About the WebRTC Market from a Webinar on WebRTC Testing

bloggeek - Thu, 07/16/2015 - 12:00

We’re a lot more than I had known.

One of my recent “projects” is co-founding a startup called testRTC, which offers testing and monitoring services for WebRTC based services. The “real” public announcement about this service was made here in these last couple of days, through a webinar we did along with SmartBear on the impact of WebRTC on testing.

I actively monitor and maintain a dataset of WebRTC vendors. I use it to understand the WebRTC ecosystem better. I make it a point to know as many vendors as possible through various means. I thought I had this space pretty much covered.

What surprised me was the barrage of requests for information and demos that came into our testRTC contact page from vendors with real services out there that I just wasn’t aware of. About 50% of the requests came from vendors I didn’t know existed.

My current dataset is now reaching 700 vendors and projects. There might be twice that many out there.

Why is this important?
  • A lot of the vendors out there are rather silent about what they are doing. This isn’t about the technology – it is about solving a problem for a specific customer
  • There are enough vendors today to require a solid, dedicated testing tool focused on WebRTC. I am more confident about this decision we made with testRTC
  • If you are building something, be sure to let me know about it or to add it to the WebRTC Index

Oh – and if you want to see a demo of testRTC in action, we will be introducing it and demoing it at the upcoming VUC meeting tomorrow.

     

Want to make the best decision on the right WebRTC platform for your company? Now you can! Check out my WebRTC PaaS report, written specifically to assist you with this task.

The post What I Learned About the WebRTC Market from a Webinar on WebRTC Testing appeared first on BlogGeek.me.

ClueCon 2015

miconda - Wed, 07/15/2015 - 21:51

A new edition of the ClueCon conference takes place during August 3-6, 2015, in Chicago, USA. Backed mainly by the developers of the FreeSWITCH project, the topics at the event cover many other open source real time communication projects, as well as open discussion round tables.

I will present about Kamailio on Tuesday, August 4, 2015.

ClueCon is a place gathering lots of VoIP folks from around the world, many from the Kamailio community – it is one of those events that one should not miss.

If you are at the event or around the Chicago area during that time and want to meet to discuss Kamailio, get in touch via the sr-dev mailing list. If there are many interested, we can have some ad-hoc sessions and group meetings (e.g., dinner) to approach various topics about Kamailio.

For private discussions, you can contact me directly (email to miconda at gmail dot com).


Is the Web Finally Growing up and Going Binary?

bloggeek - Tue, 07/14/2015 - 12:00

Maybe.

I remember the good old days. I was hired to work on this signaling protocol called H.323. It used an interesting notation called ASN.1 with a binary encoding, capable of encoding a boolean of information in a single bit. Life was good.

Then came SIP. With its “simple” text notation, it conquered the market. Everyone could just use it and debug it by looking at the network. It made things so much easier for developers. So they told me. What they forgot to tell us then was how hard it is to parse text properly – especially for mere machines.

Anyway, it is now 2015. We live in a textual internet world. We use HTML to describe our web pages, CSS to express their design, and we code using JavaScript and JSON. All of these formats are textual in nature. Our expectation is that this text that humans write (and read to debug) will be read and processed by machines.

This verbosity of text that we use over the internet is slowing us down twice:

  1. Text takes more space than binary information, so we end up sending more data over the network
  2. Computers need to work harder to parse text than they do binary
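Both costs are easy to demonstrate – here is the same small message encoded as JSON text versus a packed binary record (the field layout is invented for illustration):

```python
import json
import struct

# A small message: sequence number, timestamp, temperature reading.
message = {"seq": 42, "ts": 1437480000, "temp": 21.5}

text = json.dumps(message).encode()   # textual encoding, self-describing
binary = struct.pack(                 # binary encoding: 4+4+4 fixed bytes
    "!IIf", message["seq"], message["ts"], message["temp"]
)

print(len(text), len(binary))
# → 43 12
```

The binary form is under a third of the size, and decoding it is a fixed-offset copy rather than a character-by-character parse – which is exactly the trade-off the initiatives below exploit.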

    So we’ve struggled through the years to fix these issues. We minify the text, rendering it unreadable to humans. We use compression on the network, rendering it unreadable to humans over the network. We cache data. We use JIT (Just In Time) compilation on JavaScript to speed it up. We essentially lost most of the benefits of text along the way, but remained with the performance issues still.

This last year, several initiatives have been put in place that are about to change all that – to move us from a textual web into a binary one. Users won’t feel the difference. Most web developers won’t feel it either. But things are about to change for the better.

    Here are the two initiatives that are making all the difference here.

    HTTP/2

HTTP/2 is the latest and greatest in internet transport protocols. It has been an official standard (RFC 7540) for almost two full months now.

    Its main objective is to speed up the web and to remove a lot of the hacks we had to use to build web pages and run interactive websites (BOSH, Comet and CSS sprites come to mind here).

    Oh – and it is binary. From the RFC:

    Finally, HTTP/2 also enables more efficient processing of messages through use of binary message framing.

    While the content of our web pages will remain textual and verbose (HTML), the transport protocol used to send them, with its multitude of headers, is becoming binary.
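To get a feel for what binary framing means in practice, here is a minimal decoder sketch of ours, based on the fixed 9-octet frame header that RFC 7540 (section 4.1) puts in front of every HTTP/2 frame:

```javascript
// Every HTTP/2 frame starts with a 9-octet header (RFC 7540, section 4.1):
// 24-bit payload length, 8-bit type, 8-bit flags, and a 31-bit stream id.
function readFrameHeader(buf) {
  return {
    length: (buf[0] << 16) | (buf[1] << 8) | buf[2], // payload length in octets
    type: buf[3],                                    // e.g. 0x1 = HEADERS
    flags: buf[4],                                   // e.g. 0x4 = END_HEADERS
    streamId: buf.readUInt32BE(5) & 0x7fffffff       // top bit is reserved
  };
}

// A HEADERS frame with END_HEADERS set, on stream 1, with a 13-octet payload:
var header = Buffer.from([0x00, 0x00, 0x0d, 0x01, 0x04, 0x00, 0x00, 0x00, 0x01]);
console.log(readFrameHeader(header)); // { length: 13, type: 1, flags: 4, streamId: 1 }
```

Fixed offsets like these are exactly what makes binary framing cheap for machines, compared with scanning HTTP/1.1’s text headers for colons and line breaks.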

To make things “worse”, HTTP/2 is about to encrypt everything by default, simply because the browsers that have implemented it so far (Chrome and Firefox) decided not to support non-encrypted HTTP/2 connections. So the verbosity, and with it the ability to watch messages on the network and debug things, has gone down the drain.

    WebAssembly

    I’ve recently covered WebAssembly, comparing the decisions around it to those of WebRTC.

    WebAssembly is a binary format meant to replace the use of JavaScript in the browser.

Developers will write their frontend code in JavaScript or whatever other language they fancy, and will compile it to WebAssembly. The browser will then execute the WebAssembly directly, without needing to parse as much text as it does today. The end result? A faster web, with more languages available to developers.

    This is going to take a few years to materialize and many more years to become dominant and maybe replace JavaScript, but it is the intent here that matters.

    Why is it important?

    We need to wean ourselves from textual protocols and shift to binary ones.

    Yes. Machines are becoming faster. Processing power more available. Bandwidth abundant. And we still have clogged networks and overloaded CPUs.

The Internet of Things won’t make things any easier on us – we need smaller devices that can boot up and communicate. We need low power with great performance. We cannot afford to ruin it all with architectures and designs based on text protocols.

    The binary web is coming. Better be prepared for it.

    The post Is the Web Finally Growing up and Going Binary? appeared first on BlogGeek.me.

    FreeSWITCH Week in Review (Master Branch) July 4th-July 11th

    FreeSWITCH - Tue, 07/14/2015 - 02:20

Hello, again. This past week in the FreeSWITCH master branch we had 54 commits. We had a bunch of new commits for features this week! Those pull requests are really helping to get things reviewed and accepted. Keep it up, community!

    Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.

    New features that were added:

    • FS-7780 Add new channel variable max_session_transfers. If set, this variable is used to count the number of session transfers allowed instead of the max_forwards variable. If not set, the existing behavior is preserved.
    • FS-7783 Add channel variable for capturing DTMF input when using play_and_get_digits when the response does not match
    • FS-7772 [mod_opus] Add functionality to keep FEC enabled on the encoder by modifying the bitrate if packet loss changes (Opus codec specific behaviour).
    • FS-7799 [mod_png] Add API command uuid_write_png
    • FS-7801 [mod_opus] Added support to set CBR mode
    • FS-7685 [mod_say_nl] Fix Dutch numbers pronunciation
• FS-7198 Add comma-separated values and reverse ranges for time-of-day and day-of-week matches
    • FS-7809 [mod_opus] Added 60 ms ptime for Opus at 8 khz ( opus@8000h@60i )
    • FS-7405 [mod_dialplan_xml] Fix condition regex=”all” to work with time conditions
    • FS-7819 [mod_opus] Restore bitrate (if there’s no more packet loss) and added step for 60 ms
    • FS-7773 [mod_sofia] Adding additional transfer events when the fire-transfer-events=true profile parameter is set
    • FS-7820 FreeSWITCH automated unit test and micro benchmark framework

    Improvements in build system, cross platform support, and packaging:

    • FS-7628 [mod_erlang_event] Removed unused variables causing a compilation error
    • FS-7776 Add mod_kazoo to packaging

    The following bugs were squashed:

    • FS-7778 [mod_sofia] Fixed a bug causing a SQL statement to fail because of a double quote instead of a single quote
    • FS-7754 [freetdm] Fixed a bug relating to single digit dial-regex with analog devices
    • FS-7785 [mod_opus] Fix for invalid ptime 30 ms for opus@8000h . Replaced 30 ms with 40 ms.
    • FS-7762 [mod_av] Handle buffer allocation failures of large buffers

And, this past week in the FreeSWITCH 1.4 branch we had no new commits merged in from master.

    Dear NY Times, if you’re going to hack people, at least do it cleanly!

    webrtchacks - Tue, 07/14/2015 - 00:07

So the New York Times uses WebRTC to gather your local IP addresses… Tsahi describes the non-technical parts of the issue in his blog. Let’s look at the technical details… it turns out that the JavaScript code used is very clunky and inefficient.

    First thing to do is to check chrome://webrtc-internals (my favorite tool since the hangouts analysis). And indeed, nytimes.com is using the RTCPeerConnection API. We can see a peerconnection created with the RtpDataChannels argument set to true and using stun:ph.tagsrvcs.com as a STUN server.
    Also, we see that a data channel is created, followed by calls to createOffer and setLocalDescription. That pattern is pretty common to gather IP addresses.

Using Chrome’s devtools search feature, it is straightforward to find out that the RTCPeerConnection is created in the following JavaScript file:
    http://s.tagsrvcs.com/2/4.10.0/loaded.js

Since it’s minified, here is the de-minified snippet that is gathering the IPs:

Mt = function() {
    function e() {
        this.addrsFound = { "0.0.0.0": 1 }
    }
    return e.prototype.grepSDP = function(e, t) {
        var n = this;
        if (e) {
            var o = [];
            e.split("\r\n").forEach(function(e) {
                if (0 == e.indexOf("a=candidate") || 0 == e.indexOf("candidate:")) {
                    var t = e.split(" "), i = t[4], r = t[7];
                    ("host" === r || "srflx" === r) && (n.addrsFound[i] || (o.push(i), n.addrsFound[i] = 1))
                } else if (0 == e.indexOf("c=")) {
                    var t = e.split(" "), i = t[2];
                    n.addrsFound[i] || (o.push(i), n.addrsFound[i] = 1)
                }
            }), o.length > 0 && t.queue(new y("webRTC", o))
        }
    }, e.prototype.run = function(e) {
        var t = this;
        if (c.wrip) {
            var n = window.RTCPeerConnection || window.webkitRTCPeerConnection || window.mozRTCPeerConnection;
            if (n) {
                var o = { optional: [{ RtpDataChannels: !0 }] }, i = [];
                -1 == w.baseDomain.indexOf("update.") && i.push({ url: "stun:ph." + w.baseDomain });
                var r = new n({ iceServers: i }, o);
                r.onicecandidate = function(n) {
                    n.candidate && t.grepSDP(n.candidate.candidate, e)
                }, r.createDataChannel(""), r.createOffer(function(e) {
                    r.setLocalDescription(e, function() {}, function() {})
                }, function() {});
                var a = 0, s = setInterval(function() {
                    null != r.localDescription && t.grepSDP(r.localDescription.sdp, e),
                    ++a > 15 && (clearInterval(s), r.close())
                }, 200)
            }
        }
    }, e
}(),

Let’s look at the run function first. It is creating a peerconnection with the optional RtpDataChannels constraint set to true. There is no reason for that; it just unnecessarily creates candidates with an RTCP component in Chrome, and is ignored in Firefox.

    As mentioned earlier,
    stun:ph.tagsrvcs.com
is used as STUN server. From Wireshark dumps it’s pretty easy to figure out that this is running the coturn STUN/TURN server (https://code.google.com/p/coturn/); the SOFTWARE field in the binding response is set to
    Coturn-4.4.2.k3 ‘Ardee West’.

    The code hooks up the onicecandidate callback and inspects every candidate it gets. Then, a data channel is created and createOffer and setLocalDescription are called to start the candidate gathering process.
    Additionally, in the following snippet

var a = 0, s = setInterval(function() {
    null != r.localDescription && t.grepSDP(r.localDescription.sdp, e),
    ++a > 15 && (clearInterval(s), r.close())
}, 200)

the localDescription is searched for candidates every 200ms, for a total of three seconds. That polling is pretty unnecessary: once candidate gathering is done, onicecandidate is called with a null candidate, so there is nothing left to poll for.

Let’s look at the grepSDP function. It is called in two contexts: once in the onicecandidate callback with a single candidate, and once with the complete SDP.
It splits the SDP or candidate into individual lines and parses each one, extracting the candidate type at index 7 and the IP address at index 4.
Since without a relay server one will never get anything but host or srflx candidates, the type check on that line is unnecessary. The rest of the line does eliminate duplicates, however.

    Oddly, the code also looks for an IP in the c= line which is completely unnecessary as this line will not contain new information. Also, looking for the candidate lines in the localDescription.sdp will not yield any new information as any candidate found in there will also be signalled in the onicecandidate callback (unless someone is using a 12+ months old version of Firefox).
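For comparison, the useful part of that parsing fits in a few lines. This is a sketch of ours (names and structure are not from the NYT script): parse only candidate lines, keep host and srflx addresses, and deduplicate.

```javascript
// Minimal candidate harvesting: split a candidate line on spaces;
// the address is at index 4 and the candidate type at index 7.
var seen = {};
function collectAddress(line) {
  if (line.indexOf("a=candidate") !== 0 && line.indexOf("candidate:") !== 0) return null;
  var parts = line.split(" ");
  var address = parts[4], type = parts[7];
  if (type !== "host" && type !== "srflx") return null; // ignore relay etc.
  if (seen[address]) return null;                       // deduplicate
  seen[address] = true;
  return address;
}

// A host candidate line as Chrome emits it:
var sample = "a=candidate:1467250027 1 udp 2122260223 192.168.0.196 46243 typ host generation 0";
console.log(collectAddress(sample)); // "192.168.0.196"
```

No polling, no c= line inspection: feeding each candidate from the onicecandidate callback through a function like this is all the harvesting that is actually needed.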

    Since the JS is minified it is rather hard to trace what actually happens with those IPs.
    If you’re going to hack people, at least do it cleanly!

    {“author”: “Philipp Hancke“}

    Want to keep up on our latest posts? Please click here to subscribe to our mailing list if you have not already. We only email post updates. You can also follow us on twitter at @webrtcHacks for blog updates and news of technical WebRTC topics or our individual feeds @chadwallacehart, @victorpascual and @tsahil.

    The post Dear NY Times, if you’re going to hack people, at least do it cleanly! appeared first on webrtcHacks.

    WebRTC on the New York Times – Not as an Article or a Video Chat Feature

    bloggeek - Mon, 07/13/2015 - 12:00

    WebRTC has been mentioned with regards to the New York Times. It isn’t about an article covering it – or a new video chat service they now offer.

    I was greeted this weekend by this interesting tweet:

    WebRTC being used now by embedded 3rd party on http://t.co/AaD7p3qKrE to report visitors' local IP addresses. pic.twitter.com/xPdh9v7VQW

    — Mike O'Neill (@incloud) July 10, 2015

I haven’t been able to confirm it – I didn’t find the culprit piece of code in the several minutes I searched for it – but it may well be genuine.

    The New York Times may well be using WebRTC to (gasp) find your private IP address.

    In the WebRTC Forum on Facebook, a short exchange took place between Cullen Jennings (Cisco) and Michael Jerris (FreeSWITCH):

    Cullen: I’ve been watching this for months now – Google adds served on slash dot for example and many other sites do this. I don’t think it is to exactly get the local ip. I agree they get that but I think there is more interesting things gathered as straight up fingerprinting.

    Michael: local ip doesn’t seem that useful for marketers except as a user fingerprinting tool. They already have your public ip, this helps them differentiate between people behind nat. it’s a bit icky but not such a big deal. This issue blows up again when someone starts using it maliciously, which I’m sure will happen soon enough. I don’t get why exactly we don’t just prompt for this the same way we do camera and mic, it wouldn’t be a huge deal to work that into the spec. That being said, I don’t think it’s actually as big of a deal as it has been made either

Cullen: It’s not exactly clear to me how one uses this maliciously. I can tell you most people’s IP address right now – 192.168.0.1 – and knowing that a large percentage of the world has that local IP doesn’t directly help you hack much. To me the key thing is browsers need to not allow network connections to random stuff inside the firewall that is not prepared to talk to a browser. I think the browser vendors are very aware of this and doing the right thing.

    My local IP address is 10.0.0.1 which is also quite popular.

In recent months, we’ve seen a lot of FUD going around about WebRTC and the fact that it leaks local IP addresses. I’ve been struggling myself to understand what the fuss is about. It does seem bad – a web page knowing too much about me. But how is that hurting me in any way? I am not a security expert, so I can’t really say, but I do believe the noise levels around this topic are higher than they should be.

    When coming to analyze this, there are a couple of things to remember:

    • As Cullen Jennings points out, for the most part, the local IP address is mostly known. At least for the consumers at home
• We are already sharing so much about ourselves of our own volition that I don’t see how this is such an important piece of information to be giving away now
    • The alternative isn’t any good either: I currently have installed on my relatively new laptop at least 4 different communication apps that have “forced” themselves on my browser. They know my local IP address and probably a lot more than that. No one seems to care about it. I can install them easily on most/all enterprise machines as well
• Browser fingerprinting isn’t new. It is the process of finding out who you are and singling you out as you surf across multiple websites. Does it need WebRTC? Probably not. Go on and check if your browser has a unique fingerprint – all of the 4 browsers I checked (on 3 devices, including my smartphone) turned out rather unique – without the use of WebRTC
• The imminent death of plugins and the commonality of browsers on popular smartphones mean that browser fingerprints may become less unique, reducing their usefulness. WebRTC “fixes” that by coupling in additional local and public IP address information. Is that a good thing? A bad thing?

    One thing is clear. WebRTC has a lot more uses than its original intended capability of simply connecting a call.

    The post WebRTC on the New York Times – Not as an Article or a Video Chat Feature appeared first on BlogGeek.me.

    Can an Open Source SFU Survive Acquisition? Q&A with Jitsi & Atlassian HipChat

    webrtchacks - Sun, 07/12/2015 - 22:12

Atlassian’s HipChat acquired BlueJimp, the company behind the Jitsi open source project. Other than for positive motivation, why should WebRTC developers care? Well, Jitsi had its Jitsi Videobridge (JVB), one of the few open source Selective Forwarding Unit (SFU) projects out there. Jitsi’s founder and past webrtcHacks guest author, Emil Ivov, was a major advocate for this architecture, both in the standards bodies and in public. As we have covered in the past, SFUs are an effective way to add multiparty video to WebRTC. Beyond this one component, Jitsi was also a popular open source project for its VoIP client, XMPP components, and much more.

    So, we had a bunch of questions: what’s new in the SFU world? Is the Jitsi project going to continue? What happens when an open source project gets acquired? Why the recent licensing change?

    To answer these questions I reached out to Emil, now Chief Video Architect at Atlassian and Jitsi Project Lead and Joe Lopez, Senior Development and Product Manager at Atlassian who is responsible for establishing and managing Atlassian Open Source program. 

    Photo courtesy of Flickr user Scott Cresswell

    webrtcHacks: It has been a while since we have covered multi-party video architectures here. Can you give us some background on what a SFU is and where it helps in WebRTC?

    Emil: A Selective Forwarding Unit (SFU) is what allows you to build reliable and scalable multi-party conferences. You can think of them as routers for video, as they receive media packets from all participants and then decide if and who they need to forward them to.

    Compared to other conferencing servers, like video mixers – i.e. Multipoint Control Units (MCUs) – SFUs only need a small amount of resources and therefore scale much better. They are also significantly faster as they don’t need to transcode or synchronize media packets, so it is possible to cascade them for larger conferences.

    webrtcHacks: is there any effort to standardize the SFU function?

Emil: Yes. It is true that SFUs operate at the application layer, so, strictly speaking, they don’t need to be standardized the way IP routers are. Different vendors implement different features for different use cases and things work. Still, as their popularity grows, it becomes more and more useful for people to agree on best practices for SFUs. There is an ongoing effort at the IETF to describe how SFUs generally work: draft-ietf-avtcore-rtp-topologies-update. This helps the community understand how to best build and use them.

    Having SFUs well described also helps us optimize other components of the WebRTC ecosystem for them. draft-aboba-avtcore-sfu-rtp-00.txt, for example, talks about how a number of fields that encoders use are currently shared by different codecs (like VP8 and H.264) but are still encoded differently, in codec-specific ways. This is bad as it means developers of more sophisticated SFUs need to suddenly start caring about codecs and the whole point of moving away from MCUs was to avoid doing that. Therefore, works like draft-berger-avtext-framemarking and draft-pthatcher-avtext-esid aim to take shared information out of the media payload and into generic RTP header extensions.

Privacy is another issue that has seen significant activity at the IETF, around improving end-to-end privacy in SFUs. All existing SFUs today need to decrypt all incoming data before they can process it and forward it to other participants. This obviously puts SFUs in a position to eavesdrop on calls, which, unless you are running your own instance just for yourself, is not a great thing. It also means that the SFU needs to allocate a lot of processing resources to transcrypting media; avoiding this would improve scalability even further.

    Selective Forwarding Middlebox diagram from https://tools.ietf.org/html/draft-ietf-avtcore-rtp-topologies-update-08

webrtcHacks: Other than providing multi-party video functionality, what else do SFUs like the JVB do?

    Emil: Simple straightforward relaying from everyone to everyone – what we call full star routing – means you may end up sending a lot of traffic to a lot of people … potentially more than they care or are able to receive. There are two main ways to address that issue.

First, you can limit the number of streams that everyone receives. This means that in a conference with a hundred participants, rather than getting ninety-nine streams, everyone would only receive the streams for the last four, five or N active speakers. This is what we call Last N, and it is something that really helps scalability. Right now N is a config parameter in JVB deployments, but we are working on making it adaptive, so that the JVB adjusts it based on link quality.
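The Last N idea can be sketched in a few lines (a toy illustration of ours, not Jitsi Videobridge code): track speakers in most-recently-active order and forward only the first N.

```javascript
// Toy "Last N" selection: keep an activity-ordered list of speakers
// and forward only the streams of the N most recently active ones.
function lastN(n) {
  var order = []; // most recently active speaker first
  return {
    speak: function(id) {
      var i = order.indexOf(id);
      if (i !== -1) order.splice(i, 1); // move an existing speaker...
      order.unshift(id);                // ...to the front of the list
    },
    forwarded: function() { return order.slice(0, n); }
  };
}

var conference = lastN(3);
["alice", "bob", "carol", "dave", "alice"].forEach(function(id) {
  conference.speak(id);
});
console.log(conference.forwarded()); // [ 'alice', 'dave', 'carol' ]
```

The selection updates on every active-speaker change, so a conference of a hundred participants still costs each receiver only N inbound video streams.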

    Another way we improve bandwidth usage is by using “simulcast”. Chrome has the option of generating multiple outgoing video streams in different resolutions. This allows us to pick the higher resolution for active speakers (presumably those are the ones you would want to see in good quality) and resend it to participants who can afford the traffic. It will just relay the lower resolution for and to everyone else.

A few other SFUs, like the one Google Hangouts is using, implement simulcast, as it saves a lot of resources on the server as well as on the client side. We are actually working on some improvements there right now.

    webrtcHacks: Joe – now that you have had a chance to get to know the Blue Jimp/Jitsi team and technology, can you give an update on your plans to incorporate Jitsi into HipChat and Atlassian?

    Joe: Teams of all sizes use HipChat every day to communicate in real-time all over the world. Teams are stronger when they feel connected, and video is an integral part of that. Users have logged millions of minutes of 1:1 video using HipChat, which helps teams collaborate and build their company culture regardless of whether they work in the same location. With Jitsi we can give users so much more! We’re in the process of developing our own video, audio, and screen-sharing features using Jitsi Video Bridge. It’s a little early for us to comment on exact priority order and timing, but our objective is to make it easier for teams to connect effortlessly anywhere, anytime, on any device.

    webrtcHacks: what is the relationship between the core Atlassian team, HipChat, and Jitsi? How does Jitsi fit in your org structure?

    Joe: The Jitsi team joined Atlassian as part of HipChat team and makes up the core of our video and real-time communication group. They are experts on all things RTC and we’re now leveraging their expertise. The Jitsi developers are relocating to our Austin offices and will keep working on Jitsi and Atlassian implementations.

    webrtcHacks: Can you disclose the terms of the deal?

    Emil: I can only say that the acquisition was a great thing for both BlueJimp and Jitsi.

    Joe: Same here. We really wanted to add to our team the sort of expertise and technology that BlueJimp brings.

    webrtcHacks: I had to try.. So, how big is the Jitsi community? Do you know how many active developers you have using your various projects?

    Emil Ivov – founder of the Jitsi project

    Emil:  We haven’t been tracking our users so it’s hard to say. In terms of development, a lot of the work on Jitsi Videobridge and Jitsi Meet is done by BlueJimp, but we are beginning to get some pretty good patches. Hopefully, the trend will continue in that direction.

    As for the Jitsi client, we haven’t had a lot of time for it in the past couple of years so community contributions there are likely surpassing those of the company.

    webrtcHacks: Your public statements indicate the primary focus of the acquisition was the Jitsi Videobridge. Jitsi had many other popular products, including the Jitsi client, a TURN server, and many others. What is the future of these other projects? Does Atlassian have justification to continue to maintain these elements?

    Joe: Our plan is to continue developing the Jitsi Videobridge as well as the other projects including libjitsi, Jitsi Meet, Jirecon, Jigasi and other WebRTC related projects in the Jitsi community. We’re also going to continue providing the build infrastructure for the Jitsi client just as BlueJimp has been doing.  But, we don’t have immediate plans for substantial development on the purely client-side.

    Emil: It’s worth pointing out that the heart of Jitsi Videobridge, libjitsi, is something it shares with the Jitsi client. There’s also a lot of code that Jigasi, our SIP gateway, imports directly from the client.  So, while the upper UX layers in the client are not our main focus, we will continue working heavily on the core.

    Above all, however, the developer community around the client is much older and more mature than that of our newer projects. There are developers like Ingo Bauersachs or Danny van Heumen, for example, who have been long involved with the project and who continue working on it. Developers coming and going or changing focus is a natural part of any FLOSS project, and as Joe mentioned, we are going to continue providing the logistics.

    webrtcHacks: While Atlassian has initiatives to help open source projects, your core products are not based on open source and you are not known for having many open source projects of its own. Can you address this concern? Is the Jitsi acquisition an attempt to change this? If so, what else has Atlassian done to accommodate more of an open sourcing culture internally?

    Joe: Actually, Atlassian uses, supports and develops a number of open source projects.  We just haven’t been very vocal about it. That will be changing soon. In addition to managing our real-time communication project, I’m also responsible for our open source program. We’re in the process of restructuring how Atlassian supports open source, and the Jitsi project is one of the first initiatives in our plan. We see Jitsi as a great opportunity, and we are *very* serious about making this project a success. We’ll have more to say about open source later this year.

    Finally we think that Jitsi Videobridge is the most advanced open source Selective Forwarding Unit, which puts the team in a unique position to contribute to the WebRTC ecosystem. We are very keen on doing this.

    webrtcHacks: last week you moved all the Jitsi licenses from LGPL to Apache.

    Emil, why did you choose LGPL in the first place?

    Emil: Ever since we started the project, one of our primary motivations had been to get our code in the hands of as many people as possible, so we wanted to lower adoption barriers. Licensing is one of the important components here and, during its very early stages, around 2003, 2004, Jitsi (then SIP Communicator) started with an Apache license.

    Then we had to think a little bit more seriously about how we were going to make a living off of our work, because otherwise there wouldn’t have been any project at all. That’s when we thought we might be better off if BlueJimp had protection and decided to switch to LGPL.

    So, although it wasn’t our first choice, it did give us a certain measure of protection.

    I am very happy that Atlassian has decided to take the risk and relinquish that protection. I firmly believe this is the best option for Jitsi and its users.

    Joe Lopez of Atlassian at an Austin WebRTC Meetup

    webrtcHacks: Joe – why the move to Apache? Why not other licenses like MIT, BSD, etc?

    Joe: As for why Apache over MIT/BSD, it’s actually very simple: like at many organizations, Apache is our preferred license of choice when using other people’s open source work. So, it made sense to us that this is what we should choose for our projects. We talked with a number of people internally and externally, and even went so far as to evaluate all licenses. But our technical and legal experts found Apache to be a tried and tested license respected by many organizations for their terms and clarity. At the end of the day we chose Apache because it best fit our organization and others.

    webrtcHacks: how do you expect this license change will impact existing Jitsi users?

    Emil: Very positively! A number of developers and companies are looking at using Jitsi Videobridge for their new startups, products and services. We expect the Apache license to make Jitsi significantly more appealing to them.

    When you are integrating a technology, the more permissive the license is, the less it precludes you from certain choices in the future. When launching a new service or a product, it is very hard to know that you would never need to keep some parts of it proprietary. This is especially true for startups, and I am saying it from experience.

    You simply need to keep that option open because sometimes it makes all the difference between a company closing its doors or thriving for years.

    That’s the liberty that you get from Apache.

    webrtcHacks: the Meet application was previously a MIT license. How are you handling that? Some argue that going from MIT to Apache is a step in the wrong direction.

    Emil: That’s true – the first lines of code in the Jitsi Meet project did come under MIT.  But there’s not much to handle there. The MIT license allows for code to be redistributed under any other license, including Apache, and Joe already pointed out why we think Apache is a better choice.

    Joe: There is also a purely practical side to this. As I mentioned, we’re in the process of restructuring our open source story, and Jitsi is one of the first in this effort.  So, it’s important for us to apply the same policy everywhere. The more exceptions we have, the harder it will be to manage and ensure a good experience for any Atlassian contributor.

    webrtcHacks: Jitsi was known in the past for soliciting community input before making major decisions. Why didn’t you announce the plans to change your licensing model before the actual change this time?

    Emil: Knowing the project as I do, it just never crossed my mind that this would be a problem for anyone. Throughout the past years I only heard concerns from people that found the LGPL too restrictive for them, so I only expected positive opinions. And the overwhelming majority have reacted positively.

    For the few people who have raised concerns, let me reiterate that we think this is the best possibility for Jitsi, and we also need to be practical and use a uniform license for all Atlassian projects.

    People who feel that LGPL was a better match for them are completely free to take last week’s version of the project and continue maintaining it under that license.

    webrtcHacks: what level of transparency can the Jitsi community expect going forward?

    Emil: This is actually one of the main ways in which BlueJimp’s acquisition is going to be beneficial to Jitsi.

    A lot of the work that BlueJimp did in the past was influenced by customer demand.  As a result, we never really knew exactly what to expect a month in the future. This is now over. Today it is much easier for us to define a roadmap and stick to it. Obviously we will still remain flexible as we listen to requests and important use cases from the community, but we are going to have significantly more visibility than before.

    webrtcHacks: the github charts indicate a slow down in activity vs. last year. Was this due to distraction from the acquisition? What level of public commits should we expect out of the new Atlassian Jitsi team going forward?

    Emil:  As with any project, there’s a lot that needs to be done in the early stages.  Ninety percent of what you do is push code. This gradually changes with time as the problems you are solving become more complex. At that point you spend a lot of time thinking, testing and debugging. As a result your code output diminishes.

    Take our joint efforts with Firefox, for example. This took a lot of time looking through wireshark traces, debugging and making small adjustments. The time it took to actually write the code was negligible compared to everything else we needed to do. Still, adding Firefox compatibility was important to Jitsi, and that happened within Atlassian.

    In addition, the entire team is relocating to Austin, and a relocation can be time-consuming.  

    But, there haven’t been any private commits, if that’s what you are thinking of :).

    Illustration of a Selective Forwarding Unit (SFU) architecture with 3 participants

    webrtcHacks: can you share some of your roadmap & plans for Jitsi?

    Emil: Gladly! We are really excited to continue working on what makes Jitsi Videobridge the most advanced SFU out there. This includes things like bandwidth adaptivity, for instance. We have big changes coming to our Simulcast and Last N support. Scalability and reliability will also be a main focus in the next months. This includes being able to do more conferences per deployment but also more people per conference. We are also going to be working on mobile, to make it easier for people to use the project on iOS and Android. Supporting other browsers and switching to Maven are also on the roadmap.

    We’re not ready to say when, or in what order these things will be happening – but they’re coming.

    webrtcHacks: should we expect to see the Jitsi source move from github to bitbucket?

    Joe: We’re keeping Jitsi on GitHub, since they excel at being a place for open source projects. Bitbucket is better designed for software teams within organizations that want greater control over their source code, to restrict access within their organization, teams, or even to specific individuals. However, one area that we do want to address is issue tracking. This has been a source of pain for Jitsi, so we’re considering moving issue tracking to JIRA, Atlassian’s issue tracking and management software, which will provide us with everything we need for better project management.

    We will be discussing this with the community in the coming weeks.

    {“interviewer”, “chad“}

    {“interviewees”, [“Emil Ivov“, “Joe Lopez“]}

    Want to keep up on our latest posts? Please click here to subscribe to our mailing list if you have not already. We only email post updates. You can also follow us on twitter at @webrtcHacks for blog updates and news of technical WebRTC topics or our individual feeds @chadwallacehart, @victorpascual and @tsahil.

    The post Can an Open Source SFU Survive Acquisition? Q&A with Jitsi & Atlassian HipChat appeared first on webrtcHacks.

    3CX and WebRTC: An Interview With Nick Galea

    bloggeek - Thu, 07/09/2015 - 12:00

    3CX: Nick Galea

    July 2015

    Enterprise web meetings

    WebRTC video conferencing for the enterprise.

    [If you are new around here, then you should know I’ve been writing about WebRTC lately. You can skim through the WebRTC post series or just read what WebRTC is all about.]

     

    I have been following 3CX for several years. They were one of the first enterprise communication vendors to offer WebRTC. Recently, they introduced a new standalone service called 3CX WebMeeting. It has all the expected features of an enterprise multiparty video calling service. And it uses WebRTC.

    I had a chat with Nick Galea, CEO of 3CX. I wanted to know what they are doing with WebRTC and what his impressions of it are.

    Here are his answers.

     

    What is 3CX all about?

    3CX provides a straightforward and easy to use & manage communication solution that doesn’t lack in functionality or features and is still highly affordable. We recognised that there was a need for a Windows-based software PBX and so this is where 3CX began.

    Given the fact that the majority of businesses already use Windows, 3CX provides a solution that is easy to configure and manage for IT Admins. There’s no need for any additional training that can be time-consuming and costly. We also help businesses to save money on phone bills with the use of SIP trunking and free interoffice calls and travel costs can be reduced by making use of video conferencing with 3CX WebMeeting. As a UC solutions provider, we focus on cost savings, management, productivity and mobility, and we help our customers to achieve improvements in all four aspects.

    Our focus is on innovation and thus, our development team works nonstop to bring our customers and partners the very best. We are always looking out for the latest great technologies and how we can use them to make 3CX Phone System even better and so of course, WebRTC was a technology that we just had to implement.

     

    You decided to plunge into the waters and use WebRTC. Why is that?

    To us, unified communications is not only about bringing all methods of communication into one user-friendly interface, but about making those methods of communication as seamless, enjoyable and productive for all involved, whether that be for the organisation that invested in the system, or a partner or client that simply has a computer and internet connection to work with.

    Running a business is not an easy feat, and the whole purpose of solutions such as 3CX Phone System and 3CX WebMeeting is to make everyday business processes easier. So, for us, WebRTC was a no-brainer. We believe in plugin-free unified communications and with such technology available for us to leverage, the days of inconvenient downloads and time-consuming preparation in order to successfully (or in some cases, unsuccessfully) hold a meeting are over.

    What signaling have you decided to integrate on top of WebRTC?

    Signalling is performed over WebSocket for maximum compatibility. Messages and commands are enveloped in JSON objects. ICE candidates are generated by our server library, while SDPs are parsed and translated by the MCU. This allows full control over SDP features like FEC and RTX in order to achieve the best video performance.
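As a rough illustration of the approach described above (JSON envelopes carried over a WebSocket), here is a minimal sketch. The message shape and field names are assumptions for illustration, not 3CX's actual protocol.

```javascript
// Hypothetical JSON envelope helpers for WebSocket signaling.
// The { type, payload } shape is an illustrative assumption.
function makeEnvelope(type, payload) {
  return JSON.stringify({ type: type, payload: payload });
}

function parseEnvelope(raw) {
  const msg = JSON.parse(raw);
  if (typeof msg.type !== 'string') {
    throw new Error('malformed signaling message');
  }
  return msg;
}

// Usage with a browser WebSocket (URL is a placeholder):
// const ws = new WebSocket('wss://example.invalid/signaling');
// ws.onmessage = (e) => {
//   const msg = parseEnvelope(e.data);
//   if (msg.type === 'sdp-answer') { /* hand the SDP to RTCPeerConnection */ }
// };
// ws.send(makeEnvelope('sdp-offer', { sdp: offer.sdp }));
```

Keeping every message inside one envelope type makes it easy for the server to route commands and to reject malformed input early.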

     

    Backend. What technologies and architecture are you using there?

    The platform is based on a web application written in PHP. We developed a custom MCU service (actually a Selective Forwarding Unit, aka SFU). This service allows us to handle a very large number of media streams in real time. Performance is optimized to reduce latency to a minimum. Raw media streams can be saved to disk, and our Converter Service then automatically produces a standard video file of the meeting recording.

    A key component of the web application is the MCU Cluster Manager, which can handle several MCUs scattered across different areas, distribute load, and manage user location preferences.
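A cluster manager like the one described might select an instance roughly as follows. This is a hypothetical sketch: the node fields (region, load) and the selection policy are invented for illustration, not 3CX's actual design.

```javascript
// Pick an MCU/SFU node: prefer the user's region, then the least-loaded node.
// Field names (region, load) are illustrative assumptions.
function pickMcu(mcus, preferredRegion) {
  const candidates = mcus.filter((m) => m.region === preferredRegion);
  // Fall back to the whole cluster if nothing serves the preferred region.
  const pool = candidates.length > 0 ? candidates : mcus;
  return pool.reduce((best, m) => (m.load < best.load ? m : best));
}
```

A policy like this keeps media paths short for most users while still absorbing regional overload.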

     

    Since you cater to the enterprise, can you tell me a bit about your experience with Internet Explorer, WebRTC and customers?

    So far most people are using Chrome without any complaints, so it doesn’t concern me that WebRTC is not supported by Internet Explorer. We haven’t come across any issues with customers, as they are aware that this is a limitation of the technology and not the software. In fact, our stats show that 95% of people connect or reconnect with Chrome after receiving the warning message, so for most users Chrome is not a problem.
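The warning flow Nick describes implies a capability check on page load. A minimal sketch of such a check, with the detection logic being my assumption rather than 3CX's actual code:

```javascript
// Detect whether the browser exposes WebRTC primitives (including the
// prefixed variants browsers shipped circa 2015); otherwise a page can
// show a "please use Chrome" style warning.
function supportsWebRTC(w) {
  return Boolean(
    (w.RTCPeerConnection || w.webkitRTCPeerConnection || w.mozRTCPeerConnection) &&
    (w.navigator && (w.navigator.mediaDevices || w.navigator.getUserMedia ||
                     w.navigator.webkitGetUserMedia))
  );
}

// Usage in a page:
// if (!supportsWebRTC(window)) { showBrowserWarningBanner(); }
```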

     

    Where do you see WebRTC going in 2-5 years?

    I think that WebRTC will become the de facto communications standard for video conferencing, and maybe even for calls. WebRTC is a part of how technology is evolving and we may even see some surprising uses for it outside the realms of what we’re imagining right now. It’s incredibly easy to use and no other technology is able to compete. It’s what the developers are able to do with it that is really going to make the difference and I believe there is still so much more to come in terms of how WebRTC can be utilised.

     

    If you had one piece of advice for those thinking of adopting WebRTC, what would it be?

    That they should have adopted it earlier :).

     

    Given the opportunity, what would you change in WebRTC?

    Nothing really, but the technology is still growing, so I’m looking forward to seeing what’s in store for WebRTC and how it’s going to improve.

     

    What’s next for 3CX?

    We’re working on tighter integration between 3CX WebMeeting and 3CX Phone System, and on integrating our platform more closely with third-party apps such as CRM systems.

    The interviews are intended to give different viewpoints than my own – you can read more WebRTC interviews.

    The post 3CX and WebRTC: An Interview With Nick Galea appeared first on BlogGeek.me.

    FreeSWITCH Week in Review (Master Branch) June 27th-July 3rd

    FreeSWITCH - Tue, 07/07/2015 - 05:46

    Hello, again. This past week in the FreeSWITCH master branch we had 49 commits. This week we had a bunch of new features, most of them helpful little improvements, but we also had two new modules merged in! The 2600hz guys added mod_kazoo and William King merged in mod_smpp. You can find out more about mod_smpp by going here. And the 2600hz patches are all slated to be merged in by the 1.6 release.

    Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.

    New features that were added:

    • FS-7732 Continue recording with uuid_transfer
    • FS-7752 [mod_rayo] Increase maximum number of elements from 30 to 1024 to allow adhearsion to create large grammars to navigate IVR menus.
    • FS-7750 [mod_commands] Allow for uuid_setvar to handle arrays
    • FS-7758 [mod_loopback] Emit an event if a loopback bowout occurs
    • FS-7759 [mod_sofia] Added the channel variable ignore_completed_elsewhere to suppress setting the completed elsewhere cause
    • FS-7771 Set a channel variable if the recording is terminated due to silence hits
    • FS-7760 Added XML fetch for channels to externally support nightmare transfer; depends on the channel-xml-fetch-on-nightmare-transfer profile param (default is disabled)
    • FS-7730 [mod_smpp] Added mod_smpp as an event handler module and fixed the default configs to provide a sample load option for mod_sms and mod_smpp
    • FS-7774 Add mod_kazoo

    Improvements in build system, cross platform support, and packaging:

    • OPENZAP-238 [freetdm] Fix some GSM compilation errors and do a bit of code cleanup
    • OPENZAP-237 [freetdm] Use __func__ instead of __FUNCTION__ to comply with c99 in gcc 5.1

    The following bugs were squashed:

    • FS-7734 [mod_nibblebill] Fixed a deadlock
    • FS-7726 Fixed a bug with recording a video session on DTMF command
    • FS-7721 Fixed a segfault caused when using session:recordFile() and session:unsetInputCallback in a lua script
    • FS-7429 [mod_curl] Fixed to output valid json
    • FS-7746 [mod_verto] Fixed a device permission error in verto client
    • FS-7753 [mod_local_stream] Fixed some glitching and freezing video when using hold/unhold
    • FS-7761 [core] Fix shutdown races running api commands during shutdown
    • FS-7767 [mod_sofia] Fixed a segfault caused by invalid arguments to sip_dig
    • FS-7744 [mod_conference] Fixed a bug causing the first user’s video stream to stop when another verto user calls the conference
    • FS-7486 [mod_sofia] Fixed the handling of queued requests
    • FS-7775 [mod_conference] Fix threading issue causing stuck worker threads
    • FS-7777 [mod_imagick] Fixed a regression causing a segfault when playing png & pdf in conference

    And, this past week in the FreeSWITCH 1.4 branch we had 2 commits merged in from master.

    • FS-7486 [mod_sofia] Fixed the handling of queued requests
    • FS-7750 [mod_commands] Set uuid_setvar to handle arrays

    How do You Test Your WebRTC Service?

    bloggeek - Mon, 07/06/2015 - 12:00

    Join the webinar on WebRTC and its Impact on Testing to get a clear answer.

    I’ve been working recently with some friends on solving an issue for WebRTC that seems to be ignored by most – testing WebRTC services.

    We had a concept in mind, and decided to follow through and develop a service for it, naming it testRTC. Since we started, we’ve enhanced the service greatly to include monitoring and analytics, due to customer requests.

    What I noticed in our calls with customers is that there are 3 different paradigms used for testing WebRTC-based services:

    1. Not testing at all, assuming things will work out just fine, or that customer complaints will drive bug fixes
      • I thought only startups with a demo or a proof of concept would be in this category, but was surprised to see large vendors here as well
      • Some assume that if they test their product in other ways (SIP/VoIP aspects of it for example), then this would be enough
      • Others assume that things will just work because Google is testing WebRTC in Chrome
    2. Testing manually, where QA teams (or, more likely, the developers themselves) go through the motions of testing the service
      • Which begs the question of how this can scale or keep up with the continuous deployment nature of WebRTC, where browsers release new versions every 6-8 weeks
    3. Framework DIY, where an automation framework is put in place to test WebRTC
      • This usually amounts to a one-time investment that is then never improved upon
      • These frameworks will usually lack functionality, as other priorities related to feature requests and functionality trickle in
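Whichever paradigm is chosen, an automated WebRTC test ultimately has to assert on media-quality metrics, typically parsed from getStats(). A minimal, hypothetical grading function; the field names and thresholds here are illustrative, not what testRTC actually uses:

```javascript
// Grade a parsed stats sample against simple quality thresholds.
// `sample` is assumed to hold cumulative packet counters and an RTT in ms.
function gradeStats(sample) {
  const lossRatio =
    sample.packetsLost / Math.max(1, sample.packetsLost + sample.packetsReceived);
  if (lossRatio > 0.05) return 'fail: packet loss above 5%';
  if (sample.roundTripTimeMs > 400) return 'fail: RTT above 400 ms';
  return 'pass';
}
```

Even a DIY framework benefits from centralizing such checks, so thresholds can evolve in one place as browsers change.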

    You might think that using a solid VoIP testing product would suffice for WebRTC, but the reality is starkly different. While WebRTC is VoIP, it bears little resemblance to it when it comes to testing.

    If you wish to learn more, check out what we are doing at testRTC. Or better yet – join SmartBear and testRTC for a free webinar:

    WebRTC and Its Impact on Testing – July 8, 2015, 2:00 p.m. EDT

    Nikhil Kaul and I will be discussing the challenges WebRTC poses to testing and suggesting best practices to meet them. See you there!

    The post How do You Test Your WebRTC Service? appeared first on BlogGeek.me.

