News from Industry

The WebRTC Troubleshooter: test.webrtc.org

webrtchacks - Mon, 04/20/2015 - 09:00

WebRTC-based services are seeing new and larger deployments every week. One of the challenges I’m personally facing is troubleshooting, as many different problems can occur (network, device, components…) and it’s not always easy to get useful diagnostic data from users.

troubleshooting (Image source: google)

Earlier this week, Tsahi, Chad and I participated in the WebRTC Global Summit in London and had the chance to catch up with some friends from Google, who publicly announced the launch of test.webrtc.org. It is a great diagnostic tool but, to me, the best thing is that it can be easily integrated into your own applications; in fact, we are already integrating it into some of our WebRTC apps.

Sam, André and Christoffer from Google are providing here a brief description of the tool. Enjoy it and happy troubleshooting!

{“intro-by”: “victor“}

The WebRTC Troubleshooter: test.webrtc.org (by Google)

Why did we decide to build this?

We have spent countless hours debugging things when a bug report comes in for a real-time application. Besides the application itself, there are many other components (audio, video, network) that can and will eventually go wrong due to the huge diversity among users’ system configurations.

By running small tests targeted at each component we hoped to identify issues and create the possibility to gather information on the system reducing the need for round-trips between developers and users to resolve bug reports.

Test with audio problem


What did we build?

It was important to be able to run this diagnostic tool without installing any software, and ideally it should integrate closely with an application, making it possible to distinguish bugs in the application itself from issues in the components that power it.

To accomplish this, we created a collection of tests that verify basic real-time functionality from within a web page: video capture, audio capture, connectivity, network limitations, stats on encode time, supported resolutions, etc… See details here. 

We then bundled the tests on a web page that enables the user to download a report, or make it available via a URL that can be shared with developers looking into the issue.

How can you use it?

Take a look at test.webrtc.org and find out what tests you could incorporate in your app to help detect or diagnose user issues. For example, simple tests to distinguish application failures from system component failures, or more complex tests such as detecting whether the camera is delivering frozen frames, or telling the user that their network signal quality is weak.
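As a sketch of how such a frozen-frame check might work (this is illustrative logic, not the actual test.webrtc.org implementation): sample the video at intervals, reduce each frame to a fingerprint, and flag the stream as frozen once consecutive fingerprints stop changing.

```javascript
// Illustrative frozen-frame detector, not the actual test.webrtc.org code.
// In a browser you would draw the <video> element to a canvas at intervals
// and fingerprint the pixel data; here the fingerprinting step is abstracted
// away so the core logic stays self-contained.

function createFrozenFrameDetector(threshold) {
  // threshold: how many identical consecutive fingerprints count as "frozen"
  let lastFingerprint = null;
  let identicalCount = 0;

  return function checkFrame(fingerprint) {
    if (fingerprint === lastFingerprint) {
      identicalCount++;
    } else {
      identicalCount = 0;
      lastFingerprint = fingerprint;
    }
    return identicalCount >= threshold;
  };
}

// Example: fingerprints sampled once per second; 3 identical samples => frozen.
const isFrozen = createFrozenFrameDetector(3);
const samples = ['a1', 'b2', 'b2', 'b2', 'b2'];
const results = samples.map(isFrozen);
// results: [false, false, false, false, true]
```

In a real capture test the fingerprint could be as crude as a sum over a downscaled luma plane; the point is only that the decision logic is trivially separable from the browser APIs.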

https://webrtchacks.com/wp-content/uploads/2015/04/test.webrtc.org_.mp4

We encourage you to take ideas and code from GitHub and integrate similar functionality into your own UX. Using test.webrtc.org should be part of any “support request” flow for real-time applications. We encourage developers to contribute!

In particular we’d love some help getting a uniform getStats API between browsers.
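To illustrate the kind of divergence in question: around this time, Chrome and Firefox returned stats in different shapes and with different field names, so applications typically wrapped getStats behind a normalization layer. The sketch below is hypothetical; the entry shapes and field names are simplified illustrations of the browser differences, not a complete or authoritative mapping.

```javascript
// Hypothetical getStats normalizer. The entry shapes below are simplified
// illustrations of browser divergence (Chrome-style goog-prefixed string
// fields vs. numeric spec-style fields), NOT a complete mapping.

function normalizeStatsEntry(entry) {
  // Chrome-style entry: goog-prefixed fields delivered as strings.
  if (entry.googRtt !== undefined) {
    return {
      rttMs: parseFloat(entry.googRtt),
      packetsLost: parseInt(entry.packetsLost, 10)
    };
  }
  // Spec-style entry: numeric fields, round-trip time in seconds.
  if (entry.roundTripTime !== undefined) {
    return {
      rttMs: Math.round(entry.roundTripTime * 1000),
      packetsLost: entry.packetsLost
    };
  }
  return null; // an entry type we do not care about
}

// Example with mock report entries:
const chromeStyle = { googRtt: '45', packetsLost: '3' };
const specStyle = { roundTripTime: 0.045, packetsLost: 3 };
// both normalize to { rttMs: 45, packetsLost: 3 }
```

A uniform API in the browsers themselves would make this whole layer unnecessary, which is exactly the help the team is asking for.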

test.webrtc.org repo

What’s next?

We are working on adding more tests (e.g., network analysis that detects issues affecting audio and video performance is on the way).

We want to learn how developers integrate our tests into their apps and we want to make them easier to use!

{“authors”: [“Sam“, “André“, “Christoffer”]}

Want to keep up on our latest posts? Please click here to subscribe to our mailing list if you have not already. We only email post updates. You can also follow us on Twitter at @webrtcHacks for blog updates and news of technical WebRTC topics, or our individual feeds @chadwallacehart, @reidstidolph, @victorpascual and @tsahil.

The post The WebRTC Troubleshooter: test.webrtc.org appeared first on webrtcHacks.

3CX is a Sponsor at Microsoft Ignite 2015!

Libera il VoIP - Thu, 04/16/2015 - 16:12

3CX is a Silver Sponsor at Microsoft Ignite 2015, taking place in Chicago from May 4 to 8.

The main focus of this year’s Microsoft Ignite is Cloud technology, Unified Communications and Mobility: in short, it is tailor-made for 3CX! Industry insiders, experts and opinion leaders will attend the event, so register and join in.

Live demonstrations of 3CX Phone System and of our integrated web conferencing solution, 3CX WebMeeting, based on WebRTC technology, will run throughout the conference.

Come and meet the 3CX USA team and 3CX CEO Nick Galea at booth #307.

To avoid conflicts or overlaps, please schedule an appointment via e-mail.

We look forward to meeting you in person at Microsoft Ignite 2015!

Further reading
  • Microsoft enters the field too

    After AOL, Google, Yahoo and others, the Redmond giant is also entering the voice-over-IP market, and it is doing so by developing, in collaboration with major hardware manufacturers, a solution designed for [...]

  • Response Point: Microsoft abandons VoIP

    Response Point: it was supposed to be the thoroughbred through which Microsoft would extend its “leadership” to the voice-over-IP sector as well. Apparently, though, Microsoft’s venture can already be [...]

  • TellMe by Microsoft: voice search engine for BlackBerry

    With an “unexpected twist” (at least for me), TellMe, a company recently acquired by Microsoft, has launched a new application for the RIM platform that allows searches via voice commands.

    How it works [...]

Kamailio World 2015 – The Workshops

miconda - Wed, 04/15/2015 - 23:57
It is now about a month and a half until the start of Kamailio World Conference 2015. Continuing with the same event structure as in 2014, the afternoon of the first day, the 27th of May, is filled with several technical workshops. These sessions are intended to give a more hands-on perspective on the subjects, with deeper technical content.

Last year, Sipwise showed how to deploy sip:provider CE – the open source out-of-the-box IP Telephony Operator Platform – in a matter of minutes and customize it to better fit your needs. This year, Daniel Grotti, a long-time SIP and Kamailio fellow, is going to show how to enable WebRTC for sip:provider CE in order to bridge communication between the web world and classic SIP phones. A few other typical use cases will be covered during the session.

Carsten Bock, from NG Voice, is returning with another tutorial showing more of what can be done with Kamailio for IMS and VoLTE deployments. Besides the tutorial, the plan is to have a VoLTE testbed on site for the duration of the entire event, so participants can test with their own devices.

After presenting the concept and development of CGRateS, a carrier-grade open source CDR rating engine, at past editions, Dan Bogos is now coming with a hands-on session about how to integrate it with Kamailio for prepaid and postpaid billing.

The ability to troubleshoot SIP routing and analyze flows on the wire is one of the core skills required of VoIP engineers. Lorenzo Mangani, one of the co-founders of the Homer SIP Capture project, is going to deliver a session on how to use existing open source tools (including, but not limited to, Homer and sipgrep) to make the SIP troubleshooting process easier.

Altogether, the workshops provide an amazing amount of knowledge from people with first-hand experience, those who built the systems. Kamailio World is a unique opportunity to interact with such people face to face.

The conference days are filled with other very interesting sessions, also including valuable technical details, scalable and secure architectures, and other products that can be used to complete VoIP platforms with new features. Right now you can see details for a selection of presentations on the Schedule page.

Be sure you don’t miss Kamailio World Conference 2015, May 27-29, in Berlin, Germany – it is the open source real-time communications event in Europe! Secure your participation and register now!

See you in Berlin!

Put in a Bug in Apple’s Apple – Alex Gouaillard’s Plan

webrtchacks - Tue, 04/14/2015 - 13:08

Apple Feast photo courtesy of Flickr user Overduebook. Licensed under Creative Commons NC2.0.

One of the biggest complaints about WebRTC is the lack of support for it inside Safari and iOS’s webview. Sure, you can use an SDK or build your own native iOS app, but that is a lot of work compared to Android, which has Chrome and WebRTC inside the native webview on Android 5 (Lollipop) today. Apple, being Apple, provides no external indication of what it plans to do with WebRTC. It is unlikely they will completely ignore a W3C standard, but who knows whether iOS support is coming tomorrow or in 2 years.

Former guest webrtcHacks interviewee Alex Gouaillard came to me with an idea a few months ago for helping to push Apple and get some visibility. The idea is simple – leverage Apple’s bug process to publicly demonstrate the desire for WebRTC support today, and hopefully get some kind of response from them. See below for details on Alex’s suggestion and some additional Q&A at the end.

Note: Alex is also involved in the webrtcinwebkit project – that is a separate project that is not directly related, although it shares the same goal of pushing Apple. Stay tuned for some coverage on that topic.

{“intro-by”: “chad“}

Plan to Get Apple to support WebRTC

The situation

According to some polls, adding WebRTC support to Safari, especially on iOS and in native apps in iOS, is the most wanted WebRTC item today.

The technical side of the problem is simple: any native app has to follow Apple’s store rules to be accepted in the store. These rules state that any app that “browses the web” needs to use the Apple-provided WebView [rule 2.17], based on the WebKit framework. Safari is also based on WebKit. WebKit does not support WebRTC… yet!

First Technical step

The webrtcinwebkit.org project aims to address the technical problem within the first half of 2015. However, bringing WebRTC support to WebKit is just part of the overall problem. Only Apple can decide to use it in their products, and they do not comment on products that have not been released.

There have been lots of signs though that Apple is not opposed to WebRTC in WebKit/Safari.

  • Before the Chrome fork of WebKit/WebCore into what became known as Blink, Apple was publicly working on parts of the WebRTC implementation (source)
  • Two umbrella bugs to accept the implementation of WebRTC in WebKit are still open and active in WebKit’s Bugzilla, with an Apple media engineer in charge (Bug 124288 & Bug 121101)
  • Apple engineers, not the usual Apple standards representative, joined the W3C WebRTC working group in early 2014 (public list) and participated in the technical plenary meeting in November 2014 (W3C members restricted link)
  • Finally, an early implementation of Media Streams and the getUserMedia API in WebKit was contributed in late 2014 (original bug & commit).

So how do you let Apple know you want it, and soon – potentially this year?

Let Apple know!

Chrome and Internet Explorer (IE), for example, have set up pages where web developers can directly give feedback about which features they want to see next (WebRTC-related items generally rank high, by the way). There is no such thing yet for Apple’s products.

The only way to formally provide feedback to Apple is through the bug process. One needs to have or create a developer account and open a bug to let Apple know they want something. Free accounts are available, so there is no financial cost associated with the process. One can open a bug in any given category; the bugs are then triaged and will end up in a “WebRTC” placeholder internally.

Volume counts. The more people ask for this feature, the more likely Apple is to support it. The more requests, the better.

But that is not the only thing that counts. Users of WebRTC libraries, or any third party whose business depends on WebRTC, can also make the case to Apple that their business would benefit from Apple supporting WebRTC in its products. Here too, volume (of business) counts.

As new releases of Safari are usually shipped with new releases of the OS, generally in or around September, it is very unlikely we will see WebRTC in Safari (if ever) before the next release, in late 2015.

We need you

You want WebRTC support on iOS? You can help. See below for a step-by-step guide on how.

How to Guide

Step-by-step guide
  1. Register a free Apple Developer account. It does not matter whether you are actually a developer. You will need to make an Apple ID if you do not have one already.
  2. Sign in to the Bug Reporter:
  3. Once signed in, you should see the following screen:
  4. Click on Open, then select Safari:
  5. Go ahead and write the bug report:

It is very important here that you write WHY, in your own words, you want WebRTC support in Safari. There are multiple reasons you might want it:

  • You’re a developer: you have developed a website that requires WebRTC support, and you cannot use it on Safari. If your users are requesting it, please share the volume of requests, and/or share the volume of usage you’re getting on non-Safari browsers, to show the importance of this to Apple.
  • You’re a company with a WebRTC product or service. You have the same problem as above, and the same suggestions apply.
  • You’re a user of a website that requires WebRTC, and owner of many Apple devices. You would love to be able to use your favorite WebRTC product or service on your beloved device.
  • You’re a company that offers a plugin for WebRTC in Safari, and you would love to get rid of it.
  • Other reasons of your own.

Oftentimes, communities organize “bug writing campaigns” with boilerplate text to include in a bug. Reviewers have a natural tendency to discount those bugs somewhat, because they feel like more of a “me too” than a bug filed by someone who took 60 seconds to write up a report in their own words.

{“author”: “Alex Gouaillard”}

{“editor”: “chad”}

Chad’s follow-up Q&A with Alex

Chad: What is Apple’s typical response to these bug filing campaigns?

Alex: I do not have the direct answer to this, and I guess only Apple has. However, here are two very clear comments by an Apple representative:

“The only way to let Apple know that a feature is needed is through bug filing.”

“I would just encourage people to describe why WebRTC (or any feature) is important to them in their own words. People sometimes start “bug writing campaigns” that include boilerplate text to include in a bug, and I think people here have a natural tendency to discount those bugs somewhat because they feel like more of a “me too” than a bug filed by someone that took 60 seconds to write up a report in their own words.”

So my initiative here is not to start a bug campaign per se, where everybody would copy-paste the same text or click the same report to increment a counter. My goal here is to let the community know they can give Apple their opinion in a way that counts.

[Editor’s note: I was not able to get a direct confirmation from Apple (big surprise), but I did directly confirm that at least one relevant Apple employee agrees with the sentiment above.]

Chad: Do you have any examples of where this process has worked in the past to add a whole new W3C-defined capability like WebRTC?

Alex: I do not. However, the first comment above by the Apple representative was very clear that, whether it eventually works or not, there is no other way.

Chad: Is there any kind of threshold on the number of bug filings you think the community needs to meet?

Alex: My understanding is that it’s not so much about the number of people that send bugs; it’s more about the case they make. It’s a blend between business opportunities and number of people. I guess volume counts – whether it is people or dollars. This is why it is so important that people use their own words and describe their own case.

Let’s say my friends at various other WebRTC Platform-as-a-Service providers want to show how important having WebRTC in iOS or Safari is for them: one representative of the company could go in and explain their use case and their numbers for the platform/service. They could also ask their devs to file a bug describing the application they developed on top of the WebRTC platform. They could also ask their users to describe why, as users of the WebRTC app, they feel left behind compared to their friends who own a Samsung tablet and can enjoy WebRTC, while they cannot on their iPad. (That is just an example, and I do not suggest that they should write exactly this. Again, everybody should use their own words.)

If I understand correctly, it does not matter whether one or several employees of such a company file one or several bugs for the same company use case.

Chad: Are you confident this will be a good use of the WebRTC developer’s community’s time?

Alex: Ha ha. Well, let’s put it this way: the whole process takes around a couple of minutes in general, and maybe just a little more for companies that have a bigger use case and want to add weight to the balance. Less than what you are spending reading this blog post. If you don’t have a couple of minutes to file a bug with Apple, then I guess you don’t really need the feature.

More seriously, I have been contacted by enough people who just wanted a way, any way, to make it happen, that I know this information will be useful. For the cynics out there, I’m tempted to say: worst-case scenario, you lose a couple of minutes to prove me wrong. Don’t miss the opportunity.

Yes, I’m positive this will be a good use of everybody’s time.

{“interviewer”: “chad”}

{“interviewee”: “Alex Gouaillard”}


The post Put in a Bug in Apple’s Apple – Alex Gouaillard’s Plan appeared first on webrtcHacks.

Testing FreeSWITCH performance on Scaleway C1

TXLAB - Sat, 04/11/2015 - 02:23

The dedicated ARM hosting servers at Scaleway appear to be a decent platform for a mid-sized PBX.

In short, the platform displays the following results in performance tests:

  • OPUS<->PCMA transcoding: 16 simultaneous calls at about 95% total CPU load and no noticeable distortions.
  • SILK<->PCMA transcoding: 72 simultaneous calls were going without distortions, with average total CPU load at 63%. Higher number of calls resulted in noticeable distortions.
  • G722<->PCMA transcoding: 96 simultaneous calls without distortions, at 76% CPU load, and noticeable distortions for higher numbers.

Test 1: sequential transcoding

The following tests are a slight modification of my previous test scenario: it appears that a channel using the OPUS codec cannot execute the `echo` or `delay_echo` FreeSWITCH applications, as they copy RTP frames, and the OPUS codec is stateful and does not tolerate such copying. So, an extra bridge is made to ensure that echo is always executed on a PCMA channel.

XML dialplan in public context (here IPADDR is the public address on the Scaleway host):

  <!-- Extension 100 accepts the initial call, plays echo,
       and on pressing *1 it transfers to 101 -->
  <extension name="100">
    <condition field="destination_number" expression="^100$">
      <action application="answer"/>
      <action application="bind_meta_app" data="1 a si transfer::101 XML ${context}"/>
      <action application="delay_echo" data="1000"/>
    </condition>
  </extension>

  <!-- Extension 101 plays a beep, then makes an outgoing SIP call to
       our own external profile and extension 200 -->
  <extension name="101">
    <condition field="destination_number" expression="^101$">
      <action application="playback" data="tone_stream://%(100,100,1400,2060,2450,2600)"/>
      <action application="unbind_meta_app" data=""/>
      <action application="bridge"
              data="{absolute_codec_string=PCMA}sofia/external/200@IPADDR:5080"/>
    </condition>
  </extension>

  <!-- Extension 200 enforces transcoding and sends the call to 201 -->
  <extension name="200">
    <condition field="destination_number" expression="^200$">
      <action application="answer"/>
      <action application="bridge"
              data="{max_forwards=65}{absolute_codec_string=OPUS}sofia/external/201@IPADDR:5080"/>
    </condition>
  </extension>

  <!-- Extension 201 returns the call to 100, guaranteeing it to be in PCMA -->
  <extension name="201">
    <condition field="destination_number" expression="^201$">
      <action application="answer"/>
      <action application="bridge"
              data="{max_forwards=65}{absolute_codec_string=PCMA}sofia/external/100@IPADDR:5080"/>
    </condition>
  </extension>

The initial call is sent to extension 100 in the public context; then, by pressing *1, 6 additional channels are created, of which two calls perform the transcoding from PCMA to OPUS and back. So, if “show channels” shows 43 total channels, that corresponds to 42 = 6*7 test channels plus the incoming one, or 14 transcoding calls.

#### Good quality ####

# fs_cli -x 'show channels' | grep total
43 total.

# mpstat -P ALL 10
Linux 3.19.3-192 (scw01)    04/10/2015      _armv7l_        (4 CPU)

10:08:41 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
10:08:51 PM  all   82.67    0.00    2.75    0.00    0.00    1.30    0.00    0.00   13.28
10:08:51 PM    0   92.80    0.00    1.30    0.00    0.00    5.20    0.00    0.00    0.70
10:08:51 PM    1   95.30    0.00    1.60    0.00    0.00    0.00    0.00    0.00    3.10
10:08:51 PM    2   89.90    0.00    2.50    0.00    0.00    0.00    0.00    0.00    7.60
10:08:51 PM    3   52.70    0.00    5.60    0.00    0.00    0.00    0.00    0.00   41.70

10:08:51 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
10:09:01 PM  all   84.88    0.00    2.43    0.00    0.00    1.23    0.00    0.00   11.47
10:09:01 PM    0   94.50    0.00    0.50    0.00    0.00    4.90    0.00    0.00    0.10
10:09:01 PM    1   97.60    0.00    1.50    0.00    0.00    0.00    0.00    0.00    0.90
10:09:01 PM    2   87.70    0.00    2.20    0.00    0.00    0.00    0.00    0.00   10.10
10:09:01 PM    3   59.70    0.00    5.50    0.00    0.00    0.00    0.00    0.00   34.80

#### Quite OK quality, with some minor distortions ####

# fs_cli -x 'show channels' | grep total
49 total.

# mpstat -P ALL 10
Linux 3.19.3-192 (scw01)    04/10/2015      _armv7l_        (4 CPU)

10:10:29 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
10:10:39 PM  all   95.65    0.00    2.40    0.00    0.00    0.83    0.00    0.00    1.12
10:10:39 PM    0   95.30    0.00    1.20    0.00    0.00    3.30    0.00    0.00    0.20
10:10:39 PM    1   96.90    0.00    2.20    0.00    0.00    0.00    0.00    0.00    0.90
10:10:39 PM    2   95.80    0.00    3.50    0.00    0.00    0.00    0.00    0.00    0.70
10:10:39 PM    3   94.60    0.00    2.70    0.00    0.00    0.00    0.00    0.00    2.70

10:10:39 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
10:10:49 PM  all   91.55    0.00    1.55    0.00    0.00    0.78    0.00    0.00    6.12
10:10:49 PM    0   89.90    0.00    1.20    0.00    0.00    3.10    0.00    0.00    5.80
10:10:49 PM    1   96.60    0.00    0.70    0.00    0.00    0.00    0.00    0.00    2.70
10:10:49 PM    2   90.60    0.00    1.70    0.00    0.00    0.00    0.00    0.00    7.70
10:10:49 PM    3   89.10    0.00    2.60    0.00    0.00    0.00    0.00    0.00    8.30

#### Bad quality, barely audible ####

# fs_cli -x 'show channels' | grep total
55 total.
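The channel accounting above can be restated as a small helper (hypothetical code, just re-expressing the arithmetic from the text: each *1 press adds 6 channels, of which two calls transcode):

```javascript
// Hypothetical helper restating the channel accounting from the text:
// total channels = 1 incoming channel + 6 channels per *1 press,
// and each press yields 2 transcoding calls.

function transcodingCalls(totalChannels) {
  const testChannels = totalChannels - 1; // subtract the incoming channel
  const presses = testChannels / 6;       // 6 channels created per *1 press
  return presses * 2;                     // 2 transcoding calls per press
}

// The three measurement points from the mpstat captures:
// transcodingCalls(43) -> 14 (good quality)
// transcodingCalls(49) -> 16 (minor distortions)
// transcodingCalls(55) -> 18 (barely audible)
```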

If the OPUS codec is replaced with SILK in the above configuration, the test is not usable, as SILK appears not to tolerate multiple transcodings: after 4 transcodings almost no sound is propagated at all. Further transcoding sessions treat the input as silence and do not load the CPU.

If G722 is used, 36 transcoded calls still leave plenty of CPU resources for other tasks:

# fs_cli -x 'show channels' | grep total
109 total.

# mpstat -P ALL 10
Linux 3.19.3-192 (scw01)    04/10/2015      _armv7l_        (4 CPU)

10:37:31 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
10:37:41 PM  all   19.75    0.00    5.40    0.00    0.00    0.00    0.00    0.00   74.85
10:37:41 PM    0   27.00    0.00   12.10    0.00    0.00    0.00    0.00    0.00   60.90
10:37:41 PM    1    4.30    0.00    9.50    0.00    0.00    0.00    0.00    0.00   86.20
10:37:41 PM    2   47.60    0.00    0.00    0.00    0.00    0.00    0.00    0.00   52.40
10:37:41 PM    3    0.10    0.00    0.00    0.00    0.00    0.00    0.00    0.00   99.90

10:37:41 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
10:37:51 PM  all   17.57    0.00    7.42    0.00    0.00    0.00    0.00    0.00   75.00
10:37:51 PM    0    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
10:37:51 PM    1   20.30    0.00   29.70    0.00    0.00    0.00    0.00    0.00   50.00
10:37:51 PM    2   50.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00   50.00
10:37:51 PM    3    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00

Test 2: parallel transcoding

The following piece of public dialplan takes the call at extension 300, makes a call in OPUS to extension 301, and then the call is bridged to 302 in PCMA, where a speech test file is played endlessly. Thus, a call to 300 produces 5 channels, which are the equivalent of two transcoded calls.

  <extension name="300">
    <condition field="destination_number" expression="^300$">
      <action application="answer"/>
      <action application="bridge"
              data="{absolute_codec_string=OPUS}sofia/external/301@IPADDR:5080"/>
    </condition>
  </extension>

  <extension name="301">
    <condition field="destination_number" expression="^301$">
      <action application="answer"/>
      <action application="bridge"
              data="{absolute_codec_string=PCMA}sofia/external/302@IPADDR:5080"/>
    </condition>
  </extension>

  <extension name="302">
    <condition field="destination_number" expression="^302$">
      <action application="answer"/>
      <action application="endless_playback" data="/var/tmp/t02.wav"/>
    </condition>
  </extension>

In parallel to a call to 300 from outside, additional endless calls were produced from fs_cli:

originate sofia/external/300@IPADDR:5080 &endless_playback(/var/tmp/t02.wav)

This originate command produced 6 new channels, equivalent to two transcoded calls. The command was repeated until the human caller heard distortions.

OPUS transcoding was functioning fine with 16 transcoded calls and 95% average CPU load, while SILK and G722 started showing distortions at around 65-75% of CPU load.


Filed under: Networking Tagged: arm, freeswitch, pbx, scaleway, voip

From SQL Tables to Kamailio Hash Tables

miconda - Thu, 04/09/2015 - 23:55
Eloy Coto Pereiro has recently published another blog post that can be useful when one needs to cache the content of custom database tables in Kamailio’s memory via the htable module. The article uses PostgreSQL as the database server, but the same mechanism can be used with other database servers.

You can read the article at:

Caching is a good way to improve performance, and htable is a very flexible mechanism in the Kamailio configuration file, with plenty of options to tune the caching rules.

Enjoy!

Installing FreeSWITCH on Scaleway C1

TXLAB - Wed, 04/08/2015 - 13:13

Scaleway (a cloud service by online.net) offers ARM-based dedicated servers for EUR 9.99/month, with the first month free. The platform is powerful enough to run a small or mid-sized FreeSWITCH server, and it shows nice results in voice quality tests.

These instructions are for the Debian Wheezy distribution.

By default, the server is created with Linux kernel 3.2.34, and this kernel version does not have a high-resolution timer. You need to choose 3.19.3 in server settings.

At Scaleway, you get a dedicated public IP address and 1:1 NAT to a private IP address on your server. So, the FreeSWITCH SIP profiles need to be updated (“ext-rtp-ip” and “ext-sip-ip” should point to your public IP address).

FreeSWITCH compiles and links the mpg123-1.13.2 library, which fails to compile on the ARM architecture. You need to edit the corresponding files to point to mpg123-1.19.0 and commit back to Git, because the build scripts check whether any modified and uncommitted files exist in the source tree. The patch also forces the use of gcc-4.7, as gcc-4.6 is known to have problems on the ARM architecture.

apt-get update && apt-get install -y make curl git sox flac
mkdir -p /usr/src/freeswitch
cd /usr/src/freeswitch/
git clone https://gist.github.com/b27f4e41cc02f49d31a0.git
git clone -b v1.4 https://stash.freeswitch.org/scm/fs/freeswitch.git /usr/src/freeswitch/src
cd src
git apply ../b27f4e41cc02f49d31a0/freeswitch-arm.patch
git add --all
git commit -m 'mpg123-1.19.0.patch'
./debian/util.sh build-all -i -z1 -aarmhf -cwheezy
# This will run for about 4 hours, and you can build the sound packages
# in parallel in another terminal.
mkdir /usr/src/freeswitch-sounds
cd /usr/src/freeswitch-sounds
git clone https://github.com/traviscross/freeswitch-sounds.git music-default
cd music-default
./debian/bootstrap.sh -p freeswitch-music-default
./debian/rules get-orig-source
tar -xv --strip-components=1 -f *_*.orig.tar.xz && mv *_*.orig.tar.xz ../
dpkg-buildpackage -uc -us -Zxz -z1
cd /usr/src/freeswitch-sounds
git clone https://github.com/traviscross/freeswitch-sounds.git sounds-en-us-callie
cd sounds-en-us-callie
./debian/bootstrap.sh -p freeswitch-sounds-en-us-callie
./debian/rules get-orig-source
tar -xv --strip-components=1 -f *_*.orig.tar.xz && mv *_*.orig.tar.xz ../
dpkg-buildpackage -uc -us -Zxz -z1
cd /usr/src/freeswitch-sounds
dpkg -i *.deb
cd /usr/src/freeswitch
# this will fail because dependencies are not installed
dpkg -i freeswitch-all_*
# this will add dependencies
apt-get -f install
# finally, install FreeSWITCH
dpkg -i freeswitch-all_*
# Minimal configuration that you can use
cd /etc
git clone https://github.com/voxserv/freeswitch_conf_minimal.git freeswitch
# edit sip_profiles/*.xml and put the public IP address into "ext-rtp-ip" and "ext-sip-ip"
insserv freeswitch
service freeswitch start
Filed under: Networking Tagged: arm, freeswitch, pbx, scaleway, voip

Kamailio v4.2.4 Released

miconda - Thu, 04/02/2015 - 17:21
Kamailio SIP Server v4.2.4 stable is out – a minor release including fixes in code and documentation since v4.2.3. Configuration file and database compatibility is preserved.

Kamailio (former OpenSER) v4.2.4 is based on the latest version of GIT branch 4.2, so those running previous 4.2.x versions are advised to upgrade. No changes have to be made to the configuration file or database structure compared with older v4.2.x releases.

Resources for Kamailio version 4.2.4

Source tarballs are available at:

Detailed changelog:

Download via GIT:

# git clone git://git.kamailio.org/kamailio kamailio
# cd kamailio
# git checkout -b 4.2 origin/4.2

Binaries and packages will be uploaded at:

Modules’ documentation:

What is new in the 4.2.x release series is summarized in the announcement of v4.2.0:

Looking forward to meeting many of you at Kamailio World 2015!

3CX vince il premio “Prodotto più Innovativo” con 3CX WebMeeting

Libera il VoIP - Tue, 03/31/2015 - 18:11

MONACO DI BAVIERA, GERMANIA, 27 MARZO 20153CX, azienda sviluppatrice del centralino software di ultima generazione 3CX Phone System, con il nuovo prodotto 3CX WebMeeting ha sbaragliato i concorrenti nella categoria “Unified Communication” per il premio “Prodotto più Innovativo”. Questo è avvenuto in occasione del CeBIT 2015 di Hannover, una delle più importanti fiere IT del mondo. Il premio è stato ritirato dal CEO Nick Galea e da Markus Kogel, Sales Manager area EMEA.

3CX WebMeeting was chosen for its innovative use of WebRTC technology. WebRTC is Google's new open-standard platform that lets users launch web meetings directly from the browser, without downloading or installing any client. 3CX launched the hosted version of 3CX WebMeeting in August 2014 and the on-premise version in February 2015. Since its launch, 3CX WebMeeting has received positive feedback from both partners and end users. 3CX WebMeeting is free for up to 10 concurrent users with all 3CX Phone System v12.5 licenses.

The Innovationspreis-IT 2015 Awards are organized by Initiative Mittelstand, an online news portal that keeps companies up to date on the most innovative products and technologies available.

Nick Galea, 3CX CEO, said:

“This award recognizes 3CX as a company at the forefront of the telephony and Unified Communications industry. We are the first vendor to offer a multi-point video conferencing solution built on WebRTC technology that is also integrated with our PBX at no additional cost. The "Most Innovative Product" award, selected by a jury of experts, is a very prestigious recognition in Germany and we are delighted that our ability to innovate is being recognized within the IT industry.”

About 3CX (www.3cx.it)

3CX is the developer of 3CX Phone System, an open-standard unified communications platform for Windows that works with standard SIP phones and replaces any kind of proprietary PBX. 3CX Phone System is easier to manage than standard PBX systems and delivers substantial cost savings and productivity gains. Some of the world's leading companies and organizations use 3CX Phone System, including Boeing, Mitsubishi Motors, Intercontinental Hotels & Resorts, Harley Davidson, the City of Vienna and Pepsi.

3CX won the 2014 Comms National Award in the 'Best On-Premise Enterprise Solution' category, was included in CRN's 2014 Annual Network Connectivity Services Partner Program Guide, and received a 5-star rating in CRN's 2013 partner program. 3CX was also named a CRN Emerging Vendor in 2011 and 2012, is Windows Server certified, and has won several other awards, including the Windowsnetworking.com Gold Award, the Windows IT Pro 2008 Editor's Best Award and a Best Product award from Computer Shopper.

3CX has offices in Australia, Cyprus, Germany, Italy, South Africa, the United Kingdom and the United States. Visit the website at http://www.3cx.com, the Facebook page at www.facebook.com/3CX and the Twitter channel @3cx.

Related posts

  • 3CX is a sponsor at Microsoft Ignite 2015!

    3CX is a Silver Sponsor at Microsoft Ignite 2015, taking place in Chicago from May 4 to 8.

    The main focus of this year's Microsoft Ignite is cloud technology, Unified Communications and [...]

  • Fon Antenna: a product review!

    The FON movement has recently launched the FONTENNA, designed specifically to extend the range of our home hotspots. Let's take a close look at the characteristics of this 6.5 dB antenna [...]

Never Miss a Call Again with the New 3CXPhone for Mac

Libera il VoIP - Tue, 03/31/2015 - 18:09

True to its reputation as an innovator, 3CX is one of the first PBX vendors to offer a Mac client complete with professional features. With the new update to the popular 3CXPhone for Mac, users receive an email notification when they miss a call. This is perfect for users who are always on the road and away from their desks: they will be notified of every missed call and can call back.

Other news in the 3CXPhone for Mac update
  • New VoIP Client Engine.
  • Re-provisioning from the 3CX Phone System Management Console.
  • Notification for calls abandoning the queue.
  • Added a "White"-based theme.
  • Added international language support.
  • Added drag-and-drop support for .3cxconfig, .cer and .crt files.
  • Added "Business Fax" and "Home Fax" fields to the Contacts description.
  • Added "SLA Breach" for queued calls.
  • Added a DND option in the Auto Profile Status when the app is idle.

For more information on the new features, see here. Download 3CXPhone for Mac here.

Related posts

VUC – 8 Years

miconda - Tue, 03/31/2015 - 15:09
The VoIP Users Conference is celebrating 8 years on the air. The weekly online meetup will hold its 535th session during a 24-hour voipathon, starting at 12:00pm PDT (20:00 London time) on Thursday, the 2nd of April, 2015. You can find more details about the session, including the options to join via audio, video or IRC, at:

Big credits to Randy Resnick, who started VUC, has kept it going every week over the past years, and is still steering its future. Kamailio developers and users are glad to have been part of many sessions, presenting the latest news related to the project or joining sessions to debate the hot topics of the real-time communications world of the moment.

Prepare yourself to pop up online and join the VUC voipathon, even for a bit: say hi and briefly share what is new in your world of communications!

Randy and many VUC friends will be at Kamailio World Conference 2015, May 27-29, in Berlin, Germany, with the VUC Visions session. Be sure not to miss the event, where you can meet the people who have had a relevant impact on the transformation of real-time communications over the past years and are working on defining their future!

The Minimum Viable SDP

webrtchacks - Tue, 03/31/2015 - 13:30

Unnatural shrinkage. Photo courtesy Flickr user Ed Schipul

 

One evening last week, I was nerd-sniped by a question Max Ogden asked:

That is quite an interesting question. I somewhat dislike using the Session Description Protocol (SDP) in the signaling protocol anyway, and prefer nice JSON objects for the API and ugly XML blobs on the wire over the ugly SDP blobs used by the WebRTC API.

The question is really about the minimum amount of information that needs to be exchanged for a WebRTC connection to succeed.

 WebRTC uses ICE and DTLS to establish a secure connection between peers. This mandates two constraints:

  1. Both sides of the connection need to send stuff to each other
  2. You need at minimum to exchange ice-ufrag, ice-pwd, DTLS fingerprints and candidate information

Now the stock SDP that WebRTC uses (explained here) is a rather big blob of text: more than 1500 characters for an audio-video offer, not even counting the ICE candidates.

Do we really need all this? It turns out that you can establish a P2P connection with just a little more than 100 characters sent in each direction. The minimal-webrtc repository shows you how. I had to use quite a number of tricks to make this work; it’s a real hack.

How I did it Get some SDP

First, we want to establish a datachannel connection. Once we have this, we can potentially use it to negotiate a second audio/video peerconnection without being constrained by the size of the offer or the answer. Also, the SDP for the data channel is a lot smaller to start with, since there is no codec negotiation. Here is how to get that SDP:

var pc = new webkitRTCPeerConnection(null);
var dc = pc.createDataChannel('webrtchacks');
pc.createOffer(
  function (offer) {
    pc.setLocalDescription(offer);
    console.log(offer.sdp);
  },
  function (err) {
    console.error(err);
  }
);

The resulting SDP is slightly more than 400 bytes. We also need some candidates included, so we wait for the end-of-candidates event:

pc.onicecandidate = function (event) {
  if (!event.candidate) console.log(pc.localDescription.sdp);
};

The result is even longer:

v=0
o=- 4596489990601351948 2 IN IP4 127.0.0.1
s=-
t=0 0
a=msid-semantic: WMS
m=application 47299 DTLS/SCTP 5000
c=IN IP4 192.168.20.129
a=candidate:1966762134 1 udp 2122260223 192.168.20.129 47299 typ host generation 0
a=candidate:211962667 1 udp 2122194687 10.0.3.1 40864 typ host generation 0
a=candidate:1002017894 1 tcp 1518280447 192.168.20.129 0 typ host tcptype active generation 0
a=candidate:1109506011 1 tcp 1518214911 10.0.3.1 0 typ host tcptype active generation 0
a=ice-ufrag:1/MvHwjAyVf27aLu
a=ice-pwd:3dBU7cFOBl120v33cynDvN1E
a=ice-options:google-ice
a=fingerprint:sha-256 75:74:5A:A6:A4:E5:52:F4:A7:67:4C:01:C7:EE:91:3F:21:3D:A2:E3:53:7B:6F:30:86:F2:30:AA:65:FB:04:24
a=setup:actpass
a=mid:data
a=sctpmap:5000 webrtc-datachannel 1024

Only take what you need

We are only interested in a few bits of information here: 

  1. the ice-ufrag: 1/MvHwjAyVf27aLu
  2. the ice-pwd: 3dBU7cFOBl120v33cynDvN1E
  3. the sha-256 DTLS fingerprint: 75:74:5A:A6:A4:E5:52:F4:A7:67:4C:01:C7:EE:91:3F:21:3D:A2:E3:53:7B:6F:30:86:F2:30:AA:65:FB:04:24
  4. the ICE candidates

The ice-ufrag is 16 characters due to the randomness requirements of RFC 5245. While it is possible to reduce that, it’s probably not worth the effort. The same applies to the 24 characters of the ice-pwd. Both are random, so there is not much to gain from trying to compress them either.

The DTLS fingerprint is a hex representation of the 32-byte sha-256 hash. Its length can easily be reduced from 95 characters to an almost optimal (assuming we want to be binary-safe) 44 characters:

var line = "a=fingerprint:sha-256 75:74:5A:A6:A4:E5:52:F4:A7:67:4C:01:C7:EE:91:3F:21:3D:A2:E3:53:7B:6F:30:86:F2:30:AA:65:FB:04:24";
var hex = line.substr(22).split(':').map(function (h) {
  return parseInt(h, 16);
});
console.log(btoa(String.fromCharCode.apply(String, hex)));
// yields dXRapqTlUvSnZ0wBx+6RPyE9ouNTe28whvIwqmX7BCQ=
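Going the other way, the receiver has to expand the base64 string back into the colon-separated hex form before pasting it into an SDP blob. A minimal sketch (the helper name is my own, not something from the minimal-webrtc repository):

```javascript
// Expand a base64-compressed DTLS fingerprint back into the
// colon-separated uppercase hex form used in a=fingerprint lines.
// (Hypothetical helper; not part of the minimal-webrtc repo.)
function expandFingerprint(b64) {
  return atob(b64).split('').map(function (c) {
    var h = c.charCodeAt(0).toString(16).toUpperCase();
    return h.length === 1 ? '0' + h : h;
  }).join(':');
}

// Round-trip check against the fingerprint from the offer above:
var fp = '75:74:5A:A6:A4:E5:52:F4:A7:67:4C:01:C7:EE:91:3F:21:3D:A2:E3:53:7B:6F:30:86:F2:30:AA:65:FB:04:24';
var hex = fp.split(':').map(function (h) { return parseInt(h, 16); });
var b64 = btoa(String.fromCharCode.apply(String, hex));
console.log(expandFingerprint(b64) === fp); // true
```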

So we’re at 84 characters now. We can hardcode everything else in the application.

Dealing with candidates

Let’s look at the candidates. Wait, we only got host candidates. This is not going to work unless both peers are on the same network. STUN does not help much either, since it only works in approximately 80% of all cases.

So we need candidates gathered from a TURN server. In Chrome, the easy way to achieve this is to set the iceTransports constraint to ‘relay’, which will not even gather host and srflx candidates. In Firefox, you currently need to ignore all non-relay candidates yourself.
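For Chrome at the time, the configuration looked roughly like the following (a sketch: the TURN URL and credentials are placeholders, and note that newer versions of the WebRTC spec call this option iceTransportPolicy):

```javascript
// Restrict ICE gathering to relay candidates so that only addresses
// allocated on the TURN server end up in the SDP. The TURN URL and
// credentials below are placeholders, not working values.
var config = {
  iceServers: [{
    urls: 'turn:turn.example.com:3478',
    username: 'user',
    credential: 'secret'
  }],
  iceTransports: 'relay' // renamed iceTransportPolicy in later specs
};

// Guarded so the snippet also runs outside a browser:
if (typeof webkitRTCPeerConnection !== 'undefined') {
  var pc = new webkitRTCPeerConnection(config);
}
console.log(config.iceTransports); // relay
```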

If you use the minimal-webrtc demo, you need to use your own TURN credentials; the ones in the repository will no longer work since they use the time-based credential scheme. On my machine, two candidates were gathered:

a=candidate:1211076970 1 udp 41885439 104.130.198.83 47751 typ relay raddr 0.0.0.0 rport 0 generation 0
a=candidate:1211076970 1 udp 41819903 104.130.198.83 38132 typ relay raddr 0.0.0.0 rport 0 generation 0

I believe this is a bug in Chrome, which gathers a relay candidate for an interface that is not routable, so I filed an issue.

Let’s look at the first candidate using the grammar defined in RFC 5245: 

  1. the foundation is 1211076970
  2. the component is 1 (another reason for using the datachannel: there are no RTCP candidates)
  3. the transport is UDP
  4. the priority is 41885439
  5. the IP address is 104.130.198.83 (the ip of the TURN server I used)
  6. the port is 47751
  7. the typ is relay
  8. the raddr and rport are set to 0.0.0.0 and 0 respectively in order to avoid information leaks when iceTransports is set to relay
  9. the generation is 0. This is a Jingle extension of vanilla ICE that allows detecting ice restarts

If we were to simply append both candidates to the 84 bytes we already have we would end up with 290 bytes. But we don’t need most of the information in there.

The most interesting information is the IP and port. For IPv4, that is 32 bits for the IP and 16 bits for the port. We can encode that using btoa again, which yields 7 + 4 characters per candidate. Actually, if both candidates share the same IP, we can skip encoding it again, reducing the size.
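A sketch of that kind of compression (my own hypothetical helper, assuming IPv4 and packing IP and port together; the article's 7 + 4 counts come from encoding the IP and port separately so a repeated IP can be skipped, and the exact scheme in minimal-webrtc may differ):

```javascript
// Pack a dotted-quad IPv4 address (4 bytes) plus a 16-bit port
// (2 bytes) and base64-encode the 6 bytes: exactly 8 characters,
// with no padding since 48 bits divide evenly into base64 digits.
// (Hypothetical helper; the repo's actual scheme may differ.)
function encodeIpPort(ip, port) {
  var bytes = ip.split('.').map(Number); // 4 bytes of IPv4
  bytes.push(port >> 8, port & 0xff);    // 2 bytes of port
  return btoa(String.fromCharCode.apply(String, bytes));
}

console.log(encodeIpPort('104.130.198.83', 47751)); // 8 characters
```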

After consulting RFC 5245 it turned out that the foundation and priority can actually be skipped, even though that requires some effort. And everything else can be easily hard-coded in the application. 

sdp.length = 106

Let’s summarize what we have so far: 

  1. the ice-ufrag: 16 characters
  2. the ice-pwd: 24 characters
  3. the sha-256 DTLS fingerprint: 44 characters
  4. the ip and port: 11 characters for the first candidate, 4 characters for subsequent candidates from the same ip.

Now we also want to encode whether this is an offer or an answer; let’s use an uppercase O or A respectively. Next, we concatenate everything, separating the fields with a ‘,’ character. While that is less efficient than a binary encoding or one that relies on fixed field lengths, it is flexible. The result is a string like:

O,1/MvHwjAyVf27aLu,3dBU7cFOBl120v33cynDvN1E,dXRapqTlUvSnZ0wBx+6RPyE9ouNTe28whvIwqmX7BCQ=,1k85hij,1ek7,157k

106 characters! So that is tweetable. Yay!
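The whole serialization fits in a few lines (a sketch; `minimalSdp` is my name for it, and the field values are the ones from this article):

```javascript
// Serialize the minimal session description: offer/answer flag,
// ice-ufrag, ice-pwd, base64 fingerprint, then candidate tokens,
// all joined with ','. (Hypothetical helper name.)
function minimalSdp(isOffer, ufrag, pwd, fp, candidates) {
  return [isOffer ? 'O' : 'A', ufrag, pwd, fp].concat(candidates).join(',');
}

var s = minimalSdp(true,
  '1/MvHwjAyVf27aLu',
  '3dBU7cFOBl120v33cynDvN1E',
  'dXRapqTlUvSnZ0wBx+6RPyE9ouNTe28whvIwqmX7BCQ=',
  ['1k85hij', '1ek7', '157k']);
console.log(s.length); // 106
```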

You better be fast

Now, if you try this, it turns out it usually does not work unless you are fast enough at pasting.

ICE is short for Interactive Connectivity Establishment. If you are not fast enough in transferring the answer and starting ICE at the Offerer, it will fail. You have less than 30 seconds between creating the answer at the Answerer and setting it at the Offerer. That’s pretty tough for humans doing copy-paste. And it will not work via twitter.

What happens is that the Answerer is trying to perform connectivity checks as explained in RFC 5245. But those never reach the Offerer since we are using a TURN server. The TURN server does not allow traffic from the Answerer to be relayed to the Offerer before the Offerer creates a TURN permission for the candidate, which it can only do once the Offerer receives the answer. Even if we could ignore permissions, the Offerer can not form the STUN username without the Answerer’s ice-ufrag and ice-pwd. And if the Offerer does not reply to the connectivity checks by Answerer, the Answerer will conclude that ICE has failed.

 

So what was the point of this?

Now… it is pretty hard to come up with a use case for this. It fits into an SMS. But sending your peer a URL where you both connect via a third-party signaling server is a lot more viable most of the time. Especially given that, to achieve this, I had to make some tough design decisions, like forcing a TURN server and taking some shortcuts with the ICE candidates which are not really safe. Also, this cannot use Trickle ICE.

¯\_(ツ)_/¯

(thanks, Max)

So is this just a case study in arcane signaling protocols? Probably. But hey, I can now use IRC as a signaling protocol for WebRTC. IRC has a limit of 512 characters, so one can include even more candidates and information. CTCP WEBRTC anyone?

{“author”: “Philipp Hancke“}

Want to keep up on our latest posts? Please click here to subscribe to our mailing list if you have not already. We only email post updates. You can also follow us on Twitter at @webrtcHacks for blog updates and news of technical WebRTC topics, or our individual feeds @chadwallacehart, @reidstidolph, @victorpascual and @tsahil.

The post The Minimum Viable SDP appeared first on webrtcHacks.

Update of Keepassx Autotyping on Mac OS X

miconda - Tue, 03/31/2015 - 01:08
Back in 2009 I published the article on this blog about doing autotyping in Keepassx for Mac OS X using AppleScript and some other helper application, MoreInternet. That article is available at:
It is still a popular read on my blog; however, MoreInternet is no longer available for recent releases of Mac OS X. That change is for the better, though, as Mac OS X can now auto-register URL handlers on the first run of an application that advertises the capability.
Lately I have mainly been using iTerm2 instead of the classic Terminal.app, so I spent a bit of time upgrading the AppleScript to fit better with my current environment: Mac OS X 10.10 (Yosemite) and iTerm2 (as the main option).
The AppleScript is available on GitHub; feel free to fork and make pull requests with enhancements.

With the new version come a few alternative ways of specifying the URL scheme, to make it look nicer inside Keepassx. The old format with 'kpx' is still available, allowing the variants:

  kpx://proto?username:password@address:port/path
  kpx://proto?username:password:address:port/path

The proto field can be ssh, http or https.

The new variants use kpx-proto for the URL scheme, getting rid of the strange URL with 'proto?' inside it, resulting in something much closer to actual URLs. The new URL format is:

  kpx-proto://username:password@address:port/path

Again, the proto can be ssh, http or https. For both formats, old and new, the port and path are optional. For ssh, the path must not be provided. Some examples:

  kpx-ssh://alice:secret@10.0.0.10
  kpx-https://alice:secret@mywebsite.com/login
It is possible to use the KeePassX self expanding variables such as {USERNAME} or {PASSWORD}.
kpx-https://{USERNAME}:{PASSWORD}@mywebsite.com/login
Installation
Download the kpx.as file from GitHub repository.
Open Script Editor from Applications => Utilities, paste the content of kpx.as into it and export it as 'Application', save it as kpx.app somewhere on your disk.
With a text editor like 'vim', edit kpx.app/Contents/PkgInfo and set the content to "APPLokpx" (no double quotes). Edit kpx.app/Contents/Info.plist, set the bundle signature to the last 4 letters of the value in the PkgInfo file, and add details about 'kpx' URL handling; you should end up with something like this:

    <key>CFBundleSignature</key>
    <string>okpx</string>
    <key>CFBundleURLTypes</key>
    <array>
      <dict>
        <key>CFBundleURLName</key>
        <string>KeePassX</string>
        <key>CFBundleURLSchemes</key>
        <array>
          <string>kpx</string>
          <string>kpx-ssh</string>
          <string>kpx-http</string>
          <string>kpx-https</string>
        </array>
      </dict>
    </array>

Note: CFBundleSignature should be there already; just update the string value. CFBundleURLTypes (and its array value) must be added.
Save the files you edited and then execute kpx.app from Finder. This registers the kpx URL handlers. The execution exits practically immediately, but afterwards Keepassx will be able to launch the app for its registered URL schemes.
Again, if you have an older version of Mac OS X, you may need to install the MoreInternet application to register new URL handlers; for details see the blog post about the older version of this script. Read that article if you haven't already, because it provides other useful hints for testing and usage, as well as screenshots.
As mentioned before, iTerm2 is the preferred terminal. If you prefer Terminal.app instead, edit the downloaded kpx.as and replace the line:

  set myTerm to "iTerm2"

with:

  set myTerm to "Terminal"

Then follow the same installation steps as above.
The terminal application is used for ssh handling. For http/https, the Safari browser is used.
An important note: re-install and run the application every time you change something in the AppleScript file kpx.as, before attempting to use Keepassx with the modifications.
Hopefully this article will be useful for some people out there!
There is no time to work on it at the moment, but in the future I am thinking of adding an option to start mosh instead of ssh, and of working with other web browsers (Chromium/Chrome, Firefox or Opera) -- I haven't checked which browsers have support for AppleScript commands. Of course, these or other contributions are welcome!

Roadmap to Kamailio v4.3.0

miconda - Mon, 03/30/2015 - 12:58
The next major release of Kamailio is going to be versioned 4.3.0. The plan to release it was sketched during the last IRC devel meeting back in February, proposing to get it out by the beginning of June 2015. Given there has to be at least one month of testing, the following milestones toward the release date were proposed:
  • freezing the development: Wednesday, April 22, 2015
  • if testing goes smoothly, then branching 4.3 after about one month: during the week starting May 18
  • test more in beta phase, prepare packaging, etc. and release after 2-3 weeks: One of the days between June 4 and 11
You can join the discussion with other suggestions or adjustments on Kamailio mailing lists.

Kamailio World 2015 – VoLTE Testbed and Demo

miconda - Fri, 03/27/2015 - 23:10
Two months till the start of Kamailio World Conference & Exhibition 2015. Prepare yourself for three days full of interesting presentations and demos during May 27-29 in Berlin, Germany!

With the accelerated propagation of LTE and hot discussions about what 5G is going to be, VoLTE is definitely a top topic these days. Kamailio has a consistent set of IMS extensions, making it one of the most flexible options to consider for rolling out VoLTE platforms, already with live deployments in Europe, Asia, Africa and South America.

Kamailio World is the place where you can play with VoLTE yourself: FhG Fokus, Core Network Dynamics and NG Voice are preparing a testbed on site with a local LTE network and a Kamailio-based VoLTE platform. Bring your VoLTE-capable device (e.g., iPhone 6 or most of the latest Android models from Samsung, LG, Huawei …) and experience for yourself the technology of your calls in the near future, with high-definition voice and proper integration with other IP-based services, including WebRTC.
Don’t forget to check the other presentations, workshops and exhibitors, it is going to be one of the best events for real time communications and open source in Europe. Registration is open, be sure you secure your participation before the event is sold out!

Simple PBX tutorial for FreeSWITCH

TXLAB - Thu, 03/19/2015 - 01:53

Here is a short tutorial that helps building a PBX with FreeSWITCH.


Filed under: Networking Tagged: freeswitch, pbx, sip, voip

Kamailio & Statsd – Best Practices

miconda - Tue, 03/17/2015 - 23:08
Eloy Coto Pereiro has published a very interesting article on his blog about using Kamailio and Statsd. Being the developer of the statsd module in Kamailio, he presents more details about the benefits and how to put all the pieces together in order to have statistics exported by Kamailio and graphs built by Graphite. Next is a screenshot from the article of what you can get as a result:

Enjoy!

New Kamailio Module: TCPOPS

miconda - Tue, 03/10/2015 - 23:06

Camille Oudot, from Orange, France, has recently published a new module for Kamailio, collecting a set of configuration file functions for operations on TCP/TLS connections. The module is named tcpops and the documentation is available at:

The module allows admins to enable/disable keepalives per connection, as well as set a custom lifetime for each connection. Camille has also added new functions to the usrloc module that give the admin the ability to close the TCP connection if the registration request that opened it has expired.

TCPOPS will be part of the next release of Kamailio, which is labelled 4.3.

Kamailio 2014 Awards

miconda - Mon, 03/09/2015 - 10:25

Here we are at the 8th edition of the Kamailio Awards, granted for activity during the previous year, 2014. Continuing the tradition, there are two winners in each category. The Kamailio project continued to grow in number of contributions and added plenty of new features. The second edition of the Kamailio World Conference and the major release of version 4.2.0 are among the highlights of 2014.

Next are the categories and the winners!
New Contributions
  • statsd - provides native integration with statsd and graphite from the Kamailio configuration file, allowing you to publish statistics from the Kamailio runtime environment and build graphs that make the monitoring process easier. The module is developed by Eloy Coto Pereiro, from Foehn, UK
  • tsilo - provides a mechanism to add new branches to not-yet-answered calls while other branches are still active. For example, in a world dominated more and more by mobile networks, the module enables Kamailio to forward the INVITE to a new endpoint that just registered (e.g., triggered by a push notification) even if the INVITE was routed earlier to a different destination (e.g., a desk phone). The module is developed by Federico Cabiddu, from Orange Vallee, France
Developer Remarks
  • Lucian Balaceanu - from 1und1 AG - besides the work on the modules published over time by 1&1, he has the merit of taking an interest in the core components, providing several important patches for handling SIP replies via the onsend route as well as pushing new features to modules such as acc, siptrace and sipcapture
  • Luis Azedo - from 2600hz.com - the main developer of the kazoo module, he provided valuable feedback and patches for improvements and new features to many other modules, such as presence, db_text, registrar and usrloc
Advocating
  • Alex Balashov - from evaristesys.com - a long time community member and developer of the project, Alex spent considerable resources traveling within and outside the USA to promote Kamailio; relevant for 2014 were the Cluecon and Kamailio World conferences
  • Giacomo Vacca - from rtcsoft.net - sharing a lot of useful information via his blog (like deploying Kamailio with Puppet or within Docker containers), Giacomo also had a year very rich in traveling, covering Kamailio World, Cluecon and Astricon, presenting about automatic deployments of Kamailio
Technical Support
  • Charles Chance - running Sipcentric Ltd, UK - an active developer with many contributions to the distributed message queue (dmq) module and other components such as htable and registrar, Charles has also been helping very often on the mailing lists
  • Will Ferrer - from switchsoft.com - very active on the users mailing list, with good reports and troubleshooting of problems reported by others. He has helped many to sort out their issues and get their Kamailio running smoothly
Blogging
  • beingasysadmin.wordpress.com - with blog posts on deploying Kamailio for IM and integrating it with FreeSwitch. Besides these, the site has a couple of articles that are very useful for people managing real-time communication platforms
  • loadmultiplier.com - it has an interesting set of blog articles related to Kamailio and IMS. Given that IMS has a complex architecture, any sharing of knowledge is very useful. As LTE is being deployed worldwide at a fast pace, Kamailio and its IMS extensions can help in deploying the VoLTE service
Related Projects
  • Elastix - an open source unified communications server that now has a variant targeting large deployments, packaged with Kamailio as the SIP routing proxy and Asterisk for PBX telephony services
  • Kazoo - an open source, scalable, distributed, cloud-based telephony platform that lets you build powerful telephony applications, with a web management interface and a rich set of APIs to extend and integrate with other systems. Its telephony engine is built using Kamailio and FreeSwitch
Business Initiatives
  • IT Center Portugal - also known for their deployment of several hundred Kamailio-Asterisk pairs offering unified communications services for the Portuguese academic network, IT Center has built a VoIP core platform with Kamailio routing the SIP traffic, relying on OpenStack to offer the elasticity to scale based on load demands
  • Toky.co - a venture from Carlos Ruiz Diaz with support from Wayra, Telefónica's startup accelerator, Toky is leveraging WebRTC technology to offer communication services without the hassle of installing applications. Kamailio with its websocket module is used for routing signaling between WebRTC endpoints.
Events
  • AstriDevCon - somewhat attached to AstriCon, the AstriDevCon is actually the top day of that week for developers working with IP telephony technologies. One of the rare moments in a year where you find, in the same room, a high density of people who have the experience to find a solution for anything going wrong in real-time communications
  • KazooCon - the Kazoo project has a fast growing community, fuelled by an event organized by 2600hz.com at the heart of the IT industry: San Francisco - Silicon Valley. Embedding Kamailio, Kazoo has become a choice for those willing to start with an out-of-the-box telephony system and enhance it with more features offered by Kamailio.

Friends of Kamailio
Awarded to people not necessarily working directly with Kamailio, but whose activity has a good impact on the project and on open source real-time communications.
  • Matthew Jordan - he became a respected leader of the Asterisk project in a very short time, creating a new vibe around the development of the project, at a time of substantial refactoring of the Asterisk application. The success in transforming a big project to cope better with new models of communications demonstrates once again that one can rely on open source for long-term business without fear of being stuck with unmaintained or old technologies. Needless to say, he has always supported the efforts to make Kamailio-Asterisk integration simpler and to clarify the roles of the applications in real-world deployments.
  • Peter Saint-Andre - he is one of the people who has bridged (and still bridges) the open source communities and the standardisation bodies over a very long period of time. Getting involved in the standardisation process is something open source should do more, because it can ensure that its development model is protected and that practical specifications are standardized instead of hypothetical and over-complicated concepts. With open source driven largely by immediate needs, and standardisation bodies caught up in bureaucratic and theoretical approaches, Peter's activity is truly remarkable. Although mostly involved in XMPP, he has worked a lot on the specifications for SIP-XMPP interworking, always welcoming Kamailio when presenting on this topic.

This is it for 2014. If you want to check the previous rounds of awards, visit:

The activity within the Kamailio project during 2015 so far is very rich; check the project web site for announcements on what is new and the plans for the future.

Looking forward to meeting many of you soon in Berlin, during May 27-29, 2015, at the 3rd edition of Kamailio World Conference & Exhibition.

Avoiding Contact Center IVR Hell with WebRTC

webrtchacks - Mon, 03/02/2015 - 14:08

A couple of decades ago, if you bought something of any reasonable complexity, odds are it came with a call center number you had to call in case something went wrong. Perhaps like the airline industry, economic pressures on contact centers shifted their modus operandi from customer delight to cost reduction. Unsurprisingly, this has not done much for contact center public sentiment. It’s no wonder the web came along to augment and replace much of this experience – but nowhere near all of it. Today, WebRTC offers a unique opportunity for contact centers to combine their two primary means of customer interaction – the web and phone calls – and entirely change the dynamic to the benefit of both sides.

To delve into what this looks like, we invited Rob Welbourn to walk us through a typical WebRTC-enabled contact center infrastructure. Rob has been working at the intersection of telephony and web technologies for more than 8 years, starting at Covergence. Rob continued this work, which eventually coalesced into deep enterprise and contact center WebRTC expertise at Acme Packet, Oracle, Cafe X, and now as a consultant for hire.

Please see Rob’s great technology brief on WebRTC architectures in the Contact Center below.

{“intro-by”: “chad“}

      Robert Welbourn

      Introduction

      If ever there was an area where WebRTC is expected to have a major impact, it is surely the contact center.  By now most readers of this blog have seen the Amazon Kindle Fire commercials, featuring the get-help-now Mayday button and Amy, the annoyingly perky call center agent:

      Those in the industry know that Mayday’s voice and video capability use WebRTC, as detailed by Chad and confirmed by Google WebRTC development lead Justin Uberti.   When combined with screen sharing, annotation and co-browsing, this makes for a compelling package. Executives in charge of call centers have taken notice, and are looking to their technology suppliers to spice up their call centers in the same way.

      Indeed, the contact center is a very instructive example of how WebRTC can be used to enhance a well-established, existing system.  For those who doubt that the technology is mature enough for widespread deployment, I’ll let you in on a dirty little secret: WebRTC on the consumer side of the call center isn’t happening in web browsers, it’s happening in mobile apps.  I’ll say more about this later.

      What a Contact Center looks like

      Before we examine how we can turbocharge a contact center with WebRTC, let’s take a look at the main component parts, and some of the pain points that both customers and call center staff encounter in their daily lives.

      (Disclaimer:  This sketch is a simplified caricature of a call center, drawn from the author’s experience with a number of different systems.   The same is true for the descriptions of WebRTC gateways in the following sections, which should be viewed as idealized and not a description of any one vendor’s offerings.)

      Generic Contact Center Architecture

      The web-to-call correlation problem

      Let’s imagine that we’re a consumer, calling our auto insurance company.  Perhaps we’ve been to their website, or maybe we’re using their shiny new mobile app on our smartphone.  Either way, we’ve logged into the insurer’s web portal, to get an update on an insurance claim, update our coverage, or whatever.  (And yes, even if we’re using a mobile app, we’re most likely still communicating with a web server.  It’s only the presentation layer that’s different.)

      Now suppose that we actually want to talk to a human being who can help us.  If we’re lucky, the web site will provide a phone number in an easy-to-find place, or maybe our mobile app will automatically bring up the phone’s dialer to make the call.  However, at this point, all of our contextual information, such as our identity and the web page we were on, gets lost.

      The main problem here is that it is not easy to correlate the web session with the phone call.  The PSTN provides no way of attaching a context identifier from a web session to a phone call, leaving the caller ID or dialed number as the only clues in the call signaling.   That leaves us with the following possibilities:

      • Use the caller ID.  This is ambiguous at best, in that a phone number doesn’t definitively identify a person, and mobile device APIs in any case forbid apps from harvesting a device’s phone number, so it can’t be readily passed into the contact center by the app.
      • Use the called number.  Some contact centers use the concept of the steering pool, where a phone number from a pool is used to temporarily identify a particular session, which could potentially be used by a mobile app.  However, the redial list is the enemy of this idea; since the number is temporarily allocated to a session, you wouldn’t want a customer mistakenly thinking they could use the same number to call back later.
      • Have the contact center call the customer back when it’s their turn, and an agent is about to become available.  This is in fact a viable approach, but complex to implement, largely for reasons of not tying up an agent while an attempt is made to reach the customer and verify they still want the call.
      • Use WebRTC for in-app, contextual communications.
      Customer-side interaction

      But let’s continue with the premise that the customer has made a regular phone call to the contact center.  From the diagram above, we can see that the first entity the call hits is the ingress gateway (if via TDM) or Session Border Controller (if via a SIP trunk).  This will most likely route the call directly to an Interactive Voice Response (IVR) system, to give the caller an opportunity to perform self-service actions, such as looking up their account balance.  Depending on the vendor, the ingress gateway or SBC may itself take part in the interaction, by hosting a VoiceXML browser, as is the case with Cisco products; or else the IVR may be an application running on a SIP-connected media server platform.

      Whatever the specific IVR architecture, it will certainly connect to the same customer database used by the web portal, but using DTMF to input an account number and PIN, rather than a username and password.  If the customer is lucky, they have managed to find an account statement that tells them what their account number is; if not, the conversation with the agent is going to start by having them spell their name, give the last four digits of their Tax ID, and so on.  Not only that, but if a PIN is used, it is doubtless the same one used for their bank card, garage door opener and everything else, which hardly promotes security.  This whole process is time-consuming for both customer and agent, error-prone, and generally frustrating.

      At this point the IVR has determined who the caller is, and why they are calling – “Press 1 for auto claims, 2 for household claims…”; the call now needs to be held in a queue, awaiting a suitably qualified agent.  The job of managing the pool of agents with their various skills, and the queues of incoming calls, is the job of the Automated Call Distributor (ACD).  An ACD typically has a well-defined but proprietary interface or protocol by which it interacts with an IVR.  The IVR will submit various data items to the ACD, notably the caller ID, called number, customer identity and required skill group.  The ACD may then itself interrogate the customer database, perhaps to determine whether this is a customer who gets priority service, or whether they have a delinquent account and need to be handled specially, and so on, so that the call can be added to the appropriate queue.  The ACD may also be able to provide the IVR with the estimated wait time for an agent, for feedback to the caller.

      Agent-side interaction

      Let’s turn for a moment to the agent’s side of the contact center.  An agent will invariably have a phone (whether a physical device or a soft client), an interface to the ACD (possibly a custom “thick client”, but increasingly a web-based one in modern contact centers) and a view into the customer database.  For business-to-business contact centers, the agent may also be connected to a CRM system: Salesforce.com, Siebel, Oracle CRM, Microsoft Dynamics, and so on.

      For the purposes of our discussion, the agent’s phone is connected to a PBX, the PBX will provide call status information to the ACD using a standard telephony interface such as JTAPI, and the ACD will in turn use the same interface to direct incoming calls to agents.  This would typically be the case where an organization has a Cisco or Avaya PBX, for example, and the use of standard JTAPI allows for the deployment of a multi-vendor call center.  Other vendors, notably Genesys, have taken the approach of building their call center software using a SIP proxy as a key component, and the agents register their phones directly with the ACD rather than with a PBX.

      The agent will log into the ACD at the beginning of their shift, signaling that they are available.  Call handling is then directed by the ACD, and when a call is passed to an agent, the ACD pushes down the customer ID to the agent’s desktop, which is then used to automatically do a “screen pop” of the customer’s account details from the customer database or CRM system.

      Call handling in a contact center is thus a complex orchestration between an IVR, ACD, PBX and various pieces of enterprise software, usually requiring the writing of custom scripts and plugins to make everything work together.  Not only this, but contact centers also make use of analytics software, call recording systems, and so on.

      The caller experience

      Let’s return to our caller, parked on the IVR and being played insipid on-hold music.  When the call eventually reaches the head of the queue, the ACD will instruct the IVR to transfer the call to a specific agent.  The agent gets the screen pop, asks the caller to verify their identity, and then begins the process of asking why they called.

      To summarize, the contact center experience typically involves:

      • Loss of contextual information from an existing web or mobile app session.
      • Navigating IVR Hell.
      • Waiting on hold.
      • Re-establishing identity and context.
      • A voice-only experience with a faceless representative, and lack of collaboration tools.

      It’s no wonder this is judged a poor experience, for customers and contact center agents alike.

      Adding WebRTC to the Contact Center

      WebRTC is part of what the contact center business calls the “omnichannel experience”, in which multiple modalities of communication between a customer and the contact center all work together seamlessly.  An interaction may start on social media, be escalated to chat, from there to voice and video, and possibly be accompanied by screen sharing and co-browsing.  But how is this accomplished?

      Contact Center with WebRTC Gateways and Co-browse Server

      The key thing to hold in mind is that voice and video are integrated into the contact center app, and that context is at all times preserved.  As a customer, you have already established your identity with the contact center’s web portal; there’s no need to have the PSTN strip that away when you want to talk to a human being.  And when you do get put through to an agent, why shouldn’t they be able to view the same web page that you do?  (Subject to permission, of course.)

      To do this, we need the following components (shown colored purple in the above diagram):

      • A back-end to the web portal that is capable of acting as a pseudo-IVR.  As far as the ACD is concerned, it’s getting a regular incoming call, which has to be queued and transferred to an agent as usual.  The fact that this is a WebRTC call and not from the PSTN is totally transparent to the ACD.
      • A co-browsing server – this acts as a rendezvous point between the customer and the agent for a particular co-browsing session, where changes to the customer’s web page are published over a secure WebSockets (WSS) connection, and the agent subscribes to those changes.  The actual details of how this works are proprietary and vary between vendors; however, the DOM Mutation Observer API is generally at the heart of the toolkit used.  When the agent wishes to navigate on behalf of the customer, mouse-click  events are sent back over the WSS connection from the agent and injected into the customer’s web page using a JavaScript or jQuery simulated mouse click event.  Annotation works similarly, with a mousedown event being passed over the WSS connection and used to paint on an HTML canvas element overlaying the customer’s web page.
      • A WebRTC-to-SIP signaling gateway (as webrtcHacks has covered here).
      • A media gateway, which transforms the SRTP used by WebRTC to the unencrypted RTP used by most enterprise telephony systems, and vice-versa.  This element may also carry out H.264 to VP8 video transcoding and audio codec transcoding if required.

      The signaling and media gateways are common components for vendors selling WebRTC add-ons for legacy SIP-based systems, and are functionally equivalent in the network to a Session Border Controller.  Indeed, several such products are based on SBCs, or a combination of an SBC for the media and a SIP application server for the signaling gateway.  On the other hand, the pseudo-IVR and co-browse servers are rather more specialized elements, designed for contact center applications.

      The work of this array of network elements is coordinated by the web portal, using their APIs and supporting SDKs.  The sequence diagrams in the next section show how the web portal and the ACD between them orchestrate a WebRTC call from its creation to being handed off as a SIP call to an agent, and how it is correlated with a co-browsing session.

      Finally, it should be noted that a reverse HTTP proxy is generally required to protect the web servers in this arrangement, which reside within the inner firewall.  The media gateway would normally be placed within the DMZ.  The use of multiplexing to allow the media streams of multiple calls to use a single RTP port is a particularly noteworthy feature of WebRTC, which is deserving of appreciation by those whose job it is to manage firewalls.

      Call Flows

      In the diagrams that follow, purple lines indicate web-based interactions, often based on REST APIs. Some interactions may use WebSockets because of their asynchronous, bidirectional nature, which is particularly useful for call signaling and event handling.

      Preparing for a call

      Let us start at the point where the customer has already been authenticated by the web portal, and has been perusing their account details. Seeing the big friendly ‘Get Help’ button on their mobile app (remember, this is a mobile-first deployment), they decide they want to talk to a human. Inevitably, an agent is never just sitting around waiting for the call, so there is work to be done to make this happen.

      WebRTC Contact Center Call Flow, Part 1

      The first step in preparing for the call is for the web portal code to allocate a SIP identity for the caller, in other words, the ‘From’ URI or the caller id. This could be any arbitrary string or number, but it should be unique, since we’re also going to use it to identify the co-browse session. Next, the portal requests the WebRTC signaling gateway to authorize a session for this particular URI, because, well, you don’t want people hacking into your PBX and committing toll fraud using WebRTC. The signaling gateway obliges, and passes back to the web portal a one-time authorization token. Armed with the token, the portal instructs the client app (or browser) to prepare for the WebRTC call. It provides the token, the From URI, the location of a STUN server and information on how to contact the signaling gateway.

      While the client is being readied, the portal makes a web services call to the ACD to see when an agent is expected to become available, given the customer’s identity and the nature of their inquiry. (The nature of the inquiry will be determined by what page of the website or app they were on when they pressed the ‘Get Help’ button.) Assuming an agent is not available at that very moment, the portal passes back the estimated wait time to be displayed by the client.

      But what about the insipid on-hold music I mentioned earlier? Don’t we need to transfer the customer to a media server to play this? Well, no, we don’t. This is the Web we’re talking about, and we can readily tell the client to play a video from YouTube, or wherever, while they are waiting.

      Next, the web portal submits the not-yet-created call to the ACD for queuing, via the pseudo-IVR component. Key pieces of information submitted are the From URI, the customer ID and the queue corresponding to the reason for the call. When the call reaches the head of the queue, the ACD instructs the call to be transferred to the selected agent.

      (Side-note: Pseudo-IVR adapters for contact centers are used for a variety of purposes. They may be used to dispatch social media “tweets”, inbound customer service emails and web-chat sessions, as well as WebRTC calls.)

      For modern deployments, agent desktop software may be constructed from a web-based framework, which allows third-party plugin components to pull information from the customer database, to connect to a CRM system, and in our case, to connect to the co-browse server. The screen pop to the agent uses the customer URI to connect to the correct session on the co-browse server.

      Making the call

      Now that the customer and agent are both ready, the web portal instructs the WebRTC client to call the agent’s URI. The actual details of how this is done depend on the vendor-specific WebRTC signaling protocol supported by the gateway; however, on the SIP side of the gateway they are turned into the standard INVITE, with the SDP payload reflecting the media gateway’s IP address and RTP ports.

      WebRTC Contact Center Call Flow, Part 2

      The fact that this is a video call is transparent to the ACD. The pool of agents with video capability can be put in their own skill group for the purposes of allocating them to customers using WebRTC.  The agents could be using video on suitably equipped phone handsets, or they could themselves be using WebRTC clients.  Indeed, some contact center vendors with whom I have spoken point to the advantages of delivering the entire agent experience within a browser: delivering updates to a SIP soft client or thick-client agent desktop software then becomes a thing of the past.

      After the video session has been established, the customer may assent to the sharing of their mobile app screen or web-browsing session.  The co-browse server acts as the rendezvous point, with the customer’s unique URI acting as the session identifier.

      Concluding thoughts: It’s all about mobile, stupid!

      The fact that WebRTC is not ubiquitous, that it is not currently supported in major browsers such as Safari and Internet Explorer, might be thought an insurmountable barrier to deploying it in a contact center.  But this is not the case.  The very same infrastructure that works for web browsers also works for mobile apps, which in many cases are simply mobile UI elements placed on top of a web application, making the same HTTP calls on a web server.

      All that is required is a WebRTC-based SDK that works in Android or iOS.  Happily for us, Google has made its WebRTC code readily available through the Chromium project.  Several vendors have made that code the basis of their mobile SDKs, wrapping them with Java and Objective-C language bindings equivalent to the JavaScript APIs found in browsers.

      For contact center executives, a mobile-first approach offers the following advantages:

      • You don’t want your customers messing around trying to install WebRTC browser plugins for Safari and IE.  If they’re going to download anything, it may as well be your mobile app.
      • Mobile devices are near-ubiquitous.  Both Pew and Nielsen report their popularity amongst older demographics in particular, where regular PCs might not be used.
      • Microphones and cameras on mobile devices are near-universal and of excellent quality, and echo cancellation works well.  That old PC with a flaky webcam?  Perhaps not so much.
      • If your customer is having a real-world problem, then the back-facing camera on a phone or tablet is a great way of showing it.  The auto insurance industry comes readily to mind.
      • Although the Great Video Codec Compromise now promises H.264 support in browsers, mobile SDKs have been able to take advantage of those devices’ hardware support for H.264 video encoding for some time.  When your contact center agents have sleek enterprise-class, video-capable phones that don’t support VP8, you don’t want to have to buy a pile of servers simply to do video transcoding.

      In the call center industry, Amazon and American Express have shown the way in supporting video in their tablet apps, and both these services use WebRTC under the hood.  Speaking at the 2014 Cisco Live! event in San Francisco, Amex executive Todd Walthall related how users of the Amex iPad app who used the video feature had greater levels of customer satisfaction, through a more personal experience.  This should not surprise us, as it’s much easier to empathize with a customer service representative if they’re not just a disembodied voice.

      For companies deploying WebRTC, it’s an incremental approach that doesn’t require significant architectural change or the replacement of existing systems. Early adopters are seeing shorter calls, as context is preserved and co-browsing allows problems to be resolved more quickly. One day we will look back at IVR Hell, waiting on endless hold with only a lo-fi rendition of Mantovani for company, trying in vain to find our account number and PIN, as if it were a childhood nightmare.

      {“author”: “Robert Welbourn“}

      Want to keep up on our latest posts? Please click here to subscribe to our mailing list if you have not already. We only email post updates. You can also follow us on Twitter at @webrtcHacks for blog updates and news of technical WebRTC topics, or our individual feeds @chadwallacehart, @reidstidolph, @victorpascual and @tsahil.

      The post Avoiding Contact Center IVR Hell with WebRTC appeared first on webrtcHacks.
