WebRTC has come up in relation to the New York Times – not in an article covering it, and not in a new video chat service they now offer.
I was greeted this weekend by this interesting tweet:
WebRTC being used now by embedded 3rd party on http://t.co/AaD7p3qKrE to report visitors' local IP addresses. pic.twitter.com/xPdh9v7VQW
— Mike O'Neill (@incloud) July 10, 2015
I haven’t been able to confirm it – I didn’t find the culprit piece of code in the several minutes I spent searching for it – but it may well be genuine.
The New York Times may well be using WebRTC to (gasp) find your private IP address.
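For context, the technique being described needs no camera or microphone permission – creating a peer connection is enough to start ICE gathering, and host candidates carry local addresses. Below is a minimal sketch of that trick (my own illustration, not the code that was spotted on nytimes.com); note that modern browsers now mask host candidates behind mDNS names, which blunts it considerably.

```typescript
// Minimal sketch: harvest local ICE candidates from a page without any
// getUserMedia permission prompt. This illustrates the technique discussed
// above; it is not the code found on the NYT page.
const pc = new RTCPeerConnection({ iceServers: [] });
pc.createDataChannel('probe'); // any m-line is enough to kick off ICE gathering

pc.onicecandidate = (event) => {
  if (!event.candidate) return; // a null candidate marks the end of gathering
  // Host candidates look like "candidate:... typ host ..." and carry an address
  const match = /(\d{1,3}(\.\d{1,3}){3})/.exec(event.candidate.candidate);
  if (match) {
    console.log('local candidate address:', match[1]);
  }
};

pc.createOffer()
  .then((offer) => pc.setLocalDescription(offer))
  .catch(console.error);
```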
In the WebRTC Forum on Facebook, a short exchange took place between Cullen Jennings (Cisco) and Michael Jerris (FreeSWITCH):
Cullen: I’ve been watching this for months now – Google ads served on Slashdot for example, and many other sites do this. I don’t think it is exactly to get the local IP. I agree they get that, but I think there are more interesting things gathered as straight-up fingerprinting.
Michael: local ip doesn’t seem that useful for marketers except as a user fingerprinting tool. They already have your public ip, this helps them differentiate between people behind nat. it’s a bit icky but not such a big deal. This issue blows up again when someone starts using it maliciously, which I’m sure will happen soon enough. I don’t get why exactly we don’t just prompt for this the same way we do camera and mic, it wouldn’t be a huge deal to work that into the spec. That being said, I don’t think it’s actually as big of a deal as it has been made either
Cullen: It’s not exactly clear to me how one uses this maliciously. I can tell you most people’s IP address right now – 192.168.0.1 – and knowing that a large percentage of the world has that local IP doesn’t directly help you hack much. To me the key thing is browsers need to not allow network connections to random stuff inside the firewall that is not prepared to talk to a browser. I think the browser vendors are very aware of this and doing the right thing.
My local IP address is 10.0.0.1 which is also quite popular.
In recent months, we’ve seen a lot of FUD about WebRTC and the fact that it leaks local IP addresses. I’ve been struggling myself to understand what the fuss is about. It does seem bad – a web page knowing too much about me. But how is that hurting me in any way? I am not a security expert, so I can’t really say, but I do believe the noise levels around this topic are higher than they should be.
When coming to analyze this, there are a couple of things to remember:
One thing is clear. WebRTC has a lot more uses than its original intended capability of simply connecting a call.
The post WebRTC on the New York Times – Not as an Article or a Video Chat Feature appeared first on BlogGeek.me.
Atlassian’s HipChat acquired BlueJimp, the company behind the Jitsi open source project. Other than wishing them well, why should WebRTC developers care? Well, Jitsi had its Jitsi Videobridge (JVB), which was one of the few open source Selective Forwarding Unit (SFU) projects out there. Jitsi’s founder and past webrtcHacks guest author, Emil Ivov, was a major advocate for this architecture, both in the standards bodies and in public. As we have covered in the past, SFUs are an effective way to add multiparty video to WebRTC. Beyond this one component, Jitsi was also a popular open source project for its VoIP client, XMPP components, and much more.
So, we had a bunch of questions: what’s new in the SFU world? Is the Jitsi project going to continue? What happens when an open source project gets acquired? Why the recent licensing change?
To answer these questions I reached out to Emil, now Chief Video Architect at Atlassian and Jitsi Project Lead, and to Joe Lopez, Senior Development and Product Manager at Atlassian, who is responsible for establishing and managing Atlassian’s Open Source program.
webrtcHacks: It has been a while since we have covered multi-party video architectures here. Can you give us some background on what an SFU is and where it helps in WebRTC?
Emil: A Selective Forwarding Unit (SFU) is what allows you to build reliable and scalable multi-party conferences. You can think of them as routers for video, as they receive media packets from all participants and then decide if and who they need to forward them to.
Compared to other conferencing servers, like video mixers – i.e. Multipoint Control Units (MCUs) – SFUs only need a small amount of resources and therefore scale much better. They are also significantly faster as they don’t need to transcode or synchronize media packets, so it is possible to cascade them for larger conferences.
webrtcHacks: is there any effort to standardize the SFU function?
Emil: Yes. It is true that SFUs operate at the application layer, so, strictly speaking, they don’t need to be standardized the way IP routers are. Different vendors implement different features for different use cases and things work. Still, as their popularity grows, it becomes more and more useful for people to agree on best practices for SFUs. There is an ongoing effort at the IETF to describe how SFUs generally work: draft-ietf-avtcore-rtp-topologies-update. This helps the community understand how to best build and use them.
Having SFUs well described also helps us optimize other components of the WebRTC ecosystem for them. draft-aboba-avtcore-sfu-rtp-00.txt, for example, talks about how a number of fields that encoders use are currently shared by different codecs (like VP8 and H.264) but are still encoded differently, in codec-specific ways. This is bad, as it means developers of more sophisticated SFUs suddenly need to start caring about codecs – and the whole point of moving away from MCUs was to avoid doing that. Therefore, work like draft-berger-avtext-framemarking and draft-pthatcher-avtext-esid aims to take shared information out of the media payload and into generic RTP header extensions.
Privacy is another area that has seen significant activity at the IETF, with work on improving end-to-end privacy in SFUs. All existing SFUs today need to decrypt all incoming data before they can process it and forward it to other participants. This obviously puts SFUs in a position to eavesdrop on calls, which, unless you are running your own instance just for yourself, is not a great thing. It also means that the SFU needs to allocate a lot of processing resources to transcrypting media, and avoiding this would improve scalability even further.
webrtcHacks: Other than providing multi-party video functionality, what else do SFUs like the JVB do?
Emil: Simple straightforward relaying from everyone to everyone – what we call full star routing – means you may end up sending a lot of traffic to a lot of people … potentially more than they care or are able to receive. There are two main ways to address that issue.
First, you can limit the number of streams that everyone receives. This means that in a conference with a hundred participants, rather than getting ninety-nine streams, everyone would only receive the streams for the last four, five or N active speakers. This is what we call Last N, and it is something that really helps scalability. Right now N is a config param in JVB deployments, but we are working on making it adaptive, so that the JVB adjusts it based on link quality.
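To make the Last N idea concrete, here is a toy selection routine – my own sketch of the concept, not Jitsi’s implementation:

```typescript
// Toy sketch of Last N: forward video only from the N most recently active
// speakers; everyone else's video is simply not relayed.
interface Participant {
  id: string;
  lastSpokeAt: number; // timestamp of the last detected speech activity
}

function selectLastN(participants: Participant[], n: number): Set<string> {
  return new Set(
    [...participants]
      .sort((a, b) => b.lastSpokeAt - a.lastSpokeAt)
      .slice(0, n)
      .map((p) => p.id)
  );
}

function shouldForwardVideo(senderId: string, receiverId: string, lastN: Set<string>): boolean {
  // never loop a participant's own video back; otherwise forward only Last N
  return senderId !== receiverId && lastN.has(senderId);
}
```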
Another way we improve bandwidth usage is by using “simulcast”. Chrome has the option of generating multiple outgoing video streams in different resolutions. This allows us to pick the higher resolution for active speakers (presumably those are the ones you would want to see in good quality) and resend it to participants who can afford the traffic, while just relaying the lower resolution to everyone else.
A few other SFUs, like the one Google Hangouts uses, implement simulcast, as it saves a lot of resources on the server as well as on the client side. We are actually working on some improvements there right now.
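For readers who want to see what simulcast looks like from the sending client, here is a sketch using the sendEncodings API – a modern illustration only; at the time of this interview Chrome enabled simulcast through SDP-level signaling rather than this API.

```typescript
// Offer three resolutions of the same camera track; an SFU can then forward
// the high layer for the active speaker and the low layer to everyone else.
async function publishSimulcast(pc: RTCPeerConnection): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  pc.addTransceiver(stream.getVideoTracks()[0], {
    direction: 'sendonly',
    streams: [stream],
    sendEncodings: [
      { rid: 'low',  scaleResolutionDownBy: 4, maxBitrate: 150_000 },
      { rid: 'mid',  scaleResolutionDownBy: 2, maxBitrate: 500_000 },
      { rid: 'high', maxBitrate: 1_500_000 },
    ],
  });
}
```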
webrtcHacks: Joe – now that you have had a chance to get to know the Blue Jimp/Jitsi team and technology, can you give an update on your plans to incorporate Jitsi into HipChat and Atlassian?
Joe: Teams of all sizes use HipChat every day to communicate in real-time all over the world. Teams are stronger when they feel connected, and video is an integral part of that. Users have logged millions of minutes of 1:1 video using HipChat, which helps teams collaborate and build their company culture regardless of whether they work in the same location. With Jitsi we can give users so much more! We’re in the process of developing our own video, audio, and screen-sharing features using Jitsi Video Bridge. It’s a little early for us to comment on exact priority order and timing, but our objective is to make it easier for teams to connect effortlessly anywhere, anytime, on any device.
webrtcHacks: what is the relationship between the core Atlassian team, HipChat, and Jitsi? How does Jitsi fit in your org structure?
Joe: The Jitsi team joined Atlassian as part of HipChat team and makes up the core of our video and real-time communication group. They are experts on all things RTC and we’re now leveraging their expertise. The Jitsi developers are relocating to our Austin offices and will keep working on Jitsi and Atlassian implementations.
webrtcHacks: Can you disclose the terms of the deal?
Emil: I can only say that the acquisition was a great thing for both BlueJimp and Jitsi.
Joe: Same here. We really wanted to add to our team the sort of expertise and technology that BlueJimp brings.
webrtcHacks: I had to try… So, how big is the Jitsi community? Do you know how many active developers you have using your various projects?
Emil: We haven’t been tracking our users so it’s hard to say. In terms of development, a lot of the work on Jitsi Videobridge and Jitsi Meet is done by BlueJimp, but we are beginning to get some pretty good patches. Hopefully, the trend will continue in that direction.
As for the Jitsi client, we haven’t had a lot of time for it in the past couple of years so community contributions there are likely surpassing those of the company.
webrtcHacks: Your public statements indicate the primary focus of the acquisition was the Jitsi Videobridge. Jitsi had many other popular products, including the Jitsi client, a TURN server, and many others. What is the future of these other projects? Does Atlassian have justification to continue to maintain these elements?
Joe: Our plan is to continue developing the Jitsi Videobridge as well as the other projects including libjitsi, Jitsi Meet, Jirecon, Jigasi and other WebRTC related projects in the Jitsi community. We’re also going to continue providing the build infrastructure for the Jitsi client just as BlueJimp has been doing. But, we don’t have immediate plans for substantial development on the purely client-side.
Emil: It’s worth pointing out that the heart of Jitsi Videobridge, libjitsi, is something it shares with the Jitsi client. There’s also a lot of code that Jigasi, our SIP gateway, imports directly from the client. So, while the upper UX layers in the client are not our main focus, we will continue working heavily on the core.
Above all, however, the developer community around the client is much older and more mature than that of our newer projects. There are developers like Ingo Bauersachs or Danny van Heumen, for example, who have been long involved with the project and who continue working on it. Developers coming and going or changing focus is a natural part of any FLOSS project, and as Joe mentioned, we are going to continue providing the logistics.
webrtcHacks: While Atlassian has initiatives to help open source projects, your core products are not based on open source and you are not known for having many open source projects of your own. Can you address this concern? Is the Jitsi acquisition an attempt to change this? If so, what else has Atlassian done to accommodate more of an open source culture internally?
Joe: Actually, Atlassian uses, supports and develops a number of open source projects. We just haven’t been very vocal about it. That will be changing soon. In addition to managing our real-time communication project, I’m also responsible for our open source program. We’re in the process of restructuring how Atlassian supports open source, and the Jitsi project is one of the first initiatives in our plan. We see Jitsi as a great opportunity, and we are *very* serious about making this project a success. We’ll have more to say about open source later this year.
Finally we think that Jitsi Videobridge is the most advanced open source Selective Forwarding Unit, which puts the team in a unique position to contribute to the WebRTC ecosystem. We are very keen on doing this.
webrtcHacks: last week you moved all the Jitsi licenses from LGPL to Apache.
Emil, why did you choose LGPL in the first place?
Emil: Ever since we started the project, one of our primary motivations had been to get our code in the hands of as many people as possible, so we wanted to lower adoption barriers. Licensing is one of the important components here and, during its very early stages, around 2003, 2004, Jitsi (then SIP Communicator) started with an Apache license.
Then we had to think a little bit more seriously about how we were going to make a living off of our work, because otherwise there wouldn’t have been any project at all. That’s when we thought we might be better off if BlueJimp had protection and decided to switch to LGPL.
So, although it wasn’t our first choice, it did give us a certain measure of protection.
I am very happy that Atlassian has decided to take the risk and relinquish that protection. I firmly believe this is the best option for Jitsi and its users.
webrtcHacks: Joe – why the move to Apache? Why not other licenses like MIT, BSD, etc?
Joe: As for why Apache over MIT/BSD, it’s actually very simple: like many organizations, Apache is our preferred license when using other people’s open source work. So, it made sense to us that this is what we should choose for our own projects. We talked with a number of people internally and externally, and even went so far as to evaluate all licenses. But our technical and legal experts found Apache to be a tried and tested license, respected by many organizations for its terms and clarity. At the end of the day we chose Apache because it best fit our organization and others.
webrtcHacks: how do you expect this license change will impact existing Jitsi users?
Emil: Very positively! A number of developers and companies are looking at using Jitsi Videobridge for their new startups, products and services. We expect the Apache license to make Jitsi significantly more appealing to them.
When you are integrating a technology, the more permissive the license is, the less it precludes you from certain choices in the future. When launching a new service or a product, it is very hard to know that you would never need to keep some parts of it proprietary. This is especially true for startups, and I am saying it from experience.
You simply need to keep that option open because sometimes it makes all the difference between a company closing its doors or thriving for years.
That’s the liberty that you get from Apache.
webrtcHacks: the Meet application was previously a MIT license. How are you handling that? Some argue that going from MIT to Apache is a step in the wrong direction.
Emil: That’s true – the first lines of code in the Jitsi Meet project did come under MIT. But there’s not much to handle there. The MIT license allows for code to be redistributed under any other license, including Apache, and Joe already pointed out why we think Apache is a better choice.
Joe: There is also a purely practical side to this. As I mentioned, we’re in the process of restructuring our open source story, and Jitsi is one of the first in this effort. So, it’s important for us to apply the same policy everywhere. The more exceptions we have, the harder it will be to manage and ensure a good experience for any Atlassian contributor.
webrtcHacks: Jitsi was known in the past for soliciting community input before making major decisions. Why didn’t you announce the plans to change your licensing model before the actual change this time?
Emil: Knowing the project as I do, it just never crossed my mind that this would be a problem for anyone. Throughout the past years I only heard concerns from people that found the LGPL too restrictive for them, so I only expected positive opinions. And the overwhelming majority have reacted positively.
For the few people who have raised concerns, let me reiterate that we think this is the best possibility for Jitsi, and we also need to be practical and use a uniform license for all Atlassian projects.
People who feel that LGPL was a better match for them are completely free to take last week’s version of the project and continue maintaining it under that license.
webrtcHacks: what level of transparency can the Jitsi community expect going forward?
Emil: This is actually one of the main ways in which BlueJimp’s acquisition is going to be beneficial to Jitsi.
A lot of the work that BlueJimp did in the past was influenced by customer demand. As a result, we never really knew exactly what to expect a month in the future. This is now over. Today it is much easier for us to define a roadmap and stick to it. Obviously we will still remain flexible as we listen to requests and important use cases from the community, but we are going to have significantly more visibility than before.
webrtcHacks: the GitHub charts indicate a slowdown in activity vs. last year. Was this due to distraction from the acquisition? What level of public commits should we expect out of the new Atlassian Jitsi team going forward?
Emil: As with any project, there’s a lot that needs to be done in the early stages. Ninety percent of what you do is push code. This gradually changes with time as the problems you are solving become more complex. At that point you spend a lot of time thinking, testing and debugging. As a result your code output diminishes.
Take our joint efforts with Firefox, for example. This took a lot of time looking through Wireshark traces, debugging and making small adjustments. The time it took to actually write the code was negligible compared to everything else we needed to do. Still, adding Firefox compatibility was important to Jitsi, and that happened within Atlassian.
In addition, the entire team is relocating to Austin, and a relocation can be time-consuming.
But, there haven’t been any private commits, if that’s what you are thinking of :).
webrtcHacks: can you share some of your roadmap & plans for Jitsi?
Emil: Gladly! We are really excited to continue working on what makes Jitsi Videobridge the most advanced SFU out there. This includes things like bandwidth adaptivity, for instance. We have big changes coming to our Simulcast and Last N support. Scalability and reliability will also be a main focus in the next months. This includes being able to do more conferences per deployment but also more people per conference. We are also going to be working on mobile, to make it easier for people to use the project on iOS and Android. Supporting other browsers and switching to Maven are also on the roadmap.
We’re not ready to say when, or in what order these things will be happening – but they’re coming.
webrtcHacks: should we expect to see the Jitsi source move from github to bitbucket?
Joe: We’re keeping Jitsi on GitHub, since they excel at being a place for open source projects. Bitbucket is better designed for software teams within organizations that want greater control over their source code, to restrict access within their organization, teams, or even to specific individuals. However, one area that we do want to address is issue tracking. This has been a source of pain for Jitsi, so we’re considering moving issue tracking to JIRA, Atlassian’s issue tracking and management software, which will provide us with everything we need for better project management.
We will be discussing this with the community in the coming weeks.
{“interviewer”, “chad“}
{“interviewees”, [“Emil Ivov“, “Joe Lopez“]}
Want to keep up on our latest posts? Please click here to subscribe to our mailing list if you have not already. We only email post updates. You can also follow us on twitter at @webrtcHacks for blog updates and news of technical WebRTC topics or our individual feeds @chadwallacehart, @victorpascual and @tsahil.
The post Can an Open Source SFU Survive Acquisition? Q&A with Jitsi & Atlassian HipChat appeared first on webrtcHacks.
Enterprise web meetings · Video conferencing · Medium · Voice, Video · WebRTC video conferencing for the enterprise.
[If you are new around here, then you should know I’ve been writing about WebRTC lately. You can skim through the WebRTC post series or just read what WebRTC is all about.]
I have been following 3CX for several years. They were one of the first enterprise communication solution vendors to offer WebRTC. Recently, they introduced a new standalone service called 3CX WebMeeting. It has all the expected features of an enterprise multiparty video calling service. And it uses WebRTC.
I had a chat with Nick Galea, CEO of 3CX. I wanted to know what they are doing with WebRTC and what his impressions of it are.
Here are his answers.
What is 3CX all about?
3CX provides a straightforward and easy to use & manage communication solution that doesn’t lack in functionality or features and is still highly affordable. We recognised that there was a need for a Windows-based software PBX and so this is where 3CX began.
Given the fact that the majority of businesses already use Windows, 3CX provides a solution that is easy to configure and manage for IT admins. There’s no need for any additional training that can be time-consuming and costly. We also help businesses save money on phone bills through SIP trunking and free interoffice calls, and travel costs can be reduced by making use of video conferencing with 3CX WebMeeting. As a UC solutions provider, we focus on cost savings, management, productivity and mobility, and we help our customers to achieve improvements in all four aspects.
Our focus is on innovation and thus, our development team works nonstop to bring our customers and partners the very best. We are always looking out for the latest great technologies and how we can use them to make 3CX Phone System even better and so of course, WebRTC was a technology that we just had to implement.
You decided to plunge into the waters and use WebRTC. Why is that?
To us, unified communications is not only about bringing all methods of communication into one user-friendly interface, but about making those methods of communication as seamless, enjoyable and productive for all involved, whether that be for the organisation that invested in the system, or a partner or client that simply has a computer and internet connection to work with.
Running a business is not an easy feat, and the whole purpose of solutions such as 3CX Phone System and 3CX WebMeeting is to make everyday business processes easier. So, for us, WebRTC was a no-brainer. We believe in plugin-free unified communications and with such technology available for us to leverage, the days of inconvenient downloads and time-consuming preparation in order to successfully (or in some cases, unsuccessfully) hold a meeting are over.
What signaling have you decided to integrate on top of WebRTC?
Signalling is performed over WebSocket for maximum compatibility. Messages and commands are enveloped in JSON objects. ICE candidates are generated by our server library, while SDPs are parsed and translated by the MCU. This allows full control over SDP features like FEC and RTX in order to achieve the best video performance.
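As an illustration of what JSON-enveloped signaling over a WebSocket can look like (the message names and fields below are hypothetical, not 3CX’s actual protocol):

```typescript
// A minimal sketch of JSON-enveloped signaling over a WebSocket, in the spirit
// of what is described above. Message shapes and the URL are placeholders.
type SignalMessage =
  | { type: 'offer'; room: string; sdp: string }
  | { type: 'answer'; room: string; sdp: string }
  | { type: 'candidate'; room: string; candidate: RTCIceCandidateInit };

const ws = new WebSocket('wss://example.invalid/signaling'); // placeholder URL
const pc = new RTCPeerConnection();

pc.onicecandidate = ({ candidate }) => {
  if (candidate) {
    ws.send(JSON.stringify({ type: 'candidate', room: 'demo', candidate: candidate.toJSON() }));
  }
};

ws.onmessage = async (event) => {
  const msg: SignalMessage = JSON.parse(event.data);
  if (msg.type === 'offer') {
    await pc.setRemoteDescription({ type: 'offer', sdp: msg.sdp });
    const answer = await pc.createAnswer();
    await pc.setLocalDescription(answer);
    ws.send(JSON.stringify({ type: 'answer', room: msg.room, sdp: answer.sdp }));
  } else if (msg.type === 'candidate') {
    await pc.addIceCandidate(msg.candidate);
  }
};
```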
Backend. What technologies and architecture are you using there?
The platform is based on a web application written in PHP. We developed a custom MCU service (actually it’s a Selective Forwarding Unit, aka SFU). This service allows us to handle a very large number of media streams in real time. Performance is optimized to reduce latency to a minimum. Raw media streams can be saved to disk, and our Converter Service then automatically produces a standard video file of the meeting recording.
A key component of the web application is the MCU Cluster Manager, which is able to handle several MCUs scattered across different areas, distribute load and manage user location preferences.
Since you cater to the enterprise, can you tell me a bit about your experience with Internet Explorer, WebRTC and customers?
So far most people are using Chrome without any complaints, so it doesn’t concern me that WebRTC is not supported by Internet Explorer. We haven’t come across any issues with customers, as they are aware that this is a limitation of the technology and not of the software. In fact, our stats show that 95% of people connect or reconnect with Chrome after receiving the warning message, so for most users Chrome is not a problem.
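A warning flow like the one described can be as simple as a capability check before joining – a sketch of the general idea, not 3CX’s actual code:

```typescript
// Check for WebRTC support and show a "please use Chrome/Firefox" banner if
// it is missing. This is an assumed illustration, not 3CX's implementation.
function hasWebRTC(): boolean {
  const w = window as any;
  return Boolean(
    w.RTCPeerConnection || w.webkitRTCPeerConnection || w.mozRTCPeerConnection
  );
}

if (!hasWebRTC()) {
  document.body.insertAdjacentHTML(
    'beforeend',
    '<div class="webrtc-warning">Your browser does not support WebRTC. ' +
      'Please rejoin this meeting using Chrome or Firefox.</div>'
  );
}
```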
Where do you see WebRTC going in 2-5 years?
I think that WebRTC will become the de facto communications standard for video conferencing, and maybe even for calls. WebRTC is a part of how technology is evolving and we may even see some surprising uses for it outside the realms of what we’re imagining right now. It’s incredibly easy to use and no other technology is able to compete. It’s what the developers are able to do with it that is really going to make the difference and I believe there is still so much more to come in terms of how WebRTC can be utilised.
If you had one piece of advice for those thinking of adopting WebRTC, what would it be?
That they should have adopted it earlier :).
Given the opportunity, what would you change in WebRTC?
Nothing really, but the technology is still growing, so I’m looking forward to seeing what’s in store for WebRTC and how it’s going to improve.
What’s next for 3CX?
We’re working on tighter integration between 3CX WebMeeting and 3CX Phone System and integrating our platform more closely with other vendors of third-party apps such as CRM systems and so on.
–
The interviews are intended to give different viewpoints than my own – you can read more WebRTC interviews.
The post 3CX and WebRTC: An Interview With Nick Galea appeared first on BlogGeek.me.
Hello, again. This past week in the FreeSWITCH master branch we had 49 commits. This week we had a bunch of new features, with most of them being helpful little improvements, but we also had two new modules merged in! The 2600hz guys added mod_kazoo and William King merged in mod_smpp. You can find out more about mod_smpp by going here. And, the 2600hz patches are all slated to be merged in by the 1.6 release.
Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
And, this past week in the FreeSWITCH 1.4 branch we had 2 commits merged in from master.
Join the webinar on WebRTC and its Impact on Testing to get a clear answer.
I’ve been working recently with some friends on solving an issue for WebRTC that seems to be ignored by most – testing WebRTC services.
We had a concept in mind, and decided to follow through and develop a service for it, naming it testRTC. Since we started, we’ve enhanced the service greatly to include monitoring and analytics due to customer requests.
What I noticed in our calls with customers is that there are 3 different paradigms used for testing WebRTC-based services:
You might think that using a solid VoIP testing product would suffice for WebRTC, but the reality is starkly different. While WebRTC is VoIP, it bears little resemblance to it when it comes to testing.
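To illustrate one of the differences: exercising a WebRTC service end to end means driving real browsers with live media. A common trick is Chrome’s fake-media flags, shown here with Selenium WebDriver – an illustration of the approach, not how testRTC itself is built:

```typescript
// Drive a real Chrome instance with fake media devices so getUserMedia
// succeeds without cameras, microphones, or permission prompts.
import { Builder } from 'selenium-webdriver';
import { Options } from 'selenium-webdriver/chrome';

async function joinTestRoom(url: string): Promise<void> {
  const options = new Options().addArguments(
    '--use-fake-device-for-media-stream', // synthetic camera and mic
    '--use-fake-ui-for-media-stream'      // auto-accept the permission prompt
  );
  const driver = await new Builder()
    .forBrowser('chrome')
    .setChromeOptions(options)
    .build();
  try {
    await driver.get(url);      // e.g. the room URL of the service under test
    await driver.sleep(30_000); // keep media flowing; real tests would assert on getStats()
  } finally {
    await driver.quit();
  }
}

joinTestRoom('https://example.invalid/test-room').catch(console.error);
```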
If you wish to learn more, check out what we are doing at testRTC. Or better yet – join SmartBear and testRTC for a free webinar:
WebRTC and Its Impact on Testing – July 8, 2015, 2:00 p.m. EDT
Nikhil Kaul and I will be discussing the challenges WebRTC poses to testing and suggest best practices to meet these challenges. See you there!
The post How do You Test Your WebRTC Service? appeared first on BlogGeek.me.
Get a sneak peek of the brand new SmartPBX App. This efficient tool alone gives you VoIP functionality and allows you to manage and remove services for all of your telecom clients. To get a free demo, sign up! http://partner.2600hz.com/
Time for another update.
Only 4 months have passed since I released my last update to the Choosing a WebRTC API Platform report and things have already changed enough to merit another update.
Some of the things we’ve seen?
The report, as it is, currently covers 19 vendors: AddLive (Snapchat), APIdaze, Apizee, CafeX, Forge (Acision/Comverse), Kandy, OnSIP, ooVoo, OpenClove, Plivo, Requestec (Blackboard), Respoke, SightCall, Sinch, Temasys, TokBox, Tropo (Cisco), Twilio and VoxImplant.
AddLive and Requestec are now out of the game. Others may evaporate by year end. There are other players in this market that I am setting my sights on adding.
Which vendors do you think are missing in this report? What topics should I cover beyond those in the current table of contents?
I’d love to get your feedback.
The next update of this report will occur in the September timeframe.
Want to make the best decision on the right WebRTC platform for your company? Now you can! Check out my WebRTC PaaS report, written specifically to assist you with this task.
The post I Need Your Help: Who is Missing from my WebRTC PaaS Report? appeared first on BlogGeek.me.
The FreeSWITCH 1.4.20 release is here!
This is a routine maintenance release and the resources are located here:
Security issues:
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
It took some time, but it finally happened.
Today is the first day in my adult life that I wake up in the morning needing to plan ahead for myself and myself only.
I started this journey three and a half years ago. It started small. With a post: Starting anew.
Two things happened at that time:
The blog grew nicely and turned into a WebRTC-focused site. So much so that I reduced my work at Amdocs to part time and started offering consulting around WebRTC. Today I am completing that step. I have left Amdocs, an employer that was good to me in every way, in order to carve out my own path in the world.
For now, it will mostly be WebRTC. But not only.
It will be consulting. But also entrepreneurship. There is already an established startup I founded with some friends, and another one in the works.
Most of all, it will be exciting. And fun.
If you want to have a chat or get my assistance – I’ll be happy to be of help.
The post I am now Officially a 100% WebRTC… Hermit appeared first on BlogGeek.me.
Anthony Minessale and Michael Jerris will be featured on FLOSS Weekly tomorrow to talk about FreeSWITCH! Go check it out!
While you need to give direct access to your APIs, an SDK is a critical piece of your offering.
There was an article on the ProgrammableWeb on Sending.io NOT offering an SDK for their service. I think in most cases, this approach is wrong.
Sending.io decided to offer only an API layer for its customers. You can access their REST APIs, but how you do it is your problem – even when what they give is designed and built for mobile devices.
API and SDK
I’ll start with a quick explanation of the two – at least in the scope of this post. There will be those who will definitely object to my definitions here, but the idea is just to make the distinction I need – and not to pontificate on the meaning of the two.
Back to Sending.io and their reasons – from this article:
While this may work in the gaming industry, I think it is not workable in many other industries. Here are my thoughts on this one:
It all boils down to your execution
There are two ways to treat an SDK – as part of your offering or as an afterthought.
If you treat it as an afterthought, then performance issues, crashes and privacy issues will crop up more frequently than not.
With most SDKs today built as frontends to a backend REST API, it makes perfect sense that some of them just aren’t written well: backend developers are good at scaling a service to run in the cloud. For them, thinking about the memory and performance of a single session the way a native Android developer does is foreign.
If you really want to offer an SDK, have a pro build it for you.
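To make the “frontend to a backend REST API” point concrete, here is a minimal sketch of an SDK method wrapping a REST call. The endpoint and payload shapes are hypothetical, not Sending.io’s API:

```typescript
// A thin SDK wrapper over a REST backend: the SDK hides auth, serialization,
// and error handling from the app developer. Names and URL are placeholders.
export interface SendOptions {
  to: string;
  body: string;
}

export class MessagingClient {
  constructor(
    private readonly apiKey: string,
    private readonly baseUrl = 'https://api.example.invalid/v1' // placeholder
  ) {}

  async sendMessage(options: SendOptions): Promise<{ id: string }> {
    const response = await fetch(`${this.baseUrl}/messages`, {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(options),
    });
    if (!response.ok) {
      throw new Error(`sendMessage failed: HTTP ${response.status}`);
    }
    return response.json();
  }
}
```

The hard part isn’t the HTTP request itself; it is doing it the way a mobile platform expects – retries, batching, battery and memory awareness – which is exactly where an afterthought SDK falls short.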
The customer’s control
Assuming what you have on offer is a closed binary SDK that the customer ends up using, then control may be an issue.
It doesn’t have to be this way.
There are 3 options you can take here, each with its own control points for customers:
There are several reasons that make an SDK so powerful:
Plan on offering a backend API for your customers?
You shouldn’t just ignore an SDK – especially not if you plan on having developers integrate with your APIs inside mobile apps.
Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.
The post Why an SDK is Critical to your API Offering appeared first on BlogGeek.me.
Hello, again. This past week in the FreeSWITCH master branch we had 37 commits. There was one feature this week, with improvements to play_and_detect_speech to set the current_application_response channel variable.
Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
This past week in the FreeSWITCH 1.4 branch we had 30 commits merged in from master.
Security issues:
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
There are a lot of notable exceptions, but most WebRTC developers start with the web because, well, WebRTC does start with “web” and development is much easier there. Market realities tell a very different story – there is more traffic on mobile than on desktop, and this trend is not going to change. So the next phase in most WebRTC deployments is inevitably figuring out how to support mobile. Unfortunately for WebRTC, that has often meant finding the relatively rare native iOS and Android developer.
The team at eFace2Face decided to take a different route and build a hybrid plugin. Hybrid apps allow web developers to use their HTML, CSS, and JavaScript skills to build native mobile apps. They also open sourced the project and verified its functionality against the webrtc.org AppRTC reference. We asked them to give us some background on hybrid apps and to walk us through their project.
{“intro-by”, “chad“}
When deciding how to create a mobile application using WebRTC, there is no obvious choice. There are several items that should be taken into consideration when faced with this difficult decision, like the existence of a previous code base and the expertise, amount of resources and knowledge available. Maintenance and support are also very important factors, given the fragmentation of the mobile environment.
At eFace2Face we wanted to extend our service to mobile devices. We decided to choose our own path – exploring and filling in the gaps (developing new tools when needed) in order to create the solution that fitted us best. This post shares some of the knowledge and expertise we gained the hard way while doing so. We hope you find it useful!
What’s a hybrid application?
There are two main approaches to how hybrid apps are built:
Creating a hybrid HTML5 app is the most extensive alternative and the one we prefer, because it uses web-specific technologies. You can get a deeper overview of native vs. HTML5 (and hybrid applications) in a recent blog post at Android Authority.
Hybrid App Pros & Cons
Pros: From our point of view, a typical WebRTC application is not really graphics-intensive (i.e. it is not, for instance, a game with lots of animations and 3D effects). Most of the complex processing is done internally by the browser, not in JavaScript, so a graphical UX should be perfectly doable in a hybrid application and run without any significant perceptible slowdown. Instagram is a good example of a well-known hybrid app that uses web technologies in at least some of its components.
WebRTC on native mobile: current status
Native support in Android and iOS is a bit discouraging. Apple does not support it at all, and has no public information about when they are going to do so, if they decide to support it at all. On Android, the native WebView has supported WebRTC starting with version 4.4 (but be cautious, as that one is based on Chromium 36), and then in 5.0 and onwards.
Note that there are no “native WebRTC” APIs on Android or iOS yet, so you will have to use Google’s WebRTC library. Justin Uberti (@juberti) provides a very nice overview of how to do this (go here to see the slides).
Solutions
Let’s take a look at the conclusions of our research.
Android: Crosswalk
In Android, using the native WebView seems like a good approach; in fact we used it during our first attempt to create our application. But then we decided to switch to Intel’s Crosswalk, which includes what’s best described as a “full Chrome browser”. It actually allows us to use a fully updated version of native Chromium instead of WebView.
These were our reasons for choosing Crosswalk:
An advanced reader could think: “Ok, this is cool but I need to use different console clients (Cordova and Crosswalk) to generate my project, and I don’t like the idea of that.” You’re right, it would be a hassle, but we also found another trick here. This project allows us to add Crosswalk support to a Cordova project; it uses a new Cordova feature to provide different engines like any other plugin. This way we don’t need to have different baselines in the source code.
iOS: Cordova plugin
As explained before, there are frameworks that provide hybrid applications with the device functionality code via plugins. You can use them in your JavaScript code but they are implemented using native code. So, it should be possible to add the missing WebRTC JavaScript APIs.
There are several options available, but most of them provide custom APIs or are tightly coupled with some proprietary signaling from a service provider. That’s the reason that we released an open source WebRTC Cordova plugin for iOS.
The plugin is built on top of Google’s native WebRTC code and exposes the W3C WebRTC APIs. Also, as it is a Cordova plugin, it allows you to have the same Cordova application running on Android with Crosswalk, and on iOS with the WebRTC plugin. And both of them reuse all of the code base you are already using for your web application.
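That reuse is the whole point: the application keeps calling the standard W3C APIs. Here is a minimal sketch, written against today’s promise-based APIs – I can’t vouch for exactly which API shapes the plugin exposed at the time, but the principle of identical JavaScript on web and mobile is the one described above:

```typescript
// The same call-setup code a web app would use, unchanged in the hybrid shells.
async function startCall(pc: RTCPeerConnection): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // hand pc.localDescription to whatever signaling channel the app already uses
}
```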
Show me the code!
“Yes, I have heard this already”, you might say, so let’s get some hands-on experience. In order to demonstrate that it’s trivial to reuse your current code and have your mobile application running in a matter of days (if not hours), we decided to take Google’s AppRTC HTML5 application and create a mobile application using the very same source code.
You can find the iOS code on GitHub. Here are the steps required to get everything we’re talking about working in minutes:
We needed to make some minor changes in order to make it work properly in the Cordova environment. None of these changes required more than a couple of lines of JS/HTML/CSS:
Deciding whether to go hybrid or native for your WebRTC app is up to you. It depends on the kind of resources and relevant experience your company has, the kind of application that you want to implement, and the existing codebase and infrastructure you already have in place. The good news is our results show that using WebRTC is not a key factor in this decision, and you can have the mobile app version of your WebRTC web service ready in much less time than you probably expected.
References
{“authors”, [“Jesus Perez“,”Iñaki Baz“, “Sergio Garcia Murillo“]}
Want to keep up on our latest posts? Please click here to subscribe to our mailing list if you have not already. We only email post updates. You can also follow us on twitter at @webrtcHacks for blog updates and news of technical WebRTC topics or our individual feeds @chadwallacehart, @victorpascual and @tsahil.
The post Developing mobile WebRTC hybrid applications appeared first on webrtcHacks.
Friction.
A true story…
I had a meeting the other day. It was with a company that has been offering WebRTC video chat as part of its own services to their own customers for some time now, but internally, they used some other vendor for their own business meetings. My invitation was on that other vendor’s platform.
At the time of the meeting, I opened the calendar invitation, searching for the link to press.
Found it. Clicked it.
Got to the web page, using my Chrome browser on my home desktop Ubuntu machine.
Clicked to join the meeting using my browser.
Was greeted with a message telling me Chrome isn’t supported due to a Chrome bug (with a link to a page detailing the issue on Chrome’s bug tracker) AND suggesting that I use Firefox.
Good.
Opened up Firefox, pasted the link to it.
Clicked to join the meeting using my browser.
Was greeted with a message telling me that only Windows and Mac are supported.
Great.
Opened my laptop to join. It runs Windows 8, so no issues there (I hoped).
Clicked the link in the email, just to get Chrome opened there.
Somehow, the system knew this time that I should be able to use Chrome, so it happily instructed me to wait for the download and then run the executable they were sending me.
Ok.
It took a minute or two to get that executable to run and start installing *something*.
But it got lost in all my windows. A bit of searching and I found the pesky window telling me to open the link yet again.
So I did.
It then went into this seemingly endless loop of trying to open up a meeting, failing and reopening.
This is when I noticed that the window being opened was an Internet Explorer one.
I cut the loop short and opened the link to the meeting on Internet Explorer.
It worked.
10 minutes later, frustrated, with another crappy installation of a client lurking around my Windows machine, I got to talk to the people who invited me.
Two of us were there with video – me being one of them – since we had actually installed and executed that “plugin”.
Others joined by phone.
—
I am a technical person.
I worked in the video conferencing industry.
Why the hell should we use such broken tools and technologies in 2015?
I couldn’t care less if the video conferencing equipment that was purchased eons ago doesn’t support VP8, or requires conversion of SRTP to RTP, or requires translation from REST/WebSocket to H.323 signaling. I really don’t.
The only thing I want is to open a browser to a specific URL and have that URL just work.
On Ubuntu please.
—
The service in question?
Wasn’t a new one. They’ve been around for a decade or so.
They started with the desktop, so why can’t they get that experience to work well?
—
Yes. Internet Explorer and Safari are missing. I know. But I couldn’t care less.
If you want to provide a broken plugin experience for IE and Safari, then please do. But wherever possible make it easier for me to use.
It really isn’t hard. I attend a lot of video calls these days. The crushing majority of them are through WebRTC based services. Most of the services I used weren’t built by billion dollar companies.
Get your act together.
Start using WebRTC for your own business meetings.
The post Why I Hate Video Conferencing Plugins and LOVE WebRTC Services appeared first on BlogGeek.me.
As newly appointed co-chair in the W3C WebRTC WG, I just participated in my first Editor’s Call, and I’m impressed.
We had to address dozens of Pull Requests and Issues on the associated GitHub repos. We managed to knock down quite a few that ended up getting merged, and a few more that were closed today, despite one co-chair and one editor not being present.
There were some suggestions on how we could make the process a bit more effective, allowing everyone to understand more of what’s expected of them. It’s going to take a few meetings, I suspect, to get a real feel for how I can add the most value possible.
Overall, it feels like we are all trying our best to do what the new charter has set out, to get 1.0 done before getting on with the next chapter. I am excited to be part of it and look forward to continue helping!
If you have any thoughts on how the WebRTC Working Group could be doing things differently to be more effective and efficient, I would like to hear your thoughts.
Companies care little about standards. Unless it serves their selfish objectives.
The main complaint around WebRTC? When are Apple and Microsoft going to support it?
How can that be when WebRTC is being defined by the IETF and W3C? When it is part of HTML5?
WebAssembly
We learned last week about a brand new initiative: WebAssembly. The concept? Have a binary format to replace JavaScript and act as a kind of byte-code. The result?
If the publication on TheNextWeb is accurate, then this WebAssembly thing is endorsed by all the relevant browser vendors (that’s Google, Apple, Microsoft & Mozilla).
WebAssembly is still just a thought. Nothing as substantial as WebRTC is. And yet…
WebAssembly yes and WebRTC no. Why is that?
Decisions happen to be subjective and selfish. It isn’t about what’s good for the web and end users. Or rather, it is, as long as it fits our objectives and doesn’t give competitors an advantage or remove an advantage we have.
WebAssembly benefits almost everyone:
Google has no issue with this – they thrive on things running in browsers
Microsoft are switching towards the cloud, and are in a losing game with their dated IE – they switched to Microsoft Edge and are showing some real intent in modernizing the experience of their browser. So this fits them
Mozilla are trying to lead the pack, being the underdog. They will be all for such an initiative, especially when WebAssembly takes their efforts in asm.js and build assets from there. It validates their credibility and their innovation
Apple. TechCrunch failed to mention Apple in their article on WebAssembly. A mistake? On purpose? I am not sure. They seem to have the most to lose: a better web means less reliance on native apps, where they rule with the current iOS-first focus of most developers
All in all, browser vendors have little to lose from WebAssembly while users theoretically have a lot to gain from it.
WebRTC
With WebRTC this is different. What WebRTC has to offer for the most part:
The problem stems from the voice and video capability.
Google have Hangouts, but make money from people accessing web pages. Having ALL voice and video interactions happen in the web is an advantage to Google. No wonder they are so heavily invested in WebRTC
Mozilla has/had nothing to lose. They had no voice or video assets to speak of. At the time, most of their revenue also came from Google. Money explains a lot of decisions…
Microsoft has Skype and Lync. They sell Lync to enterprises and paid $8.5 billion for Skype. Why would they open up the door to competitors so fast? They are now headed there, making sure Skype supports it as well
Apple. They have FaceTime. They care about the Apple ecosystem. Having access to it from Android for anything that isn’t a Move to iOS app won’t make sense to them. Apple will wait for the last moment to support it, making sure everyone who wishes to develop anything remotely related to FaceTime (which was supposed to be standardized and open) has a hard time doing that
All in all, WebRTC doesn’t benefit all browser vendors the same way, so it hasn’t been adopted with the same zeal that WebAssembly seems to attract.
Why is it important?
Back to where I started: companies care little about standards. Unless it serves their selfish objectives.
This is why getting WebRTC to all browser vendors will take time.
This is why federating VoIP/WebRTC isn’t on the table at this point in time – the successful vendors who you want to federate with wouldn’t like that to happen.
Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.
The post How the Politics of Standardization Plays in WebRTC, WebAssembly and Web Browsers appeared first on BlogGeek.me.
Hello, again. This past week in the FreeSWITCH master branch we had 94 commits! We had a large amount of work done this week, and a few of the highlights are: added mod_local_stream video support, added member status in JSON format to the conference live array, added a function to enable debug information about the Opus payload, and addressed a security issue concerning enabling cert CN/SAN validation.
Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.
Security issues:
FS-7708 Fixed docs on enabling cert CN/SAN validation
New features that were added:
FS-7656 [mod_localstream] Added mod_local_stream video support, and make mod_conference move the video in and out of a layer when the stream has video or not, scan for relative file in art/eg.wav.png and display it as video when playing audio files, put video banner up if artist or title is set, and fixed a/v sync on first connection
FS-7629 [mod_conference] Added member status in json format to the conference live array, add livearray-json-status to conference-flags to enable
FS-7517 FS-7519 [mod_av] [mod_openh264] Added H264 STAP-A packetization support so it would work with Firefox
FS-7664 [mod_verto] Set ICE candidate timeout to wait for only 1 second to fix media delays
FS-7660 [mod_opus] Enabled with new API command “opus_debug” to show information about Opus payload for debugging.
FS-7519 [mod_av] Fixed bitrate and added some presets
FS-7693 [mod_conference] Lower the default energy level in sample configs to improve voice quality
Improvements in build system, cross platform support, and packaging:
FS-7648 More work toward setting up a QA testing configuration, add condition testing for regex all and xor cases, adding profile-variable for testing cases , add lipsync tests for playback and local stream, add stereo, and configuration for mcu test
FS-7338 Fixed bug in Debian packaging when trying to build against custom repo
FS-7609 Enable building of mod_sangoma_codec for Debian Wheezy/Jessie
FS-7667 [mod_java] Fixed include directory detection when using Debian java packages and use detected directory
FS-7655 Make libvpx and libyuv optional (none of the video features will work without them) The following modules require these libraries to be installed still: mod_av mod_cv mod_fsv mod_mp4v2 mod_openh264 mod_vpx mod_imagick mod_vpx mod_yuv mod_png mod_vlc, fix build issue w/ strict prototypes, and fix a few functions that need to be disabled without YUV
FS-7605 Fixed default configuration directory in Debian packages and fixed Debian packaging dependencies on libyuv and libvpx
FS-7669 When installing from Debian packaging if you don’t have the /etc/freeswitch directory, we will install the default packages for you. If you already have this directory, we’ll let you deal with your own configs.
FS-7297 [mod_com_g729] Updated the make target installer
FS-7644 Added a working windows build without video support for msvc 2013
FS-7666 [mod_managed] Fixed error building mod_managed on non windows platforms
The following bugs were squashed:
FS-7641 Fixed a segfault in eavesdrop video support
FS-7649 [mod_verto] Fixed issue with h264 codec not being configured in verto.conf.xml
FS-7657 [mod_verto] Fixed a bug with TURN not being used. Note, you can pass an array of stun servers, including TURN, to the verto when you start it up. (see verto.js where iceServers is passed)
FS-7665 [mod_conference] Fixed a bug with the video floor settings not giving the video floor to the speaker
FS-7650 [mod_verto] Fixed crash when making a call from a verto user with profile-variables in their user profile
FS-7710 [mod_conference] Added the ability to set bandwidth to “auto” for conference config
FS-7432 Fixed dtls/srtp, use correct a=setup parameter on recovering channels
FS-7678 Fixed for fail_on_single_reject not working with | bridge
FS-7709 [mod_verto] Verto compatibility fixes for Firefox
FS-7689 [mod_lua] Fixed a bug with lua not loading directory configurations
FS-7694 [mod_av] Fixed for leaking file handles when the file is closed.
Jitsi switching to the Apache open source license is what the doctor ordered.
Blue Jimp, and with it Jitsi, was acquired by Atlassian in April this year. I wrote at the time about Jitsi’s open source license:
The problem with getting the Jitsi Videobridge to larger corporations was its open source license
You can read my explanation on open source licenses. If you read the comments as well, you’ll see how complex and mired with landmines this domain is.
Last week, an announcement was made in the jitsi-dev mailing list: Jitsi is switching from LGPL to Apache license:
LGPL, our current license, allows everyone to integrate and ship our various jars. Once you start making changes and distributing them, however, then you need to make sure these changes are also available under LGPL – AKA the LGPL reciprocity clause.
What I found interesting were the next two paragraphs:
As the copyright holder, in BlueJimp we have been exempt from this reciprocity clause. Even though we rarely use it, we had the liberty to modify our code without making our changes public. No one else had this option.
Switching to Apache ends our advantage in this regard, and allows everyone to use, integrate and distribute Jitsi with a lot less limitations.
Some things to notice here:
All in all, this is a great move for our WebRTC ecosystem. Atlassian is making the right moves in keeping the Jitsi community happy and engaged while attracting the larger players in the market. I wouldn’t have done it any other way if I were in their shoes.
Want to make the best decision on the right WebRTC platform for your company? Now you can! Check out my WebRTC PaaS report, written specifically to assist you with this task.
The post Why Did Atlassian Switch Jitsi’s Open Source License from LGPL to Apache? appeared first on BlogGeek.me.