WebRTC 1.0 uses SDP for negotiating capabilities between parties. While there are a growing number of objects coming to WebRTC to avoid using this protocol from the '90s, the reality is that SDP will be with us for some time. If you want to do things like change codecs or adjust bandwidth limits, then you’re going to need to “munge” […]
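To give a sense of what that munging looks like, here is a minimal sketch of my own (not taken from the webrtcHacks post) that caps video bandwidth by injecting a b=AS line into the SDP before it is applied. The regex and the 500 kbps figure are purely illustrative; Firefox uses b=TIAS rather than b=AS.

```javascript
// Sketch only: add (or replace) a b=AS bandwidth line in the video m-section.
function limitVideoBandwidth(sdp, kbps) {
  return sdp.replace(
    /(m=video[\s\S]*?c=IN .*\r\n)(b=AS:\d+\r\n)?/,
    '$1b=AS:' + kbps + '\r\n'
  );
}

// Munge the answer before handing it to the peer connection:
// pc.setRemoteDescription(new RTCSessionDescription({
//   type: answer.type,
//   sdp: limitVideoBandwidth(answer.sdp, 500) // cap video at ~500 kbps
// }));
```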
The post How to limit WebRTC bandwidth by modifying the SDP appeared first on webrtcHacks.
No article today.
My course is launching today: Advanced WebRTC Architecture Course.
I’ve got some solid attendance for it, along with a good bulk of high quality material lined up.
Hopefully, this will be a success.
If you are taking the course – then good luck and please share your thoughts with me – I’ve built this course for you and I’d like you to benefit from it as much as possible.
If you aren’t taking it but still want to attend – feel free to enroll. I’ll be closing up course signups end of this week, with no clear indication if and when I’ll be running it next.
Now quiet please – there are people studying in here. Somewhere. Hopefully.
The post Quiet please – people are studying appeared first on BlogGeek.me.
WebRTC course starts Monday next week.
At long last, the wait will be coming to an end and my recent sleepless nights as well. I’ve been working these past months to put up the content for the course, not knowing how it will end up.
Most of the materials have been recorded, uploaded and prepared already, waiting for me to just manually add all the people who enrolled. There’s a lot of material in that course, and a lot more that I am sure is still missing in there. Trying to cover WebRTC in its entirety isn’t easy.
Through the process of putting this stuff up and out there, I’ve learned a lot myself.
The course is split into 7 sections:
Most of the lessons are already ready. There are around 6 lessons that I still need to write. Hopefully, they will be available on launch day, but if not, then the following week.
I want to answer a few quick questions here – things I’ve been asked time and again in the past month:
Is this a one-time thing?
Yes and no.
The course takes place October 24 and lasts for 2 months. Those who enroll for office hours get an extended duration of 4 months (as well as office hours).
I don’t plan on making this an ongoing thing where you can enroll whenever you want and take the course at any time. I will be taking the time throughout these two months to listen to the students and see if there’s anything that requires updating in “real time”. I can’t do that if this is an ongoing thing.
This might change in the future, but for now, there’s this timing.
I might do that some months from now, after I rest a bit from the effort and decide if it makes financial sense to run it again.
If you have your own timing issues, then understand that the course is self-paced. You can “leave” for a week or two and come back, do it faster or slower.
Is the course for me?
I can’t really say.
Here are a few types of students that I have already enrolled for the course:
The course doesn’t include too much code. There’s the occasional piece of code shown, but the idea isn’t to explain to you how to develop with WebRTC. In truth – most of you won’t develop with WebRTC directly anyway – you’ll end up using a framework or a third party for that.
The intent is to give you an understanding of the limits and capabilities of WebRTC. To know how to wield this amazing tool and how to use it effectively in your product.
How is the course conducted?
If you enrolled, then you will be receiving an email a day or two prior to the course.
I will be registering you to the course mini-site inside the BlogGeek.me website. Once you login, you will be able to access all course sections and lessons.
Each lesson has a page of its own in the site. Most lessons have a recorded video session as the main bulk of it, along with text and additional reading material. In most cases, that additional reading material is important.
You can “tune in” to any lesson you wish and learn it at your own pace and in your free time.
There is an online forum for the course. Students will be able to raise their questions, issues and feedback there. If things require changes on my end, I’ll try making the changes to the lessons as we move along, maybe even adding course materials and lessons if the need arises. I will also be using the forum to ask questions myself, and to check on the progress of students.
For those taking office hours, these will take place twice a week at different times to accommodate different time zones. In there, I will answer questions as they come and basically make myself available to you “in the flesh”. I haven’t decided yet which WebRTC service to use for that – suggestions are welcome.
I am still debating whether I should use quizzes as part of the course, placing them at the end of each section. If you have an opinion – please voice it (even if you’re not going to attend the course).
Enroll today
Learn how to design the best architecture for your WebRTC service in this new Advanced WebRTC Architecture course.
The course starts next week.
There’s a Q&A page that may answer additional questions you might have.
Official course syllabus is also available in PDF form.
I’d be happy to meet you if you decide to enroll in the course. This is a new thing for me and I am quite excited about it.
If you are not sure about the course – email me. If there’s no fit – I’ll tell you immediately. If this might help you, I’ll explain what you will gain from it so you can make a better decision.
Until next Monday – have an awesome week.
The post Advanced WebRTC Architecture Course – Updates appeared first on BlogGeek.me.
The FreeSWITCH 1.6.12 release is here!
This is also a routine maintenance release. Change Log and source tarball information below.
Release files are located here:
New features that were added:
Improvements in build system, cross-platform support, and packaging:
The following bugs were squashed:
The future of streaming includes WebRTC.
Disclaimer: I am an advisor for Peer5.
If you look at reports from Ericsson or Cisco, what you’ll notice is the growth of video as a large portion of what we do over the Internet. As video takes an order of magnitude more data to pass around than almost anything else we share today, this is no wonder. Here are a few numbers from Cisco’s forecast from Feb 2016:
Source: Cisco
I think there are a few reasons for this growth:
The challenge really begins when you look at the Internet technologies available to stream these massive amounts of content:
The challenge with HLS and MPEG-DASH is latency. While this might be suitable for many use cases, there are those who require low latency live streaming:
From my course on WebRTC architecture
For those who can use HLS and MPEG-DASH, there’s this nagging issue of needing to use CDNs and pay for expensive bandwidth costs (when you stream that amount of video, everything becomes expensive).
Which brings me to the recent deal between Peer5 and Dailymotion. To bring you up to speed:
There are other startups with similar technologies to Peer5, but this is the first time any of them has publicized a customer win, and with such a high profile to top it off.
In a way, this validates the technology as well as the need for new mechanisms to assist in our current state of video streaming – especially in large scales.
WebRTC seems to fit nicely in here, and in more than one way. I am seeing more cases where companies use WebRTC either as a complementary technology or even as the main broadcast technology for their service.
It is also the reason I’ve added this important topic to my upcoming course – Advanced WebRTC Architecture. There is a lesson dedicated to low latency live broadcasting, where I explain the various technologies and how WebRTC can be brought into the mix in several different combinations.
If you would like to learn more about WebRTC and see how to best fit it into your scenario – this course is definitely for you. It starts October 24, so enroll now.
Learn how to design the best architecture for your WebRTC service in this new Advanced WebRTC Architecture course.
The post Dailymotion, Peer5 and the Future of Streaming appeared first on BlogGeek.me.
This week mod_opus got a cool new feature that allows the detection of a slow or contaminated link. And verto is converting to adapter.js as well as getting DTMF shortcuts. The FreeSWITCH configuration audit is ongoing with initial minor commits and will continue throughout the year. If you are looking to volunteer to help with that or would like more information email brian@freeswitch.org or join the Bug Hunt on Tuesdays at 12:00pm Central Time.
Join us Wednesdays at 12:00 CT for some more FreeSWITCH fun! And, head over to freeswitch.com to learn more about FreeSWITCH support.
New features that were added:
Improvements in build system, cross-platform support, and packaging:
The following bugs were squashed:
WebRTC isn’t limited to JavaScript.
This is something I don’t get asked directly, but it does pop up from time to time. Especially when people come up with a specific language in mind and ask if it is suitable for WebRTC.
While the answer is almost always yes, I think a quick explanation of where programming languages meet WebRTC exactly is in order.
We will start with a small “diagram”, to show where we can find WebRTC related entities and move from there.
We’ve got both client and server entities with WebRTC, and I think the above depicts the main ones. There are more as your service gets more complicated, but that’s all an issue of scaling and pure development not directly related to WebRTC.
Learn how to design the best architecture for your WebRTC service in this new Advanced WebRTC Architecture course.
So what do we have here?
Web app
The web app is what most people think about when they think WebRTC.
This is what ends up running in the browser, loaded from an HTML and its derivatives.
What this means is that the language you end up with is JavaScript.
Mobile app
When it comes to the mobile domain, there are two ways to end up with WebRTC. The first is by having the web app served inside a mobile browser, which brings you back to JavaScript.
The more common approach though is to use WebRTC inside an app. You end up compiling and linking the WebRTC codebase as an SDK.
The languages here?
There’s also the alternative of C# via Xamarin, or JavaScript again if you use something like Crosswalk. With these approaches, someone should already have WebRTC wrapped for you on these platforms.
Embedded app
Embedded is where things get interesting.
There are cases where you want devices to run WebRTC for one reason or another.
Two main approaches here will dictate the languages of choice:
In general, here you’ll be going to lower levels of abstraction, getting as close as possible to the machine language (but stopping at C most probably).
TURN server
STUN and TURN servers are also necessary. Most likely – you won’t be needing to do a thing about them besides compiling, configuring and running them.
So no programming languages here.
I would note that the popular open source alternatives are all written in C. Again – this doesn’t matter.
Media server
Media servers come in different shapes and sizes. I’ve covered them here recently, discussing Jitsi/Kurento and later Kurento/Janus.
The programming languages here depend on the media server itself. Jitsi and Kurento are Java based. Janus is mostly C. In most cases – you wouldn’t care.
Media servers are usually entities that you communicate with via REST or WebSocket, so you can use whatever language you like on the controlling side. It is a very popular choice to use Node.js (=JavaScript) in front of a Kurento server, for example. It also brings us to the last entity.
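Before moving on, here’s a rough sketch of what “Node.js in front of Kurento” tends to look like, using the kurento-client module to build the simplest possible loopback pipeline. The WebSocket URL is a placeholder and the exact calls should be verified against the Kurento documentation.

```javascript
const kurento = require('kurento-client');

// Create a media pipeline with a single WebRTC endpoint looped back on itself.
function startSession(sdpOffer, callback) {
  kurento('ws://localhost:8888/kurento', (error, client) => {
    if (error) return callback(error);
    client.create('MediaPipeline', (error, pipeline) => {
      if (error) return callback(error);
      pipeline.create('WebRtcEndpoint', (error, webRtcEndpoint) => {
        if (error) return callback(error);
        webRtcEndpoint.connect(webRtcEndpoint, (error) => {
          if (error) return callback(error);
          // Returns the SDP answer to send back to the browser over your own signaling
          webRtcEndpoint.processOffer(sdpOffer, callback);
        });
      });
    });
  });
}
```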
App/Signaling server
The funny thing is that this is where the question is mostly targeted. The application and/or signaling server is what stitches everything together. It serves the web app and communicates with the mobile and embedded apps. It offers the details of the TURN server and handles any ephemeral passwords with it, and it controls the media servers.
And it is also where the bulk of the development happens since it holds the business logic of the application.
And here the answer is rather simple – use whatever you want.
In general, whatever you can use to build websites can be used to build a WebRTC service.
What’s your language?
Back to you. What are the programming languages you use with WebRTC?
If you are looking for developers, then what would be the languages you’d view as mandatory and which ones as preferable with applicants?
–
This as well as other topics are covered in my upcoming Advanced WebRTC Architecture course. Be sure to enroll if you wish to deepen your understanding in this topic.
The post What’s Your Preferred Language for WebRTC Development? appeared first on BlogGeek.me.
So far so good, but it is time to add some more options for you.
A selection of three different course packages
I am working to complete all lessons for the course. It takes time to work things through, go over the lessons, make sure everything is in order and record the sessions.
The interesting thing to me is the variety of people that enroll to this course – they come from all over the globe, varying from small startups to large companies. I found some interesting vendors who are looking at WebRTC that I wasn’t aware of.
A few updates about the course
There are a few minor updates that are taking place in the course:
The course duration is 8 weeks, give or take a few days.
That said, if you want access to the recorded materials for a longer period, then you might want to consider going for the Plus or Premium packages.
The Plus package extends access to the course materials, including the forum and the office hours by an additional 2 months.
Office hours happen twice a week, at two different times to accommodate multiple time zones. During office hours I will be reviewing with the students their learning and understanding of WebRTC and assist in person in areas that will arise. I might even decide to hold a quick online lesson on relevant or timely topics during the office hours.
The Premium package extends access to the course materials up to a full year. More about the premium package below.
Groups
If you want to enroll multiple employees or just join as a team, then just contact me directly.
For large enough groups, I can offer discounts. For others, just the service of proforma invoice and wire transfer (which can still be better than PayPal for you).
We will be having 3-4 medium sized groups in our course this time, which will make things interesting – especially during office hours.
The Premium Package
I decided to add a premium package to the offering.
The idea behind it is to allow those who want more access to my time, and in a more private way.
The premium package offers two substantial additions on top of the Plus package:
In the past few months I’ve noticed a lot of small companies who end up wanting advice. A few hours of my time in which they explain what they are doing and we chat about it, to see if there’s anything I can suggest. I decided to offer this service through this course as well, by bundling it as two consultation calls that go on top of the course itself.
We select together the agenda of these calls and what you want to achieve in them before we start. We then schedule the time and medium to use for the call (think something with WebRTC and a webcam, but not necessarily). And then we sit and chat.
If you already enrolled
If you already enrolled via PayPal and haven’t heard anything from me other than an order form and an invoice – don’t worry. I will be reaching out to all students a week or two before the course.
I am excited to do this, and really hope you are too.
See you next month!
Learn how to design the best architecture for your WebRTC service in this new Advanced WebRTC Architecture course.
The post Advanced WebRTC Architecture Course: Adding a Premium Package appeared first on BlogGeek.me.
This week mod_conference and mod_verto saw the most action with added sounds and user variables, whitelisting, and syncing outbound calls with the user directory respectively. The FreeSWITCH configuration audit is ongoing with initial minor commits and will continue throughout the year. If you are looking to volunteer to help with that or would like more information email brian@freeswitch.org or join the Bug Hunt on Tuesdays at 12:00pm Central Time.
Join us Wednesdays at 12:00 CT for some more FreeSWITCH fun! And, head over to freeswitch.com to learn more about FreeSWITCH support.
New features that were added:
Improvements in build system, cross-platform support, and packaging:
The following bugs were squashed:
The FreeSWITCH 1.6.11 release is here!
This is also a routine maintenance release. Change Log and source tarball information below.
Release files are located here:
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
Recording WebRTC? Definitely server side. But maybe client side.
This article is again taken partially from one of the lessons in my upcoming WebRTC Architecture Course. There, it is given in greater detail, and in recorded form.
Learn how to design the best architecture for your WebRTC service in this new Advanced WebRTC Architecture course.
Recording is obviously not part of what WebRTC does. WebRTC offers the means to send media, but little more (which is just as it should be). If you want to record, you’ll need to take matters into your own hands.
Generally speaking, there are 3 different mechanisms that can be used to record:
Let’s review them all and see where that leads us.
#1 – Server side recording
This is the technique I usually suggest developers use. Somehow, it fits best in most cases (though not always).
What we do in server-side recording is route our media via a media server instead of directly between the browsers. This isn’t TURN relay – a TURN relay doesn’t get to “see” what’s inside the packets, as they are encrypted end-to-end. What we do is terminate the WebRTC session at the server on both sides of the call – route the media via the server and at the same time send the decoded media to post processing and recording.
What do I mean by post processing?
There are many things that factor into a recording decision besides just saying “I want to record WebRTC”.
If I had to put pros vs cons for server side media recording in WebRTC, I’d probably get to this kind of a table:
+ No change in client-side requirements
+ No assumptions on client-side capabilities or behavior
+ Can fit the resulting recording to whatever medium and quality level necessary
– Another server in the infrastructure
– Lots of bandwidth (and processing)
– Now we must route media

#2 – Client side recording
In many cases, developers will shy away from server-side recording, trying to solve the world’s problems on the client side. I guess it is partially because many WebRTC developers tend to be JavaScript coders and not full stack developers who know how to run complex backends. After all, putting up a media server comes with its own set of headaches and costs.
So the basics of client-side recording lean towards the following flow:
We first record stuff locally – WebRTC allows that.
And then we upload what we recorded to the server. Here we don’t really use WebRTC – just a plain file upload.
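In browser terms, this flow is roughly the MediaRecorder API followed by an HTTP upload. A minimal sketch (the /upload endpoint is made up for illustration):

```javascript
// Record the local (or remote) stream in the browser, then upload it as a file.
const chunks = [];
const recorder = new MediaRecorder(localStream, { mimeType: 'video/webm' });

recorder.ondataavailable = (event) => {
  if (event.data.size > 0) chunks.push(event.data);
};

recorder.onstop = () => {
  const blob = new Blob(chunks, { type: 'video/webm' });
  fetch('/upload', { method: 'POST', body: blob }); // plain file upload - no WebRTC here
};

recorder.start(1000); // gather data in 1-second chunks
// ...and when the session ends: recorder.stop();
```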
Great on paper, somewhat less so in reality. Why? There are a few interesting challenges when you record locally, via a browser, on a machine you don’t know or control:
It all leads to the fact that at the end of the day, client side recording isn’t something you can use. Unless the recording is short (a few minutes) or you have complete control over the browser environment (and even then I would probably not recommend it).
There are things you can do to mitigate some of these issues too. Like uploading fragments of the recording every few seconds or minutes throughout the session, or even doing it continuously, in parallel to the session (sketched below). But somehow, these tend not to work that well and are quite sensitive.
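The fragment-upload mitigation changes the earlier sketch only slightly – each chunk is shipped as it becomes available instead of one big blob at the end. The endpoint and session id are again hypothetical:

```javascript
// recorder is the MediaRecorder from the previous sketch; sessionId is assumed
// to come from your own signaling/application logic.
recorder.ondataavailable = (event) => {
  if (event.data.size > 0) {
    fetch('/upload-chunk?session=' + sessionId, { method: 'POST', body: event.data });
  }
};
recorder.start(5000); // emit a fragment roughly every 5 seconds
```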
Want the pros and cons of client side recording? Here you go:
+ No need to add a media server to the media flow
– Client side logic is complex and quite dependent on the use case
– Requires more on the uplink of the user – or more wait time at the end of the session
– Need to know the client’s device and behavior in advance

#3 – Media forwarding
This is a lesser known technique – or at least something I haven’t really seen in the wild. It is here because it is a possible alternative.
The idea behind this one is that you don’t get to record locally, but you don’t get to route media via a server either.
What is done here is that media is forwarded by one or both of the participants to a recording server.
The latest releases of Chrome allow forwarding incoming peer connection media, making this possible.
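A rough sketch of the idea, assuming a second peer connection (“recorderPC”) that is signaled towards a recording server – none of this comes from a specific product; it only illustrates the forwarding:

```javascript
// Forward the media arriving on one peer connection into another one that
// terminates at a recording server. recorderPC and its signaling are assumed.
receiverPC.ontrack = (event) => {
  const [remoteStream] = event.streams;
  remoteStream.getTracks().forEach((track) => {
    recorderPC.addTrack(track, remoteStream); // re-send the incoming track
  });
  // ...then renegotiate recorderPC (createOffer / setLocalDescription) towards the recorder
};
```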
This is what I can say further about this specific alternative:
+ No need to add a media server into the flow – just an additional, external recording server
– Requires twice the uplink or more
– Do you want to be the first to try this technique?

Things to remember
Recording doesn’t end with how you record media.
There’s metadata to handle (record, playback, sync, etc.).
And then there’s the playback part – where, how, when, etc.
There are also security issues to deal with and think about – both on the recording end and on the playback side.
These are covered in a bit more detail in the course.
What’s next?
If you are going to record, start by leaning towards server side recording.
Sit down and list all of your requirements for recording, archiving and playback – they are interconnected. Then start finding the solution that will fit your needs.
And if you feel that you still have gaps there, then why not enroll to the Advanced WebRTC Architecture course?
Learn how to design the best architecture for your WebRTC service in this new Advanced WebRTC Architecture course.
The post Recording WebRTC Sessions: client side or server side? appeared first on BlogGeek.me.
Analytics != Operation
Twilio just added a new service to its growing cadre of services. This time – Voice Insights.
What to expect in the coming days
This week Twilio announced several interesting initiatives:
Add to that their recent announcement of their new Enterprise offering and the way they seem to be adding more number choices in countries, and what we get is a lot of work in covering a single vendor in this industry.
Twilio is enhancing its services in breadth and depth at the same time, doing so while trying to reach out to new customer types. I will be covering all of these issues soon enough. Some of it here, some on other blogs where I write. Customers with an active subscription for my WebRTC PaaS report will receive a longform written analysis separately covering all these aspects later this month.
What I want to cover in this article
I already wrote about Twilio’s Kurento acquisition. This time, I want to focus on Voice Insights.
All the media outlets I’ve checked for coverage of Voice Insights were regurgitating the Twilio announcement with little to add. At most, they had callstats.io to refer to. I think a lot is missing from the current conversation. So let’s dig in.
What is Voice Insights?
Voice Insights is a set of tools that can be used to understand what’s going on under the hood. When you use a communications API platform – or build your own for that matter – the first thing to notice is that there’s a lack of understanding of what’s really happening.
Most dashboards focus on giving you the basics – what sessions you created, how long were they, how much money you owe us. Others add some indication of quality metrics.
The tools under the Voice Insights title at Twilio include:
Some of them were already available in some form or another in the Twilio offering – such as user feedback collection.
The features here can be split into two types:
Twilio gave a good introduction to all of these capabilities, so I won’t be repeating them here.
What is interesting is whether and how they have decided to implement the real time triggers – do they get triggered from the backend or directly by running rules on the device? But I digress.
How is it priced?
Interestingly, Voice Insights is priced separately from the calling service itself.
If you want insights into the voice minutes you use on Twilio, there’s an extra charge associated with it.
Prices start at $0.004 per minute, going down to ~$0.002 per minute for those who can commit to 1 million voice minutes a month. At the largest volumes it goes down to just above $0.001 a minute.
For comparison, SIP-to-SIP voice calling on Twilio starts at $0.005 per minute, making Voice Insights a rather expensive service.
Comparisons with callstats.io are necessary at this point. If you take a low tier of 10,000 voice minutes a month, callstats.io is priced at 19 EUR (based on their calculator – it can get higher or lower based on “data points”), whereas Twilio Voice Insights stands at 40 USD (10,000 minutes × $0.004). How these two vendors’ prices compare at larger volumes is an exercise I’ll leave for others.
Is this high? low? market price? I have no clue.
TokBox, on the other hand, has their own tool called Inspector and another feature called Pre-Call Test. And it is given for free as part of the service.
Where is it headed?
Voice Insights can take several directions with Twilio:
With analytics, the sky usually isn’t the limit. It is just the beginning of the dreams and stories you can build upon a large data set. The problem is how can you take these dreams and make them come true.
Which brings us to the next issue.
The future of Analytics in Comm APIs
There’s a line drawn in the sand here. Between communications and analytics.
Analytics has a perceived value of its own – on top of enabling the interaction itself.
Will this hold water? Will other communication API vendors add such capabilities? Will they be charging extra for them?
I’ve had my share of stories around CEM (Customer Experience Management). Network equipment vendors and those handling video streaming are marketing it to their customers. Analytics on network data. This isn’t much different.
Time will tell if this is something that will become commonplace and desired, or just a failed attempt. I still don’t have an opinion on where this will go.
Up next
Next in my quick series of articles on Twilio comes coverage of their new Enterprise plan, and how Twilio is trying to grow in breadth and depth at the same time.
Test and Monitor your WebRTC Service like a pro - check out how testRTC can improve your service's stability and performance.
The post Twilio’s Voice Insights for WebRTC – a line on the sand appeared first on BlogGeek.me.
If you haven’t yet enrolled to my Advanced WebRTC Architecture course – then why wait?
I just noticed that I haven’t written any specific post here about the upcoming course, so consider this one that announcement. In my defense – I sent it out a few days ago in the monthly newsletter I have.
Why a course on WebRTC architecture?
I’ve been working with entrepreneurs, developers, product managers and people in general on their WebRTC products for quite some time. But somehow I failed to notice that in many such discussions there were large gaps between what people thought WebRTC is and what WebRTC really is.
There’s lots of beginner’s information out there for WebRTC, but somehow it always focuses on how to use the WebRTC APIs in the browser, or what the meaning of a specific feature in the standard is. There is also a large set of walk-throughs of different frameworks that you can use, but no one seems to offer a path for a developer to decide on his architecture. To answer the question of “what should I be choosing for my service?”
So I set out to put together a course that answers that specific question. It gives the basics of what WebRTC is, and then dives into what it means to put an architecture in place:
The easiest way is to go through the course syllabus. It is available online here and also in PDF form.
When will the course take place?
The course is all conducted online, but not live.
It starts on October 24, and I am now in final preparation of recording the materials after creating them in the past two months.
The course is designed to be:
Enrolling to the course is $247 USD. Adding Office Hours on top of it means an additional $150 USD.
Until tomorrow, there’s a $50 USD discount – so enroll now if you’re already certain you want to.
There are discounts for those who want to enroll as a larger group – contact me for that.
Have more questions?
Check the FAQ. I’ll be updating it as more questions come in.
If you can’t find what you need there – just contact me.
The post Discount on the Advanced WebRTC Architecture Course ends tomorrow appeared first on BlogGeek.me.
Open source media frameworks in WebRTC are all the rage these days.
Jitsi got acquired by Atlassian early last year and now Twilio grabs Kurento.
What to expect in the coming days
Yesterday Twilio announced several interesting initiatives:
Add to that their recent announcement of their new Enterprise offering and the way they seem to be adding more number choices in countries, and what we get is a lot of work in covering a single vendor in this industry.
Twilio is enhancing its services in breadth and depth at the same time, doing so while trying to reach out to new customer types. I will be covering all of these issues soon enough. Some of it here, some on other blogs where I write. Customers with an active subscription for my WebRTC PaaS report will receive a longform written analysis separately covering all these aspects later this month.
What I want to cover in this article
What I want to cover in this part of my analysis of the recent Twilio announcements is their acquisition of Kurento.
The things I’ll be touching on are why Kurento – how it will further Twilio’s goals – and also what will happen to the many users of Kurento.
I’ll also touch on the open source media server space, and the fact that the next runner-up in the acquisition roulette of our industry should be Janus.
But first things first.
What is Kurento?
Kurento is an open source WebRTC server-side media framework implemented on top of GStreamer. While it may not be limited to WebRTC, my guess is that most if not all of its users make use of WebRTC with it.
What does that mean exactly?
I am seeing Kurento everywhere I go. Every couple of meetings I have with companies, they indicate that they make use of Kurento or when you look at their service it is apparent it uses Kurento. Somehow, it has become one of these universal packages that developers turn to when they need stuff done.
The Kurento team is running multiple activities/businesses (I might be making a few mistakes here – it is always hard to follow such internal structures):
Kurento has a busy team…
What did Twilio acquire exactly?
This is where things get complicated. From my understanding, reading the materials online and through a briefing held with Twilio, this is what you can expect:
To sum things up:
Twilio acqui-hired the team behind the Kurento project and took their elasticRTC offering out of the market before it became too popular.
How will Twilio use Kurento?
I’d like to split this one into short term and long term.
Short term – multiparty calling
Twilio needed an SFU. Desperately.
In April 2015 the Twilio Video initiative was announced. Almost 18 months later and that service is still in beta. It is also still 1:1 calling or mesh for multiparty.
Something had to be done. While I am sure Twilio has been working for quite some time on a solid multiparty option, they probably had a few roadblocks, which got them to start using Kurento – or decide they need to buy that technology instead of build it internally.
Which got them to the point of the acquisition. Twilio will probably embed Kurento into their Twilio Video offer, adding three new capabilities to their platform with it:
In the long term, Twilio can employ the full power of Kurento and offer it in the cloud with a flexible API that pipelines media in real time.
This can be used in our new brave world of AI, Bots, IOT and AR – all them acronyms people love talking about.
It will be interesting to see how Twilio ends up implementing it and what kind of an API and an offering they will put in place, as there are many challenges here:
This is one of the most interesting projects in our industry at the moment, and if Twilio is working towards that goal, then I envy their product managers and developers.
What will be left of the Kurento project?
That’s the big unknown. Luis Lopez, project lead of Kurento, details the official stance of Kurento and Twilio on the Kurento blog. It is an expectedly positive write-up, but it leaves the hard questions unanswered.
Maintaining the Kurento project
Twilio is known for their openness and the way they work with developers. While that is true, the Twilio github has little in the way of projects that aren’t samples written on top of the Twilio platform or open sourced projects that touch the core of Twilio. While that is understandable and expected, the question is how will Twilio treat the Kurento open source project?
Now that most of the workforce that is leading Kurento are becoming Twilio employees, will they work on the open source Kurento build or on internal needs and builds of Twilio? Here are a few hard questions that have no real answers to them:
In many cases with Kurento, the answer would have been that Naevatec could just as well limit access to higher level modules to paid customers – at least there was someone you could talk to when you wanted to purchase such modules. Now with Twilio, that route is over. Twilio is not in the business of paid support and customization of open source projects – they are in the business of cloud APIs.
There will be ongoing friction inside Twilio when deciding between investing in the open source Kurento platform and using it internally. If you thought that was bad with Atlassian acquiring Jitsi – it is doubly so here, where Twilio may have to compete with the build vs buy decisions of companies where “build” is done on top of Kurento.
I assume Twilio doesn’t have the answers to these questions yet either.
Maintaining the business model
Kurento has customers. Not only users and developers.
These customers pay Naevatec. They pay for support hours or for customization work.
Will this be allowed moving forward?
Can the yet-to-be-hired new team at Naevatec handle the support?
What happens when someone wants to pay a large sum of money to Naevatec in order to deploy a scalable Kurento service in the cloud? Will Naevatec pick up that project? If said customer also wants to build an API platform on top of it, will that be something Naevatec will still do?
What will others who see themselves as Twilio competitors do if they made use of Kurento up until now? Especially if they were a Naevatec paying customer…
The good thing is that many of the Kurento users ended up getting paid support and customization from third party vendors. Now if only you could know which one of them does a decent job…
Should TokBox be worried?
Yes and no.
Yes, because it means Twilio will be getting their multiparty story, and by that competing with TokBox. Twilio has a wider set of features as well, making them more attractive in some cases.
No, because there’s room for more players, and for video calling services at the moment, TokBox is the go-to vendor. I wonder if they can maintain their lead.
What about Janus?
I recently compared Jitsi to Kurento.
Little did I know then that Twilio decided on Kurento and was in the process of acquiring it.
I also raised the question about Janus.
To some extent, Janus is next-in-line:
Whether Meetecho, the company behind Janus, is willing to sell it isn’t important. It is a matter of price points.
We’ve seen the larger vendors veer towards acquiring the technology that they are using.
Will Slack go after Janus? Maybe Vonage/Nexmo? Oracle, to beef up their own WebRTC offering?
–
Open source media frameworks have proven to be extremely effective in churning out commercial services on top of them. WebRTC made that happen by being its own open source initiative.
It is good to see Kurento finding a new home and growing up. Kudos to the Kurento team.
Learn how to design the best architecture for your WebRTC service in this new Advanced WebRTC Architecture course.
The post Twilio Acquires Kurento. Who will Acquire Janus? appeared first on BlogGeek.me.
I hope this will clear up some of the confusion around WebRTC media flows.
I guess this is one of the main reasons why I started with my new project of an Advanced WebRTC Architecture Course. In too many conversations I’ve had recently it seemed like people didn’t know exactly what happens with that WebRTC magic – what bits go where. While you can probably find that out by reading the specifications and the explanations around the WebRTC APIs or how ICE works, they all fail to consider the real use cases – the ones requiring media engines to be deployed.
So here we go.
In this article, I’ll be showing some of these flows. I made them part of the course – a whole lesson. If you are interested in learning more – then make sure to enroll to the course.
#1 – Basic P2P Call
Direct WebRTC P2P call
We will start off with the basics and build on that as we move along.
Our entities will be colored in red. Signaling flows in green and media flows in blue.
What you see above is the classic explanation of WebRTC. Our entities:
What we have here is the classic VoIP (or WebRTC?) triangle. Signaling flows vertically towards the server but media flows directly across the browsers.
BTW – there’s some signaling going on between the browsers and the STUN/TURN server for practically all types of scenarios. This is used to find the public IP address of the browsers at the very least. And almost always, we don’t draw this relationship (until you really need to fix a bug, STUN seems obvious and too simple to even mention).
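That relationship boils down to the peer connection configuration on each browser – something along these lines (the server addresses and credentials are placeholders):

```javascript
// Every peer connection is told where its STUN/TURN servers live.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.example.com:3478' },
    { urls: 'turn:turn.example.com:3478', username: 'user', credential: 'secret' }
  ]
});
```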
Summing this one up: nothing to write home about.
Moving on…
#2 – Basic Relay Call
Basic WebRTC relay call
This is probably the main drawing you’ll see when ICE and TURN get explained.
In essence, the browsers couldn’t (or weren’t allowed to) reach each other directly with their media, so a third party needs to facilitate that for them and route the media. This is exactly why we use TURN servers in WebRTC (and other VoIP protocols).
This means that WebRTC isn’t necessarily P2P and P2P can’t be enforced – it is just a best effort thing.
So far so good. But somewhat boring and expected.
Let’s start looking at more interesting scenarios. Ones where we need a media server to handle the media:
#3 – WebRTC Media Server Direct Call, Centralized Signaling
Now things start to become interesting.
We’ve added a new entity into the mix – a media server. It can be used to record the calls, manage multiparty scenarios, gateway to other networks, do some other processing on the media – whatever you fancy.
To make things simple, we’ve dropped the relay via TURN. We will get to it in a moment, but for now – bear with me please.
Media
The media now needs to flow through the media server. This may look like the previous drawing, where the media was routed through the TURN server – but it isn’t.
Whereas the TURN server relays the media without looking at it – and without being able to look at it (it is encrypted end-to-end) – the media server acts as a termination point for the media and the WebRTC session itself. What we really see here is two separate WebRTC sessions – one from the browser on the left to the media server, and a second one from the media server to the browser on the right. This is important to understand – since these are two separate WebRTC sessions, you need to think about and treat them separately as well.
Another important note to make about media servers is that putting them on a public IP isn’t enough – you will still need a TURN server.
Signaling
On the signaling front, most assume that signaling continues as it always has. In which case, the media server needs to be controlled in some manner, presumably using backend-to-backend signaling with the application server.
This is a great approach that keeps things simple with a single source of truth in the system, but it doesn’t always happen.
Why? Because we have APIs everywhere. Including in media servers. And these APIs are sometimes used (and even abused) by clients running browsers.
Which leads us to our next scenario:
#4 – WebRTC Media Server Direct Call, Split Signaling
This scenario is what we usually get to when we add a media server into the mix.
More often than not, signaling will be done between the browser and the media server, while at the same time we will have signaling between the browser and the application server.
This is easier to develop and start running, but comes with a few drawbacks:
Skip it if you can.
Now let’s add back that STUN/TURN server into the mix.
#5 – WebRTC Media Server Call Relay
This scenario is actually #3 with one minor difference – the media gets relayed via TURN.
It will happen if the browsers are behind firewalls, or in special cases when this is something that we enforce for our own reasons.
Nothing special about this scenario besides the fact that it may well happen when your intent is to run scenario #3 – it is hard to tell your users which network to use to access your service.
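When the relay is something you enforce for your own reasons, the browser can also be asked to use only relay candidates. A minimal sketch (same placeholder TURN server as before):

```javascript
// Restrict ICE to relay candidates only - media will always flow via TURN.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'turn:turn.example.com:3478', username: 'user', credential: 'secret' }
  ],
  iceTransportPolicy: 'relay'
});
```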
#6 – WebRTC Media Server Call Partial Relay
Just like #5, this is also a derivative of #3 that we need to remember.
The relay may well happen on only one side of the media server – I hope you remember that each side is a WebRTC session of its own.
If you notice, I decided here to have signaling direct to the media server, but could have used the backend to backend signaling.
#7 – WebRTC Media Server and TURN Co-location
This scenario shows a different type of decision making point. The challenge here is to answer the question of where to deploy the STUN/TURN server.
While we can put it as an independent entity that stands on its own, we can co-locate it with the media server itself.
What do we gain by this? Less moving parts. Scales with the media server. Less routing headaches. Flexibility to get media into your infrastructure as close to the user as possible.
What do we lose? Two different functions in one box – at a time when micro services are the latest tech fad. We can’t scale them separately and at times we do want to scale them separately.
Know Your Flows
These are some of the decisions you’ll need to make if you go and deploy your own WebRTC infrastructure; and even if you don’t do that and just end up going with a communication API vendor – it is worthwhile to understand the underlying nature of the service. I’ve seen more than a single startup go work with a communication API vendor only to fail due to specific requirements and architectures that had to be put in place.
One last thing – this is 1 of 40 different lessons in my Advanced WebRTC Architecture Course. If you find this relevant to you – you should join me and enroll to the course. There’s an early bird discount valid until the end of this week.
The post How Media and Signaling flows look like in WebRTC? appeared first on BlogGeek.me.