The AgilityFeat blog

At AgilityFeat we are more than just designers and coders. We are also thought leaders. Our team has spoken at numerous conferences around the US, Europe, and Latin America. We are leading the way when it comes to applying agile methodologies to nearshore software development in Latin America. Our blog is where we share those lessons with you, as well as news about what our team is up to.


The ideal agile team is…

Posted By: Arin Sime

As a recovering agile coach*, I’m sometimes asked about the makeup of an ideal agile team. The idea of having scrum teams of around 7-9 people is well known. It’s also well known that they should be cross-disciplinary, and that there are only 3 official team roles in Scrum: Scrummaster, Product Owner, and then everybody else (because they are supposed to be cross-disciplinary, doing whatever is necessary to finish the sprint). I get this and fully support those concepts.

A client planning for growth

That’s all textbook Scrum, but nonetheless I sometimes have intelligent clients ask for more hands-on advice with more specific team roles, like this recent question from one of our development clients at AgilityFeat…

I know rules of thumb are inexact, but are there any headcount ratios that you subscribe to for these roles?

  • Requirements analysts to developers
  • Developers to testers
  • Project managers to (requirements analysts + developers + testers)

I’m trying to work on a staffing model and I keep finding myself consistently surprised about how the bottleneck moves around my dev team as soon as I add another person. That makes sense, of course. But with some of those ratios in mind, I’m hoping to anticipate it a little better.

I wrote him back with a long answer, which made me think I should put this up on our blog too.

The ideal agile team is…

[Photo: a scrum team during standup]


This scrum team I coached possessed an awesome beard and one unbearded team member who always sat down during standups. It is a little known part of the agile manifesto that only team members with great beards are allowed to sit during standups.

Since an ideal scrum team is supposedly 7 people, +/- 2, I suggest the ideal scrum team makeup is the following:

  • 1 Scrummaster
  • 1 Product Owner / Requirements Analyst
  • 1 Tester
  • 1 UX/Designer
  • 4 Developers

Here comes the “It Depends…”

“How many agile coaches does it take to screw in a light bulb? No one knows, because they won’t ever give you a straight answer.” – crappy joke I just made up

Like any good consultant, I couldn’t just give my client a straight answer of how he should structure his development teams. There are a lot of qualifiers to the above “ideal” team makeup I just gave. Your experiences may vary … these are just based on our experiences in our AgilityFeat teams, and in other companies where I have coached agile teams.

The Disclaimers and Fine Print

Scrummaster

The scrummaster can usually scale to 2 or maybe 3 teams if they are only a Scrummaster/PM, but should only work on 1 team if they also have other duties on the team besides being a Scrummaster.

Product Owners and Requirement Analysts

The agile ideal to me is that the Requirements Analyst and the Product Owner are one and the same. The team should be communicating well enough that they don’t have to specify every little thing on the stories, and so a single person can do both roles on a 7 person scrum team. The nice thing about this is there is truly one voice for the customer, and you remove the risk of disconnect between the Product Owner and a Requirements Analyst. One less handoff is usually a good thing.

However, I have seen teams at larger companies with one Product Owner and as many as 2 Requirements Analysts for a single 15-person scrum team. While not my preference, it worked well for them because they had a good amount of bureaucracy to deal with at the company, and the architecture of their application made larger-than-usual scrum teams a reasonable solution.

In my client’s case, he needs to do more communication with teams who are not in the same time zone, and so I can see the potential of having a Requirements Analyst dedicated to each team and then a Product Owner who may be split across a couple teams. In that situation you need to be wary about that communication handoff between the Product Owner and Requirements Analyst.

Testers

Testers can potentially work on 2 teams at once, depending on the complexity of the product and regression testing. One of our teams really needs 2 full time testers for only 4 developers because testing is so complex for that application. On other teams we can share a tester across 2 teams and have a ratio more like 1 tester to 5 developers. Definitely an “it depends” situation.

UX/Design

In our case at AgilityFeat, UX and Design are two people who work part time on multiple projects. We have also found that UX is a great role to wear the hat of Scrummaster or Product Owner on smaller teams as-needed. In reality, we have multiple teams where our UX lead Mariana and I both wear the Scrummaster hat and trade off with each other as availability requires. This is not necessarily ideal since it can create a disconnect between us, but in our startup environment it’s a concession we make to reality, our schedules, and our sanity that has worked well overall.

The Bottleneck is always moving

The truth is, the bottleneck will always move around your team no matter how you structure it. And the bottleneck will also move around the team during certain phases of a product lifecycle (for example, during bigger releases it’s often helpful to bring in extra testing help or to have developers assist with testing of other developers’ stories).

My client is wise to try and anticipate the bottleneck and structure his teams as close to an ideal as possible, and presumably you’re pretty wise too since you’re reading this. Did I also mention that you are extremely good-looking and should hire us for your next development project? Sorry, I got distracted there…

The key is to remember that Continuous Improvement and Retrospectives are essential practices for any agile team, so that you can always address bottlenecks as they shift around the team.

* Regarding the “recovering agile coach” bit and my cruel light bulb jokes, I jest. I love agile coaches; some of my best friends are agile coaches. I have a lot of background in agile coaching and training, but with the growth of our awesome development team at AgilityFeat, I now spend all my time working with our clients and very little time coaching anymore.



Ready to improve your product?

In our email newsletter, we'll pull the best content from our blog and others, and provide free advice on UX/Design, lean startups, agile development, nearshoring in Latin America, and interviews with our peers.

Sign up here to learn more!

WebRTC Fundamentals with Lisa Larson-Kelley

Posted By: Arin Sime

Lisa Larson-Kelley is a well-established expert on web video, and recently she has applied that expertise to the WebRTC standard for in-browser video, audio, and data channels in HTML5. If you’re new to WebRTC, the simplest and most common example of how to use it is to build video chat directly into your web application.

The uses are much wider than this, with the ability to use this encrypted peer-to-peer channel for data exchange as well as video and audio conferencing applications. I think the great promise of this is the ability to use WebRTC for “in context communications”.

WebRTC is very much an emerging standard, and there’s a lot to learn. That’s where Lisa comes in. An experienced technologist and trainer (see her LearnFromLisa.com site), Lisa has launched a “WebRTC Fundamentals” online course that I highly recommend.

[Image: Lisa Larson-Kelley’s WebRTC Fundamentals course]

The course is hosted by Pluralsight, which provides a nice subscription model for a variety of technology courses. Pluralsight allows you to easily jump between the chapters, and Lisa’s content is broken up into lots of short videos that are informative, to the point, and perfect for the attention-span-impaired like myself.

Her WebRTC fundamentals course covers use cases, architecture, the server technologies, and the API itself. Practical examples are provided using Peer.js and SimpleWebRTC. According to Lisa, she built this course because there are not a lot of simple examples out there. Her goal is to make the complex simple, and make WebRTC more accessible.

As Lisa points out, an expected 6 billion devices will support WebRTC by 2018. She predicts applications in the financial services, healthcare, and insurance industries will all be big markets (and there are undoubtedly a lot more applications).

All of this spells out big opportunities for innovative startups, established enterprises with a creative streak, as well as developers like our team at AgilityFeat. But you do need to know what you’re jumping into, and Lisa’s course will give you the knowledge you need to get started.

Scott Hanselman has also done an interesting podcast with Lisa about WebRTC. Check out the podcast here. Among other things, Lisa points out in the podcast that WebRTC can be used for Machine to Machine (M2M) or Internet of Things (IoT) communications. I also love how Lisa states that now is the perfect time to take a WebRTC application to market, specifically because not many people are doing it yet. Do you want to be the innovator who captures the market early, or do you want to wait until you have to follow your competitors into this space?

If you’re interested in keeping up with WebRTC technologies, we humbly suggest our free weekly newsletter RealTimeWeekly and our upcoming book RealTimeWeb.co.




Managing data in HBase using Ruby and Thrift

Posted By: Arin Sime

AgilityFeat is notable for attracting clients who are looking to work with cutting-edge technologies, and thanks to that, I’ve recently had the amazing opportunity to dive head-first into researching and experimenting with various BigData technologies, focusing mainly on Hadoop.

Hadoop is still at a very early adoption stage, which has proven to be not only challenging, but inspiring as well.

I’ve been able to contribute to the Ruby/Hadoop community by coming up with a SQL to HBase parser called HipsterSqlToHbase, and thanks to this and other similar projects I’ve worked on, my knowledge about data processing with Hadoop has been growing exponentially.

I could write volumes about Hadoop and the amazing toolset that has been evolving within its ecosystem, but to keep things concise, this post will be, in a sense, a continuation of my Introduction to Hadoop for Rubyists (in which I show in detail how quickly you can begin using Hadoop from Ruby with nothing but HBase and Thrift).

In the next few lines I’ll share the intricacies of HBase data structures, show you how to insert and update simple and complex rows of data from Ruby into HBase (using Thrift) and finally how to harness the power of HBase’s Filtering Language to retrieve and bend data to your heart’s content.

How HBase structures its data

In my introductory post about Hadoop I mention how HBase saves all data directly to HDFS (as opposed to Hive, Pig, or Mahout, which all generate MapReduce jobs that need to be processed before any data is stored and actually becomes retrievable). HBase is able to achieve this through its data structure, which relies heavily on keeping every change to each row stored separately, and atomically:

[Chart: every column of an HBase row stored as a separate key-value pair]


As you can tell from the previous chart, structurally speaking, every column in an HBase row is stored separately as a key-value pair. In this case the row id is responsible for linking the columns together, along with a timestamp. The timestamp is added so that you can store multiple versions of a row (up to 5 by default, although this setting is completely customizable).

Explaining further how HBase stores data in HDFS would be overkill for this post, so if you want to delve deeper into the subject later on, check out this slideshare by Enis Soztutar from Apache.

Mutating an HBase Table (Better than “Inserting” and “Updating” Rows)

Now that you have a basic grasp of how data is stored in HBase, let’s look at how to manipulate these datasets.

In my Introduction to Hadoop for Rubyists I had already outlined how to create a table and how to “insert” data into said table.

I am “air quoting” the word “insert” here, because in HBase you neither insert nor update.

In regular SQL databases, tables are structures which contain rows, and these rows are inserted (created), updated (edited), or deleted. HBase tables, on the other hand, contain no rows; they are actually made up of atomic values which are linked, grouped together, and displayed as rows. Also, within HDFS, data is never “modified” (and I do mean never). Whenever you “modify” a value, a new key-value file is simply generated within the table, which means that the original value you’re modifying is never touched; it is only ignored from that point on (unless you specify otherwise). This is why any data manipulation within an HBase table is referred to as a mutation of said table.

Alright, enough chit chat, let’s get down to business.

If you haven’t already, download the test files used in my intro to HBase blog post by clicking here.

You’re going to be working mainly with the file named hbase-put-row.rb, focusing on the following lines of code:
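A sketch of those lines, based on the description that follows: the HBase::Mutation struct and the fake client below stand in for the Thrift-generated bindings set up in the intro post, so the fragment runs without a live cluster (the real script talks to Thrift directly).

```ruby
require 'securerandom'

# Stand-in for the Thrift-generated HBase::Mutation class (assumed shape)
module HBase
  Mutation = Struct.new(:column, :value, keyword_init: true)
end

# Stand-in for the Thrift HBase client; records mutations per row id
class FakeHbaseClient
  attr_reader :rows

  def initialize
    @rows = {}
  end

  def mutateRow(_table, row_id, mutations, _options)
    (@rows[row_id] ||= []).concat(mutations)
  end
end

client = FakeHbaseClient.new

# Collect one or more mutations, then apply them all to a single row
thrift_mutations = []
thrift_mutations << HBase::Mutation.new(column: 'full_name', value: 'John Doe')

client.mutateRow('user_table', SecureRandom.uuid, thrift_mutations, {})
```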

This is the main structure of the code you will be using to mutate your database from here on. You might’ve noticed that the code in and of itself is pretty simple, but it can get messy quickly once you start performing more complex mutations, especially if you’re not sure what you’re doing.

So let’s break it down part by part.

The first line you’re interested in is:
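That first line is presumably nothing more than:

```ruby
# An array that will collect one or more mutations for a single row
thrift_mutations = []
```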

All you’re doing here is defining a variable thrift_mutations which will be an array. It might be simple, but it already tells us something about Thrift’s ability to run more than one mutation in a single call. This feature is super useful, and it’s one of the cornerstones of the code I used to forge the HipsterSqlToHbase gem.

Let’s move on to the next line:
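That next line presumably reads as follows (HBase::Mutation is a stand-in Struct here, in place of the Thrift-generated class, so the fragment runs on its own):

```ruby
# Stand-in for the Thrift-generated HBase::Mutation class (assumed shape)
module HBase
  Mutation = Struct.new(:column, :value, keyword_init: true)
end

thrift_mutations = []

# Build a mutation naming a column and the value to place in it,
# then push it into the array
thrift_mutations << HBase::Mutation.new(column: 'full_name', value: 'John Doe')
```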

Now we’re talking. This is definitely an interesting piece of code. Here you’re running the new method from the HBase::Mutation class and passing it two named arguments. This will return an HBase::Mutation instance which you’ll push into your previously defined array.

Let’s talk a little more about the HBase::Mutation class.

By definition a mutation changes a table, but the semantics of this class focus on allowing us to change column values. Hence you pass it a column’s name and the value you want to place in that column.

Now that you’ve specified the value that you want to store, you need to tell HBase where to apply this change:
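The call presumably looks like the following sketch; the one-off client object fakes the mutateRow signature described below so the line can run here, while in the real script client is the Thrift HBase connection.

```ruby
require 'securerandom'

# Stand-ins so the call runs on its own; names are assumptions
Mutation = Struct.new(:column, :value, keyword_init: true)
thrift_mutations = [Mutation.new(column: 'full_name', value: 'John Doe')]

client = Object.new
def client.mutateRow(table, row_id, mutations, options)
  { table: table, row_id: row_id, mutation_count: mutations.length }
end

# Arguments: table name, row id, array of mutations, extra-options hash
result = client.mutateRow('user_table', SecureRandom.uuid, thrift_mutations, {})
```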

Again, the wording is somewhat misleading, since you’re running a method called mutateRow to apply your change. This method accepts four arguments: the name of the table you’re mutating, the id of the row you are going to be adding the data to, an array of valid HBase::Mutation instances, and finally a hash which can contain extra options.

I do feel the need to be emphatic about this issue, again. You are NOT going to be mutating, creating, or modifying the row. Once you run the previous line of code, a file will be generated within HDFS, inside a folder assigned to the table data by HBase, containing nothing more than the column name and the value it has been assigned; the file itself will be named using the row id and a timestamp.

And, just to go a little further into this matter: once you run this code, if you take note of the UUID that was used as the row id and replace the SecureRandom.uuid portion of the code with it, the next time you run the code you will be performing an “update” instead of an “insert”. Except it won’t truly be an “update”, since a new file will still be generated to store the “modified” data and the previous file will simply be ignored. Both of these files will have the same row id, but a different timestamp.

Now let’s make this code a little more real world friendly.

Let us assume you’ve got a table for storing users and their info (the table columns would be ‘user_name’, ‘full_name’, and ‘password’), and you want to be able to save multiple users at a time.
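Sketched out, the multi-user version looks roughly like this; the stand-in classes again replace the Thrift-generated bindings so it runs without a cluster, and the user data is made up for illustration.

```ruby
require 'securerandom'

# Stand-in for the Thrift-generated HBase::Mutation class (assumed shape)
module HBase
  Mutation = Struct.new(:column, :value, keyword_init: true)
end

# Stand-in for the Thrift HBase client; records mutations per row id
class FakeHbaseClient
  attr_reader :rows

  def initialize
    @rows = {}
  end

  def mutateRow(_table, row_id, mutations, _options)
    (@rows[row_id] ||= []).concat(mutations)
  end
end

client = FakeHbaseClient.new

new_users = [
  { user_name: 'alice1', full_name: 'Alice Cooper', password: 'secret1' },
  { user_name: 'bob1',   full_name: 'Bob Williams', password: 'secret2' },
  { user_name: 'john1',  full_name: 'John Doe',     password: 'secret3' }
]

# One array of mutations per user, so each user gets their own row
user_row_mutations = []
new_users.each do |user|
  thrift_mutations = []
  user.each do |column, value|
    thrift_mutations << HBase::Mutation.new(column: column.to_s, value: value)
  end
  user_row_mutations << thrift_mutations
end

# One mutateRow call per user, each with its own row id --
# the SQL equivalent of three separate INSERT statements
user_row_mutations.each do |mutations|
  client.mutateRow('user_table', SecureRandom.uuid, mutations, {})
end
```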

Again, let’s look at what you’ve done here step by step:

Simple enough: you’re setting up a user array named new_users, where each user is represented as a hash containing a user name, the respective user’s full name, and finally their password.

Now, a common mistake when iterating over data sets to create thrift mutations is to place all the mutations in a single thrift_mutations array. This is a big no-no, since once you run the mutateRow method, you will perform all the mutations (all 3 users’ worth in this case) against a single row, and for your current purposes you would end up with a single row containing John Doe’s user as opposed to 3 rows containing Alice, Bob, and John.

To avoid this, in the previous code you instantiate a user_row_mutations array. Then you iterate over the new_users array, appending each user’s column mutations to a fresh thrift_mutations array, and finally you append that thrift_mutations array to user_row_mutations. Quite a mouthful, right? Yet not complex at all.

To top it all off, you simply iterate over each of the user_row_mutations arrays and execute mutateRow for each of them. This would be the SQL equivalent of sending three separate INSERT statements to the database.

Using HBase Filters to Get What You Need

If you’ve made it this far, you’re a champ. We’re nearing the end of this quick and painless tutorial so don’t go anywhere just yet.

Ok, you now have a data set. Wonderful!

“How do I retrieve my data now?”, you ask? Simple. Using the HBase Filter Language.

HBase Filters allow you to ask HBase for data, given certain restrictions provided within the filters themselves. They are simple and you can group them together to perform complex data retrieval.

As of this writing, HBase’s Filter Language is very sparsely documented, and the documentation provided by Apache contains many syntactic errors. Even so, the filters themselves are very functional, and so far I haven’t encountered any bugs using them. So I’ll do my best to describe thoroughly the ones I’ll be showing you here, and soon I’ll do a post dedicated solely to HBase Filters.

The first thing you’ll need to learn about HBase Filters is their syntax. HBase Filters look a lot like functions to which you pass arguments. Most filters will more or less follow this syntax:
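Sketched from the description that follows, the general form is presumably:

```
FilterName (condition, 'comparator:value')
```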

Part by part, FilterName is obviously the name of the filter you are going to use (in this post I’ll be showing you the ValueFilter and the DependentColumnFilter), the condition will be a conditional operator such as (but not limited to) = or !=, and the ‘comparator:value’ pair should be looked at as two pieces of the puzzle where the comparator will be determining how the value will be affected by the condition.

There are several comparators. In this post we’ll be looking at the binary and the regexstring comparators.

Let’s put these concepts together with an example of the ValueFilter.

The ValueFilter is basically the SQL equivalent of the WHERE and the LIKE clauses merged together.

Taking into account the ‘user_table’ data set, let’s say you want to retrieve the user named ‘John Doe’. First you must look up the value ‘John Doe’. Using the ValueFilter, it would look like the following:
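Based on the description that follows, the filter presumably reads:

```
ValueFilter (=, 'binary:John Doe')
```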

Quite easy, right? All you’re doing here is telling the ValueFilter to grab the columns’ values and return a binary result of either true or false as to whether the value matches ‘John Doe’. If the binary result is true, the matching row will be served.

Now, you might be wondering “how come we’re not mentioning what column we want the ValueFilter to be run against anywhere on the previous code?”, and that my dear Watson, is a very astute inquiry!

HBase Filters, by nature, can be grouped together by AND and OR operators very much like you can send various WHERE and LIKE clauses grouped together in SQL by saying something like “WHERE a=’b’ AND x LIKE ‘%z’ OR… etc”. So, in order to filter out only the rows which contain the value ‘John Doe’ inside the column named ‘full_name’ you will be grouping the ValueFilter you just wrote with the following DependentColumnFilter:
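Going by the description that follows, that DependentColumnFilter presumably reads:

```
DependentColumnFilter ('full_name', '')
```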

All you’re saying here is “give me all rows which contain the column ‘full_name’”. I should note that the second argument, to which you are passing an empty string, is meant for a column qualifier, but for now you’re solid just by providing the column name.

Now let’s group your filters and see what we come up with:
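Grouped together, the combined filter string presumably reads:

```
ValueFilter (=, 'binary:John Doe') AND DependentColumnFilter ('full_name', '')
```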

Before detailing what you’re doing here with the filters, let me explain that the order in which you place your filters will indeed affect your end result, so be very careful with that.

By placing the filters the way you just did, you are saying: “Fetch all rows that contain the value ‘John Doe’ in any of their columns, AND after that take those resulting rows and bring forth only those which have a column named ‘full_name’ containing that value”.

With all of the above you should now have a pretty useful understanding of how the HBase Filter Language works. Let’s put this knowledge to practice.

From the example files, open the file named hbase-get-row.rb and replace the filter string on the following line with the filter you just wrote:

It should now look like this:
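A sketch of what that line might end up looking like; the one-off client object here fakes the get method’s argument shape as described below so the fragment runs on its own, while in the real hbase-get-row.rb the client is the Thrift HBase connection.

```ruby
filter = "ValueFilter (=, 'binary:John Doe') AND DependentColumnFilter ('full_name', '')"

# Stand-in client; argument shape: table name, columns to include,
# filter string, and an options hash
client = Object.new
def client.get(table, columns, filter_string, options)
  { table: table, columns: columns, filter: filter_string }
end

result = client.get('user_table', ['*'], filter, {})
```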

The arguments you are passing to the get method are the table name (‘user_table’), an array specifying what columns should be included in the final result array (‘*’ means all columns), the filters to run when retrieving the rows, and one optional hash meant for other advanced options you shall not worry about today.

If you run this ruby script now you should be able to see John Doe’s information in all its glory.

You have just created a request which would equate to saying in SQL “SELECT * FROM user_table WHERE full_name=’John Doe’”. But this stops being useful once you need to do more complex comparisons using wildcards, similar to when you would normally use LIKE in SQL. And that’s where the regexstring comparator comes into play.

Let’s say that you wanted all users whose full name would include the string ‘Will’ somewhere in it. In SQL you would normally say “SELECT * FROM user_table WHERE full_name LIKE ‘%Will%’”, but what you want to write to get the same result from HBase will be:
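Swapping the binary comparator for regexstring, the filter string presumably becomes:

```
ValueFilter (=, 'regexstring:.*Will.*') AND DependentColumnFilter ('full_name', '')
```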

Short and simple. All you did there was exchange the ‘binary:John Doe’ portion of your ValueFilter for ‘regexstring:.*Will.*’ et voilà, for all intents and purposes you should’ve gotten back the user information for Bob Williams.

This concludes this HBase tutorial. I wish I could refer you to more information about HBase mutations and filters, but as I mentioned before, documentation on these subjects is still scarce. In the meantime I’ll continue churning out more tutorials and articles on these subjects.

Until next time, Happy Hadoop’ing.





In-Context Communications with WebRTC *is* revolutionary

Posted By: Arin Sime

Communication has always been about “location”

Just like in real estate, communication is all about “location, location, location.” It’s always been that way, but it’s getting turned around.

The first communication tool was the campfire. No, not that campfire, I mean the older one, from before the internet. You know, the one that hurts your hands when you touch it?

At the end of the day, our hunter-gatherer ancestors went to the location of the campfire to relate their stories of the day. Communication among a group was only possible by going to a particular location.

A long time later, along came couriers, the postal service, the telegraph and then the telephone. With each advance, it was easier to communicate without being in the same physical location, but it was still necessary that both parties be in a fixed and known location to reach each other.

A shorter time after that, along comes the internet, email, and cell phones. We can even lump Skype or Google Hangouts into this category. The two people communicating no longer need to be in the same location, or know what geographic location the other party is in. But they still need to know how to reach a specific person. You have to know their cell phone number, their Skype username, or their email address. If you are trying to reach out to someone you don’t know yet, then you have to find this information first. That identifying information and the tool you choose to communicate with become the “location” where your conversation is based.

In-Context Communication changes the concept of location … again

Am I overstating things if I call in-context communication the next great advance? Possibly, but hear me out.

Already we are starting to see the concept of “location” turned on its head in communication. Technologies like WebRTC are enabling this, because they allow you to easily integrate peer-to-peer encrypted transmission of video, audio, and data between two browsers. Ten years from now WebRTC may or may not be the de facto standard for such communication, but I am convinced that the larger concept of in-context communication is revolutionary.


I’ve seen speculation about dating websites using WebRTC to allow for “anonymous” video chat with prospective dates before you exchange any identifying information, and some dating sites are starting to use WebRTC. Perhaps contrary to what we tell our kids, does this make the internet a safer place to meet strangers? You get to see your prospective date, and talk with him or her for a while. You can get over the uncomfortable initial conversations, and see if there is potential. This is before you ever exchange your full names, phone numbers, addresses, or anything else. If there’s no “spark”, either person can end the call and not have to worry “Is he still going to call? Doesn’t he get the message?”

I haven’t been on the dating scene in a very long time thankfully, but that seems like a pretty big revolution in communication to me, especially when you hear statistics about how many people these days first meet their future spouse online.

Customer service is another huge area for in-context communications. If you sell absolutely anything online, then you should consider this in your business model. Looking for expert advice when comparing those two laptops on BestBuy.com? How about starting a video chat immediately with a GeekSquad agent? You could share your desktop with them and show them exactly what models and pricing you are looking at.

Having trouble completing a transaction in your online banking account? There’s no more endless chain of people asking for your name and account number before redirecting you to another agent in another division who also asks for your name and number. When you hit “call support”, your banking site already knows who you are, and what action you just tried to perform, and so it immediately routes you to a customer service specialist for that area.

[Image: Amazon’s Mayday button]

Amazon is leading the charge for this with their MayDay customer support on the Kindle, which is widely rumored and partly confirmed to be using at least parts of WebRTC to enable their video chat. Can’t find that book you just downloaded? Hit MayDay and a rep shows up on your Kindle who can not only answer your question, but also see the same thing you are seeing on your device, move your cursor or type to help you, or draw on your device to “point” where you should click next.

A recent article on WebRTC World touched on this topic when the author talked about a potential “click to call” feature on Twitter. If I’m advertising a business through sponsored tweets, I want you to do more than just follow me on Twitter. How about having you initiate a phone call with me right from the tweet? I do sponsored tweets for our software development business as well as RealTimeWeekly.com, and I would absolutely use a feature like that. Anything I can do to reduce the friction for you to call me about a potential project is a good thing.

Location is now wherever you are

In all of these examples, the location is defined as where you are on the internet, and what community you are choosing to interact with. In-context communications means that you can have meaningful conversation with experts in your field, with acquaintances and friends and even potential dates, with customer service agents, or many other situations, all without ever picking up the phone. You don’t even need to know their name.

The more things change, the more they stay the same. Communications is still about location, but the context of that location is changing. How will your business model change to meet customers where they are, instead of expecting them to call or email you? However your business model is revolutionized by in-context communications, your technology will also need to be revolutionized. WebRTC is the foundation for that.


Interested in learning more about Real-Time data and building real-time web applications? Then you’ll be interested in our free weekly newsletter Real Time Weekly, which provides a round-up of the best news about real time web application technologies. You should also check out our book for building Real-Time Web Applications. And you can always contact us to learn more about how we can help your team build real-time applications!




Ready to improve your product?

In our email newsletter, we'll pull the best content from our blog and others, and provide free advice on UX/Design, lean startups, agile development, nearshoring in Latin America, and interviews with our peers.

Sign up here to learn more!

Using Meetup’s streaming API to mashup event registrations

Posted By: Arin Sime

Meetup provides an API for accessing streams of real-time data about its service which can be fun to play with. In this post, we’re going to consume one of those streams via their javascript interface and mash up event registrations with a google map. When it’s complete, the end result is going to look something like this handy little animated gif we created:


As people RSVP to meetup events around the world, we receive the stream of RSVP’s and display them on the map in the appropriate location. Depending on the time of day, it’s fun to watch and see what parts of the world are awake and RSVP’ing to events.

Is this super useful? Maybe not, but it lets us look at how the Meetup API uses WebSockets, so just come along for the ride.

You can get all the code for this example here: https://github.com/agilityfeat/meetup-streaming

To start, create an index.html file that looks like the following. You’ll notice it’s pretty simple – we’re just putting a div in place for google’s map canvas, and then referencing some javascript files:
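A minimal sketch of such an index.html might look like this (the element ids and script paths here are illustrative; the real file is in the GitHub repository linked above):

```html
<!DOCTYPE html>
<html>
<head>
  <title>Meetup RSVP Map</title>
  <style> #map-canvas { width: 100%; height: 500px; } </style>
</head>
<body>
  <!-- google maps renders the world map into this div -->
  <div id="map-canvas"></div>

  <!-- jquery, the google maps api, Meetup's must.js, and our own app code -->
  <script src="js/vendor/jquery.js"></script>
  <script src="https://maps.googleapis.com/maps/api/js"></script>
  <script src="bower_components/must/must.js"></script>
  <script src="js/meetupmashup/app.js"></script>
</body>
</html>
```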

Underneath the map div, we’re linking to a few javascript files that provide the real work. First, we’re just including a jquery library, and then the google maps api.

Must.js is an important file provided by Meetup; the name stands for “MeetUp STreaming”. It’s part of the Meetup API, and the must.js project provides a nice interface to Meetup’s streaming API.

The simplest way to get a copy of just this file is to go directly to the GitHub project for must. But you should use bower instead, which is a handy tool created by Twitter for managing any front-end dependencies your project might have, like third-party javascript libraries such as must.js. Once you have bower installed, you can get must.js by running this from your project:

bower install must

If you take a look inside the must.js file, you’ll notice wrappers for a variety of streaming API’s: event RSVP’s, comments, check-ins, and photos. We’re just going to use the RSVP’s endpoint in this example, but you can extend our example to also display comments from users about events, check-in’s at events, or display photos as they are uploaded to Meetup in real-time.

Here is the relevant part of must.js for event RSVP’s:

Our code will need to pass a callback method to must.js so that it knows where to send rsvp objects as they come in. Must.js itself is just making a call to the root webservice for streaming rsvps, and passing them back to our callback method.
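In spirit, the rsvps wrapper follows this pattern (an illustrative sketch, not the verbatim must.js source; the stream host and URL scheme below are assumptions based on Meetup’s streaming docs):

```javascript
// Illustrative sketch of the wrapper pattern in must.js -- NOT the
// verbatim source. Each public wrapper (rsvps, comments, checkins,
// photos) connects to one streaming endpoint and forwards every parsed
// object to your callback. The stream host below is an assumption.
var STREAM_ROOT = "stream.meetup.com/2/";

// build the endpoint URL, using the ws:// scheme when WebSockets are
// available and plain http:// for the long-polling fallback
function streamUrl(endpoint, useWebSockets) {
  return (useWebSockets ? "ws://" : "http://") + STREAM_ROOT + endpoint;
}

function connect(url, callback) {
  // real must.js opens a WebSocket here (or long-polls as a fallback)
  // and invokes `callback(obj)` once per streamed JSON object
}

function rsvps(callback) {
  connect(streamUrl("rsvps", true), callback);
}
```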

By default must.js is doing this using WebSockets so that it is done in an efficient socket model, but if your browser doesn’t support WebSockets, then must.js will automatically fall back to a less efficient long polling method. Must.js checks to see if you can support WebSockets here:
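That check amounts to feature-detecting the WebSocket constructor. A paraphrase (not the exact must.js code) might look like this; Firefox briefly shipped the constructor with a Moz prefix, so libraries of that era tested both names:

```javascript
// Paraphrase of the feature check, not the exact must.js source.
// The global object is passed in so the check is visible outside a
// browser; in the browser must.js would effectively call this with
// `window`.
function supportsWebSockets(global) {
  return typeof global.WebSocket !== "undefined" ||
         typeof global.MozWebSocket !== "undefined";
}

// if (supportsWebSockets(window)) { /* open a socket */ }
// else                            { /* fall back to long polling */ }
```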

Assuming that you can in fact support WebSockets, then the must.js code is going to drop down to this chunk of code for receiving call back events from the meetup API:

In this code snippet, the .onmessage event fires whenever there is data to process from Meetup. The must.js code then parses the JSON data returned and, as long as there are no errors, calls a local method handleJson, which simply passes the json data (now stored in the variable “ary”) along to our callback method.
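Reconstructed from that description, the handler is roughly this shape (a sketch, not the exact must.js source; the key assumption is that `event.data` carries a JSON string):

```javascript
// Reconstruction of the onmessage wiring described above -- not the
// exact must.js source. `socket` is the open WebSocket and `callback`
// is the method our app registered for incoming rsvps.
function attachHandler(socket, callback) {
  function handleJson(ary) {
    // hand each streamed object straight to the app's callback
    callback(ary);
  }

  socket.onmessage = function (event) {
    var ary;
    try {
      ary = JSON.parse(event.data); // Meetup streams JSON payloads
    } catch (err) {
      return; // ignore a malformed frame rather than killing the stream
    }
    handleJson(ary);
  };
}
```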

Now that we see how must.js works, let’s look at calling it ourselves. Back in our index.html file, the last javascript we are including is our own:

js/meetupmashup/app.js

You can see the complete file here, but let’s look at just a few key code snippets.

At the very bottom of the file, the first javascript that is executed is this statement:

geo_code_app.init();

This calls the init method, just above, where we see the two key actions that happen on page load:

In the init() function, we are referencing a local object we created called geo_code_app, which is at the top of the javascript file and is where we store our key methods. The first thing we do is call the load_map function, which does pretty standard initialization of the google maps api so we get a full world map displayed on our page. We won’t explain that here, but you can see the code in the app.js file in the repository.
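Put together, the skeleton of app.js can be sketched like this (`must` and the map setup are stubbed here so the wiring is visible outside a browser, and `must.rsvps` is an assumption about the must.js API):

```javascript
// Skeleton of the app.js structure described above -- a sketch, not the
// repository's exact code. The must library and the Google Maps setup
// are stubbed so the page-load wiring can run anywhere.
var must = {
  callbacks: [],
  rsvps: function (cb) { this.callbacks.push(cb); } // real must.js opens the stream here
};

var geo_code_app = {
  map_loaded: false,

  // real code initializes the google maps api into the map-canvas div
  load_map: function () { this.map_loaded = true; },

  // called once per incoming rsvp; builds a map marker for it
  display_rsvp: function (rsvp) { },

  init: function () {
    this.load_map();                          // 1) draw the world map
    must.rsvps(this.display_rsvp.bind(this)); // 2) subscribe to the rsvp stream
  }
};

// the first javascript executed on page load:
geo_code_app.init();
```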

Below the map initialization is the call that tells must.js to stream all rsvp callbacks to our method display_rsvp(). display_rsvp also lives in our app.js file, so let’s look at the full method:
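The subscription statement and the display_rsvp method can be sketched as follows (a reconstruction from this walkthrough, not the repository’s exact code; the rsvp field names and the Google Maps calls are illustrative assumptions, and the Maps namespace is passed in as `gmaps` so the logic is readable outside a browser — the real app.js uses the global `google.maps`):

```javascript
// Sketch of display_rsvp, reconstructed from the walkthrough. Field
// names like rsvp.venue.lat are assumptions about Meetup's rsvp object.
function display_rsvp(rsvp, map, gmaps) {
  // build the info window content from the rsvp details
  var html = '<div class="rsvp"><b>' + rsvp.member.member_name +
             '</b> RSVP\'d to ' + rsvp.event.event_name +
             ' (' + rsvp.group.group_name + ')</div>';

  var marker = new gmaps.Marker({
    position: new gmaps.LatLng(rsvp.venue.lat, rsvp.venue.lon),
    map: map
  });
  var info = new gmaps.InfoWindow({ content: html });
  info.open(map, marker);

  // real-time data piles up fast: remove the marker after a second to
  // make room for the next rsvp coming off the stream
  setTimeout(function () {
    info.close();
    marker.setMap(null);
  }, 1000);
}

// wiring it up -- must.rsvps is the assumed must.js entry point:
// must.rsvps(function (rsvp) { display_rsvp(rsvp, map, google.maps); });
```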

This method will be called once for each rsvp that comes in from the Meetup streaming api. An object named “rsvp” will be passed into our method, and that object from Meetup contains information about the group name for the rsvp, the name of the meetup member who registered, the date and time of the meetup event, venue name, and more. We take all that information and throw it into a few div tags for display as a marker on the map.

The rsvp.venue object also contains latitude and longitude for the venue, which is obviously crucial for the google maps Marker object that we’re going to create on the map. Once the Marker and the associated InfoWindow are created, we call the google maps method to open those.

We’re dealing with real-time data from a very popular website, and so our map would fill up with markers if we didn’t close them down automatically. The last part of the display_rsvp method sets a timeout function on the Marker and InfoWindow so that they are removed from the map after a second to make space for the next set of RSVP’s coming in from Meetup.

This all makes for a fun and relatively simple way to see the power of WebSockets and real-time data streams. What free and publicly available data streams do you want to see us build an example with? Contact me at Arin@AgilityFeat.com and let me know, maybe we’ll use it as the basis of a future blog post.

You can see a live version of this example up at RealTimeWeb.co/geocode, and you can learn more about building real-time web applications in our book at RealTimeWeb.co. In this book, our team shows you how to use publish/subscribe networks and WebRTC to build real-time web applications that involve video, audio, and data applications.

Many thanks go to Allan Naranjo for his work on this example. Allan is a developer at AgilityFeat and lead developer on the example application for our RealTimeWeb.co book.




4 Key Insights about WebRTC from the experts

Posted By: Arin Sime

This week I attended the WebRTC “conference in a conference” that was part of Enterprise Connect in Orlando, Florida.

In addition to being a welcome respite from the lingering cold weather of my home in Virginia, it was also a welcome opportunity to network with the leaders and experts of the WebRTC world. I spent as much time tweeting as I did actually talking to people though – there was so much to absorb, and the active twitter stream of myself and others became my notepad. Unfortunately I did not catch the twitter handles or names of everyone whose comments I quoted. If one of them was you, send your username to me at Arin@AgilityFeat.com and I’ll link to you from here.

Here are 4 key insights I took away from the conference.


#1: WebRTC is nothing by itself

This is a point that I increasingly hear, and I completely agree with. WebRTC is an evolving standard and a set of technologies, but by themselves, they are useless. You cannot build a profitable business or revolutionize your industry with a technology. You build a business on features and value delivered to customers.

Many of us, including myself, get very excited by the possible applications of WebRTC. But we can’t get too excited about something that, by itself, customers don’t care about. Customers only care about solutions, not how you implement them.

A similar refrain is to start any discussion of WebRTC with a phrase like “video/audio in your browser, with no downloads or plugins required!” It’s certainly part of my elevator pitch, but should it be? The lower friction to using WebRTC is certainly attractive, but one panelist also pointed out that “No plugins or downloads cannot be the only reason for WebRTC – look at how many people downloaded whatsapp and skype.”

It’s a valid point. If an app requiring a download solves customers’ problems significantly better than your WebRTC solution, then users will go with the downloadable app. Make sure you are still solving the right problems.


#2: WebRTC offers privacy

One of the big values of WebRTC is its encrypted peer to peer nature. This means that after you have completed the initial handshaking process, you are in a very private conversation. A number of speakers emphasized this point, noting that it opens the door to easily building apps that are much harder for your competitors or governments to snoop on.

Perhaps the use case where this built-in privacy of WebRTC will affect the most people is healthcare. HIPAA compliance in the United States requires, among many other things, that all patient data must be protected. This is an obstacle to telemedicine, where applications must be extra careful that all conversations about patients are kept private. Telemedicine apps built using WebRTC can get over this hump because of WebRTC’s encrypted, peer-to-peer nature. WebRTC does not automatically make you HIPAA compliant, but one of the interesting demos at the conference this week was Net Medical Xpress, which is using a WebRTC based solution in a highly regulated industry. Look for more of this in the future.


#3: WebRTC is not disruptive … yet

I was once on a large boat in the Potomac River when one of the engines failed as we were navigating out of the piers. I stood at the stern of the boat, helpless, as the boat drifted into the large concrete seawall. Although it was futile, I grabbed a metal pole used for fishing buoys out of the water and tried to use that to brace the boat and keep it from hitting the seawall. It didn’t work.

The message to the many traditional telco businesses at Enterprise Connect was that WebRTC is not going to sink your business yet, but at a minimum you do need to keep an eye on it. If you are a startup seeking to disrupt your industry by introducing much better communication tools, WebRTC can help you do it without having to hire telecom developers.

If you’re already in a telecom related business, WebRTC is not going to kill your core business model yet, but you certainly don’t want to be the last person in your industry to adapt to it. Don’t watch your competitors from the vantage point of your sinking boat – start preparing now.

Cullen Jennings is a technologist and Fellow at Cisco, and an important player in the WebRTC standards. Cullen gave an excellent morning talk about the state of WebRTC, and many people were repeating his statement that “Someone in your company should at least be playing with this right now!”

I’ll go one step further and say that if you want to be the innovator in your industry or company, then that “Someone” should be you.


#4: WebRTC is all about in-context

If the “no downloads” aspect of WebRTC is not what makes it revolutionary by itself, then what makes it so special? I think it’s a corollary, the “in-context communications” aspect of WebRTC, and I heard panelists at Enterprise Connect make the same point.

This is the beauty of Amazon’s MayDay customer service application for the Kindle. If you are having a problem with your Kindle, you dial up a customer support agent immediately. You video chat with them, and they can draw on your screen, help you find what you are looking for, and even take control of your Kindle if needed to show you something. None of this requires you to switch applications, call a phone number, or step out of what you are doing at all. The MayDay representative is there, “in” your app, ready to help you where and when you are having problems.

The relative ease with which you can integrate WebRTC into your site, and eventually mobile devices, allows you to completely rethink how you interact with your customers. No more “Contact us” links with an email form to fill out or a phone number to call. Just speak to someone right now, right when and where you need them. It is my dream that this means when I’m on customer support with my bank, never again will I have to read out my account number 5 different times to 5 different customer service reps. They will already know where I am in the banking site, who I am, and they will just get straight to the point and help me.

Now that is revolutionary.


Interested in learning more about Real-Time data and building real-time web applications? Then you’ll be interested in our free weekly newsletter Real Time Weekly, which provides a round-up of the best news about real time web application technologies. You should also check out our book for building Real-Time Web Applications. And you can always contact us to learn more about how we can help your team build real-time applications! (Yes, I do realize the irony that for now, that contact us page I just linked you to is an email form, not in-context communications.)





AgilityFeat’s Mariana Lopez to Speak at MoDevUX 2014

Posted By: Arin Sime

We’re very excited to announce that AgilityFeat’s resident UX rockstar, Mariana Lopez, has been selected to speak at the upcoming MoDevUX 2014 Conference. The conference will be held on May 19th and 20th in McLean, Virginia. As you might guess from the conference name, the conference will focus on user experience design for mobile applications.

Mariana will be leading a session called “Design in the 4th Dimension: Interaction Design for Real Time Applications”. The session will include an overview of best practices for designing real time applications, along with hands-on exercises so you can start using those best practices right away. If you are a designer looking to learn how to design real time applications for data dashboards, collaboration applications, chat applications, or anything remotely “internet of things”, this session is for you.

MoDevUX is for professionals dedicated to exceptional design and user-centric mobile user experiences. ModevUX connects you with today’s foremost design and UX experts for two intensive days of learning. Here’s more information about MoDevUX, including registration details.




5 uses for Real-Time Data Dashboards

Posted By: Arin Sime

What good is “Big Data” if it’s old and out of date? Data analysts and enterprises are finding more needs for real-time insights into their data, not just historical views into data. The term “historical data” is very relative of course, but in an increasing number of use cases, data from yesterday or even an hour ago is very historical. Every minute you can’t act on data is a minute of business or customer value lost.

A recent article in UX Magazine highlighted the “Future of Information Dashboards”, by Shilpi Choudhury, and I’m glad to see that real-time data was one of the predictions she highlighted for 2014.

Who needs real-time data in their analysis toolkit? The odds are that you do, but it may require thinking outside of the way you are used to visualizing and acting on data currently. To help you consider some new possibilities, here are five use cases our team has seen where building in real-time to your data dashboard can allow you to make business decisions faster.


1) Looking for System problems in Real-Time

Catching errors on your website is certainly one way to know that something is going wrong. Shilpi’s article gives a good example of this: highlighting payment system failures on a real-time dashboard helps businesses track key performance metrics. If there is a sudden increase in credit card declines, that could be an indication that something is wrong with your payment processing gateway. It could also be an indication that there is some fraud occurring that you need to be concerned about.

AgilityFeat COO Ford Englander and I used to work in the music industry, and we worked together on IT systems for online ticket sales. When an “On Sale” starts for a big music artist, there are a lot of people clamoring to get the best tickets. Not all of them are legitimate ticket buyers – many of them are ticket scalpers who may be using automated programs to try and snap up the best seats quickly so they can resell them above face value. A real-time analytics dashboard can help you to see what’s happening right now. When a concert sells out in mere minutes, then if you need to take any actions like banning certain users or IP addresses from purchasing more tickets, you need to be able to see the data and make that decision right away.


2) Real-Time view into customer/data segments

We’re building some interesting software for one of our customers that is basically a very sophisticated real-time business intelligence and data dashboard application. I have to describe the application somewhat generically, since unfortunately it’s not public yet. This client is in the logistics industry, and the software will be deployed at multiple company locations to allow them to see the current status of all their operations in real-time. To reallocate the activities of all their different locations, they need a real-time view into the data. The dashboards will often run on large touchscreen displays that allow people to customize the view to the areas of most interest to them.


3) Real-Time view into geo-tagged data

Incorporation of geo-tagged data into data dashboards is another growing trend that Shilpi mentions in her article. Why not combine that geo-location information with real-time data? We built a simple example of this using the Meetup streaming API here. This is an example we use in our upcoming book on building real-time web applications. In this example, we show the location of RSVP’s to Meetup events in real-time. Meetup did the hard work of streaming the data out to a free API (thank you!), we are just mashing this up with google maps in our example. I doubt that Meetup will be using our example to make any actionable decisions, but it’s at least fun to watch for a while and you’ll notice the activity shift around the world depending on the time of day. Another cool example we saw is this real-time bitcoin globe, which shows bitcoin transactions happening around the world as they happen.

These examples may seem more fun than actionable, but depending on your business they could be incredibly powerful. For example, when I was in business school one of our case studies was how Walmart looks at sales across its stores worldwide in real-time. This allows them to realize right away when a particular product is selling unexpectedly well in a certain market. This could lead to an inventory shortage that could not have been predicted, and therefore potentially lost sales. Maybe a big school district just asked all their students to go buy styrofoam balls for a solar system model project. Walmart can catch this before the stores in that area run out of inventory and get extra product shipped there right away.


4) Internet of Things and sensor data

That FitBit band on your arm is just one example of the many different devices starting to permeate our daily lives which have internet connectivity and stream data back to some server somewhere. It might just be data about your latest workout, your fridge asking you to buy more milk, or something spanning thousands of devices across a factory floor or all of a company’s locations.

When I first ventured into freelance software development over a decade ago, one of my first clients was a small company called Avir Sensors. Avir makes a cool chemical detection system that can be used in government buildings, public transport, or the factory floor. I may not have been smart enough to understand the extreme math used to match chemical signatures in an air duct in real-time, but I was smart enough to build the first prototype of the interface software to those sensors. I climbed down into strange tunnels at the University of Virginia with other engineers, we placed a detector, and then we could pull up a web page back in the lab where we saw the data displayed in our browser in real-time. Eventually, fancier software was built to manage an array of devices in a building at once, trigger alerts when a dangerous chemical was detected, control the devices in real-time, and monitor data in real-time from any particular device.


5) In-context Communications

All of the examples I’ve given so far are focused on the data itself, but what about the communications around that data? As you’re going over the latest sales data on the real-time dashboard, why not call up the district sales manager for a video chat? You can use WebRTC technologies to build in data synchronization, video/audio chat, and screen sharing directly into your application so that you can make decisions with your peers in real-time while looking at the same exact data together. There’s no more “flip to slide 24 in my presentation and look at the top chart.” I’m just going to show you the data in real-time, and we can manipulate and slice it together to get exactly the data we need to make a decision now.


What will you build?

In today’s fast-paced business world, the next quarterly or even weekly sales meeting is too late to review data. You need to act on that data now, in real-time.

Strategy decisions still need to be made on a longer time scale, but the tactical decisions to execute your strategy must be made much faster. Make 2014 the year that you start accessing your data in real-time, and making business decisions while they still matter.






4 ways Perfection is killing you

Posted By: Arin Sime

Whether you are a bleeding edge startup or a large corporation running agile teams, here are four ways that you may be abusing agile concepts and letting perfection kill your chances of success.

Product Owner Perfection

In an agile team, the Product Owner (PO) is the “voice of the customer” to the development team.  They play an incredibly important role in the success of the project.  The PO will end up writing most of the user stories that describe the features the team will work on next, and equally important, the PO sets the priority of what to work on next.

Implicit in this prioritization duty is that the PO is also the person who says “go” and approves the deployment of new functionality to production.  So what happens when your product owner keeps saying “just do this one more thing and then we’ll deploy”? 

For a startup, this sort of “all or nothing” attitude will not just kill your project, it will destroy your business.  It will take you so long to satisfy that perfectionist Product Owner that you will end up missing your window of opportunity with customers.  Somebody else is going to build a less perfect version of your idea first, and they will win the customers before you even have a chance.

For a large company, Product Owner perfectionism is still bad.  It means that your schedules are going to slip, major initiatives will not be completed on time, and your agile team is going to look a lot like a very slow waterfall team.

Developer Perfection

Good developers are an essential part of a successful project, and so it’s easy to go overboard and hero-worship their technical prowess.  But sometimes that desire to be the best developer out there can kill your project.

Developers (in general) are very creative folk who take immense pride in their work. The best ones also really enjoy playing around with the latest technologies and trying new things. But even if you’re a non-technical leader, you can’t give them free rein or they will have so much fun building shiny objects that they will never get the project done on time. I’m a developer by training … trust me on this.

Even if you can’t debate the details of technical decisions being made, you need to set clear boundaries for the team to work within.  If there is a fixed date you must have the functionality done by, communicate that from day one.  Make it clear that meeting that date is the most important thing, and you are willing to compromise on anything but that date.  The same is true for other project constraints such as budget or feature sets.  You can’t have everything, so pick one constraint at the beginning, and make it clear to the development team that is the constraint that must be met above all else.

If your project has no constraints (yeah right), then make one up. You need to give the developers those guardrails.

Scrummaster Perfection

Are you agile enough?  I mean, are you truly Agile?  If you hear this a lot from your newly minted and Certified Scrummaster, then you are at risk of Scrummaster Perfection.

It doesn’t have to be agile methods like Scrum or Kanban, or even the Scrummaster as the culprit.  Perhaps we should just call this Process Perfection instead.

Any process should be judged by the results it produces. Not the process results (aka documents and meeting rituals), but the actual value delivered to customers. Are customers happier with us than they used to be? Are they getting more value from us than in the past, and are we delivering that value more efficiently? Then our process is working. That’s all that matters.

It just so happens that agile methods tend to do the best job of delivering more customer delight more efficiently.  That’s why agile is popular.  But they can be abused and if you spend all your time worrying about adhering to some agile book you read, then you will not deliver much customer delight.

Please remind your Scrummaster and team that agile methods are not meant to be prescriptive – they are a set of guidelines and principles.  You can (and should) constantly change your process in order to better serve your customers.  As long as you are still adhering to the basic tenets of the agile manifesto, you can declare yourself agile.

From your customers’ perspective, they don’t even need to know you’re agile.  They don’t give a rip.  You should just be doing a better job for them than you used to.

Quality Perfection

I’ve already complained about perfectionism from our product owners, developers, and scrummasters, so who’s left for me to alienate?  I know, let’s pick on testers for a moment!

I am all for testing, both manual and automated.  I am also all for high quality software.  I hate it when a bug keeps me from doing something important on another company’s web site, so why should I accept anything less on my own?

The thing is … all bugs are not created equal. If I can’t accomplish core web site functionality on the most commonly used browsers, then yes, that bug has to be dealt with quickly and should be caught before it’s deployed to production.

But should bugs in older browsers keep you from deploying software now?  It depends on your customer base, but probably not.  You can probably live without a fully functional site in IE 8.  Or you can change the code for that browser to be simpler and not have as rich functionality.

If the bug is in an obscure part of your administration console that very few users notice, then maybe it doesn’t need to be dealt with right now.

The point is, many testers have a problem with perfectionism.  I find this to be particularly true with testers in large companies because in the past they were judged by the wrong metrics.

A tester should not be judged by how many bugs escaped the test environment and made it to production.  A tester should be judged the same way as the whole team – are we making our customers happy by getting them the right features at the right time with an appropriate level of quality?

Unless you’re building safety-critical systems, you don’t need to deliver perfect quality.   You only need to deliver just enough quality, focused on the right areas.

Is your perfectionism holding you back?

What is keeping you from deploying that code today?  What is keeping you from making that deadline next week?  It may just be endemic perfectionism in one or more parts of your team.  Go find that person right now and reset expectations with them before it’s too late.  You can still save that project if you act now!


