Various Railsconf Memories

I came to the ruby party a bit late. Like most people, it was rails that introduced me to the expressive beauty that is ruby.  I hear people like ntalbott talk about how they've been to every Rubyconf, and it makes me feel like I missed something along the way.  After going to my first Rubyconf last fall, I understand why… I did!

I can actually say that I’ve been to every Railsconf ever held in the history of the universe (all 3 of them).  In case my amazing longevity makes you feel like you missed something, you can go to and see what you missed.  They don’t have everything, but they have lots of keynotes from the past gatherings.  One of my favorites is reliving Chad Fowler playing the ukulele while Rich Kilmer introduces DHH for the 2007 keynote.  Check it out.

Agile — entree or a la carte?

Warning:  opinionated thoughts below.  Just because I found something to be true does not mean you can’t or shouldn’t hold an opposite belief.  I’ll state my opinion, get your own blog and state yours.  We can still be friends.  In other words, this is what I believe, your mileage may vary…

After spending many years in the software development world, I’ve seen projects run a lot of different ways. I think my quality time with opinionated software has helped me harden up some of my own opinions about what works and what doesn’t. There will never be a silver bullet that is the solution for every development project. The style of a process that works or doesn’t in a particular environment is dependent on a number of factors: the attitudes and capabilities of the management, the competence of the development staff, and the trust they have with the management staff.

Though this wasn’t their original intent, very traditional styles like waterfall-based processes were very popular in environments where managers were typically either not developers or they were the type that would rather manage people than code.  It always felt to me like an attempt to manage a very non-manufacturing process with a manufacturing mindset.  The managers would go off and pick the features, make impressive Gantt charts, and come back with “unmoveable” deadlines, then the big dance began.  As the release got closer, things were always behind.  “Unessential” elements like testing & documentation were often pushed aside in favor of getting the code done, yet integrating all that code to work well together was problematic.  As it became obvious that something had to slip, the courtship of compromise would begin.  We’ll agree to ship on that date, you drop this feature, etc.  I can honestly say that in all my years I never reached a deadline where there wasn’t either a slip of the date or a reduction of features, usually a generous amount of both.

I’m not a project manager by trade, and though I’ve done my share of PM work as a team lead I never enjoyed the process.  Why is it so consistently bad?  Here’s a list of the things I think of when contemplating why it never worked that well.  Your experience may be better or worse in some areas.  I’d say the culprits included, but are not limited to:

  • Disconnect between the manager’s Gantt chart and the developers’ daily prioritization
  • Excruciatingly long release cycles
  • Poorly defined requirements
  • Relying on the serial-optimism of developers for estimates
  • Implementing code before thinking about integration/API issues
  • Lack of integration between development and QA
  • Absence of customer input during development
  • Overemphasis on architecture and premature optimization

If you had asked me a year ago what the right answer was, all you would have gotten is a shrug. I’ve tried to do “agile” over the years, or at least I thought I had.  If you read any of the literature, there are a number of components to an agile process.  My typical approach had been to treat it like the dessert tray at a good restaurant.  “Small iterations sound good, I’ll have a couple of those.”  “I’m intrigued by the whole ‘interfacing with the customer’ thing, think I’ll try that one too.”  However, it was always just putting agile-colored lipstick on a 1200 lb. waterfall-based pig.  We may be able to pull off small iterations on our team, but we’d never get buy-in from marketing to release any sooner.  Doing small iterations based on large requirements doesn’t work that well to start with.  Talk to a typical manager about tying up two developers on a single workstation working together (pair-programming) and you will probably get visited by the in-house psychologist.  Most developers avoid writing tests almost as much as they avoid writing documentation.  Write the code, if it works you don’t need any tests.

If your experience has been similar to mine, I’m here to let you know there is hope.  Grockit is by no means a typical project or work environment, but neither is its development process, and the results have been far from typical.  We brought in Pivotal Labs to help us get things rolling.  I could tell early on these guys were incredible developers, but they didn’t just bring their keyboards, they also insisted on how the development process should be run.  I was hopeful, but skeptical.  Now, over a year later, I thought it would be a good idea to share some of what we did, and how well it worked.  My goal is to get some other people to try it, or at least be intrigued enough to want to try it if given the opportunity.  If you’ve been around the block a time or two, you need to ignore that sarcastic voice in your ear and give it a legitimate chance.  I’m just going to do a rough walk-through of recommendations that match our process, then address what I’ve seen with regards to the list above.  We’ve had our app in development for over a year (on weekly iteration #53 as of this writing).  We just took a Jury Selection award at TechCrunch50 and are about to launch a private beta with some very cool functionality.

Recommendation #1: Throw away Microsoft Project and use a real agile project management tool.  Those Gantt charts are usually out of date before the toner dries anyway.  Don’t manage a process that is iterative and creative with a tool that is not.  There are a few good agile PM tools available, we use Pivotal Tracker (which is now in open beta and available to everybody).  It’s become the linchpin of our process.  Use the icebox to collect stories about things you don’t want to forget.  Use the backlog to prioritize and order them, and Tracker will automatically show you where the iterations will fall based on your average velocity.  Learning how to differentiate chores and stories, and how to point the stories, is not obvious, but also not an issue.  As long as you are consistent among the team across the life of the project, it will work itself out.  If you want to talk about how we do it, let me know.

Recommendation #2:  Iterate often, release oftener.  Construct your stories at the right level of granularity so stories can usually be knocked out in a day or two of pairing.  Keep your iterations short (we like 1 week iterations).  We have a planning meeting to start the iteration, a 15 minute daily standup to track our progress as we go, and a retrospective at the end of the iteration.  No extended meetings, no heavy process requirements.  The tasks and accountability usually provided by a dedicated project manager are enforced from within because the team is dedicated to the process.  If you police yourself to make sure your test coverage is good, and your CI build remains green, then fear of deployment should be a thing of the past.  We leverage our CI process to ensure only green stable builds get deployed and can do so whenever we like.
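The “only green builds get deployed” gate is worth making concrete.  Here’s a minimal sketch of the idea as a shell function; the function name and the way the build status arrives are invented for illustration (in practice the status would come from your CI server, not a parameter):

```shell
# Hypothetical deploy gate: refuse to ship anything but a green build.
# In real life $1 would be replaced by a query against your CI server.
deploy_if_green() {
  build_status="$1"
  if [ "$build_status" = "green" ]; then
    echo "deploying"
  else
    echo "build is $build_status; refusing to deploy"
    return 1
  fi
}
```

Wiring a check like this into your deploy script means nobody has to remember to look at the CI dashboard before pushing to production.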

Recommendation #3:  Ignore your inner lone ranger and find a pair.  This is the part of the process I was most skeptical of, but it’s turned out to be the part that amazes me the most.  Skeptics will look at pairing and say you have a 50% loss in productivity.  My advice:  don’t knock it if you haven’t tried it (and tried it with somebody who has done it before).  There are days when two solo devs will produce more code than a pair, however on average you will get much more code out of a pair.  In a pair, the level of focus is much higher because you are accountable to your pair, and you are not available to constantly be distracted by email, im, etc.  When pairing you don’t need to take time for code reviews and you are more confident in what you write because there are two devs standing behind it instead of one.  If you switch up your pairs and make sure everybody is constantly rotating (our goal is to change pairs daily), the result is natural cross training and co-ownership of the entire codebase.  We have no silos where we are dependent on one person to work on a piece of code that only they understand.  The biggest advantage, and one that caught me by surprise, was how much you learn from pairing.  I could stumble through javascript a year ago but it certainly wasn’t my strong suit.  I’m doing things now in javascript that I would never have thought possible, and it’s 100% due to pairing.  I’ve had the opportunity to pair with some amazing devs who were really good at explaining what we were doing and keeping me in the loop.  Now I am driving javascript pairs and helping other guys come up to speed on our fairly complicated javascript platform.  Whatever your doubts, find somebody that knows how to lead a pair and give it a shot.

Recommendation #4:  Write the tests before you write any code.  Test-first development is not solely about making sure you have good test coverage, though that’s a big part of it.  It forces you to think about your code from a different perspective and usually results in better code.  By writing the test first, you are forced to think about how you will use the module/method you are writing before you consider its implementation.  You will almost always make decisions that will result in a better API, and therefore easier integration with the rest of the codebase.  If you start by writing the code, you end up making compromises to force a test around code that is hard to test because it wasn’t written with its user in mind.  Code that is hard to test is ripe for refactoring.  Back up, write the test that reflects the way the code ought to work, then refactor the code so the test passes.
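The red-green loop is easier to show than describe.  Here’s a minimal sketch in plain Ruby so it stands alone without a test framework; the `Post` class and its `slug` method are invented for illustration.  You’d write the expectation at the bottom first, watch it fail (NameError, since `Post` doesn’t exist yet), then write just enough code to make it pass:

```ruby
# Step 2 (green): the smallest implementation that satisfies the
# expectation below.  Writing the expectation first forced an API
# decision -- slugging belongs on the model's public interface.
class Post
  def initialize(title)
    @title = title
  end

  # Turn the title into a url-friendly slug.
  def slug
    @title.downcase.strip.gsub(/\s+/, "-")
  end
end

# Step 1 (red): this line was written before Post existed, stating how
# the code ought to behave.  It sits last here only so the file runs.
post = Post.new("  Agile Entree or A La Carte ")
raise "expected a url-friendly slug" unless post.slug == "agile-entree-or-a-la-carte"
```

The point isn’t the slug logic, it’s the order of operations: the expectation shaped the method’s name, arguments, and return value before a line of implementation existed.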

Recommendation #5:  Test within the proper context.  Instead of just writing a smattering of regression tests and hoping you’ve got it covered, make sure you’ve got it covered.  Write tests that are appropriate for their context.  For example, make sure your model specs test any custom methods in your model.  If those tests make some expectation on fixture data that if broken would break the model spec, then write a fixture spec to bake that assertion into your test suite.  If you know your model methods are well tested, then when writing the controller test you don’t need to repeat the model assertions; use mocking and stubbing instead and just test the controller code.  The same goes for view tests, which should as much as possible only depend on code in the view layer.  Make sure the end-to-end scenarios are covered using selenium or firewatir tests, but the bulk of your testing should be layered into context-partitioned tests.
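In a Rails spec you’d reach for your framework’s mocking helpers, but the idea itself can be sketched in plain Ruby.  The controller and model below are invented for illustration: because the model layer is trusted (it has its own specs), the controller test swaps in a stub, so nothing about the model’s internals can break it:

```ruby
# A hypothetical controller-layer object: all it does is format
# whatever the model reports, so that's all its test should cover.
class ScoresController
  def initialize(model)
    @model = model
  end

  def show
    "Average score: #{@model.average_score}"
  end
end

# The real model might hit the database; for the controller test we
# hand in a one-off stub that answers average_score and nothing else.
StubModel = Struct.new(:average_score)

controller = ScoresController.new(StubModel.new(42))
raise "controller formatting broken" unless controller.show == "Average score: 42"
```

If the model’s averaging logic changes, the model spec catches it; this test stays green because it only asserts on the controller’s own responsibility.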

Recommendation #6: Your code should be self-documenting.  If you think it needs comments, what it really needs is refactoring.  Comments are useful for things like marking your place if you need to take a break, leaving TODO notes for refactorings that need to happen but must be postponed, etc.  If you find yourself in a section of code where you feel compelled to write comments because you don’t think anybody else can figure out what is going on, that should set off a five-alarm bell in your head that it’s time to refactor this code.  Especially when working with a language with the expressiveness of ruby, there’s no reason to write code that would be hard to understand.  Pairing helps with this because usually your pair won’t let you leave ugly code lying around that somebody else will have to clean up later.  Keep code clean and refactored as a favor to your teammates because they have to use it too.  It’s also your job as a craftsman to leave work you are proud of behind.
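A small before/after makes the point; the `Order` example is invented.  Extract the commented block into well-named methods and the comment becomes unnecessary, because the names now do the explaining:

```ruby
class Order
  def initialize(items)
    @items = items
  end

  # Before: the comment does the explaining.
  def total_before
    # sum up line items, then add 8% tax
    sum = @items.sum { |i| i[:price] * i[:qty] }
    sum + (sum * 0.08)
  end

  # After: same behavior, but the intent lives in the method names.
  def total
    subtotal + sales_tax
  end

  private

  def subtotal
    @items.sum { |item| item[:price] * item[:qty] }
  end

  def sales_tax
    subtotal * 0.08
  end
end
```

The refactored version also hands you seams for free: `subtotal` and `sales_tax` can now be tested, reused, or overridden independently.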

Recommendation #7:  Grand expectations of future needs are usually wrong.  It’s a good idea to keep future needs in mind, but only as reminders for where you need to hook in future behavior.  When it comes to things that are not part of the current story, remember YAGNI (You Ain’t Gonna Need It).  More often than not, priorities and/or requirements shift and that “crucial” feature becomes less so.  Plan for tomorrow, but build for today.  Most projects accrue a huge amount of code debt to maintain code and tests for features that have never been implemented and maybe never will be.  If you don’t need it today, don’t build it.  The code will still be here tomorrow.  Along the same lines, don’t cripple yourself by aggressive premature optimization.  Optimize the problems and bottlenecks to be sure, but have patience and work on demonstrable problems, not the ghosts you think will become problems.

Recommendation #8: Spikes are good, but should not be converted to code.  This one is a corollary to the last one.  Spikes are an important part of any agile methodology.  There are some problems that need investigation, where you can’t test-drive your way to knowing the best way to build it.  Spikes require discipline as well, but of a different sort.  Throw caution to the wind, don’t worry about writing tests, just code to your heart’s content and get it working.  It’s ok, really.  You need that freedom to fully investigate the best implementation.  However, once you think you’ve got it nailed, this is where the real discipline comes in, and this is non-negotiable.  The next step is to DELETE THE SPIKE!!!  Don’t hesitate, toast it immediately.  This was very traumatic to me the first time I saw a four hour spike disappear before my very eyes.  I thought my pair was insane.  However, consider this.  You’ve just proven that you can write the code once, and if so then you can do it again.  By throwing away the code, then starting with the tests, you can take the implementation you just deleted and think about it from the API perspective that test-first development gives you.  Get back in the groove, follow your normal red-green-refactor workflow, and chances are the test-driven implementation will be at least as good and usually better than the one you pitched.  By reimplementing it from a test-driven mindset you’ll be much happier with the final result.

In closing, I think if you now go back up the page and re-read that initial set of complaints I had about traditional development processes, you’ll see that every one of those concerns/shortcomings is addressed in these steps.  I realize there are environments where this will never work.  It requires those in charge to loosen the reins and trust their developers to take ownership of the product and the process.  It requires the customer to be closely involved and constantly approving/revising the stories as they are written up and developed.  It requires developers to be willing to share what they know and learn from each other.  It requires code ownership to be shared.  It requires craftsmen who take pride in their work and support their teammates by keeping the codebase clean, and ensuring that every interaction with the code leaves it in a better state than it started.  It is not an easy transition, and is hard to maintain if you do not build the process into the DNA of your organization.  However, if you can get the buy-in from your teammates, and the support of your management, I urge you to give it a shot.  Everybody deserves the chance to have this much fun and take this much pride in their profession.

Better Validation/Association Testing

I’ve learned to test almost to a fault, though I’m not really sure you can take it that far. If you write tests at every layer of your app, then you don’t have to deal with trying to do too much in any one test. If my model methods are well tested, then I can mock them out without guilt in my controller layer and focus only on what the controller is supposed to do, for instance.  If you keep your layers well tested independently, then it’s easier to maintain.

One thing I’ve struggled with in the past is how to test validations and associations in model tests.  Historically what I would do is this.  Say I have a model (Blarf) that has some required attribute (fluggle):

class Blarf < ActiveRecord::Base
  validates_presence_of :fluggle
end

In order to test this, I would rely on the ActiveRecord validations to trigger an error like so:

describe Blarf do
  describe "Validations" do
    describe "#fluggle" do
      it "is a required field" do
        model = Blarf.new
        model.should_not be_valid
        model.errors.on(:fluggle).should_not be_nil
      end
    end
  end
end

While this works and makes sure the field is always required, it’s too heavy and invasive. Just like I assume the other layers of my code are well tested, so I only need to test the appropriate layer at hand (see the model/controller example above), I make the same assumption about the frameworks/plugins I use. If I uncover an issue in them, then I fork the repo and fix it there (by starting with a failing regression test), using the same approach as if the bug were found in my code. What’s always bothered me about my validation tests in the past is that I shouldn’t have to test that the AR validations work, only that they are in place. The same goes for testing associations.

So imagine my joy when I recently uncovered the rspec_validation_expectations plugin from Matthew Bass. It leverages the goodness of the validation_reflection plugin to give you helpers to make assertions about existing validations and associations. In the overly simplistic example above, it now becomes simply:

describe Blarf do
  describe "Validations" do
    it_should_validate_presence_of :fluggle
  end
end

The great thing is that it doesn’t test the validation, only that it exists. It introspects on the class to make sure that your “validates_presence_of” code is present. It comes out of the box with the following validation helpers:

  • it_should_validate_presence_of
  • it_should_validate_numericality_of
  • it_should_validate_uniqueness_of
  • it_should_validate_inclusion_of
  • it_should_be_createable
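Under the hood, the mechanism these helpers rely on is simple: a class macro like validates_presence_of leaves a record on the class that a spec can read back.  Here’s a toy sketch of that idea in plain Ruby (this is not the plugin’s actual implementation; the module, method, and class names are invented for illustration):

```ruby
# A stand-in for the ActiveRecord macro layer: each validation macro
# just records what was declared so a spec can introspect it later.
module TinyValidations
  def validations
    @validations ||= []
  end

  def validates_presence_of(attr)
    validations << [:presence, attr]
    # (a real implementation would also install the runtime check here)
  end
end

class Blarf
  extend TinyValidations
  validates_presence_of :fluggle
end

# The helper-style assertion: verify the declaration exists without
# ever exercising the validation itself.
raise "missing validation" unless Blarf.validations.include?([:presence, :fluggle])
```

That last line is the whole trick: the spec checks that the macro was called, and trusts ActiveRecord’s own test suite to prove the macro works.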

I’ve forked it and added the following (I’ll be submitting a patch to get these folded back in once I think I’m done with what I need):

  • it_should_validate_length_of
  • it_should_validate_confirmation_of
  • it_should_validate_format_of

It also has helpers for associations. The following are already coded:

  • it_should_belong_to
  • it_should_have_many
  • it_should_have_and_belong_to_many
  • it_should_have_one

I love the amount of code this has removed from my model tests. It’s clean, concise, and more to the point of surgical testing. I am happy to write tests around any custom model behavior I build, and this gives me a way to make sure all my association and validation hooks from ActiveRecord are in place without having to write code to make sure AR is doing its job. Less code and better coverage works for me!