Ruby Date Shifts

I was tracking down a failing test last night for a date-related calculation. I knew the test had been passing and I hadn’t touched the code, so I strongly suspected a time zone related failure. The confusing part was that these were Date objects, not DateTime objects, so they shouldn’t have needed any time zone adjustment.

Consider this snippet of rspec:

days_to_finish = 7
some_object.started_on = days_to_finish.days.ago.to_date
some_object.do_some_finish_action
some_object.days_to_finish.should == days_to_finish

That was the gist of the code that was failing, and the failure was “expecting 7, got 6”. I puzzled over it for a bit, then I recreated it in the console and found my issue. The problem was that I was using the days.ago helper that Rails adds. That generates a DateTime object, but in UTC.

So here is what triggered it for me in console.

ruby-1.9.2-p290 :001 > Time.now
=> Mon, 10 Oct 2011 23:22:51 EDT -04:00
ruby-1.9.2-p290 :002 > 7.days.ago
=> Tue, 04 Oct 2011 03:22:51 UTC +00:00

Time.now produces a time zone adjusted object, while 7.days.ago produces a time in UTC. Running to_date on the objects just strips the time portion out, so converting both of these does in fact give you dates that are only 6 days apart instead of the expected 7. With that knowledge, the fix was easy. I changed:

days_to_finish = 7
some_object.started_on = days_to_finish.days.ago.to_date

to

days_to_finish = 7
some_object.started_on = Date.today - days_to_finish.days

and my specs were green again. Nothing major, but sometimes time zone issues can crop up where you least expect them.
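
To see the shift in isolation, here’s a minimal sketch in plain Ruby (the timestamp and the -04:00 offset are assumed for illustration, roughly matching the console session above) showing how the same instant turns into two different calendar days once the time portion is stripped:

require 'date'

# The same instant, expressed in UTC and in a local zone (US Eastern, -04:00)
utc_time   = Time.utc(2011, 10, 11, 3, 22, 51)
local_time = utc_time.getlocal("-04:00")

puts utc_time.to_date    # prints 2011-10-11
puts local_time.to_date  # prints 2011-10-10

# to_date drops the time and the zone from each object, so the "same"
# moment lands on two different dates. Subtract 7 days from the UTC one,
# compare against a local date, and you're suddenly off by a day.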

Changing it up, yet again

In the interest of getting some fresh content here, I’m giving up on my big plans to write something to manage this and going back to WordPress.  I had great intentions to build my own blog engine and use it as a way to show off my mad skillz, but in reality I’m so busy elsewhere between work and home that it’s just been a big source of guilt.

There will clearly be some churn over the next few days while I settle back into WordPress and get things set up, then hopefully the content will come more regularly.  Check back soon!

Testing your test data

Ever had the situation where a test that had run reliably for a while suddenly starts failing and nobody knows why? You spend hours digging through the code with no luck, then later find out somebody tweaked a fixture or factory for another test, breaking your test. Optimally, you have a team practice of always running the tests before any checkin, followed up by a CI instance that is constantly making sure everything is green, so these things get caught right away. However, even on projects with the best of intentions, we all know there are times when that just doesn’t happen.

The optimal way I’ve found to ease this pain is to use two simple practices: hardcode your assumptions, and test the test data. It only takes a few seconds to add these simple tests, but it’s worth it the first time one of them fails and you can pinpoint the problem immediately. Let me explain what I mean by both of these things.

A quick note about the source for test data: I tend to use a mix of fixtures and factories. I’m not sure I can quantify why I use both, or what my determinant is for when to use one vs. the other. I definitely prefer fixtures for static reference data, and for things like standard users or data scenarios I need for a lot of different tests. Factories fit my brain better for customized dynamic scenarios that only need to be created for small subsections of the tests. I find that mixing and matching works well for me, YMMV.

Hardcoding assumptions is simply turning your assumptions into assertions. I love rspec, so that’s what I’ll use to illustrate. If I set up a describe or context block with some text that assumes a data condition, then the accompanying before block should ALWAYS assert that condition so you know it’s true. Assume I’m testing a user controller, and I have different behavior that should happen based on whether the current user is an admin. Here’s how that would look:

describe UsersController do
  describe "#index" do
    context "current user is an admin" do
      before do
        # ensure that the user for this test is an admin
        some_user.should be_admin
      end

      it "...."
    end

    context "current user is not an admin" do
      before do
        # ensure that the user for this test is not an admin
        some_other_user.should_not be_admin
      end

      it "...."
    end
  end
end

If you’re not used to seeing describe/context blocks broken out that explicitly, don’t get hung up on it; that’s just my preferred style. The important part is that I have an assumption in my context description that needs to be true in order for the tests to be accurate. However, if I don’t harden that assumption by asserting it, somebody else could change the fixture or factory that user comes from and break my assumption without my knowledge, and it wouldn’t be clear that the test broke due to data rather than code.

The second related practice I like is to test the data. I’d love to know how many times some test breaks because somebody added a new validation, and the fixture/factory didn’t get updated to provide that data so any save on that model fails. This seems to have bitten me more with controller tests than anything else, and it was rarely obvious what the real cause of the failure was initially. I learned this trick when I worked at Grockit and it’s part of everything I do now. Create a set of fixture and factory specs that just do a simple validity test on all of the generated test data. In most cases, you are assuming the source of your test data will return valid data to you, so bake that assumption into an assertion as well so you have a hard failure if things break down. I have a spec/fixture_specs directory containing specs for every fixture in spec/fixtures, and the same goes for the factories I create.

Here’s what a typical fixture spec looks like:

require File.expand_path(File.dirname(__FILE__) + '/../spec_helper')
context "All User fixtures" do
  specify "are loaded" do
    User.count.should be > 0
  end
  specify "are valid" do
    User.all.each do |user|
      user.should be_valid
    end
  end
end
context "individual fixtures"
  specify "are loadable by name" do
    [:bocephus, :pharoah].each do |name|
      users(name).should_not be_nil
    end
  end
  it "bocephus should be an admin" do
    users(:bocephus).should be_admin
  end
end

The first two blocks are critical and will catch a number of problems: a migration that tweaked the table and made my fixtures unloadable by the database, a new validation that renders them invalid, etc. Finding the error in this spec is much more direct and easier to fix than wondering why a controller create action suddenly fails to save the form data. The last two blocks are useful at times, but should only be added if necessary. When I first created this project, I had an issue with the fixtures loading at all, so I created the first block to help me get that working. If there are global assumptions about any specific fixture users, hardcoding those assumptions here creates a single point of failure that may explain lots of other failures.
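
For example, a change as small as a new validation will trip the “are valid” spec above immediately. This is just a hypothetical sketch; the time_zone attribute is made up for illustration:

class User < ActiveRecord::Base
  # Hypothetical new requirement: if the existing fixtures don't set this
  # attribute, every fixture record becomes invalid the moment this lands,
  # and the fixture spec fails with a clear message instead of a mysterious
  # controller failure.
  validates_presence_of :time_zone
end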

My approach to factories is similar, but I’ve found a trick for creating a global factory spec that handles most of this work. Here’s what mine looks like:

require File.expand_path(File.dirname(__FILE__) + '/../spec_helper')
Factory.factories.keys.each do |factory|
  describe "#{factory.to_s.titleize} Factory" do
    describe "default #{factory}" do
      attr_reader :model
      before do
        @model = Factory.build(factory)
      end
      it "should not be nil" do
        model.should_not be_nil
      end
      it "should be valid" do
        model.should be_valid
      end
      it "should be able to save without error" do
        model.save!
      end
    end
  end
end
#
# When you need to test a specific factory, uncomment this and set the appropriate factory.
# Then run a focused spec from the before block.
#
# describe "Focused Factory" do
#   describe "default" do
#     attr_reader :model
#     before do
#       @model = Factory.build(:factory_to_debug)
#     end
#
#     it "should not be nil" do
#       model.should_not be_nil
#     end
#
#     it "should be valid" do
#       model.should be_valid
#     end
#
#     it "should be able to save without error" do
#       model.save!
#     end
#   end
# end

This will actually run a similar spec for every factory you have defined. The downside is that if one of them fails, you can’t easily debug it using this global spec. That’s why there is a specific version commented out at the bottom. If you get a failure, you can comment out the top half, uncomment the bottom half, and change :factory_to_debug to whatever factory has the failure, then switch it back once it is working. Along with this global factory spec, I still have specific ones for factories that need unique assertions, just as I do with fixture specs.
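
If the comment/uncomment dance gets tiresome, one possible variation (my own tweak, not part of the original setup) is to gate a focused block on an environment variable so the file never needs editing. The FOCUSED_FACTORY name here is just an example:

# Runs only when FOCUSED_FACTORY is set, e.g. FOCUSED_FACTORY=user
if ENV['FOCUSED_FACTORY']
  describe "Focused #{ENV['FOCUSED_FACTORY']} Factory" do
    attr_reader :model

    before do
      @model = Factory.build(ENV['FOCUSED_FACTORY'].to_sym)
    end

    it "should not be nil" do
      model.should_not be_nil
    end

    it "should be valid" do
      model.should be_valid
    end

    it "should be able to save without error" do
      model.save!
    end
  end
end

Running the factory specs with that variable set then exercises just the failing factory through the same checks, without touching the file.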

I have seen the benefit of this to the point where it’s a standard part of how I work now. Give it a shot and see if it doesn’t help narrow down your test failures faster.

Porkbarrel Spending

Life is pretty crazy right now.  I spend a lot of time on the west coast, and since we pair-program all the time I usually work west coast hours when I’m home in NY.  That makes for an interesting schedule with many challenges and opportunities.  We had some big production issues last night and my 2nd grader was not happy that I was absorbed with work during the time when we usually play a game before she goes to bed.  I told her we’d make up for it this morning.

One of the huge bonuses of the weird schedule is that my mornings are free to help my wife with homeschooling our brood.  I’ve been doing math with the aforementioned 2nd grader and we’ve been working through a chapter on money (counting/rearranging coins, etc).  So after we were done this morning, it was time for the makeup from last night.  She was counting on playing a game of Pass the Pigs.  Since she’s doing all coins at or under $1, and since in the game you accumulate points to 100 based on how the little piggies roll (it’s really great fun if you haven’t tried it), it was perfect.  We played a couple of games of Pass the Pigs, and she had to keep score for both of us using coins instead of a point tally on a sheet of paper.  After each turn, once she stopped and collected the coins she’d earned, I would make her rearrange them to get down to the minimum number of coins for that value.  Once she got into the groove, she got faster and faster and was doing more of it in her head.  It was awesome.  Yet another example of how games are the perfect avenue for teaching, and how the brain is more engaged during play than at any other time.

I was amazed at how using money instead of points changed the risk of the game as well.  I am usually pretty cavalier in that game, and often lose huge points because I push it too far.  By having coins in front of me instead of an abstract number, I found myself playing much more conservatively.  She didn’t seem to have the same approach, but she’s only 6, so money doesn’t exactly translate the same in her brain.  It made me wonder if people would treat the stock market differently if they were handling actual cash instead of stock certificates or shares.  Something about actual currency clarifies risk like nothing else.