Warning: opinionated thoughts below. Just because I found something to be true does not mean you can’t or shouldn’t hold an opposite belief. I’ll state my opinion, get your own blog and state yours. We can still be friends. In other words, this is what I believe, your mileage may vary…
After spending many years in the software development world, I’ve seen projects run a lot of different ways. My quality time with opinionated software has helped me harden some of my own opinions about what works and what doesn’t. There will never be a silver bullet that is the solution for every development project. Whether a given process works in a particular environment depends on a number of factors: the attitudes and capabilities of management, the competence of the development staff, and the trust between the two.
Though this wasn’t their original intent, traditional styles like waterfall-based processes were most popular in environments where managers were typically either not developers, or were the type that would rather manage people than code. It always felt to me like an attempt to manage a very non-manufacturing process with a manufacturing mindset. The managers would go off and pick the features, make impressive Gantt charts, and come back with “unmoveable” deadlines, and then the big dance began. As the release got closer, things were always behind. “Unessential” elements like testing and documentation were often pushed aside in favor of getting the code done, yet integrating all that code to work well together was problematic. As it became obvious that something had to slip, the courtship of compromise would begin: we’ll agree to ship on that date, you drop this feature, and so on. I can honestly say that in all my years I never reached a deadline without either a slip of the date or a reduction of features, and usually a generous amount of both.
I’m not a project manager by trade, and though I’ve done my share of PM work as a team lead, I never enjoyed the process. Why is it so consistently bad? Here’s a list of the things I think of when contemplating why it never worked that well. Your experience may be better or worse in some areas. I’d say the culprits included, but were not limited to:
- Disconnect between the manager’s Gantt chart and developers’ daily priorities
- Excruciatingly long release cycles
- Poorly defined requirements
- Relying on the serial optimism of developers for estimates
- Implementing code before thinking about integration/API issues
- Lack of integration between development and QA
- Absence of customer input during development
- Overemphasis on architecture and premature optimization
If you had asked me a year ago what the right answer was, all you would have gotten is a shrug. I’ve tried to do “agile” over the years, or at least I thought I had. If you read any of the literature, there are a number of components to an agile process. My typical approach had been to treat it like the dessert tray at a good restaurant. “Small iterations sound good, I’ll have a couple of those.” “I’m intrigued by the whole ‘interfacing with the customer’ thing, think I’ll try that one too.” However, it was always just putting agile-colored lipstick on a 1200 lb. waterfall-based pig. We might be able to pull off small iterations on our team, but we’d never get buy-in from marketing to release any sooner, and doing small iterations based on large requirements doesn’t work that well to start with. Talk to a typical manager about tying up two developers on a single workstation working together (pair programming) and you will probably get visited by the in-house psychologist. Most developers avoid writing tests almost as much as they avoid writing documentation: write the code, and if it works you don’t need any tests.
If your experience has been similar to mine, I’m here to let you know there is hope. Grockit is by no means a typical project or work environment, and its development process is far from typical, with far from typical results. We brought in Pivotal Labs to help us get things rolling. I could tell early on these guys were incredible developers, but they didn’t just bring their keyboards; they also insisted on how the development process should be run. I was hopeful, but skeptical. Now, over a year later, I thought it would be a good idea to share some of what we did, and how well it worked. My goal is to get some other people to try it, or at least be intrigued enough to want to try it if given the opportunity. If you’ve been around the block a time or two, ignore that sarcastic voice in your ear and give it a legitimate chance. I’ll do a rough walkthrough of recommendations that match our process, then address what I’ve seen with regard to the list above. We’ve had our app in development for over a year (on weekly iteration #53 as of this writing). We just took a Jury Selection award at TechCrunch50 and are about to launch a private beta with some very cool functionality.
Recommendation #1: Throw away Microsoft Project and use a real agile project management tool. Those Gantt charts are usually out of date before the toner dries anyway. Don’t manage a process that is iterative and creative with a tool that is not. There are a few good agile PM tools available; we use Pivotal Tracker (which is now in open beta and available to everybody). It’s become the linchpin of our process. Use the Icebox to collect stories about things you don’t want to forget. Use the Backlog to prioritize and order them; Tracker will automatically show you where the iterations will fall based on your average velocity. Learning how to differentiate chores from stories, and how to point the stories, is not obvious, but it’s also not a problem. As long as you are consistent across the team over the life of the project, it will work itself out. If you want to talk about how we do it, let me know.
Recommendation #2: Iterate often, release oftener. Construct your stories at the right level of granularity so a story can usually be knocked out in a day or two of pairing. Keep your iterations short (we like one-week iterations). We have a planning meeting to start the iteration, a 15-minute daily standup to track our progress as we go, and a retrospective at the end of the iteration. No extended meetings, no heavy process requirements. The tasks and accountability usually provided by a dedicated project manager are enforced from within, because the team is dedicated to the process. If you police yourselves to make sure your test coverage is good and your CI build remains green, then fear of deployment should be a thing of the past. We leverage our CI process to ensure only green, stable builds get deployed, and we can deploy whenever we like.
Recommendation #4: Write the tests before you write any code. Test-first development is not solely about making sure you have good test coverage, though that’s a big part of it. It forces you to think about your code from a different perspective and usually results in better code. By writing the test first, you are forced to think about how you will use the module or method you are writing before you consider its implementation. You will almost always make decisions that result in a better API, and therefore easier integration with the rest of the codebase. If you start by writing the code, you end up making compromises to force a test around code that is hard to test because it wasn’t written with its user in mind. Code that is hard to test is ripe for refactoring. Back up, write the test that reflects the way the code ought to work, then refactor the code so the test passes.
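To make the red-green-refactor loop concrete, here is a minimal, dependency-free sketch in Ruby. The `Cart` class and its API are hypothetical illustrations, not code from our project; in practice you would use a test framework like RSpec rather than a hand-rolled assertion helper.

```ruby
# A tiny stand-in for a test framework's assertion (illustrative only).
def assert_equal(expected, actual)
  raise "expected #{expected.inspect}, got #{actual.inspect}" unless expected == actual
end

# Step 2 (green): the simplest implementation that satisfies the test below.
# Note how the test pushed us toward a no-argument, caller-friendly total.
class Cart
  def initialize
    @items = []
  end

  def add(price)
    @items << price
    self # returning self lets callers chain adds, a decision the test drove
  end

  def total
    @items.sum
  end
end

# Step 1 (red): this assertion was written first, before Cart existed,
# forcing a decision about the API from the caller's point of view.
cart = Cart.new.add(10).add(2)
assert_equal 12, cart.total
```

The point is the ordering: the assertion at the bottom existed, and failed, before `Cart` did, which is what steered `total` toward an interface its callers actually want.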
Recommendation #5: Test within the proper context. Instead of just writing a smattering of regression tests and hoping you’ve got it covered, make sure you’ve got it covered. Write tests that are appropriate for their context. For example, make sure your model specs test any custom methods in your model. If those tests make some expectation about fixture data that, if broken, would break the model spec, then write a fixture spec to bake that assertion into your test suite. If you know your model methods are well tested, then when writing the controller test you don’t need to repeat the model assertions; use mocking and stubbing instead and just test the controller code. The same goes for view tests: they should, as much as possible, depend only on code in the view layer. Make sure the end-to-end scenarios are covered using Selenium or FireWatir tests, but the bulk of your testing should be layered into context-partitioned tests.
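Here is a toy, framework-free sketch of that layering in Ruby. `Score` and `ScoresController` are hypothetical names for illustration; in a real Rails app you would reach for RSpec’s mocking and stubbing instead of a hand-rolled double.

```ruby
# Model layer: the custom method is tested directly, against real logic.
class Score
  attr_reader :points

  def initialize(points)
    @points = points
  end

  # Custom model method, covered by the model-level test below.
  def letter_grade
    points >= 90 ? "A" : "B"
  end
end

# Controller layer: the finder is injected so a test can hand in a stub.
class ScoresController
  def initialize(finder)
    @finder = finder
  end

  def show(id)
    score = @finder.call(id)
    "Grade: #{score.letter_grade}"
  end
end

# Model test: asserts on the real grading rules.
raise "model spec failed" unless Score.new(95).letter_grade == "A"

# Controller test: a stub stands in for the model, so this test only
# verifies the controller's formatting, not the grading rules, which
# the model test above already covers.
stub_score = Struct.new(:letter_grade).new("A")
controller = ScoresController.new(->(_id) { stub_score })
raise "controller spec failed" unless controller.show(1) == "Grade: A"
```

If the grading rules change, only the model test breaks; the controller test keeps passing because it never depended on them. That isolation is what context partitioning buys you.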
Recommendation #6: Your code should be self-documenting. If you think it needs comments, what it really needs is refactoring. Comments are useful for things like marking your place if you need to take a break, or leaving TODO notes for refactorings that need to happen but must be postponed. If you find yourself in a section of code where you feel compelled to write comments because you don’t think anybody else can figure out what is going on, that should set off alarm bells in your head that it’s time to refactor. Especially when working with a language as expressive as Ruby, there’s no reason to write code that is hard to understand. Pairing helps with this, because usually your pair won’t let you leave ugly code lying around for somebody else to clean up later. Keep code clean and refactored as a favor to your teammates, because they have to use it too. It’s also your job as a craftsman to leave behind work you are proud of.
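A hypothetical before-and-after in Ruby shows the idea: the “before” needs a comment to be understood, while the “after” pushes that explanation into intention-revealing names (all names here are illustrative, not from our codebase).

```ruby
# Before: a comment props up opaque code.
#   # users active in the last 30 days who are on a paid plan
#   users.select { |u| u.last_seen > Time.now - 30 * 24 * 3600 && u.plan != :free }

# After: extract intention-revealing predicates, and the comment disappears.
User = Struct.new(:last_seen, :plan) do
  def recently_active?
    last_seen > Time.now - 30 * 24 * 3600
  end

  def paying?
    plan != :free
  end
end

def engaged_customers(users)
  users.select { |u| u.recently_active? && u.paying? }
end

users = [
  User.new(Time.now - 60, :pro),               # active and paying
  User.new(Time.now - 60, :free),              # active but free
  User.new(Time.now - 90 * 24 * 3600, :pro)    # paying but inactive
]
engaged_customers(users) # selects only the first user
```

The refactored version reads like the comment it replaced, and unlike a comment, the method names can never drift out of sync with the behavior.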
Recommendation #7: Grand expectations of future needs are usually wrong. It’s a good idea to keep future needs in mind, but only as reminders for where you need to hook in future behavior. When it comes to things that are not part of the current story, remember YAGNI (You Ain’t Gonna Need It). More often than not, priorities and/or requirements shift and that “crucial” feature becomes less so. Plan for tomorrow, but build for today. Most projects accrue a huge amount of code debt maintaining code and tests for features that have never been implemented and maybe never will be. If you don’t need it today, don’t build it; the code will still be here tomorrow. Along the same lines, don’t cripple yourself with aggressive premature optimization. Optimize the real problems and bottlenecks, to be sure, but have patience and work on demonstrable problems, not the ghosts you think will become problems.
Recommendation #8: Spikes are good, but should not be converted to production code. This one is a corollary to the last one. Spikes are an important part of any agile methodology. Some problems need investigation first; you can’t test-drive them because you don’t yet know the best way to build them. Spikes require discipline as well, but of a different sort. Throw caution to the wind, don’t worry about writing tests, just code to your heart’s content and get it working. It’s OK, really. You need that freedom to fully investigate the best implementation. However, once you think you’ve got it nailed, this is where the real discipline comes in, and this is non-negotiable. The next step is to DELETE THE SPIKE!!! Don’t hesitate, toast it immediately. This was very traumatic to me the first time I saw a four-hour spike disappear before my very eyes. I thought my pair was insane. However, consider this: you’ve just proven you can write the code once, which means you can write it again. By throwing away the code and then starting with the tests, you can take the implementation you just deleted and think about it from the API perspective that test-first development gives you. Get back in the groove, follow your normal red-green-refactor workflow, and chances are the test-driven implementation will be at least as good as, and usually better than, the one you pitched. By reimplementing it from a test-driven mindset you’ll be much happier with the final result.
In closing, I think if you now go back up the page and re-read that initial set of complaints I had about traditional development processes, you’ll see that every one of those concerns and shortcomings is addressed in these steps. I realize there are environments where this will never work. It requires those in charge to loosen the reins and trust their developers to take ownership of the product and the process. It requires the customer to be closely involved, constantly approving and revising the stories as they are written up and developed. It requires developers to be willing to share what they know and learn from each other. It requires code ownership to be shared. It requires craftsmen who take pride in their work and support their teammates by keeping the codebase clean, ensuring that every interaction with the code leaves it in a better state than it started. It is not an easy transition, and it is hard to maintain if you do not build the process into the DNA of your organization. However, if you can get the buy-in of your teammates and the support of your management, I urge you to give it a shot. Everybody deserves the chance to have this much fun and take this much pride in their profession.