Wednesday, January 20, 2010

Agile Planning

Agile suggests doing "just what is needed". Sometimes that just doesn't feel right. Sometimes it doesn't even seem possible. When it comes to planning, I'm finding an agile approach to be a bit nerve-racking.

We're getting ready to build some functionality that . . . well . . . scares me. It's basically a specialized WYSIWYG editor in JavaScript, but that "specialized" functionality is anything but trivial.

The first fear comes from a language requirement that didn't come from the development team. The developers offered a few options that might deliver the functionality we need. Another group (a designer, a project manager, a salesperson, and so on) decided that using JavaScript provides enough benefits to justify the risks. I'm confident we'll be able to make this option work (jQuery to the rescue), but I'm not sure how long it's going to take (jQuery is powerful, but browser inconsistencies are more powerful).

Another fear comes from not having a clear spec. The stakeholders have seen mockups that include features that may be extremely expensive to create. They've shared ideas that are fuzzy at best and ill-conceived at worst. And since there is no clear documentation, there's no way to know what each person's expectations are, and therefore no way to know how much effort is required. How much do we have to build in order to meet the minimum functionality required? How do we break up this huge monster of a vision into iterations? Just because we're using Agile approaches doesn't mean we should completely skip planning.

Before we introduce any work to make this feature set a reality, our customer (management) wants to know how long it will take to build. So, without really knowing the details of what we're supposed to build, we have to say how many iterations we'll need to build it, just so we can schedule the first iteration to determine what we need first.

What would make everyone happy is to figure out exactly what this editor is supposed to do (full specs). Then we'd break down the functionality to figure out the server-side interfaces so our JavaScript can make the proper Ajax calls. We'd figure out exactly how the JavaScript is going to persist and reload data. We'd make sure we at least had a proposed architecture, had identified its major parts, and had decided whether any of them needed further analysis to expose potential roadblocks. At that point, we could come up with a guesstimate of how long the entire project would take. It's about as unagile as you can get.
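
Just to make the kind of thing we'd be pinning down concrete, a server-side contract for persisting and reloading editor content might look roughly like the sketch below. Every name in it (EditorContentService, DocumentContent, save, load) is invented for illustration; nothing here reflects an actual design decision we've made.

    // Hypothetical sketch only: EditorContentService, DocumentContent, save, and
    // load are invented names, not anything we've actually designed or built.
    public interface EditorContentService {

        // Persist the editor's current markup; the JavaScript would POST here via Ajax.
        void save(String documentId, DocumentContent content);

        // Reload previously saved markup so the editor can re-render it.
        DocumentContent load(String documentId);
    }

    // Simple value object the editor round-trips to the server.
    class DocumentContent {
        private final String html;

        DocumentContent(String html) {
            this.html = html;
        }

        String getHtml() {
            return html;
        }
    }

Even sketching that much forces questions (who owns documentId? is the payload raw HTML or something more structured?) that the mockups don't answer.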

So, how do you strike a balance and give a time estimate? Demand specs (even if what you end up with is useless for the development team)? Demand more time to investigate (even if you have no time to devote to interviewing stakeholders)? Give up and get a job waiting tables?

Simple: you guess! And if at all possible, you let the customer provide the answer. If you can elicit a desired date from the client (say, a month), you have a starting point for an iteration. Starting with a timebox of a week, we can run a bunch of spikes, perhaps coming up with some usable code. At that point, we'll be able to see what's possible to complete in the rest of the iteration, and we set that expectation clearly and repeatedly. If there's a complaint or resistance, we're in a much better position to renegotiate (and well ahead of the deadline, so no one panics).

Of course, as I write this, it's all conjecture. We have negotiated 5 weeks, but have yet to schedule it into our sprints. I'm curious (no . . . I'm nervous) to see how it all works out. Will that be enough time to get a quality solution? Or is that enough time to get an underwhelming, buggy turd propped up? Is renegotiating for more time, if it comes to that, going to be more confrontational than just dealing with misaligned expectations upon delivery? Guess I'll find out!

Sunday, January 10, 2010

The problem with QA

As far as Java goes, we've solved most of our testing problems. We built enough infrastructure to make unit testing easy and effective, even when complex mock objects are involved. FitNesse provides acceptance tests, which double as functional tests. While we haven't had time to put resources into JMeter testing, we've got the infrastructure set up and ready to go (hopefully we'll eventually be able to afford additional headcount to build out a full performance test suite).
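
To give a sense of what that infrastructure buys us, the shape of a typical unit test with a mocked collaborator looks something like the sketch below. The AccountRepository and AccountService names are invented for the example, and I'm using plain JUnit with a Mockito-style mock rather than our exact setup.

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.*;
    import org.junit.Test;

    // Illustrative only: AccountRepository/AccountService are invented names;
    // JUnit and Mockito stand in for whatever a team's infrastructure provides.
    public class AccountServiceTest {

        // The collaborator we want to mock out.
        interface AccountRepository {
            int findBalance(String accountId);
            void saveBalance(String accountId, int newBalance);
        }

        // The unit under test, depending only on the interface above.
        static class AccountService {
            private final AccountRepository repository;

            AccountService(AccountRepository repository) {
                this.repository = repository;
            }

            int charge(String accountId, int amount) {
                int remaining = repository.findBalance(accountId) - amount;
                repository.saveBalance(accountId, remaining);
                return remaining;
            }
        }

        @Test
        public void chargesAgainstTheStoredBalance() {
            AccountRepository repository = mock(AccountRepository.class);
            when(repository.findBalance("acct-1")).thenReturn(100);

            AccountService service = new AccountService(repository);

            assertEquals(70, service.charge("acct-1", 30));
            verify(repository).saveBalance("acct-1", 70);
        }
    }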

For our client apps, unit testing is done in a similar fashion to Java. Mocking is generally pretty easy and we've had no trouble putting the test suites into our automated build process. Coverage tests help identify risks we expose when we forget to follow TDD practices.

While we've done some unit testing for JavaScript in the past, it's been pretty minimal. One developer on our team really likes working with JavaScript, and he's put effort into improving our testing capabilities. Unfortunately, we still have issues with testing events (mainly due to the inability to run multiple threads in our tests). We still haven't attempted to automate the tests within our build, but I'm confident we won't have any trouble finding a solution.

To really test our entire application stack, we need more infrastructure. We're using Selenium in conjunction with FitNesse to write tests, but we're far from a sustainable solution. We're able to run multiple systems with exactly the same software as the production systems (config changes, but no code changes - mocks are not allowed).
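
For a flavor of what the browser-driven end of this looks like, here's a rough sketch using the plain Selenium RC Java client. The host, URL, and locators are placeholders, and in practice the steps live behind our FitNesse fixtures rather than in a standalone class like this.

    import com.thoughtworks.selenium.DefaultSelenium;
    import com.thoughtworks.selenium.Selenium;

    // Rough sketch of a browser-driven test against a fully deployed stack.
    // The host, port, URL, and locators below are placeholders, not our real system.
    public class LoginSmokeTest {

        public static void main(String[] args) {
            Selenium selenium = new DefaultSelenium(
                    "localhost", 4444, "*firefox", "http://staging.example.com/");
            selenium.start();
            try {
                selenium.open("/login");
                selenium.type("id=username", "qa-user");
                selenium.type("id=password", "secret");
                selenium.click("id=submit");
                selenium.waitForPageToLoad("30000");

                if (!selenium.isTextPresent("Welcome")) {
                    throw new AssertionError("Login page did not show the welcome message");
                }
            } finally {
                selenium.stop();
            }
        }
    }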

But keeping this running has been an incredible challenge. Beyond the technical challenges, we'll never be able to automate our entire testing infrastructure. Because we have multiple systems involved, we can't (and don't want to) have every system deployed automatically. We don't want our front-end tests to break just because we're making changes to how the web server works. Sure, once we think a change is "complete", we want tests to show us where we've missed things. But having every system deployed on every code change would just lead to tests always failing, without indicating which failures are serious and which are trivial. That makes the tests useless, as developers quickly learn to ignore the results.

We also have the issue of running tests across multiple machines. Even though Selenium provides the ability to control other machines, just keeping those machines running is a challenge. While VMs may reduce our need for hardware (and space), they certainly don't reduce our need for admin assistance.

The real issue with QA is that, like code, it takes people and time to deliver a high-quality solution. If you don't have people to build and maintain your test suite, your code will suffer. On the other hand, if all of your developers are forced to become QA implementers, your code will suffer too (at least as far as scheduled delivery is concerned). I believe building software without balancing QA staff against development staff leads to technical debt that's risky and dangerous.

Friday, January 8, 2010

Inheritance gone wrong

Wow. I just got done refactoring a class with one of the worst designs I've ever seen. An abstract class contained a protected variable that was used in two protected methods.

Most of the concrete implementations (written by the same developer who wrote the abstract class) set the protected variable in the "setArgs" method and used it in the "getArgs" method. Imagine my surprise when I overrode the "getArgs" method and didn't set the protected variable.
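
Here's a hypothetical reconstruction of the smell. Only the setArgs/getArgs names come from the real code; the field, the types, and the buildQuery helper are invented to show the hidden contract a subclass can silently break.

    import java.util.Collections;
    import java.util.Map;

    // Invented reconstruction: only setArgs/getArgs come from the post.
    abstract class AbstractReport {
        // The hidden contract: subclasses are expected to populate this field
        // in setArgs() so the protected helpers below can read it.
        protected Map<String, String> args;

        protected abstract void setArgs(Map<String, String> input);

        protected Map<String, String> getArgs() {
            return args;                         // silently assumes setArgs() populated the field
        }

        protected String buildQuery() {
            return "report?" + args.toString();  // also depends on the field being set
        }
    }

    class TypicalReport extends AbstractReport {
        @Override
        protected void setArgs(Map<String, String> input) {
            this.args = input;                   // the pattern most implementations followed
        }
    }

    class SurprisingReport extends AbstractReport {
        @Override
        protected void setArgs(Map<String, String> input) {
            // never touches 'args' ...
        }

        @Override
        protected Map<String, String> getArgs() {
            return Collections.emptyMap();       // ... so buildQuery() throws a NullPointerException
        }
    }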

The silver lining is that this ugly, smelly design led me to other ugly smellies in the code. While I wouldn't call the design elegant yet, the next developer who builds a concrete implementation of this abstract class will have a much easier time.

Tuesday, January 5, 2010

Google Wave is my friend

After falling madly head-over-heels for Google Wave, I've discovered that my infatuation was based more on lust than love. Sure, at first sight it seemed beautiful and wonderful, despite warning me that it was not ready for a long-term relationship. So now we remain great friends, though we don't talk as often as we once did (nor perhaps as much as we should). But I'll stay in touch, because I now realize a good tool, like a good friend, is sometimes exactly what you need to make a bad day better, or a good day great.

I've attempted to categorize the types of waves I've seen. There could be others, but these seem to be prevalent in my inbox:
  • IM style messages
  • Voting systems
  • Long term documentation
  • Forum style/interest topics
  • Running jokes
Of these, the first two are where I think Wave excels. Or maybe that's just where I've learned to use it best. Both share a common trait - they are short lived. These are replacements for IM messages and short-lived emails. Waves persist, while IM messages don't. The ability to see messages as they're being typed, and the ability to change them, could make IM obsolete and make email laughable as a collaboration tool. I've seen the real-time, multi-user, IM-style communication wow many new users.

I find waves very helpful for conversations that last only a day or two. Distributed standup meetings, getting consensus on design decisions, and helping other developers get up to speed on new technologies are all situations where Wave has been a tremendous help. These are all conversations that go stale quickly.

Some of the limitations of Wave could be improved in future iterations. A better alert system, the ability to hide waves (having them reappear only if content is updated), and the ability to remove users from a wave (or even to remove oneself) would go far in making Wave better for longer-term communication.

Wave is now one of my main communication tools. Right up there with email, IM and RSS, Google Wave is something I'll continue to use daily.