Tuesday, December 7, 2010

On the hunt

It's the second full day of being unemployed. Quite an odd feeling. I feel as if I should be scared, but recent events make it seem like the future is full of possibilities - many of which may be better than the recent past - and that was pretty damn good. I'm very conflicted - we're conditioned to think that being unemployed shouldn't feel like this.

In my free time (which isn't as considerable as I had hoped), I'm working as hard as I ever have. And all of it is in Rails, which also fills me with joy.

Of course, it's not all roses! I'm learning how to write extensions for Spree, and I've just encountered my third "wrong turn". While it's frustrating, each correction results in significantly less code. I suppose that means I'm on the right path!

Here's a sample of my schedule for the next couple of days:
- Today (Tuesday): Should hear back on whether I'll get a second interview with a primarily .NET shop
- Tomorrow: Phone interview for 3 month contract position. Possible call with HR department of large open-sourcish company (their HR guy contacted me - never burn bridges).
- Thursday: New office space! It's only a short term deal, but it's free for now!
- Early next week: If the phone interview goes well tomorrow, I'll do a conference call with their developers - one of whom I interviewed while at my last job - the recruiter said he was thrilled that they might be able to hire me (never, ever burn bridges).
- Later next week: Interview with extremely large company. I used to work for the manager of the group I'd be working in - he said yesterday how great it would be to have another developer he knew he could trust to write code correctly (never, ever burn bridges, pee off the side of them or do anything else that might soil your relationship with the other side of the bridge).

Wednesday, August 4, 2010

Why Javascript Sucks (a.k.a. Fuck you Microsoft)

Javascript is a very cool language. It shares many of the features that make Ruby fun to work with, yet it gets a lot of unwarranted criticism.

But, even with its cool features, it's a poor choice for writing rich client interfaces. I'd rather use Java, C/C++, Qt or even Flash. Thank goodness for GWT, which makes it possible to create Javascript through the use of Java (which, even in this case, provides a bit of "write once, run anywhere").
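To give a sense of what that looks like (a minimal sketch, not from any real project - the class name and strings are made up), a GWT entry point is just plain Java; the compiler turns it into cross-browser Javascript so you never hand-write the browser quirks yourself:

```java
import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.event.dom.client.ClickEvent;
import com.google.gwt.event.dom.client.ClickHandler;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.RootPanel;

// Plain Java: the GWT compiler turns this into Javascript that papers over
// the browser differences for you. Names are made up for illustration.
public class HelloEntryPoint implements EntryPoint {
    public void onModuleLoad() {
        Button button = new Button("Say hello");
        button.addClickHandler(new ClickHandler() {
            public void onClick(ClickEvent event) {
                Window.alert("Hello from Java-compiled Javascript");
            }
        });
        RootPanel.get().add(button);
    }
}
```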

But again, it's not because Javascript is a bad language; it's because to use Javascript, your code has to work in a browser. And as any programmer who has ever written web code knows, Microsoft has fucked that up for everyone.

Fuck you Microsoft!

Friday, July 16, 2010

Sites vs Applications vs Software

Just because something is made up of "webpages" doesn't mean every coding effort is equivalent. I really like this article. Of course, I replace ".Net" with "Java" when I read it, but the concepts are right on!

If you're simply building a website for a client, you can figure out the single best way (in someone's opinion) to accomplish a task or complete a workflow. Our designer loves consistency and simplicity in products we release. And he's got good reasons for saying that.

But when you're trying to provide solutions to evolving and complex problems, you need to build in flexibility. Business needs today will not be the business needs of tomorrow. And the business may need multiple ways of accomplishing the same task.

The "single approach" systems work well with scripting languages. Many times, when a business need changes, you can simply change the way the code works. The tests protect you from breaking other parts of the system, and you end up with a little more (or a little less code) than you had before the refactor.

More complex systems benefit from statically typed languages (and the tools that make them such a pleasure to work with). When a business requirement changes, it often comes with many rules and variables. Often, a simple refactoring is not sufficient to meet the new needs (which is likely some crazy-difficult logic that some VP dreamed up in a scotch-induced haze that will do little to advance the company but will surely drive you to the edge of insanity).

We build "sites", "applications" and "software". Our event registration product is an "application" which helps our clients create "sites" which sell tickets to their events. But, because we have many clients all using the same application (in pretty different ways), and because we want the event registration system to be integrated into other applications, we've built "software" that allows selling tickets or trinkets or anything else. Had we simply built an event registration application, we'd lack the ability to provide the type of fluidity in reporting, analysis and new product creation that a well thought out software platform provides. Trying to write the software in a scripting language would quickly become unwieldy. However, trying to write the applications or sites in Java is slow and cumbersome (with the exception of using GWT for Javascript - gotta love that for a productivity boost).

Tuesday, June 29, 2010

The worst fail I've ever seen

If you spend any time working with computers in an office setting, you've undoubtedly experienced "an outage". This is the dreaded, panic-stricken period of time where something goes wrong. It could be a complete loss of power, despite on-site generators that "were supposed to kick on". It could be a car hitting a telephone pole that knocks out your T1 line. Or, it could be a hard drive failing, taking all your data with it. Some of these are avoidable, and others are not.

But IT is expected to be somewhat proactive. They are expected to have basic monitoring in place to ensure they notice a machine dying before paying customers do. And when the machine dies, they should probably have some plan for bringing it back online. I've seen this done well, and I've seen this go awry (and when I was a sysadmin, I was even responsible for a few less than perfect recoveries).

But what happened last week was an epic fail of monumental proportions. Our company continues to support a legacy e-commerce system. It continues to generate revenue for us, but more importantly, it is the platform on which a number of small businesses run. In other words, without this service up and running, our customers are not able to do business. I'd like to think I work at a company that cares for its customers and wants to see them succeed.

And I understand - shit happens. So when the system went down, it was not a surprise. After all, one reason the system is being replaced is because it . . . well, frankly it sucks!

But, this didn't just lead to an outage - the system was down for days. Here's what happened, according to our "IT Director":
[T]he server that holds the database files ran out of disk space. When this happened, it corrupted both the database file, and the transaction log file that's used to recreate activity in the event of the database file becoming corrupted. This left us with no way to recreate the activity on the platform and has left us with no easy way to get that information back.

No big deal. That's what backups are for, right? Well, here's the explanation from our marketing folks a few days later:
During a reconfiguration of our backup utility (on April 22), another server took over mirroring the backups during the reconfiguration. After the reconfiguration was complete, the backup process was transitioned back to its normal process. As a precaution with the updated system, the backups were left on the temporary server and that is where the recovery came from.

In other words, the backup strategy in place was fucked up and no one noticed. The most recent backup was from 2 months ago. 2 fucking months! Are you kidding me! I've seen a lot of IT disasters that have cost people and companies a lot of money. While this may not have been the most expensive, it certainly was the most careless and avoidable - by an extremely wide margin!

In large enterprises, file systems running out of space can result in firings; there would at least be a lot of post mortem meetings. Downtime of hours (much less days) causes more unemployment. Lack of backups for 2 months would get most IT departments "swept clean". In smaller companies, those responses aren't possible since the "IT department" may be a single person. But it certainly is a sign that something is seriously fucked up and mismanaged.

Since we only have 2 people in our IT department (the IT Director and a sysadmin), no one has been fired. As far as I know, there have been no reprimands or repercussions. There are hints from the IT guys about not having enough staff and already working 9 - 12 hours a day. But this isn't about staff or hours, it's about planning and management. The current "IT Director" (said sarcastically) has demonstrated an inability to manage an IT department and maintain systems.

Even if the company isn't going to attempt to fix this, the IT Director should man up and step down. Or at least take responsibility. The sysadmin has admitted publicly to making mistakes and dropping the ball, but the "IT Director" is just keeping his head down, hoping the storm will pass.

And, if you think I'm being harsh, let me assure you, I am not. After all, 2 1/2 years ago, the same system had a similar multiday outage, because . . . you guessed it . . . no fucking backups! You can bitch about not having enough people or working too many hours already, but there's no excuse for that!

I've seen a lot of IT fuck ups over the years, and most I can look back on and laugh about. I'm not laughing now!

Thursday, June 3, 2010

MapReduce - just another design pattern!

A co-worker mentioned that he was playing around with MapReduce recently. He told me how "fast" it was and how he could process a huge batch of log files "in real time". And the whole time, there's something in the back of my head telling me that something isn't right about all of this. And it's not just because he's a sysadmin, and not a developer.

Because he's testing on a laptop (with a single CPU no less), he's not getting any benefit from using a MapReduce approach. Sure, he's using a shiny new technology that helps him sound cool and cutting edge. But in reality, the processing his application is performing could be done, probably more efficiently, using a different approach.

MapReduce on a single machine is nothing more than a design pattern! On a single processor, it's not even a good design pattern. You're taking a large data set and breaking it up to be "mapped" by a bunch of workers that are going to end up waiting on each other, only to "reduce" the data into a single collection. There's no advantage over processing the data serially. In fact, the cost of breaking up the input and merging the results is additional overhead.
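To make that concrete, here's a toy, single-machine version of the pattern (hypothetical log lines and made-up names - a sketch of the shape, not anyone's actual code). The "map" and "reduce" phases do exactly what a single loop updating a HashMap would do, plus the overhead of building and merging the intermediate collection:

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy single-machine "MapReduce" over made-up log lines: count status codes.
// On one box this is just map/group/fold done serially -- the intermediate
// list of pairs is pure overhead compared to updating a HashMap directly.
public class SingleNodeMapReduce {
    public static void main(String[] args) {
        List<String> logLines = Arrays.asList(
                "GET /index 200", "GET /missing 404", "POST /order 200");

        // "Map" phase: emit a (statusCode, 1) pair per line.
        List<Map.Entry<String, Integer>> mapped = new ArrayList<Map.Entry<String, Integer>>();
        for (String line : logLines) {
            String status = line.substring(line.lastIndexOf(' ') + 1);
            mapped.add(new AbstractMap.SimpleEntry<String, Integer>(status, 1));
        }

        // "Reduce" phase: group by key and sum the counts.
        Map<String, Integer> reduced = new HashMap<String, Integer>();
        for (Map.Entry<String, Integer> pair : mapped) {
            Integer soFar = reduced.get(pair.getKey());
            reduced.put(pair.getKey(), (soFar == null ? 0 : soFar) + pair.getValue());
        }

        System.out.println(reduced); // e.g. {200=2, 404=1}
    }
}
```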

I suppose the "style" of typical map and reduce code may seem novel. Since these often take a more functional approach, programmers who have only worked with object-oriented code see the simplisity of passing functions-as-arguements a wonderous and magical thing. But this is not a MapReduce specific feature. Again, the same result can be achieved in many other ways.

I'm currently finding many business people (non-programmers) around the office with huge misconceptions about MapReduce. There seems to be a consensus that MapReduce is synonymous with NoSQL, and that you can take billions of rows of data, run an ad hoc query, and get results in sub-second times. After all, that's the experience one gets when searching Google. Right?

At least I think I've figured out where these misconceptions are coming from! And I'm pretty sure, as he continues to tinker with his super-speedy log splicer, he's going to report his magnificent performance results. I trust he'll never benchmark his ingenious invention against a simple Perl script (since there's no buzz around such an old school approach to parsing log files).

Too bad he doesn't realize his "big data" is laughably small compared to the amounts of data being processed when MapReduce was conceived. The bright spot in all of this is, as long as he continues to use MapReduce for purposes it is not intended for, he won't have to ask management to fund more servers. Maybe it'll also keep him busy and out of the developers' hair.

Friday, April 2, 2010

Learning UI programming all over again

We're adding a new feature to one of our client applications that is really an application in its own right. The user interface is often described as "burly", at least from a coding perspective. From the user's point of view, this thing will be nothing short of elegantly awesome, for there's a lot of work that goes into converting the vision to reality.

It's interesting working through front end code again. It's something I haven't done in years, at least as far as hard core end user interaction goes. I'm not talking about putting html and css on the page and responding to asynchronous method calls. I'm talking about really cool stuff that designers come up with when they believe there are no limits to what a user interface can contain.

My last experience with UI work was with Swing. I did a lot of interactive mapping functionality for open source projects, as well as a visual inspection and packaging tool for an employer. I went through the whole experience of discovering (and understanding) the main event thread. I really learned about the MVC and Observer patterns in a way that reading alone can't provide.

While our "burley" feature is implemented in javascript, we've chosen to use GWT. While there is a bit of familiarity between Swing and GWT, they are quite different. We're also not using GWT the way it's intended - instead of building the UI from new Widgets, we're taking an HTML page (provided by the end user) and wrapping Elements from the DOM in custom Widgets. It made us question whether GWT was the right tool for this job, but so far, writing in Java instead of javascripts has proven to be a huge win for us. Object oriented or not, navigating a slew of javascript files or a few monolithic js files, javascript has proven to be fairly non "team-friendly". Rafactoring Java in Eclipse is trivial, making it easy to keep clean and allowing us to spike out new ideas quickly and easily. And don't even get me started on the testability of how this code is shaping up, compared to old javascript that remains largely uncovered.

Regardless of how it's implemented, a UI is a UI, and there are common problems that arise. We initially spiked out an element with a mouseover event that caused a "masking" element to cover the original. On mouseout, we remove the mask. In hindsight, it's obvious that once the mask is positioned over the underlying element, the underlying element will lose mouse focus. Fortunately, none of the developers have photosensitive epilepsy.
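For the curious, here's a hypothetical reconstruction of that naive spike in GWT terms (made-up widgets, not the real code): show the mask on mouseover, hide it on mouseout, and enjoy the strobe light.

```java
import com.google.gwt.event.dom.client.MouseOutEvent;
import com.google.gwt.event.dom.client.MouseOutHandler;
import com.google.gwt.event.dom.client.MouseOverEvent;
import com.google.gwt.event.dom.client.MouseOverHandler;
import com.google.gwt.user.client.ui.FocusPanel;
import com.google.gwt.user.client.ui.SimplePanel;

// Because the mask sits on top of the original widget, showing it immediately
// fires mouseout on the original, which hides the mask, which fires mouseover
// again... hence the flicker.
public class MaskingSpike {
    public MaskingSpike(FocusPanel target, final SimplePanel mask) {
        target.addMouseOverHandler(new MouseOverHandler() {
            public void onMouseOver(MouseOverEvent event) {
                mask.setVisible(true);   // covers 'target', stealing the mouse
            }
        });
        target.addMouseOutHandler(new MouseOutHandler() {
            public void onMouseOut(MouseOutEvent event) {
                mask.setVisible(false);  // pointer is "over" target again: loop
            }
        });
    }
}
```

Handling the mouse events on the mask itself (or checking the event's related target before removing it) is one way to break the loop.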

While I love to think our team is good enough to start off on the right path every time, that was obviously not the case. Besides learning the GWT API, refreshing our knowledge of browser events, and determining how to navigate any DOM a user could throw at our code, we ran a lot of spikes that challenged our knowledge and assumptions. Once we determined what we wanted the code to do, we realized the code we'd developed during the spikes was less than optimal. Much of what we had was untestable, tightly bound and just plain messy. Fortunately, all of that can be fixed by following some common patterns and best practices.

Separating concerns (MVC) and using a publisher/subscriber (Observer) approach to events has allowed us to come up with code that's enjoyable to work with, and easy to understand. The learning curve for developers jumping in (and out) of this project is incredibly low. The ability to test logic (while not messing around with GWTTestCase and its long run times) provides a sense of confidence that browser testing alone can not match.
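In spirit, the event plumbing is nothing fancier than this (plain Java, made-up names - a sketch of the pattern rather than our code), which is exactly why it's easy to cover with ordinary unit tests:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal publisher/subscriber split: views publish, presenters subscribe,
// and the logic can be exercised in a plain unit test by calling publish() --
// no browser, no DOM, no long GWTTestCase startup.
interface SelectionListener {
    void onSelected(String elementId);
}

class SelectionPublisher {
    private final List<SelectionListener> listeners = new ArrayList<SelectionListener>();

    void subscribe(SelectionListener listener) {
        listeners.add(listener);
    }

    void publish(String elementId) {
        for (SelectionListener listener : listeners) {
            listener.onSelected(elementId);
        }
    }
}

// A presenter only knows about the event, not the widget that fired it.
class SelectionPresenter implements SelectionListener {
    public void onSelected(String elementId) {
        System.out.println("selected: " + elementId);
    }
}
```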

In the end, it just shows that languages and frameworks are not as important as good coding practices. Clean, simple and tested code that follows proven patterns and practices provides a quality product that can be extended, modified and maintained long into the future. If you start with mimicking what's been shown to work well, you'll have a better chance of ending up with something that serves you (and your customers) well.

Wednesday, January 20, 2010

Agile Planning

Agile suggests doing "just what is needed". Sometimes that just doesn't feel right. Sometimes it doesn't even seem possible. When it comes to planning, I'm finding an agile approach to be a bit nerve-racking.

We're getting ready to build some functionality that . . . well . . . scares me. It's basically a specialized WYSIWYG editor in javascript, but that "specialized" functionality is anything but trivial.

The first fear comes from a language requirement which is not coming from the development team. The developers provided a few options that might provide the functionality we need. Another group (that includes a designer, a project manager, a salesperson, etc.) decided using javascript provides enough benefits to justify the risks. I'm confident we'll be able to make this option work (jQuery to the rescue), but I'm not sure how long it's going to take (jQuery is powerful, but browser inconsistencies are more powerful).

Another fear comes from not having a clear spec. The stakeholders have seen mockups that include features that may be extremely expensive to create. They've shared ideas that are fuzzy at best, and ill conceived at worst. And since there is no clear documentation, there's no way to know what each person's expectations are, and therefore no way to know how much effort is required. How much do we have to build in order to meet the minimum functionality required? How do we break up this huge monster of a vision into iterations? Just because we're using Agile approaches doesn't mean we should completely skip planning.

Before we introduce any work to make this feature set a reality, our customer (management) wants to know how long it will take to build. So, without really knowing all the details about what we're supposed to build, we have to tell how many iterations we'll need to build it, so that we can schedule the first iteration to determine what we need first.

What would make everyone happy is to figure out exactly what this editor is supposed to do (full specs). Then, we'd break down the functionality to figure out the server-side interfaces so our javascript can make the proper ajax calls. We'd figure out exactly how the javascript is going to persist and reload data. We'd make sure we at least had a proposed architecture, and had identified the major parts of it, deciding whether any needed further analysis to expose potential roadblocks. At that point, we could come up with a guesstimate of how long the entire project would take. It's about as unagile as you can get.

So, how do you strike a balance and give a time estimate? Demand specs (even if what you end up with is useless for the development team)? Demand more time to investigate (even if you have no time to devote to interviewing stakeholders)? Give up and get a job waiting tables?

Simple: you guess! And if at all possible, you let the customer provide you with an answer! If you can elicit a desired date from the client (say, a month), you have a starting point for an iteration. Starting with a timebox of a week, we can run a bunch of spikes, perhaps coming up with some usable code. At that point, we'll be able to see what's possible to complete in the rest of the iteration. From there, we set the expectation clearly and repeatedly. If there's a complaint or resistance, we're in a much better position to renegotiate (and well ahead of the deadline, so that no one panics).

Of course, as I write this, it's all conjecture. We have negotiated 5 weeks, but have yet to schedule it into our sprints. I'm curious (no . . . I'm nervous) to see how it all works out. Will that be enough time to get a quality solution? Or is that enough time to get an underwhelming, buggy turd propped up? Is renegotiating for more time, if it comes to that, going to be more confrontational than just dealing with misaligned expectations upon delivery? Guess I'll find out!

Sunday, January 10, 2010

The problem with QA

As far as Java goes, we've solved most of our testing problems. We built enough infrastructure to make unit testing easy and effective, even when complex mock objects are involved. Fitnesse provides acceptance testing, which doubles as functional tests. While we haven't had time to put resources into JMeter testing, we've got the infrastructure set up and ready to go (hopefully we'll eventually be able to afford additional headcount to build out a full performance test suite).

For our client apps, unit testing is done in a similar fashion to Java. Mocking is generally pretty easy and we've had no trouble putting the test suites into our automated build process. Coverage tests help identify risks we expose when we forget to follow TDD practices.

While we've done some unit testing for Javascript in the past, it's been pretty minimal. One developer on our team really likes working with Javascript, and he's put effort into improving our testing capabilities. Unfortunately, we still have issues with testing events (mainly due to the inability to run multiple threads in our tests). We still haven't attempted to automate the tests within our build, but I'm confident we won't have any issues with finding a solution.

To really test our entire application stack, we need more infrastructure. We're using Selenium in conjunction with Fitnesse to write tests, but we're far from a sustainable solution. We're able to run multiple systems with exactly the same software as the production systems (config changes, but no code changes - mocks are not allowed).

But keeping this running has been an incredible challenge. Besides having to solve the technical challenges, we'll never be able to automate our entire testing infrastructure. Because we have multiple systems involved, we can't (nor do we want to) have systems automatically deployed. We don't want our front-end tests to break just because we're making changes to how the web server works. Sure, once we think the change is "complete", then we want tests to show us where we've missed things. Having multiple systems deployed on every code change would just lead to tests always failing without indicating which failures are serious and which are trivial. This makes the tests useless, as developers quickly learn to ignore the results.

We also have the issue of running tests across multiple machines. Even though Selenium provides the ability to control other machines, just keeping those machines running is a challenge. While VMs may reduce our need for hardware (and space), they certainly don't reduce our need for admin assistance.

The real issue with QA is that, like code, it takes people and time to provide a high quality solution. If you don't have people to build and maintain your test suite, your code will suffer. On the other hand, if all of your developers are forced to become QA implementers, your code will suffer (at least as far as scheduled delivery is concerned). I believe building software without balancing QA staff with development staff leads to technical debt that's risky and dangerous.

Friday, January 8, 2010

Inheritance gone wrong

Wow. I just got done refactoring a class with one of the worst designs I've ever seen. An abstract class contained a protected variable that was used in two protected methods.

Most of the concrete implementations (written by the same developer who wrote the abstract class) set the protected variable in the "setArgs" method, and used it in the "getArgs" method. Imagine my surprise when I overrode the "getArgs" method and didn't set the protected variable.
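A boiled-down sketch of the smell (names and details are hypothetical - the real class was messier):

```java
// The abstract class leans on a protected field that subclasses are silently
// expected to populate; an implementation that answers getArgs() its own way
// leaves the inherited behaviour broken.
abstract class AbstractCommand {
    protected String[] args;                       // the implicit contract

    protected abstract void setArgs(String[] input);
    protected abstract String[] getArgs();

    protected int argCount() { return args.length; }             // uses the field
    protected String summary() { return args.length + " args"; } // ...and again
}

class TypicalCommand extends AbstractCommand {
    protected void setArgs(String[] input) { this.args = input; } // sets the field
    protected String[] getArgs() { return args; }                 // reads it back
}

class SurprisingCommand extends AbstractCommand {
    protected void setArgs(String[] input) { /* 'args' never set */ }
    protected String[] getArgs() { return new String[] { "computed", "elsewhere" }; }
    // argCount() and summary() now throw NullPointerException -- the surprise.
}
```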

The silver lining is this ugly, smelly design led me to other ugly smellies in the code. While I wouldn't call the design elegant yet, the next developer that builds a concrete implementation of this abstract class will have a much easier time.

Tuesday, January 5, 2010

Google Wave is my friend

After falling madly head-over-heels for Google Wave, I've discovered that my infatuation was based more on lust than love. Sure, at first sight, it seemed beautiful and wonderful, despite warning me that it was not ready for a long-term relationship. So now, we remain great friends, though we don't talk as often as we once did (nor perhaps as much as we should). But I'll stay in touch because I now realize a good tool, like a good friend, is sometimes exactly what you need to make a bad day better, or a good day great.

I've attempted to categorize the types of waves I've seen. There could be others, but these seem to be prevalent in my inbox:
  • IM style messages
  • Voting systems
  • Long term documentation
  • Forum style/interest topics
  • Running jokes
Of these, the first two are where I think Wave excels. Or maybe, that's just where I've learned to use it best. Both of these share a common trait - they are short lived. These are replacements for IM messages and short lived emails. Waves persist, while IM messages don't. The ability to see messages being typed, and the ability to change them, could make IM obsolete, and make email laughable as a collaboration tool. I've seen the real-time, multi-user, IM-style communication wow many new users.

I find waves very helpful for conversations that last only a day or two. Distributed standup meetings, getting consensus on design decisions and helping other developers get up to speed on new technologies are all situations where Wave has been a tremendous help. These are all conversations that go stale quickly.

Some of the limitations of Wave could be improved in future iterations. A better alert system, the ability to hide waves (having them only reappear if content is updated) and the ability to remove users from a wave (or even remove oneself), would go far in making Wave better for longer term communication.

Wave is now one of my main communication tools. Right up there with email, IM and RSS, Google Wave is something I'll continue to use daily.