Saturday, June 12, 2021

Communication, Collaboration & Coordination

Got a team? Then you need the three C's. 

Communication - duh. You can't build anything meaningful with others unless you discuss and align on a goal.

Collaboration - two brains are better than one. There is a point of diminishing returns, or the "too many cooks" syndrome. But no matter how brilliant you are, others may see or realize things you don't. The best path to the goal can only be determined by exploring all available paths.

Coordination - smooth execution relies on things falling into place at the right times. To reach the goal, each team member needs to do their part. If those parts overlap or leave gaps, the goal is compromised.

Got a team? Then you need the three C's. (Yes, this is intentionally repeated)

Saturday, May 22, 2021

Blast from the past!

It's (obviously) been years since I've been interested in sharing software development stories outside of a work setting. But with the passing of covid, and the new normal of working remotely, I feel the desire to have an outlet. I briefly considered starting a "real" blog, but I don't have any intention or desire to actively market or promote this content, so I chose to resurrect this old thing.

And I'm so happy I did! It's hysterical to read my beliefs and convictions from over ten years ago. While I haven't yet reread all the posts, I've already found some very entertaining stuff. Flash over Javascript? I still believe that was true - back then! Considering Flash is gone now and React is the current hotness (along with Angular and Vue), it's a lesson that you either evolve or you die. This could be said about all life, but is especially poignant for software engineers.

Wow! Maybe more amazing than what I've rediscovered is what's been missed. I was just about to interview with Oracle, but would end up turning that down to work at Jive. Great decision! I suppose that's when and why I stopped blogging in public - Jive made it easy to blog AT work.

I have done other public blogging since then, but it's generally been more focused than what I did "back in the day". I never attempted to make any money, or brand myself, through blogging. It's always been an outlet. And it's always been a selfish, personal endeavor. If anyone gets value out of it, great. If not, feel free to read something else.

So, over 10 years later, I'm going to start venting, ranting ... and now maybe reminiscing ... again. 

Tuesday, December 7, 2010

On the hunt

It's the second full day of being unemployed. Quite an odd feeling. I feel as if I should be scared, but recent events make it seem like the future is full of possibilities - many of which may be better than the recent past - and that was pretty damn good. I'm very conflicted - we're conditioned to think that being unemployed shouldn't feel like this.

In my free time (which isn't as considerable as I had hoped), I'm working as hard as I ever have. And all of it is in Rails, which also fills me with joy.

Of course, it's not all roses! I'm learning how to write extensions for Spree, and I've just encountered my third "wrong turn". While it's frustrating, each correction results in significantly less code. I suppose that means I'm on the right path!

Here's a sample of my schedule for the next couple of days:
- Today (Tuesday): Should hear back on whether I'll get a second interview with a primarily .NET shop
- Tomorrow: Phone interview for 3 month contract position. Possible call with HR department of large open-sourcish company (their HR guy contacted me - never burn bridges).
- Thursday: New office space! It's only a short term deal, but it's free for now!
- Early next week: If the phone interview goes well tomorrow, I'll do a conference call with their developers - one of whom I interviewed while at my last job - the recruiter said he was thrilled that they might be able to hire me (never, ever burn bridges).
- Later next week: Interview with extremely large company. I used to work for the manager of the group I'd be working in - he said yesterday how great it would be to have another developer he knew he could trust to write code correctly (never, ever burn bridges, pee off the side of them or do anything else that might soil your relationship with the other side of the bridge).

Wednesday, August 4, 2010

Why Javascript Sucks (a.k.a. Fuck you Microsoft)

Javascript is a very cool language. It has many of the features that make Ruby fun to work with, yet it gets a lot of unwarranted criticism.

But, even with its cool features, it's a poor choice for writing rich client interfaces. I'd rather use Java, C/C++, Qt or even Flash. Thank goodness for GWT, which makes it possible to create Javascript through the use of Java (which, even in this case, provides a bit of "write once, run anywhere").
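To give a sense of what that looks like, here's a minimal sketch from memory (the class name is made up, and you'd still need the usual GWT module configuration to actually compile it). You write plain Java against GWT's widget API, and the compiler spits out cross-browser Javascript:

    import com.google.gwt.core.client.EntryPoint;
    import com.google.gwt.event.dom.client.ClickEvent;
    import com.google.gwt.event.dom.client.ClickHandler;
    import com.google.gwt.user.client.Window;
    import com.google.gwt.user.client.ui.Button;
    import com.google.gwt.user.client.ui.RootPanel;

    // Plain Java; the GWT compiler deals with the browser quirks.
    public class HelloModule implements EntryPoint {
        public void onModuleLoad() {
            Button button = new Button("Click me");
            button.addClickHandler(new ClickHandler() {
                public void onClick(ClickEvent event) {
                    Window.alert("Hello from Java!");
                }
            });
            RootPanel.get().add(button);
        }
    }

No browser-specific branches anywhere in sight - which is exactly the point.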

But again, it's not because Javascript is a bad language; it's because to use Javascript, your code has to work in a browser. And as any programmer who has ever written web code knows, Microsoft has fucked that up for everyone.

Fuck you Microsoft!

Friday, July 16, 2010

Sites vs Applications vs Software

Just because they're all made up of "webpages" doesn't mean every coding effort is equivalent. I really like this article. Of course, I replace ".Net" with "Java" when I read it, but the concepts are right on!

If you're simply building a website for a client, you can figure out the single best way (in someone's opinion) to accomplish a task or complete a workflow. Our designer loves consistency and simplicity in the products we release, and he's got good reasons for that.

But when you're trying to provide solutions to evolving and complex problems, you need to build in flexibility. Business needs today will not be the business needs of tomorrow. And the business may need multiple ways of accomplishing the same task.

The "single approach" systems work well with scripting languages. Many times, when a business need changes, you can simply change the way the code works. The tests protect you from breaking other parts of the system, and you end up with a little more (or a little less code) than you had before the refactor.

More complex systems benefit from statically typed languages (and the tools that make them such a pleasure to work with). When a business requirement changes, it often comes with many rules and variables. Often, a simple refactoring is not sufficient to meet the new needs (which is likely some crazy-difficult logic that some VP dreamed up in a scotch-induced haze that will do little to advance the company but will surely drive you to the edge of insanity).

We build "sites", "applications" and "software". Our event registration product is an "application" which helps our clients create "sites" which sell tickets to their events. But, because we have many clients all using the same application (in pretty different ways), and because we want the event registration system to be integrated into other applications, we've built "software" that allows selling tickets or trinkets or anything else. Had we simply built an event registration application, we'd lack the ability to provide the type of fluidity in reporting, analysis and new product creation that a well thought out software platform provides. Trying to write the software in a scripting language would quickly become unwieldy. However, trying to write the applications or sites in Java is slow and cumbersome (with the exception of using GWT for Javascript - gotta love that for a productivity boost).

Tuesday, June 29, 2010

The worst fail I've ever seen

If you spend any time working with computers in an office setting, you've undoubtedly experienced "an outage". This is the dreaded, panic-stricken period of time when something goes wrong. It could be a complete loss of power, despite on-site generators that "were supposed to kick on". It could be a car hitting a telephone pole that knocks out your T1 line. Or, it could be a hard drive failing, taking all your data with it. Some of these are avoidable, and others are not.

But IT is expected to be somewhat proactive. They are expected to have basic monitoring in place to ensure they notice a machine dying before paying customers do. And when the machine dies, they should probably have some plan for bringing it back online. I've seen this done well, and I've seen it go awry (and when I was a sysadmin, I was even responsible for a few less-than-perfect recoveries).

But what happened last week was an epic fail of monumental proportions. Our company continues to support a legacy e-commerce system. It continues to generate revenue for us, but more importantly, it is the platform on which a number of small businesses run. In other words, without this service up and running, our customers are not able to do business. I'd like to think I work at a company that cares for its customers and wants to see them succeed.

And I understand - shit happens. So, when the system went down, it was not a surprise. After all, one reason the system is being replaced is because it . . . well, frankly it sucks!

But, this didn't just lead to an outage - the system was down for days. Here's what happened, according to our "IT Director":
[T]he server that holds the database files ran out of disk space. When this happened, it corrupted both the database file, and the transaction log file that's used to recreate activity in the event of the database file becoming corrupted. This left us with no way to recreate the activity on the platform and has left us with no easy way to get that information back.

No big deal. That's what backups are for, right? Well, here's the explanation from our marketing folks a few days later:
During a reconfiguration of our backup utility (on April 22), another server took over mirroring the backups during the reconfiguration. After the reconfiguration was complete, the backup process was transitioned back to its normal process. As a precaution with the updated system, the backups were left on the temporary server and that is where the recovery came from.

In other words, the backup strategy in place was fucked up and no one noticed. The most recent backup was from 2 months ago. 2 fucking months! Are you kidding me? I've seen a lot of IT disasters that have cost people and companies a lot of money. While this may not have been the most expensive, it certainly was the most careless and avoidable - by an extremely wide margin!

In large enterprises, file systems running out of space can result in firings; there would at least be a lot of post mortem meetings. Downtime of hours (much less days) causes more unemployment. Lack of backups for 2 months would get most IT departments "swept clean". In smaller companies, those responses aren't possible since the "IT department" may be a single person. But it certainly is a sign that something is seriously fucked up and mismanaged.

Since we only have 2 people in our IT department (the IT Director and a sysadmin), no one has been fired. As far as I know, there have been no reprimands or repercussions. There are hints from the IT guys about not having enough staff and already working 9 - 12 hours a day. But this isn't about staff or hours, it's about planning and management. The current "IT Director" (said sarcastically) has proven himself unable to manage an IT department and maintain its systems.

Even if the company isn't going to attempt to fix this, the IT Director should man up and step down. Or at least, take responsibility. The sysadmin has admitted publicly to making mistakes and dropping the ball, but the "IT Director" is just waiting with his head down, hoping the storm will pass.

And, if you think I'm being harsh, let me assure you, I am not. After all, 2 1/2 years ago, the same system had a similar multiday outage, because . . . you guessed it . . . no fucking backups! You can bitch about not having enough people or working too many hours already, but there's no excuse for that!

I've seen a lot of IT fuck ups over the years, and most I can look back on and laugh about. I'm not laughing now!

Thursday, June 3, 2010

MapReduce - just another design pattern!

A co-worker mentioned that he was playing around with MapReduce recently. He told me how "fast" it was and how he could process a huge batch of log files "in real time". And the whole time, there's something in the back of my head telling me that something isn't right about all of this. And it's not just because he's a sysadmin, and not a developer.

Because he's testing on a laptop (with a single CPU no less), he's not getting any benefit from using a MapReduce approach. Sure, he's using a shiny new technology that helps him sound cool and cutting edge. But in reality, the processing his application is performing could be done, probably more efficiently, using a different approach.

MapReduce on a single machine is nothing more than a design pattern! On a single processor, it's not even a good design pattern. You're taking a large data set and breaking it up to be "mapped" by a bunch of workers that are going to end up waiting on each other, only to "reduce" the data into a single collection. There's no advantage over processing the data serially. In fact, the cost of breaking up the input and merging the results is additional overhead.
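Here's a toy illustration of what I mean (my sketch, not his code, and not real Hadoop): a "map" phase and a "reduce" phase counting status codes in log lines, all on one machine. The intermediate pair list is pure bookkeeping - a single serial loop over the lines produces exactly the same counts:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // A toy, single-machine "MapReduce" over log lines. Without multiple
    // machines, it's just a map step and a reduce step around a serial loop.
    public class LogStatusCounter {

        public static void main(String[] args) {
            List<String> logLines = new ArrayList<String>();
            logLines.add("GET /index.html 200");
            logLines.add("GET /missing.html 404");
            logLines.add("GET /index.html 200");

            // "Map" phase: emit a (statusCode, 1) pair for every line.
            List<String[]> pairs = new ArrayList<String[]>();
            for (String line : logLines) {
                String statusCode = line.split(" ")[2];
                pairs.add(new String[] { statusCode, "1" });
            }

            // "Reduce" phase: sum the emitted counts for each status code.
            Map<String, Integer> counts = new HashMap<String, Integer>();
            for (String[] pair : pairs) {
                Integer soFar = counts.get(pair[0]);
                int count = Integer.parseInt(pair[1]);
                counts.put(pair[0], (soFar == null ? 0 : soFar) + count);
            }

            // Prints the counts, e.g. {200=2, 404=1} (HashMap order varies).
            // A plain serial loop over logLines builds the same map, minus
            // the intermediate pair list.
            System.out.println(counts);
        }
    }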

I suppose the "style" of typical map and reduce code may seem novel. Since these often take a more functional approach, programmers who have only worked with object-oriented code see the simplisity of passing functions-as-arguements a wonderous and magical thing. But this is not a MapReduce specific feature. Again, the same result can be achieved in many other ways.

I'm currently finding many business people (non-programmers) around the office with huge misconceptions about MapReduce. There seems to be a consensus that MapReduce is synonymous with NoSQL, and that you can take billions of rows of data, run an ad hoc query, and get results in sub-second times. After all, that's the experience one gets when searching Google. Right?

At least I think I've figured out where these misconceptions are coming from! And I'm pretty sure, as he continues to tinker with his super-speedy log splicer, he's going to report his magnificent performance results. I trust he'll never benchmark his ingenious invention against a simple Perl script (since there's no buzz around such an old school approach to parsing log files).

Too bad he doesn't realize his "big data" is laughably small compared to the amounts of data being processed when MapReduce was conceived. The bright spot in all of this is, as long as he continues to use MapReduce for purposes it is not intended for, he won't have to ask management to fund more servers. Maybe it'll also keep him busy and out of the developers' hair.