Thursday, March 7, 2013

Lessons from Sim City Limited


So a blast from the past was released this week: the new SimCity, descendant of one of my great childhood joys, SimCity 2000.  I pre-ordered the game and had it running the night I got it.  But fate is a cruel mistress... or more accurately, products released before they are ready are cruel mistresses.  I had trouble at just about every step of the pipeline.  The install failed on my Microsoft Surface Pro, claiming the machine was a virtual machine and thus ineligible.  After installing it on my desktop, it crashed about every ten minutes and did nothing to restore my games from a checkpoint.  After a patch came out on day two, the servers became impossible to log in to.  It is clear that EA had a deadline to hit come hell or high water, and their engineers were left to compromise the product's quality.  That is a dangerous game to play.


Traditional project management is guided by the notion that a project can be characterized by three axes that can be traded against one another: the resources allocated to the project, the time allotted, and the feature scope.  Management's balancing act is figuring out what compromises can be made on two of these axes to allow the third to extend, thus making a quality product achievable.  Problems manifest when too many of these axes become fixed.

The first problem, I would argue, is the cost axis.  We all recognize that there are non-linear costs associated with adding people to projects.  Larger teams have more complicated communication structures, require more time integrating pieces developed independently, contain more bugs because of failed assumptions at the interfaces, and run into irreducible complexities.  Nine women can't make a baby in one month.  Brooks's law, so eloquently stated in The Mythical Man-Month, is that adding people to an already late project makes it later: new people have a spin-up time that lowers the productivity of everyone else on the project.  The only real time to trade on this axis is at the start of a project; if you ever get way ahead, it is much easier to remove people from a project than to add them.  The effect is that cost typically moves to the center of the triangle, and teams trade on quality instead, even as every fiber of their being screams that quality should not be traded on.

Time is the hairy beast on the triangle.  Engineers are bad at projecting how long something is going to take.  Even over short intervals such as Agile sprints, it is hard to gauge timelines accurately.  To compound the problem, engineers usually err on the side of estimating low.  That means management is really only left with scope to trade on when timelines begin to balloon.  And the pin that pops this bubble is stakeholders forcing teams to be deadline driven.  Customers have expectations about when a feature should be released, and management needs a backbone to push back on customers once those expectations have been set.  The release schedule should be the deep dark secret that customers don't see coming, giving the time tent pole more room to grow.

Being able to trade scope requires developers to focus heavily on loosely coupling their products, so that simpler solutions can be dropped in place of planned ones.  Developers often lack the foresight to adequately plan for these contingencies, and waterfall projects tied to long-term design plans can fail if dependent features are incomplete when a deadline is reached.  If any given feature is missing come deadline day, it has the potential to cascade down the line, blocking feature after feature.



Quality starts out as the least tangible aspect for management.  They have their budgets projecting costs, their requirements detailing feature requests, and their timelines that they rigidly stick to.  Code rot can quietly be swept under the rug.  Test plans can be silently neglected.  Acceptance criteria can be too incomplete to fully capture a feature.  And it isn't until release, when many hands are all touching your product, that you realize what was traded away over the course of development.

I appreciate the fresh look Agile takes at this triangle.  By always having a stable product that can be re-evaluated every sprint, you make the quality part of the triangle tangible and can force it to remain high.  By reducing features to their smallest working extension, you shrink the timescales, allowing them to become more predictable for engineers.  By allowing engineers to decide the time frame on a feature-by-feature basis, it helps deconstruct artificial deadlines that press against quality.  And finally, it forces product owners to trade on the part that managers should really be focused on in the first place: features.  Prioritization becomes their sword, because it is the most effective weapon a PO or PM has in their arsenal.

SimCity has obviously traded on quality, and that is going to cost EA a lot of money, as well as make them an example for others to learn from.

Saturday, March 2, 2013

Google Code Jam Prep

  I love the Code Jam... I've done it every year since I first discovered it in grad school.  As a competitive person, I was always looking for an edge that could improve my scores.  With Google Code Jam right around the corner, I thought I'd share some of the tips I've picked up over the last several years.  Feel free to incorporate them into your own contest preparation.

  • Optimize your work space for efficiency.
With the exception of the first round, you'll have a really limited time frame to work with, and you want to avoid as many disturbances as possible.  I like to close all the miscellaneous windows I generally keep open, save for some music, maybe a quick window for Google searches, my IDE, and a window with the problem sets.  I have some water handy, and anything that could distract me during the contest hours is conveniently taken care of ahead of time.
  • Write your boilerplate before the contest.
All Google problems look exactly the same from an I/O standpoint.  You read some data from a file, perhaps multiple lines per problem.  You parse the data into their respective data types, do some real work on the data, and then output to a file a line starting with a leading case number, followed by whatever answer you are going to give.  Typically there are three problems to work on, so I have a project set up with all three problem classes already wired to do generic parsing of a simple input and generic output of whatever you are going to solve.  With only three hours available for critical thinking, these precious minutes can be the difference between a working solution and an almost-finished solution.
  • Have a resource file easily available to paste in the free test data they give you.
Google problems all give you some example inputs and outputs as a starting point for what your program should be able to do.  In the boilerplate code, I like having an empty text file pre-wired to be parsed as the input file of choice, into which I can just paste the examples and run against my program.  I also have a pre-wired text file for the output, already open, which automatically refreshes when I run my program.  It gives me instant feedback as I am writing my algorithms.
  • Read all the problems before you start.
This is more of a generic test taking strategy, but it applies well here.  Google assigns different points to different problems.  Typically in the first round, a score large enough to get you on to the next round doesn't have to be a perfect score... but it also typically isn't the score of the easiest problem either.  Also, I feel that reading all the problems before starting gives your mind a chance to process a bit before you jump into coding.
  • Have a plan of attack
Unless there is an obviously optimal answer, you'll need to think about how you are going to attack a problem before you jump into it.  The big points require optimized algorithms; O(n^2) is often the quickest way to not having a working answer.  Diagram out on a piece of paper the implications of what you want to achieve.
  • Know where to find your file I/O.
Once you download an input file for processing, you typically have around 5 minutes to upload your program's results.  The more time you spend fumbling around looking for where your input file downloaded to, or wiring up your program to use it, and then finding the file your program outputs for upload, the less time your program can spend crunching on the answer.  Sometimes sub-optimal algorithms can still pass the large dataset, but usually with very little time to spare.  Seconds count here, and doing a few practice runs against Google's practice problems will help iron out the wrinkles.
  • Fuzz test your big data sets before submitting.
This will be a new one for me this year, but it is something I tried in the Facebook Hacker Cup.  Before attempting the large dataset, you need to know the performance pain points of your algorithm.  The best solution I've come up with is to fuzz test your algorithm against the boundaries supplied in the question.  Make sure your fuzz-testing module especially tries combinations at the boundaries, because that is where the tricks occur.
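To make the boilerplate and pre-wired-file tips concrete, here is a minimal sketch of what mine looks like in Python.  The file names and the toy problem (sum the numbers on each case's line) are stand-ins of my own; only the "Case #n: answer" output line reflects the Code Jam format.

```python
def solve(numbers):
    # Placeholder for the real work on one test case --
    # this is the only part that changes per problem.
    return sum(numbers)

def run(in_path, out_path):
    """Generic parse -> solve -> write loop shared by every problem."""
    with open(in_path) as f:
        t = int(f.readline())  # first line: number of test cases
        cases = [list(map(int, f.readline().split())) for _ in range(t)]
    with open(out_path, "w") as out:
        for i, case in enumerate(cases, 1):
            # Code Jam expects each answer on a "Case #n: ..." line.
            out.write(f"Case #{i}: {solve(case)}\n")
```

During the contest, pasting the sample data into the pre-wired input file and calling `run("sample.in", "sample.out")` gives the instant feedback loop described above; the downloaded small and large inputs get run through the exact same function.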
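The fuzz-testing idea in the last tip could be sketched roughly like this: generate random cases biased toward the problem's stated boundaries and check a candidate fast solution against a trivially correct brute force.  The bounds and both solvers here are hypothetical placeholders, not from any real problem.

```python
import random

MAX_N, MAX_V = 100, 10**9   # hypothetical constraint boundaries

def brute_force(xs):
    # Trivially correct reference implementation.
    return max(xs)

def fast(xs):
    # Candidate "optimized" solution under test.
    best = xs[0]
    for x in xs:
        if x > best:
            best = x
    return best

def fuzz(trials=1000):
    for _ in range(trials):
        # Bias toward boundary sizes and values -- that is
        # where the tricks usually hide.
        n = random.choice([1, 2, MAX_N, random.randint(1, MAX_N)])
        xs = [random.choice([0, MAX_V, random.randint(0, MAX_V)])
              for _ in range(n)]
        assert fast(xs) == brute_force(xs), xs
```

A few hundred trials like this before tackling the large dataset also gives a feel for the runtime at `MAX_N`, which is exactly the performance pain point you want to discover before the five-minute upload clock starts.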


I'd love to hear any additional suggestions you may have for improving your Google Code Jam experience.  Good luck in the 2013 Code Jam!