Just Get Started


If you like reading Control-Alt-Backspace, I expect there’s a decent chance that you also like doing things right. Maybe you’re not a perfectionist, but you at least want your work to be “good,” for whatever definition of “good” you care to provide. You probably don’t want to share your work until you feel it’s good, and you don’t want to screw up.

That most people work this way is no surprise. Who wants to do bad work, or have other people see their sloppy work? Here’s the problem: In most cases, the “work on it until it’s good” model is not only less efficient but also less rewarding and produces a worse product.

Lousy Works

The UNIX operating system is famous for punting on difficult design problems. Rather than trying to prevent deadlocks (a situation where several programs get stuck, each waiting forever for another to proceed first), it simply pretends they won’t happen – they come up so rarely that users will just grumble and restart the programs involved without suspecting anything. If the operating system gets interrupted while responding to a request, it gives up, returns to the requester, and sets a special error code off to the side that says “sorry, this actually failed, you need to retry your request” – forcing any program that makes a request to check that error code and potentially try again. If the user types the magic words rm -rf /, the system self-destructs without asking for confirmation, deleting anything it has permission to delete on all connected network drives and removable media for good measure. Are these the marks of a “good” system? No! It’s a lousy system.

And you know what? It works. It doesn’t just kind of work; it works great. In fact, it works so well that UNIX has been the dominant force in operating systems since its inception. Windows may be more common on desktop and laptop computers today, but Linux (which runs most servers), macOS, and Android (which runs most smartphones) are all built on the foundation of the UNIX of the ’60s and ’70s. A few new things have been tacked on, but the basic design is essentially the same.

Richard Gabriel, in his fascinating 1996 collection of essays, Patterns of Software, put it this way:

“UNIX designers probably thought that it was OK for computers to be like anything else – lousy – and they were right.”

UNIX became so popular because it worked, and it started working immediately. The first versions were even more finicky and harder to use, but they were out there, offering flexibility that let users work around their faults, and running on many different types of computers at a time when very few operating systems were designed for more than one.

Guess what? Those design flaws weren’t actually a liability – in fact, you could argue they were an asset: by sidestepping the really knotty cases, they let UNIX get out there sooner and perform better with fewer bugs!

Misdirection is Avoidable

In theory, theory and practice are the same. In practice, they aren’t.

We have a tendency to spot little problems with things, get stuck on them, and spend hours and hours fixing them. Maybe those were the problems that were important. But maybe they weren’t. I call this misdirection – letting your attention be captured by one problem while missing more important ones.

Let’s imagine that I’m designing a website and I discover that the form loses all the data the user enters if they’re using Firefox and they press F2 sixteen times in a row. Should I fix that? All else being equal, of course, it would be preferable to have a website that did not lose the user’s data just because somebody pressed F2 sixteen times. I want my site to be good, right? But it’s always hard to know how long something like this will take to fix. Maybe this is a remarkably thorny problem with roots in a design flaw in Firefox, and it will take me 200 hours to fix. Is it worth it? Maybe not. Maybe, just like deadlocks in UNIX, it’s not a problem in practice.

The only way I can really know if it’s a problem is to start using it – to move from theory to practice. Ship it, maybe to a limited set of users, maybe with a note somewhere that says not to press F2 a bunch of times, and see what happens. Maybe I get a flood of complaints. That means it’s important, and I fix it and ship an update. Maybe I never hear anything more about it. That means it doesn’t matter, and I just saved myself 200 hours.

It would be easy to tell me off for being lazy and producing lousy software. But this isn’t just laziness. The 200 hours I don’t spend fixing a bug that nobody ever notices are 200 hours I can put towards features that are actually useful and bugs that are actually annoying. And the Pareto principle absolutely applies to software development. If I can find the 20% of issues that take the longest to fix, and within those, the 20% that come up most rarely – just 4% of all issues – and simply ignore them unless I have spare time, I can get back far more than 4% of my time; 20% or 30% is easily possible. Unless something in that 4% is seriously awful, I come out far ahead – all courtesy of being willing to be lousy.
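To put rough numbers on that Pareto claim – the figures below are invented for illustration, assuming the classic 80/20 split and that an issue’s rarity is independent of its fix time – the back-of-the-envelope arithmetic looks like this:

```python
# Back-of-the-envelope Pareto estimate with made-up numbers:
# 100 open issues representing 1000 total hours of fix work.
total_issues = 100
total_hours = 1000

# Assume the classic 80/20 split: the 20% hardest issues
# consume 80% of the total fix time.
hard_issues = 0.20 * total_issues   # 20 issues
hard_hours = 0.80 * total_hours     # 800 hours

# Among those hard issues, ignore the 20% that come up most
# rarely; assume the hours are spread evenly across the hard set.
ignored_issues = 0.20 * hard_issues  # 4 issues, i.e. 4% of all issues
ignored_hours = 0.20 * hard_hours    # 160 hours

print(f"Ignoring {ignored_issues:.0f} issues "
      f"({ignored_issues / total_issues:.0%} of the total) "
      f"saves {ignored_hours:.0f} of {total_hours} hours "
      f"({ignored_hours / total_hours:.0%}).")
```

Under these assumptions, ignoring 4% of the issues buys back about 16% of the total fix time – the same ballpark as the figure above. The exact number depends entirely on how skewed your real distribution is, but the asymmetry is the point: the ignored slice of issues is always far smaller than the slice of time it would have eaten.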

On the flip side, the worst problems often don’t show up in design and testing at all. When you just start using the software, you spot these mistakes for free. When they come up and prevent you from getting things done, you go and fix them. Testing is great, but actual use will always find more issues, and find them with less effort. Getting started right away and being willing to evolve as you go means you can spend less time testing and more time using.

Get Value Immediately

Do not shame people for releasing broken code; reward them for transparency and promoting collaboration.
Thomas Limoncelli (see later)

My other two points have mostly addressed long-term benefits. What about the benefits right now?

Well, if we look at UNIX again, we can see that often things that are perceived as lousy, or not done yet, or not quite good enough to share, are actually perfectly fine. Even if only half of the system works, or it has to be fixed or changed out entirely as you go along, the other half probably works quite well and could save you a lot of trouble!

Thomas Limoncelli, in his fantastic article Manual Work is a Bug, tells the story of a workplace process his team managed to automate completely – except for one spot where someone had to show up at the computer and click the “OK” button. That would certainly earn eye-rolls if you had bought the product on the promise that it would automate your task. But guess what? Now you can do something else while it runs and just hit “OK” once in a while. Sure, it could be better, but it’s still an improvement. If Limoncelli’s team had waited to introduce their automation until they could eliminate the press of the OK button, they would have kept doing things the slow, manual way for months. Instead, they got 90% of the value immediately. Worrying about the OK button could even have turned out to be misdirection: once they actually started using the process, they might have discovered that the button wasn’t very annoying to deal with because they didn’t run the process very often. Then they would have saved even more time.

Don’t wait for the big improvements. Instead, challenge yourself to make one small improvement today. Not tomorrow, not next week when you might have some spare time – right now. Maybe you write down the steps you take to perform a task so you can evaluate them later for improvement opportunities. Maybe you fix a typo or add a qualification to the steps you already have. Maybe you take that documentation and write a little snippet of script to make one small part of the process easier. If you can make a small improvement every time you do something, you will be amazed how quickly and effortlessly you progress.

Examples

I focused mostly on computers above because it’s one of the areas of life with the most research and methodology behind just getting started. (The set of practices built around small, iterative improvements to software is often called Agile.) However, the idea is equally applicable to just about anything.

Note: I say “just about anything.” The exceptions are mostly common sense, but they certainly do exist: I would wager, for instance, that you would rather not ride in an airplane or be sent into a war zone with a gun whose design was only half finished. Or would you like your employer to use payroll software that guarantees it “probably issues the right paychecks”? Some things need to be as good as humanly possible from the start. But only a small fraction of things need to be 100% right from the beginning, and we’re inclined to put things in that category that don’t belong there.

Here are some non-software-development examples (some of them do involve writing a bit of software, but that’s because I write a bit of software for almost everything I do; the software is not a necessary component of the process!).

  • Any time I cook from a recipe, I make notes on the recipe to clarify things that were confusing or note things I chose to do differently. Eventually I might not be using the recipe anymore, but in the meantime, making sure I stay on track and do it the same way every time means I don’t lose proficiency if I don’t make the recipe for three months, and it accelerates memorization. My note might not be right; it might only apply if the day is really humid. But I can change it later.
  • I started out tracking my money with Mint. Then I learned a bit about accounting and switched to Ledger for its additional flexibility. Over the next year or so, I started using more of Ledger’s features, day by day. Then I wrote a script to download my statements from Mint and put them into a more convenient form for copying into Ledger. Then I wrote another script to double-check that my account balances in Ledger matched my balances in Mint once I was done editing transactions. Then I developed a system for scanning important receipts and adding them in. Then I redesigned the way I add transactions to make it faster. And so on. Each time, I waited to see what the actual results were, then worked out which piece needed improving next. These were small projects, even the largest just a couple of hours, but eventually I’ll have a system customized to do exactly what I need, with every step that doesn’t require a decision from me automated.
  • I’m just starting on a system to optimize my grocery shopping. One day, walking through the store, I realized that I always buy the same set of 50 or so items, and they’re (nearly) always in the same places, yet I am constantly missing things or forgetting where they are and having to turn back. This costs me a couple of minutes and a lot of frustration every trip. Therefore, I’ve started making notes on my grocery list about a single path I can follow through the store that passes all the items I want to buy. I’ll put the items on the list in that order (probably with the help of a script or two) and the problem will be gone. It won’t happen overnight; I’ve started by gathering notes and working out an ordering of products, then I’ll try that order out over a few trips. The order will probably be wrong in some places, so I can make corrections as needed. Trying it is the best way to find the mistakes, and even a partial, wrong order is better than no order. Once it’s mostly right (not all the way right!), I’ll start working on whatever scripts might be necessary, interspersing that work with more trips to test it out.
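The grocery script in the last example could end up as small as a few lines. Here’s a minimal sketch of the kind of thing I mean – the store path, sections, and items below are all invented placeholders, not my real store layout:

```python
# A minimal sketch of a store-path sorter. The aisle order and the
# section assignments below are invented examples of the notes I'd
# gather on real trips, not an actual store layout.
STORE_PATH = ["produce", "bakery", "dairy", "frozen", "canned goods"]

# Which section each item lives in (hypothetical data).
SECTIONS = {
    "apples": "produce",
    "bread": "bakery",
    "milk": "dairy",
    "peas": "frozen",
    "soup": "canned goods",
}

def sort_by_path(items):
    """Order a grocery list to match a single walk through the store.

    Items whose section is unknown sort to the end, so a missing
    entry shows up where it's easy to notice and fix later.
    """
    def position(item):
        section = SECTIONS.get(item)
        if section in STORE_PATH:
            return STORE_PATH.index(section)
        return len(STORE_PATH)  # unknown items go last
    return sorted(items, key=position)

print(sort_by_path(["soup", "milk", "apples", "durian", "bread"]))
# -> ['apples', 'bread', 'milk', 'soup', 'durian']
```

Note the deliberate lousiness: unknown items don’t raise an error, they just land at the end of the list, where a trip or two through the store will reveal them. That’s the whole approach in miniature – ship the partial, wrong order, and let use find the mistakes.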