
Take a moment right now to recognize what an amazing device a computer is. That little hunk of metal on your desk or in your pocket or in a vast server room somewhere will do the most repetitive, boring, error-prone tasks faithfully for years, correctly every time, without caring at all.

What keeps us from recognizing this daily? Why do we feel irritated and trapped by computers rather than empowered by such powerful tools? After years working with computers, I’ve come to the conclusion that the essential problem is one of communication. The very thing that makes computers useful – their unfailingly logical, tireless nature – also makes them hard for us confused, imperfect, absentminded humans to communicate with. Over the millennia, our brains have developed to be fabulous at dealing with imperfect information. You might not even notice if I left out a grammatically necessary comma somewhere in this article (hmm, did I?). You certainly wouldn’t come to the place where the comma was supposed to be, shout “Gotcha, syntax error!”, and refuse to read the rest of the article. But that’s exactly what a computer will do when it’s reading code, or when you type a space in your phone number and the web form wasn’t designed to handle one. A computer isn’t designed to understand instructions that aren’t perfectly correct and in the exact format it expects. We are – so much so that we consider the correct interpretation obvious – and we get really annoyed when someone (or something) fails to understand something that seems obvious to us.

Under the hood: There’s no reason a computer can’t try to understand places where commas have been left out. In fact, JavaScript, the language that powers most web pages nowadays, does this with semicolons; it’s called automatic semicolon insertion. The problem is that this feature sucks. Most of the time it works great: you leave a semicolon out and the computer figures out what you meant. But when it doesn’t, it fails in confusing and unintuitive ways, and you have to memorize the details of how the computer inserts semicolons to figure out what went wrong. The computer just isn’t equipped to understand ambiguity – the only way it can handle semicolon insertion is by following other specific instructions, which are necessarily imperfect. (You could also try to have it use a machine-learning algorithm, but that would be even worse, because then the computer couldn’t even explain why it made the choice it did!)
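To make that pitfall concrete, here’s a minimal sketch (the function names are just for illustration). In the first function, automatic semicolon insertion quietly fixes the missing semicolon; in the second, it inserts one immediately after `return`, silently changing what the code does:

```javascript
// ASI usually does what you mean: a semicolon is quietly
// inserted after `a + b`, and the function works fine.
function sum(a, b) {
  return a + b
}

// But here ASI inserts a semicolon right after `return`, so the
// object on the next line is never returned -- it's parsed as an
// unreachable block statement instead.
function makePoint() {
  return
  { x: 1 }
}

console.log(sum(2, 3))    // 5
console.log(makePoint())  // undefined, not { x: 1 }
```

Both functions are perfectly legal JavaScript, which is exactly the problem: the computer didn’t reject the second one, it just did precisely what the rules said rather than what the author meant.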

Compounding the difficulty is the unfortunate fact that the vast majority of software – our interface to the computer – communicates poorly with its users. I have a whole book on my bookshelf titled Why Software Sucks. Of course, there’s no single answer to that question; the answers range from the complexity of the problems software tries to solve, to market forces, to people not knowing what they want, to incompetent developers, designers, analysts, and project managers, to all kinds of things we don’t even understand.

Bruce Tognazzini, a highly respected researcher on user-interface design, points out in his First Principles of Interaction Design just how absurd this combination of required precision and poor communication would be in real life:

We’ve gotten so used to being the victim of data loss that we often don’t even notice it. So consider if what happens routinely on the web happened in real life: You go into Harrod’s Department Store in London. After making your selections, you are asked to fill out a four-page form. A gentleman looks the form over, then points to the bottom of Page 3 at your phone number. ‘Excuse me,’ he says, ‘Look there. See how you used spaces in your phone number?’ When you nod, he continues, ‘We weren’t expecting you to do that,’ at which point, he picks up the four-page form and rips it to shreds before handing you a new, blank form.

So while I don’t want to be reductionist and say the entire problem with all bad technology is that it communicates poorly with people, that is one excellent and useful way to look at it:

  • Good user interfaces are those that allow people to more easily communicate with the computer.
  • Good error messages are those that are aware of the problem, explain it in understandable terms, and identify a path forward so the user isn’t frustrated by mysterious things beyond their control.
  • Good tools are those that help us explain to the computer exactly what we want done rather than force us to perform repetitive actions that the computer is designed to do for us.

While it isn’t perfect – and it can’t be, because humans and computers think differently – scripting is the essential, foundational way to communicate effectively with a computer. Here’s why.


Imagine for a moment that you’re a cave-dwelling hominid thousands of years ago and as yet have no spoken language. (Apologies in advance for what is very likely a heap of historical inaccuracies. I’m not an evolutionary biologist and this is an analogy.) You’re sitting around a fire with your friends and you want someone to pass you the water skin from the other side. What do you do? If your mind went to the typical stereotype, you point at it and grunt. Right?

A typical, unembellished graphical user interface bears an uncanny resemblance to pointing and grunting. We even call it by a similar name: “point and click.” It’s a little more complicated than that, of course, in both the caveman scenario and the computer one. For instance, we have different types of grunts. If you’re using a modern mouse, you have at least a left grunt, a right grunt, and a hand-waving wheel to indicate you want scrolling or zooming; if you’re using a touch screen, you have similar options in the form of short and long presses, swipes, and pinches. Maybe you have a few extra buttons or sounds too. But the general idea is the same.

Now I’m not trying to say that point-and-grunt interfaces are intrinsically bad. The point-and-grunt method is incredibly easy to use and tolerant of unexpected lapses in user knowledge. You don’t have to know what the thing you want to discuss is called, and you don’t even really have to remember where it is – you just look around until you find it. If you rarely use a piece of software, ease of discovery and minimal reliance on memorized knowledge are paramount concerns, and the point-and-grunt approach is probably exactly what you want.

But we moved beyond pointing and grunting for a reason: it has severe limitations. When I was in Seoul a couple of years ago on tour with my college choir, I received a perfect demonstration of this when I had the opportunity to take home one of our surplus concert posters. Especially as I had just moved into my first apartment and had nothing whatsoever to hang on my walls, I very much wanted to take one – but I had no space in my bag to keep a concert poster without crushing it. So I went to the post office to send it home in a tube, only to find that nobody there spoke any English and I knew about four phrases in Korean. (My other languages are German and Latin – not a whole lot of help in Asia!) It’s not easy to coordinate using an unfamiliar process to find an appropriately sized container, address a package, buy postage, and drop the package in the appropriate receptacle, all by pointing and gesturing! I was perhaps unreasonably proud of myself when the package arrived safely at home two weeks later.

While it wouldn’t have made for nearly as good a story, a useful tool in this situation would have been a phrasebook. Phrasebooks don’t allow a whole lot of room for complicated self-expression, but they can quickly teach and allow you to reference a fairly extensive series of predefined ideas you’re likely to want to express. Traditional keyboard shortcuts are the computer equivalent of a phrasebook. Rather than pointing at the cut icon and grunting, you can say, “Hey computer, cut the stuff I have selected” (in the keyboard’s language, that’s Ctrl-X). The phrase has several benefits: in particular, it reduces ambiguity and the potential for miscommunication (ever clicked on the wrong button because the screen refreshed and a new button showed up under your cursor right before you clicked?), and it saves time (you don’t have to find a “cut” button near you to point at before you can explain what to do).

Most interfaces in the commercial software we use every day offer point-and-grunt and phrasebook methods of communication, then stop there, figuring they’ve covered everything. But even if you had a five-hundred-page phrasebook and memorized everything in it, wouldn’t you still feel pretty limited if that book contained all the things you could ever say? What you really need to communicate at maximum effectiveness is the freedom to put together complete sentences out of individual words and phrases and to express logical relationships between things. Using a full language is far more complicated than using a phrasebook, because you have to learn not only the words and phrases but also the grammar that lets you combine them to express ideas. But we couldn’t possibly have built modern society without such languages, because they offer unparalleled freedom to precisely describe things that have no physical existence and have never been described before.

Programming languages, from easier scripting languages like PowerShell, Python, and Bash to more difficult systems programming languages like C, are the computer equivalent of human natural languages. Learning to express yourself in a programming language is as revolutionary as learning to speak a natural language in terms of your ability to interact efficiently with a computer and eliminate frustrations.
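As a rough sketch of what that freedom looks like in practice (the file names here are made up purely for illustration), a language like JavaScript lets you chain individual “words” into a “sentence” that no fixed phrasebook of menu commands could contain:

```javascript
// A hypothetical list of file names, purely for illustration.
const files = ["notes.txt", "photo.jpg", "draft.txt", "song.mp3"];

// Three small "words" combined into one "sentence":
// "take the text files, capitalize their names, and sort them."
const textFiles = files
  .filter(name => name.endsWith(".txt"))
  .map(name => name.toUpperCase())
  .sort();

console.log(textFiles);  // ["DRAFT.TXT", "NOTES.TXT"]
```

Each piece is simple on its own; the grammar of the language is what lets you combine them into a request nobody anticipated when the software was designed.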

Note: Some rare software manages to include some of the principles and flexibility inherent in natural languages in its design, allowing you to take expressive actions without needing to write appropriate programs from the ground up. This software is almost without exception excellent. The text editor vi (and its modern cousin Vim) is a classic example.

Fortunately, it’s a lot easier to learn a programming language than a natural language. If you managed to learn your native language, you can learn the grammar and syntax of a programming language – compared to natural languages, they’re all incredibly simple and unambiguous, because computers stink at understanding ambiguity. It might be a bit frustrating for a few weeks, but after that most people have learned all the syntax they’ll ever need. Picking up the syntax of a second programming language may take only a week; there tend to be more similarities between languages than differences. And if you can explain to someone how to do something – say, make pancakes – you have the basic skills you need to explain to a computer how to do something. You just need to learn the language to do it in. In other words, this part might look intimidating, but it’s actually the easy part.

The truly difficult skill you have to learn to succeed at scripting is how to think logically and precisely like a computer. That’s not an everyday skill for most people, and at first it feels like the computer is willfully misinterpreting you at every turn. The computer is rather like Amelia Bedelia: say something slightly inaccurate and it takes it literally and goes off doing precisely what you said but not at all what you meant. This part is not so easy, and until you improve at it, talking to the computer will be frustrating and seem unreasonably slow. Fortunately, thinking logically and precisely is a skill useful in many realms of life, not just explaining things to computers; you needn’t regret learning the skill even if you never touch a computer again.
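A couple of tiny JavaScript examples (chosen for illustration; every language has its own versions) show that Amelia Bedelia streak in action – the computer follows its rules exactly, not your intent:

```javascript
// Adding a string to a number concatenates, exactly as the
// rules say -- probably not what you meant by "2 plus 3".
const total = "2" + 3;
console.log(total);  // "23"

// sort() compares elements as strings unless told otherwise,
// so 10 sorts before 2.
const nums = [10, 9, 2];
nums.sort();
console.log(nums);  // [10, 2, 9]
```

In both cases the computer did precisely what it was told; learning to anticipate that literalness is the real skill.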


I haven’t yet decided exactly where I plan to continue this thread. It might end up being part of the filesystem series, or it might stay in this series of its own. However, I wanted to get the idea out there now so I have something to point at to explain myself in depth as I continue going on about how great scripting languages are in other places on this blog. Rest assured that I’ll be coming back.