How to Replace a Kindle 2 Battery

As I’ve mentioned before, I put my Kindle to regular use. It has become an essential part of my literate life; every writer ought to have an e-ink reading device. Though I’ve been tempted by the newer models, my Kindle 2 has kept on working flawlessly for more than three years. Or at least, it was working flawlessly until about two months ago, when my Kindle’s battery life began a rapid decline. Rather than going weeks on end without a charge, it could scarcely manage a couple of days on standby.

Kindle low battery message

Rather than rushing to Amazon to order a replacement, I thought I might attempt to do the environmentally-minded thing and repair it first, while documenting the process here. Other than battery life, the Kindle was functioning normally, so I suspected that the Lithium-Polymer (LiPo) battery had cycled too many times and was approaching the end of its life. I spent a little time over the course of several days finding and obtaining the needed part, figuring out how to disassemble the Kindle, and successfully completing the repair. Here’s how it’s done.

To replace the battery, you’ll need:

  • the new battery (I found one on eBay for about US$9.00, including shipping),
  • a spudger for prying into the case, and
  • a Phillips #0 screwdriver.

I have a spudger and screwdriver in my iFixit Pro Tech Toolkit, which has pretty much everything required to disassemble and reassemble electronic gadgetry. Once you’ve got the part and tools, here’s how to do the replacement:

  1. Turn off your Kindle. Slide and hold the power switch for four seconds, then release. The screen blanks.
  2. Place the Kindle screen down on a flat, clean surface.
    Kindle placed face down
  3. Remove the plastic back panel. Two tabs keep the back panel in place. Use a thin spudger to slide the panel away from the larger metal back panel.
    Using a spudger to separate the Kindle's back panels
    This step is a little scary while you do it. You will think you’re breaking the Kindle, but as long as you apply only gentle force, you won’t actually break anything.
  4. Remove the metal back panel. With a Phillips #0 screwdriver, remove the two screws which attach the metal back panel to the Kindle’s plastic body, then slide the panel away.
    Retaining screw inside the Kindle
    Until you replace the back panels, keep a close eye on the volume rocker. Without the back panels to hold it in position, it may fall out.
  5. Remove the old battery. With a Phillips #0 screwdriver, remove the two screws holding the battery in place, then lift the battery out by its black tabs.
    Removing a screw from the Kindle battery
    Lifting the battery out of the Kindle
  6. Install the new battery. Drop it in connector first, then replace the two retaining screws.
  7. Replace the metal back panel. Place it such that the panel lies flat and the volume rocker is held in place, then replace the two retaining screws.
  8. Replace the plastic back panel. Place it such that the panel lies flat and the volume rocker is held in place, then slide it toward the bottom of the Kindle.
  9. Recharge the Kindle with a micro USB cable. After a few seconds, the amber light will illuminate and the screen will refresh.
    Recharging the Kindle with a micro USB cable

Once the light turns green, you’re finished and the Kindle has a new lease on life.

Though I’m still tempted to upgrade, I’m glad I fixed this Kindle. With the new battery, even if I do upgrade, I imagine this Kindle could have a second life as a display or in some other role, which I’ll be sure to share. The fix was easy, cheap, and a good thing to do.

Documentation is artificial experience

When presented with some device or software, a new user needs to answer some fundamental questions like, What is this thing?, What is it good for?, and How do I use it? Perhaps the plainest way to answer those questions is through real, lived experience, such as direct examination or experimentation. Given a physical device, for instance, a user can answer those questions by turning it over in her hands, locating physical features, or playing with the device, pushing buttons and flipping switches.

The other way to answer those questions is by consuming the documentation. By substituting for that direct experience, documentation acts as a prosthesis. Documentation isn’t the same as experience, but it provides a quality of user experience which approaches the real thing (granted, the prosthesis metaphor breaks down over time; you can grow new experience, but not new limbs—yet). Documentation affords new users the abilities of a more experienced user, at a lower cost than gaining all of that experience directly through experimentation and practice. Those costs include time spent learning, the frustrations of the gulf between desire and ability, and others. How much and what kinds of experience documentation successfully substitutes for is a handy lens through which to evaluate documentation and make choices about documentation goals.

The most basic form of documentation answers those early questions (What is this thing?) which make up a user’s earliest experience with the thing documented. Given a physical device, for instance, basic documentation allows a user to skip much of the preliminary experience of turning it over in her hands and locating all of the physical features that identify the object. Product packaging, by identifying the product for would-be owners, serves as a kind of basic documentation in many cases (indeed, many products are sold such that the packaging is the only documentation).

Common documentation identifies a thing and describes its usual functions, like turning it on, setting it up, and using its major, advertised features. It often takes the form of an instruction booklet, a README file, or a quick start guide. While helpful, such documentation represents the kind of experience which can be recreated in isolation by a sufficiently eager new user. It does not provide any information about the software or device’s place in the world.

Great documentation goes beyond mere identification or cursory functional description. Great documentation contextualizes the documented thing and enables non-trivial accomplishments, composition with other tools, and critical thinking about the documentation’s subject. This kind of documentation gives the audience a substitute for experience they would not necessarily attain on their own, because the experience is non-obvious, is developed over a long period of time, or must be gained outside the audience’s immediate context. In the model of documentation as substitute experience, great documentation helps a user understand how the thing documented relates to her life and how it can be used to solve her problems.

One of my favorite examples of this kind of documentation is the SQLite documentation page, Appropriate Uses For SQLite. SQLite is a software library that provides a single-file, serverless SQL database. A serverless SQL database is an odd thing, quite unlike high-profile databases such as MySQL or Oracle. Less sophisticated documentation might describe how to use the database and call it a day, but Appropriate Uses For SQLite is a bit smarter:

The page’s first section, Situations Where SQLite Works Well, describes applications where SQLite has succeeded. It’s not a list of available features. Rather, it’s a list of scenarios in which SQLite is thought to be a suitable database candidate: application file formats, low-traffic websites, prototyping, testing, and teaching, to name a few. But where the page really shines is in the subsequent section, Situations Where Another RDBMS May Work Better. That section describes scenarios in which SQLite would make an ill-advised database choice.

By the inclusion of those two sections, the SQLite documentation helps the audience to make smarter decisions about when and how to use SQLite. By abbreviating the process of learning to make choices with SQLite, the documentation enables the audience to accomplish their goals, not just complete a set of procedures which are limited to the software in isolation.

So the next time you’re thinking about the scope of your documentation, take a look at Appropriate Uses For SQLite and ask yourself, How can I give my audience a shortcut to experience?

Don’t be so negative: avoiding dead-end directions

Some sentence structures, which may feel natural while you’re writing them, are just naturally confusing. Here’s an example of one I’ve been running into recently:

Don’t press the button on the controller until the green light turns on.

This is a garden path sentence, which requires a reader to backtrack to successfully parse the meaning. It initially appears as though the sentence presents a thing you ought not do. But when you read to the end of the sentence, it resolves as a thing you ought to do after a condition has been met. While careful readers will understand the meaning, non-native English readers and run-of-the-mill skimmers may understandably misunderstand it.

This is a fixable problem, however. You might be tempted to remove only the don’t from the sentence, like so:

Press the button on the controller when the green light turns on.

That’s somewhat better. A bigger proportion of readers are likely to understand that the button should be pressed, but now they may not notice that there’s a condition to be satisfied before pressing the button. Instead, I recommend this approach:

When the green light turns on, press the button on the controller.

Not only does adhering to the common if-then conditional structure appeal to my left brain, but this sentence structure also signals something to the reader: wait before acting. This pattern is easier to parse than the first example, and it makes the existence of the requirement more obvious than either previous version, even to readers skimming from the middle of the sentence.

The only time it’s appropriate to go negative is when the reader should always avoid the instructed activity, or when doing the activity would represent an exceptional case. When giving a negative instruction, start with the strongest appropriate words, like these examples:

Never press the red button.

Avoid getting the device wet.

Do not press the red button unless instructed to do so by a technician.

It’s easy to get wrapped up in the higher-level concerns of your writing, like scope and organization, but don’t forget these sentence-level details. They’re easy to miss, but important for making your prose understandable and helpful.

Use text-to-speech to improve your writing

An oft-recommended way to improve writing is to read it aloud. Reading aloud slows you down, giving you the opportunity to catch missing words, awkward phrasings, and other mistakes that are easy to gloss over when you’re familiar with a text. It’s good advice mainly because it works, but I find that it’s not fool-proof. As a fool, I slack off and mumble through, completely missing the point of the exercise. And even if I do it properly, I can still defeat myself by unconsciously fixing mistakes in my verbal rendition, never giving myself the opportunity to hear them.

So, yet again, technology comes to the rescue. Instead of reading aloud to myself, I have my computer read aloud to me. This has some major benefits: the computer won’t fix mistakes (what you hear is what you get) and with headphones it’s possible to do this kind of proofreading anywhere, without looking foolish (or additionally foolish, as the case may be). Plus, hearing it in the HAL 9000-esque voice of the computer distinguishes the words from the voice in your head; it separates the idea of the text from the text itself.

There are a bunch of tools out there to do this sort of thing, including free and paid applications, browser plugins, and operating system services. You can Google them if you like. I’ll just share my personal favorite, which is built into Mac OS X. With a few steps, it’s easy to configure OS X to read selected text aloud with a keyboard shortcut (it’s my understanding that Windows 7 has a similar feature in the form of Narrator, though I’ve never used it).

To enable this text-to-speech feature:

  1. Open System Preferences.
  2. Click Speech.
  3. Click Text to Speech.
  4. Click Speak selected text when the key is pressed.
  5. Click the Set Key… button.
  6. Press the keyboard shortcut you want to press to read selected text aloud. I use Alt + Esc.

Now, any text you can select can be read aloud by your Mac: select it and press the keyboard shortcut. You can stop playback by pressing the keyboard shortcut a second time.
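
If you prefer the terminal, the same speech engine is available through OS X’s built-in say command. A rough sketch (the file name is just a placeholder):

say -f draft.txt     # read a text file aloud
pbpaste | say        # or read aloud whatever is on the clipboard

The voices and playback are the same as with the keyboard shortcut; only the trigger differs.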

See also: Notifications, Automation, and Text-to-Speech

Writing feature requests and bug reports that get results

As a technical writer, I’m often the first person to use a new widget or whatsit without knowledge of its internal function or composition, so I get to act as a surrogate user. Sometimes I spot bugs or areas for improvement that my developer colleagues have not, before those things are revealed to our customers (though it’s a credit to the skill of my colleagues that this is an unusual occurrence). Thus, I get to write bug reports and feature requests on a somewhat frequent basis. And I’ve settled on a recurring pattern for reports and requests that has proven successful at getting those problems fixed and those changes made.

So today I present to you, with considerable commentary, my outline for writing up a bug report or feature request. These guidelines aren’t definitive, but they represent the kind of requests I’ve sent or received that got results, with a minimum of back-and-forth.

  1. Briefly summarize the request.

    Ideally this should be no more than a few words. It should fit comfortably into the subject line of an email or your ticket tracker. This summary will probably become a mental shorthand that identifies a unit of work for somebody, so it’s kind to be terse. A terse summary often takes a simple form, such as noun verbs the object for a bug or noun should verb the object for a feature request.

    If you can’t complete this step, stop and think things over before continuing. You might have more than one problem on your hands. Narrow it down a little, or you might make the recipient an unwitting contestant in a game of Twenty Questions.

  2. Describe the current situation.

    Provide a context to compare the proposed change with the status quo. If it’s a bug, provide the steps to reproduce the bug. If you can’t reliably reproduce it, describe (in as much detail as you can) the conditions under which the bug appeared. If it’s a feature, just briefly describe how things work presently.

    As both a reader and a writer, I find it easier to understand a proposal when there’s a contrast between the way things are and the way you want them to be.

  3. Describe the desired outcome.

    In other words, ask for the change. Don’t forget: be polite.

    This is probably the thing that prompted you to start this process in the first place. If you’re requesting a bug fix, describe what you expected to happen. If you’re asking for a new feature, describe how a working implementation would behave.

    Unless you’re knowledgeable about the specific implementation details, don’t speculate about how easy, hard, simple, or complex the changes required may be. It’s neither as helpful nor as convincing as you imagine.

  4. Describe the benefits of the change.

    Justify your request. Describe how it will help your users, customers, or colleagues to live better, more pleasant lives. This is useful not just to convince the recipient to fulfill your request, but also to enable her to make an informed decision about which tasks to take on first.

  5. Describe the negative effects of the change.

    If you can, describe how the change will require new effort or behavior on the part of your users, customers, or organization. You might suppose it’s counterproductive to describe the drawbacks of your proposed change, but ignoring drawbacks means ignoring opportunities to mitigate them. And occasionally you’ll have the good fortune to report that there’s no downside.

Here’s a made-up example, based on the outline, to request a new feature for Gmail:

Feature request: Individual messages should have a delete button

Presently, to delete a specific message in a Gmail conversation, you must click the down arrow menu button in the upper right corner of a message, then click “Delete this message.”

Please add a button, perhaps alongside the existing forward button and down arrow menu button, that deletes the message in question when clicked.

The button would speed up the process of handling individual notification messages that Gmail has erroneously grouped together as a conversation. It would reduce the number of clicks in such a situation by 50%.

The addition of the button may cause some confusion for users when it first appears. During the adjustment period, some users may accidentally click the delete button and need to undo deletions more frequently than they would otherwise.

Please let me know if you require any additional information. Thank you for taking the time to consider my request.

Honestly, it’s an iffy, high-risk feature—if I worked on Gmail, I’d probably reject it—but it contains some information on which to make that decision. On rejection, there’s an opportunity to identify alternatives, like fixing erroneous conversation grouping or providing a facility to split conversations. Those alternatives wouldn’t be apparent in a less detailed request, such as “Individual messages should have a delete button” alone. On acceptance, there’s enough information to determine what completion looks like (in this case, whether the implementation reduces clicks by 50%). Either way, a well-organized request makes it possible to find a good outcome.

And finding good outcomes is what making bug reports and feature requests is all about. So, with your help, we can make a less buggy, more useful tomorrow.

How to Make Your Mac Automatically Load Up Your Kindle

Writing about technical writing is fun and all, but it’s nice to write a tutorial every now and then too. Here’s a look at something I’ve been working on recently.

I love reading on my Kindle. For stuff longer than your average blog post, the Kindle is vastly preferable to even the finest back-lit display. So in addition to the books I buy from Amazon, I load up my Kindle with stuff from a variety of sources. For example, I recently found out about NASA’s free ebook collection and immediately picked out some promising titles, and I take advantage of my local libraries’ OverDrive subscriptions to check out books.

After a day or two, I’ll often end up with a bunch of articles and books to dump on my Kindle. Now, I could pay Amazon to wirelessly deliver to my device,¹ but I’d have to leave the wireless turned on, considerably reducing battery life. I figure I may as well save money and battery by loading the files over USB, since I’ll be plugging in to charge on occasion anyway. So I have a process:

  1. Put new ebook files in a folder called Reading Material.
  2. Connect Kindle to the computer.
  3. Copy the contents of Reading Material to the Kindle’s documents directory.
  4. Eject the Kindle. See the files appear on the Kindle’s home screen listing of available documents.

That’s easy enough, I guess, but I thought I could do better. My preferred scenario would be that I plug the Kindle in and the computer does the rest. After a bit of tinkering, it turns out this scenario is pretty easy to pull off. All it takes is a Bash script, a launchd plist file, and a couple of commands at the terminal. So here’s what to do:

First, we need a script that runs each time a new drive is connected and moves files to the Kindle. The script will run any time a new drive is mounted (not just the Kindle), so we’ll need to check that the right storage device is available before doing anything. And there are other boring details to deal with. I’ve taken the liberty of writing load_kindle.sh, though you’ll need to tweak it to your specific situation. Please see README.rst for all the gory installation and configuration details.
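
To give a sense of the script’s shape, here’s a stripped-down sketch of the general idea (this is not the full load_kindle.sh, and the volume name and folder paths are assumptions you’d adjust):

#!/bin/bash
# Sketch only: copy new reading material to the Kindle whenever it mounts.
# Adjust these two paths to match your own setup.
KINDLE="/Volumes/Kindle"
READING_MATERIAL="$HOME/Reading Material"

# Only act if the newly mounted volume really is the Kindle.
if [ -d "$KINDLE/documents" ]; then
    cp "$READING_MATERIAL"/* "$KINDLE/documents/"
    # Eject the Kindle so it can be unplugged (it keeps charging over USB).
    diskutil unmount "$KINDLE"
fi

The real script handles the boring details and the spoken announcements mentioned below; the mount check, the copy, and the unmount are the heart of it.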

Next, we need to tell OS X to run the script whenever a new volume is mounted. Mac OS X features a service called launchd which can be used to schedule scripts to run at certain times (much like cron, if you’re already familiar with that) or under certain conditions.

In our case, we want to run load_kindle.sh whenever a new volume appears. We can do that pretty easily by adding a file to the ~/Library/LaunchAgents directory. launchd is controlled by plist (as in property list) files, which are XML files using a special Apple DTD. It’s kind of ugly and a pain to work with, but this use case is fairly understandable.
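
A minimal plist for this job might look something like the following sketch; the label matches the file name used below, and you’d point the ProgramArguments entry at wherever you saved load_kindle.sh:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- A label that identifies this job to launchd -->
    <key>Label</key>
    <string>com.[username].KindleLoader</string>
    <!-- The script to run; adjust the path to wherever load_kindle.sh lives -->
    <key>ProgramArguments</key>
    <array>
        <string>/Users/[username]/bin/load_kindle.sh</string>
    </array>
    <!-- Run the job whenever a new volume is mounted -->
    <key>StartOnMount</key>
    <true/>
</dict>
</plist>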

Save the file as ~/Library/LaunchAgents/com.[username].KindleLoader.plist, where [username] is your Mac OS X username. Then open the Terminal application and run launchctl load $HOME/Library/LaunchAgents/com.[username].KindleLoader.plist to register it with launchd.
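
For reference, registering the job, and unregistering it later if you ever want to turn the automation off, looks like this at the prompt (substitute your username as above):

# Register the job with launchd
launchctl load $HOME/Library/LaunchAgents/com.[username].KindleLoader.plist

# Unregister it to stop the automation
launchctl unload $HOME/Library/LaunchAgents/com.[username].KindleLoader.plist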

Alternatively, you can use a tool which will create, edit, and register the file for you. One is Lingon (as in lingonberry, yummy), available on the Mac App Store. I haven’t used Lingon since long ago, when it was free and the MAS was just a gleam in Steve Jobs’s eye, but it’s an option if you don’t want to mess with plist files yourself.

Now, when you plug your Kindle into your Mac, your new reading material will be automatically loaded and the dulcet tones of Alex’s voice will announce the proceedings (or you can turn that off, if you read the README like I suggested). Then the Kindle will be automatically ejected for immediate use (don’t worry, it’ll keep charging as long as the cable is connected).

And that’s how you can reduce the friction between you and reading, at least a little bit.


Note 1: Kindles with WiFi can avoid the fee by skipping the 3G network. My Kindle 2 doesn’t have WiFi and I don’t feel the need to upgrade and give up my nice, wide page-turning buttons.

Building Better API Documentation

In October, I gave a presentation at BarCamp Philadelphia titled Building Better API Documentation. This post is based on that presentation.

Building Better API Documentation

Despite my English major background, I’m a big nerd. I program as a hobbyist and I program for work. But specifically, it’s my work—documenting a web hosting service—that puts me in contact with a lot of different Application Programming Interfaces (APIs, how different pieces of software “talk” to each other). And having seen such a variety of APIs, I’ve come to this sad conclusion: most APIs have documentation which is useless unless you’re already familiar with that API.

This applies across all kinds of software: open source, proprietary, internal, external, web APIs, library APIs, and more. It’s a widespread and sorry state of affairs. Especially when I stop and consider that, as a user of APIs, good documentation makes me really happy to be a user and a customer. It means that a lot of APIs are making me unhappy. So I want to help developers and writers be happier by building better API documentation.

The right things aren't hard to do but the wrong things are even easier.

Now, it’s probably bad for my profession to put this idea into your mind, but doing the right thing isn’t hard. It’s just that doing the wrong thing—creating bad documentation—is so much easier. I can look at API docs and find all manner of mistakes in minutes, just because it is that easy. I make some of these mistakes myself. But there are a few things that can be done to avoid the biggest, most glaring errors.

Write Your Own Documentation

First, write your own documentation. Or to put it another way, if you didn’t write it, it’s not documentation. Or if you didn’t organize it, it’s not documentation. There are tools out there which claim to generate documentation, bypassing the process of writing and organizing.

This Is Not Documentation

Java has Javadoc. Ruby has RDoc. Python has epydoc. They all do the same kinds of things, like spitting out class definitions, function signatures, and inheritance diagrams. That’s what people usually mean when they talk about generating documentation. I’ve come to regard that idea suspiciously, because I’ve never encountered computer-generated documentation which was half as effective at communicating with me as even the poorest human-written documentation. The output from such tools is usually organized to mirror the source code input. Generated docs are structured around execution, not consumption.

Directions to My Neighborhood

Consider this metaphor: what if you wanted to know how to get from Jon Huntsman Hall (where I originally gave this talk) to my neighborhood, Manayunk? One way to get you from point A to point B would be to dump a bunch of latitude and longitude coordinates on you. That’s technically enough information to make that journey, but it misses the point. Latitude and longitude is information about navigation, but it’s not how to actually navigate. It’s the generated documentation of navigation.

Directions to My Neighborhood Redux

A better way to help you get from point A to point B is to incrementally build understanding and provide details which are useful to humans, not a GPS device. Better directions for people would include details structured around what the audience wants to do with the information. In this case, more wayfinding details, like the starting point (e.g., “building”), names of things along the route (e.g., “Market”), and relative directions (“turn right”). This is the kind of thing that works with real people.

What’s better still is that it works for a much larger set of audiences. While the latitude and longitude data is only good for those with GPS devices or detailed maps handy, the human-centric directions work for many people. It doesn’t matter if you know the area like the back of your hand or if you’re a visitor to Philadelphia. No one’s left out and no one’s insulted.

A lot of writers, especially in technical domains, have a fear of dumbing down their content. They don’t want to insult their audience with hand-holding. But what many of these writers forget is the incredible human propensity for filtering. People readily distill the information they need from the information available, but face a much harder challenge in inventing it themselves.

Doc generators are source code viewers without the source.

All that said, documentation generation isn’t useless. Some of these tools can actually be repurposed to produce beautiful, smart documentation. But by default, documentation generators are pretty source code viewers. They provide an alternative means for inspecting the source code, but they’re no substitute for human-written documentation.

Tasks, Topics, and References

So if documentation generators won’t do the work for you, what do you need to write? Three things, in this order:

  1. Tasks
  2. Topics
  3. References

Tasks

Tasks are critical. “How do I…?” is likely to be the first and last question your audience ever asks of your documentation. I once came across a piece of software that claimed, on its feature list, to solve the problem I had exactly. But I ultimately did not use that software because I could not figure out how I was supposed to install the thing (and I’m no technical slouch). If your docs can’t answer these kinds of simple, fundamental questions, that’s a failure no matter how good the rest of the materials are.

A good example of task-based documentation is the Django tutorial. The Django tutorial shows you how to make a website which lets you create polls, cast votes, and see results. The entire tutorial takes perhaps 20 or 30 minutes for someone with previous Python experience. It answers the fundamental question a newcomer asks when presented with the Django library: how am I supposed to use this thing?

Topics

But tasks alone aren’t enough. Topic-based documentation affords the opportunity to answer questions like “What is this?” and “Why is this?” Topic docs let you create a toolkit for your audience to complete tasks which your docs don’t or can’t address specifically. Want to explain your serialization format? Talk philosophy? Describe your ticket triage procedure? Do it in the topic based documentation.

An excellent example of topic documentation is SQLite’s page Appropriate Uses For SQLite. The page describes the project’s design goals and provides example use-cases for SQLite. The whole page is good documentation, but what might make it the single best page of documentation created by an open source project is the section "Situations Where Another RDBMS May Work Better," a list of use-cases where SQLite should not be used. It’s a brilliant application of topic documentation. No audience member is going to ask "When should I reject your software?" but the topic docs still allow an opportunity to address the issue.

References

The final type to round out API documentation is reference material. References are for people who know what they’re looking for. References are for the finer details. References are the one and only place where a documentation generator makes sense, but even then you can often do better.

Flask, a micro web framework for Python, has a beautiful, hand-crafted API reference. It’s built with a wonderful tool called Sphinx, which pulls in relevant data from the source code (like class names and function signatures), but obligates the writer to organize the information. The result is documentation that’s organized for human consumption, but coupled to the source for accuracy.

Better living through documentation.

Overall, I think good API documentation is within reach for most writers. Fancy tools, like documentation generators, aren’t required. A simple text editor will do. Take a look at Flot’s documentation. It’s a plain text file, which covers tasks, topics, and references with ease. You just have to write.

So start somewhere simple, like how to install your software or how to register for an API key or whatever simple prerequisite you’ve got. You can start today to make your users happier and bring about better living through documentation.

Notifications, Automation, and Text-to-Speech

Recently I’ve been giving a lot of thought to notifications. Given that I spend so much time trying to communicate with my Mac, trying to get it to do whatever it is I’m thinking, it’s amazing how poorly my Mac manages to communicate with me. Mac OS X has no built-in notification system, so most Mac applications deal with notifications in one of three ways:

  • Issue notifications with Growl, a somewhat popular third-party notification utility. It provides simple overlay notifications, so its functionality is limited to being in the way or being ignored.
  • Issue their own equally broken notifications.
  • Omit notifications entirely.

In short, the Mac is a terrible platform for receiving notifications.

Here’s a digression: Before owning an Android phone, I might have said that notifications aren’t a solvable problem, but the Android mobile operating system’s notifications are a delight. They provide instantaneous notifications by flashing text in the status bar, followed by a persistent entry in the “window shade” pull-down notification queue. Android notifications are immediate, like an overlay, but without being in the way or being permanently dismissed the way an overlay is. It’s a good solution.

Unfortunately, the prospects of a workable notification system arriving on the Mac look dim. Apple hasn’t yet attempted an OS X notification system, but it has dabbled in notifications on iOS. Although Apple’s iOS Notification Center is superficially similar to Android’s notification bar, it lacks a persistent inbox of notifications. Despite improvements, iOS still largely insists that notifications be dealt with on first appearance, rather than at a time of the user’s choosing.

Given the sorry state of notifications on the Mac, I’m left doing a bunch of wasteful things in terms of time and attention, just to know what’s going on. I spend a lot of time repeatedly checking on things. But I have found one area where I can get my computer to notify me in a useful way.

Lately, I’ve been on an automation kick, trying to collapse many-staged tasks into single, fire-and-forget tools. For example, to deploy the WebFaction documentation, I used to 1) log into the machine where the docs are hosted, 2) update the version control checkout, 3) run the build script, and 4) enter my password at the appropriate prompts. Now I’ve managed to wrap all of that into a single command with Fabric, a tool for automating SSH activity.

While I’ve managed to eliminate the part where I pay attention to what’s going on, it still takes a minute or two to actually finish running. I hate checking on this process though, because it feels like I’m not actually getting the productivity gains I expect from automation. So I came up with this goofy scheme to announce the start and completion of Fabric tasks by text-to-speech. It’s harder to ignore a loud, computer-generated voice than a visual notification (Air France 447 notwithstanding), but it also doesn’t get in the way. It’s not exactly what I want from my computer, but it’s what I can get.

How to Make Your Mac Read Your Fabric Tasks Aloud

This section will be most interesting to people who spend time with the terminal on the Mac. It’s okay to stop reading here, if finer implementation details don’t interest you.

To make this work, I exploit two newish features of my terminal application, iTerm2: triggers and coprocesses. Triggers let you fire off scripts and other activity upon the appearance of certain regular expressions in the terminal. Coprocesses receive terminal contents as standard input (and their standard output can be used as input back to the terminal, though I don’t use this particular feature).

To read aloud my Fabric tasks, I simply start a script on the appearance of the words Executing task, then call out to OS X’s say command at the appropriate points (and error conditions). Here’s a coprocess script that captures the Fabric output and says the relevant bits:

(you can also see/download this code as a gist)

#!/usr/local/bin/python2.7

# use as a trigger.
# Trigger regex: Executing task '(.*)'$
# Coprocess command: $HOME/bin/fabsay.py \1

import subprocess
import sys

def incoming():
    while True:
        yield raw_input()

def say(phrase):
    subprocess.call(['say', phrase])

def main():
    task_name = sys.argv[1]
    say('Starting {}'.format(task_name))

    for line in incoming():
        if 'Done.' in line:
            say('{} finished.'.format(task_name))
            sys.exit(0)

        if 'Aborting.' in line:
            say('{} failed.'.format(task_name))
            sys.exit(1)

        if 'Executing task' in line and task_name not in line:
            say('{} finished.'.format(task_name))
            # Pull the new task name out from between the quotes.
            task_name = line.split("Executing task '", 1)[1].rstrip().rstrip("'")
            say('Starting {}'.format(task_name))

        if 'Stopped.' in line:
            sys.exit(0)

if __name__ == '__main__':
    main()

To set up the trigger:

  1. Start iTerm2.
  2. In the menu bar, click iTerm2 -> Preferences.
  3. Click Profiles.
  4. Select a profile.
  5. Click Advanced.
  6. In the Triggers section, click the Edit button.
  7. Click the + button to create a new entry.
  8. In the Regular Expression field, enter Executing task '(.*)'$.
  9. In the Action field, click to select Run Coprocess....
  10. In the Parameters field, enter the path to the script, followed by \1.
  11. Click the Close button.

And you’re done! Enjoy the dulcet tones of Apple’s Alex announcing, “Deploy finished.”

Cave of Forgotten Context

The weekend before last, I saw Werner Herzog’s Cave of Forgotten Dreams, a film about the Chauvet Cave which houses the oldest known paintings in the world. It’s a fascinating, though quirky documentary. The lingering shots of the cave, its paintings, and artifacts are beautiful and compelling, particularly because the cave cannot be entered by members of the public. On the other hand, the narration and interviews are (unintentionally) funny, thanks to Werner Herzog’s undoubtedly unique perspective on art and history. But in general, the movie demonstrates a great amount of technical filmmaking skill and great care in sharing important evidence of humanity’s past.

Except for one scene that’s been nagging me since seeing the movie almost two weeks ago.

In a scene relatively early in the film, the camera turns to one of the visitors to the cave, Jean Clottes, who says, “Silence please. We’re going to listen to the cave and—perhaps—we can even hear our own heartbeats.” Then the crew of filmmakers and scientists visiting the cave become quiet, while the camera remains pointed at the people standing silently in the dark. Then, slowly, the sound of a heartbeat is played at increasing volume.

You can get a taste for that scene in the film’s trailer (starting at approximately 1:40):

Unfortunately, the scene is something of an object lesson in completely failing to be aware of the context in which the audience will experience the work. The scene becomes an apparently unintentional filmic version of John Cage’s 4′33″, reminding the audience of the context in which they’re seeing the film, rather than the prehistoric volume of the cave. It’s as if the filmmaker hadn’t been to a movie theater recently. Instead of (not) hearing the cave, the audience hears the whirring and clicking of the projector, the shuffling of audience members’ feet, and the chewing of popcorn. In other words, it calls attention to distractions from the movie.

And while I was thinking about all those distractions in the theater, I got to thinking about how the scene is just as good a lesson for the written word as it is for film. For example, it’s a rather common practice on blogs and news sites to provide a bunch of links to share the current page on Facebook, Twitter, and elsewhere and to see the peanut gallery’s comments. This is how the New York Times does it:

NYT page with social media widgets

I’d characterize this as an anti-pattern. It’s more or less an invitation to embrace the distraction of, well, everything else on the web. Though I may end up tweeting a link to the page or plugging it on my Facebook wall, it’s also quite likely that I won’t return to the page I just shared. Presumably the Times thinks the rest of the article would benefit its audience (or else they wouldn’t have published it, I hope), but why offer a point of exit at the top of the article? It’s a bit like turning down the signal so I can hear the noise.

I don’t reject the benefits of Twitter and Facebook, but I recognize their capacity to draw me away from things. I’d think nothing of finding those widgets at the end of the article, but elsewhere they exaggerate the range of possible distractions. The next time you’re writing some docs, designing a layout, planning a project, or, yes, making a film, consider the distractions your audience will face. Don’t emphasize distractions and annoyances; minimize them.

Old School Virtual Meetings

Recently, I’ve been encountering (or reencountering) a lot of web-based group chat platforms, like Convore or Stack Overflow chat. Visually, they’re interesting communication tools and they’ve got some neat hooks into other ways I make myself present on the internet (Twitter and Stack Overflow accounts, respectively). But after the flurry of “new, shiny” activity that seems to accompany big social-web launches dies down, I’ve got the sinking suspicion that they’re attempting to solve a small problem with a giant, overwrought solution. I don’t see myself moving my chat to any of these platforms; in fact, they simply remind me of another platform that already meets my needs: IRC.

The compelling thing about IRC is that it works. And it has worked for, literally, decades. It worked when I first learned to use IRC in middle school and it continues to work today. In a lot of ways, IRC is like email: it’s a protocol rather than a platform (so it’s not under any centralized control) and it has managed to serve a huge number of needs, despite the huge growth of the internet.

In contrast to email, IRC is a great small-group communication medium. It’s so good, in fact, that I’ve come to regard the daily status meeting I “attend” via IRC as the single most productive meeting format I’ve ever dealt with. IRC eliminates all the nasty characteristics of an in-person meeting, like:

  • the loudest person getting the most say
  • surreptitious email, texting, etc.
  • scheduling an hour-long meeting because that’s the default meeting duration in Outlook

And because meetings in IRC are mediated (as in through a medium, not having a mediator), there’s no demand for the participants to be perfectly synchronous. So I don’t have to respond to incoming messages right this second, just soon. It affords a certain thoughtfulness I don’t think I can muster in person.

Finally, since IRC isn’t just some website, I can do things like automate certain (repetitious) activities that are harder to work around in a browser. For instance, on my employer’s IRC server it’s customary to append “brb” to your nickname if you’re going to be away from the computer for a minute. Here’s my AppleScript to quickly toggle my nickname in Colloquy:

-- Toggle a "|brb" suffix on my nickname for the work IRC connection.
tell application "Colloquy"
	repeat with conn in every connection
		if URL of conn contains "myjob" then
			if status of conn as rich text is equal to "connected" then
				if nickname of conn is "daniel" then
					set nickname of conn to "daniel|brb"
				else
					set nickname of conn to "daniel"
				end if
			end if
		end if
	end repeat
end tell

Presto and voilà: a one-click shortcut. There may be a way to do such a thing with a web app, but IRC is so well known that this method is already present in the skill sets of many internet users.

All that said, IRC isn’t perfect. Those newfangled web chat services do a much better job of providing persistence to the conversation. For the most part, an IRC conversation starts and ends on log in and log out for each participant. It’s a hassle in the case of a broken connection (“What was the last thing you said?”) and it’s a huge barrier to participation in the case of joining a conversation in the middle (“Can you recap the last thirty minutes?”). Services like Convore allow you to see the conversation already in progress and review long-passed logs. Many IRC servers don’t offer easy-to-use logs, so IRC is sort of stuck in the now, with only a tenuous connection to past and future conversations.

Convore and Stack Overflow chat end up providing beautiful examples of how to help a conversation keep momentum despite time and space, but in doing so, they ignore or reengineer all of the existing benefits of IRC. I would rather all that brain power go toward a better IRC server than toward a relatively immature (though technically impressive) web application. Perhaps the next time someone decides to take a crack at improving the chat experience, we’ll see an augmentation of the existing tools, rather than a reinvention of the wheel.