Conference Report: 2000 LISA

The following document is intended as the general trip report for Josh Simon at the 14th Systems Administration Conference (LISA 2000) in New Orleans, LA, December 3-8, 2000.


Friday, December 1

Today was a travel day. After working for a few hours on MPO projects I hopped the shuttle to the airport. The plane departed 5 minutes early (we were all aboard and virtually full) and landed early; my bags arrived safely; and I got to the hotel without incident, though it took a little longer because it was Friday night and rush hour.

Dinner was at a small Mexican restaurant. Esther, Tom, and I enjoyed some very nice, tasty, and inexpensive food.


Saturday, December 2

Today I cleaned up the tutorial handout room before registration opened, helped Evi Nemeth set up registration, and helped hand out tutorial notes before sticking a coworker with the rest of my shift. (Thanks, Steve.)

Dinner was at the Gumbo Shop; 19 of us pigged out on Creole and Cajun food.


Sunday, December 3

This morning I attended tutorial S9 on Storage Area Networks (SANs). The instructor, Dan Pollock of AOL, did a good job in general. He spent too much time on the overview and had to rush the details at the end a bit, but it was his first tutorial for USENIX/SAGE.

This afternoon I had nothing scheduled so I tried to get my wireless Ethernet card working. There seemed to be a driver conflict somewhere, so I gave up and hung out in the lobby bar, networking with other attendees who didn't have afternoon tutorials.

A group of us took Tom Limoncelli and Cat Okita out to 201 for a combined birthday dinner. Good food, but a little pricey. After dinner, Tom and Cat threw a party in Tom's room and it got packed. I left some time after the King Cake got passed around.


Monday, December 4

Today I attended the MetaLISA workshop, co-moderated by Tom Limoncelli and Cat Okita. We covered the following six major topics:

This evening, ten of us — me, Bob, Tom, Trey, Jesse, Cat, Aaron, Frank, Jeff, and Tyler — went out to Emeril's for dinner. We were called "the first LISA prom" because we all dressed up, and several people decided to take pictures. They should be on the web soon. We chose the degustation menu:


Tuesday, December 5

Daytime activities

Tuesday was the Advanced Topics Workshop, ably hosted and moderated once again by Adam Moskowitz. The 25 of us went around the room doing introductions (who we are, what we do, and what problems we're facing). The introductions generated interesting questions and topics for discussions.

One interesting discussion was on the professionalization of systems administration, with a comparison made to doctors. We use similar skillsets — diagnosis, comparability, problem solving, and so on — but can it be said that lives are at stake when systems administrators do their jobs? Doctors charge by the visit or the procedure; systems administrators don't. The models are, however, converging in some ways. Many systems administrators do more architecture than doctors do. There are differences in scale: doctors are like help desks, while systems administrators sometimes serve much larger numbers of people, and patients are somewhat more standardized than systems. Doctors are, in fact, certified; systems administrators are often grass-roots, with self-training and apprenticeship. Doctors, lawyers, and engineers are liable for their work; systems administrators rarely are, and some even contravene organizational policies. This led to a discussion on professions: a profession has standards for training and knowledge, with certification as a required stepping stone, and a fixed body of information. Maybe systems administration should be a "guild," or maybe we should form a union.

Our second area of discussion was whether ISPs are now perceived as commodities and whether they can be run as commodities. The consensus was that they can, but you should check out a provider's long-term business prospects, because business models change rapidly. Finding a provider for services "beyond the basics" is hard. Consider NetLedger: they will run your small-business books for you on the web for a small monthly fee (personal service is free) based on the number of users. These guys might not succeed, and sharing your data with them for several years could end very badly if they suddenly fold. ISP consolidation is in progress, and any new ISP will require new technology. Not only are ISPs perceived to be commodities, so are their users (who are traded). Local and national ISPs can survive, but it's tough for regional ISPs, who are neither local nor a brand name. Are there brokers for customers? There are special deals among ISPs, but no B2B site. We think we'll see some DSL subscribers move around, but stability should show up within six months; a new technology could disrupt this. DSL was enabled by aggregating terminations at the central office. Those who can scale will survive. You can now purchase a turn-key 10,000-to-50,000-user ISP solution that requires very little sysadmin skill. Shell accounts are a thing of the past; people are running their own servers at the end of a DSL line.

Our third major area of discussion was on separating policy and implementation. One possible solution is an interpreted "policy language." Maybe you can use general principles and then color the bottom-level implementation to match existing policies. This is more of a mindset problem than a coding problem: let's build policy engines, not engineer accounting (or whatever) systems. Cfengine has features that can help you implement policies. You must codify the policy in a way that's measurable, so you know whether you're "on policy" or not (and can get back on-policy if you drift far enough "off track"). We're already adapting host-based tools that query directories; maybe we can graft policy engines onto directory responses. LDAP is insecure, though, and we should address that. Microsoft considered Kerberos authorization, not authentication, to be the big deal; they changed Kerberos to use TCP and put policy at the Kerberos server.

Our fourth area of discussion was on how many new technologies in the last few years seem to be languages. This is because languages can express extensible ideas: you build from primitives and move to greater complexity. Some people say "use a database for policy," but that's hard because databases too often require predefinitions. Languages, on the other hand, are built from primitives and are infinitely extensible; we think this is the solution for policy expression. A well-crafted language could potentially address this problem, but we don't know of one right now. We think languages can express these specifications at the proper level; there are results here in academia (see the intrusion-detection literature). Perl 6 will have the ability to make a "little language." This moves toward per-application languages; the specs for the Perl 6 sub-languages lead us to believe we could write "Authenticate all users for all machines" or somesuch. The real problem is the ability to describe when a particular operation is authorized. We need to agitate for richness of expression in commercial tools. Windows has a lot of configurable options under the hood that were difficult to access via the desktop or command line, even though an API was available. Declarative languages like Prolog might be able to help here. Exceptions are surely the difficult and important part of this problem.
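The "measurable policy" idea from the workshop can be sketched briefly: declare policies as data, and let a generic engine report whether a host is on-policy. Everything below (the rule list, the host-state dictionary, the rule names) is a hypothetical illustration in Python, not the API of cfengine or any real tool.

```python
# A toy policy engine: rules are data, and one generic audit() function
# measures compliance, so you always know whether a host is "on policy."

POLICY = [
    # (description, predicate over host state) -- all hypothetical rules
    ("root shell is /bin/sh", lambda h: h["root_shell"] == "/bin/sh"),
    ("sshd protocol 2 only",  lambda h: h["ssh_protocols"] == ["2"]),
    ("ntp configured",        lambda h: len(h["ntp_servers"]) >= 1),
]

def audit(host_state):
    """Return the list of policy items this host violates."""
    return [desc for desc, check in POLICY if not check(host_state)]

host = {"root_shell": "/bin/sh", "ssh_protocols": ["1", "2"], "ntp_servers": []}
violations = audit(host)
for v in violations:
    print("off-policy:", v)
```

Because the rules are measurable predicates rather than imperative scripts, the same engine can both detect drift and (in a fuller version) drive remediation back to the on-policy state.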

We wrapped up by looking at our 1998 and 1999 predictions to see whether we were late or still wrong. We're still batting a low average: in 1999, 9 of our 19 predictions came true (or mostly true), for a 47% success rate. More of our 1998 predictions came true in 2000, but we're still looking at about a 50% success rate.

Our 2000 predictions are:

Finally, we listed some cool tools we're using:

Evening activities

This evening was the GLBTUVWXYZ (motss) BOF, hosted by Bob Gill and Esther Filderman.

After the BOF the LISA 2000 Program Committee went out for the PC dinner at the Palace Café. We were separated into a group of 16 and a group of 12, with some intermingling between groups. I had shrimp remoulade as an appetizer (along with a very nice white wine whose name I didn't catch), turtle soup with sherry as a second course, and filet mignon over scallion mashed potatoes with a mushroom demiglace and a bleu cheese coating as the entrée (with a 1998 Ironbark Cabernet Sauvignon/Merlot blend). Then about 8 of us split a bottle of Fonseca 20-year-old tawny port. About three and a half hours later, we stumbled back to the hotel.


Wednesday, December 6

Wednesday marked the start of the actual technical conference; as usual we held the Collective Breakfast before the sessions began with the announcements.

Collective Breakfast

We all managed to make it to breakfast on the fourth floor of the hotel in time for the announcements. I covered what was going on, who was responsible for what, and provided the schedules for the rest of the week. Shea Avery reminded folks that the $10 per night upgrade charge was not one CT would be paying. Jonathan Hines reminded folks what to do when working the booth.

Session 1: Announcements & Keynote

Announcements

The first session started with the traditional announcements from the Program Chairs, Phil Scarr and Remy Evard. Phil began with the following:

Remy Evard then announced the best paper awards:

Dan Geer, USENIX President, announced the proposed split of USENIX and SAGE:

As you may know, over the last year or so the SAGE Executive Committee has developed an exciting vision of the future of SAGE. One important part of that vision is that they anticipate growing SAGE membership tremendously, into the tens of thousands of members. They feel strongly that this growth requires SAGE to become a new, independent organization, largely distinct and operationally separate from USENIX. This will not affect the deep and enduring bonds between USENIX and SAGE; nor will it change existing cooperative efforts, such as LISA.

The USENIX Board has agreed in principle to accommodate this desire, and both boards are negotiating the organizational details for the new SAGE and how and to what degree the restructuring will occur. While the timeline is still fluid, our current goal for separation is 2001.

This is, of course, a momentous event; certainly the biggest thing to happen to USENIX in over a decade. The USENIX Board wants to know what its members think about this. While we will likely call for a plebiscite on the finished proposal, we'd like your input sooner than that. We would welcome comments by email (to board@usenix.org), netnews (comp.org.usenix), and in person. There are several board members here all week; grab us when you see us — we're here to listen and to answer questions. And if you lack the hunting instinct, go talk to the person at the SAGE Membership booth. We're interested in your thoughts on the details of the restructuring, and even on whether we ought to separate at all.

I know the SAGE Executive Committee wants to hear your comments as well; they have organized three formal events (SAGE Community Meeting, Tue 6-7pm; SAGE Update invited talk, Wed 2-3:30pm; SAGE Candidates Forum, 5:30-6:30pm), and, as always, someone is normally at the SAGE Membership booth.

There is some material available about this effort; the SAGE Executive Committee has prepared a two-page summary (available on the web at http://www.usenix.org/sage/restructuring/growing.html and in printed form at the SAGE Membership booth), this note (at http://www.usenix.org/sage/restructuring/pres_remarks.html), and the current set of discussion points (at http://www.usenix.org/sage/restructuring/points.html). All this material can be found via a "restructuring" link on both www.usenix.org and www.sage.org.

Barb Dijker, SAGE President, echoed Dan's announcement, reiterated the desire for member feedback, reminded folks that the SAGE elections are this January, and announced the 2000 SAGE Award for outstanding achievement: Celeste Stokely, for her work in collecting and distributing systems administration information for over ten years.

Finally, Remy introduced our keynote speaker, J.D. Frazer, known perhaps best as Illiad, the cartoonist behind User Friendly.

Keynote address: The World-Wide Syndicate, or Breaking out of the Cage

J.D. started by showing us the traditional syndication model, which (to put it baldly) stinks. The creator (in the case of cartoons, the cartoonist) creates and hands his material off to a syndicate, which hands it off to a newspaper, which provides it to an audience. At each hand-off, however, the creator loses both control of his work and revenue generated. Syndicates are really bosses, not partners. And if the artist falls behind in his work the syndicate is within their legal contracted rights to hire another artist to make up the delay and charge the original artist for it! Furthermore, the syndicate can change the artist's concept, art, technique, text, and other features, such that the end product can bear little resemblance to the artist's intent.

Syndicates treat audiences as a consumer base, which is a very one-way relationship. UserFriendly moved away from this model by going to self-syndication. They retain the power to choose, editorial and creative control, and the freedom to experiment with different concepts. They also have the freedom to change strips on the fly; where traditional syndication requires strips weeks if not months in advance, self-syndication allows for last-minute changes. When an event like the United States presidential election comes along, UserFriendly has the freedom to poke fun at it in real time; strips like "Doonesbury" don't.

The challenges to self-syndication are many: building audience awareness, promoting the cartoons to sponsors (who have to be willing to do without any creative or editorial control), getting support staff in place for the creative staff, and getting legal representation where necessary. However, the rewards are worth it, for J.D. at least. The audience is a community and can provide feedback to the artist quickly.

J.D. also provided some advice from the trenches. If you decide you want to try something like this, write what you know. Always be honest. Don't be afraid to make mistakes. Stay humble; you'll learn more that way. Stay connected to your audience.

Session 2: Network Track:
Deploying QoS on Your Network, or What??

Eliot Lear of Cisco Systems spoke about Quality of Service (QoS) features and what they really mean. QoS is a buzzword that basically means the preferential treatment of some kind of data on a network. Voice applications certainly need it, because they have constraints on bandwidth (high), latency (low), and reliability (no drops); video applications need even more of it. However, not all voice or video applications require QoS; non-interactive data can be buffered. And you need QoS not only on your own routers but also on all the intermediate routers between your source and destination environments.

Bandwidth is defined as the amount of data per unit time that can go in or out of an interface. The latency is defined as the time for the data to go from one interface to the other. Throughput is the amount of data transmitted and received. Finally, goodput is good throughput, or the throughput without the data that was dropped.
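The goodput definition above reduces to simple arithmetic: goodput is throughput minus the data that was dropped (and so did no useful work). The figures below are made up for illustration.

```python
# Goodput = throughput minus dropped/retransmitted data.
link_bandwidth_kbps = 1544   # raw bandwidth of a T1 interface
throughput_kbps     = 1200   # data that actually crossed the link
dropped_kbps        = 150    # portion of that data which was dropped

goodput_kbps = throughput_kbps - dropped_kbps
print(goodput_kbps)                              # 1050
print(round(goodput_kbps / link_bandwidth_kbps, 2))  # useful utilization
```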

For voice applications, the packets mostly have to get there; some drops (up to 50ms) are possible without negatively affecting the quality of the transmission. However, the latency has to be less than 200ms. (The Bell telephone system standards include a maximum 50ms drop and 200ms latency; so-called "smart" phones can handle 100ms drops and 200ms latency due to buffering and error correction.) Latency is the transmission time on the wire (be it copper or glass or whatever), which is related to the speed of light, plus the time it takes for each packet to get through the queues on the routers.
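The 200ms voice budget described above (propagation on the wire plus queueing at each router) can be sanity-checked with a little arithmetic. The propagation speed and per-hop queueing figures below are rough illustrative assumptions, not measured values.

```python
# One-way latency = propagation delay + per-router queueing delay.
SPEED_KM_PER_MS = 200  # rough signal speed in fiber (~2/3 the speed of light)

def one_way_latency_ms(distance_km, router_hops, queue_ms_per_hop):
    propagation = distance_km / SPEED_KM_PER_MS
    queueing = router_hops * queue_ms_per_hop
    return propagation + queueing

# A hypothetical 4,000 km path through 12 routers, 5 ms queueing each:
latency = one_way_latency_ms(distance_km=4000, router_hops=12, queue_ms_per_hop=5)
print(latency, latency < 200)  # 80.0 True -- within the voice budget
```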

There are two major models that control QoS. In IntServ, the application explicitly asks for end-to-end characteristics of the link using the RSVP protocol. It works in both unicast and multicast environments, though the sender and receiver(s) must each make the reservation, and it fails noisily if it cannot get an end-to-end reservation. IntServ can scale up to 20,000 reservations per Cisco 7500-series router. The second model is DiffServ, which provides no end-to-end signalling and can therefore fail silently. DiffServ requires traffic classification; the classes are Best Effort (BE), the default class; Assured Forwarding (AF), for priority traffic; and Expedited Forwarding (EF), a superset of AF for the most important data.

Now that you've got these QoS features, who can use them? This rapidly becomes a policy decision instead of a technological one. On the technological side, though, there are various dropping and queueing methods by which the router decides which data to process and how. Given a classification scheme (such as BE, AF, and EF), how does the router know what to do? In the "old" world, the first-in, first-out (FIFO) queue would drop the tail-end packets, which is unfair to packets arriving late. Note that TCP congestion control and window size can affect QoS; Random Early Detection (RED) shrinks the TCP window and helps goodput. There's ongoing research into Weighted RED, but it requires stable buffers (which we don't have yet). Applications that don't require acknowledgements can use Priority Queueing (PQ), which is good for small amounts of voice traffic. And Modified Deficit Round-Robin (MDRR) can help if you have multiple weighted queues yet want to be fair in providing services.
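A toy weighted round-robin loop conveys the flavor of MDRR-style queueing: each traffic class is served in proportion to its weight, so priority traffic is favored without starving the rest. This is an illustration of the general idea only, not Cisco's actual algorithm; the class names, packets, and weights are invented.

```python
# Weighted round-robin over per-class queues: each round, a class may send
# up to `weight` packets, so higher-weight classes get more of the link.
from collections import deque

queues = {
    "EF": deque(["v1", "v2"]),        # voice packets (highest weight)
    "AF": deque(["p1", "p2", "p3"]),  # priority data
    "BE": deque(["b1", "b2", "b3"]),  # best-effort data
}
weights = {"EF": 3, "AF": 2, "BE": 1}  # packets allowed per round

def serve_round():
    """Serve one full round; return the packets sent, in order."""
    sent = []
    for cls, weight in weights.items():
        for _ in range(weight):
            if queues[cls]:
                sent.append(queues[cls].popleft())
    return sent

sent = serve_round()
print(sent)  # ['v1', 'v2', 'p1', 'p2', 'b1']
```

Even in this toy version, note that best-effort traffic still gets a guaranteed slot each round; that fairness property is what distinguishes weighted round-robin from strict priority queueing.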

Note that shared bandwidth makes it very hard to implement any of these features. With ATM, experience shows that you should use BE on large-bandwidth VBR circuits and AF or EF on small-bandwidth CBR circuits. However, queueing delays in the ATM hardware need to be taken into account as well.

Managing QoS is hard; there's no great product out yet that can do so, and not all of the protocols discussed here are final. Monitoring QoS features is also hard (and essential in a DiffServ environment); determining the queue depth is nontrivial. Implementing QoS on the Internet itself is a very hard problem: all backbone providers would have to agree to implement it, we're not sure whether RSVP scales, and QoS features would have to be implemented on all devices. This doesn't even begin to address the authentication, authorization, and accounting issues with using QoS.

Session 3: Vendor floor

I spent some time before and after lunch on the vendor floor, making sure things were going well at the CT booth and seeing what cool toys I could pick up.

Session 4: Invited Talks:
The Digital House

Lorette Cheswick spoke about her house. This may sound trite, but Lorette has the fortune — good, bad, or otherwise — to be married to Bill Cheswick of Lumeta. They have automated a lot of the otherwise-mundane tasks of running their household, and Lorette spoke about the process.

They have 11 computers on the house LAN (none of which are completed systems). The computers vary from PCs running Windows and Linux to Macintoshes and the occasional pure Unix system. They use a lot of X.10 serial connections as well as customized drivers, serial caller-ID text, and Bell Labs' text-to-speech software. The doorbell talks ("someone's at the door!"). Lights come up in the kids' bedrooms when it's time to get up for school. Voice announcements herald the arrival of mail, the state of the garage door, whether there's a flood or a fire, special events (birthdays, anniversaries, and so on), as well as the daily stock market closing numbers and weather reports.

There are of course some issues with having your house made digital and automated. There is no voice input yet. The sensors can have trouble when it's cloudy or dark out. Having all the systems running all the time can be expensive (hardware cost, electricity, cooling, time to do special coding, and so on).

Evening activities

This evening I did some social events around the vendor floor and lobby bar, had dinner at the hotel, and then met up with Rob Kolstad, Dan Klein, and a couple of other testers to run through the Quiz Show questions and answers. We corrected some bad questions, corrected some wrong answers, wrote some very easy categories in the event we needed an easy game, and wrote up the prequalifying questionnaire. After a few hours, we wrapped up and I went to the terminal room to check my mail.


Thursday, December 7

I slept in this morning. I'd seen Dr. Felten's talk at USENIX in San Diego in June. I worked on my trip report and caught up on email (partially).

Session 2: Network Track:
Broadband Changes Everything

Brent Chapman of Great Circle Associates spoke about how broadband — which includes the variants of DSL, cable modems, and possibly even wireless — changes the way people perceive the Internet.

Broadband has two major features: it's high speed, and it's always on. DSL provides speeds ranging from 144 Kbps to more than 7 Mbps. Cable modems share one big pipe per neighborhood but provide similarly high speeds. In comparison, even the fastest phone modems provide no more than 53 Kbps. By "always on," Brent means there's no longer any dial-up delay and there are no busy signals. This makes the Internet just like electricity or water: you flip a switch or turn a knob and it's just there. This will change how people perceive and use the Internet in the long run; rather than saying "I'll go online later and do that," they're much more likely to hop on and off the net for brief visits to accomplish tasks as they come up. (Note that most consumer electronics today, such as stereos, televisions, and microwaves, don't actually power themselves completely off; they remain in a reduced-power "stand-by" mode so they can appear to power up more quickly when needed.)

Broadband is also cheaper than traditional leased lines. A T1 line from a telecommunications provider (telco) used to run $1,500 a month. Comparable speeds via DSL are on the order of $300 a month.
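The cost comparison works out roughly as follows, taking a T1 as 1.544 Mbps and "comparable" DSL as about the same speed (the per-Mbps framing is my own arithmetic on the figures in the talk).

```python
# Monthly cost comparison: leased T1 versus comparable DSL.
t1_mbps, t1_cost_usd   = 1.544, 1500  # T1 at $1,500/month
dsl_mbps, dsl_cost_usd = 1.5,   300   # "comparable speeds" via DSL at $300/month

t1_per_mbps  = t1_cost_usd / t1_mbps    # ~$971 per Mbps per month
dsl_per_mbps = dsl_cost_usd / dsl_mbps  # $200 per Mbps per month
print(round(t1_cost_usd / dsl_cost_usd))  # 5 -- DSL is roughly 5x cheaper
```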

The revolution in providing broadband leads to new capabilities, such as connecting small offices or home offices to the Internet at high speeds, as well as making telecommuting more effective for virtually everyone. It also leads to new services or more efficient older services, such as:

Unfortunately, broadband also leads to new security threats. "Always on" means "always vulnerable": when the Internet link is always up, you can no longer assume you can be hit by attacks only while you're sitting in front of the computer. Cable modem lines are shared within a neighborhood, so "Network Neighborhood" takes on a whole new meaning; if you have shared your disk or printer within your own home, you're also sharing them with the entire cable neighborhood. We should expect to see new hardware and software firewalls built into broadband DSL in the near future.

Broadband also allows you to save money. Many homes have more than two computers, so networking them within the home to share a single big pipe for bandwidth makes sense for more users now. This means you could cancel your second phone line (saving about $15/month) as well as multiple ISP accounts (saving $20/month).

What's coming in the future of broadband? Brent expects that virtual ISPs (for sales and marketing features), affinity ISPs (like affinity credit cards), subsidization, and cross-marketing will happen in the near term. We'll also see voice over DSL and voice over cable (some areas already have one or both of these); the problem faced by the providers here is "five 9s" reliability, or roughly 5 minutes of downtime — scheduled or unscheduled — per calendar year. We'll see more network appliances (like WebTV and TiVo) and radio- and broadband-ready MP3 receivers. We'll also see Internet-enabled appliances, such as a refrigerator with a touchscreen for restocking, linked to a grocery delivery service such as Peapod or WebVan.
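The "five 9s" figure is worth working out: 99.999% availability over a calendar year comes to just over five minutes of downtime.

```python
# "Five nines" = 99.999% availability. How much downtime does that allow?
minutes_per_year = 365 * 24 * 60                  # 525,600 minutes
allowed_downtime = minutes_per_year * (1 - 0.99999)
print(round(allowed_downtime, 2))                 # 5.26 minutes per year
```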

There are several IT management issues with broadband. First among these is security: should employees' homes be inside or outside the corporate firewall? If they're inside, who other than the employee has access to the company network? If they're outside, how does the employee get inside for work? Should corporate Internet access be shared with the homes? If so, we need some kind of firewall protection (but then who maintains and monitors those firewalls?); if not, the cost to the company will skyrocket, since every home user needs their own bandwidth. What carriers are available to the employee? How do you connect to them? Are they secure? Are they reliable? Do they perform well? Do you use a single carrier or multiple carriers? If multiple, how do you deal with coordination? Who supports the home system? Who supplies it, and who supplies parts for it? What operating systems, releases, applications, and versions are supported? Who can call the help desk? Who uses the systems and the network? How can you provide mutually secure access, such as when an employee's spouse works for the competition? Is a VPN the right solution? If so, is it PC-based (which leads to driver issues) or router-based (which doesn't address the other-people issue)? Are personal firewalls the answer? Those raise their own questions of who provides, configures, reconfigures, manages, and updates them, and they ignore the multiple-connection issue.

In the question-and-answer section, Brent noted that distributed denial-of-service (DDoS) attacks will increase. Host-based security has to come back into style, since firewalls are no longer enough protection; the Cheswick/Bellovin model of a crunchy exterior and creamy interior no longer applies. Satellite broadband is unlikely because of the huge latency involved. Broadband affects the core routers. When asked what it'll take to administer the high-bandwidth providers (such as Akamai), Brent noted that there's no good answer yet, but we certainly need to work on it; as an example, Akamai has 600 servers and is moving toward 600,000 servers. Broadband also leads to more peer-to-peer networking, so the traditional source-and-sink model may need to be redefined.

Session 3: Security Track:
Cops are from Mars, Feds are from Pluto

While I attended this talk, I did not take particularly good notes, so they can best be summed up by the following:

Session 4: Refereed Papers:
The Sorcerer's Apprentice

Session chair, Josh Simon

I was the session chair for this session, so I got to introduce the three speakers:

Evening activities

This evening started with the SAGE Executive Committee Candidates Forum, where the nine candidates running for EC and present at the conference introduced themselves and said what they wanted to accomplish. Questions generally focused on the separation of SAGE from USENIX, though one questioner asked the short "What have you done for SAGE?" that showed how all nine candidates were involved in different aspects of the organization. (Additional candidates were either not present at LISA or nominated themselves afterwards due to the discussion on the separation; they were not necessarily present at this event.)

After the forum it was time for the conference reception. As was to be expected, the theme of the evening was Mardi Gras; not expected, however, were the two baby alligators near the entryway, where people could get their pictures taken with the cute li'l beasties. The food was good and included gumbo (though if I ever find the geniuses who decided serving soup at a nowhere-to-sit dinner function for 1,900 people was a good idea, I'll kill them), andouille sausage en croute, crab cakes, jambalaya, deep-fried turkey, roast beef, and other New Orleans specialties, as well as the usual fruit and cheese and roasted vegetables. And the open bars served hurricanes (at least until the mix ran out).

After the reception I stopped by to say hello to Steven Levine's parents at the Meet Steven's Parents BOF. I couldn't stay, however, since I had promised to attend the Scotch BOF that evening. I sat, drank really good scotch, and relaxed and chatted with a couple of friends for a few hours in blissful quiet. A nice way to end the evening, and, since the hotel had no hot tub, the most relaxing one available.


Friday, December 8

Session 1: Refereed Papers
Fully Automatic

Session chair, John Orthoefer

"Deployme: Tellme's Package Management and Deployment System," by Kyle Oppenheim and Patrick McCormick, Tellme Networks

At Tellme they have automated the release engineering process such that anyone can push anything anywhere at any time. By using update logs and symbolic links, they can roll back changes made by mistake.

A product lifecycle consists of four steps: creation, distribution, activation, and clean-up. Once a package has been created, it can be distributed; once it's distributed, it can be activated (the "production" link is changed to point to the new version); and old versions no longer needed can be cleaned up and removed from disk.
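The activation step described above (repointing a "production" symbolic link) is what makes rollback cheap: rolling back is just a second repoint. Here is a minimal Python sketch of that idea; the directory layout, version names, and `activate` helper are invented for illustration and are not Deployme's actual scheme.

```python
# Symlink-swap activation: each version lives in its own directory, and
# "production" is a symlink that is atomically repointed on activation.
import os
import tempfile

root = tempfile.mkdtemp()
for version in ("myapp-1.0", "myapp-1.1"):
    os.mkdir(os.path.join(root, version))  # "distribution" of two versions

def activate(version):
    """Atomically point the production link at a deployed version."""
    link = os.path.join(root, "production")
    tmp = link + ".new"
    os.symlink(os.path.join(root, version), tmp)
    os.rename(tmp, link)  # rename over the old link: atomic on POSIX

activate("myapp-1.1")  # push the new release
activate("myapp-1.0")  # oops -- rolling back is just another activation
print(os.path.basename(os.readlink(os.path.join(root, "production"))))
```

Because the old version's files are still on disk until clean-up, rollback never requires re-copying anything; only the link moves.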

The lessons they learned while building Deployme include not enforcing policies in the tools and having fast failure modes with easy rollback. While not yet publicly available, a version should be released to the public in the foreseeable future.

Terminal Room

I only attended the "Deployme" paper before heading off to the terminal room to check my email and try to get caught up again.

Session 2: Refereed Papers
Building Blocks

Session chair, Ruth Milner

"Unleashing the Power of JumpStart: A New Technique for Disaster Recovery, Cloning, or Snapshotting a Solaris System," by Lee Amatangelo, Collective Technologies

Lee has developed a way to use optical media, JumpStart, and installboot to make, in effect, a bootable CD from which you can do bare-metal recovery of a customized system back up to full multi-user mode in under an hour. By doing regular system captures (which take about 20 minutes per system) you can create a bootable CD that restores the system to its snapshotted state in about 40 minutes.

Judging the Pre-Qualifying Exams

I missed the second and third papers in this block to go pick up the Quiz Show Pre-Qualifying exams, take them to lunch, and score them.

Session 3: Invited Talks:
Why the Documentation Sucks, and What You Can Do About It

Steven Levine spoke and sang about documentation. Steven is a technical writer with SGI and talked about four major subjects: myths, difficulties, projects, and improvements.

First, Steven talked about some myths. One myth is that writers are editors. In reality, writers not only edit material that others (such as developers) write but also write their own original content, maintain other existing documents, and produce both hardcopy and online help and web-based documentation. They also coordinate, organize, and act as detectives: they have multiple information sources and work between different groups or departments. They are also responsible for documentation consistency and for legal issues.

Second, he discussed some of the difficulties that writers face. He started this section with a song, which is best described by Steven himself:

The big song — the Tech Writer's Lament — is based on a 19th-century song called "The Housewife's Lament."

You can play the melody at this site, but it's kind of tediously done.

I stuck pretty close in form to the original. My father wondered afterwards how anybody could get the song without knowing the original.

This is my song itself, also my poem, FWIW. I suspect that on the page like this it will seem a bit more leaden than it did live — various faults fade into the background in a live performance.

The Tech Writer's Lament

From outside my cubicle rose such a ranting;
The language was vivid, the words were obscene;
I spied a tech writer, impassioned and panting,
And this was his song as he raged at the screen:

CHORUS:
Servers get flaky while work never ceases;
Projects are many and writers are few;
Each business day brings new software releases,
And no one has time to return their review.

The formatting package was bought at a bargain:
The toolbars and help screens are nothing but jokes;
There's never a rest from the fight against jargon;
There's so many buzzwords the spellchecker chokes.

CHORUS

On Tuesday you change what was final on Monday;
On Wednesday the software needs just one more tweak;
New features on Friday means work through to Sunday;
On Monday the project has slipped by a week.

CHORUS

The system examples have never been tested;
This package of images tends to dump core;
There's no getting back all the time you've invested
On products that never will get out the door.

CHORUS

Last night in my dreams I was facing a deadline,
And yet I'd heard nothing from software for weeks;
In my mind I could see it, in 50-point headline:
Tech Writer Kidnaps a Roomful of Geeks!

Alas 'twas no dream, but a dire prediction;
I fear that I'm reaching the end of my rope;
I'll turn in a manual that's modeled on fiction,
And nourished on wishes, and premised on hope.

CHORUS

(And yes, all 300-plus audience members were singing along with the chorus.) The major difficulties are lack of resources, conflicting perspectives, little (if any!) usability testing of the documentation, shorter release cycles, differences between hardware and software documentation, the problems of writing from experience about as-yet-nonexistent products, and having to rely on developers' time and interest to improve the documentation.

Third, he discussed some of the typical projects that writers are involved in. Not only are they responsible for administrative documentation but also online documents, procedures and examples, and man pages (which may not be sexy but are certainly very useful).

Fourth, Steven discussed how to improve the documentation. The short answer is that it's a two-way street. If you see something needing work in a document, let the author know; there's always some form of contact information (even if it's postal mail). Document what you want solved. Document what you did to work around a problem, to spare others from having to go through it themselves. Formalize your informal people networks; if you're a developer, take your writer to lunch. Produce libraries of examples, procedures, tricks you use, and so on. Collaborate within the company across department lines. Collaborate with friends and acquaintances at other companies.

Steven closed with a poem (and apologies to Henry Wadsworth Longfellow):

The Village Technical Writer,
by Steven Wadsworth Longfellow

Facing a glowing terminal screen
  The village writer stands;
Full his store and rich his lore
  Of options and commands;
Swiftly do his fingers fly
  From carpal-tunneled hands.

Week in, week out, from morn 'til night
  He labors at his prose;
Perusing code, he hopes in vain
  Its secrets to disclose;
In fervent quest he finds no rest
  Until the source has froze.

His formats, lovely on the page,
  He often must revise;
His users plead their latest need:
  html-i-cize;
And with his aching hand he wipes
  A tear out of his eyes.

Thanks, thanks to thee, my worthy friend
  Whose hands are stained with ink;
For we did thirst for knowledge,
  And you offered us a drink;
And in the admin. guide to life
  You've tested every link.

LISA Quiz Show

The Quiz Show started with a video performance of the Internet Help Desk sketch by the Canadian troupe Three Dead Trolls in a Baggie. Then Dan Klein read the credits for the show and introduced Rob Kolstad, who moderated the four games:

Game 1
Categories: Television 2001, World Leaders, State Borders, Hoaxes, Bad Engineering, Movies
Players: Hal Burch (900), Fuat Baran (1300), Trey Harris (2600)

Game 2
Categories: Animal Children, Guitar Players, Units, University Cities, Automobile Parts, Movies II
Players: David Grieg (1400), Bob Lawhead (2400), Adam Moskowitz (2300)

Game 3
Categories: Country Domains, Constellations, Phobias, Animal Groups, Tourist Sites, Movies III
Players: JB Segal (2200), Aaron Mandel (3700), Jeff Allen (1700)
Tie Breaker: Elements

Finals
Categories: Frequency Q's, Modern Cartoons, Spacecraft Names, Famous Quotes, Country Borders, Movies IV
Players: Trey Harris (2900), Bob Lawhead (400), Aaron Mandel (4000)
Tie Breaker: Movies V

Evening activities

After the conference itself ended, I went to a quiet dinner with Dkap, Tia, and their friend (whose name escapes me, sorry) at an Italian restaurant. The food wasn't bad, though the initial wine list was missing the "red" page and the waitroid misheard an order and delivered the wrong pasta.

After dinner it was the dead-dog party in the Manager's Suite. For an interesting change the suite had two bars, so we set up soft drinks and beer in the kitchen with most of the food and the hard liquor at the wet bar in the living room. I tended bar off and on for 4 hours or so in my bartender drag, and collapsed in bed around 2am.


Saturday, December 9

Today was a social day for me. I slept in, did lunch and shopping up and down Bourbon Street, discussed the USENIX/SAGE split and certification, and so on. Dinner was at Lucky Chang's; it was very good, but a little too loud and a little too smoky for me, so I bailed a little early and went to bed.


Sunday, December 10

Today was a travel day. As it happens, I managed to share an airport shuttle back to MSY with Tom and Bob; they went to their airline and I to mine. The flight departed on time and arrived only a little late (an initial ground hold because of weather at O'Hare was the contributing cause). I got my luggage and got home before the worst of the blizzard hit.



Back to my conference reports page
Back to my professional organizations page
Back to my work page
Back to my home page

Last update Feb01/20 by Josh Simon (<jss@clock.org>).