Conference Report: 2000 USENIX

This document is the general trip report for Josh Simon from the USENIX Annual Technical Conference in San Diego, CA, June 18-23, 2000.


Saturday, June 17

Today was a travel day. I managed to get to the hotel without incident (as long as you don't count the shouting baby in the row ahead of me on the plane) and ran into a fellow LISA 2000 Program Committee member in the lobby. We grabbed lunch, checked our email, and headed over to the conference registration area to sign in. We ran into more and more folks (including two of the three coworkers I was there with), and a group of us wound up heading to the bar to be social (and to get away from the registration and tutorial-notes areas: if we're there, we'll get drafted to help). We hung out waiting for Cat to arrive, and when she did, eight of us headed out to a late dinner at Asti, a (très chic) Italian restaurant in the Gaslamp Quarter, a few blocks' walk from the hotel. I had a very good seafood risotto (mussels, clams, shrimp, salmon, halibut, and some other fish in a light tomato sauce with lots of creamy Arborio rice), but I think Cat was the big winner with the venison in a wine reduction with onions and a side of polenta.

After dinner (and the walk back to the hotel), I crashed hard.


Sunday, June 18

Somehow I managed to wake up in time for breakfast. I met Chris and Bob and we went in to the buffet; Skunky, Trey, and Randy joined us and we all pigged out big time. After that we headed over to registration to see who else had shown up, and I helped a little during the rush before classes started at 9:00.

I attended S6: Hacking Exposed: Live! George Kurtz and Eric Schultze team-taught a course that showed us how easy it is to break into machines from the Internet, getting through a demilitarized zone (DMZ) and onto the back-end corporate network without going in through the router. A very interesting, amusing, and scary class. (I missed the last quarter of it trying to shake a near-migraine headache. Lying in a cool, dark room helped; the drugs I took didn't.)

After the tutorial sessions, we hung out until the Welcome Reception opened, where we munched on appetizer-like foods (chips, dip, potato skins, antipasti, and so on). Eight of us then headed over to Molly's, the steakhouse in the hotel, for dinner. Very good food, very good (though occasionally slow) service. The cherries jubilee for dessert was delicious.

After dinner we went to the hot tub and soaked for an hour or so. I left there around 10:30 or 11:00 and went to bed.


Monday, June 19

Today was my "no tutorial" day. I intended to read my LISA 2000 mandatory abstracts (the 10 abstracts I was assigned to read by the program chairs), but I had to start by straightening out a room problem. It seems the hotel had not quite set up the "direct bill" arrangement for Darryl and Jeff's room correctly. A brief meeting with the front desk manager and a call to Robin, the wonderful Regional Coordinator (RC) for the Pacific Southwest region, cleared things up. We thought. (This theme will resurface.)

LISA 2000 Program Committee

I managed to review 8 of the 10 abstracts I'd brought with me before meeting with folks for lunch, and the other 2 after lunch. I spent the afternoon typing up my review comments and working with the hotel technical staff to fix the damaged data port in my hotel room.

Evening activities

Ten of us (Bob, Tom, Dominus, Brad, David, Trey, Skunky, Chris, Lee, and I) went to dinner at Taka, a Japanese sushi-and-more restaurant in the Gaslamp Quarter (http://www.taka-signonsandiego.com/). I had a wonderful shrimp tempura and salmon teriyaki. Afterwards we wandered to a nearby ice cream shop for dessert and then to the Ralph's grocery store for supplies. Then some hot tub time and to bed.


Tuesday, June 20

Daytime activities

Tuesday I attended tutorial T8: Managing and Being Managed. Steve Johnson (who used to be President of USENIX) and Dusty White team-taught this course on management. They provided some information on neurolinguistic programming and on what to look for in our managers, or in the people we manage, to determine the true underlying meaning of what they say and to understand how they process information. Some areas we specifically covered included knowing the expectations of the company and the employees, finding the management styles that work best for you, chunking information up and down, and other communication skills.

I missed the final quarter of the tutorial because I was coordinating a staffing change: Hal Skelly, who was to have attended the entire conference through Friday, had been deployed to a customer. Since Collective Technologies had agreed that Hal would write up five specific sessions for ;login: (and got Hal's technical-session registration for free as a consequence), I worked with Mike Geringer (Regional Director of the Pacific Southwest Region) and Robin Motherhead (Regional Coordinator) to get Gustavo Vegas approved to come up from Phoenix to cover those sessions. Unfortunately, Hal was local to San Diego and had no hotel room to hand over, and the hotel was booked up, so I had to hunt one down: I transferred Jeff Schouten to my room and added Gustavo to Darryl Baker's room; in other words, I managed to get all of us into hotel rooms without any additional cost. I also got Gustavo's badge printed, his registration packet compiled, and everything prepared on the USENIX and Marriott fronts.

Evening activities

A small group of us went to dinner at the hotel's Yacht Club restaurant. We had nachos, pot stickers, and burgers. We finished in time to head over to the GLBT BOF, which Chris ran this time. After a round of introductions we talked about what the Association could do for us. Trey collected a list of ideas and names and will be following up to work on a Short Topics booklet on what policies an organization should have in terms of GLBT rights (such as inclusion in the equal opportunity employment (EOE) clause, domestic-partner inclusion in health and leave benefits, and so on) and how to go about getting those policies enacted. After the GLBT BOF broke up we headed over to the Sendmail hospitality suite. After some Häagen-Dazs ice cream and drinks and a lot of conversation (technical and otherwise), we adjourned to the hot tub to round out the evening.


Wednesday, June 21

Wednesday marked the start of the actual technical conference. Since the convention center had its own audio-visual team, we didn't use the group that USENIX usually uses. This was very evident: the room was not ready until 8:50 a.m. for a session starting at 9:00, the microphones didn't work right, the video feeds were badly out of color synchronization, and so on. However, once those problems were mostly worked out, the session began with the announcements.

Session 1: Announcements & Keynote

Announcements

The first session started with the traditional announcements from the Program Chair, Christopher Small:

Chris then announced the best paper awards:

Kirk McKusick, chair of the Freenix committee, spoke about that track. They received 56 refereed paper submissions and accepted 29 of them. Most of the papers were open-source-related. Because the Freenix track was run with the same rigor as the refereed papers track, they also presented awards:

Andrew Hume, immediate past president of the USENIX Association (and now Vice President), announced the two annual USENIX awards: the Lifetime Achievement Award and the Software Tools User Group (STUG) Award.

Keynote address

Bill Joy presented the keynote address, his visions of the future of technology.

Based on his 25 years of experience, Bill forecast the next 25-30 years in computing. But he started by looking back at that history: the eventual acceptance of software as research in computer science, the integration of networks and the operating system, and the growth of portability in computing. More and more, we'll see standards defined in English, perhaps passing code or agents around instead. He also sees the continued need for maintaining compatibility with open protocols and specifications, noting that protocols often outlive the hardware systems for which they were designed.

Looking forward, Bill believes that Moore's Law will continue. Based in part on molecular electronics and improved algorithms, he expects to see up to a 10^12 improvement over 30 years. The question of synchronization between different geographies becomes hard when you can store 64 TB in a device the size of your ballpoint pen. We need to improve resilience and autonomy for the administration of these devices to be possible. Further, he sees six webs of organization of content: near, the web of today, used from a desktop; far, entertainment, used from your couch; here, the devices on you, like pagers and cell phones and PDAs; and weird, such as voice-based computing for tasks like driving your car and asking for directions. These four would be the user-visible webs; the remaining two would be e-business, for business-to-business computing, and pervasive computing, such as Java and XML. Finally, reliability and scalability will become even more important. Not only will we need hardware fault tolerance but also software fault tolerance. In addition, we need to work towards a distributed consensus model so there's no one system in charge of a decision in case that system is damaged. This leads into the concepts of Byzantine fault tolerance and the genetic diversity of modular code. We also need to look into the fault tolerance of the user; for example, have the computer assist the user who's forgotten her password.

Session 2: Refereed Papers: Instrumentation and Visualization

Session chair: Christopher Small (Conference chair)

Mapping and Visualizing the Internet
by Bill Cheswick, Hal Burch, and Steve Branigan

We need tools to be able to map networks of arbitrarily large size, for tomography and topography. This work is intended to complement the work of CAIDA. So Bill et al. developed tools, Unix-style, using C and shell scripts, to map the Internet as well as the Lucent intranet. The tools scan up to 500 networks at once and are throttled down to 100 packets per second. This generates 100-200 MB of text data (which compresses to 5-10 MB) per day and covers on the order of 120,000 nodes. http://www.cs.bell-labs.com/who/ches/map/ has the details and maps.
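The paper's tools aren't reproduced here, but the core loop (send hop-limited probes toward a long list of target networks, throttled to a fixed rate) is easy to sketch. The following is my own toy illustration, not Bill's code; it shells out to the system traceroute and rate-limits at the probe level rather than the packet level, and the throttle value and target addresses are just placeholders.

    import subprocess
    import time

    MAX_PROBES_PER_SEC = 100   # stand-in for the paper's packets-per-second throttle

    def probe(target):
        """Run one hop-limited probe (one query per hop) and return the raw output."""
        return subprocess.run(
            ["traceroute", "-n", "-q", "1", "-w", "2", target],
            capture_output=True, text=True).stdout

    def map_networks(targets):
        """Probe one representative address per target network, throttled."""
        interval = 1.0 / MAX_PROBES_PER_SEC
        results = {}
        for target in targets:
            start = time.time()
            results[target] = probe(target)
            time.sleep(max(0.0, interval - (time.time() - start)))  # crude throttle
        return results

    if __name__ == "__main__":
        # RFC 5737 documentation addresses used purely as placeholders
        print(map_networks(["192.0.2.1", "198.51.100.1"]))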

Measuring and Characterizing System Behavior Using Kernel-Level Event Logging
by Karim Yaghmour and Michel R. Dagenais

Karim Yaghmour spoke on the problem of visualizing system behavior. ps and top are good, but neither provides truly real-time data. He therefore instrumented the Linux kernel to trace events, added a trace facility with a daemon to log the events to a file, and performs offline analysis of the data. The tools do not add much overhead for server-side operations but add a lot of overhead for intensive applications such as the Common Desktop Environment (CDE). Data is collected at up to 500 kb per second, but it compresses well. Future work includes quality-of-service kernels (throttling the rate of, for example, file opens), security auditing, and even integrating the event facility further into the kernel. Sources are available at http://www.opersys.com/LTT/ under the GPL.
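LTT itself lives partly in the kernel, but the trace-now, analyze-later split it uses can be illustrated in user space. Here is a minimal sketch of that model (my own, not LTT code, and with a made-up record layout): events are appended to a log as fixed-size binary records, and a separate offline pass summarizes them.

    import struct
    import time
    from collections import Counter

    RECORD = struct.Struct("<dI")   # hypothetical layout: timestamp (double) + event id (uint32)

    def log_event(logfile, event_id):
        """Append one fixed-size record; a real tracer buffers in the kernel instead."""
        logfile.write(RECORD.pack(time.time(), event_id))

    def analyze(path):
        """Offline pass: count events by id, mirroring the trace-then-analyze split."""
        counts = Counter()
        with open(path, "rb") as f:
            while chunk := f.read(RECORD.size):
                if len(chunk) < RECORD.size:
                    break       # ignore a truncated tail record
                _, event_id = RECORD.unpack(chunk)
                counts[event_id] += 1
        return counts

    if __name__ == "__main__":
        with open("trace.dat", "wb") as log:
            for i in range(1000):
                log_event(log, i % 4)   # four pretend event types
        print(analyze("trace.dat"))     # Counter({0: 250, 1: 250, 2: 250, 3: 250})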

Pandora: A Flexible Network Monitoring Platform
by Simon Patarin and Mesaac Makpangou

The goal of Simon and Mesaac was to produce a flexible network monitoring platform with online processing, good performance, and no impact on the environment. The privacy of users was also important in the design. They decided to use components for flexibility and a stack model. They developed a small configuration language and a dispatcher that coordinates the creation and destruction of the components. The tool is 15,000 lines of C++, using libpcap. The overhead is about 0.26 microseconds per filter per packet. For example, HTTP requests get over 75 Mb/s throughput on traces, which translates into 44-88 Mb/s in real-world situations, or 600-2600 requests per second. Future work includes improving the performance and flexibility. More details are available from http://www-sor.inria.fr/projects/relais/; the code is released under the GPL.
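Pandora's actual components are C++ and driven by its own configuration language; the sketch below is only my own illustration of the general shape of such a component stack: each stage either drops a packet or passes a (possibly transformed) packet to the stage below it, and a dispatcher wires the stages together.

    # Toy component stack in the spirit of Pandora (not its actual code or API).
    class Component:
        """One pipeline stage; process() returns a packet to pass on, or None to drop it."""
        def __init__(self, downstream=None):
            self.downstream = downstream

        def process(self, packet):
            raise NotImplementedError

        def push(self, packet):
            out = self.process(packet)
            if out is not None and self.downstream is not None:
                self.downstream.push(out)

    class TcpFilter(Component):
        def process(self, packet):
            return packet if packet.get("proto") == "tcp" else None

    class PortFilter(Component):
        def __init__(self, port, downstream=None):
            super().__init__(downstream)
            self.port = port
        def process(self, packet):
            return packet if packet.get("dport") == self.port else None

    class CountSink(Component):
        def __init__(self):
            super().__init__()
            self.count = 0
        def process(self, packet):
            self.count += 1
            return packet

    # The "dispatcher" builds the stack, then feeds packets through it.
    sink = CountSink()
    stack = TcpFilter(PortFilter(80, sink))
    for pkt in [{"proto": "tcp", "dport": 80}, {"proto": "udp", "dport": 53}]:
        stack.push(pkt)
    print(sink.count)   # 1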

Session 3: Refereed Papers: File Systems

Session chair: Liuba Shrira

After a quick bite to eat I took a turn around the vendor floor, picking up some goodies (like a t-shirt from Sendmail, a Compaq model car, my BSD daemon horns, and so on). As it got closer to 2pm I headed up to the sessions. I arrived a little late; even though my watch said it was 2:01pm, the speaker was several slides into the first paper.

A Comparison of File System Workloads
by Drew Roselli, Jacob R. Lorch, and Thomas E. Anderson

In the paper, the authors described the collection and analysis of file system traces from a variety of environments, including both UNIX and Windows NT systems, clients and servers, and instructional (educational) and production systems. Their stated goal was to understand how modern workloads affect the ability of file systems to provide high performance to users. Because of the increasing gap between processor speed and disk latency, file system performance is largely determined by disk behavior, so the authors focused on the disk I/O aspects of the traces.

The authors determined that more processes access files via the memory-map interface than through the read interface, but these files are also more likely to be cached. File access also has a bimodal distribution pattern: some files are written many times without being read, and other files are read almost exclusively.

Using a new metric they developed that accounts for never-deleted files, the authors measured file lifetimes. This indicated that some workloads have much longer block lifetimes than others, and that those lifetimes significantly exceed the 30-second write delay used by many file systems. This should allow administrators to manage and predict file system growth rates. Additional details are available at http://tracehost.cs.berkeley.edu/traces.html.

FiST: A Language for Stackable File Systems
by Erez Zadok and Jason Nieh

File system development is a hard problem. How can it be made easier? The authors decided to implement a stackable modular approach. The two major existing approaches are replacing the file system completely, which is a very hard problem involving rewriting at the kernel layer, and using templates, which are very low-level and generally not portable to other operating systems.

The FiST system uses both approaches at the same time to extend existing file systems. FiST does not yet allow for the creation of new low-level file systems. FiST is a simple, portable language on top of C templates (much like yacc for file systems) and an abstraction treated as a layer above the file system driver, sitting between user space and the low-level driver in the kernel. A program, fistgen, generates the drivers based on the configuration file and the templates.
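FiST's own input language and templates aren't shown in the talk summary, so the sketch below is only a generic illustration of the stacking idea, not FiST or fistgen output: a layer exposes the same interface as the file system below it and applies a transform on the way through (here a trivial byte reversal stands in for something like cryptfs's encryption).

    # Illustration of stackable layering; not FiST code.
    class LowerFS:
        """Stand-in for an existing (lower) file system."""
        def __init__(self):
            self.files = {}
        def read(self, name):
            return self.files[name]
        def write(self, name, data):
            self.files[name] = data

    class ReversingLayer:
        """A stacked layer: same interface as the lower file system, plus a transform."""
        def __init__(self, lower):
            self.lower = lower
        def read(self, name):
            return self.lower.read(name)[::-1]   # undo the transform on the way up
        def write(self, name, data):
            self.lower.write(name, data[::-1])   # apply the transform on the way down

    fs = ReversingLayer(LowerFS())
    fs.write("note.txt", b"hello")
    assert fs.read("note.txt") == b"hello"       # callers see normal data
    print(fs.lower.read("note.txt"))             # b'olleh' is what is stored below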

Results, comparing different file system types (snoopfs, cryptfs, aclfs, and unionfs) in terms of code size, development time, and performance overhead, showed that FiST was between 1.3 and 31.1 times smaller (and 10.5 times smaller on average) and about 7 times faster to develop than, for example, wrapfs. The overhead seen was typically on the order of 0.5% to 2.1%, varying by file system type.

Details can be found on the web at http://www.cs.columbia.edu/~ezk/research/fist/.

Journaling Versus Soft Updates: Asynchronous Meta-data Protection in File Systems
by Margo I. Seltzer, Gregory R. Ganger, M. Kirk McKusick, Keith A. Smith, Craig A. N. Soules, and Christopher A. Stein

The authors aimed to solve the problem of the consistency of file system metadata. The end goal is for the data to have integrity (be recoverable), durability (be persistent), and atomicity (either all or none of it is written to disk, never just some of it). There are two classes of solution to this problem: journaling and soft updates.

Soft updates are delayed, asynchronous writes. The problem with soft updates, though they are fast due to the interleaving of CPU and I/O, is that they are neither durable nor atomic. Journaling, on the other hand, writes a log record to some data structure, and write-ahead logging ensures recoverability. The metadata is atomic, and the durability can be turned on or off. Synchronous journaling has all three properties of integrity, durability, and atomicity. The results of the testing showed that soft updates can background the deletion of files. The integrity cost is very low, but the durability cost is higher (by about 2 to 4 times). The durability cost can be affected by moving the journal log to a different disk.
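To make the journaling half concrete, here is a minimal write-ahead-logging sketch (my own, not from the paper, with JSON files standing in for on-disk metadata and made-up file names): the change record is appended to a log and forced to disk before the in-place update, so a crash between the two steps can be repaired by replaying the log.

    import json
    import os

    LOG = "metadata.journal"     # hypothetical file names for this sketch
    STATE = "metadata.db"

    def load_state():
        if os.path.exists(STATE):
            with open(STATE) as f:
                return json.load(f)
        return {}

    def save_state(state):
        with open(STATE, "w") as f:
            json.dump(state, f)

    def apply_change(state, change):
        state[change["key"]] = change["value"]

    def commit(change):
        """Write-ahead: make the log record durable before the in-place update."""
        with open(LOG, "a") as log:
            log.write(json.dumps(change) + "\n")
            log.flush()
            os.fsync(log.fileno())          # the log hits the disk first
        state = load_state()
        apply_change(state, change)
        save_state(state)

    def recover():
        """After a crash, replay the log; re-applying a change here is harmless (idempotent)."""
        state = load_state()
        if os.path.exists(LOG):
            with open(LOG) as log:
                for line in log:
                    apply_change(state, json.loads(line))
        save_state(state)

    commit({"key": "inode/1042", "value": {"size": 4096}})
    recover()
    print(load_state())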

Session 4: Refereed Papers: Old Dogs, New Tricks

Session chair: Greg Minshall

Lexical File Names in Plan 9, or, Getting Dot-Dot Right
by Rob Pike

Historically there have been problems in Unix in determining where the parent directory (dot-dot, or ..) is. The problem comes from symbolic links that traverse file systems or that go through automount maps. For example, the parent of /home/rob could be /home or /net/servername/volume. This was thought to be a hard problem to resolve, but it was fixed in a single afternoon in Plan 9. They define ".." lexically, as the directory path (such as /absolute/path/to/whatever) without its last element (/absolute/path/to). In Plan 9 they use a binding (similar to a link in Unix) to create a union. For example, /bin could be bound to /sparc/bin, $HOME/sparc/bin, and so on. So what would the parent of /bin be (/bin/..)? The root directory (/) is the expected and correct answer. In Plan 9, a channel (which isn't quite like an inode or vnode in Unix file systems) has a CNAME, much as in DNS, which records the current working directory and can be used to disambiguate the parent directory. The code, available at http://plan9.bell-labs.com/plan9dist/, affects all file system-related calls, including open() as well as chdir().
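The lexical rule itself is short enough to write out. This is my own sketch of the idea, not Plan 9's cleanname() code: resolve "." and ".." purely by manipulating the path string, never by consulting the file system, so the answer is the same no matter what bindings or symlinks are in effect.

    def lexical_clean(path):
        """Resolve '.' and '..' in an absolute path lexically, the way Plan 9 defines them."""
        parts = []
        for elem in path.split("/"):
            if elem in ("", "."):
                continue
            if elem == "..":
                if parts:            # the parent of the root is the root
                    parts.pop()
            else:
                parts.append(elem)
        return "/" + "/".join(parts)

    # /bin may be a union of /sparc/bin, $HOME/sparc/bin, and so on, but lexically
    # its parent is still the root directory:
    print(lexical_clean("/bin/.."))        # "/"
    print(lexical_clean("/home/rob/.."))   # "/home"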

Gecko: Tracking a Very Large Billing System
by Andrew Hume

The problem faced by AT&T, Andrew's employer, was billing for every call on their network exactly once. The solution was to look at all the data passively, at all the various steps in the (complex) billing process, and track each call uniquely. They decided to use flat files to store the collected data so the normal Unix tools (specifically grep and its cousins) could be used to search. The Gecko project took 6 people 8 months to implement from the start of discussions to rolling it into production.
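The exactly-once requirement is essentially a reconciliation problem, and with flat files it can be sketched with ordinary sets. The toy below is my own illustration (nothing like Gecko's scale or format; the pipe-delimited layout and file names are made up): compare the call IDs seen at the start of the pipeline with those that reached the billed output, and flag anything missing or duplicated.

    from collections import Counter

    def call_ids(path):
        """Assume each flat-file line starts with a unique call ID field."""
        with open(path) as f:
            return Counter(line.split("|")[0] for line in f if line.strip())

    def reconcile(ingest_file, billed_file):
        ingested = call_ids(ingest_file)
        billed = call_ids(billed_file)
        missing = set(ingested) - set(billed)                  # calls never billed
        duplicated = {c for c, n in billed.items() if n > 1}   # calls billed more than once
        return missing, duplicated

    if __name__ == "__main__":
        with open("feed.raw", "w") as f:
            f.write("call1|...\ncall2|...\ncall3|...\n")
        with open("bills.out", "w") as f:
            f.write("call1|...\ncall1|...\ncall2|...\n")
        print(reconcile("feed.raw", "bills.out"))   # ({'call3'}, {'call1'})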

AT&T has a high volume of calls, on the order of 250 million calls per day. Their system has 250 data feeds and 650 data sets, generating 3,100 files (or 250 gigabytes of data) per day. One problem was data mutation, since different systems have to track across time zones, handle area code splits, and follow different tariffing systems. The Gecko project is accurate, with 0.007% error and 0.03% production noise.

They used the Gecko calculations as a benchmark. A 32-processor Sun E10000 server cost $1.2 million; an SGI Origin 2000 cost $0.7 million. The SGI came out 4.3 times better than the Sun overall (1.28x the speed, 1.74x cheaper, and 1.95x more efficient).

One note: they chose to use the passive data feeds rather than create new feeds in order to make the legacy systems operations teams' jobs easier.

Extended Data Formatting Using Sfio
by Glenn S. Fowler, David G. Korn, and Kiem-Phong Vo

Sfio was intended to be a better standard I/O library. The authors wanted to provide extended formatting for portability, convenience, and robustness. The modified routines allow printf()- and scanf()-style formatting without the risk of buffer-overflow attacks. If they're not used, there's no overhead; if they are, there's a small, reasonable, linear cost. The routines themselves are open source and available from http://www.research.att.com/sw/tools/.

Evening activities

After a quiet dinner in the hotel with Tom, Skunky, and Carson, I went to the terminal room to check my mail (after a couple of Hallway Track discussions), then off to the hospitality suites. I only managed to make it to Intel's; they had good food (including roast beef, crab cakes, chicken fingers, spring rolls, crudites, cheeses, jalapeno poppers, cookies, and chocolate-covered strawberries), an open bar, and gambling tables (poker, blackjack, roulette, and craps) at which you could gamble your $200 in Intel Dollars to win tickets good for a later raffle. I wasn't feeling that well, so I went up to start writing this trip report and took a quick dip in the hot tub before going to bed.


Thursday, June 22

Session 1: Invited Talks: The Microsoft Antitrust Case: A View from an Expert Witness

by Edward Felten

Disclaimer: Neither the author of this write-up nor the speaker is a lawyer.

Dr. Felten was one of the expert witnesses for the United States Department of Justice in the recent antitrust case against Microsoft. In his talk he discussed why he believed the government chose him and explained the role of an expert witness in antitrust cases.

In October of 1997 Ed received an email from an attorney at the Department of Justice asking to speak with him. After signing a nondisclosure agreement (which is still binding, so he advised us there were some aspects he simply could not answer questions about), and over the course of several months, he spoke with the Department of Justice until, in January 1998, he signed a contract to be an advisor to the case.

What was the case about? Contrary to media portrayals, the case was not about whether Microsoft is good or evil, whether Bill Gates is good or evil, or whether Microsoft's behavior was good or bad. The case was specifically about whether or not Microsoft violated US antitrust laws.

A brief discussion of economics may be helpful here. Competition constrains behavior. You cannot, as a company, hike prices and provide bad products or services when there is competition, for the consumer can go to your competitors and you'll go out of business. Weakly constrained companies, or those companies with little or no competition, have what is called monopoly power. Monopoly power in and of itself is not illegal. What is illegal is using the monopoly power in one market (for example, flour) to weaken competition in another market (for example, sugar).

The US government claimed that (1) Microsoft has monopoly power in the personal computer operating system market; and that (2) Microsoft used its monopoly power to (a) force PC manufacturers to shun makers of other (non-Microsoft) applications and operating systems; (b) force AOL and other ISPs to shun Netscape's browsers, Navigator and Communicator; and (c) force customers to install and use Microsoft Internet Explorer. These issues are mostly non-technical and specifically economic. Dr. Felten focused on the technical aspects.

Under US antitrust law, tying one product to another is illegal in some cases. For example, if you have a monopoly on flour, you cannot sell flour and sugar together unless you also sell flour alone. You cannot force customers to buy your sugar in order to get your flour. Similarly, the government argued, Microsoft cannot tie Windows 95 (later, Windows 98) together with Internet Explorer unless it offers both the OS and the browser separately. Microsoft claimed technical efficiencies in bundling the products together.

This boils down to two legal issues. First, what was the motive in combining the OS and the browser? The answer to this comes from documents and witnesses subpoenaed by the government, and is not technical. Second, does the combination achieve technical efficiencies beyond those of not combining the two? The answer to this is experimental, technical, based in computer science, and the focus of the rest of the talk.

Specifically, how did Dr. Felten go about testing the efficiencies or lack thereof? He started by hiring two assistants who reverse-engineered both Windows and Internet Explorer. (Note that this work, because it was done on behalf of the government for the specific trial, was not illegal. Doing so yourself in your own basement would be.) After nine months, they had assembled a program to remove Internet Explorer from Windows.

The next step in the process was to prepare for court. In general, witnesses have to be very paranoid, nail down the technical details, have sound and valid conclusions, and learn how to be cross-examined. Lawyers, no matter your personal opinion of them, are generally very well schooled in rhetoric, in terminology and the framing of questions, and in hiding assumptions within them. They're also good at controlling the topic, pacing the examination, and producing sound bites. In his testimony, Dr. Felten demonstrated the "remove IE" program. Jim Allchin, a Microsoft Vice President, presented 19 benefits of tying the products together and claimed the removal program had bugs. In the government's cross-examination, Mr. Allchin admitted that all 19 benefits were present if IE was installed on top of Windows 95, and that the video used to show that the removal demonstration had bugs itself had errors and inconsistencies. Microsoft tried a second demo to show problems with the removal program under controlled circumstances and could not do so. Furthermore, in rebuttal to Microsoft's assertion that the products had to be strongly tied together to gain benefits, the government pointed out that Microsoft Excel and Microsoft Word were not strongly tied and yet were able to interoperate without being inseparable.

Judge Jackson reported in his findings of fact in November 1999 that the combination had no technical efficiencies beyond installing the two separately, that Internet Explorer could be removed from the operating system, and that tying the browser and the operating system together was, in fact, illegal.

The next phase of the trial was the remedy phase, in May 2000. The goals of the remedy phase are to undo the effects of the illegal acts, prevent recurrence of those acts, and be minimally intrusive to the company, if possible. There are generally two ways to accomplish this: structural changes (reorganization or separation of companies) and conduct changes (imposing rules). The judge could choose either or both. The decision was reached to restructure Microsoft such that the operating system (Windows) would be handled by one company and everything else by another. Furthermore, in conduct changes, Microsoft could not place limits on contracts; could not retaliate against companies for competing in other markets (such as, for example, word processing); must allow PC manufacturers to customize the operating systems on the machines they sell; must document their APIs and protocols; and cannot tie the OS and products together without providing a way to remove them.

Microsoft has appealed the case. At the time of this writing it is not clear whether the appeal will be heard in the US Court of Appeals or by the US Supreme Court. The remedies are stayed, or on hold, until the resolution of the appeals or until a settlement of some kind is reached between Microsoft and the US government. Once the case is truly over, Dr. Felten's slides will be made available through the USENIX web site, http://www.usenix.org/.

Session 2: Invited Talks: Challenges in Integrating the MacOS and BSD Environments

by Wilfredo Sanchez

Fred Sanchez discussed some of the challenges in integrating the MacOS and BSD environments to produce MacOS X. Historically, the Mac was designed to provide an excellent user interface ("the best possible user experience") with tight hardware integration and a single user. In contrast, Unix was designed to solve engineering problems, using open source (for differing values of "open"), running on shared multi-user computers, and with administrative overhead. There are positives and negatives to both approaches. MacOS X is based on the Mach 3.0 kernel and attempts to take the best from both worlds. A picture may help explain how all this hangs together:

User interface:        Platinum | Aqua | Curses
Environments:          Classic | Carbon | Cocoa (OpenStep) | BSD
Application services:  Quartz, OpenGL, and QuickTime
Core Services
Darwin (BSD layer)

Fred next talked about four problem areas in the integration: file systems, files, multiple users, and backwards compatibility. Case sensitivity was not much of an issue; conflicts are rare and most substitutions are trivial. MacOS used the colon as the path separator; Unix uses the slash. Path names change depending on whether you talk through the Carbon and Classic interfaces (colon, :) or the Cocoa and BSD interfaces (slash, /). File name translation is also required, since it is possible for a slash to be present in a MacOS file name. File IDs are persistent file handles that follow a file in MacOS, providing for robust alias management. However, this is not implemented in file systems other than HFS+, so the Carbon interface provides nonpersistent file IDs there. Hard links are not supported in HFS+, but it fakes them, providing behavior equivalent to the UFS hard link. Complex files (specifically, the MacOS data and resource forks) are in the Mac file systems (HFS+, UFS, and NFS v4) but not the Unix file systems (UFS and NFS v3). The possible solutions to this problem include using AppleDouble MIME encoding, which would be good for commands like cp and tar but bad for commands using mmap(), or using two distinct files, which makes renaming and creating files tough, overloads the name space, and confuses cp and tar. The solution they chose was to hide both the data and resource forks underneath the file name (for example, filename/data and filename/resource, looking like a directory entry but not a directory) and have the open() system call return the data fork only. This lets editors and most commands (except archiving commands, like cp and tar, and mv across file system boundaries) behave as expected. Another file system problem is permissions (which exist in HFS+ and MacOS X but not in MacOS 9's HFS). The solution here is to base default permissions on the directory modes.

The second problem area is files. Special characters were allowed in MacOS file names (including space, backslash, and the forward slash). File name translation works around most of these problems, though users have to understand that "I/O stuff" in MacOS is the same as "I:O_stuff" in Unix on the same machine. Also, to help reduce problems in directory permissions and handling, they chose to follow NeXT's approach and treat a directory as a bundle, reducing the need for complex files and simplifying software installation by allowing drag-and-drop to install new software.
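The separator swap described above is simple enough to show. This is just my own illustration of the rule for a single path component (not Apple's translation code): a slash in the Mac-visible name shows up as a colon at the BSD layer, and vice versa.

    def carbon_to_bsd(component):
        """A '/' in a Mac file name appears as ':' in the BSD-layer name."""
        return component.replace("/", ":")

    def bsd_to_carbon(component):
        """And a ':' at the BSD layer appears as '/' in the Mac-visible name."""
        return component.replace(":", "/")

    print(carbon_to_bsd("Status 6/18"))   # "Status 6:18"
    print(bsd_to_carbon("Status 6:18"))   # "Status 6/18"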

The third problem area involves multiple users. MacOS historically thought of itself as having only a single user and focused on ease of use. This lets the Mac user perform operations like setting the clock, reading any file, installing software, moving stuff around, and so on. Currently MacOS X provides hooks for UID management (such as integrating with a NetInfo or NIS or LDAP environment) and tracks known (Unix-like) and unknown disks, disabling commands like chown and chgrp on unknown disks.

The fourth and final problem area Fred discussed was compatibility with legacy software and hardware. Legacy software has to "just work," and the API and toolkit cannot change, so previous binaries must continue to work unchanged. The Classic interface provides this compatibility mode. Classic is effectively a MacOS X application that runs MacOS 9 in a sandbox. This causes some disk access problems, depending on the level (application, file system, disk driver, or SCSI driver). The closed architecture of the hardware is very abstracted, which helps move up the stack from low-level to the high-level application without breaking anything.

Questions focused on security, the desire for a root account, and the terminal window or shell. The X11 windowing system can be run on MacOS X, though Apple will not be providing it. Software ports are available from http://www.stepwise.com/. Additional details can be found at http://www.mit.edu/people/wsanchez/papers/USENIX_2000/, http://www.apple.com/macosx/, and http://www.apple.com/darwin/.

Session 3: Invited Talks: The Convergence of Networking and Storage: Will it Be SAN or NAS?

by Rod Van Meter

The goal of this talk was to provide models for thinking about SANs and NASes. Network-attached storage (NAS) is like NFS on the LAN; storage area networks (SAN) are like a bunch of Fibre Channel-attached disks.

People's goal is to share their data. There are several sharing patterns (one-to-many users, one-to-many locations, time slices, and fault tolerance); several kinds of activity (read-only, read-write, multiple simultaneous reads, and multiple simultaneous writes); and several ranges (of machines, of CPUs, LAN versus WAN, and known versus unknown clients).

When sharing data over the network, how should you think about it? There are 19 principles that Levy and Silberschatz came up with that describe a distributed file system, including the naming scheme, component unit, user mobility, availability, scalability, networking, performance, security, and so on. There is also Garth Gibson's taxonomy of four cases: server-attached disks, like a Solaris machine; server-integrated disks, like a Network Appliance machine; netSCSI, or SCSI disks shared across many hosts with one "trusted" host to do the writes; and network-attached secure devices (NASD). Over time, devices are evolving to become more and more network-attached, smarter, and programmable.

Rod went into several areas in more detail. Access models can be application-specific (like databases or HTTP), file-by-file (like most Unix file systems), logical blocks (like SCSI or IDE disks), or object-based (like NASD). Connections can be over any sort of transport, including Ethernet, HiPPI, Fibre Channel, ATM, SCSI, and more. Each connection model is at the physical and link layers and assumes there is a transport layer (such as TCP/IP), though other transport protocols are possible (like ST or XTP or UMTP). The issues of concurrency (are locks mandatory or advisory, is management centralized or distributed), security (authorization and authentication, data integrity, privacy, and nonrepudiation), and the network ("it doesn't matter" versus "it's all that matters") all need to be considered.

Given all those issues, there are three major classes of solutions today. The first is the distributed file system (DFS), also known as NAS. This model has lots of computers and lots of data; examples include NFS v2, AFS, Sprite, CIFS, and xFS. The bottleneck with these systems is the file manager or object store; drawbacks include the nonprogrammability of these devices and the fact that they are OS-specific and have redundant functionality (performing the same steps multiple times in different layers).

The second class of solution is storage area networks (SAN). These tend to have few computers and lots of data and tend to be performance-critical. They are usually contained in a single server or machine room, and the machines tend to have separate data and control networks. These devices' drawbacks are that they are neither programmable nor smart, they're too new to work well, they provide poor support for heterogeneity, and their scalability is questionable. However, there is a very low error rate, and the application layer can perform data recovery. Examples of SANs include VAX clusters, NT clusters, CXFS from SGI, GFS, and SANergy.

The third solution class is NASD, developed at CMU. The devices themselves are more intelligent and perform their own file management. Clients have an NFS-like access model; disk drives enforce (but do not define) security policies. The problems with NASD are that it's too new to have reliable details, more invention is necessary, there are some OS dependencies, and some added functionality may be duplicated in different layers. Which solution is right for you? That depends on your organization's needs and priorities.

Slides from this talk will be made available through http://www.usenix.org/ shortly after the conference.

Session 4: USENIX Association Open Board Meeting

I attended the USENIX Open Board meeting during the fourth time slot today. The Association provides 12 annual meetings (conferences, symposia, and so on) and is working towards additional international events. We provide $1.1 million annually towards good works, including the Electronic Frontier Foundation, the Internet Software Consortium, the Software Patent Institute, certification of systems administration (with SAGE, $135,000), and over $680,000 to educational scholarships, stipends, and awards.

The new board and directors were announced; Evi Nemeth asked her usual question of what the new directors' goals were. John Gilmore wants to get back in step with the community and look more at how what we do impacts the "normal citizen"; Avi Rubin wants to work more with educational outreach; Michael Jones wants to perform more information exchange and enable more information sharing across communities; and Kirk McKusick wants to give back to the organization.

Evening activities

The 25th anniversary reception was held in the hotel from 6:00 to 8:00. The reception theme was "under the big top," and the circus motif was quite evident. In addition to the open bar there were buffet lines for fajitas, pasta, bruschetta, crudites, turkey, ham, and ice cream; there were also face-painters and clowns and magicians roaming the area, as well as several interactive stations involving bean-bag tosses and similar games.

After the reception I went down to the hot tub to soak for a while before the Scotch BOF. I spent most of the time there socializing with friends and drinking very little scotch. We had a nice surprise when the birthday cake was rolled out: not only was Thursday Toni Veglia's birthday, but Friday was Pat Wilson's birthday, Saturday was Judy DesHarnais' birthday, and Adam Moskowitz' birthday is July 8th, so we had fun teasing them and singing happy birthday at them.


Friday, June 23

The last day of the conference itself dawned gray and cloudy, but it cleared up soon.

Session 1: Invited Talks: An Introduction to Quantum Computing and Quantum Communication

by Rob Pike

Quantum computing shows that information is a physical concept, and therefore removing the approximation that information elements are independent can lead to breakthroughs. What sort of breakthroughs? We're running out of particles; current CMOS insulation is about 6 atoms thick, and below that there is no insulation. Similarly, we use 10,000 photons to represent one bit of information, and even given color (wavelength) encoding we're running out of photons. At current growth rates we'll be out of both by 2010.

Feynman proved that a computer could not simulate a physical system perfectly, because of the exponential number of particles involved and their interactions. However, such simulation must be possible, since that's what Nature does.

After a brief introduction to quantum physics (there's nothing like quantum mathematics in the first morning session of the last day of the conference), we can define a new term, the qubit. If a bit has two states (0 and 1), then a qubit, a quantum parcel of information, can be a combination of the states represented as |0> and |1>. If N bits represent 2^N integers, N qubits represent any 2^N-dimensional vector. Asking about the status of one bit in a register does not change the values of the other bits (for example, a 2-bit value can be any of 00, 01, 10, and 11, and asking the status of one bit has no effect on the other bit). However, asking about two qubits is different: they can be in a state such as (|01> + |10>)/sqrt(2), in which their values are entangled, meaning you cannot measure one qubit without affecting the value of the other, because measurement affects the system ("collapses the state vector," in quantum-physics-speak).
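The entangled pair above can be written out numerically. A small numpy sketch (my own, not from the talk) builds the state (|01> + |10>)/sqrt(2) and shows that the only outcomes with nonzero probability are the two correlated ones:

    import numpy as np

    ket0 = np.array([1.0, 0.0])
    ket1 = np.array([0.0, 1.0])

    # two-qubit entangled state (|01> + |10>) / sqrt(2)
    state = (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2)

    # probabilities of measuring |00>, |01>, |10>, |11>
    probs = np.abs(state) ** 2
    print(dict(zip(["00", "01", "10", "11"], probs.round(3))))
    # {'00': 0.0, '01': 0.5, '10': 0.5, '11': 0.0}
    # measuring the first qubit as 0 forces the second to be 1, and vice versa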

Rob went on to prove that you cannot clone (create a copy of) an unknown quantum state without destroying the original. The implication is that, like matter and energy, information can neither be created nor destroyed. In a computation, n qubits into a quantum computing function would have to produce n qubits of output (a rotation in n dimensions). A quantum computer is therefore a vector, V, with n qubits, supplied to a function, F, which produces a vector, W, with n qubits. So how do we design F to give the desired answer most of the time (for quantum computers are probabilistic)? Some examples of quantum computing show that the factors of any number can be obtained in polynomial time (Shor) and a list of n elements can be searched in sqrt(n) time (Grover), both being much faster in quantum computing than in classical computing.

However, for these algorithms to work in a quantum computer requires some 3,500 well-behaved qubits, and the current state of the art is 4 qubits. We also need to implement some form of error correction: for factoring a 200-digit number we'd need 100,000 qubits instead of 3,500, since you need 7 qubits to error-correct 1 qubit.

So all this is neat stuff, but nothing will happen soon since qubits are currently too delicate. Communication is possible using quantum theory, though, with higher communication density. In conclusion, as computational elements get smaller, quantum mechanical effects become more important — and this may be good. Information is physical and conserved. And finally, who could have guessed the general computing available in 2000 based on the computers of the 1940s? Think about that when you consider how the 2000 quantum computer might come into being.

Slides from the talk will be on the web through http://www.usenix.org/; the hotel business center lost the hardcopies they were to provide.

Session 2: Invited Talks: Providing Future Web Services

by Andy Poggio

Andy basically expanded on Bill Joy's keynote talk. The Internet has effectively begun to mimic Main Street and is beginning to provide services that Main Street cannot, such as availability any time and anywhere. The six webs from the keynote (near, far, here, weird, e-business, and pervasive computing) are the relevant framework.

So how do we get there? Three aspects need to be worked on. First, the network has to be enhanced. IPv6 provides more address space, better configuration management, authentication, and authorization, but has been slow to be adopted. Andy predicts that wired devices will win over wireless devices, that both quality of service and overprovisioning will continue, that optical fiber will replace or supersede electrical (copper) wiring, and that the last mile to the home or the consumer will be fiber instead of ADSL or cable modems or satellites. Second, the computer chip architecture will probably remain based on silicon for the next ten or so years. Quantum effects (see the 9 a.m. "Quantum Computing" talk for more information) show up around 0.02 microns, so we need new approaches such as optical computing, organic computing, quantum computing, or computational fogs (virtual realities). Third, Andy believes that the system architecture will connect three components (CPU server, storage devices, and the network) with some form of fast pipe, probably InfiniBand (a high-bandwidth, low-error, low-latency interconnect).

Session 3: Hotel Billing

I didn't have any assigned sessions in this time block (which was just as well, since it's one of only two unassigned blocks in the twelve time blocks at the conference), so I worked with the front desk to have the room and tax charges for both rooms for the four Collective Technologies folks billed to my American Express card. The front desk people assured me that all the room nights and taxes would be transferred to my folio. (This comes back to haunt us in part on Sunday morning.)

Endnote: New Horizons for Music on the Internet

by Thomas Dolby Robertson

Tom Dolby is a musician (you'll probably remember him from "She Blinded Me with Science!") who's been working on integrating computers into music for at least 20 years. One historical tidbit: the drums in "...Science!" were actually generated by a discotheque's light control board.

Tom is one of the founders of Beatnik (http://www.beatnik.com/), a tool suite or platform to transfer descriptions of the music, not the music itself, over the Internet. For example, the description would define which voice and attributes to use, and the local client side would be able to translate that into music or effects. This effectively allows a web page to be scored for sound as well as for sight.

For example, several companies have theme music for their logos that you may have heard on TV or radio ads. These companies can now play that jingle when you visit their web site, requiring a download of merely tens of bytes rather than hundreds of kilobytes. Similarly, a web designer can now add sound effects to her site, such that scrolling over a button not only lights the button but plays a sound effect. Another use for the technology is to mix your own music with your favorite artists', turning tracks (such as drums, guitars, vocals, and so on) on and off as you see fit, allowing for personalized albums at a fraction of the disk space. (In the example provided during the talk, a 20K text file would replace a 5MB MP3 file.) Beyond the "way cool" and marketing angles, there's also an educational component to Beatnik. For example, you can set up musical regions on a page and allow the user to experiment with mixing different instruments to generate different types of sounds.
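The size argument is easy to see in toy form. The snippet below is my own illustration, not Beatnik's RMF format: a few note events described as data come to a few dozen bytes, while even one second of raw CD-quality audio is on the order of 176 KB.

    import json

    # a "score" as note events: (start beat, MIDI note number, duration in beats)
    jingle = [(0, 60, 1), (1, 64, 1), (2, 67, 2)]
    description = json.dumps(jingle)
    print(len(description), "bytes of description")         # a few dozen bytes

    # one second of uncompressed 16-bit stereo audio at 44.1 kHz, for comparison
    raw_audio_bytes = 44100 * 2 * 2
    print(raw_audio_bytes, "bytes of raw audio per second")  # 176,400 bytes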

The technical information: Beatnik combines the best of the MIDI format's efficiency and the WAV format's fidelity. Using "a proprietary key thingy" for encryption, Beatnik is interactive and cross-platform, providing an easy way to author music. And because the client is free, anyone can play the results. The audio engine is a 64-voice general MIDI synthesizer and mixer, with downloadable samples, audio file streaming, and a 64:2 channel digital mixer. It uses less than 0.5% of a CPU per voice, and there are 75 callable Java methods at run time. It supports all the common formats (midi, mp3, wav, aiff, au, and snd were what I got written down), as well as a proprietary rich music format (rmf), which is both compressed and encrypted with the copyright. RMF files can be created with the Beatnik Editor (version 2 is free while in beta but may be for-pay software in production). The editor allows for access to a sound bank, sequencer, envelope settings, filters, oscillations, reverbs, batch conversions (for example, entire libraries), converting loops and samples to MP3, and encryption of your sound. And there is an archive of licensable music so you can pay the royalties and get the license burned into your sample.

Web authoring is easy with the EZ Sonifier tool, which generates JavaScript; middling with tools like NetObjects Fusion, Adobe GoLive, and Macromedia Dreamweaver; and hard if you write it yourself, though there is a JavaScript authoring API available for the music object.

Beatnik is partnered with Skywalker Sound, the sound effects division of Lucasfilm Ltd. Additional information can be obtained from http://www.beatnik.com/.

Evening activities

After the conference ended, a group of about 20 of us headed over to the Gaslamp Quarter to Trattoria Portabello for dinner. I had mozzarella with roasted peppers, some eggplant lasagna rolls, and calamari for appetizers, and linguine with lobster, clams, mussels, salmon, swordfish, and halibut in a spicy tomato sauce for an entrée. Delicious. The pinot grigio we had with it was very tasty too.

After a brief visit to the hot tub, we headed up to the traditional close of conference party. I tended some bar, watched the Iron Chef episode from the previous week (on maitake mushrooms), watched kc claffy's "Favorite Net Things" video, tended some more bar, and headed up to the upper level of the suite where the hot tub inside was clothing denied. Hung out there another two hours, flirting heavily with everyone there (even the straight boys and the one woman who braved the tub). The party broke up around 2:30am when the person who was living in the suite wanted to go to bed.


Saturday, June 24

I spent Saturday morning sleeping in (and recovering from the party), then met up with Trey, Bob, and Randy to go to lunch. We also ran to the hospital (so Trey could get his CT films), the Sprint PCS store (so Bob could get his dead phone replaced), and then back to the Marriott to laze out by the pool until dinner at Molly's. We soaked in the hot tub for half an hour or so until they closed the pool deck.


Sunday, June 25

Today was the day to travel back home to Chicago. But before checking out of the hotel I decided to make sure everything that should have been billed to me was. It took about half an hour to straighten everything out.

All the room and tax charges should have been moved to my American Express card. Skunky's room and tax were (half of the room for Saturday, Sunday, and Monday), and all of my room was (the entire room for eight nights). The room and tax for Gustavo and Darryl's room were only partly moved (Wednesday through Friday nights), so Darryl had one full night and three half-nights (half of Saturday, Sunday, and Monday, plus all of Tuesday) on his bill. Darryl apparently paid it on his credit card and is expensing it separately.

Trey and I had agreed to meet up and share a shuttle to the airport; he was a little late since the elevators in the North tower of the Marriott were slow (he had to wait 15 minutes for an elevator). We got to the airport without incident, but that's the last thing that went well for a while.

After 45 minutes in line to check in, I found out that the flight number on my itinerary was wrong (it said 1280, which went to Dallas, not 1006, which went to Chicago). Further investigation showed that flight 1006 had been cancelled, so the airline rebooked me onto a flight to Chicago via Los Angeles. I got to LA okay, but the flight out was delayed half an hour because of a "spill" in the last row near the restroom. The delays and layovers added four and a half hours to my schedule (and time-and-a-half for the shuttle ride home from O'Hare).




Last update Feb01/20 by Josh Simon (<jss@clock.org>).