The Hype in Hypertext: A Critique

Jef Raskin
Information Appliance, 1014 Hamilton Court, Menlo Park, CA 94025


Hypertext has received a lot of mostly uncritical attention. The author sees it as one part inspiration and nine parts hyperbole. A number of user interface and technical problems are discussed.


The literature on Hypertext is generally effusive and non-critical. Even Conklin’s survey article in Computer (IEEE Computer, September 1987, pg 17 ff) ends up admitting that the author hopes that “the reader come away from this article excited, eager to try using hypertext for himself, and aware that he is at the beginning of something big, something like the invention of the wheel, but something that still has enough rough edges that no one is really sure that it will fulfill its promise.” This article looks at what some have seen as rough edges but which may be cracks that extend deep into the heart of hypertext.
Conklin’s advocacy is tame stuff compared with Hypertext’s prime mover, Ted Nelson, who writes with the messianic verve characteristic of visionaries. Many followers and supporters of the Hypertext concept take much the same tone; in fact, there have not been many who have gainsaid the concept, for on the surface it seems a good one that is well within our technical reach. On the other hand, it has not been implemented except for a few more or less experimental projects: the grand vision languishes unfinished, though often started. If the details are kept fuzzy enough, Hypertext seems like a wonderful, universally applicable, powerful, natural, human-oriented model for organizing and accessing knowledge. Having felt that draw, and also having implemented some real-world projects that were considered visionary when I started but became well-accepted (and commercially successful) when I was finished, I took a closer look at Hypertext and found some deep and fundamental difficulties that have not been much discussed.
A good place to begin is with some other ideas that, like Hypertext, sound very good but are simpler, and have failed to deliver on their promise.
There is a certain frustration in playing adventure games. The games ostensibly avoid requiring that you learn special computer commands by allowing you to frame your commands in English. This sounds very inviting, but whenever your response varies from the stylized vocabulary and grammar that the game recognizes you run into a wall of incomprehension. The game spins off a subgame of guessing which words the developer thought useful or significant. Synonyms in English may or may not be synonyms to the game; you often find that though what you said is correct, it is not accepted. All in all these games tend to be a frustrating experience unless you come to accept that figuring out just how to emasculate your vocabulary and grammar is part of the game. Not everybody thinks this is fun, but if you enjoy it, you are a prime example of a puzzle-lover or “hacker” in the old, non-pejorative sense of a person who likes to play with a system—for hours and days on end if necessary—in order to learn how it works.
What is really happening in these games is that the players are learning a set of computer commands—by trial and error. In an adventure-style sword battle your knowledge of fencing is of no use, but your knowledge of the system developers’ frame of mind is. The notes that most players end up making for their own use form the manual that the game’s inventors (deliberately) didn’t provide. In other words, there is not much difference between an adventure game and a poorly documented computer program. Both present the same kind of challenge, and appeal to much the same set of users.
The problem with adventure games was that something that sounds like the right, natural, inevitable and easiest thing to do failed (in some sense) because it delivered only the aroma of what was promised. When you are hungry, this can be worse than nothing at all. The full meal may, in fact, be undeliverable. Natural language may be an inherently unsuitable medium for programming (in the present meaning of the word) or even controlling an imaginary universe.
This insight can be extended to Hypertext. Hypertext, in a nutshell, is text (in the sense of what one finds in books) where there are links between different texts and portions of texts for some very large universe of knowledge. You might be reading, on the display of your information appliance, about butterflies. Say that the text mentions a gathering of monarch butterflies that occurs in Monterey, California. So you point to (by some means) the word “monarch,” and you press the “delve deeper” button (or some such action) and a picture of the butterfly comes onto your screen. You touch one of the legs of the butterfly and up comes, say, a detailed view of the leg or perhaps an article on butterfly legs. You then press the “back to where I was” button, and continue reading. Upon pointing to the word “California,” the delve deeper button might give you a map of California with Monterey called out, and so forth.
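The navigation model just described, bodies of text, links anchored to particular words, and a “back to where I was” button, can be sketched in a few lines of code. This is a hypothetical illustration only; all names are invented, and no actual Hypertext implementation is implied:

```python
# A minimal, hypothetical sketch of the navigation described above:
# nodes of text, links anchored to words, and a "back" stack.

class Node:
    def __init__(self, title, text, links=None):
        self.title = title
        self.text = text
        self.links = links or {}    # anchor word -> title of target node

class Browser:
    def __init__(self, nodes, start):
        self.nodes = nodes          # title -> Node
        self.history = []           # stack for "back to where I was"
        self.current = nodes[start]

    def delve_deeper(self, anchor):
        """Follow the link behind an anchor word, if one exists."""
        target = self.current.links.get(anchor)
        if target is None:
            return False            # the system, being finite, has nothing here
        self.history.append(self.current)
        self.current = self.nodes[target]
        return True

    def back(self):
        if self.history:
            self.current = self.history.pop()

nodes = {
    "butterflies": Node(
        "butterflies",
        "A gathering of monarch butterflies occurs in Monterey, California.",
        {"monarch": "monarch butterfly", "California": "map of California"}),
    "monarch butterfly": Node("monarch butterfly", "An article on the monarch."),
    "map of California": Node("map of California", "(a map with Monterey called out)"),
}
b = Browser(nodes, "butterflies")
b.delve_deeper("monarch")   # a picture or article on the monarch appears
b.back()                    # back to where I was
```

Note that `delve_deeper` must already answer a question the Hypertext literature leaves open: what happens when the anchor has no link behind it.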
As technology permits, the concept of “text” in Hypertext expands to include color images, moving images (movies), sounds, real-time links to the author or other experts on the subject you are “reading” about, reference librarians who can point to further sources of information, and literally anything else that might conceivably be pumped through wires or the aether to your system.
Lastly, the Hypertext vision includes a growing body of on-line information, becoming richer as it is used because users provide an ever-increasing number of sources of data and links between items. It assumes wide acceptance by a large number of ordinary, non-technophilic people.
It sounds wonderful.
There are, of course, some technological questions that can be asked about Hypertext: for example, how will we get the very high bandwidths between sources of information and the individual’s machine required to make pictures and text available quickly enough to avoid frustration? Another technological problem lies in the massive storage capacity, network, switching, and software required. There are social, legal, and economic problems as well. Examples include questions such as: will the financially privileged have easier access, and thus widen the gap between haves and have-nots? Nelson has addressed (though without sufficient depth) the questions of who owns the material, who owns the all-important linkages, how copyrights are preserved, who pays for the central system, how to ensure compatibility between systems, and how an author gets paid. But this essay is concerned with basic conceptual problems, assuming that the technological problems will be solved in the natural course of events, and leaving the other questions for another time. I will also ignore the very real but probably less serious “chicken and egg” problem of how Hypertext gets started, since at first it is necessarily rather weak and thin: where will we get enough linked text to make it useful at first? While this last problem will have to be faced, other services, such as the telephone, faced the same dilemma and overcame it; whether Hypertext gets off the ground depends in part on the quality of the initial system.
The problem with “natural language” games, which also applies to many “natural language” interfaces for more serious (this does not imply more worthy) applications, has a parallel in the proposed Hypertext systems. As an example, take the passage where the text mentioned a gathering of monarch butterflies. First of all, not every word will have the same kind or depth of information behind it. Say that a previous user had looked up the meaning of the word “monarch” and established a link to “king.” Then, when you point to “monarch” meaning a kind of butterfly, you might find yourself in the midst of a discussion of the divine rights of hereditary rulers. You go “up” and try “butterfly,” and you find a general description of the lepidoptera, which does not mention “monarch” since it gives only the Latin name, which you do not recognize. You can grope around for a while, trying this and that sub-heading in what you find, and maybe what you want is there, and maybe it isn’t. One key question is: How Do You Know If It’s There? This isn’t the advertised smooth, rapid-access “feel” of hypertext; this is a fishing expedition.
Let’s say that, by some means, you can point for specificity to the whole phrase “monarch butterfly” and you get, rather than some particular linked item, a list or menu of such items and other menus. Aside from having to learn how to operate the menus and make selections from them, it is well known that menus are a slow way of getting from place to place, especially if they are many levels deep. Avoiding, also, the question of how (or by whom) the menus are generated (Nelson suggests that people will spontaneously create them and charge for their use), let’s say that one menu item leads you to a picture of the butterfly. As before, you point to a leg. Now, it is not clear if you are pointing to, say, the tarsus, the whole leg, to legs in general, to butterfly legs, or to the whole insect. Since butterflies are beautifully symmetrical, are you pointing to ask about symmetry in general?
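The difficulty sketched in the two paragraphs above can be made concrete. In a deliberately crude model (all names hypothetical), every link ever attached to a word accumulates, and pointing at the word yields not an answer but a menu of everyone else’s intentions:

```python
# Hypothetical sketch of the ambiguity problem: links accumulate from
# different users with different intents, and pointing at a word yields
# a menu of candidates rather than the sense the reader meant.

from collections import defaultdict

link_table = defaultdict(list)   # word -> list of (target, who made the link)

def make_link(word, target, author):
    link_table[word].append((target, author))

def point_at(word):
    """Return the menu a user would face; the system cannot know which
    sense of the word is intended."""
    return link_table[word]

make_link("monarch", "king", "a previous user studying rulers")
make_link("monarch", "Danaus plexippus", "an entomologist")
make_link("monarch", "Monarch, a town", "a geographer")

menu = point_at("monarch")   # three candidates; the reader must guess
```

Nothing in this table distinguishes a “good” link from a “bad” one; every query degenerates into menu traversal, which is exactly the slowness complained of above.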
Not every item or detail in a picture will have a reference attached to it. How will you tell which do and which don’t? None of the references on Hypertext seems to address this problem carefully. I tried a medical “Hypertext”-style system a few months ago, based on video disks with immense storage capacity. You could point to a structure in a dissection and get a close-up view, or views from other angles. It seemed fine when being demonstrated by someone who knew which body parts had further data behind them, but when I tried to use it and pointed to structures that I was curious about, most of the time the system, being finite, could give me no information.
There is a vague assumption that as people use it, the system will monitor and gather up the links they generate, and thus grow in depth and value. Does a link happen whenever you move from one frame or screenful to another? It will not, according to Nelson. You will have to explicitly tell the system when you are making a link (just how you do this is not specified), and I think it is important to ask whether you are likely to bother making links at all. Remember: people are inherently lazy, and when you are hot on the trail of some information, how likely are you to stop and tell the system which links seem valuable to you? If as a result of this observation you make the system record all links, good and bad, will there be human editors who will prune them? (Where would we get these editors, how would we pay them, and is it in principle even possible to tell a good link from a bad one?) If the links are not pruned, won’t most searches turn into wild goose chases? One man’s links are another man’s sausages.
In other words, it is not clear that it can work as neatly as the hand-waving of its visionaries makes it seem. I have been told that there are no bad links, and in a sense this is correct: any connection, even if it is a misreading of, say, “sheer” for “shear,” is meaningful and potentially useful (say, to a person studying common reading mistakes, or to a poet). But the clutter a user faces will be enormous.
The central lacuna is the omission of any specification of a human interface. The Xanadu project, trying to implement a real-world version of Hypertext, is building just a central processing and storage facility. When I have asked the implementors what it will look like to the user—a central question if it is to be widely accepted, and it must be used by a large population to fulfill the plans of its inventors—they say that the “front end” is not their concern. I claim that the “front end,” namely the devices used and how the system will look to users, is as important as the central nexus. It may be more important, since if the front end puts people off, they will never get any further. Yet there is no user interface specification for what Hypertext will look like to the individual user. It is important that the user interface be reasonably uniform from implementation to implementation. This point has been well demonstrated by the Apple Macintosh computer. One of my goals when designing the Mac was to make it easier for a software designer to use a provided interface model than to create a new one. This was a positive use of people’s laziness. Thus, unlike any previous system, you can move from application to application with relative ease, and buying a new program is not as traumatic an experience as it used to be. With earlier computers, each new application program gave you a nearly totally new experience.
A Hypertext system will have good “feel” only if it is fast enough. This is another area solved primarily by looking the other way. In ten years the word “work” may easily generate 200 pages of menus of referents (since the word “work,” like so many other words, has everything from negative implications (dirty work) to positive ones (work ethic), to technical ones (force times distance), to political ones (ownership of one’s labor)). How long will it take the central system to find the menus, how long will it take to transmit them, how long will it take you to go through them, how long will it take you to get to the next submenu? Most people don’t use dictionaries and encyclopedias because it takes from half a minute to a few minutes to look something up. If a computer system is as slow or slower, it will be avoided as thoroughly as people avoid other reference systems.
The lack of a carefully thought out “front end” is thus a major flaw in the design of Hypertext. A person should be able to use it from any available port, whether at home, at school, in the office, at the library, or when walking in the park using Alan Kay’s mythical Dynabook. (There’s another fine-sounding idea that has never been specified to the point where you can evaluate if it’s workable. It, too, lacks an interface specification and apparently does what the user intends by a combination of wishful thinking and magic.)
In short, Hypertext only sounds like a good idea. It tends to evaporate when looked at closely. There are three basic human interface problems: (i) the linkages are often either cumbersome, wrong for your needs, or trivial, (ii) the problem of what aspect of a word, phrase, or picture you intend has not been addressed, (iii) a uniform and excellent human interface specification is both necessary and absent.
I am sure enough of human ability to believe that the problems being addressed by Hypertext-like systems are not impossible to solve. What I am questioning is whether this particular proposed part of a solution, namely linked text (with a very broad definition of “text”) is the right one, or even a feasible one. As with playing “adventure” games, you will be trying to second guess the link builder’s frame of mind rather than staying within the subject you are working with. Knowledge of the system will often be more valuable than understanding your field. As the students of a poor teacher learn to give not the right answers but those the teacher wants, Hypertext users will spend much of their time pleasing the system instead of themselves.
It is perhaps worthwhile asking why Nelson and the Xanadu people have looked at the aspects they have and ignored the ones mentioned here. I hope I may be excused for making an ad hominem observation: these people consider themselves hackers (in the best sense). They feel at home at the annual Hackers’ Conference. They love technology, and like Joyce’s Leopold Bloom, thrive on innards. They can discuss the minutiae of a particular system to all hours and, I fear, sometimes confuse their satisfaction in technical ingenuity and accomplishment with usability. The power of a linked data structure is formidable and amazing. My own thesis in computer science exploited such a structure. From speaking with Xanadu’s implementors, and some people who hope to tap into its data, I have seen how they have been seduced. The potential is enormous, the attention given to the implementation of searches into vast quantities of human knowledge brilliant, and a host of fascinating problems in computer science have been attacked, and some seem solved. One gets an honest feeling of accomplishment. But they have not addressed the question: when a person sits down at her system, what will have to be done first? What will she see? What is the next step the user takes, and the next…? How will it look? How many keystrokes or mouse pushes or finger jabs will it take to find what is wanted? How long will it take? No, they speak to me of megabytes and optical disks instead of Susan and Jim and Marjorie and Bill. They speak of the great benefits that will accrue to individuals—that’s one end—and the vast linked data base—that’s the other end. But when asked how the ends are made to meet, just how the natural impedance between them is overcome, the designers become vague and waffle. How humans interact with systems is not their field and is not really close to their hearts. It is not the computer science they studied. It is a soft subject, and making good interfaces (which they fervently believe should be done) will not win them the sincerest plaudits of their peers.
None of this denies their interest in doing good things for the world and for individual people. The “how” just hasn’t been thought through. This may stem from Nelson’s belief that “The starting point in designing a computer system must be the creation of the conceptual and psychological environment—the seeming of the system [a nice phrase, but not very informative]—what I and my associates call the ‘virtuality.’ ”
And what everybody else calls the “concept.” I agree that one must begin by deciding, as Nelson says, “how it ought to be.” Unfortunately, Nelson’s “how it ought to be” does not include specifics of how it works in detail. Whether the hypertext concept is useful or not is most decidedly a function of the user interface, just as computers are more or less useful depending on how they facilitate access to their power. Consider the relative difficulty of using a double-edged razor blade without a handle. It is just as sharp, but much harder to use. Nelson’s avoidance of what he calls “front end” issues is a major failing in the quality of his vision, his priorities, and his understanding of what makes things work well for people.
Since creating the Macintosh project at Apple I’ve led the design of a number of products including the Canon Cat and SwyftWare. In the process, my associates and I obtained an excellent interface between people and text by using exactly what Nelson decried: a straightforward linear structure. It was acted upon not by links and list processing but by an extremely fast search (which can be considered a real-time volatile link). One major difference is that we were driven to this internal structure by starting with a particular and finely tuned human interface, and only then searching for technology to implement it. I do not agree with Nelson’s statement that people naturally think in hierarchical structures of many layers. If that assumption falls, so does most of the Hypertext concept. Studies show that people prefer flat structures over deep ones, and if we do not accede to the way humans work then we are not likely to design workable products.
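The alternative mentioned above, a flat text acted on by a very fast search rather than by stored links, can be sketched briefly. The function name and the example text are hypothetical, and this is only an illustration of the idea of a “real-time volatile link,” not the Cat’s actual implementation:

```python
# Hypothetical sketch of the flat-text alternative: no stored links,
# just one linear text and a fast search that acts as a volatile,
# real-time link, created at the moment of use and discarded afterward.

def leap(text, pattern, start=0):
    """Jump from the current position to the next occurrence of `pattern`.
    Nothing is stored: the "link" exists only for the duration of the
    search, and wraps around as an incremental search might."""
    i = text.find(pattern, start)
    if i == -1:
        i = text.find(pattern)
    return i

document = ("The monarch butterfly gathers in Monterey. "
            "A monarch, in the other sense, is a hereditary ruler. "
            "Butterfly legs end in a tarsus.")

pos = leap(document, "monarch")           # the first sense
pos = leap(document, "monarch", pos + 1)  # the next; disambiguated by reading, not by links
```

The contrast with the link table is the point: here there is nothing to prune, no menus to traverse, and no previous user’s intent interposed between the reader and the text.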
The Hypertext vision is worth a try. The demonstrations so far have been tantalizing. It is my guess that the reality will remain tantalizing, and will never fulfill the dreams of Hypertext’s advocates—nor the dreams that they instilled in me, for that matter.
While Hypertext itself may or may not be a good idea, the vision of giving everybody access to vast reaches of human knowledge is a praiseworthy one. There are probably better and more realizable ways. Hypertext is certainly incomplete at present, both in terms of implementation (as its designers would agree) and in terms of concept (they might not agree). My intent has been to present a countervailing opinion to the vast majority of what has been written and to balance the present uncritical discussion (that is, adulation) of Hypertext.
Thanks to David Alzofon, Sam Bernstein, and Scott Kim for their constructive comments.


Conklin, Jeff, “Hypertext: An Introduction and Survey,” IEEE Computer, September 1987, pp. 17 ff.
Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.
© 1989 ACM 089791-340-X/89/0011/0330 $1.50
[Source: Raskin, J. 1987. The Hype in Hypertext: A Critique. Proceedings of the ACM Conference on Hypertext (HYPERTEXT '87), 325-330, at Chapel Hill, NC, USA. ACM, New York, NY, USA. ]