Doug Engelbart

Dr. Douglas C. Engelbart (born January 30, 1925) is an American inventor and early computer pioneer. He is best known for inventing the computer mouse; as a pioneer of human-computer interaction whose team developed hypertext, networked computers, and precursors to GUIs; and as a committed and vocal proponent of the development and use of computers and networks to help cope with the world’s increasingly urgent and complex problems.

First tape: Interview between Doug Engelbart and Belinda Barnet, 10 November 1999

B: In your earlier work you talk about a new stage of evolution between humans & machines characterised by external symbol manipulation. The symbols with which we represent our world can be arranged, manipulated on-screen, with minimum information supplied by the human. Intelligence augmentation. What was the motivation behind this?

E: What motivated me there was the collective capability–boosting people’s collective minds. You know, I’ve been for years calling it the collective IQ.

About fifty years ago, in December 1950 and early 1951, I said to myself, ‘Let me commit the rest of my career to seeing how I can maximise my contribution to mankind.’

I thought, ‘it’s a complex world, getting more complex and the problems are getting more urgent, and there are problems that need to be dealt with collectively, so let’s just see what we can do to improve mankind’s collective ability to deal with complexity and urgency.’

That’s been the driving force. Those collective capabilities.

So [in creating NLS, the first hypermedia system], I put together what I knew about computers and what I knew about radar circuitry etc. to picture working interactively, and it just grew from there.

B: What was it that made you think that computers might be used to augment the human mind? It seems that in the ’60s computers were seen as calculating machines, not tools to think with. Did people think it was a strange idea?

E: Oh it was! It was ludicrous. It embarrassed people.

I went to Berkeley to learn about computers, because they had a research contract to build one, and the nearest working computer was either at Bell Labs or the University of Pennsylvania, or MIT (I’ve never been sure which one was closer to Berkeley). All I knew was that there were some computers they were building there with vacuum tubes etc., and Berkeley had a research project to build a computer, and in all the years I was there it never got finished, and it had already been going for three years.

In those days, the thought of doing a Ph.D. thesis on some of this, even talking about how you could manipulate symbolic logic with the computer, was just too far out to be acceptable as a thesis. It was wacky even in the seventies, when we had it working–real hypermedia, real groupware working.

We had customers using our system to do their knowledge work, collectively, all over the country, over the ARPANET, and it grew in the eighties to be something like 20 big mainframes supporting that. Even then we could not get anyone to register it; the people who were involved got enthusiastic, but we couldn’t get any of the computer companies into it. So when you mentioned that it seemed outlandish, yes.

B: Do you think people understand your “framework” more now than they used to?

E: Not particularly, no.

The paradigms seem to be ‘oh, we’re going to automate things we do now, automate the way we do business’ etc., and the idea of really augmenting, people are beginning to register it, but in a limited conceptual frame.

We need to think about how to boost our collective IQ, and how important it would be to society, because all of the technologies are just going to make our world accelerate faster and faster and get more and more complex, and we’re not equipped to cope with that complexity. So that’s the big challenge. Somewhere near the end of that 1962 report I said that if only the world could take this on as a possible way to boost our capability, it would become the highest priority, a grand challenge we could take on.

B: Vannevar Bush [1945] also seems to open every paper with a comment about the urgency of the situation, the complexity of the world and the need for technology to navigate the mess. You were influenced by this?

E: Well, it was a long trail before I went back to look at [Bush's work], about ’61 or something.

What really connected for me was thinking about what an augmentation system really is, that humans learn how to live within and with social structures and conventions and facilities and tools, so all that is one giant augmentation system.

Then I did a study in the late fifties. I was really lucky to land a research contract that was near to my heart; it was on dimensional scaling of electronic components. What happens is you start learning how to make things smaller and smaller. Suddenly you realise the rate at which things happen, like with a mousetrap: if you made one a thousand times smaller it would snap shut in a millionth of a second. I mean as a sort of phenomenological thing, things happen much quicker at smaller scales. Like with a dust mote, gravity has almost zero effect on it. You could go through a whole bunch of basics like that. What I came out of it with was the conviction that digital components would get smaller and faster and cheaper, and we would have all the computing power we could want, so it’s best to work out what to do with it.

So that was in the late ’50s, and then I got the money to sit there and work on this conceptual framework. The reason for that was that every time I tried to tell somebody what I thought you could do with computers, I’d run into their framework, their conceptual upbringing inside a professional discipline.

I got very stern lectures from three belligerent, angry guys after I’d given a talk to a group of scholars at Stanford; they got me later outside at a table. They said, all you’re talking about is information retrieval. I said no. They said yes it is, we’re professionals and we know, so we’re telling you you don’t know enough and you stay out of it, ’cos goddammit, you’re bollocksing it all up, you’re in engineering, not information retrieval. I tried, and finally I knew I just had to retreat, because there was no way to get them to rethink this.

And then another time I met somebody (at this time cognitive psychology was the new thing), and this guy was telling me “well look cognitive science is all you’re talking about, we’ve taken years in that field to do something”, and he was more gentle about it but just as clear: as an engineer I had no business talking about it.

Then the people who were doing AI just laughed what we were doing off the map. Anyway, I read a paper by two executives at RAND Corp. about the problems they had getting a multidisciplinary team to focus on a multidisciplinary problem. Each one of them will look at that problem from his or her own framework, the one they absorbed as they grew into their professions. So what you have to do is explicitly go through an early part: search for a common conceptual framework.

And so I said “ah”. Then it took me a year & a half to work on “A Conceptual Framework for Augmenting the Human Intellect”.

B: And what was difficult for the computer community to accept was that you were talking about the human side of things, the human system.

E: Right. And so that’s still the case out there, the marketplace is being driven by the technology.

I’m still trying to get people. The intent from those days, the failures in communicating stuff, more and more I’m realising that there’s a real strategy that’s needed. Because the scale of the challenge is huge, the technology is impacting throughout the world, in all kinds of ways, and if we’re going to get some advance in how we can do our collective thinking better, that’s a big scale thing, you got to get organisations and institutions involved. They have to be very proactive and involved in trying to improve their collective abilities.

If they don’t, on the one hand, the turmoil that all this is going to create is likely to cause collapse or huge fractures in our society. We desperately need to be getting better at our collective ability. And on the other hand, if we don’t get proactive in user organisations, the human system side of it is going to get pushed by just the technology.

So all those things of augmentation, the scaling study, have just evolved straight on. The scaling studies taught me about what surprises there are in store for people who haven’t become familiar with a change of scale in any way whatsoever. So when I looked at that human side I said, “By God, the scale of change that’s going to be imposed by technology is going to be so revolutionary, we’d better start looking for the candidates for change that would pay off”.

Very quickly I saw that language needs to be relooked at when you have computers. So as far as I can remember that’s where the hypermedia evolved, and when I was writing about that and working on it I remembered back about Bush, and then gave credit to him for coming up with similar things.

B: There’s something I’ve noticed from talking to [Ted] Nelson a little, and reading Bush… that this technology is modelled on how the mind works. Unlike normal technological evolution, hypermedia was originally modelled on the human system, not vice versa.

E: Wow, yes. Right.

But then the real potential is that the way it worked in the past has been very much affected by the environment that we’re in. And so you’ve got an environment now that’s so different you can really rethink the way you harness the basic machinery, you know basic cognitive, sensory motor machinery. Our whole concept of how we symbolise and manipulate and portray our concepts. So that’s when I said “oh boy, we’re looking at something where you have all kinds of optional views of your document”, so we started building that in the sixties, and that’s still not into the web world yet.

B: Actually, that’s another question I want to ask. I’m looking at the way pioneering hypermedia systems might still inform the web. Do you think the web embodies any of the ideas you were having in the sixties? How would you improve it?

E: When my daughters and I established this Bootstrap Institute ten years ago, I was out there in the commercial industrial world trying to get them to move, and I said, “let’s set something up independently that has the scale and framework that can cope with this problem”. This is still moving ahead; we’re busy right now, getting support, putting on a webcast of ten weeks, two and a half hours each presentation, of the whole picture, and trying to get people worldwide to start tuning in and trying to participate. We [want to] develop a strategy so the whole world can start doing this, commensurate with the challenge and the need etc.

B: The web is organised around hyperdocuments with facilities for linking, multiple object types etc.–how does this compare to your open hyperdocument system [NLS]?

E: Well it’s like there are a lot more features and functions etc. that are candidates to be integrated into the hyperdocument environment that aren’t presently there.

There are a whole bunch of them that we had on our Augment system, using them every day, that are candidates for integrating into it. It has to be an open system, without question. It can’t be somebody’s proprietary package. I imagine you’re familiar with XML?

B: Yes.

E: So something arising from that, evolving out, in order to establish the kind of common properties that need to be in the knowledge package, the knowledge containers. There are quite a few more things that need to be there and a lot more evolution has to get specifically pursued, so in order to do that, there’s properties that you build into the documents as well as the functions that you employ to operate on them. So for instance, in our environment, we would never have thought of having a separate browser and editor. Just everyone would have laughed, because whenever you’re working on trying to edit and develop concepts you want to be moving around very flexibly. So the very first thing is get those integrated.

Then [in NLS] we had it that every object in the document was intrinsically addressable, right from the word go. It didn’t matter what date a document’s development was, you could give somebody a link right into anything, so you could actually have things that point right to a character or a word or something. All that addressability in the links could also be used to pick the objects you’re going to operate on when you’re editing. So that just flowed. With the multiple windows we had from 1970, you could start editing or copying between files that weren’t even on your windows.
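The idea Engelbart describes here can be sketched in a few lines: every statement, word, and character in a document gets a stable structural address, and the same address works both as a link target and as an edit operand. This is a minimal illustrative model only; the dot-separated address scheme and class names below are assumptions for the sketch, not NLS's actual addressing syntax.

```python
# Illustrative sketch of intrinsic per-object addressability (NOT NLS's real
# scheme): addresses are dot-separated, 1-based paths "statement.word.char",
# with trailing parts optional, so one address space covers every granularity.

class Document:
    def __init__(self, statements):
        self.statements = statements  # list of plain-text statements

    def resolve(self, address):
        """Resolve 'statement.word.char' to a statement, word, or character."""
        parts = [int(p) for p in address.split(".")]
        stmt = self.statements[parts[0] - 1]
        if len(parts) == 1:
            return stmt                          # whole statement
        word = stmt.split()[parts[1] - 1]
        if len(parts) == 2:
            return word                          # single word
        return word[parts[2] - 1]                # single character

doc = Document(["Augmenting human intellect", "Boost collective IQ"])
doc.resolve("2")      # -> "Boost collective IQ"
doc.resolve("1.2")    # -> "human"
doc.resolve("1.2.1")  # -> "h"
```

Because links and edit commands share this one address space, "point a link at a word" and "select that word for editing" are the same operation, which is the flow Engelbart says "just flowed".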

Also we believed in multiple classes of user interface. You need to think about how big a set of functional capabilities you want to provide for a given user, and then what kind of interface do you want the user to see? Well, since the Macintosh everyone has been so conditioned to that kind of WIMP environment, and I rejected that way back in the late ’60s. Menus and things take so long to execute, and besides, our vocabulary grew and grew.

And the command recognition [in the Augment system]: as soon as you type a few characters it recognises them. It only takes one or two characters for each term in there and it knows what’s happening.

B: And with command-line, you have more control.

E: Right, so what you have is a vocabulary control, because you can use real verbs and real nouns, and nouns will actually tell you what class of object you want to do something with, you want to copy a character, or copy a word, copy a whole paragraph.
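The verb-plus-noun command recognition Engelbart describes can be sketched as a prefix match over a controlled vocabulary: a term is accepted as soon as the typed prefix matches exactly one entry. The vocabularies below are invented for illustration, not Augment's actual command set.

```python
# Sketch of Augment-style command recognition (vocabulary is illustrative):
# each typed prefix is matched against the current vocabulary; a term is
# recognised the moment the prefix is unambiguous, often after 1-2 characters.

VERBS = ["Copy", "Delete", "Insert", "Move", "Replace", "Transpose"]
NOUNS = ["Character", "Word", "Window", "Paragraph", "Statement", "Link"]

def recognise(prefix, vocabulary):
    """Return the unique term matching the typed prefix, else None."""
    matches = [t for t in vocabulary if t.lower().startswith(prefix.lower())]
    return matches[0] if len(matches) == 1 else None

recognise("c", VERBS)    # -> "Copy"  (only one verb starts with 'c')
recognise("w", NOUNS)    # -> None    (ambiguous: Word or Window)
recognise("wo", NOUNS)   # -> "Word"  (one more character disambiguates)
```

Typing "c w" would then resolve to the command "Copy Word": the noun names the class of object to operate on, exactly as described in the exchange above.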

B: So how does all this inform the web?

E: Our proposal is, let people experiment with different kinds of interface that have a common vocabulary underneath.

So you could flip into your own interface when you need it. Then let people start experimenting with much more flexible ways of doing it, experiment with functions and nouns and verbs that the more elementary or what we call pedestrian users aren’t ready for. But they all work over the same knowledge domains. So that kind of environment is what you have to do to get evolution happening. And you have to do it with open standards for documents. So, I say no proprietary ownership of the class of functions you’re going to employ.

It’s like saying, if you cook, in your household, you got a kitchen, you acquire all of it from one vendor. You then have to take what’s there and you can’t go and do this and that because it doesn’t fit. So if you buy a refrigerator, then they come out with a whole new environment and it doesn’t work anymore so you have to replace your whole kitchen. My god. Ludicrous. Evolution would be nailed down for financial advantage. No way, that’s not being done for improving human organisations.

B: No proprietary ownership? Surely an insurmountable challenge.

E: There are huge challenges, and the scale of it is something that you have to start working on. We coined the term ‘capability infrastructure’ some time ago for what an augmentation system provides. And part of that capability is the capability to improve, so there’s an improvement infrastructure.

A lot of times it’s implicitly at work and people don’t recognise it. When you go to school, you’re engaging yourself in an explicit improvement process. People sometimes consciously try to improve their health etc., but then look at organisations. Now they’re being pushed more and more to improve their capability to compete. So we think let’s look at the improvement infrastructure, that capability to improve the organisation, and then strategically let’s see if we can deploy as much as possible new capabilities for learning. Let’s work inside the improvement infrastructure as soon as we can, so that means any of the gains you make in boosting collective capability are going to improve the ability to improve. That’s why we use the term bootstrapping: the better you get at it, the better you get at it.

We’re trying to tell the world that an improvement infrastructure is something every organism has, every organisation has, every society has. So people need to start thinking about this on a national scale. Every nation needs to think about an improvement infrastructure, trying to augment it as much as possible, and then, since countries have to do things together, we need to have a global sense of that. So that’s the challenge.

B: So, as part of this, you stand in opposition to the “dumbing down” of hypermedia.

E: Right. Someone can get on a tricycle and move around, or they can ride a bicycle and have more options. Eventually everybody who has any education at all is going to aim to be the highest augmentation. So you’re never going to find out if everyone assumed*

The words which killed me, which exiled me were:

Easy to learn, natural to use.

So everybody is supposed to ride tricycles because they’re easy to learn, natural to use. But the world shouldn’t live on tricycles, past the age of six. That’s the best analogy I can give to the world* do you want to be locked into tricycles because they’re easy to learn?

