The historic Parker Electric Mfg. Co. building

221 Washington Street, Oakland, CA



Context

“Computers for the rest of us” has been the rallying cry for a generation: we’ve made visual interfaces that let most of the world address their simplest tasks with point-and-click objects on screens, tablets, and phones. A remarkable achievement. But just a start.

The visual language that lets us do this can refer to anything in the world with just a small handful of widgets (like icons, buttons, spreadsheet-like grids, scroll bars, and text). It is extremely generalized; it has to be, to let us describe anything someone might want to show. It’s limited and inexpressive: imagine writing fiction if you had only a 15-word vocabulary of words as plain as “object,” “action,” “attribute,” “list,” “move,” and “description.” And it’s stagnant: the ideas were invented by Sutherland (in the mid-1960s!), then extended by Xerox PARC in the late ’70s and Apple/Windows in the mid-’80s. The last thirty years have applied little more than visual make-overs; perhaps the biggest advancement is the recent popularity of “Flat Design” fashion—rules that make the language more abstract, not more expressive.


The Project

The Parker Electric Mfg. Co. building was purchased in 2013 by Digital Image Design Incorporated as a forum for the exploration of a new kind of interface: one that focuses on specific individual expertise, not humanity’s lowest common denominator. One that accepts the complexity of the real world, without trying to simplify away the differences. One that exults in the perceptual variety we evolved to understand, without trying to strip out “content” irregularities to fit fashionable design rules.

Instead, we can play to the highest achievements in our culture. Humans have organized into myriad separate clans focusing on narrow issues; this is a primary mechanism of culture. We find people interested in similar things, then think and innovate together. We develop mental models of what we care about: how our topics and subjects interrelate and behave. We automatically devise metaphors that let us grasp these things better, and develop precisely expressive jargons to communicate with our peers about these models.

Interfaces don’t have to squeeze our ideas through a homogenizing screen. They can directly illustrate our shared mental models if we’re willing to make the tailoring effort. An interface can embrace existing culture as we are embracing Oakland's Parker Electric building; it can communicate better by crafting with things that already have meaning.

When an interface does this we’re staring directly at what we care about—not abstract simplified marks on a page that need to be decoded to be attached to the rich ideas in our heads. Humans have the ability to attach rich percepts to rich ideas—and we can deal with hundreds of thousands of meaningful categories, not just a handful of blessed widgets. Look around yourself: see how many things you can name, point at, act on. Now imagine trying to use a flat interface that lets you act on all those objects on your phone, with traditional widgets. Those widgets work for trivially simple tasks like Web brochures and shopping carts, but are hopelessly inadequate for tasks of even moderate complexity. With the right representation, though, the human mind is equipped to deal with this complexity effortlessly. Look around again—you’re doing it!

To tap into that basic human ability we need to understand how experts form mental models, how we enrich them to embody the complexity of the world, how we attach words or pictures to them to share them with our peers. This is the domain of the humanities, not fashion, advertising, or even computer science. We need to revisit the first principles of meaning, communication, and problem-solving. Then we need to rebuild a new kind of interface language: one where the structure is regular enough to be easily learned and the vocabulary is as unlimited as human thought. We have the technology to do this, and are starting to understand it will help us grasp and act on incredibly complex problems humanity faces today—helping us think may help us survive.


Next Steps

At Parker, Brad Paley will be inviting leading researchers and practitioners from dozens of fundamental fields: the fields we need to draw on for our first-principles understanding. Then together we’ll synthesize these basic findings into a Cognitive Engineering methodology that extends the way we think.

Paley has been addressing these issues for more than two decades and has had considerable success with even the first few steps in this direction. He’ll frame each basic science presentation with what we should extract to serve our humble trade. Then he’ll mediate discussions between the scientists and new practitioners. Together we’ll articulate better ways of illustrating mental models to support expert thought.


Participants

Paley has had the honor of involving many world thought leaders in this exploration to date. As the Parker Electric Mfg. Co. building and community come together, similarly accomplished thinkers will be invited to Oakland to help in this venture.


George A. Miller
Professor at Princeton, Dr. Miller coined the term “Psycholinguistics” and founded the field; wrote the foundational Cognitive Science paper The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information, founded WordNet, and spoke as a guest lecturer in Paley’s Columbia Computer Science graduate class.
Mark Weiser
A Chief Scientist at Xerox PARC, Dr. Weiser coined the term “Ubiquitous Computing” in 1988 when his idea of hundreds of computers per person per room was almost inconceivable—we’re at dozens already. He spoke as a panelist in Paley’s 1998 SIGGRAPH panel on small, special-purpose hardware/software computer interfaces: The Sourcerer’s Apprentice.
Robert Bringhurst
Language scholar, poet, distinguished typographer, and author of the field’s bible The Elements of Typographic Style, Mr. Bringhurst spoke at Paley’s Columbia class and in the Information Esthetics Lecture Series One.
Edward Bell
Art Director at Scientific American, Mr. Bell was responsible for how we visually grasped complex scientific issues in that journal for decades; he spoke as a guest lecturer in Paley’s Columbia Computer Science graduate class.
Ronald Rensink
One of the world’s experts on “Change Blindness,” a feature of the human visual system that allows major changes to happen unnoticed right in front of one’s eyes, Dr. Rensink studies human perception, discovering and sharing principles useful in design; he participated in the Information Esthetics Lecture Series One.

Paley has also had the pleasure of hosting computer scientists such as Steve Feiner of Columbia University (pioneer of Virtual Reality and Augmented Reality), Ted Selker of the MIT Media Lab (inventor of the IBM TrackPoint), Bill Buxton of Silicon Graphics and Microsoft Research, and Tamara Munzner, author and University of British Columbia professor of Information Visualization, among many others.

His own work has appeared in the New York Times, Nature, MoMA, the Whitney, the New York Stock Exchange, SEED, GEO, Slashdot, Ars Electronica, Google NY, and ID Magazine; he won Grand Prize [non-interactive] at the 2006 Japan Media Arts Festival, is a NYSCA grantee and NYFA fellow; he’s given invited or keynote talks at all major Information Visualization symposia, as well as CogSci, the Institutional Investors Financial Technology Forum, and the American Statistical Association’s Joint Statistical Meetings; he was appointed as an Adjunct Associate Professor in the Department of Computer Science at Columbia University and asked to create a graduate-level class detailing his own design methodology.