^zhurnal - v.0.02

This is Volume 0.02 of the ^zhurnal --- musings on mind, method, metaphor, and matters miscellaneous ... a rather cluttered set of sporadic Good Mistakes. What's it all about? Maybe "... to create moments of philosophy --- that is, to pass from opinion to thought ...." It's also the journal of ^z = Mark Zimmermann. See the ZhurnalyWiki on zhurnaly.com for a parallel "live" Wiki experiment. For back issues of the ^zhurnal see Volumes 0.01, 0.02, ... 0.41, 0.42, ... Current Volume. Send comments & suggestions to "z (at) his (dot) com". Thank you! (Copyright © 1999-2004 by Mark Zimmermann.)

Encapsulation and Trust

Computer science, when applied to the solution of nontrivial problems, is the art of complexity management. Unlike textbook exercises, real-world tasks don't factor via mathematical tricks into simple recursive patterns. Important human jobs are messy, with lots of special cases and ugly trade-offs. Programs to do such jobs are complicated affairs, not things that a single person, however clever, could write alone.

A key tool for complexity management is encapsulation --- packaging parts of a solution into units, with well-defined functions to perform and limited interfaces to the rest of the program. All modern computer languages support encapsulation through the use of subroutines, functions, or similar constructs.
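In code, the idea can be as small as a single function whose callers see only its contract. A minimal sketch in Python (the function and its name are purely illustrative):

```python
import math

def hypotenuse(a, b):
    """Return the hypotenuse of a right triangle with legs a and b.

    Callers rely only on this one-sentence contract; how the answer
    is computed (here, math.hypot) stays hidden behind the interface.
    """
    return math.hypot(a, b)

print(hypotenuse(3.0, 4.0))  # 5.0
```

The caller never needs to know whether the internals use `math.hypot`, a hand-rolled square root, or a lookup table --- that is the whole point of the limited interface.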

Encapsulation is a valuable metaphor for human interactions too. People are hugely complex, but for many jobs they can do their part without close supervision. Humans have well-defined interfaces with their fellows through language. "A word to the wise is sufficient" to convey critical information. We need not develop neurophysiological models of our colleagues' brain states in order to collaborate.

But for the strategy of encapsulation to succeed, it's essential to have trust. If a computer subroutine claims to compute the square root of its input, but once in a while returns a badly erroneous answer, we can no longer view it as a "black box" and rely on it in critical circumstances. Instead, we have to look inside it and debug it, or replace it with another chunk of code. Similarly, if interpersonal communications break down --- if someone lies to us, or doesn't keep promises, or can't understand us --- then we can't trust the other party, and we can't succeed in our joint endeavors.
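That "trust but verify" stance toward a black box can itself be sketched in code: spot-check a claimed square-root routine against its contract before relying on it. (The checker, its sample points, and its tolerance below are illustrative assumptions, not any standard API.)

```python
import math

def check_sqrt(candidate, tol=1e-9):
    """Spot-check a claimed square-root routine against its contract:
    for sample nonnegative x, candidate(x)**2 must be close to x."""
    for x in [0.0, 1.0, 2.0, 1e-6, 9.0, 12345.678, 1e6]:
        r = candidate(x)
        if r < 0 or abs(r * r - x) > tol * max(1.0, x):
            return False  # contract violated: the box can't be trusted
    return True

print(check_sqrt(math.sqrt))        # True: behaves as advertised
print(check_sqrt(lambda x: x / 2))  # False: a plausible-looking impostor
```

Passing such a check doesn't prove correctness, of course --- just as in human affairs, trust is earned by repeated verification, never established once and for all.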

Trust isn't a one-shot affair, either. It's a matter of reputation, based on repeated interaction, verifiable honesty, and a willingness on both sides to forgive occasional mistakes. Trust grows organically, the way a tree grows. With steady cultivation, over time the roots of trust extend deep and wide, so that a relationship can survive even the most stressful crisis.


- Sunday, July 25, 1999 at 22:10:00 (EDT)


Neats and Messies

People can be divided into "neats" and "messies". A "neat" pursues order and deep unifying principles. A "messy" exults in the complexity of the real world. A "neat" seeks to unfold subtle implications hidden within fundamental, absolute truths. A "messy" finds new patterns in the multitudinous interactions among diverse entities and phenomena of the universe. A "neat" tends to work through deductive reasoning; a "messy", through induction. "Neats" dig deep; "messies" fan out. Both approaches, together, are vital for creativity.


- Friday, July 23, 1999 at 21:41:03 (EDT)


Global Wisdom

Imagine viewing the Earth from space, through a lens that reveals information and its flow around the planet. You see a web of light: bright high-bandwidth optical fibers spanning the oceans in thin threads, gleaming networks crisscrossing North America and Europe, covering Japan and the edges of Australia with ribbons of fire, extending tendrils into other continents and islands ... plus glowing microwave links reaching across tundra, desert, and jungle, beaming up to satellites in geosynchronous and low earth orbit ... shining copper wires fanning out from central offices to homes and businesses ... flickering cellular phones passing data first to one tower, then another ... neon television and radio stations splashing out their signals. And you see the nodes in the information web: white-hot libraries full of books ... data repositories like beacons ... fiercely glowing CDs, disks, tapes, and other media.

But that's mere data --- bits of raw, uninterpreted information. Where's the knowledge on this planet? Knowledge implies conscious understanding, something that happens only in minds. (Perhaps animals and machines can have minds too, but that's another topic.) A knowledge-display of the Earth from space shows people as brilliant points of light ... reading and writing ... listening and speaking ... learning and teaching ... clustering together in great cities and spreading out in search of solitude ... drawing information from static storehouses, manipulating it, unifying it, and converting it into useful things.

But that's just knowledge. Where's the wisdom on this sphere? Wisdom is understanding transformed by thought and experience into deeper meaning. What would a wisdom-image of the globe reveal? Mostly an abyss of darkness ... a long night broken at intervals by sparks of realization, as individuals grapple with the questions of life and existence ... occasional faint flickers when caring people meet and open their minds to one another ... glimmers of light as they risk sharing the depths of their griefs, their hopes, and their loves.

How is wisdom stored and conveyed across space and time? Not through high-bandwidth channels, 3D-graphics rendering, supercomputer clusters, terabyte servers, TCP/IP, or any other technological marvels. No, wisdom grows in one mind at a time, as each person wrestles with reality. Wisdom moves at its own pace ... a torch passed hand to hand ... thoughtful conversations ... difficult books read and pondered ... challenging lectures ... and years of agonized meditation. It cannot be hurried. But if we watch, and wait, and care, it will come.


- Thursday, July 22, 1999 at 21:21:28 (EDT)


Irreversibility and Time

On the tiniest length scales of matter, where atoms vibrate and photons of light shake electrons, there is no time. Or rather, there is no flow of time, no past or future. Every interaction is perfectly reversible; nothing wears out or breaks or unwinds or stops. (Footnote: we can talk about weak interactions, black holes, and CPT some other time; they don't change the answer.)

Imagine a game of billiards played on a frictionless table, with no pockets for the balls to fall into. After the first blow from the cue, the billiard balls never stop moving --- they bounce without losing energy off the cushions and collide with each other, ricocheting about forever. If we were to watch a movie of this rather boring situation, after it is well underway we would have no way to tell whether it were running forward or in reverse. Every encounter of a ball with an object would look exactly the same.

But on a microscopic level, this is precisely the situation of the world! Atoms bounce off one another, in an utterly reversible fashion; each individual collision looks perfectly legal if seen going backwards in time. Light is absorbed and emitted in the same symmetric way.
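That microscopic reversibility can be checked directly. In an ideal one-dimensional elastic collision, reversing the outgoing velocities and colliding again recovers the reversed starting state --- the "movie run backwards" is a perfectly legal collision. A small Python sketch:

```python
def collide_1d(m1, v1, m2, v2):
    """Elastic head-on collision of two masses; both momentum and
    kinetic energy are conserved by these standard formulas."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

# Collide two equal billiard balls, then run the event "backwards":
# reverse the outgoing velocities and collide again.
u1, u2 = collide_1d(1.0, 2.0, 1.0, -1.0)
w1, w2 = collide_1d(1.0, -u1, 1.0, -u2)
print((u1, u2))  # (-1.0, 2.0): equal masses exchange velocities
print((w1, w2))  # (-2.0, 1.0): exactly the reversed initial state
```

The time-reversed outcome is the reversed initial condition, to the last digit --- nothing in the collision law distinguishes forward from backward.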

So how can there be any difference between past and future, if every single microscopic event is precisely reversible? Yet we know there is a difference: we remember the past but cannot (usually!) foresee the future; we and the artifacts we make age and break down, as does everything else we see, animal, vegetable, or mineral. How so?

The answer emerges from the fact that the world is built of many tiny parts. Turn from the boring ideal billiard game to a deck of cards. For starters, take a "deck" with only one card in it. No amount of shuffling will make any difference --- there are no "degrees of freedom" and no information content to the order of cards in a one-card deck. But add another card and now there are two possible configurations, one single bit of data. (We can imagine playing a simple, babylike game.) Add more cards, and the number of possibilities grows quickly --- so that with three cards there are six arrangements that we could shuffle the deck into, with four cards there are 24 orders, with five there are 120, and so forth.

A single-card deck shows no magic, and without close examination neither does a deck of two or three cards. No one would be startled to see, after a few shuffles, such a small set of cards randomly coming back to its original order. But consider a regular deck of 52 cards, for which the number of possible states is approximately 8 followed by 67 zeroes. Such a number is so large that if all the six billion people on Earth each shuffled a deck once a second, the chance of any one of those six billion decks returning to its original starting order during the age of the universe is utterly negligible. And yet, this is only a deck of 52 cards. How many more atoms are there in a droplet of water or the tiniest speck of dust? How many more arrangements could those atoms get themselves into?
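The counts above are just factorials, easy to confirm:

```python
import math

# Orderings of an n-card deck: n! grows explosively.
for n in (1, 2, 3, 4, 5):
    print(n, "cards:", math.factorial(n), "orderings")
# 1, 2, 6, 24, 120 ... and for a full deck:

full_deck = math.factorial(52)
print(f"52 cards: about {full_deck:.1e} orderings")  # ~8.1e+67
```

Hence "8 followed by 67 zeroes": a 68-digit number, dwarfing anything six billion shufflers could explore in the age of the universe.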

That's the big secret of time: in a word, heat. Heat, the common everyday experience, the random vibrations of atoms and other tiny particles. Just as no plausible amount of shuffling can return a deck of 52 cards to its sorted order, similarly no amount of waiting for random vibrations of atoms can plausibly return a spilt puddle of milk to the cup that a child drops --- even though no law of Nature on the microscopic level forbids such a happening. The odds are too vastly stacked against it.

So we can tell past from future, by looking at how heat flows and chaos grows. Ordered things become increasingly random over time. Machines break; ink fades; stone wears away to dust; creatures age and die. We can create order only locally, at the expense of adding disorder elsewhere. It's a losing battle, but that's life.


- Wednesday, July 21, 1999 at 21:11:38 (EDT)


Good Ideas

"The best way to have good ideas is to have lots of ideas!", Linus Pauling is reputed to have said. That's a cute sentiment, well-phrased --- but obviously, there has to be some goodness among the "lots of ideas" for this method to work. If all of the notions that we generate are poor, or redundant, or unusable, then cranking out tons of them is of no real value.

As in every human endeavor, what's needed is balance --- a rational trade-off between quantity and quality --- "moderation in all things", so to speak. Best is to work on many scales. Generate lots of ideas internally, but send them through a fast filter function, to get rid of blatant stupidities quickly. Then run the survivors through other filters ... write thoughts down ... review them ... sleep on them ... share them with colleagues ... group them with related concepts ... organize them ... publish them ... and seek new applications of them to unanticipated domains. That's how good ideas can grow and evolve into great ideas.
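The staged filtering described above is the classic generate-and-test pattern: a cheap pass that discards most candidates quickly, then a costlier pass applied only to the survivors. A toy Python sketch (the "ideas" and both filters are stand-ins, not a real measure of quality):

```python
def generate(n):
    """Quantity first: brainstorm many candidates (toy stand-in: integers)."""
    return range(1, n + 1)

def fast_filter(x):
    """Cheap first pass that weeds out blatant non-starters quickly."""
    return x % 2 == 0

def careful_filter(x):
    """Slower, stricter pass, affordable only for the survivors."""
    return x % 9 == 0

survivors = [x for x in generate(100) if fast_filter(x)]
keepers = [x for x in survivors if careful_filter(x)]
print(len(survivors), len(keepers))  # 50 5
```

The economics of the pipeline is the point: spend almost nothing per idea early, and reserve expensive scrutiny for the few that remain.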


- Tuesday, July 20, 1999 at 17:23:50 (EDT)


Meet Mind

How can mere matter make mind? How can a biological system of cells and chemicals and electrical impulses --- a brain --- be the seat of something that thinks and feels and is aware that it is doing so?

Not too many years ago, the obvious answer was, "It can't". The laws of chemistry and physics were so incompletely known that there was no way to plausibly explain life, much less intelligence. Today far more is understood, and although the gaps are still huge a sketch of consciousness is dimly emerging. During the next several decades, with steady progress, models of how we think will become increasingly detailed.

The phrase "mere matter" is a major misnomer! Matter is far from "mere". The more closely we examine the universe, the more obvious it is that nature's building blocks are marvelous, on all scales of space and time. And even more magical is the mathematics that combines fundamental particles and fields to make complexity.

There's no reason to be afraid of understanding mind, any more than there was reason to fear understanding the solar system, or the processes that create canyons and mountains, or the circulation of the blood, or the rules of genetic inheritance. Better explanations of natural phenomena always enrich us. They give us new capabilities to do good, if we are wise enough so to choose. Understanding nature adds meaning and beauty to life. Let us be brave. How could knowing our own minds, the most important part of our humanity, be otherwise?


- Monday, July 19, 1999 at 20:08:47 (EDT)


Booklessness

At an outdoor rock concert Friday night (don't ask!) I was struck by many things: the >98% white ethnicity of the audience, the huge amount of beer consumed, the prevalence of teen-age smoking, the tattoos and pierced body parts, the high volume and distortion level of the audio system, etc., etc. But one thing that wasn't visible disturbed me more than anything I saw. There were no books. In fact, there was no reading material of any kind observable in the crowd as it waited on the grass 90 minutes for the music to begin. No newspapers, no magazines, no pamphlets, no cards and letters. Nothing.

The thousands of people sitting there were bookless, by choice. None of them brought anything to read. Reading was not associated in their minds with fun, or passing time effectively, or learning something new on a pleasant summer evening before the sunset. Instead, they chatted, ate, drank, and eyed each other.

Thomas Jefferson once wrote, "I tremble for my country when I reflect that God is just." Perhaps we may all want to tremble for our country, and our world, when we reflect that so few people (especially younger people) read. Perhaps we may hope that the rock concert audience was unrepresentative --- but I fear that it is more typical than we conceive. And for ourselves: do we let others (especially younger people) see us reading, devouring good books, happily and voluntarily? Or do we only read when compelled to by work, or social pressure (e.g., the current best-seller), or to escape unpleasant surroundings?


- Sunday, July 18, 1999 at 08:31:00 (EDT)


Seeing, Forgetting, Names, and Things

Paul Valéry said, "Seeing is forgetting the name of the thing that one sees." In other, less poetic words: when we really see something, we don't convert the image into symbols, tokens, words in a language; instead, we accept the direct impression of the thing itself on our visual cortex. It's like tasting a fruit, smelling a rose, kissing a friend --- immediate contact with reality --- rather than reading a description of the event and reconstructing it in the imagination.

A lot of the problems most of us have in trying to make representational drawings can be overcome by getting rid of the labels and symbols that we learn unconsciously at an early age. (That's a major point of the excellent book Drawing on the Right Side of the Brain.) Moreover, many difficulties we encounter in life, trying to understand people and events, can be traced to our garbled efforts to translate from sensation into words. Language is wonderful --- it's the medium of thought itself --- but we have to remember that the symbol isn't the object. Things are what they are. Get real!


- Thursday, July 15, 1999 at 21:53:07 (EDT)


Failure

"Fail. Fail again. Fail better."

This advice was reportedly posted on Samuel Beckett's wall beside his desk. Any worthwhile pursuit --- gardening, cooking, drawing, writing, thinking, teaching, learning, ... --- is never done to perfection. There is always room for improvement, a shortfall to correct, an error to identify and fix.

That's precisely what makes something worthwhile: inevitable failure, plus the golden chance to try again, and to do better next time. Living is like that.


- Tuesday, July 13, 1999 at 06:30:31 (EDT)

(the above was seen in a New York Times essay, author now forgotten --- likely the original Beckett quote is "No matter. Try again. Fail again. Fail better." --- but I prefer the "Fail. Fail again. Fail better." version - ^z - 20040727)

Schadenfreude Tempered with Mercy

There's a wonderful German word for taking joy at someone else's misfortune: "Schadenfreude". When we catch ourselves feeling that way, should we also feel guilty? Perhaps ... though when troubles befall people who are outrageously mean or foolish, our natural sense of justice tends to swell, and we thrill to the sense that, for once, somebody "got their just deserts".

Understandable to crow at the too-rare triumph of right --- but may we combine that emotion with a bit of pity? So many innocents are hurt each day; so many worthies are ignored, snubbed; so many beings die too soon. We can never do enough to remedy injustice. Can we not, then, strive to be great-spirited and feel empathy when punishments come to those who richly deserve them? It's hard to be magnanimous when we instinctively want to chortle! Maybe that's why we need to work at it.


- Monday, July 12, 1999 at 22:02:43 (EDT)


Knowns and Unknowns

Often a useful way to start thinking clearly about a new issue is to sort out the known from the unknown components. What are we given, and what must we seek to discover? This clearly applies to mathematical problems, but it also is important in many other applications. Fixing dinner, what ingredients do we have at hand, and what are we without? Finding our way home, when lost in the woods at night, what do we know about where we are and what data do we lack about the paths or topography between ourselves and our desired goal? Figuring out what to do with our lives, where are we now, where have we been, what capabilities do we have already, what skills can we develop, and what is likely to be beyond our reach?

We also do well to think about what we do not need to know in order to reach our goals. Does the price of eggs in 18th-century Warsaw matter to us, or the alignment of Jupiter's moons? (Sometimes yes, but more often no.) Closer to home, can we forget for a moment about all the trivial distractions of our daily life in order to focus on the question at hand --- or are some of those distractions not at all trivial and must be considered to make progress?


- Sunday, July 11, 1999 at 22:08:13 (EDT)


Attractive Opposites

Niels Bohr is reputed to have said, "The opposite of a correct statement is a false statement. But the opposite of a profound truth may well be another profound truth." Examining opposites --- seeking to draw contrasts by comparing things with their converses or inverses --- can be a powerful tool for thought.

Consider something commonly regarded as a great idea --- say freedom, or diversity, or disclosure. Think now about the opposites --- regimentation, or uniformity, or concealment. Each of these can, if we study them, prove to be valuable under some circumstances.

Even when an idea is much stronger or more appealing than its opposite, examining the antithesis often is valuable in clarifying our thoughts and making the limitations of a concept more apparent.

So is Bohr's statement about opposites itself a "profound truth"? Perhaps so --- since we can examine the opposite of it and try to find profound truths that don't have comparably strong opposites. Truth, for instance, has as its obvious inverse deceit, falsehood, misdirection ... not a very attractive crowd! Although we may be able to think of times when lies seem necessary, such circumstances (we hope!) are not too common.


- Saturday, July 10, 1999 at 12:12:33 (EDT)


Bird Brains

We are magpies. We live gathering brightly colored pebbles into the nests that are our minds. We accumulate ideas, array them, push them into pretty patterns ... and then we die and our nests are scattered to the winds.

But some configurations of pebbles are so beautiful that we struggle to encode them as sounds, or marks on paper, or patches of paint. Other people decrypt our symbols and arrange their mental pebbles into something like our patterns, but their own. And so we learn, and create, and share.


- Friday, July 09, 1999 at 12:47:35 (EDT)


What is a Book?

When someone has thought hard about a topic for a long time, for many years, wouldn't it be splendid if s/he could share the gist of that thinking with other people? Wouldn't it be a miracle if, for a few hours, we could listen to a wise and learned person talk about the most important things s/he knows?

That's what a great book is all about. Thinking and sharing. Not shallow thoughts, ripples on the surface of a pond --- but profound depths of knowledge, plunging dives into the abyss of ideas. Not a quick glimpse, a strip-tease flash of flesh --- but leisurely conversation, time spent together, marriage, memories, gifts given and received for life. Thinking and sharing. That's a book.


- Thursday, July 08, 1999 at 21:38:30 (EDT)


Lost Inheritance

"Solitary, poor, nasty, brutish, and short" are Hobbes' words for life without society. Who should we thank for our current state of comparative wealth? Our ancestors, who left us two great gifts. But now, in the very present, what are we preparing to bequeath to our posterity? Savings rates are negative in much of the world, as speculators cash in on recent (apparent) capital gains and spend spend spend. New ideas are increasingly held in secret, as proprietary knowledge for profit. Learning in the schools is in crisis, as children waste their time on entertainment and consumption.

Adam Smith observed that all the activities of governments had not been able to destroy capital so fast as individuals' efforts had built it up. The balance may have tipped; we may be at a turning point. Our descendants may look back at our inherited wealth and ask why we chose to squander it. How can we answer them?


- Wednesday, July 07, 1999 at 20:46:58 (EDT)


Underappreciated Ideas

In the New York Times Saturday "Think Tank" feature of 10 January 1998, four philosophers were asked to identify the most underrated concepts of life. They nominated:


- Tuesday, July 06, 1999 at 18:17:43 (EDT)


Scathing Remarks

A few days ago at the library I picked up a pair of massive tomes on the history of ideas in Western civilization. Both purport to summarize the "hundred best" classic works of the past few thousand years. One book seems quite good, if a bit dry. The other, surprisingly, turns out to be full of personal invective and negativism, to a degree that makes it unreadable. What happened?

Brutal critiques of stupid movies and television shows are often entertaining. So are acerbic analyses of political folly, of business fraud, and of poorly written but popular books. And humor need not be limited to today's topics. A study of history and wisdom from ages past can be fun as well as serious, and can puncture foolish balloons with gusto, as Gibbon does in his Decline and Fall of the Roman Empire.

But petty comments and ad hominem attacks on contemporaries are far out of place in what aims to be a 500-page survey of "The Best" of all time. Scathingly negative remarks lower the author, distract from the content, and reduce the credibility of the entire work. Yes, authors are human; revenge is a natural human temptation; so is taking advantage of one's status to preach to an audience which has come to meet, indirectly, the wisest of the ancients. But it's hard to throw mud without getting it on oneself.

Better by far to remain quiet about one's pet peeves --- and in particular, avoid fingering one's enemies in print. Marcus Aurelius in the Meditations explicitly gives credit to individuals who have helped him, but avoids naming any names on the other side of the ledger. Today's enemies may well become tomorrow's friends. And each of us may hope to discover, in the future, that our judgments have matured and improved ... and that things which we were certain of have turned out to be more complex and multifaceted than we once imagined. Better to praise the good and be proved to have been too magnanimous, than to publicize scorn and later have to apologize to those we have wronged.


- Monday, July 05, 1999 at 08:03:58 (EDT)


Idea Champions

A colleague and friend once lamented that "An idea, however demonstrable its validity, never gains any currency unless it acquires a powerful champion." (Les Lilliman, 1998)

How can good ideas be recognized and supported? Must they have explicit champions to succeed? And, conversely, how can bad ideas be kept from fooling people? And who decides what's good and what's bad? Courts of law, especially when dealing with complex technical issues, have long struggled with these problems.

One hope might be that a "free market" in ideas could separate wheat from chaff. This has often worked well, in the long run at least. But when pernicious notions have great short-term profits associated with them, selfish pseudo-experts and promoters fly out of the woodwork in search of personal gain. Even more dangerous, when bad ideas have irreversible consequences it can be tragic to let them go unchallenged. Numerous examples from medical quackery come to mind; so do destructive political systems like Nazism and Communism.

But what are the alternatives to purely natural selection of ideas? Panels of authorities can help at times, though they tend to oppose valid new developments out of inertia and conservatism. A skeptical attitude on the part of every individual is excellent. But raw doubt must be tempered with open-mindedness in the light of evidence, and none of us have enough time to investigate everything. What other ways are there to filter out bad ideas from good?


- Sunday, July 04, 1999 at 15:21:39 (EDT)


Meaners

We generally have a naive picture of our selves --- the "I" that we each imagine being --- as something like a little person inside each of our heads. We envision a homunculus, a cute wee humanoid creature who looks out through our eyes and listens through our ears, takes in all the sensory information we get, perceives it as meaningful, decides with some kind of free will, and then acts by sending out nerve impulses to our muscles.

This model of the mind is, however, obviously flawed. What's inside that tiny guy's head? A still smaller dude? This can't go on forever! At some point, before we run out of atoms, there has to be a self-contained source of meaning that doesn't rely on wheels within wheels to make sense out of life. We need a meaner --- a source of reason and thought and consciousness and "I".

When challenged that way, we have a couple of options. We can invoke a non-physical explanation: a soul or spirit or other such escape from infinite regress. That works --- but it adds a new element to the equation and gives up the hope that we can understand ourselves without outside intervention from the spiritual realm. We also have to be careful not to fall into the same trap on the non-physical side that we are escaping here. It's not enough to say that meaning comes from the spirit. How does the spirit achieve "meaning"?

Alternatively, we can stick to the material world and entertain the hypothesis that there is no single central meaner --- but rather, meaning emerges from complex feedback loops. These loops are instantiated in us by neurons plus other physical components, each of which obeys the laws of nature. Individually, nerves and connections among them and chemicals aren't magic; collectively, they may make a mind. Equivalent loops, under this theory, could be implemented using other mechanisms --- electrical circuits, or interacting nuclear spins, or vibrations in nonlinear media, or whatever. The pattern is what counts, not the method of building it.

Daniel Dennett wrestles with these issues at length in Consciousness Explained and pretty much comes down on the side of physics; so does Marvin Minsky in his book The Society of Mind. Various intelligent critics argue the other way: that meaning is so different from matter that mind can't possibly be an emergent phenomenon. That position seems, however, to be suspiciously parallel to ones that many folks once took (and some still do): that organic chemicals can't be synthesized by inorganic processes, that life can't come from non-life, and so forth.

But the big questions still exist. Is meaning an all-or-nothing proposition? We definitely tend to feel that way --- but are our opinions on the issue merely prejudices, based on our experience with almost-meaningless machines and the contrasts they show with meaningful organic systems? Can little loops exhibit a similarly little quantity of meaning --- a few bits of "sensing" or "knowing"? A thermostat is far from self-aware, but maybe it "knows" something about the temperature, in an exceedingly simpleminded (!) way. A worm may have only a few hundred neurons, and so it can't be expected to do much --- but perhaps it "knows" something. A dog surely knows much more, and people know more still.
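The thermostat's "knowledge" really is that thin --- its whole model of the world fits in one comparison per step. A minimal sketch in Python (the setpoint, dead band, and return strings are arbitrary illustrations):

```python
def thermostat_step(temp, setpoint=20.0, band=0.5):
    """One 'decision' of a thermostat: everything it 'knows' about
    the world is roughly one bit --- too cold, too hot, or neither."""
    if temp < setpoint - band:
        return "heat on"
    if temp > setpoint + band:
        return "heat off"
    return "hold"

print(thermostat_step(18.0))  # heat on
print(thermostat_step(21.0))  # heat off
print(thermostat_step(20.0))  # hold
```

Whether such a loop "knows" anything at all is exactly the question at issue; the code only makes vivid how little there is inside.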

The most complex computer software yet written probably has less than a million significant relationships encoded in it. (It has many more lines of code, and may tap huge archives of raw data, but most of that arguably shouldn't count, in an information theoretic sense.) Such programs can't be expected to be intelligent --- but may they not begin to display "meaning"? Chess players often anthropomorphize when playing against a computer, and say "It saw I was threatening its king and so it castled", etc. But is anthropomorphization the wrong word? Or might not our perception of sensing and knowing and meaning and choosing by the machine be partially correct? Are we seeing the program beginning to morph into a meaningful anthropic-like system?


- Saturday, July 03, 1999 at 10:08:01 (EDT)


Temperature

Temperature is a word for the random vibrations of atoms or other fundamental particles. When things are in a nice equilibrium, the particles share energy by interacting with each other, and the average energy of each one is a constant (Boltzmann's constant) times the temperature. But temperature as a concept applies much more widely than one might imagine; it's not just a property of objects like ice cubes and ovens.
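The "constant times the temperature" rule is the equipartition theorem: each degree of freedom carries (1/2)kT on average, so a freely moving particle (three translational degrees of freedom) averages (3/2)kT. A quick numerical sketch in Python, using the standard value of Boltzmann's constant:

```python
K_B = 1.380649e-23  # Boltzmann's constant in joules per kelvin

def mean_kinetic_energy(temp_kelvin, dof=3):
    """Equipartition: (1/2) k T per degree of freedom, so a point
    particle with 3 translational degrees of freedom averages (3/2) k T."""
    return 0.5 * dof * K_B * temp_kelvin

print(mean_kinetic_energy(300.0))  # ~6.2e-21 J at room temperature
```

Tiny as that number is, it is the common currency that icy comets, globular clusters, and spinning nuclei all trade in.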

Stars in globular clusters attract each other via gravity, and over millions of years settle down into a distribution of orbits described by a temperature. Pump more energy into the cluster, say through gravitational collapse, and stars "evaporate" --- they get kicked out to speeds beyond escape velocity and metaphorically boil away. In the microscopic universe, nuclei of atoms have a spin, which gives them a magnetic field. They exchange energy with other nuclei, and so can share a nuclear temperature. These nuclear spins are in fact used to make tiny refrigerators, to cool other systems down yet further.

Blair and McNamara (in their book Ripples on a Cosmic Sea) quote physicist Bill Fairbank as saying, "An experiment can always be done better if it is done at low temperatures ..." --- to which his friend and rival Bill Hamilton responds, "... but it is always much more difficult!" Low temperatures reduce noise, the random fluctuations that make it hard to see tiny signals. But in turn, low temperatures make systems more delicate, with less capacity to absorb minor disturbances of any sort. And the most fundamental laws of thermodynamics make it increasingly hard to approach the ultimate low temperature, absolute zero, the closer one gets to it.

Are there interesting and productive metaphors from the concept of temperature that could be applied to social situations or to life in general? Do societies have a "temperature" perhaps reflecting the amount of shared cultural interaction of their members? Do individuals drop out or escape from the common ways of life when their interactions become too extreme? Can the effective temperature of a society be so low that the nation stagnates, or crystallizes and can no longer change without breaking? Do highly energetic subcultures inevitably melt and diffuse out into the larger context, or can they maintain their individuality without isolation? Are there tradeoffs between temperature and other social phenomena --- say, population density, or natural resource exploitation, or intellectual growth?


- Thursday, July 01, 1999 at 18:43:38 (EDT)


Categories, Neat and Messy

George Lakoff in Women, Fire, and Dangerous Things talks about categories and how we use them to divide the world into fundamental elements of thought. Lakoff convincingly shows that human categorization is messy and complex, not at all like the abstract bins of mathematical logic. He argues, therefore, that we must beware attempts (common in some areas of AI research) to reduce intelligence to axioms and theorems.

But there is a place for ideal, pure, logically precise categories ... even though people, as imperfect "meat machines", may not implement them very well in everyday life. Mathematics shines a light on our universe; it gives us a rock-solid foundation to build upon. Points, lines, and planes are utterly imaginary geometric entities --- but they hint at how we may think about actual objects. Numbers are mere abstractions, which throw away all but one aspect of real life --- but that aspect turns out to be tremendously important. Sets and subsets are the basis of a literally infinite variety of concepts, at such a deep level that we can barely talk about them.

Humans don't need to become mechanical automata --- far from it! But it can't hurt us to develop more self-awareness, in the Stoic philosophical spirit, of the roots of our thought processes and the ground in which they find purchase. Those roots, and that ground, are mathematics.


- Wednesday, June 30, 1999 at 17:50:55 (EDT)


Scripting Languages

Many people follow the same progression in learning computer programming: start with a mid-level language like BASIC or FORTRAN, move down after a while to flipping bits in assembler (or close to it in FORTH), and eventually mature into using a wide variety of high-level languages, each appropriate to its own domain.

Perhaps as part of my (belated!) maturing process I've recently come to appreciate something I had formerly held in slight contempt: "scripting languages" such as Netscape's JavaScript, Apple's HyperTalk, etc. These languages seemed to me like poor stepchildren, designed for amateurs who couldn't hack out a "real" program and who therefore had to have their hands held when crossing the street.

But modern scripting languages, combined with today's fast processors, do a lot more than they used to. The best of them offer associative arrays, regular expression pattern-matching, automatic memory management, and a host of other sophisticated features. They do away with most of the boring and stupid parts of small- to medium-scale programming --- type declarations and conversions, formatting hassles, and the grungier aspects of user interface development, for starters. New scripting languages are reasonably efficient, too, thanks to just-in-time compilation and other tricks of the trade.
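As a sketch of why those features matter, here is a word-frequency counter in Python (chosen here merely as one example of such a language): an associative array, a regular expression, and automatic memory management do all the work, with no type declarations in sight.

```python
import re

# Word-frequency counting in a few lines, thanks to associative arrays
# (dicts), built-in regular expressions, and automatic memory management.
def word_counts(text):
    counts = {}
    for word in re.findall(r"[a-z']+", text.lower()):
        counts[word] = counts.get(word, 0) + 1
    return counts

tally = word_counts("To be, or not to be: that is the question.")
print(tally["to"], tally["be"])  # each appears twice
```

The equivalent program in assembler or FORTRAN would spend most of its lines on storage management and string parsing rather than on the problem itself.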

In fact, loosely-typed, flexible, friendly scripting languages are getting to be a lot like human natural languages in their tolerance for sloppiness. They float above the level of ugly detail that programmers formerly had to wrestle with. To a tiny degree, scripting languages are starting to act in the same way that intelligent people do ... they fill in the gaps and interpret incomplete instructions, so as to do the reasonable thing in ambiguous situations. That's real progress! Scripting is nothing to be embarrassed about any more.


- Tuesday, June 29, 1999 at 19:06:05 (EDT)


Metafoundry, Metaforgery, Metafishery, Metaforestry, ...

Metaphors are so important, as tools for thought, that it seems worth playing with words to describe their creation. A metafoundry could be a place where raw ores for relationships are melted and refined ... and then metaforged, not in the "counterfeiting" sense but rather hammered on an anvil, like horseshoes or iron spikes --- stressed, annealed, shaped, trimmed, and finished. Or a metafishery could be a place where little metaphors are raised from hatchlings, minnows ... and when they're big enough to be set free, metafishermen bait hooks and patiently try to catch them again.

But best, I think, is to imagine a metaforest --- an ecology of diverse species, interrelated concepts slowly growing together. Metaforesters attend to the health of the system, transplanting saplings to better locations, at times harvesting mature trees, and clearing underbrush away in places so that visitors can walk the well-established paths. On occasion, a fire rages and there's a chance for a radical new start. Adventurers wander far from known trails and bring back tales of wonder --- reports of isolated groves, where strange and marvelous ideas have evolved in juxtaposition. New routes are cleared to them, and the process of learning and discovery continues....


- Monday, June 28, 1999 at 21:22:04 (EDT)


Headlights and Decisions

Anne Lamott, in Bird By Bird, writes:
E. L. Doctorow once said that "writing a novel is like driving a car at night. You can see only as far as your headlights, but you can make the whole trip that way." You don't have to see where you're going, you don't have to see your destination or everything you will pass along the way. You just have to see two or three feet ahead of you. This is right up there with the best advice about writing, or life, I have ever heard.
Nice sentiments! On the other hand (and there is always another hand!) it helps tactical decision-making to have a strategic viewpoint --- a large-scale map of the situation, so that the right local battles can be fought to lead toward global victory. In writing, it helps to have an outline (or at least a general vision) of the final product. In living, it helps to have a long-term plan (save, invest, study, learn, work, marry, raise the kids, pay off the mortgage, think, etc.).

Always, immediate conditions must guide today's actions, but in the light of the larger context. Never let a lack of complete information paralyze the will, but don't act precipitously either. You're not omniscient; some things aren't predictable. Refusing to decide is a decision; deciding prematurely is also a decision. Wisdom lies in balancing the two.


- Sunday, June 27, 1999 at 06:13:40 (EDT)


Mental Bandwidth Boosters

Could we learn to think better by somehow enhancing our brains' bandwidth, the speed at which we handle information? How might we do that?

Many years ago, Robert A. Heinlein in his short novel Gulf suggested that faster thought might come from using a more efficient language. Heinlein's imaginary system let each short sound stand for a word, so that our word-sized units became whole phrases or sentences. People (actually super-genius types, in the story) then could talk and think an order of magnitude more rapidly. It sounds promising --- though if it were that straightforward to engineer, wouldn't ordinary human languages already have evolved to do it? Do some natural languages today have a higher information density than others? Do their native speakers thereby think better?

Even without increasing the density of speech, are there conscious techniques that could help increase our minds' efficiency? What might they look like? Some obvious candidates are:

What else?


- Saturday, June 26, 1999 at 06:46:34 (EDT)


On the Fringe of Things

Waves interfere with each other, sometimes adding, other times arriving out of phase and cancelling. Fringes are the result --- patterns in space of bright and dark, of high and low amplitude. Fringes of coherent light make holograms, records of a scene in three dimensions, spread out and encoded by the waves so that they store more than the flat image of a photograph. Fringes of radio signals sweep across the earth and are picked up by sensitive antennas; the fringes are then processed by radio astronomers to produce ultra-high-resolution maps of distant galaxies, as sharp as if taken by a telescope thousands of miles in diameter. Fringes of quantum mechanical wavefunctions interfere, constructively and destructively, to guide fundamental particles in their every motion around and between atoms.

And even the paths of macroscopic objects --- like stones, like planets, like us --- are determined by interference fringes among the quantum phases of all possible paths: contributions from neighboring paths cancel everywhere except near the classical trajectory, the one that makes the action --- the time integral of the Lagrangian function --- stationary.
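A minimal numerical sketch (in Python, with illustrative units and a hypothetical detour path) shows the idea: among paths with fixed endpoints, the straight line gives a free particle the smallest discretized action.

```python
import math

# Stationary action for a free particle (mass 1, arbitrary units): among
# paths from x=0 at t=0 to x=1 at t=1, the straight line minimizes the
# discretized action, the sum of (1/2) * velocity^2 * dt over each step.
def action(path, dt):
    return sum(0.5 * ((b - a) / dt) ** 2 * dt
               for a, b in zip(path, path[1:]))

n = 100
dt = 1.0 / n
straight = [i / n for i in range(n + 1)]            # the classical path
wiggly = [i / n + 0.1 * math.sin(math.pi * i / n)   # same endpoints, a detour
          for i in range(n + 1)]

# The straight path's action is strictly smaller than any detour's.
print(action(straight, dt) < action(wiggly, dt))  # True
```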


- Friday, June 25, 1999 at 05:52:05 (EDT)


Mind Me! ... and Hidden Teachers

People yearn for others to pay attention to them, to listen to their thoughts. Why? Do we feel, like infants, that we don't exist if somebody isn't looking at us? Or do we strive to inject our ideas into others, hoping to cheat death by leaving bits of our thinking in other minds, like a virus replicating through a series of hosts? These seem rather selfish and immature reasons to communicate. At best, do we try to help others by sharing with them the insights that we've won through our hard work?

Patents, copyrights, fame, fortune --- these material rewards for intellectual property may be appropriate at times ... but not always, and surely not for the most important discoveries of life. As authors and speakers, perhaps we should consider offering more of our contributions anonymously, rather than seeking glory for ourselves. As readers and listeners, conversely, perhaps we should turn away from celebrities and seek wisdom from those without big names and big advertising budgets. Maybe, in fact, we should try to be more observant and learn from sources without any names at all --- from the forgotten people who brush past us every day, ignored and unnoticed, quietly doing their jobs. What lessons do they have to teach us?


- Thursday, June 24, 1999 at 05:59:48 (EDT)


Rules vs. Principles

At a stop sign in the middle of the desert, with no human beings for miles around, should you stop? If you break a "rule" and nobody's harmed (or could have been harmed), was that wrong? Are some rules sacred, by consensus or Higher Authority? If you know you broke a rule, should you feel guilty even if no one else knows? If a rule seems bad, is it wrong to obey it?

Robert Pinsky in The Sounds of Poetry: a brief guide begins with:

There are no rules.
However, principles may be discerned in actual practice: for example, in the way people actually speak, or in the lines poets have written. If a good line contradicts a principle one has formulated, then the principle, by which I mean a kind of working idea, should be discarded or amended.
Pinsky also comments, at the end of his second chapter:
Less formally, a mental process like such an exercise --- being aware of how a thing is done, and appreciating more by noticing more --- is the goal of this book.

Perhaps rules are only a step along the way to properly implementing principles in our lives. Rules serve as a kind of short cut to help us do the right thing, most of the time. But rules only dictate "what" --- good principles tell "why" and sometimes "how". We begin with rules when we're young, or first learning a skill. We may graduate, eventually, to understanding principles and applying them to situations far beyond the scope of the initial rules. That's maturity; that's wisdom.


- Wednesday, June 23, 1999 at 06:03:11 (EDT)


Expert Player on a Poor Instrument

Elizabeth Barrett Browning in one of her less-quoted Sonnets from the Portuguese (XXXII) modestly describes herself as " ... more like an out-of-tune / Worn viol, a good singer would be wroth / To spoil his song with, and which, snatched in haste, / Is laid down at the first ill-sounding note." But she then turns the image around and apologizes for not realizing that " ... perfect strains may float / 'Neath master-hands, from instruments defaced ...."

In plain language, the poet is saying that great skill shows itself in calling forth beauty from modest, imperfect materials. Similarly, great thought can emerge from rough, incomplete, confused experience --- if we can see through the distractions and ephemera to recognize underlying ideals and pure concepts.


- Tuesday, June 22, 1999 at 05:56:04 (EDT)


Impossible Standards

Setting targets too high to ever be achieved isn't daunting or discouraging --- it's exhilarating! Such goals will always be there, like infinity, forever beyond grasp. But isn't that better than working toward mundane ends, achieving them, and wondering what to do next?

Don't measure yourself against other people, however "great" their accomplishments may seem. They're human; they're limited; they're imperfect. Try for more. Look up, see the stars, and have the courage to strive for them.

Hopeless? No! Reaching out, escaping the box, transcending all limits --- in spite of certain failure --- that's the source of real hope in life.


- Monday, June 21, 1999 at 05:49:42 (EDT)


Twisting Space and Time

In the classical realm, time moves smoothly and inexorably forward, uniformly no matter where we sit or how we ourselves move. This simplicity makes the laws of Nature easier to write down and manipulate. Forces act on masses and accelerate them; doubling the force doubles the acceleration. Two observers watching the same experiment agree on what happens when, and in what order events succeed each other. But this simplicity begins to unravel when we look more closely at fast-moving objects.

What is "fast"? Fast, from the viewpoint of spacetime, means moving at a fair fraction of the speed of light. From an everyday perspective, that's really quite fast, much faster than the motions of animals or vehicles near the Earth, and even faster than the movements of visible objects in the Solar System. When we move fast, time and space get mixed up --- rotated into one another, just as directions can get mixed with each other when we turn.

If we face North, then East is on our right. We don't think it extraordinary, when we turn a few degrees leftward, to find that our right hand now points not due East but rather mostly East and a little North. Similarly, when we move rapidly through space, our time dimension begins to point a bit into the direction that we are heading, and the direction towards which we are moving points a bit into our time dimension.
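The rotation analogy can be made concrete with a small sketch (Python, in units where the speed of light is 1): a spatial rotation mixes x and y through sines and cosines, while a Lorentz boost mixes t and x through their hyperbolic cousins, with the "rapidity" playing the role of the rotation angle.

```python
import math

# A rotation mixes x and y via sin/cos; a Lorentz boost (units with c = 1)
# mixes t and x via sinh/cosh of the rapidity, spacetime's "angle".
def rotate(x, y, angle):
    return (x * math.cos(angle) - y * math.sin(angle),
            x * math.sin(angle) + y * math.cos(angle))

def boost(t, x, rapidity):
    return (t * math.cosh(rapidity) - x * math.sinh(rapidity),
            -t * math.sinh(rapidity) + x * math.cosh(rapidity))

# One tick of our clock, as seen from a frame moving at v = tanh(rapidity):
v = 0.6                        # 60% of light speed
r = math.atanh(v)
t2, x2 = boost(1.0, 0.0, r)
print(round(t2, 3))            # 1.25: the moving observer sees our clock run slow
```

The factor 1.25 is just the familiar 1/sqrt(1 - v^2) of special relativity, falling out of the hyperbolic "rotation" automatically.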

What does that mean? It means that when we (and anyone traveling at the same speed and in the same direction) look at things, they seem perfectly normal to us --- yet we will not agree with some of the observations of people who are moving relative to us. The differences in what we perceive and measure aren't mere artifacts caused by the time it takes light or other signals to reach us --- the changes are much more fundamental than that.

We're free to consider ourselves to be at rest. But when we try to compare distances that we measure with those measured by moving observers, we find that they see lengths contracted along the direction in which they move. Worse, their clocks and sense of time passing go slower than for us. And we cannot agree about even the order in which some events happen, or about the simultaneity of events. We aren't special, either --- every other observer has a similar disagreement with us (and with each other). Everybody sees everyone else's clocks run slow if they're moving.

Things get even more fun when gravity is involved --- since observers who are near a large mass also seem to unavoidably have their clocks ticking slower than ours. And their distance measurements disagree with ours, even if they are not moving relative to us. Remember the thought experiment facing North with East on our right hand? Take a giant step forward, then left, then back, then right. On a curved planet, if our steps are big enough we won't get exactly back to where we started, and we won't be facing precisely North any more. Curved space near gravitating masses works likewise.

That's what it means for time and space to get twisted into one another. These changes don't just apply to people, of course --- they happen to even the simplest subatomic particles, which age more slowly when they move faster, in precise accordance with these rules. There's no escape. Nature isn't classical; our prejudices, formed by moving at leisurely speeds through weak gravitational fields, aren't universally valid.


- Sunday, June 20, 1999 at 07:51:48 (EDT)


Polygon Power vs. Brain-in-a-Vat

How can we trust reality? How can anybody know that they're not just a brain in a vat, being fed artificial neural impulses by some mad scientist running a supercomputer simulation of the world?

Perhaps (if one can rely on one's own reason) there is a solution. Are there natural problems so computationally intensive that they can't successfully be solved in real time, even with the largest conceivable future computers? If so, could one use such problems to tell the difference between a simulation and reality? That is, could you know that you were just a brain in a vat by the breakdown of such a complex simulation? And contrariwise, could you trust the evidence of your senses, if you observed situations too rich in detail ever to be faked?

Imagine raising a pair of binoculars to your eyes, to look more closely at a waterfall, and seeing the cascade resolve itself into a wire-mesh framework rather than a chaotic torrent of droplets and eddies. Imagine flipping the pages of a book and noticing a delay before the words of the next chapter appear. Imagine tuning a radio across the RF spectrum and hearing perceptible stutters at each different station before its music resolves into continuity.

To cause such a breakdown of the simulation, the challenge problem has to have real-time elements --- otherwise, it might be precomputed at the leisure of the evil genius who is out to trick us. It has to incorporate directly observable features --- otherwise, the readouts of complex instruments could be faked without doing the underlying hard computational work. And the challenge problem has to have an answer that can be reliably checked, after the fact, by trustworthy human mental operations.

What are examples of such complex and computationally intractable problems? Could explicit ones be exhibited, perhaps involving issues of number theory or the chaotic behavior of deterministic physical systems? Or do no such problems exist in the real world? If not, is that a coincidence, or ...?


- Saturday, June 19, 1999 at 06:22:11 (EDT)


Power Curves, Hysteresis, and Forgiving

Some systems have multiple states, even under identical conditions. If you plot a curve of their behavior, it's not a simple single-valued function --- it bends back on itself.

Consider a gasoline engine in an automobile. At a given speed, there are usually two possibilities: the engine can be running efficiently, or you can be "on the wrong side of the power curve", in a state where a lot of gas is being wasted and not much energy is coming out. Pressing harder on the accelerator in the second case won't do any good --- it just makes things worse.

In the same way, a piece of iron has something like a memory of the magnetic fields it has been through, so the current state shows evidence of the past. A tax system can yield different revenues with the same rates, depending on taxpayers' choices of activities. An airplane can be flying level either mushing along at a dangerously high angle of attack, or going fast with low drag and the nose down.
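A toy model (a thermostat with a dead band, written in Python purely for illustration) captures the history-dependence these examples share: under identical present conditions, the system's state depends on how it got there.

```python
# History-dependent state in miniature: a thermostat with a "dead band".
# At 20 degrees the heater may be on or off -- it depends on the path
# the temperature took to get there.
class Thermostat:
    def __init__(self, low=18.0, high=22.0):
        self.low, self.high = low, high
        self.heating = False

    def update(self, temp):
        if temp < self.low:
            self.heating = True      # got cold: switch on
        elif temp > self.high:
            self.heating = False     # got hot: switch off
        return self.heating          # inside the band: keep the old state

t = Thermostat()
t.update(17.0)         # fell below 18: heater comes on
print(t.update(20.0))  # True  -- same 20 degrees...
t.update(23.0)         # rose above 22: heater goes off
print(t.update(20.0))  # False -- ...different history, different state
```

Plotted against temperature, the heater's state traces a loop rather than a single-valued curve --- the same loop an iron magnet traces against an applied field.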

Hysteresis is the technical term for this sort of history-dependent behavior. People show hysteresis too: how they get to a place influences how they act. Treat them fairly, and they'll tend to reciprocate. But we can't expect them to instantly forget historical wrongs, even if their current circumstances are identical to those of others who haven't been through such a past. Should everybody "forgive and forget"? Perhaps --- but perhaps such magnanimity isn't something we can legitimately ask of anyone else. Forgiveness is a gift, not an obligation.


- Friday, June 18, 1999 at 19:22:50 (EDT)


Eating One's Own Cooking

It's easy to tell others what to do. We all imagine the world would be vastly improved if everybody else would only do as we say.

It's hard to change one's self --- to fight bad habits, to choose to act at all times with reason and virtue and charity and honor, in accordance with one's own highest goals.

But paradoxically, one's self is the only person whom any of us can directly affect. And teaching by example is the only real way to show how to live properly. How wise is it then to prescribe regimes for others that we aren't willing to endure --- to cook and serve dishes for others that we aren't eager to eat ourselves?


- Thursday, June 17, 1999 at 20:30:29 (EDT)


Self Reliance

Merle Zimmermann found the following inspirational preface in a book on self-taught shorthand; the sentiments apply universally:

TO THE STUDENT

Reliable as this method is, let no one depend too much on either method or teacher. The work, remember, must be done by yourself; and the fullness of success can come to you in one way only --- and that is by bringing to the work enthusiasm, energy, perseverance, and invincible determination; by being exacting with yourself, and never calling anything "good enough" that is not the best you can do; by putting an object at the end of every hour's practice, and never dawdling over the work simply to kill time; by adopting your motto "There is no such word as fail," and never losing sight of it.

Such common-sense measures, directed by a common-sense method, make failure not only improbable but absolutely impossible.

from The Chandler Practical Shorthand: for schools, colleges and self-instruction, by Mary A. Chandler Atherton (founder of the Chandler Normal Shorthand School; Boston, Mass.)


- Wednesday, June 16, 1999 at 07:00:27 (EDT)


Binding Energy

When objects stick together, they interact with a certain strength --- the binding energy. The concept applies widely:

Binding energy considerations provide good mental tools to understand countless phenomena:

Why do solids melt or decompose at particular temperatures?
It happens when thermal energies exceed the binding energy of the molecules.
Why do Saturn and Jupiter (and Uranus, etc.) have rings?
When small bodies get too close to a large mass (within the "Roche Limit") tidal forces exceed the gravitational self-binding and the little masses are torn apart.
Why do nuclear bombs release so much destructive energy compared to chemical explosives?
The binding energies of the nuclei are millions of times greater.
Why do globular clusters or galaxies tend to evolve over time to have black holes at their centers?
When several stars coincidentally pass near each other, sometimes the gravitational interactions will kick one star away rapidly, leaving the others more tightly bound together; the process, repeated over millions of years, tends to produce high-density central cores, which then collapse into black holes.
Why are extraordinarily powerful nanotechnology "mechanical explosives" (e.g., as described in Neal Stephenson's Diamond Age sf novel) implausible?
Any hypothetical nanotech components (rotors, spokes, etc.) would have to be held together by intermolecular forces, and cannot have binding energies beyond those of ordinary molecules --- so the tiny mechanical explosives would fall apart if one tried to store more than usual chemical energies in them. They thus might be equivalent to normal explosives, but not significantly more powerful.
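A rough back-of-the-envelope sketch (Python; the bond and binding energies are typical round numbers assumed for illustration, not precise data) quantifies two of the answers above: the million-fold nuclear-to-chemical energy ratio, and the temperature scale at which thermal energy rivals a chemical bond.

```python
# Order-of-magnitude comparison (illustrative round numbers, not
# precise data): chemical bonds hold a few eV; nuclear binding runs
# to a few MeV per nucleon -- about a million times more per particle.
EV_PER_JOULE = 6.241509e18
K_BOLTZMANN = 1.380649e-23        # J/K

chemical_bond_eV = 4.0            # a typical covalent bond, assumed value
nuclear_binding_eV = 8.0e6        # ~8 MeV per nucleon, assumed value

print(round(nuclear_binding_eV / chemical_bond_eV))  # 2000000

# Temperature at which thermal energy k*T reaches a chemical bond's scale:
temp_K = chemical_bond_eV / (K_BOLTZMANN * EV_PER_JOULE)
print(temp_K > 10000)             # True: tens of thousands of kelvin
```

Real materials decompose well below that crude estimate, of course --- it takes far less than one full bond energy per molecule to start shaking a solid apart --- but the scales come out in the right neighborhood.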

The concept of binding energy could also be metaphorically applied to nonphysical entities ... ecological systems ... groups of people ... technological webs ... interacting societies ... networks of ideas ....


- Monday, June 14, 1999 at 19:21:07 (EDT)


Living Philosophy

Plutarch writes:

"They are wrong who think that politics is like an ocean voyage or a military campaign, something to be done with some particular end in view, something which leaves off as soon as that end is reached. It is not a public chore, to be got over with. It is a way of life. It is the life of a domesticated political and social creature who is born with a love for public life, with a desire for honor, with a feeling for his fellows; and it lasts as long as need be.

"It is not simply office-holding, not just keeping your place, not just raising your voice from the floor, not just ranting on the rostrum with speeches and motions; which is what many people think politics is; just as they think of course you are a philosopher if you sit in a chair and lecture, or if you are able to carry through a dispute over a book. The even and consistent, day in day out, work and practice of both politics and philosophy escape them.

"Politics and philosophy are alike. Socrates neither set out benches for his students, nor sat on a platform, nor set hours for his lectures. He was philosophizing all the time --- while he was joking, while he was drinking, while he was soldiering, whenever he met you on the street, and at the end when he was in prison and drinking the poison. He was the first to show that all your life, all the time, in everything you do, whatever you are doing, is the time for philosophy. And so also it is of politics."

(Emphasis added; quote from The Practical Cogitator; or the Thinker's Anthology, selected and edited by Charles P. Curtis, Jr. and Ferris Greenslet, 1945.)


- Saturday, June 12, 1999 at 18:37:26 (EDT)


Breadth-First vs. Depth-First

Trying to find something in a multidimensional space? There are two major approaches. One can look broadly but shallowly at first, repeatedly plowing the same ground, working deeper and deeper until the goal is achieved. Or one can plunge a needle straight through the realm of possibilities, probing all the way down in various places until finding success.

The first way, breadth-first search, makes sense in many circumstances:

Contrariwise, the other method, depth-first search, has at times its own advantages:

In real life, the right approach depends on the problem; often a mixed strategy is best. Logic programming (as implemented, for example, in the PROLOG language) applies depth-first search toward a goal --- but with programmer-controlled modifications to "cut one's losses" and give up when a line of reasoning leads into impenetrable thickets. Most computer chess programs apply a breadth-first iterative-deepening method --- gathering information from positions a few half-moves ahead to optimize later analyses that go further into the future of the game.
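The two search orders can be sketched in a few lines of Python (on a toy six-node tree invented for illustration): the only difference between them is whether the frontier of unexplored nodes is kept in a queue or on a stack.

```python
from collections import deque

# A small tree: each node maps to its children.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"],
        "D": [], "E": [], "F": []}

def breadth_first(start):
    order, queue = [], deque([start])
    while queue:
        node = queue.popleft()          # FIFO: oldest frontier node first
        order.append(node)
        queue.extend(tree[node])
    return order

def depth_first(start):
    order, stack = [], [start]
    while stack:
        node = stack.pop()              # LIFO: newest frontier node first
        order.append(node)
        stack.extend(reversed(tree[node]))
    return order

print(breadth_first("A"))  # ['A', 'B', 'C', 'D', 'E', 'F']  -- level by level
print(depth_first("A"))    # ['A', 'B', 'D', 'E', 'C', 'F']  -- plunge, backtrack
```

Swap one data structure and the whole character of the exploration changes --- broad furrows versus deep needles.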

We do the same in writing or drawing ... sketching out the general scene in broad brushstrokes or terse outline form ... focusing on certain areas and working on their details ... moving to other parts of the composition, then returning to add additional nuances ... and so forth. Perhaps the complementary aspects of breadth-first and depth-first can apply to other situations in life?


- Friday, June 11, 1999 at 18:41:06 (EDT)


Reader as Performer

Reading seems a passive act, like playing a recording ... but it's far more. A reader is a performer of an author's work, just as a musician is a performer of a composer's score. Writers struggle to encrypt their ideas in words, strings of arbitrary symbols. Readers take those symbols and decode them back into mental activity. Sometimes the reader's performance is relatively faithful to the original intent, and corresponding ideas take shape in another person's head. Sometimes the message doesn't get through --- it may be written in an incomprehensible language, or require background knowledge and experience that a particular reader lacks.

Sometimes, in the most fortuitous circumstances, the performance exceeds the wildest hopes of the author --- and the reader's mind leaps to concepts beyond those originally dreamed of. That's poetry ... creativity ... progress....


- Thursday, June 10, 1999 at 22:28:14 (EDT)


We Suddenly See to the Edge of the Universe

The late Richard Casement created the science and technology section of The Economist magazine. A book published some years ago in memory of him bears the poetic title Man Suddenly Sees to the Edge of the Universe. What does that mean?

The immediate sense of the phrase is that astronomers can now observe objects distant enough in space and time to be near the Big Bang, the beginning of everything. The Hubble Space Telescope has contributed; so have orbiting infrared, X-ray, and gamma-ray observatories. On the ground, radio telescopes and new sensors attached to classical optics have provided huge data streams about the early cosmos. It's both exciting and frightening to imagine that we are on the threshold of viewing the origins of the physical universe.

But can we also suddenly see to the edges of other, more metaphorical, "universes"? Perhaps so. Consider physics, biology, mathematics, and cognitive science. In quantum mechanics and general relativity we may have some of the most fundamental laws of nature. Molecular biochemistry and Darwinian evolution could be the big keys to understanding living creatures. Via this century's discoveries in logic, algebra, and analysis we find answers to many of the most perplexing questions of proof, infinity, number theory, and the limitations of formal systems. And through work on computation, neuroscience, perception, and applied philosophy we may be closing in on the roots of consciousness --- the sources of the mind.

Is the above sheer hubris? Does the edge of the universe recede faster than our knowledge can ever approach it? An optimist would speculate not. Yes, the more we learn, the more we find that we do not know --- but perhaps we do suddenly begin to see faint outlines of the most important truths. Science isn't over; there's much to work on beyond measuring known parameters to a few more decimal places; important new discoveries will surely raise and resolve questions which we cannot even formulate today. But the big picture may be emerging. What wonderful things we now see!


- Tuesday, June 08, 1999 at 06:33:21 (EDT)


Singularities

What is a singularity? In mathematics, a singularity is a place where something blows up --- where a function goes to infinity or otherwise has perverse, degenerate behavior. For instance, the simple function 1/x clearly gets into trouble as x goes toward zero from either side. And there are even more bizarre examples, such as functions that oscillate faster and faster as one approaches a point, with the speed of vibration going to infinity even though the function itself remains finite there. (Try evaluating sin(1/x) with a calculator for small values of x.)
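The suggested calculator experiment is easy to automate (a Python sketch; the sample grids are chosen arbitrarily): sin(1/x) stays bounded between -1 and 1, yet its oscillations crowd together without limit as x approaches the singularity at zero.

```python
import math

# Sample sin(1/x) at ever-smaller x -- bounded, but increasingly wild:
for x in [0.1, 0.01, 0.001]:
    print(x, round(math.sin(1 / x), 4))

# Count sign changes on a fine grid: the vibration speeds up near zero.
def sign_changes(lo, hi, steps=100000):
    xs = [lo + (hi - lo) * i / steps for i in range(1, steps + 1)]
    vals = [math.sin(1 / x) for x in xs]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

print(sign_changes(0.01, 0.1) < sign_changes(0.001, 0.01))  # True: far wilder near 0
```

Each tenfold step toward zero packs roughly ten times as many oscillations into a tenth of the space --- the function remains finite while its behavior becomes unboundedly perverse.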

Singularities are fun because they stress our prejudices, which we've built up from experience in non-singular environments. Singularities are also important, in that they can encapsulate information and sometimes control the behavior of functions far away, as an anchor can control a ship. Physics encounters singularities in idealized cases such as a point-like electrical charge (which would seem to have infinite electromagnetic field energy and therefore infinite mass) or the ultimate gravitational collapse of a black hole. Singularities in physics are a vital symbol of our ignorance --- since they indicate a breakdown of known physical laws.

In human events, there are fascinating circumstances where societies go through (or near) singularities --- abrupt catastrophic changes that don't connect smoothly with the past. Examples include revolutions, monetary collapse, and mass migrations. Some historical instances of this sort of thing are described in Extraordinary Popular Delusions and the Madness of Crowds ... the Crusades, the Tulip Mania and other stock market "bubbles", the Witch Burnings, the spread of catch-phrases in language, etc. Vernor Vinge in various of his stories (The Peace War, Marooned in Realtime, ...) and essays has wrestled with the possibility of a social singularity affecting all of humanity. He envisions it as the endpoint of a period of accelerating technological progress, where rates of change go infinite and (almost) the entire species either self-destructs or transcends ordinary existence.

It's important to look for current examples of looming singularities, or developments which could lead to them. What could cause sudden large-scale chaos? Besides technological breakthroughs, perhaps widespread and uncontrollable disease, or heightened tribalism, or cosmic disaster, or mass irrationality, or ....?


- Monday, June 07, 1999 at 06:32:04 (EDT)


Ingenious Devices

This is undeniably an age of technological miracles and wonders, but in between patting ourselves on the back over our big bandwidths, we may wish to remember another side of "progress". Nicely apropos is J. R. R. Tolkien's description (in Chapter IV of The Hobbit):
"Now goblins are cruel, wicked, and bad-hearted. They make no beautiful things, but they make many clever ones. They can tunnel and mine as well as any but the most skilled dwarves, when they take the trouble, though they are usually untidy and dirty. Hammers, axes, swords, daggers, pickaxes, tongs, and also instruments of torture, they make very well, or get other people to make to their design, prisoners and slaves that have to work till they die for want of air and light. It is not unlikely that they invented some of the machines that have since troubled the world, especially the ingenious devices for killing large numbers of people at once, for wheels and engines and explosions always delighted them, and also not working with their own hands more than they could help; but in those days and those wild parts they had not advanced (as it is called) so far."
We're not cruel or wicked or bad-hearted. But, lest we become too proud of our progress, perhaps we should think on occasion about all our "Made in China" coffee cups and hand-held video games ... about the wonder weapons of modern applied physics that our taxes buy ... and about the uses to which we have turned miracles of information technology: the marketing of junk, the monitoring of formerly-private activities, the pandering of sex and violence for profit, and the buying of elections via ad campaigns.

Yes, there is a happy side. Technological progress has freed slaves, lengthened life, spread literacy, and brought hugely more of us to the point where we have a chance to think and learn and be all that we can be. How tragic, then, to witness without shame or protest the evil applications of so many ingenious devices.


- Sunday, June 06, 1999 at 17:36:40 (EDT)


Thinking Through Prejudice

Some issues are particularly hard to think about because we have strong prejudices, one way or another, about them --- we really wish that some (cherished) things were true, or conversely we fear that their (abhorrent) opposites might be true. Examples abound.

How to deal with such preconceptions? Perhaps the best one can do is to become conscious of them, and to apply extra caution and skepticism when an argument lines up "nicely" with what one instinctively hopes to be true.


- Saturday, June 05, 1999 at 20:30:57 (EDT)


Have Fun, Do Good, Stay Out of Trouble

A plausible set of working goals could be captured in the mantra "Have Fun, Do Good, Stay Out of Trouble --- Pick Any Two!" The implication, of course, is that you can't have all three. The ideal job would achieve all of them at once --- alas, such employment is tough to find!


- Saturday, June 05, 1999 at 12:26:49 (EDT)


A Sense of Where You Are

John McPhee called his biography of then-basketball player Bill Bradley A Sense of Where You Are. The title comes from a beautiful scene where Bradley stands talking with McPhee on a court, facing away from the basket. Without looking back, Bradley tosses a ball over his shoulder, and it amazingly goes through the hoop; McPhee fetches it, and Bradley repeats the feat. "When you have played basketball for a while, you don't need to look at the basket when you are in close like this," Bradley says. "You develop a sense of where you are."

We do the same thing, without recognizing it, throughout life. The most astonishing human activities --- like walking, seeing, and thinking --- are those we learned to perform before we could even pretend to introspect and analyze what we're up to. (They're also among the hardest things to program a computer to emulate.) We've been doing them so long that we've developed "a sense of where we are" in moving around three-dimensional space, in parsing patterns of light on retinas into objects, and in manipulating the symbols of language. It takes great effort to even begin to "explain" how we do them, and many of our "explanations" are probably wrong.

So the big challenge, perhaps, is to break through the surface "sense of where we are" and begin to see the underlying rich complexity of everyday life. Then we can start working to understand daily events. Ultimately, we may learn to appreciate them as the miracles that they are, not as unexamined blobs. Is that a key part of enlightenment, of mindfulness, of becoming fully human?


- Friday, June 04, 1999 at 20:09:17 (EDT)


Bosons and Fermions

The fundamental subatomic particles of physics come in two varieties: bosons and fermions. All particles have spin, intrinsic angular momentum. Bosons have spin in whole-number amounts: 0, 1, 2, etc. --- for instance, particles of light, photons, have spin 1 and are thus bosons. Fermions have half-integral spin: 1/2, 3/2, 5/2, .... Examples of fermions include electrons, protons, and neutrons.

The difference between bosons and fermions is important to all of nature. For deep reasons of quantum mechanics, bosons "like" to clump together, statistically, and they can form coherent systems of big waves that move in phase, as happens in lasers. Fermions, on the other hand, are absolutely forbidden to be in the same state as each other. Hence, electrons, being good fermions, can't all drop down to their lowest possible energy levels near the nuclei of atoms. That's extraordinarily fortunate, since otherwise all the chemical elements would be pretty much like hydrogen --- and it might be tough to get an interesting universe out of such crude building blocks!
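A toy calculation suggests how much the exclusion principle matters. In the idealized hydrogen-like picture (ignoring the real "aufbau" subshell ordering of actual chemistry), shell n holds at most 2n² electrons --- n² spatial orbitals times 2 spin states --- so fermionic electrons are forced into ever-higher shells rather than piling into the ground state:

```python
def fill_shells(num_electrons):
    """Pauli exclusion, idealized: shell n holds at most 2*n**2 electrons
    (n**2 spatial orbitals times 2 spin states).  Without exclusion ---
    boson-like behavior --- everything would sink into the n = 1 state."""
    shells = []
    n = 1
    remaining = num_electrons
    while remaining > 0:
        capacity = 2 * n * n
        placed = min(capacity, remaining)
        shells.append((n, placed))
        remaining -= placed
        n += 1
    return shells

print(fill_shells(1))    # hydrogen: [(1, 1)]
print(fill_shells(6))    # carbon: [(1, 2), (2, 4)]
print(fill_shells(26))   # iron: [(1, 2), (2, 8), (3, 16)]
```

The outermost, partly-filled shell differs from element to element --- and that is where all of chemistry lives. If electrons were bosons, every atom would look like overgrown hydrogen.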

The magic of the world is that putting together two such simple sets of rules --- one for bosons, another for fermions --- can yield the amazing complexity of large-scale systems we observe in our lives every day.


- Thursday, June 03, 1999 at 21:33:25 (EDT)


Dangerous Yourself

The study of ecology suggests that members of a single species tend to be each other's worst enemies --- because they're competing for the same resources in the same ecological niche. But each of us faces a more serious threat, one even closer to home. In Book III, Chapter 5 of The Lord of the Rings, J. R. R. Tolkien writes, aptly:
'Dangerous!' cried Gandalf. 'And so am I, very dangerous: more dangerous than anything you will ever meet, unless you are brought alive before the seat of the Dark Lord. And Aragorn is dangerous, and Legolas is dangerous. You are beset with Dangers, Gimli son of Gloin; for you are dangerous yourself, in your own fashion....'
We are each our own worst enemy. As the Stoic philosophers argued, external things are for the most part of no consequence in comparison to our individual choices: to live virtuously, or to fall prey to greed, selfishness, anger, and the other petty distractions of the world.


- Wednesday, June 02, 1999 at 21:44:32 (EDT)


That Which We Cannot Say

Wittgenstein famously concluded his Tractatus, "Whereof one cannot speak, thereof one must be silent." What can't we say?

Some things can't be talked about because discussing them would be incredibly hurtful to others. Such themes might include skeletons in a friend's closet, foolish past behavior, former lovers, and subjects on which earlier conversations have revealed irreconcilable differences of politics, religion, or other cherished belief systems. Hashing those things out yet again would be counterproductive --- though at some indefinite point in the future they likely will be viewed with joint amusement as youthful indiscretions, "water over the dam", or excusable naivete and high spirits.

That stands in stark contrast with speaking and writing on certain other topics, particularly ones involving eros, human sexuality. Almost without exception, attempts to discuss sex go in the opposite direction --- momentarily entertaining (if well done) but increasingly embarrassing to both orator/author and listener/reader as time goes by. Energetic advocacy of extreme political or philosophical positions also tends to appear stale and silly after a few years, attractive as it may seem when fresh.

The exceptions are those conversations, oral or on paper, which treat critical topics with generosity, openness, magnanimity, thoughtfulness, honesty, and love. Much can be forgiven, and much is forgiven, for those who speak thus from the heart, even when they say that which perhaps they should not have.


- Monday, May 31, 1999 at 15:43:12 (EDT)


Simplifying Through Complexity

Sometimes a difficult problem can be solved by lifting into a higher-dimensional space --- generalizing, adding complexity, transforming, and then simplifying back at the end. This is a strikingly counterintuitive approach. Are we not seeking to reduce complexity? But lifting can work wonders when our original challenge is solvable only in an extremely subtle and non-obvious way, because it's a special case of a much larger problem --- a larger problem that has more structure and therefore more handles to get a grip on.

Think about the shadow cast by a moving three-dimensional object with a complicated shape. It could be almost impossible to predict the changes of that shadow, if we are only allowed to observe and think about the shadow itself. But if we can figure out the shape of the real object from evidence that the shadow provides us, then the problem of understanding the shadow's motions becomes simpler.

The big magic happens when we realize that, for many complicated tasks, there need not be a unique "real object" at all --- and that the "shadow problem" we must solve can be extended into higher dimensional spaces in many different ways. So we can choose a lifting strategy deliberately to make our shadow problem turn into something easier to solve. We thus take advantage of the freedom that comes from going to a richer space of possibilities.

This method of "lifting" applies to numerous mathematical puzzles, and analogous ideas can be used on challenges in other areas of life. Often our struggles can be eased if we see them as special cases within larger situations --- and if we can escape from our immediate contexts to view ourselves in that larger universe....
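One small mathematical illustration in this spirit: to sum cos(kθ) for k = 0 to n-1, lift each term into the complex plane --- a richer space with more structure --- where cos(kθ) = Re(e^{ikθ}) and the whole sum collapses into a geometric series with a closed form; then project back down by taking the real part. (A sketch of the idea, not the only way to do it.)

```python
import cmath
import math

def cosine_sum_direct(theta, n):
    """Sum cos(k*theta) for k = 0..n-1 the hard way, term by term."""
    return sum(math.cos(k * theta) for k in range(n))

def cosine_sum_lifted(theta, n):
    """Lift into the complex plane: cos(k*theta) = Re(e^{i k theta}),
    so the sum becomes a geometric series with ratio e^{i theta} and a
    closed form.  Project back down by taking the real part."""
    z = cmath.exp(1j * theta)
    if abs(z - 1) < 1e-12:       # theta near 0: every term is simply 1
        return float(n)
    return ((z ** n - 1) / (z - 1)).real

theta, n = 0.7, 1000
print(cosine_sum_direct(theta, n))   # n additions in the original space
print(cosine_sum_lifted(theta, n))   # one closed-form evaluation after lifting
```

The original real-valued problem has no obvious handle; the lifted complex-valued one is a textbook geometric series. The freedom to choose the lifting is the whole trick.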


- Monday, May 31, 1999 at 12:02:30 (EDT)


My Business

In Charles Dickens' A Christmas Carol, when Scrooge tells the ghost of his partner Marley "... you were always a good man of business ...", the response is:
"Business!" cried the Ghost, wringing his hands again. "Mankind was my business. The common welfare was my business; charity, mercy, forbearance, and benevolence were all my business. The dealings of my trade were but a drop of water in the comprehensive ocean of my business!"


- Sunday, May 30, 1999 at 08:02:40 (EDT)


Better, Faster, Cheaper --- Conflicting Constraints

A useful rule of thumb in software development (and many other areas of life) is "Good, On Time, Within Budget --- Pick Any Two!"

In other words, if you want a program to be good, to do what it promises, then you'll likely get it late or it will cost more than you expect. If you want the code delivered when promised, then you should anticipate loss of functionality or great expense. And if you focus on cost control, then either performance or schedule will slip.

A shorter but less grammatical form of the same concept is, "You want it bad, you'll get it bad!"


- Saturday, May 29, 1999 at 14:17:46 (EDT)


Knowing, Choosing, Doing

A few decades ago, Henry Veatch wrote Rational Man, a retelling and exploration of Aristotle's Nicomachean Ethics. One of Veatch's key themes revolved around the trio of knowledge, choice, and action: Omit any of these three, and personal merit evaporates. Without knowledge of the good, one can choose and achieve it solely by chance. Without free choice, knowing and doing good deserve no credit. Without actual achievement of good, knowledge and choice are reduced to feckless (albeit nice) intentions. The result is a tragic failure rather than a successful, virtuous life.


- Saturday, May 29, 1999 at 06:55:40 (EDT)


Miss Judgment

Why do we make mistakes, misjudgments, errors of decision?


- Thursday, May 27, 1999 at 21:32:40 (EDT)


Genius, Computational Complexity, and the Limits of Intelligence

Mark Kac (1914-1984), in the introduction to his autobiography Enigmas of Chance, writes:
In science, as well as in other fields of human endeavor, there are two kinds of geniuses: the "ordinary" and the "magicians". An ordinary genius is a fellow that you and I would be just as good as, if we were only many times better. There is no mystery as to how his mind works. Once we understand what he has done, we feel certain that we, too, could have done it. It is different with the magicians. They are, to use mathematical jargon, in the orthogonal complement of where we are and the working of their minds is for all intents and purposes incomprehensible. Even after we understand what they have done, the process by which they have done it is completely dark. They seldom, if ever, have students because they cannot be emulated and it must be terribly frustrating for a brilliant young mind to cope with the mysterious ways in which the magician's mind works.

But is this true? Or do the "magicians", as Kac terms them, simply tap into different sources of power on some deep level? Do they perhaps exploit different ways of thinking --- highly nonverbal mental functions which they can't explain but which could, by study and emulation, be applied by others? And on the other hand, are the "ordinary" geniuses actually extraordinary in ways which tend to escape notice?

How easy is it to be "many times better" than we ordinarily are? Might there not be computationally complex processes going on, beneath the areas of thought open to introspection? Processes which grow exponentially hard, so that speeding up the mental processor's clock rate could never actually succeed in performing them within practical amounts of time?

What are the ultimate limits of "intelligence"? If one had arbitrarily huge memory and similarly huge computational speeds, would one's "effective IQ" still max out somewhere around 1000 or so? Are there essential mental processes, perhaps involving pattern-matching and function evaluation, that are like the classic NP tasks of computer science --- problems which brute force cannot efficiently solve?
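To see why raw speed can't beat exponential growth, consider subset-sum, a classic NP-complete problem: does some subset of a list of numbers add up to a given target? Brute force must in the worst case examine all 2^n subsets, so a machine twice as fast copes with only one extra input number in the same time --- a sketch:

```python
from itertools import combinations

def subset_sum_brute_force(numbers, target):
    """Try every subset of `numbers`: 2^n candidates for n numbers.
    Returns (a subset summing to target, subsets tried), or (None, 2^n)."""
    n = len(numbers)
    tried = 0
    for r in range(n + 1):
        for combo in combinations(numbers, r):
            tried += 1
            if sum(combo) == target:
                return combo, tried
    return None, tried

solution, tried = subset_sum_brute_force([3, 34, 4, 12, 5, 2], 9)
print(solution, tried)   # some subset summing to 9, found after `tried` attempts

# The search space is 2^n: each extra element doubles the worst case,
# so doubling the processor's clock rate buys only one more element.
for n in (10, 20, 30, 40):
    print(n, 2 ** n)
```

The same pattern --- a search space that doubles with each increment of problem size --- is what makes "just think many times faster" an inadequate theory of genius, if any essential mental processes are of this kind.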


- Tuesday, May 25, 1999 at 17:47:58 (EDT)


^zhurnal Volume 0.01

Since the file of April-May 1999 entries in this online journal has gotten a bit (~80kB) long, I've moved that material to ^zhurnal v.01.


- Tuesday, May 25, 1999 at 12:24:36 (EDT)


