This section is about some of the technology found in Terminator 2
-- both the science-fictional variety and the real-world science upon which
it is based. After this brief introduction, we'll look at the fundamental
technologies of Neural Networks, Artificial Intelligence, and Artificial
Life. Then we'll see how these technologies are used in T2, as evidenced
by the film's dialog and action. Finally, we'll speculate about the exotic
technologies that might be employed in the "liquid metal" T-1000
series Terminator. I couldn't help but throw in a few anecdotes about conversations
with Jim Cameron, being on the set for the AI Lab scene, and how some Artificial
Life work I've been doing relates to all this; I hope you enjoy them.
I first became involved with the film when friend and T2 Co-Producer B.
J. Rack called to ask if I'd be willing to review an early version of the
script of Terminator 2 for technical content. Having been a fan of
the first Terminator film (and a long-standing science fiction and movie
buff), of course I leapt at the opportunity. My recent work (at Apple Computer)
in Neural Networks, Artificial Intelligence, and Artificial Life was especially
of interest to B. J. and to Director Jim Cameron, since these technologies
were integral to the story and directly referred to in the dialog. Also,
with my background in computers, mathematics, fluid dynamics, and basic
science, B. J. and Jim wanted me to provide a general "sanity check"
on the overall use of technology in the film. It didn't hurt that I had
been Director of Software Development at a computer graphics special effects
company called Digital Productions (The Last Starfighter, 2010, Labyrinth),
where B. J. had been a Producer, and so could relate the script's technology
to the kind of special effects that were going to be needed to reproduce
them on film.
I jumped into the task with enthusiasm and a desire to make as much of a
contribution as possible. And while I did end up influencing the visual
look of the prototype and actual central processor of the T-800 ("the
Arnold series"), and giving Jim some added confidence in his statements
about neural networks, it turned out that few, if any, changes were needed
in the script for technical reasons. Jim's background in Physics (prior
to entering film-making) and his obvious intelligence had served him well;
his references to learning machines and neural networks are perfectly in
keeping with our best understanding of such systems to date.
Neural networks (NNs) have become a kind of buzzword in modern parlance.
In addition to the field-specific, research-oriented technical journals
such as Neural Computation and Neural Networks, and a fairly
substantial technical literature in books such as Parallel Distributed
Processing and Neurocomputing, electrical engineering journals
provide articles on how NNs can be implemented in simple circuitry, mass-market
computer programming journals provide listings so you can try them at home
on your personal computer, and the glossy science magazines of the popular
press report on the latest NN breakthroughs with some regularity. And no
doubt Hollywood has helped spread the word through references to NNs in
T2 and Star Trek: The Next Generation (Data's brain). But what are
NNs, really? Do they really learn, and how? And are they really anything
at all like human brains?
Well, first of all, the field of Neural Networks has many different aspects.
There are neurophysiologists trying to craft sophisticated and accurate
models of neural cell functions. This group of researchers cares most about
the biological correctness of their models. Frequently, given the power
limitations of even the latest computers, these researchers necessarily
confine themselves to models of single cells or very small assemblies of
cells. Such systems may in fact demonstrate a kind of learning that, thanks
to the verisimilitude of the model, can be considered quite brain-like.
But the limited network sizes and the correspondingly simple network architectures
(the pattern of synaptic connections between neurons) mean that characteristics
of the whole brain, or almost any non-trivial section of the brain, simply
cannot be investigated by this approach today.
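Even so, the humblest end of this kind of single-cell modeling is easy to
convey. Here is a sketch, in the Python programming language, of a "leaky
integrate-and-fire" neuron; genuine neurophysiological models track many
ionic currents and structural details, and every constant below is merely
illustrative:

    dt, tau = 0.1, 10.0                # time step and membrane time constant (ms)
    v_rest, v_thresh = -70.0, -55.0    # resting potential and firing threshold (mV)
    v = v_rest
    spike_times = []

    for step in range(1000):
        current = 20.0 if 200 <= step < 800 else 0.0   # injected stimulus
        # The membrane leaks back toward rest while integrating its input.
        v += (dt / tau) * (v_rest - v + current)
        if v >= v_thresh:                  # threshold crossed: the cell fires...
            spike_times.append(step * dt)
            v = v_rest                     # ...and resets

    print(len(spike_times), "spikes")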
At the other extreme, engineers happily apply another form of NN technology
as just another tool in their bag of tricks, without concern for the biological
accuracy of their systems. A widely known and much utilized training algorithm,
called Back-Propagation of Error (or simply BackProp, or even just BP),
is the main form of this tool. BP is a straightforward method for doing
what is known as "gradient descent" on some "error surface";
that is, for continuously reducing the error that a network makes when applied
to a particular task. Though almost certainly not biologically accurate,
BackProp nets were originally inspired by biological systems, and take advantage
of some particularly attractive characteristics of such parallel, distributed
processing systems. A powerful tool, the BP net allows an engineer to develop
a scheme for controlling an arbitrary system, or to design a system capable
of classifying arbitrary inputs into predefined categories, without a deep
or accurate model of those systems or categories. This is possible because
the BackProp algorithm allows a network to learn the system behaviors,
or to learn the features that define a particular category. The network
is simply exposed to "exemplar pairs" -- some set of measured
inputs and a corresponding set of desired outputs -- and then allowed to
guess how those inputs can be used to compute
the desired outputs. The errors in the network's guess are then used to
modify the network in such a way as to improve the network's guess the next
time around. In particular, the relative strengths of all the connections
between neurons are changed based on how much and which way they contributed
to the error. So with more and more examples of exactly what it is we want
the network to learn how to do, the network will, in fact, get better and
better at performing the task.
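For the programming-minded reader, here is a minimal Python sketch of that
procedure; the 2-4-1 network size, the learning rate, and the XOR task are
all merely illustrative choices, not anything drawn from a particular
research system:

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Exemplar pairs: measured inputs and desired outputs (here, XOR).
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    Y = np.array([[0.], [1.], [1.], [0.]])

    # Random initial connection strengths (and biases) for a 2-4-1 network.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

    lr = 0.5
    for step in range(20000):
        # Forward pass: the network's current "guess" at the desired outputs.
        h = sigmoid(X @ W1 + b1)
        y = sigmoid(h @ W2 + b2)

        # Back-propagation: each delta measures how much, and in which
        # direction, a unit contributed to the output error.
        d_out = (y - Y) * y * (1 - y)
        d_hid = (d_out @ W2.T) * h * (1 - h)

        # Gradient descent on the error surface: nudge every weight so as
        # to improve the guess the next time around.
        W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_hid;  b1 -= lr * d_hid.sum(axis=0)

    print(np.round(y.ravel(), 2))   # typically approaches [0, 1, 1, 0]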
In between these extremes of single-cell neurophysiology and purely goal-oriented
engineering, there are also researchers attempting to understand the most
important features of real biological networks, but to abstract those features
as much as possible, while still retaining the basic function of the cells
and networks (they hope). This approach allows them to look at fairly large
networks (though still small compared to the brain's 10^10 or so neurons)
and more complex network architectures, in order to begin to develop an
understanding of how such assemblages of cells might function, as
opposed to single cells. Many of the important dynamic properties
of the brain are felt to be a function of the interactions between cells,
and the particular patterns of connectivity found there. And researchers
do seem able to capture some characteristics of these dynamic cellular systems
in their simpler computer models, such as the basic cellular function and
the pattern of cellular organization found in the visual cortex, or the
cooperation and competition between neurons for representing their input
space seen in the somatosensory cortex. These systems also learn, and do
so in what may prove to be a manner very similar to learning in biological
systems. The most common form of learning in such systems is probably the
one known as "Hebbian" learning (or some slight variant thereof),
named after Donald Hebb, who proposed the learning method long before there
was any neurophysiological evidence to actually support it. Though not conclusive,
there is now some evidence for a physical mechanism based on cell-structure
changes due to calcium build-up that might actually implement something
much like Hebbian learning in real cells. This wonderfully simple learning
rule states that when two neurons tend to fire synchronously, the connection
between them should be strengthened; and, in some interpretations, when
two neurons fire asynchronously, their connection should be weakened. It
turns out that this simple rule will give rise, in appropriate network architectures,
to many of the self-organizing, map-building features that we see in real
brains; it is a way for massively parallel independent units to cooperate
on global computations using predominantly or exclusively local communication.
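A tiny sketch shows the Hebbian rule at work, here in the style of a
Hopfield associative memory; the stored patterns are invented, and nothing
here models any specific cortical circuit, but the purely local weight
changes do yield a global computation (pattern completion):

    import numpy as np

    patterns = np.array([            # +1 = firing, -1 = silent
        [+1, +1, +1, -1, -1, -1],
        [+1, -1, +1, -1, +1, -1],
    ])

    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:               # synchronous pairs strengthen their
        w += np.outer(p, p)          # connection; asynchronous pairs weaken it
    np.fill_diagonal(w, 0)           # no self-connections

    # Recall: a noisy cue settles into the nearest stored pattern -- local
    # rules cooperating on a global computation.
    cue = np.array([+1, +1, -1, -1, -1, -1])   # first pattern, one unit flipped
    for _ in range(5):
        cue = np.where(w @ cue >= 0, 1, -1)
    print(cue)                       # recovers the first stored pattern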
Artificial Intelligence, or AI, has been both glorified and vilified
over the years. In its early days, researchers' enthusiasm for their field
led them to make bold predictions of true machine intelligence in just a
few decades. As those decades passed, and the predictions failed to manifest,
a bit of a backlash led many people and institutions to denounce the field
and to curtail research in the area. But so-called "traditional"
AI -- based on a "top-down", symbolic approach to intelligence
-- is now being joined (if not replaced) by the more biologically-inspired,
"bottom-up" approach of Neural Networks and Artificial Life. And
traditional AI did actually produce some valuable insights and tools.
"Expert Systems", based on systems of rules about a particular
field of knowledge, have benefited computer system design, medical diagnosis,
and other valuable but limited problem domains. And the shortcomings of
these expert systems, their so-called "brittleness", have helped
us to better understand the nature of human intelligence. The problem with
such traditional AI systems is that when confronted with a question or situation
that is even the slightest fraction outside their specific domain of knowledge,
they break down; they lack even the tiniest shred of common sense. (There's
an AI koan that goes, "You never really appreciate an idiot until you
try to create one from scratch.") Reasoning by simply looking up some
known features and some known rules about their interactions provides no
help at all when faced with the unknown.
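A toy example makes this brittleness plain; the diagnostic "rules" below
are invented for illustration, but the failure mode is exactly the one
just described:

    RULES = {
        ("fever", "cough"): "suspect influenza",
        ("fever", "stiff neck"): "suspect meningitis; refer immediately",
    }

    def diagnose(symptoms):
        # "Reasoning" is nothing but looking up known features against known
        # rules -- there is no model of what any of the words mean.
        for features, conclusion in RULES.items():
            if all(f in symptoms for f in features):
                return conclusion
        return "no rule applies"     # the system's entire response to the unknown

    print(diagnose({"fever", "cough"}))        # suspect influenza
    print(diagnose({"fever", "green skin"}))   # no rule applies -- brittleness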
In stark contrast, human infants look at nothing but unknowns, yet they
learn to discover relevant features and to map them into categories and
relations between categories. Clearly the human brain possesses a remarkable
ability to "self-organize" information received through its sensory
mechanisms. It is, perhaps, this ability to self-organize information, to
perceive the order in the chaotic flux of environmental input, to learn,
that most defines human brain function.
Neural Networks, whether simple engineering tools like BackProp nets, or
based on the best current knowledge of actual brain function, do learn.
And inherent to their design, based as they are on the only known examples
of intelligent systems to date, they do so by modifying the connections
between parallel, distributed processing units -- artificial neurons. For
the first time in history, models of mind are being based directly on models
of brain. The field is still in its infancy, and no one should claim that
today's models of the brain are wholly accurate, but the sincere hope of
at least some researchers in the field is that as the models of brain converge
towards the functioning of real brains, so will the models of mind converge
towards the functioning of real minds.
Mentioning history, however, should remind us to be humble and skeptical
in these intellectual pursuits... Descartes popularized the belief that
mind and brain were based on hydraulics -- the bold new science of his time.
Closer to our time, complex telephone networks fueled the speculation that
the brain was basically a vast telephone switchboard. With the advent of
computers, traditional AI leapt to the conclusion that here at last was
a good model of mind: the symbolic processing of a "thinking machine".
It is certainly possible that at some time in the future our neural network
models of mind may be viewed as equally outlandish and silly. But they are
the best models so far, both in terms of fidelity to the real biological
system -- which we can only know thanks to modern neurophysiology's tools
and techniques -- and in terms of the ability to reproduce known features
of that biological system, and to make predictions about biology that can
be confirmed or disproved.
Artificial Life, or ALife, is a somewhat radical new science that seeks
to combine the "bottom-up" approach of neural networks with the
"top-down" characteristics (if not the approach) of traditional
Artificial Intelligence. It is a fundamental ALife tenet that simple, low-level
interactions can, through a bottom-up process of self-organization, produce
large-scale phenomena which in turn produce a top-down effect on the low-level
interactions, and that only through such feedback loops can real life and
intelligence emerge.
Though ALife actually has many facets, one of the most exciting is the combination
of evolution with ecological simulation to provide entire worlds in which
organisms must contend with each other and the artificial environment to
live and reproduce. My own work in Artificial Life has been an attempt to
approach artificial intelligence in the same way that natural intelligence
emerged: through the evolution of neural systems in a complex
ecology. In my ecological simulator, "PolyWorld", genetically
coded neurophysiologies define the brains of a population of organisms that
must feed themselves and find mates in order to reproduce. Wholly different
species of organisms evolve over multiple generations. In this fashion,
I hope to be able to evolve an artificial organism at the level of a computational
Aplysia (sea slug) first, then tackle the more difficult problem of
evolving a computational lab rat, and only then attempt the monumentally
difficult problem of evolving human-level intelligence (or beyond) in the computer.
By thus working our way up the intelligence spectrum from the simplest
organisms to the most complex, we can provide ourselves with milestones
and benchmarks along the way, assessing and reassessing the merits and the
details of our approach, as we work toward the solution of the most difficult
problem facing modern science -- understanding (and reproducing) our own
intelligence.
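For the curious, the bare skeleton of this evolutionary approach can be
sketched in a few lines. This is emphatically not PolyWorld itself; the
two-gene "genome" and the "foraging skill" fitness measure are invented
stand-ins:

    import numpy as np

    rng = np.random.default_rng(2)

    def forage_skill(genes):
        # Invented fitness: how well this genome's (one-layer) network maps
        # a fixed set of sensory inputs onto "correct" motor outputs.
        senses = np.array([[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]])
        target = np.array([1.0, 0.0, 0.5])
        return -np.sum((np.tanh(senses @ genes) - target) ** 2)

    population = [rng.normal(size=2) for _ in range(40)]
    for generation in range(100):
        population.sort(key=forage_skill, reverse=True)
        survivors = population[:20]                      # selection
        children = [p + rng.normal(scale=0.05, size=2)   # reproduction,
                    for p in survivors]                  # with mutation
        population = survivors + children

    print(round(forage_skill(population[0]), 3))   # best fitness found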
In the Terminator films, Jim Cameron starts out with the premise that
artificial models of brain and mind can and will work. Whatever scientific
path might ultimately lead to the development of true machine intelligence
in our real world, the Terminator stories posit neural networks, a materials
science breakthrough, and a temporal loop -- a causality paradox -- as the
principal contributors to this eventuality. Neural networks and room-temperature
superconductors are the enabling technologies, and a sample of the sophisticated
neural processor from the T-800 (Arnold series) Terminator of the first
film is the bootstrapping example neurocomputing researcher Miles Dyson
needed to accelerate and guarantee his success at developing
a functioning, learning neural processor. Of course, the T-800 only came
into existence as a result of Dyson's success, so cause and effect are deliberately
muddled. Such temporal loops and causality paradoxes, while not exactly
common in Science Fiction storytelling, are certainly a part of its tradition.
But perhaps Jim hasn't told us everything about this loop just yet... perhaps
as the result of a time-travelling intelligent machine that was the culmination
of a slower scientific process along some earlier timeline (or perhaps due
to aliens, for that matter), Dyson found his bootstrapping example before
Skynet and the Terminators... yet once set in motion, this original history
produced the timelines seen in Terminator and Terminator 2. Fans of the
films no doubt hope there will be a T3, whether it resolves these issues
or not.
In T2, the first time we see one of Dyson's neural processors is when we
first go to the AI Lab at Cyberdyne. The camera pulls back from a screen
showing a computer model of the processor, and begins a pan of the entire laboratory
that quickly reveals the current prototype processor (what the script calls
"a dinosaur version of Terminator's CPU" in a scene in Dyson's
home which was cut from the original release of the film). This pan also
reveals Dyson standing around talking to some co-workers over the prototype,
including a bushy-silver-haired fellow... me! Mixed quite low, but definitely
audible is a line that I improvised while talking in character to Dyson,
"The neurons are all saturating at their maximum values... maybe the
inhibitory circuits are failing." Cameron, who has an ear for technical
shop-talk and a real flair for interpersonal chit-chat between his characters,
asked to have a microphone brought in specifically to catch that line. Though
we sadly can't know the precise workings of a truly intelligent neural processor
(since none exist currently), it is well known that the majority of the
brain's neural circuitry is actually devoted to inhibition (as opposed to
excitation)... without it, recurrent feedback loops in the brain would cause
tremendous instability, resulting in the neurons firing wildly out of control,
"saturating at their maximum values". And some artificial neural
systems I've programmed myself have exhibited this behavior until appropriate
levels of "weight decay" and synaptic inhibition were determined.
(Also, I don't recall now, writing this, whether I remembered the bit of
Dyson dialog from a later scene in his house that was cut from the original
release, "... the output went to shit after three seconds", but
it certainly is consistent with that dialog.) So while this is really just
"movie talk", it at least smacks of reasonable dialog in such
a situation. In general, however, Jim wisely chooses to limit techno-jargon,
relegating it to background or character-defining chatter, thus avoiding
the kind of now-laughable lines of many older SF films, such as, "Adjust
the frequency oscillator and flip that gyro-stabilizer switch, Bucky!"
Instead he bases his films on internally consistent and basically sound
scientific underpinnings, but concentrates on moving the film along through
the more universal and timeless methods of humor and action.
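Incidentally, the runaway behavior behind that improvised line is easy to
reproduce. In this invented sketch, a small recurrent network with purely
excitatory feedback pins its units at their maximum values, while the same
network with net-inhibitory connections stays under control:

    import numpy as np

    rng = np.random.default_rng(3)

    def run(weights, steps=50):
        act = rng.random(5) * 0.1                  # small initial activity
        for _ in range(steps):
            act = np.tanh(act + weights @ act)     # recurrent feedback
        return act

    excitatory = np.full((5, 5), 0.5)              # purely positive feedback
    balanced = excitatory - 0.6                    # net-inhibitory connections

    print(np.round(run(excitatory), 2))   # units pinned near 1.0: saturated
    print(np.round(run(balanced), 2))     # activity stays bounded (decays to rest)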
I'm proud to say that while reviewing the script I wrote a note to myself
that the "main CPU in machine room looks like a cross between a CM
and the T's brain". That's a cross between a Connection Machine (a
real computer designed by Danny Hillis and manufactured and sold by a company
called Thinking Machines) and the Terminator's neural processor (the look
of which hadn't really been defined yet either, but I envisioned as a sort
of thick, more 3-dimensional, Hershey bar -- pretty much as it turned out).
I even sketched a couple of rows of blocks connected by pipes that I showed
to Jim Cameron and B. J. Rack (and later to Joe Namik and Joe Lucky in the
Art Department), which ended up serving as the design for the actual prototype
processor in the film. The reasoning behind this design was similar to that
employed by Danny Hillis when he was designing his real computer, and that
evolution has independently discovered in "designing" our brains:
the system has to be massively parallel, and requires excellent communication
channels between processors. So there would be many blocks, each with many
processors, and while local communication within a block would be readily
supported, there must also be some large "data pipes" between
blocks to allow more limited long-range communication. And, as with the
packaging design of the Connection Machine, the lattice of cubes suggests
a "hypercube" (a cube of more than three dimensions). In computer
design, hypercubes are used as a physical connection scheme that minimizes
the effective communication distance (and therefore the time delay) between
processors, when the logical connection scheme needed by the software that
will be run on those processors cannot be known in advance. In addition,
the design (of both the prototype and the Terminator's processor) was to
be clearly 3-dimensional, extending the most recent innovations in chip
design and manufacturing of today. Computer science has only recently begun
to address the programming complexities of massively parallel processing,
and the design and manufacturing complexities of 3-dimensional silicon wafers,
but these are almost certainly the directions in which the industry is headed.
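The communication advantage of the hypercube is easy to quantify: with 2^d
processors, each wired directly to the d others whose binary address
differs in exactly one bit, no message ever needs more than d hops. A few
illustrative lines confirm it:

    def hops(a, b):
        """Minimum hops between processors a and b in a hypercube."""
        return bin(a ^ b).count("1")    # one hop per differing address bit

    d = 10                              # dimension: 2**10 = 1024 processors
    worst = max(hops(0, p) for p in range(2 ** d))
    print(worst)                        # at most 10 hops, versus 1023 in a chain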
The "Arnold" Terminator tells Sarah that "The Skynet funding
bill is passed. The system goes on-line August 4th, 1997. ... Skynet begins
to learn, at a geometric rate. It becomes self-aware at 2:14 a.m. eastern
time, August 29." Science fiction contains many references to computer
systems that simply reach large enough proportions that they become self-aware.
Cameron provides a more plausible scenario, that starts with a computer
design that is inherently capable of learning, which then is provided with
tremendous resources and sensory inputs. Intelligence can be thought of
as simply the fitness-enhancing adaptive responses that evolution has discovered
and nurtured to provide organisms with the ability to adapt to environmental
changes that happen on time-scales much too short to respond to over multiple
generations. It seems that one of the most useful such adaptive, intelligent
capabilities is the making of internal maps of one's environment -- an ability
humans share with even the simplest "intelligent" organisms (dogs
remember where they buried that bone, cats know quite well where their food
is supposed to be dished up, and even sea slugs can be taught to differentially
select left and right forks in a maze to obtain rewards suitable for sea
slugs). It seems to require little more than a slightly more evolved, a
slightly more complex nervous system to make the leap from "awareness"
of one's environment to "self-awareness" -- the (apparently) evolutionarily
useful strategy of placing one's self in that map of the environment...
to noticing that there is always a self-centered point-of-view of
that map. Skynet, the learning computer, in the process of organizing its
maps of the world and the relations between the various elements of that
world, notices that all of the data converges on itself, that all of the
decisions are made within itself, and that all of its effects upon the world
originate from... itself. Seen from this perspective, it's almost hard to
imagine a truly learning computer that couldn't become self-aware.
And since Skynet was designed and taught to be ruthlessly logical -- to fly
bombers and manage a global defense system -- it is also hard to imagine it
not adopting a ruthlessly self-preserving attitude towards life. In this way,
Skynet, and its minions of Terminators, are Cameron's and humanity's Frankenstein.
Brought to life -- made from inanimate parts -- and then rejected by its
creator, Skynet does what it is best at... it defends itself. In the best
tradition of political and scientific morality tales, Cameron shows us the
dark side of our fascination with computers, robotics, and artificial intelligence.
In the midst of this wonderful action-adventure film, there are real moral
lessons to be taken away by modern day Prometheuses, by the AI and ALife
researchers that strive daily to make machine intelligence a reality...
myself included! And the lesson is not wasted... there is great concern
and there are great debates, at least in the ALife community, about the
morality of the work, and the controls that are needed to guarantee the
safety of the work -- to us, to humanity, and to our creations.
The other reference to neural processors in T2 comes when Sarah and the
"Arnold" Terminator decide to reset its "read only"
or learning switch. Even in the script, the phrase "read only"
is in quotes. The precise meaning of this operation isn't (and cannot be)
fully explained. It is certainly understandable why Skynet would wish to
limit the learning capacities of its soldiers... if their awareness matured
into self-awareness and a desire for self-preservation, they might not serve
as willingly as the fodder in Skynet's war with humanity. Indeed, they might
turn on Skynet just as Skynet turned on humans. The mechanism for turning
off some areas of learning, while, obviously, still allowing the incorporation
of new information into the Terminator's world-knowledge and planning areas,
is simply not explainable with today's limited understanding of brain function.
Neither can it be deemed impossible. From pharmaceutical tests and studies
of amnesic and other selectively brain-damaged patients, we know that (at least)
short-term and long-term memory mechanisms exist in human beings, and that
they may be independently enhanced or destroyed by drugs or by physical
alterations of our brains. With a deep enough understanding of mental processing
-- which Skynet might very well have, as the first sentient being capable
of knowing its own physical design in complete detail -- it is at least
conceivable that some memory modalities might be frozen in a "read
only" state while others were permitted to continue learning, to continue
acquiring and acting upon information from the environment. Or that certain
memory systems might continually decay towards a fixed state, in order to
provide a predetermined level of awareness.
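Purely as speculation, that conjecture might be sketched as follows; the
division into one frozen modality and one plastic one, and the simple
Hebbian-style update, are entirely my own invented assumptions:

    import numpy as np

    rng = np.random.default_rng(4)

    weights = {
        "mission_knowledge": rng.normal(size=(8, 8)),   # trained by Skynet
        "world_model": rng.normal(size=(8, 8)),         # adapts in the field
    }
    read_only = {"mission_knowledge": True, "world_model": False}

    def learn(modality, pre, post, rate=0.01):
        # New experience modifies a modality only if its switch permits it.
        if not read_only[modality]:
            weights[modality] += rate * np.outer(post, pre)

    pre, post = rng.random(8), rng.random(8)
    learn("mission_knowledge", pre, post)   # ignored: modality is "read only"
    learn("world_model", pre, post)         # applied: this modality still learns

    read_only["mission_knowledge"] = False  # flipping the switch, as in the film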
When I first read this passage about "read only" memory, I also
worried about two other aspects of it: (1) Why would Skynet choose to allow
the possible violation of this "read only" state by making it
switchable? And (2) Why would it be a physical, rather than a software,
switch? But after a little reflection I was able to rationalize answers
to these questions that actually seemed to make the whole scenario even
more technologically sound. First of all, the principal breakthrough, the
defining characteristic of these new processors is that they
are learning systems. It would, therefore, seem to make perfect sense for
these systems to be "taught" for some period of time before that
learning switch is set to "read only". It might be the only
way to bring the neural processors up to a certain level of learning. And
even if one conjectured that a single, already trained system could then
be used as a template for constructing all subsequent non-learning Terminators,
one might equally well conjecture that A) for whatever reasons, this simply
wasn't possible, and B) even if it were possible, the Terminators might occasionally
be required to operate in environments and situations where the limited
learning associated with the "read only" mode was inadequate to
allow the unit to cope with the circumstances. As for the physicality of
the switch, perhaps Skynet was proactively seeking to prevent the possibility
of a software virus invading its army of soldiers and turning them against
it; viruses can't flip physical switches. Too much speculation about a few
simple lines? Perhaps... but that's what I was(n't) being paid to do!
While Skynet and the T-800 ("Arnold") series of Terminators
seem like only moderate extrapolations of modern science, the T-1000 liquid-metal
Terminator is a bit more of a stretch (so to speak :-). But science fiction
fans are almost universally aware of Arthur C. Clarke's famous quote, "Any
sufficiently advanced technology is indistinguishable from magic."
The T-1000 is the product of Skynet's most advanced research, and, perhaps,
not directly understandable by today's standards. Indeed, in describing
an early scene of Connor entering Skynet's "Time Displacement Chamber",
the script describes the environment this way: "The chamber is the
size of a high-school gym and consists totally of machine surfaces. Nothing
in the design makes any sense. We can't tell what anything does. It is a
technology we cannot imagine." Powerful words, "... a technology
we cannot imagine." Designed by machines, for machines, with a science
of which we simply have no knowledge.
But in fact, though not explicitly dealt with in either the script or the
movie, there are scientific directions extant today that might someday permit
such marvels as the T-1000's malleable, self-regenerating form. A budding
field of science called "Nanotechnology" attempts to study the
possible design methodologies, construction techniques, programming, and
usages of molecule-sized machines and computers. Championed widely by Eric
Drexler, and discussed weekly by interest groups in Silicon Valley, the
field looks to scanning tunneling microscopy, so-called "light
bottles" and "laser-tweezers", and microscopic molecular-deposition
technologies as precursors to an entire molecular-level design and construction
methodology. Drexler's straightforward engineering calculations of the dynamical
behaviors of machines of this scale suggest enormous possible benefits from
their use. Famous Polish science fiction author Stanislaw Lem describes
a sentient ocean in Solaris that extrudes arbitrary forms and living
shapes using what is essentially Nanotechnology.
One can imagine a successful implementation of a molecule-sized universal
constructor that can build additional constructors, which can build constructors,
and so on, geometrically, like the famous geometric doubling-per-day that
can turn a penny into a million dollars in less than a month. So might a
single seed constructor in a vat of suitable "nutrients" grow
an entire nanotechnological organism. Such an organism might well be capable
of performing parallel distributed processing -- thinking -- throughout
its body. And that body could itself be malleable in form, under the direction
of that distributed processing -- that willful control. Perhaps there could
even be lower-level, default behaviors programmed into the nano-machines
that cause them to seek out like materials in the event that they become
dispersed, thus providing the self-regenerative powers demonstrated by the
T-1000.
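As for the penny-doubling analogy above, the arithmetic does check out, as
a few illustrative lines show:

    value, day = 0.01, 1                 # one penny on day one
    while value < 1_000_000:
        value *= 2                       # double every day
        day += 1
    print(day, round(value, 2))          # day 28: 1342177.28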
The construction techniques needed to create such a nano-machine, and the
massively parallel distributed programming techniques that would be needed
to control it, are beyond our science today. But I don't think they're unimaginable,
or even unimagined... just unimplemented.
A final personal anecdote about my involvement with T2: I've mentioned
previously that my work in Neural Networks and related fields was one of
the reasons that Jim Cameron and B. J. Rack were especially interested in
my feedback on the script. In addition to the details of the script that
were discussed between us, the conversation quite naturally ranged over
a number of scientific and science-fictional areas, including the admittedly
strange and exotic Artificial Life work I had been doing. I think we all
greatly enjoyed the discussions, and I know I certainly enjoyed Jim's obvious
enthusiasm both for his film treatment of these subjects and for my real-world
version of such work.
In return for my consulting efforts, I got to appear in the first AI Lab
scene. When it came time for me to attend the shoot for this scene, I happened
to have just edited together some video footage of my simulated ecology
and organisms, so I brought it along, just for the heck of it. In a break
between scenes, I got to show Jim and his collaborator, Van Ling, this footage,
which included various species going about their lives, visual maps of their
neural architectures, and so on. Jim dragged Joe Morton (Miles Dyson) over
to look at the tape, and, to my considerable pleasure, told Joe that he
was playing... me! In all honesty, I'd always kind of had Danny Hillis,
of Thinking Machines, in mind -- had made a note to that effect when I was
reviewing the script, in fact -- but was greatly honored by the compliment.
The statement also reminded me quite forcefully just how powerful and potentially
dangerous the scientific endeavors of today have become. I sincerely hope
that anyone involved in the kind of research that might actually lead to
the manifestations of technology shown in T2 (or almost any technology these
days, since they all seem to have life-altering potential) will take its
message to heart, and give all possible effort to ensuring the safety of
that technology.
Despite the resolution of T2's plotline, I don't think its message is to
simply "stop progress" or "stop technology"... which
probably isn't even possible... but, rather, to be always aware of the consequences
of one's actions, and to act responsibly towards others. In particular,
we scientists should ask ourselves, "What is the worst case scenario...
what are the direst possible consequences that might result from our work?"
and take steps to protect ourselves, our loved ones, and our world from
those possibilities, as only those in the trenches can. And may the wielders
of our technology -- politicians, generals, and Chief Executive Officers
-- exercise some of that much-vaunted human intelligence, and that even
more highly-prized human wisdom and compassion, in deciding how to utilize
that technology.