In this article, I suggest that active information filtering technologies may help us approach this goal for both textual and multimedia information. I also pursue this concept further, discussing the introduction of augmented perception and Enhanced Reality (ER), and share some observations and predictions about the transformations in people's perception of the world and of themselves in the course of technological progress.
My friend Gary Bean suggested a possible implementation of "cliché translators" that would explicitly convey the meaning of a sentence that is known to the translator, but not necessarily to the reader. For example, the phrase "that's an interesting idea" might be translated as "I have serious reservations about this". In the reverse operation, words and phrases could be replaced with politically correct euphemisms.
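To make the idea concrete, here is a minimal sketch of such a cliché translator in Python; the phrase table is my own invention, and a real translator would need context sensitivity and a far larger lexicon.

    import re

    # Illustrative phrase table: surface cliché -> intended meaning.
    # These entries are invented examples, not part of any real lexicon.
    CLICHE_TABLE = {
        "that's an interesting idea": "I have serious reservations about this",
        "with all due respect": "I am about to disagree with you",
        "let's touch base later": "I would prefer not to discuss this now",
    }

    def translate_cliches(text: str) -> str:
        """Replace known clichés with their explicit meanings (case-insensitive)."""
        for cliche, meaning in CLICHE_TABLE.items():
            text = re.sub(re.escape(cliche), meaning, text, flags=re.IGNORECASE)
        return text

    print(translate_cliches("Well, that's an interesting idea."))
    # -> "Well, I have serious reservations about this."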
After the recent Communications Decency Act, Robert Carr developed a remarkable "HexOn Exon" program that allows the user to convert obscene words in messages into the names of the senators responsible for this Act, and vice versa. Besides presenting a humorous attempt to bypass the new obscenity censorship, this program demonstrates that allocating both responsibilities and rights for the contents of a message among multiple authoring and filtering agencies may not be easy.
Translation between various dialects and jargons, though difficult, should still take less effort than translation between different natural languages, since only part of the message semantics has to be processed. Good translation filters would give "linguistic minorities" -- speakers of languages ranging from Pig Latin to E-Prime and Loglan -- a chance to practice their own languages while communicating with the rest of the world.
Some jargon filters have already been developed: you can enjoy reading Ible-Bay, the Pig Latin version of the Bible, or use the Dialectizer program to convert your English texts to Fudd or Cockney.
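For the simplest of these jargon filters, a few lines of code are enough. Here is a rough Pig Latin converter; several competing rule sets exist, so treat these particular rules as an assumption.

    VOWELS = set("aeiouAEIOU")

    def pig_latin_word(word: str) -> str:
        """Convert one word to Pig Latin: leading consonants move to the end, plus 'ay'."""
        if not word.isalpha():
            return word
        if word[0] in VOWELS:
            return word + "yay"   # one common convention for vowel-initial words
        for i, ch in enumerate(word):
            if ch in VOWELS:
                return word[i:] + word[:i] + "ay"
        return word + "ay"        # no vowels at all

    def pig_latin(text: str) -> str:
        return " ".join(pig_latin_word(w) for w in text.split())

    print(pig_latin("the bible"))  # -> "ethay iblebay"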
Such translation agents would allow rapid linguistic and cultural diversification, to the point where the language you use to communicate with the world could diverge from everybody else's as far as the requirement of general semantic compatibility may allow. It is interesting that today's HTML Guide already calls for the "divorce of content from representation", suggesting that you should focus on what you want to convey rather than on how people will perceive it.
Some of these features will require full-scale future artificial intelligence, such as the "sentient translation programs" described by Vernor Vinge in "A Fire Upon The Deep". In the meantime, they could be successfully emulated by human agents.
Surprisingly, even translations between different measurement systems can be difficult. For example, your automatic translator might have trouble converting such expressions as "a few inches away", "the temperature will be in the 80s" or "a duck with two feet". A proficient translator might be able to convey the original meaning, but the best approach would be to write the message in a general semantic form that stores the information explicitly, indicating, in the examples above, where the terms refer to measurements, whether you insist on keeping the original system, and the intended degree of precision. As long as the language is expressive enough, it is suitable for the task -- and this requirement is purely semantic; symbol sets, syntax, grammar and everything else can differ dramatically.
A translation agent would interactively convert natural-language texts to this semantic lingua franca and interpret them back according to a given user profile. It could also reveal additional parts of the document depending on the user's interests, competence in the field, and access privileges.
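As a minimal illustration, such an explicit semantic form for measurements, and its rendering for a particular user profile, might look roughly like this; the field names and profile format are my own assumptions, not an established standard.

    from dataclasses import dataclass

    @dataclass
    class Quantity:
        value: float          # magnitude in the author's original unit
        unit: str             # "inch", "degF", ...
        precision: str        # "exact", "approximate", "idiomatic"
        keep_original: bool   # author insists on the original unit system

    def render(q: Quantity, profile: dict) -> str:
        """Interpret the semantic form for a particular reader."""
        if q.keep_original or profile.get("units") != "metric":
            prefix = "" if q.precision == "exact" else "about "
            return f"{prefix}{q.value:g} {q.unit}"
        # Only a couple of illustrative conversions.
        if q.unit == "inch":
            return f"about {q.value * 2.54:.0f} cm"
        if q.unit == "degF":
            return f"around {(q.value - 32) * 5 / 9:.0f} °C"
        return f"{q.value:g} {q.unit}"

    print(render(Quantity(3, "inch", "approximate", False), {"units": "metric"}))
    # -> "about 8 cm"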
Currently, we can structure our mental images any way we want, so long as we can translate them into a common language. This has led to relatively stable standardized languages and a great variability among minds. Likewise, intelligent software translators could let us make our languages as liberated as our minds and push the communication standards beyond our biological bodies. (This really just means a further exosomatic expansion of the human functional body, but the liberation still goes beyond the traditional "skin-encapsulated" interpretation of personal identity.)
So will there be more variety or more standardization? Most likely both, as flexible translation will help integrate knowledge domains currently isolated by linguistic and terminological barriers, and at the same time will protect linguistically adventurous intellectual excursions from the danger of losing contact with the semantic mainland. Intelligent translators could facilitate the development of more comprehensive semantic architectures that would make the global body of knowledge at the same time more diverse and more coherent.
Information may be stored and transmitted in the general semantic form. With time, an increasing number of applications can be expected to use the enriched representation as their native mode of operation. Client translation software will provide an emulation of the traditional world of "natural" human interactions for as long as humans remain around to appreciate it. The semantic richness of the system will gradually shift away from biological brains, just as data storage, transmission and computation have in recent history. Humans will enjoy growing benefits from the system they launched, but at the expense of understanding the increasingly complex "details" of its internal structure, and for a while will keep playing an important role in guiding the flow of events. Later, after the functional entities liberate themselves from the realm of flesh that gave birth to them, the involvement of humans in the evolutionary process will be of little interest to anybody except humans themselves.
It also seems possible to augment human senses with transparent external information pre-processors. For example, if your audio/video filters notice an object of potential interest that fails to differ from its signal environment enough to catch your attention, the filters can amplify or otherwise differentiate (move, flash, change pitch, etc.) the signal momentarily, to give you enough time to focus on the object, but not enough to realize what triggered your attention. In effect, you would instantly see your name in a text or find Waldo in a puzzle as easily as you would notice a source of loud noise or a bright light.
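A toy sketch of such a momentary attention booster might look like this, assuming the filter already maintains an interest score and a salience estimate for every object in view; all names and thresholds here are hypothetical.

    def boost_salience(objects, interest_threshold=0.7, salience_threshold=0.5,
                       boost=0.4, duration_ms=150):
        """Briefly amplify objects that are interesting but too faint to be noticed.

        `objects` is a list of dicts with hypothetical 'interest' and 'salience'
        scores in [0, 1]; the boost is short enough that the viewer notices the
        object rather than the manipulation itself.
        """
        adjustments = []
        for obj in objects:
            if obj["interest"] >= interest_threshold and obj["salience"] < salience_threshold:
                adjustments.append({"id": obj["id"],
                                    "extra_gain": boost,
                                    "duration_ms": duration_ms})
        return adjustments

    scene = [{"id": "your-name-in-text", "interest": 0.9, "salience": 0.2},
             {"id": "background-ad", "interest": 0.1, "salience": 0.8}]
    print(boost_salience(scene))  # only the interesting, low-salience object gets boosted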
While such filters do not have to be transparent, they may be a way to provide a comfortable "natural" feeling of augmented perception for the next few generations of humans, until the forthcoming integration of technological and neural processing systems makes such kludgy patches obsolete.
Some non-transparent filters can already be found in military applications. Called "target enhancements", they allow military personnel to see the enemy's tanks and radars nicely outlined and blinking.
More advanced filtering techniques could put consistent dynamic edits into the perceived world.
Volume controls could sharpen your senses by allowing you to adjust the level of the signal or zoom in on small or distant objects.
Calibration tools could expand the effective spectral range of your perception by changing the frequency of the signal to allow you to hear ultrasound or perceive X-rays and radiowaves as visible light.
Conversions between different types of signals may allow you, for example, to "see" noise as fog while enjoying quiet, or convert radar readings from decelerating pedestrians in front of you into images of red brake lights on their backs.
Artificial annotations to perceived images would add text tags with names and descriptions to chosen objects, append warning labels with a skull and crossbones to boxes that emit too much radiation, and surround angry people with red auras (serving as a "cold reading" aid for wanna-be psychics).
Reality filters may help you filter all signals coming from the world the way your favorite mail reader filters your messages, based on your stated preferences or advice from your peers (a rough sketch of such a filter follows below). With such filters you may choose to see only the objects that are worthy of your attention, and completely remove useless and annoying sounds and images (such as advertisements) from your view.
Perception utilities would give you additional information in a familiar way -- project clocks, thermometers, weather maps, and your current EKG readings upon [the image of] the wall in front of you, or honk a virtual horn every time a car approaches you from behind. They could also build on existing techniques that present us with recordings of the past and forecasts of the future to help people develop an immersive trans-temporal perception of reality.
"World improvement" enhancements could paint things in new colors, put smiles on faces, "babify" figures of your incompetent colleagues, change night into day, erase shadows and improve landscapes.
Finally, completely artificial additions could project northern lights, meteorites, and supernovas upon your view of the sky, or populate it with flying toasters, virtualize and superimpose on the image of the real world your favorite mythical characters and imaginary companions, and provide other educational and recreational functions.
I would call the resulting image of the world Enhanced Reality (ER).
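To make the mail-filter analogy concrete, here is the promised minimal rule-based sketch of a reality filter; the object attributes, rule format and actions are assumptions chosen purely for illustration.

    # Each perceived object carries a few hypothetical attributes.
    # Rules decide whether to pass, hide, or annotate it before display.
    RULES = [
        {"match": {"category": "advertisement"}, "action": "hide"},
        {"match": {"radiation": "high"}, "action": "annotate", "label": "skull and crossbones"},
        {"match": {"mood": "angry"}, "action": "annotate", "label": "red aura"},
    ]

    def apply_reality_filter(objects, rules=RULES):
        visible = []
        for obj in objects:
            action, label = "pass", None
            for rule in rules:
                if all(obj.get(k) == v for k, v in rule["match"].items()):
                    action, label = rule["action"], rule.get("label")
                    break
            if action == "hide":
                continue
            if action == "annotate":
                obj = {**obj, "annotation": label}
            visible.append(obj)
        return visible

    scene = [{"name": "billboard", "category": "advertisement"},
             {"name": "crate", "radiation": "high"},
             {"name": "neighbor", "mood": "angry"}]
    print(apply_reality_filter(scene))  # billboard hidden, crate and neighbor annotated

In practice the rules would, of course, come from your stated preferences or from peer advice, just as mail filtering rules do.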
Some of the interface enhancements can be made common, temporarily or permanently, for large communities of people. This would allow people to interact with each other using, and referring to, the ER extensions as if they were parts of the real world, thus elevating the ER entities from individual perceptions to parts of shared, if not objective, reality. Some such enhancements could follow existing metaphors. A person who has a reputation as a liar could appear to have a long nose. Entering a high-crime area, people may see the sky darken and hear distant funeral music. Changes in global political and economic situations with possible effects on some ethnic groups may be translated into bolts of thunder and other culture-specific omens.
Other extensions could be highly individualized. It is already possible, for example, to create personalized traffic signs. Driving by the same place, an interstate truck driver may see a "no go" sign projected on his windshield, while the driver of the car behind him will see a sign saying "Bob's house - next right". More advanced technologies may create personalized interactive illusions that would be loosely based on reality and propelled by real events, but would show the world the way a person wants to see it. The transparency of the illusion would not be important, since people are already quite good at hiding bitter or boring truths behind a veil of pleasant illusions. Many people even believe that their entirely artificial creations (such as music or temples) either "reveal" the truth of the world to them or, in some sense, "are" the truth. Morphing unwashed Marines into singing angels or naked beauties would help people reconcile their dreams with their observations.
Personal illusions should be built with some caution, however. The joy of seeing the desired color on the traffic light in front of you may not be worth the risk. As a general rule, the more control you want over the environment, the more careful you should be in your choice of filters. However, if the system creating your personal world also takes care of all your real needs, you may feel free to live in any fairy tale you like.
In many cases, ER may provide us with more true-to-life information than our "natural" perception of reality. It could edit out mirages, show us our "real" images in a virtual mirror instead of the mirror images provided by the real mirror, or allow us to see into -- and through -- solid objects. It could also show us many interesting phenomena that human sensors cannot perceive directly. Giving us knowledge of these things has been a historical role of science. Merging the obtained knowledge with our sensory perception of the world may be the most important task of Enhanced Reality.
With time, a growing proportion of objects of interest to an intelligent observer will be entirely artificial, with no inherent "natural" appearance. Image modification techniques may then be incorporated into integrated object designs that would simultaneously interface with a multitude of alternative intelligent representation agents.
The implementation of ER extensions would vary depending on the available technology. At the beginning, it could be a computer terminal, later a headset, then a brain implant. The implant can be internal in more than just the physical sense, as it can actually post- and re-process information supplied by biological sensors and other parts of the brain. The important thing here is not the relative functional position of the extension, but the fact of intentional redesign of perception mechanisms -- a prelude to the era of comprehensive conscious self-engineering. The ultimate effects of these processes may appear quite confusing to humans, as the emergence of things like personalized reality and fluid distributed identity could undermine their fundamental biological and cultural assumptions regarding the world and the self. The resulting "identity" architectures will form the kernel of trans-human civilization.
The advancement of human input processing beyond the skin boundary is not a novel phenomenon. In the audiovisual domain, it started with simple optics and hearing aids centuries ago and is now making rapid progress with all kinds of recording, transmitting and processing machinery. With such development, "live" contacts with "raw world" data might ultimately become rare, and could be considered inefficient, unsafe and even illegal. This may seem an exaggeration, but this is exactly what has already happened during the last few thousand years to our perception of a more traditional resource -- food. Using nothing but one's bare hands, teeth and stomach for obtaining, breaking up, and consuming naturally grown food is quite unpopular in all modern societies for these very reasons. In the visual domain, contacts with objects that have not been intentionally enhanced for one's perception (in other words, looking at real, unmanipulated, unpainted objects without glasses) are still rather frequent for many people, but the enhancement process keeps gaining momentum, in both usage time and the intensity of the enhancements.
The rapid progress of technological artifacts and the still-stagnant construction of the human body create an imperative for a continuing gradual migration of all aspects of human functionality beyond the boundaries of the biological body, with human identity becoming increasingly exosomatic (non-biological).
Of course, unless you are forced to "wear glasses", you can take them off any time and see things the way they "are" (i.e., processed only by your biological sensors and filters, which were developed by the blind evolutionary process for jungle conditions and obsolete purposes). In my experience, though, people readily abandon the "truth" of implementation details for the convenience of the interface and, as long as the picture looks pleasing, have little interest in peeking into the binary or HTML source code or studying the nature of the physical processes they observe -- or listening to those who understand them. Most likely, your favorite window into the real world is already not the one with the curtains -- it's the one with the controls...
Many people seem already quite comfortable with the thought that their environment might have been purposefully created by somebody smarter than themselves, so the construction of ER shouldn't come to them as a great epistemological shock.
Canonization of chief ER engineers (probably well-deserved) could help these people combine their split concepts of technology and spirituality into the long-sought-after "holistic worldview".
Technological advances may provide us with the informational, restrictive and emotional functions of pain without most of the above handicaps. Indicators of important, critical, or abnormal bodily functions could be put on output devices such as a monitor, a watch or even your skin. It is possible to restrain your body slightly when, for example, your blood pressure climbs too high, and to emulate other restrictive effects of pain. It may also be possible to create "artificial symptoms" of some diseases. For example, showing a patient a graph demonstrating the spectral divergence of his alpha and delta rhythms, which may indicate some neurotransmitter deficiency, may not be very useful. It would be much better to give the patient a diagnostic device that is easier to understand and more "natural-looking":
- "Hello, Doctor, my toenails turned
green!"
- "Don't worry, it's a typical arti-symptom of the
XYZ
condition, I'm sending you the pills".
(Actually, a watch may serve a lot better than toenails as a display.)
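A hedged sketch of such an arti-symptom mapping follows; the diagnostic measure, the thresholds and the displayed "symptoms" are all invented for illustration.

    def arti_symptom(rhythm_divergence: float) -> str:
        """Map a hypothetical alpha/delta divergence score in [0, 1] to a
        'natural-looking' symptom shown on the patient's watch (or virtual toenails)."""
        if rhythm_divergence < 0.3:
            return "normal skin tone"
        if rhythm_divergence < 0.7:
            return "toenails tinted yellow -- consider a check-up"
        return "toenails turned green -- contact your doctor"

    print(arti_symptom(0.8))  # -> "toenails turned green -- contact your doctor"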
Sometimes, direct feedback generating real pain may be implemented for patients who do not feel it when their activities approach dangerous thresholds. For example, a non-removable, variable-strength earclip that would cause increasing pain in your ear when your blood sugar climbs too high may dissuade you from having that extra piece of cake. A similar clip could make a baby cry out for help every time its EKG readings go bad. A more ethical solution with improved communication could be provided by attaching this clip to the doctor's ear. "I feel your pain..."
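The earclip's feedback rule could be as simple as the following sketch, assuming a glucose reading in mg/dL and a pain scale from 0 to 10; the threshold and scaling are illustrative only, not medical advice.

    def earclip_pain(glucose_mg_dl: float, threshold: float = 140.0,
                     gain: float = 0.1, max_pain: float = 10.0) -> float:
        """Pain level grows linearly once blood sugar exceeds the threshold."""
        excess = max(0.0, glucose_mg_dl - threshold)
        return min(max_pain, gain * excess)

    for reading in (120, 150, 220):
        print(reading, "->", earclip_pain(reading))
    # 120 -> 0.0, 150 -> 1.0, 220 -> 8.0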
Similar techniques could be used to connect inputs from external systems to human biological receptors. Wiring exosomatic sensors to our nervous systems may allow us to better feel our environments, and start perceiving our technological extensions as parts of our bodies (which they already are). On the other hand, poor performance of your company could now give you a real pain in the neck...
With future waves of structural change dissolving the borders between self and environment, the term may generalize into Harmonization of Structural Interrelations. Still later, when interfaces become so smooth and sophisticated that human-based intelligence will hardly be able to tell where the system core ends and interface begins, we'd better just call it Improvement of Everything. Immediately after that, we will lose any understanding of what is going on and what constitutes an improvement, and should not try to name things anymore. Not that it would matter much if we did...
My readers often tell me that if any version of Enhanced, Augmented or Annotated Reality gets implemented, it might be abused by people trying to manipulate other people's views and force perceptions upon them. I realize that all human history is filled with people's attempts to trick themselves and others into looking at the world through the wrong glasses, and new powerful technologies may become very dangerous tools if placed in the wrong hands, so adding safeguards to such projects seems more than important.
Unfortunately though, a description of any idea sufficiently complex for protecting the world from such disasters wouldn't fit into an article that my contemporaries would take time to read. So I just do what I can -- clean my glasses and observe the events -- and share some impressions.
If you are interested in my more general and long-term views on evolution of intelligence, personhood and identity, you can access my essays on Cyborgs and Mind Age and other resources related to these topics via my Web home page at http://www.lucifer.com/~sasha/home.html.
I am grateful to Ron Hale-Evans, Bill Alexander, and Gary Bean for inspiration and discussions that helped me shape this text.