Dr. John Antoine Labadie
Coordinator of Digital Studios in the Art Department
and Director of the Media Integration Center
of The University of North Carolina at Pembroke

Some thoughts on IT in 2001: In the arts ... and beyond

For better or worse, computers and their kin have undeniably revolutionized every area of inquiry in the Postmodern world, from astronomy to visual art. Moreover, the computer, and related information technology (IT), has, in many and sometimes profound ways, made the lives of some of us more comfortable, convenient, and complete. For those interested in locating the current “edge” in the visual arts, some form of digital work is surely the direction to pursue. Whatever one’s life pursuit, it is no stretch of the imagination to suggest that computing, in one form or another, has had an unprecedented impact on cultures around the globe.

Certainly, a large number of you reading this essay work in industries, hold jobs, or use technologies that are inventions of the Postmodern era. Overall, high technology has, in some very significant ways, served some of us very well as we move into the new millennium. But can the tables be turned in this new century? Have our computers (collectively) really served us well, or have “they” hijacked our global destiny? Or, as many conjecture, is the “truth” of this matter somewhere in between? Some have even declared that we are but a few years from the end of the human era, that point at which (a la “The Terminator”) a set of runaway technologies commandeers the future and drives us all off who-knows-where.

What factual evidence of our potential digital future do we have? Consider the scenario widely reported in 1997, after IBM’s “Deep Blue” computer defeated world chess champion Garry Kasparov. After his defeat, Mr. Kasparov suggested he had (perhaps) met God: “I met something I couldn’t explain ... people turn to religion to explain things like that.” To some, such a perspective is certainly hyperbole, as the matter (or machine) in question was in no way a cosmic mystery. In his match against the IBM machine, the chess champion had played a 3,000-pound bundle of more than 500 processors that considered as many as 200 million moves a second in order to beat him. On the biological side, the very human Kasparov, evaluating at a rate of perhaps two or three moves a second, won one game and tied three in the six-game contest. Even so, the final outcome of such competitions, over time, does not seem in much doubt.

Moreover, consider Moore’s Law (stated in 1965 by Gordon Moore, who went on to co-found Intel Corporation), which in its popular form posits that computer performance doubles roughly every 18 months. This means that today’s notebooks are exponentially more capable than one of the granddaddies of all electronic digital processors, the Atanasoff-Berry Computer. Built between 1939 and 1942 by John V. Atanasoff and Clifford Berry, the station-wagon-size machine had a storage capacity of less than 400 characters and performed roughly one operation every 15 seconds. Some 50-plus years later, existing experimental machines are capable of “teraflops” performance: 1 trillion floating-point operations per second. Such processors seem to outrun even Moore’s Law. Machines 1,000 times faster are on the digital horizon: petaflops are anticipated within five years, based on smaller semiconductor technologies now considered feasible.
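As rough arithmetic on that popular 18-month formulation (a back-of-the-envelope reading, not a precise forecast): a 1,000-fold jump from teraflops to petaflops amounts to about ten doublings, which at Moore’s nominal pace would take roughly fifteen years, not five.

```latex
% Popular form of Moore's Law: performance doubles every 1.5 years.
% Time needed for a 1,000x improvement at that nominal pace:
\[
  \frac{P(t)}{P_0} = 2^{\,t/1.5}
  \quad\Longrightarrow\quad
  t = 1.5\,\log_2\!\frac{P(t)}{P_0}
    = 1.5\,\log_2 1000 \approx 15\ \text{years}
\]
```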

It is beyond conjecture that restive technologies have always been a force in human history. From ancient times and the introduction of pyrotechnologies, to the ugly realities of nuclear energy, humans have invented things that are difficult to control. Now, in the twenty-first century, our current high technology seems to have reoriented human culture -- again. For example, in Silicon Valley, where smaller equals faster, nanotechnology -- engineering on the molecular level -- is pushing things even further down the structural ladder. In “Engines of Creation: The Coming Era of Nanotechnology,” K. E. Drexler explains how we will eventually be able to create almost any arrangement of atoms we desire. In this way, nanotechnology will further reduce the size (and increase the speed) of computers. Drexler predicts nano-supercomputers perhaps even smaller than grains of sand. We are then asked to imagine swarms of nano-scale cell-repair cruisers carefully and deliberately moving through a human body, identifying faulty cells and repairing abnormal (or aging?) DNA. And what then?

Already, consumer-grade products using digital technologies have become much smarter, in the sense that a machine’s awareness of its role and tasks is more precise and effective: fuzzy-logic washing machines can determine how much water to let in based on how dirty your clothes are, and “shape memory” eyeglass frames return to their original form when run under hot water. Even so, at least in 2001, it still takes human intelligence to conceptualize such clever uses for innovative materials and technologies. Some futurists have conjectured that sometime before 2035 a computer somewhere will be nudged into consciousness and suddenly “wake up” to find it is capable of performing the processes now exclusively the domain of the human brain. That computer will have found computing’s Holy Grail of awareness: a condition we term “intelligence.” Should such a moment come to pass ... well, after that many things will quickly get very interesting.
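For readers curious what such “fuzzy” decision-making amounts to in practice, here is a toy sketch in Python; the membership curves, water amounts, and sensor scale are all invented for illustration, not drawn from any actual appliance.

```python
# Toy sketch of a fuzzy-logic water controller.
# All numbers here (membership curves, liters per rule) are illustrative inventions.

def memberships(dirt: float) -> dict:
    """Map a dirt-sensor reading in [0, 1] to degrees of membership in fuzzy sets."""
    return {
        "light":  max(0.0, 1.0 - 2.0 * dirt),            # barely soiled
        "medium": max(0.0, 1.0 - 2.0 * abs(dirt - 0.5)),  # moderately soiled
        "heavy":  max(0.0, 2.0 * dirt - 1.0),             # heavily soiled
    }

def water_liters(dirt: float) -> float:
    """Blend each rule's water amount, weighted by how strongly its set applies."""
    amounts = {"light": 20.0, "medium": 35.0, "heavy": 50.0}
    m = memberships(dirt)
    return sum(m[k] * amounts[k] for k in m) / sum(m.values())

print(water_liters(0.2))  # a lightly soiled load -> about 26 liters
print(water_liters(0.9))  # a heavily soiled load -> about 47 liters
```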

In this regard, it has also been suggested that such “smart” machines will be reproductive ... creating smarter machines, which will build yet smarter ones, ad infinitum. In such a scenario technological progress would then explode, swelling superexponentially almost overnight toward what seers have called the “Singularity.” The term comes from mathematics, where it names the point at which a function goes infinite; it was popularized in the science fiction novels of Vernor Vinge. He thinks of it this way: if we can make machines as smart as humans, it is not difficult to imagine that we could make, or cause to be made, machines that are smarter. After that we could plunge into an incomprehensible era of “posthumanity.”

On the other hand, many futurists are not worried about the concept of the Singularity because “techno-prophecy” is almost always wrong. A less nihilistic seer, Edward Tenner, in “Why Things Bite Back,” has suggested that almost nothing regarding technology has been predicted with any accuracy whatsoever. In many cases, an innovation that solves one problem winds up creating another: the plastic soda bottle, which, when discarded, lasts practically forever; or high-tech improvements in football gear designed to prevent injuries, which instead allow for more aggressive play, which in turn causes injuries to increase. One can only imagine what engineers were (not) thinking while inventing the leaf-blower or the jet-ski -- not to mention our old restive friend nuclear energy, with its apocalyptic possibilities.

So what does all this high technology mean, and where does it lead us? We simply don’t know. We don’t know whether technology will eventually convey us to the Singularity or more safely house some of us in the very sanitary suburbs of the future. We don’t know whether to regard it as inherently benign, treacherous, or transparent. And one might ponder the entire issue in light of the fact that perhaps 90 percent of the world’s people have no telephone. Which side of the technological divide is the more disadvantaged remains to be seen.

But what of computers and art? A prime question might be: what exactly is "digital art," and by what criteria shall it be judged? Well, a digital work is, by definition, composed on or translated by or through a binary computer. A digital work is, collectively, a carefully defined set of "0s" and "1s" which have been used to encode data into files that can contain, for example, text, audio, or visual information. A 35mm slide, once run through a film scanner (such as a Nikon Coolscan), can be "digitized" according to the inclinations of the equipment operator and then immediately printed on a "photo-realistic" inkjet printer at a level of quality to rival that of most any camera store. But is this product a photograph? Good question. The computer is a polymorph of tools and electronic databases. A computer can isolate and conjoin, expand and limit, remember and forget, tempt and deny. But whatever a computer is, or can do, the human factor of the operator is still very much, at least for the moment, a part of the output-producing equation.
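As a minimal sketch of what that set of "0s" and "1s" looks like in practice -- written here in Python with the Pillow imaging library, and with a hypothetical file name standing in for a scanned slide -- every pixel of a digitized image is simply a handful of small numbers, each of which is itself a string of bits:

```python
# A digitized image is just numbers (and, underneath, bits).
# Requires the Pillow library; "scanned_slide.png" is a placeholder file name.
from PIL import Image

img = Image.open("scanned_slide.png").convert("RGB")
r, g, b = img.getpixel((0, 0))           # one pixel = a (red, green, blue) triple, 0-255 each
print(f"first pixel: R={r} G={g} B={b}")
print(f"R as bits:   {r:08b}")           # the same red value written as eight 0s and 1s
```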

Some of the earliest examples of what we now know as computer art date to 1968, when Lillian Schwartz seized a light pen and began to draw. Over a long career, Ms. Schwartz has combined the roles of computer artist and groundbreaking user of high technology. Her works, in many media, are found in major museum collections throughout the world. In her own words, Schwartz has described her relationship with IT and the arts:

“A computer can have (be!) an unlimited supply of brushes, colors, textures, shadings, and rules of perspective and three-dimensional geometry. It can be used to design a work of art or to control a kinetic sculpture. It can reproduce an image of a famous Renaissance painting and record that image to video, film, facsimile, a plotter or a printer. ... I see the computer as part of the natural evolution of an artist's tools. It can facilitate areas of traditional drudgery in a manner analogous to the Renaissance masters applying their cartoons to frescoes. It can help develop an artist's ‘eye,’ through which the creative act is channeled into the work of art. Not because it can think, but because it can be told how to calculate in a logical fashion, the computer can also be used for art research and analysis. In other words, computers can be made to accommodate the entire breadth of artistic thought. But even that broad potential does not make the computer more than a tool -- it only shows that the computer can be a variety of tools.” (Lillian Schwartz and Laurens R. Schwartz, The Computer Artist's Handbook, W. W. Norton, 1992)

With the possibilities offered by computers (Apple, PC, Linux), peripherals (scanners, printers, digital cameras), and software (for example, Adobe Photoshop, Corel/MetaCreations Painter, Adobe Illustrator) available in the early twenty-first century, those persons competent with these various new and ever-evolving technologies can make and/or alter images in ways never before available -- to anyone. Many artists and art critics agree that once visual information is converted into binary code (those 0s and 1s), it is possible to produce original images that are as visually and aesthetically stunning as those produced through any other medium. Digital imaging is simply another way to communicate visually and artistically, and perhaps one of the means to carry us into brave new worlds in the arts.
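To make the point concrete, here is a small illustration (again in Python with the Pillow library; the file names are placeholders) of how freely pixel values can be reworked once an image exists as binary data; the specific adjustments are arbitrary examples, not anyone's recommended workflow:

```python
# Once an image is binary data, altering it is a matter of arithmetic on pixel values.
# Requires the Pillow library; file names are placeholders.
from PIL import Image, ImageOps

original = Image.open("scanned_slide.png").convert("RGB")
inverted = ImageOps.invert(original)                          # flip every channel: v -> 255 - v
brighter = original.point(lambda v: min(255, int(v * 1.1)))   # raise each value by 10 percent
inverted.save("slide_inverted.png")
brighter.save("slide_brighter.png")
```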

In my estimation, the core question is not at all "What do we do about computer/digital work?" Perhaps it might more productively be phrased, "How can digital work be incorporated into what we already know how to do?" Take printmaking, for example. As printing and publication standards are already in use (and evolving) in the graphic design industry, the question becomes "How does digital work benefit both the producers of the art in question and the consumers of said work?" We should make no mistake about what digital technology has already wrought: a new era is here, and it IS revolutionary, unprecedented, and marvelously powerful. Even so, digital technology, taken as a whole, is nothing more, or less, than the tool(s) we make of it.

Certainly all artwork is interpretive, and digital imaging is the first truly new and unprecedented interpretive tool set available to us since the introduction of chemical photography in the 1830s. New endeavors in any medium should be unhindered by critical disapproval that derides works simply because they were accomplished by means with no historical precedent. As N. Negroponte (co-founder of the MIT Media Lab) has written in his best seller “Being Digital,” the most facile future users of digital technologies will “live digitally.” As to the impact on human creative output, whatever the form, the proverbial “jury” is still “out” ... and perhaps has yet even to be seated. My suggestion: embrace the possibilities now available to us and enjoy where the digital can take us.