I started thinking a few days ago about how the digitization and networking of so much of what we hold dear has changed things. It seems that in my lifetime I will witness the end of books (or most of them), of physical copies of recorded music, and probably of physical newspapers too. Stuff that’s been around for a thousand years will be gone in my lifetime! Film-based photography is already pretty much a remnant, an art form, an artisanal craft used by fine artists and high-end fashion photographers. And writing letters to one another? On paper? And dropping them in the mailbox? When was the last time I wrote and mailed a physical letter? All those academic books filled with Auden’s or Jane Austen’s letters — it’s hard to imagine a collection of someone’s text messages, tweets and e-mails. I suspect that television as we know it will be gone soon as well. All right, film and recorded music have only been around a hundred or so years, but books! All of which led me back to wondering — how did this get started?
The Internet and the World Wide Web, as much of a boon as they have been, have left an awful lot of wreckage in their wake, beyond just the elimination of those formats we thought of as eternal and the industries that produced and delivered them. Interconnectivity has facilitated the loss of privacy for many of the world’s citizens. We’ve been liberated and captured at the same time. I sense that the loss of privacy — which to me seems inevitable — is part and parcel of the whole project. You can’t have efficient search algorithms, cloud computing and the digitization of anything and everything and still expect to retain the anonymity of the past.
Security races to keep up, but I wonder if unlimited access and personal and corporate data security aren’t simply incompatible dreams. Maybe we just can’t have them both. Maybe we need to throw up our hands and give in. Stop resisting and surrender. Live totally and completely in public. The world would truly be the village that McLuhan predicted — a small town where everyone really does know your business. Maybe that would keep us honest, and push the realization that as custodians of the planet we really are all in this together.
This “creative destruction” began in the ’60s, as did many things that we now both love and regret, and it was initially a spinoff of a project funded by US military agencies. The military and the space agency popularized Velcro and gave us (I believe) cheap integrated circuits (i.e. gizmolandia), as well as the blowback that helped nurture the current mess in the Middle East, South America and Afghanistan. The Internet’s connection to the military, as much as I would love it to be a big secret conspiracy, seems a lot more benign than that. Mephistopheles came to Faust in the form of a poodle. After all…in some versions of the story, he cannot enter your house unbidden — you have to invite him in, like a vampire.
One man foresaw a global network before any such thing was close to being possible. J. C. R. Licklider (sounds like a character in a Coen bros movie!) envisioned, in a 1960 paper called “Man-Computer Symbiosis,” “A network of such [computers], connected to one another by wide-band communication lines…[which provided] the functions of present-day libraries together with anticipated advances in information storage and retrieval and [other] symbiotic functions.” [Source]
In other words, he saw it all coming.
Is this man the antichrist? Or merely a prophet?
In a weird coincidence, Licklider began his career studying psychoacoustics (more on that later), and wrote a paper called “Duplex Theory of Pitch Perception” in 1951 that forms the basis of contemporary concepts of how we perceive pitch, even though it sounds like it might be about two-story apartments with uneven floors. That the man who predicted a worldwide information exchange network was initially interested in how we perceive music is slightly uncanny.
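For the technically curious: the heart of the duplex theory is the idea that the ear finds pitch by comparing a sound against delayed copies of itself, a computation engineers call autocorrelation. Here is a minimal sketch of that idea in Python; the frame size and pitch range are my own toy parameters, not anything from Licklider’s paper.

```python
import numpy as np

def estimate_pitch(frame, sample_rate, fmin=50.0, fmax=500.0):
    """Estimate the fundamental frequency of a short audio frame, in Hz."""
    frame = frame - frame.mean()
    # Autocorrelation: how similar is the signal to delayed copies of itself?
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # The strongest peak at a plausible delay marks the pitch period.
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    best_lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / best_lag

# Quick check with a synthetic 220 Hz tone:
sr = 16000
t = np.arange(2048) / sr
print(estimate_pitch(np.sin(2 * np.pi * 220 * t), sr))  # ~220
```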
More about Licklider from Wikipedia:
“His ideas foretold of graphical computing, point-and-click interfaces, digital libraries, e-commerce, online banking, and software that would exist on a network and migrate wherever it was needed. He has been called ‘computing's Johnny Appleseed’ for having planted the seeds of computing in the digital age.”
Now, it’s been pointed out that he didn’t actually invent any of this stuff — he merely “planted the seed.” But often it seems that putting out the idea that something might be possible encourages others to actually make it possible. In a way, to imagine is to create.
In the ’50s, Licklider “worked on a Cold War project known as Semi Automatic Ground Environment (better known by its [weirdly appropriate] acronym ‘SAGE’) which was designed to create a computer-aided air defense system. The SAGE system included computers that collected and presented data to a human operator, who then chose the appropriate response. In 1957 he…conducted the first public demonstration of time-sharing,” [Source] which is when multiple parties can share the use of a single large computer. And in 1958, he became president of the Acoustical Society of America.
“He played a similar role in conceiving of and funding early networking research, most notably the ARPANET [acknowledged to be the predecessor to the Internet]. His 1968 paper on The Computer as a Communication Device predicts the use of computer networks to support communities of common interest and collaboration without regard to location.” [Source]
“Without regard to location”— the phrase resonates for me. It implies disincorporation — an out-of-body experience. In this case, it’s data that has no fixed place, no physical manifestation. But I sense it’s happening to us, too.
I had thought that the Internet began with the linking of some military computers in the Pentagon (ARPANET) in 1969, and that this experimental network was specifically designed so that its data could survive a nuclear attack. It turns out my hunch was wrong, although the military were indeed involved in funding the research. ARPANET (which Licklider was involved with) did give birth to internet protocols — how computers “talk” to one another — later in the 1970s, but it was not, it seems, all about securing secret data from the electromagnetic pulses associated with nuclear weapons.
Bob Taylor, the Pentagon official who was in charge of the Advanced Research Projects Agency Network (or ARPANET) program, insists that its purpose was not military, but scientific. Though we might take whatever the Pentagon says with a big grain of salt, he could be telling the truth. Larry Roberts, whom Taylor employed to build the network, states that ARPANET was never intended to link people or act as a communications and information facility. So the evolution into the Internet was completely unintentional, though Licklider foresaw it. ARPANET was primarily about finding a more efficient way of time-sharing.
Those were the days when computers looked like this:
They were extremely expensive, and there weren’t a lot of them, so many people, like my friend C’s brother, made a good living managing access to them. Time-sharing was a big issue. If, however, access could be accomplished remotely, through a network, then the efficiency of time-sharing could be increased. Time-sharing via these networks was focused on making it possible for research organizations (and the military) to use the processing power of other institutions’ computers when they had laborious calculations to do, or when someone else’s facility might do the job better.
Because this research (used to develop ARPANET) was government-funded, its use was restricted to the military and university research facilities — C’s brother couldn’t use it to create or enhance the commercial enterprise he had established to manage computer access, for example.
“During the 1980s, the connections expanded to more educational institutions, and even to a growing number of companies such as Digital Equipment Corporation and Hewlett-Packard, which were participating in research projects or providing services to those who were.” [Source]
We can see by the involvement of these companies that the line between non-commercial use and commercial and public access was already getting fuzzy.
“Several other branches of the U.S. government, the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), and the Department of Energy (DOE) became heavily involved in Internet research and started development of a successor to ARPANET. In the mid-1980s, all three of these branches developed the first Wide Area Networks based on TCP/IP.
“In 1984 the NSF…supported departments without such sophisticated network connections, using automated dial-up mail exchange. [For those who don’t remember or are too young, one used to access the Internet and send e-mail by modems that would “dial-up” using regular phone lines…a web page in this era would take many minutes to load; these were NOT the good old days in that sense.] This grew into the NSFNet backbone, established in 1986, and was intended to connect and provide access to a number of supercomputing centers established by the NSF.
“In 1992, Congress allowed commercial activity on NSFNet with the Scientific and Advanced-Technology Act, permitting NSFNet to interconnect with commercial networks. University users were outraged at the idea of noneducational use of their networks. Eventually, it was the commercial Internet service providers who brought prices low enough that junior colleges and other schools could afford to participate in the new arenas of education and research […and soon the rest of us].
“By 1990, ARPANET had been overtaken and replaced by newer networking technologies and the project came to a close.” [Source]
The mother, seed or egg that gave birth to the Internet was gone, and the floodgates had opened.
By the mid-’90s, access became easy enough that the commercialization of the Internet proceeded rapidly. I wondered to myself if the military kept a parallel World Wide Web, inaccessible to civilians, since they were so involved in the early stages of its development. They do, or did — it was called MILNET.
A quarter of the earth’s people now use the Internet and the World Wide Web. We don’t know how many use MILNET. Finland and France are about to make Internet access a right, like a legal right to a trial, free speech or health services (well, these rights exist in some countries). The Finns want everyone in their entire country to have broadband (5 Mbps) within a few years. (FYI, 5 Mbps allows streaming video like most of us see now, 10 Mbps would allow HD streaming video, and 100 Mbps, which the Finnish government proposes offering by 2015, would, well, increase not only the ease of access to information but interactivity on a level and with repercussions we can hardly imagine.)
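To put those speeds in perspective, a little back-of-the-envelope arithmetic; the file sizes below are my own rough assumptions, not figures from the Finnish proposal.

```python
def transfer_seconds(size_megabytes, link_mbps):
    """Seconds to move a file of a given size over a link of a given speed."""
    return size_megabytes * 8 / link_mbps   # 1 byte = 8 bits

album_mb, film_mb = 100, 1500               # rough MP3 album / SD film sizes
for mbps in (5, 10, 100):
    print(f"{mbps:>3} Mbps: album in {transfer_seconds(album_mb, mbps):5.0f}s,"
          f" film in {transfer_seconds(film_mb, mbps):5.0f}s")
```

At 5 Mbps a feature film is a forty-minute wait; at 100 Mbps it arrives in two.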
In the Meantime
While these networks were evolving, there were simultaneously a number of innovations and technological breakthroughs that allowed for the digitization of all sorts of media — the stuff that would soon be flying around those same networks.
The technology that allowed sound information (and soon all other information) to be digitized was largely developed by the phone companies. Bell Labs, the research division of AT&T, wanted to find more efficient and reliable ways of transmitting phone conversations. Phone lines up until that time were all analog, and with that technology the only way to squeeze more calls through a line was to roll off the high and low frequencies and shift each resulting lo-fi voice onto its own carrier wave, so the calls could run in parallel without interfering with one another — like terrestrial radio transmissions. (Engineers call this frequency-division multiplexing.) TV and radio communications had the same problems.
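For anyone who wants the parallel-waves trick spelled out, here is a toy simulation of frequency-division multiplexing in Python. The carrier frequencies and filter shapes are illustrative choices of mine, not actual Bell System specifications.

```python
import numpy as np
from scipy.signal import butter, filtfilt

sr = 48000
t = np.arange(sr) / sr                        # one second of "line" time

def lowpass(x, cutoff_hz):
    b, a = butter(4, cutoff_hz / (sr / 2))
    return filtfilt(b, a, x)

# Two "voices", band-limited to ~3.4 kHz the way phone audio was.
voice_a = lowpass(np.random.randn(sr), 3400)
voice_b = lowpass(np.random.randn(sr), 3400)

# Shift each voice onto its own carrier and sum them onto one shared line.
line = (voice_a * np.cos(2 * np.pi * 8000 * t) +
        voice_b * np.cos(2 * np.pi * 16000 * t))

# At the far end, multiply by the matching carrier and low-pass to recover.
recovered_a = 2 * lowpass(line * np.cos(2 * np.pi * 8000 * t), 3400)
print(np.corrcoef(voice_a, recovered_a)[0, 1])   # very close to 1.0
```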
Bell Labs was huge, and they had branches in many states, most of which are closed now. They invented the transistor and the semiconductors that made the integrated circuits in our tiny devices possible, they developed the laser — the list goes on and on. Their scientists won a lot of Nobel prizes.
When Bell Labs figured out how to digitize sound — to, in effect, sample a sound wave and slice it into tiny bits in a way that was not prohibitively expensive and that still left the human voice recognizable — they applied it to long distance calls, switchers and all manner of phone technology, allowing more calls to be made simultaneously, especially considering the limitations imposed by underwater cables. Much of the research regarding what makes a sound understandable (like a voice, in AT&T’s case) involves applying lessons from the science of psychoacoustics — how the brain perceives sound in all its aspects. We’re back to Licklider!
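Here is roughly what that sampling and slicing looks like in code. Telephone digitization settled on 8,000 samples per second at 8 bits each; real phone systems used mu-law companding, which I have simplified here to uniform rounding.

```python
import numpy as np

sample_rate = 8000                      # samples per second
bits = 8
levels = 2 ** bits                      # 256 distinct amplitude values

t = np.arange(sample_rate) / sample_rate
wave = 0.8 * np.sin(2 * np.pi * 440 * t)    # one second of a 440 Hz tone

# Quantize: map each sample in [-1, 1) to an integer code 0..255 and back.
codes = np.clip(((wave + 1) / 2 * levels).astype(int), 0, levels - 1)
digital = codes / levels * 2 - 1

print("bit rate:", sample_rate * bits, "bits/sec")   # 64,000 = one phone call
print("worst rounding error:", np.abs(wave - digital).max())  # under 1/128
```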
Out of this combination of psychoacoustic and technical research emerged digital equipment that was used in, among other places, recording studios — which is where I saw this technology. In the ’70s, the Harmonizers and digital delays that appeared little by little were in effect primitive samplers — the samples were usually less than a second long. These were quickly followed by machines that could hold longer samples at greater resolution, and manipulate those “sounds” more freely (clumps of data more than sounds, technically). All sorts of weirdness resulted. Bell Labs developed a sound processor called a vocoder that would preserve certain aspects of talking (or singing), like speech formants — the shape of the sound apart from its pitch. Using this machine, one could transmit these aspects of the voice separately from the rest of the vocalization, in ways that rendered the transmission unintelligible. One use for this was a sort of cryptology for the voice — a garbling that could be “decoded” at the other end. These machines were also adapted for music production. Here is Kraftwerk’s vocoder, made especially for them:
I once used a vocoder borrowed from Bernie Krause when Eno and I did the Bush of Ghosts record. It was beautifully made, but rather complicated and very expensive.
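For the curious, here is a minimal sketch of the channel-vocoder principle: measure how loud the voice is in each of a handful of frequency bands, then impose those loudness contours on a synthetic carrier. This is the general idea only, not the circuitry of any Bell Labs or Kraftwerk unit; the band edges and the noise stand-in for speech are my own placeholders.

```python
import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt

sr = 16000

def bandpass(x, lo, hi):
    sos = butter(4, [lo / (sr / 2), hi / (sr / 2)], btype="band", output="sos")
    return sosfilt(sos, x)

def envelope(x):
    # Rectify, then smooth: a crude loudness follower for one band.
    sos = butter(2, 50 / (sr / 2), output="sos")
    return np.maximum(sosfiltfilt(sos, np.abs(x)), 0)

def vocode(voice, carrier, edges=(100, 300, 700, 1500, 3000, 6000)):
    out = np.zeros_like(voice)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Shape the carrier's energy in this band by the voice's loudness.
        out += bandpass(carrier, lo, hi) * envelope(bandpass(voice, lo, hi))
    return out / np.max(np.abs(out))

# A sawtooth "synth" made to talk, with noise standing in for real speech.
t = np.arange(2 * sr) / sr
carrier = 2 * (220 * t % 1) - 1             # bright sawtooth at 220 Hz
voice = np.random.randn(2 * sr)             # substitute a recorded voice here
robot = vocode(voice, carrier)
```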
A Harmonizer cost thousands of dollars, a digital reverb set a studio back maybe 10K, and a full-fledged sampling device like a Fairlight or later the Synclavier cost much, much more. But soon the price of memory and processing dropped, and the technology became more affordable. Inexpensive Akai samplers became the backbone of music like hip hop and DJ mixes, and sampled or digitally derived drum sounds took the place of live drummers in many recordings. And we were off to the races, for better or worse. With the digitization of sound, digital recording and eventually the CD became possible — and not too long after that, the capacity and speed of home computers was sufficient to record, archive, and process music.
Some years ago I visited Bell Labs and was shown the famous anechoic (perfectly sound-absorbent) chamber. This was where John Cage claimed that he could hear both his heart pounding and the high-pitched whine of his nervous system. His insight was that true silence doesn’t exist — even if we can block out everything else, we can’t stop hearing ourselves.
Here is one such chamber:
They also showed me a processor that could squeeze what seemed to the ear to be CD-quality sound into a minuscule bandwidth. I’m not sure, but I believe encoding music as MP3s had by that date already been invented in Germany, so this compressing/encoding was not a big surprise — but like most people, I worried that something in the quality of the music might have been sacrificed in this rezzing-down process. I was right, but MP3s have improved quite a bit since then, and now I listen to most of the music I own in that format. I believe what Bell Labs was working on is now used for satellite radio — getting more hi-fi sound into smaller transmissions.
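As a toy illustration of what that rezzing down involves, here is the crudest possible version of lossy transform coding: turn a chunk of sound into frequency components and throw away the quietest ones. Real MP3 encoders rely on psychoacoustic masking models and careful bit allocation; the keep-the-loudest-tenth rule below is my own simplification.

```python
import numpy as np

def compress_chunk(chunk, keep_fraction=0.1):
    spectrum = np.fft.rfft(chunk)
    keep = max(1, int(len(spectrum) * keep_fraction))
    # Zero out everything but the strongest frequency components.
    cutoff = np.sort(np.abs(spectrum))[-keep]
    spectrum[np.abs(spectrum) < cutoff] = 0
    return np.fft.irfft(spectrum, len(chunk))

sr = 44100
t = np.arange(sr) / sr
audio = (np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1320 * t)
         + 0.01 * np.random.randn(sr))      # two tones plus a little hiss
decoded = compress_chunk(audio)
rms_err = np.sqrt(np.mean((audio - decoded) ** 2))
print(f"rms error after keeping 10% of components: {rms_err:.4f}")
```

The tones survive almost untouched; what gets thrown away is mostly the hiss. A perceptual coder makes the same bet, just with a far more sophisticated model of what the ear will miss.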
In 1988 I went with designer Tibor Kalman to visit a printing studio on Long Island. It had a machine that could digitize and then subtly manipulate images (we wanted to “improve” the image on a Talking Heads record cover). This machine was, like those early computers, incredibly expensive and rare — we had to go to it (it couldn’t be brought to the design studio), and we had to book time in advance. Scitex, I think it was called. This was exciting, but its cost and rarity meant we didn’t think much about incorporating its talents into more projects at that time.
After a while, though, the price of scanning dropped, and manipulating scanned images using something called Photoshop became common. Who would buy a film camera these days? Who buys film for their old camera? There are some holdouts, and I have no doubt that there is a richness or at least some special qualities that have been lost, but, well, for most of us, the trade-off seems fair — and inevitable. Needless to say, as these images became digitized they could enter the river of networked data.
Photojournalism went digital a number of years ago. In the beginning, the photographers, realizing that their images would be reproduced in newspapers at sizes no larger than 8x10 inches (if that), didn’t need to shoot at the highest available resolution on their new digital cameras, which allowed them to squeeze more images onto their memory cards — and gave them fewer problems with storage and developing in the field. To put these low-res images in video terms, it’s as if every movie made after a certain date had been captured at the quality of a YouTube file. While researching archival news footage at some point, I discovered that when news gathering migrated from 16mm film to videotape, the quality went way down.
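The arithmetic behind that decision is simple. (The 200 dots-per-inch figure below is my assumption for newsprint-quality halftones, not a number from the photographers themselves.)

```python
def megapixels_needed(width_in, height_in, dpi):
    """Pixels required to print at a given size and resolution, in millions."""
    return width_in * dpi * height_in * dpi / 1e6

print(megapixels_needed(8, 10, 200))   # ~3.2 MP covers an 8x10 newsprint repro
```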
The confluence of digitized media and the capability of digital information to be shared, transmitted and stored anywhere in the world — this volatile, disembodied mixture that Licklider predicted and whose seed he planted — has, duh, had a huge effect on countless institutions. Many that deal with physical objects — newsstands, record stores, bookstores — will go away, along with their support structures: trucks, warehouses and all the people who worked in those places. For many of us this is not all bad. Record stores like Sam Goody or Coconuts were never great experiences.
Maybe the first institution to disappear almost completely as a result of this process was the letter. Conventional mail still exists — I get bills, junk mail and announcements — but communication related to my work and between my friends and me is almost all by e-mail or text, as it has been for a while.
Television, not a big part of my life for quite a number of years anyway, is bound to migrate online and become something very different.
It’s not so surprising to witness the end of many of the delivery systems for recorded music — vinyl, cassettes and CDs. Somehow those changed from one form to another so rapidly over the decades that to see them all go away isn’t that much of a shock. I don’t really miss them all that much, to be honest. But to imagine that I might live to see the end of print — books, newspapers and many magazines — is mind-boggling. Publishers and news organizations might argue that they are not like the music business, but the patterns are too similar to ignore, except by those who don’t want to see them. Print and books have remained more or less unchanged since Gutenberg, but all that seems about to become history.
I’m not advocating trying to stop this — it all seems inevitable, and the access to information and the convenience will be unprecedented — although without newspapers as a Fourth Estate, a check and balance, democracy as we were taught it will not be, um, the same. We can’t rely on bloggers to police the entire government. Danielle comments, however, that the death of physical newspapers isn’t the same as the death of journalism — if the NY Times can find a way to make money with digital distribution, it will continue to provide a similar function in society. Whether that will be possible is still an open question — but digitization doesn’t necessarily equal death, at least not yet.
The End of Privacy
Now that the Internet and the World Wide Web have enabled data, content and information to be shuttled anywhere in the world — even around China, sometimes — it seems inevitable that the flow goes both ways, or actually in many ways. The ability to access the Internet is incredibly useful to us, and we can’t imagine life without it, so we don’t seem too bothered that, as a result of this interconnectedness, the National Security Agency, for one, has access to our web lives and loves. Nor do we seem all that nervous that cloud computing will eliminate any real sense of privacy (despite assurances), or about the massive amounts of information Google and other commercial enterprises have about us.
Danielle points out that many people are in fact very nervous about this — that privacy and the Internet is a huge topic of concern. Google data mining, the ownership and confidentiality of social networking data, the security of financial data — these are all topics that are regularly reported on in the press and about which people have very strong feelings. However, the sense I get on the street is that most ordinary folks are happy (so far) to give up some personal security for all the convenience they’re getting.
Google’s batteries of server farms allow us to search, so naturally the NSA can also search, dredge and process. I typed in someone’s name yesterday and found that for a small fee, I could see how much they paid for their house, who their neighbors are and what their credit rating is! I was flabbergasted. That’s me, a private citizen, able to know stuff I’d sort of rather not know, not some corporation or governmental agency.
Here’s an NSA data mining facility in Yakima, Washington. (A massive one is being built in Utah.)
So far I’m not aware of malicious use of all that information, not on a large scale anyway — though identity thieves and guys sucking up US credit card numbers by the truckload in Ukraine are a start.
I recently read an article regarding the security of so-called “scrubbed” data. Netflix or some other company wanted to employ a third party to analyze some of their customers’ patterns of purchase — but as a precaution, they removed (scrubbed) the customers’ names from the data. So, theoretically, the people being analyzed were now abstract entities. However, out of curiosity, they hired another company to see if any of those unidentified customers could possibly be re-identified. It turned out they could. Not due to a fault in the scrubbing, or some security or software malfunction, but because other data about customer and citizen behavior was available online, and correlating those public patterns with the patterns of the anonymous customers made it possible to re-identify many of them beyond a reasonable doubt.
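The mechanics of that re-identification are worth sketching, because they are so simple. Here is a toy version with invented data: match the scrubbed records against public profiles by how well their patterns agree.

```python
# Scrubbed records: names removed, only behavior remains.
scrubbed = {
    "user_001": {"film_a": 5, "film_b": 1, "film_c": 4},
    "user_002": {"film_a": 2, "film_d": 5},
}
# Public data the scrubbing can't touch, e.g. reviews posted under real names.
public = {
    "Alice": {"film_a": 5, "film_c": 4},
    "Bob":   {"film_d": 5, "film_e": 3},
}

def overlap_score(anon, known):
    """Count ratings that agree (within one star) on commonly rated films."""
    shared = set(anon) & set(known)
    return sum(1 for f in shared if abs(anon[f] - known[f]) <= 1)

for user, ratings in scrubbed.items():
    name, score = max(((n, overlap_score(ratings, r)) for n, r in public.items()),
                      key=lambda pair: pair[1])
    print(f"{user} best matches {name} (overlap {score})")
```

With millions of customers and thousands of titles, the patterns become so distinctive that a handful of matching ratings can pin a name on an “anonymous” record.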
To me this means that, yes, information already flows both, or rather all, ways. Privacy and security, as much as we might strive for them, are phantoms that we chase but can never truly catch. As much as we love getting information, data, media and connections, we ourselves become available as data in return. Social websites like MySpace, Facebook and Twitter seem to trade on these conflicting urges — the urge to reveal oneself to the world in all one’s intimate details, and yet simultaneously to maintain some kind of privacy. Good luck with that.
The end of privacy in parts of the world is near. It will be traumatic for some, and a comfort for others — for to relinquish one’s privacy is to become part of the hive and the herd, and there is a certain reassurance in that. Imagining how our corporate culture and its twin, the government, will make use of this process and this massive change in society leads one to picture something closer to a paranoid Philip K. Dick scenario than to the nurturing tribe (or Global Village) it will be for some. I suspect it will be both — liberating and restrictive. Conflicting and opposite tendencies, operating simultaneously.
So, there it is. The free flow of information, and the ability to digitize all media as it enters the river, has a lot more repercussions than the end of books, newspapers and CDs — it portends a massive social and political shift. Licklider may have seen this coming as well, but he didn’t let on about it.