That uneasy stare at an alien nature[*]

Willard McCarty

King's College London

willard.mccarty@kcl.ac.uk | staff.cch.kcl.ac.uk/~wmccarty/


KEYWORDS / MOTS-CLÉS

computing, history, future, strangeness / informatique, histoire, futur, étrangeté


  • 1. Back to Mount Pisgah
  • 2. Primitive conditions
  • 3. Excavating the cognitive strangeness
  • 4. View from a moving centre
  • 5. Modern and postmodern times
  • 6. Estrangement, reduction and emergence
  • 7. The success of not getting there
  • Works Cited

...I have seen nothing, which more completely astonished me, than the first sight of a Savage; It was a naked Fuegian his long hair blowing about, his face besmeared with paint. There is in their countenances, an expression, which I believe to those who have not seen it, must be inconceivably wild.... [Charles Darwin, Correspondence, i.396]


Here no relation, in the sense of message or narrative, can be established. The other is ‘inconceivably wild’. But that which is inconceivable is also here a mirror image.
Beer 1996: 23, 25


The new gadget seems magical and mysterious. It arouses curiosity: How does it work? What does it do to us? To be sure, when the television sets will have appeared on the birthday tables and under the Christmas trees, curiosity will abate. Mystery asks for explanation only as long as it is new. Let us take advantage of the propitious moment.
Arnheim 1957/1935: 188


1. Back to Mount Pisgah

The title of this paper is taken from Northrop Frye’s The Educated Imagination (22). Frye reminds us that although we have come a long way in humanizing the natural world, the reminder of something other remains disturbingly with us. Once upon a time the gods and heroes of old were the human face of nature. In time we ourselves took their place as measure of all things. Even now, configured posthuman by some, thought to be close to other primates or radically distinct from them, we are still using our literary and artistic abilities, Frye observes, to do “the same job that mythology did earlier … filling in its huge cloudy shapes with sharper lights and deeper shadows.” Ovid’s Metamorphoses, Francisco Goya’s Saturno, Francis Bacon’s Pope Innocent X, Seamus Heaney’s “Field of vision,” and all the rest remind us that as the writer and artist Bruno Schulz wrote,


In a work of art the umbilical cord linking it with the totality of our concerns has not yet been severed, the blood of the mystery still circulates; the ends of the blood vessels vanish into the surrounding night and return from it full of dark fluid…. [T]he work operates at a premoral depth, at a point where value is still in statu nascendi.... If art were merely to confirm what had already been established elsewhere, it would be superfluous. The role of art is to be a probe sunk into the nameless. (368-70)[2]


As good a definition of our job in the humanities as I know is to interpret the troubling strangeness communicated by art, then with it to explicate the extraordinariness of the world we have made routine. As practitioners of computing we view the strangeness of reality by way of a quite different art, which forces us to rigorously describe the extraordinary products of imagination as if they were perfectly routine, or as mathematicians say, “trivial” – not just deemed so out of boredom, stupidity, inattention, or other human failing, but found exactly and always in accordance with an algorithmic rule.

Anyone who has taught programming or who can remember first encounters will know the strangeness with which the perfection of such a rule, which is a product of the human imagination, is apt to strike the novice – matching routine behaviour closely enough to give the mind some purchase, yet without question beyond the human pale. But how can this be? Bruno Schulz’s psycho-embryonic imagery and Frye’s mythography both suggest an answer: that this strangeness is a paradoxical quality of something not simply beyond our ordinary selves but also simultaneously, embryonically us in statu nascendi, “in the state of being born,” as Schulz wrote. Thus a human invention, such as Turing’s machine, properly takes on the full ambiguity of the etymological sense of invenio: encounter, come upon, find; devise, contrive, plan. Some inventions more than others have this quite distinct oddness about them, this sense of being both from somewhere else and from ourselves – like one’s own children, appreciated with all the wonder and disturbance they bring.

Here I wish to argue that computing is profoundly and perpetually ambiguous in this sense, and so wonder-producing. I want to explicate this wonder and ask what its value is for scholarship. But by now, I expect, my patient reader is wanting to know why I make such a case for computing, especially in a Festschrift. My answer is simply this: As a pioneer in the field the man honoured here first encountered computing when its strangeness was utterly unavoidable. The beginner’s Pisgah-sight he and others among us were afforded then, before progress in hardware and software engineering made that daunting encounter of strangeness and wonderment mostly a thing of the past, constitutes not just an interesting historiographical problem to work on, with a fascinating history to recover, but an urgently needed source of guidance for us now. That history is our future.


2. Primitive conditions

For Ian, as for me, computers were by that time of first encounter already products of commercial industry,[3] with an impressive record of successful application in the physical sciences and engineering, and with formidable promise elsewhere. Computing by then was on a roll, despite the gathering evidence that the towering mountains of difficulty already encountered by the early 1960s were a far better guide to computing’s future in the humanities than the small hills of problem-solving imagined in the 1950s.[4] Word reaching the humanities from the small cadre of scholars involved with computing already for decades, as well as from popular accounts in the press, was that something hugely important was going on. But only a very few of us could then see its relevance to our scholarly concerns.

For very good reason. By the early 1980s, when as a graduate student in English at the University of Toronto I first met Ian, computing was rebarbative to a degree few now can recall, despite all its progress and might in the world of techno-science and commerce. The machines themselves were physically inaccessible. At Toronto, for example, computing machinery was in the hands of the University of Toronto Computing Services (UTCS), housed in the McLennan Physical Laboratories building, kept safe behind glass walls and cooled by a noisy forced-air system whose conduits ran underneath a “floating floor.” The machines were very expensive and complicated to operate, requiring highly trained operators (among whom I was one, years before, at the Lawrence Radiation Laboratory), a clerical staff, managers, and “customer engineers” from DEC and IBM. For the ordinary user, computing was thus mediated and interpreted by professional technicians and administrators whose background training was far more likely to be in science or commerce than in the humanities. Long gone by then were the early days when computers were the personal instruments of those whose problems they helped to solve; still to come were the days when ordinary people could actually touch a computer, play with it, experiment. The sense of computing’s entrapment by a grey administrative bureaucracy and its deployment for dull if not sinister purposes was entertainingly depicted in 1984 by Steven Levy, in Hackers: Heroes of the Computer Revolution, and by the Orwellian commercial video circulated just prior to the release of the first Apple Macintosh.[5] To many, especially in North America, it seemed (to borrow Justice Oliver Wendell Holmes’ famous phrase) a “clear and present danger.”[6] From the beginning of commercialization into the early 1980s, advertisements for computer systems were almost entirely aimed at managers of “data processing,” as the commercial brochures of the time demonstrate.[7]

Computers were also very slow. Until time-sharing mainframes became common in the mid to late 1970s, the effective speed of computing, however fast the machine, was measured by users in “turnaround time”, i.e. the interval in hours or days between submitting a “job” across a counter-top in the form of punched cards and receiving the printout. More often than not (in my case at least) the printout would reveal a card-punch error or similarly trivial mistake and so compel error-prone correction, submission of a new job, another wait and so on. Even when, during my PhD research, a time-sharing system became accessible from home by dial-up acoustic modem from a “dumb” terminal at very slow speeds,[8] a printout still had to be fetched from the computing centre. Then “microcomputers” and dot-matrix printing arrived, and with them, computing centres began to recede into the infrastructure.

It is easy to forget how great the physical and cultural distance to be traversed was for the humanist scholar before computers became personal, how foreign the environment once you got there and, in our terms now, how meagre the reward. It is easy to forget how much violence we did to our inherited ways of working by accommodating ourselves to the “batch-mode” style imposed by commercial operating systems then.[9] It is easy to forget the excitement among those who braved such difficulties, and (for those who could program or who worked closely with programmers) the bracing challenges to be clever, the triumphs of success, the wild thoughts of what might now become possible. It is easy to forget the extent to which disciplinary provincialism on the part of one’s colleagues exacerbated the difficulties, indeed made computing toxic to a budding academic’s career and a stain on an established scholar’s reputation.


3. Excavating the cognitive strangeness

Recollecting all those difficulties, however accurately, has its perils too. By doing so I am at risk of suggesting that the Pisgah-sight to be recovered is a product of the technical and institutional mismatch between scholars and computers. Decades of brilliant progress in hardware and software engineering and decades of skillful marketing have made that mismatch far less obvious than it once was. Progress has made the graphical user interface our springboard of attention – something we attend from with too little thought about how it shapes our view of things. The Web, its “Real Soon Now” putatively semantic successor, and equally careless talk of “information” to be had there have further distanced us from the strangeness of the cognitive interface between computing and scholarship. In a rush to domesticate innovation (whose perceived threat is easily recoverable from the literature)[10] we have buried this strangeness in familiarity.

From careful recollection and perusal of the literature I would suggest, however, that during the incunabular period of computing, awkward infelicities and cognitive strangeness led one to the other. The former served the novice in the humanities as forced introduction to a new, equally imaginative but radically different language game, new social relations and unfamiliar disciplines. The latter required of him or her new ways of thinking and a sometimes uncomfortable, always minute probing of mostly tacit research methods. It also led to incisive questioning of the claims made for computing and so to a critical perspective from which its place within humanistic discourse could begin to be explored, as we then did.


4. View from a moving centre

At the beginning of Open Fields Dame Gillian Beer comments:


Encounter, whether between peoples, between disciplines, or answering a ring at the bell, braces attention. It does not guarantee understanding; it may emphasize first (or only) what’s incommensurate. But it brings into active play unexamined assumptions and so may allow interpreters, if not always the principals, to tap into unexpressed incentives. Exchange, dialogue, misprision, fugitive understanding, are all crucial within disciplinary encounters as well as between peoples. Understanding askance, with your attention fixed elsewhere, or your expectations focused on a different outcome, is also a common enough event in such encounters and can produce effects as powerful, if stranger, than fixed attention. (2)


The creation of institutional centres for humanities computing[11] afforded those of us in them a perspective on scholarship much like that of a dean or research officer, whose panoptic view comes with the job.[12] We were and still are peripatetic, in respect of the job frequently in and out of the older disciplines rather than of them, with estranged attention not on but through their discourses to common research methods. Our interdisciplinary standing point I have elsewhere compared to the ethnographic perspective of a trader in intellectual goods, for whom disciplines are as cultures to the social anthropologist (McCarty, Humanities Computing 114-57; “Tree, Turf, Centre, Archipelago”). This makes us participant-observers of research (i.e., direct participants in our own research, to some degree indirect participants of others’ and always observers of both). For the digital humanist who is methodologically self-aware, that is, the roles of native and anthropologist are cohabiting states of mind.

At Toronto, before Ian’s Centre was founded, the scholar needing help would walk the distance to the UTCS Humanities Support Group, of which Ian was prime mover.[13] For me this meant a physical stroll down St George Street from the Robarts Library (institutional centre of the humanities) to the McLennan building – and a metaphorical journey from the familiar slog of research to often demanding, exciting and enlightening conversations with Lidio Presutti, a stubbornly exacting and brilliant programmer. The experience indeed afforded a Pisgah-sight of that which has remained central to humanities computing. Harold Short has called this educating interplay the “bilateral curiosity” of collaborators (Humanities Computing 121). What has changed or begun to change since Ian’s Centre was disassembled are the institutional arrangements allowing for this interplay to take place on level ground, among equals: not support, as then was, but collaboration, as now is, at least in London.


5. Modern and postmodern times

In attempting to move from a purely chronological telling to a genuine history one looks for clues to the unchanging (such as this interplay of curiosities) as well as for synchronous events and ideas. For me, as Academic Liaison Officer, then Assistant Director in Ian’s Centre at Toronto (1984-1996), the best clue was the evident collision of the promo-man’s hyped-up claims with the solid demands of scholarly interpretation (Humanities Computing 3-4). The clue took on real substance through many conversations with working scholars and so led me to the cognitive dissonance of analytic computing for the humanities. My own decade-long research project, An Analytical Onomasticon to the Metamorphoses of Ovid,[14] became in effect my exploratory response. I used textual markup to implement minute acts of critical interpretation, simultaneously constraining my interpretative self according to the twin computational demands of complete explicitness and absolute consistency. The result was an inner wrestling match between two personae, an interpreter of literature on the one hand, a “computer” on the other. The latter was the stranger, and so more interesting, role because it allowed me to become, however briefly, and so to understand from the inside, the perfect mathematical factory-worker Alan Turing depicts at the beginning of his seminal paper, “On computable numbers, with an application to the Entscheidungsproblem” (231). In the paper Turing’s analogy quickly drops out of sight, of no further use to him in explicating the disembodied, flawlessly step-wise process that is his aim. But it nevertheless evokes a socially potent imaginary which runs tellingly alongside the development of automation in the 20th century and the fears which I mentioned earlier.
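
For concreteness, the following toy sketch (my own hypothetical simplification, not the Onomasticon’s actual encoding scheme) shows what those twin demands amount to in practice: every interpretive decision must be stated as an explicit record, and a purely mechanical check rejects any reading that is not applied consistently wherever it recurs.

    # A hypothetical, much-simplified illustration of interpretation under
    # computational constraint. Each record makes one reading explicit; the
    # check below enforces consistency and nothing more.
    interpretations = [
        {"span": "Daphne", "category": "personal name", "referent": "Daphne"},
        {"span": "Peneia", "category": "epithet",       "referent": "Daphne"},
        {"span": "Delius", "category": "epithet",       "referent": "Apollo"},
    ]

    def check_consistency(records):
        """Reject divergent readings: the same span may not be assigned
        two different interpretations in different places."""
        seen = {}
        for record in records:
            reading = (record["category"], record["referent"])
            if record["span"] in seen and seen[record["span"]] != reading:
                raise ValueError(
                    "inconsistent reading of %r: %s vs %s"
                    % (record["span"], seen[record["span"]], reading)
                )
            seen[record["span"]] = reading

    check_consistency(interpretations)  # silence is the machine's only approval

The interpreter proposes; the “computer” persona accepts or rejects, with nothing in between. That is where the inner wrestling match begins.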

Much historical spade-work remains to be done, but evidence of this imaginary surfaces, for example, in Charlie Chaplin’s tragicomic farce Modern Times, which appeared in that same rather amazing year of 1936.[15] Chaplin plays an assembly-line factory worker who in his struggle to conform to the machine is driven insane. The surreal truth in Chaplin’s depiction had by that time become deeply familiar in the daily experience of real factory-workers, whose workplace reflected Frederick Winslow Taylor’s programmatic elevation of “the system” above the individual (Principles of Scientific Management, published in 1911). Taylor in turn was indebted to Charles Babbage’s On the economy of machinery and manufactures (published in 1832), the writing of which was as Babbage says in the Preface “one of the consequences that have resulted from the calculating engine” for which he is celebrated. Babbage’s economics may be irretrievably Victorian, but his cognitive model, articulated through Taylorian ideas of automation, continues well into this century via the assembly line itself and the theorizing of operations research (or operational research, as it is called in the U.K.).[16]

Both rampant fear of automation and besotted love of it, especially visible in the early years of computing, are deterministic in character. But simultaneous with if not prior to both dystopian and utopian fantasies, quite opposite and far more practical and intimate ideas of computing arose in the work of Vannevar Bush, Douglas Engelbart and others whose aim was to augment human intelligence, as Engelbart wrote, rather than to suppress it or render it otiose.[17] Thus we can see from the very outset of computing machinery two opposed tendencies and arguments: one to replace humans in their current role, making them masters of electronic slaves or slaves to electronic masters; the other ultimately to join the two, hence the transformation underway in art from the 1960s and more recently in medicine (see Hacking, “The Cartesian vision fulfilled”, and Hayles). I will return to the master/slave dialectic below.

The principle involved here has been well articulated by Terry Winograd and Fernando Flores in Understanding Computers and Cognition:


All new technologies develop within the background of a tacit understanding of human nature and human work. The use of technology in turn leads to fundamental changes in what we do, and ultimately in what it is to be human. We encounter deep questions of design when we recognize that in designing tools we are designing ways of being (xi).


What was and is at issue, then, is something like the historical confrontation of humankind with a bi-directional self-imaging device, or rather with an indefinitely extensible scheme for constructing such devices. I will leave aside the question of whether or not computing is in this respect or any other “unprecedented.”[18] I prefer rather to join computing with the other media of expression we use to show ourselves to ourselves, then ask about the particular ways in which computing inflects self-reflection. Its use both as device and as metaphor to reflect on ideas about the mind is obvious and, we might say, the low-hanging fruit of early research. But it is clear from the vocabulary of John von Neumann’s First Draft of a Report on the EDVAC that, as Arthur W. Burks has written, his scheme used “idealised switches with delays, derived from the logical neurons of Warren McCulloch and Walter Pitts.”[19] From the beginning, that is, the most influential architecture we have for computing machinery reflected the architecture of cognition as it was then understood in brain science.[20] Furthermore, as computing systems emerged and developed into complex layered structures of subsystems, they furnished a receptive medium for existing ideas of cognitive modularity, already worked out in considerable detail by the phrenologists. Early twentieth-century imagery is vividly provided, for example, by Fritz Kahn in his multivolume treatise, Das Leben des Menschen, and by other forms catalogued in Dream Anatomy (Sappol).
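
To make the borrowed vocabulary a little more concrete: a McCulloch-Pitts “logical neuron” is a binary threshold unit that fires when enough of its excitatory inputs fire and no inhibitory input does, and such units suffice to build the elementary switches of a computing architecture. The sketch below, in which the names and thresholds are illustrative only, shows that idea schematically; it is not a description of von Neumann’s design.

    # A schematic McCulloch-Pitts unit: binary inputs, a firing threshold,
    # and absolute inhibition. Names and parameters are illustrative only.
    def logical_neuron(excitatory, inhibitory, threshold):
        """Fire (return 1) when at least `threshold` excitatory inputs are
        active and no inhibitory input is active; otherwise stay silent."""
        if any(inhibitory):
            return 0
        return 1 if sum(excitatory) >= threshold else 0

    # Elementary switches fall out as special cases:
    AND = lambda a, b: logical_neuron([a, b], [], threshold=2)
    OR  = lambda a, b: logical_neuron([a, b], [], threshold=1)
    NOT = lambda a:    logical_neuron([1], [a], threshold=1)

    assert AND(1, 1) == 1 and AND(1, 0) == 0
    assert OR(0, 1) == 1 and NOT(1) == 0 and NOT(0) == 1

That the elementary switch of the stored-program machine and the elementary event of the nervous system could be described in the same terms is the point of the borrowing, and part of the strangeness at issue here.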

My point here is not a conspiracy of theories, though a confluence of thinking is evident, but the inheritance of strangeness that had made our relation to the world, specifically as vested in automata, a cause of wonder and fear for millennia. The point of my point is the resonance that connects it to the unavoidable, formidable otherness which both Ian and I confronted when we began our turbulent love affairs with computing.


6. Estrangement, reduction and emergence

Computing in this respect is, as I said earlier, bi-directional: it not only appears strange, it also makes strange. Brian Cantwell Smith has pointed out that to do anything useful at all a computer requires a model of something to direct its actions (25ff). Hardware failures and software bugs aside, the correctness of these actions is the correctness of the model, i.e. its truth to the world. Truing it is in part the ongoing work of computer science. From that work comes a digital infrastructure on which we increasingly depend, with hugely consequential effects that we are only beginning to understand. Apart from the trade-off demanded of us, the rigorous strangeness of Turing’s machine makes “correctness” in any absolute sense an incoherent concept, as Cantwell Smith argues. In other words, the attempt to rule-bind the world inevitably fails. Particular failures can be fiddled – indeed, this is what computational modelling (the process) consists of. The incorrigible failures can, however, also be put to work to illumine the irregular, the non-trivial, the extraordinary, the anomalous. Hence computing as an agent of what the Russian Formalists called ostranenie or “defamiliarization” – the artistic technique of creating a pre-interpretative “vision” of an object rather than serving as a means of knowing it (Shklovsky 18), or “how we freshen things that have become banal, rather than banalize things that have become revolutionary” (Bruner, “Life and Language” 22).

What we might then call the negative theology of modelling is a subject I have written on extensively. But for present purposes, all I nee
