Application Software for Electronic Literature
Antoinette LaFarge: Demotic
Software: YIN MOO, Max/MSP
A pioneer of net-linked performance, Antoinette LaFarge is the founder of the seminal online performance group The Plaintext Players. Her work -- which encompasses virtual and mixed realities, computer-mediated performance, net-based improvisation, online role-playing games, avatar performance, nonlinear narrative, fictive art, and geofiction -- has been exhibited at the Beall Center for Art + Technology; the Laguna Art Museum; Location One; the Sandra Gering Gallery; the Xavier Lopez Gallery, London; Side Street Live; the New York International Fringe Festival; the Venice Biennale; and the European Media Arts Festival, among many others. She is the creator or co-creator of a continuing series of mixed-reality performance works, and her collaborations with Robert Allen include The Roman Forum, Playing the Rapture, Galileo in America, and Demotic, the work described in this Authoring Software statement. Antoinette LaFarge is Professor of Digital Media at the University of California, Irvine. She is also an Associate of the Institute of Cultural Inquiry, where she has worked on projects including The AIDS Bottle Project and The AIDS Chronicles. For more information about her current work and projects, visit Antoinette LaFarge's home page.

Antoinette LaFarge: Demotic

All of my mixed-reality performance works use different authoring strategies and tools. In general, however, all share a focus on multi-authoring, on improvisation in various forms, and on a fluid relationship between the creation of text and the creation of other forms, including software, vocals, sound, video, and movement. Perhaps the best introduction to my authorial practices is a 2004/2006 mixed-reality performance work entitled "Demotic."

The essential idea behind "Demotic" was to have a single stage actor and two sound artists channeling, in real time, many other voices and information sources, some from real space and some from the Internet. I conceived it with director Robert Allen, and we co-created it with actor Tracey A. Leigh, sound artists Maria de los Angeles Esteves and Jeff Ridenour, and a group of online performers known as the Plaintext Players. The following general description of our processes applies to both the 2004 and 2006 versions of the piece, but it should be noted that not all the methods described were used in every segment of the final work.

To start with, we gathered the Plaintext Players on a MOO, a virtual, text-based, multi-user domain of a kind that predominated on the Net before the advent of graphical worlds. The MOO performers improvised 'in character', in real time, creating a text that was partly written, partly performed. Only one of the MOO performers was situated in the same physical space as the actor and the sound artists, and this person served as a key link between the remote and local performers. The base text generated by the MOO improvisers was fed into the physical space of the stage in two ways: as visuals and as sound. The visuals took the form of scrolling text projections, which the stage performer used as a kind of teleprompter, responding vocally (by reading/improvising) and physically (with improvised movement). Since the MOO is a form of programmable software, we were able to control and alter the text output from the MOO: for example, in some cases we altered the speed with which it appeared on screen, or its layout, while in others we algorithmically garbled it or mixed the live text with previously recorded (stored) material.
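[Editor's illustration] The text transformations described above were implemented inside the MOO, which has its own programming language; that code is not reproduced here. The following Python sketch is purely hypothetical and shows, under those assumptions, two of the operations mentioned: algorithmically garbling a live line and interleaving live text with stored material.

```python
import random

def garble(line, rate=0.2, rng=None):
    """Randomly transpose adjacent characters to 'garble' a line of live text."""
    rng = rng or random.Random()
    chars = list(line)
    for i in range(len(chars) - 1):
        if rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def mix_with_stored(live_lines, stored_lines):
    """Interleave live improvised lines with previously recorded (stored) material."""
    mixed = []
    for i, live in enumerate(live_lines):
        mixed.append(garble(live))
        if stored_lines:
            mixed.append(stored_lines[i % len(stored_lines)])
    return mixed

if __name__ == "__main__":
    live = ["the channel is open", "who is speaking now?"]
    stored = ["[archive] a line recorded in an earlier performance"]
    for line in mix_with_stored(live, stored):
        print(line)
```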
The transformation of the MOO text into sound took two forms in addition to the actor's own reading of the text. One was artificial speech created by text-to-speech synthesis, which gave the remote MOO performers a vividly physical presence in the stage space. The other was a layered soundscape created by further processing of the MOO text and synthetic speech through the programming environment known as Max/MSP, which allowed our sound designers to deploy spatialization, repetition, additional sounds, and other effects.

Feedback loops were a critical part of this 'interdependent' creative process. Through streaming audio, the remote MOO performers could hear what the stage actor and sound artists were doing with their text in real time (though slightly delayed) and respond to it. And since the sound designers and the actor were in the same physical space, they could respond directly to each other as well as work with the MOO text as it was created.

Although from a traditional writerly perspective one could argue that 'the text' was created in the very first step (during the MOO improvisations), from our perspective what mattered was that it then underwent a series of transformations, each of which brought new creative elements into play: at the point of MOO output, at the point of speech synthesis, at the point of sound processing, at the point of actor improvisation, at the point of feedback... Only at the end of this web of authorship did we have what we would consider 'the text'; that is, the full synthesis of verbal, audio, visual, and physical elements that is "Demotic."
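[Editor's illustration] The text-to-speech step described earlier can be sketched in a few lines. This Python example uses the pyttsx3 library only for illustration; the production used its own synthesis tools, and the subsequent Max/MSP processing (spatialization, repetition, layering) is not shown.

```python
# Hypothetical sketch only: voices lines of MOO output with an offline
# text-to-speech engine (pyttsx3). The Max/MSP processing described in the
# statement happened downstream of this step and is not reproduced here.
import pyttsx3

def speak_moo_lines(lines, words_per_minute=150):
    engine = pyttsx3.init()
    engine.setProperty("rate", words_per_minute)  # speaking rate
    for line in lines:
        engine.say(line)      # queue each line for synthesis
    engine.runAndWait()       # block until all queued speech has been spoken

if __name__ == "__main__":
    speak_moo_lines(["The channel is open.", "Who is speaking now?"])
```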
For a diagram of the authorial process, see the "Demotic" website at:
http://yin.arts.uci.edu/~players/demotic/gallery17.html
Information about the specific processes featured in each segment of "Demotic" can be found in the program notes at:
http://yin.arts.uci.edu/~players/demotic/program-06.html
(2006) and
http://yin.arts.uci.edu/~players/demotic/archive-04-AV.html
(2004).