conception, adaptation, world design, web works: (Israel)


Part 4 of the landscapes project 2003

Conception & Artistic Direction: Ibrahim Quraishi (US)

Production of Compagnie Faim de Siecle



The Technology:


The basic assumption for the technology aspect is that it should be 3D, community oriented, and accessible to all, with fallback to the most feasible implementations, enabling more sophisticated solutions as we proceed (and/or have more budget). This means we are not going into expensive production using virtual studio technology or motion capture, but rather a compilation of tools (some open source, some commercial), fit for the slightly above average user.

Another guideline is to “stretch” available web technologies, both textual features (hypertext, text to speech…) and 3D, into a new kind of experience: a DisneyLand for the common people, or an IMAX effect on the cheap.


The main features that we’re testing:

-         rich background – features for designing detailed high-resolution patterns and objects

-         smooth movement – intuitive navigation

-         active objects - as hotspots activating sound, html info, etc…

-         still images and running texts – displayed on sign objects

-         video streaming embedded – for pre-recorded, or, on-site camera input

-         sound embedded – activated by objects or avatars, pre-recorded or text to speech options

-         chat between users – private whispers and public chat. Displayed on same or separate screen. Transparent or solid background.

-         html related files – info, games, alternatives, troubleshooting in standard html and flash

-         bots – “intelligent” avatars, programmable to be pro-active and serve us as virtual actors

-         user management – authorization and authentication mechanisms to differentiate between privileged users who build the system, registered users who paid for a real-time show ticket, and common users who are occasional visitors to the MEDEAEX world.

-         real-time manipulation of objects, lighting, etc…
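The three-tier user management above can be sketched as a simple role/permission table. A minimal illustration in Python; the role and permission names here are invented for the sketch and are not part of the Outerworlds platform:

```python
from enum import Enum

class Role(Enum):
    BUILDER = "builder"        # privileged users who build the system
    TICKET_HOLDER = "ticket"   # registered users who paid for the show
    VISITOR = "visitor"        # occasional visitors to the MEDEAEX world

# Permissions granted to each role (names are illustrative only).
PERMISSIONS = {
    Role.BUILDER: {"build", "enter_show", "browse"},
    Role.TICKET_HOLDER: {"enter_show", "browse"},
    Role.VISITOR: {"browse"},
}

def is_allowed(role: Role, action: str) -> bool:
    """Return True if a user with the given role may perform the action."""
    return action in PERMISSIONS.get(role, set())
```

In practice the platform’s own authentication would decide which role a visitor holds; the table only captures the distinction the spec draws between builders, ticket holders and casual visitors.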


There are a few technological options for the development of the 3D community-oriented world. Each has pros and cons that will be discussed further on. My preferred option was the “Plastic Planet 3D” software, for its capabilities of video embedding and full-screen options (chat with transparent background). PlasticPlanet 3D Chat is non-commercial, a privately initiated 3D environment under development, made available to the public in a beta trial version at this time, but it will not be ready for our first production. I finally chose to run on Outerworlds (based on Alphaworlds technology), because I know it from previous projects, have some preliminary design done there, and have a team to work with. It is a steady, debugged environment with building tools and user management, but it lacks video embedding and free screen design. These gaps will be worked around with alternative solutions.

Other software elements used here are: html, flash, text generators, graphic processing, a text-to-speech mechanism, random selection from databases of texts…
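The “random selection from databases of texts” can be sketched in a few lines. A minimal illustration in Python, with placeholder strings standing in for the real text database:

```python
import random

# Placeholder pool standing in for a real database of texts.
TEXT_POOL = [
    "text fragment A",
    "text fragment B",
    "text fragment C",
    "text fragment D",
]

def random_texts(pool, k=1, rng=random):
    """Pick k distinct texts at random from the pool (no repeats)."""
    return rng.sample(pool, k)
```

In production the pool would be read from files or a database table, and the selection could be weighted per scene rather than uniform.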




For the User at home (virtual audience):


The user at home is either an occasional visitor who enters the MEDEAEX world as an educational or entertainment space (see the chapter on research and community), or a registered user who bought a ticket for the real-time show and thus has a role as part of the chorus, with optional features to influence the performance taking place. The basic mechanism for the chorus users is a trigger spoken by a pre-programmed bot, followed by a collection of optional texts or activities to choose from. Chorus choices will be played in the real performance in order of their arrival, and will end either by timeout or by Medea.
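The chorus mechanism just described (choices collected, played back in order of arrival, ended either by timeout or by Medea) could be sketched roughly as follows; the class and method names are illustrative only, not part of any existing tool:

```python
import time
from collections import deque

class ChorusQueue:
    """Collect chorus users' choices and replay them in arrival order.

    Playback stops either when the timeout expires or when the Medea
    performer signals the end, as described in the spec above.
    """

    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self.choices = deque()          # (user, choice) in arrival order
        self.ended_by_medea = False

    def submit(self, user, choice):
        """A chorus user picks one of the offered texts or activities."""
        self.choices.append((user, choice))

    def end(self):
        """Called when Medea cuts the chorus off."""
        self.ended_by_medea = True

    def play(self, clock=time.monotonic):
        """Replay queued choices until timeout or Medea's signal."""
        start = clock()
        played = []
        while self.choices:
            if self.ended_by_medea or clock() - start > self.timeout:
                break
            played.append(self.choices.popleft())
        return played
```

The real system would play each choice through a bot (speech or sound file) instead of returning a list, but the ordering and the two stop conditions are the same.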


The home workstation should be a regular PC connected to the net. For a better-quality experience, the PC should have a strong graphics card, and of course a high-bandwidth broadband connection improves the experience as well.

-         The 3D community-oriented world will be up and running and open to the public, with the actors replaced by pro-active bots.

-         During the show, the virtual world will be open only to those who bought tickets for online participation. Internet cafes in several locations around the sites of the performance (e.g. Germany, Israel, US, France and Palestine) will be approached ahead of time, to accommodate users who want the best configuration.

-         Medea will be filmed and displayed on the user’s screen as streaming video in a dedicated window.

-         Some spaces in the ME DEA EX 3D universe will have objects linked to Medea materials on the net, and will allow visitors to architecturally redesign the landscapes and virtual objects.



Specs of required equipment on site:


-         4 strong PCs (Pentium 3, 0.5-1 GB RAM, PCI/VGA cards) connected to the net and running the 3D-VR application.

-         At least one of them needs a 24-channel professional sound card; the others need reasonably good sound cards as well.

-         4 projectors: 3 covering 120 degrees each, plus one for the ceiling. Among other features, the projectors should be able to cut the top and bottom margins (zoom/pan features); otherwise we would need a video mixer and connectors to pass the data from PC to video mixer to projector (more specs to be completed).

-         3 PCs are in a separate booth; people from the real audience who decide they would rather have the privileges of the virtual audience will enter that booth.

-         2 Video Cameras

-         Sound/Light (will be controlled from the virtual space)

-         Loud speakers – 8-16, spread around in 2 rows, plus 2 bigger ones for top and bottom

-         Broadband internet connection (must have dialup line as backup), 3D server & licence, streaming video server

·        Possible interaction tools, other than the user’s options at the workstation, can include cellular/SMS between the real and virtual audiences, and more…

·        Projection issues have not been tested yet: synchronization between the screens, and more…


Programming of the bots:


-         The bots in our system will function both for activating the avatars that represent the virtual actors and for managing the point of view of the projected screens, as well as the switching between scenes and the lighting.

-         The bots will be developed with the Xelagot open-source software, plus enhancements developed by our team.

-         We’ll run the Xelagot server (or AV99) and control it, and the bots created there, with “whispers” (private chat commands). Later on (and/or with a higher budget), a better control panel can be developed.

-         These are the bots that we should program; they will be installed on the server:

·        4 transparent, unseen users that represent the point of view for each projector – activated by a bot. This is actually the camera point of view, targeted in 4 (or 3) different directions. This bot will teleport us from one layer of the universe to another and will also control the lighting.

·        Jason – an avatar doing gestures, moving, and speaking (pre-recorded mp3 files). He vanishes between scenes, re-appears in the next scene in a different location/direction and with a new color of tie/suit, and collapses at the end.

·        Head of chorus – an avatar similar in looks to the gray guest avatars and moving like them. It speaks, via the text-to-speech mechanism, the first portion of text from each part of the chorus. It may also be in charge of playing the sound files of the chorus (or else we need another transparent bot to control them).

·        Side character (guide) – standing next to the entrance, explaining and handing out information (home users activate her; the real audience sees what she replies to them). At a later time (/budget) she can be programmed to be activated by a sensor that recognizes when someone from the real audience approaches her, so she will actually communicate with them.

·        Webcam frames – we may use another bot to display animated incoming images from the webcam.
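Since the bots are driven by whispered chat commands, the control logic amounts to a small command dispatcher: parse the whisper, look up a handler, run it. A rough sketch in Python; the command vocabulary here is invented for illustration and is not actual Xelagot syntax:

```python
def make_dispatcher(handlers):
    """Build a function that routes 'command arg...' whispers to handlers."""
    def dispatch(whisper: str):
        parts = whisper.strip().split()
        if not parts:
            return "ignored: empty whisper"
        command, args = parts[0].lower(), parts[1:]
        handler = handlers.get(command)
        if handler is None:
            return f"unknown command: {command}"
        return handler(*args)
    return dispatch

# Example handlers for the camera/lighting bot (actions stubbed as strings;
# the real bot would call into the Xelagot server instead).
handlers = {
    "teleport": lambda layer: f"camera bots moved to {layer}",
    "light": lambda level: f"lighting set to {level}",
    "play": lambda clip: f"playing {clip}",
}

dispatch = make_dispatcher(handlers)
```

A later, better control panel (as mentioned above) could simply send these same commands from buttons instead of typed whispers, so the bot side would not need to change.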




Sketch of the monitoring screen (4 projectors, cues, sounds, chat monitoring…)

This work is licensed under a Creative Commons License.