Project EXPLAIN

Almost ten years ago, our research ideas about interactive music and virtual musical space materialized into a creative and scientific project, which we named in the manner of the modern epoch: EXPLAIN (EXPerimental Laptop INtercommunication). In essence, it is both an artistic experiment and a research project. Today it is about 80 per cent research and about 20 per cent a way to gain new musical experience. To this end, we also developed special musical software.

Our first aim was to study the creative potential of a performer and a composer in a new acoustic environment, as well as coordination and collaboration under non-standard conditions. Later we recognized wider artistic and scientific perspectives, and a broad range of technical problems in the relationships between musical instrument, ensemble and performer.

Initially the main concept was that every performer can fill all available sound space equally with the other performers. Moreover, the placement of sound elements is limited only by the capabilities of the sound system; the performer’s imagination is potentially unconstrained. As a result, a complex interaction of interpenetrating sound spaces is born: every performer not only takes part in a shared picture of world-formation, but creates his/her own sound world.

Realizing that it is impossible to grasp the immensity, we focused the project on three poorly explored musical spheres: virtual musical space, network musical performance and interactive music.

The project software is based on a client-server architecture. This potentially lets us experiment in the field of “remote” network musical performance. But today, when the problems of virtual musical space are still the most important, all connections between performers are “local”. Musicians, each with his/her own laptop, connect to the server via a wireless network. The server joins all client processes together and plays back the combined sound through a “real” quadraphonic acoustic system, and also performs synchronization and the conductor’s functions.
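
The client-server link can be sketched roughly as follows. The project’s real wire protocol is not documented here, so the JSON-over-TCP message format, the field names and the “add element” action in this example are assumptions for illustration only:

```python
import json
import socket
import threading

def encode(msg: dict) -> bytes:
    """Frame one client message as a newline-terminated JSON line."""
    return json.dumps(msg).encode() + b"\n"

def decode_lines(data: bytes) -> list[dict]:
    """Split a received buffer back into individual messages."""
    return [json.loads(line) for line in data.splitlines() if line]

def serve_one(listener: socket.socket, results: list) -> None:
    """Server side: accept one performer's laptop and collect its messages."""
    conn, _ = listener.accept()
    with conn:
        buf = b""
        while chunk := conn.recv(4096):
            buf += chunk
    results.extend(decode_lines(buf))

# Local demonstration: one "performer" connects and places a sound
# element in the shared virtual space.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))          # any free port
listener.listen()
port = listener.getsockname()[1]

received: list[dict] = []
t = threading.Thread(target=serve_one, args=(listener, received))
t.start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(encode({"action": "add", "element": "drone.wav",
                           "pos": [0.3, -0.5, 1.0]}))
t.join()
listener.close()
print(received[0]["action"])   # -> add
```

In the real system each laptop would keep its connection open for the whole performance; a one-shot exchange is shown only to keep the sketch self-contained.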

  • Client

    The “performer’s” part generates arrays of sound elements. It is also possible to save and load these presets “on demand”. An unlimited number of sound elements from each performer can be placed onto a 2D projection of the 3D virtual space; nevertheless, every musician can also manipulate a sound element in all three dimensions.

    Main client capabilities in the current version:

      Loading and choosing sound files (sound elements), prepared in advance in the most popular formats
      Manipulation of sound objects (elements): playback in cycles (“loops”) or once; unlimited adding, deleting and moving in real time
      Loading and saving element collections and spatialization presets
      Space “zones”: distance and movement speed settings
      “Solo” mode
      Text “chat”
      Network latency compensation settings
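
A minimal sketch of what a client-side sound element and preset might look like. The field names here are assumptions, not the project’s real data model; the point is only to show a 3D position with a 2D projection, looped vs. one-shot playback, and real-time movement:

```python
from dataclasses import dataclass, field

@dataclass
class SoundElement:
    """One sound object a performer places into the virtual space."""
    filename: str          # pre-prepared sound file
    x: float = 0.0         # left-right position
    y: float = 0.0         # front-back position
    z: float = 0.0         # height, edited separately from the 2D view
    loop: bool = True      # cycled ("loop") vs. one-shot playback

    def projection_2d(self) -> tuple[float, float]:
        """The (x, y) point shown on the client's 2D projection of 3D space."""
        return (self.x, self.y)

@dataclass
class Preset:
    """A saveable collection of elements and their spatialization."""
    elements: list[SoundElement] = field(default_factory=list)

    def move(self, index: int, x: float, y: float, z: float) -> None:
        """Move one element in all three dimensions in real time."""
        e = self.elements[index]
        e.x, e.y, e.z = x, y, z

preset = Preset([SoundElement("bell.wav", loop=False)])
preset.move(0, 0.2, 0.7, -0.1)
print(preset.elements[0].projection_2d())   # -> (0.2, 0.7)
```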

  • Server

    The “control” part (the server itself) performs special organizational and playback functions; the most important of them, from the point of view of the musical process, are rhythmical synchronization and management of compositional aspects. It uses different methods: all kinds of timers, synchronization impulses and even a sort of “score”. Besides, on the server we research types of coordination and cooperation in diverse modes, so the server also has to manage harmonic logic according to previously prepared schemes.

    Main server capabilities in the current version:

      Connection of 1 to 20 performers (256 in theory)
      Simultaneous playback through the chosen sound system (for now, a quadraphonic system) with stereo recording to disk
      Time control: metronome, synchronization “on click”
      Composition control: shifting and changing the harmony of sound elements; changing each performer’s list of sound elements separately
      Management of element playback (including smooth fade-in and fade-out)
      Recording the performance for later analysis (the “log”)
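
The metronome and latency-compensation idea can be sketched as follows. This is a hypothetical illustration: the server stamps each synchronization “click” with the time it should sound, and each client fires early by its own measured network latency. The function names and the simple one-way-latency offset are assumptions, not the project’s actual scheme:

```python
import time

BPM = 120
BEAT = 60.0 / BPM          # seconds per metronome click

def beat_times(start: float, n: int) -> list[float]:
    """Absolute times of the next n synchronization impulses."""
    return [start + i * BEAT for i in range(n)]

def compensate(beat_time: float, one_way_latency: float) -> float:
    """Client side: trigger early so the sound lands on the click."""
    return beat_time - one_way_latency

start = time.monotonic() + 1.0                   # first click one second from now
clicks = beat_times(start, 4)                    # server's shared schedule
local = [compensate(t, 0.020) for t in clicks]   # client with 20 ms latency
print(round(clicks[1] - clicks[0], 3))           # -> 0.5 (120 BPM = half a second)
```

In practice the latency value would come from the client’s “network latency compensation settings” mentioned above, e.g. measured by a round-trip ping to the server.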

  • Analysis

    For research purposes, every single performer’s action is recorded to a file. Later we analyze this information with special software (the third module of the project). Using the “analysis” module, a researcher can play a recording back in either direction, step by step or in “real time”, and choose a particular performer and his/her musical “part”.

    Main capabilities of the analysis part in the current version:

      Playback of the “score” and of individual parts (in any combination)
      “Step-by-step” and “as is” playback
      Graphical mode to view every element and every performer’s actions
