ColorEmotion Project


Two aspects of mobile creativity with augmented reality open up the most new possibilities: virtual space and ‘real’ mobility. The system gives musicians – both composer and performer – a virtual stage with full control over it. The musician thus becomes the producer and conductor of this specific virtual concert stage. It becomes possible to place ‘virtual’ performers anywhere according to a prepared or spontaneous concept; the shape and structure of the space can be changed during the performance; and the listener can be given the ability to jump to any point of this space. Importantly, all of these processes work visually and in real time. Technically, a computer agent acts as a virtual-space generator, while the mobile device works as a navigator and/or a ‘window’ into this space. The result is a ‘personal’ virtual space that can be visualized and interacted with.



Confident, creatively motivated communication within a group of performers is a complicated goal in interactive music. We began to study the problem of communication in interactive music in our previous project, EXPLAIN. The issue is discussed in my recent article; to summarize: it is difficult for the composer to transmit, and for the performers to realize, a unified emotion at the same moment. Without emotional unity it is almost impossible to escape a narrow stylistic ‘corridor’ or to avoid a sudden loss of the performers’ potential.

In other words, in a distributed network performance (and in interactive music in general), creating a communication system for the performers becomes a high-priority goal. In this project we tried to resolve these communication problems through color-emotion pairs.

Philosophy and, especially, psychology have often studied the relationship between color and emotion. The correlation between them depends on many conditions and is treated differently across cultures, epochs, and philosophies. Nevertheless, on a basic level, ‘cold’ and ‘warm’ colors arouse different emotions, as do less and more ‘intensive’ ones. Therefore our choice of color-emotion pairs was, on the one hand, quite predictable and, on the other hand, became a specific ‘convention’ among the participants.


An analysis of major psychological studies of emotion allowed us to compile a ranked list of basic emotions.

From this list of 197 different emotions we took the top 16:

  • Fear
  • Disgust
  • Shame
  • Distraction (disappointment)
  • Anger
  • Joy
  • Contempt
  • Enmity (hatred)
  • Love
  • Surprise
  • Distress
  • Confidence
  • Sadness
  • Happiness
  • Anxiety
  • Contentment

Each marker is linked with a specific sound element (which has a particular emotional characteristic) and joined with a certain color. These color-emotion pairs were placed randomly on white sheets of paper (for better AR recognition). The performer therefore ‘moves’ from emotion through color to sound: the starting point is a color, not a marker or a sound. It is important that this ‘path’ is the opposite of Scriabin’s famous idea: from sound, through color, to emotion.
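The two lookups above can be sketched as a pair of tables. This is a minimal illustration, not the project's actual code: the color names, emotion assignments, and sound file names are all hypothetical.

```python
# Hypothetical sketch: each AR marker carries a color, the color encodes an
# emotion (the agreed 'convention'), and the emotion selects a sound element.
COLOR_TO_EMOTION = {
    "red": "anger",       # 'warm', intensive
    "blue": "sadness",    # 'cold', less intensive
    "yellow": "joy",
    "black": "fear",
}

EMOTION_TO_SOUND = {
    "anger": "sound_element_03.wav",
    "sadness": "sound_element_07.wav",
    "joy": "sound_element_01.wav",
    "fear": "sound_element_12.wav",
}

def sound_for_color(color: str) -> str:
    """Follow the performer's path: color -> emotion -> sound element."""
    emotion = COLOR_TO_EMOTION[color]
    return EMOTION_TO_SOUND[emotion]
```

The direction of the lookup is the point: the performer starts from a color, and the sound is only the end of the chain.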


The second composition is based on a work of visual art – the animated film ‘Destino’. For various reasons the original soundtrack is not very suitable for the film, so there have been several attempts to ‘re-record’ it; we did the same in our project. The main reason, however, was that creating an emotional ‘score’ (i.e., the starting point of a performance in this project’s concept) required a sequence of emotionally vivid visual images – and this film was a good fit.


One possible story in ‘Destino’ is the ill-fated love of Chronos for a mortal woman named Dahlia. A bell in the film is a symbol of time and eternity, so it becomes the only sound in the new soundtrack. The sound of a bell carries emotional information across its whole spectrum, so the sound elements linked to the markers are spectral ‘parts’ of a real bell’s sound.
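Splitting a bell sound into spectral ‘parts’ amounts to locating its strongest partials and resynthesizing each one separately. The toy sketch below illustrates the idea on a synthetic signal; the partial frequencies, amplitudes, and analysis sizes are invented for the example, and a real bell recording would need a proper FFT and peak interpolation.

```python
import cmath
import math

SR = 8000      # sample rate, Hz (illustrative)
N = 1024       # analysis window, samples

# Synthetic 'bell': a few inharmonic partials placed exactly on DFT bins
# so this toy analysis recovers them cleanly.
partial_bins = [(26, 1.0), (61, 0.6), (148, 0.3)]   # (DFT bin, amplitude)
signal = [sum(a * math.sin(2 * math.pi * k * t / N) for k, a in partial_bins)
          for t in range(N)]

def dft_mag(x, k):
    """Magnitude of the k-th DFT bin (naive, O(N) per bin)."""
    n = len(x)
    return abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                   for t in range(n)))

# Split the bell into spectral 'parts': the strongest bins, each of which
# could be resynthesized on its own and bound to one AR marker.
mags = [dft_mag(signal, k) for k in range(N // 2)]
parts = sorted(range(N // 2), key=lambda k: mags[k], reverse=True)[:3]
part_freqs = sorted(k * SR / N for k in parts)   # partial frequencies, Hz
```

Each recovered partial would then become one marker's sound element, so that showing or hiding markers assembles or dismantles the full bell spectrum.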

Other Augmented Reality (AR) projects

Digital Signal Processing


This is a specific mixer or FX console.
I intentionally avoided using advanced 3D models in order to keep the focus on the musical component of this quite complex system. In a real application it would make sense to attach each marker to a more sophisticated sound processor (of course, there can be an unlimited number of processors/markers, with the possibility of changing their functionality on the fly).
The composition (Max Richter, "Embers") was transformed using these four filters:


  • Low-pass filter
  • High-pass filter
  • Reverberator
  • Distortion


The ‘black’ marker is used to orient the other markers.
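The marker-to-effect routing can be sketched as a dispatch table over whichever markers are currently visible. This is only an illustration under assumed names: the marker ids are made up, the one-pole low-pass stands in for the real filters, and the reverberator is omitted to keep the sketch short.

```python
# Illustrative marker -> effect dispatch (marker ids are hypothetical).

def lowpass(samples, alpha=0.5):
    """One-pole low-pass: y[t] = y[t-1] + alpha * (x[t] - y[t-1])."""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def highpass(samples, alpha=0.5):
    """Complement of the low-pass: keep what the low-pass removes."""
    low = lowpass(samples, alpha)
    return [x - l for x, l in zip(samples, low)]

def distortion(samples, gain=4.0):
    """Hard-clip the boosted signal into [-1, 1]."""
    return [max(-1.0, min(1.0, gain * x)) for x in samples]

MARKER_FX = {"marker_lp": lowpass, "marker_hp": highpass,
             "marker_dist": distortion}

def process(samples, visible_markers):
    """Chain the processors of all currently visible markers, in order."""
    for m in visible_markers:
        if m in MARKER_FX:
            samples = MARKER_FX[m](samples)
    return samples
```

Because the dispatch table is just a dictionary, processors can be swapped for markers on the fly, as the text suggests.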

Augmented Musical Palette


A ‘scene’ or ‘palette’ of 16 markers, each connected with a specific sound element. By covering some of the markers (by hand, for instance), we switch off the now-‘invisible’ sounds and change the sound space interactively.
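The gating logic is simple: a sound plays only while its marker is detected. A minimal sketch, assuming made-up marker and sound names:

```python
# Hypothetical 16-marker palette: marker_00..marker_15 -> sound_00..sound_15.
PALETTE = {f"marker_{i:02d}": f"sound_{i:02d}" for i in range(16)}

def active_sounds(visible_markers):
    """Sounds to play this frame; covered (undetected) markers are muted."""
    return sorted(PALETTE[m] for m in visible_markers if m in PALETTE)
```

Covering a marker simply removes it from the tracker's visible set, so its sound drops out of the mix on the next frame.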

Virtual Music Scene


Here we have a virtual scene in which an unlimited number of performers can be placed anywhere in the space.
The mobile device stands for a listener or, from a sound engineer’s point of view, becomes a virtual microphone. Hence we can freely move the listener / microphone through the augmented-reality space: bring it closer or farther, change the viewpoint, put it inside a group of virtual performers or next to a particular performer – and, most importantly, on the fly.
We can also change style patterns in real time while leaving the performers in their ‘virtual’ places.
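One way to realize the ‘virtual microphone’ is to derive each performer's gain from the distance between the device and that performer's position in the scene. The sketch below uses a clamped inverse-distance law; the performer names, positions, and reference distance are all illustrative, and a real implementation would also handle panning and orientation.

```python
import math

def gain(listener, performer, ref=1.0):
    """Inverse-distance attenuation, clamped so gain never exceeds 1."""
    d = math.dist(listener, performer)
    return min(1.0, ref / d) if d > 0 else 1.0

# Hypothetical scene: positions in meters, listener at the device's pose.
performers = {"violin": (2.0, 0.0, 0.0), "cello": (0.0, 4.0, 0.0)}
listener = (0.0, 0.0, 0.0)

mix = {name: gain(listener, pos) for name, pos in performers.items()}
```

Moving the device updates `listener` every frame, so walking toward a virtual performer smoothly raises that performer's level in the mix.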

2011 (c) SoundWorlds.net