Capillaries Capillaries is an audiovisual composition based on a non-hierarchical and hence bi-directional relationship between sound and image. The piece is therefore neither a visualisation of music nor a sonification of an image; rather, it emanates from an audiovisual paradigm built around the idea of intertwined audiovisual interactions that I call the audiovisual tangle. The audiovisual tangle essentially functions as a compositional constraint: the limits of the visual paradigm prevent or constrain certain musical expressions, and vice versa. These limitations, on the other hand, open up many additional possibilities. One of particular interest to me, and explored extensively in the piece, is the ability of each modality to modulate the meaning of the other at the level of perception, and hence in a very abstract and subjective way, since our brains always try to fill the gaps between sound and image in order to make sense of the world out there.
Capillaries Capillaries does not try to exploit the fascination of a new technology as such, but rather exploits software and computer development to rearrange "old" things in a "new" way. Thanks to these technological advances, the whole composition was created in a (customised) real-time programming environment where the audiovisual material could be treated as an instrument that reacts instantaneously and can be played via various controllers. Sonifying the geometry and manipulating it via a 3D motion controller, for instance, enabled me to create rhythms and sounds in certain parts.
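To give a flavour of what "sonifying the geometry" can mean, here is a minimal sketch of one possible mapping: vertex heights of a (deformable) mesh become partial frequencies of an additive tone, so that deforming the geometry re-tunes the sound. The function names, mapping and constants are my own illustrative assumptions, not the actual mapping used in the piece.

```python
import math

def sonify_vertices(vertices, base_freq=110.0, spread=440.0):
    """Map each vertex's height (y coordinate) to an oscillator
    frequency: the higher the geometry, the higher the pitch."""
    ys = [v[1] for v in vertices]
    lo, hi = min(ys), max(ys)
    span = (hi - lo) or 1.0
    return [base_freq + spread * (y - lo) / span for y in ys]

def render(freqs, duration=0.5, sr=44100):
    """Sum the sine oscillators into a mono buffer (naive additive synthesis)."""
    n = int(duration * sr)
    amp = 1.0 / max(len(freqs), 1)
    return [
        sum(amp * math.sin(2 * math.pi * f * t / sr) for f in freqs)
        for t in range(n)
    ]

# A small triangle mesh: dragging a vertex upward raises its partial.
mesh = [(0.0, 0.0, 0.0), (1.0, 0.2, 0.0), (0.5, 1.0, 0.0)]
partials = sonify_vertices(mesh)
buffer = render(partials, duration=0.01)
```

A motion controller would then close the loop by displacing vertices in real time, so the same gesture deforms the image and re-tunes the sound at once.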
Special thanks go to: Dr. Alex Harker and Prof. Pierre Alexandre Tremblay for being absolutely amazing PhD mentors (*Capillaries Capillaries is a part of my PhD portfolio); Centre for Research in New Music (CeReNeM), University of Huddersfield, for awarding me a Denis Smalley Scholarship in Electroacoustic Music; the Ministry of Culture Slovenia for awarding me a scholarship for post-graduate studies abroad.
Software:
1. Max/MSP/Jitter (programming environment).
2. Ableton Live (including various VSTs and AUs).
3. I have programmed various externals for Max/MSP/Jitter in the Java programming language, as well as various Max for Live devices using the graphical programming language Max. Most of my composition is generative and hence comes from a programmed system. I have also implemented a sequencer that treats a timeline, i.e. a static temporal grid, as a deformable physical string (https://www.youtube.com/watch?v=JpdGQbkCV_o). That enabled me to link geometric deformations with temporal deformations. Since my audiovisual system spans Max/MSP/Jitter and Live, I have also programmed a system that enables off-line rendering across the two pieces of software, hence the 4K resolution and 60 FPS.
4. 123D Catch (to create a 3D model out of photos).
5. Meshmixer (to prepare a 3D model for Jitter).
6. Final Cut Pro X (only for exposure corrections, blurring and a few edits).
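The timeline-as-string sequencer mentioned in item 3 can be sketched as follows: a static grid of event times is attached to the nodes of a one-dimensional string, "plucking" a node drags an event away from the grid, and tension plus damping relax the deformation back. This is a simplified diffusion-decay model of my own; the constants and update rule are illustrative assumptions, not the actual implementation.

```python
def relax(offsets, coupling=0.25, decay=0.8, steps=1):
    """One diffusion-plus-decay step per iteration: each temporal offset
    moves toward its neighbours' average (string tension) while
    shrinking back toward zero (return to the static grid)."""
    offs = list(offsets)
    for _ in range(steps):
        prev = list(offs)
        for i in range(len(offs)):
            left = prev[i - 1] if i > 0 else 0.0
            right = prev[i + 1] if i < len(prev) - 1 else 0.0
            offs[i] = decay * (prev[i] + coupling * (left + right - 2.0 * prev[i]))
    return offs

grid = [i * 0.25 for i in range(8)]   # static 16th-note grid, in seconds
offsets = [0.0] * 8
offsets[3] = 0.1                      # "pluck": drag one event 100 ms late
offsets = relax(offsets, steps=4)
times = [t + o for t, o in zip(grid, offsets)]
```

The pluck spreads to neighbouring events and decays, so the groove bends locally and then settles back onto the grid; the same displacement signal could equally drive a geometric deformation, which is what links temporal and geometric deformation.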
Hardware:
1. DIY electromagnetic microphone that picks up fluctuations in the electromagnetic field rather than air pressure
2. Leap Motion 3D controller
3. Two Mac computers that communicate wirelessly via UDP messages
4. Apple Magic Trackpad controller
5. Modular analogue synth system (various modules from various companies)
6. Alesis Andromeda analogue synth
7. Virus TI
8. Various condenser microphones
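The wireless link between the two machines (item 3) is plain UDP messaging; a minimal sketch of that kind of exchange is below, with both endpoints running locally for demonstration. The port number and the OSC-style address/value message format are placeholders, not the piece's actual protocol.

```python
import socket

PORT = 9000  # illustrative port, not the one used in the piece

# Receiving endpoint (on the second machine in the real setup).
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", PORT))

# Sending endpoint: fire a control message, e.g. a deformation amount.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"/geometry/deform 0.42", ("127.0.0.1", PORT))

data, _ = receiver.recvfrom(1024)
address, value = data.decode().split()

sender.close()
receiver.close()
```

UDP's fire-and-forget semantics suit this kind of real-time control: an occasional dropped packet is preferable to the latency of retransmission.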