Friday, March 25, 2011

An Interview with Tarik Barri – Cycling '74


By Marsha Vdovin – March 14, 2011.

One of the really exciting things about Max/MSP/Jitter is its fluid integration of sound, visuals and effects. Tarik Barri, an innovative young artist based in Holland, is elevating the possibilities with a synergistic blend of compatible programming languages.

I spoke to him about his work with Robert Henke [Monolake] and their immersive collaboration of 360° surround visuals and audio for the RML CineChamber project.

Where did you grow up and go to school?

I was born in Arnhem, The Netherlands, and moved to Saudi Arabia when I was 5 years old, where I lived until I was 10. My mother home-taught my little brother and me during this period, which meant that we could go through all the school material at our own pace, so there was more time left for other things, like drawing and computer programming on our MSX home computer. Since I had a difficult time making friends there, the drawing and computer programming were quite essential in not getting bored out of my skull in the oppressive heat.

What is an early artistic/creative memory that you have?

I really enjoyed programming little games with racing cars, space ships and fighting kung fu masters. My brother came in very handy as my one and only beta tester. It was also during this period that I made my first computer music. After one of the chickens that we owned literally dropped dead in front of my eyes, my brother and I got a day off from school. I used this time to compose my first piece of music. I can't remember how it sounded, but my mother maintains that it was an exceptionally sad and melancholic little tune.

What did you study at school?

After returning to my home country and finishing high school, I studied architecture for one year before deciding that I didn't really care about buildings that much. After that I took up psychology, which was quite interesting, but this hobby of mine, namely creating music with my computer, just became too dominant a force in my life to ignore. I had to develop this further, so I quit psychology as well to start a degree in Audio Design at the Utrecht School of Music and Technology. There I learned about sound synthesis, acoustics, and programming in Max/MSP, C++ and SuperCollider. In my free time, I also discovered the joy of making patches not only to create sounds but also to create visuals, and combinations of the two.

Can you describe your work at the time?

When I started my study in Audio Design, I did so because I wanted to find methods for performing my music live on stage in such a way that what I was doing would actually make sense to the audience. Basically, I wanted the audience to understand my live performance of electronic music as intuitively as they understand what's going on when they see a guitar player wailing away. I wanted to get rid of the quite literal wall that the upright laptop screen creates between me, the performer, and my audience. I wanted to really show the audience what I'm doing.

During the course of my study, while learning and experimenting with Max/MSP, I discovered the wonderful joys of working with Jitter. I had been blown away by music videos of people like Chris Cunningham and Alex Rutterford. I soon became addicted to creating my own synergies between music and visuals. At some point I realized that by doing this, I was actually getting very close to my initial goal: to literally show the audience what was happening during the music creation process. Using visuals would be the way to get there.

So I set out to create an audiovisual paradigm, in which I would not merely ‘visualize’ the music, but actually show how the creation of the music itself worked — actually show where the sounds were coming from. To achieve this I had to create a system in which the visuals would come from the exact same source as the audio, just like both the audio and the visual appearance of the guitar player come from the same source of creation: the guitar player. With such a system one would both see and hear the same underlying source and develop some understanding of how this source operates; how the music is created and which mechanisms and ideas underlie it all. This might sound like I’d want people to take a very rational approach to my work, but I believe that if these mechanisms and ideas are intuitive enough, the audience may intimately ‘feel’ these concepts and mechanisms.

Out of this, your Versum project was born?

It was. I created Versum as an audiovisual composition tool. I wanted this tool to be as intuitive as possible. That meant that it had to be recognizable for the average audience and therefore resemble reality as we all know it in some basic aspects. So I created a virtual 3D world, which is seen and heard from the viewpoint of a camera that moves through space, like in a first-person shooter game. Within this space, I placed objects that can be both seen and heard, and like in reality, the closer the camera is to them, the louder you hear them. So when the camera moves past several visual objects, you simultaneously hear several sounds fading in and out. Consequently, by carefully choosing the placement of objects, the way that each of them sounds, and the way the camera travels past them, I can create melodies and compositional structures, which are both seen and heard.
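
To make that distance-to-loudness idea concrete, here is a small Java sketch. It is only an illustration, not Versum's actual code: the class name, numbers and the simple inverse-distance rolloff are all assumptions standing in for whatever curve Versum really uses.

```java
// Illustration only: per-object loudness derived from camera distance, the core
// idea behind hearing objects fade in and out as the camera passes them.
public class SpatialMixSketch {

    /** Inverse-distance rolloff; 'radius' sets how quickly a source fades out. */
    static double gainForDistance(double distance, double radius) {
        return 1.0 / (1.0 + distance / radius);
    }

    public static void main(String[] args) {
        double[] camera = {0, 0, 0};
        double[][] objects = { {0, 0, 2}, {4, 0, 8}, {-8, 3, 25} }; // x, y, z positions

        for (double[] obj : objects) {
            double dx = obj[0] - camera[0];
            double dy = obj[1] - camera[1];
            double dz = obj[2] - camera[2];
            double dist = Math.sqrt(dx * dx + dy * dy + dz * dz);
            System.out.printf("object at distance %.1f -> gain %.2f%n",
                              dist, gainForDistance(dist, 5.0));
        }
    }
}
```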

After finishing my study in 2008 I sent some emails to Robert Henke. Versum wasn’t quite as advanced as it is now and not many people really got why I was so excited about this whole concept. But I had the feeling that perhaps Robert Henke would be one of the few people who would truly understand what I was trying to achieve here.

How were you drawn to Robert?

Well, because of the texts that he wrote, the music that he made, and the fact that Versum was partially inspired by the concepts behind his audiovisual work Atlantic Waves.

So, I sent him links to my work, stating that I'd really like to work together in some way, shape or form. To my surprise, he reacted by saying that yeah, he really liked my work and perhaps we could try collaborating during his next Monolake performance in Glasgow, Scotland. He'd arrange the flights and a fee, and I'd arrange the visuals during his show using the Versum engine. So, wow, yeah, of course, I was there.

Was Versum set up to use in real-time at that point?

I prepared for this performance by further developing Versum into an environment which would be suitable for real-time 3D VJing. I added lots of shapes, movements and effects purely to expand the visual possibilities of my software. I concentrated a lot on how to control this system in real time, in order to quickly and effectively react to changes in the music. Both meeting Robert and the performance itself worked out great, and we have continued our collaboration to this day. We're continuously developing and refining the ways in which we combine music and visuals, discovering the many possibilities of this huge, exciting and largely unexplored domain.

So now Versum has grown to become a hybrid real-time tool for both purely visual purposes and for creating and performing 3D audiovisual compositions.

So, when you’re performing with Monolake, you are only handling visuals?

Yeah, for the Monolake Live performances I take on the role of VJ and use the Versum software in a purely visual way, while Robert makes the music. Since we both control our instruments live, every performance involves a lot of spontaneous decision making, which produces results that are surprising — even to ourselves!

The first time that I used Versum to create non-real-time, full-surround 3D video was with Fundamental Forces, which Robert Henke and I showed at Club Transmediale in RML's CineChamber. In order to create the non-real-time visuals, I recorded a live run-through, in which I listened to Robert's music and reacted to it in real time, using my controllers. I recorded all of my actions during this run-through and stored them in a text file, which detailed every single camera movement and rotation and every single parameter change, frame by frame. Afterwards I rendered the ten full-resolution videos for the CineChamber setup by playing back this file at a much slower tempo — which took the computer about two days — while re-executing every command that I had generated during my live run-through and storing the resulting images onto my hard drive. So essentially the Fundamental Forces video is a live video, with all of the spontaneity that this entails, but rendered with an amount of detail and a frame rate that could never be achieved live.
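
The record-then-rerender workflow can be sketched roughly as below. The log format, parameter names and frame numbers are invented for illustration; the point is only that a live run is captured as per-frame parameter changes which can later be replayed offline, where each frame may take as long as it needs to render.

```java
import java.io.*;

// Sketch of the idea behind the offline render: log every control change per
// frame during a live run, then replay the log and re-apply each change before
// rendering that frame. Format and names are invented.
public class PerformanceLogSketch {

    static void record(Writer out, long frame, String param, double value)
            throws IOException {
        out.write(frame + " " + param + " " + value + "\n");
    }

    static void replay(BufferedReader in) throws IOException {
        String line;
        while ((line = in.readLine()) != null) {
            String[] parts = line.split(" ");
            long frame = Long.parseLong(parts[0]);
            String param = parts[1];
            double value = Double.parseDouble(parts[2]);
            // In the offline pass this is where the parameter would be re-applied
            // and the frame rendered to disk at full resolution.
            System.out.printf("frame %d: set %s = %.3f, then render image%n",
                              frame, param, value);
        }
    }

    public static void main(String[] args) throws IOException {
        StringWriter log = new StringWriter();
        record(log, 0, "camera.x", 0.00);
        record(log, 1, "camera.x", 0.02);
        record(log, 1, "aggression", 0.70);
        replay(new BufferedReader(new StringReader(log.toString())));
    }
}
```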

When you expanded to 10 channels for the CineChamber project, did you need multiple linked computers to pull it off? I know that that was a huge stumbling block for Naut & the crew, when they had it based at the RML studios here in San Francisco.

The CineChamber project makes use of 10 projections, each with a resolution of 1920×1080 pixels, placed in a rectangle surrounding the audience. At first I considered using multiple fast computers to do a live performance and linking them all together by sending OSC [Open Sound Control] messages back and forth. Via these messages, the master computer would tell the slaves where the virtual camera is, where all of the surrounding objects are, etc. I had already done similar projects in the past where several instances of Versum were talking to each other via network connections, without applying any elaborate time syncing.

The real problem was that the CineChamber doesn't have any gaps between the screens, so any slight timing differences between them would immediately be seen. This implied that I would have to think seriously about accurate ways of time syncing. I discussed possible solutions with Robert Henke and he had some nice ideas, but I didn't go there, since I couldn't obtain enough computers to pull it off. I only had the possibility of using RML's single supercomputer, which handles all 10 projections. So to make sure I would get the most out of the CineChamber setup, without totally overloading the CPU or sacrificing frame rate or resolution, I decided to use Versum in a non-real-time way and render everything in full resolution at 30fps. These projections combined formed one big panoramic surround window, through which the audience could see the Versum universe all around them in 360 degrees. After providing them with the visuals, I was very happy to leave all of the actual video syncing to CineChamber specialist Barry Threw, who did an amazing job.

How was the jump from Max/MSP to Jitter?

Well, actually Max/MSP has been a springboard for many good things in my life, including Jitter, but it also introduced me to a lot of other great stuff like JavaScript, Java, Open Sound Control, shader programming and object-oriented thinking in general. But Jitter was the first of these worlds that I explored, partly because it's incredibly easy to get into.

Did you just jump into it, figuring it out as you went along?

Basically, I started out by simply opening some jit.gl.handle and jit.gl.gridshape help files and fooling around with those patches. Discovering the possibilities, and the ease with which they could be explored, was very inspiring and exciting to me. To gain a better understanding of what was really going on in this visual world, I went through all of the Jitter tutorials. They were a really helpful and engaging way to learn about the general concepts behind it.

There's no point in learning without actually putting the knowledge into practice, so I soon integrated this visual world with my musical studies as much as possible. I started using alternative ways to visually represent and structure sounds, which in turn opened up new ways of thinking about musical composition. It freed me from the more standard sequencer-style way of creating music, which is of course also largely visual, in that all sequencers have a visual interface that defines your possibilities. These interfaces determine how you approach your compositions and they strongly influence the way you think about them. Changing the interface therefore changes the music you create. My teachers didn't always get that point, but approaching music in these alternative ways had a strongly addictive draw on me. It felt, and still feels, like there is so much beauty within this domain, practically screaming to be explored, and I'm more than happy to respond to that call.

From there you started expanding to other programming environments?

After starting out with Jitter I came to a point where I wasn't satisfied with 3D shapes simply circling around each other and those kinds of movements. I wanted to make more complex movement and interaction patterns, and I wanted to interpolate presets in very specific ways. After studying the JavaScript chapters in the Max tutorials I decided to create my own JavaScript objects to implement some algorithms. I loved it. JavaScript was fairly easy to learn and it was a good way to keep my Max patches nice and clean while still being able to do such things as recursive iterations through complex datasets.

I also used JavaScript's ability to create Max objects and connect them to each other to automatically generate large portions of my patches that needed large numbers of objects, such as complex user interfaces. I could make much more complex patches than ever before, until, at some point, they really became way too complex to ever want to maintain. Besides that, I also found that some of the JavaScript algorithms were just too slow for my taste.

So, you moved on to… ?

Well, I had heard that Java was faster than JavaScript, so I decided to experiment with the 'mxj' [Max Java] object in Max, which makes it very easy to incorporate Java code in Max patches. And yes, Java was indeed several times faster. I could even do sound calculations at sample rate with it and make my own MSP objects. Still, JavaScript is great for quickly generating algorithms and it's a great introduction to text-based languages. I definitely recommend it as a language to learn and use in Max, but after you've hit the roof with JavaScript, Java is the way to go.
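
For readers who haven't used mxj: a Java class becomes a Max object by extending MaxObject from the com.cycling74.max package that ships with Max. The tiny class below is a minimal sketch, unrelated to Versum; it just scales incoming floats by a gain set on its right inlet.

```java
import com.cycling74.max.*;

// Minimal mxj object sketch: scales incoming floats by a gain set on the
// right inlet. Compile against Max's max.jar and load in a patch as
// [mxj SimpleGain]. Purely illustrative.
public class SimpleGain extends MaxObject {

    private float gain = 1.0f;

    public SimpleGain() {
        declareInlets(new int[]  { DataTypes.FLOAT, DataTypes.FLOAT });
        declareOutlets(new int[] { DataTypes.FLOAT });
    }

    public void inlet(float f) {
        if (getInlet() == 1) {
            gain = f;               // right inlet: store the gain
        } else {
            outlet(0, f * gain);    // left inlet: send out the scaled value
        }
    }
}
```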

How about C++ ?

Of course others create their objects in C++, which works even faster in many cases, but C++ code takes a lot of time and attention to detail to develop. I really like rapid development, in order to quickly arrive at the more artistic part of the process. In this respect Java works perfectly for me.

I still had to lose the habit of generating thousands of objects and patch cords without losing the ability to create complex user interfaces, so by the time I started to create Versum, I decided to use Processing as the graphical interface. This interface interacted with Max through OSC messages. Since Processing is basically the same as Java, but with a lot of really cool extra features, I only had to learn Java to develop this new method of working with Max mxj objects and Processing.

So now Versum consists of several elements: there's Max/MSP, there's Java, there are the Jitter objects, there's Processing, and at some point I even threw SuperCollider into the mix. Oh yeah, and I also used the GLSL shading language — I learned about that through the Max Jitter tutorials as well.

Whoa, that’s amazing! Could you give us an example of how you used all these different programs in Versum?

Sure…

Processing creates the user interface which contains mainly numerical controls — comparable to number boxes in Max — for changing parameters and an interactive map on which I can both control and keep track of all the elements in the space.

Max/MSP is basically the place where all of the other elements come together. It contains all the Java code within a single mxj object, as well as the Jitter objects, and it receives and sends OSC messages to and from the Processing interface and SuperCollider.

Java, within the mxj object, constitutes the real ‘brain’ behind Versum. It knows where the virtual camera is, in which direction it is looking, where the objects around it are, what their speeds are, what their relative distances to the virtual microphones are, etc.

Jitter creates all of the OpenGL 3D visuals, based on the info that the mxj object sends out of its patch cords, and it loads the shaders, making sure they are used in combination with the right Jitter OpenGL objects.

Shaders, written in the GLSL language, tell the GPU [the graphics card's processor] how to interpret the incoming OpenGL data from Jitter. They're the last programmable step before the pixels are generated on the screen. The amazing thing is, you can do really insanely complex, cool and beautiful stuff with shaders, and due to the way the GPU is structured, it works insanely fast. They have the power to change the shapes, textures and movements of two- and three-dimensional objects in amazing ways. They're my main ingredient for making sexy visuals.

And finally, SuperCollider creates all the sounds and gets its commands through OSC from the mxj object. I wanted to easily and dynamically create and delete sound objects without resorting to complex solutions using the poly~ object, so I turned to SuperCollider because of its flexibility in these matters.
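
To give a flavor of what "commands through OSC" can look like on the wire: SuperCollider's synthesis server, scsynth, listens for OSC on UDP port 57110, and an /s_new message asks it to start a synth. The Java sketch below hand-builds one such packet without any OSC library, purely for illustration; the synthdef name "versumTone" and its parameters are invented, and Versum's actual messages are presumably quite different.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Sends a single OSC "/s_new" message to scsynth on localhost:57110,
// asking it to start a synth. Names and values are invented for this sketch.
public class StartSynthSketch {

    // OSC strings are ASCII, NUL-terminated, padded to a multiple of 4 bytes.
    static byte[] oscString(String s) {
        byte[] raw = s.getBytes(StandardCharsets.US_ASCII);
        return Arrays.copyOf(raw, (raw.length / 4 + 1) * 4);
    }

    public static void main(String[] args) throws Exception {
        ByteBuffer buf = ByteBuffer.allocate(256);    // big-endian by default, as OSC requires
        buf.put(oscString("/s_new"));
        buf.put(oscString(",siiisf"));    // type tags: string, 3 ints, string, float
        buf.put(oscString("versumTone")); // synthdef name (hypothetical)
        buf.putInt(1000);                 // node ID
        buf.putInt(0);                    // add action: add to head
        buf.putInt(1);                    // target group
        buf.put(oscString("freq"));       // control name...
        buf.putFloat(440.0f);             // ...and its value

        byte[] packet = Arrays.copyOf(buf.array(), buf.position());
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(packet, packet.length,
                    InetAddress.getByName("127.0.0.1"), 57110));
        }
    }
}
```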

Looking back, is there anything you would have done differently to make the progression of and updating Versum easier?

Well, if I now look at the earlier versions of Versum, I see tons of design flaws and absurdly complex and weird solutions for what I now see were actually simple problems. When updating something that I implemented long ago, or fixing some bug, I still sometimes encounter some inexplicable mess, which I then really need to clean up before I can proceed. But I'm happy to pay this price for quickly obtaining artistic results. I guess my programming style can be described as quick and dirty at first, with a thorough cleanup afterwards.

Cleaning up is, of course, totally vital, and anyone who lazily disregards this fact will inevitably and totally get lost within the swamp of their own software. And yes, I’ve been there… it’s a scary, dark and dreadful place. But still, the quick and dirty style is a great way of immediately finding out if some theoretical concept actually works artistically. If not, the result might still lead to the development of other ideas, since I often need bad ideas first to arrive at good ones. This process of natural evolution of ideas, which can be tested, dismissed and changed on the basis of their practical results, would be much slower if I’d meticulously think everything through months in advance. The very nature of Max/MSP has encouraged me to think in this way; as soon as I think of something I can go ahead, implement it and just see what happens, even during the execution of the software.

So, you are using alternate controllers?

Yes. Amongst others, I use a Behringer BCF2000 and a SpaceNavigator. For installations, I have also used Reactable software in combination with camera input.

Do you have any tips for integration, or problems to avoid, you can pass along?

One of the most important things I've learned about using external controllers in a live context is to let go of the concept that each slider, knob or button should correspond to a single parameter in the software. One disadvantage of that way of working is that it takes quite a long time to achieve a desired result if all the involved parameters need to be set individually. This is quite a serious disadvantage because in a live context, the speed and timing with which you work are essential. Another disadvantage is that, since I can't use tons of controllers, I don't have enough of them to influence all the parameters that I'd want to influence. So this way of working, which I had employed for quite some time, made me feel both slow and quite limited.

The solution for me has been to use my controllers as meta-controllers. They don’t just control a bunch of single parameters; they control whole ranges of parameters in different ways. Sometimes even one single parameter is influenced by three sliders simultaneously.

The concept behind this is that, as a live performer, I don't care about the individual parameters any more; I care about aggression, dreaminess, melancholy: the subjective states that I can evoke in myself and the audience. So I want these states to be represented as directly as possible in my use of controllers, enabling me to react intuitively and quickly to the current situation. For instance, my 'aggression' slider influences the speed at which all objects move, the shakiness of the camera, the saturation of the colors and the roughness of the objects' surfaces.
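
That meta-controller idea can be sketched in Java as follows. The parameter names, ranges and mapping curves below are invented; only the principle that one slider drives several underlying parameters at once comes from the interview.

```java
// Sketch of a meta-controller: one 0..1 slider value drives several underlying
// parameters at once. The parameter names and mapping curves are invented.
public class MetaControllerSketch {

    double objectSpeed, cameraShake, colorSaturation, surfaceRoughness;

    void setAggression(double a) {              // a = slider position, 0..1
        objectSpeed      = 0.1 + 2.0 * a;       // everything moves faster
        cameraShake      = a * a;               // shake only kicks in near the top
        colorSaturation  = 0.4 + 0.6 * a;       // colors become more saturated
        surfaceRoughness = Math.pow(a, 1.5);    // surfaces become rougher
    }

    public static void main(String[] args) {
        MetaControllerSketch mc = new MetaControllerSketch();
        mc.setAggression(0.8);
        System.out.printf("speed=%.2f shake=%.2f saturation=%.2f roughness=%.2f%n",
                mc.objectSpeed, mc.cameraShake, mc.colorSaturation, mc.surfaceRoughness);
    }
}
```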

Of course I still also use controllers to access some single parameters which need to be precisely set on the basis of more rational choices, but in any case I believe this topic is a very important one to consider when deciding what you’re going to use your controllers for.

Anything new on the horizon we should keep a lookout for?

Oh yes, very much so. I plan to do a lot with dome projections inside planetariums and such in the near future. I'd love to do stereoscopic 3D projections. Also, sound-wise, I'll shortly be making use of speakers that surround the audience not only on a horizontal plane but also on a vertical plane, meaning that sounds won't only come from the front and behind but also from above and below. Regarding the content, I'll focus on seriously expanding the range of sounds and shapes that Versum can produce. And I will implement several new ways in which objects will move and interact with each other. I want to literally make them come to life, organize themselves, follow or flee from the camera, fight each other, perhaps even have sex with each other to produce beautiful new little baby objects… we'll see. In any case, I do plan to keep on working on this system for a very long time to come.

Tarik’s Website

Follow Tarik on Twitter

CineChamber


Text interview by Marsha Vdovin and Ron MacLeod for Cycling ’74.
3 Comments

1.
julien bayle says

Congrats Tarik and thanks a lot Cycling74 for this nice interview !

inspiring & amazing !
March 15, 2011, 6:33 am
2.
Eric Ameres says

Fantastic Interview! It was great having you and Robert here at EMPAC for the Monolake Live Surround show and lectures too!
March 15, 2011, 1:11 pm
3.
kyle says

inspiring
March 22, 2011, 7:59 am



Wednesday, March 23, 2011

KISS2011 Call for Proposals


The third annual Kyma International Sound Symposium (KISS2011) will take place from 16 to 18 September 2011 at Casa da Música, architect Rem Koolhaas' dramatic new music venue in Porto, Portugal: http://www.casadamusica.com/

Inspired by Portugal's proud history of navigators who set out to explore beyond the known and visible horizon, the theme of this year's symposium is Explorando o espaço do som (Exploring Sound Space) in honor of those who are exploring new methods, concepts, and ideas, beyond the familiar horizons in sound and music.

Call for Proposals

Universidade do Porto and Symbolic Sound invite you to join us to share your ideas, experiences and results with fellow practitioners by submitting a proposal related to this year's theme, including topics ranging from the most literal to the most abstract definitions of sound, space, and exploration.

Proposal topics could include (but are not limited to):

* Exploring in immersive environments & spatialization (literal space)

* Representations and traversals of timbre spaces (abstract sound space)

* Physical interfaces for controlling Kyma (because navigators need ships!)

* Sounds for science fiction films and games (as in "outer space")

* Using sound for exploring a data space (data sonification)

* Sound space representations (new kinds of musical scores or GUIs: maps of the sound space)

* Cognitive studies of time and space, pitch space, or other psychophysical spaces

* Using sound to evoke a space (actual or imagined)

* Modular thinking and guided explorations of unknown sonic territory

Formats can include:

* Live Kyma performances

* Live demos of Kyma projects for interactive games, films, or short form video

* Live Kyma-generated sound tracks for films

* 30-minute paper presentations

* Workshops on specific topics by power users intended for fellow Kyma practitioners

* Public lectures on the theme that would be of interest to a wider, educated and interested public audience

Priority consideration will be given to projects that involve live performances, interaction, demonstrations that include sound and/or video, and other lively interactive aspects of sound space exploration. Due to time and scheduling constraints, it is impractical to mount sound installations (symposium participants would not have enough free time to explore them, and the symposium lasts only three days); however, proposals for presentations describing and demonstrating previously mounted installations using video and/or live excerpts would be very welcome!

Although no funds are available to reimburse travel, lodging, performer salaries or materials, we would like to offer a conference fee waiver to all presenters and performers whose work is accepted for the conference. Thank you!

How to submit:

Prior to the 1 May 2011 deadline, please go to http://bit.ly/fdOI5W and fill out the submission form, which includes an abstract describing your proposal and any required resources. The proposals will be evaluated by a committee according to feasibility (duration, technical and human resource requirements) and the degree to which the proposal fits the theme and promises to be of interest to an audience of those who are actively using, or seriously considering using, Kyma in their own work. You will be contacted before 1 June 2011 to discuss whether it will be possible to include your proposal in this year's program.

The KISS2011 program will also include master classes on sound design, CapyTalk, and other Kyma topics, plus the annual demonstration of what's new in Kyma this year.

Questions? Please send email to: info.kiss2011@gmail.com

Thank you for submitting your proposals! We are really looking forward to reading them!

Wednesday, March 16, 2011

F....U ;)


yeah yeah yeah, Goodmorning F....U 2

Tuesday, March 15, 2011

HA! Life sucks..

the end of an era


the end of an era for us. it is scary, but there is nothing that can be done about it. strength is all i need with all the new things in my life; i now wish i was alone somewhere, having no connections to this world. i now see the futility of my decisions. you can't force life, life forces you. a deep resentment rests in my heart, for anything. don't really know how i am going to be in the future, but i've got a feeling it's going to be ugly. i really see no future in my current ventures.

Tuesday, March 1, 2011

BLENTER @ AZUL


Playing a mixtape composition with Anna Ballmer (viola), Thursday 17/03/11.
