Collanews


https://github.com/keijiro/Grubo – Grubo, an audio visual experience with Roland MC-101 and Unity. I’ll use this in a live performance tomorrow >> https://channel21.peatix.com/
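For anyone wondering how a hardware groovebox ends up talking to a game engine: keijiro's Minis package exposes MIDI ports as Input System devices inside Unity. The sketch below shows the general idea of catching incoming notes; whether Grubo uses this exact path is an assumption, and the handler body is purely illustrative.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Sketch of receiving MIDI notes in Unity through keijiro's Minis package,
// which exposes MIDI ports as Input System devices. The reaction in the
// handler is purely illustrative.
public class MidiNoteLogger : MonoBehaviour
{
    void OnEnable()  { InputSystem.onDeviceChange += OnDeviceChange; }
    void OnDisable() { InputSystem.onDeviceChange -= OnDeviceChange; }

    void OnDeviceChange(InputDevice device, InputDeviceChange change)
    {
        if (change != InputDeviceChange.Added) return;

        var midi = device as Minis.MidiDevice;
        if (midi == null) return;

        // Fired for every incoming note-on message.
        midi.onWillNoteOn += (note, velocity) =>
            Debug.Log($"Note {note.noteNumber} on, velocity {velocity}");
    }
}
```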

Roland Groovebox MC-101 and Unity

Implemented a variable resolution test pattern generator as a custom post processing effect https://github.com/keijiro/Kino

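For readers curious about the mechanics: a custom post-processing effect re-renders the finished frame through an effect material. Kino targets Unity's scriptable render pipelines, which use a different entry point, but the built-in pipeline version below shows the general shape; the shader and its _Divisor parameter are placeholders, not part of Kino.

```csharp
using UnityEngine;

// Sketch of a custom post-processing effect in the built-in render pipeline:
// the component intercepts the rendered frame and re-blits it through an
// effect shader. The shader and its _Divisor parameter are placeholders.
[RequireComponent(typeof(Camera))]
public class TestPatternEffect : MonoBehaviour
{
    [SerializeField] Shader _shader;                 // placeholder effect shader
    [SerializeField, Range(1, 64)] int _divisor = 8; // resolution divisor

    Material _material;

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        if (_material == null) _material = new Material(_shader);

        // The shader would quantize UVs by this factor to emulate a
        // lower-resolution output, i.e. a variable resolution test pattern.
        _material.SetFloat("_Divisor", _divisor);

        Graphics.Blit(source, destination, _material);
    }
}
```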

Blurring the boundaries between technology and the individual – Exclusive interview with Shoeg.

By Hayley Cantor







After seeing Shoeg’s project Infiltrate at LEV Matadero, we caught up with him in Barcelona to find out more about his work, to try to decipher the fascinating performance we had seen, and to learn which technologies he uses to create his live AV shows.

Primarily, I understand you would consider yourself to be a musician, am I right? Or how would you label yourself? And when did you decide to experiment with the A/V side of your show?

In recent years I’ve changed that way of seeing myself, so I would say I’m an artist. It’s not only sound anymore; I really feel that I’m trying to express myself through my code, my visuals, even my movements. I also collaborate with dance companies, where it’s quite important to know how you move on stage, and that made me aware of it. So, for example, I try to play without the table and computer blocking the visual line to the audience. I’ve also changed my relationship with sound, focusing more on textured layers instead of pitch.

I started as a “musician”, but my visual side has always been there. I’ve been working for 15 years as a video editor, and I’ve always been fascinated by the synchronicity and feedback between image and sound.

Image from Shoeg’s project – Oudeis

Have you created the visual part of the show yourself, or did you collaborate with a visual artist? (If so, who and why?) If not, tell us how you developed the project and about any challenges you faced in handling both elements of the performance.

I almost always create my own stuff. I’m not closed to collaborating with other people, but I’ve tried to involve other artists in the past and for one reason or another it almost never happened, except at the very beginning of the project, when I worked with Ana Drucker. After that I spent two or three years without a visual show, and I really missed it. At some point I wanted it back, and I decided I had to refresh my coding knowledge to achieve what I wanted. I had studied Computer Science for a couple of years, so at least I had a starting point, more or less.

I wanted to build a real-time reactive visual system that could be completely autonomous in a live set. The idea was to set up a bunch of rules and create something sound-reactive that could last 45 minutes in a live set without getting boring. So the first challenge in this process was choosing the tools that suited my needs best. I tried openFrameworks, for example, which was a bit too complicated for my coding skills. Later I discovered game engines like Unreal and Unity, which are free, let you do a lot through scripting, and are easier to code for. It’s also great to have so much documentation and work by other people available online. I’m curious now about what TouchDesigner can do, but for the moment Unity gives me precise control over what I need.

Image from Shoeg’s project – Container

On the other hand, I wanted to work with objects from the real world in a 3D aesthetic. I could have modelled them in Blender, but I had no idea how. So I learned some capture techniques, like photogrammetry and 3D scanning. I remember wanting something more “perfect”, but I discovered almost by accident the beautiful imperfections these techniques introduce into the models.

We recently saw the performance of your latest project ‘Infiltrate’ at LEV Matadero. What tools and setup are you using for the show?

All the sound is generated using a couple of Etee sensors that the guys at Tangi0 lent me for a couple of months. These devices capture my hand and finger motion, as well as pressure data, which is converted into MIDI through a Max/MSP patch. Finally, the MIDI is sent to the Virus and the Digitakt. I had to bring hardware synths to the live sets because I need a lot of polyphony to build these big layers of sound, and I couldn’t achieve that with virtual synths. The visuals, then, are a Unity app that reacts to the sound mix.
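As a rough illustration of the sound-reactive side: Unity can tap the output mix directly with AudioListener.GetSpectrumData and drive any parameter from it. This is a minimal sketch assuming the mix reaches Unity’s audio listener; the gain, smoothing, and scale mapping are illustrative, not Shoeg’s actual code.

```csharp
using UnityEngine;

// Sketch of a sound-reactive parameter: sample the output mix's spectrum and
// map low-frequency energy onto a transform scale. Constants are illustrative.
public class AudioReactiveScale : MonoBehaviour
{
    readonly float[] _spectrum = new float[512];

    [SerializeField] float _gain = 50f;    // how strongly bass drives the scale
    [SerializeField] float _response = 8f; // higher = snappier response

    float _level;

    void Update()
    {
        // Spectrum of everything currently heard by the audio listener.
        AudioListener.GetSpectrumData(_spectrum, 0, FFTWindow.BlackmanHarris);

        // Average the lowest bins as a rough bass-energy estimate.
        var bass = 0f;
        for (var i = 0; i < 8; i++) bass += _spectrum[i];
        bass /= 8;

        // Smooth so the visual doesn't flicker frame to frame.
        _level = Mathf.Lerp(_level, bass * _gain, Time.deltaTime * _response);

        transform.localScale = Vector3.one * (1 + _level);
    }
}
```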

Infiltrate at LEV Matadero, photo by Hayley Cantor

How does the use of this technology improve, or add to, the quality and experience of your show for you as an artist?

It allows me to express myself in ways I could never have imagined. I had never performed as comfortably, or with as wide a palette of possibilities, with an instrument until I discovered motion sensors combined with the computer. The ability to map any behaviour to any response lets you optimise your abilities to get what you want. That can never happen with “traditional” instruments; you have to adapt to the instrument’s rigidity and background. I also see the coding process as a prosthesis, an extension able to repeat mechanical operations while you pierce through them.

What does the future hold for Shoeg in the world of live performance?

In the near future I have to improve a lot of things: I want to make my hands more prominent on stage and become less computer-dependent. People keep asking what is happening with the sensors, and I want to make it a bit more understandable. I also have a long list of ideas to code that I don’t have time to make, and I would also like to collaborate with other people. But before that, I want to record a new album. I hope I’ll be able to work on it in the coming months.

You can find out more about Shoeg’s work through his artist page.

The post Blurring the boundaries between technology and the individual – Exclusive interview with Shoeg. appeared first on Audiovisualcity.

Added a block noise glitch effect to the custom post processing effect collection https://github.com/keijiro/Kino
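For reference, the core of a block-noise glitch is simple: quantize screen UVs into blocks, hash each block per frame, and displace the blocks that exceed a threshold. In Kino this logic lives in a shader; the sketch below expresses the same idea in C# with illustrative constants.

```csharp
using UnityEngine;

// The core of a block-noise glitch, expressed in C# for clarity; in Kino the
// equivalent logic runs in a shader. Pixels are grouped into blocks, each
// block gets a per-frame hash, and blocks over a threshold are shifted.
public static class BlockGlitch
{
    // Cheap stateless hash: the same block and seed always map to the same
    // pseudo-random value in [0, 1).
    static float Hash(Vector2 block, float seed)
    {
        var n = Vector2.Dot(block, new Vector2(12.9898f, 78.233f)) + seed;
        return Mathf.Abs(Mathf.Sin(n) * 43758.5453f % 1f);
    }

    // Given a screen UV, return the UV to actually sample this frame.
    public static Vector2 DisplaceUV(Vector2 uv, float blockSize,
                                     float threshold, float seed)
    {
        var block = new Vector2(Mathf.Floor(uv.x / blockSize),
                                Mathf.Floor(uv.y / blockSize));

        var h = Hash(block, seed);

        // Only blocks above the threshold glitch; shift them horizontally
        // by an amount proportional to how far over the threshold they are.
        if (h > threshold) uv.x += (h - threshold) * 0.5f;

        return uv;
    }
}
```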


Shoeg (AKA Carlos Martorell)

By Hayley Cantor


Carlos Martorell is a sound and visual artist based in Torelló, a small Catalan town near Girona. His work focuses on the symbiotic relationship between humans and technology. He uses his programming skills and knowledge of new technologies to explore image and sound through experimental music and live AV.

Carlos Martorell by Xavi Casanueva

He creates sumptuous virtual worlds through programming in environments such as Unity, as well as with 3D scans. It’s not uncommon to see him performing with non-traditional MIDI equipment, using apparatus such as gloves and hand-held devices, which adds a peculiar physical dimension to his live shows.

SHOEG: Website
LINK BOX: Soundcloud / Bandcamp / Instagram / Youtube / Resonate


Header photo © Hayley Cantor

The post Shoeg (AKA Carlos Martorell) appeared first on Audiovisualcity.

Unity acquires French company Obvioos and its 3D streaming platform

By Shadows

Unity Technologies has announced the acquisition of Obvioos, a French company based in Lille. Until now, Obvioos mainly offered architectural visualization services, but it also built a 3D streaming platform called Furioos, which we have covered before. It is obviously the latter that motivated the deal.

As a reminder, Furioos hosts a 3D application in the cloud; a user can then run the application remotely, without any local installation, with the frames rendered on Furioos’s servers. In short, it is the equivalent of cloud gaming, but for more general 3D uses, such as a virtual tour.
Furioos lets an application scale to the number of visitors in real time, and it embeds into a website as simply as a YouTube video, which makes deployment much easier.

Note that Furioos had also drawn interest from Epic Games: the platform even received support from the publisher’s grant program.

Video presentation of Furioos, published about a year ago.
Below: an example of a Furioos integration. The demo runs on the service’s servers, not on your GPU.

Christophe Robert, co-founder and president of Obvioos, confirms to us that despite the acquisition the platform will remain agnostic: closing the door to Unreal applications, for example, is out of the question. Conversely, joining Unity will let the platform take off, he explains.
While, as tradition dictates, he could not share details about the purchase price, he did tell us that for now nothing changes for Furioos: pricing and integrations remain the same.
The Obvioos team will also stay in Lille, though it will move to larger premises. Finally, even if Furioos will be the team’s main focus, they will keep working on architectural visualization, though probably in a different form.

We will of course keep you informed about Furioos’s future developments.

The article Unity acquires French company Obvioos and its 3D streaming platform appeared first on 3DVF.

Added VFX property/event binder classes for the new Input System to the VfxExtra repository. https://github.com/keijiro/VfxExtra
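For context, a VFX property binder is a small component that pushes a value into an exposed Visual Effect Graph property every frame. Below is a minimal sketch of a binder driven by an Input System action; class and property names are illustrative, not necessarily those used in VfxExtra. Attached next to a VFXPropertyBinder component, a class like this shows up in the binder list alongside the built-in ones.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.VFX;
using UnityEngine.VFX.Utility;

// Sketch of a VFX property binder fed by an Input System action. Every frame
// the binder reads the action's float value (a fader, trigger, or a MIDI CC
// via Minis) and writes it to an exposed property on the Visual Effect.
// Names are illustrative, not necessarily those used in VfxExtra.
[AddComponentMenu("VFX/Property Binders/Input Action Binder")]
[VFXBinder("Input/Input Action")]
public class InputActionBinder : VFXBinderBase
{
    // Exposed float property on the VFX Graph asset.
    public ExposedProperty property = "Intensity";

    // Action to sample; bind it to any control in the inspector.
    public InputAction action;

    public override bool IsValid(VisualEffect component) =>
        component.HasFloat(property);

    public override void UpdateBinding(VisualEffect component)
    {
        if (!action.enabled) action.Enable();
        component.SetFloat(property, action.ReadValue<float>());
    }
}
```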

I also added a MIDI-out sample script to the RtMidi repository (a MIDI backend used in the MIDI input plugin). So now, you can not only receive input but also control external MIDI devices from Unity. https://github.com/keijiro/jp.keijiro.rtmidi

I implemented a custom VFX property binder that binds an input action to a VFX property. Now you can control a VFX from a MIDI controller without scripting. https://github.com/keijiro/VfxMinisExamples
