The Live Cinema Festival 2022 will take place from 22 to 25 September at the Palazzo delle Esposizioni in Rome. This is the ninth edition of the leading international festival dedicated to Live Cinema, an experimental narrative technique applied to video performance that creates unique hyper-sensory shows where sounds listen to images.
12 live AV performances will be presented in Italy; in addition, conferences, screenings and symposiums will create an immersive experience.
The full programme is available on the Live Cinema Festival website.
We've got a great piece that acts as a primer to odd time signatures. Specifically, we'll talk about how drum machine users can program their devices to play with some of the odder time signatures out there.
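As a quick taste of the idea (our own illustration, not taken from the DJ TechTools post): on a step sequencer, an odd meter such as 7/8 simply means a pattern length of 7 steps instead of the usual 8 or 16. A minimal sketch in Python:

```python
# Hypothetical helper for illustration: builds a one-row step-sequencer
# pattern of any length, which is all an "odd" meter really requires.
def make_pattern(steps, hits):
    """Return a sequencer row of `steps` cells, with 'x' at the hit indices."""
    return ["x" if i in hits else "." for i in range(steps)]

# A 7/8 pattern with a 2+2+3 grouping: accents on steps 0, 2 and 4.
seven_eight = make_pattern(7, {0, 2, 4})
```

On hardware, the equivalent move is usually just shortening the pattern length from 16 to 14 (or 7) steps.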
The post Breaking the Four by Four: a primer on playing in odd time signatures with drum machines appeared first on DJ TechTools.
We were at @nouvelle_vague in @villedesaintmalo on Friday and Saturday. What a venue, what a team and what an audience! THANK YOU, THANK YOU, THANK YOU.
Looking forward to seeing you again when the opportunity arises.
Pictured: our @pierrealfredeberhard during the lighting setup for ANiiMA, assisted by Martin Mignon on technical direction, who doesn't have Instagram but is no less brilliant.
As it does every year, the Disney Research team is using SIGGRAPH to present its latest technological advances.
The publication Facial Hair Tracking for High Fidelity Performance Capture caught our attention: performance capture without shaving!
Until now, typical production pipelines did not handle facial hair. 3D reconstruction and tracking techniques are designed for conventional surfaces, not for thin, partially occluded hairs.
Actors were therefore asked to shave off beards and moustaches before a capture session, which can cause multiple problems, from a star refusing to alter their appearance to contractual obligations on other films: remember the Justice League reshoots with Henry Cavill, who was contractually required to keep his moustache for Mission Impossible: Fallout, which he was shooting at the same time. More broadly, this constraint meant performance capture had to happen before or after the shoot, never during it.
In short: it was a serious constraint.
Note that eyebrows also hide part of the face: a solution that handles beards and moustaches could therefore be useful even for freshly shaven faces.
This is where Disney's research project comes in: the first facial performance capture method able to work with facial hair.
To achieve this, Disney splits the capture into two parts, skin and facial hair, which are then brought back together.
In practice, skin tracking and hair tracking are coupled and processed alternately, with multiple refinement passes: hair tracking improves the accuracy of skin tracking by adding a constraint, since hairs are necessarily rooted in the skin: they can neither float above it nor sit "inside" it.
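To make the coupled scheme concrete, here is a loose, hypothetical toy model (our own sketch, not Disney's published algorithm): skin and hair estimates are refined in alternation, and the attachment constraint is enforced by re-projecting every hair root onto the current skin surface.

```python
import math

# Toy sketch of alternating skin/hair refinement -- NOT Disney's actual
# method. The "skin" here is just a sphere; real pipelines use a face mesh.

def project_to_surface(point, center, radius):
    """Snap a 3D point onto the sphere surface (the attachment constraint)."""
    d = [p - c for p, c in zip(point, center)]
    norm = math.sqrt(sum(x * x for x in d)) or 1.0
    return tuple(c + radius * x / norm for c, x in zip(center, d))

def alternate_refine(center, radius, hair_roots, observations, n_iters=5):
    for _ in range(n_iters):
        # 1) Skin update: nudge the surface toward the observed points, with
        #    the hair roots acting as extra anchors (the added constraint).
        anchors = list(observations) + list(hair_roots)
        mean = [sum(p[i] for p in anchors) / len(anchors) for i in range(3)]
        center = tuple(0.9 * c + 0.1 * m for c, m in zip(center, mean))
        # 2) Hair update: re-project every root so it can neither float
        #    above the skin nor sink "into" it.
        hair_roots = [project_to_surface(r, center, radius) for r in hair_roots]
    return center, hair_roots
```

The point of the sketch is the structure, not the numbers: each pass tightens one estimate using the other, which is why the paper's refinement converges toward mutually consistent skin and hair tracks.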
The image below gives an idea of the effect of this refinement: at bottom left, the initial estimate of the face and reconstructed hairs; at bottom right, the refined result.
The presentation video gives an idea of the results, which are quite convincing even if the hairs are not always tracked perfectly.
Finally, Disney points out that its pipeline makes it possible to edit the beard or moustache: by changing the style or length of the facial hair in the neutral pose, these changes can be propagated across the entire performance. Examples can be seen at the end of the video (4 minutes 34 seconds).
In other words, you can turn a goatee into a moustache, make a beard fuller, or thicken or thin a moustache. Or, of course, remove beard and moustache entirely.
Enough to delight studios facing situations like the one Henry Cavill experienced.
As always in research, there are limitations and possible improvements. For instance, Disney has not tested very large beards, which hide more of the face, and the method is likely to give poor results if skin and hair colors are too similar, for example dark skin with a black beard.
Even though Disney does not mention it, one may wonder whether this kind of approach could also improve motion capture for animals such as cats and dogs. A lead to explore for SIGGRAPH 2023, who knows?
The full publication, with more details on the method, its implementation and its limits, is available from Disney.
Facial Hair Tracking for High Fidelity Performance Capture est une publication de Sebastian Winberg (DisneyResearch|Studios/ETH Zurich), Gaspard Zoss (DisneyResearch|Studios/ETH Joint PhD), Prashanth Chandran (DisneyResearch|Studios/ETH Joint PhD), Paulo Gotardo (DisneyResearch|Studios), Derek Bradley (DisneyResearch|Studios).
The article "Performance capture: for Disney, beards are no longer a problem" appeared first on 3DVF.
Foxtel chooses largest TAG monitoring solution in the southern hemisphere from Magna Systems & Engineering
SYDNEY, 4 July 2022 – In 2020 Foxtel started a project to exit the aging television centre facility at Macquarie Park. The plan involved migrating their entire technology platform across two sites in Sydney to an all-IP platform encompassing video routing, ...
The post Foxtel chooses largest TAG monitoring solution in the southern hemisphere from Magna Systems & Engineering appeared first on Broadcast Beat - Broadcast, Motion Picture & Post Production Industry News and Information.
TAG's Newly Designed API Ties it All Together for Streamlined and Unified Control of Every Capability in the Company's Multi-Channel Monitoring System
New version will be showcased in Booth W 3517 at NAB
Tel Aviv, Israel – April 20, 2022 – TAG Video Systems is introducing a newly designed Application Programming Interface (API) built to ...
The post TAG’s Newly Designed API Ties it All Together for Streamlined and Unified Control of Every Capability in the Company’s Multi-Channel Monitoring System appeared first on Broadcast Beat - Broadcast, Motion Picture & Post Production Industry News and Information.
TAG Introduces Redis Integration Enhancement
Delivers Comprehensive Data in an IT Open Environment for Deeper Analysis and Workflow Customization
Tel Aviv, Israel – April 7, 2022 — TAG Video Systems, the leader in real-time media performance, software-based deep monitoring of linear video workflows, has announced that its multi-level Realtime Media Performance (RMP) Platform now integrates ...
Fans can now get a clearer, real-time view of what’s happening on the ice through the NHL’s new UHD-enhanced video production pipeline. NHL’s recent technology infrastructure update, which includes the addition of several AWS Elemental Link UHD cloud contribution encoders across 32 NHL arenas, provides innovative viewer experiences. The NHL’s new Link UHDs make it ...
@u.machine digitalart #dance #mapping #videoprojection #art #picoftheday #instadance #audiovisual #installation #umachine #artist #audiovisualperformance #show #interactiveart #generativeart #madewithsmode #millumin #realtimevideo
Reposted from @paolo.morvan: "Detail of the set design created for the musician @romain__muller
Construction and lighting design with @julesbouit
@zikamine @bliiida @laregiongrandest
#scenographie #scenography #show #concert #singer #wood #structure #light #lightray #designproject #design #eventdesign #stagedesign #stage #musicperformance #music #audiovisual #liveav #filter #3m #millumin #led #ledlights #lightart #digitalart #installation”
Three years ago already… our first experiment in tracking and generative content. The setup we developed looks ugly and strange now… but it felt right at that moment )))
Huge thanks to the #millumin creators for the great software and the fast, attentive support
#millumin #projection #performance #show #generative #dance #vj (at Shanghai, China)
It all began in 2009. After quitting my job as an engineer, I moved to Madrid to study for a Master's in Motion Graphics while getting my first experience in the world of video art.
In 2010 I started several collaborations with art organizations in Madrid, and they asked me to film a dance performance by the choreographer Iratche Ansa.
The performance was held at the Matadero in Madrid. From that recording I put together my first video dance, "Comunicación Interpretación Automática", which was very well received.
A few months later I made my first dance pieces with live visuals with choreographer Barbara Fritsche.
Thanks to those projects I was able to work on the musical “Hoy No Me Puedo Levantar” in 2013. In 2014 I directed my first proper theatre show: “Girasomnis”.
I try to "connect" the visuals with the dancers. Sometimes I encourage the dancers to "connect" with or follow my visuals.
In this project I also composed the music, which is very useful as it gives me better control over the creative process. With this project I tried to evoke feelings in the audience without words: just images, dance and instrumental music.
The quarantine brought my work to an abrupt stop, but it also gave me time to start imagining something new.
The idea was born during the first week of confinement. At the start, it was simple: I just wanted to publish some of our best projects and make them public. But I also felt a need for change.
For the last three years I had been quite disconnected from my artistic side, working mostly on commercial projects.
I was just focusing on making money to pay my bills and trying to keep a stable team for audiovisual production. The outbreak of Covid-19 brought an absolute shift in our work. We started questioning whether we could do our shows as we did before.
So, in April I started to visualize and write a synopsis of this new project. I then decided to publish “Dance Mapping Virtual Tour 2020” as a memorandum of all these years of physical shows.
This new production is planned to be released in VR and physical 360 projection format in late 2021.
When I have the budget, I can work with some excellent audiovisual freelancers from my network of collaborators. Failing that, I work alone.
I also work with very talented dancers and choreographers from Barcelona. Over the years they have come to understand my ideas and transform them into beautiful choreographies.
I mostly work in one of two ways. Either I compose a music draft and then work on the visuals and choreography, or vice versa: I make a draft of the visual content with a draft choreography and try to match the sounds and music to it.
Sometimes I give the dancers leeway, so they can create their own choreography, and then I create the visual content following their movements.
In recent years I have also worked with some talented musicians to speed up audiovisual production.
This was a concept by Roman Torre. In 2015 I shared a space with him and we collaborated on a video mapping of a rotating stone, a nice project called Liquid Series.
In video mapping I have also tried to develop innovative concepts, different from the typical big projection on a building facade. Two years ago I started developing the concept of "Holomapping". I am planning to finish it next year as well.
Nowadays it is possible to learn a lot from online courses, but it is always better if somebody guides you. As with everything in life, the best way to learn is practice: making mistakes and improving.
Spain is not the best country for the arts, I would say. As far as I know, artists in France and other European countries receive more grants and support from their governments, but everything is possible if you are passionate about your work.
From immersive VJ sets to operatic projection mapping, from live AV graffiti to cutting-edge interactive installations, Cosmic Lab always manages to captivate the audience and blow the minds of even the most seasoned AV experts.
As we said: no fear of experimenting. Here we see a very interesting fusion between hip hop and audiovisual culture.
At the opening event of MAGNET by SHIBUYA 109 “ShibuGekiSai”, Cosmic Lab and Doppel collaborated on a performance combining live painting and video projection.
The 3D video cubes are animated in motion graphics by the audiovisual artists guiding the graffiti artists on the patterns they will fill with their spray cans.
The DJ spins tunes throughout the performance linking graffiti and projection through the overall hip hop groove.
An audiovisual feast and a once-in-a-lifetime experience to celebrate the 1200th anniversary of Koyasan, the center of Shingon Buddhism.
We see something truly remarkable and unique: a fusion of the vibrating tones of Buddhist chant, Japanese drums and elaborate projection mapping.
Driven by the music and the latest AV technologies, the great Pagoda comes alive. The result is spectacular, mesmerizing and sumptuous, honoring the ancient tradition of Japanese Buddhism.
Here Cosmic Lab went a few steps further, reinventing the way live AV performances are made through a new tool called QUASAR.
It loosely reminded us of the Reactable Machine, developed in Barcelona in 2003 to make music through physical interaction.
In this next-generation AV instrument, each musical measure is not interpreted in a linear fashion, but as an endless loop.
The tangible interface also gives physical structure to the AV content, making it possible to build rhythm and layers in a whole new, intuitive, physical way. Impressive!
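As a rough illustration of the "measure as endless loop" idea (our own sketch, not QUASAR's actual implementation): with modulo indexing over a fixed pattern, playback simply wraps around instead of ever reaching the end of a measure.

```python
# Hypothetical sketch: a measure treated as an endless loop. Any tick count
# can be rendered from a short pattern because indexing wraps with modulo.
def looped_measure(pattern, n_ticks):
    """Play `pattern` for n_ticks, wrapping seamlessly at the measure boundary."""
    return [pattern[t % len(pattern)] for t in range(n_ticks)]

# A 4-step pattern played for 10 ticks wraps around without a seam.
steps = looped_measure(["kick", "hat", "snare", "hat"], 10)
```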
It is interesting to note the wide range of categories, not necessarily related to the new media arts world. The inclusion of animation and manga highlights their wide recognition as special forms of art in the Japanese scene.
The Japan Media Arts Festival offers opportunities for young and emerging artists as well as recognized professionals, making the audiovisual event a turning point for artistic innovation and excellence every year.
The Japan Media Arts Festival has been awarding prizes to outstanding artistic works since its establishment in 1997.
It is supported by The Agency of Cultural Affairs, Government of Japan to develop and promote the creation of Japanese and international media arts.
Through the annual Exhibition of Award-winning Works, the festival has offered audiences the opportunity to appreciate these celebrated works directly.
Attendees are also invited to take part in side events such as symposia, screenings and artists' showcases.
One of the first artists to investigate the relationship between music and imagery was Kandinsky. He explored how different shapes and colours relate to each other, communicating different movements to the viewer, just as different notes, rhythms and musical patterns relate to our inner soul, triggering different emotional movements.
The Japanese artist (currently living in Berlin) has an interesting background in music composition, computer programming and multimedia art. Arai's generative art is highly complex and goes beyond the audiovisual genre, taking on a profound reflection on the universe and its structure based on vibrations, as posited by string theory.
The result is an audiovisual duet between human and machine, the two elements constantly learning from each other in his ever-evolving investigation. Ultimately, Tatsuru uses sound and its visualization as a key to experiencing the nature of the universe, even if just a small part of it.
Immersive installations, interactive artwork and live performance communicate complex statements through sensorial experience, giving us the chance to digest information slowly, at our own pace. Generative art as an antidote to the overwhelming data dump we experience everyday.
Due to the uncertainty around in-person attendance, the organizers are planning the audiovisual event on virtual platforms. It will therefore be accessible to a wider audience through the experimental use of new technologies.
Patchlab will present art projects online and via AR (augmented reality). There will be experimental computer animations presented in the virtual cinema, remotely accessible workshops and also a dystopian multi-person computer game allowing the exploration of a post-apocalyptic New York. There will also be AV NIGHT, during which we will see unique audiovisual projects in 360° format.
All audiovisual artists are invited to submit their project proposals for this year’s program to be implemented online in an unexpected way.
Proposals can only be submitted via the online form.
The 2020 theme reflects on our primitive status within the foundations of the new hyper-informational world, where the data flux is absorbing all of existence, reaching the status of a god.
Is technology serving us, or are we serving the data-totem by handing over our most sensitive information, giving up our privacy for a greater good?
Algorithms, already present everywhere in the digital realm, read us better than we read ourselves, better than our friends and siblings; and in the name of optimizing our virtual experience, we are gradually letting them make decisions for us, filter our perceptions, and predict our behavior, our biometrics, our emotions.
All manifestations of culture can now be experienced digitally, translated into a language (code, DNA) and stored for anyone who possesses it to experience, regardless of circumstances. Markets and money are transfiguring into intangible algorithmic byproducts. Everything serves the information flow.
The post ATHENS DIGITAL ARTS FESTIVAL: 10 July – 10 September 2020 ONLINE appeared first on Audiovisualcity.
Rafael has been living in Asia for 10 years, merging his European artistic and sociocultural background with Asian aesthetics and political issues, especially within South Korea.
His academic background in photography heightened his sensitivity to the image, with special reference to the body within space.
This is an extract of a live performance named “BOM” which is part of an ongoing day-by-day project made in Korea: KYOULBOMYOELEUMGAEUL.
The performance took place at the SEMA: Seoul Museum of Art. KYOUL means Winter in Korean, BOM is Spring, YOELEUM is Summer and GAEUL is Autumn.
The four-season piece is a video reportage of Rafael's experience in South Korea, presented in chronological order through different audiovisual mediums such as live cinema performance, installation and screening.
In creating art installations and performances using sensor technology, she strives to explore the importance of human relationships and connections.
Park is a recipient of the New York Foundation for the Arts Fellowship. Her works have been featured by Art21, Artnet, The Creators Project, New York Times magazine, Wired, PBS, Time Out NY, the New York Post, and through many other media outlets.
She received a BFA in Fine Arts from Art Center College of Design and her Master's from the Interactive Telecommunications Program at New York University's Tisch School of the Arts.
It highlights the importance of human presence and physical connection in our lives. It cannot bloom alone; it only blooms through the relationship between people. In response to participants' skin-to-skin contact, heart rate and gestures, "Blooming" blossoms according to their intimacy. As audience members hold hands or embrace, the digital cherry tree's flowers bloom and scatter.
ONLINE, 28 – 29 May 2020
As we slide into the new normal, or la nueva normalidad, it is inevitable that the AV world will gain considerable visibility during the pandemic, as technology plays an important part in everything that we do. A surge of online events, meetings and live streams now fill up our diaries like they are going out of fashion, and meeting up with your mates down the pub for a pint after work is so 2019.
Enter the evolution of user generated entertainment platforms like Twitch, which now boasts 17.5 million average daily visitors. Resident Advisor has invented its own virtual island Streamland where all virtual events that have been successfully submitted to RA exist. And MelodyVR brings the artist even closer to the fan through some very high spec virtual reality streaming experiences. Did somebody say Zoom quiz?
The drive for innovation and exploration in the world of audiovisual art and culture is again on the rise, opening up in new forms. Which leads me onto the question about interdisciplinary artists and institutions who challenge the status quo and dare to oppose the mainstream. Where are they and what is their artistic response to the pandemic?
I give you BODY (UN)MUTE: a two-day online festival curated by Bogomir Doringer and hosted by ICK Dans Amsterdam that looks into the rituals of dancing and masking in times of social distancing. The audiovisual event will deliver a programme of workshops, talks and performances from all corners of new media, dance and conceptual art. But how can these rituals take place in an online space?
“Technology has been around forever, but most people are not familiar with the basics of streaming. Porn channels and video gaming platforms are way ahead of time and up until now artists haven’t really engaged with it, which makes it harder to get a certain quality that produces something more than just a Zoom call. I have been following the ritual of masking since 9/11 with my project Faceless – Re-inventing Privacy Through Subversive Media Strategies. What is the role of this in contemporary times? BODY (UN)MUTE is a physical representation of Faceless and my art exhibition Dance Of Urgency, which explores how dance and ritual rise in times of personal and collective crises, and how it can empower individuals and groups. In amongst a global pandemic both these ideas live together and that is why I want to explore this space with new media artists”– Bogomir Doringer
Some highlights come in the form of Famous New Media Artist Jeremy Bailey who wants you to join his Augmented Reality Makeover Party where step-by-step you can learn how to perfect your own Augmented Reality (AR) digital mask and alter ego. Transgress and queer-up your identity, become a drag unicorn or whatever else you can imagine!
Rosa Menkman, an art theorist and visual artist specialising in glitch art and resolution theory, will screen her work Pique Nique pour les Inconnues :: The CHORUS VERSION (2019-2020). The video looks at various unknown women whose images are linked to the history of image processing. While these women seem to be able to prolong their existence for as long as the (digital) realms will copy and reuse them, most of them have lost their name and identity.
Live performance comes in the form of Keren Rosenberg and Nicola Cavalazzi, who will present an audiovisual art installation exploring our social obsession with self-exposure through the use of modern technology. Together they will question what it means to perform in front of a camera: where does the body finish and the screen start?
Dr. Kelina Gotman talks about how choreomania, the manic craze for dance, is not just a by-product of lockdown. Choreographer Emio Greco will elaborate on the pizzica, a dance from his native Puglia that was danced to heal oneself from the bite of a poisonous spider. And Shanghai Radio will close the two-day event, giving us an insight into how creativity, music and online streaming kept the Chinese creative community connected during lockdown.
In reaction to the pandemic, tickets for the event are donation-based, giving the public the freedom to support the hard work and dedication of all the artists involved.
BODY (UN)MUTE in collaboration with ICK Dans Amsterdam
Online Tickets available through the event website.
They combine digital media with other artistic disciplines such as music, dance, theatre and performance.
Medusa Lab has taken part in many national and international events such as the Venice Biennale of Architecture 2014, Mediaxion, Live Performers Meeting and Circuito Electrovisiones.