Tone Drops

A mixed-media project spanning video art, installation and sound art.

Today more than ever, we are moving from an old world built on physical connections into a purely digital territory: GPS-assisted driving, remote workforces, communication by video call. Even the act of being in public now leaves a deeper mark in the digital domain than in the physical one. We take pictures only to post them online, we invite people to events exclusively online, and often the only record that something happened at all exists in digital media alone, especially within younger people's inner circles.

So my goal for this project is to explore this relationship with the physical world and invite people to re-address their surroundings by drawing connections between a physical acoustic condition and visual cues. In essence, we are creating an enveloping soundscape that evolves in response to light. The sensory experience is deliberately minimal in terms of awareness, using hidden speakers, a projector screen, and light sensors. The aim is for the visitor to refocus on the little details, the small nuances, in both sound and sight, the very things that are so often overlooked in today's current of fast, consumable, forgettable public experiences.

This project takes place in a darkened room, stripped of natural light, with just enough residual light to keep people from bumping into things or falling. Within this space, visitors enter an enveloping, ominous soundscape, a mood that says "be careful where you step, and be very observant of everything around you". This sonic atmosphere immediately puts the visitor in an attentive state. Visitors should be advised to use their own light sources to guide their path through the darkness, knowing that each light will influence the global soundscape in some way. This is an uncommon interface that people rarely use: the kinetic movement of their arms, phones in hand, shaping the development of a sound. Visitors should be invited to experiment, to make gestures, to try different things with their presence, and from different positions across the room.

An example of the kind of evolving soundscape we want to produce: deep, resonant sounds, rich in timbre and smoothly shifting in nature.

Since the space is kept dark, people will react in the most natural way: they will try to illuminate the room with whatever tools they have, mostly their phones, and will soon discover that pointing light sources at different areas of the room transforms the soundscape in creative ways. Once aware of this effect, they are effectively mapping the whole room with their eyes, ears and brain, looking for clues, attentive to the sounds being generated, searching for cause-and-effect relationships. With each passing light gesture they are also, unconsciously, mapping the room's architecture. People will tend to stick together and accompany each other with their phone lights. The light sensors spread and hidden across the room are photosensitive devices that trigger impulses to the Max/MSP engine in tandem with one another.
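The sensor-to-impulse idea above can be sketched in a few lines. This is a minimal illustration only, assuming raw 0-1023 readings (typical of an Arduino ADC) and an arbitrary threshold; the actual patch logic lives in Max/MSP and may differ.

```python
# Illustrative sketch: smoothing raw photoresistor readings and
# detecting a "light gesture" as a threshold crossing, before the
# values would be forwarded to the Max/MSP engine. All names and
# thresholds here are assumptions, not the installation's patch.

def smooth(readings, alpha=0.3):
    """Exponential moving average over raw 0-1023 sensor values."""
    out, level = [], readings[0]
    for r in readings:
        level = alpha * r + (1 - alpha) * level
        out.append(level)
    return out

def detect_gestures(levels, threshold=400):
    """Return indices where the smoothed level rises past the
    threshold, i.e. a phone light sweeping over the sensor."""
    events = []
    for i in range(1, len(levels)):
        if levels[i - 1] < threshold <= levels[i]:
            events.append(i)
    return events

# A dark room with one light sweep over a sensor:
raw = [20, 25, 18, 300, 900, 950, 600, 100, 30]
events = detect_gestures(smooth(raw))  # a single gesture is detected
```

The smoothing step keeps flickering phone torches from spamming the sound engine; only a deliberate sweep of light registers as an event.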

The generative soundscape that results from user interaction with the light sensors can run for hours, possibly even days, without becoming repetitive: the brain of the sound-design patch is an intricate web of connected parameters, built in Max/MSP, where a vast library of long audio samples is constantly being triggered in interesting combinations, making the sonic experience feel like an evolving, almost living soundscape.
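One way to picture the non-repetitive triggering is a weighted, memory-aware scheduler: choices are biased by the current light level and the most recent samples are excluded. The class, file names and mapping below are hypothetical illustrations, not the actual Max/MSP patch.

```python
# Hypothetical sketch of non-repetitive sample triggering:
# pick from a large library, weighting choices by light level,
# and never repeat the most recent picks.
import random

class SampleScheduler:
    def __init__(self, samples, memory=3, seed=None):
        self.samples = list(samples)
        self.recent = []          # last few picks, excluded from choice
        self.memory = memory
        self.rng = random.Random(seed)

    def next_sample(self, light_level):
        """light_level in 0.0-1.0; brighter light favours entries
        later in the library (e.g. brighter-timbred drones)."""
        candidates = [s for s in self.samples if s not in self.recent]
        weights = [1.0 + light_level * i for i, _ in enumerate(candidates)]
        choice = self.rng.choices(candidates, weights=weights)[0]
        self.recent.append(choice)
        if len(self.recent) > self.memory:
            self.recent.pop(0)
        return choice

library = [f"drone_{i:02d}.wav" for i in range(12)]
sched = SampleScheduler(library, seed=1)
sequence = [sched.next_sample(0.8) for _ in range(6)]
```

Because the `recent` window excludes the last few picks, consecutive triggers are always distinct, which is one simple way a patch can stay fresh over hours of running time.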

Another example of the multitimbral, dynamic relationship we want to establish between light and sound.

The idea of this project is to reconnect the visitor with the lost physicality of human interaction, to encourage attentive observation of one's surroundings, and to restore the act of discovery through exploration, another habit lost to modern times. The piece is particularly apt for the times we live in, when people exist in fear of a virus that exploits our most primordial nature: tactile proximity, and every act of kindness based on touch.

A demonstration of the visual relationship we want to create between the soundscape and video imagery. Themes: light versus dark, heard versus seen.

Technical Requirements:

– Laptop capable of running Max 8;
– Audio interface with at least 4 independent outputs;
– Amplified speaker system with at least 4 channels;
– Amplified subwoofer;
– HDMI video projector with at least 2500 lumens;
– Array of 6 to 24 light sensors, depending on the room dimensions;
– Arduino-type board connected to the light sensors, with a USB connection to interface with the computer.
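The Arduino-to-computer link in the requirements above could work as follows: the board prints one reading per line over USB serial, and a small bridge parses them before handing values to Max 8 (for example via the [serial] object or OSC). The "index,value" line format and the function below are assumptions for illustration, not a defined protocol.

```python
# Hypothetical sketch of parsing the Arduino's serial output.
# Each line is assumed to be "sensor_index,raw_value" with the
# raw value in the 0-1023 range of a 10-bit ADC.

def parse_reading(line):
    """Parse 'index,value' into (sensor_index, normalised_level).
    Returns None for malformed lines, so noise on the wire is ignored."""
    parts = line.strip().split(",")
    if len(parts) != 2:
        return None
    try:
        idx, raw = int(parts[0]), int(parts[1])
    except ValueError:
        return None
    if not 0 <= raw <= 1023:
        return None
    return idx, raw / 1023.0

readings = [parse_reading(l) for l in ["0,512", "5,1023", "garbage", "1,40"]]
```

Rejecting malformed lines rather than crashing matters here, since the installation is meant to run unattended for hours or days.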

Useful Links:

A fully functioning light-triggered Max/MSP sound patch, from minute 01:07.

Biographic Note:

Fred Pêgo was born in Coimbra, Portugal on October 24, 1987. After an academic background in post-production audio for film and multimedia, his taste for art-house cinema and long-form documentary led him to refine his techniques and explore creative sound as a storytelling tool. With a diverse body of work in the field of sound and an aptitude for learning and speaking languages, he has collaborated with talented artists and filmmakers across Europe and the US. His objective today is to employ sound-design techniques and ideas in cohesion with each project's aesthetic, in order to reinforce the story and help it reach its full potential. The same objective drives his creative sound endeavor Sonoro.Studio.