Agent 0

Agency #

David Pirrò

To synthesize sound is to listen to sound. #

This speculation is based on the concepts I developed during my PhD research. The dissertation can be found here.

This research fundamentally revolves around comprehending interaction within the sphere of computer music, particularly the composition of generative sound synthesis processes and their performance.

Interaction has become a ubiquitous term in many fields, including electronic and computer music. However, the term's meaning varies considerably depending on the context in which it is used.

My approach involves connecting this term with philosophical discourse and cognitive sciences, particularly with the concepts of embodiment and enaction.

To comprehend interactive systems, we need to understand how these “systems” perceive and react to the world, the environment they are in. In his work “Action in Perception,” Alva Noë suggests that listening and perceiving are proper actions performed by the sensing entity: perception is an action, not a passive experience.

This concept is further developed in Evan Thompson's work "Mind in Life". His text revolves around the concept of enaction (originating with Varela and Maturana), according to which 'living beings are autonomous agents that actively generate and maintain themselves, thereby enacting their own cognitive domains'. A 'cognitive being's world is not a pre-specified, external realm, represented internally by its brain, but a relational domain enacted or brought forth by that being's autonomous agency and mode of coupling with the environment'. Interaction is thus a temporal process of mutual influence taking place between agents and their environments. Through this interaction, i.e. through its coupling with the environment, the agent finds its way into the world.

To compose an interactive sound synthesis process means to create an agent. My speculation aims to construct the most compact and simplest formulation of a sound synthesis process in the form of an agent.

While doing so, I draw inspiration from Ezequiel Di Paolo’s work “Autopoiesis, adaptivity, teleology, agency,” which presents a systematic definition of agency comprising three necessary and sufficient conditions:

  1. Individuality: An agent must be distinctly identifiable from its environment. The agent should have clear boundaries, and the correlation between the system and its environment should be evident.

  2. Interaction Asymmetry / Source of Activity: The agent is not merely reacting to external effects; it is a source of activity, performing actions through internal mechanisms. This is most discernible when the agent independently changes its coupling to the environment, breaking the symmetry between the two coupled systems.

  3. Normativity / Adaptivity: The agent modulates its interaction with the environment to achieve specific goals (norms). This modulation may succeed or fail; failure, or the possibility of it, is a central characterizing quality of agency. A planetary system, for instance, which unquestioningly obeys the laws of gravitation, is not an agent. Specifying a system's goal or aim might seem odd, dependent as it is on the perspective from which the dynamics of system and environment are observed. What is actually meant is that an agent works towards preserving its further activity: its continued existence is the agent's norm.

Agent Diagram

To implement these ideas in the context of composing sound synthesis processes, I developed a specific method, a recipe:

  1. Sound Generation: Start with a simple sound generation process, such as the sine-wave oscillator in the example below. This process is the audio agent's basic function.

  2. Reactive Modulation: Make the sound generation process sensitive to an input signal. This is achieved by integrating an effect parameter that modulates the sound generation process based upon the input signal. The measure of this effect, or the depth of this influence, should be adjustable through parametrization.

  3. Dynamic Coupling: Ensure that the effect parameter, or the 'coupling', is not static but rather dynamic. It should fluctuate based on the characteristics of the incoming signal and the agent's internal state. This is the step that requires the most experience and experimentation, and it is the most speculative.

This methodology formed the basis for composing the following example, 'Agent 0'. This agent is one possible implementation of the outlined speculation. However, I plan to further test, adapt, and refine this recipe by composing and observing a variety of agents, each time aiming to encapsulate the core essence of an audio agent within a concise formulation.
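As a purely structural sketch of this recipe (in Python rather than henri; the class, the names, and the placeholder dynamics are mine, chosen for illustration only, not taken from any actual agent):

```python
class AudioAgent:
    """Structural sketch of the recipe: generate, sense, couple.

    The concrete dynamics below are placeholders; only the three-part
    structure corresponds to the recipe in the text.
    """

    def __init__(self):
        self.p, self.q = 1.0, 0.0  # state of the sound generator
        self.sensed = 0.0          # smoothed measure of the input
        self.coupling = 0.0        # dynamic coupling parameter

    def tick(self, input_sample, dt=0.01):
        # Step 2: sense -- smooth the squared input (a simple IIR low-pass)
        self.sensed += (input_sample ** 2 - self.sensed) * dt
        # Step 3: dynamic coupling -- evolves with the sensed input and the
        # generator's own state; the cubic term keeps it bounded
        self.coupling += (self.sensed - self.p ** 2 - self.coupling ** 3) * dt
        # Step 1: generate -- a plain oscillator, nudged by the coupled input
        p, q = self.p, self.q
        self.p += (q + self.coupling * input_sample) * dt
        self.q += -p * dt
        return self.p
```

The point of the sketch is only the dependency structure: the generator runs on its own, the sensing follows the input, and the coupling depends on both and feeds the input back into the generator.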

Agent 0 #

  1. The first ingredient is the sound generating process. In this case it is a simple oscillator with a fixed frequency. Using the henri programming language, this is expressed in the form of differential equations:

\begin{aligned} p' &= f q + (r - \sqrt{q^2 + p^2})\, p \\ q' &= -f p + (r - \sqrt{q^2 + p^2})\, q \end{aligned}

The frequency of the oscillator is \( f \). The oscillator also has a limit cycle of radius \( r \), which ensures the oscillations remain bounded.

The above equations correspond to the following henri code:

agent[1]'() = - bf * agent[0] ## bf is the eigen frequency 
              + (rad - sqrt(sum(agent[[0,1]]^2)) ) * agent[1] * radm; ## limit cycle part

agent[0]'() = bf * agent[1] ## bf is the eigen frequency 
              + (rad - sqrt(sum(agent[[0,1]]^2)) ) * agent[0] * radm; ## limit cycle part 
  2. The second step is to 'sense' the input sound. In this case I am using a very simple MS calculation (that is, RMS without the root), implemented as an IIR filter.

\begin{aligned} r' = (\mathit{in}^2 - r)\, l \end{aligned}

where \( l \) is the RMS 'length' (it controls how fast the estimate reacts) and \( in \) is the input signal.

In terms of henri code:

## input 'sensing' part of agent: computing rms of input
## lag control how fast the rms reacts
## rms is implemented as IIR 
agent[3]'() = (sig * sig - agent[3]) * lag;
  3. The third and last step involves composing a dynamic relationship between the sensed input signal (the RMS) and the agent's internal state. This is the step of the process that relies most on experimentation and experience. I cannot really explain how I arrived at this specific formula: the process of getting there was iterative and always guided by listening and performing. This is Agent 0's dynamic coupling:

\begin{aligned} c' = a\, (0.707 - r) - b\, (\sqrt{q^2 + p^2} - 1) - c^3 \end{aligned}

In terms of henri code:

## coupling part of agent
## coupling variation depends both on sensed input values (here RMS)
## and on internal state (here radius or magnitude of oscillator)
agent[2]'() = (0.707 - agent[3]) * au ## dependence on input : pushes the coupling up if the input RMS is smaller than that of a full-scale sine
                 - (sqrt(sum(agent[[0,1]]^2)) - 1.0) * ad ## dependence on internal state : pushes it down if the oscillator magnitude (radius of oscillation) is bigger than 1.0
                 - agent[2] ^ 3 ; ## 'leak' : a 'safety' ensuring that the parameter does not grow infinitely
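For readers without henri, the three parts above can be combined and integrated with a plain forward-Euler step in Python. This is a sketch under assumptions: the parameter values below are illustrative placeholders, not the performance values from the henri file, and since the excerpt does not show where the coupling feeds the input into the oscillator, the `c * sig` term is my guess at that connection.

```python
import math

SR = 48_000                # assumed sample rate
dt = 1.0 / SR

bf = 2 * math.pi * 100.0   # oscillator eigenfrequency (rad/s), assumed value
rad, radm = 1.0, 100.0     # limit-cycle radius and strength, assumed values
lag = 50.0                 # reaction speed of the sensed MS, assumed value
au, ad = 5.0, 5.0          # coupling push-up / push-down rates, assumed values

# state: [0], [1] oscillator; [2] coupling; [3] sensed MS of the input
agent = [1.0, 0.0, 0.0, 0.0]

def step(sig):
    p, q, c, r = agent
    mag = math.sqrt(p * p + q * q)
    # step 1: oscillator with limit cycle; the input enters via the coupling
    # (the `c * sig` injection point is an assumption, see above)
    dp = bf * q + (rad - mag) * p * radm + c * sig
    dq = -bf * p + (rad - mag) * q * radm
    # step 2: 'sense' the input -- MS (RMS without the root) as an IIR filter
    dr = (sig * sig - r) * lag
    # step 3: dynamic coupling, depending on sensed input and internal state
    dc = (0.707 - r) * au - (mag - 1.0) * ad - c ** 3
    agent[0] = p + dp * dt
    agent[1] = q + dq * dt
    agent[2] = c + dc * dt
    agent[3] = r + dr * dt
    return agent[0]
```

Run on silence, the oscillator settles near its limit-cycle radius while the coupling drifts upward (the input MS stays below the full-scale-sine reference of 0.707) until the cubic leak balances it; feeding it signal pulls the coupling back down.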

Code #

This example refers to the henri file in the Speculative Sound Synthesis git repository.

This is the program I used in the performance at Sonology. It uses four interconnected instances of 'Agent 0' that influence each other while also being affected by the input signals.

References #

  • NOË, Alva. Action in Perception. MIT Press, 2004.
  • THOMPSON, Evan. Mind in Life. Harvard University Press, 2010.
  • DI PAOLO, Ezequiel A. Autopoiesis, adaptivity, teleology, agency. Phenomenology and the Cognitive Sciences, 2005, 4.4: 429-452.