Swarm – Interactive Audio-Visual Installation

My project, Swarm, is an interactive audio-visual installation exploring the concepts of the hive mind and swarm intelligence. It is presented as a visualisation of several hundred entities, projected onto a touch-sensitive projection screen and sonified with generative audio. Visitors to the installation are invited to touch the screen, which affects the behaviour of the swarm visualisation in different ways depending on where, and how much of, the screen is touched.

When there is no interaction with the swarm, entities gently ebb and flow across the screen, flocking together with their neighbours. The accompanying audio is designed to complement the visuals, initially comprising smooth, slowly shifting textures. As soon as the screen is touched, the hive is disturbed; swarm entities avoid the area being touched, shifting their colour and speeding away when they encounter the disturbance. Each disturbance is individually sonified, with the characteristics of the sound linked to the size and position of the area touched: the larger a disturbance, the faster the sounds and the higher their volume, while the sound’s position in the stereo panorama is directly controlled by its position across the width of the screen.
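As a rough illustration of this mapping, the minimal Processing sketch below converts an example disturbance’s area and horizontal position into playback speed, volume and pan values. The variable names, numeric ranges and normalisation bound are my own assumptions for illustration, not the installation’s actual code.

```java
// Hypothetical mapping from a disturbance's size and x position to sound
// parameters, in the spirit described above (assumed names and ranges).
void setup() {
  float screenW = 1024;    // assumed projection width in pixels
  float touchX  = 300;     // example x position of a disturbance
  float area    = 6000;    // example touched area, in pixels squared
  float maxArea = 20000;   // assumed upper bound used for normalisation

  float norm   = constrain(area / maxArea, 0, 1);
  float speed  = map(norm, 0, 1, 0.5, 2.0);          // larger touch -> faster sounds
  float volume = map(norm, 0, 1, 0.1, 1.0);          // larger touch -> louder sounds
  float pan    = map(touchX, 0, screenW, -1.0, 1.0); // left edge -> hard left, right edge -> hard right

  println("speed " + speed + "  volume " + volume + "  pan " + pan);
}
```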

If there are any large disturbances, or more than a handful of disturbances to the hive, it starts to become agitated, its entities increasing in velocity and shifting their hue towards red. If the hive continues to be disturbed, it swarms: all entities turn entirely red and move with increased speed and momentum. The entities also change their attitude towards disturbances, swarming around them instead of exhibiting their normal evasive behaviour. This shift towards aggressive behaviour is reflected in the audio, where the bed of soft, flowing sounds is exchanged for sharp, coarse sounds, signalling that the hive is angry. The entities swarm until they are no longer disturbed, taking several seconds to cool down before returning to their docile state.
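A small runnable Processing sketch along these lines is shown below, using a mouse press as a stand-in for a touch disturbance. The growth and decay rates, colours and speed multiplier are illustrative guesses rather than the installation’s actual values.

```java
// Sketch of an "aggression level" that grows while the hive is disturbed
// and cools down over several seconds afterwards (assumed rates/colours).
float aggression = 0;   // 0 = docile, 1 = fully swarming

void setup() {
  size(400, 400);
  frameRate(30);
}

void draw() {
  // mousePressed stands in for "the screen is being touched"
  if (mousePressed) aggression += 0.02;   // agitate while disturbed
  else              aggression -= 0.005;  // cool down slowly otherwise
  aggression = constrain(aggression, 0, 1);

  // hue fades towards red and entities speed up as aggression rises
  color calm  = color(120, 180, 255);
  color angry = color(255, 40, 40);
  background(lerpColor(calm, angry, aggression));

  float speedMultiplier = 1.0 + 2.0 * aggression;
  fill(0);
  text("aggression " + nf(aggression, 1, 2) + "   speed x" + nf(speedMultiplier, 1, 2), 10, 20);
}
```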

The audio is generated from continuous recordings of the gallery space, automatically ingested into time-stretchers whose playback speed, volume, transposition and panning are controlled by the hive’s disturbances and aggression level. The audio generation system is continually refreshed with new recordings from the space, which often contain audio previously generated and output by the installation itself.
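The write-up does not describe how the visual side passes these control values to the Puredata audio system; one common way of linking Processing and Puredata is OSC. The sketch below, using the oscP5 library, is only an illustration of that idea: the address pattern, port number and parameter ordering are hypothetical.

```java
// Hedged example: sending disturbance/aggression values to Puredata over OSC.
// The address "/swarm/disturbance", the port 9001 and the value ordering
// are assumptions for illustration only.
import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress pd;

void setup() {
  osc = new OscP5(this, 12000);             // local listening port (unused here)
  pd  = new NetAddress("127.0.0.1", 9001);  // assumed port Puredata listens on

  OscMessage m = new OscMessage("/swarm/disturbance");
  m.add(0.7f);   // normalised size       -> time-stretcher speed and volume
  m.add(0.25f);  // normalised x position -> stereo pan
  m.add(0.4f);   // aggression level      -> choice of soft vs. coarse textures
  osc.send(m, pd);
}
```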

While I have been faithful to the core concept laid out in my project proposal, several aspects have evolved from my original intentions. The most notable is the visual element of the piece, originally slated to be a cloud of white noise that would only exhibit hive characteristics when interacted with. I chose to rework this idea after realising that such a visualisation, while in keeping with the inspiration for the project (Nine Inch Nails’ concert visuals), would be not only visually dull but also unrepresentative of the core ideas behind my entire project – hive intelligence and swarming behaviours. Changing the visualisation to flocking entities communicates these concepts far better at all times, even when no interaction is taking place.


Swarm from ilumos on Vimeo.

 

Contextual Reflection

The work most closely related to my completed project that I’m aware of is one of Zachary Booth Simpson’s many interactive swarm-related installations, also named Swarm. In his work, a projected swarm of entities is attracted to subjects’ bodies when they walk in front of the projection screen, effectively hiding behind the subjects’ shadows. Simpson describes the behaviours of the swarm as either “schooling like fish or bustling like ants, depending on their mood”. This idea of attaching a “mood” to the swarm is prominently present in my work, allowing the swarm to be something more than just a simulation of flocking behaviours. Although Simpson includes an implementation of hive moods, it is certainly not as central to his installation as it is to my own. The omission of an audio counterpart and the relatively simplistic visual representation of the swarm entities make for a simple and effective installation that is strongly related to my work, but distinct in its execution. My work focuses on tangible interactivity and a visually pleasing representation of the swarm, coupled with audio feedback, whereas Simpson does not seem concerned with decoupling the organic nature of the swarm from its enabling technology. In its technical application, this piece of Simpson’s is dissimilar from my own, relying on the shadows cast by the subjects to interact with the hive. That said, nine of his installations use infra-red for greatly improved accuracy in computer vision, several of which use infra-red illumination in a very similar way to my project. Looking at his detailed diagrams and explanations now, it would have been extremely useful to have found his website before embarking on my project, even though I eventually arrived at the same solution.

Another project that encourages interaction with swarm emulations is Jimmy McGilchrist and Darryn van Someren’s interactive digital media installation, also congruously named Swarm. Video cameras display the Melbourne public in Federation Square on video screens, with superimposed images of butterflies that fly towards and land upon motionless viewers until they move, whereupon the butterflies take flight and leave the video frame. This method of interaction, where the viewers see an image of themselves in the work, has the benefit of being inexpensive for the artist and intuitive for the audience. It also scales extremely well with larger audiences, as has been discussed by Jeffrey E. Boyd and others in the paper Video Interaction for Swarm Art. McGilchrist and van Someren’s work differs in several ways from my own, being closer to an interactive visual toy than an in-depth exploration of swarm behaviour. Aside from the sparsity of the swarm entities and the much larger audience, the principal difference is that the audience see themselves in the work. They are presented with, and included in, an alternate reality through the medium of the screen, where they are able to interact with butterflies that exist only with them on the screen. In my piece, the observer looks into a space defined by the bounds of the projection screen, containing the swarm. The observer and the piece are separate, and the observer is aware that the swarm exists in front of them, in the same reality as they are. When the art includes the audience visually, as in McGilchrist and van Someren’s piece, the audience must accept that the images they see of themselves interacting with the swarm are an illusion. It is uncommon to see yourself in the third person interacting with something, and thus I believe that using visuals of the audience in interactive pieces akin to my own detracts from their realism.

Yunsil Heo and Hyunwoo Bang’s Oasis: II explores swarm behaviours without using any electronic interaction. Screens set into the floor and covered in sand show a virtual pond of computer-controlled schooling life, which can only be seen where the sand has been displaced. This piece has some commonality with my own, chiefly the requirement for the work to be interacted with before it reveals itself fully. I am very much in support of this paradigm in the art world; I believe that with interaction a piece can be far more immersive and memorable than without.

Project Summary

I do believe the project, broadly speaking, has been a success, and I am very pleased with its aesthetic outcome and robustness, reassured by the positive feedback from my peers. I can say that the project has not fallen short of any of my intentions. Aesthetically, the project has surpassed my expectations, which is in part a testament to the rapid development fostered by the Processing programming language and environment.

Technically, I do feel that elements of the project could be significantly improved. This is not to say that they are unfit for purpose in their current incarnation; however, the code I have written, while robust and functional, runs slowly, as I am not a proficient enough programmer to know how and where to optimise it. The visuals run at 22-24 frames per second, which is certainly fast enough to look smooth, but by today’s standards it is not fantastic, especially considering the system is running on a quad-core CPU. My other technical grievance is the inaccuracy of the infra-red touch detection system: specifically, it cannot cope with users standing too close to the screen, as their bodies disrupt the IR light falling on it. The interaction system certainly works accurately enough for the installation to function as intended, but it would be unfit for any installation requiring more precise control. If I were to begin the project again knowing what I do now, I would seek a different touch interaction method, such as frustrated total internal reflection (FTIR) in transparent panes of glass or Perspex.

One of the key elements in the practical realisation of the project was turning a standard front-surface projection screen into a working touch screen. Needless to say, interactivity is central to the piece, and thus a lot of time was invested in making the touch input system as accurate and responsive as possible. Another important consideration was the audio system. Made entirely in Puredata, it had to be extremely robust and error-tolerant, as well as flexible, with parameters that could be easily controlled. The audio component, an afterthought in the project’s proposal, evolved into a complex system in its own right, taking the most time to complete. The keystone that links the entire project together is the “behaviours” system, which takes interaction data from the computer vision library, processes it into up to six “disturbances” (tracking their size, age, area and perimeter), and makes a judgement as to the hive’s “aggression level”, a metric passed to all of the system’s components, controlling the audio and the visuals simultaneously.
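To make the shape of that behaviours system a little more concrete, here is a minimal Processing-style sketch of what a tracked disturbance and the aggression judgement might look like. The field names, the weighting of area versus count, and the normalisation bound are my own guesses based only on the description above, not the project’s actual code.

```java
// Hypothetical "disturbance" record and aggression judgement (assumed
// fields and weights; the real system tracks size, age, area and
// perimeter for up to six disturbances).
class Disturbance {
  float x, y;        // centre of the touched region
  float area;        // touched area, in pixels squared
  float perimeter;   // perimeter of the touched region, in pixels
  int   age;         // frames since the disturbance was first detected
}

// Combine the tracked disturbances into a single 0..1 aggression metric
float aggressionLevel(ArrayList<Disturbance> disturbances, float maxArea) {
  float level = 0;
  for (Disturbance d : disturbances) {
    level += constrain(d.area / maxArea, 0, 1);   // large touches agitate the hive most
  }
  level += 0.1 * disturbances.size();             // many simultaneous touches also agitate it
  return constrain(level, 0, 1);
}

void setup() {
  ArrayList<Disturbance> active = new ArrayList<Disturbance>();
  Disturbance d = new Disturbance();
  d.area = 8000;
  active.add(d);
  println("aggression " + aggressionLevel(active, 20000));
}
```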

The journey I have taken in planning this project and bringing it to fruition has been greatly enjoyable and informative, and I believe this is reflected in the work itself.

 

 


Light to Sound

As part of my Creative Music Technology degree at Bath Spa, I’ve created an instrument for our 13-strong ensemble – “Behaviour”.

The core of my instrument is a photocell, connected to an audio mixer’s microphone input. When light strikes the photocell, it generates a small current, which is amplified by the mixer and output to my laptop. In the same way that a microphone translates sound (vibrating air) into a varying current, flashing lights are translated by the photocell into a varying current, which can then be amplified and played through loudspeakers, enabling us to listen to the sound of lights. My instrument uses two flashing cycling lights, a single battery-powered LED, and a large variable-colour stage lamp to excite the photocell. I have modified the stage lamp, replacing its microphone with an input for a magnetic coil pickup, allowing me to trigger the lamp’s sound-activated mode by moving the pickup through magnetic fields, such as those around transformers. I process and refine the audio on my laptop with an equaliser, two pitch-shifters and reverb, all controlled using a simple USB controller.

 

The performance strategy I employ with this instrument is to confine myself to one of five sonic palettes which I have discovered and explored in the development of my instrument. Within one of these palettes I modulate a handful of parameters, which allow me to vary my sonic output, whilst keeping control of the instrument and staying within my chosen sound area. I find this strategy to be very effective for contributing an appropriate palette and creating a diverse spectrum of sound on demand.

 

Though photocells have been widely used in music production since the advent of the Teletronix LA-2A optical compressor in 1965, their use has always been functional. The idea of using a photocell as a “light microphone” comes from the same school of thought as using a loudspeaker as a microphone, as Geoff Emerick, the studio engineer for The Beatles’ album Revolver, chose to do for the bass track of Paperback Writer. This creative misuse and abuse of technology is behind many effects we recognise today: the vocoder, originally developed at Bell Labs for wartime communications in 1943, has since been used for the “robot voice” effect that can be heard in many pieces of music, and Auto-Tune, now widely used and recognised, was originally created for interpreting seismic data when drilling for oil. The inspiration for my instrument came from a video by Eric Archer, in which he uses a photocell to listen to various lights on a drive through Brooklyn. The diversity of this hidden sound-world fascinates me, and I try to emulate it in my work.

We are performing at Spike Island in Bristol on May 31st; details are available here.

Light To Sound from ilumos on Vimeo.

Windows: Automatic Logon & Lock – Save Time Turning On Your PC

Does it ever annoy you that when you turn your computer on and walk off while it boots, you return to find you still have to wait another minute or so before it is responsive enough to use?

Wish you could boot up, walk away and return to your logged-on but password-protected computer?

It’s a small niggle, but over time those 30 seconds spent waiting for your computer to go from the log-in screen to logged on and responsive add up. This little trick lets you press the power button, walk off and do something more productive than watching Windows boot, and return to a logged-on but securely locked workstation. This, of course, assumes you only use one user account and have already sped up your start-up time in the usual ways.

Read more…

“Expunge” – Track Made Using Pro Tools as an Instrument

Here’s a track I wrote last year as part of Andy Keep’s “Wrong Pro Tools” module, where it was prescribed that we could only use Pro Tools’ signal generator plug-in, the hand-drawing of waveforms, and digital feedback to write a piece of music. Creative limitations taken to extremes?

Expunge by ilumos

Music Video: “Expunge” – Fun with Analogue Video Glitching

A university project exploring quality degradation after several generations of recording to and playing back from VHS cassettes. The video follows an undesirable through CCTV until he is expunged…

Shot in Brighton, acting credit to Patch Cordwell, second camera credit to Ben Kazemi.

Read more…