

Sound Mind: A Creative Approach

Author: dhomas trenn
Published by: NewTekniques magazine (US)
Date: April 1999

It is commonplace to find articles that discuss the tried-and-true methods of using computer software and hardware to achieve a particular result. These articles often imply that there is a proper way to do things and that a recipe should be followed in order to get the best results. Many times, this might be true - a good tutorial can go a long way toward getting you on the right track. But, if you are keen to explore and to delve into a different creative style, to give your work that stands-out-in-a-crowd perspective, then read on... maybe there is another approach.

The discussion here is based on techniques I developed for specific projects that were used to control sound synthesis equipment through MIDI (a standard communication protocol for interfacing electronic musical instruments and computers); but, the ideas could easily be applied to other applications, such as graphics and video generation.

As you read, remember that the intention is not to substitute one recipe for another; but, to open your mind to the potential of becoming your own master chef. The goal is to expose the reader to new ideas, not to detail how to accomplish a particular task. For the most part, readers will not have access to the equipment originally used for these projects. Remember, the method used is not what we are stressing here, it's the potential for inspiration. Hopefully, these ideas will be used as a starting point to something new and wonderful.

Control Systems
One interesting area of exploration involves the development of control systems. A control system is defined, for this purpose, as a system which monitors any type of change or stimulus and applies the response information to the control of something else. A control source, or controller, could be as simple as the position of a pen on a drawing tablet or as complex as the monitoring of brain response to sound.

A control system can provide data for any kind of destination, be it the control of laser lighting, visual art or even music.

Implementing a Control System
The Amiga is an ideal platform for the development of these systems. Many Amiga applications include support for external control of functions through an ARexx port; but, to get at these features you have to be able to program in ARexx. It is beyond the scope of this article to teach the reader everything about writing an ARexx program. There are plenty of books and tutorials available. A comprehensive ARexx programming manual, written by Robin Evans, is available online (????). The manual is also downloadable as an AmigaGuide file (aminet: util/rexx/ARexxGuide2_0A.lha).
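
As a taste of what an ARexx script looks like, here is a minimal sketch that sends a single command to an application's ARexx port. The port name 'SOMEAPP' and the command 'LOADPROJECT' are placeholders, not real commands; check the documentation of the application you want to control for its actual port name and command set.

  /* Minimal ARexx sketch: send one command to an application's port. */
  /* 'SOMEAPP' and 'LOADPROJECT' are placeholders, not real commands. */
  address 'SOMEAPP' 'LOADPROJECT mywork.proj'
  if rc ~= 0 then say 'The application reported an error (rc =' rc')'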

You do not have to be a programming master to implement a control system. The basic process required by the program is as follows:

  1. Read control data
  2. Convert data to usable form
  3. Send converted data to an application or device
  4. Return to Step 1 and repeat

Control data can be obtained from almost anywhere; the serial port, mouse, joystick or keyboard are a few obvious examples. A very simple idea would be to use a joystick as a device for panning sound - move the joystick left and the sound pans to the left speaker. Perhaps moving the joystick forward turns up the volume. Although this is easily done, it is not a particularly exciting example.
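
Even so, it shows the shape of every control system that follows. In the ARexx sketch below, ReadJoystick() and the 'MIDI' port are hypothetical stand-ins for whatever control source and MIDI interface you actually have; controllers 10 (pan) and 7 (volume) are standard MIDI, but the command names are not.

  /* Control-loop sketch. ReadJoystick() and the 'MIDI' port are      */
  /* hypothetical; substitute your own control source and interface.  */
  options results
  do forever
      position = ReadJoystick()              /* 1. read control data, say "x y" in 0-255 */
      parse var position x y
      pan = trunc(x * 127 / 255)             /* 2. convert to the MIDI range 0-127        */
      volume = trunc((255 - y) * 127 / 255)  /*    pushing forward (small y) means louder */
      address 'MIDI' 'CONTROLCHANGE 1 10' pan     /* 3. send pan (controller 10)...       */
      address 'MIDI' 'CONTROLCHANGE 1 7' volume   /*    ...and volume (controller 7)      */
  end                                        /* 4. loop back and repeat                   */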

Weather or Not?
Inspiration is important to the development of any project, and most artists make it a point to try to create something that looks and/or sounds unique in some way. Equally important, though, is to apply your creative talents to making your project "feel" right. Any professional 3-D artist will tell you that the mind perceives many things, both consciously and unconsciously. It is often the case that even the slightest details can make the difference between a result that looks real and something that looks computer generated. The audience should not have to understand, and often does not care to know, the methods behind the creation of what they are seeing or hearing. If there is a problem, they may not know what it is exactly, but they are sure to feel that something is out of place.

While working on a film project, it became necessary to create a sound that added a particularly moody atmosphere to a windy scene. Numerous attempts to create a wind-like sound using sequenced pitch changing, sound filtering and other processing had failed. Though the results were somewhat pleasing, they were mechanical sounding and seemed to be missing some level of authenticity. After much experimentation, it was decided that the humanized manipulation of the sounds was the culprit.

When you watch leaves, for example, being blown around by the wind, what you are seeing is not the wind itself, but the result of the wind's blow. The effect I had been trying to create, by manually manipulating sound, was based on what I could see - not what was actually happening.

To achieve a more realistic effect, a control system was developed that used precision weather monitoring equipment in an outdoor environment. By applying real-time data (wind speed and direction) to the control of the sequenced sounds, the result gained the authenticity the earlier attempts had lacked.

Another possible application of a control system like this is to use weather data to control the computer generation of clouds (ImageFX). In this way, you could use wind speed and direction to control the rate of drift as animated clouds move across the screen.

Radio Shack has introduced a consumer weather monitoring system, the WX200 Weather Station (63-1015), that can measure barometric pressure, dew point, humidity, rainfall, temperature, wind direction and wind speed. It also includes a standard serial interface, so you can easily connect it to your Amiga (aminet: misc/misc/wx200_1.10b.lha).
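
To give a feel for how such readings might be pulled into a control script, here is a sketch that reads from the Amiga serial device and scales a wind-speed value into a controller range. The report format parsed here is invented for illustration; the real WX200 protocol is handled by the aminet software mentioned above.

  /* Serial-reading sketch. The "WIND speed direction" line format is */
  /* invented; see the aminet archive for the real WX200 protocol.    */
  if ~open('weather', 'SER:', 'R') then exit 10
  do forever
      report = readln('weather')              /* read one report from the station      */
      parse var report 'WIND' speed direction /* pull out the fields (hypothetical)    */
      if ~datatype(speed, 'N') then iterate
      drift = trunc(speed * 127 / 100)        /* scale 0-100 km/h onto a 0-127 control */
      say 'wind' speed 'km/h from' direction 'degrees -> control value' drift
  end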

The Sky's NOT the Limit
A control source does not always have to be hardware based. You can just as easily use computer software to generate the control data. One very interesting experiment was done with the program Distant Suns, an application that provides highly accurate satellite tracking (planets, moons, comets). In addition to being a terrific astronomer's tool, it can also be used for great creative fun - such as a control system for the manipulation of graphics and/or sound.

The project involved the projection of abstract graphics onto a large backdrop during a live musical performance. Distant Suns was used to set up a simulation of two satellites whose orbital paths would eventually result in a collision. The two satellites were tracked through their orbital paths, and their distance and velocity data was used to generate geometric patterns that changed shape and size. The simulation was designed such that the collision would occur at a peak moment during the musical performance, causing a wild display of color and abstract art.

Putting Your Brain To Work
Another particularly interesting experiment was the result of a conversation with an audiologist about methods of testing for hearing-related problems. Auditory Evoked Potentials (AEP) are a family of tests for studying the brain's response to sound. There are different types of AEP, each of which represents a different part of the brain's response to sound stimuli. For example, Auditory Brainstem Response (ABR) reveals information about the latency, or "processing time", required by the brainstem to analyze sound stimuli. Because this response is usually consistent, ABR does not provide a useful control source.

Another type, and one that provides a more useful control source, is called the P300. It is believed to reflect the cortical response to sound (higher level functioning / actual thinking). The P300 yields a waveform that depends on the subject's thoughts while "concentrating" on different sound stimuli (e.g. frequency).

To gather AEP data, electrodes are attached to the subject's head. These electrodes pick up electrical activity in the brain. There are many areas of ongoing electrical activity in the brain, so it is necessary to separate this background activity (typically called noise) from the signal to be monitored. This is accomplished using signal averaging equipment, which adds and averages the monitored signal over many repetitions of the stimulus. Because the evoked response is time-locked to the stimulus while the background activity is random, the wanted signal "grows" and the noise "shrinks". By transferring this monitored signal to a computer, the data can be used to control almost anything.
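
The averaging itself is simple enough to sketch. In the ARexx fragment below, ReadSweep() is a hypothetical stand-in for the acquisition hardware; the point is only that summing many time-locked sweeps and dividing by the count keeps the evoked response while the random activity averages toward zero.

  /* Signal-averaging sketch. ReadSweep() is hypothetical: it fills   */
  /* the stem SWEEP. with one time-locked recording of POINTS samples.*/
  sweeps = 200; points = 256
  do i = 1 to points
      sum.i = 0
  end
  do n = 1 to sweeps
      call ReadSweep                 /* acquire one sweep into sweep.1 .. sweep.points */
      do i = 1 to points
          sum.i = sum.i + sweep.i    /* the time-locked response adds up coherently... */
      end
  end
  do i = 1 to points
      avg.i = sum.i / sweeps         /* ...while the random noise averages toward zero */
  end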

An interesting application of this technique is to monitor a person's brain response while listening to a passage of music. This response is then used to control the music being listened to, allowing the person to participate in the composition of the song.

Another application would involve monitoring a person while they listen to pre-recorded audio through headphones. This audio could be anything from jack-hammer sounds to classical music. The person's responses can be used to control the selection of instruments, sound effects and musical notes. These selections can be recorded by computer and used as source information for the generation of a new composition.

For further reading, see Richard Seabrook's The Brain-Computer Interface: Techniques for Controlling Machines online at enterprise.aacc.cc.md.us/~rhs/bcipaper.html.

Same or Different
While we are on the topic of hearing, here's something that is very interesting, though not really usable as a control system. Audiology research has revealed many interesting things about human hearing. For example, Shepard Tones demonstrate an auditory ambiguity: two people listening to a melody played in these tones may each hear a different melody.

Shepard Tones are made up of a number of sinusoidal tones which are in octave relation to each other. They are typically constructed from frequencies that correspond to notes of the musical scale, so there are twelve possible Shepard Tones (one for each of the twelve notes, from A to G#). Each of these notes is referred to as a pitch class. Pitch class C, for example, is made up of ...C0, C1, C2... - that is, all frequencies that are half of or double (the octave relation) a C frequency. The same applies to the other eleven notes of the musical scale.
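
As a small illustration, the sketch below lists the octave-related components of pitch class C, taking C0 as roughly 16.35 Hz. A real Shepard Tone would also shape the loudness of these components (quiet at the extremes, loudest in the middle), which is not shown here.

  /* Pitch-class sketch: the octave-related frequencies that make up  */
  /* pitch class C, starting from C0 at about 16.35 Hz.               */
  freq = 16.35
  do octave = 0 to 9
      say 'C'octave '=' format(freq,,2) 'Hz'   /* each component doubles the last */
      freq = freq * 2
  end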

Because a Shepard Tone is made up entirely of one pitch class, it is ambiguous in terms of how high or low its pitch is, but clear in terms of which pitch class it is. If you play one Shepard Tone, followed by another of a different pitch class, it can be difficult to determine which tone is higher in pitch - some listeners hear the interval as rising, while others hear it as falling.

This phenomenon suggests that a song could be written using Shepard Tones that would give each listener a different melody.

Saying It Like It Is
A final example, and perhaps one that is more accessible to the reader, involves the generation of a database of words and sound controls (pitch bend, sound parameters, volume). For example, the word ANGER could represent a sound with a short attack, above average volume, low pitch and a short sustain. Taking this one step further, the part of speech could influence the method of creation; for example, an adjective might enhance the effect of the word it modifies. A VERY ANGRY sound might be LOUDER than an ANGRY sound.

A database of this type would allow the music composer to enter "sentences" to generate a control sequence for a composition, and to select appropriate sound structures for creating it. The database could be filled with common word associations for words such as FEAR, JOY, FLOWER, etc., based on the requirements of the composition and the composer.
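
A first pass at such a database can be sketched with ARexx stem variables, as below. The words, parameters and values are purely illustrative, and sending the results to a synthesizer or sequencer is left to whatever MIDI interface you are using.

  /* Word-database sketch: a few illustrative word-to-sound mappings, */
  /* with VERY acting as an intensifier on the word that follows it.  */
  volume. = 64;  pitch. = 60                   /* defaults for unknown words */
  volume.ANGER = 100; pitch.ANGER = 36         /* loud, low                  */
  volume.FEAR  = 45;  pitch.FEAR  = 48         /* quiet, fairly low          */
  volume.JOY   = 80;  pitch.JOY   = 72         /* bright, high               */
  parse upper arg sentence                     /* e.g. rx words "VERY ANGER JOY" */
  scale = 1
  do i = 1 to words(sentence)
      w = word(sentence, i)
      if w = 'VERY' then do
          scale = 1.25                         /* intensify the next word    */
          iterate
      end
      vol = trunc(volume.w * scale)
      if vol > 127 then vol = 127
      say w': volume' vol', pitch' pitch.w
      scale = 1
  end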

This idea can easily be applied to graphics... What might a PRETTY PINK FLOWER effect do to a video stream?

Help at Hand
Not everyone has the necessary knowledge to implement these kinds of ideas on their own. But, the Amiga has always been a dream-come-true creative tool and its users are some of the most helpful people in the world. The Amiga may be living the cat's nine lives; but, the soul of the Amiga is shared equally among its users and exists as an entity of its own. As you explore the unknown, do not hesitate to call on the public support resource that gives the Amiga its breath. If you are stumped, turn to the internet for help. Newsgroups, in particular, are a tremendous resource for finding the information you need. People with answers are out there, and although they might not necessarily be waiting, they are definitely listening and generous in response.



MIDI (Musical Instrument Digital Interface)
MIDI is a communications protocol, originally created for interfacing synthesizers and other electronic musical instruments. It has evolved into a communication system that can link virtually all of the equipment used in music and video production.

Specialized systems, called sequencers, allow MIDI information to be stored and played back. Common MIDI commands include: musical notes, volume level, sustain pedal, tempo, etc.
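
For a sense of what actually travels down the MIDI cable, the sketch below builds the three bytes of a note-on message for middle C. How the bytes reach the instrument (a serial MIDI interface, a MIDI library) depends on your setup and is not shown.

  /* Note-on sketch: the three bytes that mean "play middle C".       */
  channel  = 1
  status   = 144 + channel - 1          /* note-on status byte, channels 1-16    */
  note     = 60                         /* middle C                              */
  velocity = 100                        /* roughly, how hard the key is struck   */
  message  = d2c(status) || d2c(note) || d2c(velocity)
  say 'note-on bytes:' c2x(message)     /* displays 903C64 in hexadecimal        */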

A special area of MIDI, called System Exclusive, allows for control information specific to a particular device. Each device recognizes its own control commands and responds only to them.