After researching more about brain wave studies and music created by artists, I think a musical composition of brain waves is too systematized to be an accurate interpretation of my dreams. While researching dreams and neuro-triggered data, I listened to Luciana Haill’s work, in which she uses EEG to record her brain activity and beams the data to a computer that plays it back as song, triggering different GarageBand samples and other devices for each type of neural activity (http://www.brainwavechick.com/education/pages/music.html). I also found James Fung’s piece, in which he creates an orchestration by playing back transposed sounds from an audience’s vitals (a video of this work can be seen under the “videos” tab); it was a little more aurally appealing to me only because it didn’t sound patterned or systematic. What type of sounds would I create with my brain waves? I felt that music wouldn’t do the interpretation justice. If I’m using data recorded during sleep (hopefully I can pull out data during REM sleep), the sounds I sync with my brain wave data should reflect my dream experience. Sounds would have to coincide with whatever imagery I could remember when I wake. This approach could take on a life of its own, and the outcome is unpredictable. Eventually this could be paired with imagery, but for now I’m excited to see what type of audio journal emerges from this experiment.
I will record my dreams and brain waves with a sensor, process the data through neuro-feedback equipment (concentrating mainly on stage 5, or REM, sleep if possible), and then assign audio elements to frequency values. These elements will be audio components taken from my dreams. Most likely I will use Max/MSP or PureData, and eventually Jitter to trigger video elements. First, I’m concentrating on the audio portion of this; perhaps I can continue to document my dreams with audio to create an audio dream database for future use in my work.
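In Max/MSP or PureData this frequency-to-sample assignment would live in a patch, but the underlying mapping can be sketched in a few lines of Python. This is only an illustration: the band cutoffs follow conventional EEG band names (delta, theta, alpha, beta), while the sample filenames and the choice of which dream sound goes with which band are my own placeholder assumptions, not decisions I’ve made yet.

```python
# Hedged sketch: assigning dream-audio samples to EEG frequency values.
# Band boundaries are the conventional EEG ranges; the sample names
# below are hypothetical placeholders for sounds pulled from my dreams.

def band_for_frequency(hz):
    """Classify a dominant EEG frequency (in Hz) into a band name."""
    if hz < 4:
        return "delta"   # deep sleep
    elif hz < 8:
        return "theta"   # drowsiness; prominent around REM
    elif hz < 13:
        return "alpha"   # relaxed wakefulness
    else:
        return "beta"    # active, alert thought

# Hypothetical band-to-sample mapping (filenames are placeholders).
SAMPLES = {
    "delta": "dream_drone.wav",
    "theta": "dream_voices.wav",
    "alpha": "dream_water.wav",
    "beta":  "dream_street.wav",
}

def sample_for_frequency(hz):
    """Return the audio sample to trigger for a given frequency."""
    return SAMPLES[band_for_frequency(hz)]

# Example: a stream of dominant frequencies read off a night's recording.
for hz in [2.5, 6.0, 10.0, 20.0]:
    print(hz, "->", sample_for_frequency(hz))
```

A real version would replace the print loop with messages sent to an audio engine (in Max/MSP, this lookup table would likely be a `coll` or `select` object feeding sample players), and the frequency values would come from an FFT or band-power analysis of the EEG signal rather than a hand-typed list.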