I’m still working on a patch for Max/MSP to record sensor information and another patch to trigger audio events with that information. Max recognizes the 3 sensors, which are very sensitive to touch, heart rate, and heat. So far I’m able to play different notes depending on the frequency of the signal.
Setup: I’m using 3 sensors attached to a headband, which run into an iCubeX box, then through a MIDI interface to my computer. I use the iCubeX software to configure the sensors, and Max reads the incoming data and filters it through “if” statements. The patch could probably be much cleaner, but I’m still learning Max, and hopefully my technique will improve with time.
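Since Max patches are visual and hard to quote in a post, here’s a rough sketch in Python of the kind of “if”-statement filtering the patch does: thresholding an incoming sensor value and mapping it to a MIDI note number. The thresholds and note choices here are made up for illustration, not the actual values from my patch.

```python
def sensor_to_note(value):
    """Map a 0-127 sensor reading to a MIDI note via simple thresholds.

    Hypothetical thresholds/notes, standing in for the "if" objects
    in the Max patch that route sensor data to different pitches.
    """
    if value < 32:
        return 60   # C4 for low readings
    elif value < 64:
        return 64   # E4 for mid-low readings
    elif value < 96:
        return 67   # G4 for mid-high readings
    else:
        return 72   # C5 for high readings

# Simulated stream of sensor readings
for reading in (10, 50, 80, 120):
    print(reading, "->", sensor_to_note(reading))
```

In the actual patch, each of these branches would be a separate “if” object feeding a note-playing object, one chain per sensor.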
Goal: I want to play the MIDI recorded from my brain waves while sleeping/dreaming back through Max/MSP to trigger sound elements taken from my dream. Eventually, I may use Jitter to trigger video that would also reflect my dreams and memory, and maybe even combine some of the RSS data I’ve been experimenting with in Processing. The video elements would probably be similar to my previous meme imagery.