Graduate Projects

GTCMT - MUSI 6203 - Robotic Musicianship Studio

Recorder Bot

As a saxophonist, when tasked with building a small robotic musician for this class, I really wanted to make a wind player. To circumvent the tough task of replicating an embouchure, my team members and I created a robot that plays the recorder. It does so through a solenoid valve that regulates air flow and solenoid actuators that cover the tone holes, all driven by an Arduino Mega and a Max/MSP patch. The robot incorporates gestural expression into performance via a linear actuator that moves the recorder and a servo that rotates its head. The Max patch allows for sequenced MIDI input (for recorder output and gestural control) or live control with a MIDI controller.
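At its core, the fingering logic is a lookup from MIDI note to solenoid states. Here is a minimal Python sketch of that idea; the fingering table is simplified and the callback names are hypothetical (the actual firmware runs on the Arduino Mega):

    # Simplified sketch of the note-to-fingering mapping (illustrative only).
    # Each fingering is a tuple of booleans, True = solenoid extended
    # (tone hole covered), ordered from the thumb hole downward.
    FINGERINGS = {
        72: (True, True, True, True, True, True, True),    # C5: all covered
        74: (True, True, True, True, True, True, False),   # D5
        76: (True, True, True, True, True, False, False),  # E5
        77: (True, True, True, True, False, False, False), # F5 (simplified)
        79: (True, True, True, False, False, False, False),# G5
    }

    def play_note(midi_note, set_solenoid, open_air_valve):
        """Set each tone-hole solenoid, then open the air valve."""
        fingering = FINGERINGS.get(midi_note)
        if fingering is None:
            return  # note outside the robot's range
        for hole, covered in enumerate(fingering):
            set_solenoid(hole, covered)
        open_air_valve()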


GTCMT - MUSI 7100 - Research - Spring 2018

SoundCage

After the large-scale installation project we had been researching fell through, Avneesh Sarwate and I put together a small installation in just a few weeks using supplies we had purchased for the previous project. The result was SoundCage: a closet-sized tangible experience that encouraged visitors to explore strings extending from sensor packages mounted in a PVC frame. The trial experience we implemented was a bell-tower metaphor: as visitors moved their hands closer to the corners, a bell in that corner would toll faster. When bells tolled at predefined ratios, a hidden sound emerged. The audio quality of the vibrational transducers serving as speakers on each sensor package created a gritty, industrial aesthetic.
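The interaction logic amounts to two small mappings. Below is an illustrative Python sketch; the distance ranges, ratios, and tolerance are made-up values, and the installation itself ran on the embedded sensor packages:

    # Illustrative sketch of the bell-tower interaction (parameters hypothetical).
    MIN_INTERVAL = 0.25  # seconds between tolls at closest approach
    MAX_INTERVAL = 4.0   # seconds between tolls at farthest distance
    TARGET_RATIOS = [(1, 2), (2, 3), (3, 4)]  # hypothetical hidden-sound triggers

    def toll_interval(distance, max_distance):
        """Closer hands -> faster tolling (clamped linear map)."""
        frac = min(max(distance, 0.0), max_distance) / max_distance
        return MIN_INTERVAL + frac * (MAX_INTERVAL - MIN_INTERVAL)

    def ratio_aligned(interval_a, interval_b, tolerance=0.05):
        """True when two bells toll at one of the target ratios,
        which would trigger the hidden sound."""
        return any(abs(interval_a / interval_b - p / q) < tolerance
                   for p, q in TARGET_RATIOS)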


GTCMT - MUSI 6304 - Computer Music Composition

Various Original Compositions

As my first foray into algorithmic composition, I created a system that takes a text file as input and generates a song from the words contained in that file. The origin of this composition stems from a personal interest in spoken word. There is so much music in the way people speak, and I sought to highlight and hyperbolize that phenomenon through this piece. I generate these compositions by interleaving instrumental voices, which I create by splicing, filtering, stretching, and pitch-shifting recordings of each input word’s pronunciation. I’m sharing a song generated from the following quote by the Greek philosopher Heraclitus: “there is nothing permanent except change.” I felt that this quote fit the aesthetics generated by my composition, especially with the drastic changes I applied to all of the words in this quote.
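A rough sketch of the per-word pipeline, here written with librosa; the file paths and the shift/stretch values are hypothetical, chosen to echo the Heraclitus example:

    # Rough sketch of the per-word processing pipeline (illustrative only).
    import numpy as np
    import librosa

    def make_voice(word_files, semitones, rates, sr=22050):
        """Splice per-word recordings into one instrumental voice,
        pitch-shifting and time-stretching each word (rate > 1 = faster)."""
        segments = []
        for path, steps, rate in zip(word_files, semitones, rates):
            y, _ = librosa.load(path, sr=sr)
            y = librosa.effects.pitch_shift(y, sr=sr, n_steps=steps)
            y = librosa.effects.time_stretch(y, rate=rate)
            segments.append(y)
        return np.concatenate(segments)

    voice = make_voice(
        ["there.wav", "is.wav", "nothing.wav", "permanent.wav",
         "except.wav", "change.wav"],
        semitones=[0, 3, -5, 7, -2, 12],
        rates=[1.0, 0.8, 1.5, 2.0, 0.6, 0.3],
    )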


For this project, I wanted to compile the techniques I’d learned throughout the course and apply them to an aesthetic I’d been interested in: sound collage. However, after browsing Archive.org and falling in love with a 1950 psychology educational video on emotions, I decided against collaging multiple found sound sources and just used the one. This video features a teacher and three students who receive a lesson on psychological conditioning and how it affects one’s emotional responses. The emphatic, diverse dialogue and purposefully contrasting characters allowed so many new narratives to form in my head. My goal was to use content from the video to tell a different story than the one told in the video. I also wanted to play with humor and the zaniness that comes with taking words out of context.


GTCMT - MUSI 6002 - Interactive Music

Twitthear: An Audio Interface for Twitter

Twitthear is an audio interface that brings Twitter to the virtual personal assistant Amazon Alexa. Its goal is to encode information about a tweet in a musical phrase, letting a user listen and decide whether or not to save the tweet to read on a visual interface later. Using sentiment analysis, semantic rhythm, and various sonification techniques to create meaningful melodies, Twitthear seeks to be more useful than traditional text-to-speech social media readers. More information about this project can be found in this paper.
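As a rough illustration of the kind of mapping involved (the actual mappings are detailed in the paper), a tweet's sentiment could select the scale while word lengths drive rhythm:

    # One possible sentiment-to-melody mapping (illustrative, not Twitthear's).
    MAJOR = [0, 2, 4, 5, 7, 9, 11]  # positive valence
    MINOR = [0, 2, 3, 5, 7, 8, 10]  # negative valence

    def tweet_to_phrase(words, valence, base_note=60):
        """Map a tweet to (pitch, duration) pairs: sentiment picks the
        scale, and word length drives a simple rhythm."""
        scale = MAJOR if valence >= 0 else MINOR
        phrase = []
        for i, word in enumerate(words):
            pitch = base_note + scale[i % len(scale)]
            duration = 0.125 * min(len(word), 8)  # seconds, capped
            phrase.append((pitch, duration))
        return phrase

    print(tweet_to_phrase("what a great day for new music".split(), valence=0.8))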


GTCMT - MUSI 6201 - Audio Content Analysis (MIR)

Real-Time Mood Detection of Music

This project predicts the mood of input music on the arousal-valence scale in real time. Using features from a pretrained convolutional neural network as input to a series of support vector regressors, it produces a rough, continuous estimate of mood from any input audio source. The idea behind this project is that if a robotic musician can estimate the mood of a performance in real time, it can contribute more effectively to a collaborative jam session between human and robotic musicians.
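A minimal sketch of the regression stage with scikit-learn, using stand-in arrays in place of the real CNN embeddings and mood annotations:

    # Sketch of the feature->mood regression stage; the arrays below are
    # stand-ins for CNN embeddings and arousal/valence labels in [-1, 1].
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 128))     # stand-in embeddings
    y_arousal = rng.uniform(-1, 1, size=200)  # stand-in annotations
    y_valence = rng.uniform(-1, 1, size=200)

    arousal_svr = SVR(kernel="rbf").fit(X_train, y_arousal)
    valence_svr = SVR(kernel="rbf").fit(X_train, y_valence)

    def predict_mood(embedding):
        """Rough (arousal, valence) estimate for one frame's features."""
        emb = np.asarray(embedding).reshape(1, -1)
        return (float(arousal_svr.predict(emb)[0]),
                float(valence_svr.predict(emb)[0]))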


GTCMT - MUSI 7100 - Research - Fall 2017

Performance Systems for Live Coders and Non-Coders

A. Sarwate, R. T. Rose, J. Freeman, and J. Armitage, “Performance systems for live coders and non-coders,” in Proceedings of the International Conference on New Interfaces for Musical Expression, Blacksburg, Virginia, USA, 2018, pp. 370–373.

This paper explores how live coding musicians can perform with musicians who are not using code (such as acoustic instrumentalists or those using graphical and tangible electronic interfaces). It investigates performance systems that facilitate improvisation in which the musicians interact not just by listening to each other and changing their own output, but also by manipulating each other's data streams. In the course of performance-led research, four prototypes were built and analyzed using concepts from the NIME and creative collaboration literature. Based on this analysis, we found that such systems should 1) provide a commonly modifiable visual representation of musical data for both coder and non-coder, and 2) provide some independent means of sound production for each user, giving the non-coder the ability to slow down and make non-realtime decisions for greater performance flexibility.
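To make the first design principle concrete, here is a toy Python sketch (not taken from the paper's prototypes) of a single shared pattern that both performers can modify through their own interfaces:

    # Toy sketch of a commonly modifiable musical representation.
    shared_pattern = [60, 62, 64, 67]  # MIDI notes, drawn on a shared display

    def coder_transform(pattern):
        """The live coder manipulates the shared stream with code."""
        return [note + 12 for note in pattern]  # e.g., transpose up an octave

    def on_ui_drag(index, new_note):
        """The non-coder manipulates the same stream via a visual
        interface, at their own pace (cf. principle 2)."""
        shared_pattern[index] = new_note

    shared_pattern[:] = coder_transform(shared_pattern)
    on_ui_drag(0, 55)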

Undergraduate Projects

CWRU - EECS 398 - Senior Project

Lake Metroparks Farmpark solar tracker

As part of EECS 398, Senior Capstone, my partner Ailin Yu and I helped renovate and upgrade a solar tracker exhibit at the Lake Metroparks Farmpark. The Farmpark is akin to a children's museum, with various exhibits demonstrating the importance of nature in our lives. My roles in the project involved developing a GUI for a new interactive touchscreen display and helping program a Rockwell PLC for system control. I incorporated music into the project by developing a musical representation of the energy coming into the solar panel. Using ChucK, an on-the-fly music programming language, and a Raspberry Pi, I transformed signals proportional to the energy coming into the solar panel into musical data streams. By mapping voltage to pitch and current to tempo, visitors can appreciate the work the solar panel is doing through music. I defined multiple sets of pitches corresponding to popular scales (blues, pentatonic, and whole-tone) in order to make the data sonification accessible.
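The mapping itself is simple enough to sketch. The installation ran in ChucK; below is an illustrative Python version, with hypothetical voltage and current ranges:

    # Illustrative version of the voltage->pitch, current->tempo mapping;
    # the v_max/a_max ranges here are hypothetical.
    PENTATONIC = [0, 2, 4, 7, 9]  # one of the scale choices offered

    def voltage_to_pitch(volts, v_max=24.0, base_note=48, octaves=3):
        """Quantize panel voltage to a pentatonic scale degree."""
        frac = min(max(volts / v_max, 0.0), 1.0)
        step = int(frac * (len(PENTATONIC) * octaves - 1))
        octave, degree = divmod(step, len(PENTATONIC))
        return base_note + 12 * octave + PENTATONIC[degree]

    def current_to_tempo(amps, a_max=10.0, bpm_min=60.0, bpm_max=180.0):
        """More current -> faster notes."""
        frac = min(max(amps / a_max, 0.0), 1.0)
        return bpm_min + frac * (bpm_max - bpm_min)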


CWRU - EECS 301 - Digital Logic Lab

Audio filtering and peak detection, FPGA design

As part of EECS 301, my partner Ailin Yu and I programmed an FPGA to take an audio input, filter high and low frequencies, and display a peak detection visualization on an LCD screen. This project required us to first receive an audio signal with an ADC and use FIR filters to separate high and low frequencies. Low frequencies were output to a motor via pulse-width modulation, and high frequencies were converted back to an analog signal and output through a speaker. Before being output, both high and low frequencies were sent through a peak-detect module. We used dual-port RAM to record the highest peaks, sampled on the 60 MHz clock, and output the peak data to the LCD screen. The strength of each peak was reflected on the screen via a corresponding shade of color. A very brief demonstration of the visual output can be seen below.
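For readers unfamiliar with the signal path, here is a rough software analogue in Python/NumPy; the sample rate, cutoffs, filter length, and block size are hypothetical, and the real design was implemented in FPGA hardware:

    # Software analogue of the FPGA signal path (parameters hypothetical).
    import numpy as np
    from scipy.signal import firwin, lfilter

    fs = 48_000
    low_taps = firwin(101, 500, fs=fs)                     # low-pass at 500 Hz
    high_taps = firwin(101, 2000, fs=fs, pass_zero=False)  # high-pass at 2 kHz

    def split_and_peaks(audio, block=1024):
        """FIR band split, then the peak-detect step: the strongest
        sample magnitude per block, which drove the LCD color shading."""
        low = lfilter(low_taps, 1.0, audio)    # -> motor via PWM on the FPGA
        high = lfilter(high_taps, 1.0, audio)  # -> DAC and speaker
        peaks = [np.max(np.abs(high[i:i + block]))
                 for i in range(0, len(high), block)]
        return low, high, peaks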


CWRU - EECS 397 - Mobile Computing and Sensor Networks

Radon sensor hacking, data collection, and data transfer to an Android app over Bluetooth

As part of EECS 397, my partner Walter Huang and I successfully hacked a Radon sensor and integrated its functionality with a basic Android app. By dismantling a household Radon sensor and wiring it to an Intel Edison computer-on-module, we were able to read and record the sensor's display. Our second task was to pass data from the Radon sensor, among other sensors, to an Android app via Bluetooth. After establishing communication and transferring the data, we saved the readings from our sensors to our smartphone in CSV format. Below is a demonstration of our functioning system, and the Arduino code used to read the Radon sensor display can be found here.
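As an illustration of the logging step, here is a small Python sketch that reads lines from a Bluetooth serial stream and writes them to a CSV file; the device path, baud rate, and line format are hypothetical, and the actual logging ran in the Android app:

    # Sketch of Bluetooth-serial-to-CSV logging with pyserial (illustrative).
    import csv
    import time
    import serial  # pyserial

    with serial.Serial("/dev/rfcomm0", 9600, timeout=1) as port, \
            open("radon_log.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "reading"])
        while True:
            line = port.readline().decode(errors="ignore").strip()
            if line:
                writer.writerow([time.time(), line])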