Happy Valley Band, 2010 – present


STATEMENT

The Happy Valley Band plays computer transcriptions of popular songs. Using machine hearing and automated transcription techniques, the instruments are separated from the original audio recordings, transcribed to music notation and replaced by new performers. The original voice is left in place as American pop icons sing along to microtonal interpretations of their original backing bands.


Interested in the space between computational analysis and its interpretation, the Happy Valley Band explores how machines can help us understand our own perception. It dwells on the differences between signal analysis and human perception, and raises the question of how to translate between the two.


The result is a strangely idiosyncratic interpretation of familiar music. Our perceptual attention naturally shifts, constantly foregrounding and backgrounding information. The machine transcription constantly diverges from these perceptual tendencies, losing focus or overfocusing. Allowed to call attention to themselves, these divergences stand in juxtaposition to our familiar way of hearing the music.


Producing Happy Valley Band music involves many stages. Artifacts arise at every step of the process and propagate through the steps that follow. I hope that these artifacts are something more than compounded error, and that they reflect the peculiarities of the process and of our own hearing.

First, the original audio recordings are separated into individual instruments using signal processing tools. The separated instruments are then translated into raw note-on and note-off data through pitch and amplitude analysis. Finally, the raw note data is transcribed to music notation.


[1] Audio Separation


The audio separation techniques vary with the recording. In mixes where instruments are well and consistently isolated, good separation can be achieved by identifying frequency bands within a region of the stereo field. If, for example, the guitar is the only mid-range instrument in the middle-right part of the stereo image, it may be separated by isolating that part of the signal.
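The band's actual separation tools aren't named here, but the stereo-field isolation just described can be sketched as a binary STFT mask, assuming SciPy is available. The function name, the frequency and pan bounds, and the energy-ratio pan estimate are all my own illustration, not the band's method:

```python
import numpy as np
from scipy.signal import stft, istft

def isolate_region(left, right, sr, f_lo, f_hi, pan_lo, pan_hi, nperseg=2048):
    """Keep only time-frequency bins whose frequency falls in [f_lo, f_hi] Hz
    and whose stereo position falls in [pan_lo, pan_hi] (0 = left, 1 = right)."""
    f, _, L = stft(left, sr, nperseg=nperseg)
    _, _, R = stft(right, sr, nperseg=nperseg)
    mag_l, mag_r = np.abs(L), np.abs(R)
    # Per-bin pan estimate: fraction of the bin's energy in the right channel.
    pan = mag_r / (mag_l + mag_r + 1e-12)
    in_band = (f >= f_lo) & (f <= f_hi)
    mask = in_band[:, None] & (pan >= pan_lo) & (pan <= pan_hi)
    _, out_l = istft(L * mask, sr, nperseg=nperseg)
    _, out_r = istft(R * mask, sr, nperseg=nperseg)
    return out_l, out_r
```

A mid-range instrument sitting middle-right, as in the guitar example, would correspond to something like `f_lo=200, f_hi=2000, pan_lo=0.55, pan_hi=0.8` (hypothetical values).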


Other mixes may require more sophisticated signal analysis. Probabilistic Latent Component Analysis (PLCA) decomposes a signal into a set of components that sum to the original. The components may correspond to individual instruments, or they may not. PLCA, however, can be trained to guide the decomposition toward specific kinds of sources, in effect offering the algorithm a kind of model, some notion of what to look for. If an instrument can be isolated in a short clip, a set of bases is trained to represent it and then used to extract that instrument from the full track.
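PLCA is closely related to non-negative matrix factorization under a KL divergence, so the trained-bases extraction can be sketched with hand-written multiplicative NMF updates: bases pre-trained on the isolated clip (here assumed to come from an ordinary NMF run on that clip's spectrogram) stay fixed, a few free bases absorb the rest of the mix, and a Wiener-style mask takes the instrument's share. The function name, iteration count, and number of free bases are hypothetical:

```python
import numpy as np

def nmf_extract(S_mix, W_inst, n_free=8, n_iter=200, eps=1e-9):
    """PLCA-style extraction via KL-NMF multiplicative updates.
    S_mix:  magnitude spectrogram (freq x time) of the full mix.
    W_inst: bases (freq x k) pre-trained on an isolated clip.
    Only the free bases and the activations are updated; the
    instrument bases stay fixed, acting as the learned model."""
    F, T = S_mix.shape
    k = W_inst.shape[1]
    rng = np.random.default_rng(0)
    W_free = rng.random((F, n_free)) + eps
    H = rng.random((k + n_free, T)) + eps
    ones = np.ones_like(S_mix)
    for _ in range(n_iter):
        W = np.hstack([W_inst, W_free])
        V = W @ H + eps
        H *= (W.T @ (S_mix / V)) / (W.T @ ones + eps)
        V = W @ H + eps
        # Update only the free bases; the trained bases are held fixed.
        W_free *= ((S_mix / V) @ H[k:].T) / (ones @ H[k:].T + eps)
    W = np.hstack([W_inst, W_free])
    V = W @ H + eps
    # Wiener-style mask: the instrument's share of each time-frequency bin.
    return S_mix * (W_inst @ H[:k]) / V
```

This is a sketch of the general technique, not the band's implementation; a real pipeline would also carry the mix's phase back through an inverse STFT.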


Often some combination of techniques is most successful. I try to make the separated instruments as clear as possible while maintaining significant reduction of the rest of the mix. The separated instruments inevitably bleed into one another, reflecting the sophistication of our auditory perception. These separation methods first located Happy Valley Band material in a specific era of music production and recording techniques where the methods were most successful—early stereo recordings, featuring hard panning and few spatializing effects.


[2] Pitch and Rhythm Analysis


The second step is determining raw note information from the extracted audio: when notes occur and their pitches. The separated instruments are analyzed for pitch and amplitude, producing values at machine resolution, which, at an FFT hop size of 512 samples, or roughly 86 readings per second, can be excessive. Though this detail is perhaps available to us, we do not constantly distinguish perceptual events at this resolution. Rather, the ear is presented with a continuously varying pitch but hears a constant tone. The task is to parse this wealth of information into notes more familiar to the ear.
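As a quick check of the figure quoted above, assuming a 44.1 kHz sample rate (the text does not state one):

```python
sample_rate = 44100  # assumed CD-quality rate; not stated in the text
hop_size = 512       # FFT hop size from the text
frames_per_second = sample_rate / hop_size
print(round(frames_per_second, 1))  # → 86.1 readings per second
```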


The tracker determines notes by changes in pitch and amplitude. With adjustable thresholds, it is tuned to the character of the material being tracked. If, for instance, the material is rhythmic, amplitude onsets may be weighted more heavily than pitch onsets, and vice versa. The thresholds may be adjusted with each new section, many times within a section, or just once for an entire song. The tracker is tuned with concern for the material as well as for the playable limits of the instrument.
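The tracker itself isn't published with this statement, but the threshold logic it describes can be sketched as follows; the function name, threshold values, and amplitude floor are all hypothetical:

```python
import numpy as np

def track_notes(pitches, amps, pitch_thresh=0.5, amp_thresh=0.2, floor=0.05):
    """Segment per-frame pitch (in semitones) and amplitude tracks into notes.
    A new note begins when the frame-to-frame pitch change exceeds
    pitch_thresh semitones or the amplitude jumps by more than amp_thresh;
    a note ends when the amplitude falls below the floor. The two
    thresholds are the tunable weights described above."""
    notes, start = [], None
    for i in range(len(pitches)):
        sounding = amps[i] >= floor
        onset = sounding and (
            start is None
            or abs(pitches[i] - pitches[i - 1]) > pitch_thresh
            or amps[i] - amps[i - 1] > amp_thresh
        )
        if onset:
            if start is not None:
                notes.append((start, i, float(np.median(pitches[start:i]))))
            start = i
        elif not sounding and start is not None:
            notes.append((start, i, float(np.median(pitches[start:i]))))
            start = None
    if start is not None:
        notes.append((start, len(pitches), float(np.median(pitches[start:]))))
    return notes  # (start_frame, end_frame, pitch) triples
```

Raising `amp_thresh` relative to `pitch_thresh` weights amplitude onsets more heavily, as in the rhythmic-material example above.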


[3] Symbolic Notation


Finally, the raw pitch onset and offset data is rendered in symbolic music notation. The pitch notation is fully microtonal, notated to the closest twelve-tone equal-tempered pitch and modified with microtonal cent-deviation indications. The rhythmic notation is transcribed to the pulse of the song. Rather than assuming a constant pulse, the rhythm is transcribed against a map of where the beat actually falls in the recording. The individual parts are notated along to the vocal melody, and, in performance, the original singing voice is heard, giving a guiding pulse that coordinates the performers.
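The pitch spelling described here—nearest twelve-tone equal-tempered pitch plus a cent deviation—is a direct calculation. A minimal sketch, where the function name and the A4 = 440 Hz reference are my assumptions:

```python
import math

def nearest_tet_and_cents(freq, a4=440.0):
    """Map a frequency in Hz to the nearest twelve-tone equal-tempered
    pitch (as a MIDI note number) and the cent deviation from it."""
    midi = 69 + 12 * math.log2(freq / a4)    # exact, fractional note number
    nearest = round(midi)                    # closest 12-TET pitch
    cents = round((midi - nearest) * 100, 1) # deviation within ±50 cents
    return nearest, cents

print(nearest_tet_and_cents(440.0))  # → (69, 0.0): A4, no deviation
```

A notated part would show the nearest pitch with the cent figure attached to it.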


[guitar part excerpt]

[*] Remarks


I try not to be pedantic or overspecific at any stage of the process. I am interested in the play between computation and interpretation. I do not want to tell the machine what to hear; I want to tell it how to hear. Through brute force, I could contrive the machine analysis, note by note, to hear the music just as I do. I could tune the pitch and rhythm tracker separately to each note, or manually edit the bleed from one instrumental track into the next. I am, however, not interested in cleaning up the edges around these automated processes. Rather, it is the material on the boundaries that I am interested in. These divergences are the moments that tell us the most about our perception.

The Happy Valley Band formed in the summer of 2011 at the behest of Mustafa Walker and Beau Sievers. The premiere performance at Ostrava Days 2011 [Ostrava, Czech Republic] featured Larry Polansky (guitar), Beau Sievers (drums) and Mustafa Walker (bass). The band has since performed at Electric Eclectics 7 [Meaford, Canada], Index Series [Brooklyn, NY] and live on the air of WFMU radio [Jersey City, NJ], and continues to acquire new members with each performance. The band is currently Alexander Dupuis (guitar), Conrad Harris (violin), Pauline Kim Harris (violin), Larry Polansky (honorary guitar), Beau Sievers (drums), Andrew Smith (piano), Thomas Verchot (future trumpet), Mustafa Walker (bass) and David Kant (saxophone and arrangement). You may be next!