For Ann (rising) by James Tenney

Posted on by Jacob Joaquin
Reply

In 1969, American composer James Tenney wrote For Ann (rising), one of the “earliest applications of gestalt theory and cognitive science to music” (source: Wikipedia). The auditory illusion heard in the piece is achieved by layering multiple rising sine waves.

Tom Erbe recently wrote a blog post, Some notes on For Ann (rising), in which he describes the specifications of the piece in detail. This includes a thorough description, an excerpt of Csound code, and a Pd patch he recently created. The Pd patch is available for download at his site.

I love studying classic computer music languages and instrument designs, so this afforded me the perfect opportunity to dig into the piece. For Ann (rising) is also a personal favorite of mine.

First, I assembled the Csound version based on Erbe’s notes and Csound code excerpt, which was a straightforward process. I copied the instrument without any modifications, then generated the score with the following two lines of Python code:

for i in range(240):
    print(f'i 1 {i * 2.8:g} 33.6')

The Csound csd is available for download here.

Next, I realized the piece in SuperCollider based on Erbe’s Csound code. The technical simplicity of the instrument, as well as the process for spawning voices, allows the piece to be expressed in fewer than 140 characters when translated into SuperCollider, making the following line of code Twitter-ready:

fork{{play{SinOsc.ar(EnvGen.ar(Env.new([40,10240],[33.6],\exp)),0,EnvGen.ar(Env.linen(8.4,16.8,8.4),1,0.1,0,1,2))!2};2.8.wait}!240}//JTenney

You can view the tweet here.
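For readers without SuperCollider handy, the same layered glissando can be sketched offline in Python with NumPy. The glissando range (40 Hz to 10240 Hz), envelope times (8.4/16.8/8.4), voice spacing (2.8 s) and voice count (240) come from the code above; the sample rate and rendering approach are my own assumptions:

```python
import numpy as np

SR = 44100  # sample rate; an assumption, not specified in the post

def voice(dur=33.6, f0=40.0, f1=10240.0, rise=8.4, amp=0.1):
    """One voice: an exponential glissando from f0 to f1 with a
    trapezoidal (linen-style) amplitude envelope."""
    t = np.arange(int(dur * SR)) / SR
    freq = f0 * (f1 / f0) ** (t / dur)         # exponential sweep
    phase = 2 * np.pi * np.cumsum(freq) / SR   # integrate frequency to phase
    env = np.interp(t, [0.0, rise, dur - rise, dur], [0.0, 1.0, 1.0, 0.0])
    return amp * env * np.sin(phase)

def for_ann(voices=240, spacing=2.8):
    """Layer `voices` copies of the glissando, one entering every
    `spacing` seconds, as in the generated score."""
    v = voice()
    out = np.zeros(int((voices - 1) * spacing * SR) + len(v))
    for i in range(voices):
        start = int(i * spacing * SR)
        out[start:start + len(v)] += v
    return out
```

Writing the result of `for_ann()` to a WAV file reproduces the rising illusion, though Erbe’s Csound and Pd versions remain the reference realizations.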

I want to thank Tom Erbe for publicly sharing his work and insight, which has allowed this classic computer music piece to be reconstructed in multiple modern-day media.

Posted in Computer Music, Csound, SuperCollider | Leave a reply

Music IV Block Diagram and Computer Code

Posted on by Jacob Joaquin
6


From the MUSIC IV PROGRAMMER’S MANUAL by M. V. Mathews and Joan E. Miller. (1967)

Posted in Computer Music | 6 Replies

Mark Ballora – Opening Your Ears to Data

Posted on by Jacob Joaquin
Reply

Utilizing the computer music language SuperCollider, Ballora translates data sets into audio. In this TEDxPSU talk, he discusses the potential role data sonification plays in understanding the natural world.

Posted in SuperCollider | Leave a reply

WolframAlpha FM Synthesizer

Posted on by Jacob Joaquin
2


Did you know that WolframAlpha is also an FM Synthesizer?

Note: The “Play sound” button doesn’t always appear, but when it does it’s magical.
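For context, the kind of two-operator FM that WolframAlpha plays back takes only a few lines in any language. A minimal Python sketch with arbitrary example parameters (this is not WolframAlpha’s actual implementation):

```python
import math

SR = 44100  # sample rate, an arbitrary choice for this sketch

def fm(carrier=220.0, mod=110.0, index=5.0, dur=1.0, amp=0.5):
    """Two-operator FM: a modulator sine wobbles the carrier's phase.
    The modulation index controls sideband strength (brightness)."""
    return [amp * math.sin(2 * math.pi * carrier * i / SR
                           + index * math.sin(2 * math.pi * mod * i / SR))
            for i in range(int(dur * SR))]
```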

Posted in Computer Music | 2 Replies

Sampler Concrete

Posted on by Jacob Joaquin
1

Photo by Carbon Arc. Licensed under Creative Commons.

First, I want to welcome aboard Jean-Luc Sinclair. As part of his NYU Software Synthesis class, he has graciously decided to share the articles he is writing for his students. His first contribution, Organizing Sounds: Musique Concrete, Part I, has already proven to be the most popular post here at CodeHop. Last year, before The Csound Blog became CodeHop, Jean-Luc wrote another amazing piece, Organizing Sounds: Sonata Form, which I highly recommend. Thank you, Jean-Luc!

Now on to today’s example. (Get sampler_concrete.csd)

Many tape techniques are simple in nature and easy to mimic in the digital domain. After all, a sampler can be thought of as a high-tech, feature-endowed tape machine. An even more apt comparison would be a waveform editor such as Peak, WaveLab or Audacity.

I’ve designed a Csound instrument called “splice” that is about as basic as it gets when it comes to samplers. My hope is that the simplicity of the instrument will bring attention to the fact that many of the tape concrete techniques mentioned in Jean-Luc’s article are themselves simple.

Let’s take a look at the score interface to “splice”:

i "splice" start_time duration amplitude begin_splice end_splice

The start time and duration are default parameters of any score instrument event. The three additional parameters set the amplitude, the beginning time (in seconds) of the sample to be played, and its end time (in seconds).

With this short list of instrument parameters, the following techniques are showcased in the Csound example: Splicing, Vari-speed, Reversal, “Tape” Loop, Layering, Delay and Comb Filtering.
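The Csound instrument itself is in the downloadable csd; as a language-neutral illustration of what a splice-style player boils down to (read a segment, optionally reverse it, resample it to fit the requested duration, mix it in), here is a rough Python/NumPy sketch. The parameter names mirror the score interface above, but the code is my own and not a translation of the Csound:

```python
import numpy as np

def splice(source, sr, start_time, duration, amplitude, begin, end, out):
    """Mix source[begin:end] (times in seconds) into `out` at start_time.
    Resampling the excerpt to fill `duration` gives vari-speed;
    begin > end plays the excerpt reversed."""
    seg = source[int(min(begin, end) * sr):int(max(begin, end) * sr)]
    if begin > end:
        seg = seg[::-1]                            # reversal
    n = int(duration * sr)
    idx = np.linspace(0, len(seg) - 1, n)          # vari-speed resample
    resampled = np.interp(idx, np.arange(len(seg)), seg)
    s = int(start_time * sr)
    out[s:s + n] += amplitude * resampled          # layering by mixing
```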

Continuing Schaeffer’s tradition of using train recordings, I’m using a found sound of the Manhattan subway that I discovered on SoundCloud. The recording is approximately 30 seconds long, and most of the splicing in the examples takes place between 17 and 26 seconds into it. Here are the results.

With this one simple instrument, it is entirely conceivable to compose a complete piece in the style of classic tape music.

Posted in Csound | Tagged composition, computer music, csound, Musique Concrete, NYU Software Synthesis, synthesis | 1 Reply

Organizing Sounds: Musique Concrete, Part I

Posted on by Jean-Luc Sinclair
4

 

I. Musique Concrete and the emergence of electronic music

Musique Concrete is nothing new. It was pioneered by Pierre Schaeffer and his team in the early 1940s at the Studio d’Essai de la Radiodiffusion Nationale in Paris, an experimental studio initially created to serve as a resistance hub for radio broadcasters in occupied France. At the time, however, it represented a major departure from traditional musical paradigms. By relying entirely on recorded sounds (hence the name “concrete”, as in ‘real’) as a means of musical creation, Schaeffer opened the door to an entirely new way of not only making, but also thinking about, music. It represented a major push in a number of new directions.

Timbre suddenly became a musical dimension as important as pitch had been until then, something that composers like Edgard Varese had long been thinking and writing about. It also paved the way for the emergence of new compositional forms and structures. As Pierre Boulez pointed out in “Penser la musique aujourd’hui”, musical structures were traditionally perceived by the listener as a product of the melody. By removing the melody altogether, and working instead with sound objects, Schaeffer became a bit of an iconoclast. As he himself pointed out in his “Traité des objets musicaux”, the composer is never really free: the choice of notes rests on the musical code that the composer and the audience have in common. With Musique Concrete, the composer moved one step ahead of the audience and was, to some extent, liberated.

One of the earliest pieces of the genre, and perhaps the most famous to this day, is Schaeffer’s own “Étude aux chemins de fer”, in which the composer mixed a number of sounds recorded from railroads, such as engines and whistles, to create a unique and truly original composition.

You can listen to the piece here: www.synthtopia.com/content/2009/11/28/pierre-schaeffer-etude-aux-chemins-de-fer/

By today’s standards, the techniques used by the pioneers of the genre were rudimentary at best, yet they were, and remain, crucial tools of electronic music creation. By taking a look at these techniques and applying them to a computer music language such as Csound, we can not only gain a better understanding of this pivotal moment in music history, but also deepen our knowledge of sound and composition.

II. Concrete Techniques

The early composers of Musique Concrete mostly worked with records, tape decks, tone generators, mixers, reverbs and delays. Compared to the tools available to the computer musician today, it is a rather limited palette indeed. This, however, forced the composer to be much more careful in the selection of source materials, mostly recordings of course, and far more judicious in the use of the processes to be applied. Using recordings as the main source of sounds confronts the composer with decisions early in the compositional process, decisions which have profound consequences for the final piece.

1. Material Selection

While perhaps a bit reductive, the compositional process could be thought of as the selection and combination of various materials. When working with sound objects, the selection process is maybe even more crucial.

It could easily be argued that this process begins at the recording stage. If you happen to be recording your own material, the auditory perspective you choose will have a profound impact on the resulting sound. As Jean-Claude Risset pointed out in the analysis of his 1985 piece “Sud”, the placement and choice of microphone hugely change the sound itself. For instance, placing the microphone very close to the source has a magnifying effect on the sound, while moving it back a bit gives a broader view of the context within which the sound is recorded, allowing more ambient sounds and atmospheres to seep in. This is something audio engineers have long been aware of, but that often gets overlooked by computer musicians. I’m quite fond of small microphones, such as lavalier mics, which allow the engineer to position them very close to the sound source, in spots where a traditional microphone will not fit. This makes for some very interesting results. For instance, a lav mic placed right below a rotating fan will make it sound like a giant machine, shaking and rattling as if it were 60 feet tall inside a giant wind tunnel. As always, experimentation and careful listening are key.

If you are working with already recorded material, an interesting approach is to combine different sounds that evoke similar emotions. This approach was favored by the American composer Tod Dockstader, who in his 1963 piece “Apocalypse” used a recording of Gregorian chant as a vocalization over the slowed-down sound of a creaky door opening and closing. Dockstader came from a post-production background, and perhaps it is no accident that Schaeffer had a background in broadcasting and engineering as well.

You can listen to an excerpt of Apocalypse here: www.youtube.com/watch?v=TYabnQctxpo

This technique, of using very different sounds that evoke similar or complementary emotions, is also often used by film sound designers.

Star Wars’ sound designer Ben Burtt often speaks of this in his process. By working with familiar sounds, combining them in unexpected ways and putting them to picture, he has been able to create some of the most successful and iconic sounds in the history of film.

2. Sound techniques and manipulations

While the technology available to the pioneers of musique concrete was fairly primitive, composers managed to come up with a number of creative methods for sound manipulation and creation. A non-exhaustive but representative list of these would include:

- Vari-speed: changing the speed of the tape to change the pitch of the sound.

- Reversal: playing the tape backwards.

- Comb filtering: playing a sound against a slightly delayed copy of itself, bringing various resonant frequencies in or out.

- Tape loops: repeating portions of a recording to create loops and grooves out of otherwise non-rhythmic material.

- Splicing: changing the order of the material, or inserting new sounds within a recording.

- Filtering: bringing different frequencies of a sound in or out to change its quality and texture.

- Layering: recording multiple sources down to a new reel, or mixing them in real time via a mixing board.

- Reverberation and delay: used to create a sense of unity, or fusion, between sound sources of different origins, and a great way to superimpose a new sense of space on an existing recording.

- Expanded-compressed time: slowing a sound down or speeding it up, sometimes combined with reversing its direction.

- Panning: placing the sound within a stereo or multichannel environment.

- Analog synthesis: although the genre was based mostly on recorded sounds, composers sometimes inserted tones and sweeps from oscillators into their compositions.

- Amplitude modulation: periodically varying the amplitude of a sound, or applying a different amplitude envelope over it.

- Frequency modulation: although FM as a synthesis technique was discovered long after the beginnings of tape music, vibrato was a well-known technique long before then.
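Several of these reduce to a handful of lines of code. Comb filtering, for example, is literally a sound mixed with a delayed copy of itself; here is a Python sketch, with arbitrary delay and mix values:

```python
def comb(signal, sr, delay_s=0.005, mix=0.5):
    """Mix a signal with a delayed copy of itself. Notches appear at odd
    multiples of 1/(2*delay_s) Hz, peaks at multiples of 1/delay_s Hz."""
    d = int(delay_s * sr)
    out = list(signal)
    for i in range(d, len(signal)):
        out[i] = (1 - mix) * signal[i] + mix * signal[i - d]
    return out
```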

In the next installment of this article, we will look at practical ways to apply these techniques in Csound and create our own musique concrete etude. In the meantime, I encourage you to listen to more music by the composers mentioned above, and as always: experiment, experiment, experiment.


Posted in Computer Music | Tagged composition, computer music, csound, Musique Concrete, NYU Software Synthesis, synthesis | 4 Replies

Overheard in Black Rock City

Posted on by Jacob Joaquin
Reply

Overheard at 2:00 and Anniversary in Black Rock City.

fork{{play{SinOsc.ar(0.2*WhiteNoise.ar*1943+1932)*EnvGen.ar(Env.new([1,0.4,0],[0.05,2],-4),2,1,0,1,2)};15.wait}!inf} // 2:00 & Anniversary
Posted in SuperCollider | Leave a reply

SuperCollider Bohlen-Pierce Tweet

Posted on by Jacob Joaquin
Reply

There is definitely a Zen thing to composing in 140 characters or less. This next tweet features the Bohlen-Pierce scale.

fork{loop{play{f=_*3.pow(17.rand/13);e=EnvGen.ar(Env.perc,1,0.3,0,1,2);PMOsc.ar(f.([438,442]),f.(880),f.(e))*e};[1/6,1/3].choose.wait}}
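For reference, the `3.pow(17.rand/13)` in the tweet is the equal-tempered Bohlen-Pierce step: 13 equal divisions of a 3:1 “tritave” rather than 12 divisions of a 2:1 octave. A Python sketch of the tuning (the base frequency is an arbitrary choice):

```python
def bohlen_pierce(base=440.0, steps=13):
    """One tritave of the equal-tempered Bohlen-Pierce scale:
    13 equal divisions of a 3:1 frequency ratio."""
    return [base * 3 ** (i / steps) for i in range(steps + 1)]
```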
Posted in SuperCollider | Leave a reply

SuperCollider Markov Chain

Posted on by Jacob Joaquin
3


During an aggressive weekend push to learn as much as possible about SuperCollider, I translated an earlier Csound etude of mine into SC code that generates a sequence in real time using a Markov chain. I’ve come away with a few thoughts.

While I believe Csound definitely has a sharp edge in the DSP department, SuperCollider excels at letting users compose their own algorithmic sequencers. Even though the syntax of this Smalltalk-inspired language looks and feels very slippery to me, the SC code comes off as much more concise and expressive than its Csound counterpart.

As for the work itself, I consider this very much a technical exercise; there is still so much about SuperCollider I’m completely ignorant of, including basic patterns, Pbinds and so on, and grinding against a problem like this is a big help in leveling up. It appears I’ll be able to build a generic Markov chain engine, separating the SynthDefs from the nodes in a reusable function of some sort, which is the long-term goal. This earliest of prototypes already goes pretty far in that direction, but there is plenty of room for improvement.
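The SC code is the real thing; as a language-neutral sketch of the underlying idea, here is a first-order Markov chain sequencer in Python. The transition table is a hypothetical example and is not taken from the etude:

```python
import random

def markov_sequence(transitions, start, length, seed=None):
    """Generate a note sequence from a first-order Markov chain.
    `transitions` maps each state to a list of (next_state, weight) pairs."""
    rng = random.Random(seed)
    state, out = start, [start]
    for _ in range(length - 1):
        choices, weights = zip(*transitions[state])
        state = rng.choices(choices, weights=weights)[0]
        out.append(state)
    return out

# Hypothetical MIDI-pitch transition table, purely for illustration:
table = {
    60: [(62, 3), (67, 1)],
    62: [(60, 1), (64, 2)],
    64: [(62, 1), (67, 2)],
    67: [(60, 2), (64, 1)],
}
```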

Grab the SuperCollider code.

Posted in Csound, SuperCollider | 3 Replies

The Audio Programming Book at Facebook

Posted on by Jacob Joaquin
1


The Audio Programming Book, edited by Richard Boulanger and Victor Lazzarini, has a brand new Facebook page.

I personally love this book. I spent a lot of time with it when I first received my copy, and in that time many concepts of what is actually happening behind the scenes of digital audio and synthesis started to become clear to me.

Unfortunately, I got crazy busy and had to shelve it, though I plan to revisit it this coming October.

I did post some of the exercises I was working on over at GitHub. If you’re interested:

MIDI to Frequency Chart
Breakpoint List
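On the MIDI-to-frequency theme, the standard conversion is a one-liner (12-tone equal temperament with A4 = MIDI note 69 = 440 Hz):

```python
def midi_to_freq(note, a4=440.0):
    """Convert a MIDI note number to frequency in Hz (12-TET, A4 = 69)."""
    return a4 * 2 ** ((note - 69) / 12)
```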