University of York Weather Station Sonification/Visualisation
I made "this weather" in Pure Data to sonify and visualise the weather station data online at the University of York's website (http://weather.elec.york.ac.uk/). I wanted to give the data a sonic/visual narrative that could be easily understood.
Wind: White noise is the basis for the wind. Speed determines the cutoff of a lowpass filter & gust speed determines the amplitude of the output. Direction is represented by the white noise's pan position.
Rain/hail: The amount of rain & hail that has fallen since midnight is used to control the frequency of a sawtooth wave. The faster it oscillates, the more rain has fallen. Rain is heard in the left audio channel, whilst hail is heard in the right.
Pressure: The air pressure controls the frequency of a sine wave generator. The low frequency is scaled to be audible to human ears.
Humidity/Dew Point: This data controls the duty cycle of a pulse wave. If humidity is very high, it will produce a thin and raspy tone; if it is low, it will produce a 'fuller' sound. Dew point & humidity are closely related. Here, the dew point affects the amplitude modulation of the pulse wave: the higher the dew point, the faster the modulation.
Temperature: Simple FM synthesis is used to represent temperature information. The overall temperature controls the pitch of the carrier wave, wind chill controls the modulator and wind speed controls the modulation amount.
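As a rough illustration of the temperature mapping, here is a small numpy sketch of simple FM along those lines. The scaling ranges are entirely my own guesses, not the values from the actual patch:

```python
import numpy as np

SR = 44100  # sampling rate

def fm_temperature(temp_c, wind_chill_c, wind_speed_ms, dur=1.0):
    """Sketch of the FM mapping described above (all ranges are assumptions).

    temp_c        -> carrier frequency
    wind_chill_c  -> modulator frequency
    wind_speed_ms -> modulation index (amount)
    """
    # Map temperature (assumed -10..40 C) to a 100..1000 Hz carrier
    carrier_hz = np.interp(temp_c, [-10, 40], [100, 1000])
    # Map wind chill (assumed -20..40 C) to a modulator frequency
    mod_hz = np.interp(wind_chill_c, [-20, 40], [20, 400])
    # Map wind speed (assumed 0..30 m/s) to modulation index
    index = np.interp(wind_speed_ms, [0, 30], [0, 10])

    t = np.arange(int(SR * dur)) / SR
    modulator = np.sin(2 * np.pi * mod_hz * t)
    return np.sin(2 * np.pi * carrier_hz * t + index * modulator)

sig = fm_temperature(temp_c=15, wind_chill_c=12, wind_speed_ms=5)
```

Warmer days give a higher carrier pitch, and windier days give a brighter, more sideband-heavy tone.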
Once the language of the sound and visuals is understood, you can grasp the weather conditions very quickly through sensory perception; possibly quicker than if you were to read the information.
(Please note: the weather information on the University of York's website is updated every minute. Therefore sonic and visual changes happen at a glacial pace, particularly on fairly pleasant days.)
Bandlimited oscillators
This is a collection of abstractions that generate bandlimited oscillators. They include:
[bl-saw.mmb~] - bandlimited sawtooth waveform
[bl-pulse.mmb~] - bandlimited pulse wave with PWM
[bl-tri.mmb~] - bandlimited triangle wave
[bl-asymtri.mmb~] - bandlimited asymmetrical triangle wave (sort of...see below)
There is also an object called [bl-init.mmb]. This is the object that initializes all the waveforms and at least one instance MUST be included in order for the others to work.
There are also help patches included.
IMPORTANT!
Before you can use these, you must do the following steps.
1. Open [bl-init.mmb]
2. There is a message box that says [44100(. This is the maximum sampling rate these will work at (running at lower sampling rates is fine). If you plan on using higher sampling rates, change this message box and click it. Technically, it will still work at a higher sampling rate, but it won't generate harmonics above the Nyquist frequency of the rate in this box.
3. Click the [bang( to fill the wave tables. This patch actually creates a wavetable for EVERY harmonic between 30Hz and the Nyquist frequency. So it will take a few minutes. Be patient! You will get a message in the Pd window when it is done.
4. Save the patch.
Once you do this, [bl-init.mmb] will simply load with the tables already generated, so you don't have to wait every time you instantiate it. I didn't have this already done for you in order to keep the upload small, and so you can see how to adjust it if you need to.
So, I guess I'll go ahead and try to explain how these work. As stated above, every harmonic is generated in [bl-init.mmb] for the oscillators. It doesn't store a separate table for each set of harmonics, however (e.g., there isn't a saw table with two harmonics, a saw table with three harmonics, etc.). Instead, each of these individual tables is tacked onto the end of the previous one to create one long wavetable. So, for each set of 1027 samples in the sawtooth wavetable, there is one cycle with a set number of harmonics.
When the oscillators read the frequency input, it is divided into the Nyquist frequency to determine how many harmonics are needed. This count (* 1027) is then used as the offset into the table. This is how I got around the problem of table switching at block boundaries: by doing it this way, the "switching" is done at audio rate.
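For illustration, here's a rough numpy sketch of that scheme: one additive-synthesis sawtooth cycle per harmonic count, concatenated into one long table, plus the frequency-to-offset calculation. All names and the demo table size are my own; the actual abstractions do this with Pd objects:

```python
import numpy as np

SR = 44100
NYQUIST = SR / 2
TABLE_LEN = 1027  # samples per single-cycle sub-table, as in the text

def build_saw_table(max_harmonics):
    """One additive-synthesis sawtooth cycle per harmonic count, concatenated."""
    phase = np.arange(TABLE_LEN) / TABLE_LEN
    cycles = []
    for n_harm in range(1, max_harmonics + 1):
        # sawtooth partial weights fall off as 1/k
        cycle = sum(np.sin(2 * np.pi * k * phase) / k for k in range(1, n_harm + 1))
        cycles.append(cycle * (2 / np.pi))  # scale toward the ideal sawtooth amplitude
    return np.concatenate(cycles)

def table_offset(freq):
    """Divide the frequency into Nyquist for the harmonic count, then offset by 1027s."""
    n_harm = max(1, int(NYQUIST // freq))
    return n_harm, (n_harm - 1) * TABLE_LEN

table = build_saw_table(64)          # small demo table (the real patch fills far more)
n_harm, offset = table_offset(1000)  # 22 harmonics of 1000 Hz fit below 22050 Hz
```

Because the offset is just arithmetic on the frequency signal, the "table switch" happens per-sample rather than per-block.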
There are actually two [tabread4~]s. One has one less harmonic than the other. As the frequency changes it crossfades between these tables. When one table goes completely silent, that's when it "switches." Below 30Hz, they switch to geometrically perfect waveforms.
[bl-saw.mmb~] and [bl-tri.mmb~] just read through the tables. Nothing really interesting about them.
[bl-pulse.mmb~] is actually the difference between two sawtooths. In other words, there are two bandlimited sawtooth oscillators inside of it. Adjusting the pulse width causes the phase of one of the sawtooths to shift. When you subtract this phase-shifted sawtooth from the other, it creates a bandlimited pulse wave...without oversampling! This is the same Phase Offset Modulation method used in Reason's SubTractor.
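The trick is easy to verify numerically with ideal (non-bandlimited) sawtooths; this little sketch is my own, not code from the abstraction:

```python
import numpy as np

def naive_saw(phase):
    """Ideal (non-bandlimited) sawtooth in -1..1 for phase in 0..1."""
    return 2.0 * (phase % 1.0) - 1.0

def pulse_from_saws(phase, width):
    """Subtracting a phase-shifted sawtooth from another yields a pulse wave.

    The constant (2*width - 1) recenters the result to swing between -1 and 1;
    `width` is the duty cycle.
    """
    return naive_saw(phase) - naive_saw(phase + width) + (2.0 * width - 1.0)

phase = np.arange(1000) / 1000.0
square = pulse_from_saws(phase, 0.5)  # width 0.5 gives a square wave
```

Since each sawtooth is bandlimited in the real abstraction, their difference is bandlimited too, which is why no oversampling is needed.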
[bl-asymtri.mmb~] uses the same technique as [bl-pulse.mmb~], except it uses bandlimited parabola waves instead of sawtooths. Adjusting the phase offset sets where the top vertex is. This doesn't really generate true triangle or saw waves, though. They still have the parabolic curve in them, so the harmonics seem to come out a little more. It's more of a "reasonable approximation." But it is bandlimited, and it does sound pretty cool to modulate the shape. I don't have the scaling quite right yet, but I'll get to it later...maybe.
I should also mention that these use my [vphasor.mmb~] abstraction, so the phase reset is sample accurate.
I'll eventually set these up to allow frequency and pulse-width arguments, but I'm currently in the process of moving to another country, so it may be a little bit before I get around to it.
Did any of that make sense?
Bandlimited~ - bandlimited signal generator (square, triangle, saw)
After reading this article
http://en.flossmanuals.net/PureData/Antialiasing
I decided to create my own bandlimited signal generator.
This external generates square, triangle, saw, reverse saw and saw-triangle waves, bandlimited to the Nyquist limit.
In order to conserve CPU it uses a series of wavetables. Because of this, it might take a little while (a few seconds) to load the first instance.
The easiest way to use it is to add the bandlimited folder to your path. But you can also use it like this:
[bandlimited/bandlimited~ square]
[bandlimited/bl.sq~]
as long as the bandlimited folder is placed into a folder already in the path (ex: ~/Library/Pd on OS X)
...
There are wrapper abstractions for each waveform type (bl.saw~, bl.tri~, bl.rsaw~, bl.sawtri~, bl.sq~).
Start by taking a look at bandlimited~-help.pd
The object should act a lot like [osc~] and [phasor~]: the first signal inlet controls the frequency, the second float inlet sets the phase, and it takes a creation argument for the starting frequency. There are three methods, but I'll let the help file explain.
There should already be a Win32 DLL build and an OS X 10.5 build. I'm sure Linux folk can take care of themselves in this regard. Tested on Pd-extended 0.41.
There's a repository here
http://gitorious.org/bandlimited/bandlimited
I plan on adding more waveforms and am taking suggestions.
v0.92
EDIT:
Version v0.92 includes a new waveform: pulse. This waveform is a lot like a square wave but with a variable duty cycle (it's not symmetric). The new parameter is set by a third signal inlet whose value should be between 0 and 1; 0.5 gives you a square wave.
Version v0.91 includes a new method, "approximate", which uses a little less CPU for a close-enough waveform.
Writesf~ set parameters?
Hi Everyone,
This is my first post. I searched the forum but couldn't find the info I needed, and was wondering if anyone had any insight into what's going on.
Basically, I have a Pd patch that records a wav file to my hard drive. When I import that wav file into another program I'm using, I get the following data but no playback.
BROKEN:
format: Unknown
channels: 2
sampleRate: 44100
bitRateUpper: 44100
bitRateLower: 44100
bitRateNominal: 0
blockAlign: 8
bitsPerSample: 32
sampleMultiplier: 1
commentList:
commentTable:
When I take that same file into an audio editor and export it as a wav, I get the following data, and the wav plays back:
WORKS:
format: PCM
channels: 2
sampleRate: 44100
bitRateUpper: 22050
bitRateLower: 22050
bitRateNominal: 0
blockAlign: 4
bitsPerSample: 16
sampleMultiplier: 1
commentList:
commentTable:
So my question is: is [writesf~] the best way to write wav files to the hard drive? Also, is there a way for me to set some parameters so that my broken wav file has the same parameters as my working wav (i.e., format set to PCM)?
thanks
Seq Sampler Loop
No sound out of this, Oscar.
Here's a bit of the error message:
error: soundfiler_read: /home/pelao/Documentos/audio/loops/terminus13.wav: No such file or directory
error: soundfiler_read: /home/pelao/Documentos/audio/loops/terminus14.wav: No such file or directory
error: soundfiler_read: /home/pelao/Documentos/audio/loops/terminus15.wav: No such file or directory
error: soundfiler_read: /home/pelao/Documentos/audio/loops/terminus16.wav: No such file or directory
error: soundfiler_read: /home/pelao/Documentos/audio/loops/zapa06.wav: No such file or directory
(the same errors repeat many more times)
But, it looks awesome.
Keep well ~ Shankar
Float output format
I am using a float as part of a filename (of a sound I record). However, the filenames come out like:
blabla0.wav
blabla10.wav
blabla20.wav
[..]
blabla100.wav
Now, I was wondering if it is possible to get:
blabla000.wav
blabla010.wav
blabla020.wav
[..]
blabla100.wav
In other words, can I get a specific output format, e.g. three zero-padded digits, from a [float]?
Note: I am computing the float (counter*constant), so I can't just type a three-digit number as the input for [makefilename].
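If I remember right, [makefilename] accepts printf-style patterns, so something like a zero-padded integer format should do it; the desired formatting logic is the same as printf's `%03d`. A quick Python sketch of that logic (the function name and argument names are mine):

```python
def padded_name(counter, constant, prefix="blabla", width=3):
    """Zero-pad the computed value to a fixed number of digits (printf %03d)."""
    value = int(counter * constant)
    return f"{prefix}{value:0{width}d}.wav"

# counter steps of 1 with constant 10 give blabla000.wav, blabla010.wav, ...
names = [padded_name(c, 10) for c in range(3)]
```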
Synthetic thunder
I don't understand enough about the geometry to visualise the distortions and phase changes that occur off the perpendicular axis. I can see you deal with this in the MATLAB code. Is that a re-implementation of Ribner and Roy's equations? It's hard enough to imagine in a 2D plane, never mind 3D.
All I did with Pd was make a recirculating buffer that has a filter to approximate propagation losses, and randomly add N-waves created with [vline~] segments to it. Sometimes the superposition does create thunder-like effects, but that's more luck than judgement, it seems.
I reckon to get the right effect you either need several N-wave generators in parallel, or several parallel delays operating on one wave.
As the text says, it's equivalent to a convolution of the N-wave with a set of points that are the distances to corners in the tortuous line. If we assume the bolt comes straight down, each distance is the hypotenuse of a right-angled triangle: the square root of the horizontal distance squared plus the height squared. With c as a constant, time then correlates directly to distance.
In a way it's granular synthesis, so the density must be calculated; I think it's the propagation time divided by the period of one N-wave. You could probably reduce the whole caper to two variables: density and off-axis phase shift. At ground level the observer is perpendicular, so it's very dense, but as you move up the lightning bolt the subtended angle increases and so does the propagation time, so density tails off.
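The distance/time geometry above is simple to sketch; here's a toy Python version (the distances and spacing are made up for illustration):

```python
import math

C = 343.0  # speed of sound in m/s (approximate)

def arrival_time(horizontal_dist, height):
    """Arrival time of an N-wave emitted at `height` up a vertical bolt.

    Straight-line distance is the hypotenuse sqrt(d^2 + h^2);
    with c constant, time is directly proportional to distance.
    """
    return math.hypot(horizontal_dist, height) / C

# Corners spaced up the bolt: arrival times spread out with height,
# so the N-wave density heard at the listener tails off over time.
times = [arrival_time(1000.0, h) for h in range(0, 3001, 100)]
```

Convolving the N-wave with impulses at these times is the "set of points" interpretation from the text.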
I'm really looking forward to hearing where this takes you next...
a.
Timbre conversion
@daisy said:
I have read somewhere that "if a voice is at the same pitch and same loudness and still one can recognize that two voices are different, it is because of TIMBRE (tone quality)". (I agree there are other features as well that need to be considered.)
Timbre is another word for spectrum. The spectrum of a sound is the combination of basic sine waves that are mixed together to make it. Every sound (except a sine wave) is a mixture of sine waves. You can make any sound by adding the right sine waves together. This is called synthesis.
@daisy said:
First Question:
So how can we calculate the TIMBRE of a voice? Just as the [fiddle~] object is used to determine the pitch of a voice, what object is used for TIMBRE calculation?
The [fft~] object splits up the spectrum of a sound. Think of it like a prism acting on a ray of light. Sound, which is a mixture of sines, goes in like white light. A rainbow of different colours comes out. Now you can see how much red, blue, yellow or green light was in the input. That's called analysis.
So the calculation that gives the spectrum doesn't return a single number. Timbre is a vector, or list of numbers which give the frequencies and amplitudes of the sine waves in the mixture. We sometimes call these "partials".
If you use sine wave oscillators to make a bunch of new sine waves and add them together according to this recipe you get the original sound back! That's called resynthesis.
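The analysis/resynthesis round trip is easy to demonstrate outside Pd. A numpy sketch (my own toy example, not a Pd patch):

```python
import numpy as np

SR = 8000
t = np.arange(SR) / SR
# A toy "timbre": three partials with different amplitudes
signal = (1.0 * np.sin(2 * np.pi * 220 * t)
          + 0.5 * np.sin(2 * np.pi * 440 * t)
          + 0.25 * np.sin(2 * np.pi * 660 * t))

# Analysis: the FFT reveals the amplitude of each sine in the mixture
spectrum = np.fft.rfft(signal)

# Resynthesis: adding the sines back together recovers the original sound
resynth = np.fft.irfft(spectrum, n=len(signal))
```

Over one second at integer frequencies, bins 220, 440 and 660 hold the three partials, and the inverse transform reconstructs the waveform exactly.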
@daisy said:
Second Question:
And how can one change TIMBRE? A pitch-shifting technique is used for pitch, but what about timbre change? Thanks.
Many things change timbre. The simplest is a filter. A high pass filter removes all the low bits of the spectrum, a bandpass only lets through some of the sine waves in the middle, and so on...
Another way to change timbre is to do analysis with [fft~] and then shift some of the partials or remove some, and then resynthesise the sound.
@daisy said:
I have a kind of general idea (vocoder), but how to implement it? And how to change the formant?
A vocoder is a bank of filters and an analysis unit. Each partial that appears in the analysis affects the amplitude of a filter. The filter itself operates on another sound (often in real time). We can take the timbre of one sound by analysing it and get it to shape another sound that is fed through the filters. The second sound takes on some of the character of the first sound. This is called cross-synthesis.
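A crude spectral version of that idea can be sketched in a few lines of numpy: measure the modulator's energy band by band, then scale the carrier's spectrum to match. This is my own simplification (real vocoders use time-varying filter banks, not one big FFT), and all names and sizes are assumptions:

```python
import numpy as np

def cross_synth(modulator, carrier, n_bands=32):
    """Crude spectral cross-synthesis: shape the carrier's spectrum with the
    band-by-band energy of the modulator."""
    M = np.fft.rfft(modulator)
    Ca = np.fft.rfft(carrier)
    out_spec = np.zeros_like(Ca)
    edges = np.linspace(0, len(M), n_bands + 1).astype(int)
    for lo, hi in zip(edges[:-1], edges[1:]):
        if hi <= lo:
            continue
        m_energy = np.sqrt(np.mean(np.abs(M[lo:hi]) ** 2))
        c_energy = np.sqrt(np.mean(np.abs(Ca[lo:hi]) ** 2))
        gain = m_energy / c_energy if c_energy > 0 else 0.0
        # carrier fine structure, modulator envelope
        out_spec[lo:hi] = Ca[lo:hi] * gain
    return np.fft.irfft(out_spec, n=len(carrier))

rng = np.random.default_rng(0)
sr = 8000
t = np.arange(sr) / sr
mod = np.sin(2 * np.pi * 100 * t)      # modulator: energy only near 100 Hz
car = rng.standard_normal(sr)          # carrier: white noise
out = cross_synth(mod, car)
```

The output keeps the carrier's texture but only in the bands where the modulator has energy, which is the "second sound takes on the character of the first" effect described above.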
/doc/4.fft.examples/05.sheepgoat.pd
Help -> 7.Stuff -> Sound file tools -> 6.Vocoder