Alrighty, so, where to start. Audacity is a free, open-source program that lets you edit audio files. One nifty feature of Audacity, among a bunch of other stuff, is that it can normalize audio and analyze a waveform using the Fast Fourier Transform (FFT) to see what component frequencies make up your audio. All the Fourier transform does is take your data from the time domain (seconds, s), which is how you listen to it, in time, to the frequency domain (Hz, 1/s, or s^-1 if you like). Audacity can do this Fourier transform thing pretty well.

What is interesting about the Fourier transform is the resolution of your data: the more audio you analyze in time, the higher the resolution will be once you're in the frequency domain. It turns out the FFT is fastest when you use a certain number of sample points, namely a power of two: 2^n. So, to keep things reasonable, the largest number of sample points Audacity allows you to use is 16384, or 2^14; the larger the number, the longer the transform takes to compute. Trouble is, this only gives you a resolution of about 2.69 Hz, because 16384 samples corresponds to only about 0.37 seconds at a 44.1 kHz sample rate, which is common for .wav files.

So what do you do if you want better than 2.69 Hz resolution in your frequency spectrum? Well, you use this script. But be careful: this script will analyze your entire .wav file (only .wav files!). So if you give it a really long file, the transform will take FOREEEEVEEEEEEEEERRRRRRRRRRRR. So, take care in how long a file you pass to this script.

So, how do you prepare a file to be analyzed using the script? Well, you can use Audacity to trim out only the part you want to analyze. That's pretty straightforward, so I'll let you figure it out. Or you can use ffmpeg, which can be tricky to obtain. Anyways, the relevant command if using ffmpeg would be something like this:
ffmpeg -i _original_file_.wav -ss 0:00:05 -t 4 -ac 1 _new_file_.wav
That will take your original wave file, grab 4 seconds of audio starting 5 seconds in, convert it to mono, and write a new .wav file called _new_file_.wav.
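If you can't get ffmpeg, the same kind of trim can be sketched with Python's standard-library wave module. This is just an illustration, not part of the script; the filenames are placeholders, it synthesizes a test tone so it actually runs, and unlike the ffmpeg command above it does NOT down-mix to mono:

```python
import math
import struct
import wave

# Demo setup: synthesize a 10 s mono test tone standing in for your
# real recording, so this sketch is runnable as-is.
fs = 8000
samples = [int(10000 * math.sin(2 * math.pi * 440 * k / fs))
           for k in range(10 * fs)]
with wave.open("original.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(fs)
    w.writeframes(struct.pack("<%dh" % len(samples), *samples))

# The actual trim: grab 4 seconds starting 5 seconds in, the same
# window as `-ss 0:00:05 -t 4` in the ffmpeg command above.
with wave.open("original.wav", "rb") as src:
    rate = src.getframerate()
    src.setpos(5 * rate)                   # skip to 5 seconds in
    frames = src.readframes(4 * rate)      # read the next 4 seconds
    with wave.open("trimmed.wav", "wb") as dst:
        dst.setparams(src.getparams())     # same channels/width/rate
        dst.writeframes(frames)
```

For a stereo recording you'd still want ffmpeg's `-ac 1` (or Audacity) to get mono before feeding the file to the script.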
So you’ve got your file. But you’re not ready yet. You need python. And numpy. And for the fancy features of the script, scipy too. Uh, google it for your operating system. I’m a Linux/macOS guy, so I can help you out if you need it for those two.
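Once numpy is installed, a quick way to sanity-check it (and the resolution arithmetic from earlier) is a few lines like these. The numbers here are just the Audacity example from above, not anything your file requires:

```python
import numpy as np

fs = 44100   # sample rate (Hz), typical for .wav files
n = 16384    # Audacity's maximum FFT size, 2**14

# Frequency axis of a real FFT: n//2 + 1 bins from 0 Hz up to fs/2.
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

resolution = freqs[1] - freqs[0]   # bin spacing = fs / n
duration = n / fs                  # seconds of audio analyzed

print(f"{resolution:.2f} Hz resolution from {duration:.2f} s of audio")
# prints "2.69 Hz resolution from 0.37 s of audio"
```

That is the whole point of the script: analyze more seconds of audio (bigger n at the same fs) and the bin spacing fs/n shrinks, so the resolution improves.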
So, you’ve got everything installed (python, numpy, and scipy). Great! So, you go to the command line and you type something like:
python dofft.py _filename_ bass tenor
This tells python to use the dofft.py script, and it tells the script to get the audio data from the file _filename_.wav and that you want to analyze both the bass and tenor drones. It will tell you the peak heights and the areas under the peaks corresponding to the first 19 overtones of the bass drone and the first 9 of the tenor drones. Or you can pass it just bass or just tenor. Note that the first bass overtone is the tenor fundamental.

What you’ll get out is a text file (.txt) with the frequency in the left column and the amplitude in the right column. So, plop that in Excel or something and plot it. It will be a lot of data; Excel may not even be able to handle it. I like gnuplot, it can do anything. In gnuplot I use the following commands:
set logscale x
plot "_filename_.txt" u 1:2 w lp
Alrighty, now the good stuff. If you add the word inverse as one of the command-line arguments:
python dofft.py _filename_ bass tenor inverse
The script will generate another .wav file named _filename_inverse.wav. Yay, you have two files with the exact same information in them. BORING! Not so fast, ya hear. If you also pass numbers between 1 and 20 as command-line arguments, the script will zero out those frequencies (not in the .txt file, though; those are preserved). So who cares if some frequencies are zeroed out? Well, the inverse command transforms the frequency data back to the time domain. So, if you zero out some frequencies and then transform back, you can hear what the audio would sound like without certain fundamentals or overtones. Say you want to hear just the overtones your pipes produce. Record your drones, get a .wav file, and run the following command:
python dofft.py _filename_ bass tenor 1 2 inverse
1 is for the bass fundamental and 2 is for the tenor fundamental (also the 1st bass overtone). So the file _filename_inverse.wav will have neither the bass nor the tenor fundamental in it! Neato, eh? I haven’t really played with passing just bass or just tenor much, but the functionality is there. Currently, if you only specify one of them, you can only go up to the 9th overtone. But you can still do the inverse, and you’ll still get a .txt file of the raw frequency data over the whole spectrum.
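The zero-and-invert trick itself is simple, and here's a self-contained numpy sketch of the idea. A synthetic two-tone signal stands in for your drone recording, and the bookkeeping (picking bins near a target frequency) is simplified compared to whatever the real script does:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs   # 1 second of audio at 44.1 kHz

# Synthetic "drone": a 100 Hz fundamental plus a 200 Hz overtone.
signal = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)

# Forward transform: time domain -> frequency domain.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

# Zero out everything within 2 Hz of the 100 Hz fundamental...
spectrum[np.abs(freqs - 100) < 2] = 0

# ...and transform back to the time domain ("inverse").
filtered = np.fft.irfft(spectrum, n=len(signal))

# Only the 200 Hz overtone remains, at roughly its original
# amplitude of 0.5.
print(round(float(np.abs(filtered).max()), 2))
```

Write `filtered` back out to a .wav (scaled to int16) and you can listen to the fundamental-free version, which is what the inverse option gives you.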
Download the script here. It’s probably best to right-click and download the linked file. You’ll then need to change the file extension from .txt to .py in order for it to work.
Note: if you can’t get scipy installed, you can still use the script, but there are some funny things you have to do, which are hopefully explained in the messages it prints when scipy is missing. Without scipy you can’t do the inverse or integrate the peaks to get their areas, but you still get the .txt file and the peak heights printed for you.
Remember, I’m a computational chemist who happens to know a little bit about python. I’m not a programmer and have never had any formal education in programming. So if you don’t like my style, cool, just don’t bug me about it. My first language was FORTRAN 77, so cut me some slack. Old habits are hard to break.
Good write up, Patrick. Will have to try the new script with my previous files…
I never cared much about bagpipes, but I recently went to a bagpipe festival, mainly because it was free, and I have to say it was a great time! The bagpipe music gets into your head after a while and seems to have an overall positive effect on the crowd’s mood.