
I downloaded Sonic Visualiser and loaded into it an audio file generated by MetaSynth, one whose spectrogram was meant to resemble the cover of the Space Voyager record.
Detecting beats, pitch, vowel sounds, and other audio features has fascinated me ever since I saw a documentary about deaf children in 1970s France, who were apparently taught to speak using a computer game that trained them in the subtle differences between vowel sounds. Although the children in the programme were profoundly deaf (often from birth) and could not hear anything at all, they were being physically trained to produce the right vibrations and sounds through their vocal cords, aided by this motivational game, which moved a character around the screen according to the sound that was emitted: the vowel sound for "A" would move it up, the vowel sound for "E" would move it to the right, the vowel sound for "U" would move it down, and so on. It would be a sort of dream to find out how to create such a program on my own. I suppose I am following this line of thought because I am interested in how we can analyse sound data meaningfully.
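A game like that could, in principle, be driven by very simple signal features. As a minimal sketch (this is not how the French system actually worked, and `estimate_pitch` is a helper name of my own invention), here is a pitch estimator based on autocorrelation, using numpy:

```python
import numpy as np

def estimate_pitch(signal, sample_rate, fmin=50.0, fmax=500.0):
    """Estimate the fundamental frequency of a signal via autocorrelation.

    A periodic signal correlates strongly with itself when shifted by one
    period, so the lag of the autocorrelation peak gives the period.
    """
    signal = signal - np.mean(signal)
    corr = np.correlate(signal, signal, mode="full")
    corr = corr[len(corr) // 2:]           # keep non-negative lags only
    lag_min = int(sample_rate / fmax)      # smallest lag worth considering
    lag_max = int(sample_rate / fmin)      # largest lag worth considering
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / lag

# Synthesize one second of a 220 Hz tone and check the estimate.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)
print(estimate_pitch(tone, sr))   # within a few Hz of 220
```

Real vowel recognition is harder than this: vowels are distinguished mainly by their formants (resonant peaks in the spectrum), not by pitch alone, so a program like the one in the documentary would need some form of spectral analysis on top of a sketch like this.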

Sonic Visualiser is a program built explicitly for viewing and exploring audio data for semantic music analysis and annotation. I downloaded it recently and found that it ran incredibly fast, was full of useful annotation functions, and could also run "feature-extraction" plugins (e.g. beat trackers, pitch detectors). But after you get the graph, what do you do with it? After the data is visualised, how can we get at the data itself and break it down?
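One partial answer to my own question: the spectrogram a program like this draws is just a matrix of numbers, and you can compute and inspect that matrix yourself. Here is a minimal sketch with numpy (the `spectrogram` helper is my own, not Sonic Visualiser's API; Sonic Visualiser does its feature extraction through Vamp plugins):

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Magnitude spectrogram via a short-time Fourier transform:
    slice the signal into overlapping windowed frames, then take the
    FFT of each frame to get energy per frequency bin over time."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    # rfft yields n_fft // 2 + 1 frequency bins per frame
    return np.abs(np.fft.rfft(frames, axis=1))

# A 1 kHz tone sampled at 8 kHz: bin spacing is 8000 / 256 = 31.25 Hz,
# so the energy should land in bin 1000 / 31.25 = 32.
sr, freq = 8000, 1000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * freq * t))
print(spec.shape)                  # → (61, 129): 61 frames, 129 bins
print(spec.mean(axis=0).argmax())  # → 32
```

Once the spectrogram is a plain array like this, "breaking the data down" becomes ordinary array manipulation: thresholding, peak-picking, summing energy per band, and so on.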

Viewing audio spectrograms in audio editing programs like Audacity can be unpredictable: the program frequently crashes or hangs when the computer can't handle the processing required to analyse and display the spectrogram.
