TsunamiPro FAQs
How To Avoid Reading The TsunamiPro User's Manual
TsunamiPro comes with an extensive built-in Help system, as well as a complete User's Manual (which you can download here). If you've read this far, congratulations! Most folks, including those at Black Diamond Sound Systems, hate reading manuals and only do so if, for example, they are sitting in a dentist's office waiting for a root canal and that's all there is to read. So, rather than expecting you to read this manual from start to finish before actually trying out your new program, we expect that you will probably consult it only for reference purposes. For that reason, the manual is organized according to the layout of the various menus and controls used in TsunamiPro, followed by procedures such as recording, editing, and saving files. Because you must be very smart to begin with (in order to use a computer in the first place, and to have bought TsunamiPro!), you probably don't really need to read the entire manual unless you want to. Instead, after installing the programs from the Setup disks, you can immediately start TsunamiPro by double-clicking its icon in the newly created program group, and use the on-line Help system to navigate around as the mood strikes you.
Sound And Sampling
In order to better understand some of the procedures used by TsunamiPro to record and edit sound files, it is helpful to consider some physical aspects of sound and of the human auditory system, and how these relate to the design and implementation of sound recording devices.
Sound And The Auditory System
When you move something rapidly in a physical medium such as air or water, some of the kinetic energy is imparted to the surrounding medium and begins to spread outward from the moving object in "waves" (i.e., patterns of alternating increased and decreased pressure). The faster you wiggle the object (say, a violin string), the higher the frequency of the corresponding sound waves. The stiffer (and generally denser) the medium, the faster the sound waves travel through it: sound moves faster in water than in air, and faster still in steel. (Conversely, in the vacuum of outer space, where there is no medium to propagate the sound waves, there is no sound! Not a good place to go to listen to Mozart.)
When a sound wave enters your ear and bumps up against your tympanic membrane (ear drum), a truly incredible chain of events is set into motion, so incredible, in fact, that at first thought you might be persuaded to believe that the human auditory system was designed by a team of sanitary engineers who had taken the Wrong Medicine. Briefly, what happens is as follows: the fluctuating sound pressure changes cause your ear drum to start vibrating at the same frequency (i.e., the pressure changes are converted back into mechanical motion), and this motion is "geared down" through a series of tiny bones in your middle ear and finally transmitted to a specialized membrane inside the cochlea, a snail-shaped structure in your inner ear. That membrane flaps up and down, tickling the even tinier hairs on a group of electrically charged "hair cells", partially discharging them and causing nerve action potentials to go dancing up your auditory nerve to your brain. How the brain analyzes this stream of incoming action potentials is a subject of intense interest to a growing body of brain scientists. Of special interest is the fact that a smoothly varying sound signal (an analog signal) is converted into a stream of discrete nerve impulses per unit time (in effect, a digital signal) before your brain ever analyzes it: the auditory system is, perforce, a classic example of an "analog to digital converter".
Sampling Rate
In recording music to the soundcard in your computer, a similar form of analog-to-digital conversion takes place in the hardware. Since humans can hear frequencies in the range of roughly 20 to 22,000 cycles per second (Hertz, or Hz), your soundcard must be able to record these frequencies. Another factor imposes an additional constraint on the hardware, however: in order to recreate the highest frequency present in a signal, the signal must be sampled by the hardware at a rate of at least twice that frequency. A simple way to see why this is so is to consider a simple sine wave, i.e., a pure tone that just wiggles up and down at a constant frequency. To define one cycle of the wave, you need to sample it when it has reached the top of its excursion AND when it has reached the bottom of its excursion: that is, you need to sample it at least twice per cycle. (Sample any slower and the tone masquerades as a lower frequency, an artifact called "aliasing".) So, in order to achieve maximum fidelity in a recorded sound signal, your soundcard should have a maximum Sampling Rate of at least 44 kHz (the audio CD standard is 44.1 kHz).
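To make the aliasing idea concrete, here is a minimal sketch in Python. (This is our own illustration, not part of TsunamiPro; the function name and the example frequencies are made up for the demonstration.) It shows how any frequency above the Nyquist limit, i.e., half the sampling rate, "folds" back down into the audible range:

    def alias_frequency(tone_hz, sample_rate_hz):
        # Frequencies above the Nyquist limit (half the sample rate)
        # fold back down and masquerade as lower frequencies.
        nyquist = sample_rate_hz / 2.0
        folded = tone_hz % sample_rate_hz
        return folded if folded <= nyquist else sample_rate_hz - folded

    # A 5,000 Hz tone sampled at 44,100 Hz is captured faithfully...
    print(alias_frequency(5000, 44100))    # 5000
    # ...but a 30,000 Hz tone aliases down to an audible 14,100 Hz.
    print(alias_frequency(30000, 44100))   # 14100

This is exactly why real soundcards run the incoming signal through a low-pass filter before sampling: anything above the Nyquist limit would otherwise show up as a spurious audible tone.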
Resolution
The human auditory system is also remarkable because it can hear (and make subjective discriminations on) sounds that vary over a tremendous range of intensities. The usual way this is measured is to look at the ratio of two sound intensities: the average person can distinguish intensity differences spanning roughly 10 orders of magnitude. This is such a huge number that sound engineers (and audiologists) take 10 times the base-10 logarithm of the ratio of two intensities as the unit of measurement of loudness, and call it the "decibel", or "dB". Clearly, expecting your hardware to faithfully resolve 10^10 different loudness levels is asking a lot. So how many levels is enough for adequate resolution of the different loudness levels in music? The earliest soundcards sort of ducked this question, for the simple reason that the available hardware was barely fast enough to keep up with the incoming data stream. The next generation of hardware chose the minimum resolution to be the same as the size of an 8-bit register, or one "byte". A byte represents 2^8, or 256, possible values. This turns out to be adequate for music that doesn't contain a lot of loud (or soft) transients. The newer generation of hardware is capable of using 2 bytes (16 bits) per sample. This represents 2^16, or 65,536, possible values, which is adequate for the majority of musical sounds encountered in the real world. Note: most of the latest sound cards can digitize samples to 24- or 32-bit accuracy.
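As a back-of-the-envelope check (again a Python sketch of our own, not anything built into TsunamiPro), each extra bit of sample resolution buys roughly 6 dB of dynamic range, which is why 8 bits sounds cramped and 16 bits covers most real-world music:

    import math

    def quantization_stats(bits):
        # Levels and theoretical dynamic range for a given sample size:
        # 20 * log10(2^bits) works out to roughly 6.02 dB per bit.
        levels = 2 ** bits
        dynamic_range_db = 20 * math.log10(levels)
        return levels, dynamic_range_db

    for bits in (8, 16, 24):
        levels, dr = quantization_stats(bits)
        print(f"{bits}-bit: {levels:,} levels, ~{dr:.0f} dB of dynamic range")
    # 8-bit: 256 levels, ~48 dB of dynamic range
    # 16-bit: 65,536 levels, ~96 dB of dynamic range
    # 24-bit: 16,777,216 levels, ~144 dB of dynamic range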
Storage Requirements
The price you pay for 16-bit resolution and a 44.1 kHz sampling rate is a relatively huge requirement for data storage. For example, if you are recording in stereo, at 16 bits and 44.1 kHz, you will need 44,100 samples/second x 2 bytes/sample x 2 channels x 60 seconds, or about 10.58 million bytes (megabytes, or MB) of storage for every minute of music that you record. Few folks come equipped with enough computer memory to handle huge amounts of data such as these, although a 2-minute section of stereo music recorded at 8-bit resolution and 22.05 kHz will require "only" 5.29 MB of storage, which is readily "doable" with the 8 MB of memory common in many MultiMedia PC computers today.
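You can verify these figures with a one-line calculation. This small Python sketch (our own illustration, not a TsunamiPro function) just multiplies sample rate, bytes per sample, channel count, and recording time:

    def storage_bytes(seconds, sample_rate_hz=44100, bits=16, channels=2):
        # Uncompressed PCM: samples/sec x bytes/sample x channels x time.
        return seconds * sample_rate_hz * (bits // 8) * channels

    # One minute of 16-bit stereo at 44.1 kHz:
    print(storage_bytes(60) / 1e6)               # 10.584 (MB)
    # Two minutes of 8-bit stereo at 22.05 kHz:
    print(storage_bytes(120, 22050, 8) / 1e6)    # 5.292 (MB)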
Data Compression
Lots of folks have addressed the problem of reducing the size of digitally recorded music files, so that they can be transmitted over the Internet in more-or-less real time without causing your computer to go bonkers. The most popular data compression method converts standard WAV files into MP3 files. MP3 compression exploits an interesting property of the human auditory system: a relatively loud sound at a particular frequency masks out softer sounds at nearby frequencies, so that you can't even hear them. So, why bother to encode sounds that you can't hear? MP3 compression (along with some other methods, such as WMA and RM) works by using a psycho-acoustic model to filter out sounds that you can't hear, thus dramatically reducing file size. The folks at Black Diamond Sound Systems have created a proprietary MP3 file compression algorithm that does an amazingly good job of retaining the fidelity of the original sound.
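For the curious, here is a deliberately toy Python sketch of the masking idea. To be clear about what it is NOT: it is not Black Diamond's proprietary algorithm, and it is not a real MP3 psycho-acoustic model (real models compute masking thresholds per frequency band and per moment in time). It simply transforms a signal to the frequency domain and throws away components far quieter than the loudest one:

    import numpy as np

    def toy_masking_filter(samples, mask_db=30.0):
        # Discard any spectral component more than mask_db quieter than
        # the loudest one, on the (crude) theory that the loud component
        # masks it and the listener would never hear it anyway.
        spectrum = np.fft.rfft(samples)
        magnitude_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
        spectrum[magnitude_db < magnitude_db.max() - mask_db] = 0.0
        return np.fft.irfft(spectrum, n=len(samples))

    # A loud 440 Hz tone plus a faint tone at 450 Hz: after filtering,
    # only the loud tone survives, and most listeners would never notice.
    rate = 44100
    t = np.arange(rate) / rate
    loud_plus_faint = (np.sin(2 * np.pi * 440 * t)
                       + 0.001 * np.sin(2 * np.pi * 450 * t))
    filtered = toy_masking_filter(loud_plus_faint)

The components that get zeroed out cost essentially nothing to store, which is the basic reason a psycho-acoustically filtered file compresses so much better than the original WAV.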