With the issue of my USBi out of the way, I would like to ask a question more related to actual signal processing.
Although I am no novice to audio and sound, I am just now trying to sharpen my actual signal processing chops. As I look through the tutorials in SigmaStudio, most things make sense (I think), but there are a few that I don't really follow. I would like to talk through my understanding of the signal flow in the "Basic Stereo In/Out" example project included with SigmaStudio, and have you correct me where I'm wrong and answer a couple of questions.
1. Okay, so the schematic starts off with the standard input block, which is routed to an individual volume slider to control the input level.
2. The signal exiting the volume slider goes to T connectors, where it is split into two identical stereo signals.
3. One stereo signal is sent, unchanged, to a high-pass filter.
4. The second has both channels summed into one mono signal, which is fed into a low-pass filter.
5. The outputs from those filters are routed, respectively, to individual EQs tailored to their specific frequency ranges.
6. The outputs from the EQs are routed to independent compressors, where a mild compression ratio is applied to signal levels above approximately -10 dB.
7. Those signals are sent to independent one-sample delay blocks.
8. At the output of the delay blocks, the "high" signal is summed with the mono "low" signal and sent to the DACs.
Am I understanding it correctly? I have sketched the flow as I read it just below, in case that makes it easier to point out where I'm off.
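Here is the chain above sketched in Python/NumPy, purely to check that I have the routing straight. The crossover point, filter orders, input trim, and compressor numbers are placeholders I made up for the sketch (they are not read from the project file), and I've left the per-band EQ as a comment:

```python
# My reading of the "Basic Stereo In/Out" flow, sketched with made-up settings.
import numpy as np
from scipy.signal import butter, lfilter

fs = 48000                                    # assumed sample rate
t = np.arange(fs) / fs
stereo = np.stack([np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 2000 * t),
                   np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 3000 * t)])  # L, R test tones

# step 1: input block into a volume slider (here a fixed -6 dB trim, just as an example)
stereo *= 10 ** (-6 / 20)

# step 2: T connectors, so the same signal feeds two paths
highs_in = stereo                             # stereo path
lows_in = stereo.sum(axis=0)                  # step 4: L + R summed to one mono "low" signal

# steps 3/4: high-pass the stereo path, low-pass the mono path (crossover guessed at 200 Hz)
b_hp, a_hp = butter(2, 200, btype='highpass', fs=fs)
b_lp, a_lp = butter(2, 200, btype='lowpass', fs=fs)
highs = lfilter(b_hp, a_hp, highs_in, axis=1)
lows = lfilter(b_lp, a_lp, lows_in)

# step 5: per-band EQ would go here (omitted in this sketch)

# step 6: mild compression above roughly -10 dB (static curve only, no attack/release)
def compress(x, thresh_db=-10.0, ratio=2.0):
    thresh = 10 ** (thresh_db / 20)
    mag = np.abs(x)
    gain = np.ones_like(mag)
    over = mag > thresh
    gain[over] = thresh * (mag[over] / thresh) ** (1.0 / ratio) / mag[over]
    return x * gain

highs = compress(highs)
lows = compress(lows)

# step 7: one-sample delay (a z^-1 block) on each path
highs = np.concatenate([np.zeros((2, 1)), highs[:, :-1]], axis=1)
lows = np.concatenate([[0.0], lows[:-1]])

# step 8: sum the mono lows back into both channels and send to the DACs
out = highs + lows                            # broadcasting adds the mono lows to L and R
```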
- In step 4, I know that bass is "non-directional," but what is the advantage of processing this signal as a mono sum of the channels instead of as a stereo signal?
- In step 5, given that these signals have just exited filters, what is the advantage of using the "shelf" bands on the EQ blocks? Weren't those frequencies just filtered?
- In step 7, what is the purpose of delaying all of the signals by one sample? (A small sketch of what I think that block does follows below.)
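For what it's worth, here is what I picture the one-sample delay block doing: just a z^-1 register that hands each sample on one tick late. This is only my own toy illustration, not anything exported from SigmaStudio, so if the block actually does something else, that is probably where my confusion lies:

```python
# My mental model of the one-sample delay block: a single z^-1 register.
def one_sample_delay(samples):
    prev = 0.0
    delayed = []
    for x in samples:
        delayed.append(prev)   # emit the previous sample, so everything arrives one sample late
        prev = x
    return delayed

print(one_sample_delay([1.0, 2.0, 3.0, 4.0]))   # -> [0.0, 1.0, 2.0, 3.0]
```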
Thanks in advance for any insights shared...