# Computers aren't that good with numbers

Updated: Oct 8, 2019

This is a blog about all things audio, so it may seem odd to talk of something that apparently hasn't much to do with audio. We want to make music, don't we?

Thing is, so much of music making nowadays happens inside computers that it is very useful for an engineer to know a little computer science, and to understand why certain things are the way they are. No math required, I promise! A little arithmetic at most :)

When using a DAW for recording, mixing or any other audio-related activity, we are continuously going from the analog world, where an audio signal is usually moving around your kit as a voltage over electric wires, to the digital world, where it moves around as a stream of numbers.

The bridges between these two worlds are **analog-digital ("AD") converters** (which transform an analog signal into a stream of digital numbers) and their counterparts, **digital-analog ("DA") converters**, which do (you guessed it!) the opposite.

Your audio interface has converters. Even your computer has built-in converters (that's why you can plug in headphones, which are analog devices, and listen to your mp3, which isn't: the mp3 data goes through the D/A converter). And there's no end of boxes whose only job is to do conversion (generally somewhat better than the average interface does, and a lot more expensively).

So what's the deal?

Converters translate an analog signal - a voltage - into samples - sequences of bits.

The thing is that **an analog signal can assume any value within a certain range**.

And when I say any value, I mean *any*.

That's the whole definition of "analog": a physical signal that can be measured and can assume any value within a continuous range. An audio "line" signal, in particular, is an AC voltage, nominally between -1.736 and +1.736 volts.

Continuous, decimal numbers in mathematics are called *real numbers*, and there are literally infinitely many of them between any two. We can always add decimals and get more finely grained numbers.

How many numbers are there between 0 and 1? Infinite! How many between 0 and 0.5? Infinite! How many... well, you get it, right?

**When a real number is the measurement of a physical value (say a voltage level), its precision - i.e. how many decimals we can use - depends only on the accuracy of the measuring gear we use.**

For example, an eye can see only so much, but an optical microscope has a better resolution, and an atomic microscope goes down to atomic level... and so forth.

**A computer, however, does not have infinite numbers at its disposal.**

It can't use real numbers: you don't have infinite memory in your computer, for one.

It uses sequences of bits (called "words") to represent anything - including numbers. With one single bit - that is, an on/off switch which stays in the position where it is until we move it - we can represent 2 distinct values (conventionally "0" and "1", but they can really be any two values we want).

The more bits in a word, the more stuff we can represent.

For example, words of two bits (i.e. two bits in sequence) allow us to represent *4 distinct values*:

- 00 for the value #1
- 01 for the value #2
- 10 for the value #3
- 11 for the value #4

If we try different word lengths, it's easy to see that with *n* bits we can represent *2 to the power n* values.
2 to the power *n* simply means multiplying *n* copies of 2 together (2 to the power 3 is simply 2 x 2 x 2). For example, with the two bits above we can represent 2 to the power 2, that is 4, values.
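To make the counting concrete, here's a tiny Python sketch (the variable names are mine, purely for illustration) that lists every possible 2-bit word and tabulates how many values a few common word lengths give us:

```python
# every possible 2-bit word, matching the list above
words = [format(i, "02b") for i in range(2 ** 2)]
print(words)  # ['00', '01', '10', '11']

# how many distinct values n bits can represent
for bits in (1, 2, 16, 24):
    print(f"{bits:>2} bits -> {2 ** bits} distinct values")
```

Note that adding a single bit always *doubles* the number of representable values.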

If what we want to represent with bit combinations are *numbers*, we can represent say 0, 1, 2 and 3; or 1, 2, 3 and 4.

Similarly if we want to represent *Beatles*, two bits give us space enough for Paul, John, George and Ringo.

With 16 bits, we can represent 2 to the power 16, that is *65 536* distinct values.

With 24 bits, we can represent 2 to the power 24, that is *16 777 216* distinct values.

**Over 16 million values.** That's a lot!

It is indeed. But **it is also far, far less than the infinite amount of numbers that exist between -1.736 and +1.736**.

The good news is that for audio purposes, it turns out that 65 536 values (i.e. 16 bits' worth of distinct values) are enough to represent *all the nuances that our ears can detect*.
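As a rough sketch (my own toy code, not how a real converter works internally), quantizing a line-level voltage to 16 bits means snapping it to the nearest of those 65 536 available levels:

```python
FS = 1.736              # full-scale voltage, from the figures above
BITS = 16
LEVELS = 2 ** BITS      # 65 536 distinct values
STEP = 2 * FS / LEVELS  # smallest voltage difference we can keep: ~53 microvolts

def quantize(volts):
    """Snap an analog voltage in [-FS, +FS] to the nearest representable level."""
    code = round((volts + FS) / STEP)     # which of the 65 536 steps is closest
    code = max(0, min(LEVELS - 1, code))  # clip anything outside the range
    return code * STEP - FS

# any two voltages closer together than STEP collapse into the same sample
```

The point is the step size: whatever falls between two adjacent levels is simply lost.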

**That's biology at work, and that's why the CD format uses 16 bit.**

There are certainly finer gradations of signal level out there in nature, but our ears only get so far. 65 536 distinct values are enough.

**So why mixing at 24 bits?** (or more)

Well... the fact that a digital computing machine cannot really represent real numbers has interesting consequences: for example, **methods of calculation that work very well with paper and pencil may produce big errors if used as they are within a computer**.

That's because the tiny errors that occur when we don't have all the infinite numbers available may accumulate and turn into quite large errors in the end.

The more numbers we have, the more we approximate "infinite" and the smaller these errors will be.
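Here's a quick toy demonstration of that snowballing (numbers chosen by me just for illustration): add one third to an accumulator three hundred times, but keep only two decimals at every step, and compare with doing the same sum at full precision.

```python
limited = 0.0   # pretend we can only ever store two decimals
exact = 0.0     # full double precision, our stand-in for "lots of bits"

for _ in range(300):
    limited = round(limited + 0.33, 2)  # 1/3 chopped to two decimals
    exact += 1 / 3

print(exact)    # ~100.0
print(limited)  # 99.0 -- each step lost ~0.0033, and it all piled up
```

A per-step error too small to even write down grew into a whole unit of drift by the end.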

**Mixing in a DAW has nothing to do with voltages, but it has everything to do with summing, multiplying, dividing etc. samples** (which are numbers), as determined by both the basic "mixer" functionality of the DAW and the plugins we put on tracks and busses.

**Plugins are nothing more than pre-packaged calculations to be applied to our samples.**

For these calculations, 24 bits is a better word length, as **24 bits allow more numbers to be represented, and therefore more precise calculations**.

Why? As I mentioned above, the general idea is that instead of making calculations using *n* bits to represent numbers, we *use a few more.*

This gives us, in broad terms, "more decimals" to play with, so any computation errors will happen on the rightmost, least significant decimals. Once we've done the computation, we cut the result back to 16 bits, which will then have a far better chance of being the right number.

As an example, say a sample's value is 1.2376 and to apply a certain EQ boost we need to multiply it by 1.435 (totally made-up numbers, but they give you the idea).

If we "have space" for only two decimals, our sample will look like 1.23 and our calculation will be 1.23 x 1.43, that is **1.75**.

But if *we can keep four decimals*, we can calculate 1.2376 x 1.4350, that is 1.7759... and when bringing it back to 2 decimals, we'll have **1.77**, which is a more correct result.

What's more, with 4 decimals we can also notice that the result is 1.77**59** (nearer to 1.78 than to 1.77) and thus decide that our result is indeed **1.78**.
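The same arithmetic in Python (helper names are mine; `trunc2` just chops off everything past two decimals, the way a too-short word would):

```python
import math

def trunc2(x):
    """Keep only two decimals, throwing the rest away."""
    return math.floor(x * 100) / 100

v, gain = 1.2376, 1.435

# cramped: both numbers already truncated to two decimals before multiplying
low = trunc2(trunc2(v) * trunc2(gain))  # 1.23 * 1.43 = 1.7589 -> 1.75

# roomy: multiply at full precision, round back to two decimals at the very end
high = round(v * gain, 2)               # 1.2376 * 1.435 = 1.775956 -> 1.78

print(low, high)  # 1.75 1.78
```

Same input, same operation - but keeping the extra decimals *during* the calculation lands us on the right answer.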

More bits is "more decimals". **So the more the bits, the more correct the results.**

Due to this, 32 bits are even better than 24, and 64 even better than 32 - and indeed many DAW engines run at 64 bits, since they have to deal with many calculations over each sample, in sequences which are unpredictable because they depend on which plugins we use!
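You can even feel the word-length difference from Python itself, by forcing intermediate results through 32-bit floats (a `struct` round-trip trick; the loop is my own toy, not real DAW code):

```python
import struct

def as_f32(x):
    """Round a Python float (64-bit) to the nearest 32-bit float."""
    return struct.unpack("f", struct.pack("f", x))[0]

sample = 0.1  # not exactly representable in binary, like most decimals

acc32 = acc64 = 0.0
for _ in range(1_000_000):
    acc32 = as_f32(acc32 + as_f32(sample))  # every step squeezed into 32 bits
    acc64 += sample                          # full 64-bit precision throughout

# acc64 lands within a hair of 100 000; acc32 drifts visibly off target
```

A million additions is nothing for a DAW chewing through 44 100 samples per second across dozens of tracks and plugins - which is exactly why engines go wide.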

So here it is.

Computers aren't that great with numbers, in the sense that they always have too few!

But **by increasing the word length we use, we can make calculations as precise as we need**, until they are precise enough that it makes no difference for the 16-bit range of sounds we hear.

Enjoy your DAW!