Many individuals, businesses and other organizations have a legitimate and lawful need to keep some of their information confidential or even absolutely secret from competitors, the press and others. Because email has become vital to our work and because email is not secure, encryption is often necessary. Law-abiding people cannot be confused with criminals passing illicit secrets if they keep their messages private and comply with lawful authority on those rare occasions when investigations of their activities are warranted.

To create a message that only the sender and the intended receivers can understand, the message must be encrypted by means of a method that relies upon information that cannot be discovered by any examination of the transmission. If the encryption relies upon complexity or computational difficulty without involving secret information, the intended message can, in principle, be recovered by any third party willing to expend the necessary effort.

Within encryption technology, the embodiment of information unknowable to third parties often takes the form of random numbers kept by the sender and receivers and not known to anybody else. These random numbers are used to encrypt messages in such a way as to make decryption of a transmission impossible without access to those same random numbers. Random numbers are also needed in scientific experiments to ensure the absence of bias.

Truly random numbers have no pattern in them. A string of random bits is analogous to noise. In the jargon of signal processing, noise can be "whitened," that is, made more uniform and free of pattern. The von Neumann whitening algorithm takes two input strings of bits, A and B, and treats each pair of corresponding bits according to the following table:

A B Output
----------------------
0 0 None
0 1 1
1 0 0
1 1 None

The resultant output string is more random than the input strings but shorter. Two input strings, each 1000 bits in length, will produce an output string that is about 500 bits in length (a four to one reduction in data).
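A minimal sketch of this whitening step in Python (the function name is mine; bits are represented as characters for clarity):

```python
def von_neumann_whiten(a: str, b: str) -> str:
    """Combine two equal-length bit strings pairwise:
    the pair 0,1 yields 1; the pair 1,0 yields 0;
    the pairs 0,0 and 1,1 produce no output at all."""
    out = []
    for x, y in zip(a, b):
        if x != y:
            out.append(y)  # when the bits differ, the table's output equals B's bit
    return "".join(out)
```

On average half the pairs are discarded, which is why two 1000-bit inputs yield roughly 500 output bits.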

The von Neumann whitening algorithm points to a paradigm that is rarely if ever used by writers of random number software. The usual approach is to generate long and complex strings of bits from an algorithm together with a short "seed" of numbers. This approach does not produce cryptographically strong random numbers, and consequently its output is termed pseudo-random.

I concentrate on text files written in English as the first example of files from which randomness can be extracted. They are plentiful and fruitful. The randomness that can be extracted from large text files comes mainly from two sources: Firstly, it comes from the somewhat arbitrary choices (of which there are many) that writers make in deciding how to express themselves. Secondly, pairs of such files (or pairs of sections of such files) can be compared to reveal thoroughly coincidental digital features that appear randomly as two independently written strings of such data are examined together.

There are a number of tools other than the von Neumann whitening algorithm that can be useful in extracting this randomness. A prominent one is the logical operation known as the Exclusive OR:

A B Output
----------------------
0 0 0
0 1 1
1 0 1
1 1 0

The Exclusive OR value is 1 whenever the two inputs are different and it is 0 whenever the two inputs are the same. It says "yes" or "1" when either input is a 1 but says "no" or "0" if they are both 1. It accepts either 1 but not both - thus the name Exclusive OR.

Filtering out text file data that is not at all like a succession of thoughtfully constructed sentences can be done by deleting repeated characters beyond two in a row. Long strings of blanks, tabs or carriage returns (end-of-line characters), often present for formatting reasons, would be limited to two in a row.
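A sketch of this filter in Python (the function name is mine):

```python
def limit_runs(text: str, max_run: int = 2) -> str:
    """Drop any character that would extend a run of identical
    characters beyond max_run, so long strings of blanks, tabs or
    newlines are cut down to two in a row."""
    out = []
    for ch in text:
        if len(out) >= max_run and all(c == ch for c in out[-max_run:]):
            continue  # this character would make a run of three or more
        out.append(ch)
    return "".join(out)
```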

There is also a reason to exclude the most significant four bits of each byte: the least significant four bits are more evenly distributed among their 16 possible values. This is because the most significant bit in a byte is often constant, as in ASCII text, which requires only 128 (not 256) characters. Keeping only the low four bits would produce output data that is half the length of the input data.
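A sketch of this step, assuming the surviving low nibbles are packed two to an output byte, which is my reading of how the data is halved:

```python
def low_nibbles(data: bytes) -> bytes:
    """Keep only the least significant four bits of each input byte
    and pack two such nibbles into each output byte."""
    out = bytearray()
    for i in range(0, len(data) - 1, 2):
        out.append(((data[i] & 0x0F) << 4) | (data[i + 1] & 0x0F))
    return bytes(out)
```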

Applying each of these measures once would result in a data reduction factor of 16 with a further reduction due to excluding byte triples. A test which reduces 16 megabytes of text to somewhat less than one megabyte of output data and examines the statistical characteristics of the output would be a valuable first step.

The von Neumann whitening and the Exclusive OR operation could be applied multiple times. If the Exclusive OR were used on two files to create a third, and then on another file with the third, the resultant file in each bit position would have a bit that was the parity (odd or even) of the bits in the same bit position in the source files. No matter how many source files are used this way, the only information that remains in the output file is the parity for each bit position.
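For example, a sketch of combining several equal-length files this way (the function name is mine):

```python
def xor_combine(files):
    """XOR several equal-length byte strings together. Each bit of the
    output is the parity (odd or even count of 1s) of the bits in the
    same position across all the source files."""
    out = bytearray(len(files[0]))
    for data in files:
        for i, b in enumerate(data):
            out[i] ^= b
    return bytes(out)
```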

These methods can be used on source files other than text files. Picture files, compressed audio files of music, and audio files of noise are examples of workable source files.

It would be interesting to see if using these techniques produces statistical results that stop getting more random at some ratio of input data to output data. If so, this would be a way to go beyond pseudo-random software. It might even be a way to go beyond the tiniest imperfections of hardware random number generators (which are always present) by using many different hardware random number generators to produce separate source files.

I believe that at some ratio of input data to output data, even text files processed in this way will produce statistically random bits as output; however, if the input data has a high degree of structure (such as sentences) the ratio of the amount of input data versus the amount of output data might need to be large. To moderate this input to output ratio, files might be compressed before these whitening measures are taken.

The following algorithm is intended to transform text into a string of bytes that include all 256 possible bytes (equally distributed) while removing the regularities of correct spelling. In the text, a "q" is always followed by a "u" but in the output array, "q" might be followed by any letter or, in fact, any byte.

N = a power of 2 greater than 1000 A megabyte might be typical.

i = 0

j = 0

k = a number between 10 and N/2 that is not divisible by 2

A, B, C, D and OUT are arrays of size N. A, B, C and D are filled with source file data.

Save = 0 This is a byte that does not need to be initialized.

While j < N - 1

Save = (A(i) + B(i) + C(i) + D(i) + Save) MOD 256

i = (i + k) MOD N

OUT(j) = Save

j = j + 1

End
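Assuming the array indices lost from the listing above are the natural ones (the same index i into each of the four source arrays, and OUT indexed by j), the loop might be rendered in Python as:

```python
def mix_arrays(a: bytes, b: bytes, c: bytes, d: bytes, k: int) -> bytes:
    """Sum four equal-length source arrays into one output array,
    carrying a running byte Save and striding through the sources by
    an odd step k. Because len(a) is a power of two and k is odd,
    the index i visits every position before repeating."""
    n = len(a)
    out = bytearray(n - 1)
    i = save = 0
    for j in range(n - 1):          # While j < N - 1
        save = (a[i] + b[i] + c[i] + d[i] + save) % 256
        i = (i + k) % n
        out[j] = save
    return bytes(out)
```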

This algorithm produces a more equal distribution of byte values in the output if there are many arrays being summed (there are four in this example) and if the source files are divided into small arrays (1024 bytes as opposed to one megabyte); however, this drastically affects the ratio of input to output. The most profitable balance can be found through statistical tests. The following algorithm seeks to balance the frequency of byte values:

i = 0

u = 0

N is the array length

Sum = array(0)

While i < N

u = (i+1) MOD N

array(i) = array(i) + Sum

IF array(i) > 255, array(i) = array(i) - 256

Sum = (Sum + array(u)) MOD 256

i = i + 1
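One plausible reading of this balancing pass, as a running sum taken modulo 256 and folded back into the array in place (my interpretation of the incomplete listing above), is:

```python
def balance_bytes(arr: bytearray) -> bytearray:
    """In-place running-sum pass: each byte absorbs the accumulated
    sum of bytes visited so far (mod 256), which tends to flatten
    the byte-value histogram."""
    n = len(arr)
    total = arr[0]                  # Sum = array(0)
    for i in range(n):              # While i < N
        u = (i + 1) % n
        arr[i] = (arr[i] + total) % 256
        total = (total + arr[u]) % 256
    return arr
```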

If the number of input arrays is even, pairs of them can be preprocessed by the von Neumann algorithm as well.

The bits of a sequentially constructed file (such as a text file) have many relationships that are not controlled or intended. One such relationship is between bits separated by a great distance within the file. It is hard to imagine a strong correlation between such pairs of single bits. If bit offset addresses can be constructed by an algorithm that uses data in the file itself, the succession of bits pointed to by the successive addition of such bit addresses can be used to create a new file in which each successive bit has little to do with its neighbors. This example produces an output array that contains approximately one quarter of the data size of the input:

N = a power of 2 greater than 4000 This N is a number of bits.

A = a bit array of length N

B = a bit array of length N

L = minimum number of bits required to represent number N

j = 0 This is the bit offset into array A.

k = 2000 This is the bit offset into array B.

Z = destination bit array

h = 0 This is the bit offset into array Z (the output array).

LbitNumber = an L bit number not initialized

While h < N

LbitNumber = the L bits of A beginning at bit offset j

j = (j + LbitNumber) MOD N

LbitNumber = the L bits of B beginning at bit offset k

k = (k + LbitNumber) MOD N

IF A(k) = B(j), continue If equal, continue the loop from the top.

Z(h) = B(j)

h = h + 1

End
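A Python sketch of this bit-walk, with helper names of my own and a step cap added so the loop cannot run forever if too many pairs happen to be discarded:

```python
def read_bits(bits, start, length, n):
    """Read length consecutive bits (wrapping around) as an integer."""
    value = 0
    for t in range(length):
        value = (value << 1) | bits[(start + t) % n]
    return value

def bit_walk(a, b):
    """Hop through bit arrays a and b by offsets read from the data
    itself, keeping a bit only when the two arrays disagree at the
    landing positions, as the pseudocode above prescribes."""
    n = len(a)                      # a power of two, so MOD N wraps cleanly
    L = (n - 1).bit_length()        # bits needed to address an offset
    j, k = 0, 2000 % n              # k starts at 2000 in the pseudocode
    out, steps = [], 0
    while len(out) < n and steps < 4 * n:   # step cap guards against endless discards
        steps += 1
        j = (j + read_bits(a, j, L, n)) % n
        k = (k + read_bits(b, k, L, n)) % n
        if a[k] != b[j]:            # the pseudocode discards when A(k) = B(j)
            out.append(b[j])
    return out
```

Note that the pseudocode compares A(k) with B(j), crossing the two offsets; the sketch preserves that detail.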

The usefulness of these whitening processes is demonstrated by statistical tests of randomness. The reason for suspecting that they have value lies in the virtually unused paradigm of extracting randomness from large files rather than generating pseudo-random numbers from small seeds.

Please feel free to write to me directly for more information or to make suggestions or comments. My email address is jim@futurebeacon.com. You can also go to my contact page to get my full contact information. Suggestions, questions, additional information and critiques are very welcome.