This second chapter concludes our exploration of coding and data compression. We shall first consider integer coding, which represents another family of optimal codes (alongside Shannon–Fano and Huffman coding). Integer coding applies to the case where the source symbols are fully known but the probability distribution is only partially known, so the previous optimal codes cannot be implemented. Three main integer codes, called Elias, Fibonacci, and Golomb–Rice codes, will then be described. Together with the previous chapter, this description completes our inventory of static codes, namely codes that apply when the source symbols are known and the task is to assign the optimal code type. In the most general case, the source symbols and their distribution are unknown, or the distribution may change as more symbols are collected. We must then find new algorithms that assign optimal codes without such knowledge; this is referred to as dynamic coding. The three main dynamic-coding algorithms to be considered here are arithmetic coding, adaptive Huffman coding, and Lempel–Ziv coding.
Integer coding
The principle of integer coding is to assign an optimal (and predefined) codeword to each of a list of n known symbols, which we may label {1, 2, 3, …, n}. In such a list, the symbols are ranked in order of decreasing, or mathematically speaking "nonincreasing," frequency or probability.
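To make this principle concrete, here is a minimal sketch (in Python, as an illustration only, ahead of the formal description below) of one such code, the Elias gamma code: the positive integer n is encoded as a run of zeros followed by the binary expansion of n, so the lower-ranked (more probable) symbols receive the shorter codewords.

```python
# Minimal sketch of the Elias gamma code: the integer n >= 1 is written as
# floor(log2 n) zero bits followed by the binary expansion of n, so smaller
# (more probable) integers receive shorter codewords.

def elias_gamma(n: int) -> str:
    """Return the Elias gamma codeword for a positive integer n."""
    if n < 1:
        raise ValueError("Elias gamma is defined for integers n >= 1")
    binary = bin(n)[2:]                        # binary expansion, e.g. 9 -> '1001'
    return "0" * (len(binary) - 1) + binary    # prefix of len-1 zeros

# Codewords for the first few ranked symbols {1, 2, 3, ...}:
for n in range(1, 9):
    print(n, elias_gamma(n))
# 1 -> '1', 2 -> '010', 3 -> '011', 4 -> '00100', ...
```

Note that the resulting codewords are prefix-free: a decoder counts the leading zeros to learn the codeword length, then reads that many further bits to recover n.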